Machine learning #2658
Replies: 7 comments 5 replies
-
The error occurs because your model only returns two values (label and confidence), but you're trying to unpack more, which is why it fails. You need to adjust the code to match what the block actually delivers. In Pybricks ML, classification only provides a label and a confidence, not x/y or width/height. Those values are only available with object detection, which isn't included in that model.
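A minimal sketch of the unpacking issue in plain Python, assuming the classification block yields a two-element list (the variable names and sample values here are illustrative, not the actual Pybricks app output):

```python
# Hypothetical reading from the classification block: it yields
# exactly two values, so unpack exactly two targets.
reading = ["minifig", 0.93]  # [label, confidence] -- illustrative data

# This would fail, because there are three targets but only two values:
# label, confidence, x = reading  # ValueError: need more than 2 values to unpack

# This works: the number of targets matches the list length.
label, confidence = reading
```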
-
Thanks for trying these out! Glad you liked it. There is a third model already that you can try out here: it can detect objects, with their position, without any training, and it recognizes everyday objects. For each object that you choose for detection, you will get 5 values, so if you choose 2 objects you will get a list of 10 values. It is quite similar to the simple color tracker, with one extra value per object: you get X/Y/Width/Height/Confidence. If you make something, share a video here 😄
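The flat list layout described above can be sketched in plain Python; the slice helper and all sample numbers are illustrative, not the actual app API:

```python
# Five values per detected object, concatenated into one flat list:
# x, y, width, height, confidence for each chosen object.
VALUES_PER_OBJECT = 5

# Two chosen objects -> a flat list of 10 values (made-up numbers).
detections = [48, 52, 20, 30, 0.91,   # object 1
              10, 60, 35, 25, 0.40]   # object 2

def object_values(values, index):
    """Return the [x, y, width, height, confidence] slice for one object."""
    start = index * VALUES_PER_OBJECT
    return values[start:start + VALUES_PER_OBJECT]

# Unpack the second object's five values in one step.
x, y, w, h, conf = object_values(detections, 1)
```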
-
Hello Lauren: I was able to create a basic program that uses an orange and a car as the objects to find. I have attached the Pybricks block program for those interested in seeing what I did, and here is the Google Drive link to a short video I created.

The program begins by rotating the robot in place in 10-degree increments. Once the orange is spotted, the robot switches to 5-degree increments. If the orange is (roughly) centered, the robot moves toward it. When the robot is very close, it stops approaching and starts beeping.

How do I convert the percentage values to coordinates that would make it easier to move toward the object? I see that I can use the confidence variables: confidence1 = orange, confidence2 = car. So if the confidence is greater than 0, an object has been spotted; this is what I did in my program. Or is there another way?

My next step is to work out how to control which object I am looking for in a better way. Maybe I need to create a function that obtains the values x, y, width, and height based on which object and confidence level I am looking for? Any suggestions to make the program better would be most appreciated.
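One way to steer from the percentage values without converting them to coordinates is to compare x against the image center. A sketch of that decision logic, assuming x runs 0–100 with 50 at the center; the threshold values and the confidence-greater-than-zero "spotted" test mirror the program described above, but the function name and numbers are illustrative:

```python
def decide_move(x, confidence, center=50, tolerance=5):
    """Pick a robot action from one object's x percentage and confidence.

    confidence > 0 serves as the 'object spotted' test, as in the
    program above; the center and tolerance values are illustrative.
    """
    if confidence <= 0:
        return "search"       # nothing spotted: keep rotating in place
    if x < center - tolerance:
        return "turn_left"    # object is left of center
    if x > center + tolerance:
        return "turn_right"   # object is right of center
    return "forward"          # roughly centered: drive toward it
```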
-
This link should work
-
I have managed to make use of the x and y values. I used the unpack block to call the "get x and y" My Block and assign them to x and y variables. I thought I could add more variables, but that did not work. So how do I either get all 5 app values, or get specific ones like x, y, and confidence? Any help would be greatly appreciated.
-
Okay, after a bit of playing around I discovered that the "get x and y" My Block is not returning the first and second object, but rather the first and second values for each object. So in order to get specific values, the "list get" block selects which value to get. I added a third "list get" block with an add of 4 for the fifth value: confidence.
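In Python terms, the "list get" offsets described above are just indexes into one object's five values; a short sketch with made-up sample numbers:

```python
# One object's slice of the flat list: x, y, width, height, confidence.
# The offsets mirror the 'list get' blocks: offset 0 is x, offset 1 is y,
# and an add of 4 reaches the fifth value, confidence.
values = [48, 52, 20, 30, 0.91]  # illustrative sample data

x = values[0]
y = values[1]
confidence = values[4]
```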
-
Here are two videos showing my current robot:


-
I have tried all three of the new Machine Learning tutorials. Thank you for making this and Pybricks possible.
A couple of questions:
Is it possible to create other smart sensor setups like line following or gestures? What about using the phone's microphone for speech recognition?
Is it possible when using the image classification to get the x, y, width and height values?
When I add more values to be unpacked I get this error:
File "L13_2_classify_minfig.py", line 17, in
ValueError: need more than 2 values to unpack
my block code:

Keep up the Great Work!