Vision is still one of the very few fields where a human being can outsmart a computer. Color/light sensors are the cornerstone of building a smart LEGO Mindstorms robot that can, at least partially, "see". In this video tutorial, we use the EV3 Color Sensor to detect the loading and unloading of the catapult.
- 05 Apr 2015
Light and colors are all around us. Humans are built for vision and pattern recognition. What we know for certain is that when an artificial intelligence becomes smart enough to do vision as well as humans do, it will mark a great milestone in robot development.
The EV3/NXT Catapult robot is great fun to build, and we use the EV3 Color Sensor to detect when the catapult has just fired and when to stop loading it. In this way, we completely automate the robot. The program is implemented with the EV3-G software.
- In this video lesson, we are returning to the catapult. As you recall, we can load the catapult manually and automatically, and we can fire small LEGO parts. The problem is how to load the catapult and when to stop loading, and for that we're going to use the EV3 Color Sensor.
First, let's think about where we should place the sensor. It's a good idea to place the sensor in a way that it can detect the back of the brick. That way, when you load the catapult, the sensor will see nothing, but when we fire and the arm drops again, it will see the back of the brick. Let's place the sensor. Here it is, and from your angle, again, the purpose of placing the sensor here is that after we fire an element, the sensor sees the back of the brick and we know that we should load the catapult again. Then we fire, we see the back of the brick, we load again, and so on.

Let's go and solve the problem. Initially, the catapult has just fired an element and the brick is down. The color sensor detects the back of the brick, the light is reflected from the back of the brick, and the value of the color sensor is currently 22. If we now load the catapult, we can see that the color sensor no longer detects the back of the brick, and its value is 2. Twenty-something and two. We're going to use these two values as thresholds to implement our algorithm.

Keeping in mind these two values, 2 and 22, this is our program. We wait until we have just fired an element, and we detect the back of the brick if we see a value of more than, let's say, 15. If we see something that's more than 15, this means that we have just fired an element, the brick is down, and we are ready for loading. We wait for about a second and then we start the medium motor. We start the medium motor in the opposite direction, because that's how the construction works, and we just turn it on. Then we wait. We wait until we see something that's less than 2, or rather a slightly larger threshold. To make the program less fragile, so that it works in more cases, we use a value of 5. It's just an experimental value, so experiment with it.
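The two wait conditions described above can be sketched in plain Python. The actual program is built in EV3-G, so this is only an illustrative sketch: `read_sensor` is a hypothetical stand-in for the color sensor's reflected-light reading (about 22 when the brick is down, about 2 when the catapult is loaded), and the thresholds 15 and 5 are the values from the video.

```python
FIRED_THRESHOLD = 15   # above this: the sensor sees the back of the brick
LOADED_THRESHOLD = 5   # below this: the catapult is loaded (experimental value)

def wait_until_fired(read_sensor):
    """Busy-wait until the reading rises above FIRED_THRESHOLD,
    meaning the brick is down and the catapult has just fired."""
    while read_sensor() <= FIRED_THRESHOLD:
        pass

def wait_until_loaded(read_sensor):
    """Busy-wait until the reading drops below LOADED_THRESHOLD,
    meaning the brick has moved out of view and loading is done."""
    while read_sensor() >= LOADED_THRESHOLD:
        pass
```

These correspond to the two "Wait" blocks in the EV3-G program: one waiting for a value greater than 15, one waiting for a value less than 5.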
We are waiting for the catapult to fire and put the brick down, then we start loading, then we wait until the catapult is loaded, and then we stop. We stop motor A, and we put all of this in a loop.

Let's now see how the program works. Let's run the program. We start, we load the catapult, and we are currently waiting, as you can see from this block, to detect something that's more than 15, which means the back of the brick. We fire. We load again, we fire, we load again, and this is how you have a catapult that uses a color sensor to detect when to load and when to fire. In the next video, we're going to calibrate the sensor, because in this video we are not calibrating it, and you might have different light conditions in the room; you always want to calibrate the sensor before using it.
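Putting the whole cycle together, here is a minimal Python sketch of the same loop. Again, the original program is EV3-G; `read_light`, `motor_on_reverse`, and `motor_off` are hypothetical callables standing in for the color sensor reading and the medium motor on port A, and the thresholds 15 and 5 come from the video.

```python
import time

def catapult_loop(read_light, motor_on_reverse, motor_off,
                  shots=3, pause=1.0):
    """Automate the catapult: for each shot, wait until it has fired
    (a reading above 15 means the back of the brick is visible),
    pause briefly, load by running the medium motor in reverse,
    and stop the motor once the reading drops below 5."""
    for _ in range(shots):
        while read_light() <= 15:   # wait for the brick to drop after firing
            pass
        time.sleep(pause)           # settle for about a second
        motor_on_reverse()          # start loading the catapult
        while read_light() >= 5:    # wait until the catapult is loaded
            pass
        motor_off()                 # loaded: stop motor A
```

The `for` loop plays the role of the EV3-G Loop block; a real robot would run it indefinitely, and (as the video notes) the thresholds should be calibrated for the room's light conditions.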