Course on "Fast Robots", offered Spring 2022 in the ECE dept at Cornell University
This project is maintained by CEI-lab
In this lab you will perform localization, using only the update step of the Bayes filter, on your actual robot. The point of the lab is to appreciate the difference between simulation and real-world systems. We will also provide you with the localization code.
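For reference, in the standard Bayes filter notation the update step computes the posterior belief by weighting the prior belief with the measurement likelihood:

$$ \mathrm{bel}(x_t) = \eta \, p(z_t \mid x_t) \, \overline{\mathrm{bel}}(x_t) $$

where $\eta$ is a normalizing constant. Since only the update step is run in this lab, $\overline{\mathrm{bel}}(x_t)$ is simply the initial belief rather than the output of a prediction step.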
Consider going through the lectures on sensor models and motion models. Please also refer to Lab 10 and Lab 11.
We provide you with a fully-functional and optimized Bayes filter implementation that works on the virtual robot. You will make changes to the code so that the module works with your real robot. Note the time it takes to run the prediction and update steps; several measures were taken to enable fast computation in practice.
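If you want to time these steps yourself in the notebook, a minimal sketch is below; the `loc` object, `current_odom`/`prev_odom` values, and method names are assumptions based on the provided simulator codebase, not guaranteed names:

```python
import time

start = time.time()
loc.prediction_step(current_odom, prev_odom)  # assumed method name from the provided code
loc.update_step()                             # assumed method name from the provided code
print(f"Prediction + update took {time.time() - start:.3f} s")
```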
Since there is no automated method to get the real robot's ground truth pose, you will need to identify certain known poses. You can measure the distance from the origin (0, 0) of the map in terms of the number of (1-foot) tiles in the x and y directions. For this lab, you will need to place your robot at the four marked positions in the map.
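Since the tiles are one foot on a side while the localization grid is typically specified in meters, a small conversion helper can avoid unit mistakes. This is a minimal sketch; the helper name and the meters convention are assumptions, not part of the provided code:

```python
FT_TO_M = 0.3048  # exactly 1 foot in meters

def tile_pose_to_meters(x_tiles, y_tiles, theta_deg):
    """Hypothetical helper: convert a ground-truth pose counted in
    1-ft floor tiles from the map origin (0, 0) to (m, m, deg)."""
    return (x_tiles * FT_TO_M, y_tiles * FT_TO_M, theta_deg)

print(tile_pose_to_meters(-3, -2, 0))  # (-0.9144, -0.6096, 0)
```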
Your robot needs to output ToF sensor readings taken at 20-degree increments, starting from the robot's heading. You can use both of your ToF sensors to collectively output sensor range readings at 20-degree increments (i.e. +0, +20, +40, …, +340 degrees from the current robot heading). Positive angles are along the counter-clockwise direction.
If you are unable to get 18 sensor readings that are 20 degrees apart, you can change the number of sensor readings per loop (`observation_count`) in world.yaml. Remember that reducing the number of observations reduces the accuracy of your localized pose.
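As a sanity check that your bearings match what the filter expects, you can generate the list of relative angles from `observation_count`; the variable names here are illustrative:

```python
import numpy as np

observation_count = 18  # value set in world.yaml
# Counter-clockwise-positive bearings, starting at the robot's heading
bearings = np.arange(0, 360, 360 / observation_count)
print(bearings)  # [  0.  20.  40. ... 320. 340.]
```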
Do not copy the utils.py file.
The four marked poses in the lab are:

- (-3 ft, -2 ft, 0 deg)
- (0 ft, 3 ft, 0 deg)
- (5 ft, -3 ft, 0 deg)
- (5 ft, 3 ft, 0 deg)
To convert a 1D array into a 2D column array, you can use the snippet `np.array(array)[np.newaxis].T`.
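For example, with made-up ToF readings:

```python
import numpy as np

ranges = [651, 432, 310]                 # hypothetical ToF readings in mm
column = np.array(ranges)[np.newaxis].T  # shape (1, 3) -> transpose -> (3, 1)
print(column.shape)                      # (3, 1): a 2D column array
```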
You may need to tune the sensor noise parameter of the Bayes filter (`sensor_sigma` in world.yaml) for the real robot.

If you need to use the asyncio sleep coroutine (i.e. `await asyncio.sleep(3)`) inside a function, you need to add the keyword `async` to the function definition. Here is an example:
```python
import asyncio

async def sleep_for_3_secs():  # 'async' because the body uses 'await'
    await asyncio.sleep(3)

await sleep_for_3_secs()  # top-level 'await' works in Jupyter/IPython
```
NOTE: Notice the keyword `async` in the function definition. Any function that uses the keyword `await` (i.e. calls an asyncio coroutine) should have the `async` keyword in its definition. To call the async "function" (coroutine), use the `await` keyword.
If you need to use `await` inside `RealRobot.perform_observation_loop()` for some reason (e.g. you are using BLE handlers to get the observation data from the real robot), you need to make changes to the localization functions that call this function. Here is a walkthrough of the function definitions that need to be modified, followed by a sketch of the result:
1. Add the `async` keyword to the function definition of `RealRobot.perform_observation_loop()`.
2. Add the `async` keyword to the function definition of `BaseLocalization.get_observation_data()`.
3. Add the `await` keyword when calling the async coroutine `perform_observation_loop()`. [Ref]
4. Add the `await` keyword to the line calling `loc.get_observation_data()`. [Ref]
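Putting the four changes together, the modified pieces might look like the sketch below; the class and method names come from the walkthrough above, while the constructor, bodies, and return shapes are placeholders:

```python
import asyncio
import numpy as np

class RealRobot:
    async def perform_observation_loop(self):
        # (1) 'async' added to the definition so 'await' is legal inside
        await asyncio.sleep(3)        # e.g. wait for BLE notification data
        ranges = np.zeros((18, 1))    # placeholder: 18 ToF readings (column array)
        bearings = np.zeros((18, 1))  # placeholder bearings
        return ranges, bearings

class BaseLocalization:
    def __init__(self, robot):
        self.robot = robot  # placeholder constructor

    async def get_observation_data(self):
        # (2) 'async' added here, and (3) the coroutine call is awaited
        self.obs_range_data, self.obs_bearing_data = (
            await self.robot.perform_observation_loop()
        )

# (4) In a Jupyter cell (IPython allows top-level 'await'):
#   loc = BaseLocalization(RealRobot())
#   await loc.get_observation_data()
```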
NOTE: You could skip the above steps and instead directly call the asyncio sleep coroutine as `asyncio.run(asyncio.sleep(3))` inside the non-async function `RealRobot.perform_observation_loop()`. However, this may not be the right way to use asyncio coroutines and could pose issues.
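Concretely, that shortcut would look like the sketch below. It runs fine in a plain Python script, but `asyncio.run()` raises a `RuntimeError` if an event loop is already running, which is the case inside a Jupyter notebook:

```python
import asyncio

def perform_observation_loop():
    # Non-async version: spins up a fresh event loop just to run the
    # sleep coroutine. Fails with RuntimeError inside Jupyter, where
    # an event loop is already running.
    asyncio.run(asyncio.sleep(3))
```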
To demonstrate that you've successfully completed the lab, please upload a brief lab report (<1,000 words), with code (not included in the word count), photos, and videos documenting that everything worked and what you did to make it happen. Include the robot's belief after localization for each pose, compare it with the ground truth pose, and write down your inferences.