ECE4960-2022

Course on "Fast Robots", offered Spring 2022 in the ECE dept at Cornell University

Lab 6: Closed-loop control (PID)

Objective

The purpose of this lab is to get experience with PID control. The lab is fairly open ended, you can choose to do closed loop control on position or orientation, and you may do so with whatever works best for your system (P, PI, PID, PD). Your hand-in will be judged upon your demonstrated understanding of PID control and practical implementation constraints, and the quality of your solution.

This lab is part of a series of labs (6-8) on PID control, sensor fusion, and stunts, and you will continue to improve on your system throughout these weeks. This week, we will simply aim to get the basic behavior working. While we give you tips for improving the behavior now, if you run out of time, you can optimize more over the coming weeks.

Parts Required

Prelab / BLE

No matter which task you take on, it will be essential that you first set up a good system for debugging.

Please attempt to implement this before your lab session

A good technique will be to:

  1. Have the robot controller start on an input sent from your computer over Bluetooth
  2. Execute PID control over a fixed amount of time (e.g. 5s) while storing debugging data in arrays
  3. Upon completion of the behavior, send the debugging data back to the computer over Bluetooth.

Debugging data may, for example, include sensor data with time stamps, the output of the individual terms of your PID controller, and/or the input you are sending to your motors. Remember, however, that the stored data cannot exceed the Artemis' 384 kB of internal RAM. If you plan to do a lot of tweaking of your gains, you can also consider writing a Bluetooth command that lets you update the gains without having to reprogram the Artemis.
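
The sketch below shows one way this pattern might look on the Artemis side; it is a sketch, not the required implementation. The array sizes are arbitrary, pid_step(), stop_motors(), and last_tof_reading are placeholders for your own code, and the tx_estring_value / tx_characteristic_string names are assumed to come from the Lab 2 BLE codebase; adapt everything to your own command handler.

```cpp
#define MAX_SAMPLES 1000              // keep the arrays well under 384 kB of RAM

unsigned long log_time[MAX_SAMPLES];  // time stamps (ms)
int           log_tof[MAX_SAMPLES];   // raw sensor readings (mm)
float         log_pwm[MAX_SAMPLES];   // controller output
int n_samples = 0;

void run_pid(unsigned long duration_ms) {
  unsigned long start = millis();
  n_samples = 0;
  while (millis() - start < duration_ms) {
    float u = pid_step();                      // placeholder: your PID update
    if (n_samples < MAX_SAMPLES) {             // never write past the arrays
      log_time[n_samples] = millis();
      log_tof[n_samples]  = last_tof_reading;  // placeholder: whatever you log
      log_pwm[n_samples]  = u;
      n_samples++;
    }
  }
  stop_motors();                               // placeholder: cut motor power
}

// After the run is over, stream the arrays back, one entry per notification.
void send_log() {
  for (int i = 0; i < n_samples; i++) {
    tx_estring_value.clear();
    tx_estring_value.append((int) log_time[i]);
    tx_estring_value.append(" | ");
    tx_estring_value.append(log_tof[i]);
    tx_estring_value.append(" | ");
    tx_estring_value.append(log_pwm[i]);
    tx_characteristic_string.writeValue(tx_estring_value.c_str());
  }
}
```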

Feel free to implement this in whatever way works best for you, but we recommend using a callback function in Python to automate data gathering. (Please refer to Lab 2 for information on the robot command protocol and the BLE code base.) The following text includes helpful hints on incorporating callback functions into your code:

Here is a reference notebook to give you some ideas.

NOTE: The BLE codebase has been updated on 21 Feb, 2022 (Version: 1.1) to fix a bug (in the Arduino code) related to converting negative float values into a string. Please re-download the codebase or apply (manually or using git) this patch to your old codebase.

You may consider the following best practices:

  1. Familiarize yourself with the Arduino code structure before you design your robot control architecture.
  2. Use the EString class and modify it as required. It has most of the low-level data handling code.
  3. Beware of buffer overflows when dealing with character arrays (C-strings) in the Arduino code! The EString class utilizes a C-string member variable char_array of size 150, and the append member functions are not memory safe (see the sketch after this list).
  4. Refer to the “Global Variables” section of ble_arduino.ino to create new characteristics. You will need to create a corresponding entry in connection.yaml. You are not expected to follow the naming paradigm (“RX_” in ble_arduino.ino has a corresponding “TX_” in connection.yaml) used in the base code.
  5. Please check out the Jupyter introduction notebook and/or tutorial (~10 min read)! It will familiarize you with some basic terminology and functionality.
  6. Always check the FAQ when you get stuck with a problem. For example, it shows you a quick way to generate new UUIDs.
  7. Your notification function callbacks should perform minimal processing. All you have to do is to store the value and/or time of the event.
  8. Do not call a “receive_*” function inside the callback handler. It defeats the purpose of the BLE notify functionality.
  9. Check out the Tutorials section for primers (Python classes, CLI, Jupyter Lab) and other helpful resources.
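
As a general illustration of the buffer-overflow warning in item 3, the pattern below checks the remaining capacity of a fixed 150-byte C-string buffer before appending to it. The buffer size mirrors char_array in EString, but safe_append itself is a hypothetical helper, not part of the provided codebase.

```cpp
#include <string.h>

// Hypothetical helper: append to a fixed 150-byte buffer only when it fits.
const size_t BUF_SIZE = 150;               // same size as EString's char_array
char char_buf[BUF_SIZE] = "";

bool safe_append(const char *s) {
  size_t used = strlen(char_buf);
  if (used + strlen(s) + 1 > BUF_SIZE) {   // +1 for the terminating '\0'
    return false;                          // would overflow: refuse to append
  }
  strcat(char_buf, s);
  return true;
}
```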

Lab Procedure

Choose one of the following three tasks. The objective of each task is to achieve reliable and accurate, yet fast, control. Note that the task you choose in this lab will eventually form the basis of the stunt you can attempt in Lab 8.

Tips and tricks:

Task A: Don’t Hit the Wall!!

For this task, you will have your robot drive as fast as possible (given the quality of your controller) towards a wall, then stop when it is exactly 300mm away from the wall using feedback from the time of flight sensor. Your solution should be robust to changing conditions, such as the starting distance from the wall (2-4m) and the surface (linoleum/carpet). The catch is that any overshoot or processing delay may lead to crashing into the wall.
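A minimal PI step for this task might look like the sketch below; treat it as a starting point under stated assumptions, not a finished controller. It assumes the SparkFun VL53L1X library used earlier in the course (with startRanging() already called in setup) and polls the sensor without blocking, reusing the last reading between samples; the gains and the drive() helper are placeholders you will replace with your own tuned values and motor code.

```cpp
#include <SparkFun_VL53L1X.h>      // assumed TOF library from the earlier labs

SFEVL53L1X distanceSensor;         // startRanging() assumed to be called in setup()

const float TARGET_MM = 300.0;     // stop 300 mm from the wall
const float KP = 0.05;             // placeholder gains
const float KI = 0.0005;

float integral = 0;
int   last_dist = 0;
unsigned long last_ms = 0;

void pi_step() {
  // Only update the distance when the sensor actually has new data, so the
  // control loop does not block on the (slow) TOF sampling rate.
  if (distanceSensor.checkForDataReady()) {
    last_dist = distanceSensor.getDistance();   // mm
    distanceSensor.clearInterrupt();
  }

  unsigned long now = millis();
  if (last_ms == 0) { last_ms = now; return; }  // first call: just set the clock
  float dt = (now - last_ms) / 1000.0;
  last_ms = now;

  float error = last_dist - TARGET_MM;          // > 0 means still too far away
  integral += error * dt;

  float u = KP * error + KI * integral;         // signed speed command
  u = constrain(u, -255, 255);

  drive(u);  // placeholder: map the signed command to both motors, including
             // any deadband compensation you have measured
}
```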

Beyond the considerations mentioned above, think about the following:

Below you can see an example of a simple PI controller acting on the TOF signal.

Corresponding videos are here:

Solution 1 Solution 2.

NOTE: If you choose this task, your eventual stunt will involve speeding towards the 300mm position (where a small sticky mat will be located), then doing a vertical flip and driving back in the direction from which you came. In this lab you will be limited by the sensor sampling rate, and you may have to lower the motor speed accordingly. In Lab 7 we will look at sensor fusion as a way to overcome this problem.

Task B: Drift much?

For this task, you will have the robot drive forward fast, then execute a drifting 180 degree turn and return in the direction it came from without stopping. The PID controller should control the orientation of your robot by introducing a difference in motor speeds. Before trying to get your car to drift, try controlling the orientation of your car while it is stationary, then introduce a base speed.
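For the stationary orientation-control step, a sketch along the lines below may help. It integrates the gyroscope's z-axis rate (assuming the SparkFun ICM-20948 library from the IMU lab) to estimate yaw, runs a PD update on the heading error, and applies the result as a speed difference between the two sides of the robot; the gains and set_motors() are placeholders.

```cpp
#include <ICM_20948.h>        // assumed IMU library from the earlier lab

ICM_20948_I2C myICM;

float target_deg = 0;         // heading setpoint (e.g. the heading at start-up)
float yaw_deg = 0;            // integrated heading estimate
float prev_error = 0;
unsigned long last_us = 0;

void orientation_pd(float base_speed) {
  if (!myICM.dataReady()) return;
  myICM.getAGMT();                               // refresh the sensor readings

  unsigned long now = micros();
  if (last_us == 0) { last_us = now; return; }   // first call: just set the clock
  float dt = (now - last_us) * 1e-6;
  last_us = now;

  yaw_deg += myICM.gyrZ() * dt;                  // gyrZ() is in deg/s

  float error = target_deg - yaw_deg;
  float deriv = (error - prev_error) / dt;
  prev_error  = error;

  float u = 2.0 * error + 0.1 * deriv;           // placeholder Kp and Kd

  // Turn the correction into a left/right speed difference. With base_speed = 0
  // the robot holds its heading in place; a positive base_speed makes it drive
  // forward while correcting, which is the starting point for the drift.
  set_motors(base_speed - u, base_speed + u);    // placeholder motor helper
}
```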

Beyond the considerations mentioned above, think about the following:

Below you can see an example of the controller maintaining a constant orientation even when the robot is kicked.

Kick

Below is an example of the robot drifting and doing a 180 degree turn as well as graphs of the setpoint, angle, and control signal for the PID controller.

Drift

NOTE: If you choose this task, your eventual stunt will involve speeding towards a wall until you are 800mm out, then turning 180 degrees and driving back from where you came (as fast as possible). Just like in Task A, Lab 7 will involve sensor fusion on the time of flight sensor to help you estimate the distance to the wall at a high sampling rate.

Task C: Thread the needle!

For this task, you will have your robot drive as fast as possible (given the quality of your controller) along a wall at a fixed distance of 300mm, given feedback from your time of flight sensor and the gyroscope. You know what the catch is!
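A bare-bones P controller for wall following, with the TOF sensor mounted sideways, could look like the sketch below: the error between the measured side distance and the 300 mm setpoint becomes a steering correction on top of a fixed forward speed. The gain, base speed, and set_motors() helper are placeholders, and you may want to blend in the gyroscope, as the task suggests, to damp oscillations.

```cpp
const float SIDE_TARGET_MM = 300.0;   // desired distance to the wall
const float KP_STEER = 0.4;           // placeholder gain
const float BASE_SPEED = 120;         // placeholder forward PWM level

void wall_follow_step(int side_dist_mm) {
  // Positive error: too far from the wall, so steer toward it.
  float error = side_dist_mm - SIDE_TARGET_MM;
  float steer = constrain(KP_STEER * error, -80, 80);

  // Apply the correction as a differential on top of the forward speed.
  set_motors(BASE_SPEED + steer, BASE_SPEED - steer);   // placeholder helper
}
```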

Beyond the considerations mentioned above, think about the following:

Below you can see an example of a simple P controller acting on the TOF signal from a distance sensor mounted sideways:

Corresponding videos are here:

Solution 1 Solution 2.

NOTE: If you choose this task, your eventual stunt will involve speeding as fast as possible along a wall with a foam figure balancing on top; the goal is to pass perfectly through a hole of matching shape in a perpendicular wall placed at least 2m in front of your starting position.

Write-up

To demonstrate that you’ve successfully completed the lab, please upload a brief lab report (<800 words) with code snippets (not included in the word count), graphs of sampled data, and videos and/or photos documenting that everything worked reliably and what you did to make it happen.