Objectives

Part I

Lab 2 Part I will ask you to program your robot to perform a few maneuvers using open-loop control. The instructions at the end of this assignment are one way to do this, but not the only way. You will be graded on whether your code, demo, and other submissions meet the specifications below, not on following the instructions exactly.

This lab requires that you create:

  • A new repo for your lab code. Follow the Setting Up instructions to create a new GitHub repo and share it with your lab partner. If you are in a group of one, you do not need to do this.
    NOTE: Name your repo using ONLY lowercase letters, numbers, and dashes.
  • A new package to hold your open loop control code
  • Your code (defined below) must use the Duckietown FSM node to allow joystick control to take over. Unless you want to experiment with the config file for the FSM node (optional), your code should:
    • Publish to the same topic that lane_controller_node uses to send commands to car_cmd_switch_node. (Run the lane following demo to find this topic.)
    • Listen for the mode switch from the FSM node to begin your pattern. It is difficult to run the entire pattern from the FSM callback; also be wary of receiving duplicate messages when switching modes.
    • DEMONSTRATE this functionality (can be in conjunction with other demos or before).
  • A node or nodes that perform the following maneuver:
    • Drive straight for 1m then stop for 5 seconds. DEMONSTRATE this before moving on. Note that it does not need to be incredibly precise.
    • Turn 90 degrees, drive 1m then stop for 5 seconds. DEMONSTRATE this before moving on.
    • Turn 90 degrees
    • Repeat THESE EXACT COMMANDS twice more to make something like a square. Note that it will not be a perfect square. Only the first two corners are graded. DEMONSTRATE this.
  • A launch file to start this maneuver
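One way to satisfy the maneuver spec above is to express the whole pattern as a list of timed (v, omega) command segments that your node plays back once the FSM switches modes. Below is a minimal sketch of that segment logic; the default speed, turn rate, and the resulting durations are made-up placeholders you must tune on your own robot:

```python
import math

def build_square_segments(v=0.3, omega=2.0, stop_time=5.0):
    """Return a list of (v, omega, duration) command segments for the
    square pattern: drive ~1 m, stop 5 s, turn ~90 degrees, repeated
    four times.  v and omega are guesses -- tune them on your robot."""
    drive_time = 1.0 / v                # time to cover ~1 m at speed v
    turn_time = (math.pi / 2) / omega   # time for a ~90 degree turn in place
    side = [
        (v, 0.0, drive_time),    # drive straight ~1 m
        (0.0, 0.0, stop_time),   # stop for 5 seconds
        (0.0, omega, turn_time), # turn ~90 degrees in place
    ]
    return side * 4                     # repeat the exact same commands

segments = build_square_segments()
```

A node would then publish each (v, omega) pair to the car_cmd topic for the stated duration (for example, in a loop driven by rospy.Rate), starting when the FSM callback signals the mode switch.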

Part II (EECE.5560 only)

The goal of this part of the lab is for you to develop a ROS node that estimates robot odometry on the Duckiebot. This odometry will not be exact because the robot will not do exactly what we tell it to do, but let’s try to get close. In this lab, you should:

  • Create one new node to calculate the current position of the robot based on its motion. This node should subscribe to either the wheel encoder data or the wheel velocities (see detailed instructions below) and publish a Pose2D message with the current calculated estimate of the robot’s (x, y, theta) pose. See below for technical details.
  • Create a new launch file that starts your odometry node with the pattern made in Part 1.
  • Test the pattern at least three times and report the final position that your odometry node calculates for each test. Record the position at each corner of your square by measuring the actual position of the robot and noting the calculated position of the robot. There are meter sticks in the lab for your use.
  • Note that you will not be able to use the graphing function from HW6, but you can still measure the accuracy of your odometry.

Submission Instructions

Part I

Submit a PDF to Blackboard with the following information:

  1. The names of the people in your group
  2. A link to your lab repo and the tag that corresponds to this code (2 points)
  3. What node selects which command (joystick or lane following) to run on your Duckiebot? Justify your answer. (2 points)
  4. What do you think car_cmd_switch_node does? (2 points)
  5. What do the inputs to car_cmd_switch_node look like? You can focus on the joystick input here. Move the robot around and watch it change. Describe the components of the message (data types) and their purpose. (2 points)
  6. At some point, the command is converted from linear/angular velocity to commands for the left and right wheels. Which node does this? (2 points)
  7. During Lab 1, you calibrated the wheels. Which node do you think accepts this calibration value and adjusts commands as needed? Justify your answer. (2 points)
  8. What difficulties did you have in tuning your robot to make a pattern? (4 points)
  9. Do you think you could make your robot reliably drive around a circular/oval track like in the lab without any feedback from the camera? Why or why not? (4 points)

Demonstrate the pattern in class.

Part II (EECE.5560 only)

Submit a PDF to Blackboard with the following information:

  1. Provide a link to your repo and the tag that corresponds to this code
  2. Report measured vs. calculated (by your odometry node) positions for each leg of three trials of your square (so 12 total measurements). There are meter sticks in the lab to measure your accuracy.
  3. A description of any problems you had completing this lab.

Demonstrate the odometry of your square pattern in class.

Rubric

Part I

5 points: Package creation

25 points: Algorithm

10 points: Accuracy at first corner (-1 point for every 12.5 cm from the corner)

10 points: Accuracy at second corner (-1 point for every 25 cm from the corner)

20 points: Running the same commands again to complete the square (not graded on accuracy)

10 points: Starts when A is pressed in keyboard control; stops when S is pressed

20 points: Answers to questions

Part II (EECE.5560 only)

15 points: Algorithm

20 points: Accuracy in demo (measured versus actual position; different from Part I)

15 points: Accuracy as reported by student

Detailed Instructions

Part I

  1. Experiment with your robot to answer the questions above. Start it up, start the lane following demo, and move it around with keyboard_control. Refer to Lab 1 if you have any problems here.
  2. Start your container and create a new package for this lab in the “packages” folder of your repo. Note that you will need to add duckietown_msgs as a dependency; this can be added to the end of the catkin_create_pkg command that you used before.
  3. Create a new node in that package that publishes a command in the right format for car_cmd_switch_node to accept. Do NOT include the vehicle name in the topic; use a namespace in your launch file as shown below instead. This will allow your code to run on any vehicle. Select values that make your robot drive ~1m in a straight line. You will have to guess in this first trial; we will test in the next few steps. Make sure that you make this node executable:
    $ chmod +x <node_name>
  4. Create a launch file for your node such that it looks something like this:
    <launch>
        <group ns="$(env VEHICLE_NAME)">
            START YOUR NODE HERE
        </group>
        <include file="$(find anti_instagram)/launch/anti_instagram_node.launch">
            <arg name="veh" value="$(env VEHICLE_NAME)"/>
        </include>
        <include file="$(find line_detector)/launch/line_detector_node.launch">
            <arg name="veh" value="$(env VEHICLE_NAME)"/>
        </include>
        <include file="$(find lane_filter)/launch/lane_filter_node.launch">
            <arg name="veh" value="$(env VEHICLE_NAME)"/>
        </include>
        <include file="$(find ground_projection)/launch/ground_projection_node.launch">
            <arg name="veh" value="$(env VEHICLE_NAME)"/>
        </include>
        <include file="$(find led_emitter)/launch/led_emitter_node.launch">
            <arg name="veh" value="$(env VEHICLE_NAME)"/>
        </include>
        <remap from="fsm_node/set_pattern" to="led_emitter_node/set_pattern"/>
        <include file="$(find fsm)/launch/fsm_node.launch">
            <arg name="veh" value="$(env VEHICLE_NAME)"/>
            <arg name="param_file_name" value="lane_following"/>
        </include>
    </launch>

    This will start your node inside the namespace for your robot. Keep <group ns="$(env VEHICLE_NAME)"> EXACTLY as shown above; it will find your vehicle name.

  5. Commit these files. At this point, you can exit your container, as we will do the rest on the robot.
  6. Start up your robot. We need to build the container with this new package in it, but we want to build it on the robot this time instead of on your laptop. Run:
    $ dts devel build -H <duckiebot_name>.local --pull

    The first time this command runs it may take a while. To get some additional feedback, you may optionally run this first to download the core Docker image:

    $ docker -H <duckiebot_name>.local pull duckietown/dt-core:daffy-arm64v8

  7. Now we need to run the code. Start the container on your robot with:
    $ dts devel run -H <duckiebot_name>.local -s -M --cmd bash

    The -s flag copies your repo to the /code folder on the robot so that it can be mounted by the -M flag and you do not need to rebuild for every small change. It is unlikely that you will have to rebuild during this lab.

  8. Run your code within the container you just started on your robot. You should be able to see that the robot moves with joystick control but your code moves the robot when it is in autonomous mode. If not, debug! You can use
    $ dts start_gui_tools <duckiebot_name>

    to figure out what is going on using standard ROS tools.

  9. Revise your code until it can reliably drive 1m in a straight line then stop. The slides provide additional information for syncing code with your robot.
  10. Experiment to find the right command/timing for the robot to make a 90-degree turn in place.
  11. Follow the above instructions to complete the square pattern. Tune the first two legs until they perform acceptably.

Part II (EECE.5560 only)

There are two ways to complete this lab:

1. Assume that the vehicle is performing the commands exactly as ordered and integrate the wheel velocities provided by the topic wheels_driver_node/wheels_cmd. You will need to perform some experiments to determine the correspondence between the numbers given here and the actual speed of the wheels. It is highly unlikely that 1 unit of velocity input to the robot is exactly 1 m/s in the real world!
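Under method 1, each incoming command gives left and right wheel velocities in command units; multiplying by your experimentally determined scale factor and the time since the last update yields per-wheel travel distances, which feed the standard differential-drive pose update. A sketch of that update in Python; the scale factor k and the baseline default are hypothetical numbers you must measure yourself:

```python
import math

def integrate_cmd(pose, vel_left, vel_right, dt, k=0.6, baseline=0.1):
    """Advance (x, y, theta) given commanded wheel velocities over dt
    seconds.  k converts command units to m/s (measure this!), and
    baseline is the wheel separation in meters (measure this too)."""
    x, y, theta = pose
    d_left = k * vel_left * dt         # distance each wheel traveled
    d_right = k * vel_right * dt
    d = (d_left + d_right) / 2.0       # distance of the robot's center
    dtheta = (d_right - d_left) / baseline
    # Small-step approximation: move along the current heading.
    x += d * math.cos(theta)
    y += d * math.sin(theta)
    theta += dtheta
    return (x, y, theta)
```

Calling this from the wheels_cmd callback (with dt measured between messages) keeps a running (x, y, theta) estimate that the node can publish as a Pose2D.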

2. Use the wheel encoder data provided by left_wheel_encoder_node/tick and right_wheel_encoder_node/tick. You will need to find the correspondence between ticks and real-world wheel movement. You will then need to record each tick as it comes in and, likely using a timer, calculate odometry from both wheels.
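Under method 2, the pose math is the same differential-drive update, but the per-wheel distances come from tick deltas converted through the wheel circumference and ticks per revolution. A sketch in Python; every constant here (ticks per revolution, wheel radius, baseline) is a placeholder you must measure on your own robot:

```python
import math

def ticks_to_pose(pose, dticks_left, dticks_right,
                  ticks_per_rev=135, wheel_radius=0.033, baseline=0.1):
    """Advance (x, y, theta) from encoder tick deltas accumulated since
    the last update.  All constants are placeholders -- verify them by
    experiment (e.g., rotate a wheel one full turn and count ticks)."""
    per_tick = 2 * math.pi * wheel_radius / ticks_per_rev  # meters per tick
    d_left = dticks_left * per_tick
    d_right = dticks_right * per_tick
    d = (d_left + d_right) / 2.0            # distance of the robot's center
    dtheta = (d_right - d_left) / baseline  # heading change
    x, y, theta = pose
    x += d * math.cos(theta)
    y += d * math.sin(theta)
    theta += dtheta
    return (x, y, theta)
```

Each tick callback would only accumulate its wheel's count; a timer callback then takes the deltas since the last update, calls this function, and publishes the resulting Pose2D.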

Find the correspondence between the data from the Duckiebot (using either method) and real-world measurements. Drive your robot around and record how the values change. You may find that the size of your wheels and the baseline distance between the wheels are important. You can use keyboard control or directly publish to the command topic to make your vehicle drive at a constant speed.

When you complete your node, test it thoroughly. You may find that the robot has a consistent bias in one direction or another. You may need to add a scaling factor to one or more components of the input or output. You are free to do this, as long as the same scaling factors are present in all trials.