Objectives

Part I

The purpose of this lab is for you to create your own lane detector and compare it with the Duckiebot lane detector. Comparisons in Part I will be largely qualitative, but you should be able to get a sense of how well you did. You do NOT need to implement everything in the Tips section below; many different solutions are possible. Creativity is encouraged!

You must generate a node that:

  1. Subscribes to the camera topic on the robot.
  2. Publishes at least two images that overlay detected lines on the input image, much like line_detector_node/debug/segments/compressed does in Duckietown code. Please publish these to new topics. The format of the line overlays (color, style) is up to you. You may use the code provided in Homework 8 if you wish.
    1. One image for white lines.
    2. One image for yellow lines.
    3. Optionally: an image with both lines. If you do this, use different colors for the white and yellow lines.

Create a launch file that starts this node and Duckietown lane following (see tip below) to demonstrate your lane detector.

Part II (EECE.5560 ONLY)

In Part II, you will connect your lane detector code to other Duckiebot code. You should:

  1. Modify your node from Part I so that it publishes the location of white and yellow lines such that the Duckietown lane controller can use them. Publishing to the ground projection node is suggested but other methods are possible. See tips below for details.
  2. Also, generate a launch file that starts your node and all Duckietown nodes needed to perform lane following. The Duckietown line detection node should NOT be started. See below for an example.

Tips

Part I

Code Structure You already have most of the code needed to complete this assignment from Homework 7 and Homework 8. It is recommended to combine these nodes into a single node with only one callback. Typically it is better to have multiple smaller nodes for extensibility, but in image processing that comes at the cost of sending and receiving several images between nodes, which on an embedded system like a Raspberry Pi or Jetson Nano can produce significant lag. Thus, it is recommended to use one node that takes an image from the Duckiebot camera and outputs the resulting line segments after line detection. You may still wish to publish the results of each intermediate step for debugging purposes, and you MUST publish an image with the line segments drawn on it, similar to the Duckietown debug image. Publishing extra debug images (for example, to view in rqt_image_view) is unlikely to have much of an effect on system performance.
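
If it helps, here is a minimal sketch of one way such a combined node could be structured. The class name, the debug topic names, and the placeholder processing step are all illustrative, not required; the subscriber arguments are explained in the Processing Time tip below.

import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import CompressedImage, Image

class MyLaneDetector:
    def __init__(self):
        self.bridge = CvBridge()
        # One subscriber, one callback (queue_size and buff_size are explained below).
        rospy.Subscriber("camera_node/image/compressed", CompressedImage,
                         self.lanefilter_cb, queue_size=1, buff_size=2**24)
        # Debug images with the detected line segments drawn on them.
        self.pub_white = rospy.Publisher("my_lane_detector/debug/white", Image, queue_size=1)
        self.pub_yellow = rospy.Publisher("my_lane_detector/debug/yellow", Image, queue_size=1)

    def lanefilter_cb(self, msg):
        img = self.bridge.compressed_imgmsg_to_cv2(msg)
        # ... resize/crop, color filter, edge detection, Hough transform (Homework 7/8) ...
        white_debug = img.copy()   # placeholder: replace with the image overlaid with white lines
        yellow_debug = img.copy()  # placeholder: replace with the image overlaid with yellow lines
        self.pub_white.publish(self.bridge.cv2_to_imgmsg(white_debug, "bgr8"))
        self.pub_yellow.publish(self.bridge.cv2_to_imgmsg(yellow_debug, "bgr8"))

if __name__ == "__main__":
    rospy.init_node("my_lane_detector")
    MyLaneDetector()
    rospy.spin()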

Compressed Images On the robot, you will subscribe to the output of the camera. Note that this is a compressed image, so you will need to use a slightly different function in your OpenCV bridge:

self.bridge.compressed_imgmsg_to_cv2(<ros_image>)

There is no need to publish compressed images.

Processing Time Your image processing code will probably take a relatively long time to run on the robot. Images are published about every 50 ms, and your code could take a whole second to complete. Left to its default settings, ROS will patiently pile messages up in a queue while it waits for you to get to them, which means your images will be seconds (or more) out of date by the time your code is ready for them. To prevent this, we need to give two more parameters to the subscriber function: queue_size=1 and buff_size=2**24. Note that buff_size can be anything bigger than we expect an image to be. Your subscriber line should look something like this:

rospy.Subscriber("camera_node/image/compressed", CompressedImage, self.lanefilter_cb, queue_size=1, buff_size=2**24)

Image Size To match the expectations of the ground projection node, we need to use the same size image as the Duckietown code and follow its conventions. You need to resize your image to 160 pixels wide and 120 pixels tall, THEN remove the top 40 pixels from the image. The code below should do that, but feel free to use the method of your choice:

image_size = (160, 120)  # (width, height), the order cv2.resize expects
offset = 40              # number of rows to crop from the top
new_image = cv2.resize(old_image, image_size, interpolation=cv2.INTER_NEAREST)
cropped_image = new_image[offset:, :]  # keep everything below the top 40 rows

Part II

Output Format Investigate the topic name and message type by starting the lane following demo and inspecting the ground projection node. Notice that the message has the type SegmentList, which contains a list of messages of type Segment. You can treat this list just like any other list in Python. Within the Segment message, you only need to provide values for color and pixels_normalized. No other component of that message needs a value. Note that this object is immutable, which means you will have to create a new object for each new image rather than reusing the old object.
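
For reference, here is a minimal sketch of filling in one Segment, assuming duckietown_msgs provides the Segment and SegmentList types described above; the helper name to_segment is just for illustration.

from duckietown_msgs.msg import Segment

def to_segment(line_normalized, color):
    # line_normalized is [x1, y1, x2, y2], already normalized as described
    # in the Output Convention tip below.
    seg = Segment()
    seg.color = color  # Segment.WHITE or Segment.YELLOW
    seg.pixels_normalized[0].x = line_normalized[0]
    seg.pixels_normalized[0].y = line_normalized[1]
    seg.pixels_normalized[1].x = line_normalized[2]
    seg.pixels_normalized[1].y = line_normalized[3]
    return seg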

Output Convention The Duckietown convention is to pass the line coordinates normalized to the image size, i.e. between 0 and 1. This is the pixels_normalized component of the Segment message. The fastest way to do this is with a numpy operation. The example below uses the same variables as the resize step and should go in your callback (it assumes numpy has been imported as np). If you changed any variable names, you will have to modify the code below accordingly.

arr_cutoff = np.array([0, offset, 0, offset])  # add the cropped rows back onto the y coordinates
arr_ratio = np.array([1. / image_size[0], 1. / image_size[1], 1. / image_size[0], 1. / image_size[1]])  # scale x by 1/width and y by 1/height

Then convert each line segment that your Hough transform outputs (the line variable below) to the normalized format like this:

line_normalized = (line + arr_cutoff) * arr_ratio

Now your output should be coordinates between 0 and 1, just like in the lane following demo. Publish the normalized output of your Hough transform to the ground projection node. You should see your lines in the debug image from the ground projection node.
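
Putting the pieces together, the end of your callback might look something like the sketch below. It assumes lines holds the output of cv2.HoughLinesP for the white-line image, to_segment is the helper sketched earlier, msg is the incoming camera message, and self.pub_segments is a publisher you created in __init__ for the topic name and message type you identified on the ground projection node (that topic name is for you to fill in; everything else here is illustrative).

from duckietown_msgs.msg import Segment, SegmentList  # near the top of your file

seg_list = SegmentList()
seg_list.header = msg.header  # reuse the camera image's timestamp and frame
if lines is not None:
    for line in lines.reshape(-1, 4):
        line_normalized = (line + arr_cutoff) * arr_ratio
        seg_list.segments.append(to_segment(line_normalized, Segment.WHITE))
self.pub_segments.publish(seg_list)

You can append the yellow segments (built with Segment.YELLOW) to the same list before publishing.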

Launch File Take a look at the lane following demo launch file to see how Duckietown launch files are created. You will need a similar launch file here. You will need to turn off the Duckietown line detector functionality but leave everything else on (see the /lane_following/line_detection argument set to false in the example below). It should look something like this:

THIS LAUNCH FILE IS FOR PART II ONLY

<launch>

    <arg name="veh" default="$(env VEHICLE_NAME)"/>
    <arg name="ai_trafo_mode" default="cb" doc="'cb' for colo balance only; 'both' for color balance and linear trafo"/>
    <arg name="ai_interval" default="5" doc="interval with which the linear trafo gets updated. color balance is performed every second."/>
    <arg name="verbose" default="false"/>

    <!-- start Duckietown nodes -->
    <arg name="demo_name" value="lane_following"/>

    <include file="$(find led_emitter)/launch/led_emitter_node.launch">

        <arg name="veh" value="$(env VEHICLE_NAME)"/>

    </include>
    
    <!-- start basic args -->
    <include file="$(find duckietown_demos)/launch/master.launch">

        <!-- Basic arguments -->
        <arg name="veh" value="$(arg veh)"/>
        <arg name="demo_name" value="$(arg demo_name)"/>
        <arg name="param_file_name" value="default" />
        <arg name="visualization" value="true" />

        <!-- Finite state machine -->
        <arg name="fsm" value="true"/>
        <arg name="/fsm/logic_gate" value="false"/>

        <!-- Camera and anti instagram -->
        <arg name="/camera/raw" value="false" />
        <arg name="anti_instagram" value="true" />

        <!-- Lane Following stack -->
        <arg name="lane_following" value="true"/>
        <arg name="/lane_following/line_detection" value="false"/>
        <arg name="line_detector_param_file_name" value="default" />
        <arg name="/lane_following/ground_projection" value="true"/>
        <arg name="/lane_following/lane_filter" value="true"/>
        <arg name="/lane_following/lane_controller" value="true"/>

    </include>

    <group ns="$(env VEHICLE_NAME)">
        <!-- YOUR NODE/PARAMS/ETC GO HERE -->
    </group>

</launch>

Code Due to some recent changes in the Duckietown infrastructure, you will have to add one of two things to your code.

Method 1 Inherit your class from DTROS as in the documentation here: https://docs.duckietown.com/daffy/devmanual-software/intermediate/dtros/index.html
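
A minimal sketch of that approach is shown below, assuming your environment provides the duckietown.dtros module described in the linked documentation; the class name and node type here are illustrative.

import rospy
from duckietown.dtros import DTROS, NodeType

class MyLaneDetector(DTROS):
    def __init__(self, node_name):
        # Initialize the DTROS parent class, which sets up the ROS node for you.
        super(MyLaneDetector, self).__init__(node_name=node_name, node_type=NodeType.PERCEPTION)
        # ... the subscribers/publishers from Part I go here ...

if __name__ == "__main__":
    node = MyLaneDetector(node_name="my_lane_detector_node")
    rospy.spin()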

Method 2 Add two services to your node similar to the example code below.

Towards the top of your file, where you import libraries, add the service types we will need:

from std_srvs.srv import SetBool, SetBoolResponse

In your main class's __init__ function, set up the two services with these two lines:

# Advertise the switch services that other Duckietown nodes will call.
rospy.Service('line_detector_node/switch', SetBool, self.ld_switch)
rospy.Service('lane_filter_node/switch', SetBool, self.lf_switch)

Then add the following two functions to your class:

def ld_switch(self, msg):
    # Dummy switch handler for the line detector; rospy converts the
    # (success, message) tuple into a SetBoolResponse.
    return True, ""

def lf_switch(self, msg):
    # Dummy switch handler for the lane filter.
    return True, ""

Submission Instructions

DEMONSTRATE your code in class/office hours and turn in the following on Blackboard:

Part I

  1. A link to your repo and the tag that corresponds to this code.
  2. A paragraph explaining how you think your lane detector compares with the Duckietown lane detector. When does it do better/worse? Why might that be?
  3. A description of any issues you had with this assignment.

Part II (EECE.5560 ONLY)

  1. Submit an image of the output of rqt_graph, captured while your lane detector code is running as part of the lane following system you demonstrate.
  2. The tag that corresponds to this code, if different from Part I.

Rubric

Part I

  • Algorithm: 20 points
    • Color filtering: 5 points
    • Edge detection: 5 points
    • Hough transform: 10 points
  • Implementation: 30 points
    • ROS package setup: 5 points
    • ROS node implementation: 15 points
    • Speed considerations (as described above): 10 points
  • Quality of lane detector: 40 points
  • Comparison with Duckietown implementation: 10 points

Part II

  • Data sent to Duckietown code (i.e. line segments showing up in ground_projection_node): 15 points
  • Launch file starts all code: 5 points
  • Quality of lane following (compared to Duckietown demo): 30 points