Line Following with Camera Sensor (Lego NXT Robot) !Under Construction!

Keywords: openrave, lego nxt, line following, camera sensor, robot, simulation, python

The video below shows the simulation of a line-following robot created in OpenRave. The simulation uses the Lego NXT REM robot model and the simulated camera sensor that is available in OpenRave. This tutorial pairs line following with the Lego NXT robot so it can serve as an introduction to OpenRave for beginner users. OpenRave can simulate an extensive list of real-world robots, including the PR2, Barrett Arm, Puma, Segway, and more. The problem for many users is that these can only ever remain simulations, because the capital involved in acquiring one of the mentioned robots is too steep. The Lego NXT robot kit, however, is affordable for any college or laboratory and offers a good starting platform for roboticists interested in exploring the capabilities of OpenRave. The line-following problem was chosen because it is a standard first problem for a mobile robot platform with either camera or light-sensor feedback. This tutorial will show the reader how to create a line-following workspace, attach a camera sensor, and execute a line-following control algorithm in Python.

Motivation and Audience

This tutorial’s motivation is to show how a line-following problem can be simulated in OpenRave using the Lego NXT robot built in my previous tutorial, a custom environment that includes a line path, and a camera sensor that is available in OpenRave. Before starting this tutorial, the reader should:

  • Have openRave installed
  • Have the Lego NXT robot built and defined in openRave as an XML file
  • Understand python syntax
  • Understand XML syntax

The rest of the tutorial is presented as follows:

  • Create the Workspace (Kinbody)
  • Create the Environment
  • Control Algorithm

Create Workspace

<Temporarily unavailable: the SolidWorks model was lost when a lab computer was wiped>

Create Environment

The hierarchical structure for the environment is as follows:

environment - attributes: file
  bkgndcolor - 3 floats
  camrotaxis - 4 floats
  camtrans - 3 floats
  kinbody - attributes: name, file
  robot - attributes: name, file
  planner - attributes: type, file
  sensorsystem - attributes: type, file
  controller - attributes: type, file, robot, args
  probleminstance - attributes: type, file
  inversekinematicssolver - attributes: type, file
  physicsengine - attributes: type, file
  sensor - attributes: type, file
  collisionchecker - attributes: type, file
  trajectory - attributes: type, file
  viewer - attributes: type, file
  server - attributes: type, file

Using the structure above I defined my environment as follows:

    <Environment>
      <!-- Position the initial camera viewer for the simulation -->
      <camtrans>.4 -1.1 1.94</camtrans>
      <camrotaxis>.91 .39 -.005 39.12</camrotaxis>

      <Robot file="remNXT3.robot.xml" name="remNXT">
        <Translation>-.63 .1 .02</Translation>
        <RotationAxis>0 0 1 -180</RotationAxis> <!-- rotate the robot globally -->
        <!-- attach a camera -->
        <AttachedSensor name="camera">
          <link>base</link> <!-- reference the origin of the base body -->
          <translation>0 0.09 0</translation> <!-- position the camera -->
          <rotationaxis>1 0 0 -160</rotationaxis> <!-- rotate the camera to face the ground -->
          <sensor file="data/camera.sensor.xml"/> <!-- use the camera sensor file -->
        </AttachedSensor>
      </Robot>

      <!-- Define the base with the line pattern -->
      <KinBody file="linefollowingbase.kinbody.xml" name="base">
        <Translation>0 0 0</Translation>
      </KinBody>

      <!-- Define all of the walls -->
      <KinBody name="floorwalls">
        <Body type="static"> <!-- static means that the body is not affected by the physics engine -->
          <Geom type="box">
            <extents>2 0.01 0.2</extents> <!-- define dimensions -->
            <translation>0 -2 0.2</translation> <!-- define position -->
            <diffuseColor>0 1 0</diffuseColor> <!-- define color (rgb) -->
            <ambientColor>0 1 0</ambientColor>
          </Geom>
          <Geom type="box">
            <extents>2 0.01 0.2</extents>
            <translation>0 2 0.2</translation>
            <diffuseColor>1 0 0</diffuseColor>
            <ambientColor>1 0 0</ambientColor>
          </Geom>
          <Geom type="box">
            <extents>0.01 2 0.2</extents>
            <translation>2 0 0.2</translation>
            <diffuseColor>0 0 1</diffuseColor>
            <ambientColor>0 0 1</ambientColor>
          </Geom>
          <Geom type="box">
            <extents>0.01 2 0.2</extents>
            <translation>-2 0 0.2</translation>
            <diffuseColor>.6 .6 .6</diffuseColor>
            <ambientColor>0.6 0.6 0.6</ambientColor>
          </Geom>
        </Body>
      </KinBody>
    </Environment>

This results in the following environment when opened in openRave:

Control Algorithm

In order to control the robot I implemented a simple turn-left/turn-right controller that examines a single row of the camera data. The pseudo code is as follows:

Line Follow Control Algorithm

Step 1: Get the camera data.

Step 2: Convert one row of the image from RGB to greyscale.

Steps 3-5: Find the first and last black pixels in that row and calculate the middle black pixel.

Step 6: Calculate the distance from the image center to the midpoint of the line and steer left, right, or straight accordingly.
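The steps above can be checked in isolation with a short standalone sketch that needs no OpenRave at all. The scan-line values, threshold, and dead-band width below are made-up illustration values, not taken from the simulator:

```python
# One synthetic greyscale scan line: bright floor (255) with a dark band
# (the "line") occupying columns 280-340.
row = [255] * 640
for i in range(280, 341):
    row[i] = 20

THRESH = 100  # pixels darker than this count as part of the line

# Steps 3-5: first and last black pixels, then the middle black pixel
first = next(i for i, v in enumerate(row) if v < THRESH)
last = max(i for i, v in enumerate(row) if v < THRESH)
mid = (first + last) // 2

# Step 6: compare the line midpoint against the image center
center = 320   # image center column (640-pixel-wide image)
sen = 120      # sensitivity: half-width of the dead band around the center
if mid < center - sen:
    action = 'right'
elif mid > center + sen:
    action = 'left'
else:
    action = 'forward'
print(action)  # -> forward (310 lies inside the 200..440 dead band)
```

With the line centered slightly left of the image center (midpoint 310), the controller stays inside the dead band and drives forward; shifting the dark band past column 440 would trigger a left turn.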

# Copyright (C) 2009-2010 Rosen Diankov (
# Code written by Sean Mason ( with the help of Rosen Diankov
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import with_statement # for python 2.5
__author__ = 'Rosen Diankov'
__copyright__ = '2009-2010 Rosen Diankov ('
__license__ = 'Apache License, Version 2.0'

from openravepy import *
from numpy import *
import time, signal, threading
from optparse import OptionParser

try: # Tkinter and PIL are only needed for the camera viewer GUI
    from Tkinter import *
    import tkFileDialog
    import Image, ImageDraw, ImageTk
except ImportError:
    pass

class CameraViewerGUI(threading.Thread):
    class Container:
        pass
    def __init__(self,sensor,title='Camera Viewer'):
        threading.Thread.__init__(self)
        self.sensor = sensor
        self.title = title
        self.lastid = -1
        self.imagelck = threading.Lock()

    def updateimage(self):
        data = self.sensor.GetSensorData()
        if data is not None and not self.lastid == data.stamp:
            width = data.imagedata.shape[1]
            height = data.imagedata.shape[0]
            self.image = Image.frombuffer('RGB',[width,height], data.imagedata.tostring(), 'raw','RGB',0,1)

            photo = ImageTk.PhotoImage(self.image)
            if self.container is None:
                self.container = self.Container()
                self.container.width = width
                self.container.height = height
                self.container.main = self.main
                self.container.canvas = Canvas(self.main, width=width, height=height)
                self.container.canvas.pack(expand=1, fill=BOTH)#side=TOP,fill=X)#
                self.container.obr = None

            self.container.canvas.create_image(self.container.width/2, self.container.height/2, image=photo)
            self.container.obr = photo
            self.lastid = data.stamp

    def saveimage(self,filename):
        with self.imagelck:
            self.image.save(filename)

    def run(self):
        self.main = Tk()
        self.main.title(self.title)      # window title
        self.main.resizable(width=True, height=True)
        self.container = None
        def update():
            self.updateimage()
            self.main.after(100,update)  # poll the sensor roughly ten times a second
        update()
        self.main.mainloop()

class OpenRAVEScene:
    def __init__(self,env,scenefilename,robotname=None):
        self.orenv = env
        if not self.orenv.Load(scenefilename):
            raise ValueError('failed to open %s openrave file'%scenefilename)
        if len(self.orenv.GetRobots()) == 0:
            raise ValueError('no robots found in scene %s'%scenefilename)

        with env:
            if robotname is None:
                self.robot = self.orenv.GetRobots()[0]
            else:
                self.robot = [r for r in self.orenv.GetRobots() if r.GetName()==robotname][0]

            # create a camera viewer for every camera sensor
            self.viewers = []
            for attachedsensor in self.robot.GetAttachedSensors():
                if attachedsensor.GetSensor() is not None:
                    sensordata = attachedsensor.GetSensor().GetSensorData()
                    if sensordata is not None and sensordata.type == Sensor.Type.Camera:
                        attachedsensor.GetSensor().SendCommand('power 1')
                        title = attachedsensor.GetName()
                        if len(title) == 0:
                            title = attachedsensor.GetSensor().GetName()
                            if len(title) == 0:
                                title = 'Camera Sensor'
                        self.viewers.append(CameraViewerGUI(sensor=attachedsensor.GetSensor(),title=title))
        print 'found %d camera sensors on robot %s'%(len(self.viewers),self.robot.GetName())
        for viewer in self.viewers:
            viewer.start()
        self.prevaction = signal.signal(signal.SIGINT,lambda x,y: self.sighandle(x,y))

    def sighandle(self,x,y):
        self.quitviewers()
        self.prevaction(x,y)

    def quitviewers(self):
        for viewer in self.viewers:
            viewer.main.quit()

    def __del__(self):
        self.quitviewers()

def run(args=None):
    """Executes the testcamerasensor example

    :type args: arguments for script to parse, if not specified will use sys.argv
    """
    parser = OptionParser(description='Displays all images of all camera sensors attached to a robot.')
    OpenRAVEGlobalArguments.addOptions(parser)
    parser.add_option('--scene',action="store",type='string',dest='scene',
                      # sets the default environment to the custom line-follow example
                      # (filename assumed here; substitute the name of your own environment file)
                      default='linefollow.env.xml',
                      help='OpenRAVE scene to load')
    parser.add_option('--robotname',action="store",type='string',dest='robotname',default=None,
                      help='Specific robot sensors to display (otherwise first robot found will be displayed)')
    (options, leftargs) = parser.parse_args(args=args)
    env = OpenRAVEGlobalArguments.parseAndCreate(options,defaultviewer=True)
    scene = OpenRAVEScene(env,options.scene,options.robotname)

    if options._physics is None:
        # no physics engine set, so set one
        physics = RaveCreatePhysicsEngine(env,'ode')
        env.SetPhysicsEngine(physics)
    # sets gravity to -9.8
    env.GetPhysicsEngine().SetGravity([0,0,-9.8])
    # start the simulation loop so physics and the velocity controller advance
    env.StartSimulation(timestep=0.001)

    with env:
        robot = env.GetRobots()[0]
        # get DOF indices of the active wheel joints
        wheeldofs = [robot.GetJoint('wheelL').GetDOFIndex(), robot.GetJoint('wheelR').GetDOFIndex()]
        # Set the robot controller to "odevelocity". This allows velocity control over the joints.
        robot.SetController(RaveCreateController(env,'odevelocity'))

    while True:
        # Get data from camera sensor. Data comes in as a 640 x 480 x 3 matrix of rgb values 0-255.
        data = array(robot.GetAttachedSensors()[0].GetData().imagedata)
        # Convert camera data from rgb to greyscale in row 100
        dataGrey = ones(640)
        for col in range(640):
            r = data[100,col,0]
            g = data[100,col,1]
            b = data[100,col,2]
            brightness = int(round(0.299*r + 0.587*g + 0.114*b))
            dataGrey[col] = brightness

        # Find midpoint of the line (i.e. midpoint between first and last black pixel)
        first = 0
        last = 0
        for i in range(640):
            if dataGrey[i] < 100:
                first = i
                for j in range(first,640):
                    if dataGrey[j] < 100:
                        last = j
                break # stop once the first black pixel has been found
        mid = (first+last)/2

        # Print midpoint value to the terminal
        print 'mid found: ', mid

        # sensitivity value: half-width of the dead band around the image center
        sen = 120

        if mid < 320-sen:
            print 'right'
            # lock the environment
            with env:
                # execute a right turn
                robot.GetController().SendCommand('setvelocity 0 -2')
                # Confirm that the controller received the velocities by printing to the terminal
                print robot.GetJoint('wheelL').GetVelocities(), robot.GetJoint('wheelR').GetVelocities()
        elif mid > 320+sen:
            print 'left'
            # lock the environment
            with env:
                # execute a left turn
                robot.GetController().SendCommand('setvelocity 2 0')
                # Confirm that the controller received the velocities by printing to the terminal
                print robot.GetJoint('wheelL').GetVelocities(), robot.GetJoint('wheelR').GetVelocities()
        else:
            print 'forward'
            # lock the environment
            with env:
                # drive forwards
                robot.GetController().SendCommand('setvelocity 1.5 -1.5')
                # Confirm that the controller received the velocities by printing to the terminal
                print robot.GetJoint('wheelL').GetVelocities(), robot.GetJoint('wheelR').GetVelocities()
        time.sleep(0.1)

if __name__=='__main__':
    run()
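The RGB-to-greyscale conversion in the control loop uses the standard ITU-R BT.601 luminance weights (0.299, 0.587, 0.114). A quick standalone check of that formula, independent of OpenRave:

```python
# Standalone check of the luminance formula used in the control loop above.
def brightness(r, g, b):
    """Convert an RGB pixel (0-255 per channel) to a greyscale value."""
    return int(round(0.299*r + 0.587*g + 0.114*b))

print(brightness(255, 255, 255))  # -> 255 (white)
print(brightness(0, 0, 0))        # -> 0 (black)
print(brightness(255, 0, 0))      # -> 76 (saturated red is fairly dark in greyscale)
```

Note that saturated red comes out at 76, below the black-pixel threshold of 100 used in the loop, so a strongly colored floor patch could be mistaken for the line; keep the course surface bright if you reuse these thresholds.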

Final Words
