On-line evolution

We recently finished our proof-of-concept experiments with on-line evolution. In on-line evolution, a human trainer drives the robot, and while it is being driven the robot learns (via an evolutionary algorithm) to imitate the human’s control behaviors. After a …
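For readers who want a concrete picture of what an on-line evolutionary loop can look like, here is a minimal C++ sketch. It is not our Android code: the `Sample`/`Controller` types, the tiny linear controller, and the (1+1)-style update are simplified stand-ins for illustration only.

```cpp
// Minimal sketch of an on-line evolutionary imitation loop (hypothetical; not
// our Android code). While the trainer drives, (sensor, command) pairs are
// recorded; between control steps a (1+1)-style update nudges a tiny linear
// controller toward reproducing the trainer's commands.
#include <array>
#include <cstdlib>
#include <vector>

struct Sample {
    std::array<double, 3> sensors;  // e.g., coarse "track seen left/center/right" values
    int command;                    // 0 = forward, 1 = left, 2 = right
};

// A tiny controller: one weight vector per command; pick the strongest response.
struct Controller {
    std::array<std::array<double, 3>, 3> w{};
    int act(const std::array<double, 3>& s) const {
        int best = 0;
        double bestScore = -1e18;
        for (int c = 0; c < 3; ++c) {
            double score = 0;
            for (int i = 0; i < 3; ++i) score += w[c][i] * s[i];
            if (score > bestScore) { bestScore = score; best = c; }
        }
        return best;
    }
};

// Fitness = fraction of the recorded human commands the controller reproduces.
double fitness(const Controller& c, const std::vector<Sample>& demo) {
    if (demo.empty()) return 0.0;
    int hits = 0;
    for (const auto& s : demo) hits += (c.act(s.sensors) == s.command);
    return static_cast<double>(hits) / demo.size();
}

Controller mutate(Controller c) {
    for (auto& row : c.w)
        for (auto& v : row)
            v += 0.1 * (2.0 * std::rand() / RAND_MAX - 1.0);  // small random nudge
    return c;
}

int main() {
    std::vector<Sample> demo;  // in a real system, filled continuously from the drive
    Controller champion;
    for (int step = 0; step < 1000; ++step) {
        // ... append the latest (sensors, human command) pair to `demo` here ...
        Controller challenger = mutate(champion);
        if (fitness(challenger, demo) >= fitness(champion, demo))
            champion = challenger;  // keep whichever imitates the human better
    }
    return 0;
}
```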

Code for on-board, evolved path following

The code for the paper on on-board, evolved path following is now available via Bitbucket at bitbucket.org/uidaholair/learning-from-demonstration-for-encapsulated-evolution-of. It runs on two Android phones. One phone acts as the robot’s “brains”, including performing all of the image processing and running the …

On-board, evolved path following

Our research group recently submitted a paper on on-board, evolved path following to the journal Evolutionary Intelligence.  Very briefly, a human drives the robot for a few minutes on a training track, during which time the robot (we used the Rover 5 chassis with an Android smartphone) collects data on the color pattern of the track and information on what it was seeing when the driver chose to turn or go forward (image-action pairs).  After a few minutes of driving, the robot pauses and runs an on-board evolutionary algorithm to train a neural network (NN) to imitate the human driver’s actions.
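To make the idea concrete, here is a hedged C++ sketch of evolving a small network to reproduce recorded image-action pairs. The feature encoding, network sizes, and population settings are invented for illustration; this is not the code from the paper.

```cpp
// Sketch of evolving a small neural network on recorded image-action pairs.
// Hypothetical sizes and settings -- this is not the code from the paper.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

constexpr int kInputs = 16;   // e.g., coarse color features extracted from a camera frame
constexpr int kHidden = 6;
constexpr int kActions = 3;   // forward, left, right

struct Pair { std::vector<double> features; int action; };  // one image-action pair

using Genome = std::vector<double>;  // all network weights, flattened

// Feed-forward pass: input -> tanh hidden layer -> linear outputs; return argmax action.
int forward(const Genome& g, const std::vector<double>& x) {
    std::vector<double> h(kHidden), o(kActions);
    std::size_t k = 0;
    for (int j = 0; j < kHidden; ++j) {
        double s = 0;
        for (int i = 0; i < kInputs; ++i) s += g[k++] * x[i];
        h[j] = std::tanh(s);
    }
    for (int a = 0; a < kActions; ++a) {
        double s = 0;
        for (int j = 0; j < kHidden; ++j) s += g[k++] * h[j];
        o[a] = s;
    }
    return static_cast<int>(std::max_element(o.begin(), o.end()) - o.begin());
}

// Fitness = how often the network picks the same action the human driver picked.
double fitness(const Genome& g, const std::vector<Pair>& data) {
    if (data.empty()) return 0.0;
    int hits = 0;
    for (const auto& p : data) hits += (forward(g, p.features) == p.action);
    return static_cast<double>(hits) / data.size();
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 0.2);
    std::vector<Pair> data;  // filled with image-action pairs recorded during the drive
    const std::size_t genomeLen = kInputs * kHidden + kHidden * kActions;

    std::vector<Genome> pop(20, Genome(genomeLen, 0.0));
    for (auto& g : pop)
        for (auto& w : g) w = noise(rng);  // random initial weights

    for (int gen = 0; gen < 200; ++gen) {  // a few minutes of on-board evolution
        std::sort(pop.begin(), pop.end(), [&](const Genome& a, const Genome& b) {
            return fitness(a, data) > fitness(b, data);
        });
        const std::size_t half = pop.size() / 2;
        for (std::size_t i = half; i < pop.size(); ++i) {  // replace the worst half
            pop[i] = pop[i - half];                         // copy a survivor...
            for (auto& w : pop[i]) w += noise(rng);         // ...and mutate it
        }
    }
    return 0;  // pop.front() holds the best imitator from the final sort
}
```

The only fitness signal in a setup like this is agreement with the driver’s recorded choices, which is why a short demonstration drive is enough to define the whole learning problem.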

About 5 minutes of evolution was sufficient for the robot to navigate fairly well.  We tested it on novel tracks with much sharper turns than it saw during training.  It was ~95% successful on the training track and ~50% successful on the novel test track, with most of the failures occurring on the sharp corners it hadn’t seen before.  In other words, just a few minutes of training and evolution are enough to teach a robot to follow a path fairly well.

Clicker training

Last semester (Spring 2013) we implemented a clicker training algorithm.  Clicker training is regularly used with dogs (and other animals) as a way of shaping behaviors.  In our robotic clicker training, the trainer supplies positive and negative feedback to train the robot on tasks such as “go to the red ball, then turn around and return to the green ball.”  The code is available on the new Android Code page (see the list of pages on the right).
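As a rough sketch of how click-based feedback can shape behavior (illustrative only, not the algorithm on the Android Code page; the state and action sets below are invented), the core update can be as small as this:

```cpp
// Sketch of a clicker-style training loop (illustrative only; not the code on
// the Android Code page). The trainer's clicks are the only learning signal:
// a positive click strengthens the action the robot just took in its current
// perceptual state, a negative click weakens it.
#include <array>
#include <cstdio>

constexpr int kStates = 4;    // e.g., "red ball left / centered / right / not visible"
constexpr int kActions = 3;   // turn left, go forward, turn right

std::array<std::array<double, kActions>, kStates> score{};  // state-action scores

int chooseAction(int state) {
    int best = 0;
    for (int a = 1; a < kActions; ++a)
        if (score[state][a] > score[state][best]) best = a;
    return best;
}

// feedback: +1 for a positive click, -1 for a negative click
void applyFeedback(int state, int action, int feedback) {
    const double rate = 0.5;
    score[state][action] += rate * feedback;
}

int main() {
    // In the real system the state comes from the camera and the feedback from
    // the trainer's interface; both are faked here so the sketch runs stand-alone.
    const int state = 1;  // pretend the red ball is roughly centered
    for (int step = 0; step < 10; ++step) {
        int action = chooseAction(state);
        int feedback = (action == 1) ? +1 : -1;  // pretend the trainer wants "go forward"
        applyFeedback(state, action, feedback);
        std::printf("step %d: action=%d feedback=%+d\n", step, action, feedback);
    }
    return 0;
}
```

The point is the same as in animal clicker training: the robot never sees the task description, only the clicks, and the desired behavior is shaped incrementally.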

We’ve now added Android code for basic color following

We’ve now added Android code for basic color following; follow the Android Color Following link on the right to access the code and installation instructions.

Advanced Arduino Code

We’ve just added a page with more advanced Arduino code (see the link on the right).  This allows the same ‘spinal column’ to receive either simple or complex messages (e.g., a plain “go forward” versus a command that sets specific motors to specific speeds).  It can also be used to control a number of different body types: skid-steer with two DC motors, RC cars with steering and drive servos, and skid-steer controlled with two servo signals.

This code can be installed on the Arduino in any of our robot bodies (assuming the body matches one of the three expected configurations).  It works with either very simple “brains” (ones that just give forward, left, and right commands) or with complex “brains” that need to specify a speed for each motor.
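To give a sense of what the dual protocol described above can look like, here is a rough Arduino-style C++ sketch. The message format, pin assignments, and helper names are invented for illustration and are not the ones used in the code on the Arduino page.

```cpp
// Rough sketch of a dual-protocol "spinal column" (illustrative only; the
// message format, pin assignments, and helper names below are invented and
// are not the ones used in the code linked on the Arduino page).
#include <Servo.h>

Servo leftServo, rightServo;

// Speeds are -100..100 per side; shown here for the two-servo skid-steer body.
void setMotors(int left, int right) {
  leftServo.writeMicroseconds(1500 + left * 5);   // 1000-2000 us pulse range
  rightServo.writeMicroseconds(1500 + right * 5);
}

void setup() {
  Serial.begin(9600);
  leftServo.attach(9);
  rightServo.attach(10);
}

void loop() {
  if (Serial.available() == 0) return;
  char c = Serial.read();
  switch (c) {
    // Simple messages from simple "brains": a whole behavior in one byte.
    case 'f': setMotors(80, 80);   break;  // forward
    case 'l': setMotors(-60, 60);  break;  // spin left
    case 'r': setMotors(60, -60);  break;  // spin right
    case 's': setMotors(0, 0);     break;  // stop
    // Complex message for brains that need fine-grained control:
    // 'M' followed by two signed speed bytes, one per motor.
    case 'M': {
      while (Serial.available() < 2) {}    // wait for both speed bytes
      int left = (int8_t)Serial.read();
      int right = (int8_t)Serial.read();
      setMotors(left, right);
      break;
    }
  }
}
```

Supporting a different body type in this style is just a matter of swapping in a different `setMotors()` (e.g., steering plus drive servos for the RC car) while leaving the message handling untouched.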

Semi-autonomous tele-operation

Thanks to Dallas’ hard work, we can now have (small) armies of semi-autonomous, tele-operated bots.  The bots do color following and stream their video to a webpage.  The operator drives the bot at the front of the line via a virtual joystick (see image) and …
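For context, here is a minimal C++ sketch of the usual way a virtual joystick position gets mixed into left/right drive speeds for a skid-steer bot. This is a standard “arcade drive” mix shown for illustration; the actual web interface may map the joystick differently.

```cpp
// Sketch of turning a virtual joystick position into left/right drive speeds
// for a skid-steer bot (a standard "arcade drive" mix, shown for illustration;
// the actual web interface may map the joystick differently).
#include <algorithm>
#include <cstdio>

struct DriveCommand { double left, right; };  // -1..1 per track

DriveCommand joystickToDrive(double x, double y) {
    // x: -1 (full left) .. +1 (full right); y: -1 (reverse) .. +1 (forward)
    double left  = std::max(-1.0, std::min(1.0, y + x));
    double right = std::max(-1.0, std::min(1.0, y - x));
    return {left, right};
}

int main() {
    DriveCommand c = joystickToDrive(0.3, 0.8);  // gentle right turn while moving forward
    std::printf("left=%.2f right=%.2f\n", c.left, c.right);
    return 0;
}
```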
