POLUCS was built by Christian Balkenius and Lars Kopp in 1995 and is the second mobile robot constructed within the project A Robot with Autonomous Spatial Learning. The robot is remote-controlled from a stationary computer through a single cable carrying bidirectional RS232 communication and two video signals.

What Can It Do?

POLUCS is our first mobile robot to use the XT-1 vision architecture, an attempt to design a uniform model of a number of visual behaviors such as object tracking, orienting and anticipatory saccades, place and landmark recognition, and visual servoing during locomotion. The following behaviors have been implemented in the robot:

  • Visual Landmark Tracking The robot can autonomously learn visual landmarks that it can later recognize and approach.
  • Anticipatory Saccades When the robot expects to find landmarks outside of the current camera image, it performs anticipatory saccades toward the expected location of the desired landmark.
  • Visual Navigation The robot can follow complex routes as long as they can be divided into sequences of landmark approach behaviors.
  • Visual Orientation When the robot has been moved or finds itself lost, it performs an orienting behavior that tries to locate previously learned landmarks in the environment. Once such a landmark has been found, the navigation behavior can continue.
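The interplay of these behaviors can be sketched as a simple control loop: a route is a sequence of landmarks, each approached in turn after an anticipatory saccade, with the orienting behavior as a fallback when no landmark is visible. This is only an illustrative sketch; all names (`Landmark`, `find_landmark`, `follow_route`) are hypothetical and not the robot's actual software interface.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Landmark:
    name: str
    expected_bearing: float  # degrees; where a saccade should look first

def find_landmark(lm: Landmark) -> Optional[float]:
    """Return the landmark's bearing if visible, else None.

    Stub standing in for the vision system; here it simply assumes
    the landmark is where it was expected.
    """
    return lm.expected_bearing

def follow_route(route: List[Landmark]) -> List[str]:
    """Follow a route expressed as a sequence of landmark approaches."""
    log = []
    for lm in route:
        bearing = find_landmark(lm)
        if bearing is None:
            # Orienting behavior: scan for previously learned landmarks
            # before navigation can resume.
            log.append(f"orienting: searching for {lm.name}")
            continue
        # Anticipatory saccade toward the expected location, then approach.
        log.append(f"saccade to {bearing:.0f} deg, approaching {lm.name}")
    return log

route = [Landmark("door", 10.0), Landmark("pillar", -35.0)]
for step in follow_route(route):
    print(step)
```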

An inside view of POLUCS:

POLUCS is fitted with a one-degree-of-freedom movable camera head and a line laser. A second camera on the body detects the line projected by the laser.
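A laser line paired with a separate camera is a classic structured-light arrangement: the line appears displaced in the image depending on how far away the surface it hits is, so distance can be recovered by triangulation. The sketch below shows the idea under assumed geometry (vertical baseline between laser and camera, illustrative focal length and baseline values); it is not the robot's actual calibration.

```python
import math

def distance_from_pixel(pixel_offset: float,
                        focal_length_px: float = 500.0,
                        baseline_m: float = 0.10) -> float:
    """Distance to the surface hit by the laser line, by triangulation.

    pixel_offset: vertical displacement (in pixels) of the detected
    laser line from the image row where it would appear at infinite
    distance. Parameters are illustrative, not calibrated values.
    """
    if pixel_offset <= 0:
        return math.inf  # no measurable displacement: surface out of range
    # Similar triangles: pixel_offset / focal_length = baseline / distance
    return baseline_m * focal_length_px / pixel_offset

# A 50-pixel offset with these assumed parameters gives
# 0.10 * 500 / 50 = 1.0 m.
print(distance_from_pixel(50.0))
```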