Underwater Vision

AVT-Sponsored Team from Cornell University Wins 2010 RoboSub Challenge with Guppy-Equipped Autonomous Underwater Vehicle

Now in its 13th edition, the RoboSub challenge, co-sponsored by the AUVSI Foundation and the U.S. Office of Naval Research (ONR), aims to advance the development of Autonomous Underwater Vehicles (AUVs) by challenging a new generation of engineers to perform realistic missions in an underwater environment.

Each vehicle is required to navigate a course and autonomously complete a sequence of pre-determined tasks while remaining fully submerged. Vehicles are expected to pass through a validation gate, follow colored paths, locate buoys, avoid obstacles, fire torpedoes, drop markers into the appropriate bins and track an acoustic pinger before surfacing and then releasing a recovery object.

This year’s winner, Tachyon, an AVT-sponsored Autonomous Underwater Vehicle, is the product of one year of design and manufacturing by a team of undergraduate students from Cornell University.

Tachyon: Autonomous Underwater Vehicle

Tachyon is a 44-inch-long AUV that travels at a maximum speed of 1.5 knots and can operate at depths of up to 100 ft for a maximum of 6 hours. Based on its predecessor, Nova, Tachyon benefits from several innovations, such as a smaller and lighter frame and an improved plug-and-play vision system based on two Guppy FireWire cameras from Allied Vision Technologies.

Tachyon’s mechanical infrastructure consists of a central vehicle frame that supports all the pressure vessels, the actuators for task completion and the sensor mountings. The upper hull pressure vessel contains and protects the vehicle’s power and serial infrastructure as well as its computer and actuator controls. Additional external pressure vessels protect other system components, such as the battery pods and the Guppy camera enclosures.

Tachyon is powered by two rechargeable 6-cell lithium-polymer batteries and is propelled by six off-the-shelf thrusters that control the vehicle in heave, surge, sway, yaw and pitch.
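As a rough illustration of how six thrusters can drive five degrees of freedom, the minimal Python sketch below maps a desired effort vector to thruster commands through a least-squares pseudoinverse. The mixing matrix, function name and values are hypothetical placeholders; Tachyon's actual thruster geometry and control code are not described here.

    import numpy as np

    # Hypothetical allocation matrix: rows are the five controlled degrees
    # of freedom, columns are the six thrusters. The values below are
    # illustrative only, not Tachyon's real thruster layout.
    A = np.array([
        #  T1   T2   T3   T4   T5   T6
        [ 1.0, 1.0, 0.0, 0.0, 0.0, 0.0],  # heave: paired vertical thrusters
        [ 0.0, 0.0, 1.0, 1.0, 0.0, 0.0],  # surge: paired aft thrusters
        [ 0.0, 0.0, 0.0, 0.0, 1.0, 1.0],  # sway: paired lateral thrusters
        [ 0.0, 0.0, 0.5, -0.5, 0.0, 0.0], # yaw: differential aft thrust
        [ 0.5, -0.5, 0.0, 0.0, 0.0, 0.0], # pitch: differential vertical thrust
    ])

    def allocate(effort):
        """Least-squares mapping from a 5-DOF effort vector
        [heave, surge, sway, yaw, pitch] to six thruster commands."""
        return np.linalg.pinv(A) @ effort

    # Example: a pure heave command splits evenly across both vertical thrusters.
    print(allocate(np.array([1.0, 0.0, 0.0, 0.0, 0.0])))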

The system uses several sensors to navigate autonomously around the course and perform acoustic and visual tasks. These include a passive acoustic array that locates underwater pingers, a depth sensor, an orientation sensor (an OceanServer compass providing heading, pitch and roll data) and two color Guppy FireWire cameras: a Guppy F-080 for the vehicle’s forward vision and a Guppy F-046 for downward vision.

Measuring only 48.2 x 30 x 30 mm (including connectors), the Guppy cameras from Allied Vision Technologies are among the smallest IEEE 1394a machine vision cameras on the market.

Machine vision recognition

The Guppy F-080 is an XGA-resolution industrial camera built around the sensitive Sony ICX204 CCD sensor. Mounted at the front of the vehicle, it runs at 12 frames per second and captures image data that is processed immediately to detect forward-facing elements such as buoys, hedges or windows. The WVGA-resolution Guppy F-046 detects downward-facing elements such as pipes, bins and recovery objects. The Guppy F-046 also supports the marker-dropping task: the two markers are mounted on either side of the camera so that they drop exactly where the camera is looking. Both cameras are fitted with Fujinon lenses (a varifocal lens for the forward-looking Guppy F-080 and a fixed lens for the Guppy F-046). The cameras' automatic settings, including white balance, exposure and gain, are used to maintain image quality in low underwater light conditions.
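The team's capture code is written in C++ against libdc1394, but as a rough, hypothetical illustration of enabling a camera's automatic features, the Python/OpenCV sketch below toggles auto exposure and auto white balance on a generic capture device. The device index and property values are backend-dependent assumptions, not AVT's FireWire API.

    import cv2

    # Open the first available camera; the device index is an assumption.
    cap = cv2.VideoCapture(0)

    # Request automatic exposure and white balance. The exact values are
    # driver-dependent (e.g. 3 means "auto" on many V4L2 backends), so these
    # calls are illustrative rather than a drop-in for libdc1394 feature control.
    cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 3)
    cap.set(cv2.CAP_PROP_AUTO_WB, 1)

    ok, frame = cap.read()
    if ok:
        print("captured frame:", frame.shape)
    cap.release()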

Tachyon’s vision system includes various algorithms capable of a number of object recognition tasks. Written in C++, it uses the open-source OpenCV and libdc1394 libraries as a development base. An integrated vision daemon uses multithreading to efficiently capture images from cameras, video files and image directories. It also provides a modular framework for multithreaded vision processing algorithms, allowing the mission logic to enable and disable individual algorithms whenever necessary.
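A minimal sketch of that kind of modular, multithreaded daemon is shown below in Python (the actual system is C++). All class and method names here are hypothetical; the point is the structure: each algorithm runs on its own thread and can be switched on or off by the mission logic.

    import threading
    import queue

    class VisionModule(threading.Thread):
        """One vision algorithm on its own thread. Hypothetical names;
        the structure mirrors the daemon described in the article."""
        def __init__(self, name, process):
            super().__init__(daemon=True)
            self.name = name
            self.process = process            # callable: frame -> result or None
            self.enabled = threading.Event()  # mission logic toggles this
            self.frames = queue.Queue(maxsize=1)

        def submit(self, frame):
            """Offer a frame; drop it if the module is disabled or busy."""
            if self.enabled.is_set():
                try:
                    self.frames.put_nowait(frame)
                except queue.Full:
                    pass                      # skip frames rather than lag

        def run(self):
            while True:
                result = self.process(self.frames.get())
                if result is not None:
                    print(self.name, "->", result)

    # The mission logic enables only the algorithms it currently needs:
    buoy = VisionModule("buoy", lambda frame: None)
    buoy.start()
    buoy.enabled.set()     # buoy task active
    buoy.submit(object())  # a captured frame would go here
    buoy.enabled.clear()   # buoy task complete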

The majority of the machine vision modules rely on color-based segmentation. Each image is converted from RGB to HSV and CIELUV and split into color channels, which are then segmented against pre-determined thresholds. The segmented channels are combined to form a binary image, in which contours are detected and run through a set of probabilistic filters and moment analyses. Each vision module reports the location, orientation, size and probability of a specific mission element. In addition, the machine vision system features several tuning modules that allow the user to highlight an object in an image and calibrate the vision tuning parameters in real time.
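The following Python/OpenCV sketch condenses that pipeline. The function name, threshold values and the simple contour-area filter standing in for the probabilistic filters are all illustrative assumptions, not the team's actual code.

    import cv2
    import numpy as np

    def segment(frame_bgr, hsv_lo, hsv_hi, luv_lo, luv_hi):
        """Convert to HSV and CIELUV, threshold each, combine into one
        binary mask, then extract contours and moments."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        luv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2Luv)
        mask = cv2.bitwise_and(cv2.inRange(hsv, hsv_lo, hsv_hi),
                               cv2.inRange(luv, luv_lo, luv_hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        detections = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] < 100:          # crude stand-in for probabilistic filtering
                continue
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            detections.append({"location": (cx, cy), "area": m["m00"]})
        return detections

    # Example thresholds for an orange buoy (placeholder values only):
    frame = np.zeros((480, 640, 3), np.uint8)
    print(segment(frame,
                  np.array([5, 100, 100]), np.array([20, 255, 255]),
                  np.array([0, 100, 0]),   np.array([255, 200, 255])))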

Tachyon is guided through the course by its integrated mission planner, a multithreaded Python program built around two subsystems: a planner and a set of tasks. This structure allows complex, dynamic missions to be written quickly. The planner instantiates each element on the task list and allows tasks to add sub-tasks of their own. Always running in the background, it culls completed tasks and notifies planned tasks and sub-tasks that it is their turn to run. The planner also ensures that exclusive tasks (such as movement primitives) only run one at a time.
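A bare-bones skeleton of such a planner might look like the sketch below, with hypothetical class names and a toy task. It captures the culling loop, sub-task adoption and the exclusivity rule described above, not Cornell's actual implementation.

    import threading

    class Task:
        """Base class for mission elements (hypothetical names throughout)."""
        exclusive = False            # movement primitives would set this True

        def __init__(self):
            self.done = False
            self.subtasks = []       # tasks may add sub-tasks while running

        def step(self):
            raise NotImplementedError

    class Planner(threading.Thread):
        """Background loop: adopt sub-tasks, cull finished tasks, step the rest."""
        def __init__(self, tasks):
            super().__init__(daemon=True)
            self.tasks = list(tasks)
            self.exclusive_lock = threading.Lock()

        def run(self):
            while self.tasks:
                live = []
                for t in self.tasks:
                    live.extend(t.subtasks)   # adopt newly added sub-tasks
                    t.subtasks = []
                    if not t.done:            # cull completed tasks
                        live.append(t)
                self.tasks = live
                for t in self.tasks:
                    if t.exclusive:
                        with self.exclusive_lock:  # one movement at a time
                            t.step()
                    else:
                        t.step()

    class Wait(Task):
        """Toy task that completes after n steps."""
        def __init__(self, n):
            super().__init__()
            self.n = n

        def step(self):
            self.n -= 1
            self.done = self.n <= 0

    planner = Planner([Wait(3)])
    planner.start()
    planner.join()                   # returns once all tasks are culled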

The key to success

Tachyon’s success in the 2010 competition is the result of thousands of hours of testing. Parts were tested individually before the vehicle was assembled, weekly pool tests ran over the course of the year, and testing ramped up to six days a week as the competition approached. The team at Cornell is currently working on a new vehicle that will feature an enhanced stereo vision system.