Brian Yamauchi

 

Principal Roboticist

Boston Dynamics

200 Smith St

Waltham, MA 02451





Boston Dynamics

I'm a Principal Roboticist at Boston Dynamics, working to increase the autonomous capabilities of our quadruped robot, Spot.  I'm currently developing software to enable Spot to navigate more intelligently and robustly in dynamic environments and industrial facilities.


Argo AI

I previously worked on self-driving cars for Argo AI in Palo Alto, as a Senior Staff Software Engineer on the Autonomy Integration Team.


(Image source: Argo AI)


Anki

I was a Technical Director for robot navigation at Anki.  I provided technical direction in navigation for Anki's planned line of future home robot products, and I designed and implemented the navigation system for a prototype home robot.  This system combined open-source and custom software for frontier-based exploration, SLAM, path planning, and obstacle avoidance in dynamic environments.


iRobot Terra Robot Mower

I worked at iRobot for 19 years on a wide range of robotics programs, from advanced military research to mass-market consumer robots.

Most recently, I spent several years as a Principal Robotics Engineer working on iRobot's recently unveiled Terra robot lawnmower.

I developed the initial proof-of-concept prototype for the wireless robot confinement system, and also worked on the navigation, path following, obstacle avoidance, and lawn training software.



Robots Podcast Interview

I was interviewed for the July 2nd, 2010 episode of the Robots Podcast, where I discussed robotics research at iRobot and the future of robotics.  You can listen to this interview on the Robots Podcast website or download the episode via iTunes.


The Dynamo Project


I was the Principal Investigator (PI) for the Dynamo Project, a project funded by the DARPA Defense Sciences Office to develop fast learning techniques for mobile robots driving at high speeds and over rough terrain.  Dynamo is part of the DARPA Maximum Mobility and Manipulation (M3) Program.  For Dynamo, we developed a new algorithm, Dynamic Threshold Learning (DTL), for rapid learning of robot control behaviors.



The Stingray Project

I was the PI for Stingray, a Phase II SBIR project funded by the US Army Tank-Automotive Research, Development, and Engineering Center (TARDEC).  The goal of this research project was to develop techniques for high-speed teleoperation of small unmanned ground vehicles (UGVs).  We worked with Chatten Associates to integrate their Head-Aimed Remote Viewer (HARV) with iRobot UGV platforms.  The HARV combines a head-tracking system with a head-mounted display and a remote gimbaled camera.  The camera tracks every motion of the operator's head, providing the illusion of being in the vehicle and greatly increasing situational awareness.  We also developed semi-autonomous behaviors to help the operator control these UGVs at high speeds.
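In concept, head-aimed viewing reduces to mapping the tracked head orientation onto camera-gimbal commands, clamped to the gimbal's mechanical range.  The sketch below illustrates that mapping only; the limits and function names are hypothetical, not Chatten's actual interface.

```python
def head_to_gimbal(head_yaw_deg, head_pitch_deg,
                   yaw_limit=170.0, pitch_limit=60.0):
    """Map operator head orientation (degrees) to camera-gimbal
    commands, clamped to assumed mechanical limits so the camera
    follows the operator's head as far as the gimbal can travel."""
    clamp = lambda v, lim: max(-lim, min(lim, v))
    return clamp(head_yaw_deg, yaw_limit), clamp(head_pitch_deg, pitch_limit)
```

A real system would also filter tracker noise and compensate for command latency, which matters at high driving speeds.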

This video shows our initial experiments using the HARV to drive the prototype Warrior and a high-speed surrogate UGV (a modified, gas-powered R/C car used for testing) through a slalom course.


The Daredevil Project

[Images: the Daredevil PackBot's sensor view in clear air (left) and in dense fog (right)]

I was the PI for the Daredevil Project, a Phase II SBIR project funded by TARDEC.  For Daredevil, we developed perception techniques that allow robots to see through adverse weather (fog, rain, snow) and sparse foliage.  Robots often use LIDAR (laser ranging) or vision to detect obstacles, but these sensors have difficulty in adverse weather and can be completely blocked by fog, smoke, or dust.  For Daredevil, we used ultra-wideband (UWB) radar in combination with LIDAR and vision to allow the Daredevil PackBot to avoid obstacles in all weather conditions.

These images show how UWB radar can see through dense fog that blinds LIDAR and vision.  On the left, the Daredevil PackBot is in clear air, and both radar (green points) and LIDAR (red points) can see the obstacles in the room.  In these conditions, LIDAR provides more precise range data with better angular resolution.  On the right, the Daredevil PackBot is immersed in dense fog in the same room.  The LIDAR is unable to penetrate the fog beyond a depth of about one meter, but the UWB radar is completely unaffected.  If the robot were only equipped with LIDAR and vision, it would not be able to move safely in a fog-filled environment, but using UWB radar, the Daredevil PackBot is able to successfully avoid obstacles even in dense fog.
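One simple way to picture this kind of fusion is per-bearing sensor arbitration: radar returns provide an all-weather baseline, and LIDAR returns refine whichever angular bins LIDAR can actually see.  The sketch below is only an illustration under assumed data formats (lists of bearing/range returns, with spurious LIDAR hits off the fog already filtered out), not the actual Daredevil software; my ICRA 2010 and SPIE 2010 papers describe the real approach.

```python
def fuse_scans(lidar, radar, bin_deg=5.0):
    """Combine LIDAR and UWB radar obstacle returns, each a list of
    (bearing_deg, range_m) tuples, into a single obstacle list.

    Radar returns form the baseline, since they penetrate fog, smoke,
    and dust; wherever a LIDAR return falls in the same angular bin,
    it replaces the radar return, since LIDAR offers finer angular
    resolution and more precise ranges in clear air.
    """
    fused = {}
    for bearing, rng in radar:                  # all-weather baseline
        fused[int(bearing // bin_deg)] = (bearing, rng)
    for bearing, rng in lidar:                  # refine bins LIDAR can see
        fused[int(bearing // bin_deg)] = (bearing, rng)
    return sorted(fused.values())
```

In fog, the LIDAR list simply goes empty beyond the fog's penetration depth, and the radar baseline keeps the obstacle map populated.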

 

For more details, see my papers for ICRA 2010 and SPIE Unmanned Systems 2010.



The Wayfarer Project

I was also the PI for the TARDEC-funded Wayfarer Project, a two-year, $1.3 million effort to develop autonomous urban navigation capabilities for man-portable mobile robots, such as the iRobot PackBot.  We equipped two Wayfarer PackBot prototypes with stereo vision and LIDAR to perform autonomous reconnaissance missions in urban terrain, including GPS-denied areas.  The new ruggedized Wayfarer navigation payload is shown at left, and a 3D map generated by the Instant Scene Modeler (iSM) during perimeter reconnaissance is shown at right.  (iSM was developed by Stephen Se at MDA Corporation and was integrated with Wayfarer at iRobot Corporation.)


Wayfarer Videos


Previous Research

View the robots I've developed in my Robot Gallery.

I've previously conducted research and development in mobile robotics at several organizations.

While at the Naval Research Laboratory, I developed frontier-based exploration, a technique that allows mobile robots to explore and map unknown environments.
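The core of frontier-based exploration is identifying frontiers: boundaries between open space the robot has already mapped and territory that is still unknown.  The robot repeatedly navigates to the nearest accessible frontier, senses, updates its map, and repeats until no frontiers remain.  The sketch below shows frontier-cell detection on a simple occupancy grid; it illustrates the idea, not the original implementation.

```python
# Occupancy-grid cell states (a minimal model, not the original encoding):
UNKNOWN, OPEN, OCCUPIED = -1, 0, 1

def find_frontier_cells(grid):
    """Return all open cells adjacent to at least one unknown cell.
    Groups of such cells form frontiers: the boundaries between
    mapped free space and unexplored territory."""
    rows, cols = len(grid), len(grid[0])
    frontier = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != OPEN:
                continue
            # 4-connected neighbors; any unknown neighbor makes this a frontier cell
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                    frontier.append((r, c))
                    break
    return frontier
```

The full method then clusters adjacent frontier cells into frontier regions and sends the robot toward the nearest accessible one, so exploration is driven entirely by where the map is still incomplete.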


Selected Publications

All-Weather Perception for Man-Portable Robots Using Ultra-Wideband Radar
Brian Yamauchi, Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, AK, May 2010


Fusing Ultra-Wideband Radar and LIDAR for Small UGV Navigation in All-Weather Conditions
Brian Yamauchi, Proceedings of SPIE Vol. 7692 (DS117): Unmanned Systems Technology XII, Orlando, FL, April 2010


Autonomous Urban Reconnaissance Using Man-Portable UGVs
Brian Yamauchi, Proceedings of SPIE Vol. 6230: Unmanned Systems Technology VIII, Orlando, FL, April 2006


Wayfarer: An Autonomous Navigation Payload for the PackBot
Brian Yamauchi, Proceedings of AUVSI Unmanned Vehicles North America 2005, Baltimore, MD, June 2005

The Wayfarer Modular Navigation Payload for Intelligent Robot Infrastructure
Brian Yamauchi, Proceedings of SPIE Vol. 5804: Unmanned Ground Vehicle Technology VII, Orlando, FL, March 2005

Griffon: A Man-Portable Hybrid UGV/UAV
Brian Yamauchi and Pavlo Rudakevych, Industrial Robot, Vol. 31, No. 5, pp. 443-450, 2004


PackBot: A Versatile Platform for Military Robotics
Brian Yamauchi, Proceedings of SPIE Vol. 5422: Unmanned Ground Vehicle Technology VI, Orlando, FL, April 2004

Integrating Exploration and Localization for Mobile Robots
Brian Yamauchi, Alan Schultz, and William Adams, Adaptive Behavior, Vol. 7, No. 2, Spring 2000

Mobile Robot Exploration and Map-Building with Continuous Localization
Brian Yamauchi, Alan Schultz, and William Adams, Proceedings of the 1998 IEEE International Conference on Robotics and Automation, Leuven, Belgium, May 1998, pp. 3715-3720

Frontier-Based Exploration Using Multiple Robots
Brian Yamauchi, Proceedings of the Second International Conference on Autonomous Agents (Agents '98), Minneapolis, MN, May 1998, pp. 47-53

A Frontier-Based Approach for Autonomous Exploration
Brian Yamauchi, Proceedings of the 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Monterey, CA, July 1997, pp. 146-151

Place Recognition in Dynamic Environments
Brian Yamauchi and Pat Langley, Journal of Robotic Systems, Special Issue on Mobile Robots, Vol. 14, No. 2, February 1997, pp. 107-120

Spatial Learning for Navigation in Dynamic Environments
Brian Yamauchi and Randall Beer, IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Special Issue on Learning Autonomous Robots, Vol. 26, No. 3, June 1996, pp. 496-505


iRobot marks and media are used with permission.  All rights reserved.  Most content and media hosted on robotfrontier.com are available for public, private, and media use, on the condition that the content or media is used with appropriate attribution.  Please contact me for permission to use any content or media from robotfrontier.com in your own work.


RobotFrontier.com