Frontier-Based Exploration

The central question in exploration:

Given what you know about the world, where should you move to gain as much new information as possible?

The key idea behind frontier-based exploration:

To gain the most new information, move to the boundary between open space and uncharted territory.

Most mobile robot applications require the ability to navigate. While many robots can navigate using maps, and some can map what they can see, few can explore autonomously beyond their immediate surroundings. Usually, a human must map the territory in advance, providing either the exact locations of obstacles (for metric maps) or a graph representing the connectivity between open regions (for topological maps). As a result, most navigating robots become useless when placed in unknown environments.

Exploration has the potential to free robots from this limitation. We define exploration to be the act of moving through an unknown environment, building a map that can be used for subsequent navigation. A good exploration strategy is one that generates a complete or nearly complete map in a reasonable amount of time.

Our goal is to develop exploration strategies for the complex environments typically found in real office buildings. Our approach is based on the detection of frontiers: regions on the boundary between open space and unexplored space.

From any frontier, the robot can see into unexplored space and add the new observations to its map. From each new vantage point, the robot may see new frontiers lying at the edge of its perception. By exploring each frontier, or determining that frontier to be inaccessible, the robot can build a map of every reachable location in the environment.
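The frontier idea above can be sketched on an occupancy grid: a frontier cell is an open cell with at least one unknown neighbor. This is a minimal illustration, not the system's actual detection code; the cell labels (`UNKNOWN`, `OPEN`, `OCCUPIED`) are assumed values chosen for the example.

```python
import numpy as np

# Hypothetical cell labels for this sketch
UNKNOWN, OPEN, OCCUPIED = -1, 0, 1

def find_frontier_cells(grid):
    """Return (row, col) coordinates of open cells adjacent to unknown cells."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != OPEN:
                continue
            # Check 4-connected neighbors for unknown territory
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers
```

In a full system, adjacent frontier cells would be grouped into frontier regions, and the robot would navigate toward an accessible one; this sketch shows only the detection step.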

Frontier-based exploration has been implemented on a Nomad 200 mobile robot equipped with sonar, infrared, and laser range sensors. This system has been used to explore a real-world office environment containing chairs, desks, bookshelves, cabinets, a large conference table, a sofa, and other obstacles.

We use evidence grids as our spatial representation. In the evidence grid learned for this environment, large black dots represent regions known to be occupied, white space represents space known to be unoccupied, and small black dots represent unknown territory.
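An evidence grid accumulates, per cell, evidence that the cell is occupied. A minimal sketch using a log-odds formulation is shown below; the class name and update constants are illustrative assumptions, not the representation's actual implementation.

```python
import math

class EvidenceGrid:
    """Minimal evidence grid: each cell stores the log-odds of occupancy.

    Log-odds of 0.0 means unknown (probability 0.5); positive values
    indicate likely occupied, negative values likely open.
    """
    def __init__(self, rows, cols):
        self.logodds = [[0.0] * cols for _ in range(rows)]

    def update(self, r, c, p_occupied):
        # Bayesian update: add the log-odds of this sensor reading
        self.logodds[r][c] += math.log(p_occupied / (1.0 - p_occupied))

    def probability(self, r, c):
        # Convert accumulated log-odds back to an occupancy probability
        return 1.0 / (1.0 + math.exp(-self.logodds[r][c]))
```

Cells never observed stay at probability 0.5, which is what lets frontier detection distinguish unknown territory from space known to be open or occupied.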

We fuse information from both sonar and laser range sensors to eliminate specular reflections in the evidence grid, using a technique we call laser-limited sonar.
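One plausible reading of this fusion rule: a sonar specular reflection reports a range far beyond the true obstacle, while the laser does not, so each sonar reading can be capped at the laser reading taken in the same direction. The sketch below illustrates that idea; the function name and per-direction pairing are assumptions for the example, not the system's exact implementation.

```python
def laser_limited_sonar(sonar_ranges, laser_ranges):
    """Cap each sonar range reading at the laser reading in the same direction.

    A sonar reading longer than the corresponding laser reading is likely a
    specular reflection, so it is truncated to the laser range.
    """
    return [min(s, l) for s, l in zip(sonar_ranges, laser_ranges)]
```

For example, a sonar reading of 5.0 m paired with a laser reading of 2.0 m would be truncated to 2.0 m, while a sonar reading shorter than the laser reading is kept as-is.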
