Land Navigation

Surface Navigation for Robots

Tyler C. Folsom

From Raw GPS

GPS coordinates are spherical: Latitude, Longitude and distance from the center of the earth (or altitude above mean sea level). Angle measurements are given in either decimal degrees or degrees, minutes and seconds. This paper assumes use of decimal degrees.

Local maps are in two dimensions, since the Earth is locally flat. We will assume that the robot operates in a small enough area that the curvature of the Earth is not significant. We will pick an origin for a local map, and measure distances in meters. The positive X-axis measures distance east from the origin and the Y-axis gives distance north. Compass direction uses 0 degrees as north, 90 degrees as east, 180 as south, and 270 as west. Compass direction will be referred to as “bearing”. It differs from the “angle” of a standard coordinate system in which 0 is east and 90 is north.

angle = 90 – bearing
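In code, the conversion needs a modulo to keep the result in [0, 360); a minimal sketch:

```python
def bearing_to_angle(bearing_deg):
    """Convert a compass bearing (0 = north, clockwise) to a standard
    math angle (0 = east, counterclockwise), normalized to [0, 360)."""
    return (90.0 - bearing_deg) % 360.0
```

The same function converts in the other direction, since the relation is symmetric.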

There are sophisticated algorithms to find the great circle distance between two (latitude, longitude) readings. With the flat earth assumption, conversion between the two coordinate systems is simple.

x = EARTH_RADIUS * (Longitude − LONGITUDE_ORIGIN) * π/180 * cos(LATITUDE_ORIGIN * π/180)

y = EARTH_RADIUS * (Latitude − LATITUDE_ORIGIN) * π/180


Latitude = (y * 180) / (π * EARTH_RADIUS) + LATITUDE_ORIGIN

Longitude = (x * 180) / (π * EARTH_RADIUS * cos(LATITUDE_ORIGIN * π/180)) + LONGITUDE_ORIGIN

You should set the origin to be near the center of the area in which you intend to operate.

Example: using Seattle Center House as the origin

LATITUDE_ORIGIN = 47.621300

LONGITUDE_ORIGIN = -122.350900

EARTH_RADIUS = 6371000

A reading of (47.620941, -122.351206) is 22.934 meters west of Center House and 39.918 meters south. A point 151.77 meters east and 66.16 meters north is at (47.621895, -122.348875).
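The conversion formulas and the example can be checked with a short Python sketch. The origin coordinates below are inferred from the example numbers (they are consistent with both points above); substitute the origin of your own map.

```python
import math

EARTH_RADIUS = 6371000.0        # meters
LATITUDE_ORIGIN = 47.621300     # assumed origin near Seattle Center House,
LONGITUDE_ORIGIN = -122.350900  #   reconstructed from the worked example

def to_local(lat, lon):
    """Convert a (latitude, longitude) fix to meters east (x) and north (y) of the origin."""
    x = EARTH_RADIUS * math.radians(lon - LONGITUDE_ORIGIN) * math.cos(math.radians(LATITUDE_ORIGIN))
    y = EARTH_RADIUS * math.radians(lat - LATITUDE_ORIGIN)
    return x, y

def to_geodetic(x, y):
    """Convert local meters east/north back to (latitude, longitude)."""
    lat = math.degrees(y / EARTH_RADIUS) + LATITUDE_ORIGIN
    lon = math.degrees(x / (EARTH_RADIUS * math.cos(math.radians(LATITUDE_ORIGIN)))) + LONGITUDE_ORIGIN
    return lat, lon
```

Running `to_local(47.620941, -122.351206)` reproduces the point about 22.9 m west and 39.9 m south of the origin.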

Note that measuring distance to a centimeter requires about 7 places beyond the decimal point in latitude and longitude. Thus latitudes and longitudes should be stored as 64 bit doubles. If your processor (such as the Arduino) does not have this capability, an alternative is to record distances in mm as a 32 bit long integer, which will provide a range of 2147 km from the origin. If you travel that far, the flat earth assumption is no longer valid, and you are on a new map with a new origin.

GPS is not sufficiently accurate for robot navigation. It typically has a standard deviation of 3-5 meters, which means that readings can drift by 15 meters or more. Even if you achieve a 1 meter standard deviation, you do not have enough accuracy to keep a vehicle in its lane. The GPGGA sentence includes latitude, longitude, time and Horizontal Dilution of Precision (HDOP). The standard deviation of the GPS reading can be estimated as

sigma = sqrt((3.04 * HDOP)² + 3.57²)

GPS can be further degraded by multipath reflections from buildings. It may become unavailable indoors, in tunnels, or if satellites are not properly positioned.
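The HDOP formula above translates directly into code (a sketch):

```python
import math

def gps_sigma(hdop):
    """Estimated standard deviation (meters) of a GPS fix, from its HDOP."""
    return math.sqrt((3.04 * hdop) ** 2 + 3.57 ** 2)
```

For example, an HDOP of 1.0 gives a standard deviation of about 4.7 m, which is consistent with the 3-5 meter figure quoted above.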

Latitude and longitude coordinates are available from Google Earth, Google Maps, and OpenStreetMap. OpenStreetMap lets you download the positions of streets and buildings in latitude and longitude coordinates precise to seven decimal places. Thus you can construct a map of your robot’s expected environment.


Refining GPS with Kalman Filter

A Kalman filter (KF) can use all available information to provide the optimal prediction of position. We will start with a simplified version based only on the GPS position. Please refer to the references for the theory.

Start with a column vector of the state:

X = [x y dx/dt dy/dt]ᵀ

We could use [x y z dx/dt dy/dt dz/dt d²x/dt² d²y/dt² d²z/dt²]ᵀ

The state might also include the yaw, pitch and roll of the vehicle. We will start with the simpler 4 by 1 matrix. All data are in meters and seconds. Suppose that we had determined our state at a previous time t0 and we want to estimate the state at time t1 = t0 + Δt. We estimate the new state as

Xe_t1 = F * X_t0

where F is the state transition matrix

┌ 1 0 Δt 0 ┐

F = │ 0 1 0 Δt │

│ 0 0 1 0 │

└ 0 0 0 1 ┘

Suppose that we also knew the uncertainties of the estimated positions and velocities at time t0. For simplicity, assume that all the measurements have the same standard deviation σ and that they are uncorrelated. The covariance matrix is P

┌ σ² 0 0 0 ┐

P = │ 0 σ² 0 0 │

│ 0 0 σ² 0 │

└ 0 0 0 σ² ┘

Since time has elapsed, uncertainty increases. We estimate the new uncertainty by

Pe_t1 = F * P_t0 * Fᵀ

Thus the extrapolated covariance matrix is

┌ σ² 0 σ²Δt 0 ┐

Pe_t1 = │ 0 σ² 0 σ²Δt │

│ σ²Δt 0 σ²(1+Δt²) 0 │

└ 0 σ²Δt 0 σ²(1+Δt²) ┘

Now at time t1 we get a new GPS fix and have a new measured position

Z_t1 = [e n]ᵀ

where e is position east in meters and n is position north in meters. We define a matrix H which reduces our estimated state to the parts that we can measure.

┌ 1 0 0 0 ┐

H = └ 0 1 0 0 ┘

The difference between our extrapolated position and measured position is Y_t1 = Z_t1 − H * Xe_t1. Since H * Xe_t1 = [x+Δt*dx/dt  y+Δt*dy/dt]ᵀ,

┌ e − x − Δt*dx/dt ┐

Y = └ n − y − Δt*dy/dt ┘

The matrix S reduces the uncertainties to the items that we can measure: S_t1 = H Pe_t1 Hᵀ + R

The diagonal matrix R is the measurement noise, which is formed from the HDOP of the GPS. The GPS variance is r. The covariance matrix P describes how much confidence we have in our estimate. The initial estimate of σ is that of the GPS.

┌ r 0 ┐

R = └ 0 r ┘


┌ σ²+r 0 ┐

S = └ 0 σ²+r ┘


┌ 1/(σ²+r) 0 ┐

S⁻¹ = └ 0 1/(σ²+r) ┘


K_t1 = Pe_t1 Hᵀ S_t1⁻¹


┌ 1 0 ┐

K = σ²/(σ²+r) │ 0 1 │

│ Δt 0 │

└ 0 Δt ┘


The Kalman matrix K lets us establish our best estimate of state and uncertainty at time t1.

X_t1 = Xe_t1 + K_t1 Y_t1

P_t1 = (I − K_t1 H) Pe_t1

If we stay in one place, the filter is averaging the GPS signals and reducing the uncertainty of position. If a GPS reading is noisy compared to the position averaged from several previous readings, it is lightly weighted. If our previous readings were based on signals with high HDOP and the new reading is much better, the new reading will be heavily weighted.


┌ e * σ²/(σ²+r) + (x + Δt*dx/dt) * r/(σ²+r) ┐

X = │ n * σ²/(σ²+r) + (y + Δt*dy/dt) * r/(σ²+r) │

│ dx/dt + Δt * (e − x − Δt*dx/dt) * σ²/(σ²+r) │

└ dy/dt + Δt * (n − y − Δt*dy/dt) * σ²/(σ²+r) ┘


┌ r 0 rΔt 0 ┐

P = σ²/(σ²+r) │ 0 r 0 rΔt │

│ rΔt 0 σ²+r(1+Δt²) 0 │

└ 0 rΔt 0 σ²+r(1+Δt²) ┘

The Kalman filter algorithm without matrices

A vehicle starts at x = 2 meters from a brick wall, moving away from it at v = 0.1 m/sec. The uncertainty in x is σ² = 0.01 m². The only sensor is a backward-pointing sonar range finder.

X = [x v]ᵀ

After 10 seconds = t1, our estimated position is 2+10*0.1 = 3 m = xest(t1). Extrapolating position based on velocity is called “dead reckoning”.

As time goes by, we become less certain about where we really are. The uncertainty of dead reckoning increases with time as σ²(1+Δt*v), so after 10 seconds it is σ²(1+10*0.1) = 0.02.

At t1 = 10 sec we pulse the sonar and it reports that we are 3.2 m from the wall. The sonar variance is r = 0.01. We weight the two estimates of position by their reliability. The Kalman gain is K = σ²/(σ²+r) = 0.02/(0.02 + 0.01) = 0.667.

The best estimate of position is 0.667 * 3.2 + 0.333 * 3 = 3.134 m.

We had an uncertainty of r = 0.01 from the sonar, and an uncertainty in the dead reckoning position of 0.02. When we combine them, the uncertainty goes down to 0.01 * 0.667 = 0.0067. Thus as we combine measurements, the position uncertainty goes down.

The vehicle keeps moving at the same speed. 5 seconds after the first sonar measurement we take a second reading. This time we are starting from x = 3.134 m, v = (3.134 − 2) m / 10 sec = 0.1134 m/sec, σ² = 0.0067*(1+5*0.1134) = 0.0105. We update velocity, since we have no speedometer and no direct way of knowing our real speed. Our estimated position is 3.134 + 5*0.1134 = 3.701 m.

The sonar says that we are 4.1 m from the wall. The Kalman gain is K = σ²/(σ²+r) = 0.0105/(0.0105 + 0.01) = 0.512. It is telling us to put increased faith in the dead reckoning position.

The best estimate of position is 0.512 * 4.1 + 0.488 * 3.701 = 3.905 m.

In the last 5 seconds, we have travelled (3.905 − 3.134) m, so we update our velocity to 0.154 m/sec. The new variance is the product of the sonar variance and the Kalman gain, so it becomes 0.01 * 0.512 = 0.0051. The uncertainty of the sonar has been cut almost in half by combining with our dead reckoning position.

What we have loosely been calling “uncertainty” is the variance. The standard deviation is the square root of variance. A variance of 0.0051 m² is a standard deviation of 0.071 m. The true position should be within one standard deviation 68% of the time. Thus there is 68% probability that the true position is between 3.834 and 3.976 m. The position will almost always be between 3.692 and 4.118 m, provided that our assumption of a normal (a.k.a. Gaussian) error distribution is valid.
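The two sonar steps above can be reproduced in a few lines of Python. This is a sketch of the scalar filter only; the last digit may differ slightly from the text because the text rounds intermediate values.

```python
r = 0.01                      # sonar measurement variance (m²)

def dead_reckon(x, v, var, dt):
    """Extrapolate position and grow the variance as in the text: var*(1 + dt*v)."""
    return x + dt * v, var * (1 + dt * v)

def fuse(x_est, var, z):
    """Blend the dead-reckoned position x_est with a sonar reading z."""
    K = var / (var + r)       # Kalman gain
    return K * z + (1 - K) * x_est, K * r

x, v, var = 2.0, 0.1, 0.01                 # initial state and variance
x_est, var = dead_reckon(x, v, var, 10.0)  # coast for 10 s
x1, var = fuse(x_est, var, 3.2)            # first sonar reading
v = (x1 - 2.0) / 10.0                      # re-estimate velocity from positions
x_est, var = dead_reckon(x1, v, var, 5.0)  # coast another 5 s
x2, var = fuse(x_est, var, 4.1)            # second sonar reading
```

Note that the updated variance K*r equals (1−K) times the predicted variance, so either form can be used.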

Sensor fusion is a difficult problem in embedded systems. The Kalman filter gives a systematic way of combining measurements from different instruments and dead reckoning. It says that we should not blindly trust our instruments. We are better off combining them with the previous system state and known dynamics.

Expanding the filter

The basic KF will average the readings, but it is not as smooth as we want. This is because we take the difference between previous fixes as the velocity, and use the velocity to predict the next position. If we are standing still, we get a pseudo-velocity due to GPS drift, which can make the drift worse. Thus it is common to augment GPS with an Inertial Measurement Unit (IMU), also known as an Inertial Navigation Unit (INU). Set

X = [x y dx/dt dy/dt d²x/dt² d²y/dt²]ᵀ

The IMU will measure acceleration and bearing. Add a Hall effect sensor (cyclometer pick-up) to click on each wheel revolution. Let each wheel click trigger an interrupt to measure the time since the last click and compute speed = wheel circumference / time of revolution. We then have instruments that can measure speed, bearing and acceleration.
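The wheel-click speed computation can be sketched as follows; the circumference value is an assumed example, and on an Arduino this logic would live in the interrupt service routine.

```python
import time

WHEEL_CIRCUMFERENCE = 1.59   # meters; assumed example for a ~0.5 m diameter wheel

last_click = None

def on_wheel_click(now=None):
    """Call on each Hall-sensor click; returns speed in m/s, or None on the first click."""
    global last_click
    now = time.monotonic() if now is None else now
    if last_click is None:
        last_click = now
        return None
    dt = now - last_click        # time for one full wheel revolution
    last_click = now
    return WHEEL_CIRCUMFERENCE / dt
```

The timestamp argument is there so the function can be tested without a real sensor; in normal use it defaults to the system clock.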

┌ 1 0 Δt 0 0 0 ┐

F = │ 0 1 0 Δt 0 0 │

│ 0 0 1 0 Δt 0 │

│ 0 0 0 1 0 Δt │

│ 0 0 0 0 1 0 │

└ 0 0 0 0 0 1 ┘

H is a 6 by 6 identity matrix; Z and Y are 6 by 1; R, S, P, K and I are 6 by 6. A more sophisticated form of state estimation is

Xe_t1 = F * X_t0 + B * U_t1

where U = [0 0 0 0 Δa Δb]ᵀ is the commanded change in acceleration, and B scales the controls into a predicted acceleration in m/sec² in the x and y directions. The acceleration U will include any change of speed or direction that is about to happen.

The predicted acceleration and speed must be restricted by the vehicle’s physical limits. The minimum turn radius is a function of speed.

We can also define a 4 by 6 form of H to use when GPS is not available.
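The 6-state prediction step with the control term can be sketched as below; the function name and the simple diagonal form of B are assumptions for illustration, since B in practice depends on how throttle and steering commands map to acceleration.

```python
import numpy as np

def predict6(X, dt, accel_cmd=(0.0, 0.0)):
    """One prediction step Xe = F X + B U for the 6-state model
    [x y dx/dt dy/dt d2x/dt2 d2y/dt2], with a commanded change of
    acceleration (Δa, Δb) folded into the acceleration states."""
    F = np.eye(6)
    F[0, 2] = F[1, 3] = F[2, 4] = F[3, 5] = dt
    # Assumed B: the control acts directly on the two acceleration states.
    B = np.zeros((6, 6))
    B[4, 4] = B[5, 5] = 1.0
    U = np.array([0, 0, 0, 0, accel_cmd[0], accel_cmd[1]], dtype=float)
    return F @ X + B @ U
```

Before using the result, clamp the predicted speed and acceleration to the vehicle's physical limits, as noted above.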


References

S. Thrun, W. Burgard and D. Fox, Probabilistic Robotics, MIT Press, 2006.

A. Gelb (ed) Applied Optimal Estimation, MIT Press, 1974.

P. S. Maybeck, Stochastic Models, Estimation, and Control, Vol 1, Navtech, 1994.

Open Source Autonomy