How to Map

Orthomosaic Mapping

At a high level, we give a drone a series of waypoints (GPS coordinates) to visit over a region. As the drone flies the route, we use an onboard sensor (camera) to capture data at regular intervals. We time this spacing so that successive frames overlap by roughly 80%. After the flight, software can align the overlapping photos, chop off the edges, and blend them into a larger map, much like solving a jigsaw puzzle. This process is called orthomosaic mapping.
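The capture spacing for a given overlap can be sketched with basic trigonometry, assuming a nadir-pointing camera over flat ground (the altitude and field-of-view numbers below are illustrative, not from any particular aircraft):

```python
import math

def waypoint_spacing(altitude_m, fov_deg, overlap=0.8):
    """Distance between capture points for a given forward overlap.

    The ground footprint along the flight axis is 2*h*tan(FOV/2);
    each new frame must advance by (1 - overlap) of that footprint.
    """
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    return footprint * (1 - overlap)

# Example: 60 m altitude, 70 degree along-track FOV, 80% overlap
spacing = waypoint_spacing(60, 70, 0.8)  # ~17 m between captures
```

Dividing this spacing by the flight speed gives the camera trigger interval.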

Input sensors

For capturing remote sensing scenes, resolutions of ~33-50 cm/pixel are the limit of commercially available satellite imagery. Drones allow us to capture imagery at higher resolutions (~5 cm/pixel) by flying at lower altitudes and below clouds, on our own schedule. However, we now have to compensate for device vibration and variable light. We can reduce vibration by improving the rigidity of our frame and adding shock mounts. Optimal images for aerial mapping combine short shutter speeds, an ISO high enough to compensate (without introducing noise), and large sensors to minimize pixel blur from motion. Technically speaking, we can also use lower-resolution sensors (same light over larger pixels) or increase our FOV, but this reduces output quality. Another way to reduce blur is to fly slower and/or higher, which reduces the amount of change between frames.
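These trade-offs come down to ground sample distance (GSD): how much ground each pixel covers. A minimal sketch of GSD and the motion blur it implies, using a hypothetical 1-inch sensor and lens as an example:

```python
def ground_sample_distance(sensor_width_mm, focal_length_mm,
                           altitude_m, image_width_px):
    """Ground sample distance in cm/pixel via similar triangles."""
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

def motion_blur_px(speed_m_s, shutter_s, gsd_cm):
    """Pixels of smear from forward motion during one exposure."""
    return (speed_m_s * shutter_s * 100) / gsd_cm

# Hypothetical camera: 13.2 mm-wide sensor, 8.8 mm lens, 5472 px across
gsd = ground_sample_distance(13.2, 8.8, 60, 5472)   # ~1.6 cm/pixel at 60 m
blur = motion_blur_px(5, 1 / 1000, gsd)             # well under a pixel
```

Flying slower, flying higher, or shortening the shutter all shrink `blur` in this model, which is the quantitative version of the advice above.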

GPS / GNSS / RTK

GPS systems can provide ~10m accuracy out of the box. By combining samples over time to reduce noise and using data from other satellite constellations (GLONASS, Galileo, BeiDou -- collectively GNSS), we can reduce this variance to ~3-5m.
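The sample-combining step can be illustrated with simulated fixes: averaging n independent readings shrinks random scatter by roughly a factor of sqrt(n). (It does nothing for systematic bias, which is what the differential techniques below address. The noise figure here is illustrative.)

```python
import random
import statistics

random.seed(42)

TRUE_POS = 0.0   # true position along one axis (m)
SIGMA = 5.0      # illustrative per-fix noise (m)

# 100 noisy one-dimensional position fixes
fixes = [random.gauss(TRUE_POS, SIGMA) for _ in range(100)]

# The mean of n independent fixes has noise ~ SIGMA / sqrt(n),
# so 100 samples cut ~5 m of scatter down toward ~0.5 m.
averaged = statistics.fmean(fixes)
```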

RTK (real-time kinematic) positioning provides higher-resolution fixes by pairing a roving GPS receiver with a fixed base station. Conceptually, once the base station has gathered enough samples for a precise fix on its own location, it acts like a local GPS reference, broadcasting corrections to the rover. GPS L1 satellites orbit ~12,500 miles / ~20,000 km away, so having a reference at <1% of that distance allows us to improve our triangulation/trilateration down to the ~1 cm range.
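Production RTK operates on carrier-phase measurements, but the underlying differential idea can be sketched in a few lines: because the base station knows its true position, the gap between its measured and known fix is mostly shared error (atmosphere, clocks) that the nearby rover sees too, and can therefore subtract. All coordinates below are made-up local values:

```python
# Surveyed base position in a local metric frame (hypothetical numbers)
BASE_KNOWN = (1000.00, 2000.00)

base_measured = (1001.80, 1998.70)   # base's raw GPS fix at this instant
rover_measured = (1051.75, 2049.35)  # rover's raw GPS fix at the same instant

# Shared error estimated at the base...
err = (base_measured[0] - BASE_KNOWN[0],
       base_measured[1] - BASE_KNOWN[1])

# ...is subtracted from the rover's fix.
rover_corrected = (rover_measured[0] - err[0],
                   rover_measured[1] - err[1])
```

This is closer to code-based DGPS than true RTK, but it shows why a nearby reference beats a distant satellite: the closer the base, the more of the error is genuinely shared.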

Ground control points

We place survey points/markers on the ground with known sizes (24-48 inches) to provide a common reference/calibration point between groups of photographs taken at different points in time. By precisely surveying these spots, we can in turn rescale our map to more accurately match the underlying geometry.
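A minimal sketch of the rescaling step, using two hypothetical GCPs whose surveyed separation disagrees with where they landed in the stitched map (real pipelines fit a full transform across many GCPs; this shows only uniform scale):

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Where two GCPs landed in the stitched map (map units)...
map_a, map_b = (10.0, 10.0), (110.0, 10.0)
# ...versus their surveyed ground separation (metres).
true_a, true_b = (0.0, 0.0), (95.0, 0.0)

# The map came out 5% too large, so scale = 0.95.
scale = dist(true_a, true_b) / dist(map_a, map_b)

def rescale(pt, origin=map_a, s=scale):
    """Scale a map point about the first GCP to match surveyed geometry."""
    return (origin[0] + (pt[0] - origin[0]) * s,
            origin[1] + (pt[1] - origin[1]) * s)
```

After `rescale`, the distance between the two GCPs in map space matches the surveyed 95 m.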

Output data

Conceptually, we automate the capture, tiling, and rasterization process and provide different deliverables. We expose our maps using web standards (slippy maps with custom JavaScript/GeoJSON) and can export traditional formats like GeoTIFF and DEM (digital elevation model).
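Slippy maps address tiles by zoom/x/y. The standard Web Mercator tiling math that turns a coordinate into a tile index can be sketched as:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Web Mercator (slippy map) tile indices for a coordinate.

    At zoom z the world is a 2^z x 2^z grid; longitude maps linearly
    to x, latitude maps through the Mercator projection to y.
    """
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y
```

A tile server then serves the raster chunk for each `{z}/{x}/{y}` the client viewport requests.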

Autonomy

Prior to surveying an area, we define a set of safety boundaries/no-fly zones and a coverage target. Next, we compute a set of non-intersecting flights to achieve this objective. A drone swarm operation, then, is the process of planning, flying, and logging data for a particular mission. True swarming (multiple drones flying concurrently) is not currently allowed by the FAA, so today we emulate swarming behavior under Part 107 by flying one drone at a time, with visual-observer and line-of-sight rules for safety and regulatory compliance.
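The flight-computation step can be sketched for the simplest case: a rectangular survey area covered by a boustrophedon ("lawnmower") pattern, with pass spacing chosen to preserve side overlap. (Real planners also handle arbitrary polygon boundaries, no-fly cutouts, and battery-limited flight splitting, which this sketch omits.)

```python
def lawnmower_waypoints(width_m, height_m, spacing_m):
    """Parallel passes over a rectangle, alternating direction each row.

    spacing_m is the distance between passes, i.e. the camera's
    across-track footprint times (1 - side overlap).
    """
    waypoints = []
    y = 0.0
    left_to_right = True
    while y <= height_m:
        row = [(0.0, y), (width_m, y)]
        waypoints.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right
        y += spacing_m
    return waypoints

# 100 m x 40 m field with 20 m pass spacing -> 3 passes, 6 waypoints
wps = lawnmower_waypoints(100, 40, 20)
```

Splitting the waypoint list at a battery-budget boundary would yield the set of non-intersecting flights described above.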

Tools

  • WebODM -- a web frontend for OpenDroneMap; useful for stitching together imagery and other related tasks.
  • PostGIS -- Postgres (database) extensions for geospatial data.
  • QGIS -- tools for visualizing and working with geospatial data.
  • STAC -- a specification (SpatioTemporal Asset Catalog) for describing and cataloging geospatial data.
  • GDAL -- tools for translating geospatial data between different formats and coordinate reference systems.