Drone Flight Tips When Using Global Mapper’s Pixels to Points Tool

Written by: Mackenzie Mills, Application Specialist

As drones gain popularity and more people begin collecting their own data for analysis, tools like Pixels to Points in Global Mapper become more important in workflows. The Pixels to Points tool uses the structure from motion (SfM) process to create three-dimensional data and image outputs from sets of drone-collected images. In many situations, this is a cost-effective alternative to collecting lidar (light detection and ranging) data.

The SfM process used in the Pixels to Points tool identifies features in multiple images by matching pixel patterns in the images. Features identified in multiple images are then triangulated and constructed in 3D space to generate three-dimensional outputs, including a point cloud.
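At the core of this reconstruction is triangulation: once the same feature is located in two images with known camera geometry, its 3D position can be solved for. The sketch below shows the standard linear (DLT) triangulation step with made-up toy cameras; it is illustrative only, and real SfM pipelines like the one in Pixels to Points also estimate the camera poses themselves.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from two views with the linear DLT method.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coords."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null-space vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: identity pose, and the same camera shifted 1 unit along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])          # a hypothetical ground feature
x1 = P1 @ np.append(X_true, 1)              # project into each view
x2 = P2 @ np.append(X_true, 1)
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]

print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

Note that the baseline between the two camera positions is what makes depth recoverable, which is why the same spot must appear in images taken from different places.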

Whether you are experienced with drone data collection or are new to this method, it is worth learning or reviewing some tips for quality data collection. The most important requirements for drone-collected images that you intend to process with Pixels to Points (structure from motion) are overlap and clarity.

The overlap between collected images is critical: it allows the tool to identify the same features in multiple images, triangulate them in space, and construct the output layers for the area. With sets of images that contain little to no overlap, the Pixels to Points tool cannot match features across views, and the result is an error or outputs with missing data. We recommend a minimum of 60% overlap between adjacent images, but you should always plan for more.
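A target overlap translates directly into a maximum photo-to-photo spacing once you know the ground footprint of one image. The back-of-the-envelope calculation below shows the relationship; the altitude and field-of-view numbers are illustrative assumptions, not values from this article, so substitute your own camera's specs.

```python
import math

def footprint_m(altitude_m, fov_deg):
    """Ground distance covered by one image along one axis."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def max_spacing_m(altitude_m, fov_deg, overlap):
    """Largest photo-to-photo spacing that still achieves `overlap` (0-1)."""
    return footprint_m(altitude_m, fov_deg) * (1 - overlap)

alt = 100.0   # flight altitude in meters (assumed example)
fov = 70.0    # camera field of view in degrees (assumed example)

for ov in (0.6, 0.75, 0.85):
    print(f"{ov:.0%} overlap -> shoot every {max_spacing_m(alt, fov, ov):.1f} m")
```

The same spacing logic applies sideways between flight lines, which is why higher overlap targets mean more passes and longer flights.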

Figure: Drone images of a baseball field being aligned.

The images you intend to use need clear, identifiable features so that the Pixels to Points tool can match distinct pixel patterns between them. This means two things: (1) the images need to be in focus, and (2) the scene needs identifiable features. Images that are blurry from camera shake or vibration, or that are out of focus, will yield incomplete results or none at all, because the blur inhibits the program's ability to identify features. Areas with no identifiable features cause similar errors; common examples are snow cover (all white with no features) and bodies of water with no persistent features that can be matched.
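Blurred frames can be screened out before processing with a simple focus measure: the variance of the image Laplacian, which scores low when high-frequency detail is missing. This is a common technique rather than anything built into Pixels to Points, and the synthetic images and any threshold you pick are assumptions to tune for your own camera and scene.

```python
import numpy as np

def laplacian_variance(gray):
    """Focus measure for a 2D grayscale image array: variance of the
    4-neighbor Laplacian. Sharp, detailed images score high."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))           # lots of high-frequency detail
flat = np.full((64, 64), 0.5)          # featureless, like flat snow cover
flat[20:40, 20:40] = 0.55              # one faint, soft patch

print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

Running a check like this over a whole image set before a long SfM job can save hours of processing on frames that would only contribute noise.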

Comparison of a focused and a blurred image due to noise.

Depending on the goals of your project, you may want to use different methods for collecting images. Some basic variables that go into how you plan your drone flight are pattern, height, and angle.

Drone flight patterns.

For two-dimensional mapping to generate an elevation model of a ground area without surface features, capturing nadir images (looking straight down) from as high up as possible is best. For this data collection, you can use a simple "mow the lawn" pattern, moving back and forth over the area of interest.
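The "mow the lawn" pattern is just parallel passes with alternating direction. A minimal sketch in local coordinates (meters over a rectangular area of interest) looks like this; the spacing value is a placeholder that would come from your overlap target and camera footprint.

```python
def lawnmower(width_m, height_m, spacing_m):
    """Waypoints (x, y) for back-and-forth passes over a rectangle."""
    waypoints = []
    y, going_right = 0.0, True
    while y <= height_m:
        # Alternate pass direction so the drone never backtracks.
        xs = (0.0, width_m) if going_right else (width_m, 0.0)
        waypoints += [(xs[0], y), (xs[1], y)]
        going_right = not going_right
        y += spacing_m
    return waypoints

for wp in lawnmower(200.0, 100.0, 50.0):
    print(wp)
```

In practice most flight-planning apps generate this pattern for you from the area boundary and overlap settings, but the geometry underneath is no more complicated than the above.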

For 3D modeling of high relief terrain, buildings, and structures, you’ll want to capture oblique images in order to capture the sides of features. Here you would fly at a lower height, 150 to 200 feet, with a front-facing camera and collect data in a checkerboard pattern, going back and forth over the study area, then back and forth again crossing over the previous flight lines. This will help to capture the sides of terrain features from various angles for a better three-dimensional reconstruction.
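The checkerboard (crosshatch) pattern described above is two perpendicular sets of parallel passes. A sketch over a square area, again in local meters with an assumed example spacing:

```python
def passes(extent_m, spacing_m):
    """Start/end points of alternating parallel passes across a square."""
    lines, offset, flip = [], 0.0, False
    while offset <= extent_m:
        a, b = (0.0, offset), (extent_m, offset)
        lines.append((b, a) if flip else (a, b))
        flip, offset = not flip, offset + spacing_m
    return lines

def checkerboard(extent_m, spacing_m):
    """Crosshatch coverage: one set of passes, then the same set
    rotated 90 degrees (x and y swapped)."""
    first = passes(extent_m, spacing_m)
    second = [((ay, ax), (by, bx))
              for (ax, ay), (bx, by) in passes(extent_m, spacing_m)]
    return first + second

grid = checkerboard(300.0, 60.0)
print(len(grid), "flight lines")
```

Because each spot on the ground is crossed from two perpendicular directions, an oblique camera sees the sides of terrain and structures from multiple angles, which is exactly what the 3D reconstruction needs.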

For structural modeling of a specific building or stockpile feature, you’ll want to capture oblique images as you fly at a lower height in a circle around the object of interest. This will capture images covering the sides of the feature to create a detailed model.
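An orbit capture can be sketched the same way: waypoints on a circle around the object, each with a heading pointing back at the center so the camera keeps the feature in frame. The center, radius, and shot count below are example numbers, not recommendations.

```python
import math

def orbit(center_x, center_y, radius_m, n_shots):
    """(x, y, heading_deg) waypoints circling an object of interest,
    with each heading aimed back at the object's center."""
    waypoints = []
    for i in range(n_shots):
        ang = 2 * math.pi * i / n_shots
        x = center_x + radius_m * math.cos(ang)
        y = center_y + radius_m * math.sin(ang)
        # Bearing from the waypoint back toward the center point.
        heading = math.degrees(math.atan2(center_y - y, center_x - x)) % 360
        waypoints.append((round(x, 1), round(y, 1), round(heading, 1)))
    return waypoints

for wp in orbit(0.0, 0.0, 30.0, 8):
    print(wp)
```

Flying two or three such orbits at different heights and camera tilts gives the top and sides of the structure enough overlapping views for a complete model.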

Types of angles of drone images.

Flight planning is an important part of drone data collection. Understanding the variables and data requirements of the Pixels to Points tool and other SfM processes will help you collect images better suited for processing, which in turn will produce higher quality results for further work.

2 Replies to “Drone Flight Tips When Using Global Mapper’s Pixels to Points Tool”

  1. Thanks for the article. I’m looking for a software alternative to Pix4D, DroneDeploy, etc. that can stitch and produce georeferenced UAV images and DTMs. Can the Pixels to Points tool be used for this?

    1. Hello Andy,

      Thank you for your comment. A Blue Marble team member will be in touch with you via email to answer your questions shortly.
