Over the past few years, Blue Marble Geographics®’ advanced point cloud processing tool has developed into professional photogrammetry and drone-mapping software. The latest version of the Global Mapper LiDAR Module comes with several enhancements, many of which are to the Pixels-to-Points tool for generating point clouds and 3D meshes from drone-captured images.
Here are the top 5 new features of Blue Marble’s Global Mapper LiDAR Module:
1. Automatic point cloud classification of pole-like objects
Manually classifying point cloud data can be time-consuming and tedious. This is why the Global Mapper LiDAR Module comes with automatic point cloud classification tools for points representing ground, buildings, vegetation, noise, powerlines, and, most recently, poles.
The new pole classification tool identifies and classifies points of pole-like objects, such as signs, lamp-posts, utility poles, basketball hoops, and other cylindrical features.
With this tool, users can define the characteristics of the poles they would like to see classified. For example, they can define the minimum height and number of points per pole. They can also define a “pole-like” threshold, allowing for either rigid or relaxed definitions of a pole. For instance, a simple post would typically have a pole-like threshold of 90–100%, whereas some straight trees may have a pole-like threshold of 35–40%.
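Blue Marble does not publish the algorithm behind this threshold, but one plausible way to score “pole-likeness” is the fraction of a cluster’s points that fall within a small horizontal radius of the cluster’s vertical axis. The sketch below is purely illustrative, with made-up parameter defaults, and is not Global Mapper’s implementation:

```python
import math

def pole_likeness(points, radius=0.3):
    """Score a candidate cluster of (x, y, z) points by the fraction
    that lie within `radius` metres of the cluster's vertical axis.
    A straight post scores near 1.0; a spreading tree scores far lower."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    inside = sum(1 for x, y, _ in points
                 if math.hypot(x - cx, y - cy) <= radius)
    return inside / len(points)

def classify_pole(points, min_height=2.0, min_points=20, threshold=0.9):
    """Apply minimum point count, minimum height, and pole-likeness checks."""
    if len(points) < min_points:
        return False
    zs = [p[2] for p in points]
    if max(zs) - min(zs) < min_height:
        return False
    return pole_likeness(points) >= threshold

# A tight 6 m column of points: clearly pole-like.
column = [(0.1 * math.cos(i), 0.1 * math.sin(i), i * 0.2) for i in range(30)]
print(classify_pole(column))  # True
```

A rigid threshold of 0.9 accepts the column above but would reject a tree canopy, whose points spread well beyond the axis; relaxing the threshold toward 0.35 admits progressively less cylindrical clusters.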
2. Photo masking in the photogrammetric tool Pixels-to-Points for eliminating unwanted backgrounds or data from images
Not all image data is ideal or necessary in photogrammetrically generated point clouds. This is why an option for photo masking was introduced to the Pixels-to-Points tool in version 21 of the Global Mapper LiDAR Module. Masking allows users to cut out unwanted areas from images, such as swaths of data that tend not to reconstruct well in a point cloud, like sky or water. It also allows users to crop their data down to focus on specific areas of interest, which also shortens the point cloud generation process.
3. Ground coverage polygons for showing the approximate ground coverage of drone-captured photos
Photogrammetrically generated point clouds can require hundreds of drone-captured images. To make it easier to manage and visualize the ground-coverage area of each photo, the latest version of the Global Mapper LiDAR Module’s Pixels-to-Points tool displays the ground extent of each input photo. Displaying these coverage-area polygons can also help users visualize the overlap of adjacent selected images.
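Global Mapper derives these coverage polygons from each image’s metadata; as a rough illustration of the underlying geometry, assuming a nadir-pointing camera over flat terrain, the footprint follows from the pinhole-camera similar-triangles relation. The camera parameters below are hypothetical:

```python
def ground_footprint(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm):
    """Approximate ground coverage (width, height in metres) of a nadir
    photo, using the pinhole-camera similar-triangles relation:
    ground_size = altitude * sensor_size / focal_length."""
    width = altitude_m * sensor_w_mm / focal_mm
    height = altitude_m * sensor_h_mm / focal_mm
    return width, height

# Hypothetical camera: 8.8 x 6.6 mm sensor, 8.8 mm lens, flown at 100 m.
w, h = ground_footprint(100.0, 8.8, 8.8, 6.6)
# Covers roughly 100 m x 75 m on the ground.
```

Overlaying the resulting rectangles (rotated by each image’s yaw) is what makes the overlap between adjacent photos easy to visualize.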
4. Additional support for importing accurate GPS information from external text files
In the latest version of the LiDAR Module, users can update the image-capture location (the EXIF information) from a text file. This allows users with high-accuracy positioning, such as PPK, to overwrite the initial geotag information embedded in their drone-captured images. This is a valuable feature for surveyors who need highly accurate photogrammetric point clouds or meshes.
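The exact text-file format Global Mapper expects is not described here, but the concept is straightforward. As an illustration, assuming a simple CSV of file name, latitude, longitude, and height, a loader that maps each image to its corrected position might look like:

```python
import csv
import io

def load_geotags(text):
    """Parse a CSV of high-accuracy positions keyed by image file name.
    Assumed (hypothetical) columns: filename, latitude, longitude, altitude."""
    tags = {}
    for row in csv.DictReader(io.StringIO(text)):
        tags[row["filename"]] = (
            float(row["latitude"]),
            float(row["longitude"]),
            float(row["altitude"]),
        )
    return tags

# Hypothetical PPK output for two drone images.
sample = """filename,latitude,longitude,altitude
DJI_0001.JPG,44.3106,-69.7795,112.4
DJI_0002.JPG,44.3108,-69.7791,112.6
"""
positions = load_geotags(sample)
# positions["DJI_0001.JPG"] -> (44.3106, -69.7795, 112.4)
```

The corrected coordinates would then replace each image’s original GPS geotag before the photogrammetric processing begins.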
5. Identification of images that contain selected ground control points based on their location
Another improvement to the Pixels-to-Points tool is the ability to see images that contain the same ground control points. When a single image is selected, the tool automatically suggests and highlights all image file names that may contain common ground control points. This makes selecting images based on location much easier and faster.
Accessible photogrammetry and point cloud processing software
Most of the improvements to the latest release of the Global Mapper LiDAR Module are to the photogrammetric point cloud generation tool Pixels-to-Points. This functionality allows GIS professionals easier access to point cloud data as drones and cameras become more affordable.
To try the Pixels-to-Points tool and the other powerful tools that come with the Global Mapper LiDAR Module, request a free two-week trial after downloading Global Mapper® here.
Let’s start with a question. How many of you currently own a Segway? Unless you moonlight as a mall cop or run an urban tour company, you probably decided not to jump on that gyroscopically controlled bandwagon. If the hype that surrounded the release of this ‘revolutionary technology’ was to be believed, we would long since have abandoned our cars, redesigned our cities, and be living much more fulfilling lives. Alas, the reality has fallen a little short.
The emergence and proliferation of Unmanned Aerial Vehicles (UAVs) or Drones, on the other hand, while not accompanied by a cacophony of hyped-up fanfare, promises to have a much more profound impact on our lives. If current speculation is to be believed, within a few short years, the skies overhead will be swarming with delivery drones, traffic monitoring drones, and even people-moving drones.
For those of us in the mapping industry, this eye-in-the-sky technology effectively addresses one of the perennial challenges that we face: where do we get data, and more specifically, where do we get current data? Traditionally, we have depended on often inadequate and outdated public geospatial data archives or expensive commercial sources. With the advent of readily accessible UAV technology, on-demand data is within anyone’s reach.
The rapid growth of UAV ownership has resulted in an interesting dilemma for some would-be pilots. Having purchased the hardware and collected some data, many are unclear about what exactly they can do with it. Over the last couple of years, I have attended several UAV-focused tradeshows, and a question that I am often asked is, ‘What can I do with Global Mapper?’ The answer: many things.
Initial Flight Planning
Before hitting the launch button, it is a good idea to virtually reconnoiter the project area. What possible obstructions are in the vicinity? What are the terrain characteristics? Are there any nearby buildings or other facilities that might have overflight restrictions? What is the coverage area? These questions and more can be answered by loading the relevant data into Global Mapper and conducting some rudimentary pre-flight analysis. Among the freely available online data services are high-resolution aerial imagery, Digital Elevation Models (DEMs), aviation charts, and topographic maps. Global Mapper’s drawing tools can be used to delineate the extent of the project site to determine coverage area and to draft an initial flight plan to optimize the data capture process. All of this data can be transferred to an iOS or Android device running Global Mapper Mobile to allow field checking of the flight plan parameters.
Geotagged Image Viewing
One of the most basic functions of a UAV is taking photographs, and as we will discuss below, with sufficient overlap, these images can be processed into a 3D representation of the local area. Before proceeding with this more advanced functionality, the images themselves can be loaded into Global Mapper as picture points, creating a geographic photo album. Derived from the coordinate values embedded in the image files, the location at which each photo was taken is represented by a camera icon in the map view. Using Global Mapper’s Feature Info tool, each photo is displayed using the computer’s default image viewer. Viewed in the 3D Viewer, the camera icons appear above the terrain, providing a precise representation of the drone’s altitude when each image was captured.
Incorporated into the optional LiDAR Module, beginning with the version 19 release of Global Mapper, the Pixels-to-Points tool is used to analyze an array of overlapping images to create a 3D representation of the environment. This powerful component identifies recurring patterns of pixels within multiple photographs and employs the basic principles of photogrammetry to determine the three-dimensional structure of the corresponding surfaces. While the underlying technology is extremely complex, as is typical in Global Mapper, the user’s experience is very straightforward. Simply load the images, apply the necessary settings for the camera system, add ground control points if available, click the Run button, and wait while it creates a high-density point cloud and, if required, a 3D model or mesh. The functionality of the Pixels-to-Points tool transforms simple drone-collected image files into a dataset that can be used for countless 3D analysis procedures.
A byproduct of the aforementioned point cloud generation process is the option to create an orthoimage. Defined as a raster layer in which each pixel’s coordinates are geographically correct, the orthoimage is generated by gridding the RGB values in the point cloud. Given its inherent accuracy, this 2D imagery layer can be used for precise measurements or as a base layer for digitizing or drawing operations.
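The gridding step described here can be sketched in a few lines: bin each point’s RGB value into a regular cell and average the colours per cell. The following is an illustrative toy, not Global Mapper’s implementation:

```python
from collections import defaultdict

def grid_rgb(points, cell_size):
    """Bin (x, y, (r, g, b)) points into square cells and average the
    colours, yielding a sparse orthoimage-like raster keyed by
    (column, row) cell indices."""
    sums = defaultdict(lambda: [0, 0, 0, 0])  # running r, g, b, count
    for x, y, (r, g, b) in points:
        key = (int(x // cell_size), int(y // cell_size))
        acc = sums[key]
        acc[0] += r; acc[1] += g; acc[2] += b; acc[3] += 1
    return {k: (a[0] // a[3], a[1] // a[3], a[2] // a[3])
            for k, a in sums.items()}

# Three coloured points; the first two share the 1 m cell at (0, 0).
pts = [(0.2, 0.3, (100, 150, 200)), (0.7, 0.6, (120, 150, 180)),
       (1.4, 0.2, (50, 60, 70))]
raster = grid_rgb(pts, 1.0)
# raster[(0, 0)] averages the first two points -> (110, 150, 190)
```

Because each cell inherits the geographically correct position of the points it contains, the resulting raster can serve as a measurement-grade base layer.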
DTM creation and Terrain Analysis
As mentioned previously, the Pixels-to-Points-generated point cloud represents the raw material for numerous analysis procedures in Global Mapper. As with any unprocessed dataset, some QA, cleanup, and processing will be required before embarking on any meaningful workflow. Fortunately, the software offers a plethora of editing and filtering options, including noise point removal, spatial cropping, ground point identification, and automatic reclassification. After isolating the points representing bare earth, the gridding tool is employed to create a Digital Terrain Model (DTM), a 3D raster layer that depicts the ground surface. In turn, this terrain layer can be used to create custom contour lines, to calculate volume, to delineate a watershed, to conduct line-of-sight analysis, and, if overlaid on a previously created DTM, to identify and measure change over time.
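As a toy illustration of the change-detection idea (not Global Mapper’s implementation), a cut-and-fill volume between two aligned DTM grids can be computed by summing the per-cell elevation differences multiplied by the cell area:

```python
def volume_change(dtm_old, dtm_new, cell_area):
    """Net volume between two aligned DTM grids (lists of rows of
    elevations in metres): positive means material was gained,
    negative means material was lost."""
    total = 0.0
    for row_old, row_new in zip(dtm_old, dtm_new):
        for z_old, z_new in zip(row_old, row_new):
            total += (z_new - z_old) * cell_area
    return total

# Hypothetical 2 x 2 DTMs: two cells rose by 0.5 m between flights.
before = [[10.0, 10.0], [10.0, 10.0]]
after  = [[10.5, 10.5], [10.0, 10.0]]
# With 1 m^2 cells, the net change is 1.0 cubic metre.
```

In practice the two surfaces must share the same grid origin, cell size, and vertical datum before any per-cell comparison is meaningful.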
Aside from capturing still images, most UAVs are equipped with the necessary hardware to record video. Beyond simple recreational use, this functionality is useful for building or asset inspection, strategic reconnaissance, forestry inspection, and countless other situations where a remote perspective is needed. Global Mapper includes an embedded video player that will play this recording while displaying the corresponding position of the UAV in the map window. The determination of position is derived from the per-vertex time stamps in the track file recorded during the flight. After loading this file as a line feature and associating it with the corresponding video file, the playback is initiated from the Digitizer’s right-click menu.
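The position lookup described above amounts to interpolating along a time-stamped track. As an illustrative sketch (not Global Mapper’s implementation), linearly interpolating between the two vertices that bracket the playback time might look like:

```python
import bisect

def position_at(track, t):
    """Linearly interpolate a (lat, lon) position at time t along a
    time-stamped track: a chronologically sorted list of
    (seconds, lat, lon) vertices."""
    times = [v[0] for v in track]
    if t <= times[0]:            # before takeoff: clamp to first vertex
        return track[0][1:]
    if t >= times[-1]:           # after landing: clamp to last vertex
        return track[-1][1:]
    i = bisect.bisect_right(times, t)
    t0, lat0, lon0 = track[i - 1]
    t1, lat1, lon1 = track[i]
    f = (t - t0) / (t1 - t0)
    return (lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0))

# Hypothetical two-vertex track: 10 seconds of flight.
flight = [(0.0, 44.30, -69.78), (10.0, 44.31, -69.77)]
midpoint = position_at(flight, 5.0)  # roughly (44.305, -69.775)
```

Syncing then reduces to calling this lookup with the video player’s current timestamp and moving the map icon accordingly.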
Not too long ago, it was generally accepted that, due to the size and weight of the required equipment, LiDAR collection could only be carried out using a manned aircraft. This simple fact contributed to the high cost and logistical challenges of the LiDAR collection process. Today, miniaturization of the LiDAR apparatus has reached the point where it is within the payload capacity of many larger drones. Given the limited range of the aircraft, drone-collected LiDAR is only viable for small, localized projects; however, it does allow frequent re-flying of a project site and is thus ideally suited for change detection. Global Mapper, along with the accompanying LiDAR Module, offers a wide range of tools for processing LiDAR data. As previously mentioned, points can be filtered and edited before creating a surface model for terrain analysis. Compared to photogrammetrically created point cloud data, LiDAR provides a more complete three-dimensional representation of non-ground features such as buildings, powerlines, and trees. The LiDAR Module offers a set of tools for identifying, reclassifying, and extracting these features as vector objects.
Fundamentally, UAVs and maps have much in common. Both are intended to provide a remote, detached perspective of an area of interest and allow us to see spatial distribution and patterns in our data that would not otherwise be detectable. It is understandable, therefore, that one of the primary functions of a drone is to provide data that can be used for creating maps and other spatial datasets. Global Mapper is ideally suited for this type of workflow and it provides an extensive list of tools that can be used by drone operators.
A thirty-year veteran in the field of GIS and mapping, and a lifelong geographer, David McKittrick is currently Outreach and Training Manager at Blue Marble Geographics. A graduate of the University of Ulster in Northern Ireland, McKittrick’s experience encompasses many aspects of the geospatial industry, including cartographic production, data management, marketing and sales, as well as software training and implementation services. McKittrick has designed and delivered hundreds of GIS training classes, seminars, and presentations and has authored dozens of articles and papers for numerous business and trade publications.
In anticipation of the increasing availability and use of LiDAR and other point cloud datasets, the LiDAR Module, an add-on to Global Mapper, was first introduced in version 15 of the software. Over the last five years, this popular component has evolved rapidly and now offers an array of powerful tools.
In this blog entry, we highlight the top five most important tools and functions in the LiDAR Module, including extracting vector features, processing UAV-collected images into point clouds, filtering LiDAR data, and generating 3D meshes or models.
1) Pixels-to-Points
The newest addition to the LiDAR Module, Pixels-to-Points is a tool for creating a high-density point cloud, orthoimage, and 3D mesh from overlapping images, especially those captured using a drone.
Based on the principles of photogrammetry, the Pixels-to-Points process identifies objects in multiple images and from multiple perspectives to generate a 3D point cloud. As a by-product of the point-generation process, the tool can also create an orthoimage by gridding the RGB values in each point, as well as a 3D mesh, complete with photorealistic textures.
2) Automatic Point Cloud Classification
The LiDAR Module’s automatic reclassification tools can accurately identify points representing ground, vegetation, buildings, and utility cables.
Algorithms in the LiDAR Module analyze the geometric properties and characteristics of point clouds to quickly classify these features. This process is commonly used to identify, classify, and filter ground points when creating a Digital Terrain Model (DTM), or as a first step in the process of isolating specific feature types when extracting vector features, such as buildings or trees, from a point cloud.
3) Feature Extraction
The Feature Extraction tool is used to create vector objects from appropriately classified points.
Based on a series of customizable settings, points representing buildings, trees, and utility cables are analyzed and automatically delineated as a series of 3D vector objects or, in the case of buildings, as a 3D mesh.
Feature extraction is particularly useful for creating building footprints, roof structures, powerlines, and other 3D features from classified LiDAR data.
4) Custom Feature Extraction
Custom Feature Extraction is a function for delineating atypical 3D features from point cloud data.
This function allows for the creation of accurate 3D line or area features by defining control vertices in a sequential series of perpendicular path profile views. Examples include defining road curbs, pipelines, and drainage ditches.
5) Mesh Creation from LiDAR Points
Mesh Creation is a function that uses a selected group of points to create a 3D vector object complete with photorealistic colors or textures.
The LiDAR Module offers the ability to create a mesh or model using the 3D geometry and colors of a selected group of points. When viewed in 3D, this model displays as a multifaceted photo-realistic 3D representation of the corresponding feature.
For information about all of the features that the LiDAR Module has to offer, visit our website here.