Working with Bathymetric Data

By: Katrina Schweikert

Global Mapper is well known for its file format support and terrain analysis capabilities. Perhaps what is less well known is the way the various data analysis tools in Global Mapper can be used to generate and analyze bathymetric data. 

Bathymetry is the study of topographic landforms below the water, such as on the ocean floor, the bottom of a lake, or even the bed of a river. Given that over 70% of the Earth’s surface is covered with water, this branch of 3D analysis is extremely important in understanding the characteristics of the planet. What follows is an exploration of some of Global Mapper’s analysis and visualization techniques that are relevant to bathymetric analysis.

Great Barrier Reef Depth model obtained from Geoscience Australia

Bathymetric Data Support

Global Mapper provides support for over 300 file formats, many of which are used for bathymetric data, marine navigation, and remote sensing of subsurface data. Here are some examples:

  • Marine navigation and nautical charts (S-57 and S-63 with S-52 symbols, NOS/GEO, NV Verlag, PCX, and others)
  • Sonar, sidescan sonar, and bathymetric sounding data (Lowrance Sonar, XTF, HTF, and others)
  • Gridded bathymetric data (BAG, DBDBV, Hypack, IBCAO, GRD98, NITF, and various other terrain formats such as netCDF, GeoTIFF, and ASCII grid)

Bathymetry in a DTM

Gridded bathymetric data provides various visualization and analysis options when loaded into Global Mapper. The preformatted elevation shaders or a custom shader can be used to find the best color scheme for showing the depths of submarine landforms. Terrain shaders can also reveal the slope steepness and slope direction of underwater topography.
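To make the slope idea concrete, here is a minimal sketch, outside of Global Mapper, of how slope and aspect can be derived from a gridded depth surface (the grid values and cell size are illustrative):

```python
import numpy as np

def slope_aspect(z, cell_size):
    """Per-cell slope (degrees) and aspect (degrees clockwise from north)."""
    dz_dy, dz_dx = np.gradient(z, cell_size)   # rate of change along rows (y) and columns (x)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0  # one common aspect convention
    return slope, aspect

# A tiny 3x3 depth grid in meters (negative = below sea level), 10 m cells
depths = np.array([[-10.0, -12.0, -14.0],
                   [-11.0, -13.0, -15.0],
                   [-12.0, -14.0, -16.0]])
slope, aspect = slope_aspect(depths, cell_size=10.0)
print(slope.round(1))  # steepness of the underwater surface, cell by cell
```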

Displayed in the 3D viewer, gridded bathymetric data comes to life with draped imagery and charts, water level visualizations, and any other reference vector data. Quickly and easily generate elevation profiles, or a series of sequential cross-profiles using the Path Profile tool and Perpendicular Profiles setting. 

3D view of bathymetric data with path profile cutaway showing a shipwreck site in the Gulf of Mexico

Combining data from different surveys and fusing data from multiple sensors is as easy as loading in the datasets and ordering the layers. The analysis and visualization tools can automatically merge the various inputs to take data from the topmost layer or choose to view and compare the data from multiple surfaces simultaneously. There are also options for cropping, aligning, feathering, and comparing to create a more seamless integration between disparate datasets. 

Analyzing Bathymetry as a 3D Point Cloud

Global Mapper provides tools for converting existing sensor data such as sonar or soundings to a 3D point cloud, or for sampling existing gridded data to create an array of 3D points at the pixel centers. This enables the automated classification algorithms of the Lidar Module, which can be used to identify the seafloor and to detect or remove other subsurface structures or topography. This powerful capability has been used for shipwreck detection and modeling, as well as for the identification of other subsurface features.

Subsurface Contouring

Global Mapper includes an easy-to-use tool for generating precise depth contours and shorelines from gridded bathymetric data. The resulting line features can be edited and stylized in a variety of ways and combined with other datasets to create custom bathymetric charts. Alternatively, the areas enclosed by contour lines can be filled to create polygons that show the water extent at different depths or sea levels.

Contour lines colored by elevation combined with other basemap data to create a custom chart
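For readers curious about the underlying idea, here is a minimal sketch of depth contouring on a synthetic grid using Matplotlib’s general-purpose contouring; Global Mapper’s contour tool is its own implementation, so this is only an illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic bathymetry: a 30 m deep basin with a central seamount (illustrative values)
x = np.linspace(0, 1000, 101)   # meters
y = np.linspace(0, 1000, 101)
X, Y = np.meshgrid(x, y)
Z = -30 + 20 * np.exp(-((X - 500)**2 + (Y - 500)**2) / 1e5)

levels = np.arange(-30, 0, 5)                    # a contour every 5 m of depth
cs = plt.contour(X, Y, Z, levels=levels)         # depth contour lines
plt.clabel(cs, fmt="%d m")                       # label each depth line
plt.contourf(X, Y, Z, levels=levels, alpha=0.3)  # filled polygons between depth levels
plt.savefig("depth_contours.png")
```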

Measurement and Volume Calculation

Global Mapper provides various tools for calculating two- and three-dimensional measurements. In the 2D map view, the Path Profile window, and the 3D Viewer, linear distances and areas are measured using a simple drawing function. Volume can be calculated from bathymetric data either by defining a single water height or by calculating numerous volumes across a range of water heights. Volume can also be measured by defining a plane or by comparing the bathymetric data to another surface grid. Together, these provide various options for water volume calculation.
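As a rough illustration of the single-water-height case, the volume is just the summed depth below that level times the cell area. A minimal sketch, assuming a regular grid (not Global Mapper’s internals):

```python
import numpy as np

def water_volume(bed_elev, cell_area, water_level):
    """Sum (water_level - bed elevation) over submerged cells, in cubic meters."""
    depth = water_level - bed_elev
    return float(np.sum(depth[depth > 0]) * cell_area)

bed = np.array([[-5.0, -4.0],
                [-3.0,  1.0]])  # bed elevations (m); one cell sits above water
print(water_volume(bed, cell_area=100.0, water_level=0.0))  # (5 + 4 + 3) * 100 = 1200 m^3

# Sweeping a range of water heights gives a stage-volume curve
for level in (0.0, 1.0, 2.0):
    print(level, water_volume(bed, 100.0, level))
```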

Flood Modeling

By combining bathymetric data with terrain data and using tools such as the Watershed Analysis and Water Level Rise tools, it is possible to determine flood extents, calculate flow accumulation, and perform other hydrographic analyses.

Employing the various terrain editing and terrain creation functions, Global Mapper can be used to create hydro-enforced DEMs or other modified surface models. These can be analyzed within Global Mapper or exported to various formats to support analysis in other applications. 

Temperature and other Measurements

Bathymetric analysis may also involve other gridded datasets, such as surface temperature, salinity, gravimetric data, and various other measured values. These datasets can also be visualized, rendered in 3D, and contoured to provide additional insight into the dynamics of lakes, oceans, and other water bodies.

The latest versions of Global Mapper and the Lidar Module include several enhancements, many of which apply to bathymetric data analysis. If this blog piqued your interest and you’d like to find out if Global Mapper is the right application for you, download a 14-day free trial and request a demo today!

How Pixels to Points Works

By: Katrina Schweikert

The Pixels to Points tool in Global Mapper’s Lidar Module uses a process of Automated Aerial Triangulation to reconstruct the 3D scene present in overlapping images. This computationally intensive process may seem like magic, but it relies on basic concepts of vision and photogrammetry. Photogrammetry is the science of taking real-world measurements from photographs. Let’s pull back the curtain to reveal how this process works. 

What is Aerial Triangulation?

Based on photogrammetry techniques, the location, size, and shape of objects can be derived from photographs taken from different angles. By combining views from multiple images, the locations of distinct parts of the image are triangulated in 3D space. This is similar to how depth perception works with two eyes: because the object in front of you is viewed from two slightly different angles, the brain can perceive how far away the object is.

Diagram of depth perception

In traditional photogrammetry with stereo-image pairs, the two viewing angles allow the photogrammetrist to measure objects in the image and determine their real-world size. With automated techniques using many overlapping images, the entire three-dimensional nature of the photographed scene can be reconstructed.

Photogrammetry measurement diagram
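The two-view geometry reduces to a simple relationship for an idealized, rectified stereo pair: depth is focal length times baseline divided by disparity. A minimal sketch with illustrative numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Z = f * B / d: depth from the shift of a feature between two parallel views."""
    return focal_px * baseline_m / disparity_px

# A feature shifted 40 px between images taken 1 m apart, with a focal length of 4000 px
print(depth_from_disparity(4000.0, 1.0, 40.0))  # -> 100.0 m from the cameras
```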

What are the steps in Automated Aerial Triangulation?

Automated Aerial Triangulation involves a number of steps to get from the original images to 3D point clouds, terrain models, textured 3D models, and orthoimages. The first step is to detect distinct features in each image, and then match those features across the adjacent images. The challenge is to automatically detect distinct features that may be at different scales and rotations in each of the images. 

Features detected in two images, with lines showing the matches found
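Pixels to Points has its own feature pipeline, but the detect-and-match step can be sketched with OpenCV’s SIFT (a scale- and rotation-invariant detector) and Lowe’s ratio test; the file names here are placeholders:

```python
import cv2

img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                       # features robust to scale and rotation
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)               # two best candidates per feature
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # keep clearly-best matches
print(f"kept {len(good)} of {len(matches)} candidate matches")
```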

After the features are tracked through the images, the initial reconstruction begins with a process called Structure from Motion (SfM): the structure of the 3D scene is revealed based on the motion of the camera. This process calculates the precise orientation of the cameras relative to each other and to the scene, and builds the basic surface structure of the scene.

This is the point where the selected Analysis Method is applied. The Incremental Analysis Method starts with a set of the best-matching photos and incrementally adds the features from subsequent images into the scene to build the 3D reconstruction. This works well for drone images collected over a large area in a grid pattern; the reconstruction will typically start somewhere near the center of the scene and work outwards. The Global Method, by contrast, takes information from all of the images together and builds the scene all at once. This makes for a faster process, but it also requires a higher degree of overlap between adjacent images. It is recommended when the images focus on a central area or object of interest, such as a building.

The result of the Structure from Motion analysis is a sparse point cloud capturing the basic structure of the scene, and a set of precisely oriented cameras that show where, and in what direction, each image was taken relative to the others.

Example of sparse point cloud with camera frustums

The final step of the Automated Aerial Triangulation process involves filling in additional detail from each image that was calibrated as part of the scene. This process, called Multi-View Stereo, involves calculating the depth of each part of the image (i.e., how far it is from the camera) and then fusing those depth maps, keeping the points that appear in multiple images.

Depth map and confidence map based on overlap with other images

This process generates the final dense 3D point cloud. Based on the options selected, there may be further processing to convert the point cloud into a refined mesh surface (3D Model) that is photo-textured by projecting the images onto it. This option also produces the highest quality orthoimage, removing relief distortions based on the 3D mesh surface. 

What factors impact Automated Aerial Triangulation?

Lens Distortion

An important initial step in the Pixels to Points process is removing the lens distortion in each image. While a photograph may appear to the untrained eye as a flat capture of the target area, most photographs contain some distortion, particularly toward the edges of the image, where the effect of the curvature of the camera lens is most visible. Pixels to Points removes distortion in the image based on the Camera Type setting. Most standard cameras need correction for basic radial lens distortion in order to create an accurate 3D scene. The default camera type, ‘Pinhole Radial 3’, corrects for radial lens distortion using three factors. In some cases it might be beneficial to use the ‘Pinhole Brown 2’ camera model, which accounts for both radial distortion and tangential distortion, the latter occurring when the lens and sensor are not perfectly parallel.

Image with distortion and the processed, undistorted image
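The models named above can be written down directly. A minimal sketch of the distortion applied to normalized image coordinates; the coefficients are placeholders that a real calibration would estimate:

```python
def distort(x, y, k1, k2, k3, p1=0.0, p2=0.0):
    """Brown-style distortion: radial (k1..k3) plus tangential (p1, p2) terms."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3   # the 3 factors of 'Pinhole Radial 3'
    x_tan = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)   # the 2 tangential factors of 'Pinhole Brown 2'
    y_tan = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x * radial + x_tan, y * radial + y_tan

# Undistorting an image inverts this mapping numerically (e.g., by fixed-point iteration)
print(distort(0.5, 0.5, k1=-0.1, k2=0.01, k3=0.0))
```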

Some cameras have the ability to perform a calibration, which automatically removes distortion in the image. If the Pixels to Points tool detects from the image metadata that the images have been calibrated, it will switch to the ‘Pinhole’ camera model. If you know your images have already had the distortion removed either by the camera, or some other software, choose the ‘Pinhole’ camera model, which will not apply any additional distortion removal. The final two Camera Type options account for the more extreme distortion of Fisheye or Spherical lenses. Select these options if appropriate for your camera. 

Focal Length and Sensor Width

An important part of transferring the information in the image into real-world scale is knowing some basic camera and image information. The focal length and sensor width values allow for a basic calculation of how large objects are in the image, and thus how far away they are from the camera. What is calculated using these values is a ratio between a known real-world size (the sensor width) and the pixel equivalent of that size in the image. This is a starting point for reconstructing the 3D scene. Focal length information is typically stored in the image metadata. Global Mapper includes a database of sensor widths based on the camera model; however, you may be prompted for this value if your camera is not in the database. You can obtain this information from the device manufacturer.
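That ratio leads to the familiar ground sample distance (GSD) estimate. A minimal sketch with illustrative camera numbers:

```python
def ground_sample_distance(sensor_width_mm, focal_length_mm, image_width_px, height_m):
    """Ground size of one pixel: (sensor width * flying height) / (focal length * image width)."""
    return (sensor_width_mm * height_m) / (focal_length_mm * image_width_px)

# e.g., a 13.2 mm wide sensor, an 8.8 mm lens, 5472 px wide images, flown at 100 m
gsd = ground_sample_distance(13.2, 8.8, 5472, 100.0)
print(round(gsd, 4), "m/pixel")   # ~0.0274 m, i.e. about 2.7 cm of ground per pixel
```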

Image Position

The basic position of each camera is typically stored in the image metadata (EXIF tags). With a standard camera, this location is derived from GPS, for which average horizontal accuracy is typically within a few meters. There are a few ways to improve the accuracy of the resulting data, depending on the desired accuracy and on decisions about cost versus time spent.

Height Correction

The GPS sensors contained in most cameras may have sufficient horizontal accuracy for some applications; however, the corresponding height values are usually less accurate and are based on an ellipsoidal height model. A basic height correction can be performed using the Relative Altitude options. This anchors the output heights based on the ground height where the drone took off (the ground height at the first image). You can enter a specific value, or Global Mapper can automatically derive the value from loaded terrain data or online references (USGS NED or SRTM).
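The arithmetic behind this anchoring is simple. A minimal sketch, assuming the camera log stores heights relative to the take-off point (values illustrative, not Global Mapper’s code):

```python
def absolute_heights(relative_altitudes, takeoff_ground_height):
    """Convert heights above the take-off point into heights in the terrain's frame."""
    return [alt + takeoff_ground_height for alt in relative_altitudes]

# Camera heights above the take-off point, anchored to a 250 m ground elevation
print(absolute_heights([80.0, 80.4, 79.9], takeoff_ground_height=250.0))
# -> [330.0, 330.4, 329.9]
```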

Ground Control Points

One way to correct the position of the output data is through the use of Ground Control Points: a set of surveyed points with known X, Y, and Z locations that should be evenly distributed throughout the scene. The measured ground control point locations need to be visually identifiable in the corresponding images, so it is common to place a set of crosshairs or targets on the ground throughout the collection area before the images are captured.

 

Ground Control Points can be loaded into the Pixels to Points tool and their corresponding locations identified in multiple input images. This aligns the scene based on the control points, which take precedence over the camera positions. This procedure is more time-intensive, but it is streamlined through a process whereby the images likely to contain each point are highlighted. It is also possible to apply Ground Control Points after the output files have been generated; Global Mapper provides various tools for this, including 3D rectification and the Lidar QC tool, which can also provide accuracy assessment information.

RTK and PPK Positioning

Hardware manufacturers provide options for improving the accuracy of the positional information by communicating with a reference base station in addition to satellites, and by performing additional corrections based on information available at the time of the image collection. This includes both Real-Time Kinematic (RTK) and Post-Processing Kinematic (PPK) options. With some systems, the higher accuracy positioning information is written into the image metadata, where it can be used directly by the Pixels to Points tool. Other systems may save the higher accuracy positions in a text file, in which case you will want to load your images into the Pixels to Points tool and use the option to Load Image Positions from External File.

 

Understanding the variables and data requirements for the Pixels to Points tool and other SfM processes will help you to collect images better suited for processing. In turn, this will create higher quality results for further geospatial analysis.

The latest version of the Global Mapper Lidar Module includes several enhancements, many of which apply to the Pixels to Points tool for generating point clouds and 3D meshes from drone-captured images. If this blog piqued your interest and you’d like to find out if the Lidar Module of Global Mapper is the right application for you, download a 14-day free trial and request a demo today!

Classifying Lidar with the push of a (few) button(s)!

By Rachael Landry

If you are working with any type of point cloud data, the Global Mapper Lidar Module is a powerful, must-have add-on to the desktop application. One of the standout features of the Module is its ability to automatically identify and apply the appropriate ASPRS classification to each point with a few clicks. This blog will walk through the steps required to automatically classify a point cloud. 

Global Mapper’s Lidar auto-classification tools provide the means to identify ground, buildings, utility lines and poles, vegetation, and noise points within an unclassified point cloud. Each of the classification processes requires the presence of ground points in the point cloud, so this is a good place to start. If necessary, noise classification can be used to automatically identify any points that are beyond the expected elevation range when compared to those in close proximity; this cleanup step removes obvious anomalies in the data. Once the ground is classified, buildings and trees can be identified, and if the point cloud is of sufficient density, there are even tools to classify above-ground utility lines and poles.
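To illustrate the idea behind noise classification (not Global Mapper’s actual algorithm), a point can be flagged when its elevation is far from the elevations of its horizontal neighbors. Here is a minimal sketch using a k-d tree:

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_noise(xyz, radius=2.0, max_dz=5.0):
    """Boolean mask of points whose z is far from their neighborhood's median z."""
    tree = cKDTree(xyz[:, :2])                # neighbor search in XY only
    noise = np.zeros(len(xyz), dtype=bool)
    for i, pt in enumerate(xyz):
        idx = [j for j in tree.query_ball_point(pt[:2], r=radius) if j != i]
        if idx and abs(pt[2] - np.median(xyz[idx, 2])) > max_dz:
            noise[i] = True
    return noise

pts = np.array([[0, 0, 10.0], [1, 0, 10.2], [0, 1, 9.9], [1, 1, 60.0]])  # last point is a spike
print(flag_noise(pts))   # -> [False False False  True]
```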

When you begin the auto-classification process and load your point cloud into the software, it is important to know that Global Mapper can display points in several different ways, including by RGB value (if present), by intensity, and by classification. For this process, we will color the lidar by classification. If your point cloud has never been classified, it will look similar to this:

*An unclassified point cloud is displayed as gray.

After the data is loaded, you are ready to classify ground points. To do this, click the Auto-Classify Ground Points button in the toolbar to bring up the Automatic Classification of Ground Points settings window. These values will need to be adjusted based on the local terrain, the range of elevation values in the dataset, user-defined preferences for filtering points prior to auto-classification, and known features in the landscape; this helps to optimize the output. When you have applied the necessary settings, click the OK button to initiate the process.

*A point cloud with classified ground points.

If necessary, your next step will be to click the Auto-Classify Noise Points button. Identifying previously unclassified noise points will clean up the point cloud and improve further classification results. 

At this stage, the non-ground points, those representing buildings and vegetation, are ready to be identified and classified. In the Lidar Module, buildings and vegetation are classified using the same algorithm, and the dialog box can be accessed using the Auto-Classify Buildings and Vegetation button. The parameters required in the classification process describe the expected structure of buildings and trees within the point cloud, and these values can be adjusted to account for the characteristics of your specific point cloud.

*Point cloud with ground, buildings, and vegetation classified.

The Auto-Classify Powerline and Pole Points tool can automatically detect above-ground cables and/or pole-like objects, such as utility poles, in high-density lidar data with at least 20 points/m². This density is typical of terrestrial and mobile lidar point clouds. While synthetic lidar (photo-generated point clouds) may also reach this density, it does not typically have the reconstruction detail to precisely identify power lines or pole-like objects. Similar to the other classification tools, this process looks for structures resembling power lines or poles based on user settings.

After you have classified your point cloud, you can begin analyzing the data further. This may involve creating a terrain model or extracting vector features from the classified point cloud.

Keep an eye out for our upcoming blog, focused on the lidar QC process! 

To learn more about the Lidar Module’s automatic classification tools, please check out the Global Mapper Knowledge Base, and if you have any further questions about the auto-classify tools, please contact geohelp@bluemarblegeo.com.

Getting to Know the Global Mapper Toolbars

Written by: Cíntia Miranda, Director of Marketing

Global Mapper is a robust and yet easy-to-use GIS application that offers access to an unparalleled variety of spatial datasets, a complete suite of vector and raster processing tools, and an extensive collection of analysis tools, especially for working with Lidar or terrain data. If you’re new to Global Mapper, getting to know the toolbars is one of the first steps in familiarizing yourself with the application. This blog provides a brief review of the buttons to help you understand the basic function of each. More in-depth information is available in the Knowledge Base.

The toolbars in Global Mapper provide quick and easy access to the most commonly used tools. To hide or display the toolbars, click the View menu and, from the Toolbars submenu, check or uncheck the appropriate checkboxes as needed.

The drop-down menu on the right side of each toolbar provides access to customization options for that toolbar, including adding new buttons and showing text labels.

Note that some toolbar buttons will not be available in certain situations. For example, most of the Digitizer (Edit) buttons will be disabled until one or more vector features are selected on the map.

Here’s what each toolbar button can do for you:

File

  • Open Data Files
  • Save Workspace
  • Connect to Online Data
  • Map Layout Editor
  • Overlay Control Center
  • Configure
  • Overview Map

Navigation

  • Zoom (Alt+Z)
  • Pan (Alt+G)
  • Zoom In
  • Zoom Out
  • Restore Last View (Ctrl+Backspace)
  • Full View

Selection

  • Digitizer Tool (Alt+D)
  • Select by Drawing Polygon
  • Clear Current Selection
  • Select Labels

Tools

  • Measure Tool (Alt+M)
  • Feature Info Tool (Alt+P)
  • Search Vector Data

Analysis

  • Create Elevation Grid
  • Create Contours
  • Calculate Cut and Fill Volume (Ctrl+Alt+M)
  • Path Profile (Alt+L)
  • Create View Shed (Alt+V)
  • Create Water Shed
  • Combine/Compare Terrain Layers
  • Create 3D Fly-through

Viewer

  • Add 2D Map Views
  • Rotate Map
  • Image Swipe
  • Show 3D View
  • Link 2D and 3D Views (Ctrl+Shift+3)
  • Display Water Level
  • Increase Water Level
  • Decrease Water Level
  • Enable/Disable Hill Shading
  • Dynamic Hill Shading
  • Shader Drop-down Menu

GeoCalc

  • Enable GeoCalc Projection Mode
  • Auto-select GeoCalc Transform
  • Launch Geographic Calculator

Favorites

  • Favorites Drop-down
  • Run Selected Command (Ctrl+Enter)

Digitizer (Create)

  • Create Point/Text Feature
  • Create Line Feature (Vertex Mode)
  • Create Line Feature (Trace Mode) (Shift+T)
  • Create Area Feature
  • Create Rectangle/Square Area Feature
  • Create Circle/Ellipse Area Feature

Digitizer (Advanced)

  • Create Distance/Bearing/COGO Line
  • Create Range Rings/Ellipses
  • Create Regular Grid of Features
  • Create Strike-and-Dip Point
  • Cut Selected Area(s) From Another Area
  • Right Angle Draw Mode (R)
  • Ortho Draw Mode

Digitizer (Edit)

  • Move Selected Feature(s) (Ctrl+Shift+M)
  • Rotate/Scale Feature(s)
  • Display Area/Line Vertices (Shift+V)
  • Move Selected Vertices
  • Insert Vertex
  • Combine Line Features
  • Split Line At Selected Vertex
  • Create Points From Line/Area Vertices
  • Create Areas From Lines
  • Create Lines From Areas
  • Combine Selected Areas
  • Crop To Selected Areas
  • Create Buffer Around Selected Features

GPS

  • Start Tracking GPS (Ctrl+T)
  • Stop Tracking GPS
  • Keep GPS/Video Vessel on Screen
  • Orient View to GPS/Video Heading
  • Mark Waypoint (Ctrl+M)
  • Mark Waypoint from Averaged Position
  • Mark Waypoint at Offset
  • Display GPS Info

Animate

  • Start
  • Stop
  • Slower
  • Faster
  • Add
  • Remove

Get the most out of Global Mapper by learning how it can improve productivity, encourage efficiency, and save time and money in your GIS operations. The following resources will help you become familiar and more proficient with the software.

1) The Global Mapper Getting Started Guide provides a concise overview of the software.

2) The Global Mapper Knowledge Base has more in-depth information about Global Mapper’s features and functions.

3) The FAQ page offers answers to commonly asked questions.

4) The self-guided training provides a series of free hands-on exercises, including written instructions and sample data files. Take a moment to download these instructional materials to learn how to use some of Global Mapper’s basic tools. 

5) The GeoTalks Express webinars are a series of free online presentations conducted every two weeks, covering a wide variety of topics and themes. Sign up for one or more webinars!

6) Global Mapper online training classes provide the most effective way to get the most out of the software. Scheduled public classes provide a thorough introduction to the full breadth of the application’s features and functions, while a custom class will allow your organization to adapt the course content to meet your specific needs. For more information, email training@bluemarblegeo.com.

Global Mapper’s intuitive user interface and logical layout help smooth the learning curve and ensure that users will be up and running in no time. Take advantage of the aforementioned resources, and if you need any further assistance with the application, contact geohelp@bluemarblegeo.com.

How to Activate Global Mapper Single-User License

By Rachael Landry

The Global Mapper single-user licensing process begins with an email. When a purchase is completed, an order confirmation email is automatically sent with information and instructions on how to license the software, including links to download Global Mapper, information about how to become a registered user, and access to detailed instructions on how to activate the license. The email also provides the order number for the purchase, which is used to activate single-user licenses via the internet.

*It is important to note that the email is generally sent to the purchaser unless otherwise requested. Please keep this in mind: if the software was purchased by your company, the licensing email may have been sent to the purchasing department rather than to the end user.

After reviewing the order confirmation email, the next step is to download Global Mapper and open the application. The software will open with the License Global Mapper dialog box, where you enter your user information (make sure to enter the same information you use to log in to the Blue Marble website). If you are not a registered user, follow this link to register before proceeding. Then select the Activate single-user or trial license option and click the Continue button.

In the next dialog box, select the Single user license option and enter your complete order number. Note that this field is case-sensitive. This dialog box also allows users who purchased Global Mapper and the optional Lidar Module in the same order to license both at the same time. Finally, click the Continue button to complete the licensing process.

After your copy of Global Mapper has been registered, please be sure to check out all of Blue Marble’s Global Mapper resources. From the YouTube page to the self-guided training and bi-monthly webinars, Blue Marble wants to provide you with the tools to ensure that you are using Global Mapper to the fullest.

If you have any questions or issues activating your license, please contact authorize@bluemarblegeo.com.

GeoTalks Express – Session 8 Questions & Answers

The eighth session of Blue Marble’s GeoTalks Express online webinar series, entitled Got a drone, now what? An Introduction to Pixels to Points, was conducted on June 24th, 2020. During the live session, numerous questions were submitted to the presenters. The following is a list of these questions and the answers provided by Blue Marble’s technical support team.

 

Can you see the orthoimage in 3D?

The orthoimage generated by the Pixels to Points tool is a 2D image. The image will appear in the 3D view, but it will be flat and appear below loaded 3D data layers. 

 

Do you have to purchase the Global Mapper Lidar Module to work with the drone data?

The Lidar Module is an add-on to the Global Mapper program that includes point cloud editing, viewing, and processing tools as well as the Pixels to Points tool shown in this webinar. 

If you are interested in testing out the Global Mapper program and the Lidar Module, I encourage you to download Global Mapper from our website and activate a trial license.

 

Can you perform tree height: DSM-DTM?

In Global Mapper you can use the Combine/Compare Terrain Layers tool to subtract one layer from another, like DSM – DTM as you have noted, to find tree heights. This does require having both a DSM (Digital Surface Model) and DTM (Digital Terrain Model) created for an area. 

Going back a step, you can generate the elevation grid layers, DSM and DTM, from point cloud data using the Elevation Grid Creation tool in Global Mapper. The Elevation Grid Creation tool supports multiple binning methods: one uses maximum values to generate a DSM, and another uses minimum values to generate a DTM.
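As a rough sketch of the min/max binning and differencing described here (illustrative only, not Global Mapper’s implementation):

```python
import numpy as np

def bin_grid(xyz, cell, reducer):
    """Grid a point cloud, keeping one z per cell via reducer (max -> DSM, min -> DTM)."""
    cols = (xyz[:, 0] // cell).astype(int)
    rows = (xyz[:, 1] // cell).astype(int)
    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, xyz[:, 2]):
        grid[r, c] = z if np.isnan(grid[r, c]) else reducer(grid[r, c], z)
    return grid

pts = np.array([[0.5, 0.5, 101.0], [0.7, 0.4, 112.0],    # ground return + treetop in one cell
                [1.5, 0.5, 102.0], [1.6, 0.6, 102.3]])   # open-ground cell
dsm = bin_grid(pts, cell=1.0, reducer=max)
dtm = bin_grid(pts, cell=1.0, reducer=min)
print(dsm - dtm)   # ~11 m canopy height in the first cell, ~0.3 m in the second
```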

One of the drawbacks to photogrammetrically derived point cloud layers, those created from 2D drone images, is that the ground underneath tree cover is not usually identified accurately. This is because the images only show the treetops, not the ground, so the process cannot identify features and create points representing the ground. If you are looking to generate your own DTM layer modeling the ground, I would recommend true lidar data. That being said, if your collected images allow for a good reconstruction of the tree canopy, you can create a DSM from the generated point cloud and compare that to an existing DTM layer.

 

What are the recommended camera orientation and drone height above the surface?

The height and angle at which you fly your drone when collecting images depend on the goals for your data collection. For instance, if you are planning to map the ground in a wide-open area, you can fly back and forth over the area collecting images looking straight down at the area of interest.

If you are looking for more detail on terrain features or features on the surface, you may want to fly at a lower height, still taking nadir or maybe slightly oblique images, flying back and forth over the area. Then, to increase the views you have on the features in the area, continue flying over the area in straight lines crossing your original flight lines perpendicularly. This will provide views of terrain features from additional angles. 

To model a specific feature or pile, capture oblique images of the feature as the drone flies in a circle around it. This will capture many angles of the same feature, but with more oblique images you may need to use the masking tool in Pixels to Points to crop out areas along the sky and horizon. 

 

Each image file has coordinates of presumably the center point of the image. Where do those coordinates come from? Is it a drone camera or control function?

The drone-collected images load into Global Mapper as Picture Points. These are point features, represented with a camera icon, that appear at the location of the camera when the image was captured. These coordinates are recorded and attached to the image by the GPS-enabled camera that captured it. The coordinates, along with other information about the camera and image, are stored in the EXIF data for each image.

 

Do you have to use control points or is this an optional step to improve accuracy?

You do not need to use control points when generating your outputs with the Pixels to Points tool. Including ground control points will help to improve the accuracy of the outputs as they are placed in 3D space. Without ground control points, the outputs generated from the Pixels to Points tool will still be accurate relative to themselves.

You can also choose to incorporate some control points after generating your outputs. You can rectify layers in x, y, z, or use the Lidar QC tool to vertically adjust your generated point cloud layer. 

 

Is there anything in GM documentation regarding camera types? I.e., pinhole 1, 2, or 3, etc.?

Yes, additional information on the Camera Type options can be found here in the Global Mapper knowledge base. The information on the Pinhole camera types is as follows: 

  • Pinhole – A classic Pinhole camera
  • Pinhole Radial 1 – A classic pinhole camera with a best-fit for radial distortion defined by 1 factor to remove distortion.
  • Pinhole Radial 3 – A classic pinhole camera with a best-fit for radial distortion by 3 factors to remove distortion.
  • Pinhole Brown 2 – A classic pinhole camera with a best-fit for radial distortion by 3 factors and tangential distortion by 2 factors.

 

Can low-res lidar be used to calibrate Pixels to Points projects?

While you cannot use an existing point cloud to calibrate or help generate a new point cloud in the Pixels to Points process, you may be able to derive some control points from your existing point cloud that could be used in Pixels to Points.

In the Lidar Module of Global Mapper, there is a Fit Point Clouds tool that can be used to adjust one point cloud to better fit another. After generating your new point cloud with Pixels to Points, you may be able to use the Fit Point Cloud tool to adjust the new point cloud based on your existing lidar point cloud layer. 

 

Do you have to identify the control point in each of the green-colored images?

Yes, you should identify each ground control point in each image in which it appears. With a ground control point selected in the Pixels to Points dialog, the images listed in green are suggestions of where the ground control point is likely to appear, based on the coordinates of the point and the calculated image coverage for each input image.

 

Is there some automatic tool in order to make placing the control points easier?

Placing the ground control points in images through the Pixels to Points tool is a manual process. With a control point selected, the Pixels to Points tool will list some of the input images in green to suggest where the selected control point may appear. This suggestion, based on the coordinates of the control point and the image coverages, helps to narrow down the images to look through when placing the control point.

 

Do you have a documented workflow for the drone data processing?

There is a workflow outline, as well as details on the various steps, settings, and options for the Pixels to Points tool, here in the Global Mapper Knowledge Base.

If you would like more guidance on using the Lidar Module, we do offer Lidar Module training, as well as Global Mapper training. These training sessions have been moved to an online format, and more information can be found here on our website.

 

After tagging a GCP in one image, can Global Mapper estimate the location in the overlapping images so that the user only needs to refine the location and not place it in every single image?

Tagging a ground control point in an image in the Pixels to Points tool only places it in that one image. Placing the ground control points in images is a manual process that must be done for each image; no ground control points are placed automatically.

 

I have seen some cases where users put white paint markings on the ground or use trig beacons as control points. Is this a normal standard?

Yes, you can absolutely paint a control point on the ground in your study area or use a trig beacon or other feature. A ground control point should be a point on the ground that you can survey the location of and that can be easily and accurately identified in your drone-collected images. 

 

How critical are the ground control points when you do a drone survey? In other words, can you trust the data collected without any control points?

You do not need to use control points when generating your outputs with the Pixels to Points tool. Including ground control points will help to improve the accuracy of the outputs as they are placed in 3D space. Without ground control points, the outputs generated from the Pixels to Points tool will still be accurate relative to themselves.

You can also choose to incorporate some control points after generating your outputs. You can rectify layers in x, y, z, or use the Lidar QC tool to vertically adjust your generated point cloud layer. 

 

Are hills and mountains identified as ground?

Hills and mountains should be identified as ground, as they are part of the ground surface. The automatic ground classification tool in the Lidar Module requires some user-entered parameters to help guide the tool to more accurately classify ground. A couple of these parameters are Maximum Height Delta, the approximate range in elevation for ground in the area, and Expected Terrain Slope, the maximum expected terrain slope in the area. Adjusting these parameters appropriately will help to better classify ground in areas of steeper slope and higher elevation, such as hills and mountains.

 

Is it possible to load the GPS data in a separate text file that is not in the EXIF? We currently use full-size airplanes with medium format cameras that record the EO data separately from the EXIF. 

Yes, you can load the image position information from an external text or CSV format file. With images loaded into the Pixels to Points dialog, select the images and right-click in the Input Images box. Select Load Image Positions from External File and point to the file containing the image positions. More information on this right-click option can be found here, with information on the Input Images section of the Pixels to Points tool.

As long as your captured images meet the data recommendations and have sufficient overlap and clear features, you should be able to use them with the Pixels to Points tool. 

 

Is it possible to create a point cloud with ground control only and no camera GPS positions?

Yes, the Pixels to Points tool can create outputs using images that do not have camera coordinate information. These images cannot be loaded into the main view of Global Mapper and will need to be loaded directly into the Pixels to Points tool.

 

By downsampling the quality of the images, will the resolution of the exported data also change?

Reducing the image sizes will reduce their resolution, resulting in a lower level of detail in the input images. This may cause the program to find fewer matching features based on the recurring pixel patterns, ultimately slightly reducing the density and resolution of the outputs.

For the best possible outputs, it is recommended to use the full image resolution, but depending on the image sizes, settings, and machine specifications, that is not always possible. If you do need to reduce the image sizes, try to reduce them by the smallest factor possible for your data and your machine.

 

Is it possible to import metadata for images, such as an external orientation file, instead of EXIF?

Yes, you can load the image position information from an external text or CSV format file. With images loaded into the Pixels to Points dialog, select the images and right-click in the Input Images box. Select Load Image Positions from External File and point to the file containing the image positions. More information on this right-click option can be found here, with information on the Input Images section of the Pixels to Points tool.

 

Is there a way to incorporate existing point cloud/lidar data to help with the mesh process? 

While you cannot use an existing point cloud to help generate the mesh feature from the Pixels to Points tool, you can create a mesh from a selected point cloud. If you have multiple point clouds loaded and selected for an area, all selected points will be used to generate the 3D mesh feature. 

 

Can you load a file of post-processed GPS coordinates and replace the GPS coordinate in each image? To use GM to process drone mapping data, it needs the ability to improve the image geotags with post-processed coordinates. Is this possible?

Yes, you can load positions for images from an external text or CSV format file in the Pixels to Points tool dialog. This can be used to add positions to images that have none, or to update and replace the existing image positions.

With images loaded into the Pixels to Points dialog, select images and right-click in the Input Images box. Select to Load Image Positions from External File and point to the file containing the image positions. More information on this right-click option can be found here with information on the input Images section of the Pixels to Points tool.

 

Can you use any camera with the ability to geotag images?

Yes, any image taken with a GPS-enabled camera should load into Global Mapper as a picture point. While the Pixels to Points tool is geared toward drone-collected imagery, you may be able to use any images as long as they meet the data recommendations.

 

Can you have more than one ground control point in an image?

Yes, if multiple ground control points appear in a single image, you should place them all in that image. To place each ground control point, select the point from the ground control point list, click Add Control Point to Image, and place the control point. Then select another control point from the list and complete the same steps to add that point to the image.

 

The KML files generated by Global Mapper need to have the code edited after export so that DJI drones can read them. Are there plans to change KMLs to be compatible with DJI?

Yes, we do have an open issue for resolving the incompatibility of Global Mapper-produced KML files with DJI systems. The fix for this issue is tentatively slated for the Global Mapper 22.0 release.

 

Do photo locations and GCPs need to be in WGS84 coordinates, or can other coordinate systems be used?

The photo location coordinates are assumed to be GPS-derived and therefore implicitly bound to WGS84. Do you have collected images that use a different coordinate system for the photo locations? If so, could you share a few sample images? This would allow us to look at the EXIF info and investigate a way to load them using the correct coordinate reference system.

Ground control points can be in any supported coordinate system. When loading a layer of ground control points into the main view of the Global Mapper workspace, they will be treated as any other point layer and loaded using the projection specified in the file, or selected by the user if needed. When loading control points from a text file into the Pixels to Points tool, you will be met with the Generic Text File Load Options, and since text files do not contain projection information, you will be prompted to select the correct projection for the file. 

 

Is MGA2020 projection supported in the output coordinate system?

The MGA (Map Grid of Australia) is a supported projection in Global Mapper, and GDA2020 (Geocentric Datum of Australia 2020) is a supported datum. The workspace projection set in Configuration > Projection will be the coordinate system used when exporting files from the workspace.

 

You need a pretty robust PC to process this data!

Processing many images at full resolution will take a more powerful machine. System requirements and recommendations for Global Mapper and the Pixels to Points tool can be found here in the Global Mapper knowledge base.

 

I’ve noticed that when selecting the GCP, it sometimes does not exactly select the center of the GCP. Is that okay?

You should try to tag the ground control points as precisely as possible in the Pixels to Points tool. Zooming and panning in the image preview window will help to place them at the desired location in the images. When placing ground control points on an image, you may see the control points snap to the center of the nearest image pixel.

 

How did you generate the flight line?!?

The flight line is automatically generated by loading the picture points via the Pixels to Points tool dialog. To generate a flight line, open the Pixels to Points tool and load the images directly into this dialog with the Load Image File(s)… option. Once the images have loaded into the Input Images list, go to the Map menu in the Pixels to Points tool and choose Load Image(s) as Picture Point(s). This option will load a group of layers into the Global Mapper workspace containing all the picture points for the drone images, as well as a Flight Line layer with the flight path.

 

Can you use a Global Mapper script to automate the process of Pixels to Points transformation?

Global Mapper script does support the command GENERATE_POINT_CLOUD to set up and run the Pixels to Points process. Running the process through a script does not allow you to perform manual tasks like placing ground control points or masking sections of images. 

 

Any recommendations for good flight plan apps?

We don’t have any recommendations for flight plan apps as there are many out there. We generally suggest working with the one most compatible with your drone model. 

 

Will you be supporting the GLB file in your next version?

We do not have any current plans to include the GLB format for 3D objects. Supported 3D model formats are listed here in the Global Mapper knowledge base.

 

Do you have to fly the second time to capture the sides of the buildings/house at an oblique angle instead of nadir?

 

The flight path of your drone and the images captured should be designed with your end goal in mind. If you are planning to accurately model building features as well as the ground, you may want to capture oblique images from many angles. Keep in mind that the Pixels to Points tool and the Structure from Motion process can only reconstruct areas for which there are clear views in overlapping images.

Check out this blog post with some more details on drone flight tips when collecting images for use in the Pixels to Points tool. 

 

How many GCPs do you need for a certain area of your flight?

Your ground control points should be evenly spread over the area of interest, and each ground control point should appear in multiple images. There is no rule on how many ground control points you should have in a given area. More control points will improve accuracy up to a point, but eventually the addition of more control points will not result in much improvement in the outputs.

 

So the data altitude is assumed to be in NAVD88 instead of NGVD29?

Global Mapper does not work with vertical reference systems or transform between them. The assumed vertical system for Pixels to Points is Ellipsoidal Height. 

 

If you have two sets of flight plans with different altitudes for the same area, will it distort the 3D model because they have different ground sample distances?

If both flights use the same camera, the lighting in both image sets is similar and fairly even, and the images are clear, then the two datasets from different heights should process together fine.

 

Should I use a special application or program to create my flight plan and make it compatible with Global Mapper? I mean, does Global Mapper have an application to create the drone’s flight plan?

Global Mapper cannot create flight plans for drones to execute. You will need to use another app to create and execute the flight. Global Mapper does support many file formats, so if you are able to save your flight line, you can then import it into Global Mapper as you work with your drone-collected images.

Additionally, you can recreate your flight from your collected drone images using the Pixels to Points tool. To generate a flight line, open the Pixels to Points tool and load the images directly into this dialog with the Load Image File(s)… option. Once the images have loaded into the Input Images list, go to the Map menu in the Pixels to Points tool and choose Load Image(s) as Picture Point(s). This option will load a group of layers into the Global Mapper workspace containing all the picture points for the drone images, as well as a Flight Line layer with the flight path.

 

Can Global Mapper do the corresponding coordinate system transformation: in this case, could Global Mapper transform UTM coordinates to the German Gauß-Krüger coordinates?

You can reproject data in Global Mapper by changing the workspace projection in Configuration > Projection. Both UTM and Gauss-Kruger are supported projections in Global Mapper. The workspace projection set in Configuration > Projection will be used when exporting layers from Global Mapper. 

 

How long does it take Global Mapper to process 200 photos, with its best performance? What must be the specifications of my computer to achieve this processing time?

The length of time it takes to process an image set through the Pixels to Points tool depends not only on the number of images and the machine specifications, but also on the size of the images and the settings applied in the Pixels to Points tool. Machine recommendations and requirements for using Global Mapper and the Pixels to Points tool can be found here in the Global Mapper knowledge base.

 

Can we do all this processing with an Unregistered copy as my Registered Version is at my workplace but I am working from home?

An unregistered version of Global Mapper does have limitations that will likely prevent you from being able to process images with the Pixels to Points tool. If you need help with a Global Mapper license please reach out to our licensing team at authorize@bluemarblegeo.com with your most recent order number.

 

Any particular drone which works better with the Global Mapper LiDAR module?

We don’t have any specific drone recommendations, but you can find some data collection recommendations for the Pixels to Points tool in the Global Mapper knowledge base. 

 

Can you use scripting for all these processes?

Global Mapper script does support the command GENERATE_POINT_CLOUD to set up and run the Pixels to Points process. Running the process through a script does not allow you to perform manual tasks like placing ground control points or masking sections of images. 

 

We are watching while flying our drone doing a mapping job. I’d love for you to email the PowerPoint later. We want to start using global mapper and LiDAR vs drone2map or pix4d. We just need to learn it. 

I am glad you are interested in using Global Mapper for your drone data processing! We have thorough documentation on the tool and options here in the Global Mapper knowledge base. Additionally, as a registered attendee you should receive an email within the next week with access to the recording of this webinar.

If you have any questions about using Global Mapper, the Lidar Module, or the Pixels to Points tool specifically, you can reach out to our technical support team at geohelp@bluemarblegeo.com. 

 

Can you export to Google Earth?

Yes, Global Mapper supports many file formats for both import and export including raster and vector KMZ/KML formats compatible with Google Earth. 

 

Is it required to have a base overall image pre-loaded or can you do this work with captured images only?

A base image is not required. In the example shown in the webinar, the imagery simply provided a visual reference for the project. You can absolutely view and process your drone-collected images without any background data, loading only the images and the generated outputs from the Pixels to Points tool.

 

If possible, please provide the best image capture settings for the fly-over? Or a couple of scenarios?

The flight plan for your drone when collecting images depends on the goals of your data collection. Keep in mind that the Pixels to Points tool and the Structure from Motion process can only reconstruct areas for which there are clear views in overlapping images. For instance, if you are looking to map the ground in a wide-open area, you can fly back and forth over the area collecting nadir images of the area of interest.

If you are planning to gather more detail on terrain features or features on the surface, you may want to fly at a lower height, taking slightly oblique images, flying back and forth over the area. Then, to increase the views you have on the features in the area, continue flying over the area in straight lines crossing your original flight lines perpendicularly to capture additional angles of the features.

Check out this blog post with some more details on drone flight tips when collecting images, and the data collection recommendations for images with the Pixels to Points tool. 

 

I use a Microdrones mdLidar 1000, which has a 5 MP camera that is primarily for colorizing the lidar point cloud. Can I generate just an orthoimage without generating a point cloud or 3D mesh?

You can select to output only the orthoimage from the Pixels to Points tool. Since this image is generated from the point cloud, the point cloud will still be constructed during processing, but only the selected output, the orthoimage, will be saved.

 

Does your workflow optimally utilize highly accurate camera coordinates in lieu of the use of GCPs?

If you do not choose to use ground control points with the Pixels to Points tool, the camera coordinates alone will be used as the positioning information when generating the output layers. Increased accuracy of the camera coordinates will result in more accurately positioned outputs from the tool.

 

Can you build a model without any coordinates i.e. without GCP or camera coordinates?

Yes, the Pixels to Points tool can create outputs using images that do not have camera coordinate information. Collected images with no coordinates cannot be loaded into the main view of Global Mapper as picture points and will need to be loaded directly into the Pixels to Points tool. With no camera coordinates or ground control points, the outputs will be placed at the origin (latitude/longitude 0,0), since the program will have no coordinate references for the data.

 

Can I enter camera coordinates via a list rather than through EXIF embedded coordinates?

Yes, you can load camera positions from an external text or CSV format file. With images loaded into the Pixels to Points dialog, select the images and right-click in the Input Images box. Select Load Image Positions from External File and point to the file containing the image positions. More information on this right-click option can be found here, with information on the Input Images section of the Pixels to Points tool.

 

Where in the object space do you specify “ground height”?

The Use Relative Altitude Based on Ground Height parameter in the Pixels to Points tool allows you to specify a ground height for the first image. The other points then use the entered value to calculate the vertical component of the outputs.

 

Are there Points to Pixels tutorials available?

There is a workflow outline as well as details on the various steps, settings, and options for the Pixels to Points tool here in the Global Mapper Knowledge Base.

If you would like more guidance on using the Lidar Module we do offer Lidar Module training, as well as Global Mapper training. These training sessions have been moved to an online format and more information can be found here on our website.

Additionally, as a registered attendee you should receive an email in the next week with access to the recording of this webinar. 

 

The accuracy improvement of using ground control vs NOT using… are we talking metres, decimetres, or even better improvement? Under what conditions would the use of ground control points lead to marginal improvements? 

The accuracy of the outputs generated without control points depends on the accuracy of the image positions. If the cameras are using RTK/PPK, then the output model should already be in the decimeter accuracy range. If the camera collects less accurate GPS coordinates, then high-accuracy ground control points can definitely increase the accuracy from a few meters to closer to the GCP accuracy, which is also likely in the decimeter range depending on how the points were collected.

Generally, we do recommend the use of high-accuracy ground control points, or at least points collected with some GPS averaging, if the collected image positions are not highly accurate.

 

Also, can we have access to some test data like this for self-training?

We do not currently have a Pixels to Points lesson in the Global Mapper self-guided training available on our website. If you are interested in further training on the Lidar Module and the Pixels to Points tool, we do have some public training classes scheduled in the upcoming months. These classes have been moved to an online format, and you can find out more about training here on our website.

 

What about PPK data?

High accuracy image positions or ground control points collected with a PPK system can be used with Global Mapper and the Pixels to Points tool. Using higher accuracy position information will help to improve the accuracy of the Pixels to Points outputs. 

 

Do the help files (or some discussion on a web site) discuss parameters for when you might want to adjust settings – e.g. analysis method, checkboxes for higher quality/resampling?  I’m thinking similarly to suggesting using masks for sky, water, snow cover, etc.

The Pixels to Points tool documentation contains information about the tool in general as well as an outline of the steps to use the tool and specific information on the options and settings available. 

 

How would I know which camera type my drone camera fits?

I recommend researching your drone model and contacting the manufacturer for information on compatible camera models. 

 

I would also like some information about processing time – some reference points of number of images, processor speed, cores, etc.

The time it takes to process an image set with the Pixels to Points tool depends on the number of images, the image resolution, the settings selected in the Pixels to Points dialog, and the machine on which you are running the process. You can find some system requirements and recommendations for Global Mapper and the Pixels to Points tool specifically in the Global Mapper knowledge base. 

 

Can I add EXIF info to images from a separate exterior orientation file like the one created from Trimble Applanix IMU/GPS?

Yes, you can load image positions from an external text or CSV format file. While I am not familiar with the file format created by your specific GPS device, information on the options to load image positions from an external file can be found here, along with other information on the Input Images list in the Pixels to Points tool.

 

How accurate does the GPS of the drone need to be? Most camera GPS is no better than 5m accuracy. Should potential drone purchasers be looking for any particular standard of GPS capture?

While more accurate image position information will produce more accurate outputs from the Pixels to Points tool, the use of high accuracy ground control points can help to significantly boost the accuracy of the outputs when using less accurate image position coordinates.

 

Looks like Global Mapper is projecting the images on-the-fly. Is that true?

The quick individual orthoimages loaded into Global Mapper are being projected to fill the approximate image coverage areas calculated from the camera position and view parameters.

 

I tried this with our new DJI Mavic 2 Enterprise Dual drone and Global Mapper asked what the focal length was. I had no idea so the Pixels to Points tool never worked.

If some camera parameters, like focal length, cannot be read from the image EXIF information, Global Mapper will ask for the missing information. To find this, I suggest looking through the documentation on your drone and camera model and reaching out to the manufacturer for the hardware specs.

 

Is there any limitation when loading images like size, resolution,…?

There is no limitation on the image size, file size, or image resolution in Global Mapper. When working with large high-resolution images, you may run into some hardware limitations when you attempt to process the images in the Pixels to Points tool. The most common limitation users run into when processing large datasets is an insufficient amount of available memory on the machine. In this case, the solution would be to downsample the images by some factor, for example by halving their pixel dimensions.

System recommendations for using Global Mapper and the Pixels to Points tool can be found here. Keep in mind that these are recommendations, and having more available memory will be beneficial when processing larger images.
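If you need to downsample a large image set in bulk, a short script can handle it before the images are loaded. Here is a minimal sketch assuming the Pillow library (pip install Pillow); the folder names and the half-size factor are arbitrary choices, and halving the pixel dimensions cuts per-image memory use roughly fourfold.

    # Downsample every JPG in a folder by half, carrying the EXIF bytes over
    # when present so camera positions are not lost. A sketch, not a
    # Global Mapper feature; adjust paths and the scale factor as needed.
    from pathlib import Path
    from PIL import Image

    src = Path("images")
    dst = Path("images_half")
    dst.mkdir(exist_ok=True)

    for path in src.glob("*.JPG"):
        with Image.open(path) as im:
            exif = im.info.get("exif")  # raw EXIF bytes, if present
            half = im.resize((im.width // 2, im.height // 2), Image.LANCZOS)
            if exif:
                half.save(dst / path.name, exif=exif)
            else:
                half.save(dst / path.name)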

 

Sometimes the images do not have EXIF, how to add this information using a CSV file?

You can load the image position information from an external text or CSV file. With images loaded into the Pixels to Points dialog, select the images and right-click in the Input Images box. Select Load Image Positions from External File and point to the file containing the image positions. More information on this right-click option can be found here, along with information on the Input Images section of the Pixels to Points tool.

 

My images were generated by a camera which is not in the list of cameras, what can I do?

What camera do you use for image collection? Typically the camera information can be read from the image EXIF information; if it is not, you will be prompted to enter some parameters. If you can provide the camera model and specifications, I would be happy to pass them along to our team to get the camera added to the Camera Model list in the Pixels to Points tool.

 

Is the irregularity in the trees and vegetation due to the way the data was collected or as a result of the SfM?

Since the goal of this image set was to reconstruct the farmhouse area, the irregularities and distortion in the trees stem from a combination of the data collection and the Structure from Motion (SfM) process. 

The images are nadir images and the tree areas where distortion is seen are on the edges of the study area. Being toward the edge of the study area with top-down images means that there aren’t as many good overlapping views of the trees in the source images. 

Additionally, trees and vegetation areas are often difficult for the Structure from Motion process because the imagery there is noisy, making it hard for the program to identify match points for reconstruction.

 

Does Pixels to Points work on non-nadir pointing images?

Yes, you can use non-nadir images, like oblique images, with the Pixels to Points tool.

 

Can you add different columns to the Image list, i.e. Roll/Pitch/Yaw?

If the Roll/Pitch/Yaw information is stored in the EXIF info for an image, it should appear as attributes for the picture point when the image is loaded into the main view of Global Mapper. If this information is detected, it will be used when ortho-rectifying individual images.

However, these additional view parameters, Roll/Pitch/Yaw, are not currently used when generating the Pixels to Points outputs and cannot be viewed in the Input Images list.
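To check which orientation or position tags your images actually carry, you can inspect them outside Global Mapper. Here is a minimal sketch using the third-party exifread package (pip install exifread); note that many drones (DJI, for example) store gimbal roll/pitch/yaw in XMP metadata rather than in standard EXIF tags, so an empty result here does not prove the values are absent from the file.

    import exifread

    # Print any GPS or orientation-related tags found in the EXIF block.
    with open("DJI_0001.JPG", "rb") as f:
        tags = exifread.process_file(f, details=False)

    for name, value in tags.items():
        lowered = name.lower()
        if "gps" in lowered or any(k in lowered for k in ("roll", "pitch", "yaw")):
            print(name, value)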

 

Is it best to maintain a constant AGL or constant altitude when collecting imagery?

It is recommended to keep a constant altitude and, in general, when capturing nadir images, to fly the drone as high as possible.

 

What is the camera type for DJI Phantom 3 Pro?

If the camera model and information cannot be read from the EXIF information in your drone images, the Pixels to Points tool will prompt you to select the camera and enter some specific parameters. I recommend you refer to the manufacturer for the specific camera model information for your drone. 

 

Can Global Mapper create true, not distorted, orthophoto? 

The Pixels to Points tool can remove distortion from the orthoimage by using the generated 3D mesh when generating the orthoimage layer. To generate the best quality texture for the mesh, you should select the Quality setting of Highest in the Pixels to Points settings. This combination of checking the option to Generate Orthoimage from Mesh and using the Highest Quality setting will remove distortion when generating the orthoimage layer. 

You can also choose to individually ortho-rectify images through Pixels to Points. By checking the option to Ortho-rectify Each Image Individually in the Pixels to Points tool settings each image will be placed on the map. With this method, there will likely be some noticeable seams between the images. 

 

Can we apply this workflow to satellite images within RPCS?

No. Satellite imagery is generally collected as stereo pairs, while the Pixels to Points tool needs multiple images (at least a dozen) with around 60% overlap.

 

Can Global Mapper create disparity maps for export?

By using a Global Mapper script to run the Pixels to Points tool, you can add additional command options. One of these advanced options will generate depth map images. These depth maps are the equivalent of disparity maps for the Pixels to Points Structure from Motion process.

 

I take 360° panoramics for radar coverage prediction, will Global Mapper be able to process the pans to make a screen profile looking at the horizon?

Unfortunately, the Pixels to Points tool does not support the use of panoramic images, so processing your images would likely fail. If you were to collect images of the area with an alternate camera, you could use the Pixels to Points tool to create some 3D outputs of the area in order to view and analyze the area and the horizon line. 

 

Can Pixels to Points work underwater (clear) if you mask out the land and add a few control points underwater?

We have not tested with an underwater set of images. If the water does not cause noise or distortion in the images, the Pixels to Points tool may be able to generate an output, but this is untested. There are various factors concerning how light travels through water that are not accounted for in our algorithms.

If you have a dataset of clear underwater images that you would be able to share with us for testing purposes, that would be appreciated as we do not currently have test data of that type.

 

Are there any plans to incorporate flight planning within Global Mapper?

Currently, Global Mapper cannot create flight plan features that can be used by drones on flights to capture images. This is something being considered by our development team.

 

Can I use a QC tool to create an RMSE for a surface vs. check shots? I have used the Lidar QC tool to do this, but I am hoping to do this with a grid.

While the Lidar QC tool can only be used with point cloud data at this time, we do have an open ticket, #GM-6634, on adding a QC tool for gridded elevation data. I have added your request to this ticket for our development team to consider.

In the meantime, you could create points at the elevation grid cell centers for your gridded elevation layer and then use the Lidar QC tool to compare your control points to the created points from the elevation grid. 
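Conceptually, that comparison boils down to sampling the grid at each check shot and computing the RMSE of the differences. Here is a minimal sketch of the idea outside Global Mapper, assuming the grid is a NumPy array with a known top-left origin and cell size (all names are illustrative).

    import numpy as np

    def rmse_vs_grid(grid, origin_x, origin_y, cell, points):
        # points: iterable of (x, y, z) check shots in the grid's CRS;
        # assumes every point falls inside the grid extent.
        errors = []
        for x, y, z in points:
            col = int((x - origin_x) / cell)
            row = int((origin_y - y) / cell)  # rows increase downward
            errors.append(grid[row, col] - z)
        errors = np.asarray(errors)
        return float(np.sqrt(np.mean(errors ** 2)))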

 

Can a camera and lens calibration be interpreted in Global Mapper?

Currently, camera calibration is not calculated or used in the Pixels to Points process. There is an open issue, #GM-9644, on adding a camera calibration tool to the program, and I have added your request to this ticket.

Geo-Challenge — June 2020 Answers

How Well Did You Do?

Name the Country? – Uruguay

 

Name the Island? – Kodiak Island

 

Name the Body of Water? – Gulf of Corinth

 

Name the Mountain Range? – Atlas Mountains

 

Name the Capital City? – Hanoi

What’s new in Global Mapper Mobile version 2.1

Written by: Cíntia Miranda, Director of Marketing

If you don’t have Global Mapper Mobile® on your phone or tablet, you’re missing out on a great opportunity to expand the reach of your GIS operations – for free!  Global Mapper Mobile is a powerful iOS and Android application for viewing and collecting GIS data.  It utilizes the GPS capabilities of mobile devices to provide situational awareness and locational intelligence for remote mapping projects. The mobile application provides maps-in-hand functionality for engineers, surveyors, wildlife managers, foresters, and anyone whose job requires access to spatial data in remote locations.

A complement to the desktop version of Global Mapper®, the mobile edition can display all of the supported vector, raster, and elevation data formats and offers a powerful and efficient data collection tool. The 2.1 release includes several new enhancements including:

  • Vector feature styling improvements, with an increase in the number of built-in supported vector styles and expanded support for custom symbols. Feature styles can now be previewed when creating or editing a feature.
  • Terrain layers are now rendered with hill shading and a default color shader. In addition, the app will display a terrain layer’s elevation value at a specific location when in crosshair location mode.
  • A new option to set layer transparency for raster and terrain layers. This latest release also features an improved color picker and support for Dark Mode.
The new Shortcut Bar (upper left) allows for quick access to Advanced GPS functionality and zooming/panning tools

For advanced field mapping applications, a Pro version of Global Mapper Mobile is available for only $50. Version 2.1 of Global Mapper Mobile Pro includes all of the capabilities of the free version and also offers:

  • Advanced GPS support allowing users to connect to external high-accuracy Bluetooth GPS devices from vendors such as Bad Elf and Juniper. Once an external device is connected, this functionality lets the user view detailed location information, check on the satellite connection, and even view and record the NMEA stream.
  • A new configuration option that allows Pro users to select and change the terrain shader directly within the application.
  • Water display support that renders a simulated water level over the loaded terrain data at a given elevation to visualize potential flooding.

If you’re already using Global Mapper Mobile, update to version 2.1 now!  If you haven’t tried it yet, download the app today and expand the reach of your GIS operations.

 

Try Global Mapper Mobile v.2.1 today!

[Download (iOS)]        [Download (Android)]

Elevation Grid Creation in Global Mapper: Creating a DTM

Written by: Mackenzie Mills,  Application Specialist

The Elevation Grid Creation tool in Global Mapper uses loaded 3D data (data with x, y, and z values) to create a raster gridded elevation layer. This layer can then be exported in one of the supported elevation formats, used for further analysis, or used to create a map.

A generated elevation grid layer displayed in the 3D viewer. 

The first method Global Mapper offers to generate elevation grid layers is the Triangulated Irregular Network, or TIN, method. This method connects 3D point features or the vertices of 3D line and area features into a network of triangles. From there, the program interpolates over the triangular faces, using the feature elevation and slope values, to generate an elevation grid layer.

Triangulation Method Process: Source Contour Line Data, Contour Lines with Vertices connected by the Triangulation Network, Triangulation Network with Interpolated Raster Grid, Output Gridded Elevation Layer.
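To make the triangulation idea concrete, here is a minimal sketch using SciPy. It illustrates the general TIN-and-interpolate technique, not Global Mapper’s exact implementation; the input file name and grid resolution are arbitrary assumptions.

    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    xyz = np.loadtxt("points.txt")  # columns: x, y, z
    # LinearNDInterpolator builds a Delaunay triangulation internally and
    # interpolates linearly across each triangular face.
    interp = LinearNDInterpolator(xyz[:, :2], xyz[:, 2])

    xi = np.linspace(xyz[:, 0].min(), xyz[:, 0].max(), 500)
    yi = np.linspace(xyz[:, 1].min(), xyz[:, 1].max(), 500)
    grid_x, grid_y = np.meshgrid(xi, yi)
    dtm = interp(grid_x, grid_y)  # NaN outside the triangulation hull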

With the Lidar Module, Global Mapper not only provides point cloud classification and processing tools, but the program also provides additional methods for generating an elevation grid. These additional options are all variations on the binning method. This method is better suited for point cloud processing because not every single point in the point cloud is used to generate the output grid.

Typically point clouds are quite dense, and you don’t need to use every single value to generate an accurate output. In fact, using every point often results in an elevation grid layer that contains lots of noise and appears rougher than the actual study area. The binning methods help to reduce this noise by spatially binning the data into areas corresponding to the size of the output grid cells. One value from each spatial bin is then used to generate the gridded layer, and which elevation value is taken from each bin is determined by the specific binning method selected. For example, the Binning Minimum Value method uses the minimum elevation value from each bin to generate the grid (a minimal sketch of this variant appears after the list below). The Lidar Module currently offers three variations on the binning method, with two additional variations coming soon.

  • Binning (Minimum Value – DTM)
  • Binning (Average Value)
  • Binning (Maximum Value – DSM)
  • Coming Soon – Binning (Median Value)
  • Coming Soon – Binning (Variance)
Elevation Grid Creation dialog from left to right: Using only 3D Line or Area Features, Triangulation Method Selected using a Point Cloud, A Binning Method Selected using a Point Cloud.
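Here is the minimum-value variant as a minimal NumPy sketch: each point is assigned to a grid cell, and only the lowest elevation per cell is kept. This is an illustration of the technique, not Global Mapper’s implementation, and the cell indexing convention is an assumption.

    import numpy as np

    def bin_min(x, y, z, cell):
        # Map each point to a cell index (row 0 is the southern edge here).
        col = ((x - x.min()) / cell).astype(int)
        row = ((y - y.min()) / cell).astype(int)
        grid = np.full((row.max() + 1, col.max() + 1), np.nan)
        for r, c, elev in zip(row, col, z):
            # Keep the minimum elevation seen in each bin (DTM-oriented).
            if np.isnan(grid[r, c]) or elev < grid[r, c]:
                grid[r, c] = elev
        return grid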

A digital terrain model, commonly referred to as a DTM, is an elevation model that describes the terrain or ground of an area as opposed to the structures and features on top of the ground, such as buildings and vegetation. Conversely, a digital surface model, or DSM, aims to show the structures and features on top of the ground.

When creating a DTM, you will likely want to use the binning minimum value method. Since lidar is not ground-penetrating, the minimum values detected in the point cloud are most likely to be true ground measurements.

Another option you have in your workflow is to further identify ground points by classifying your point cloud using the classification tools available in the Lidar Module. The automatic classification tools allow you to perform rough classifications that you can then clean up and fine-tune with manual classification.

When generating the elevation grid layer, there is the option to further filter the points of your point cloud to use only points within a specific class, with specific flags, or in a designated elevation range. This filtering will help to further narrow down the points available to consider when Global Mapper is building the elevation grid layer.

The Filter Lidar Points dialog accessed from the Elevation Grid Creation Options.
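As a rough illustration of what class-based filtering does, here is a minimal sketch using the third-party laspy package (pip install laspy) that keeps only ground-classified points (ASPRS class 2); the file name is arbitrary.

    import laspy

    las = laspy.read("survey.las")
    # Boolean mask of points whose ASPRS classification code is 2 (ground).
    ground = las.points[las.classification == 2]
    print(f"kept {len(ground)} of {len(las.points)} points")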

Water bodies such as ponds, lakes, and rivers may not provide consistent point cloud data. When generating an elevation grid that contains water-covered areas, you may want to flatten those areas to a specified elevation value. This can be done by including a 3D area feature in the data used to create the grid, and using the grid creation option to ‘Use 3D Area/Line Features as Breaklines’. This will burn the area feature into the output grid at the elevation designated by the area feature, thus flattening the noise within the area. This can be used for road features, building footprints, or any other area features as well.

A path profile showing the point cloud and generated terrain grid that used a breakline to flatten the water area, and the same grid in the 3D viewer showing the flattened water area and rockier shore. 
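Conceptually, the flattening step sets every grid cell whose center falls inside the area feature to that feature’s elevation. Here is a minimal sketch of that idea using Matplotlib’s point-in-polygon test; the function and argument names are illustrative, and this is not Global Mapper’s implementation.

    import numpy as np
    from matplotlib.path import Path

    def flatten_area(grid, cell_centers_xy, polygon_xy, elevation):
        # grid: 2D elevation array; cell_centers_xy: (N, 2) array of cell
        # center coordinates ordered to match grid.ravel().
        inside = Path(polygon_xy).contains_points(cell_centers_xy)
        flat = grid.ravel().copy()
        flat[inside] = elevation
        return flat.reshape(grid.shape)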

To compare a few different elevation grid creation methods, the Path Profile tool can be used. Below is a path profile over three elevation grids, each generated with a different method. You can see that the binning method grid appears smooth compared to the triangulation method grid.

A Path Profile Comparing Generated Elevation Grids

With an elevation grid layer created to show the elevation as a surface, you can continue your analysis in Global Mapper to generate contour lines, generate watershed areas, perform volume calculations, or any other analysis function. To see more of what Global Mapper can do for you, please visit the Tips & Tricks page or request a demo or a trial today.

GeoTalks Express – Session 7 Questions & Answers

The seventh of Blue Marble’s GeoTalks Express online webinar series, entitled Using Lidar for Archaeological Research, was conducted on June 10th, 2020. During the live session, numerous questions were submitted to the presenter, Forrest Briggs from LiDARUSA. The following is a list of these questions and their answers.

 

What kind of ASPRS points do you use to build DTM with buildings?

Ground-classified points for the terrain model (DTM), and first returns for the surface model (DSM) that includes the buildings.

 

For the aerial mission did you employ ground control points? If so, how many targets on a typical project?

Lidar is an active sensor, meaning each point already has an XYZ position; photogrammetry does not provide that natively, meaning control points are required to determine the position of the data. We always recommend using control points for survey-grade or construction-grade accuracy projects; for most archaeology projects, however, they are not necessary. We recommend control, but it’s not required. It depends on the project.

 

What about bidirectional nadir satellite imagery?

Satellite and UAV collection don’t go hand in hand, but geo-referenced data (lidar and/or imagery) can be used together regardless of the source.

 

How is the laser aligned to the photogrammetry? Do you combine the reconstruction data from the photogrammetry with the laser scan data?

All of the instruments are bore-sighted, calibrated, and then geo-referenced to a common coordinate system.

 

If you scan it for free who owns the data you collect?

We own the data, along with the sponsor of the project and normally the country or landowner.

 

Can this be used to get bathymetry in a shallow lake?

We don’t manufacture green-laser systems for bathymetric surveys; they are very expensive and have a very limited application. They work well only in clear water with near-zero turbidity and a good Secchi depth.

 

Are your drones equipped with PPK/RTK module or do you need a typical base/rover setup?

We utilize PPK for our lidar systems; RTK is far too limiting for lidar projects. You will either need a local base station near the project, take advantage of satellite or CORS-network base stations, or utilize the advanced PPP solutions computed during an extended static alignment.

 

For the post-processing of the data, do you exclusively use GM?

We utilize our proprietary software to fuse the lidar data with the IMU and GNSS data and create a LAS or LAZ file; that is what you import into Global Mapper for classification and feature extraction.

 

How do you separate the data from the different channels? Are there distinctions in the wavelength?

It varies with the scanner. The LAS format allows every point to be annotated with its channel, which we do.

 

What would you say are the main advantages of Lidar against photogrammetry and vice-versa?

With lidar, you can get points on the ground in most cases, whereas photogrammetry is a “first return” system (it sees the tops of grass and trees). Lidar is also better at sharp linear features such as transmission lines and railroads, and water boundaries are much easier to capture with lidar as well. On a hard surface it is not a matter of accuracy, since you can always get a better camera. Images, on the other hand, are easier for a normal person to interpret.

 

How long did it take to fly the Uxmal area and what was the point density?

The 25 acres was collected in a 12-minute flight with 20% battery left. We were using our Revolution 120 lidar system mounted to the V1 DJI M200.

 

Do channels equate to returns?

A channel typically refers to the laser number and orientation. Each laser, or channel, has either 2 or 3 returns depending on the system. For instance, a VLP-16 is a 16-channel system capable of 2 returns per channel.

 

Are the flights automated or are you manually flying the drone?

Generally, flights are automated in fixed-wing mode, though you can fly manually in a pinch.

 

Is the output proprietary or .las/.laz format?

We export to LAS, LAZ, E57, TXT, XYZ, and several other formats. The most common are LAS and LAZ.

 

Did you have to file a flight plan with the FAA to be able to fly this mission?

Generally speaking, no flight plan is required to fly a UAV in the USA.

 

How much detail can you get on vegetation? Can you identify individual species?

Generally, identifying individual species requires combining the lidar detail with imagery. One of our lidar systems that integrates a lidar scanner and a camera would let you evaluate leaf shape, color, and structure.

 

Do you encounter a lot of high/low noise with your systems and do you find the Global Mapper auto noise classifier is sufficient in cleaning the data or do you need to couple it with manual noise cleaning?

Yes, all lidar systems create noise, regardless of how clean the data shown in marketing material appears. We use the Global Mapper automatic noise classification for the majority of the cleanup, but you will almost always need to clean the data using manual methods as well.

 

What kind of drone did you use for the lidar survey?

We use lots of different drones: multirotor, RC helicopter, gasoline-powered, and battery-powered. Your application and budget will determine what type of drone to use.

 

For water applications, how deep does lidar penetrate below the water surface?

We do not offer bathymetric lidar.

 

Do you use full waveform?

All lidar is waveform-based, but only a few companies actually store the raw waveform (it is expensive to store and process). Most systems store discrete returns, which has proven sufficient for 20 years. We offer both for some systems.

 

Can you tilt the equipment when flying it on a helicopter, for looking sideward?

We offer 360-degree lidar systems as well as nadir systems. Yes, you can mount the lidar whichever way you want, and we will of course provide our expertise to ensure you are satisfied with the end result.

 

Flying time is the one limitation, but the other one surely is the data volume. So what is the max storage capacity onboard?

Flight time is a function of the drone; we offer a long-range drone with a 2-hour flight time. Data volume is also not an issue: we provide an external drive with the system that can store 14 hours of scan data. We started building lidar systems for mobile ground-based scanning, where normal scanning days can easily reach 8 to 10 hours, and we have carried that technology forward to our UAV lidar systems to ensure you can scan all day without stopping.

 

Has LiDAR been used to find things that are underground?

Lidar does not penetrate the ground; you will need ground-penetrating radar (GPR) for that type of work.