GeoTalks Express – Session 2 Questions & Answers

The second of Blue Marble’s GeoTalks Express online webinar series, entitled Why do you need the Lidar Module?, was conducted on April 15th, 2020. During the live session, numerous questions were submitted to the presenters. The following is a list of these questions and the answers provided by Blue Marble’s technical support team.

 

Would it be possible to get an xyz file into the Lidar Module?

Yes. When you load your XYZ file, the Text File Loader opens automatically, and it includes an option to load the data as a lidar point cloud.
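For readers who want to see what that conversion amounts to outside of Global Mapper, here is a minimal sketch using the open-source laspy library; the file name points.xyz and its whitespace-delimited X, Y, Z column layout are assumptions for illustration, not anything produced by the Text File Loader.

```python
# Minimal sketch (not Global Mapper's loader): turn an ASCII XYZ file into a LAS point cloud.
import numpy as np
import laspy

xyz = np.loadtxt("points.xyz")                    # assumed: whitespace-delimited X Y Z columns

header = laspy.LasHeader(point_format=3, version="1.2")
header.offsets = xyz.min(axis=0)                  # keep scaled integer coordinates in range
header.scales = np.array([0.001, 0.001, 0.001])   # millimeter precision

las = laspy.LasData(header)
las.x, las.y, las.z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
las.write("points.las")
```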

 

Will the Pixels to Points tool work with standard aerial photography from fixed-wing aircraft?

The Pixels to Points tool in Global Mapper uses a structure-from-motion process to construct the point cloud, orthoimage, and mesh outputs. This process relies on overlap between adjacent images showing identifiable features from slightly different angles as the camera moves over the area. We recommend at least 60% overlap between images, but it is always best to aim for more.

The issue with most standard aerial photography is that there is likely little to no overlap between the resulting image frames. Without this overlap, features cannot be identified in multiple images and triangulated to generate the output layers. That being said, if your images are geotagged and have the required overlap, you may be able to use them in the Pixels to Points tool.
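To make the overlap requirement concrete, the back-of-envelope calculation below estimates forward overlap from flying height, camera geometry, and photo spacing; the function and the sample numbers are illustrative assumptions, not Blue Marble recommendations.

```python
# Rough forward-overlap estimate for nadir imagery (illustrative numbers only).
def forward_overlap(height_m, focal_mm, sensor_along_track_mm, spacing_m):
    footprint_m = height_m * sensor_along_track_mm / focal_mm   # ground footprint along track
    return max(0.0, 1.0 - spacing_m / footprint_m)              # fraction of frame-to-frame overlap

# Example: 120 m above ground, 8.8 mm focal length, 8.8 mm sensor dimension, one photo every 45 m
print(f"{forward_overlap(120, 8.8, 8.8, 45):.0%}")              # about 62%, just above the 60% minimum
```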

 

How long does it roughly take to run a pixels to points extraction for a couple of hundred images? Thanks 

How long does it take to generate a project of this size for the ortho, point cloud and mosaic using the Pixels to Points tool?

It would be hard to give an exact time estimate for Pixels to Points processing. There are quite a few variables involved, such as your settings within the tool, the characteristics of the imagery you are using, your hardware specifications, and so on. Generally, this process is more computationally intensive than most other basic processing in Global Mapper, so it could range from as little as a few minutes to considerably longer, depending on the criteria above.

 

So, you don’t need Pix4D or Agisoft PhotoScan (Metashape)?

That’s correct. The Pixels to Points tool can construct point cloud, orthoimage, and mesh outputs much like Pix4D and Agisoft do. After generating these outputs with the Pixels to Points tool in the Lidar Module, you can continue working right in Global Mapper to classify, grid, and further analyze your point cloud.

 

What are the recommended PC specifications for using the LiDAR module?

Minimum and recommended system requirements for Global Mapper and the Pixels to Points tool can be found here in the Global Mapper knowledge base.

 

Can you speak *conceptually* to the difference between LiDAR data and point cloud data collected via an automated drone flight?

It sounds like you have some experience working with both types of data. The main differences stem from how the data is collected and how the point cloud is generated. As you noted, lidar data collection is active and records multiple returns. This allows data to be collected from below the tree canopy, as some of the returns will likely pass through gaps in the canopy and reflect off the ground. True lidar data also records true intensity values and other attributes that cannot be generated from drone-collected images.

A point cloud generated from drone-collected images can only reconstruct what the images show. For example, in areas with dense vegetation or no clear view of the ground, the program cannot accurately identify the ground surface because the images do not show it. This in turn can affect your classification of the point cloud with the automated classification tools.

With drone-collected images generating your point cloud, RGB values are assigned to the points, and with the Pixels to Points tool you can easily construct an orthoimage along with the point cloud. In Global Mapper, if you have a point cloud without RGB values and an image of the area, you can apply color from the image to the point cloud.
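Conceptually, applying color from an image to a point cloud is a per-point lookup of each X/Y position in a georeferenced raster. The sketch below illustrates that idea with the open-source laspy and rasterio libraries; it is not Global Mapper's implementation, and the file names, the matching coordinate systems, and an RGB-capable LAS point format are all assumptions.

```python
# Illustrative only: sample RGB values from a georeferenced orthoimage at each point's
# X/Y location and store them in the LAS color fields.
import numpy as np
import laspy
import rasterio

las = laspy.read("cloud.las")                 # assumed: point format with RGB fields (e.g. 2 or 3)
with rasterio.open("ortho.tif") as ortho:     # assumed: 8-bit RGB image in the same CRS
    rgb = np.array(list(ortho.sample(zip(las.x, las.y))))

# LAS stores color as 16-bit values, so scale the 8-bit samples up.
las.red, las.green, las.blue = (rgb[:, i].astype(np.uint16) * 257 for i in range(3))
las.write("cloud_rgb.las")
```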

 

Do you have automatic powerline extraction tools, or is it done manually? Are breaklines automatically generated or are they manually collected?

Global Mapper does have automated Powerline Classification and Extraction tools.

When it comes to breaklines, it would depend on how you want to work with them. You can load breaklines from an existing file if you already have them, and you can incorporate them when creating a terrain layer. Generating contours may also help you identify them.

 

Can you get measurements in feet and inches?

In Configuration > General > Measure Units you can change the distance units displayed when using the Measure tool in Global Mapper.
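For reference, converting a metric measurement to feet and inches outside the program is simple arithmetic; a small sketch (the example value is arbitrary):

```python
# Convert a length in meters to feet and inches (1 ft = 0.3048 m exactly).
def to_feet_inches(meters):
    total_inches = meters / 0.3048 * 12
    feet, inches = divmod(total_inches, 12)
    return int(feet), round(inches, 1)

print(to_feet_inches(1.83))   # (6, 0.0), i.e. roughly 6 ft 0 in
```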

 

Is there a limit on the size or number of pictures to process in Pixels to Points from pictures taken using drones?

We do not impose a limit on the number of pictures you can use in the Pixels to Points tool, but the amount of data you can process is limited by your machine’s memory. This is also affected by the quality setting in the Pixels to Points tool and by whether or not you reduce your images.

When you go to run the Pixels to Points process, Global Mapper performs a memory estimate; if it predicts that you will run out of memory, the program will suggest reducing your image sizes by a certain factor.
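Global Mapper's own estimate is internal to the program, but the trade-off it reports can be sketched with simple arithmetic. Everything in the snippet below (bytes per pixel, the working-copy multiplier, the image counts) is an assumption made purely to illustrate why reducing image size helps so much.

```python
# Back-of-envelope memory arithmetic -- not Global Mapper's actual estimate.
def working_set_gb(num_images, megapixels, bytes_per_pixel=3, copies=4):
    # copies: assumed multiplier for intermediate products held during processing
    return num_images * megapixels * 1e6 * bytes_per_pixel * copies / 1e9

full = working_set_gb(200, 20)          # 200 images at 20 MP
reduced = working_set_gb(200, 20 / 4)   # halving each image dimension quarters the pixel count
print(f"{full:.0f} GB vs {reduced:.0f} GB")   # 48 GB vs 12 GB under these assumptions
```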

 

Do you have a document describing best practices for collecting drone footage to generate a point cloud? Is there a link to tutorials for drone footage capture?

We don’t have any video tutorials on collecting drone images for processing with the Pixels to Points tool, but you can find our data collection recommendations here in the Global Mapper knowledge base.

 

Will there be a technical webinar for the Lidar Module too?

More information on upcoming GeoTalks Express webinars can be found here. We also have many past webinars posted on our YouTube channel, including a series on lidar processing in Global Mapper.

 

Can you talk more about free lidar sources please?

There is quite a bit of freely available lidar data out there. In Global Mapper, we include some of these sources in the Online Sources dialog. There is a LIDAR folder you can expand, and the sources listed there link to each organization’s download page, where you can then download data for your area of interest.

 

Is this software actually processing the raw images and x,y,z coordinates from the UAS flight? Or are you processing the point cloud in other software and transferring it into your software for combination with other file extensions?

The Pixels to Points tool in Global Mapper takes the collected drone images and runs them through a process in the program to construct a 3D point cloud, orthoimage, and 3D mesh. These layers are created in Global Mapper with the use of the Lidar Module. In the demonstration shown in this GeoTalks Express session, no other software was used to process the drone images into the three output layers.

 

Can we use a GPS-enabled DSLR handheld camera for image collection vs. a drone?

You can use a GPS-enabled DSLR camera to collect images to then process in the Pixels to Points tool. Ideally, you will want to keep the data collection as uniform as possible, using a fixed focus and no zoom. You can mount a DSLR camera on a drone, or collect images without a drone, as long as the images meet the data recommendations: clear images that have the required overlap and are geotagged.
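If you want to confirm ahead of processing that the DSLR images really are geotagged, checking for a GPS block in the EXIF metadata is enough; a minimal sketch using the Pillow library (the file name is a placeholder, and the tag handling reflects typical JPEG EXIF layout):

```python
# Check whether an image carries GPS EXIF metadata (Pillow-based sketch).
from PIL import Image

GPS_IFD = 0x8825  # standard EXIF tag id for the GPSInfo block

def is_geotagged(path):
    exif = Image.open(path).getexif()
    return bool(exif.get_ifd(GPS_IFD))   # empty when no GPS metadata is present

print(is_geotagged("DSC_0001.JPG"))      # placeholder file name
```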

 

Does the LIDAR module support SLG and SL2 files such as are produced by Lowrance side scan sonar units on boats?  

Global Mapper supports the SLG and SL2 formats for import and export, and you do not actually need the Lidar Module to load these file types. A full list of file formats supported in Global Mapper can be found here.

 

Is it possible to detect the position of overhead line conductors with the Pixels to Points process?

Global Mapper with the Lidar Module contains several automatic classification tools, including a powerline classification tool. This tool allows you to take a point cloud, such as one generated by the Pixels to Points tool, and detect and classify the points that represent powerlines. Once these points have been classified, you can use the Feature Extraction tool to extract the powerlines as vector features.
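Once the powerline points carry a classification code, they can also be pulled out of the cloud for inspection elsewhere. The sketch below uses the open-source laspy library and assumes the wires were written with the ASPRS class code 14 (wire conductor); it is an illustration, not part of the Global Mapper workflow.

```python
# Illustrative only: extract points classified as wire conductors (ASPRS class 14) with laspy.
import laspy

las = laspy.read("classified.las")        # assumed: a cloud that has already been classified
mask = las.classification == 14           # ASPRS class 14: wire - conductor
print(f"{int(mask.sum())} of {len(las.points)} points are classified as wire conductors")

las.points = las.points[mask]             # keep only the wire points
las.write("wires_only.las")
```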

 

Does the LiDAR module have the ability to tie in ground control points for aerial imagery?

In Global Mapper you can rectify images using control points with the Image Rectifier tool. This tool can be used when loading images that contain no georeference information, or you can adjust a loaded image by right-clicking on the layer in the Control Center and selecting Rectify.

When generating outputs from the Pixels to Points tool, the ground control points you place in the Pixels to Points dialog will be used in generating the output layers.
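Conceptually, tying imagery to ground control means solving for a transform that maps image positions onto surveyed ground coordinates. The sketch below fits a plain affine model with least squares as an illustration of that idea; the coordinates are made up, and the actual adjustment performed by the Image Rectifier and the Pixels to Points tool is more sophisticated than this.

```python
# Conceptual sketch: fit an affine pixel-to-ground transform from control-point pairs.
import numpy as np

# (column, row) pixel positions and their surveyed ground (x, y) coordinates -- made-up values
pixels = np.array([[100, 200], [1800, 240], [950, 1500], [1700, 1400]], dtype=float)
ground = np.array([[355010.2, 4501120.5], [355520.8, 4501110.0],
                   [355265.4, 4500730.1], [355490.9, 4500760.7]])

A = np.hstack([pixels, np.ones((len(pixels), 1))])    # rows of [col, row, 1]
coeffs, *_ = np.linalg.lstsq(A, ground, rcond=None)   # 3x2 matrix of affine coefficients

residuals = ground - A @ coeffs
print("RMS control-point error:", np.sqrt((residuals ** 2).sum(axis=1).mean()))
```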

 

What point cloud file formats does the Lidar Module support? Can I use an export from AutoDesk RECAP and import it into the Lidar Module?

AutoDesk RECAP files are supported in Global Mapper and the Lidar Module. A full list of file formats supported by Global Mapper can be found here.

 

I have a network license, but would like to use the software in the field while flying in order to check that I have good data before leaving the site. Is there a way to use my network license as a single floater license?  

You can borrow a network license seat from the server for use offline. This will allow you to use the Global Mapper and Lidar Module licenses off-network. 

 

Do you have to have the lidar point cloud loaded with the images to create a 3D mesh in the Pixels to Points tool?  

The mesh created by the Pixels to Points tool is generated from the point cloud that the tool itself produces. You do not need any previously created point cloud layers loaded to generate a mesh through the Pixels to Points tool.

 

Does the 3D mesh give you a similar file to colorizing the point cloud?

The mesh generated by the Pixels to Points tool is textured with the drone-collected images. You can also create a mesh from a selected section of a point cloud with the Lidar Module, but it will be textured with the RGB or elevation shader colors of the points rather than with the drone-collected images.

 

How could the Lidar Module be used for mineral exploration?  

The Lidar Module allows you to process and work with point clouds. What specific kind of analysis are you looking to do?

You can classify point clouds and generate elevation grids to model your study area. If you are able to collect your own data, either with a lidar scanner or by processing drone-collected images, you can generate models of the area for specific dates and compare how they change over time using the Compare Point Clouds tool or the Combine/Compare Terrain Layers option.
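The heart of a change comparison between two exported terrain layers is a cell-by-cell difference. As an illustration, the sketch below differences two DEMs with the open-source rasterio library; it assumes both rasters were exported on the same grid and coordinate system, and the file names are placeholders.

```python
# Illustrative elevation-change map: difference two co-registered DEM exports.
import rasterio

with rasterio.open("dem_2019.tif") as a, rasterio.open("dem_2020.tif") as b:
    change = b.read(1).astype("float32") - a.read(1).astype("float32")
    profile = a.profile
    profile.update(dtype="float32", count=1)

with rasterio.open("dem_change.tif", "w", **profile) as out:
    out.write(change, 1)   # positive cells gained elevation between the two dates
```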

 

Is it possible to do automatic line extraction, i.e. the kerb of a road or the boundary of a shape (home, etc.)?

In Global Mapper with the Lidar Module, you can do custom feature extraction by placing control points along a feature in a point cloud and extracting a vector line. This is not automated extraction of all like features in a point cloud; you would need to extract each feature individually.

 

Is it possible to search for the best-fit line from all points (wire line), for example a powerline wire?

You can extract vector features for powerlines from a point cloud using the Feature Extraction tool. Before using the tool, you must classify the point cloud to identify the points representing the powerline features. Global Mapper includes a few automatic classification tools, including one to classify powerlines.
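For the "best-fit line from all points" part of the question, the standard geometric approach is a principal-axis fit: center the points and take the dominant direction from a singular value decomposition. A minimal numpy sketch follows, assuming the input array holds only the points already classified as a single wire; note that a real conductor sags, so a catenary or parabola fit along this axis gives a closer match in practice.

```python
# Fit a straight 3D line (point + unit direction) to a set of wire points via SVD.
import numpy as np

def fit_line_3d(points):
    """points: (N, 3) array of x, y, z values for a single classified wire."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    direction = vt[0]                              # principal axis = best-fit direction
    return centroid, direction / np.linalg.norm(direction)

# Hypothetical example: a nearly straight wire with a little noise
pts = np.array([[0, 0, 10.0], [5, 0.1, 10.2], [10, -0.1, 10.4], [15, 0.05, 10.6]])
print(fit_line_3d(pts))
```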

 

Can your Pixels to Points program utilize precise camera coordinates instead of ground control points?  

If you do not have accurate ground control points, or do not wish to use any, you do not need to enter them in the Pixels to Points tool. Without ground control points, the camera coordinates and other EXIF information will be used alone to position the images for processing.

 

Can your Pixels to Points program accommodate both precise camera coords as well as gcp coords in an integrated optimization (adjustment) of the camera positions and tie points? 

Currently in Global Mapper, the camera coordinates are adjusted during the process and the entered ground control points are used for this adjustment.

 

Do you provide for inputs to control the weighting of coordinates – whether of ground control points or of camera exposure positions?

The Pixels to Points tool does not allow for the weighting of control points or weighting between the set of camera coordinates and ground control points. We do have an open development ticket (#GM-9093) on adding the ability to weight control points. 

 

The mesh file – is it a triangulation model or a square GIS mesh?

The mesh file generated from the Pixels to Points tool is a triangulation mesh. The mesh is a vector feature made up of triangular faces and textured from the drone collected images. 


Is it possible to import commonly used camera parameters (i.e. Phantom 4 V2) to help with processing? If not, are there plans to add this capability, or can you discuss how to set up with the currently available options for best results?

Global Mapper keeps a database of camera models and parameters. Common camera models are recognized from the input image metadata, but if the camera is not part of this built-in database, you will be prompted to select the model and enter the sensor width. The entered information is then stored by Global Mapper in a sensor_width_camera_database.txt file in the user data folder.

The Phantom 4 V2 is in the built-in camera database and should be recognized from the input image metadata. 
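The sensor width stored in that database matters because, together with the focal length, image width, and flying height, it fixes the ground footprint of each pixel. A rough illustration of that relationship (the one-inch-sensor numbers below are approximate and used only as an example):

```python
# Approximate ground sample distance (GSD) from camera geometry and flying height.
def gsd_cm(sensor_width_mm, focal_mm, image_width_px, height_m):
    return (sensor_width_mm * height_m * 100) / (focal_mm * image_width_px)

# Example one-inch-sensor values: 13.2 mm sensor width, 8.8 mm focal length, 5472 px wide, 100 m AGL
print(f"{gsd_cm(13.2, 8.8, 5472, 100):.1f} cm/px")   # roughly 2.7 cm per pixel
```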

 

How does this compare with ESRI ArcMap and ArcGIS Pro with the LP360 extensions?

With the addition of the Lidar Module in Global Mapper, you can classify point clouds with automatic classification tools or by manual classification, extract features, and compare and analyze point clouds in many ways. Additionally, in the Lidar Module you can use the Pixels to Points tool to construct a point cloud from drone-collected images. After using the Lidar Module tools, you can easily continue your analysis with any of the Global Mapper tools.

The Lidar Module is built into the Global Mapper general GIS solution. It is not a standalone application that would require you to use limited extension tools in another application or transfer files between programs during your workflow. This allows for a more seamless workflow since all of the needed tools are in one program. 

What processes do you most often perform in your point cloud analysis? Are there any processes you are specifically interested in?

 

Is there a way to adjust roll, pitch and yaw or is the point cloud matching only based on xyz?

The camera position and orientation parameters such as roll, pitch, and yaw are read from the metadata of each image used in the Pixels to Points tool. This information is used to position the images when processing them. You can also load metadata for the images from a text file in the Pixels to Points dialog; this method allows you to alter the information as needed before applying it to the images.
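To see why the orientation angles matter in addition to x, y, z, note that roll, pitch, and yaw together define a rotation of the camera frame. A minimal sketch of composing that rotation is below; axis conventions and rotation order differ between platforms, so the Z-Y-X order used here is an assumption.

```python
# Build a rotation matrix from roll, pitch, and yaw angles given in degrees.
# Rotation order and axis conventions vary by platform; Z (yaw) * Y (pitch) * X (roll) is assumed.
import numpy as np

def rotation_matrix(roll_deg, pitch_deg, yaw_deg):
    r, p, y = np.radians([roll_deg, pitch_deg, yaw_deg])
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

print(rotation_matrix(0, 0, 90) @ np.array([1.0, 0.0, 0.0]))   # 90 deg yaw maps the x axis onto the y axis
```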

 

What are the benefits of using the Lidar Module as opposed to using Terrasolid?

Global Mapper with the Lidar Module offers classification, feature extraction, filtering, 3D viewing, and point cloud construction from collected images, along with many other tools. Some of these functions are similar to those in Terrasolid, but Global Mapper supports a wide variety of file formats, and our development team is always pushing to improve the existing tools and create new ones for point cloud processing.

Since the Lidar Module is part of the Global Mapper program you can seamlessly go from working with the Lidar Module tools to using any of the Global Mapper tools in your analysis.

What processes do you most often perform in your point cloud analysis? Are there any processes you are specifically interested in?

 
