GeoTalks Express – Session 8 Questions & Answers

The eighth session of Blue Marble’s GeoTalks Express online webinar series, entitled Got a drone, now what? An Introduction to Pixels to Points, was held on June 24th, 2020. During the live session, numerous questions were submitted to the presenters. The following is a list of those questions with the answers provided by Blue Marble’s technical support team.

 

Can you see the orthoimage in 3D?

The orthoimage generated by the Pixels to Points tool is a 2D image. The image will appear in the 3D view, but it will be flat and appear below loaded 3D data layers. 

 

Do you have to purchase the Global Mapper Lidar Module to work with the drone data?

Yes, the Pixels to Points tool shown in this webinar is part of the Lidar Module, an add-on to the Global Mapper program that also includes point cloud editing, viewing, and processing tools. 

If you are interested in testing out the Global Mapper program and the Lidar Module, I encourage you to download Global Mapper from our website and activate a trial license.

 

Can you perform tree height analysis (DSM - DTM)?

In Global Mapper you can use the Combine/Compare Terrain Layers tool to subtract one layer from another, like DSM – DTM as you have noted, to find tree heights. This does require having both a DSM (Digital Surface Model) and DTM (Digital Terrain Model) created for an area. 

Going back a step, you can generate elevation grid layers (DSM and DTM) from point cloud data using the Elevation Grid Creation tool in Global Mapper. The Elevation Grid Creation tool supports multiple binning methods: one uses maximum values to generate a DSM, and another uses minimum values to generate a DTM. 
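Conceptually, the binning and the layer subtraction can be sketched as follows. This is an illustrative sketch in plain Python, not Global Mapper's internal implementation; the function names and grid representation are assumptions made for the example.

```python
from collections import defaultdict

def bin_points(points, cell_size, reducer):
    """Grid a point cloud by binning (x, y, z) points into square cells
    and reducing each cell's z values (max -> DSM, min -> DTM)."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell_size), int(y // cell_size))].append(z)
    return {cell: reducer(zs) for cell, zs in cells.items()}

def canopy_height(dsm, dtm):
    """Subtract the DTM from the DSM cell by cell, as the
    Combine/Compare Terrain Layers tool does for whole layers."""
    return {cell: dsm[cell] - dtm[cell] for cell in dsm if cell in dtm}

# Two points over a tree in one 1 m cell: ground return at z = 2,
# treetop return at z = 12, giving a 10 m canopy height for that cell.
pts = [(0.2, 0.3, 2.0), (0.6, 0.7, 12.0)]
dsm = bin_points(pts, 1.0, max)
dtm = bin_points(pts, 1.0, min)
print(canopy_height(dsm, dtm))  # {(0, 0): 10.0}
```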

One of the drawbacks to photogrammetrically derived point cloud layers, those created from 2D drone images, is that the ground underneath tree cover is not usually identified accurately. This is because the images only show the treetops, not the ground, so the process cannot identify features and create points representing the ground. If you are looking to generate your own DTM layer modeling the ground, I would recommend true lidar data. That being said, if your collected images allow for a good reconstruction of the tree canopy, you can create a DSM from the generated point cloud and compare that to an existing DTM layer. 

 

What are the recommended camera orientation and drone height above the surface?

The height and angle at which you fly your drone when collecting images depend on the goals for your data collection. For instance, if you are planning to map the ground in a wide-open area, you can fly back and forth over the area collecting images looking straight down at the area of interest. 

If you are looking for more detail on terrain features or features on the surface, you may want to fly at a lower height, still taking nadir or maybe slightly oblique images, flying back and forth over the area. Then, to increase the views you have on the features in the area, continue flying over the area in straight lines crossing your original flight lines perpendicularly. This will provide views of terrain features from additional angles. 

To model a specific feature or pile, capture oblique images of the feature as the drone flies in a circle around it. This will capture many angles of the same feature, but with more oblique images you may need to use the masking tool in Pixels to Points to crop out areas along the sky and horizon. 
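The trade-off between altitude and detail can be quantified with the standard pinhole-camera ground sample distance (GSD) relationship. The sketch below uses illustrative camera numbers, not the specs of any particular drone model:

```python
def ground_sample_distance(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Approximate ground footprint of one pixel (m/px) for a nadir image,
    from the pinhole-camera similar-triangles relationship."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

# Illustrative values only: 100 m altitude, 13.2 mm sensor width,
# 8.8 mm focal length, 5472 px image width.
gsd = ground_sample_distance(100, 13.2, 8.8, 5472)
print(round(gsd * 100, 2), "cm/px")  # halving the altitude halves the GSD
```

Flying lower (or with a longer focal length) shrinks the GSD, which is why low passes capture more terrain detail.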

 

Each image file has coordinates of presumably the center point of the image. Where do those coordinates come from? Is it a drone camera or control function?

The drone-collected images load into Global Mapper as Picture Points. These are point features, represented with a camera icon, that appear at the location of the camera when the image was captured. These coordinates are recorded and attached to the image by the GPS-enabled camera that captured it. The coordinates, along with other information about the camera and image, are stored in the EXIF data for each image. 
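EXIF stores GPS coordinates as degrees/minutes/seconds plus a hemisphere reference (the GPSLatitude/GPSLatitudeRef and GPSLongitude/GPSLongitudeRef tags), which tools read and convert to signed decimal degrees. A minimal sketch of that conversion:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds GPS values plus a
    hemisphere reference ('N'/'S'/'E'/'W') to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# e.g. GPSLatitude = (44, 30, 36), GPSLatitudeRef = "N" -> approximately 44.51
lat = dms_to_decimal(44, 30, 36, "N")
lon = dms_to_decimal(70, 15, 0, "W")
print(lon)  # -70.25
```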

 

Do you have to use control points or is this an optional step to improve accuracy?

You do not need to use control points when generating your outputs with the Pixels to Points tool. Including ground control points will help to improve the accuracy of the outputs as they are placed in 3D space. Without ground control points, the outputs generated from the Pixels to Points tool will still be accurate relative to themselves. 

You can also choose to incorporate some control points after generating your outputs. You can rectify layers in x, y, z, or use the Lidar QC tool to vertically adjust your generated point cloud layer. 

 

Is there anything in GM documentation regarding camera types? i.e., Pinhole 1, 2, or 3, etc.?

Yes, additional information on the Camera Type options can be found here in the Global Mapper knowledge base. The information on the Pinhole camera types is as follows: 

  • Pinhole – A classic Pinhole camera
  • Pinhole Radial 1 – A classic pinhole camera with a best-fit for radial distortion defined by 1 factor to remove distortion.
  • Pinhole Radial 3 – A classic pinhole camera with a best-fit for radial distortion by 3 factors to remove distortion.
  • Pinhole Brown 2 – A classic pinhole camera with a best-fit for radial distortion by 3 factors and tangential distortion by 2 factors.

 

Can low-res lidar be used to calibrate Pixels to Points projects?

While you cannot use an existing point cloud to calibrate or help generate a new point cloud in the Pixels to Points process, you may be able to derive some control points from your existing point cloud that could be used in Pixels to Points. 

In the Lidar Module of Global Mapper, there is a Fit Point Clouds tool that can be used to adjust one point cloud to better fit another. After generating your new point cloud with Pixels to Points, you may be able to use the Fit Point Clouds tool to adjust the new point cloud based on your existing lidar point cloud layer. 

 

Do you have to identify the control point in each of the green-colored images?

Yes, you should identify each ground control point in every image in which it appears. With a ground control point selected in the Pixels to Points dialog, the images listed in green are suggestions of where the ground control point is likely to appear, based on the coordinates of the point and the calculated image coverage for each input image. 

 

Is there some automatic tool in order to make placing the control points easier?

Placing the ground control points in images through the Pixels to Points tool is a manual process. With a control point selected, the Pixels to Points tool will list some of the input images in green to suggest where the selected control point may appear. This suggestion, based on the coordinates of the control point and the image coverages, helps to narrow down the images to look through when placing the control point. 

 

Do you have a documented workflow for the drone data processing?

There is a workflow outline as well as details on the various steps, settings, and options for the Pixels to Points tool here in the Global Mapper Knowledge Base.

If you would like more guidance on using the Lidar Module, we do offer Lidar Module training, as well as Global Mapper training. These training sessions have been moved to an online format, and more information can be found here on our website.

 

After tagging a GCP in one image, can Global Mapper estimate the location in the overlapping images so that the user only needs to refine the location and not place it in every single image?

Tagging a ground control point in an image in the Pixels to Points tool only places it in that one image. Placing the ground control points is a manual process that must be done for each image; no ground control points are placed automatically.

 

I have seen in some cases where users put white paint markings on the ground or use trig beacons as control points. Is this a normal standard? 

Yes, you can absolutely paint a control point on the ground in your study area or use a trig beacon or other feature. A ground control point should be a point on the ground that you can survey the location of and that can be easily and accurately identified in your drone-collected images. 

 

How critical are the ground controls points when you do a drone survey? In other words, can you trust the data collected without any control points?   

You do not need to use control points when generating your outputs with the Pixels to Points tool. Including ground control points will help to improve the accuracy of the outputs as they are placed in 3D space. Without ground control points, the outputs generated from the Pixels to Points tool will still be accurate relative to themselves.

You can also choose to incorporate some control points after generating your outputs. You can rectify layers in x, y, z, or use the Lidar QC tool to vertically adjust your generated point cloud layer. 

 

Are hills and mountains identified as ground?

Hills and mountains should be identified as ground, as they are part of the ground surface. The automatic ground classification tool in the Lidar Module requires some user-entered parameters to help guide the tool to more accurately classify ground. A couple of these parameters are Maximum Height Delta, the approximate range in elevation for ground in the area, and Expected Terrain Slope, the maximum expected terrain slope in the area. Adjusting these parameters appropriately will help to better classify ground in areas of steeper slope and higher elevation, such as hills and mountains. 
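To see how those two parameters interact, here is an illustrative check in plain Python. This is not Global Mapper's actual classification algorithm, only a sketch of the idea that a candidate point's rise from a known ground point must stay within both the expected slope and the overall height range:

```python
import math

def could_be_ground(candidate_z, seed_z, horizontal_dist,
                    max_height_delta, expected_slope_deg):
    """Illustrative test: a candidate point can join the ground surface if
    its rise from a nearby ground point is consistent with the expected
    terrain slope and stays within the area's overall height range."""
    allowed_rise = horizontal_dist * math.tan(math.radians(expected_slope_deg))
    rise = abs(candidate_z - seed_z)
    return rise <= allowed_rise and rise <= max_height_delta

# On a 30-degree hillside, a point 10 m away may legitimately sit ~5.8 m higher...
print(could_be_ground(105.0, 100.0, 10.0, 50.0, 30.0))  # True
# ...but a 12 m jump over the same distance looks like vegetation or a structure.
print(could_be_ground(112.0, 100.0, 10.0, 50.0, 30.0))  # False
```

With too small an Expected Terrain Slope, legitimate hillside points fail this kind of test, which is why steep terrain needs the parameter raised.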

 

Is it possible to load the GPS data in a separate text file that is not in the EXIF? We currently use full-size airplanes with medium format cameras that record the EO data separately from the EXIF. 

Yes, you can load the image position information from an external text or CSV format file. With images loaded into the Pixels to Points dialog, select images and right-click in the Input Images box. Select to Load Image Positions from External File and point to the file containing the image positions. More information on this right-click option can be found here with information on the Input Images section of the Pixels to Points tool. 
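A typical external position file pairs each image filename with its camera coordinates. The column names and layout below are assumptions for illustration; the actual columns depend on what your post-processing workflow exports, and you map them during import:

```python
import csv
import io

# Hypothetical image-position file: filename plus camera coordinates.
sample = """filename,latitude,longitude,altitude_m
DJI_0001.JPG,44.5100,-70.2500,152.3
DJI_0002.JPG,44.5102,-70.2498,152.1
"""

# Parse into a filename -> (lat, lon, alt) lookup, as an importer would.
positions = {
    row["filename"]: (float(row["latitude"]),
                      float(row["longitude"]),
                      float(row["altitude_m"]))
    for row in csv.DictReader(io.StringIO(sample))
}
print(positions["DJI_0001.JPG"])  # (44.51, -70.25, 152.3)
```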

As long as your captured images meet the data recommendations and have sufficient overlap and clear features, you should be able to use them with the Pixels to Points tool. 

 

Is it possible to create a point cloud with ground control only and no camera GPS positions?

Yes, the Pixels to Points tool can create outputs using images that do not have camera coordinate information. These images cannot be loaded into the main view of Global Mapper and would need to be loaded directly into the Pixels to Points tool.

 

By downsampling the quality of the images, will the resolution of the exported data also change?

Reducing the image sizes will reduce their resolution, resulting in a lower level of detail in the input images. This may cause the program to find fewer like features based on the recurring pixel patterns, ultimately slightly reducing the density and resolution of the outputs. 

For the best possible outputs, it is recommended to use the full image resolution, but depending on the image sizes, settings, and machine specifications, that is not always possible. If you do end up needing to reduce the image sizes, try to reduce them by the smallest factor possible with your data and your machine. 

 

Is it possible to import metadata for images such as External orientation file instead of EXIF?

Yes, you can load the image position information from an external text or CSV format file. With images loaded into the Pixels to Points dialog, select images and right-click in the Input Images box. Select to Load Image Positions from External File and point to the file containing the image positions. More information on this right-click option can be found here with information on the Input Images section of the Pixels to Points tool.

 

Is there a way to incorporate existing point cloud/lidar data to help with the mesh process? 

While you cannot use an existing point cloud to help generate the mesh feature from the Pixels to Points tool, you can create a mesh from a selected point cloud. If you have multiple point clouds loaded and selected for an area, all selected points will be used to generate the 3D mesh feature. 

 

Can you load a file of post-processed GPS coordinates and replace the GPS coordinate in each image? To use GM to process drone mapping data, it needs the ability to improve the image geotag with post-processed coordinates. Is this possible?

Yes, you can load positions for images from an external text or CSV format file in the Pixels to Points tool dialog. This can be used to add positions to images that lack them, or to update and replace existing image positions. 

With images loaded into the Pixels to Points dialog, select images and right-click in the Input Images box. Select to Load Image Positions from External File and point to the file containing the image positions. More information on this right-click option can be found here with information on the Input Images section of the Pixels to Points tool.

 

Can you use any camera with the ability to geotag images?

Yes, any image taken with a GPS-enabled camera should load into Global Mapper as a picture point. While the Pixels to Points tool is geared toward drone-collected imagery, you may be able to use any images as long as they meet the data recommendations.

 

Can you have more than one ground control point in an image?

Yes, if multiple ground control points appear in a single image you should place them in that image. To place each ground control point, select the point from the ground control point list and then click Add Control Point to Image, and place the control point. Select another control point from the ground control point list and complete the same steps to add that point to the image. 

 

The kml file generated by Global Mapper needs to have the code edited after export so that DJI drones can read them. Are there plans to change kmls to be compatible with DJI?

Yes, we do have an open issue on resolving the incompatibility of Global Mapper-produced KML files with DJI systems. The fix for this issue is tentatively slated for the Global Mapper 22.0 release. 

 

Do photo locations and GCPs need to be in WGS84 coordinates, or can other coordinate systems be used?

The photo location coordinates are assumed to be GPS derived and therefore implicitly bound to WGS84. Do you have collected images that use a different coordinate system for the photo locations? If so, could you share a few sample images? This would allow us to look at the EXIF info and look into a way to load them using the correct coordinate reference system.

Ground control points can be in any supported coordinate system. When loading a layer of ground control points into the main view of the Global Mapper workspace, they will be treated as any other point layer and loaded using the projection specified in the file, or selected by the user if needed. When loading control points from a text file into the Pixels to Points tool, you will be met with the Generic Text File Load Options, and since text files do not contain projection information, you will be prompted to select the correct projection for the file. 

 

Is MGA2020 projection supported in the output coordinate system?

MGA (Map Grid of Australia) is a supported projection in Global Mapper, and GDA2020 (Geocentric Datum of Australia 2020) is a supported datum. The workspace projection set in Configuration > Projection will be the coordinate system used when exporting files from a workspace. 

 

You need a pretty robust PC to process this data!

Processing many images at full resolution does take a more powerful machine. System requirements and recommendations for Global Mapper and the Pixels to Points tool can be found here in the Global Mapper knowledge base.

 

I’ve noticed that when selecting the GCP, it sometimes does not exactly select the center of the GCP, is that OKAY?

You should try to tag the ground control points as precisely as possible in the Pixels to Points tool. Zooming and panning in the image preview window will help to place them at the desired location in the images. When placing ground control points on an image, you may see the control points snap to the image pixel center.  

 

How did you generate the flight line?!?

The flight line is automatically generated by loading the picture points via the Pixels to Points tool dialog. To generate a flight line, open the Pixels to Points tool and load the images directly into this dialog with the option to Load Image File(s)… Once the images have loaded into the Input Images list, go to the Map Menu in the Pixels to Points tool and choose to Load Image(s) as Picture Point(s). This option will load a group of layers in the Global Mapper workspace containing all the picture points for the drone images as well as a Flight Line layer with the flight path. 

 

Can you use a Global Mapper script to automate the process of Pixels to Points transformation?

Global Mapper script does support the command GENERATE_POINT_CLOUD to set up and run the Pixels to Points process. Running the process through a script does not allow you to perform manual tasks like placing ground control points or masking sections of images. 

 

Any recommendations for good flight plan apps?

We don’t have any recommendations for flight plan apps as there are many out there. We generally suggest working with the one most compatible with your drone model. 

 

Will you be supporting the GLB file in your next version?

We do not have any current plans to include the GLB format for 3D objects. Supported 3D model formats are listed here in the Global Mapper knowledge base.

 

Do you have to fly the second time to capture the sides of the buildings/house at an oblique angle instead of nadir?

The flight path of your drone and the images captured should be designed with your end goal in mind. If you are planning to model building features more accurately, as well as the ground, you may want to capture oblique images from many angles. Keep in mind that the Pixels to Points tool and the Structure from Motion process can only reconstruct areas for which there are clear views from overlapping images. 

Check out this blog post with some more details on drone flight tips when collecting images for use in the Pixels to Points tool. 

 

How many GCPs do you need for a certain area of your flight?

Your ground control points should be evenly spread over the area of interest, and each ground control point should appear in multiple images. There is no rule on how many ground control points you should have in a given area. More control points will improve accuracy to a point, but eventually the addition of more control points will not result in much improvement in the outputs. 

 

So the data altitude is assumed to be in NAVD88 instead of NGVD29?

Global Mapper does not work with vertical reference systems or transform between them. The assumed vertical system for Pixels to Points is Ellipsoidal Height. 

 

If you have two sets of flight plans with different altitudes for the same area, will it distort the 3d model because they have different ground sample distance size?

If both flights use the same camera, the lighting in both image sets is similar and fairly even, and the images are clear, the two datasets collected at different heights should process together fine. 

 

Should I use a special application or program to create my flight plan and make it compatible with Global Mapper? I mean, does Global Mapper have an application to create the drones flight plan? 

Global Mapper cannot create flight plans that can be used by drones on flights. You will need to use another app to create and execute the flight. Global Mapper does support many file formats, so if you are able to save your flight line you can then import it into a Global Mapper instance as you work with your drone collected images. 

Additionally, you can recreate your flight from your collected drone images using the Pixels to Points tool. To generate a flight line, open the Pixels to Points tool and load the images directly into this dialog with the option to Load Image File(s)… Once the images have loaded into the Input Images list, go to the Map Menu in the Pixels to Points tool and choose to Load Image(s) as Picture Point(s). This option will load a group of layers in the Global Mapper workspace containing all the picture points for the drone images as well as a Flight Line layer with the flight path. 

 

Can Global Mapper do the corresponding coordinate system transformation: in this case, could Global Mapper transform UTM coordinates to the German Gauß-Krüger coordinates?

You can reproject data in Global Mapper by changing the workspace projection in Configuration > Projection. Both UTM and Gauss-Kruger are supported projections in Global Mapper. The workspace projection set in Configuration > Projection will be used when exporting layers from Global Mapper. 

 

How long does it take Global Mapper to process 200 photos, with its best performance? What must be the specifications of my computer to achieve this processing time?

The length of time it takes to process an image set through the Pixels to Points tool depends not only on the number of images and the machine specifications, but also on the size of the images and the settings applied in the Pixels to Points tool. Machine recommendations and requirements for using Global Mapper and the Pixels to Points tool can be found here in the Global Mapper knowledge base.

 

Can we do all this processing with an Unregistered copy as my Registered Version is at my workplace but I am working from home?

An unregistered version of Global Mapper does have limitations that will likely prevent you from being able to process images with the Pixels to Points tool. If you need help with a Global Mapper license, please reach out to our licensing team at authorize@bluemarblegeo.com with your most recent order number.

 

Any particular drone which works better with the Global Mapper LiDAR module?

We don’t have any specific drone recommendations, but you can find some data collection recommendations for the Pixels to Points tool in the Global Mapper knowledge base. 

 

Can you use scripting for all these processes?

Global Mapper script does support the command GENERATE_POINT_CLOUD to set up and run the Pixels to Points process. Running the process through a script does not allow you to perform manual tasks like placing ground control points or masking sections of images. 

 

We are watching while flying our drone doing a mapping job. I’d love for you to email the PowerPoint later. We want to start using global mapper and LiDAR vs drone2map or pix4d. We just need to learn it. 

I am glad you are interested in using Global Mapper for your drone data processing! We have thorough documentation on the tool and options here in the Global Mapper knowledge base. Additionally, as a registered attendee you should receive an email within the next week with access to the recording of this webinar.

If you have any questions about using Global Mapper, the Lidar Module, or the Pixels to Points tool specifically, you can reach out to our technical support team at geohelp@bluemarblegeo.com. 

 

Can you export to google earth?

Yes, Global Mapper supports many file formats for both import and export including raster and vector KMZ/KML formats compatible with Google Earth. 

 

Is it required to have a base overall image pre-loaded or can you do this work with captured images only?

A base image is not required. In the example shown in the webinar, the imagery simply provided a visual reference for the project. You can absolutely view and process your drone-collected images without any background data, loading only the images and the generated outputs from the Pixels to Points tool.

 

If possible please provide the best image capture settings for the fly-over? Or a couple of scenarios? 

The flight plan you use when collecting images depends on the goals of your data collection. Keep in mind that the Pixels to Points tool and the Structure from Motion process can only reconstruct areas for which there are clear views from overlapping images. For instance, if you are looking to map the ground in a wide-open area, you can fly back and forth over the area collecting nadir images of the area of interest.

If you are planning to gather more detail on terrain features or features on the surface, you may want to fly at a lower height, taking slightly oblique images, flying back and forth over the area. Then, to increase the views you have on the features in the area, continue flying over the area in straight lines crossing your original flight lines perpendicularly to capture additional angles of the features.

Check out this blog post with some more details on drone flight tips when collecting images, and the data collection recommendations for images with the Pixels to Points tool. 

 

I use Microdrones mdLidar 1000 which has a 5mp camera which is primarily for colorizing the lidar point cloud. Can I generate just an orthoimage without generating a point cloud or 3D mesh?

You can select to only output the orthoimage from the Pixels to Points tool. Since this image is generated from the point cloud, the point cloud will still be constructed in the processing, but only the selected output, the orthoimage, will be saved. 

 

Does your workflow optimally utilize highly accurate camera coordinates in lieu of the use of GCPs?

If you do not choose to use ground control points with the Pixels to Points tool, the camera coordinates alone will be used as the positioning information when generating the output layers. Increased accuracy of the camera coordinates will result in more accurately positioned outputs from the tool. 

 

Can you build a model without any coordinates i.e. without GCP or camera coordinates?

Yes, the Pixels to Points tool can create outputs using images that do not have camera coordinate information. The collected images with no coordinates cannot be loaded into the main view of Global Mapper as picture points and will need to be loaded directly into the Pixels to Points tool. With no camera coordinates or ground control points, the outputs will be placed at the origin (lat/long 0,0) since the program will have no coordinate references for the data.

 

Can I enter camera coordinates via a list rather than through EXIF embedded coordinates?

Yes, you can load camera positions from an external text or CSV format file. With images loaded into the Pixels to Points dialog, select images and right-click in the Input Images box. Select to Load Image Positions from External File and point to the file containing the image positions. More information on this right-click option can be found here with information on the Input Images section of the Pixels to Points tool. 

 

Where in the object space do you specify “ground height”?

The Use Relative Altitude Based on Ground Height parameter in the Pixels to Points tool allows you to specify a ground height for the first image. Other points will use the entered value to calculate the vertical component for the outputs. 

 

Are there Points to Pixels tutorials available?

There is a workflow outline as well as details on the various steps, settings, and options for the Pixels to Points tool here in the Global Mapper Knowledge Base.

If you would like more guidance on using the Lidar Module, we do offer Lidar Module training, as well as Global Mapper training. These training sessions have been moved to an online format, and more information can be found here on our website.

Additionally, as a registered attendee you should receive an email in the next week with access to the recording of this webinar. 

 

The accuracy improvement of using ground control vs NOT using… are we talking metres, decimetres, or even better improvement? Under what conditions would the use of ground control points lead to marginal improvements? 

The accuracy of the outputs generated without control points depends on the accuracy of the image positions. If the cameras are using RTK/PPK then the output model should already be in the decimeter accuracy range. If the camera collects less accurate GPS coordinates, then high accuracy ground control points can definitely increase the accuracy from a few meters to closer to the GCP accuracy, which is also likely in the decimeter range depending on how they were collected.

Generally, we do recommend the use of high accuracy ground control points, or at least points collected with the use of some GPS averaging, if the collected image positions are not highly accurate. 

 

Also, can we have access to some test data like this for self-training?

We do not currently have a Pixels to Points lesson in the Global Mapper self-training available on our website. If you are interested in further training on the Lidar Module and the Pixels to Points tool, we do have some public training classes scheduled in the upcoming months. These classes have been moved to an online format, and you can find out more about training here on our website.

 

What about PPK data?

High accuracy image positions or ground control points collected with a PPK system can be used with Global Mapper and the Pixels to Points tool. Using higher accuracy position information will help to improve the accuracy of the Pixels to Points outputs. 

 

Do the help files (or some discussion on a web site) discuss parameters for when you might want to adjust settings – e.g. analysis method, checkboxes for higher quality/resampling?  I’m thinking similarly to suggesting using masks for sky, water, snow cover, etc.

The Pixels to Points tool documentation contains information about the tool in general as well as an outline of the steps to use the tool and specific information on the options and settings available. 

 

How would I know which camera type my drone camera fits?

I recommend researching your drone model and contacting the manufacturer for information on compatible camera models. 

 

I would also like some information about processing time – some reference points of number of images, processor speed, cores, etc.

The time it takes to process an image set with the Pixels to Points tool depends on the number of images, the image resolution, the settings selected in the Pixels to Points dialog, and the machine on which you are running the process. You can find some system requirements and recommendations for Global Mapper and the Pixels to Points tool specifically in the Global Mapper knowledge base. 

 

Can I add EXIF info to images from a separate exterior orientation file like the one created from Trimble Applanix IMU/GPS?

Yes, you can load image positions from an external text or CSV format file. While I am not familiar with the file format created by your specific GPS device, information on the options to load image positions from an external file can be found here with other information on the Input Images list in the Pixels to Points tool. 

 

How accurate does the GPS of the drone need to be? Most camera GPS is no better than 5 m accuracy. Should potential drone purchasers be looking for any particular standard of GPS capture?

While more accurate image position information will produce more accurate outputs from the Pixels to Points tool, the use of high accuracy ground control points can help to significantly boost the accuracy of the outputs when using less accurate image position coordinates.

 

Looks like Global Mapper is projecting the images on-the-fly. Is that true?

The quick individual orthoimages loaded into Global Mapper are being projected to fill the approximate image coverage areas calculated from the camera position and view parameters.

 

I tried this with our new DJI Mavic 2 Enterprise Dual drone and Global Mapper asked what the focal length was. I had no idea so the Pixels to Points tool never worked.

If some camera parameters, like focal length, cannot be read from the image EXIF information, Global Mapper will prompt you for the missing values. To find this information, I suggest looking through the documentation for your drone and camera model and reaching out to the manufacturer for the hardware specs. 

 

Is there any limitation when loading images, like size, resolution, etc.?

There is no limitation on image size, file size, or image resolution in Global Mapper. When working with large, high-resolution images, you may run into some hardware limitations when you attempt to process them in the Pixels to Points tool. The most common limitation users encounter when processing large datasets is insufficient available memory on the machine. In this case, the solution is to reduce the image size by a factor.

System recommendations for using Global Mapper and the Pixels to Points tool can be found here. Keep in mind that these are recommendations; having more available memory will be beneficial when processing larger images.
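As a rough sketch of what "reduce the image size by a factor" means (this is generic resampling logic, not Global Mapper code), an integer downscale factor can be applied by block-averaging pixels, which quarters the pixel count for a factor of 2:

```python
def downsample(pixels, factor):
    """Reduce a 2D grayscale image by an integer factor using block averaging.

    pixels: list of rows (lists of ints); both dimensions must be
    divisible by factor. Illustrative only; real tools would use an
    image library's resize function.
    """
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [pixels[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

# A 4x4 image reduced by a factor of 2 becomes 2x2,
# quartering the pixel count (and roughly the memory needed).
image = [[0, 0, 100, 100],
         [0, 0, 100, 100],
         [50, 50, 200, 200],
         [50, 50, 200, 200]]
small = downsample(image, 2)  # -> [[0, 100], [50, 200]]
```

Halving each dimension this way cuts memory use by roughly a factor of four, which is often enough to get a large image set through processing on a memory-constrained machine.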

 

Sometimes the images do not have EXIF information. How can I add this information using a CSV file?

You can load the image position information from an external text or CSV format file. With images loaded into the Pixels to Points dialog, select the images and right-click in the Input Images box. Select Load Image Positions from External File and point to the file containing the image positions. More information on this right-click option can be found here with information on the Input Images section of the Pixels to Points tool. 
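As an illustrative sketch, an image-position file typically pairs each image filename with its coordinates, one image per line. The column names and order below are hypothetical; check the linked documentation for the exact layout Global Mapper expects:

```csv
image_name,latitude,longitude,altitude
DJI_0001.JPG,44.54812,-69.63291,92.4
DJI_0002.JPG,44.54836,-69.63305,92.6
DJI_0003.JPG,44.54861,-69.63318,92.5
```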

 

My images were generated by a camera which is not in the list of cameras, what can I do?

What camera do you use for image collection? Typically the camera information can be read from the image EXIF information; if it is not, you will be prompted to enter some parameters. If you can provide the camera model and specifications, I would be happy to pass them along to our team to get the camera added to the Camera Model list in the Pixels to Points tool. 

 

Is the irregularity in the trees and vegetation due to the way the data was collected or as a result of the SfM?

Since the goal of this image set was to reconstruct the farmhouse area, the irregularities and distortion in the trees stem from a combination of the data collection and the Structure from Motion (SfM) process. 

The images are nadir images and the tree areas where distortion is seen are on the edges of the study area. Being toward the edge of the study area with top-down images means that there aren’t as many good overlapping views of the trees in the source images. 

Additionally, trees and vegetation areas are often difficult for the Structure from Motion process because these image areas are noisy, making it hard for the program to identify the match points needed for reconstruction. 

 

Does Pixels to Points work on non-nadir pointing images?

Yes, you can use non-nadir images, like oblique images, with the Pixels to Points tool.

 

Can you add different columns to the Image list, i.e. Roll/Pitch/Yaw?

If the Roll/Pitch/Yaw information is stored in the EXIF info for an image, it should appear as attributes of the picture point when the image is loaded into the main view of Global Mapper. If this information is detected, it will be used when ortho-rectifying individual images.

However, these additional view parameters, Roll/Pitch/Yaw, are not currently used when generating the Pixels to Points outputs and cannot be viewed in the Input Images list information. 

 

Is it best to maintain a constant AGL or constant altitude when collecting imagery?

It is recommended to keep a constant altitude and, in general, when capturing nadir images, to fly the drone as high as possible. 
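The altitude trade-off can be sketched with the standard photogrammetric ground sample distance (GSD) relation: resolution on the ground scales linearly with flight height, so flying higher covers more area per image at the cost of coarser pixels. The sensor numbers below are hypothetical, not the specs of any particular drone:

```python
def ground_sample_distance(sensor_width_mm, image_width_px, focal_mm, altitude_m):
    """Standard relation: GSD (m/px) = pixel pitch * altitude / focal length.

    All camera parameters here are illustrative assumptions.
    """
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return pixel_pitch_mm * altitude_m / focal_mm

# Doubling the altitude doubles the GSD (halves the ground resolution)
# while quadrupling the area covered per image.
gsd_60 = ground_sample_distance(13.2, 5472, 8.8, 60.0)    # ~0.016 m/px
gsd_120 = ground_sample_distance(13.2, 5472, 8.8, 120.0)  # ~0.033 m/px
```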

 

What is the camera type for DJI Phantom 3 Pro?

If the camera model and information cannot be read from the EXIF information in your drone images, the Pixels to Points tool will prompt you to select the camera and enter some specific parameters. I recommend you refer to the manufacturer for the specific camera model information for your drone. 

 

Can Global Mapper create a true, undistorted orthophoto? 

The Pixels to Points tool can remove distortion from the orthoimage by using the generated 3D mesh when generating the orthoimage layer. To generate the best quality texture for the mesh, you should select the Quality setting of Highest in the Pixels to Points settings. This combination of checking the option to Generate Orthoimage from Mesh and using the Highest Quality setting will remove distortion when generating the orthoimage layer. 

You can also choose to individually ortho-rectify images through Pixels to Points. By checking the option to Ortho-rectify Each Image Individually in the Pixels to Points tool settings each image will be placed on the map. With this method, there will likely be some noticeable seams between the images. 

 

Can we apply this workflow to satellite images within RPCS?

No. Stereo imagery is generally collected with satellites, while the Pixels to Points tool needs multiple images (at least a dozen) with about 60% overlap.
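To give a sense of what a 60% overlap requirement implies for image capture, the spacing between exposures follows directly from the image footprint. This is generic flight-planning arithmetic with hypothetical numbers, not Global Mapper functionality:

```python
def shot_spacing(footprint_m, overlap_fraction):
    """Distance between consecutive exposures for a target forward overlap.

    footprint_m: ground distance covered by one image along the flight line.
    """
    return footprint_m * (1.0 - overlap_fraction)

# Hypothetical example: a 90 m image footprint at 60% overlap means a new
# photo roughly every 36 m along the flight line.
spacing = shot_spacing(90.0, 0.60)  # -> 36.0
```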

 

Can Global Mapper create disparity maps for export?

By using a Global Mapper script to run the Pixels to Points tool, you can add advanced command options. One of these options will generate depth map images, which are equivalent to disparity maps for the Pixels to Points Structure from Motion process.
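The equivalence between depth maps and disparity maps comes from the classic stereo relation: depth is inversely proportional to disparity, so one can be converted to the other given the focal length and baseline. A minimal sketch with hypothetical numbers (this is the general formula, not Global Mapper's internals):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: depth = f * B / d.

    disparity_px: pixel offset of a feature between two overlapping images.
    focal_px: focal length expressed in pixels.
    baseline_m: distance between the two camera positions.
    """
    return focal_px * baseline_m / disparity_px

# Hypothetical values: 2400 px focal length, 10 m baseline between shots,
# 240 px disparity -> a point 100 m from the camera.
depth = depth_from_disparity(240.0, 2400.0, 10.0)  # -> 100.0
```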

 

I take 360° panoramics for radar coverage prediction, will Global Mapper be able to process the pans to make a screen profile looking at the horizon?

Unfortunately, the Pixels to Points tool does not support the use of panoramic images, so processing your images would likely fail. If you were to collect images of the area with an alternate camera, you could use the Pixels to Points tool to create some 3D outputs of the area in order to view and analyze the area and the horizon line. 

 

Can Pixels to Points work underwater (clear) if you mask out the land and add a few control points underwater?

We have not tested with an underwater set of images. If the water does not cause noise or distortion in the images, the Pixels to Points tool may be able to generate an output, but this is untested. There are various factors in how light travels through water that are not accounted for in our algorithms. 

If you have a dataset of clear underwater images that you would be able to share with us for testing purposes, that would be appreciated as we do not currently have test data of that type.

 

Are there any plans to incorporate flight planning within Global Mapper?

Currently, Global Mapper cannot create flight plan features that can be used by drones on flights to capture images. This is something being considered by our development team.

 

Can I use a QC tool to create an RMSE for a surface vs. check shots? I have used the Lidar QC tool to do this, but I am hoping to do this with a grid.

While the Lidar QC tool can only be used with point cloud data at this time, we do have an open ticket, #GM-6634, on adding a QC tool for gridded elevation data. I have added your request to this ticket for our development team to consider. 

In the meantime, you could create points at the elevation grid cell centers for your gridded elevation layer and then use the Lidar QC tool to compare your control points to the created points from the elevation grid. 
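The RMSE such a QC comparison reports is simple to state: the root of the mean squared difference between each surveyed check shot and the surface elevation at the same location. A minimal sketch with made-up numbers (not Global Mapper code; the flat 100 m surface stands in for a gridded layer):

```python
import math

def rmse(check_shots, grid_sample):
    """Root-mean-square error between surveyed check-shot elevations and
    elevations sampled from a surface at the same XY locations.

    check_shots: list of (x, y, z) control measurements.
    grid_sample: function mapping (x, y) -> surface elevation there.
    """
    errors = [grid_sample(x, y) - z for x, y, z in check_shots]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical data: three check shots against a flat 100.0 m surface.
shots = [(0, 0, 99.8), (10, 0, 100.1), (0, 10, 100.3)]
error = rmse(shots, lambda x, y: 100.0)  # sqrt((0.2^2 + 0.1^2 + 0.3^2) / 3)
```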

 

Can a camera and lens calibration be interpreted in Global Mapper?

Currently, camera calibration is not calculated or used in the Pixels to Points tool process. There is an open issue on adding a camera calibration tool to the program; the ticket is #GM-9644, and I have added your request to it. 
