GeoCue Releases TrueView UAV/LiDAR Fusion Platform
August 23rd, 2019, by Susan Smith (AECCafe Voice)

Susan Smith has worked as an editor and writer in the technology industry for over 16 years. As an editor she has been responsible for the launch of a number of technology trade publications, both in print and online. Currently, Susan is the Editor of GISCafe and AECCafe.
GeoCue President and CTO Lewis Graham answered some questions for GISCafe Voice about GeoCue products and the company's new TrueView platform. According to company materials, TrueView is "UAV/LiDAR fusion by design": an integrated LIDAR/camera fusion platform designed from the ground up to generate high-accuracy 3D colorized point clouds.
Who are the primary users of the GeoCue products?

We have three units within our company: what I call our "traditional" airborne/mobile laser scanning (ALS/MLS) unit, an Enterprise development unit that focuses on creating cloud-hosted applications (in Amazon Web Services or Microsoft Azure), and our UAS division, where the True View and other products are located. For ALS/MLS, our primary customers are professional collectors and exploiters of LIDAR data collected from manned aircraft and mobile platforms. These tend to be production companies, engineering firms and government agencies that exploit LIDAR data, such as the USGS and USDA. Our Enterprise unit works with customers who need to do a lot of processing and need automatic scaling. For example, we built and now manage a large hyperspectral camera processing system for Teledyne Technologies. This system supports a hyperspectral earth-observing camera that Teledyne has placed on the International Space Station. Finally, our UAS division works with customers who want to collect very accurate topographic data with drones. Here our customers tend to be engineering surveyors, mining companies, departments of transportation and similar users. We currently do not pursue non-mapping drone applications such as inspection or public safety, except for mine site inspection.

Can you describe a typical workflow for using the GeoCue TrueView system?

We have had a desktop LIDAR exploitation software product, LP360, on the market since about 2009. LP360 is a very robust point cloud and imagery processing package with a lot of advanced features, such as automatic ground classification, ASPRS accuracy assessment tools, advanced volumetric analysis tools and many others. We have extended this product to support True View workflows and rebranded this version as "True View Evo." The True View sensors create an organized project folder structure on their mass storage device (a USB 3.0 memory stick).
Data are transferred from the sensor to the processing workstation via this USB memory stick. True View Evo has an import wizard that automatically parses this data structure and creates a True View Project. A set of organized tools then leads the user through the process of converting the raw data sources to a colorized LAS file. We have even built a wizard into Evo for running the Applanix POSPac application (the post-processing software for the Position and Orientation System, POS), so users do not need to leave the project environment for this step of processing. We have optimized steps that can normally be rather arduous, such as clipping the trajectory to only the desired flight lines, greatly speeding the workflow. We have also multithreaded the sections of the workflow that benefit from multiple cores, such as the colorization step. A typical 20 hectare (50 acre) project can be processed from raw data ingest to colorized LAS point cloud in about 15 minutes.

When you say you have 3D colorization of all LIDAR points, do you have a certain color code for the different features that you are scanning?

No, we are actually attributing the point cloud with natural color. The True View 410 has two Red-Green-Blue (RGB) mapping cameras. These cameras provide RGB imagery of the scene being collected by the laser scanner. In post-processing, we ray trace each laser point into the camera images that have imaged that point. We then select the "best" image from this collection. Finally, we interpolate the RGB value from the image and add this value set to the RGB fields of the point cloud LAS record. The result is that any software that can visualize and/or process the RGB fields of a LAS file can take advantage of the colorization.

Where do you place the two GeoCue Mapping Cameras?

The GeoCue Mapping Cameras (GMC) are placed in the "belly" of the sensor, just in front of the laser scanner and just below the electronics enclosure.
They, as well as the Position and Orientation System and the laser scanner, are all mounted on a rigid aluminum frame. This mounting scheme ensures that all three sensors remain aligned. We mounted the GMCs in an oblique position to achieve a wider field of view and also to image the sides of vertical objects such as buildings. This provides a much better source for colorization of the LIDAR data than one would achieve with a single nadir-pointing camera.

How is sensor coordination important to the end result of the data?

I think it is critically important. We have done quite a few drone LIDAR projects in our services division, and we have tested and calibrated various drone LIDAR systems. Everyone always wants imagery, since you cannot really get the entire picture (pun intended!) from the 3D point cloud alone. We would typically end up doing a second mission using a low-cost drone such as a DJI Inspire to collect the imagery. However, since the imagery is not concurrent with the LIDAR data, you can see displacements. This is exacerbated on busy sites such as a mine, where equipment is in constant motion. For example, an oscillating stockpile conveyor cannot be modeled in 3D without a concurrent system, since it will be in a different position between the LIDAR and camera imaging. Of course, we are not the first ones to glom a camera onto a LIDAR unit. There are several companies who can mount a DSLR camera onto a LIDAR scanner. We have tried some of these solutions but find them problematic for a number of reasons: the field of view of the camera is always quite a bit narrower than the laser scanner's, it is difficult to find a lens that can be photogrammetrically calibrated, synchronization with the GNSS is always a bit of a kludge, and the systems are so heavy that flight time is dramatically reduced. We really studied these issues when we developed the camera scheme for the True View 410.
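The per-point colorization Graham describes, ray-tracing each laser return into every image that sees it, choosing the "best" image and sampling an RGB value into the point record, can be sketched roughly as below. This is a simplified pinhole-camera sketch under stated assumptions; the function names, camera dictionary layout and "closest to image centre" quality metric are illustrative choices, not GeoCue's actual pipeline or API.

```python
import math

def project(point, cam):
    """Project a world-space point into a camera image using a simple
    pinhole model (lens distortion ignored for brevity).
    Returns pixel (u, v), or None if the point is behind the camera
    or falls outside the image."""
    # Transform into the camera frame: R is a row-major 3x3 rotation,
    # t the camera origin in world coordinates.
    dx = [point[i] - cam["t"][i] for i in range(3)]
    x, y, z = (sum(cam["R"][r][i] * dx[i] for i in range(3)) for r in range(3))
    if z <= 0:
        return None  # behind the camera
    u = cam["fx"] * x / z + cam["cx"]
    v = cam["fy"] * y / z + cam["cy"]
    if 0 <= u < cam["width"] and 0 <= v < cam["height"]:
        return (u, v)
    return None

def colorize(point, cameras):
    """Pick the 'best' image for a point -- here, the one where the point
    projects closest to the image centre -- and sample its RGB value."""
    best = None
    for cam in cameras:
        uv = project(point, cam)
        if uv is None:
            continue
        # Distance from the principal point: a stand-in for whatever
        # image-selection metric the real pipeline uses.
        d = math.hypot(uv[0] - cam["cx"], uv[1] - cam["cy"])
        if best is None or d < best[0]:
            best = (d, cam, uv)
    if best is None:
        return None  # point not seen by any camera
    _, cam, (u, v) = best
    # Nearest-neighbour sampling; a production pipeline would interpolate.
    return cam["pixels"][int(v)][int(u)]
```

In a real workflow the returned RGB triple would be written into the red/green/blue fields of the corresponding LAS point record, which is why any LAS-aware software can display the result.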
What level of expertise is needed to use the GeoCue TrueView product in the drone LIDAR market?

I would say that anyone who has some experience flying medium-size drones such as the DJI M600 and is comfortable with GIS or CAD will be able to learn our system with three days of training. It is actually much less complex to use than many other drone LIDAR systems because of our operational philosophy. A single button on a top-mounted control box on the drone activates the system and finalizes data recording. Our True View Evo software is aided by processing "wizards" to make the workflow fairly simple. Of course, there are literally hundreds of applications for colorized 3D point cloud data. If a company does not have in-house LIDAR exploitation expertise, they might want to start with a straightforward process such as volumetric analysis or creating gridded ground elevation models. As they gain experience, they can offer more complex services such as 3D feature collection, hydro-enforced modeling and so forth.

Is customization from GeoCue necessary?

We have a mounting kit available for the DJI M600 hexacopter, so for this platform we have an "out of the box" solution. The True View 410 is easily mounted to any rotary-wing platform with a payload capacity of 2.25 kg by adapting the top bracket of the True View 410 to a belly-mounted adapter on the drone. The mission control software on the True View 410 is autonomous, with no connection to the drone flight control system. You simply modify any desired variables (such as camera firing rate/distance, laser scanner rotation rate and so forth) in a text configuration file called the Mission Configuration File. We provide a set of defaults that will work for most projects, so even this step is not strictly required. Nearly anyone can be productively collecting data within a day or two of receiving a system.

What is the subscription model?
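Graham mentions that mission parameters live in a plain-text Mission Configuration File, but the interview does not publish its format. A key=value file of that general kind, covering the parameters he names, might look like the sample string below; the key names and values are hypothetical guesses, not GeoCue's actual schema, and the parser is just a generic sketch of handling such a file.

```python
# Hypothetical mission configuration in key=value form; the parameter
# names below are illustrative, not GeoCue's actual schema.
SAMPLE_CONFIG = """\
# True View mission configuration (hypothetical example)
camera_firing_distance_m = 25
scanner_rotation_rate_hz = 10
log_level = info
"""

def parse_mission_config(text):
    """Parse a simple key=value config, skipping blank lines and
    '#' comments. Numeric values become floats, the rest stay strings."""
    config = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        try:
            config[key] = float(value)
        except ValueError:
            config[key] = value
    return config
```

The point of the sketch is the operational model Graham describes: mission behavior is changed by editing a text file before flight, with sensible defaults supplied, rather than through a ground-station link to the drone.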
This is a new business model we are first trying out in the USA and Canada, and I am very excited about it. The idea is that users can engage in a "pay as you collect" model that removes the need for a big up-front investment. Under our subscription model, we essentially rent the system based on the number of minutes the system is in motion ("kinematic minutes," or "kinmin"). There is a minimum fee each month of US $3,250 (which drops to $3,000 per month for a 12-month commitment), but the user receives 3,250 True View Points for this minimum fee. The minimum contract period is only 3 months. The base monthly fee gives enough True View Points to fly about 20 projects, each of 20 hectares (about 50 acres), using a local base station for the GNSS reference. If the customer needs to fly more than this, they can purchase additional True View Points. All software is included in this subscription so, other than the drone, a mounting kit and a base station, they are good to go. If the system fails, we ship a replacement and the customer returns the defective unit to us. There are no other fees beyond the charge for kinmins; the kinmin fee includes all hardware and software maintenance. This is a very low-risk way to enter the drone LIDAR mapping business. Of course, if there is a crash or loss of the sensor, the customer must reimburse us, so comprehensive insurance is probably a wise investment! I think this model will enable a lot of customers to dip their toes in the drone LIDAR lake without fear of drowning.

________________________________________________________

Congratulations to Atran Raikany for winning the sharecg Sweepstakes at SIGGRAPH 2019, receiving a $100 Amazon Gift Certificate. Atran graduated as Valedictorian from SCAD Atlanta and is currently working at ZeniMax Online Studios. He has been enamored by the world of animation his entire life.
He works in 2D at ZeniMax, but is developing his 3D animation skills at home. He also served as a Student Volunteer for SIGGRAPH '17 and '18; this year was his first time at the conference as a full attendee.

Tags: 3D, AEC, architects, architecture, AutoCAD, BIM, building design, building information modeling, CAD, infrastructure, laser scanning, mobile

Categories: building information modeling, construction, drones, engineering, field, field solutions, geospatial, GIS, IFC, infrastructure, mobile, point clouds, simulation, UAV, visualization