3D Terrain Segmentation in the SWIR Spectrum

Dalton Rosario, Anthony Ortiz, and Olac Fuentes
Conference Paper IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS 2018), Amsterdam, The Netherlands, September 2018.

Abstract

We focus on the automatic 3D terrain segmentation problem using hyperspectral shortwave infrared (HS-SWIR) imagery and 3D Digital Elevation Models (DEM). The datasets were independently collected, and metadata for the HS-SWIR dataset are unavailable. We exploit the overall slope of the SWIR spectrum, which correlates with the presence of moisture in soil, to propose a band-ratio test that serves as a proxy for soil moisture content and distinguishes two broad classes of objects: live vegetation and impermeable manmade surfaces. We show that image-based localization techniques combined with the Optimal Randomized RANdom SAmple Consensus (RANSAC) algorithm achieve precise spatial matches between HS-SWIR data of a portion of downtown Los Angeles (LA), USA, and the Visible image of a geo-registered 3D DEM covering a wider area of LA. Our rule-based spectral-elevation approach yields an overall accuracy of 97.7%, segmenting the scene into five object classes: buildings, houses, trees, grass, and roads/parking lots.
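
As a rough illustration of the band-ratio idea described in this abstract, the sketch below thresholds a ratio of two SWIR bands as a moisture proxy. The band centers and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def band_ratio_mask(cube, wavelengths, lo_um=1.1, hi_um=2.2, threshold=1.0):
    """Label pixels as live vegetation vs. impermeable manmade surface
    using a SWIR band ratio as a proxy for moisture content.

    cube        : (rows, cols, bands) reflectance array
    wavelengths : (bands,) band-center wavelengths in micrometers
    lo_um/hi_um : band centers approximating the overall SWIR slope
                  (illustrative values, not the paper's)
    threshold   : ratio above which a pixel is labeled vegetation
                  (moist material absorbs more strongly at longer SWIR
                  wavelengths, raising the ratio)
    """
    lo = np.argmin(np.abs(wavelengths - lo_um))   # nearest band to lo_um
    hi = np.argmin(np.abs(wavelengths - hi_um))   # nearest band to hi_um
    ratio = cube[:, :, lo] / np.maximum(cube[:, :, hi], 1e-6)  # avoid /0
    return ratio > threshold   # True = live vegetation, False = manmade
```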

Integrated Learning and Feature Selection for Deep Neural Networks in Multispectral Images

Anthony Ortiz, Alonso Granados, Olac Fuentes, Christopher Kiekintveld, Dalton Rosario, and Zachary Bell
Conference Paper 14th IEEE Workshop on Perception Beyond the Visible Spectrum, held in conjunction with the Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, Utah, June 2018.

Abstract

The curse of dimensionality is a well-known phenomenon that arises when applying machine learning algorithms to high-dimensional data; performance degrades as the dimension grows. Due to the high dimensionality of multispectral and hyperspectral imagery, classifiers trained on limited samples with many spectral bands tend to overfit, leading to weak generalization. In this work, we propose an end-to-end framework that integrates input feature selection into the training procedure of a deep neural network for dimensionality reduction. We show that Integrated Learning and Feature Selection (ILFS) significantly improves the performance of deep neural networks on multispectral imagery applications. We also evaluate the proposed methodology as a potential defense against adversarial examples, which are malicious inputs carefully designed to fool a machine learning system. Our experimental results show that methods for generating adversarial examples designed for RGB images are also effective on multispectral imagery, and that ILFS significantly mitigates their effect.
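
One plausible way to integrate feature selection into network training, as the abstract describes, is a learnable per-band gate with an L1 sparsity penalty, so band selection falls out of ordinary gradient descent. The PyTorch sketch below is an assumption-laden illustration of that idea, not the paper's exact ILFS architecture; the layer sizes and penalty weight are invented for the example.

```python
import torch
import torch.nn as nn

class BandSelector(nn.Module):
    """Learnable per-band gate trained jointly with the classifier.
    An L1 penalty on the gates pushes uninformative bands toward zero,
    so the surviving bands constitute the selected feature subset."""
    def __init__(self, n_bands):
        super().__init__()
        self.gate = nn.Parameter(torch.ones(n_bands))

    def forward(self, x):                    # x: (batch, bands, H, W)
        return x * self.gate.view(1, -1, 1, 1)

n_bands, n_classes = 8, 5                    # e.g., 8-band multispectral input
model = nn.Sequential(
    BandSelector(n_bands),
    nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, n_classes),
)

def loss_fn(logits, labels, model, l1=1e-3):
    """Classification loss plus sparsity pressure on the band gates."""
    ce = nn.functional.cross_entropy(logits, labels)
    return ce + l1 * model[0].gate.abs().sum()
```

After training, bands whose gate magnitudes fall below a small cutoff can be dropped, shrinking the input dimension for deployment.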

Spectral-elevation data registration using visible-SWIR spatial correspondence

Dalton Rosario and Anthony Ortiz
Conference Paper SPIE Defense and Commercial Sensing 2018, Orlando, Florida, April 2018.

Abstract

We focus on the problem of spatial feature correspondence between images generated by sensors operating in different regions of the spectrum, in particular the Visible (Vis: 0.4–0.7 µm) and Shortwave Infrared (SWIR: 1.0–2.5 µm). Under the assumption that only one of the available datasets is geospatially ortho-rectified (e.g., Vis), this spatial correspondence can play a major role in enabling a machine to automatically register SWIR and Vis images representing the same swath, as the first step toward achieving a full geospatial ortho-rectification of, in this case, the SWIR dataset. Assuming further that the Vis images are associated with a Lidar-derived Digital Elevation Model (DEM), corresponding local spatial features between SWIR and Vis images can also lead to the association of all of the additional data available in these sets, including SWIR hyperspectral and elevation data. Such a data association may also be interpreted as data fusion from two sensing modalities: hyperspectral and Lidar. We show that, using the Scale-Invariant Feature Transform (SIFT) and the Optimal Randomized RANdom SAmple Consensus (RANSAC) algorithm, a software method can successfully find spatial correspondence between SWIR and Vis images for a complete pixel-by-pixel alignment. Our method is validated through an experiment using a large SWIR hyperspectral data cube, representing a portion of Los Angeles, California, and a DEM with associated Vis images covering a significantly wider area of Los Angeles.
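
The SIFT-plus-RANSAC registration the abstract describes can be sketched with OpenCV as below. OpenCV's standard RANSAC homography estimator stands in for the Optimal Randomized RANSAC variant cited in the paper, and the ratio-test and reprojection thresholds are assumed defaults, not the paper's settings.

```python
import cv2
import numpy as np

def register_swir_to_vis(swir_band, vis_gray):
    """Align a single SWIR band image to a Visible image via SIFT + RANSAC.
    Inputs are single-channel uint8 images of the same scene."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(swir_band, None)
    k2, d2 = sift.detectAndCompute(vis_gray, None)

    # Lowe's ratio test on 2-NN matches to reject ambiguous descriptors
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robust homography: RANSAC discards cross-modal mismatches as outliers
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Warp the SWIR band into the Vis frame for pixel-by-pixel alignment
    h, w = vis_gray.shape
    return cv2.warpPerspective(swir_band, H, (w, h)), H
```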

Image-based 3D Model and Hyperspectral Data Fusion for Improved Scene Understanding

Anthony Ortiz, Dalton Rosario, Olac Fuentes, and Blair Simon
Conference Paper IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2017), Fort Worth, Texas, USA, July 2017.

Abstract

We address the problem of automatically fusing hyperspectral data of a digitized scene with an image-based 3D model overlapping the same scene, in order to associate material spectra with corresponding height information for improved scene understanding. The datasets have been independently collected at different spatial resolutions by different aerial platforms, and georegistration information about the datasets is assumed to be insufficient or unavailable. We propose a method that solves the fusion problem by associating Scale-Invariant Feature Transform (SIFT) descriptors from the hyperspectral data with the corresponding 3D point cloud in a large-scale 3D model. We find the correspondences efficiently, without affecting matching performance, by limiting the initial search space to the centroids obtained from k-means clustering. Finally, we apply the Optimal Randomized RANdom SAmple Consensus (RANSAC) algorithm to enforce geometric alignment of the hyperspectral images onto the 3D model. We present preliminary results that show the effectiveness of the method using two large datasets collected by drone-based sensors in an urban setting.
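
The centroid-limited search can be sketched as below: cluster the 3D model's descriptors with k-means once, then confine each query's exact search to the members of the nearest cluster instead of the full descriptor set. The cluster count and Euclidean metric are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def centroid_limited_matches(query_desc, model_desc, n_clusters=64):
    """Match hyperspectral SIFT descriptors (query_desc, shape (m, d))
    to 3D-model descriptors (model_desc, shape (n, d)), searching only
    within the nearest k-means cluster for each query."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(model_desc)
    matches = []
    for qi, q in enumerate(query_desc):
        # Nearest centroid first, then exact search within that cluster only
        c = np.argmin(np.linalg.norm(km.cluster_centers_ - q, axis=1))
        members = np.flatnonzero(km.labels_ == c)
        best = members[np.argmin(
            np.linalg.norm(model_desc[members] - q, axis=1))]
        matches.append((qi, best))
    return matches
```

With n clusters of roughly equal size, each query is compared against about n/n_clusters descriptors plus the centroids, rather than all n, which is where the efficiency gain comes from.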

Small Drone Field Experiment: Data Collection & Processing

D. Rosario, C. Borel, D. Conover, R. McAlinden, A. Ortiz, S. Shiver, B. Simon
Conference Paper Proceedings of the 9th NATO Military Sensing Symposium, Quebec City, Canada, May 31 - June 2, 2017.

Abstract

Following an initiative formalized in April 2016, known as ARL West, between the U.S. Army Research Laboratory (ARL) and the University of Southern California's Institute for Creative Technologies (USC ICT), a field experiment was coordinated and executed in the summer of 2016 by ARL, USC ICT, and Headwall Photonics. The purpose was to image part of the USC main campus in Los Angeles, USA, using two portable commercial off-the-shelf (COTS) aerial drone solutions for data acquisition, photogrammetry (3D reconstruction from images), and fusion of hyperspectral data with the recovered set of 3D point clouds representing the target area. The research aims to determine the viability of a machine capable of segmenting the target area into key material classes (e.g., manmade structures, live vegetation, water) for multiple purposes, including providing the user with a more accurate scene understanding and enabling the unsupervised, automatic sampling of meaningful material classes from the target area for adaptive semi-supervised machine learning. In the latter, a target-set library may be used to train machines automatically with data of local material classes, for example, to increase the chances of machines recognizing targets. We discuss the field experiment and the associated data post-processing approach, which corrects for reflectance, georectifies the data, recovers the area's dense point clouds from images, registers the spectral and elevation properties of scene surfaces across the independently collected datasets, and generates the desired scene-segmented maps. Lessons learned from the experience are highlighted throughout the paper.
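
The post-processing chain enumerated above can be summarized as a schematic pipeline. Every function below is a hypothetical placeholder (identity stub), standing in for real radiometric-correction, photogrammetry, registration, and segmentation tooling; only the ordering of the stages reflects the paper.

```python
# Schematic of the post-processing chain; all stage functions are
# hypothetical placeholders, not APIs from the paper or any library.

def to_reflectance(cube):
    return cube            # placeholder: radiance-to-reflectance correction

def reconstruct_point_cloud(images):
    return images          # placeholder: dense 3D recovery from imagery

def georectify(cloud):
    return cloud           # placeholder: tie point cloud to map coordinates

def register_spectra(cubes, cloud):
    return (cubes, cloud)  # placeholder: spectral-elevation registration

def segment_materials(fused):
    return fused           # placeholder: manmade / vegetation / water map

def process_collection(images, spectral_cubes):
    """Run the stages in the order described in the abstract."""
    cubes = [to_reflectance(c) for c in spectral_cubes]
    cloud = georectify(reconstruct_point_cloud(images))
    return segment_materials(register_spectra(cubes, cloud))
```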