Category Archives: Remote Sensing

What Do You Put On The Borg Warner Trophy When An Autonomous Car Wins the Indy 500?

Peter Lobner

A year ago, this might have seemed like a foolish question.  An autonomous car racing in the Indianapolis 500 Mile Race?  Ha!  When pigs fly!

The Indy 500 Borg Warner Trophy. 
Source:  The359 – Flickr via Wikipedia

One of the first things you may notice about the Borg Warner Trophy is that the winning driver of each Indy 500 Race is commemorated with a small portrait/sculpture of their face in bas-relief along with a small plaque with their name, winning year and winning average speed. Today, 105 faces grace the trophy.

Borg Warner Trophy close-up.
Source: WISH-TV, Indianapolis, March 2016

The Indianapolis Motor Speedway (IMS) website provides the following details:

“The last driver to have his likeness placed on the original trophy was Bobby Rahal in 1986, as all the squares had been filled. A new base was added in 1987, and it was filled to capacity following Gil de Ferran’s victory in 2003. For 2004, Borg-Warner commissioned a new base that will not be filled to capacity until 2034.”

On 11 January 2021, the Indianapolis Motor Speedway, along with Energy Systems Network, announced the Indy Autonomous Challenge (IAC), with the inaugural race taking place at the IMS on 23 October 2021.  The goal of the IAC is to create the fastest autonomous race car that can complete a head-to-head 50 mile (80.5 km) race at IMS. The challenge, which offers $1.5 million in prize money, is geared toward college and university teams. The IAC website is here:

The IAC organizers state that this challenge was “inspired and advised by innovators who competed in the Defense Advanced Research Projects Agency (DARPA) Grand Challenge, which put forth a $1 million award in 2004 that created the modern automated vehicle industry.”

All teams will be racing an open-wheel, automated Dallara IL-15 race car that appears, at first glance, quite similar to conventional (piloted) 210 mph Dallara race cars used in the Indy Lights race series.  However, the IL-15 has been modified with hardware and controls to enable automation.  The automation systems include an advanced set of sensors (radar, lidar, optical cameras) and computers.  Each completed race car has a value of more than $1 million. The teams will focus primarily on writing the software that will process the sensor data and drive the cars.  When fully configured for the race, the IAC Dallara IL-15 will be the world’s fastest autonomous automotive vehicle.

Rendering of the autonomous Dallara IL-15.  Source: IAC
Rendering of the autonomous Dallara IL-15 on the IMS race track.  Source: IAC

Originally, 39 university teams from 11 countries and 14 states had applied to compete in the IAC.  As of mid-January 2021, the IAC website lists 24 teams still actively seeking to qualify for the race.  

The race winner will be the first team whose car crosses the finish line after a 20-lap (50 mile / 80.5 km) head-to-head race that is completed in less than 25 minutes.  This requires an average lap speed of at least 120 mph (193 kph) and an average lap time of less than 75 seconds around the 2.5 mile (4 km) IMS race track. 
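The pace requirements quoted above follow directly from the race parameters, and are simple enough to verify in a few lines of Python:

```python
# Check the IAC race arithmetic: 20 laps of the 2.5 mile IMS oval
# (50 miles total) must be completed in under 25 minutes.

TRACK_LENGTH_MI = 2.5    # IMS oval lap length
LAPS = 20
TIME_LIMIT_MIN = 25

race_distance_mi = TRACK_LENGTH_MI * LAPS                     # 50 miles
min_avg_speed_mph = race_distance_mi / (TIME_LIMIT_MIN / 60)  # 120 mph
max_avg_lap_time_s = TIME_LIMIT_MIN * 60 / LAPS               # 75 seconds

print(f"Minimum average speed: {min_avg_speed_mph:.0f} mph")
print(f"Maximum average lap time: {max_avg_lap_time_s:.0f} s")
```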

In comparison, Indy Lights races at IMS from 2003 to 2019 had an average winning speed of 148.1 mph (238.3 kph) and an average winning lap time of 60.8 seconds.  All of these races were run with cars using a Dallara chassis. The highest winning average speed for an Indy Lights race at IMS was in 2018, when Colton Herta won in a Dallara-Mazda at an average speed of 195.0 mph (313.8 kph) and an average lap time of 46.1 seconds, with no cautions during the race.

Milestones preceding the autonomous race are listed on the IAC website here:

Key milestones include:

  • 27 – 29 May: Vehicle distribution to the teams
  • 5 – 6 June: Track practice #1
  • 4 – 6 September: Track practice #2
  • 19 – 20 October: Track practice #3
  • 21 – 22 October: Final race qualification
  • 23 October: Race day

The winning team will receive a prize of $1 million, with the second and third place teams receiving $250,000 and $50,000, respectively.

The IAC race will be held more than 17 years after the first of three DARPA Grand Challenge autonomous vehicle competitions that were instrumental in building the technical foundation and developing broad-based technical competencies related to autonomous vehicles.  A quick look at these DARPA Grand Challenge races may help put the upcoming IAC race in perspective.

The first DARPA Grand Challenge autonomous vehicle race was held on 13 March 2004.  From an initial field of 106 applicants, DARPA selected 25 finalists. After a series of pre-race trials, 15 teams qualified their vehicles for the race. The “race course” was a 140 mile (225 km) off-road route designated by GPS waypoints through the Mojave Desert, from Barstow, CA to Primm, NV.  You might remember that no vehicles completed the course and there was no winner of the $1 million prize. The vehicle that went furthest was the Carnegie Mellon Sandstorm, a modified Humvee sponsored by SAIC, Boeing and others.  Sandstorm broke down after completing 7.36 miles (11.84 km), just 5% of the course. 

A second Grand Challenge race was held 18 months later, on 8 October 2005. DARPA raised the prize money to $2 million for this 132 mile (212 km) off-road race. From an original field of 197 applicants, 23 teams qualified to have their vehicles on the starting line for the race.  In the end, five teams finished the course, four of them in under the 10-hour limit. Stanford University’s Stanley was the overall winner.  All but one of the 23 finalist teams traveled farther than the best vehicle in 2004.  This was a pretty remarkable improvement in autonomous vehicle performance in just 18 months.

In 2007, DARPA sponsored a different type of autonomous vehicle competition, the Urban Challenge.  DARPA describes this competition as follows:

“This event required teams to build an autonomous vehicle capable of driving in traffic, performing complex maneuvers such as merging, passing, parking, and negotiating intersections. As the day wore on, it became apparent to all that this race was going to have finishers. At 1:43 pm, “Boss”, the entry of the Carnegie Mellon Team, Tartan Racing, crossed the finish line first with a run time of just over four hours. Nineteen minutes later, Stanford University’s entry, “Junior,” crossed the finish line. It was a scene that would be repeated four more times as six robotic vehicles eventually crossed the finish line, an astounding feat for the teams and proving to the world that autonomous urban driving could become a reality. This event was groundbreaking as the first time autonomous vehicles have interacted with both manned and unmanned vehicle traffic in an urban environment.”

In January 2021, a production Tesla Model 3 with the new Full Self-Driving (FSD) Beta software package drove from San Francisco to Los Angeles with almost no human intervention.  I wonder how that Tesla Model 3 would have performed on the 2007 DARPA Urban Challenge.  You can read more about the SF – LA FSD trip at the following link:

We’ve seen remarkable advances in the development of autonomous vehicles in the 17 years since the 2004 DARPA Grand Challenge race.  Is it unreasonable to think that an autonomous race car will become competitive with a piloted Indy race car during the next decade and compete in the Indy 500 before they run out of space on the Borg Warner Trophy in 2034?  If the autonomous racer wins the Indy 500, what will they put on the trophy to commemorate the victory? A silver bas-relief of a microchip?

I think I see a flying pig!

For more information on IAC and IMS

For more information on the DARPA Grand Challenges for autonomous vehicles

The Moon has Never Looked so Colorful

Peter Lobner

On 20 April 2020, the U.S. Geological Survey (USGS) released the first-ever comprehensive digital geologic map of the Moon.  The USGS described this high-resolution map as follows:

“The lunar map, called the ‘Unified Geologic Map of the Moon,’ will serve as the definitive blueprint of the moon’s surface geology for future human missions and will be invaluable for the international scientific community, educators and the public-at-large.”

Color-coded orthographic projections of the “Unified Geologic Map of the Moon” showing the geology of the Moon’s near side (left) and far side (right).  Source:  NASA/GSFC/USGS

You’ll find the USGS announcement here:

You can view an animated, rotating version of this map here:

This remarkable mapping product is the culmination of a decades-long project. It began with six Apollo-era (late 1960s – 1970s) regional geologic maps, which had been individually digitized and released in 2013 but never integrated into a single, consistent lunar map. 

This intermediate mapping product was updated based on data from the following more recent lunar satellite missions:

  • NASA’s Lunar Reconnaissance Orbiter (LRO) mission:
    • The Lunar Reconnaissance Orbiter Camera (LROC) is a system of three cameras that capture high resolution black and white images and moderate resolution multi-spectral images of the lunar surface:
    • Topography for the north and south poles was supplemented with Lunar Orbiter Laser Altimeter (LOLA) data:
  • JAXA’s (Japan Aerospace Exploration Agency) SELENE (SELenological and ENgineering Explorer) mission:

The final product is a seamless, globally consistent map that is available in several formats: geographic information system (GIS) format at 1:5,000,000-scale, PDF format at 1:10,000,000-scale, and jpeg format.
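Map scale translates directly into ground distance: at 1:5,000,000, one centimeter on the map represents 50 km on the lunar surface. A quick illustration (the helper function here is just for demonstration):

```python
def ground_km_per_map_cm(scale_denominator: int) -> float:
    """Ground distance, in km, represented by 1 cm on a map of the given scale.

    1 cm on a 1:N map covers N cm of ground; 100,000 cm = 1 km.
    """
    return scale_denominator / 100_000

print(ground_km_per_map_cm(5_000_000))   # 50.0 km/cm for the GIS product
print(ground_km_per_map_cm(10_000_000))  # 100.0 km/cm for the PDF product
```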

At the following link, you can download a large zip file (310 MB) that contains a jpeg file (>24 MB) with a Mercator projection of the lunar surface between 57°N and 57°S latitude, two polar stereographic projections of the regions poleward of 55°N and 55°S, and a description of the symbols and color coding used in the maps.

These high-resolution maps are great for exploring the lunar surface in detail. A low-resolution copy (not suitable for browsing) is reproduced below.

For more information on the Unified Geologic Map of the Moon, refer to the paper by C. M. Fortezzo, et al., “Release of the digital Unified Global Geologic Map of the Moon at 1:5,000,000-scale,” which is available here:

Antarctica – What’s Under All That Ice?

Peter Lobner

From space, Antarctica gives the appearance of a large, ice-covered continental land mass surrounded by the Southern Ocean.  The satellite photo mosaic, below, reinforces that illusion.  Very little ice-free rock is visible, and it’s hard to distinguish between the continental ice sheet and ice shelves that extend into the sea.

Satellite mosaic image of Antarctica created by Dave Pape, 
adapted to the same orientation as the following maps. 

The following topographical map presents the surface of Antarctica in more detail, and shows the many ice shelves (in grey) that extend beyond the actual coastline and into the sea.  The surface contour lines on the map are at 500 meter (1,640 ft) intervals.

Map of Antarctica and the Southern Ocean showing the topography of Antarctica (as blue lines), research stations of the United States and the United Kingdom (in red text), ice-free rock areas (in brown), ice shelves (in gray) and names of the major ocean water bodies (in blue uppercase text).
Source: LIMA Project (Landsat Image Mosaic of Antarctica) via Wikipedia

The highest elevation of the ice sheet is 4,093 m (13,428 ft) at Dome Argus (aka Dome A), which is located in the East Antarctic Ice Sheet, about 1,200 kilometers (746 miles) inland.  The highest land elevation in Antarctica is Mount Vinson, which reaches 4,892 meters (16,050 ft) on the north part of a larger mountain range known as Vinson Massif, near the base of the Antarctic Peninsula.  This topographical map does not provide information on the continental bed that underlies the massive ice sheets.

A look at the bedrock under the ice sheets: Bedmap2 and BedMachine

In 2001, the British Antarctic Survey (BAS) released a topographical map of the bedrock that underlies the Antarctic ice sheets and the coastal seabed, derived from data collected by international consortia of scientists since the 1950s. The resulting dataset was called BEDMAP1.  

In 2013, P. Fretwell, et al. (a very large team of co-authors) published the paper “Bedmap2: Improved ice bed, surface and thickness datasets for Antarctica,” which included the following bed elevation map, with bed elevations color coded as indicated in the scale on the left.  As you can see, large portions of the Antarctic “continental” bedrock are below sea level.

Bedmap2 bed elevation grid.  Source:  Fretwell 2013, Fig. 9

You can read the 2013 Fretwell paper here:

For an introduction to Antarctic ice sheet thickness, ice flows, and the topography of the underlying bedrock, please watch the following short (1:51) 2013 video, “Antarctic Bedrock,” by the National Aeronautics and Space Administration’s (NASA’s) Scientific Visualization Studio:

NASA explained:

  • “In 2013, BAS released an update of the topographic dataset called BEDMAP2 that incorporates twenty-five million measurements taken over the past two decades from the ground, air and space.”
  • “The topography of the bedrock under the Antarctic Ice Sheet is critical to understanding the dynamic motion of the ice sheet, its thickness and its influence on the surrounding ocean and global climate. This visualization compares the new BEDMAP2 dataset, released in 2013, to the original BEDMAP1 dataset, released in 2001, showing the improvements in resolution and coverage.  This visualization highlights the contribution that NASA’s mission Operation IceBridge made to this important dataset.”

On 12 December 2019, a University of California Irvine (UCI)-led team of glaciologists unveiled the most accurate portrait yet of the contours of the land beneath Antarctica’s ice sheet.  The new topographic map, named “BedMachine Antarctica,”  is shown below.

BedMachine Antarctica topographical map showing the underlying ground features and the large portions of the continental bed that are below sea level.  
 Credit: Mathieu Morlighem / UCI

UCI reported:

  • “The new Antarctic bed topography product was constructed using ice thickness data from 19 different research institutes dating back to 1967, encompassing nearly a million line-miles of radar soundings. In addition, BedMachine’s creators utilized ice shelf bathymetry measurements from NASA’s Operation IceBridge campaigns, as well as ice flow velocity and seismic information, where available. Some of this same data has been employed in other topography mapping projects, yielding similar results when viewed broadly.”
  • “By basing its results on ice surface velocity in addition to ice thickness data from radar soundings, BedMachine is able to present a more accurate, high-resolution depiction of the bed topography. This methodology has been successfully employed in Greenland in recent years, transforming cryosphere researchers’ understanding of ice dynamics, ocean circulation and the mechanisms of glacier retreat.”
  • “BedMachine relies on the fundamental physics-based method of mass conservation to discern what lies between the radar sounding lines, utilizing highly detailed information on ice flow motion that dictates how ice moves around the varied contours of the bed.”

The net result is a much higher resolution topographical map of the bedrock that underlies the Antarctic ice sheets.  The authors note: “This transformative description of bed topography redefines the high- and lower-risk sectors for rapid sea level rise from Antarctica; it will also significantly impact model projections of sea level rise from Antarctica in the coming centuries.”

You can take a visual tour of BedMachine’s high-precision model of Antarctic’s ice bed topography here.  Enjoy your trip.

There is significant geothermal heating under parts of Antarctica’s bedrock

West Antarctica and the Antarctic Peninsula form a connected rift / fault zone that includes about 60 active and semi-active volcanoes, which are shown as red dots in the following map.  

Volcanoes located along the branching West Antarctic Fault/Rift System.
Source:  James Kamis, Plate Climatology, 4 July 2017

In a 29 June 2018 article on the Plate Climatology website, author James Kamis presents evidence that the fault / rift system underlying West Antarctica generates a significant geothermal heat flow into the bedrock and is the source of volcanic eruptions and sub-glacial volcanic activity in the region.  The heat flow into the bedrock and the observed volcanic activity both contribute to the glacial melting observed in the region.  You can read this article here:

The correlation between the locations of the West Antarctic volcanoes and the regions of higher heat flux within the fault / rift system is evident in the following map, which was developed in 2017 by a multi-national team.

Geothermal heat flux distribution at the ice-rock interface superimposed on subglacial topography.  Source:  Martos, et al., Geophysical Research Letter 10.1002/2017GL075609, 30 Nov 2017

The authors note: “Direct observations of heat flux are difficult to obtain in Antarctica, and until now continent-wide heat flux maps have only been derived from low-resolution satellite magnetic and seismological data. We present a high-resolution heat flux map and associated uncertainty derived from spectral analysis of the most advanced continental compilation of airborne magnetic data. …. Our high-resolution heat flux map and its uncertainty distribution provide an important new boundary condition to be used in studies on future subglacial hydrology, ice sheet dynamics, and sea level change.”  This Geophysical Research Letter is available here:

The results of six Antarctic heat flux models developed from 2004 to 2017 were compared by Brice Van Liefferinge in his 2018 PhD thesis.  His results, shown below, are presented on the Cryosphere Sciences website of the European Geosciences Union (EGU). 

Spatial distributions of geothermal heat flux: (A) Pollard et al. (2005) constant values, (B) Shapiro and Ritzwoller (2004): seismic model, (C) Fox Maule et al. (2005): magnetic measurements, (D) Purucker (2013): magnetic measurements, (E) An et al. (2015): seismic model and (F) Martos et al. (2017): high resolution magnetic measurements.  Source:  Brice Van Liefferinge (2018) PhD Thesis.

Regarding his comparison of Antarctic heat flux models, Van Liefferinge reported:  

  • “As a result, we know that the geology determines the magnitude of the geothermal heat flux and the geology is not homogeneous underneath the Antarctic Ice Sheet:  West Antarctica and East Antarctica are significantly distinct in their crustal rock formation processes and ages.”
  • “To sum up, although all geothermal heat flux data sets agree on continent scales (with higher values under the West Antarctic ice sheet and lower values under East Antarctica), there is a lot of variability in the predicted geothermal heat flux from one data set to the next on smaller scales. A lot of work remains to be done …” 

The effects of geothermal heating are particularly noticeable at Deception Island, which is part of a collapsed and still active volcanic crater near the tip of the Antarctic Peninsula.  This high heat flow volcano is in the same major fault zone as the rapidly melting / breaking-up Larsen Ice Shelf.  The following map shows the faults and volcanoes in this region.  

Key geological features in the Larsen “C” sea ice segment area.  
Source:  James Kamis, Plate Climatology, 4 July 2017
Tourists enjoying the geothermally heated ocean water at Deception Island.  
Source: Public domain

So, if you take a cruise to Antarctica and the Cruise Director offers a “polar bear” plunge, I suggest that you wait until the ship arrives at Deception Island.  Remember, this warm water is not due to climate change.  You’re in a volcano.

For more information on Bedmap2 and BedMachine:

  • “Antarctic Bedrock,” Visualizations by Cindy Starr,  NASA Scientific Visualization Studio, Released on June 4, 2013:
  • Morlighem, M., Rignot, E., Binder, T. et al. “Deep glacial troughs and stabilizing ridges unveiled beneath the margins of the Antarctic ice sheet,” Nature Geoscience (2019) doi:10.1038/s41561-019-0510-8:

More information on geothermal heating in the West Antarctic rift / fault zone:

NOAA’s Monthly Climate Summaries are Worth Your Attention

Peter Lobner

The National Oceanic and Atmospheric Administration’s (NOAA’s) National Centers for Environmental Information (NCEI) are responsible for “preserving, monitoring, assessing, and providing public access to the Nation’s treasure of climate and historical weather data and information.”  The main NOAA / NCEI website is here:

The “State of the Climate” is a collection of monthly summaries recapping climate-related occurrences on both a global and national scale.  Your starting point for accessing this collection is here:

I’d like to direct your attention to two particularly impressive monthly summaries:

  • Global Summary Information, which provides a comprehensive top-level view, including the Sea Ice Index
  • Global Climate Report, which provides more information on temperature and precipitation, but excludes the Sea Ice Index information

Here are some of the graphics from the Global Climate Report for June 2019.


NOAA offered the following synopsis of the global climate for June 2019.

  • The month of June was characterized by warmer-than-average temperatures across much of the world. The most notable warm June 2019 temperature departures from average were observed across central and eastern Europe, northern Russia, northeastern Canada, and southern parts of South America.
  • Averaged as a whole, the June 2019 global land and ocean temperature departure from average was the highest for June since global records began in 1880.
  • Nine of the 10 warmest Junes have occurred since 2010.

For more details, see the online June 2019 Global Climate Report at the following link:

A complementary NOAA climate data resource is the National Snow & Ice Data Center’s (NSIDC’s) Sea Ice Index, which provides monthly and daily quick looks at Arctic-wide and Antarctic-wide changes in sea ice. It is a source for consistently processed ice extent and concentration images and data values since 1979. Maps show sea ice extent with an outline of the 30-year (1981-2010) median extent for the corresponding month or day. Other maps show sea ice concentration and anomalies and trends in concentration.  In addition, there are several tools you can use on this website to animate a series of monthly images or to compare anomalies or trends.  You’ll find the Sea Ice Index here:

The Arctic sea ice extent for June 2019 and the latest daily results for 23 July 2019 are shown in the following graphics, which show the rapid shrinkage of the ice pack during the Arctic summer.  NOAA reported that the June 2019 Arctic sea ice extent was 10.5% below the 30-year (1981 – 2010) average.  This is the second smallest June Arctic sea ice extent since satellite records began in 1979.
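The percent-departure figures NOAA quotes are computed against the 1981 – 2010 baseline average. A one-function illustration (the extent values below are placeholders chosen for demonstration, not NOAA's actual numbers):

```python
def extent_anomaly_pct(observed_mkm2: float, baseline_mkm2: float) -> float:
    """Percent departure of sea ice extent from a baseline average.

    Negative values mean the observed extent is below the baseline.
    """
    return (observed_mkm2 - baseline_mkm2) / baseline_mkm2 * 100.0

# Hypothetical extents in millions of km^2, for illustration only:
print(round(extent_anomaly_pct(9.85, 11.0), 1))  # -10.5, i.e. 10.5% below baseline
```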


The monthly Antarctic results for June 2019 and the latest daily results for 23 July 2019 are shown in the following graphics, which show the growth of the Antarctic ice pack during the southern winter season. NOAA reported that the June 2019 Antarctic sea ice extent was 8.5% below the 30-year (1981 – 2010) average.  This is the smallest June Antarctic sea ice extent on record.


I hope you enjoy exploring NOAA’s “State of the Climate” collection of monthly summaries.

India Poised to Become the 4th Nation to Land a Spacecraft on the Moon

Peter Lobner

This post was updated on 31 July 2019

After the failure of Israel’s Beresheet spacecraft to execute a soft landing on the Moon in April 2019, India is the next new contender for lunar soft landing honors with their Chandrayaan-2 spacecraft.  We’ll take a look at the Chandrayaan-2 mission in this post.

If you’re not familiar with Israel’s Beresheet lunar mission, see my 4 April 2019 post at the following link:

1. Background:  India’s Chandrayaan-1 mission to the Moon

India’s first mission to the Moon, Chandrayaan-1, was a mapping mission designed to operate in a circular (selenocentric) polar orbit at an altitude of 100 km (62 mi).  The Chandrayaan-1 spacecraft, which had an initial mass of 1,380 kg (3,040 lb), consisted of two modules, an orbiter and a Moon Impact Probe (MIP). Chandrayaan-1 carried 11 scientific instruments for chemical, mineralogical and photo-geologic mapping of the Moon.  The spacecraft was built in India by the Indian Space Research Organization (ISRO), and included instruments from the USA, UK, Germany, Sweden and Bulgaria.  

Chandrayaan-1 was launched on 22 October 2008 from the Satish Dhawan Space Center (SDSC) in Sriharikota on an “extended” version of the indigenous Polar Satellite Launch Vehicle designated PSLV-XL. Initially, the spacecraft was placed into a highly elliptical geostationary transfer orbit (GTO), and was sent to the Moon in a series of orbit-increasing maneuvers around the Earth over a period of 21 days.  A lunar transfer maneuver enabled the Chandrayaan-1 spacecraft to be captured by lunar gravity and then maneuvered to the intended lunar mapping orbit.   This is similar to the five-week orbital transfer process used by Israel’s Beresheet lunar spacecraft to move from an initial GTO to a lunar circular orbit.
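For a sense of the scale of those orbit-raising maneuvers, the vis-viva equation gives the speed change needed at perigee to lift the apogee. The sketch below estimates the total apogee-raising budget from a GTO out to lunar distance; the perigee altitude and single-burn simplification are illustrative assumptions, not the actual Chandrayaan-1 mission parameters (the real mission split this into multiple perigee burns):

```python
import math

MU_EARTH = 398600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6378.0         # km, mean equatorial radius

def vis_viva(r_km: float, a_km: float) -> float:
    """Orbital speed (km/s) at radius r for an orbit with semi-major axis a."""
    return math.sqrt(MU_EARTH * (2.0 / r_km - 1.0 / a_km))

# Illustrative orbits (assumed values, not actual mission data):
r_perigee = R_EARTH + 300.0        # 300 km perigee altitude
r_apogee_gto = R_EARTH + 35786.0   # standard GTO apogee
r_apogee_moon = 384400.0           # approximate lunar distance

a_gto = (r_perigee + r_apogee_gto) / 2.0
a_transfer = (r_perigee + r_apogee_moon) / 2.0

# Speed change at perigee to raise apogee from GTO to lunar distance:
dv = vis_viva(r_perigee, a_transfer) - vis_viva(r_perigee, a_gto)
print(f"Apogee-raising delta-v: {dv * 1000:.0f} m/s")  # on the order of 700 m/s
```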

The goal of MIP was to make detailed measurements during descent using three instruments: a radar altimeter, a visible imaging camera, and a mass spectrometer known as Chandra’s Altitudinal Composition Explorer (CHACE), which directly sampled the Moon’s tenuous gaseous atmosphere throughout the descent.  On 14 November 2008, the 34 kg (75 lb) MIP separated from the orbiter and descended for 25 minutes while transmitting data back to the orbiter.  MIP’s mission ended with the expected hard landing in the South Pole region near Shackelton crater at 85 degrees south latitude.

In May 2009, controllers raised the orbit to 200 km (124 miles) and the orbiter mission continued until 28 August 2009, when communications with Earth ground stations were lost.  The spacecraft was “found” in 2017 by NASA ground-based radar, still in its 200 km orbit.

Numerous reports have been published describing the detection by the Chandrayaan-1 mission of water in the top layers of the lunar regolith.  The data from CHACE produced a lunar atmosphere profile from orbit down to the surface, and may have detected trace quantities of water in the atmosphere.  You’ll find more information on the Chandrayaan-1 mission at the following links:

2. India’s upcoming Chandrayaan-2 mission to the Moon

Chandrayaan-2 was launched on 22 July 2019.  After achieving a 100 km (62 mile) circular polar orbit around the Moon, a lander module will separate from the orbiting spacecraft and descend to the lunar surface for a soft landing, which currently is expected to occur in September 2019, after a seven-week journey to the Moon.  The target landing area is in the Moon’s southern polar region, where no lunar lander has operated before.  A small rover vehicle will be deployed from the lander to conduct a 14-day mission on the lunar surface.  The orbiting spacecraft is designed to conduct a one-year mapping mission.

Artist’s illustration of India’s lunar lander and the small rover vehicle
on the surface of the moon. Source: ISRO

The launch vehicle

India will launch Chandrayaan-2 using the medium-lift Geosynchronous Satellite Launch Vehicle Mark III (GSLV Mk III) developed and manufactured by ISRO.  As its name implies, GSLV Mk III was developed primarily to launch communication satellites into geostationary orbit.  Variants of this launch vehicle also are used for science missions and a human-rated version is being developed to serve as the launch vehicle for the Indian Human Spaceflight Program.

The GSLV Mk III launch vehicle will place the Chandrayaan-2 spacecraft into an elliptical parking orbit (EPO) from which the spacecraft will execute orbital transfer maneuvers comparable to those successfully executed by Chandrayaan-1 on its way to lunar orbit in 2008.  The Chandrayaan-2 mission profile is shown in the following graphic. You’ll find more information on the GSLV Mk III on the ISRO website at the following link:

Source:  ISRO
GSLV Mk III D2 on the launch pad at SDSC for the launch of the GSAT-29 communications satellite
in 2018. Source:  ISRO via Wikipedia
GSLV Mk III D1 lifting off from the SDSC with the GSAT-19 communications satellite
in 2017. Source:  ISRO via Wikipedia
Transporting the partially integrated GSLV MkIII M1 launch vehicle
 for the Chandrayaan-2 mission on the Mobile Launch Pedestal.  
Source: ISRO

The spacecraft

Chandrayaan-2 builds on the design and operating experience from the previous Chandrayaan-1 mission.  The new spacecraft developed by ISRO has an initial mass of 3,877 kg (8,547 lb).  It consists of three modules: an Orbiter Craft (OC) module, the Vikram Lander Craft (LC) module, and the small Pragyan rover vehicle, which is carried by the LC.  The three modules are shown in the following diagram.

Three spacecraft modules (not to scale).  Source: ISRO

Chandrayaan-2 carries 13 Indian payloads — eight on the orbiter, three on the lander and two on the rover. In addition, the lander carries a passive Laser Retroreflector Array (LRA) provided by NASA. 

Laser Retroreflector Array (LRA). Source: ISRO

The OC and the LC are stacked together within the payload fairing of the launch vehicle and remain stacked until the LC separates in lunar orbit and starts its descent to the lunar surface.

Orbiter (bottom) & lander (top) in stacked configuration.  Source: ISRO

The solar-powered orbiter is designed for a one-year mission to map lunar surface characteristics (chemical, mineralogical, topographical), probe the lunar surface for water ice, and map the lunar exosphere using the CHACE-2 mass spectrometer.  The orbiter also will relay communication between Earth and Vikram lander.

The orbiter.  Source: ISRO

The solar-powered Vikram lander weighs 1,471 kg (3,243 lb).  The scientific instruments on the lander will measure lunar seismicity, measure thermal properties of the lunar regolith in the polar region, and measure near-surface plasma density and its changes with time. 

The Vikram lander with the Pragyan rover on the ramp. Source: ISRO

The 27 kg (59.5 lb) six-wheeled Pragyan rover, whose name means “wisdom” in Sanskrit, is solar-powered and capable of traveling up to 500 meters (1,640 feet) on the lunar surface. The rover can communicate only with the Vikram lander.  It is designed for a 14-day mission on the lunar surface.  It is equipped with cameras and two spectroscopes to study the elemental composition of lunar soil.

Rover during testing. Source: ISRO
Rover details.  Source: ISRO

You’ll find more information on the spacecraft in the 2018 article by V. Sundararajan, “Overview and Technical Architecture of India’s Chandrayaan-2 Mission to the Moon,” at the following link:

Also see the ISRO webpage for the GSLV-Mk III – M1 / Chandrayaan-2 mission at the following link:

Best wishes to the Chandrayaan-2 mission team for a successful soft lunar landing and long-term lunar mapping mission.

Declassified Military Satellite Imagery has Applications in a Wide Variety of Civilian Geospatial Studies

Peter Lobner

1. Overview of US military optical reconnaissance satellite programs

The National Reconnaissance Office (NRO) is responsible for developing and operating space reconnaissance systems and conducting intelligence-related activities for US national security.  NRO developed several generations of classified Keyhole (KH) military optical reconnaissance satellites that have been the primary sources of Earth imagery for the US Department of Defense (DoD) and intelligence agencies.  NRO’s website is here:

NRO’s early generations of Keyhole satellites were placed in low Earth orbits, acquired the desired photographic images on film during relatively short-duration missions, and then returned the film to Earth in small reentry capsules for airborne recovery. After recovery, the film was processed and analyzed.  The first US military optical reconnaissance satellite program, code named CORONA, pioneered the development and refinement of the technologies, equipment and systems needed to deploy an operational orbital optical reconnaissance capability. The first successful CORONA film recovery occurred on 19 August 1960.

Specially modified US Air Force C-119J aircraft recovers a
CORONA film canister in flight.  Source: US Air Force
First reconnaissance picture taken in orbit and successfully recovered on Earth;  taken on 18 August 1960 by a CORONA KH-1 satellite dubbed Discoverer 14.  Image shows the Mys Shmidta airfield in the Chukotka region of the Russian Arctic, with a resolution of about 40 feet (12.2 meters).  Source: Wikipedia

Keyhole satellites are identified by a code word and a “KH” designator, as summarized in the following table.

In 1976, NRO deployed its first electronic imaging optical reconnaissance satellite known as KENNEN KH-11 (renamed CRYSTAL in 1982), which eventually replaced the KH-9, and brought an end to reconnaissance satellite missions requiring film return.  The KH-11 flies long-duration missions and returns its digital images in near real time to ground stations for processing and analysis.  The KH-11, or an advanced version sometimes referred to as the KH-12, is operational today.

US film-return reconnaissance satellites from KH-1 to KH-9 shown to scale
with the KH-11 electronic imaging reconnaissance satellite.  
Credit: Giuseppe De Chiara and The Space Review.

Geospatial intelligence, or GEOINT, is the exploitation and analysis of imagery and geospatial information to describe, assess and visually depict physical features and geographically referenced activities on the Earth. GEOINT consists of imagery, imagery intelligence and geospatial information.  Satellite imagery from Keyhole reconnaissance satellites is an important information source for national security-related GEOINT activities.

The National Geospatial-Intelligence Agency (NGA), which was formed in 2003, has the primary mission of collecting, analyzing, and distributing GEOINT in support of national security.  NGA’s predecessor agencies, with comparable missions, were:

  • National Imagery and Mapping Agency (NIMA), 1996 – 2003
  • National Photographic Interpretation Center (NPIC), a joint project of the Central Intelligence Agency (CIA) and DoD, 1961 – 1996

The NGA’s homepage is at the following link:

The NGA’s webpage for declassified satellite imagery is here:

2. The advent of the US civilian Earth observation programs

Collecting Earth imagery from orbit became an operational US military capability more than a decade before the start of the joint National Aeronautics & Space Administration (NASA) / US Geological Survey (USGS) civilian Landsat Earth observation program.  The first Landsat satellite was launched on 23 July 1972 with two electronic observing systems, both of which had a spatial resolution of about 80 meters (262 feet). 

Since 1972, Landsat satellites have continuously acquired low-to-moderate resolution digital images of the Earth’s land surface, providing long-term data about the status of natural resources and the environment. The current-generation imager on Landsat 9 has a resolution of 30 meters (98 feet) in visible light bands. 

You’ll find more information on the Landsat program on the USGS website here:

3. Declassification of certain military reconnaissance satellite imagery

All military reconnaissance satellite imagery was highly classified until 1995, when some imagery from early defense reconnaissance satellite programs was declassified.  The USGS explains:

“The images were originally used for reconnaissance and to produce maps for U.S. intelligence agencies. In 1992, an Environmental Task Force evaluated the application of early satellite data for environmental studies. Since the CORONA, ARGON, and LANYARD data were no longer critical to national security and could be of historical value for global change research, the images were declassified by Executive Order 12951 in 1995.”

You can read Executive Order 12951 here:

Additional sets of military reconnaissance satellite imagery were declassified in 2002 and 2011 based on extensions of Executive Order 12951.

The declassified imagery is held by the following two organizations:

  • The original film is held by the National Archives and Records Administration (NARA).
  • Duplicate film held in the USGS Earth Resources Observation and Science (EROS) Center archive is used to produce digital copies of the imagery for distribution to users.

The declassified military satellite imagery available in the EROS archive is summarized below:

USGS EROS Archive – Declassified Satellite Imagery – 1 (1960 to 1972)

  • This set of photos, declassified in 1995, consists of more than 860,000 images of the Earth’s surface from the CORONA, ARGON, and LANYARD satellite systems.
  • CORONA image resolution improved from 40 feet (12.2 meters) for the KH-1 to about 6 feet (1.8 meters) for the KH-4B.
  • KH-5 ARGON image resolution was about 460 feet (140 meters).
  • KH-6 LANYARD image resolution was about 6 feet (1.8 meters).

USGS EROS Archive – Declassified Satellite Imagery – 2 (1963 to 1980)

  • This set of photos, declassified in 2002, consists of photographs from the KH-7 GAMBIT surveillance system and KH-9 HEXAGON mapping program.
  • KH-7 image resolution is 2 to 4 feet (0.6 to 1.2 meters).  About 18,000 black-and-white images and 230 color images are available.
  • The KH-9 mapping camera was designed to support mapping requirements and exact positioning of geographical points. Not all KH-9 satellite missions included a mapping camera.  Image resolution is 20 to 30 feet (6 to 9 meters); significantly better than the 98 feet (30 meter) resolution of LANDSAT imagery.  About 29,000 mapping images are available.

USGS EROS Archive – Declassified Satellite Imagery – 3 (1971 to 1984)

  • This set of photos, declassified in 2011, consists of more photographs from the KH-9 HEXAGON mapping program.  Image resolution is 20 to 30 feet (6 to 9 meters).

More information on the declassified imagery resources is available from the USGS EROS Archive – Products Overview webpage at the following link (see heading “Declassified Data”):

4.  Example applications of declassified military reconnaissance satellite imagery

The declassified military reconnaissance satellite imagery provides views of the Earth starting in the early 1960s, more than a decade before civilian Earth observation satellites became operational.  The military reconnaissance satellite imagery, except from ARGON KH-5, is higher resolution than is available today from Landsat civilian Earth observation satellites. The declassified imagery is an important supplement to other Earth imagery sources.  Several example applications of the declassified imagery are described below.

Assessing Aral Sea depletion:

USGS reports: “The Aral Sea once covered about 68,000 square kilometers, a little bigger than the U.S. state of West Virginia. It was the 4th largest lake in the world. It is now only about 10% of the size it was in 1960…..In the 1990s, a dam was built to prevent North Aral water from flowing into the South Aral. It was rebuilt in 2005 and named the Kok-Aral Dam…..The North Aral has stabilized but the South Aral has continued to shrink and become saltier. Up until the 1960s, Aral Sea salinity was around 10 grams per liter, less than one-third the salinity of the ocean. The salinity level now exceeds 100 grams per liter in the South Aral, which is about three times saltier than the ocean.”

On the USGS website, the “Earthshots: Satellite Images of Environmental Change” webpages show the visible changes at many locations on Earth over a 50+ year time period.  The table of contents to the Earthshots webpages is shown below and is at the following link:

USGS Earthshots Table of Contents

For the Aral Sea region, the Earthshots photo sequences start with ARGON KH-5 photos taken in 1964.  Below are three screenshots of the USGS Earthshots pages showing the KH-5 images for the whole Aral Sea, the North Aral Sea region and the South Aral Sea region. You can explore the Aral Sea Earthshots photo sequences at the following link:

Assessing Antarctic ice shelf condition:

In a 7 June 2016 article entitled, “Spy satellites reveal early start to Antarctic ice shelf collapse,” Thomas Sumner reported:

“Analyzing declassified images from spy satellites, researchers discovered that the downhill flow of ice on Antarctica’s Larsen B ice shelf was already accelerating as early as the 1960s and ’70s. By the late 1980s, the average ice velocity at the front of the shelf was around 20 percent faster than in the preceding decades,….”

You can read the complete article on the ScienceNews website here:

Satellite images taken by the ARGON KH-5 satellite have revealed how the accelerated movement that triggered the collapse of the Larsen B ice shelf on the east side of the Antarctic Peninsula began in the 1960s. The declassified images taken by the satellite on 29 August 1963 and 1 September 1963 are pictured right.  
Source: Daily Mail, 10 June 2016

Assessing Himalayan glacier condition:  

In a 19 June 2019 paper, “Acceleration of ice loss across the Himalayas over the past 40 years,” the authors reported on the use of HEXAGON KH-9 mapping camera imagery to improve their understanding of trends affecting the Himalayan glaciers from 1975 to 2016:

“Himalayan glaciers supply meltwater to densely populated catchments in South Asia, and regional observations of glacier change over multiple decades are needed to understand climate drivers and assess resulting impacts on glacier-fed rivers. Here, we quantify changes in ice thickness during the intervals 1975–2000 and 2000–2016 across the Himalayas, using a set of digital elevation models derived from cold war–era spy satellite film and modern stereo satellite imagery.”

“The majority of the KH-9 images here were acquired within a 3-year interval (1973–1976), and we processed a total of 42 images to provide sufficient spatial coverage.”

“We observe consistent ice loss along the entire 2000-km transect for both intervals and find a doubling of the average loss rate during 2000–2016.”

“Our compilation includes glaciers comprising approximately 34% of the total glacierized area in the region, which represents roughly 55% of the total ice volume based on recent ice thickness estimates.”

You can read the complete paper by J. M. Maurer, et al., on the Science Advances website here:

3-D image of the Himalayas derived from HEXAGON KH-9 satellite mapping photographs taken on December 20, 1975. Source:  J. M. Maurer/LDEO

Discovering archaeological sites:

The Center for Advanced Spatial Technologies, a University of Arkansas / U.S. Geological Survey collaboration, has undertaken the CORONA Atlas Project using military reconnaissance satellite imagery to create the “CORONA Atlas & Referencing System”. The current Atlas focuses on the Middle East and a small area of Peru, and is derived from 1,024 CORONA images taken on 50 missions. The Atlas contains 833 archaeological sites.

“In regions like the Middle East, CORONA imagery is particularly important for archaeology because urban development, agricultural intensification, and reservoir construction over the past several decades have obscured or destroyed countless archaeological sites and other ancient features such as roads and canals. These sites are often clearly visible on CORONA imagery, enabling researchers to map sites that have been lost and to discover many that have never before been documented. However, the unique imaging geometry of the CORONA satellite cameras, which produced long, narrow film strips, makes correcting spatial distortions in the images very challenging and has therefore limited their use by researchers.”

Screenshot of the CORONA Atlas showing regions in the Middle East
with data available.

CAST reports that they have “developed methods for efficient orthorectification of CORONA imagery and now provides free public access to our imagery database for non-commercial use. Images can be viewed online and full resolution images can be downloaded in NITF format.”

You can explore the CORONA Atlas & Referencing System here:

Conducting commercial geospatial analytics over a broader period of time:

The firm Orbital Insight, founded in 2013, is one example of the commercial firms that are mining geospatial data and developing valuable information products for a wide range of customers. Orbital Insight reports:

“Orbital Insight turns millions of images into a big-picture understanding of Earth. Not only does this create unprecedented transparency, but it also empowers business and policy decision makers with new insights and unbiased knowledge of socio-economic trends. As the number of Earth-observing devices grows and their data output expands, Orbital Insight’s geospatial analytics platform finds observational truth in an interconnected world. We map out and quantify the world’s complexities so that organizations can make more informed decisions.”

“By applying artificial intelligence to satellite, UAV, and other geospatial data sources, we seek to discover and quantify societal and economic trends on Earth that are indistinguishable to the human eye. Combining this information with terrestrial data, such as mobile and location-based data, unlocks new sources of intelligence.”

The Orbital Insight website is here:

5. Additional reading related to US optical reconnaissance satellites

You’ll find more information on the NRO’s film-return, optical reconnaissance satellites (KH-1 to KH-9) at the following links:

  • Robert Perry, “A History of Satellite Reconnaissance,” Volumes I to V, National Reconnaissance Office (NRO), various dates 1973 – 1974; released under FOIA and available for download on the NASA website, here:

You’ll find details on NRO’s electronic optical reconnaissance satellites (KH-11, KH-12) at the following links:

6. Additional reading related to civilian use of declassified spy satellite imagery


Assessing Aral Sea depletion:

Assessing Antarctic ice sheet condition:

Assessing Himalayan glacier condition:

Discovering archaeological sites:

Remote Sensing Shows the Extent of Flooding from Hurricane Harvey and Other Large Flooding Events

Peter Lobner

Dartmouth Flood Observatory, at the University of Colorado, Boulder, CO, integrates international satellite data to develop a worldwide view of surface water issues, and can provide regional maps that show the extent of flooding in areas of interest. Data from many satellite sources are used, including NASA’s MODIS (Moderate Resolution Imaging Spectrometer) and Landsat, European Space Agency’s (ESA) Sentinel 1, ASI (Agenzia Spaziale Italiana) COSMO-SkyMed, and Canadian Space Agency’s Radarsat 2.

The Dartmouth Flood Observatory homepage is here:

The world view of large flooding events as of 26 August 2017 is shown in the graphic.

Source: Dartmouth Flood Observatory

The following 31 August 2017 maps show the areas in Texas and Louisiana that were flooded by Hurricane Harvey (also known as DFO flood event 4510). Red represents flooded areas, blue represents normal water extent, and dark grey represents urban areas.

Area map. Source: Dartmouth Flood Observatory

Here’s the link to these detailed flooding maps for Hurricane Harvey:

This webpage also provides links to other information sources related to Hurricane Harvey.
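As an illustration only (this is not the Dartmouth Flood Observatory's actual algorithm), a flood-extent map like the one described above can be sketched by differencing water masks from before and after the event: pixels that are water now but not normally are "flooded" (red on the DFO maps), while pixels that are water in both are "normal water extent" (blue).

```python
import numpy as np

def flood_mask(pre_water, post_water):
    """Classify pixels from two boolean water masks.
    flooded = water after the event but not normally;
    normal  = water in both the pre- and post-event masks."""
    flooded = post_water & ~pre_water   # new water -> "flooded" (red)
    normal = pre_water & post_water     # permanent water -> "normal" (blue)
    return flooded, normal

# Toy 2x2 scene: one permanent river pixel, plus overbank flooding
pre = np.array([[True, False], [False, False]])   # river only
post = np.array([[True, True], [True, False]])    # river plus flood water
flooded, normal = flood_mask(pre, post)
print(flooded.sum(), normal.sum())  # 2 flooded pixels, 1 normal-water pixel
```

Real products must also handle cloud cover, radar shadow, and urban areas (the dark grey class), which is why operational maps combine optical and radar sources.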

The Dartmouth Flood Observatory maintains an archive of large flood events from 1985 to present. This archive is accessible online at the following link:

Dartmouth Flood Observatory is a member of the Global Flood Partnership (GFP), which describes itself as, “a cooperation framework between scientific organizations and flood disaster managers worldwide to develop flood observational and modeling infrastructure, leveraging on existing initiatives for better predicting and managing flood disaster impacts and flood risk globally.” For more information on the Global Flood Partnership, visit their homepage and portal at the following links:

Lidar Remote Sensing Helps Archaeologists Uncover Lost City and Temple Complexes in Cambodia

Peter Lobner

In Cambodia, remote sensing is proving to be of great value for looking beneath a thick jungle canopy and detecting signs of ancient civilizations, including temples and other structures, villages, roads, and hydraulic engineering systems for water management. Building on a long history of archaeological research in the region, the Cambodian Archaeological Lidar Initiative (CALI) has become a leader in applying lidar remote sensing technology for this purpose. You’ll find the CALI website at the following link:

Areas in Cambodia surveyed using lidar in 2012 and 2015 are shown in the following map.

Angkor Wat and vicinity. Source: Cambodian Archaeological LIDAR Initiative (CALI)

CALI describes its objectives as follows:

“Using innovative airborne laser scanning (‘lidar’) technology, CALI will uncover, map and compare archaeological landscapes around all the major temple complexes of Cambodia, with a view to understanding what role these complex and vulnerable water management schemes played in the growth and decline of early civilizations in SE Asia. CALI will evaluate the hypothesis that the Khmer civilization, in a bid to overcome the inherent constraints of a monsoon environment, became locked into rigid and inflexible traditions of urban development and large-scale hydraulic engineering that constrained their ability to adapt to rapidly-changing social, political and environmental circumstances.”

Lidar is a surveying technique that creates a 3-dimensional map of a surface by measuring the distance to a target by illuminating the target with laser light. A 3-D map is created by measuring the distances to a very large number of different targets and then processing the data to filter out unwanted reflections (e.g., reflections from vegetation) and build a “3-D point cloud” image of the surface. In essence, lidar removes the surface vegetation, as shown in the following figure, and produces a map with a much clearer view of surface features and topography than would be available from conventional photographic surveys.
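The vegetation-filtering step can be illustrated with a deliberately simple sketch (this is not CALI's actual processing chain): bin the returns into a horizontal grid and keep only the lowest return in each cell as a crude bare-earth estimate. Real lidar pipelines use far more sophisticated ground classification, but the idea is the same.

```python
def ground_filter(points, cell_size=1.0):
    """Crude bare-earth filter: keep the lowest (minimum-z) return
    in each horizontal grid cell.
    points: iterable of (x, y, z) tuples; returns {(ix, iy): z_min}."""
    ground = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        if key not in ground or z < ground[key]:
            ground[key] = z
    return ground

# Toy point cloud: flat terrain near z = 10 with "canopy" returns above it
pts = [(0.2, 0.3, 10.0), (0.6, 0.4, 24.5),   # same cell: ground + canopy
       (1.5, 0.5, 10.2), (1.7, 0.1, 30.1)]
print(ground_filter(pts))  # {(0, 0): 10.0, (1, 0): 10.2}
```

The canopy returns (24.5 m and 30.1 m) are discarded, leaving only the terrain surface, which is what allows the archaeological features to emerge in the CALI images below.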

Lidar sees through vegetation. Source: Cambodian Archaeological LIDAR Initiative

CALI uses a Leica ALS70 lidar instrument. You’ll find the product specifications for the Leica ALS70 at the following link:

CALI conducts its surveys from a helicopter with GPS and additional avionics to help manage navigation on the survey flights and provide helicopter geospatial coordinates to the lidar. The helicopter also is equipped with downward-looking and forward-looking cameras to provide visual photographic references for the lidar maps.

Basic workflow in a lidar instrument is shown in the following diagram.

Lidar instrument workflow. Source: Leica

An example of the resulting point cloud image produced by a lidar is shown below.

Example lidar point cloud. Source: Leica

Here are two views of a site named Choeung Ek; the first is an optical photograph and the second is a lidar view that removes most of the vegetation. I think you’ll agree that structures appear much more clearly in the lidar image.

Choeung Ek, optical photograph. Source: Cambodian Archaeological LIDAR Initiative
Choeung Ek, lidar view. Source: Cambodian Archaeological LIDAR Initiative

An example of a lidar image for a larger site is shown in the following map of the central monuments of the well-researched and mapped site named Sambor Prei Kuk. CALI reported:

“The lidar data adds a whole new dimension though, showing a quite complex system of moats, waterways and other features that had not been mapped in detail before. This is just the central few sq km of the Sambor Prei Kuk data; we actually acquired about 200 sq km over the site and its environs.”

Sambor Prei Kuk lidar image. Source: Cambodian Archaeological LIDAR Initiative

For more information on the lidar archaeological surveys in Cambodia, please refer to the following recent articles:

See the 18 July 2016 article by Annalee Newitz entitled, “How archaeologists found the lost medieval megacity of Angkor,” on the Ars Technica website at the following link:

On the Smithsonian magazine website, see the April 2016 article entitled, “The Lost City of Cambodia,” at the following link:

Also on the Smithsonian magazine website, see the 14 June 2016 article by Jason Daley entitled, “Laser Scans Reveal Massive Khmer Cities Hidden in the Cambodian Jungle,” at the following link:

Exploring Microgravity Worlds

Peter Lobner

1.  Background:

We’re all familiar with scenes of the Apollo astronauts bounding across the lunar surface in the low gravity on the Moon, where gravity (g) is 0.17 of the gravity on the Earth’s surface. Driving the Apollo lunar rover kicked up some dust, but otherwise proved to be a practical means of transportation on the Moon’s surface. While the Moon’s gravity is low relative to Earth, techniques for achieving lunar orbit have been demonstrated by many spacecraft, many soft landings have been made, locomotion on the Moon’s surface with wheeled vehicles has worked well, and there is no risk of flying off into space by accidentally exceeding the Moon’s escape velocity.

There are many small bodies in the Solar System (e.g., dwarf planets, asteroids, comets) where gravity is so low that it creates unique problems for visiting spacecraft and future astronauts. For example:

  • Spacecraft require efficient propulsion systems and precise navigation along complex trajectories to rendezvous with the small body and then move into a station-keeping position or establish a stable orbit around the body.
  • Landers require precise navigation to avoid hazards on the surface of the body (i.e., craters, boulders, steep slopes), land gently in a specific safe area, and not rebound back into space after touching down.
  • Rovers require a locomotion system that is adapted to the specific terrain and microgravity conditions of the body and allows the rover vehicle to move across the surface of the body without risk of being launched back into space by reaction forces.
  • Many asteroids and comets are irregularly shaped bodies, so the surface gravity vector will vary significantly depending on where you are relative to the center of mass of the body.

You will find a long list of known objects in the Solar System, including many with diameters less than 1 km (0.62 mile), at the following link:

You can determine the gravity on the surface of a body in the Solar System using the following equation:

g = G M / r²

where (using metric):

g = acceleration due to gravity on the surface of the body (m/sec²)

G = universal gravitational constant = 6.672 x 10⁻¹¹ m³/kg/sec²

M = mass of the body (kg)

r = radius of the body (which is assumed to be spherical) (m)

You can determine the escape velocity from a body using the following equation:

v = √(2 G M / r)

where v is the escape velocity from the surface of the body (m/sec).

Applying these equations to the Earth and several smaller bodies in the Solar System yields the following results:

g and escape velocity table

Note how weak the gravity is on the small bodies in this table. These are very different conditions than on the surface of the Moon or Mars where the low gravity still allows relatively conventional locomotion.
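The two equations above are easy to check with a short script. The Earth values are standard; the mass and radius used for asteroid 25143 Itokawa (visited by Hayabusa, discussed below) are approximate published figures, included here only for illustration:

```python
import math

G = 6.672e-11  # universal gravitational constant, m^3/kg/sec^2

def surface_gravity(mass_kg, radius_m):
    """g = G*M/r^2 for an assumed spherical body, in m/sec^2."""
    return G * mass_kg / radius_m**2

def escape_velocity(mass_kg, radius_m):
    """v = sqrt(2*G*M/r) for an assumed spherical body, in m/sec."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Earth: M = 5.972e24 kg, r = 6,371 km
print(surface_gravity(5.972e24, 6.371e6))  # ~9.8 m/sec^2
print(escape_velocity(5.972e24, 6.371e6))  # ~11,200 m/sec

# Asteroid 25143 Itokawa (approx.: M ~ 3.5e10 kg, mean r ~ 165 m)
print(surface_gravity(3.5e10, 165.0))      # ~8.6e-5 m/sec^2
print(escape_velocity(3.5e10, 165.0))      # ~0.17 m/sec
```

Note that Itokawa's escape velocity works out to well under walking speed, which is the heart of the locomotion problem discussed in Section 4.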

As noted in my 31 December 2015 post, the “U.S. Commercial Space Launch Competitiveness Act,” which was signed into law on 25 November 2015, opens the way for U.S. commercial exploitation of space, including commercial missions to asteroids and comets.  Let’s take a look at missions to these microgravity worlds and some of the unique issues associated with visiting a microgravity world.

2.  Recent and Current Missions to Asteroids and Comets

There have been several spacecraft that have made a successful rendezvous with one or more small bodies in the Solar System. Several have been fly-by missions. Four spacecraft have flown in close formation with or entered orbit around low-gravity bodies. Three of these missions included landing on (or at least touching) the body, and one returned very small samples to Earth. These missions are:

  • National Aeronautics and Space Administration’s (NASA) NEAR-Shoemaker
  • Japan Aerospace Exploration Agency’s (JAXA) Hayabusa
  • European Space Agency’s (ESA) Rosetta
  • NASA’s Dawn

In addition, China’s Chang’e 2 mission demonstrated its ability to navigate to an asteroid intercept after completing its primary mission in lunar orbit. JAXA’s Hayabusa 2 mission currently is en route to its asteroid rendezvous.

Following is a short synopsis of each of these missions.

NASA’s NEAR-Shoemaker Mission (1996 – 2001): This mission was launched 17 February 1996 and on 27 June 1997 flew by the asteroid 253 Mathilde at a distance of about 1,200 km (746 miles).   On 14 February 2000, the spacecraft reached its destination and entered a near-circular orbit around the asteroid 433 Eros, which is about the size of Manhattan. After completing its survey of Eros, the NEAR spacecraft was maneuvered close to the surface and it touched down on 12 February 2001, after a four-hour descent, during which it transmitted 69 close-up images of the surface. Transmissions continued for a short time after landing. NEAR-Shoemaker was the first man-made object to soft-land on an asteroid.

Asteroid Eros. Source: NASA/JPL/JHUAPL

JAXA’s Hayabusa Mission (2003 – 2010): The Hayabusa spacecraft was launched in May 2003. This solar-powered, ion-driven spacecraft rendezvoused with near-Earth asteroid 25143 Itokawa in mid-September 2005.

Asteroid Itokawa. Source: JAXA

Hayabusa carried the solar-powered MINERVA (Micro/Nano Experimental Robot Vehicle for Asteroid) mini-lander, which was designed to be released close to the asteroid, land softly, and move across the surface using an internal flywheel and braking system to generate the momentum needed to hop in microgravity. However, MINERVA was not captured by the asteroid’s gravity after being released and was lost in deep space.

In November 2005, Hayabusa moved in from its station-keeping position and briefly touched the asteroid to collect surface samples in the form of tiny grains of asteroid material.

Hayabusa in position to obtain samples. Source: JAXA

The spacecraft then backed off and navigated back to Earth using its failing ion thrusters. Hayabusa returned to Earth on 13 June 2010 and the sample-return capsule, with about 1,500 grains of asteroid material, was recovered after landing in the Woomera Test Range in the western Australian desert.

You’ll find a JAXA mission summary briefing at the following link:

ESA’s Rosetta Mission (2004 – present): The Rosetta spacecraft was launched in March 2004 and in August 2014 rendezvoused with and achieved orbit around irregularly shaped comet 67P/Churyumov-Gerasimenko. This comet orbits the Sun outside of Earth’s orbit, between 1.24 and 5.68 AU (astronomical units; 1 AU = average distance from the Earth to the Sun). The size of 67P/Churyumov-Gerasimenko is compared to downtown Los Angeles in the following figure.

Comet 67P/Churyumov-Gerasimenko compared to downtown Los Angeles. Source: ESA

Currently, Rosetta remains in orbit around this comet. The lander, Philae, is on the surface after a dramatic rebounding landing on 12 November 2014. Anchoring devices failed to secure Philae after its initial touchdown. The lander bounced twice and finally came to rest in an unfavorable position after contacting the surface a third time, about two hours after the initial touchdown. Philae was the first vehicle to land on a comet and it briefly transmitted data back from the surface of the comet in November 2014 and again in June – July 2015.

NASA’s Dawn Mission (2007 – present): Dawn was launched on 27 September 2007 and used its ion engine to fly a complex flight path to a 2009 gravitational assist flyby of Mars and then a rendezvous with the large asteroid Vesta (2011 – 2012) in the main asteroid belt.

Dawn approaches Vesta. Source: NASA / JPL Caltech

Dawn spent 14 months in orbit surveying Vesta before departing to its next destination, the dwarf planet Ceres, which also is in the main asteroid belt. On 6 March 2015 Dawn was captured by Ceres’ gravity and entered its initial orbit following the complex trajectory shown in the following diagram.

Dawn captured by Ceres gravity. Source: NASA / JPL Caltech

Dawn is continuing its mapping mission in a circular orbit at an altitude of 385 km (240 miles), circling Ceres every 5.4 hours at an orbital velocity of about 983 kph (611 mph). The Dawn mission does not include a lander.
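The quoted orbit numbers can be sanity-checked with simple circular-orbit kinematics (orbital speed = circumference / period). The Ceres mean radius used here, about 470 km, is an assumed approximate value, not a figure from the text above:

```python
import math

CERES_RADIUS_KM = 470.0   # approximate mean radius of Ceres (assumption)
altitude_km = 385.0       # Dawn's mapping-orbit altitude, from the text
period_hr = 5.4           # orbital period, from the text

orbit_radius_km = CERES_RADIUS_KM + altitude_km
circumference_km = 2 * math.pi * orbit_radius_km
speed_kph = circumference_km / period_hr
print(round(speed_kph))   # ~995 kph, close to the ~983 kph quoted above
```

The small difference from the quoted 983 kph is consistent with the rounded radius and period values used in this back-of-the-envelope check.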

See my 20 March 2015 and 13 Sep 2015 posts for more information on the Dawn mission.

CNSA’s Chang’e 2 extended mission (2010 – present): The China National Space Agency’s (CNSA) Chang’e 2 spacecraft was launched in October 2010 and placed into a 100 km lunar orbit with the primary objective of mapping the lunar surface. After completing this objective in 2011, Chang’e 2 navigated to the Earth-Sun L2 Lagrange point, which is a million miles from Earth in the opposite direction of the Sun. In April 2012, Chang’e 2 departed L2 for an extended mission to asteroid 4179 Toutatis, which it flew by in December 2012.

Asteroid Toutatis. Source: CNSA

JAXA’s Hayabusa 2 Mission (2014 – 2020): The JAXA Hayabusa 2 spacecraft was launched on 3 December 2014. This ion-propelled spacecraft is very similar to the first Hayabusa spacecraft. Its planned arrival date at the target asteroid, 1999 JU3 (Ryugu), is in mid-2018.   As you can see in the following diagram, 1999 JU3 is a substantially larger asteroid than Itokawa.

Hayabusa 1 and 2 target comparison. Source: JAXA

The spacecraft will spend about a year mapping the asteroid using Near Infrared Spectrometer (NIRS3) and Thermal Infrared Imager (TIR) instruments.

Hayabusa 2 includes three solar-powered MINERVA-II mini-landers and one battery-powered MASCOT (Mobile Asteroid Surface Scout) small lander. All landers will be deployed to the asteroid surface from an altitude of about 100 meters (328 feet) so they can be captured by the asteroid’s very weak gravity. The 1.6 – 2.5 kg (3.5 – 5.5 pounds) MINERVA-II landers will deliver imagery and temperature measurements. The 10 kg (22 pound) MASCOT will make measurements of surface composition and properties using a camera, magnetometer, radiometer, and infrared microscope. All landers are expected to make several hops to take measurements at different locations on the asteroid’s surface.

Three MINERVA mini-landers. Source: JAXA

MASCOT small lander. Source: JAXA

For sample collection, Hayabusa 2 will descend to the surface to capture samples of the surface material. A device called a Small Carry-on Impactor (SCI) will be deployed and should impact the surface at about 2 km/sec, creating a small crater to expose material beneath the asteroid’s surface. Hayabusa 2 will attempt to gather a sample of the exposed material. More information about SCI is available at the following link:

At the end of 2019, Hayabusa 2 is scheduled to depart asteroid 1999 JU3 (Ryugu) and return to Earth in 2020 with the collected samples. You will find more information on the Hayabusa 2 mission at the JAXA website at the following links:


3.  Future Missions:

NASA OSIRIS-REx: This NASA mission is expected to launch in September 2016, travel to the near-Earth asteroid 101955 Bennu, map the surface, harvest a sample of surface material, and return the sample to Earth for study. After arriving at Bennu in 2018, the solar-powered OSIRIS-REx spacecraft will map the asteroid surface from a station-keeping distance of about 5 km (3.1 miles) using two primary mapping instruments: the OVIRS Visible and Infrared Spectrometer and the OTES Thermal Emission Spectrometer. Together, these instruments are expected to develop a comprehensive map of Bennu’s mineralogical and molecular components and enable mission planners to target the specific site(s) to be sampled. In 2019, a robotic arm on OSIRIS-REx will collect surface samples during one or more very close approaches, without landing. These samples (60 grams minimum) will be loaded into a small capsule that is scheduled to return to Earth in 2023.

OSIRIS-REx spacecraft. Source: NASA / ASU

For more information on OSIRIS-REx, visit the NASA website at the following link:

and the ASU website at the following link:

NASA Asteroid Redirect Mission (ARM): This mission will involve rendezvousing with a near-Earth asteroid, mapping the surface for about a year, and locating a suitable boulder to be captured [maximum diameter about 4 meters (13.1 feet)]. The ARM spacecraft will land, capture the intended boulder, lift off, and deliver the boulder into a stable lunar orbit during the first half of the next decade. The current reference target is asteroid 2008 EV5.

ARM lander gripping a boulder on an asteroid. Source: NASA

You can find more information on the NASA Asteroid Redirect Mission at the following links:


4. Locomotion in Microgravity

OK, you’ve landed on a small asteroid, your spacecraft has anchored itself to the surface, and now you want to go out and explore. If this is asteroid 2008 EV5, the local gravity is about 1.79E-05 times that of Earth (less than 2/100,000ths of Earth’s gravity) and the escape velocity is about 0.6 mph (1 kph). Just how are you going to move about on the surface without launching yourself on an escape trajectory into deep space?
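These figures can be sanity-checked with a few lines of arithmetic. The sketch below assumes a roughly 400 meter diameter and a 3,000 kg/m³ bulk density for the asteroid; both are illustrative round numbers, not official mission values, but they reproduce the gravity ratio and escape velocity quoted above quite closely.

```python
import math

# Sketch: estimate surface gravity and escape velocity for a small asteroid
# like 2008 EV5. The ~400 m diameter and 3000 kg/m^3 bulk density are
# assumed illustrative values, not official mission figures.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
radius = 200.0         # m (assumed ~400 m diameter)
density = 3000.0       # kg/m^3 (assumed rocky bulk density)

mass = (4.0 / 3.0) * math.pi * radius**3 * density    # kg, uniform sphere
g_surface = G * mass / radius**2                      # m/s^2 at the surface
g_ratio = g_surface / 9.81                            # relative to Earth
v_escape = math.sqrt(2.0 * G * mass / radius)         # m/s

print(f"surface gravity: {g_surface:.2e} m/s^2 ({g_ratio:.2e} of Earth's)")
print(f"escape velocity: {v_escape:.2f} m/s ({v_escape * 3.6:.2f} km/h)")
```

With these assumptions the escape velocity works out to roughly 0.26 m/s, or just under 1 km/h: a careless push-off really could send you into deep space.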

The problems of locomotion in microgravity are discussed in a 7 March 2015 Economist article entitled, “A Lightness of Being.” You can find this article on the Economist website at the following link:

In this article, it is noted that:

“Wheeled and tracked rovers could probably be made to work in gravity as low as a hundredth of that on Earth……But in the far weaker microgravity of small bodies like asteroids and comets, they would fail to get a grip in fine regolith. Wheels also might hover above the ground, spinning hopelessly and using up power. So an entirely different system of locomotion is needed for rovers operating in a microgravity.”

Novel concepts for locomotion in microgravity include:

  • Hoppers / tumblers
  • Structurally compliant rollers
  • Grippers

Hoppers / tumblers: Hoppers are designed to move across a surface using a moving internal mass that can be controlled to transfer momentum to the body of the rover to cause it to tumble or to generate a more dramatic hop, which is a short ballistic trajectory in microgravity. The magnitude of the hop must be controlled so the lander does not exceed escape velocity during a hop. JAXA’s MINERVA-II and MASCOT asteroid landers both are hoppers.

JAXA described the MINERVA-II hopping mechanism as follows:

“MINERVA can hop from one location to another using two DC motors – the first serving as a torquer, rotating an internal mass that leads to a resulting force, sufficient to make the rover hop for several meters. The second motor rotates the table on which the torquer is placed in order to control the direction of the hop. The rover reaches a top speed of 9 centimeters per second, allowing it to hop a considerable distance.”
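The hop described above can be sketched as simple ballistic motion over a flat surface. The gravity value below is an assumed Ryugu-like figure and the 45 degree launch angle is an illustrative choice; real hops would be shorter since the actual launch speed and angle vary.

```python
import math

# Sketch of a microgravity ballistic hop on a flat surface, illustrating why
# MINERVA-II-style hops cover large distances and take minutes. The gravity
# value is an assumed Ryugu-like figure, not an official JAXA number.
g = 1.5e-4                   # m/s^2, assumed asteroid surface gravity
v = 0.09                     # m/s, MINERVA top speed quoted by JAXA
theta = math.radians(45.0)   # assumed launch angle

hop_range = v**2 * math.sin(2.0 * theta) / g   # m, flat-surface ballistic range
hop_time = 2.0 * v * math.sin(theta) / g       # s, time of flight

print(f"hop range: {hop_range:.0f} m, time aloft: {hop_time / 60.0:.1f} min")
```

Even at a 9 cm/sec launch speed, the rover stays aloft for on the order of ten minutes, which is why JAXA describes it as hopping "a considerable distance."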

MINERVA torquer and turntable. Source: JAXA

The MASCOT hopper operates on a different principle:

“With a mass of not even half a gram in the gravitational field of the asteroid, the (MASCOT) lander can easily withstand its initial contact with the surface and several bounces that are expected upon landing. It also means that only small forces are needed to move the lander from point to point. MASCOT’s Mobility System essentially consists of an off-centered mass installed on an eccentric arm that moves that mass to generate momentum that is sufficient to either rotate the lander to face the surface with its instruments or initiate a hop of up to 70 meters to get to the next sampling site.”

MASCOT mobility mechanism. Source: JAXA

You will find a good animation of MASCOT and its Mobility System at the following link:

NASA is examining a class of microgravity rovers called “hedgehogs” that are designed to hop and tumble on microgravity surfaces by spinning and braking a set of three internal flywheels. Cushions or spikes at the corners of the cubic body of a hedgehog protect the body from the terrain and act as feet while hopping and tumbling.

NASA Hedgehog prototype. Source: NASA

Read more on the NASA hedgehog rovers at the following link:

Structurally compliant rollers: One means of “rolling” across a microgravity surface is with a deformable structure that allows the location of the center of mass to be controlled in a way that causes the rover to tip over in the desired direction of motion. NASA is exploring a class of rolling ground rovers called Super Ball Bots, based on a tensegrity toy popularized by R. Buckminster Fuller. NASA explains:

“The Super Ball Bot has a sphere-like matrix of cables and joints that could withstand being dropped from a spacecraft high above a planetary surface and hit the ground with a bounce. Once on the planet, the joints could adjust to roll the bot in any direction while housing a data collecting device within its core.”

NASA Super Ball Bot. Source: NASA

You’ll find a detailed description of the principles behind tensegrity (tensional integrity) in a 1961 R. Buckminster Fuller paper at the following link:

Grippers: Without a grip on a microgravity body, a rover cannot use sampling tools that generate a reaction force on the rover (i.e., drills, grinders, chippers). For such operations to be successful, a rover needs an anchoring system to secure itself and transfer the reaction loads into the microgravity body.

An approach being developed by Jet Propulsion Laboratory (JPL) involves articulated feet with microspine grippers that have a large number of small claws that can grip irregular rocky surfaces.

Microspine gripper. Source: NASA / JPL

Such a gripper could be used to hold a rover in place during mechanical sampling activities or to allow a rover to climb across an irregular surface like a spider.  See more about the operation of the NASA / JPL microspine gripper at the following link:

5. Conclusions

Missions to small bodies in our Solar System are very complex undertakings that require very advanced technologies in many areas, including: propulsion, navigation, autonomous controls, remote sensing, and locomotion in microgravity. The ambitious current and planned future missions will greatly expand our knowledge of these small bodies and the engineering required to operate spacecraft in their vicinity and on their surface.

While commercial exploitation of dwarf planets, asteroids and comets still may sound like science fiction, the technical foundation for such activities is being developed now. It’s hard to guess how much progress will be made in the next decades. However, I’m convinced that the “U.S. Commercial Space Launch Competitiveness Act,” will encourage commercial investments in space exploration and exploitation and lead to much greater progress than if we depended on NASA alone.

The technologies being developed also may lead, in the long term, to effective techniques for redirecting an asteroid or comet that poses a threat to Earth. Such a development would give our Planetary Defense Officer (see my 21 January 2016 post) an actual tool for defending the planet.

Synthetic Aperture Radar (SAR) and Inverse SAR (ISAR) Enable an Amazing Range of Remote Sensing Applications

Peter Lobner

SAR Basics

Synthetic Aperture Radar (SAR) is an imaging radar that operates at microwave frequencies and can “see” through clouds, smoke and foliage to reveal detailed images of the surface below in all weather conditions. Below is a SAR image superimposed on an optical image with clouds, showing how a SAR image can reveal surface details that cannot be seen in the optical image.

Example SAR image. Source: Cassidian radar, Eurimage optical

SAR systems usually are carried on airborne or space-based platforms, including manned aircraft, drones, and military and civilian satellites. Doppler shifts from the motion of the radar relative to the ground are used to electronically synthesize a longer antenna, where the synthetic length (L) of the aperture is equal to: L = v x t, where “v” is the relative velocity of the platform and “t” is the time period of observation. Depending on the altitude of the platform, “L” can be quite long. The time-multiplexed return signals from the radar antenna are electronically recombined to produce the desired images in real-time or post-processed later.
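The aperture-synthesis arithmetic above can be sketched in a few lines. The platform speed, dwell time, wavelength, and slant range below are assumed representative values for a low-Earth-orbit X-band SAR, not the parameters of any specific system, and the resolution formula is the standard approximation for a synthetic aperture of length L.

```python
# Sketch of the aperture-synthesis arithmetic: L = v * t, and the
# approximate azimuth resolution lambda * R / (2 L). All input values are
# assumed representative figures for a low-Earth-orbit X-band SAR.
v = 7500.0           # m/s, platform speed relative to the ground
t = 1.0              # s, observation (integration) time
wavelength = 0.031   # m, X-band (~9.6 GHz)
slant_range = 700e3  # m, distance from radar to the imaged area

L = v * t                                            # synthetic aperture length, m
azimuth_res = wavelength * slant_range / (2.0 * L)   # approx. azimuth resolution, m

print(f"synthetic aperture L = {L:.0f} m")
print(f"approximate azimuth resolution = {azimuth_res:.1f} m")
```

A one-second dwell synthesizes a 7.5 km aperture, yielding meter-class azimuth resolution from 700 km away, something a physical antenna of practical size could never achieve.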

SAR principle. Source: Christian Wolff

This principle of SAR operation was first identified in 1951 by Carl Wiley and patented in 1954 as “Simultaneous Buildup Doppler.”

SAR Applications

There are many SAR applications, so I’ll just highlight a few.

Boeing E-8 JSTARS: The Joint Surveillance Target Attack Radar System is an airborne battle management, command and control, intelligence, surveillance and reconnaissance platform, the prototypes of which were first deployed by the U.S. Air Force during the 1991 Gulf War (Operation Desert Storm). The E-8 platform is a modified Boeing 707 with a 27 foot (8 meter) long, canoe-shaped radome under the forward fuselage that houses a 24 foot (7.3 meter) long, side-looking, multi-mode, phased array antenna that includes a SAR mode of operation. The USAF reports that this radar has a field of view of up to 120 degrees, covering up to 19,305 square miles (50,000 square kilometers).


Lockheed SR-71: This Mach 3 high-altitude reconnaissance jet carried the Advanced Synthetic Aperture Radar System (ASARS-1) in its nose. ASARS-1 had a claimed 1 inch resolution in spot mode at a range of 25 to 85 nautical miles either side of the flight path.  This SAR also could map 20 to 100 nautical mile swaths on either side of the aircraft with lesser resolution.


Northrop RQ-4 Global Hawk: This is a large, multi-purpose, unmanned aerial vehicle (UAV) that can simultaneously carry out electro-optical, infrared, and synthetic aperture radar surveillance as well as high and low band signal intelligence gathering.

Global Hawk. Source: USAF

Below is a representative RQ-4 2-D SAR image that has been highlighted to show passable and impassable roads after severe hurricane damage in Haiti. This is an example of how SAR data can be used to support emergency management.

Global Hawk post-hurricane image of Haiti. Source: USAF

NASA Space Shuttle: The Shuttle Radar Topography Mission (SRTM) used the Space-borne Imaging Radar (SIR-C) and X-Band Synthetic Aperture Radar (X-SAR) to map 140 mile (225 kilometer) wide swaths, imaging most of Earth’s land surface between 60 degrees north and 56 degrees south latitude. Radar antennae were mounted in the Space Shuttle’s cargo bay, and at the end of a deployable 60 meter mast that formed a long-baseline interferometer. The interferometric SAR data was used to generate very accurate 3-D surface profile maps of the terrain.

Shuttle SRTM. Source: NASA / Jet Propulsion Laboratory

An example of SRTM image quality is shown in the following X-SAR false-color digital elevation map of Mt. Cotopaxi in Ecuador.

Shuttle SRTM image. Source: NASA / Jet Propulsion Laboratory

You can find more information on SRTM at the following link:

ESA’s Sentinel satellites: Refer to my 4 May 2015 post, “What Satellite Data Tell Us About the Earthquake in Nepal,” for information on how the European Space Agency (ESA) assisted earthquake response by rapidly generating a post-earthquake 3-D ground displacement map of Nepal using SAR data from multiple orbits (i.e., pre- and post-earthquake) of the Sentinel-1A satellite.  You can find more information on the ESA Sentinel SAR platform at the following link:

You will find more general information on space-based SAR remote sensing applications, including many high-resolution images, in a 2013 European Space Agency (ESA) presentation, “Synthetic Aperture Radar (SAR): Principles and Applications”, by Alberto Moreira, at the following link:

ISAR Basics

ISAR technology uses the relative movement of the target, rather than the emitter, to create the synthetic aperture. The ISAR antenna can be mounted on an airborne platform. Alternatively, one or more ground-based antennae can use ISAR to generate a 2-D or 3-D radar image of an object moving within the field of view.
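The target's own rotation does the work that platform motion does in SAR: as the target turns through a small angle during the radar dwell, it sweeps out a synthetic aperture. A common approximation for the resulting cross-range resolution is lambda / (2 * delta-theta); the wavelength and rotation angle below are assumed illustrative values, not any specific system's parameters.

```python
import math

# Sketch of ISAR cross-range resolution: the target's rotation through a
# small angle during the dwell creates the synthetic aperture. Wavelength
# and rotation angle are assumed illustrative values.
wavelength = 0.03    # m, X-band (~10 GHz)
rotation_deg = 3.0   # degrees of target aspect rotation during the dwell

delta_theta = math.radians(rotation_deg)
cross_range_res = wavelength / (2.0 * delta_theta)   # m

print(f"cross-range resolution: {cross_range_res:.2f} m")
```

Just a few degrees of aspect rotation is enough for sub-meter cross-range resolution at X-band, which is why pitching, rolling ships image so well in ISAR.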

ISAR Applications

Maritime surveillance: Maritime surveillance aircraft commonly use ISAR systems to detect, image and classify surface ships and other objects in all weather conditions. Because the sea, hull, superstructure, and masts have different radar reflection characteristics as a vessel moves on the surface, vessels usually stand out in ISAR images. There can be enough radar information derived from ship motion, including pitching and rolling, to allow the ISAR operator to manually or automatically determine the type of vessel being observed. The U.S. Navy’s new P-8 Poseidon patrol aircraft carry the AN/APY-10 multi-mode radar system, which includes both SAR and ISAR modes of operation.

The principles behind ship classification are described in detail in the 1993 MIT paper, “An Automatic Ship Classification System for ISAR Imagery,” by M. Menon, E. Boudreau and P. Kolodzy, which you can download at the following link:

You can see in the following example ISAR image of a vessel at sea that vessel classification may not be obvious to the casual observer. It is easy to see why an automated vessel classification system is so useful.

Ship ISAR image. Source: Blanco-del-Campo, A. et al.

Imaging Objects in Space: Another ISAR (also called “delayed Doppler”) application is the use of one or more large radio telescopes to generate radar images of objects in space at very long ranges. The process for accomplishing this was described in a 1960 MIT Lincoln Laboratory paper, “Signal Processing for Radar Astronomy,” by R. Price and P.E. Green.

Currently, there are two powerful ground-based radars in the world capable of investigating solar system objects: the National Aeronautics and Space Administration (NASA) Goldstone Solar System Radar (GSSR) in California and the National Science Foundation (NSF) Arecibo Observatory in Puerto Rico. News releases on China’s new FAST radio telescope have not revealed if it also will be able to operate as a planetary radar (see my 18 February 2016 post).

The 230 foot (70 meter) GSSR has an 8.6 GHz (X-band) radar transmitter powered by two 250 kW klystrons. You can find details on GSSR and the techniques used for imaging space objects in the article, “Goldstone Solar System Radar Observatory: Earth-Based Planetary Mission Support and Unique Science Results,” which you can download at the following link:

The 1,000 foot (305 meter) Arecibo Observatory has a 2.38 GHz (S-band) radar transmitter, originally rated at 420 kW when it was installed in 1974, and upgraded in 1997 to 1 MW along with other significant upgrades to improve radio telescope and planetary radar performance. You will find details on the design and upgrades of Arecibo at the following link:

The following examples demonstrate the capabilities of Arecibo Observatory to image small bodies in the solar system.

  • In 1999, this radar imaged the Near-Earth Asteroid 1999 JM8 at a distance of about 5.6 million miles (9 million km) from Earth. The ISAR images of this 1.9 mile (3 km) sized object had a resolution of about 49 feet (15 meters).
  • In November 1999, Arecibo Observatory imaged the tumbling Main-Belt Asteroid 216 Kleopatra. The resulting ISAR images, which made the cover of Science magazine, showed a dumbbell-shaped object with an approximate length of 134.8 miles (217 kilometers) and varying diameters up to 58.4 miles (94 kilometers).

Asteroid 216 Kleopatra radar image. Source: Science
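As a rough sanity check on such delayed-Doppler observations, the Doppler bandwidth of echoes from a rotating body can be estimated from its size and spin period. The diameter below is the approximate 217 km long axis quoted above; the 5.4-hour rotation period is an assumed illustrative value.

```python
import math

# Sketch: Doppler bandwidth of radar echoes from a rotating asteroid, the
# quantity a delayed-Doppler system resolves. Diameter is the ~217 km figure
# quoted above; the 5.4-hour spin period is an assumed illustrative value.
c = 3.0e8                  # m/s, speed of light
f_radar = 2.38e9           # Hz, Arecibo S-band transmitter frequency
wavelength = c / f_radar   # m

diameter = 217e3           # m, approximate long axis of 216 Kleopatra
period = 5.4 * 3600.0      # s, assumed rotation period

v_limb = math.pi * diameter / period    # m/s, equatorial limb speed
bandwidth = 4.0 * v_limb / wavelength   # Hz, limb-to-limb Doppler spread

print(f"limb speed: {v_limb:.1f} m/s, Doppler bandwidth: {bandwidth:.0f} Hz")
```

Under these assumptions the echo is spread over roughly a kilohertz, so dividing that spread into many Doppler bins resolves features across the asteroid's rotating face even though the target is an unresolvable point to the telescope optically.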

More details on the use of Arecibo Observatory to image planets and other bodies in the solar system can be found at the following link:

The NASA / Jet Propulsion Laboratory Asteroid Radar Research website also contains information on the use of radar to map asteroids and includes many examples of asteroid radar images. Access this website at the following link:


In recent years, SAR units have become smaller and more capable as hardware is miniaturized and better integrated. For example, Utah-based Barnard Microsystems offers a miniature SAR for use in lightweight UAVs such as the Boeing ScanEagle. The firm claimed that its two-pound “NanoSAR” radar, shown below, weighed one-tenth as much as the smallest standard SAR (typically 30 – 200 pounds; 13.6 – 90.7 kg) at the time it was announced in March 2008. Because of power limits dictated by the radar circuit boards and the power supply limitations of small UAVs, the NanoSAR has a relatively short range and is intended for tactical use at a typical ScanEagle operational altitude of about 16,000 feet.

Barnard NanoSAR. Source: Barnard Microsystems

ScanEagle UAV. Source: U.S. Marine Corps

Nanyang Technological University, Singapore (NTU Singapore) recently announced that its scientists had developed a miniaturized SAR on a chip, which will allow SAR systems to be made a hundred times smaller than current ones.

NTU’s miniaturized SAR chip. Source: NTU

NTU reports:

“The single-chip SAR transmitter/receiver is less than 10 sq. mm (0.015 sq. in.) in size, uses less than 200 milliwatts of electrical power and has a resolution of 20 cm (8 in.) or better. When packaged into a 3 X 4 X 5-cm (0.9 X 1.2 X 1.5 in.) module, the system weighs less than 100 grams (3.5 oz.), making it suitable for use in micro-UAVs and small satellites.”

NTU estimates that it will be 3 to 6 years before the chip is ready for commercial use. You can read the 29 February 2016 press release from NTU at the following link:

With such a small and hopefully low cost SAR that can be integrated with low-cost UAVs, I’m sure we’ll soon see many new and useful radar imaging applications.