
NASA’s Sonification Project Makes the Universe Audible

Peter Lobner, updated 27 November 2023

In my 2016 post, “Remarkable Multispectral View of Our Milky Way Galaxy,” I started by recalling the following lyrics from the 1968 Moody Blues song “The Word,” by Graeme Edge, from the album “In Search of the Lost Chord”:

This garden universe vibrates complete

Some, we get a sound so sweet

Vibrations reach on up to become light

And then through gamma, out of sight

Between the eyes and ears there lie

The sounds of color and the light of a sigh

And to hear the sun, what a thing to believe

But it’s all around if we could but perceive

To know ultraviolet, infrared and X-rays

Beauty to find in so many ways.

Well, NASA actually has done this through its Sonification Project, which the agency explains as follows:

“Much of our Universe is too distant for anyone to visit in person, but we can still explore it. Telescopes give us a chance to understand what objects in our Universe are like in different types of light. By translating the inherently digital data (in the form of ones and zeroes) captured by telescopes in space into images, astronomers can create visual representations of what would otherwise be invisible to us. But what about experiencing these data with other senses, like hearing? Sonification is the process that translates data into sound. Our new project brings parts of our Milky Way galaxy, and of the greater Universe beyond it, to listeners for the first time. We take actual observational data from telescopes like NASA’s Chandra X-ray Observatory, Hubble Space Telescope or James Webb Space Telescope and translate it into corresponding frequencies that can be heard by the human ear.”
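To make the core idea concrete, here is a minimal sonification sketch in Python. It is not NASA’s actual pipeline (their mappings vary from target to target); it simply maps a one-dimensional slice of data onto audible pitches and writes a WAV file, assuming numpy and scipy are available.

```python
# A minimal sonification sketch: map data values to audible pitch,
# one short tone per data point, and write the result as a WAV file.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100           # audio samples per second
TONE_SECONDS = 0.05           # duration of the tone for each data point
F_LOW, F_HIGH = 220.0, 880.0  # map data onto two octaves (A3 to A5)

def sonify(values: np.ndarray, path: str = "sonification.wav") -> None:
    """Translate a 1-D array of data values into a sequence of tones."""
    # Normalize the data to 0..1, then map it linearly onto the pitch range.
    v = (values - values.min()) / (np.ptp(values) or 1.0)
    freqs = F_LOW + v * (F_HIGH - F_LOW)

    t = np.linspace(0.0, TONE_SECONDS, int(SAMPLE_RATE * TONE_SECONDS),
                    endpoint=False)
    audio = np.concatenate([np.sin(2.0 * np.pi * f * t) for f in freqs])

    # Scale to 16-bit PCM and write the WAV file.
    wavfile.write(path, SAMPLE_RATE, (audio * 32767).astype(np.int16))

# Example: sonify a stand-in brightness profile across one image row.
sonify(np.abs(np.random.randn(200)))
```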

I hope you’ll enjoy NASA’s “Universe of Sound” website, which includes sonifications of more than 20 astronomical targets, each with a description of the target and details on how the sonification was made. Start your audio exploration of the Milky Way galaxy and the Universe beyond here: https://chandra.si.edu/sound/

Good luck trying to pick a favorite.

Many of NASA’s sonifications also are available individually on YouTube.


A UAE Rover Carried by a Japanese Lander Attempted a Moon Landing in April 2023

Peter Lobner, updated 13 September 2023

1. Introduction

To date, only the Soviet Union, the U.S. and China have accomplished soft landings on the Moon, each using a launch vehicle and spacecraft developed within its own national space program.

On 8 October 2020, Sheikh Mohammed bin Rashid announced the formation of the UAE’s lunar rover program, which intends to accomplish the first moon landing for the Arab world using the commercial services of a U.S. SpaceX Falcon 9 launch vehicle and a Japanese ispace lunar landing vehicle named HAKUTO-R. Once on the lunar surface, the UAE’s Rashid rover will be deployed to perform a variety of science and exploration tasks. This mission was launched from Cape Canaveral on 11 December 2022.

Emirates Lunar Mission (ELM) patch. 
Source: MBRSpaceCenter tweet

2. Japan’s ispace HAKUTO-R lunar lander

The Japanese firm ispace, inc. was founded in September 2010, with headquarters in Tokyo, a U.S. office in Denver, CO, and a European office in Luxembourg.  Their website is here: https://ispace-inc.com

ispace’s HAKUTO team was one of six finalist teams competing for the Google Lunar XPRIZE. On 15 December 2017, XPRIZE reported, “Congratulations to Google Lunar XPRIZE Team HAKUTO for raising $90.2 million in Series A funding toward the development of a lunar lander and future lunar missions! This is the biggest investment to date for an XPRIZE team, and sends a strong signal that commercial lunar exploration is on the trajectory to success. One of the main goals of the Google Lunar XPRIZE is to revolutionize lunar exploration by spurring innovation in the private space sector, and this announcement demonstrates that there is strong market interest in innovative robotic solutions for sustainable exploration and development of the Moon. The XPRIZE Foundation looks forward to following Team HAKUTO as they progress toward their lunar mission!”

The Google Lunar XPRIZE was cancelled when it became clear that none of the finalist teams could meet the competition’s schedule, which required a lunar landing in 2018, or its other constraints. Consequently, Team HAKUTO’s lander was not flown on a mission to the Moon.

In April 2021, the Mohammed Bin Rashid Space Center (MBRSC) of the United Arab Emirates (UAE) signed a contract with ispace, under which ispace agreed to provide commercial payload delivery services for the Emirates Lunar Mission. After final testing in Germany, the ispace SERIES-1 (S1) lunar lander was ready in 2022 for the company’s ‘Mission 1,’ as part of its commercial lunar landing services program known as ‘HAKUTO-R’.

HAKUTO-R, aka SERIES-1 (S1), lunar lander general arrangement. 
It is more than 7 feet (2.3 meters) tall. Source: ispace

After its launch on 11 December 2022, the lunar spacecraft has been flying a “low energy” trajectory to the Moon in order to minimize fuel use during the transit and, hence, maximize the available mission payload. It will take nearly five months for the combined lander/rover spacecraft to reach the Moon in April 2023.

The low-energy trajectory being flown for the Emirates Lunar Mission shows spacecraft position (end of blue line, at top) as of 4 March 2023. The spacecraft will enter lunar orbit (yellow circle) in April 2023, before landing on the Moon.
Source: ispace

The primary landing site is the Atlas crater in Lacus Somniorum (Lake of Dreams), a plain of basaltic lava flows located in the northeastern quadrant of the Moon’s near side.

Lake of Dreams is highlighted in the yellow square.
Source: The Lunar Registry
Hakuto-R Mission 1 Moon landing milestones. Source: ispace

If successful, HAKUTO-R will also become the first commercial spacecraft ever to make a controlled landing on the moon.

After landing, the UAE’s Rashid rover will be deployed from the HAKUTO-R lander. In addition, the lander will deploy an orange-sized sphere from the Japanese Space Agency that will transform into a small wheeled robot that will move about on the lunar surface. 

3. UAE’s Rashid lunar rover

The Emirates Lunar Mission (ELM) team at the Mohammed bin Rashid Space Centre (MBRSC) is responsible for designing, manufacturing and developing the rover, which is named Rashid in honor of the late Sheikh Rashid bin Saeed Al Maktoum, former ruler of Dubai. The ELM website is here: https://www.mbrsc.ae/service/emirates-lunar-mission/

The Rashid rover weighs just 22 pounds (10 kilograms). With four-wheel drive, it can traverse a smooth surface at a maximum speed of 10 cm/sec (0.36 kph), climb over obstacles up to 10 cm (3.9 inches) tall, and descend a 20-degree slope.

Rashid rover general arrangement. Source: MBRSC

The Rashid rover is designed to operate on the Moon’s surface for one full lunar day (29.5 Earth days), during which time it will conduct studies of the lunar soil in a previously unexplored area. In addition, the rover will conduct engineering studies of mobility on the lunar surface and susceptibility of different materials to adhesion of lunar particles. The outer rims of this rover’s four wheels incorporate small sample panels to test how different materials cope with the abrasive lunar surface, including four samples contributed by the European Space Agency (ESA).

The diminutive rover carries the following scientific instruments:

  • Two high-resolution optical cameras (Cam-1 & Cam-2), which are expected to take more than 1,000 still images of the Moon’s surface to assess how lunar dust and rocks are distributed.
  • A “microscope” camera.
  • A thermal imaging camera (Cam-T), which will provide data for determining the thermal properties of lunar surface material.
  • Langmuir probes, which will analyze electric charge and electric fields at the lunar surface.
  • An inertial measurement unit to track the motion of the rover.

Mobility and communications tests of the completed rover were conducted in March 2022 in the Dubai desert.

Rashid rover during desert tests. Source: Gulf News (March 2022)

The Ottawa, Ontario company Mission Control Space Services has provided a deep-learning artificial intelligence (AI) system named MoonNet that will be used to identify geologic features seen by the rover’s cameras. Mission Control Space Services reports, “Rashid will capture images of geological features on the lunar terrain and transmit them to the lander and into MoonNet. The output of MoonNet will be transmitted back to Earth and then distributed to science team members… Learning how effectively MoonNet can identify geological features, inform operators of potential hazards and support path planning activities will be key to validating the benefits of AI to support future robotic missions.”

This color-coded image is an example of the type of output the MoonNet AI system is expected to produce.
 Source: Mission Control Space Services
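MoonNet’s architecture has not been published, so the following Python sketch is only illustrative of the general pattern such a system follows: a deep segmentation network takes in an image and returns a per-pixel class map, which can then be color-coded as in the example above. It uses a generic pretrained torchvision model as a stand-in, and the input file name is hypothetical.

```python
# Illustrative pattern only -- not MoonNet itself: image in, per-pixel
# class map out, using a generic pretrained segmentation network.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()  # stand-in network

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def segment(image_path: str) -> torch.Tensor:
    """Return an (H, W) tensor of per-pixel class IDs for one image."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)    # add a batch dimension
    with torch.no_grad():
        logits = model(batch)["out"]        # (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0)  # most likely class per pixel

# class_map = segment("rover_image.png")   # hypothetical input file
```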

4. Landing attempt failed

The Hakuto-R lander crashed into the Moon on 25 April 2023 during its landing attempt.

In May 2023, the results of an ispace analysis of the landing failure were reported by Space.com:

“The private Japanese moon lander Hakuto-R crashed in late April during its milestone landing attempt because its onboard altitude sensor got confused by the rim of a lunar crater. The unexpected terrain feature led the lander’s onboard computer to decide that its altitude measurement was wrong and rely instead on a calculation based on its expected altitude at that point in the mission. As a result, the computer was convinced the probe was lower than it actually was, which led to the crash on April 25.”

“While the lander estimated its own altitude to be zero, or on the lunar surface, it was later determined to be at an altitude of approximately 5 kms [3.1 miles] above the lunar surface,” ispace said in a statement released on Friday (May 26). “After reaching the scheduled landing time, the lander continued to descend at a low speed until the propulsion system ran out of fuel. At that time, the controlled descent of the lander ceased, and it is believed to have free-fallen to the moon’s surface.”
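ispace has not released its flight software, but the failure mode described above amounts to a plausibility gate: a sensor reading that disagrees too sharply with the expected value is rejected in favor of the onboard estimate. This toy Python sketch (all numbers and the tolerance are illustrative assumptions) shows how such logic can go wrong when the expectation itself is bad:

```python
# Toy reconstruction of the described failure mode -- NOT ispace's code.
def select_altitude(measured_km: float, expected_km: float,
                    tolerance_km: float = 1.0) -> float:
    """Return the altitude the guidance computer decides to trust."""
    if abs(measured_km - expected_km) > tolerance_km:
        # Measurement looks implausible -- assume the sensor is wrong
        # and fall back on the computed expectation.
        return expected_km
    return measured_km

# Passing over the crater rim, the (correct) radar reading jumped away
# from the expected descent profile, so it was discarded:
print(select_altitude(measured_km=5.0, expected_km=0.2))  # -> 0.2
# The lander now believes it is nearly on the surface while actually
# about 5 km up -- the scenario described in ispace's statement.
```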

On 23 May 2023, NASA reported that its Lunar Reconnaissance Orbiter spacecraft had located the crash site of the Hakuto-R lander. The before and after views are shown in the following images.

Hakuto-R crash site, before (left) and after (right) the crash. Source: NASA/GSFC/Arizona State University

5. The future

ispace future lunar plans

ispace reported, “ispace’s SERIES-2 (S2) lander is designed, manufactured, and will be launched from the United States. While the S2 lander leverages lessons learned from the company’s SERIES-1 (S1) lander, it is an evolved platform representing our next generation lander series with increased payload capacity, enhanced capabilities and featuring a modular design to accommodate orbital, stationary or rover payloads.”

ispace was selected through the Commercial Lunar Payload Services (CLPS) initiative to deliver NASA payloads to the far side of the Moon using the SERIES-2 (S2) lander, starting in 2025.

UAE future lunar plans

In October 2022, the UAE announced that it was collaborating with China on a second lunar rover mission as part of China’s planned 2026 Chang’e 7 lunar mission, which is targeted to land near the Moon’s south pole. These plans may be cancelled after the U.S. applied export restrictions in March 2023 to the Rashid 2 rover, which contains some US-built components. The U.S. cited its 1976 International Traffic in Arms Regulations (ITAR), which prohibit even the most common US-built items from being launched aboard Chinese rockets.


Exascale Computing is at the Doorstep

Peter Lobner, updated 7 April 2020

The best current supercomputers are “petascale” machines. This term refers to supercomputers capable of performing at least 1.0 petaflops [PFLOPS; 10^15 floating-point operations per second (FLOPS)], and also refers to data storage systems capable of storing at least 1.0 petabyte (PB; 10^15 bytes) of data.

In my 13 November 2018 post, I reported the latest TOP500 ranking of the world’s fastest supercomputers.  The new leaders were two US supercomputers: Summit and Sierra. A year later, in November 2019, they remained at the top of the TOP500 ranking.

  • Summit: The #1 ranked IBM Summit is installed at the Department of Energy’s (DOE) Oak Ridge National Laboratory (ORNL) in Tennessee. It has a LINPACK Benchmark Rmax (maximal achieved performance) rating of 148.6 PFLOPS (1.486 x 10^17 FLOPS) and an Rpeak (theoretical peak performance) rating of 200.8 PFLOPS. Summit’s peak electric power demand is 10.01 MW (megawatts).
  • Sierra: The #2 ranked IBM Sierra is installed at the DOE’s Lawrence Livermore National Laboratory (LLNL) in California. It has an Rmax rating of 94.64 PFLOPS (0.9464 x 10^17 FLOPS) and an Rpeak rating of 125.7 PFLOPS. Sierra’s peak electric power demand is 7.44 MW.
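Two figures of merit fall straight out of these numbers: the fraction of theoretical peak each machine actually achieves (Rmax/Rpeak) and its energy efficiency in FLOPS per watt. A quick Python sketch using only the values quoted above:

```python
# Efficiency arithmetic from the Rmax, Rpeak and power figures above.
machines = {
    # name:   (Rmax PFLOPS, Rpeak PFLOPS, power MW)
    "Summit": (148.6, 200.8, 10.01),
    "Sierra": (94.64, 125.7, 7.44),
}

for name, (rmax, rpeak, mw) in machines.items():
    pct = 100.0 * rmax / rpeak                  # share of peak actually achieved
    gflops_per_watt = rmax * 1e6 / (mw * 1e6)   # PFLOPS -> GFLOPS, MW -> W
    print(f"{name}: {pct:.0f}% of peak, {gflops_per_watt:.1f} GFLOPS/watt")

# Summit: 74% of peak, 14.8 GFLOPS/watt
# Sierra: 75% of peak, 12.7 GFLOPS/watt
```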

The next update of the TOP500 ranking will be in June 2020. Check out their website here to see if the rankings change: https://www.top500.org

New exascale machines are only a year or two away

The next big step up in supercomputing power will be the arrival of “exascale” machines. This term refers to supercomputers capable of performing at least 1.0 exaflops (EFLOPS; 10^18 FLOPS), and also refers to data storage systems capable of storing at least 1.0 exabyte (EB; 10^18 bytes) of data. As you might suspect, there is intense international competition to be the first nation to operate an exascale supercomputer. The main players are the US, China and Japan.

In the US, DOE awarded contracts to build three new exascale supercomputers: 

  • Aurora, announced in March 2019
  • Frontier, announced in May 2019
  • El Capitan, announced in March 2020

In this post, we’ll take a look at these three new supercomputers, each of which will be about ten times faster than the existing TOP500 leaders, Summit and Sierra.

Aurora supercomputer for ANL

The Aurora supercomputer is being built at Argonne National Laboratory (ANL) by the team of Intel (prime contractor) and Cray (subcontractor), under a contract valued at more than $500 million. 

Aurora supercomputer concept drawing.
Source: DOE / Argonne National Laboratory

The computer architecture is based on the Cray “Shasta” system and Intel’s Xeon Scalable processor, Xe compute architecture, Optane Datacenter Persistent Memory, and oneAPI software. Those Cray and Intel technologies will be integrated into more than 200 Shasta cabinets, all connected by Cray’s Slingshot interconnect and associated software stack.

Aurora is expected to come online by the end of 2021 and likely will be the first exascale supercomputer in the US.  It is being designed for sustained performance of one exaflops.  An Argonne spokesman stated, “This platform is designed to tackle the largest AI (artificial intelligence) training and inference problems that we know about.”

For more information on the Aurora supercomputer, see the 18 March 2019 ANL press release here:  https://www.anl.gov/article/us-department-of-energy-and-intel-to-deliver-first-exascale-supercomputer

Frontier supercomputer for ORNL

The Frontier supercomputer is being built at ORNL by the team of Cray (prime contractor) and Advanced Micro Devices, Inc. (AMD, subcontractor), under a contract valued at about $600 million.

Frontier supercomputer concept drawing.
Source:  DOE / Oak Ridge National Laboratory

The computer architecture is based on the Cray “Shasta” system and will consist of more than 100 Cray Shasta cabinets with high density “compute blades” that support a 4:1 GPU to CPU ratio using AMD EPYC processors (CPUs) and Radeon Instinct GPU accelerators purpose-built for the needs of exascale computing. Cray and AMD are co-designing and developing enhanced GPU programming tools.  

Frontier is expected to come online in 2022, after Aurora, but should be more powerful, with a rating of 1.5 exaflops. Frontier will find uses in deep learning, machine learning and data analytics for applications ranging from manufacturing to human health.

For more information on the Frontier supercomputer, see the 7 May 2019 ORNL press release here:  https://www.ornl.gov/news/us-department-energy-and-cray-deliver-record-setting-frontier-supercomputer-ornl

El Capitan supercomputer for NNSA Labs

The El Capitan supercomputer, announced in March 2020, will be built at LLNL by the team of Hewlett Packard Enterprise (HPE) and AMD under a $600 million contract.  El Capitan is funded by the DOE’s National Nuclear Security Administration (NNSA) under their Advanced Simulation and Computing (ASC) program.  The primary users will be the three NNSA laboratories:  LLNL, Sandia National Laboratories and Los Alamos National Laboratory.  El Capitan will be used to perform complex predictive modeling and simulation to support NNSA’s nuclear weapons life extension programs (LEPs), which address aging weapons management, stockpile modernization and other matters.  

El Capitan supercomputer concept drawing.
Source:  Hewlett Packard Enterprise

El Capitan’s peak performance is expected to exceed 2 exaflops, making it about twice as fast as Aurora and about 30% faster than Frontier.

LLNL describes the El Capitan hardware as follows:  “El Capitan will be powered by next-generation AMD EPYC processors, code-named ‘Genoa’ and featuring the ‘Zen 4’ processor core, next-generation AMD Radeon Instinct GPUs based on a new compute-optimized architecture for workloads including HPC and AI, and the AMD Radeon Open Compute platform (ROCm) heterogeneous computing software.”  

NNSA’s El Capitan is expected to come online in 2023 at LLNL, about a year after ANL’s Aurora and ORNL’s Frontier.

For more information on the El Capitan supercomputer, see the 5 March 2020 LLNL press release here: https://www.llnl.gov/news/llnl-and-hpe-partner-amd-el-capitan-projected-worlds-fastest-supercomputer

Hewlett Packard Enterprise acquires Cray in May 2019

On 17 May 2019, Hewlett Packard Enterprise (HPE) announced that it had acquired Cray, Inc. for about $1.3 billion. The following charts from the November 2018 TOP500 report give some interesting insight into HPE’s rationale for acquiring Cray. In the Vendors’ System Share chart, HPE and Cray each have a 9 – 9.6% share of the market based on the number of installed TOP500 systems. In the Vendors’ Performance Share chart, the aggregate installed performance of Cray systems far exceeds that of a similar number of lower-end HPE systems (25.5% vs. 7.3%). The Cray product line fits above the existing HPE product line, and the acquisition should enable HPE to compete directly with IBM in the supercomputer market. HPE reported that it sees a growing market for exascale computing, where the primary US customers are government laboratories.

The March 2020 award of NNSA’s El Capitan supercomputer to the HPE and AMD team seems to indicate that HPE made a good decision in their 2019 acquisition of Cray.

TOP500 ranking of supercomputer vendors, Nov 2018
Source:  https://www.top500.org
 

Meanwhile in China:

On 19 May 2019, the South China Morning Post reported that China is making a multi-billion dollar investment to retake the lead in supercomputer power. In the near-term (possibly in 2019), the newest Shuguang supercomputers are expected to operate about 50% faster than the US Summit supercomputer. This should put the new Chinese supercomputers in the Rmax = 210 – 250 PFLOPS range.

In addition, China is expected to have its own exascale supercomputer operating in 2020, a year ahead of the first US exascale machine, with most, if not all, of the hardware and software being developed in China.  This computer will be installed at the Center of the Chinese Academy of Sciences (CAS) in Beijing.

You’ll find a description of China’s three exascale prototypes installed in 2018 and a synopsis of what is known about the first exascale machine on the TOP500 website at the following link: https://www.top500.org/news/china-spills-details-on-exascale-prototypes/

Where to next?

Why, zettascale, of course. These will be supercomputers performing at least 1.0 zettaflops (ZFLOPS; 10^21 FLOPS), while consuming about 100 megawatts (MW) of electrical power.
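That 100 MW power target implies a striking leap in energy efficiency. A quick check against Summit’s figures from earlier in this post:

```python
# Required efficiency: 1 ZFLOPS in a 100 MW power envelope,
# compared with Summit (148.6 PFLOPS at 10.01 MW).
zetta_flops_per_watt = 1e21 / 100e6         # 1e13 FLOPS/W = 10 TFLOPS/watt
summit_flops_per_watt = 148.6e15 / 10.01e6  # about 1.5e10 FLOPS/watt

print(zetta_flops_per_watt / summit_flops_per_watt)  # roughly 670x better
```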

Check out the December 2018 article by Tiffany Trader, “Zettascale by 2035? China thinks so,” at the following link: https://www.hpcwire.com/2018/12/06/zettascale-by-2035/

Deep Learning Has Gone Mainstream

Peter Lobner

The 28 September 2016 article by Roger Parloff, entitled, “Why Deep Learning is Suddenly Changing Your Life,” is well worth reading to get a general overview of the practical implications of this subset of artificial intelligence (AI) and machine learning. You’ll find this article on the Fortune website at the following link:

http://fortune.com/ai-artificial-intelligence-deep-machine-learning/

Here, the relationship between AI, machine learning and deep learning is put in perspective, as shown in the following table.

Definition of deep learning. Source: Fortune

This article also includes a helpful timeline illustrating the long history of technical development, from 1958 to today, that has led to the modern technology of deep learning.

Another overview article worth your time is by Robert D. Hof, entitled “Deep Learning: With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.” This article is in the MIT Technology Review, which you will find at the following link:

https://www.technologyreview.com/s/513696/deep-learning/

As noted in both articles, we’re seeing the benefits of deep learning technology in the remarkable improvements in image and speech recognition systems that are being incorporated into modern consumer devices and vehicles, and less visibly, in military systems. For example, see my 31 January 2016 post, “Rise of the Babel Fish,” for a look at two competing real-time machine translation systems: Google Translate and ImTranslator.

The rise of deep learning has depended on two key technologies:

Deep neural nets: These are layers of neural nets that progressively build up the complexity needed for real-time image and speech recognition. Robert D. Hof explains: “The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects… Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments…”
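The layered structure Hof describes maps directly onto how such networks are written in code. This PyTorch sketch (a toy model, not any production system) stacks three convolutional layers so that each one consumes the features recognized by the layer before it:

```python
# A toy deep net: each layer feeds the next, building up from simple
# features (edges) toward more complex combinations (parts of objects).
import torch
import torch.nn as nn

class TinyDeepNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: edges, blobs
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: corners, textures
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # layer 3: object parts
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)                  # (N, 64, 1, 1)
        return self.classifier(h.flatten(1))  # (N, num_classes)

logits = TinyDeepNet()(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
```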

Big data: Roger Parloff reported: “Although the Internet was awash in it (data), most data—especially when it came to images—wasn’t labeled, and that’s what you needed to train neural nets. That’s where Fei-Fei Li, a Stanford AI professor, stepped in. ‘Our vision was that big data would change the way machine learning works,’ she explains in an interview. ‘Data drives learning.’

In 2007 she launched ImageNet, assembling a free database of more than 14 million labeled images. It went live in 2009, and the next year she set up an annual contest to incentivize and publish computer-vision breakthroughs.

In October 2012, when two of Hinton’s students won that competition, it became clear to all that deep learning had arrived.”
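Labeled data enters the picture in the training loop: the loss function compares the network’s prediction against a human-supplied label, and that error is what drives the weight updates. A minimal, self-contained sketch, with stand-in tensors in place of real ImageNet batches:

```python
# One supervised training step; random tensors stand in for labeled images.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # tiny classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)     # a batch of stand-in "images"
labels = torch.randint(0, 10, (8,))    # the human-supplied labels

loss = loss_fn(model(images), labels)  # error relative to the labels...
optimizer.zero_grad()
loss.backward()
optimizer.step()                       # ...drives the weight update
```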

The combination of these technologies has produced very rapid improvements in image and speech recognition and their deployment in marketable products and services. Typically, the latest capabilities appear at the top of a market and then rapidly proliferate down to its lower-priced end.

For example, Tesla cars include a camera system capable of identifying lane markings, obstructions, animals and much more, including reading signs, detecting traffic lights, and determining road composition. On a recent trip in Europe, I had a much more modest Ford Fusion with several of these image recognition and associated alerting capabilities. You can see a Wall Street Journal video on how Volvo is incorporating kangaroo detection and alerting into its latest models for the Australian market:

https://ca.finance.yahoo.com/video/playlist/autos-on-screen/kangaroo-detection-help-cars-avoid-220203668.html

I believe the first Teslas in Australia incorrectly identified kangaroos as dogs. Within days, the Australian Teslas were updated remotely with the capability to correctly identify kangaroos.

Regarding the future, Robert D. Hof noted: “Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, ‘deep learning has reignited some of the grand challenges in artificial intelligence.’”

Actually, I think there’s more to the story of what potentially lies beyond the demonstrated capabilities of deep learning in the areas of speech and image recognition. If you’ve read Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy,” you already have had a glimpse of that future, in which the great computer, Deep Thought, was asked for “the answer to the ultimate question of life, the universe and everything.” Surely, this would be the ultimate test of deep learning.

Asking the ultimate question of the great computer Deep Thought. Source: BBC / The Hitchhiker’s Guide to the Galaxy

In case you’ve forgotten the answer, either of the following two videos will refresh your memory.

From the original 1981 BBC TV serial (12:24 min):

https://www.youtube.com/watch?v=cjEdxO91RWQ

From the 2005 movie (2:42 min):

https://www.youtube.com/watch?v=aboZctrHfK8