The PISA 2015 Report Provides an Insightful International Comparison of U.S. High School Student Performance

Peter Lobner

In early December 2016, the U.S. Department of Education’s Institute of Education Sciences (IES), through its National Center for Education Statistics (NCES), issued a report entitled, “Performance of U.S. 15-Year-Old Students in Science, Reading, and Mathematics Literacy in an International Context: First Look at PISA 2015.”

PISA 2015 First Look cover

The NCES describes PISA as follows:

“The Program for International Student Assessment (PISA) is a system of international assessments that allows countries to compare outcomes of learning as students near the end of compulsory schooling. PISA core assessments measure the performance of 15-year old students in science, reading and mathematics literacy every 3 years. Coordinated by the Organization for Economic Cooperation and Development (OECD), PISA was first implemented in 2000 in 32 countries. It has since grown to 73 educational systems in 2015. The United States has participated in every cycle of PISA since its inception in 2000. In 2015, Massachusetts, North Carolina and Puerto Rico also participated separately from the nation. Of these three, Massachusetts previously participated in PISA 2012.”

In each country, the schools participating in PISA are randomly selected, with the goal that the sample of students selected for the examination is representative of a broad range of backgrounds and abilities. About 540,000 students participated in PISA 2015, including about 5,700 students from U.S. public and private schools. All participants were rated on a 1,000-point scale.

The authors describe the contents of the PISA 2015 report as follows:

“The report includes average scores in the three subject areas; score gaps across the three subject areas between the top (90th percentile) and low performing (10th percentile) students; the percentages of students reaching selected PISA proficiency levels; and trends in U.S. performance in the three subjects over time.”

You can download the report from the NCES website at the following link:

https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2017048

In the three subject areas assessed by PISA 2015, key U.S. results include the following:

  • Math:
    • U.S. students ranked 40th (out of 73) in math
    • U.S. average score was 470, which is below the international average of 490
    • 29% of U.S. students did not meet the baseline proficiency for math
    • 6% of U.S. students scored in the highest proficiency range for math
    • U.S. average math scores have been declining over the last two PISA cycles since 2009
  • Science:
    • U.S. ranked 25th in science
    • U.S. average was 496, which is very close to the international average of 493
    • 20% of U.S. students did not meet the baseline proficiency for science
    • 9% of U.S. students scored in the highest proficiency range for science
    • U.S. average science scores have been flat over the last two PISA cycles since 2009
  • Reading:
    • U.S. ranked 24th in reading
    • U.S. average was 497, which is very close to the international average of 493
    • 19% of U.S. students did not meet the baseline proficiency for reading
    • 10% of U.S. students scored in the highest proficiency range for reading
    • U.S. average reading scores have been flat over the last two PISA cycles since 2009

In comparison, students in the small nation of Singapore were the top performers in all three subject areas, recording the following results in PISA 2015:

  • Math: 564
  • Science: 556
  • Reading: 535
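
To make the gaps concrete, here is a minimal Python sketch (using only the scores quoted above; the tabulation and subtraction are mine) that lists each subject’s U.S. average, the international average, and the resulting gaps, including Singapore’s margin over the U.S.:

```python
# Average scores as quoted above from the PISA 2015 "First Look" report.
us_scores = {"Math": 470, "Science": 496, "Reading": 497}
intl_avgs = {"Math": 490, "Science": 493, "Reading": 493}
singapore = {"Math": 564, "Science": 556, "Reading": 535}

print(f"{'Subject':<10}{'U.S.':>6}{'Intl avg':>10}{'U.S.-Intl':>11}{'Singapore-U.S.':>16}")
for subject, us in us_scores.items():
    gap_intl = us - intl_avgs[subject]   # negative means below the international average
    gap_sgp = singapore[subject] - us    # Singapore's margin over the U.S.
    print(f"{subject:<10}{us:>6}{intl_avgs[subject]:>10}{gap_intl:>+11}{gap_sgp:>16}")
```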

Japan, South Korea, Canada, Germany, New Zealand, Australia, Hong Kong (China), Estonia, and the Netherlands were among the countries that consistently beat the U.S. in all three subject areas.

China significantly beat the U.S. in math and science and was about the same in reading. Russia significantly beat the U.S. in math, but was a bit behind in science and reading.

Numerous articles have been written on the declining math performance and only average science and reading performance of the U.S. students who participated in PISA 2015. Representative articles include:

US News: 6 December 2016 article, “Internationally, U.S. Students are Failing”

http://www.usnews.com/news/politics/articles/2016-12-06/math-a-concern-for-us-teens-science-reading-flat-on-test

Washington Post: 6 December 2016, “On the World Stage, U.S. Students Fall Behind”

https://www.washingtonpost.com/local/education/on-the-world-stage-us-students-fall-behind/2016/12/05/610e1e10-b740-11e6-a677-b608fbb3aaf6_story.html?utm_term=.c33931c67010

I think the authors of these articles are correct: the U.S. educational system is failing to develop high school students who, on average, will be able to compete effectively with many of their international peers in a knowledge-based world economy.

Click the link to the PISA 2015 report (above) and read about the international test results for yourself.

Visualize the Effects of a Nuclear Explosion in Your Neighborhood

Peter Lobner

The Restricted Data blog, run by Alex Wellerstein, is a very interesting website that focuses on nuclear weapons history and nuclear secrecy issues. Alex Wellerstein explains the origin of the blog:

“For me, ‘Restricted Data’ represents all of the historical strangeness of nuclear secrecy, where the shock of the bomb led scientists, policymakers, and military men to construct a baroque and often contradictory system of knowledge control in the (somewhat vain) hope that they could control the spread and use of nuclear technology.”

You can access the home page of this blog at the following link:

http://blog.nuclearsecrecy.com/about-the-blog/

From there, navigation to recent posts and blog categories is simple. Among the features of this blog is a visualization tool called NUKEMAP. With this visualization tool, you can examine the effects of a nuclear explosion on a target of your choice, with results presented on a Google map. The setup for an analysis is simple, requiring only the following basic parameters:

  • Target (move the marker on the Google map)
  • Yield (in kilotons)
  • Set for airburst or surface burst

You can select “other effects” if you wish to calculate casualties and/or display the fallout pattern. Advanced options let you set additional parameters, including details of an airburst.

To illustrate the use of this visualization tool, consider the following scenario: A 10 kiloton nuclear device is being smuggled into the U.S. on a container ship and is detonated before docking in San Diego Bay. The problem setup and results are shown in the following screenshots from the NUKEMAP visualization tool.

NUKEMAP screenshots showing the problem setup and results. Source: NUKEMAP

Among the “Advanced options” are selectable settings for the effects you want to display on the map. The effects radii increase considerably when you select lower effects limits.
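
For a rough sense of how those effects radii change with yield, blast-effect distances grow approximately with the cube root of yield. This is a textbook rule of thumb, not NUKEMAP’s internal model, and the reference radius in the following minimal Python sketch is an assumed, illustrative value only:

```python
def scaled_radius_km(yield_kt, ref_radius_km=0.6, ref_yield_kt=1.0):
    """Cube-root scaling rule of thumb: blast-effect radii grow as yield**(1/3).
    ref_radius_km is an assumed, illustrative 5 psi radius for a 1 kt burst,
    NOT a value taken from NUKEMAP."""
    return ref_radius_km * (yield_kt / ref_yield_kt) ** (1.0 / 3.0)

# Compare the 10 kiloton smuggled-device scenario above with military-scale warheads.
for kt in (10, 300, 1000):
    print(f"{kt:>5} kt -> ~{scaled_radius_km(kt):.1f} km radius (5 psi, illustrative only)")
```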

So, there you have it. NUKEMAP is a sobering visualization tool for a world where the possibility of an isolated act of nuclear terrorism cannot be ruled out. If these results bother you, I suggest that you don’t re-do the analysis with military-scale (hundreds of kilotons to megatons) airburst warheads.

Current Status of the Fukushima Daiichi Nuclear Power Station (NPS)

Peter Lobner

On 11 March 2011, a severe offshore earthquake and the massive tsunami that followed severely damaged the Fukushima Daiichi NPS and the surrounding towns. The damage to the NPS, primarily from the effects of tsunami flooding, resulted in severe fuel damage in the operating Units 1, 2 and 3, and hydrogen explosions in Units 1, 3 and 4. In response to the release of radioactive material from the NPS, the Japanese government ordered the local population to evacuate. You’ll find more details on the Fukushima Daiichi reactor accidents in my 18 January 2012 Lyncean presentation (Talk #69), which you can access at the following link:

https://lynceans.org/talk-69-11812/

On 1 September 2016, Tokyo Electric Power Company Holdings, Inc. (TEPCO) issued a video update describing the current status of recovery and decommissioning efforts at the Fukushima Daiichi NPS, including several side-by-side views contrasting the immediate post-accident condition of a particular unit with its current condition. Following is one example showing Unit 3.

Fukushima Daiichi Unit 3, from TEPCO’s 1 September 2016 video update. Source: TEPCO

You can watch this TEPCO video at the following link:

http://www.tepco.co.jp/en/news/library/archive-e.html?video_uuid=kc867112&catid=69631

This video is part of the TEPCO Photos and Videos Library, which includes several earlier videos on the Fukushima Daiichi NPS as well as videos on other nuclear plants owned and operated by TEPCO (Kashiwazaki-Kariwa and Fukushima Daini) and other TEPCO activities. TEPCO estimates that recovery and decommissioning activities at the Fukushima Daiichi NPS will continue for 30 – 40 years.

An excellent summary article by Will Davis, entitled, “TEPCO Updates on Fukushima Daiichi Conditions (with video),” was posted on 30 September 2016 on the ANS Nuclear Café website at the following link:

http://ansnuclearcafe.org/2016/09/30/tepco-updates-on-fukushima-daiichi-conditions-with-video/

For additional resources related to the Fukushima Daiichi accident, recovery efforts, and lessons learned, see my following posts on Pete’s Lynx:

  • 20 May 2016: Fukushima Daiichi Current Status and Lessons Learned
  • 22 May 2015: Reflections on the Fukushima Daiichi Nuclear Accident
  • 8 March 2015: Scientists Will Soon Use Natural Cosmic Radiation to Peer Inside Fukushima’s Mangled Reactor

Lidar Remote Sensing Helps Archaeologists Uncover Lost City and Temple Complexes in Cambodia

Peter Lobner

In Cambodia, remote sensing is proving to be of great value for looking beneath a thick jungle canopy and detecting signs of ancient civilizations, including temples and other structures, villages, roads, and hydraulic engineering systems for water management. Building on a long history of archaeological research in the region, the Cambodian Archaeological Lidar Initiative (CALI) has become a leader in applying lidar remote sensing technology for this purpose. You’ll find the CALI website at the following link:

http://angkorlidar.org

Areas in Cambodia surveyed using lidar in 2012 and 2015 are shown in the following map.

Areas surveyed by lidar, Angkor Wat and vicinity. Source: Cambodian Archaeological Lidar Initiative (CALI)

CALI describes its objectives as follows:

“Using innovative airborne laser scanning (‘lidar’) technology, CALI will uncover, map and compare archaeological landscapes around all the major temple complexes of Cambodia, with a view to understanding what role these complex and vulnerable water management schemes played in the growth and decline of early civilizations in SE Asia. CALI will evaluate the hypothesis that the Khmer civilization, in a bid to overcome the inherent constraints of a monsoon environment, became locked into rigid and inflexible traditions of urban development and large-scale hydraulic engineering that constrained their ability to adapt to rapidly-changing social, political and environmental circumstances.”

Lidar is a surveying technique that creates a three-dimensional map of a surface by illuminating targets with laser light and measuring the distance to each target from the round-trip travel time of the reflected pulse. A 3-D map is created by measuring the distances to a very large number of points and then processing the data to filter out unwanted reflections (e.g., reflections from vegetation) and build a “3-D point cloud” image of the surface. In essence, lidar removes the surface vegetation, as shown in the following figure, and produces a map with a much clearer view of surface features and topography than would be available from conventional photographic surveys.

Lidar “sees” through the vegetation. Source: Cambodian Archaeological Lidar Initiative (CALI)
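
As a rough illustration of the two ideas just described (time-of-flight ranging and filtering out vegetation returns), here is a minimal Python sketch. The sample points and the keep-the-lowest-return-per-cell filter are simplifications for illustration only; real lidar processing pipelines, including CALI’s, are far more sophisticated.

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_time_of_flight(t_round_trip_s):
    """Distance to a target from the round-trip travel time of a laser pulse."""
    return C * t_round_trip_s / 2.0

def bare_earth(points, cell_size=5.0):
    """Crude vegetation filter: keep only the lowest return in each horizontal grid cell."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        if key not in lowest or z < lowest[key][2]:
            lowest[key] = (x, y, z)
    return sorted(lowest.values())

# Toy point cloud (x, y, z in meters): canopy returns sit well above ground returns.
points = [(10.2, 4.1, 312.5), (10.4, 4.3, 298.1),
          (25.0, 7.8, 305.0), (25.1, 7.9, 297.6),
          (40.3, 9.0, 299.2)]

print(f"1.0 microsecond round trip -> {range_from_time_of_flight(1e-6):.1f} m")
print("Bare-earth points:", bare_earth(points))
```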

CALI uses a Leica ALS70 lidar instrument. You’ll find the product specifications for the Leica ALS70 at the following link:

http://w3.leica-geosystems.com/downloads123/zz/airborne/ALS70/brochures/Leica_ALS70_6P_BRO_en.pdf

CALI conducts its surveys from a helicopter with GPS and additional avionics to help manage navigation on the survey flights and provide helicopter geospatial coordinates to the lidar. The helicopter also is equipped with downward-looking and forward-looking cameras to provide visual photographic references for the lidar maps.

Basic workflow in a lidar instrument is shown in the following diagram.

Lidar instrument workflow. Source: Leica Geosystems

An example of the resulting point cloud image produced by a lidar is shown below.

Example lidar point cloud. Source: Leica Geosystems

Here are two views of a site named Choeung Ek; the first is an optical photograph and the second is a lidar view that removes most of the vegetation. I think you’ll agree that structures appear much more clearly in the lidar image.

Choeung Ek: optical photograph (top) and lidar image (bottom). Source: Cambodian Archaeological Lidar Initiative (CALI)

An example of a lidar image for a larger site is shown in the following map of the central monuments of the well-researched and mapped site named Sambor Prei Kuk. CALI reported:

“The lidar data adds a whole new dimension though, showing a quite complex system of moats, waterways and other features that had not been mapped in detail before. This is just the central few sq km of the Sambor Prei Kuk data; we actually acquired about 200 sq km over the site and its environs.”

Lidar image of the central monuments at Sambor Prei Kuk. Source: Cambodian Archaeological Lidar Initiative (CALI)

For more information on the lidar archaeological surveys in Cambodia, please refer to the following recent articles:

See the 18 July 2016 article by Annalee Newitz entitled, “How archaeologists found the lost medieval megacity of Angkor,” on the arsTECHNICA website at the following link:

http://arstechnica.com/science/2016/07/how-archaeologists-found-the-lost-medieval-megacity-of-angkor/?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter

On the Smithsonian magazine website, see the April 2016 article entitled, “The Lost City of Cambodia,” at the following link:

http://www.smithsonianmag.com/history/lost-city-cambodia-180958508/?no-ist

Also on the Smithsonian magazine website, see the 14 June 2016 article by Jason Daley entitled, “Laser Scans Reveal Massive Khmer Cities Hidden in the Cambodian Jungle,” at the following link:

http://www.smithsonianmag.com/smart-news/laser-scans-reveal-massive-khmer-cities-hidden-cambodian-jungle-180959395/

CIA’s 1950 Nuclear Security Assessments After the Soviet’s First Nuclear Test

Peter Lobner

The first Soviet test of a nuclear device occurred on 29 August 1949 at the Semipalatinsk nuclear test site in what today is Kazakhstan. In the Soviet Union, this first device was known as RDS-1, Izdeliye 501 (device 501), and First Lightning. In the U.S., it was named Joe-1. This was an implosion-type device with a yield of about 22 kilotons that, thanks to highly effective Soviet nuclear espionage during World War II, may have been very similar to the U.S. Fat Man bomb that was dropped on the Japanese city of Nagasaki.

Casing for the first Soviet atomic bomb, RDS-1 (Joe-1). Source: Wikipedia / Minatom Archives

The Central Intelligence Agency (CIA) was tasked with assessing the impact of the Soviet Union having a demonstrated nuclear capability. In 1950, the CIA issued two Top Secret reports providing its assessment. These reports have been declassified and now are in the public domain. I think you’ll find that they make interesting reading, even 66 years later.

The first report, ORE 91-49, is entitled, “Estimate of the Effects of the Soviet Possession of the Atomic Bomb upon the Security of the United States and upon the Probabilities of Direct Soviet Military Action,” dated 6 April 1950.

ORE 91-49 cover page

You can download this report as a pdf file at the following link:

https://www.cia.gov/library/readingroom/docs/DOC_0000258849.pdf

The second, shorter summary report, ORE 32-50, is entitled, “The Effect of the Soviet Possession of Atomic Bombs on the Security of the United States,” dated 9 June 1950.

ORE 32-50 cover page

You can download this report as a pdf file at the following link:

http://www.alternatewars.com/WW3/WW3_Documents/CIA/ORE-32-50_9-JUN-1950.pdf

The next Soviet nuclear tests didn’t occur until 1951. The RDS-2 (Joe-2) and RDS-3 (Joe-3) tests were conducted on 24 September 1951 and 18 October 1951, respectively.

Deep Learning Has Gone Mainstream

Peter Lobner

The 28 September 2016 article by Roger Parloff, entitled, “Why Deep Learning is Suddenly Changing Your Life,” is well worth reading to get a general overview of the practical implications of this subset of artificial intelligence (AI) and machine learning. You’ll find this article on the Fortune website at the following link:

http://fortune.com/ai-artificial-intelligence-deep-machine-learning/?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter

In it, the relationships among AI, machine learning, and deep learning are put in perspective, as shown in the following table.

Definitions of AI, machine learning, and deep learning. Source: Fortune

This article also includes a helpful timeline illustrating the long history of technical development, from 1958 to today, that has led to the modern technology of deep learning.

Another overview article worth your time is by Robert D. Hof, entitled, “Deep Learning – With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.” This article is in the MIT Technology Review, which you will find at the following link:

https://www.technologyreview.com/s/513696/deep-learning/

As noted in both articles, we’re seeing the benefits of deep learning technology in the remarkable improvements in image and speech recognition systems that are being incorporated into modern consumer devices and vehicles, and less visibly, in military systems. For example, see my 31 January 2016 post, “Rise of the Babel Fish,” for a look at two competing real-time machine translation systems: Google Translate and ImTranslator.

The rise of deep learning has depended on two key technologies:

Deep neural nets: These are layers of neural nets that progressively build up the complexity needed for real-time image and speech recognition (see the minimal sketch after the “Big data” item below). Robert D. Hof explains: “The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects… Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments…”

Big data: Roger Parloff reported: “Although the Internet was awash in it (data), most data—especially when it came to images—wasn’t labeled, and that’s what you needed to train neural nets. That’s where Fei-Fei Li, a Stanford AI professor, stepped in. ‘Our vision was that big data would change the way machine learning works,’ she explains in an interview. ‘Data drives learning.’

In 2007 she launched ImageNet, assembling a free database of more than 14 million labeled images. It went live in 2009, and the next year she set up an annual contest to incentivize and publish computer-vision breakthroughs.

In October 2012, when two of Hinton’s students won that competition, it became clear to all that deep learning had arrived.”
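
To make the “layers feeding layers” idea concrete, here is a tiny, purely illustrative feed-forward network in Python with NumPy. It is untrained and is not the architecture of any production speech or vision system; it only shows how each layer transforms the output of the previous one, which is what lets trained networks build up progressively more complex features.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Simple nonlinearity applied between layers."""
    return np.maximum(0.0, x)

def make_layer(n_in, n_out):
    """Randomly initialized weights and biases for one fully connected layer."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

# A small "deep" net: 64 inputs -> two hidden layers -> scores for 10 classes.
layers = [make_layer(64, 32), make_layer(32, 16), make_layer(16, 10)]

def forward(x, layers):
    """Each layer transforms the previous layer's output; trained networks use this
    stacking to represent progressively more complex features."""
    for W, b in layers[:-1]:
        x = relu(x @ W + b)
    W, b = layers[-1]
    return x @ W + b             # raw class scores (untrained here, so meaningless)

x = rng.normal(size=64)          # stand-in for a tiny flattened image or audio frame
print(forward(x, layers).shape)  # -> (10,)
```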

The combination of these technologies has resulted in very rapid improvements in image and speech recognition capabilities and in their employment in marketable products and services. Typically, the latest capabilities appear first at the top of a market and then rapidly proliferate down to the lower-priced end of the market.

For example, Tesla cars include a camera system capable of identifying lane markings, obstructions, animals and much more, including reading signs, detecting traffic lights, and determining road composition. On a recent trip in Europe, I had a much more modest Ford Fusion with several of these image recognition and associated alerting capabilities. You can see a Wall Street Journal video on how Volvo is incorporating kangaroo detection and alerting into its latest models for the Australian market at the following link:

https://ca.finance.yahoo.com/video/playlist/autos-on-screen/kangaroo-detection-help-cars-avoid-220203668.html?pt=tAD1SCT8P72012-08-09.html/?date20140124

I believe the first Teslas in Australia incorrectly identified kangaroos as dogs. Within days, the Australian Teslas were updated remotely with the capability to correctly identify kangaroos.

Regarding the future, Robert D. Hof noted: “Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, ‘deep learning has reignited some of the grand challenges in artificial intelligence.’”

Actually, I think there’s more to the story of what potentially lies beyond the demonstrated capabilities of deep learning in the areas of speech and image recognition. If you’ve read Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy,” you already have had a glimpse of that future, in which the great computer, Deep Thought, was asked for “the answer to the ultimate question of life, the universe and everything.” Surely, this would be the ultimate test of deep learning.

Asking the ultimate question of the great computer Deep Thought. Source: BBC / The Hitchhiker’s Guide to the Galaxy

In case you’ve forgotten the answer, either of the following two videos will refresh your memory.

From the original 1981 BBC TV serial (12:24 min):

https://www.youtube.com/watch?v=cjEdxO91RWQ

From the 2005 movie (2:42 min):

https://www.youtube.com/watch?v=aboZctrHfK8

New Testable Theory on the Flow of Time and the Meaning of Now

Peter Lobner

Richard A. Muller, a professor of physics at the University of California, Berkeley, and Faculty Senior Scientist at Lawrence Berkeley National Laboratory, is the author of an intriguing new book entitled, “NOW, the Physics of Time.”

NOW cover page. Source: W. W. Norton & Company

In Now, Muller addresses weaknesses in past theories about the flow of time and the meaning of “now.” He also presents his own revolutionary theory, one that makes testable predictions. He begins by describing the physics building blocks of his theory: relativity, entropy, entanglement, antimatter, and the Big Bang. Muller points out that the standard Big Bang theory explains the ongoing expansion of the universe as the continuous creation of new space. He argues that time is also expanding and that the leading edge of the new time is what we experience as “now.”

You’ll find a better explanation in the UC Berkeley short video, “Why does time advance?: Richard Muller’s new theory,” at the following link:

https://www.youtube.com/watch?v=FYxUzm7gQkY

In the video, Muller explains that his theory would have produced a measurable delay of about 1 millisecond in the “chirp” seen in the first gravitational wave signals, whose detection was announced on 11 February 2016 by the Laser Interferometer Gravitational-Wave Observatory (LIGO). LIGO’s current sensitivity precluded seeing the predicted small delay. If LIGO and other land-based gravity wave detector sensitivities are not adequate, a potentially more sensitive space-based gravity wave detection array, eLISA, should be in place in the 2020s to test Muller’s theory.

It’ll be interesting to see if LIGO, any of the other land-based gravity wave detectors, or eLISA will have the needed sensitivity to prove or disprove Muller’s theory.

For more information related to gravity wave detection, see my following posts:

  • 16 December 2015 post, “100th Anniversary of Einstein’s General Theory of Relativity and the Advent of a New Generation of Gravity Wave Detectors”
  • 11 February 2016 post, “NSF and LIGO Team Announce First Detection of Gravitational Waves”
  • 27 September 2016 post, “Space-based Gravity Wave Detection System to be Deployed by ESA”

The Vision for Manned Exploration and Colonization of Mars is Alive Again

Peter Lobner

On 25 May 1961, President John F. Kennedy made an important speech to a joint session of Congress in which he stated:

“I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth.”

This was a very bold statement considering the state-of-the-art of U.S. aerospace technology in mid-1961. Yuri Gagarin had become the first man to orbit the Earth on 12 April 1961 in a Soviet Vostok spacecraft, and Alan Shepard had completed the first Project Mercury suborbital flight on 5 May 1961. No American had yet flown in orbit. It wasn’t until 20 February 1962 that the first Project Mercury capsule flew into Earth orbit with astronaut John Glenn. The Soviets had hit the Moon with Luna 2 and returned photos of the far side of the Moon with Luna 3. The U.S. had only made one distant lunar flyby with the tiny Pioneer 4 spacecraft. The Apollo manned lunar program was underway, but still in the concept definition phase. The first U.S. heavy booster rocket designed to support the Apollo program, the Saturn 1, didn’t fly until 27 October 1961.

President Kennedy concluded this part of his 25 May 1961 speech with the following admonition:

“This decision (to proceed with the manned lunar program) demands a major national commitment of scientific and technical manpower, materiel and facilities, and the possibility of their diversion from other important activities where they are already thinly spread. It means a degree of dedication, organization and discipline, which have not always characterized our research and development efforts. It means we cannot afford undue work stoppages, inflated costs of material or talent, wasteful interagency rivalries, or a high turnover of key personnel.

New objectives and new money cannot solve these problems. They could in fact, aggravate them further–unless every scientist, every engineer, every serviceman, every technician, contractor, and civil servant gives his personal pledge that this nation will move forward, with the full speed of freedom, in the exciting adventure of space.”

This was the spirit that led to the great success of the Apollo program, which landed the first men on the Moon, astronauts Neil Armstrong and Buzz Aldrin, on 20 July 1969, a little more than 8 years after President Kennedy’s speech.

NASA’s plans for manned Mars exploration

By 1964, exciting concepts for manned Mars exploration vehicles were being developed under National Aeronautics and Space Administration (NASA) contract by several firms. One example is a Mars lander design shown below from Aeronutronic (then a division of Philco Corp). A Mars Excursion Module (MEM) would descend to the surface of Mars from a larger Mars Mission Module (MMM) that remained in orbit. The MEM was designed for landing a crew of three on Mars, spending 40 days on the Martian surface, and then returning the crew back to Mars orbit and rendezvousing with the MMM for the journey back to Earth.

1963 Aeronutronic Mars lander concept. Source: NASA / Aviation Week, 24 February 1964

This and other concepts developed in the 1960s are described in detail in Chapters 3 – 5 of NASA’s Monograph in Aerospace History #21, “Humans to Mars – Fifty Years of Mission Planning, 1950 – 2000,” which you can download at the following link:

http://www.nss.org/settlement/mars/2001-HumansToMars-FiftyYearsOfMissionPlanning.pdf

In the 1960s, the U.S. nuclear thermal rocket development program produced the very promising NERVA nuclear engine for use in an upper stage or an interplanetary spacecraft. NASA and the Space Nuclear Propulsion Office (SNPO) felt that tests had “confirmed that a nuclear rocket engine was suitable for space flight application.”

In 1969, Marshall Space Flight Center Director Wernher von Braun proposed sending 12 men to Mars aboard two spacecraft, each propelled by three NERVA engines. Each spacecraft would have measured 270 feet long and 100 feet wide across the three nuclear engine modules, with a mass of 800 tons, including 600 tons of liquid hydrogen propellant for the NERVA engines. Only the two outboard nuclear engine modules would be used to inject the spacecraft onto its trans-Mars trajectory, after which they would separate from the spacecraft. The central nuclear engine module would continue with the manned spacecraft and be used to enter and leave Mars orbit and to enter Earth orbit at the end of the mission. The mission would launch in November 1981 and land on Mars in August 1982.

NERVA-powered Mars spacecraft (1969 Marshall Space Flight Center concept). Source: NASA / Monograph #21

NASA’s momentum for conducting a manned Mars mission by the 1980s was short-lived. Development of the super-heavy-lift Nova booster, which was intended to place about 250 tons into low Earth orbit (LEO), was never funded. Congress reduced NASA’s funding in the FY-69 budget, resulting in NASA ending production of the Saturn 5 heavy-lift booster rocket (about 100 tons to LEO) and cancelling Apollo missions after Apollo 17. This left NASA without the heavy-lift booster rocket needed to carry NERVA and/or assembled interplanetary spacecraft into orbit.

NASA persevered with chemical rocket powered Mars mission concepts until 1971. The final NASA concept vehicle from that era, looking much like von Braun’s 1969 nuclear-powered spacecraft, is shown below.

NASA 1971 Mars spacecraft concept. Source: NASA / Monograph #21

The 24-foot diameter modules would have required six Shuttle-derived launch vehicles (essentially the large center tank and the strap-on solid boosters, without the Space Shuttle orbiter itself) to deliver the various modules for assembly in orbit.

While no longer a factor in Mars mission planning, the nuclear rocket program was canceled in 1972. You can read a history of the U.S. nuclear thermal rocket program at the following links:

http://www.lanl.gov/science/NSS/issue1_2011/story4full.shtml

and,

http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19910017902.pdf

NASA budget realities in subsequent years, dictated largely by the cost of Space Shuttle and International Space Station development and operation, reduced NASA’s manned Mars efforts to a series of design studies, as described in Monograph #21.

Science Applications International Corporation (SAIC) conducted manned Mars mission studies for NASA in 1984 and 1987. The latter mission design study was conducted in conjunction with astronaut Sally Ride’s August 1987 report, Leadership and America’s Future in Space. You can read this report at the following link:

http://history.nasa.gov/riderep/cover.htm

Details on the 1987 SAIC mission study are included in Chapter 8 of Monograph #21. SAIC’s mission concept employed two chemically fueled Mars spacecraft in “split/sprint” roles. An automated cargo-carrying spacecraft would be the first to depart Earth. It would fly an energy-saving trajectory and enter Mars orbit carrying the fuel needed by the future manned spacecraft for its return to Earth. After the cargo spacecraft was in Mars orbit, the manned spacecraft would be launched on a faster “sprint” trajectory, taking about six months to get to Mars. With one month allocated for exploration of the Martian surface, total mission time would be on the order of 12 – 14 months.

President Obama’s FY-11 budget redirected NASA’s focus away from manned missions to the Moon and Mars. The result is that there are no current programs with near-term goals to establish a continuous U.S. presence on the Moon or conduct the first manned mission to Mars. Instead, NASA is engaged in developing hardware that will be used initially for a relatively near-Earth (but further out than astronauts have gone before) “asteroid re-direct mission.” NASA’s current vision for getting to Mars is summarized below.

  • In the 2020s, NASA will send astronauts on a year-long mission into (relatively near-Earth) deep space, verifying spacecraft habitation and testing our readiness for a Mars mission.
  • In the 2030s, NASA will send astronauts first to low-Mars orbit. This phase will test the entry, descent and landing techniques needed to get to the Martian surface and study what’s needed for in-situ resource utilization.
  • Eventually, NASA will land humans on Mars.

You can read NASA’s Journey to Mars Overview at the following link:

https://www.nasa.gov/content/journey-to-mars-overview

NASA’s current plans for getting to Mars don’t really sound like much of a plan to me. Think back to President Kennedy’s speech that outlined the national commitment needed to accomplish a lunar landing within the decade of the 1960s. There is no real sense of timeliness in NASA plans for getting to Mars.

Thinking back to the title of NASA’s Monograph #21, “Humans to Mars – Fifty Years of Mission Planning, 1950 – 2000,” I’d say that NASA is quite good at manned Mars mission planning, but woefully short on execution. I recognize that NASA’s ability to execute anything is driven by its budget. However, in 1969, Wernher von Braun thought the U.S. was about 12 years from being able to launch a nuclear-powered manned Mars mission in 1981. Now it seems we’re almost 20 years away, with no real concept for the spacecraft that will get our astronauts there and back.

Commercial plans for manned Mars exploration

Fortunately, the U.S. commercial aerospace sector seems more committed to conducting manned Mars missions than NASA. The leading U.S. contenders are Bigelow Aerospace and SpaceX. Let’s look at their plans.

Bigelow Aerospace

Bigelow is developing expandable structures that can be used to house various types of occupied spaces on manned Earth orbital platforms or on spacecraft destined for lunar orbital missions or long interplanetary missions. Versions of these expandable structures also can be used for habitats on the surface of the Moon, Mars, or elsewhere.

The first operational use of this type of expandable structure in space occurred on 26 May 2016, when the BEAM (Bigelow Expandable Activity Module) was deployed to its full size on the International Space Station (ISS). BEAM was expanded by air pressure from the ISS.

BEAM installed on the ISS. Source: Bigelow Aerospace

You can view a NASA time-lapse video of BEAM deployment at the following link:

https://www.youtube.com/watch?v=QxzCCrj5ssE

A large, complex space vehicle can be built with a combination of relatively conventional structures and Bigelow inflatable modules, as shown in the following concept drawing.

Bigelow spacecraft concept. Source: Bigelow Aerospace

A 2011 NASA concept named Nautilus-X, also making extensive use of inflatable structures, is shown in the following concept drawing. Nautilus is an acronym for Non-Atmospheric Universal Transport Intended for Lengthy United States Exploration.

NASA Nautilus-X space exploration vehicle concept. Source: NASA / NASA Technology Applications Assessment Team

SpaceX

SpaceX announced that it plans to send its first Red Dragon capsule to Mars in 2018 to demonstrate the ability to land heavy loads using a combination of aero braking with the capsule’s ablative heat shield and propulsive braking using rocket engines for the final phase of landing.

Red Dragon landing on Mars. Source: SpaceX

More details on the Red Dragon spacecraft are in a 2012 paper by Karcz, J., et al., entitled, “Red Dragon: Low-cost Access to the Surface of Mars Using Commercial Capabilities,” which you’ll find at the following link:

https://www.nas.nasa.gov/assets/pdf/staff/Aftosmis_M_RED_DRAGON_Low-Cost_Access_to_the_Surface_of_Mars_Using_Commercial_Capabilities.pdf

NASA is collaborating with SpaceX to gain experience with this landing technique, which NASA expects to employ in its own future Mars missions.

On 27 September 2016, SpaceX CEO Elon Musk unveiled his grand vision for colonizing Mars at the 67th International Astronautical Congress in Guadalajara, Mexico. You’ll find an excellent summary in the 29 September 2016 article by Dave Mosher entitled, “Elon Musk’s complete, sweeping vision on colonizing Mars to save humanity,” which you can read on the Business Insider website at the following link:

http://www.businessinsider.com/elon-musk-mars-speech-transcript-2016-9

The system architecture for the SpaceX colonizing flights is shown in the following diagram. Significant features include:

  • 100 passengers on a one-way trip to Mars.
  • Booster and spacecraft are reusable.
  • No spacecraft assembly in orbit is required.
  • The manned interplanetary vehicle is fueled with methane in Earth orbit from a tanker spacecraft.
  • The entire manned interplanetary vehicle lands on Mars. No part of the vehicle is left orbiting Mars.
  • The 100 passengers disembark to colonize Mars.
  • Methane fuel for the return voyage to Earth is manufactured on the surface of Mars.
  • The spacecraft returns to Earth for reuse on another mission.
  • Price per person for Mars colonists could be in the $100,000 to $200,000 range.

The Mars launcher for this mission would have a gross lift-off mass of 10,500 tons, about 3.5 times the mass of NASA’s Saturn 5 booster for the Apollo Moon landing program.

SpaceX Mars colonization system architecture. Source: SpaceX

Terraforming Mars

Colonizing Mars will require terraforming to transform the planet so it can sustain human life. Terraforming the hostile environment of another planet has never been done before. While there are theories about how to accomplish Martian terraforming, there currently is no clear roadmap. However, there is a new board game named, “Terraforming Mars,” that will test your skills at using limited resources wisely to terraform Mars.

Nate Anderson provides a detailed introduction to this board game in his 1 October 2016 article entitled, “Terraforming Mars review: Turn the ‘Red Planet’ green with this amazing board game,” which you can read at the following link:

http://arstechnica.com/gaming/2016/10/terraforming-mars-review/?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter

Terraforming Mars box art (Source: Stronghold Games) and game board (Source: Nate Anderson / arsTECHNICA)

Nate Anderson described the game as follows:

“In Terraforming Mars, you play one of several competing corporations seeking to terraform the Red Planet into a livable—indeed, hospitable—place filled with cows, dogs, fish, lichen, bacteria, grasslands, atmosphere, and oceans. That goal is achieved when three things happen: atmospheric oxygen rises to 14 percent, planetary temperature rises to 8°C, and all nine of the game’s ocean tiles are placed.

Real science rests behind each of these numbers. The ocean tiles each represent one percent coverage of the Martian surface; once nine percent of the planet is covered with water, Mars should develop its own sustainable hydrologic cycle. An atmosphere of 14 percent oxygen is breathable by humans (though it feels like a 3,000 m elevation on Earth). And at 8°C, water will remain liquid in the Martian equatorial zone.

Once all three milestones have been achieved, Mars has been successfully terraformed, the game ends, and scores are calculated.”

The players are competing corporations, each with limited resources. The game play evolves based on how each player (corporation) chooses to spend their resources to build their terraforming engine (constrained by some rules of precedence), and on the opportunities dealt to them in each round.
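
Just to make the end-game test concrete, here is a minimal Python sketch of the three-parameter check described in the quote above (this covers only the terraforming completion test, not the game’s scoring or resource rules):

```python
def terraforming_complete(oxygen_pct, temperature_c, ocean_tiles):
    """The game ends when all three global parameters reach their targets."""
    return (oxygen_pct >= 14        # breathable, like ~3,000 m elevation on Earth
            and temperature_c >= 8  # liquid water possible in the equatorial zone
            and ocean_tiles >= 9)   # 9% surface coverage -> sustainable hydrologic cycle

print(terraforming_complete(14, 8, 9))    # True: Mars is terraformed; tally the scores
print(terraforming_complete(11, -2, 6))   # False: keep playing
```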

You can buy the game Terraforming Mars on Amazon.

So, before you sign up with SpaceX to become a Martian colonist, practice your skills at terraforming Mars. You’ll be in high demand as an expert terraformer when you get to Mars on a SpaceX colonist ship in the late 2020s.

India and Pakistan’s Asymmetrical Nuclear Weapons Doctrines Raise the Risk of a Regional Nuclear War With Global Consequences

Peter Lobner

The nuclear weapons doctrines of India and Pakistan are different. This means that these two countries are not in sync on the matters of how and when they might use nuclear weapons in a regional military conflict. I’d like to think that cooler heads would prevail during a crisis and use of nuclear weapons would be averted. In light of current events, there may not be enough “cooler heads” on both sides in the region to prevail every time there is a crisis.

Case in point: In late September 2016, India announced it had carried out “surgical strikes” (inside Pakistan) on suspected militants preparing to infiltrate from the Pakistan-held part of Kashmir into the Indian-held part of that state. Responding to India’s latest strikes, Pakistan’s Defense Minister, Khawaja Muhammad Asif, has been widely reported to have made the following very provocative statement, which provides unsettling insights into Pakistan’s current nuclear weapons doctrine:

“Tactical weapons, our programs that we have developed, they have been developed for our protection. We haven’t kept the devices that we have just as showpieces. But if our safety is threatened, we will annihilate them (India).”

You can see a short Indian news video on this matter at the following link:

http://shoebat.com/2016/09/29/pakistan-defense-minister-threatens-to-wipe-out-india-with-a-nuclear-attack-stating-we-will-annihilate-india/

1. Asymmetry in nuclear weapons doctrines

There are two recent papers that discuss in detail the nuclear weapons doctrines of India and Pakistan. Both papers address the issue of asymmetry and its operational implications. However, the papers differ a bit on the details of the nuclear weapons doctrines themselves. I’ll start by briefly summarizing these papers and using them to synthesize a short list of the key points in the respective nuclear weapons doctrines.

The first paper, entitled “India and Pakistan’s Nuclear Doctrines and Posture: A Comparative Analysis,” by Air Commodore (Retired) Khalid Iqbal, former Assistant Chief of Air Staff, Pakistan Air Force, was published in Criterion Quarterly (Islamabad), Volume 11, Number 3, Jul-Sept 2016. The author’s key points are:

“Having preponderance in conventional arms, India subscribed to ‘No First Use’ concept but, soon after, started diluting it by attaching conditionalities to it; and having un-matching conventional capability, Pakistan retained the options of ‘First Use.’. Ever since 1998, doctrines of both the countries are going through the pangs of evolution. Doctrines of the two countries are mismatched. India intends to deter nuclear use by Pakistan while Pakistan’s nuclear weapons are meant to compensate for conventional arms asymmetry.”

You can read Khalid Iqbal’s complete paper at the following link:

https://www.academia.edu/28382385/India_and_Pakistans_Nuclear_Doctrines_and_Posture_A_Comparative_Analysis

The second paper, entitled “A Comparative Study of Nuclear Doctrines of India and Pakistan,” by Amir Latif, appeared in the Journal of Global Peace and Conflict, Vol. 2, No. 1, June 2014. The author provides the following summary (quoted from a 2005 paper by R. Hussain):

“There are three main attributes of the Pakistan’s undeclared nuclear doctrine. It has three distinct policy objectives: a) deter a first nuclear use by India; b) enable Pakistan to deter Indian conventional attack; c) allow Islamabad to “internationalize the crisis and invite outside intervention in the unfavorable circumstance.”

You can read Amir Latif’s complete paper at the following link:

http://jgpcnet.com/journals/jgpc/Vol_2_No_1_June_2014/7.pdf

Synopsis of India’s nuclear weapons doctrine

India published its official nuclear doctrine on 4 January 2003. The main points related to nuclear weapons use are the following.

  1. India’s nuclear deterrent is directed toward Pakistan and China.
  2. India will build and maintain a credible minimum deterrent against those nations.
  3. India has adopted a “No First Use” policy, subject to the following caveats:
    • India may use nuclear weapons in retaliation after a nuclear attack on its territory or on its military forces (wherever they may be).
    • In the event of a major biological or chemical attack, India reserves the option to use nuclear weapons.
  4. Only the civil political leadership (the Nuclear Command Authority) can authorize nuclear retaliatory attacks.
  5. Nuclear weapons will not be used against non-nuclear states (see caveat above regarding chemical or bio weapon attack).

Synopsis of Pakistan’s nuclear weapons doctrine

Pakistan does not have an officially declared nuclear doctrine. Its doctrine appears to be based on the following points:

  1. Pakistan’s nuclear deterrent is directed toward India.
  2. Pakistan will build and maintain a credible minimum deterrent.
    • The sole aim of having these weapons is to deter India from aggression that might threaten Pakistan’s territorial integrity or national independence / sovereignty.
    • The size of the deterrent force is enough to inflict unacceptable damage on India with strikes on counter-value targets.
  3. Pakistan has not adopted a “No First Use” policy.
    • Nuclear weapons are essential to counter India’s conventional weapons superiority.
    • Nuclear weapons reestablish an overall Balance of Power, given the unbalanced conventional force ratios between the two sides (favoring India).
  4. National Command Authority (NCA), comprising the Employment Control Committee, Development Control Committee and Strategic Plans Division, is the center point of all decision-making on nuclear issues.
  5. Nuclear assets are considered to be safe, secure and almost free from risks of improper or accidental use.

The nuclear weapons doctrine asymmetry between India and Pakistan really boils down to this:

India’s No First Use policy (with some caveats) vs. Pakistan’s policy of possible first use to compensate for conventional weapons asymmetry.

2. Nuclear tests and current nuclear arsenals

India

India tested its first nuclear device on 18 May 1974. Twenty-four years later, in mid-1998, tests of three devices were conducted, followed two days later by two more tests. All of these tests were low-yield, but multiple weapons configurations were tested in 1998.

India’s current nuclear arsenal is described in a paper by Hans M. Kristensen and Robert S. Norris entitled, “Indian Nuclear Forces, 2015,” which was published online on 27 November 2015 in the Bulletin of the Atomic Scientists, Volume 71, at the following link:

http://www.tandfonline.com/doi/full/10.1177/0096340215599788

In this paper, authors Kristensen and Norris make the following points regarding India’s nuclear arsenal.

  • India is estimated to have produced approximately 540 kg of weapon-grade plutonium, enough for 135 to 180 nuclear warheads, though not all of that material is being used.
  • India has produced between 110 and 120 nuclear warheads.
  • The country’s fighter-bombers are the backbone of its operational nuclear strike force.
  • India also has made considerable progress in developing land-based ballistic missile and cruise missile delivery systems.
  • India is developing a nuclear-powered missile submarine and is developing sea-based ballistic missile (and cruise missile) delivery systems.

Pakistan

Pakistan is reported to have conducted many “cold” (non-fission) tests in March 1983. Shortly after the last Indian nuclear tests, Pakistan conducted six low-yield nuclear tests in rapid succession in late May 1998.

On 1 August 2016, the Congressional Research Service published the report, “Pakistan’s Nuclear Weapons,” which provides an overview of Pakistan’s nuclear weapons program. You can download this report at the following link:

https://www.fas.org/sgp/crs/nuke/RL34248.pdf

An important source for this CRS report was another paper by Hans M. Kristensen and Robert S. Norris entitled, “Pakistani Nuclear Forces, 2015,” which was published online on 27 November 2015 in the Bulletin of the Atomic Scientists, Volume 71, at the following link:

http://www.tandfonline.com/doi/full/10.1177/0096340215611090

In this paper, authors Kristensen and Norris make the following points regarding Pakistan’s nuclear arsenal.

  • Pakistan has a nuclear weapons stockpile of 110 to 130 warheads.
  • As of late 2014, the International Panel on Fissile Materials estimated that Pakistan had an inventory of approximately 3,100 kg of highly enriched uranium (HEU) and roughly 170 kg of weapon-grade plutonium.
  • The weapons stockpile realistically could grow to 220 – 250 warheads by 2025.
  • Pakistan has several types of operational nuclear-capable ballistic missiles, with at least two more under development.

3. Impact on global climate and famine of a regional nuclear war between India and Pakistan

The nucleardarkness.org website presents the results of analyses that attempt to quantify the effects of a nuclear war on the global climate, based largely on the quantity of smoke lofted into the atmosphere by the nuclear weapons exchange. Results are presented for three cases: 5, 50 and 150 million metric tons (5, 50 and 150 teragrams, Tg) of smoke. The lowest case, 5 million tons, represents a regional nuclear war between India and Pakistan, with both sides using low-yield nuclear weapons. A summary of the assessment is as follows:

“Following a war between India and Pakistan, in which 100 Hiroshima-size (15 kiloton) nuclear weapons are detonated in the large cities of these nations, 5 million tons of smoke is lofted high into the stratosphere and is quickly spread around the world. A smoke layer forms around both hemispheres which will remain in place for many years to block sunlight from reaching the surface of the Earth. One year after the smoke injection there would be temperature drops of several degrees C within the grain-growing interiors of Eurasia and North America. There would be a corresponding shortening of growing seasons by up to 30 days and a 10% reduction in average global precipitation.”

You will find more details, including a day-to-day animation of the global distribution of the dust cloud for a two-month period after the start of the war, at the following link:

http://www.nucleardarkness.org/warconsequences/fivemilliontonsofsmoke/

In the following screenshots from the animation at the above link, you can see how rapidly the smoke distributes worldwide in the upper atmosphere after the initial regional nuclear exchange.

Worldwide dispersion of the smoke cloud at three successive times after a regional nuclear war. Source: nucleardarkness.org

This consequence assessment on the nucleardarkness.org website is based largely on the following two papers by Robock, A. et al., which were published in 2007:

The first paper, entitled, “Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences,” was published in the Journal of Geophysical Research, Vol. 112. The authors offer the following comments on the climate model they used.

“We use a modern climate model to reexamine the climate response to a range of nuclear wars, producing 50 and 150 Tg of smoke, using moderate and large portions of the current global arsenal, and find that there would be significant climatic responses to all the scenarios. This is the first time that an atmosphere-ocean general circulation model has been used for such a simulation and the first time that 10-year simulations have been conducted.”

You can read this paper at the following link:

http://climate.envsci.rutgers.edu/pdf/RobockNW2006JD008235.pdf

The second paper, entitled, “Climatic consequences of regional nuclear conflicts”, was published in Atmospheric Chemistry and Physics, 7, pp. 2003 – 2012. This paper provides the analysis for the 5 Tg case.

“We use a modern climate model and new estimates of smoke generated by fires in contemporary cities to calculate the response of the climate system to a regional nuclear war between emerging third world nuclear powers using 100 Hiroshima-size bombs.”

You can read this paper at the following link:

http://www.atmos-chem-phys.net/7/2003/2007/acp-7-2003-2007.pdf

Building on the work of Robock, Ira Helfand authored the paper, “An Assessment of the Extent of Projected Global Famine Resulting From Limited, Regional Nuclear War.” His main points with regard to a post-war famine are:

“The recent study by Robock et al on the climatic consequences of regional nuclear war shows that even a “limited” nuclear conflict, involving as few as 100 Hiroshima-sized bombs, would have global implications with significant cooling of the earth’s surface and decreased precipitation in many parts of the world. A conflict of this magnitude could arise between emerging nuclear powers such as India and Pakistan. Past episodes of abrupt global cooling, due to volcanic activity, caused major crop failures and famine; the predicted climate effects of a regional nuclear war would be expected to cause similar shortfalls in agricultural production. In addition large quantities of food might need to be destroyed and significant areas of cropland might need to be taken out of production because of radioactive contamination. Even a modest, sudden decline in agricultural production could trigger significant increases in the prices for basic foods and hoarding on a global scale, both of which would make food inaccessible to poor people in much of the world. While it is not possible to estimate the precise extent of the global famine that would follow a regional nuclear war, it seems reasonable to postulate a total global death toll in the range of one billion from starvation alone. Famine on this scale would also lead to major epidemics of infectious diseases, and would create immense potential for war and civil conflict.”

You can download this paper at the following link:

http://www.psr.org/assets/pdfs/helfandpaper.pdf

4. Conclusions

The nuclear weapons doctrines of India and Pakistan are not in sync on the matters of how and when they might use nuclear weapons in a regional military conflict. The highly sensitive region of Kashmir repeatedly has served as a flashpoint for conflicts between India and Pakistan and again is the site of a current conflict. If the very provocative recent statements by Pakistan’s Defense Minister, Khawaja Muhammad Asif, are to be believed, then there are credible scenarios in which Pakistan makes first use of low-yield nuclear weapons against India’s superior conventional forces.

The consequences to global climate from such a regional nuclear conflict could be quite significant and lasting, with severe impacts on global food production and distribution. With a bit of imagination, I’m sure you can piece together a disturbing picture of how an India – Pakistan regional nuclear conflict could evolve into a global disaster.

Let’s hope that cooler heads in that region always prevail.

Rosetta Spacecraft Lands on Comet 67P, Completing its 12-Year Mission

Peter Lobner

The European Space Agency (ESA) launched the Rosetta mission in 2004. After Rosetta’s long journey from Earth and 786 days in orbit around comet 67P / Churyumov–Gerasimenko, the spacecraft’s managers maneuvered it out of orbit and directed it to a “hard” landing on the “head” (the smaller lobe) of the comet.

Comet 67P, 15 April 2015. Source: ESA – European Space Agency

The descent path, which started from an altitude of 19 km (11.8 miles), was designed to bring Rosetta down in the vicinity of active pits that had been observed from higher altitude earlier in the mission. ESA noted:

  • The descent gave Rosetta the opportunity to study the comet’s gas, dust and plasma environment very close to its surface, as well as take very high-resolution images.
  • Pits are of particular interest because they play an important role in the comet’s activity (i.e., venting gases to space).

The spacecraft impacted at a speed of about 90 cm/sec (about 2 mph) at 11:19 AM GMT (4:19 AM PDT) on 30 September 2016. I stayed up in California to watch the ESA’s live stream of the end of this important mission. I have to say that the live stream was not designed as a media event. As the landing approached, only a few close-up photos of the surface were shown, including the following photo taken from an altitude of about 5.7 km (3.5 miles).

Comet 67P on 30 September 2016. Source: ESA – European Space Agency

At the appointed moment, touchdown was marked by the loss of the telemetry signal from Rosetta. ESA said that the Rosetta spacecraft contained a message in many languages for some future visitor to 67P to find.

You can read the ESA’s press release on the end of the Rosetta mission at the following link:

http://www.esa.int/For_Media/Press_Releases/Mission_complete_Rosetta_s_journey_ends_in_daring_descent_to_comet

Some of the key Rosetta mission findings reported by ESA include:

  • Comet 67P likely was “born” in a very cold region of the protoplanetary nebula when the Solar System was still forming more than 4.5 billion years ago.
  • The comet’s two lobes probably formed independently, joining in a low-speed collision in the early days of the Solar System.
  • The comet’s shape influences its “seasons,” which are characterized by variations in dust moving across its surface and variations in the density and composition of the coma, the comet’s ‘atmosphere’.
  • Gases streaming from the comet’s nucleus include molecular oxygen and nitrogen, and water with a different ‘flavor’ than water in Earth’s oceans.
    • 67P’s water contains about three times more deuterium (a heavy form of hydrogen) than water on Earth.
    • This suggests that comets like Rosetta’s may not have delivered as much of Earth’s water as previously believed.
  • Numerous inorganic chemicals and organic compounds were detected by Rosetta (from orbit) and the Philae lander (on the surface). These include the amino acid glycine, which is commonly found in proteins, and phosphorus, a key component of DNA and cell membranes.

Analysis of data from the Rosetta mission will continue for several years. It will be interesting to see how our understanding of comet 67P and similar comets evolves in the years ahead.

For more information on the Rosetta mission, visit the ESA’s Rosetta website at the following link:

http://sci.esa.int/rosetta/

Also see my following postings: 24 August 2016, “Exploring Microgravity Worlds,” and 6 September 2016, “Philae Found in a Rocky Ditch on Comet 67P/Churyumov-Gerasimenko.”