Category Archives: All Posts

Climate Change and Nuclear Power

Peter Lobner

In September 2016, the International Atomic Energy Agency (IAEA) published a report entitled, “Climate Change and Nuclear Power 2016.” As described by the IAEA:

“This publication provides a comprehensive review of the potential role of nuclear power in mitigating global climate change and its contribution to other economic, environmental and social sustainability challenges.”

An important result documented in this report is a comparative analysis of the life cycle greenhouse gas (GHG) emissions for 10 electric power generating technologies. The IAEA authors note that:

“By comparing the GHG emissions of all existing and future energy technologies, this section (of the report) demonstrates that nuclear power provides energy services with very few GHG emissions and is justifiably considered a low carbon technology.

In order to make an adequate comparison, it is crucial to estimate and aggregate GHG emissions from all phases of the life cycle of each energy technology. Properly implemented life cycle assessments include upstream processes (extraction of construction materials, processing, manufacturing and power plant construction), operational processes (power plant operation and maintenance, fuel extraction, processing and transportation, and waste management), and downstream processes (dismantling structures, recycling reusable materials and waste disposal).”

The results of this comparative life cycle GHG analysis appear in Figure 5 of this report, which is reproduced below (click on the graphic to enlarge):

IAEA Climate Change & Nuclear Power

You can see that nuclear power has lower life cycle GHG emissions than all other generating technologies except hydro. It also is interesting to note how effective carbon dioxide capture and storage could be in reducing GHG emissions from fossil power plants.
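To see how the life cycle aggregation described in the quoted passage works in practice, the sketch below simply sums upstream, operational and downstream emissions for a single plant and divides by its lifetime electricity generation to get an intensity in g CO2-eq/kWh. The phase totals and lifetime generation are made-up placeholder values, not IAEA data; the hard part of a real assessment is estimating those inputs.

```python
# Minimal life cycle GHG bookkeeping sketch (hypothetical numbers, not IAEA data).
# Emissions intensity = total life cycle emissions / lifetime electricity generation.

def lifecycle_intensity_g_per_kwh(phase_emissions_t_co2eq, lifetime_generation_twh):
    """Return life cycle emissions intensity in g CO2-eq per kWh."""
    total_t = sum(phase_emissions_t_co2eq.values())   # tonnes CO2-eq over the plant's life
    total_g = total_t * 1.0e6                         # grams CO2-eq
    total_kwh = lifetime_generation_twh * 1.0e9       # kWh generated over the plant's life
    return total_g / total_kwh

# Hypothetical phase totals (tonnes CO2-eq) for a single plant:
phases = {
    "upstream (materials, manufacturing, construction)": 1.5e6,
    "operational (O&M, fuel cycle, waste management)":   0.8e6,
    "downstream (dismantling, recycling, disposal)":     0.4e6,
}

print(f"{lifecycle_intensity_g_per_kwh(phases, lifetime_generation_twh=300):.1f} g CO2-eq/kWh")
```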

You can download a pdf copy of this report for free on the IAEA website at the following link:

http://www-pub.iaea.org/books/iaeabooks/11090/Climate-Change-and-Nuclear-Power-2016

For a link to a similar 2015 report by The Brattle Group, see my post dated 8 July 2015, “New Report Quantifies the Value of Nuclear Power Plants to the U.S. Economy and Their Contribution to Limiting Greenhouse Gas (GHG) Emissions.”

It is noteworthy that the U.S. Environmental Protection Agency’s (EPA) Clean Power Plan (CPP), which was issued in 2015, fails to give appropriate credit to nuclear power as a clean power source. For more information on this matter see my post dated 2 July 2015, “EPA Clean Power Plan Proposed Rule Does Not Adequately Recognize the Role of Nuclear Power in Greenhouse Gas Reduction.”

In contrast to the EPA’s CPP, New York state has implemented a rational Clean Energy Standard (CES) that awards zero-emissions credits (ZECs) to all technologies that can meet specified emission standards. These credits are instrumental in restoring merchant nuclear power plants in New York to profitable operation, thereby minimizing the likelihood that the operating utilities will retire these nuclear plants early for financial reasons. For more on this subject, see my post dated 28 July 2016, “The Nuclear Renaissance is Over in the U.S.” In that post, I noted that significant growth in the use of nuclear power will occur in Asia, with use in North America and Europe steady or declining as older nuclear power plants retire and fewer new nuclear plants are built to take their place.

An updated projection of worldwide use of nuclear power is available in the 2016 edition of the IAEA report, “Energy, Electricity and Nuclear Power Estimates for the Period up to 2050.” You can download a pdf copy of this report for free on the IAEA website at the following link:

http://www-pub.iaea.org/books/IAEABooks/11120/Energy-Electricity-and-Nuclear-Power-Estimates-for-the-Period-up-to-2050

Combining the information in the two IAEA reports described above, you can get a sense for what parts of the world will be making greater use of nuclear power as part of their strategies for reducing GHG emissions. It won’t be North America or Europe.

The World’s Best Cotton Candy

Peter Lobner

While there are earlier claims to various forms of spun sugar, Wikipedia reports that machine-spun cotton candy (then known as fairy floss) was invented in 1897 by confectioner John C. Wharton and dentist William Morrison. If you sense a possible conspiracy here, you may be right. Cotton candy was first widely introduced at the 1904 St. Louis World’s Fair (aka the Louisiana Purchase Exposition).

As in modern cotton candy machines, the early machines consisted of a centrifugal melter spinning in the center of a large catching bowl. The centrifugal melter produced the strands of cotton candy, which collected on the inside surface of the surrounding catching bowl. The machine operator then twirled a stick or paper cone around the catching bowl to create the cotton candy confection.

Basic cotton candy. Source: I, FocalPoint

Two early patents provide details on how a cotton candy machine works.

The first patent for a centrifugal melting device was filed on 11 October 1904 by Theodore Zoeller for the Electric Candy Machine Company. The patent, US816055 A, was published on 27 March 1906, and can be accessed at the following link:

https://www.google.com/patents/US816055

In his patent application, Zoeller discussed the problems with the then-current generation of cotton candy machines, which were,

“…objectionable in that the product is unreliable, being more often scorched than otherwise, such scorching of the product resulting from the continued application of the intense heat to a gradually-diminishing quantity of the molten sugar. Devices so heated are further objectionable in that all once melted (sugar) must be converted into filaments without allowing such molten sugar to cool and harden, as (it will later be) scorched in the reheating.”

Zoeller describes his centrifugal melting device as:

“….comprising a rotatable vessel having a circumferential discharge-passage, and an electrically-heated band in said passage…”

His novel feature involved moving the heater to the rim of the centrifugal melting device.

Figure from patent US816055

A patent for an improved device was filed on 13 June 1906 by Ralph E. Pollock. This patent, US 847366A, was published on 19 March 1907, and can be accessed at the following link:

https://www.google.com/patents/US847366

This patent application provides a more complete description of the operation of the centrifugal melter for production of cotton candy:

“This invention relates to certain improvements in candy-spinning machines comprising, essentially, a rotary sugar-receptacle having a perforated peripheral band constituting an electric heater against which the sugar is centrifugally forced and through which the sugar emerges in the form of a line (of) delicate candy-wool to be used as a confection.

The essential object is to provide a simple, practical, and durable rotary receptacle with a comparatively large receiving chamber having a comparatively small annular space adjacent to the heater for the purpose of retarding the centrifugal action of the sugar through the heater sufficiently to cause the desired liquefaction of the sugar by said heater and to cause it to emerge in comparatively fine jets under high centrifugal pressure, thereby yielding an extremely fine continuous stream of candy-wool.”

This is the same basic process used more than a century later to make cotton candy at carnivals and state fairs today. The main problem I have with cotton candy sold at these venues is that it often is pre-made and sold in plastic bags and looks about as appetizing as a small portion of fiberglass insulation. Even when you can get it made on the spot, the product usually is just a big wad of cotton candy on a stick, as in the photo above, which can be created in about 30 seconds.

Let me introduce you to the best cotton candy in the world, which is made by a real artist at the Jinli market in Chengdu, China using the same basic cotton candy machine described above. As far as I can tell, the secret is working with small batches of pre-colored sugar and taking time to slowly build up the successive layers of what would become the very delicate, precisely shaped cotton candy flower shown below. This beautiful confection was well worth the wait, and, yes, it even tasted better than any cotton candy I’ve had previously.

World’s best cotton candy, built up in stages (photos 1 – 4)

The PISA 2015 Report Provides an Insightful International Comparison of U.S. High School Student Performance

Peter Lobner

In early December 2016, the U.S. Department of Education and the Institute for Educational Sciences’ (IES) National Center for Educational Statistics (NCES) issued a report entitled, “Performance of U.S. 15-Year-Old Students in Science, Reading, and Mathematics Literacy in an International Context: First Look at PISA 2015.”

PISA 2015 First Look cover

The NCES describes PISA as follows:

“The Program for International Student Assessment (PISA) is a system of international assessments that allows countries to compare outcomes of learning as students near the end of compulsory schooling. PISA core assessments measure the performance of 15-year old students in science, reading and mathematics literacy every 3 years. Coordinated by the Organization for Economic Cooperation and Development (OECD), PISA was first implemented in 2000 in 32 countries. It has since grown to 73 educational systems in 2015. The United States has participated in every cycle of PISA since its inception in 2000. In 2015, Massachusetts, North Carolina and Puerto Rico also participated separately from the nation. Of these three, Massachusetts previously participated in PISA 2012.”

In each country, the schools participating in PISA are randomly selected, with the goal that the sample of students selected for the examination is representative of a broad range of backgrounds and abilities. About 540,000 students participated in PISA 2015, including about 5,700 students from U.S. public and private schools. All participants were rated on a 1,000 point scale.

The authors describe the contents of the PISA 2015 report as follows:

“The report includes average scores in the three subject areas; score gaps across the three subject areas between the top (90th percentile) and low performing (10th percentile) students; the percentages of students reaching selected PISA proficiency levels; and trends in U.S. performance in the three subjects over time.”

You can download the report from the NCES website at the following link:

https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2017048

In the three subject areas assessed by PISA 2015, key U.S. results include the following:

  • Math:
    • U.S. students ranked 40th (out of 73) in math
    • U.S. average score was 470, which is below the international average of 490
    • 29% of U.S. students did not meet the baseline proficiency for math
    • 6% of U.S. students scored in the highest proficiency range for math
    • U.S. average math scores have been declining over the last two PISA cycles since 2009
  • Science:
    • U.S. ranked 25th in science
    • U.S. average was 496, which is very close to the international average of 493
    • 20% of U.S. students did not meet the baseline proficiency for science
    • 9% of U.S. students scored in the highest proficiency range for science
    • U.S. average science scores have been flat over the last two PISA cycles since 2009
  • Reading:
    • U.S. ranked 24th in reading
    • U.S. average was 497, which is very close to the international average of 493
    • 19% of U.S. students did not meet the baseline proficiency for reading
    • 10% of U.S. students scored in the highest proficiency range for reading
    • U.S. average reading scores have been flat over the last two PISA cycles since 2009

In comparison, students in the small nation of Singapore were the top performers in all three subject areas, recording the following results in PISA 2015:

  • Math: 564
  • Science: 556
  • Reading: 535

Japan, South Korea, Canada, Germany, New Zealand, Australia, Hong Kong (China), Estonia, and the Netherlands were among the countries that consistently beat the U.S. in all three subject areas.

China significantly beat the U.S. in math and science and was about the same in reading. Russia significantly beat the U.S. in math, but was a bit behind in science and reading.

Numerous articles have been written on the declining math performance and only average science and reading performance of the U.S. students that participated in PISA 2015. Representative articles include:

US News: 6 December 2016 article, “Internationally, U.S. Students are Failing”

http://www.usnews.com/news/politics/articles/2016-12-06/math-a-concern-for-us-teens-science-reading-flat-on-test

Washington Post: 6 December 2016, “On the World Stage, U.S. Students Fall Behind”

https://www.washingtonpost.com/local/education/on-the-world-stage-us-students-fall-behind/2016/12/05/610e1e10-b740-11e6-a677-b608fbb3aaf6_story.html?utm_term=.c33931c67010

I think the authors of these articles are correct: the U.S. educational system is failing to develop high school students who, on average, will be able to compete effectively with many of their international peers in a knowledge-based world economy.

Click the link to the PISA 2015 report (above) and read about the international test results for yourself.

Visualize the Effects of a Nuclear Explosion in Your Neighborhood

Peter Lobner

The Restricted Data blog, run by Alex Wellerstein, is a very interesting website that focuses on nuclear weapons history and nuclear secrecy issues. Alex Wellerstein explains the origin of the blog:

“For me, ‘Restricted Data’ represents all of the historical strangeness of nuclear secrecy, where the shock of the bomb led scientists, policymakers, and military men to construct a baroque and often contradictory system of knowledge control in the (somewhat vain) hope that they could control the spread and use of nuclear technology.”

You can access the home page of this blog at the following link:

http://blog.nuclearsecrecy.com/about-the-blog/

From there, navigation to recent posts and blog categories is simple. Among the features of this blog is a visualization tool called NUKEMAP. With this visualization tool, you can examine the effects of a nuclear explosion on a target of your choice, with results presented on a Google map. The setup for an analysis is simple, requiring only the following basic parameters:

  • Target (move the marker on the Google map)
  • Yield (in kilotons)
  • Set for airburst or surface burst

You can select “other effects” if you wish to calculate casualties and/or display the fallout pattern. Advanced options let you set additional parameters, including details of an airburst.

To illustrate the use of this visualization tool, consider the following scenario: A 10 kiloton nuclear device is being smuggled into the U.S. on a container ship and is detonated before docking in San Diego Bay. The problem setup and results are shown in the following screenshots from the NUKEMAP visualization tool.

NUKEMAP problem setup and results (three screenshots)

Among the “Advanced options” are selectable settings for the effects you want to display on the map. The effects radii increase considerably when you select lower effects limits.
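The radii also scale strongly with yield. NUKEMAP’s detailed effects models aren’t reproduced here, but blast effects in tools like this follow, to a good approximation, the classical cube-root scaling law: the distance at which a given overpressure occurs grows with the cube root of the yield. Here is a minimal sketch, using a purely hypothetical 1 kiloton reference radius:

```python
# Cube-root blast scaling sketch (illustrative only; NUKEMAP uses more detailed models).
# For a chosen overpressure level, the effects radius scales roughly as yield**(1/3).

def scaled_radius_km(reference_radius_km, reference_yield_kt, yield_kt):
    """Scale a blast-effects radius from a reference yield to another yield."""
    return reference_radius_km * (yield_kt / reference_yield_kt) ** (1.0 / 3.0)

# Hypothetical reference: assume a chosen overpressure contour sits at 0.5 km for 1 kt.
for yield_kt in (10, 100, 1000):
    radius = scaled_radius_km(0.5, 1.0, yield_kt)
    print(f"{yield_kt:>5} kt -> ~{radius:.1f} km")
```

Going from 10 kilotons to a megaton therefore stretches a given effects radius by only a factor of about 4.6, but the affected area grows by more than a factor of 20.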

So, there you have it. NUKEMAP is a sobering visualization tool for a world where the possibility of an isolated act of nuclear terrorism cannot be ruled out. If these results bother you, I suggest that you don’t re-do the analysis with military-scale (hundreds of kilotons to megatons) airburst warheads.

Current Status of the Fukushima Daiichi Nuclear Power Station (NPS)

Peter Lobner

The severe offshore earthquake on 11 March 2011 and the massive tsunami that followed caused extensive damage to the Fukushima Daiichi NPS and the surrounding towns. The damage to the NPS, primarily from the effects of tsunami flooding, resulted in severe fuel damage in the operating Units 1, 2 and 3, and hydrogen explosions in Units 1, 3 and 4. In response to the release of radioactive material from the NPS, the Japanese government ordered the local population to evacuate. You’ll find more details on the Fukushima Daiichi reactor accidents in my 18 January 2012 Lyncean presentation (Talk #69), which you can access at the following link:

https://lynceans.org/talk-69-11812/

On 1 September 2016, Tokyo Electric Power Company Holdings, Inc. (TEPCO) issued a video update describing the current status of recovery and decommissioning efforts at the Fukushima Daiichi NPS, including several side-by-side views contrasting the immediate post-accident condition of a particular unit with its current condition. Following is one example showing Unit 3.

Fukushima Unit 3, from TEPCO’s 1 September 2016 video update. Source: TEPCO

You can watch this TEPCO video at the following link:

http://www.tepco.co.jp/en/news/library/archive-e.html?video_uuid=kc867112&catid=69631

This video is part of the TEPCO Photos and Videos Library, which includes several earlier videos on the Fukushima Daiichi NPS as well as videos on other nuclear plants owned and operated by TEPCO (Kashiwazaki-Kariwa and Fukushima Daini) and other TEPCO activities. TEPCO estimates that recovery and decommissioning activities at the Fukushima Daiichi NPS will continue for 30 – 40 years.

An excellent summary article by Will Davis, entitled, “TEPCO Updates on Fukushima Daiichi Conditions (with video),” was posted on 30 September 2016 on the ANS Nuclear Café website at the following link:

http://ansnuclearcafe.org/2016/09/30/tepco-updates-on-fukushima-daiichi-conditions-with-video/

For additional resources related to the Fukushima Daiichi accident, recovery efforts, and lessons learned, see my following posts on Pete’s Lynx:

  • 20 May 2016: Fukushima Daiichi Current Status and Lessons Learned
  • 22 May 2015: Reflections on the Fukushima Daiichi Nuclear Accident
  • 8 March 2015: Scientists Will Soon Use Natural Cosmic Radiation to Peer Inside Fukushima’s Mangled Reactor

Lidar Remote Sensing Helps Archaeologists Uncover Lost City and Temple Complexes in Cambodia

Peter Lobner

In Cambodia, remote sensing is proving to be of great value for looking beneath a thick jungle canopy and detecting signs of ancient civilizations, including temples and other structures, villages, roads, and hydraulic engineering systems for water management. Building on a long history of archaeological research in the region, the Cambodian Archaeological Lidar Initiative (CALI) has become a leader in applying lidar remote sensing technology for this purpose. You’ll find the CALI website at the following link:

http://angkorlidar.org

Areas in Cambodia surveyed using lidar in 2012 and 2015 are shown in the following map.

Angkor Wat and vicinity survey areas. Source: Cambodian Archaeological LIDAR Initiative (CALI)

CALI describes its objectives as follows:

“Using innovative airborne laser scanning (‘lidar’) technology, CALI will uncover, map and compare archaeological landscapes around all the major temple complexes of Cambodia, with a view to understanding what role these complex and vulnerable water management schemes played in the growth and decline of early civilizations in SE Asia. CALI will evaluate the hypothesis that the Khmer civilization, in a bid to overcome the inherent constraints of a monsoon environment, became locked into rigid and inflexible traditions of urban development and large-scale hydraulic engineering that constrained their ability to adapt to rapidly-changing social, political and environmental circumstances.”

Lidar is a surveying technique that creates a three-dimensional map of a surface by illuminating targets with laser light and measuring the distance to each target. A 3-D map is created by measuring the distances to a very large number of targets and then processing the data to filter out unwanted reflections (i.e., reflections from vegetation) and build a “3-D point cloud” image of the surface. In essence, lidar removes the surface vegetation, as shown in the following figure, and produces a map with a much clearer view of surface features and topography than would be available from conventional photographic surveys.

Lidar sees through vegetation. Source: Cambodian Archaeological LIDAR Initiative
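Under the hood, each lidar range measurement is a time-of-flight calculation (distance equals the speed of light times half the round-trip time), and “removing vegetation” amounts to classifying the returns in the point cloud and keeping only the lowest (ground) returns in each small area. The sketch below is a greatly simplified illustration of those two steps, with invented point data; it is not CALI’s or Leica’s actual processing pipeline.

```python
# Simplified lidar sketch: time-of-flight ranging plus a naive ground filter.
# Real processing pipelines (e.g., for Leica ALS70 data) are far more sophisticated.

C = 299_792_458.0  # speed of light, m/s

def range_from_time_of_flight(round_trip_time_s):
    """Distance to the target from the laser pulse's round-trip travel time."""
    return C * round_trip_time_s / 2.0

def naive_ground_filter(points, cell_size=1.0):
    """Keep the lowest (x, y, z) return in each horizontal grid cell.

    Canopy returns sit above the lowest return in a cell, so keeping only the
    minimum-z point per cell crudely 'removes' vegetation from the point cloud.
    """
    lowest = {}
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in lowest or z < lowest[cell][2]:
            lowest[cell] = (x, y, z)
    return list(lowest.values())

# A 6.67 microsecond round trip corresponds to a range of ~1000 m.
print(f"{range_from_time_of_flight(6.67e-6):.0f} m")

# Invented mix of canopy (high z) and ground (low z) returns in the same cells:
cloud = [(0.2, 0.3, 25.0), (0.4, 0.6, 2.1), (1.5, 0.2, 24.0), (1.7, 0.4, 1.9)]
print(naive_ground_filter(cloud))
```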

CALI uses a Leica ALS70 lidar instrument. You’ll find the product specifications for the Leica ALS70 at the following link:

http://w3.leica-geosystems.com/downloads123/zz/airborne/ALS70/brochures/Leica_ALS70_6P_BRO_en.pdf

CALI conducts its surveys from a helicopter with GPS and additional avionics to help manage navigation on the survey flights and provide helicopter geospatial coordinates to the lidar. The helicopter also is equipped with downward-looking and forward-looking cameras to provide visual photographic references for the lidar maps.

Basic workflow in a lidar instrument is shown in the following diagram.

Lidar instrument workflow. Source: Leica

An example of the resulting point cloud image produced by a lidar is shown below.

Example lidar point cloud. Source: Leica

Here are two views of a site named Choeung Ek; the first is an optical photograph and the second is a lidar view that removes most of the vegetation. I think you’ll agree that structures appear much more clearly in the lidar image.

Choeung Ek photograph. Source: Cambodian Archaeological LIDAR Initiative
Choeung Ek lidar image. Source: Cambodian Archaeological LIDAR Initiative

An example of a lidar image for a larger site is shown in the following map of the central monuments of the well-researched and mapped site named Sambor Prei Kuk. CALI reported:

“The lidar data adds a whole new dimension though, showing a quite complex system of moats, waterways and other features that had not been mapped in detail before. This is just the central few sq km of the Sambor Prei Kuk data; we actually acquired about 200 sq km over the site and its environs.”

Sambor Prei Kuk lidar image. Source: Cambodian Archaeological LIDAR Initiative

For more information on the lidar archaeological surveys in Cambodia, please refer to the following recent articles:

See the 18 July 2016 article by Annalee Newitz entitled, “How archaeologists found the lost medieval megacity of Angkor,” on the arsTECHNICA website at the following link:

http://arstechnica.com/science/2016/07/how-archaeologists-found-the-lost-medieval-megacity-of-angkor/?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter

On the Smithsonian magazine website, see the April 2016 article entitled, “The Lost City of Cambodia,” at the following link:

http://www.smithsonianmag.com/history/lost-city-cambodia-180958508/?no-ist

Also on the Smithsonian magazine website, see the 14 June 2016 article by Jason Daley entitled, “Laser Scans Reveal Massive Khmer Cities Hidden in the Cambodian Jungle,” at the following link:

http://www.smithsonianmag.com/smart-news/laser-scans-reveal-massive-khmer-cities-hidden-cambodian-jungle-180959395/

CIA’s 1950 Nuclear Security Assessments After the Soviets’ First Nuclear Test

Peter Lobner

The first Soviet test of a nuclear device occurred on 29 August 1949 at the Semipalatinsk nuclear test site in what today is Kazakhstan. In the Soviet Union, this first device was known as RDS-1, Izdeliye 501 (device 501) and First Lightning. In the U.S., it was named Joe-1. This was an implosion type device with a yield of about 22 kilotons that, thanks to highly effective Soviet nuclear espionage during World War II, may have been very similar to the U.S. Fat Man bomb that was dropped on the Japanese city Nagasaki.

Casing for the first Soviet atomic bomb, RDS-1 (Joe-1). Source: Wikipedia / Minatom Archives

The Central Intelligence Agency (CIA) was tasked with assessing the impact of the Soviet Union having a demonstrated nuclear capability. In mid-1950, the CIA issued two Top Secret reports providing their assessment. These reports have been declassified and now are in the public domain. I think you’ll find that they make interesting reading, even 66 years later.

The first report, ORE 91-49, is entitled, “Estimate of the Effects of the Soviet Possession of the Atomic Bomb upon the Security of the United States and upon the Probabilities of Direct Soviet Military Action,” dated 6 April 1950.

ORE 91-49 cover page

You can download this report as a pdf file at the following link:

https://www.cia.gov/library/readingroom/docs/DOC_0000258849.pdf

The second, shorter summary report, ORE 32-50, is entitled, “The Effect of the Soviet Possession of Atomic Bombs on the Security of the United States,” dated 9 June 1950.

ORE 32-50 cover page

You can download this report as a pdf file at the following link:

http://www.alternatewars.com/WW3/WW3_Documents/CIA/ORE-32-50_9-JUN-1950.pdf

The next Soviet nuclear tests didn’t occur until 1951. The RDS-2 (Joe-2) and RDS-3 (Joe-3) tests were conducted on 24 September 1951 and 18 October 1951, respectively.

Deep Learning Has Gone Mainstream

Peter Lobner

The 28 September 2016 article by Roger Parloff, entitled, “Why Deep Learning is Suddenly Changing Your Life,” is well worth reading to get a general overview of the practical implications of this subset of artificial intelligence (AI) and machine learning. You’ll find this article on the Fortune website at the following link:

http://fortune.com/ai-artificial-intelligence-deep-machine-learning/?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter

Here, the relationship between AI, machine learning and deep learning is put in perspective as shown in the following table.

Definitions of AI, machine learning and deep learning. Source: Fortune

This article also includes a helpful timeline illustrating the long history of technical development, from 1958 to today, that has led to the modern technology of deep learning.

Another overview article worth your time is by Robert D. Hof, entitled, “Deep Learning – With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.” This article is in the MIT Technology Review, which you will find at the following link:

https://www.technologyreview.com/s/513696/deep-learning/

As noted in both articles, we’re seeing the benefits of deep learning technology in the remarkable improvements in image and speech recognition systems that are being incorporated into modern consumer devices and vehicles, and less visibly, in military systems. For example, see my 31 January 2016 post, “Rise of the Babel Fish,” for a look at two competing real-time machine translation systems: Google Translate and ImTranslator.

The rise of deep learning has depended on two key technologies:

Deep neural nets: These are layers of neural nets that progressively build up the complexity needed for real-time image and speech recognition (a bare-bones sketch of this layered structure appears after the next item). Robert D. Hof explains: “The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects… Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments…”

Big data: Roger Parloff reported: “Although the Internet was awash in it (data), most data—especially when it came to images—wasn’t labeled, and that’s what you needed to train neural nets. That’s where Fei-Fei Li, a Stanford AI professor, stepped in. ‘Our vision was that big data would change the way machine learning works,’ she explains in an interview. ‘Data drives learning.’

In 2007 she launched ImageNet, assembling a free database of more than 14 million labeled images. It went live in 2009, and the next year she set up an annual contest to incentivize and publish computer-vision breakthroughs.

In October 2012, when two of Hinton’s students won that competition, it became clear to all that deep learning had arrived.”
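A bare-bones way to see the layered structure Hof describes above is a tiny multi-layer forward pass: each layer multiplies its inputs by a weight matrix, applies a nonlinearity, and hands the result to the next layer. The untrained numpy sketch below only illustrates that structure; real image and speech systems use many more layers, specialized architectures, and large labeled training sets like ImageNet.

```python
# Minimal multi-layer neural net forward pass (untrained, for structure only).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def layer(inputs, weights, biases):
    """One layer: a linear combination of the inputs followed by a nonlinearity."""
    return relu(inputs @ weights + biases)

# Three layers: raw input -> simple features -> more complex features -> class scores.
sizes = [16, 32, 32, 10]
params = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(1, sizes[0]))   # one 16-element input "signal"
for w, b in params:
    x = layer(x, w, b)               # the output of each layer feeds the next

print(x.shape)                       # (1, 10): scores over 10 hypothetical classes
```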

The combination of these technologies has resulted in very rapid improvements in image and speech recognition capabilities and performance and their employment in marketable products and services. Typically the latest capabilities and performance appear at the top of a market and then rapidly proliferate down into the lower price end of the market.

For example, Tesla cars include a camera system capable of identifying lane markings, obstructions, animals and much more, including reading signs, detecting traffic lights, and determining road composition. On a recent trip in Europe, I had a much more modest Ford Fusion with several of these image recognition and associated alerting capabilities. You can see a Wall Street Journal video on how Volvo is incorporating kangaroo detection and alerting into their latest models for the Australian market at the following link:

https://ca.finance.yahoo.com/video/playlist/autos-on-screen/kangaroo-detection-help-cars-avoid-220203668.html?pt=tAD1SCT8P72012-08-09.html/?date20140124

I believe the first Teslas in Australia incorrectly identified kangaroos as dogs. Within days, the Australian Teslas were updated remotely with the capability to correctly identify kangaroos.

Regarding the future, Robert D. Hof noted: “Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, ‘deep learning has reignited some of the grand challenges in artificial intelligence.’”

Actually, I think there’s more to the story of what potentially lies beyond the demonstrated capabilities of deep learning in the areas of speech and image recognition. If you’ve read Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy,” you already have had a glimpse of that future, in which the great computer, Deep Thought, was asked for “the answer to the ultimate question of life, the universe and everything.” Surely, this would be the ultimate test of deep learning.

Asking the ultimate question of the great computer Deep Thought. Source: BBC / The Hitchhiker’s Guide to the Galaxy

In case you’ve forgotten the answer, either of the following two videos will refresh your memory.

From the original 1981 BBC TV serial (12:24 min):

https://www.youtube.com/watch?v=cjEdxO91RWQ

From the 2005 movie (2:42 min):

https://www.youtube.com/watch?v=aboZctrHfK8

New Testable Theory on the Flow of Time and the Meaning of Now

Peter Lobner

Richard A. Muller, a professor of physics at the University of California, Berkeley, and Faculty Senior Scientist at Lawrence Berkeley Laboratory, is the author of an intriguing new book entitled, “NOW, the Physics of Time.”

NOW cover page. Source: W. W. Norton & Company

In Now, Muller addresses weaknesses in past theories about the flow of time and the meaning of “now.” He also presents his own revolutionary theory, one that makes testable predictions. He begins by describing the physics building blocks of his theory: relativity, entropy, entanglement, antimatter, and the Big Bang. Muller points out that the standard Big Bang theory explains the ongoing expansion of the universe as the continuous creation of new space. He argues that time is also expanding and that the leading edge of the new time is what we experience as “now.”

You’ll find a better explanation in the UC Berkeley short video, “Why does time advance?: Richard Muller’s new theory,” at the following link:

https://www.youtube.com/watch?v=FYxUzm7gQkY

In the video, Muller explains that his theory would have resulted in a measurable 1 millisecond delay in the “chirp” seen in the first gravitational wave signals, whose detection by the Laser Interferometer Gravitational-Wave Observatory (LIGO) was announced on 11 February 2016. LIGO’s current sensitivity precluded seeing the predicted small delay. If LIGO and other land-based gravity wave detector sensitivities are not adequate, a potentially more sensitive space-based gravity wave detection array, eLISA, should be in place in the 2020s to test Muller’s theory.

It’ll be interesting to see if LIGO, any of the other land-based gravity wave detectors, or eLISA will have the needed sensitivity to prove or disprove Muller’s theory.

For more information related to gravity wave detection, see my following posts:

  • 16 December 2015 post, “100th Anniversary of Einstein’s General Theory of Relativity and the Advent of a New Generation of Gravity Wave Detectors”
  • 11 February 2016 post, “NSF and LIGO Team Announce First Detection of Gravitational Waves”
  • 27 September 2016 post, “Space-based Gravity Wave Detection System to be Deployed by ESA”

The Vision for Manned Exploration and Colonization of Mars is Alive Again

Peter Lobner

On 25 May 1961, President John F. Kennedy made an important speech to a joint session of Congress in which he stated:

“I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth.”

This was a very bold statement considering the state-of-the-art of U.S. aerospace technology in mid-1961. Yuri Gagarin became the first man to orbit the Earth on 12 April 1961 in a Soviet Vostok spacecraft, and Alan Shepard completed the first Project Mercury suborbital flight on 5 May 1961. No American had yet flown in orbit. It wasn’t until 20 February 1962 that the first Project Mercury capsule flew into Earth orbit with astronaut John Glenn. The Soviets had hit the Moon with Luna 2 and returned photos from the far side of the Moon with Luna 3. The U.S. had only made one distant lunar flyby with the tiny Pioneer 4 spacecraft. The Apollo manned lunar program was underway, but still in the concept definition phase. The first U.S. heavy booster rocket designed to support the Apollo program, the Saturn 1, didn’t fly until 27 October 1961.

President Kennedy concluded this part of his 25 May 1961 speech with the following admonition:

“This decision (to proceed with the manned lunar program) demands a major national commitment of scientific and technical manpower, materiel and facilities, and the possibility of their diversion from other important activities where they are already thinly spread. It means a degree of dedication, organization and discipline, which have not always characterized our research and development efforts. It means we cannot afford undue work stoppages, inflated costs of material or talent, wasteful interagency rivalries, or a high turnover of key personnel.

New objectives and new money cannot solve these problems. They could in fact, aggravate them further–unless every scientist, every engineer, every serviceman, every technician, contractor, and civil servant gives his personal pledge that this nation will move forward, with the full speed of freedom, in the exciting adventure of space.”

This was the spirit that led to the great success of the Apollo program, which landed the first men on the Moon, astronauts Neil Armstrong and Buzz Aldrin, on 20 July 1969, a little more than 8 years after President Kennedy’s speech.

NASA’s plans for manned Mars exploration

By 1964, exciting concepts for manned Mars exploration vehicles were being developed under National Aeronautics and Space Administration (NASA) contract by several firms. One example is a Mars lander design shown below from Aeronutronic (then a division of Philco Corp). A Mars Excursion Module (MEM) would descend to the surface of Mars from a larger Mars Mission Module (MMM) that remained in orbit. The MEM was designed for landing a crew of three on Mars, spending 40 days on the Martian surface, and then returning the crew back to Mars orbit and rendezvousing with the MMM for the journey back to Earth.

1963 Aeronutronic Mars lander concept. Source: NASA / Aviation Week, 24 February 1964

This and other concepts developed in the 1960s are described in detail in Chapters 3 – 5 of NASA’s Monograph in Aerospace History #21, “Humans to Mars – Fifty Years of Mission Planning, 1950 – 2000,” which you can download at the following link:

http://www.nss.org/settlement/mars/2001-HumansToMars-FiftyYearsOfMissionPlanning.pdf

In the 1960s, the U.S. nuclear thermal rocket development program produced the very promising NERVA nuclear engine for use in an upper stage or an interplanetary spacecraft. NASA and the Space Nuclear Propulsion Office (SNPO) felt that tests had “confirmed that a nuclear rocket engine was suitable for space flight application.”

In 1969, Marshall Space Flight Center Director Wernher von Braun proposed sending 12 men to Mars aboard two rockets, each propelled by three NERVA engines. This spacecraft would have measured 270 feet long and 100 feet wide across the three nuclear engine modules, with a mass of 800 tons, including 600 tons of liquid hydrogen propellant for the NERVA engines. The two outboard nuclear engine modules would be used only to inject the spacecraft onto its trans-Mars trajectory, after which they would separate from the spacecraft. The central nuclear engine module would continue with the manned spacecraft and be used to enter and leave Mars orbit and enter Earth orbit at the end of the mission. The mission would launch in November 1981 and land on Mars in August 1982.

NERVA-powered Mars spacecraft (1969 Marshall concept). Source: NASA / Monograph #21

NASA’s momentum for conducting a manned Mars mission by the 1980s was short-lived. Development of the super heavy lift Nova booster, which was intended to place about 250 tons to low Earth orbit (LEO), was never funded. Congress reduced NASA’s funding in the FY-69 budget, resulting in NASA ending production of the Saturn 5 heavy-lift booster rocket (about 100 tons to LEO) and cancelling Apollo missions after Apollo 17. This left NASA without the heavy-lift booster rocket needed to carry NERVA and/or assembled interplanetary spacecraft into orbit.

NASA persevered with chemical rocket powered Mars mission concepts until 1971. The final NASA concept vehicle from that era, looking much like von Braun’s 1969 nuclear-powered spacecraft, is shown below.

NASA 1971 Mars mission concept. Source: NASA / Monograph #21

The 24-foot diameter modules would have required six Shuttle-derived launch vehicles (essentially the large center tank and the strap-on solid boosters, without the Space Shuttle orbiter itself) to deliver the various modules for assembly in orbit.

The nuclear rocket program, by then no longer a factor in Mars mission planning, was canceled in 1972. You can read a history of the U.S. nuclear thermal rocket program at the following links:

http://www.lanl.gov/science/NSS/issue1_2011/story4full.shtml

and,

http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19910017902.pdf

NASA budget realities in subsequent years, dictated largely by the cost of Space Shuttle and International Space Station development and operation, reduced NASA’s manned Mars efforts to a series of design studies, as described in Monograph #21.

Science Applications International Corporation (SAIC) conducted manned Mars mission studies for NASA in 1984 and 1987. The latter mission design study was conducted in collaboration with astronaut Sally Ride’s August 1987 report, “Leadership and America’s Future in Space.” You can read this report at the following link:

http://history.nasa.gov/riderep/cover.htm

Details on the 1987 SAIC mission study are included in Chapter 8 of the Monograph #21. SAIC’s mission concept employed two chemically-fueled Mars spacecraft in “split/sprint” roles. An automated cargo-carrying spacecraft would be first to depart Earth. It would fly an energy-saving trajectory and enter Mars orbit carrying the fuel needed by the future manned spacecraft for its return to Earth. After the cargo spacecraft was in Mars orbit, the manned spacecraft would be launched on a faster “sprint” trajectory, taking about six months to get to Mars. With one month allocated for exploration of the Martian surface, total mission time would be on the order of 12 – 14 months.

President Obama’s FY-11 budget redirected NASA’s focus away from manned missions to the Moon and Mars. The result is that there are no current programs with near-term goals to establish a continuous U.S. presence on the Moon or conduct the first manned mission to Mars. Instead, NASA is engaged in developing hardware that will be used initially for a relatively near-Earth (but further out than astronauts have gone before) “asteroid re-direct mission.” NASA’s current vision for getting to Mars is summarized below.

  • In the 2020s, NASA will send astronauts on a year-long mission into (relatively near-Earth) deep space, verifying spacecraft habitation and testing our readiness for a Mars mission.
  • In the 2030s, NASA will send astronauts first to low-Mars orbit. This phase will test the entry, descent and landing techniques needed to get to the Martian surface and study what’s needed for in-situ resource utilization.
  • Eventually, NASA will land humans on Mars.

You can read NASA’s Journey to Mars Overview at the following link:

https://www.nasa.gov/content/journey-to-mars-overview

NASA’s current plans for getting to Mars don’t really sound like much of a plan to me. Think back to President Kennedy’s speech that outlined the national commitment needed to accomplish a lunar landing within the decade of the 1960s. There is no real sense of timeliness in NASA plans for getting to Mars.

Thinking back to the title of NASA’s Monograph #21, “Humans to Mars – Fifty Years of Mission Planning, 1950 – 2000,” I’d say that NASA is quite good at manned Mars mission planning, but woefully short on execution. I recognize that NASA’s ability to execute anything is driven by its budget. However, in 1969, Wernher von Braun thought the U.S. was about 12 years from being able to launch a nuclear-powered manned Mars mission in 1981. Now it seems we’re almost 20 years away, with no real concept for the spacecraft that will get our astronauts there and back.

Commercial plans for manned Mars exploration

Fortunately, the U.S. commercial aerospace sector seems more committed to conducting manned Mars missions than NASA. The leading U.S. contenders are Bigelow Aerospace and SpaceX. Let’s look at their plans.

Bigelow Aerospace

Bigelow is developing expandable structures that can be used to house various types of occupied spaces on manned Earth orbital platforms or on spacecraft destined for lunar orbital missions or long interplanetary missions. Versions of these expandable structures also can be used for habitats on the surface of the Moon, Mars, or elsewhere.

The first operational use of this type of expandable structure in space occurred on 26 May 2016, when the BEAM (Bigelow Expandable Activity Module) was deployed to its full size on the International Space Station (ISS). BEAM was expanded by air pressure from the ISS.

BEAM installed on the ISS. Source: Bigelow Aerospace

You can view a NASA time-lapse video of BEAM deployment at the following link:

https://www.youtube.com/watch?v=QxzCCrj5ssE

A large, complex space vehicle can be built with a combination of relatively conventional structures and Bigelow inflatable modules, as shown in the following concept drawing.

Bigelow spacecraft concept. Source: Bigelow Aerospace

A 2011 NASA concept named Nautilus-X, also making extensive use of inflatable structures, is shown in the following concept drawing. Nautilus is an acronym for Non-Atmospheric Universal Transport Intended for Lengthy United States Exploration.

NASA Nautilus-X space exploration vehicle concept. Source: NASA / NASA Technology Applications Assessment Team

SpaceX

SpaceX announced that it plans to send its first Red Dragon capsule to Mars in 2018 to demonstrate the ability to land heavy loads using a combination of aero braking with the capsule’s ablative heat shield and propulsive braking using rocket engines for the final phase of landing.

Red Dragon landing on Mars. Source: SpaceX

More details on the Red Dragon spacecraft are in a 2012 paper by Karcz, J. et al., entitled, “Red Dragon: Low-cost Access to the Surface of Mars Using Commercial Capabilities,” which you’ll find at the following link:

https://www.nas.nasa.gov/assets/pdf/staff/Aftosmis_M_RED_DRAGON_Low-Cost_Access_to_the_Surface_of_Mars_Using_Commercial_Capabilities.pdf

NASA is collaborating with SpaceX to gain experience with this landing technique, which NASA expects to employ in its own future Mars missions.

On 27 September 2016, SpaceX CEO Elon Musk unveiled his grand vision for colonizing Mars at the 67th International Astronautical Congress in Guadalajara, Mexico. You’ll find an excellent summary in the 29 September 2016 article by Dave Mosher entitled, “Elon Musk’s complete, sweeping vision on colonizing Mars to save humanity,” which you can read on the Business Insider website at the following link:

http://www.businessinsider.com/elon-musk-mars-speech-transcript-2016-9

The system architecture for the SpaceX colonizing flights is shown in the following diagram. Significant features include:

  • 100 passengers on a one-way trip to Mars
  • Booster and spacecraft are reusable
  • No spacecraft assembly in orbit required
  • The manned interplanetary vehicle is fueled with methane in Earth orbit from a tanker spacecraft
  • The entire manned interplanetary vehicle lands on Mars; no part of the vehicle is left orbiting Mars
  • The 100 passengers disembark to colonize Mars
  • Methane fuel for the return voyage to Earth is manufactured on the surface of Mars
  • The spacecraft returns to Earth for reuse on another mission
  • Price per person for Mars colonists could be in the $100,000 to $200,000 range

The Mars launcher for this mission would have a gross lift-off mass of 10,500 tons, about 3.5 times the mass of NASA’s Saturn 5 booster for the Apollo Moon landing program.

SpaceX Mars colonization architecture. Source: SpaceX

Terraforming Mars

Colonizing Mars will require terraforming to transform the planet so it can sustain human life. Terraforming the hostile environment of another planet has never been done before. While there are theories about how to accomplish Martian terraforming, there currently is no clear roadmap. However, there is a new board game named, “Terraforming Mars,” that will test your skills at using limited resources wisely to terraform Mars.

Nate Anderson provides a detailed introduction to this board game in his 1 October 2016 article entitled, “Terraforming Mars review: Turn the ‘Red Planet’ green with this amazing board game,” which you can read at the following link:

http://arstechnica.com/gaming/2016/10/terraforming-mars-review/?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter

Terraforming Mars box art. Source: Stronghold Games
Terraforming Mars game board. Source: Nate Anderson / arsTECHNICA

Nate Anderson described the game as follows:

“In Terraforming Mars, you play one of several competing corporations seeking to terraform the Red Planet into a livable—indeed, hospitable—place filled with cows, dogs, fish, lichen, bacteria, grasslands, atmosphere, and oceans. That goal is achieved when three things happen: atmospheric oxygen rises to 14 percent, planetary temperature rises to 8°C, and all nine of the game’s ocean tiles are placed.

Real science rests behind each of these numbers. The ocean tiles each represent one percent coverage of the Martian surface; once nine percent of the planet is covered with water, Mars should develop its own sustainable hydrologic cycle. An atmosphere of 14 percent oxygen is breathable by humans (though it feels like a 3,000 m elevation on Earth). And at 8°C, water will remain liquid in the Martian equatorial zone.

Once all three milestones have been achieved, Mars has been successfully terraformed, the game ends, and scores are calculated.”
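Because the game tracks just those three global parameters for its end condition, the rule is easy to state precisely. Here is a minimal sketch of that end-of-game check as Anderson describes it; the published rules are, of course, much richer.

```python
# Game-end check for Terraforming Mars as described above (simplified sketch).
from dataclasses import dataclass

@dataclass
class GlobalParameters:
    oxygen_pct: int      # game-end target: 14% atmospheric oxygen
    temperature_c: int   # game-end target: +8 degrees C
    ocean_tiles: int     # game-end target: all 9 ocean tiles placed

    def terraforming_complete(self) -> bool:
        return (self.oxygen_pct >= 14
                and self.temperature_c >= 8
                and self.ocean_tiles >= 9)

mars = GlobalParameters(oxygen_pct=14, temperature_c=6, ocean_tiles=9)
print(mars.terraforming_complete())  # False: the temperature milestone is not yet reached
```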

The players are competing corporations, each with limited resources. The game play evolves based on how each player (corporation) chooses to spend their resources to build their terraforming engines (constrained by some rules of precedence), and the opportunities dealt to them in each round.

You can buy the game Terraforming Mars on Amazon.

So, before you sign up with SpaceX to become a Martian colonist, practice your skills at terraforming Mars. You’ll be in high demand as an expert terraformer when you get to Mars on a SpaceX colonist ship in the late 2020s.