
New Safe Confinement Structure Moved into Place at Chernobyl Unit 4

Peter Lobner

Following the Chernobyl accident on 26 April 1986, a concrete and steel “sarcophagus” was built around the severely damaged Unit 4 as an emergency measure to halt the release of radioactive material into the atmosphere from that unit. For details on the design and construction of the sarcophagus, including many photos of the damage at Unit 4, visit the chernobylgallery.com website at the following link:

http://chernobylgallery.com/chernobyl-disaster/sarcophagus/

The completed sarcophagus is shown below, at the left end of the 4-unit Chernobyl nuclear plant. In 1988, Soviet scientists announced that the sarcophagus would only last 20–30 years before requiring restorative maintenance work. They were a bit optimistic.

The completed sarcophagus at the left end of the 4-unit Chernobyl nuclear plant. Source: chernobylgallery.com

Close-up of the sarcophagus. Source: chernobylgallery.com

Cross-section of the sarcophagus. Source: chernobylgallery.com

The sarcophagus rapidly deteriorated. In 2006, the “Designed Stabilization Steel Structure” was extended to better support a damaged roof that posed a significant risk if it collapsed. In 2010, it was found that water leaking through the sarcophagus roof was becoming radioactively contaminated as it seeped through the rubble of the damaged reactor plant and into the soil.

To provide a longer-term remedy for Chernobyl Unit 4, the European Bank for Reconstruction and Development (EBRD) funded the design and construction of the New Safe Confinement (NSC, or New Shelter) at a cost of about €1.5 billion ($1.61 billion) for the shelter itself. Total project cost is expected to be about €2.1 billion ($2.25 billion).

Construction by Novarka (a French construction consortium of VINCI Construction and Bouygues Construction) started in 2012. The arched NSC structure was built in two halves and joined together in 2015. The completed NSC is the largest moveable land-based structure ever built, with a span of 257 m (843 feet), a length of 162 m (531 feet), a height of 108 m (354 feet), and a total weight of 36,000 tonnes.

NSC exterior view. Source: EBRD

NSC cross-section. Adapted from phys.org/news

Novarka started moving the NSC arch structure into place on 14 November 2016 and completed the task more than a week later. The arched structure was moved into place using a system of 224 hydraulic jacks that pushed the arch 60 centimeters (about 2 feet) with each stroke. On 29 November 2016, a ceremony at the site was attended by Ukrainian President Petro Poroshenko, diplomats and site workers to celebrate the successful final positioning of the NSC over Chernobyl Unit 4.
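For a rough sense of scale, the total travel distance (327 meters, per the EBRD quote below) and the 60-centimeter stroke imply the number of jacking strokes required. A back-of-envelope sketch:

```python
# Rough arithmetic sketch (illustrative only; figures taken from the article text).
move_distance_m = 327   # total distance the arch was moved, meters
stroke_m = 0.60         # advance per hydraulic jack stroke, meters

strokes = move_distance_m / stroke_m
print(f"Approximate jacking strokes required: {strokes:.0f}")  # ~545 strokes
```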

EBRD reported on this milestone:

“Thirty years after the nuclear disaster in Chernobyl, the radioactive remains of the power plant’s destroyed reactor 4 have been safely enclosed following one of the world’s most ambitious engineering projects.

Chernobyl’s giant New Safe Confinement (NSC) was moved over a distance of 327 meters (1,072 feet) from its assembly point to its final resting place, completely enclosing a previous makeshift shelter that was hastily assembled immediately after the 1986 accident.

The equipment in the New Safe Confinement will now be connected to the new technological building, which will serve as a control room for future operations inside the arch. The New Safe Confinement will be sealed off from the environment hermetically. Finally, after intensive testing of all equipment and commissioning, handover of the New Safe Confinement to the Chernobyl Nuclear Power Plant administration is expected in November 2017.”

You can see EBRD’s short video of this milestone, “Unique engineering feat concluded as Chernobyl arch reaches resting place,” at the following link:

https://www.youtube.com/watch?v=dH1bv9fAxiY

The NSC has an expected lifespan of at least 100 years.

The NSC is fitted with an overhead crane to allow for the future dismantling of the existing sarcophagus and the remains of Chernobyl Unit 4.

Redefining the Kilogram

Peter Lobner

Since my early science classes, I’ve been content knowing that a mass of 1.0 kilogram weighed about 2.205 pounds. In fact, the mass of a kilogram is defined to a much higher level of precision.

The U.S. National Institute of Standards and Technology (NIST) describes the current international standard for the kilogram as follows:

“For more than a century, the kilogram (kg) – the fundamental unit of mass in the International System of Units (SI) – has been defined as exactly equal to the mass of a small polished cylinder, cast in 1879 of platinum and iridium, which is kept in a triple-locked vault on the outskirts of Paris.

That object is called the International Prototype of the Kilogram (IPK), and the accuracy of every measurement of mass or weight worldwide, whether in pounds and ounces or milligrams and metric tons, depends on how closely the reference masses used in those measurements can be linked to the mass of the IPK.”

Key issues with the current kilogram standard

The kilogram is the only SI unit still defined in terms of a manufactured object. Continued use of this definition of the kilogram creates the following problems: lack of portability, mass drift, and poor scalability.

Lack of portability

The IPK is used to calibrate several copies held at the International Bureau of Weights and Measures (BIPM) in Sevres, France. The IPK also is used to calibrate national “primary” standard kilograms, which in turn are used to calibrate national “working” standard kilograms, all with traceability back to the IPK. The “working” standards are used to calibrate various lower-level standards used in science and industry. In the U.S., NIST is responsible for managing our mass standards, including the primary prototype national standard known as K20, which is shown in the photo below.

K20. Source: NIST

Drift

There is a laborious process for making periodic comparisons among the various standard kilogram artifacts. Surprisingly, it has been found that the measured mass of each individual standard changes, or “drifts,” over time. NIST reports on this phenomenon:

“Theoretically, of course, the IPK mass cannot actually change. Because it defines the kilogram, its mass is always exactly 1 kg. So change is expressed as variation with reference to the IPK on the rare occasions in which the IPK is brought out of storage and compared with its official “sister” copies as well as national prototypes from various countries. These “periodic verifications” occurred in 1899-1911, 1939-53, and 1988-92. In addition, a special calibration, involving only BIPM’s own mass standards, was conducted in 2014.

The trend over the past century has been for most of BIPM’s official copies to gain mass relative to the IPK, although by somewhat different amounts, averaging around 50 micrograms (millionths of a gram, abbreviated µg) over 100 years. Alternatively, of course, the IPK could be losing mass relative to its copies.”

The NIST chart below shows the change in BIPM prototype mass artifacts (identified by numbers) over time compared to the mass of the IPK.

Drift of BIPM prototype mass artifacts relative to the IPK. Source: NIST

Scalability

The IPK defines the standard kilogram. However, there is no manufactured artifact that defines an international “standard milligram,” a “standard metric ton,” or any other fraction or multiple of the IPK.  NIST observed:

“…..the present system is not easily scalable. The smaller the scale, the larger the uncertainty in measurement because a very long sequence of comparisons is necessary to get from a 1 kg standard down to tiny metal mass standards in the milligram (mg) range, and each comparison entails an added uncertainty. As a result, although a 1 kg artifact can be measured against a 1 kg standard to an uncertainty of a few parts in a billion, a milligram measured against the same 1 kg has relative uncertainties of a few parts in ten thousand.”

Moving toward a new definition of the standard kilogram

On 21 October 2011, the General Conference on Weights and Measures agreed on a plan to redefine the kilogram in terms of an invariant of nature. There are competing proposals on how to do this. The leading candidates are described below.

The Watt Balance

The Watt Balance is the likely method to be approved for redefining the kilogram. It uses electromagnetic forces to precisely balance the force of gravity on a test mass. The mass is defined in terms of the strength of the magnetic field and the current flowing through a magnet coil. The latest NIST Watt Balance is known as NIST-4, which became operational in early 2015. NIST-4 is able to establish the unit of mass with an uncertainty of 3 parts in 10⁸.
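In simplified terms, a watt balance equates mechanical and electrical power, m·g·v = U·I, so a mass can be inferred from electrical measurements once the local gravitational acceleration g and the coil velocity v are known; in practice the voltage and current are measured against quantum electrical standards, which is how the Planck constant enters the picture. A minimal sketch with made-up illustrative values (these are not NIST-4 data):

```python
# Simplified watt (Kibble) balance relation: m * g * v = U * I
# All numeric values below are assumptions for illustration, not NIST-4 data.
g = 9.80092   # local gravitational acceleration, m/s^2 (assumed)
v = 0.002     # coil velocity during the moving phase, m/s (assumed)
U = 1.0       # voltage induced in the moving phase, volts (assumed)
I = 0.0049    # current in the weighing phase, amperes (assumed)

m = U * I / (g * v)   # inferred mass, kg
print(f"Inferred mass: {m:.4f} kg")   # ~0.25 kg with these example values
```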

You can read more about the operation of the NIST-4 Watt Balance and watch a short video on its operation at the following link:

https://www.nist.gov/pml/redefining-kilogram-watt-balance

Using a Watt Balance to redefine the kilogram introduces its own complications, as described by NIST:

“In the method believed most likely to be adopted by the ……BIPM to redefine the kilogram, an exact determination of the Planck constant is essential. And to measure the Planck constant on a watt balance, the local acceleration of gravity, g, must be known to high precision. Hence the importance of a head-to-head comparison of the gravimeters used by each watt-balance team.”

The 2012 North American Watt Balance Absolute Gravity Comparison produced the following estimates of the Planck Constant.

Planck Constant determinations

In this chart:

  • Blue: current standard value of the Planck Constant from the International Committee on Data for Science and Technology (CODATA)
  • Red: Values obtained from watt balances
  • Green: Values obtained from other methods

The researchers noted that there is a substantial difference among instruments. This matter needs to be resolved in order for the Watt Balance to become the tool for defining the international standard kilogram.

International Avogadro Project

An alternate approach for redefining the kilogram has been suggested by the International Avogadro Project, which proposes a definition based on the Avogadro constant (NA). An approximate value of NA is 6.022 × 10²³. To compete in accuracy and reliability with the current kilogram standard, NA must be defined with greater precision, to an uncertainty of just 20 parts per billion, or 2.0 × 10⁻⁸.

The proposed new standard starts with a uniform crystal of silicon-28 that is carefully machined into a sphere with a mass of 1 kg based on the current definition. NIST describes the process:

“With precise geometrical information — the mass and dimensions of the sphere, as well as the spatial parameters of the silicon’s crystal lattice — they can use the well-known mass of each individual silicon atom to calculate the total number of atoms in the sphere. And this information, in turn, lets them determine NA.

……Once the number of atoms has been resolved with enough precision by the collaboration, the newly refined Avogadro constant could become the basis of a new recipe for realizing the kilogram.”
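The atom-counting arithmetic behind this approach can be sketched as follows; the sphere diameter, lattice parameter and molar mass below are rounded textbook values used for illustration, not the project's measured inputs:

```python
import math

# Illustrative sketch of the Avogadro (atom-counting) approach.
# Silicon crystallizes in a cubic unit cell containing 8 atoms, so the atom
# count follows from the sphere volume and the lattice parameter.
sphere_mass_kg = 1.0                # mass of the Si-28 sphere
sphere_diameter_m = 0.0936          # ~93.6 mm diameter for a 1 kg silicon sphere (approx.)
lattice_param_m = 5.431e-10         # silicon cubic lattice parameter, ~0.5431 nm (approx.)
molar_mass_kg_per_mol = 27.977e-3   # molar mass of silicon-28 (approx.)

volume = (math.pi / 6.0) * sphere_diameter_m ** 3      # sphere volume
atoms = 8.0 * volume / lattice_param_m ** 3            # 8 atoms per cubic unit cell
N_A = atoms * molar_mass_kg_per_mol / sphere_mass_kg   # Avogadro constant estimate

print(f"Atoms in sphere: {atoms:.4e}")
print(f"Estimated N_A:   {N_A:.4e} per mol")
# With these rounded inputs the estimate lands near 6.0e23 per mol; the project
# must measure each input far more precisely to reach parts-per-billion uncertainty.
```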

Kilogram silicon sphere. Source: NIST

You’ll find more information on the International Avogadro Project on the BIPM website at the following link:

http://www.bipm.org/en/bipm/mass/avogadro/

Here the BIPM reports that improvements of the experiments during the continued collaboration resulted in the publication of the most recent determination of the Avogadro constant in 2015:

NA = 6.022 140 76(12) × 10²³ mol⁻¹

with a relative uncertainty of 2.0 × 10⁻⁸.

The last two digits of NA, in parentheses, are an expression of absolute uncertainty in NA and can be read as: plus or minus 0.000 000 12 × 10²³ mol⁻¹.
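A quick arithmetic check of that notation confirms the stated relative uncertainty:

```python
# The "(12)" applies to the last two digits of 6.022 140 76, i.e. an absolute
# uncertainty of 0.000 000 12 x 10^23 mol^-1.
N_A = 6.02214076e23     # mol^-1
u_abs = 0.00000012e23   # mol^-1
print(f"Relative uncertainty: {u_abs / N_A:.1e}")   # ~2.0e-08
```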

We’ll have to wait until 2018 to find out how the General Conference on Weights and Measures decides to redefine the kilogram.

There’s Increased Worldwide Interest in Asteroid and Moon Mining Missions

Peter Lobner

In my 31 December 2015 post, “Legal Basis Established for U.S. Commercial Space Launch Industry Self-regulation and Commercial Asteroid Mining,” I commented on the likely impact of the “U.S. Commercial Space Launch Competitiveness Act” (2015 Space Act), which was signed into law on 25 November 2015. A lot has happened since then.

Planetary Resources building technology base for commercial asteroid prospecting

The firm Planetary Resources (Redmond, Washington) has a roadmap for developing a working space-based prospecting system built on the following technologies:

  • Space-based observation systems: miniaturization of hyperspectral sensors and mid-wavelength infrared sensors.
  • Low-cost avionics software: tiered and modular spacecraft avionics with a distributed set of commercially-available, low-level hardened elements each handling local control of a specific spacecraft function.
  • Attitude determination and control systems: distributed system, as above
  • Space communications: laser communications
  • High delta V small satellite propulsion systems: “Oberth maneuver” (powered flyby) provides most efficient use of fuel to escape Earth’s gravity well

Check out their short video, “Why Asteroids Fuel Human Expansion,” at the following link:

http://www.planetaryresources.com/asteroids/#asteroids-intro

Source: Planetary Resources

For more information, visit the Planetary Resources home page at the following link:

http://www.planetaryresources.com/#home-intro

Luxembourg SpaceResources.lu Initiative and collaboration with Planetary Resources

On 3 November 2016, Planetary Resources announced funding and a target date for their first asteroid mining mission:

“Planetary Resources, Inc. …. announced today that it has finalized a 25 million euro agreement that includes direct capital investment of 12 million euros and grants of 13 million euros from the Government of the Grand Duchy of Luxembourg and the banking institution Société Nationale de Crédit et d’Investissement (SNCI). The funding will accelerate the company’s technical advancements with the aim of launching the first commercial asteroid prospecting mission by 2020. This milestone fulfilled the intent of the Memorandum of Understanding with the Grand Duchy and its SpaceResources.lu initiative that was agreed upon this past June.”

The homepage for Luxembourg’s SpaceResources.lu Initiative is at the following link:

http://www.spaceresources.public.lu/en/index.html

Here the Grand-Duchy announced its intent to position Luxembourg as a European hub in the exploration and use of space resources.

“Luxembourg is the first European country to set out a formal legal framework which ensures that private operators working in space can be confident about their rights to the resources they extract, i.e. valuable resources from asteroids. Such a legal framework will be worked out in full consideration of international law. The Grand-Duchy aims to participate with other nations in all relevant fora in order to agree on a mutually beneficial international framework.”

Remember the book, “The Mouse that Roared?” Well, here’s Luxembourg leading the European Union (EU) into the business of asteroid mining.

European Space Agency (ESA) cancels Asteroid Impact Mission (AIM)

ESA’s Asteroid Impact Mission (AIM) was planning to send a small spacecraft in 2022 to the binary asteroid system composed of Didymos and its small moon, nicknamed Didymoon. Among other goals, this ESA mission was intended to observe NASA’s Double Asteroid Redirection Test (DART) spacecraft when it impacts Didymoon at high speed. The ESA mission profile for AIM is described at the following link:

http://www.esa.int/Our_Activities/Space_Engineering_Technology/Asteroid_Impact_Mission/Mission_profile

On 2 Dec 2016, ESA announced that AIM did not win enough support from member governments and will be cancelled. Perhaps the plans for an earlier commercial asteroid mission marginalized the value of the ESA investment in AIM.

Japanese Aerospace Exploration Agency (JAXA) announces collaboration for lunar resource prospecting, production and delivery

On 16 December 2016, JAXA announced that it will collaborate with the private lunar exploration firm, ispace, Inc. to prospect for lunar resources and then eventually build production and resource delivery facilities on the Moon.

ispace is a member of Japan’s Team Hakuto, which is competing for the Google Lunar XPrize. Team Hakuto describes their mission as follows:

“In addition to the Grand Prize, Hakuto will be attempting to win the Range Bonus. Furthermore, Hakuto’s ultimate target is to explore holes that are thought to be caves or “skylights” into underlying lava tubes, for the first time in human history.  These lava tubes could prove to be very important scientifically, as they could help explain the moon’s volcanic past. They could also become candidate sites for long-term habitats, able to shield humans from the moon’s hostile environment.”

“Hakuto is facing the challenges of the Google Lunar XPRIZE and skylight exploration with its unique ‘Dual Rover’ system, consisting of two-wheeled ‘Tetris’ and four-wheeled ‘Moonraker.’ The two rovers are linked by a tether, so that Tetris can be lowered into a suspected skylight.”

Team Hakuto dual rover. Source: ispace, Inc.

So far, the team has won one Milestone Prize worth $500,000 and must complete its lunar mission by the end of 2017 in order to be eligible for the final prizes. You can read more about Team Hakuto and their rover on the Google Lunar XPrize website at the following link:

http://lunar.xprize.org/teams/hakuto

Building on this experience, and apparently using the XPrize rover, ispace has proposed the following roadmap to the moon.

ispace lunar roadmap. Source: ispace, Inc.

This ambitious roadmap offers an initial lunar resource utilization capability by 2030. Ice will be the primary resource sought on the Moon. Ispace reports:

“According to recent studies, the Moon houses an abundance of precious minerals on its surface, and an estimated 6 billion tons of water ice at its poles. In particular, water can be broken down into oxygen and hydrogen to produce efficient rocket fuel. With a fuel station established in space, the world will witness a revolution in the space transportation system.”

The ispace website is at the following link:

http://ispace-inc.com

Strange Things are Happening Underground

Peter Lobner

In the last month, there have been reports of some very unexpected things happening under the surface of the earth. I’m talking about subducting plates that maintain their structure as they dive toward the Earth’s core, and “jet streams” in the Earth’s core itself. Let’s take a look at these interesting phenomena.

What happens to subduction plates?

Oceanic tectonic plates are formed as magma wells up along mid-ocean ridges, forming new lithospheric rock that spreads away from both sides of the ridge, building two different tectonic plates. This is known as a divergent plate boundary.

As tectonic plates move slowly across the Earth’s surface, each one moves differently than the adjacent plates. In simple terms, this relative motion at the plate interfaces is either a slipping, side-by-side (transform) motion, or a head-to-head (convergent) motion.

A map of the Earth showing the tectonic plates and the nature of the relative motion at the plate interfaces is shown below.

Tectonic plates and the relative motion at their boundaries. Source: http://www.regentsearth.com/

When two tectonic plates converge, one will sink under (subduct) the other. In the case of an oceanic plate converging with a continental plate, the heavier oceanic plate always sinks under the continental plate and may cause mountain building along the edge of the continental plate. When two oceanic plates converge, one will subduct beneath the other, creating a deep ocean trench (e.g., the Mariana Trench) and possibly forming an arc of islands on the overriding plate (e.g., the Aleutian Islands and south Pacific island chains). In the diagram above, you can see that some subduction zones are quite long.

Subduction zone cross-section. Source: http://www.columbia.edu/~vjd1/subd_zone_basic.htm

The above diagram shows the subducting material from an oceanic plate descending deep into the Earth beneath the overriding continental plate.  New research indicates that the subducting plates maintain their structure to a considerable depth below the surface of the Earth.

On 22 November 2016, an article by Paul Voosen, “’Atlas of the Underworld’ reveals oceans and mountains lost to Earth’s history,” was posted on the sciencemag.org website. The author reports:

“A team of Dutch scientists will announce a catalog of 100 subducted plates, with information about their age, size, and related surface rock records, based on their own tomographic model and cross-checks with other published studies.”

“…geoscientists have begun ….peering into the mantle itself, using earthquake waves that pass through Earth’s interior to generate images resembling computerized tomography (CT) scans. In the past few years, improvements in these tomographic techniques have revealed many of these cold, thick slabs as they free fall in slow motion to their ultimate graveyard—heaps of rock sitting just above Earth’s molten core, 2900 kilometers below.”

The following concept drawing illustrates how a CT scan of the whole Earth might look, with curtains of subducting material surrounding the molten core.

Source: Science / Fabio Crameri

The author notes that research teams around the world are using more than 20 different models to interpret similar tomographic data. As you might expect, results differ. However, a few points are consistent:

  • The subducting slabs in the upper mantle appear to be stiff, straight curtains of lithospheric rock
  • These slabs may flex but they don’t crumble.
  • These two features make it possible to “unwind” the geologic history of individual tectonic slabs and develop a better understanding of the route each slab took to its present location.
  • The geologic history in subducting slabs only stretches back about 250 million years, which is the time it takes for subducting material to fall from the surface to the bottom of the mantle and be fully recycled.

You can read the full article by Paul Voosen at the following link:

http://www.sciencemag.org/news/2016/11/atlas-underworld-reveals-oceans-and-mountains-lost-earths-history

Hopefully, the “Atlas of the Underworld” will help focus the dialogue among international research teams toward collaborative efforts to improve and standardize the processes and models for building an integrated CT model of our Earth.

A “jet stream” in the Earth’s core

The European Space Agency (ESA) developed the Swarm satellites to make highly accurate and frequent measurements of Earth’s continuously changing magnetic field, with the goal of developing new insights into our planet’s formation, dynamics and environment. The three-satellite Swarm mission was launched on 22 November 2013.

Swarm satellites separating from the Russian booster. Source: ESA

ESA’s website for the Swarm mission is at the following link:

http://www.esa.int/Our_Activities/Observing_the_Earth/Swarm/From_core_to_crust

Here ESA explains the value of the measurements made by the Swarm satellites.

“One of the very few ways of probing Earth’s liquid core is to measure the magnetic field it creates and how it changes over time. Since variations in the field directly reflect the flow of fluid in the outermost core, new information from Swarm will further our understanding of the physics and dynamics of Earth’s stormy heart.

The continuous changes in the core field that result in motion of the magnetic poles and reversals are important for the study of Earth’s lithosphere, also known as the ‘crustal’ field, which has induced and remnant magnetized parts. The latter depend on the magnetic properties of the sub-surface rock and the history of Earth’s core field.

We can therefore learn more about the history of the magnetic field and geological activity by studying magnetism in Earth’s crust. As new oceanic crust is created through volcanic activity, iron-rich minerals in the upwelling magma are oriented to magnetic north at the time.

These magnetic stripes are evidence of pole reversals so analyzing the magnetic imprints of the ocean floor allows past core field changes to be reconstructed and also helps to investigate tectonic plate motion.”

Data from the Swarm satellites indicates that the liquid iron part of the Earth’s core has an internal, 420 km (261 miles) wide “jet stream” circling the core at high latitude at a current speed of about 40 km/year (25 miles/year) and accelerating. In geologic terms, this “jet stream” is significantly faster than typical large scale flows in the core. The basic geometry of this “jet stream” is shown in the following diagram.

Source: ESA
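For a rough sense of what 40 km/year means, here is a back-of-envelope estimate of the time the jet would need to circle the core; the core radius and latitude below are assumptions for illustration, not figures from the Swarm results:

```python
import math

# Rough, illustrative estimate of the time for the core "jet stream" to make
# one circuit. Core radius and latitude are assumptions, not Swarm data.
core_radius_km = 3480.0    # approximate radius of the outer core (assumed)
latitude_deg = 60.0        # assumed high latitude of the jet
speed_km_per_year = 40.0   # flow speed reported in the article

circumference_km = 2.0 * math.pi * core_radius_km * math.cos(math.radians(latitude_deg))
years_per_circuit = circumference_km / speed_km_per_year
print(f"Path length: {circumference_km:.0f} km, ~{years_per_circuit:.0f} years per circuit")
```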

These results were published on 19 December 2016 in the article, “An accelerating high-latitude jet in Earth’s core,” on the Nature Geoscience website at the following link:

http://www.nature.com/ngeo/journal/vaop/ncurrent/full/ngeo2859.html

A subscription is required for access to the full paper.

The Swarm mission is ongoing. Watch ESA’s mission website for more news.

Airbus Delivers its 10,000th Aircraft

Peter Lobner

Airbus was founded on 18 December 1970 and delivered its first aircraft, an A300B2, to Air France on 10 May 1974. This was the world’s first twin-engine, wide body (two aisles) commercial airliner, beating Boeing’s 767, which was not introduced into commercial service until September 1982. The A300 was followed in the early 1980s by a shorter derivative, the A310, and then, later that decade, by the single-aisle A320. The A320 competed directly with the single-aisle Boeing 737 and developed into a very successful family of single-aisle commercial airliners: A318, A319, A320 and A321.

On 14 October 2016, Airbus announced the delivery of its 10,000th aircraft, which was an A350-900 destined for service with Singapore Airlines.

Source: Airbus

In their announcement, Airbus noted:

“The 10,000th Airbus delivery comes as the manufacturer achieves its highest level of production ever and is on track to deliver at least 650 aircraft this year from its extensive product line. These range from 100 to over 600 seats and efficiently meet every airline requirement, from high frequency short haul operations to the world’s longest intercontinental flights.”

You can read the complete Airbus press release at the following link:

http://www.airbus.com/presscentre/pressreleases/press-release-detail/detail/-9b32c4364a/

As noted previously, Airbus beat Boeing to the market for twinjet, wide-body commercial airliners, which are the dominant airliner type on international and high-density routes today. Airbus also was an early adopter of fly-by-wire flight controls and a “glass cockpit”, which they first introduced in the A320 family.

In October 2007, the ultra-large A380 entered service, taking the honors from the venerable Boeing 747 as the largest commercial airliner. Rather than compete head-to-head with the A380, Boeing opted for stretching its 777 and developing a smaller, more advanced and more efficient, all-composite new airliner, the 787, which was introduced into airline service in 2011.

Airbus countered with the A350 XWB in 2013. This is the first Airbus with fuselage and wing structures made primarily of carbon fiber composite material, similar to the Boeing 787.

The current Airbus product line comprises a total of 16 models in four aircraft families: A320 (single aisle), A330 (two aisle wide body), A350 XWB (two aisle wide body) and A380 (twin deck, two aisle wide body). The following table summarizes Airbus commercial jet orders, deliveries and operational status as of 30 November 2016.

Airbus commercial jet orders and deliveries. * Includes all models in this family. Source: https://en.wikipedia.org/wiki/Airbus

Boeing is the primary competitor to Airbus. Boeing’s first commercial jet airliner, the 707, began commercial service with Pan American World Airways on 26 October 1958. The current Boeing product line comprises five airplane families: 737 (single aisle), 747 (twin deck, two aisle wide body), 767 (wide body, freighter only), 777 (two aisle wide body) and 787 (two aisle wide body).

The following table summarizes Boeing’s commercial jet orders, deliveries and operational status as of 30 June 2016. In that table, note that the Boeing 717 started life in 1965 as the Douglas DC-9, which in 1980 became the McDonnell-Douglas MD-80 (series) / MD-90 (series) before Boeing acquired McDonnell-Douglas in 1997. Then the latest version, the MD-95, became the Boeing 717.

Boeing commercial jet order status as of 30 June 2016.

Source: https://en.wikipedia.org/wiki/Boeing_Commercial_Airplanes

Boeing’s official sales projections for 2016 are for 740 – 745 aircraft. Industry reports suggest a lower sales total is more likely because of weak worldwide sales of wide body aircraft.

Not including the earliest Boeing models (707, 720, 727) or the Douglas DC-9 derived 717, here’s how the modern competition stacks up between Airbus and Boeing.

Single-aisle twinjet:

  • 12,805 Airbus A320 family (A318, A319, A320 and A321)
  • 14,527 Boeing 737 and 757

Two-aisle twinjet:

  • 3,260 Airbus A300, A310, A330 and A350
  • 3,912 Boeing 767, 777 and 787

Twin aisle four jet heavy:

  • 696 Airbus A340 and A380
  • 1,543 Boeing 747

These simple metrics show how close the competition is between Airbus and Boeing. It will be interesting to see how these large airframe manufacturers fare in the next decade as they face more international competition, primarily at the lower end of their product range: the single-aisle twinjets. Former regional jet manufacturers Bombardier (Canada) and Embraer (Brazil) are now offering larger aircraft that can compete effectively in some markets. For example, the new Bombardier C Series is optimized for the 100 – 150 seat market segment. The Embraer E170/175/190/195 families offer capacities from 70 to 124 seats, and range up to 3,943 km (2,450 miles). Other new manufacturers soon will be entering this market segment, including Russia’s Sukhoi Superjet 100 with about 108 seats, the Chinese Comac C919 with up to 168 seats, and Japan’s Mitsubishi Regional Jet with 70 – 80 seats.

At the upper end of the market, demand for four jet heavy aircraft is dwindling. Boeing is reducing the production rate of its 747-8, and some airlines are planning to not renew their leases on A380s currently in operation.

It will be interesting to watch how Airbus and Boeing respond to this increasing competition and to increasing pressure for controlling aircraft engine emissions after the Paris Agreement became effective in November 2016.

Qualcomm Tricorder XPrize Competition Down to Two Finalists

Peter Lobner

I described the Qualcomm Tricorder XPrize competition in my 10 March 2015 post, “Medical Tricorder Technology is Closer Than you Think.” The goal of the competition is to develop a real-world equivalent of the Star Trek Tricorder, with the following basic capabilities and features:

  • Diagnose at least 13 different health conditions including the following nine required conditions: anemia, atrial fibrillation, chronic obstructive pulmonary disease, diabetes, leukocytosis, pneumonia, otitis media, sleep apnea and urinary tract infection.
  • Weigh less than five pounds

At the time of my last update in December 2015, the following seven teams had been selected to compete in the extended Final Round for $10 million in prize money.

  • Aezon (U.S.)
  • Clouddx (Canada)
  • Danvantri (India)
  • DMI (U.S.)
  • Dynamical Biomarkers Group (Taiwan)
  • Final Frontier Medical Devices (U.S.)
  • Intellesens-Scanadu (UK)

Each of these teams submitted their final working prototypes for evaluation in Q3 2016. On 13 December 2016, Qualcomm Tricorder XPrize announced that they had selected two teams to continue into the finals:

“Congratulations to our two final teams, Dynamical Biomarkers Group and Final Frontier Medical Devices, who will proceed to the final phase in the $10M Qualcomm Tricorder XPRIZE. Both teams’ devices will undergo consumer testing over the next few months at the Altman Clinical Translational Research Institute at the University of California San Diego, and the winner will be announced in Q2, 2017.”

Both teams are required to deliver 45 kits for testing.

The XPrize will be split with $6 million going to the winning team, $2 million going to the runner-up, and $1 million for the team that receives the highest vital signs score in the final round. An additional $1M already has been awarded in milestone prizes.

The two competing devices are briefly described below. For more information, visit the Qualcomm Tricorder XPrize website at the following link:

http://tricorder.xprize.org

Dynamical Biomarkers Group

Source: Qualcomm Tricorder XPrize

 Key system features:

  • Comprised of three modules: Smart Vital-Sense Monitor; Smart Blood-Urine Test Kit; Smart Scope Module.
  • Includes technologies for physiologic signal analysis, image processing, and biomarker detection.
  • A smartphone app executes a simple, interactive screening process that guides the user to carry out specific tests to generate a disease diagnosis. The phone’s on-board camera is used to capture images of test strips. The smartphone communicates with the base unit via Bluetooth.
  • The base unit uploads collected data to a remote server for analysis.

Final Frontier Medical Devices: DxtER

Source: Qualcomm Tricorder XPrize

Key system features:

  • DxtER is designed as a consumer product for monitoring your health and diagnosing illnesses in the comfort of your own home.
  • Non-invasive sensors collect data about your vital signs, body chemistry, and biological functions.
  • An iPad Mini with an on-board AI diagnostic app synthesizes the health data to generate a diagnosis.
  • While DxtER functions autonomously, it also can share data with a remote healthcare provider.

Best wishes to both teams as they enter the final round of this challenging competition, which could significantly change the way some basic medical services are delivered in the U.S. and around the world.

First Ever Antimatter Spectroscopy in ALPHA-2

Peter Lobner

ALPHA-2 is a device at the European particle physics laboratory at CERN, in Meyrin, Switzerland, used for collecting and analyzing antimatter, or more specifically, antihydrogen. A common hydrogen atom is composed of an electron and a proton. In contrast, an antihydrogen atom is made up of a positron bound to an antiproton.

Source: CERN

The ALPHA-2 project homepage is at the following link:

http://alpha.web.cern.ch

On 16 December 2016, the ALPHA-2 team reported the first ever optical spectroscopic observation of the 1S-2S (ground state – 1st excited state) transition of antihydrogen that had been trapped and excited by a laser.

“This is the first time a spectral line has been observed in antimatter. ……..This first result implies that the 1S-2S transition in hydrogen and antihydrogen are not too different, and the next steps are to measure the transition’s lineshape and increase the precision of the measurement.”

In the ALPHA-2 online news article, “Observation of the 1S-2S Transition in Trapped Antihydrogen Published in Nature,” you will find two short videos explaining how this experiment was conducted:

  • Antihydrogen formation and 1S-2S excitation in ALPHA
  • ALPHA first ever optical spectroscopy of a pure anti atom

These videos describe the process for creating antihydrogen within a magnetic trap (octupole & mirror coils) containing positrons and antiprotons. Selected screenshots from the first video are reproduced below to illustrate the process of creating and exciting antihydrogen and measuring the results.

The ALPHA-2 magnetic trap (octupole and mirror coils).

The potentials along the trap are manipulated to allow the initially separated positron and antiproton populations to combine, interact and form antihydrogen.

Combining positron and antiproton populations (three-step sequence).

If the magnetic trap is turned off, the antihydrogen atoms will drift into the inner wall of the device and immediately be annihilated, releasing pions that are detected by the “annihilation detectors” surrounding the magnetic trap. This 3-layer detector provides a means for counting antihydrogen atoms.

Detecting antihydrogen

A tuned laser is used to excite the antihydrogen atoms in the magnetic trap from the 1S (ground) state to the 2S (first excited) state. The interaction of the laser with the antihydrogen atoms is determined by counting the number of free antiprotons annihilating after photo ionization (an excited antihydrogen atom loses its positron) and counting all remaining antihydrogen atoms. Two cases were investigated: (1) laser tuned for resonance of the 1S-2S transition, and (2) laser detuned, not at resonance frequency. The observed differences between these two cases confirmed that, “the on-resonance laser light is interacting with the antihydrogen atoms via their 1S-2S transition.”

Exciting antihydrogen

The ALPHA-2 team reported that the accuracy of the current antihydrogen measurement of the 1S-2S transition is about “a few parts in 10 billion” (10¹⁰). In comparison, this transition in common hydrogen has been measured to an accuracy of “a few parts in a thousand trillion” (10¹⁵).
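For context, a back-of-envelope sketch using the ordinary-hydrogen Rydberg formula (which antihydrogen is expected to reproduce if matter and antimatter behave identically) gives the 1S-2S energy and the corresponding two-photon laser wavelength:

```python
# Bohr-model / Rydberg estimate of the hydrogen 1S-2S interval.
# The 1S-2S transition is driven by two photons, so the laser wavelength is
# twice the single-photon wavelength (~243 nm).
RYDBERG_EV = 13.605693     # hydrogen ionization energy, eV
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C = 2.99792458e8           # speed of light, m/s

delta_E = RYDBERG_EV * (1.0 - 1.0 / 2 ** 2)               # 1S -> 2S energy, ~10.2 eV
freq = delta_E / H_EV_S                                    # transition frequency, ~2.47e15 Hz
laser_wavelength_nm = 2.0 * (H_EV_S * C / delta_E) * 1e9   # two-photon drive, ~243 nm

print(f"1S-2S energy:     {delta_E:.2f} eV")
print(f"1S-2S frequency:  {freq:.3e} Hz")
print(f"Two-photon laser: {laser_wavelength_nm:.0f} nm")
```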

For more information, see the 19 December 2016 article by Adrian Cho, “Deep probe of antimatter puts Einstein’s special relativity to the test,” which is posted on the Sciencemag.org website at the following link:

http://www.sciencemag.org/news/2016/12/deep-probe-antimatter-puts-einstein-s-special-relativity-test?utm_campaign=news_daily_2016-12-19&et_rid=215579562&et_cid=1062570

Polyhedral Projections Improve the Accuracy of Mapping the Earth on a 2D Surface

Peter Lobner

Representing the Earth’s 3-dimensional surface on a 2-dimensional map is a problem that has vexed cartographers through the ages. The difficulties in creating a 2D map of the world include maintaining continental shapes, distances, areas, and relative positions so the 2D map is useful for its intended purpose.

World map circa 1630. Source: World Maps Online

In this article, we’ll look at selected classical projection schemes for creating a 2D world map followed by polyhedral projection schemes, the newest of which, the AuthaGraph World Map, may yield the most accurate maps of the world.

1. Classical Projections

To get an appreciation for the large number of classical projection schemes that have been developed to create 2D world maps, I suggest that you start with a visit to the Radical Cartography website at the following link, where you’ll find descriptions of 31 classical projections (and 2 polyhedral projections).

http://www.radicalcartography.net/?projectionref

Now let’s take a look at the following classical projection schemes.

  • 1569 Mercator projection
  • 1855 Gall equal-area projection & 1967 Gall-Peters equal-area projection
  • 1805 Mollweide equal-area projection
  • 1923 Goode homolosine projection

Mercator projection

The Mercator world map is a cylindrical projection that is created as shown in the following diagram.

Cylindrical projection

Source: adapted from http://images.slideplayer.com/15/4659006/slides/slide_17.jpg

Mercator map

Source: https://tripinbrooklyn.files.wordpress.com/2008/04/new_world60_small.gif?w=450

Important characteristics of a Mercator map are:

  • It represents a line of constant course (rhumb line) as a straight line segment with a constant angle to the meridians on the map. Therefore, Mercator maps became the standard map projection for nautical purposes.
  • The linear scale of a Mercator map increases with latitude. This means that geographical objects further from the equator appear disproportionately larger than objects near the equator. You can see this in the relative size comparison of Greenland and Africa, below.

Relative sizes of Greenland and Africa on a Mercator map.

The size distortion on Mercator maps has led to significant criticism of this projection, primarily because it conveys a distorted perception of the overall geometry of the planet.
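The latitude-dependent stretching is easy to quantify from the textbook spherical Mercator formulas (a minimal sketch, not tied to any particular map shown here):

```python
import math

# Textbook (spherical) Mercator projection on a unit sphere:
#   x = lon, y = ln(tan(pi/4 + lat/2))
# The local linear scale grows as sec(latitude), so high-latitude regions are
# stretched; areas grow as sec(latitude)^2.
def mercator_y(lat_deg):
    lat = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4.0 + lat / 2.0))

def scale_factor(lat_deg):
    return 1.0 / math.cos(math.radians(lat_deg))

for lat in (0, 30, 60, 70):
    print(f"lat {lat:2d} deg: y = {mercator_y(lat):5.2f}, "
          f"linear scale x{scale_factor(lat):.2f}, area scale x{scale_factor(lat) ** 2:.2f}")
# At 70 deg N (central Greenland) areas are inflated roughly 8.5x relative to the
# equator, which is why Greenland can look comparable to Africa on a Mercator map.
```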

Gall equal-area projection & Gall-Peters equal-area projection

James Gall developed a cylindrical “equal area” projection that attempted to rectify the significant area distortions in Mercator projections. There are several similar cylindrical “equal-area” projection schemes that differ mainly in the scaling factor (standard parallel) used.

In 1967, German filmmaker Arno Peters “re-invented” the century-old Gall equal-area projection and claimed that it better represented the interests of the many small nations in the equatorial region that were marginalized (at least in terms of area) in the Mercator projection. Arno’s focus was on the social stigma of this marginalization. UNESCO favors the Gall-Peters projection.

Gall–Peters projection. Source: Strebe, Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=16115242

Mollweide equal-area projection

The key strength of this projection is in the accuracy of land areas, but with compromises in angle and shape. The central meridian is perpendicular to the equator and one-half the length of the equator. The whole earth is depicted in a proportional 2:1 ellipse.

This projection is popular in maps depicting global distributions. Astronomers also use the Mollweide equal-area projection for maps of the night sky.

Mollweide projection. Source: Wikimedia Commons

An interrupted Mollweide map addresses the issue of shape distortion, while preserving the relative accuracy of land areas.

Interrupted Mollweide

Source: http://www.progonos.com/furuti/MapProj/Normal/ProjInt/ProjIntC/projIntC.html

Goode homolosine projection

This projection is a combination of sinusoidal (to mid latitudes) and Mollweide at higher latitudes. It has no distortion along the equator or the vertical meridians in the middle latitudes. It was developed as a teaching replacement for Mercator maps. It is used by the U.S. Geological Survey (USGS) and also is found in many school atlases. The version shown below includes extensions repeating a few portions in order to show Greenland and eastern Russia uninterrupted.

Goode Homolosine Projection

Source: http://www.progonos.com/furuti/MapProj/Normal/ProjInt/ProjIntC/projIntC.html

2. Polyhedral Projections

In his 1525 book, Underweysung der Messung (Painter’s Manual), German printmaker Albrecht Dürer presented the earliest known examples of how a sphere could be represented by a polyhedron that could be unfolded to lie flat for printing. The polyhedral shapes he described included the icosahedron and the cuboctahedron.

While Dürer did not apply these ideas at the time to cartography, his work laid the foundation for the use of complex polyhedral shapes to create 2D maps of the globe. Several examples are shown in the following diagram.

Polyhedral globes and maps. Source: J.J. van Wijk, “Unfolding the Earth: Myriahedral Projections”

Now we’ll take a look at the following polyhedral maps:

  • 1909 Bernard J. S. Cahill’s butterfly map
  • 1943 & 1954 R. Buckminster Fuller’s Dymaxion globe & map
  • 1975 Cahill-Keyes World Map
  • 1996 Waterman polyhedron projections
  • 2008 Jarke J. van Wijk myriahedral projection
  • 2016 AuthaGraph World Map

Bernard J. S. Cahill’s Butterfly Map

Cahill was the inventor of the “butterfly map,” which is comprised of eight symmetrical triangular lobes. The basic geometry of Cahill’s process for unfolding a globe into eight symmetrical octants and producing a butterfly map is shown in the following diagram made by Cahill in his original 1909 article on this mapping process.

Cahill mapping process

The octants were arrayed four above and four below the equator. As shown below, the octant starting point in longitude (meridian) was strategically selected so all continents would be uninterrupted on the 2D map surface. This type of projection offered a 2D world map with much better fidelity to the globe than a Mercator projection.

Cahill’s 1909 butterfly map. Source: genekeyes.com

You can read Cahill’s original 1909 article in the Scottish Geographical Magazine at the following link:

http://www.genekeyes.com/CAHILL-1909/Cahill-1909.html

R. Buckminster Fuller’s Dymaxion Globe & Map

In the 1940s, R. Buckminster Fuller developed his approach for mapping the spherical globe onto a polyhedron. He first used a 14-sided cuboctahedron (8 triangular faces and 6 square faces), with each edge of the polyhedron representing a partial great circle on the globe. For each polyhedral face, Fuller developed his own projection of the corresponding surface of the globe. Fuller first published this map in Life magazine on 1 March 1943 along with cut-outs and instructions for assembling a polygonal globe.

Fuller’s 1943 Dymaxion map. Source: Life magazine

Fuller’s 1943 cuboctahedron Dymaxion globe. Source: Life magazine

You can see the complete Life magazine article, “R. Buckminster Fuller’s Dymaxion World,” at the following link:

https://books.google.co.uk/books?id=WlEEAAAAMBAJ&pg=PA41&source=gbs_toc_r&redir_esc=y&hl=en#v=onepage&q&f=false

A later, improved version, known as the Airocean World Map, was published in 1954. This version of Fuller’s Dymaxion map, shown below, was based on a regular icosahedron, which has 20 triangular faces with each edge representing a partial great circle on a globe.

Source: http://www.genekeyes.com/FULLER/1972-BF-BNS-.25-.95.1-Sc-1.jpg

You can see in the diagram below that there are relatively modest variations between the icosahedron’s 20 surfaces and the surface of a sphere.

Sphere vs icosahedron

Source: https://sciencevspseudoscience.files.wordpress.com/2013/09/embedded_icosahedron.png
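One way to quantify how closely a regular icosahedron approximates its circumscribed sphere is to compare surface areas using standard solid-geometry formulas; this is a property of the solid itself, not of Fuller's projection:

```python
import math

# Regular icosahedron with circumradius R: edge length a = 4R / sqrt(10 + 2*sqrt(5)),
# surface area = 5 * sqrt(3) * a^2. Compare with the sphere area 4 * pi * R^2.
R = 1.0
a = 4.0 * R / math.sqrt(10.0 + 2.0 * math.sqrt(5.0))
icosahedron_area = 5.0 * math.sqrt(3.0) * a ** 2
sphere_area = 4.0 * math.pi * R ** 2

print(f"Edge length / R:      {a:.4f}")
print(f"Icosahedron / sphere: {icosahedron_area / sphere_area:.3f}")  # ~0.76
```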

Fuller’s icosahedron Dymaxion globe. Source: http://workingknowledge.com/blog/wp-content/uploads/2012/03/DymaxionPic.jpg

You can watch an animation of a spherical globe transforming into an icosahedron and then unfolding into a 2D map at the following link:

https://upload.wikimedia.org/wikipedia/commons/b/bb/Dymaxion_2003_animation_small1.gif

Cahill-Keyes World Map

The Cahill–Keyes World Map, developed in 1975, is an adaptation of the 1909 Cahill butterfly map. The Cahill-Keyes World Map also is a polyhedral map composed of eight symmetrical octants, with a compromise treatment for Antarctica. Desirable characteristics include symmetry of component maps (octants) and scalability, which allows the map to continue to work well even at high resolution.

Cahill-Keyes World Map. Source: http://imgur.com/GICCYmz

Waterman polyhedron projection maps

The Waterman polyhedron projection is another variation of the “butterfly” projection that is created by unfolding the globe into eight symmetric, truncated octahedrons plus a separate eight-sided piece for Antarctica.  The Atlantic-centered projection and the comparable Pacific-centered projection are shown below.

Waterman Atlantic

Waterman Pacific

Source, two maps: watermanpolyhedron.com

The Waterman home page is at the following link:

http://watermanpolyhedron.com/deploy/

Here the developers make the following claims:

“Shows the equator clearly, as well as continental shapes, distances (within 10 %), areas (within 10 %) angular distortions (within 20 degrees), and relative positions, as a compromise: statistically better than all other World maps.”

Myriahedral projection maps

A myriahedron is a polyhedron with a myriad of faces. This projection was developed in 2008 by Jarke J. van Wijk and is described in detail in the article, “Unfolding the Earth: Myriahedral Projections,” in the Cartographic Journal, which you can read at the following link:

https://www.win.tue.nl/~vanwijk/myriahedral/

Examples of myriahedral projections are shown below. As you can see, there are many different ways to define a 2D map using a myriahedral projection.

Myriahedral projection. Source: https://www.win.tue.nl/~vanwijk/myriahedral/geo_aligned_maps.png

AuthaGraph World Map

The latest attempt to accurately map the globe on a 2D surface is the AuthaGraph World Map, made by equally dividing a spherical surface into 96 triangles, transferring them to a tetrahedron while maintaining area proportions, and unfolding it into a rectangle. The developers explain the basic process as follows:

“…we developed an original world map called ‘AuthaGraph World Map’ which represents all oceans, continents including Antarctica which has been neglected in many existing maps in substantially proper sizes. These fit in a rectangular frame without interruptions and overlaps. The idea of this projection method was developed through an intensive research by modeling spheres and polyhedra. And then we applied the idea to cartography as one of the most useful applications.”

The AuthaGraph World Map. Source: AuthaGraph

For detailed information on this mapping process, I suggest that you start at the AuthaGraph home page:

http://www.authagraph.com/top/?lang=en

From here, select “Details” for a comprehensive review of the mapping technology behind the AuthaGraph World Map.

Also check out the 4 November 2016 article on the AuthaGraph World Map, “This Might Be the Most Accurate Map of the World,” at the following link:

http://motherboard.vice.com/read/this-might-be-the-most-accurate-world-map-we-have?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter

What to do with Carbon Dioxide

Peter Lobner

In my 17 December 2016 post, “Climate Change and Nuclear Power,” there is a chart that shows the results of a comparative life cycle greenhouse gas (GHG) analysis for 10 electric power-generating technologies. In that chart, it is clear how carbon dioxide capture and storage technologies can greatly reduce the GHG emissions from gas and coal generators.

An overview of carbon dioxide capture and storage technology is presented in a December 2010 briefing paper issued by Imperial College London. This paper includes the following process flow diagram showing the capture of CO2 from major sources, use or storage of CO2 underground, and use of CO2 as a feedstock in other industrial processes.

Carbon capture and storage process

You can download the Imperial College London briefing paper at the following link:

https://www.imperial.ac.uk/media/imperial-college/grantham-institute/public/publications/briefing-papers/Carbon-dioxide-storage—-Grantham-BP-4.pdf

Here is a brief look at selected technologies being developed for underground storage (sequestration) and industrial utilization of CO2.

Store in basalt formations by making carbonate rock

Iceland generates about 85% of its electric power from renewable resources, primarily hydro and geothermal. Nonetheless, Reykjavik Energy initiated a project called CarbFix at their 303 MWe Hellisheidi geothermal power plant to control its rather modest CO2 emissions along with hydrogen sulfide and other gases found in geothermal steam.

Hellisheidi geothermal power plant. Source: Power Technology

The process system collects the CO2 and other gases, dissolves the gas in large volumes of water, and injects the water into porous, basaltic rock 400 – 800 meters (1,312 – 2,624 feet) below the surface. In the deep rock strata, the CO2 undergoes chemical reactions with the naturally occurring calcium, magnesium and iron in the basalt, permanently immobilizing the CO2 as environmentally benign carbonates. There typically are large quantities of calcium, magnesium and iron in basalt, giving a basalt formation a large CO2 storage capacity.

The surprising aspect of this process is that the injected CO2 was turned into hard rock very rapidly. Researchers found that in two years, more than 95% of the CO2 injected into the basaltic formation had been turned into carbonate.
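A rough stoichiometric sketch, assuming for illustration that all of the injected CO2 ends up as calcite (CaCO3), shows why the mass of carbonate formed exceeds the mass of CO2 stored:

```python
# Illustrative stoichiometry: CO2 + Ca(2+) + H2O -> CaCO3 + 2 H(+)
# One mole of CO2 (44 g) is fixed in one mole of calcite (100 g).
M_CO2 = 44.01     # g/mol
M_CACO3 = 100.09  # g/mol

tonnes_co2_injected = 1.0
tonnes_calcite = tonnes_co2_injected * (M_CACO3 / M_CO2)
print(f"~{tonnes_calcite:.2f} tonnes of calcite per tonne of CO2 mineralized")
# In practice magnesium and iron carbonates also form in basalt.
```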

For more information, see the 9 June 2016 Washington Post article by Chris Mooney, “This Iceland plant just turned carbon dioxide into solid rock — and they did it super fast,” at the following link:

https://www.washingtonpost.com/news/energy-environment/wp/2016/06/09/scientists-in-iceland-have-a-solution-to-our-carbon-dioxide-problem-turn-it-into-stone/?utm_term=.886f1ca92c56

The author notes,

“The researchers are enthusiastic about their possible solution, although they caution that they are still in the process of scaling up to be able to handle anything approaching the enormous amounts of carbon dioxide that are being emitted around the globe — and that transporting carbon dioxide to locations featuring basalt, and injecting it in large volumes along with even bigger amounts of water, would be a complex affair.”

Basalt formations are common worldwide, making up about 10% of continental rock and most of the ocean floor. Iceland is about 90% basalt.

Detailed results of this Reykjavik Energy project are reported in a May 2016 paper by J.M. Matter, M. Stute, et al., “Rapid carbon mineralization for permanent disposal of anthropogenic carbon dioxide emissions,” which is available on the Research Gate website at the following link:

https://www.researchgate.net/publication/303450549_Rapid_carbon_mineralization_for_permanent_disposal_of_anthropogenic_carbon_dioxide_emissions

Similar findings were made in a separate pilot project in the U.S. conducted by Pacific Northwest National Laboratory and the Big Sky Carbon Sequestration Partnership. In this project, 1,000 tons of pressurized liquid CO2 were injected into a basalt formation in eastern Washington state in 2013. Samples taken two years later confirmed that the CO2 had been converted to carbonate minerals.

These results were published in a November 2016 paper by B. P McGrail, et al., “Field Validation of Supercritical CO2 Reactivity with Basalts.” The abstract and the paper are available at the following link:

http://pubs.acs.org/doi/pdf/10.1021/acs.estlett.6b00387

Store in fractures in deep crystalline rock

Lawrence Berkeley National Laboratory has established an initiative dubbed SubTER (Subsurface Technology and Engineering Research, Development and Demonstration Crosscut) to study how rocks fracture and to develop a predictive understanding of fracture control. A key facility is an observatory set up 1,478 meters (4,850 feet) below the surface in the former Homestake mine near Lead, South Dakota (note: Berkeley shares this mine with the neutrino and dark matter detectors of the Sanford Underground Research Facility). The results of the Berkeley effort are expected to be applicable both to energy production and waste storage strategies, including carbon capture and sequestration.

You can read more about this Berkeley project in the article, “Underground Science: Berkeley Lab Digs Deep For Clean Energy Solutions,” on the Global Energy World website at the following link:

http://www.newswise.com/articles/view/663141/?sc=rssn&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+NewswiseScinews+%28Newswise%3A+SciNews%29

Make ethanol

Researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL) have defined an efficient electrochemical process for converting CO2 into ethanol. While direct electrochemical conversion of CO2 to useful products has been studied for several decades, the yields of most reactions have been very low (single-digit percentages) and some required expensive catalysts.

Key points about the new process developed by ORNL are:

  • The electro-reduction process occurs in CO2 saturated water at ambient temperature and pressure with modest electrical requirements
  • The nanotechnology catalyst is made from inexpensive materials: carbon nanospike (CNS) electrode with electro-nucleated copper nanoparticles (Cu/CNS). The Cu/CNS catalyst is unusual because it primarily produces ethanol.
  • Process yield (conversion efficiency from CO2 to ethanol) is high: about 63%; a rough sketch of what this implies follows this list
  • The process can be scaled up.
  • A process like this could be used in an energy storage / conversion system that consumes extra electricity when it’s available and produces / stores ethanol for later use.
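As a rough sketch of what a 63% yield implies if it is interpreted as a Faradaic efficiency, one commonly written overall reaction is the 12-electron reduction 2 CO2 + 12 H+ + 12 e− → C2H5OH + 3 H2O; the operating point below is an assumption for illustration, not a value from the ORNL paper:

```python
# Ethanol produced per coulomb of charge at a given Faradaic efficiency.
FARADAY = 96485.0           # coulombs per mole of electrons
ELECTRONS_PER_ETHANOL = 12  # 2 CO2 + 12 H+ + 12 e- -> C2H5OH + 3 H2O
M_ETHANOL = 46.07           # g/mol

def ethanol_grams(charge_coulombs, faradaic_efficiency=0.63):
    moles = faradaic_efficiency * charge_coulombs / (ELECTRONS_PER_ETHANOL * FARADAY)
    return moles * M_ETHANOL

# Example: a cell run at 1 A for 24 hours (an assumed operating point).
charge = 1.0 * 24 * 3600   # coulombs
print(f"{ethanol_grams(charge):.2f} g of ethanol")   # ~2.2 g
```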

You can read more on this process in the 19 October 2016 article, “Scientists just accidentally discovered a process that turns CO2 directly into ethanol,” on the Science Alert website at the following link:

http://www.sciencealert.com/scientists-just-accidentally-discovered-a-process-that-turns-co2-directly-into-ethanol

The full paper is available on the ChemistrySelect website at the following link:

http://onlinelibrary.wiley.com/doi/10.1002/slct.201601169/full

Emergent Gravity Theory Passes its First Test

Peter Lobner

In 2010, Prof. Erik Verlinde, University of Amsterdam, Delta Institute for Theoretical Physics, published the paper, “The Origin of Gravity and the Laws of Newton.” In this paper, the author concluded:

 “The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged. If the gravity and space time can indeed be explained as emergent phenomena, this should have important implications for many areas in which gravity plays a central role. It would be especially interesting to investigate the consequences for cosmology. For instance, the way redshifts arise from entropy gradients could lead to many new insights.

The derivation of the Einstein equations presented in this paper is analogous to previous works, in particular [the 1995 paper by T. Jacobson, ‘Thermodynamics of space-time: The Einstein equation of state.’]. Also other authors have proposed that gravity has an entropic or thermodynamic origin, see for instance [the paper by T. Padmanabhan, ‘Thermodynamical Aspects of Gravity: New insights.’]. But we have added an important element that is new. Instead of only focusing on the equations that govern the gravitational field, we uncovered what is the origin of force and inertia in a context in which space is emerging. We identified a cause, a mechanism, for gravity. It is driven by differences in entropy, in whatever way defined, and a consequence of the statistical averaged random dynamics at the microscopic level. The reason why gravity has to keep track of energies as well as entropy differences is now clear. It has to, because this is what causes motion!”

You can download Prof. Verlinde’s 2010 paper at the following link:

https://arxiv.org/pdf/1001.0785.pdf

On 8 November 2016, Delta Institute announced that Prof. Verlinde had published a new research paper, “Emergent Gravity and the Dark Universe,” expanding on his previous work. You can read this announcement and see a short video by Prof. Verlinde on the Delta Institute website at the following link:

http://www.d-itp.nl/news/list/list/content/folder/press-releases/2016/11/new-theory-of-gravity-might-explain-dark-matter.html

You can download this new paper at the following link:

https://arxiv.org/abs/1611.02269

I found it helpful to start with Section 8, Discussion and Outlook, which is the closest you will find to a layman’s description of the theory.

On the Phys.org website, a short 8 November 2016 article, “New Theory of Gravity Might Explain Dark Matter,” provides a good synopsis of Verlinde’s emergent gravity theory:

“According to Verlinde, gravity is not a fundamental force of nature, but an emergent phenomenon. In the same way that temperature arises from the movement of microscopic particles, gravity emerges from the changes of fundamental bits of information, stored in the very structure of spacetime……

According to Erik Verlinde, there is no need to add a mysterious dark matter particle to the theory……Verlinde shows how his theory of gravity accurately predicts the velocities by which the stars rotate around the center of the Milky Way, as well as the motion of stars inside other galaxies.

One of the ingredients in Verlinde’s theory is an adaptation of the holographic principle, introduced by his tutor Gerard ‘t Hooft (Nobel Prize 1999, Utrecht University) and Leonard Susskind (Stanford University). According to the holographic principle, all the information in the entire universe can be described on a giant imaginary sphere around it. Verlinde now shows that this idea is not quite correct—part of the information in our universe is contained in space itself.

This extra information is required to describe that other dark component of the universe: Dark energy, which is believed to be responsible for the accelerated expansion of the universe. Investigating the effects of this additional information on ordinary matter, Verlinde comes to a stunning conclusion. Whereas ordinary gravity can be encoded using the information on the imaginary sphere around the universe, as he showed in his 2010 work, the result of the additional information in the bulk of space is a force that nicely matches that attributed to dark matter.”

Read the full Phys.org article at the following link:

http://phys.org/news/2016-11-theory-gravity-dark.html#jCp

On 12 December 2016, a team from Leiden Observatory in The Netherlands reported favorable results of the first test of the emergent gravity theory. Their paper, “First Test of Verlinde’s Theory of Emergent Gravity Using Weak Gravitational Lensing Measurements,” was published in the Monthly Notices of the Royal Astronomical Society. The complete paper is available at the following link:

http://mnras.oxfordjournals.org/content/early/2016/12/09/mnras.stw3192

An example of a gravitational lens is shown in the following diagram.

Source: NASA, ESA & L. Calça

As seen from the Earth, the light from the galaxy at the left is bent by the gravitational forces of the galactic cluster in the center, much like light passing through an optical lens.
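For a sense of scale, the standard general-relativity deflection formula α = 4GM/(c²b) applies to light passing a mass M at impact parameter b; this is ordinary gravitational lensing, not specific to Verlinde's theory. A minimal sketch:

```python
import math

# Einstein deflection angle for light grazing a massive object: alpha = 4*G*M / (c^2 * b)
G = 6.674e-11   # m^3 kg^-1 s^-2
C = 2.998e8     # m/s

def deflection_arcsec(mass_kg, impact_parameter_m):
    alpha_rad = 4.0 * G * mass_kg / (C ** 2 * impact_parameter_m)
    return math.degrees(alpha_rad) * 3600.0

# Light grazing the Sun: the classic ~1.75 arcsecond result.
sun_mass = 1.989e30    # kg
sun_radius = 6.963e8   # m
print(f"Solar limb deflection: {deflection_arcsec(sun_mass, sun_radius):.2f} arcsec")
```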

The Leiden Observatory authors reported:

“We find that the prediction from EG, despite requiring no free parameters, is in good agreement with the observed galaxy-galaxy lensing profiles in four different stellar mass bins. Although this performance is remarkable, this study is only a first step. Further advancements on both the theoretical framework and observational tests of EG are needed before it can be considered a fully developed and solidly tested theory.”

These are exciting times! As noted in the Phys.org article, “We might be standing on the brink of a new scientific revolution that will radically change our views on the very nature of space, time and gravity.”