
Antediluvian Continents and Modern Sovereignty Over Continental Seabeds

Peter Lobner

Ignatius Donnelly was the author of the book Atlantis: The Antediluvian World, published in 1882. I remember reading this book in 1969 and being fascinated by the concept of a lost continent hidden somewhere beneath today’s oceans. While Atlantis has yet to be found, researchers have reported finding extensive continental landmasses beneath the waters of the South Pacific and Indian Oceans. Let’s take a look at these two mostly submerged continents and how improved knowledge of their subsea geography and geology can affect the definition of sovereign maritime zones.

Zealandia

In a 2017 paper entitled “Zealandia: Earth’s Hidden Continent,” the authors, N. Mortimer, et al., reported finding a submerged, coherent (i.e., not a collection of continental fragments) continental landmass about the size of India, located in the South Pacific Ocean off the eastern coast of Australia and generally centered on New Zealand. The extent of Zealandia is shown in the following map.

Source: N. Mortimer, et al., “Zealandia: Earth’s Hidden Continent,” GSA Today

The authors explain:

“A 4.9 Mkm2 region of the southwest Pacific Ocean is made up of continental crust. The region has elevated bathymetry relative to surrounding oceanic crust, diverse and silica-rich rocks, and relatively thick and low-velocity crustal structure. Its isolation from Australia and large area support its definition as a continent—Zealandia. Zealandia was formerly part of (the ancient supercontinent) Gondwana. Today it is 94% submerged, mainly as a result of widespread Late Cretaceous crustal thinning preceding supercontinent breakup and consequent isostatic balance. The identification of Zealandia as a geological continent, rather than a collection of continental islands, fragments, and slices, more correctly represents the geology of this part of Earth. Zealandia provides a fresh context in which to investigate processes of continental rifting, thinning, and breakup.”

The authors claim that Zealandia is the seventh largest continental landmass, as well as the youngest and thinnest. While they also claim it is the “most submerged,” that claim may have been eclipsed by the discovery of another continental landmass in the Indian Ocean.

You can read the complete paper on Zealandia on the Geological Society of America (GSA) website at the following link:

http://www.geosociety.org/gsatoday/archive/27/3/pdf/GSATG321A.1.pdf

Mauritia

In the February 2013 paper, “A Precambrian microcontinent in the Indian Ocean,” authors T. Torsvik, et al., noted that an arc of volcanic islands in the western Indian Ocean, stretching from the west coast of India to the east coast of Madagascar, had been thought to have formed over the Réunion mantle plume (a hotspot in the Earth’s crust) and then been dispersed by tectonic plate movement over the past 65 million years. Their analysis of ancient zircons, 660 million to 2 billion years old, found in beach sand led them to a different conclusion. The presence of the ancient zircons was inconsistent with the geology of the more recently formed volcanic islands and was evidence of “ancient fragments of continental lithosphere beneath Mauritius (that) were brought to the surface by plume-related lavas.”

The ages of the zircon samples were determined using U-Pb (uranium-lead) dating. This dating technique is particularly effective with zircons, which originally contain uranium and thorium, but no lead. The lead content of a present-day zircon is attributed to uranium and thorium radioactive decay that has occurred since the zircon was formed. The authors also used gravity data inversion (a technique to extract 3-D structural details from gravity survey data) to map crustal thicknesses in their areas of interest in the Indian Ocean.
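The arithmetic behind U-Pb dating is simple enough to sketch. The illustration below is my own sketch of the basic decay-law calculation, not the authors’ workflow, and the measured ratio in it is hypothetical:

```python
import math

# Decay constant of uranium-238 in 1/year; 238U decays through a chain
# to stable 206Pb.
LAMBDA_238 = 1.55125e-10

def u_pb_age(pb206_u238_ratio):
    """Age in years from a measured 206Pb/238U atomic ratio, assuming the
    zircon crystallized lead-free so all 206Pb is radiogenic."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_238

# A hypothetical measured ratio of 0.364 corresponds to a zircon
# roughly 2 billion years old.
age_years = u_pb_age(0.364)
```

The assumption that the crystal started with no lead is what makes zircon such a good U-Pb clock.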

The key results from this study were:

“…..Mauritius forms part of a contiguous block of anomalously thick crust that extends in an arc northwards to the Seychelles. Using plate tectonic reconstructions, we show that Mauritius and the adjacent Mascarene Plateau may overlie a Precambrian microcontinent that we call Mauritia.”

This paper is available for purchase on the Nature Geoscience website at the following link:

http://www.nature.com/ngeo/journal/v6/n3/full/ngeo1736.html

This ancient continent of Mauritia is better defined in the 2017 article, “Archaean zircons in Miocene oceanic hotspot rocks establish ancient continental crust beneath Mauritius,” by L. Ashwal, et al. The authors provide further evidence of this submerged continental landmass, the approximate extent of which is shown in the following map.

Source: L. Ashwal, et al., Nature Communications

The authors report:

“A fragment of continental crust has been postulated to underlie the young plume-related lavas of the Indian Ocean island of Mauritius based on the recovery of Proterozoic zircons from basaltic beach sands. Here we document the first U–Pb zircon ages recovered directly from 5.7 Ma (million year old) Mauritian trachytic rocks (a type of igneous volcanic rock). We identified concordant Archaean xenocrystic zircons ranging in age between 2.5 and 3.0 Ga (billion years old) within a trachyte plug that crosscuts Older Series plume-related basalts of Mauritius. Our results demonstrate the existence of ancient continental crust beneath Mauritius; based on the entire spectrum of U–Pb ages for old Mauritian zircons, we demonstrate that this ancient crust is of central-east Madagascar affinity, which is presently located ∼700 km west of Mauritius. This makes possible a detailed reconstruction of Mauritius and other Mauritian continental fragments, which once formed part of the ancient nucleus of Madagascar and southern India.”

The authors suggest that, starting about 85 million years ago, the formerly contiguous continental landmass of Mauritia was “fragmented into a ribbon-like configuration because of a series of mid-ocean ridge jumps” associated with various tectonic and volcanic events.

You can read the complete article on the Nature Communications website at the following link:

http://www.nature.com/articles/ncomms14086

Implications for the definition of maritime zones

The UN Convention on the Law of the Sea (UNCLOS) provides the basic framework whereby nations define their territorial sea, contiguous zone, and exclusive economic zone (EEZ). These maritime zones are depicted below.

Source: http://continentalshelf.gov/media/ECSposterDec2010.pdf

UNCLOS Article 76 defines the basis whereby a nation can claim sovereign rights over a continental shelf extending beyond the 200-nautical-mile limit of its EEZ by demonstrating an “extended continental shelf,” using one of two methods: formula lines or constraint lines. These options are defined below.

Source: http://continentalshelf.gov/media/ECSposterDec2010.pdf

You’ll find more details (than you ever wanted to know) in the paper, “A Practical Overview of Article 76 of the United Nations Convention on the Law of the Sea,” at the following link:

http://www.un.org/depts/los/nippon/unnff_programme_home/fellows_pages/fellows_papers/persand_0506_mauritius.pdf

New Zealand’s Article 76 application

New Zealand ratified UNCLOS in 1996 and undertook the Continental Shelf Project with the firm GNS Science “to identify submarine areas that are the prolongation of the New Zealand landmass”. New Zealand submitted an Article 76 application on 19 April 2006. Recommendations by the UN Commission on the Limits of the Continental Shelf (CLCS) were adopted on 22 August 2008. A UN summary of New Zealand’s application is available here:

http://www.un.org/depts/los/clcs_new/submissions_files/submission_nzl.htm

The detailed CLCS recommendations are available here:

http://www.un.org/depts/los/clcs_new/submissions_files/nzl06/nzl_summary_of_recommendations.pdf

Additional information in support of New Zealand’s application is available on the GNS Science website here:

https://www.gns.cri.nz/static/unclos/

Seychelles and Mauritius joint Article 76 application

The Republic of Seychelles ratified UNCLOS on 16 November 1994 and the Republic of Mauritius followed suit on 4 December 1994. On 1 December 2008, these countries jointly made an Article 76 application claiming continental shelf extensions in the region of the Mascarene Plateau. A UN summary of this joint application is available here:

http://www.un.org/depts/los/clcs_new/submissions_files/submission_musc.htm

The CLCS recommendations were adopted on 30 March 2011, and are available here:

http://www.un.org/depts/los/clcs_new/submissions_files/musc08/sms08_summary_recommendations.pdf

Implications for the future

The recent definitions of the mostly submerged continents of Zealandia and Mauritia greatly improve our understanding of how our planet evolved from a supercontinent in a global sea to the distributed landmasses in multiple oceans we know today.

Beyond the obvious scientific interest, improved knowledge of subsea geography and geology can give a nation the technical basis for claiming a continental shelf extension beyond its EEZ. The new data on Zealandia and Mauritia postdate the UNCLOS Article 76 applications by New Zealand, Seychelles and Mauritius, which have already been resolved. It will be interesting to see whether these nations use the new research findings to file new Article 76 applications with broader claims.

Stratospheric Tourism Coming Soon

Peter Lobner

On 31 May 1931 Professor Auguste Piccard and Paul Kipfer made the first balloon flight into the stratosphere in a pressurized gondola. These aeronauts reached an altitude of 51,777 ft (15,782 m) above Augsburg, Germany in the balloon named FNRS (Belgian National Foundation for Scientific Research). At that time, a state-of-the-art high-altitude balloon was made of relatively heavy rubberized fabric. Several nations made stratospheric balloon flights in the 1930s, with the U.S. National Geographic Society’s Explorer II setting an altitude record of 72,395 ft (22,065 m) on 11 November 1935.

After World War II, very large, lightweight, polyethylene plastic balloons were developed in the U.S. by Jean Piccard (Auguste Piccard’s twin brother) and Otto Winzen. These balloons were used primarily by the U.S. military to fly payloads to very high altitudes for a variety of research and other projects.

The Office of Naval Research (ONR) launched its first Project Skyhook balloon (a Piccard-Winzen balloon) on 25 September 1947, and launched more than 1,500 Skyhook balloons during the following decade. The first manned flight in a Skyhook balloon occurred in 1949.

The record for the highest unmanned balloon flight was set in 1972 by the Winzen Research Balloon, which achieved a record altitude of 170,000 ft (51,816 m) over Chico, CA.

USAF Project Man High & U.S. Navy Strato-Lab: 1956 – 1961

Manned stratospheric balloon flights became common in the 1950s and early 1960s under the U.S. Air Force’s Man High program and the U.S. Navy’s Strato-Lab program. One goal of these flights was to gather physiological data on humans in pressure suits exposed to near-space conditions at altitudes of about 20 miles (32.2 km) above the Earth. You’ll find an overview of these military programs at the following link:

http://www.space-unit.com/articles/manned_pioneer_flights_in_the_usa.pdf

Three Man High flights were conducted between June 1957 and October 1958. In August 1957, the Man High II balloon flight by Major David Simons reached the highest altitude of the program: 101,516 feet (30,942 m). The rather cramped Man High II gondola is shown in the following diagram.

Man High II gondola. Source: USAF.

The Man High II gondola is on display at the National Museum of the United States Air Force, Dayton, OH. You’ll find details on the Man High II mission at the following link:

http://stratocat.com.ar/fichas-e/1957/CBY-19570819.htm

Five Strato-Lab flights were made between August 1956 and May 1961, with some flights using a pressurized gondola and others an open, unpressurized gondola. The last mission, Strato-Lab High V, carrying Commander Malcolm Ross and scientist Victor Prather in an unpressurized gondola, reached a maximum altitude of 113,740 ft (34,668 m) on 4 May 1961. The main objective of this flight was to test the Navy’s Mark IV full-pressure flight suit.

Strato-Lab V open gondola. Source: stratocat.com

See the following link for details on Strato-Lab missions.

http://stratocat.com.ar/artics/stratolab-e.htm

USAF Project Excelsior: 1959 – 60

To study the effects of high-altitude bailout on pilots, the USAF conducted Project Excelsior in 1959 and 1960, with USAF Capt. Joseph Kittinger making all three Excelsior balloon flights. In the Excelsior III flight on 16 August 1960, Capt. Kittinger bailed out from the unpressurized gondola at an altitude of 102,800 feet (31,330 m) and was in free-fall for 4 minutes 36 seconds. Thanks to lessons learned on the previous Excelsior flights, a small drogue stabilized Kittinger’s free-fall, during which he reached a maximum vertical velocity of 614 mph (988 km/h) before slowing to a typical skydiving velocity of 110 – 120 mph (177 – 193 km/h) in the lower atmosphere. You’ll find Capt. Kittinger’s personal account of this record parachute jump at the following link:

http://news.nationalgeographic.com/news/2012/10/121008-joseph-kittinger-felix-baumgartner-skydive-science/

Project Stargazer: 1962

Capt. Kittinger and astronomer William White performed 18 hours of astronomical observations from the open gondola of the Stargazer balloon. The flight, conducted on 13 – 14 December 1962, reached a maximum altitude of 82,200 feet (25,100 m).

Red Bull Stratos: 2012

On 14 October 2012, Felix Baumgartner exited the Red Bull Stratos balloon gondola at 128,100 feet (39,045 m) and broke Joe Kittinger’s 52-year-old record for the highest parachute jump. Shortly after release, Baumgartner began gyrating uncontrollably due to asymmetric drag in the thin upper atmosphere and the lack of any means to stabilize his attitude until he reached denser air. During his perilous 4 minute 40 second free-fall to an altitude of about 8,200 ft (2,500 m), he went supersonic and reached a maximum vertical velocity of 833.9 mph (1,342.8 kph, Mach 1.263).
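The reported peak speed and Mach number can be sanity-checked with the ideal-gas speed of sound. This sketch (using standard air constants, not figures from the Red Bull Stratos report) recovers the ambient temperature implied by those two numbers, which comes out close to the roughly 217 K of the standard lower stratosphere:

```python
GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant for air, J/(kg*K)

v_max = 833.9 * 0.44704   # reported peak speed, mph converted to m/s
mach = 1.263              # reported peak Mach number

# Local speed of sound implied by the reported speed and Mach number.
a_local = v_max / mach

# Invert the ideal-gas relation a = sqrt(gamma * R * T) for temperature.
t_implied_kelvin = a_local**2 / (GAMMA * R_AIR)
```

The result, about 217 K (-56 °C), is consistent with the cold stratospheric air through which the fastest part of the fall occurred.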

You’ll find details on Baumgartner’s mission at the following link:

http://www.redbullstratos.com

You can read the 4 February 2013 Red Bull Stratos Summary Report here:

https://issuu.com/redbullstratos/docs/red_bull_stratos_summit_report_final_050213

Capt. Kittinger was an advisor to the Red Bull Stratos team. The gondola, Felix Baumgartner’s pressure suit and parachute are on display at the Smithsonian Air & Space Museum’s Udvar-Hazy Center in Chantilly, VA.

Red Bull Stratos gondola & pressure suit. Source: Smithsonian

Stratospheric Explorer: 2014

Baumgartner’s record was short-lived, being broken on 24 October 2014 when Alan Eustace jumped from the Stratospheric Explorer (StratEx) balloon at an altitude of 135,899 ft (41,422 m). Eustace used a drogue device to help maintain stability during free-fall before his main parachute opened. He fell 123,235 ft (37,623 m) with the drogue and reached a maximum vertical velocity of 822 mph (1,320 km/h), faster than the speed of sound. You can read an interview with Alan Eustace, including his thoughts on stratospheric balloon tourism, at the following link:

http://www.popsci.com/moonshot-man-why-googles-alan-eustace-set-new-free-fall-record

More information on this record-setting parachute jump is available at the following link:

http://www.space.com/34725-14-minutes-from-earth-supersonic-skydive.html

World View® Voyager

If you’re not ready to sign up for a passenger rocket flight, and the idea of bailing out of a balloon high in the stratosphere isn’t your cup of tea, then perhaps you’d consider a less stressful flight into the stratosphere in the pressurized gondola of the Voyager passenger balloon being developed by World View Enterprises, Inc. They describe an ascent in the Voyager passenger balloon as follows:

“With World View®, you’ll discover what it’s like to leave the surface of the Earth behind. Every tree, every building, even the mountains themselves become smaller and smaller as you gently and effortlessly rise above. The world becomes a natural collage of magnificent beauty, one you can only appreciate from space. Floating up more than 100,000 feet within the layers of the atmosphere, you will be safely and securely sailing at the very threshold of the heavens, skimming the edge of space for hours. The breathtaking view unfolds before you—our home planet suspended in the deep, beckoning cosmos. Your world view will be forever changed.”

You can view an animated video of such a flight at the following link:

https://vimeo.com/76082638

The following screenshots from this video show the very large balloon and the pressurized Voyager gondola, which is suspended beneath a pre-deployed parafoil parachute connected to the balloon. After reaching maximum altitude, the Voyager balloon will descend until appropriate conditions are met for releasing the parafoil and gondola, which will glide back to a predetermined landing point.

Source for five screenshots, above:  WorldView Enterprises, Inc.

In February 2017, World View opened a large facility at Spaceport Tucson to support its plans for developing and deploying unmanned balloons for a variety of missions as well as Voyager passenger balloons. World View announced plans to fly a test vehicle named Explorer from Spaceport Tucson in early 2018, with edge-of-space passenger flights by the end of the decade.

For more information on World View Enterprises and the Voyager stratosphere balloon, visit their website at the following link:

http://www.worldview.space/about/#overview

Significant Advances in the Use of Flow Cell Batteries

Peter Lobner

My 31 January 2015 post, “Flow Cell Battery Technology Being Tested as an Automotive Power Source,” addressed flow cell battery (also known as redox flow cell battery) technology being applied by the Swiss firm nanoFlowcell AG for use in automotive all-electric power plants. The operating principles of their nanoFlowcell® battery are discussed here:

http://emagazine.nanoflowcell.com/technology/the-redox-principle/

This flow cell battery doesn’t use rare or hard-to-recycle raw materials and is refueled by adding “bi-ION” aqueous electrolytes that are “neither toxic nor harmful to the environment and neither flammable nor explosive.” Water vapor is the only “exhaust gas” generated by a nanoFlowcell®.

The e-Sportlimousine and the QUANT FE cars successfully demonstrated a high-voltage electric power automotive application of nanoFlowcell® technology.

Since my 2015 post, flow cell batteries have not made significant inroads as an automotive power source; however, the firm, now named nanoFlowcell Holdings, remains the leader in automotive applications of this battery technology. You can get an update on their current low-voltage (48 volt) automotive flow cell battery technology and two very stylish cars, the QUANT 48VOLT and the QUANTiNO, at the following link:

https://www.nanoflowcell.com

QUANT 48VOLT. Source: nanoFlowcell Holdings.

QUANTiNO. Source: nanoFlowcell Holdings.

In contrast to most other electric car manufacturers, nanoFlowcell Holdings has adopted a low voltage (48 volt) electric power system for which it claims the following significant benefits.

“The intrinsic safety of the nanoFlowcell® means its poles can be touched without danger to life and limb. In contrast to conventional lithium-ion battery systems, there is no risk of an electric shock to road users or first responders even in the event of a serious accident. Thermal runaway, as can occur with lithium-ion batteries and lead to the vehicle catching fire, is not structurally possible with a nanoFlowcell® 48VOLT drive. The bi-ION electrolyte liquid – the liquid “fuel” of the nanoFlowcell® – is neither flammable nor explosive. Furthermore, the electrolyte solution is in no way harmful to health or the environment. Even in the worst-case scenario, no danger could possibly arise from either the nanoFlowcell® 48VOLT low-voltage drive or the bi-ION electrolyte solution.”

In comparison, the more conventional lithium-ion battery systems in the Tesla, Nissan Leaf and BMW i3 electric cars typically operate in the 355 – 375 volt range and the Toyota Mirai hydrogen fuel cell electric power system operates at about 650 volts.

In the high-performance QUANT 48VOLT “supercar,” the low-voltage application of flow cell technology delivers extreme performance [560 kW (751 hp), 300 km/h (186 mph) top speed] and commendable range [>1,000 kilometers (621 miles)]. The car’s four-wheel drive system comprises four 140 kW (188 hp), 45-phase, low-voltage motors and has been optimized to minimize the volume and weight of the power system relative to the previous high-voltage systems in the e-Sportlimousine and QUANT FE.

The smaller QUANTiNO is designed as a practical “everyday driver.” You can read about a 2016 road test in Switzerland, which covered 1,167 km (725 miles) without refueling, at the following link:

http://emagazine.nanoflowcell.com/technology/1167-kilometre-test-drive-in-the-quantino/

A version of the QUANTiNO without supercapacitors currently is being tested. In this version, the energy for the electric motors comes directly from the flow cell battery, without any buffer storage in between. These tests are intended to refine the battery management system (BMS) and demonstrate the practicality of an even simpler, but lower performance, 48-volt power system.

Both the QUANT 48VOLT and QUANTiNO were represented at the 2017 Geneva Auto Show.

QUANT 48VOLT (left) and QUANTiNO (right). Source: nanoFlowcell Holdings.

You can read more about these cars at this auto show at the following link:

http://emagazine.nanoflowcell.com/viewpoint/nanoflowcell-at-the-2017-geneva-international-motor-show/

I think the automotive applications of flow cell battery technology look very promising, particularly with the long driving range possible with these batteries, the low environmental impact of the electrolytes, and the inherent safety of the low-voltage power system. I wouldn’t mind having a QUANT 48VOLT or QUANTiNO in my garage, as long as I could refuel at the end of a long trip.

Electrical utility-scale applications of flow cell batteries

In my 4 March 2016 post, “Dispatchable Power from Energy Storage Systems Help Maintain Grid Stability,” I noted that dispatchable grid storage systems are needed because of the proliferation of grid-connected intermittent generators and the need for grid operators to manage grid stability regionally and across the nation. I also noted that battery storage is only one of several technologies available for grid-connected energy storage systems.

Flow cell battery technology has entered the market as a utility-scale energy storage / power system that offers some advantages over more conventional battery storage systems. Those competitors include the sodium-sulfur (NaS) battery system offered by Mitsubishi; the lithium-ion battery systems currently dominating this market, offered by GS Yuasa International Ltd. (system supplied by Mitsubishi), LG Chem, Tesla, and others; and the lithium iron phosphate (LiFePO4) battery system being tested in California’s GridSaver™ program. Flow cell battery advantages include:

  • Flow cell batteries have no “memory effect” and are capable of more than 10,000 “charge cycles”. In comparison, the lifetime of lead-acid batteries is about 500 charge cycles and lithium-ion battery lifetime is about 1,000 charge cycles. While a 1,000 charge cycle lifetime may be adequate for automotive applications, this relatively short battery lifetime will require an inordinate number of battery replacements during the operating lifetime of a utility-scale, grid-connected energy storage system.
  • The energy converter (the flow cell) and the energy storage medium (the electrolyte) are separate. The amount of energy stored is not dependent on the size of the battery cell, as it is for conventional battery systems. This allows better storage system scalability and optimization in terms of maximum power output (i.e., MW) vs. energy storage (i.e., MWh).
  • No risk of thermal runaway, as may occur in lithium-ion battery systems.
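The cycle-life argument can be made concrete with a back-of-envelope calculation. The sketch below assumes one full charge/discharge cycle per day and a 20-year station life; both are my assumptions for illustration, not figures from any vendor:

```python
import math

CYCLES_PER_YEAR = 365      # assumption: one full charge/discharge cycle per day
STATION_LIFE_YEARS = 20    # assumption: 20-year operating life for the station

def battery_sets_needed(cycle_life):
    """Number of battery sets consumed over the station's life at daily cycling."""
    total_cycles = CYCLES_PER_YEAR * STATION_LIFE_YEARS  # 7,300 cycles
    return math.ceil(total_cycles / cycle_life)

lead_acid = battery_sets_needed(500)      # lead-acid, ~500-cycle life
lithium_ion = battery_sets_needed(1000)   # lithium-ion, ~1,000-cycle life
flow_cell = battery_sets_needed(10000)    # flow cell, >10,000-cycle life
```

Under these assumptions a lead-acid plant would consume 15 battery sets and a lithium-ion plant 8, while a single flow cell battery would last the station’s entire life.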

The firm UniEnergy Technologies (UET) offers two modular energy storage systems based on flow cell battery technology: ReFlex and the much larger Uni.System™, which can be applied in utility-scale dispatchable power systems. UET describes the Uni.System™ as follows:

“Each Uni.System™ delivers 600kW power and 2.2MWh maximum energy in a compact footprint of only five 20’ containers. Designed to be modular, multiple Uni.System can be deployed and operated with a density of more than 20 MW per acre, and 40 MW per acre if the containers are double-stacked.”

One Uni.System™ module. Source: UET

You can read more on the Uni.System™ at the following link:

http://www.uetechnologies.com/products/unisystem
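The modularity claim is easy to illustrate with the quoted ratings (600 kW and 2.2 MWh per module). In this sizing sketch, the 6 MW / 24 MWh project target is a hypothetical example of mine, not a UET figure:

```python
import math

MODULE_POWER_KW = 600       # quoted: 600 kW power per Uni.System module
MODULE_ENERGY_KWH = 2200    # quoted: 2.2 MWh maximum energy per module

def modules_required(target_kw, target_kwh):
    """Smallest module count that satisfies both the power and energy targets."""
    return max(math.ceil(target_kw / MODULE_POWER_KW),
               math.ceil(target_kwh / MODULE_ENERGY_KWH))

# Hypothetical project: a 6 MW / 24 MWh substation installation.
n_modules = modules_required(6000, 24000)  # the energy target governs here
```

Because the flow cell decouples power (stack size) from energy (electrolyte volume), a real project could also adjust the per-module ratio rather than simply adding modules.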

The website Global Energy World reported that UET recently installed a 2 MW / 8 MWh vanadium flow battery system at a Snohomish Public Utility District (PUD) substation near Everett, Wash. This installation was one of five different energy storage projects awarded matching grants in 2014 through the state’s Clean Energy Fund. See the short article at the following link:

http://www.globalenergyworld.com/news/29516/Flow_Battery_Based_on_PNNL_Chemistry_Commissioned.htm

Source: Snohomish PUD

Snohomish PUD concurrently is operating a modular, smaller (1 MW / 0.5 MWh) lithium-ion battery energy storage installation. The PUD explains:

“The utility is managing its energy storage projects with an Energy Storage Optimizer (ESO), a software platform that runs in its control center and maximizes the economics of its projects by matching energy assets to the most valuable mix of options on a day-ahead, hour-ahead and real-time basis.”

You can read more about these Snohomish PUD energy storage systems at the following link:

http://www.snopud.com/PowerSupply/energystorage.ashx?p=2142

The designs of both Snohomish PUD systems are based on the Modular Energy Storage Architecture (MESA), which is described as “an open, non-proprietary set of specifications and standards developed by an industry consortium of electric utilities and technology suppliers. Through standardization, MESA accelerates interoperability, scalability, safety, quality, availability, and affordability in energy storage components and systems.” You’ll find more information on MESA standards here:

http://mesastandards.org

Application of the MESA standards should permit future system upgrades and module replacements as energy storage technologies mature.

Many LLNL Atmospheric Nuclear Test Videos Declassified

Peter Lobner

Lawrence Livermore National Laboratory (LLNL) has posted 64 declassified videos of nuclear weapons tests on YouTube. LLNL reports:

“The U.S. conducted 210 atmospheric nuclear tests between 1945 and 1962, with multiple cameras capturing each event at around 2,400 frames per second. But in the decades since, around 10,000 of these films sat idle, scattered across the country in high-security vaults. Not only were they gathering dust, the film material itself was slowly decomposing, bringing the data they contained to the brink of being lost forever.

For the past five years, Lawrence Livermore National Laboratory (LLNL) weapon physicist Greg Spriggs and a crack team of film experts, archivists and software developers have been on a mission to hunt down, scan, reanalyze and declassify these decomposing films. The goals are to preserve the films’ content before it’s lost forever, and provide better data to the post-testing-era scientists who use computer codes to help certify that the aging U.S. nuclear deterrent remains safe, secure and effective.”

Operation Hardtack-1 – Nutmeg 51538. Source: LLNL

Here’s the link:

https://www.youtube.com/playlist?list=PLvGO_dWo8VfcmG166wKRy5z-GlJ_OQND5

Update 7 July 2018:

LLNL has posted more than 250 declassified videos of nuclear weapons tests on YouTube.  The newly digitized videos document several of the U.S. government’s 210 nuclear weapons tests carried out between 1945 and 1962.  You’ll find these videos at the following link:

https://www.youtube.com/user/LivermoreLab/videos

The Event Horizon Telescope

Peter Lobner

The Event Horizon Telescope (EHT) is a huge synthetic array for Very Long Baseline Interferometry (VLBI), which is created through the collaboration of millimeter / submillimeter wave radio telescopes and arrays around the world. The goal of the EHT “is to directly observe the immediate environment of a black hole with angular resolution comparable to the event horizon.”

The primary target for observation is Sagittarius A* (Sgr A*), which is the massive black hole at the center of our Milky Way galaxy. This target is of particular interest to the EHT team because it “presents the largest apparent event horizon size of any black hole candidate in the Universe.” The Sgr A* event horizon is estimated to have a Schwarzschild radius of 12 million kilometers (7.46 million miles) or a diameter of 24 million km (14.9 million miles). The galactic core (and hence Sgr A*) is estimated to be 7.6 to 8.7 kiloparsecs (about 25,000 to 28,000 lightyears, or 1.47 to 1.64e+17 miles) from Earth. At that distance, the Sgr A* black hole subtends an angle of about 2e-5 arcseconds (20 microarcseconds).
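The quoted 20-microarcsecond figure follows directly from the small-angle formula. This sketch reproduces it from the event horizon diameter and an assumed 8 kpc distance (within the range quoted above):

```python
ARCSEC_PER_RAD = 206264.8   # arcseconds in one radian
KM_PER_KPC = 3.0857e16      # kilometers in one kiloparsec

def subtended_microarcsec(diameter_km, distance_kpc):
    """Small-angle apparent size of an object, in microarcseconds."""
    theta_rad = diameter_km / (distance_kpc * KM_PER_KPC)
    return theta_rad * ARCSEC_PER_RAD * 1e6

# Sgr A* event horizon: 24 million km across at an assumed 8 kpc distance.
sgr_a_size_uas = subtended_microarcsec(24.0e6, 8.0)  # ~20 microarcseconds
```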

Another EHT target of interest is a much more distant black hole in the Messier 87 (M87) galaxy.

The member arrays and telescopes supporting EHT are:

  • Arizona Radio Observatory /Submillimeter Wave Telescope (ARO/SMT, Arizona, USA)
  • Atacama Pathfinder EXperiment (APEX, Chile)
  • Atacama Submillimeter Telescope Experiment (ASTE, Chile)
  • Combined Array for Research in Millimeter-wave Astronomy (CARMA, California, USA)
  • Caltech Submillimeter Observatory (Hawaii, USA)
  • Institut de Radioastronomie Millimétrique (IRAM, Spain)
  • James Clerk Maxwell Telescope (JCMT, Hawaii)
  • Large Millimeter Telescope Alfonso Serrano (LMT, Mexico)
  • The Submillimeter Array (Hawaii, USA)

The following arrays and telescopes are expected to join the EHT collaboration:

  • Atacama Large Millimeter / submillimeter Array (ALMA, Chile)
  • Northern Extended Millimeter Array (NOEMA, France)
  • South Pole Telescope (SPT, Antarctica)

Collectively, the arrays and telescopes forming the EHT provide a synthetic aperture that is almost equal to the diameter of the Earth (12,742 km, 7,918 miles).

EHT array size. Source: graphics adapted by A. Cuadra / Science; data from Event Horizon Telescope

Technical improvements to the member telescopes and arrays are underway with the goal of systematically improving EHT performance. These improvements include development and deployment of:

  • Submillimeter dual-polarization receivers (energy content of cosmic radiation is split between two polarizations)
  • Highly stable frequency standards to enable VLBI at frequencies between 230 and 450 GHz (wavelengths of 1.3 mm – 0.6 mm).
  • Higher-bandwidth digital VLBI backends and recorders

In operations to date, EHT has been observing the Sgr A* and M87 black holes at 230 GHz (1.3 mm) with only some of the member arrays and telescopes participating. These observations have yielded angular resolutions of better than 60 microarcseconds. Significantly higher angular resolutions, up to about 15 microarcseconds, are expected from the mature EHT operating at higher observing frequencies and with longer baselines.

Coordinating observing time among all of the EHT members is a challenge, since participation in EHT is not a dedicated mission for any site. Site-specific weather also is a factor, since water vapor in the atmosphere absorbs radiation in the EHT observing frequency bands. The next observing opportunity is scheduled for 5 – 14 April 2017. Processing the data from this observing run will take time, so results are not expected until later this year.

For more information on EHT, see the 2 March 2017 article by Daniel Clery entitled, ”This global telescope may finally see the event horizon of our galaxy’s giant black hole,” at the following link:

http://www.sciencemag.org/news/2017/03/global-telescope-may-finally-see-event-horizon-our-galaxys-giant-black-hole?utm_campaign=news_daily_2017-03-02&et_rid=215579562&et_cid=1194555

Much more information is available on the EHT website at the following link:

http://www.eventhorizontelescope.org

Radio telescope resolution

An article on the Las Cumbres Observatory (LCO) website explains how the angular resolution of radio telescopes, including VLBI arrays, is determined. In this article, the author, D. Stuart Lowe, states that “an array of radio telescopes of 217 km in diameter can produce an image with a resolution equivalent to the Hubble Space Telescope.” You’ll find this article here:

https://lco.global/spacebook/radio-telescopes/

The Hubble Space Telescope has an angular resolution of 1/10th of an arcsecond (1e-1 arcsecond).

A VLBI array with the diameter of the Earth (1.27e+7 meters) operating in the EHT’s millimeter / submillimeter wavelength band (1.3e-3 to 6.0e-4 meters) has a theoretical angular resolution of 2.6e-5 to 1.2e-5 arcseconds (25 to 12 microarcseconds).
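These resolution figures follow from the Rayleigh diffraction limit, θ ≈ 1.22 λ/D, converted from radians to arcseconds. Here is a minimal sketch reproducing the numbers quoted above (the 1.22 coefficient and the Earth-diameter baseline are the only inputs):

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # ~206,265 arcseconds per radian

def angular_resolution_arcsec(wavelength_m, aperture_m):
    """Rayleigh diffraction limit, theta = 1.22 * lambda / D, in arcseconds."""
    return 1.22 * wavelength_m / aperture_m * RAD_TO_ARCSEC

earth_diameter_m = 1.27e7  # VLBI baseline roughly equal to Earth's diameter
for wavelength_m in (1.3e-3, 6.0e-4):
    theta = angular_resolution_arcsec(wavelength_m, earth_diameter_m)
    print(f"{wavelength_m * 1e3:.1f} mm -> {theta * 1e6:.0f} microarcseconds")
```

The two printed values land at roughly 26 and 12 microarcseconds, matching the 25 to 12 microarcsecond range quoted above to within rounding.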

EHT should be capable of meeting its goal of angular resolution comparable to a black hole’s event horizon.

X-ray observation of Sgr A*

Combining infrared images from the Hubble Space Telescope with images from the Chandra X-ray Observatory, NASA created the following composite image showing the galactic core in the vicinity of Sgr A*. NASA reports:

“The large image contains X-rays from Chandra in blue and infrared emission from the Hubble Space Telescope in red and yellow. The inset shows a close-up view of Sgr A* in X-rays only, covering a region half a light year wide. The diffuse X-ray emission is from hot gas captured by the black hole and being pulled inwards.”

This image gives you a perspective on the resolution of Sgr A* possible at X-ray frequencies with current equipment. EHT will have much higher resolution in its radio frequency bands.
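NASA's caption says the inset covers a region half a light-year wide. The small-angle formula shows that this region subtends only a few arcseconds on the sky. A minimal sketch, assuming a distance to Sgr A* of about 26,000 light-years (a round figure, not stated in the NASA caption):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

# Assumed distance to Sgr A* (round figure, not from the NASA caption).
distance_ly = 26_000.0
region_width_ly = 0.5          # width of the Chandra inset, per NASA

theta_arcsec = (region_width_ly / distance_ly) * ARCSEC_PER_RAD
print(f"Half a light-year at Sgr A* subtends ~{theta_arcsec:.1f} arcseconds")
```

Compare this few-arcsecond field of view with EHT's resolution of tens of microarcseconds, about 100,000 times finer.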

NASA Sgr A* composite image. Source: X-Ray: NASA/UMass/D.Wang et al., IR: NASA/STScI

More details on this image are available at the following NASA link:

https://www.nasa.gov/mission_pages/chandra/multimedia/black-hole-SagittariusA.html

Animation of Sgr A* effects on nearby stars

See my 24 January 2017 post, “The Black Hole at our Galactic Center is Revealed Through Animations,” for more information on how teams of astronomers are developing a better understanding of the unseen Sgr A* black hole through long-term observations of the relative motions of nearby stars that are under the influence of this black hole.  These observations have been captured in a very interesting animation.

The First Test of Standard and Holographic Cosmology Models Ends in a Draw

Peter Lobner

Utrecht University (Netherlands) Professor Gerard ’t Hooft was the first to propose the “holographic principle,” in which all information about a volume of space can be thought of as being encoded on a lower-dimensional “boundary” of that volume.

Stanford Professor Leonard Susskind was one of the founders of string theory and, in 1995, was the first to give a string theory interpretation of the holographic principle as applied to black holes. Dr. Susskind’s analysis showed that, consistent with quantum theory, information is not lost when matter falls into a black hole. Instead, it is encoded on a lower-dimensional “boundary” of the black hole, namely the event horizon.
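The holographic bookkeeping can be made concrete with the Bekenstein-Hawking formula, which counts the information capacity of a black hole by its horizon area measured in Planck-length squares. A minimal sketch (an illustration, not a calculation from the book or the papers discussed here) for a solar-mass black hole:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant
c = 2.998e8          # speed of light
hbar = 1.055e-34     # reduced Planck constant
M_SUN = 1.989e30     # solar mass, kg

def horizon_bits(mass_kg):
    """Bekenstein-Hawking entropy in bits: S = A / (4 * l_p^2 * ln 2),
    where A is the horizon area and l_p is the Planck length."""
    r_s = 2 * G * mass_kg / c**2          # Schwarzschild radius
    area = 4 * math.pi * r_s**2           # event horizon area
    l_p2 = hbar * G / c**3                # Planck length squared
    return area / (4 * l_p2 * math.log(2))

print(f"Solar-mass black hole: ~{horizon_bits(M_SUN):.1e} bits on the horizon")
```

The result, on the order of 10^77 bits, illustrates the key point: all of that information lives on the two-dimensional horizon, not in the three-dimensional interior.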

Black hole event horizon. Source: screenshot from video, “Is the Universe a Hologram?”

Extending the holographic principle to the universe as a whole, a lower-dimensional “boundary,” or “cosmic horizon,” around the universe can be thought of as a hologram of the universe. Quantum superposition suggests that this hologram is indistinguishable from the volume of space within the cosmic horizon.

You can see a short (15:49 minute) 2015 video interview of Dr. Susskind, “Is The Universe A Hologram?” at the following link:

https://www.youtube.com/watch?v=iNgIl-qIklU

If you have the time, also check out the longer (55:26) video lecture by Dr. Susskind entitled, “Leonard Susskind on The World As Hologram.” In this video, he explains the meaning of “information” and how information on an arbitrary volume of space can be encoded in one less dimension on a surface surrounding the volume.

https://www.youtube.com/watch?v=2DIl3Hfh9tY

You also might enjoy the more detailed story in Dr. Susskind’s 2008 book, “The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics.”

Leonard Susskind’s book cover. Source: Little, Brown and Company

In my 28 September 2016 post, “The Universe is Isotropic,” I reported on a conclusion reached by researchers using data from the Planck spacecraft’s all-sky survey of the cosmic microwave background (CMB). The researchers noted that an anisotropic universe would leave telltale patterns in the CMB. However, these researchers found that the actual CMB shows only random noise and no signs of such patterns.

More recently, a team of researchers from Canada, the UK and Italy, also using the Planck spacecraft’s CMB data set, has offered an alternative view that the universe may be a hologram. You’ll find the abstract for the 27 January 2017 original research paper by N. Afshordi, et al., “From Planck Data to Planck Era: Observational Tests of Holographic Cosmology,” in Physical Review Letters at the following link:

http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.118.041301

The authors note:

“We test a class of holographic models for the very early Universe against cosmological observations and find that they are competitive to the standard cold dark matter model with a cosmological constant (Λ CDM) of cosmology.”

“Competitive” means that neither model disproves the other.  So, we have a draw.

If you are a subscriber to Physical Review Letters, you can download the complete paper by N. Afshordi, et al. from the Physical Review Letters site.

5G Wireless Defined

Peter Lobner

In my 20 April 2016 post, “5G is Coming, Slowly,” I discussed the evolution of mobile communications technology and the prospects for the deployment of the next generation: 5G. The complexity of 5G service relative to current generation 4G (LTE) service is daunting because of rapidly increasing technical demands that greatly exceed LTE core capabilities. Examples of technical drivers for 5G include the population explosion in the Internet of Things (IoT), the near-term deployment of operational self-driving cars, and the rise of virtual and augmented reality mobile applications.

Progress toward 5G is steadily being made. Here’s a status update.

1. International Telecommunications Union (ITU) technical performance requirements

The ITU is responsible for international standardization of mobile communications technologies. On 23 February 2017, the ITU released a draft report containing their current consensus definition of the minimum technical performance requirements for 5G wireless (IMT-2020) radio service.

The ITU authors note:

“….the capabilities of IMT-2020 are identified, which aim to make IMT-2020 more flexible, reliable and secure than previous IMT when providing diverse services in the intended three usage scenarios, including enhanced mobile broadband (eMBB), ultra-reliable and low-latency communications (URLLC), and massive machine type communications (mMTC).”

This ITU’s draft technical performance requirements report is a preliminary document that is a product of the second stage of the ITU’s standardization process for 5G wireless deployment, which is illustrated below:

ITU IMT-2020 standardization roadmap. Source: ITU

The draft technical performance requirements report provides technical definitions and performance specifications in each of the following categories:

  • Peak data rate
  • Peak spectral efficiency (bits per hertz of spectrum)
  • User experience data rate
  • 5th percentile user spectral efficiency
  • Average spectral efficiency
  • Area traffic capacity
  • Latency
  • Connection density
  • Energy efficiency
  • Reliability
  • Mobility
  • Mobility interruption time
  • Bandwidth

You’ll find a good overview of the ITU’s draft performance requirements in an article by Sebastian Anthony entitled, “5G Specs Announced: 20 Gbps Download, 1 ms Latency, 1M Devices per Square km,” at the following link:

https://arstechnica.com/information-technology/2017/02/5g-imt-2020-specs/?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter
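The headline figures reported from the draft tie together through a simple relationship: peak data rate equals channel bandwidth times peak spectral efficiency. A minimal sketch, taking the widely reported draft downlink targets of 20 Gbit/s peak rate and 30 bit/s/Hz peak spectral efficiency as inputs:

```python
def required_bandwidth_hz(peak_rate_bps, spectral_eff_bps_per_hz):
    """Minimum channel bandwidth implied by a peak data rate and a
    peak spectral efficiency (rate = bandwidth * efficiency)."""
    return peak_rate_bps / spectral_eff_bps_per_hz

# Reported draft IMT-2020 downlink targets: 20 Gbit/s peak data rate,
# 30 bit/s/Hz peak spectral efficiency.
bw = required_bandwidth_hz(20e9, 30)
print(f"Implied minimum downlink bandwidth: {bw / 1e6:.0f} MHz")
```

The implied channel of several hundred MHz is far wider than typical LTE carriers, which is one reason 5G deployments are expected to reach into millimeter-wave spectrum.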

You can download the ITU’s draft report, entitled “DRAFT NEW REPORT ITU-R [IMT-2020 TECH PERF REQ] – Minimum requirements related to technical performance for IMT-2020 radio interface(s),” at the following link:

https://www.itu.int/md/R15-SG05-C-0040/en

In the ITU standardization process diagram, above, you can see that their final standardization documents will not be available until 2019 – 2020.

2. Industry 5G activities

Meanwhile, the wireless telecommunications industry isn’t waiting for the ITU to finalize IMT-2020 before developing and testing 5G technologies and making initial 5G deployments.

3rd Generation Partnership Project (3GPP)

In February 2017, the organization 5G Americas summarized the work by 3GPP as follows:

“As the name implies the IMT-2020 process is targeted to define requirements, accept technology proposals, evaluate the proposals and certify those that meet the IMT-2020 requirements, all by the 2020 timeframe. This however, requires that 3GPP start now on discussing technologies and system architectures that will be needed to meet the IMT-2020 requirements. 3GPP has done just that by defining a two phased 5G work program starting with study items in Rel-14 followed by two releases of normative specs spanning Rel-15 and Rel-16 with the goal being that Rel-16 includes everything needed to meet IMT-2020 requirements and that it will be completed in time for submission to the IMT-2020 process for certification.”

The 2016 3GPP timeline for development of technologies and system architectures for 5G is shown below.

3GPP roadmap, 2016. Source: 3GPP / 5G Americas White Paper

Details are presented in the 3GPP / 5G Americas white paper, “Wireless Technology Evolution Towards 5G: 3GPP Releases 13 to Release 15 and Beyond,” which you can download at the following link:

http://www.5gamericas.org/files/6814/8718/2308/3GPP_Rel_13_15_Final_to_Upload_2.14.17_AB.pdf

Additional details are in a February 2017 3GPP presentation, “Status and Progress on Mobile Critical Communications Standards,” which you can download here:

http://www.3gpp.org/ftp/Information/presentations/Presentations_2017/CCE-2017-3GPP-06.pdf

In this presentation, you’ll find the following diagram that illustrates the many functional components that will be part of 5G service. The “Future IMT” in the pyramid below is the ITU’s IMT-2020.

ITU 5G functions. Source: 3GPP presentation

AT&T and Verizon plan initial deployments of 5G technology

In November 2016, AT&T and Verizon indicated that their initial deployment of 5G technologies would be in fixed wireless broadband services. In this deployment concept, a 5G wireless cell would replace IEEE 802.11 wireless or wired routers in a small coverage area (i.e., a home or office) and connect to a wired / fiber terrestrial broadband system. Verizon CEO Lowell McAdam referred to this deployment concept as “wireless fiber.” You’ll find more information on these initial 5G deployment plans in the article, “Verizon and AT&T Prepare to Bring 5G to Market,” on the IEEE Spectrum website at the following link:

http://spectrum.ieee.org/telecom/wireless/verizon-and-att-prepare-to-bring-5g-to-market

Under Verizon’s current wireless network densification efforts, additional 4G nodes are being added to better support high-traffic areas. These nodes are closely spaced (likely 500 – 1,000 meters apart) and also may be able to support early demonstrations of a commercial 5G system.

Verizon officials previously have talked about an initial launch of 5G service in 2017, but also have cautioned investors that this may not occur until 2018.

DARPA Spectrum Collaboration Challenge 2 (SC2)

In my 6 June 2016 post, I reported on SC2, which eventually could benefit 5G service by:

“…developing a new wireless paradigm of collaborative, local, real-time decision-making where radio networks will autonomously collaborate and reason about how to share the RF (radio frequency) spectrum.”

SC2 is continuing into 2019. Fourteen teams have qualified for Phase 3 of the competition, culminating in the Spectrum Collaboration Challenge Championship Event on 23 October 2019, in conjunction with the 2019 Mobile World Congress in Los Angeles, CA. You can follow SC2 news here:

https://www.spectrumcollaborationchallenge.com/media/

If SC2 is successful and can be implemented commercially, it would enable more efficient use of the RF bandwidth assigned for use by 5G systems.

3. Conclusion

Verizon’s and AT&T’s plans for early deployment of a subset of 5G capabilities are symptomatic of an industry in which the individual players are trying hard to position themselves for a future commercial advantage as 5G moves into the mainstream of wireless communications. This commercial momentum is outpacing ITU’s schedule for completing IMT-2020. The recently released draft technical performance requirements provide a more concrete (interim) definition of 5G that should remove some uncertainty for the industry.

3 April 2019 Update:  Verizon became the first wireless carrier to deliver 5G service in the U.S.

Verizon reported that it turned on its 5G networks in parts of Chicago and Minneapolis today, becoming the first wireless carrier to deliver 5G service to customers with compatible wireless devices in selected urban areas.  Other U.S. wireless carriers, including AT&T, Sprint and T-Mobile US, have announced that they plan to start delivering 5G service later in 2019.

Perspective on the Detection of Gravitational Waves

Peter Lobner

On 14 September 2015, the U.S. Laser Interferometer Gravitational-Wave Observatory (LIGO) became the first observatory to detect gravitational waves. With two separate detector sites (Livingston, Louisiana, and Hanford, Washington) LIGO was able to define an area of space from which the gravitational waves, dubbed GW150914, are likely to have originated, but was not able to pinpoint the source of the waves. See my 11 February 2016 post, “NSF and LIGO Team Announce First Detection of Gravitational Waves,” for a summary of this milestone event.

You’ll find a good overview of the design and operation of LIGO and similar laser interferometer gravity wave detectors in the short (9:06) Veritasium video, “The Absurdity of Detecting Gravitational Waves,” at the following link:

https://www.youtube.com/watch?v=iphcyNWFD10

The LIGO team reports that the Advanced LIGO detector is optimized for “a range of frequencies from 30 Hz to several kHz, which covers the frequencies of gravitational waves emitted during the late inspiral, merger, and ringdown of stellar-mass binary black holes.”

First observing run (O1) of the Advanced LIGO detector

The LIGO team defines O1 as starting on 12 September 2015 and ending on 19 January 2016. During that period, the LIGO team reported that it had, “unambiguously identified two signals, GW150914 and GW151226, with a significance of greater than 5σ,” and also identified a third possible signal, LVT151012. The following figure shows the time evolution of the respective gravitational wave signals from when they enter the LIGO detectors’ sensitive band at 30 Hz.

LIGO gravitational wave signals. Source: B. P. Abbott et al., PHYS. REV. X 6, 041015 (2016)

The second detection of gravitational waves, GW151226, occurred on 26 December 2015. You’ll find the 16 June 2016 LIGO press release for this event at the following link:

https://www.ligo.caltech.edu/news/ligo20160615

At the following link, you can view a video showing a simulation of GW151226, starting at a frequency of 35 Hz and continuing through the last 55 gravitational-wave cycles before the binary black holes merge:

https://www.ligo.caltech.edu/video/ligo20160615v3

GW151226 simulation screenshot. Source: Max Planck Institute for Gravitational Physics / Simulating eXtreme Spacetimes (SXS) project

In their GW151226 press release, the LIGO team goes out on a limb and makes the following estimate:

“….we can now start to estimate the rate of black hole coalescences in the Universe based not on theory, but on real observations. Of course with just a few signals, our estimate has big uncertainties, but our best right now is somewhere between 9 and 240 binary black hole coalescences per cubic Gigaparsec per year, or about one every 10 years in a volume a trillion times the volume of the Milky Way galaxy!”
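That quoted rate range can be sanity-checked with back-of-envelope geometry. A minimal sketch, assuming a Milky Way disk of radius 50,000 light-years and thickness 1,000 light-years (round figures, not from the press release):

```python
import math

LY_PER_GPC = 3.262e9   # light-years per gigaparsec (1 pc = 3.2616 ly)

# Assumed Milky Way disk dimensions (round figures, not from the press
# release): radius 50,000 light-years, thickness 1,000 light-years.
r_gpc = 5.0e4 / LY_PER_GPC
h_gpc = 1.0e3 / LY_PER_GPC
mw_volume_gpc3 = math.pi * r_gpc**2 * h_gpc   # disk volume, Gpc^3

volume_gpc3 = 1e12 * mw_volume_gpc3           # a trillion Milky Way volumes
for rate_per_gpc3_yr in (9, 240):             # LIGO's quoted rate bounds
    years_between = 1.0 / (rate_per_gpc3_yr * volume_gpc3)
    print(f"{rate_per_gpc3_yr}/Gpc^3/yr -> one event every {years_between:.0f} years")
```

With these assumed dimensions, the high end of the rate range gives one event every couple of decades in that volume, within an order of magnitude of the quoted “about one every 10 years”; the exact figure is sensitive to the Milky Way volume one assumes.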

More details on the GW151226 detection are available in the paper “GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence,” at the following link:

https://dcc.ligo.org/public/0124/P151226/013/LIGO-P151226_Detection_of_GW151226.pdf

LIGO releases its data to the public. Analyses of the LIGO public data already are yielding puzzling results. In December 2016, researchers reported finding “echoes” in the gravitational wave signals detected by LIGO. If further analysis indicates that the “echoes” are real, they may indicate a breakdown of Einstein’s general theory of relativity at or near the “edge” of a black hole. You can read Zeeya Merali’s 9 December 2016 article, “LIGO black hole echoes hint at general relativity breakdown,” at the following link:

http://www.nature.com/news/ligo-black-hole-echoes-hint-at-general-relativity-breakdown-1.21135

Second observing run (O2) of the Advanced LIGO detector is in progress now

Following a 10-month period when they were off-line for modifications, the Advanced LIGO detectors returned to operation on 30 November 2016 with a 10% improvement in the sensitivity of their interferometers. The LIGO team intends to further improve this sensitivity by a factor of two during the next few years.

VIRGO will add the capability to triangulate the source of gravitational waves

In my 16 December 2015 post, “100th Anniversary of Einstein’s General Theory of Relativity and the Advent of a New Generation of Gravity Wave Detectors,” I reported on other international laser interferometer gravitational wave detectors. The LIGO team has established a close collaboration with their peers at the European Gravitational Observatory, which is located near Pisa, Italy. Their upgraded detector, VIRGO, in collaboration with the two LIGO detectors, is expected to provide the capability to triangulate gravitational wave sources. With better location information on the source of gravitational waves, other observatories can be promptly notified to join the search using other types of detectors (i.e., optical, infrared and radio telescopes).

VIRGO is expected to become operational in 2017, but technical problems, primarily with the mirror suspension system, may delay startup. You’ll find a 16 February 2017 article on the current status of VIRGO at the following link:

http://www.sciencemag.org/news/2017/02/european-gravitational-wave-detector-falters

Perspective on gravitational wave detection

Lyncean member Dave Groce recommends the excellent video interview of Caltech Professor Kip Thorne (one of the founders of LIGO) by Einstein biographer Walter Isaacson. This 2 November 2016 video provides a great perspective on LIGO’s first detection of gravitational waves and on the development of gravitational wave detection capabilities. You’ll find this long (51:52) but very worthwhile video at the following link:

https://www.youtube.com/watch?v=mDFF27Nr-EU

Dr. Thorne noted that, at the extremely high sensitivity of the Advanced LIGO detectors, we are beginning to see the effects of quantum fluctuations in “human sized objects,” in particular, the 40 kg (88.2 pound) mirrors in the LIGO interferometers. In each mirror, the center of mass (the average position of all the mass in the mirror) fluctuates due to quantum physics at just the level of the Advanced LIGO noise.

In the interview, Dr. Thorne also discusses several new observatories that will become available in the coming decades to expand the spectrum of gravitational waves that can be detected. These are shown in the following diagram.

Spectrum for gravitational wave detection. Source: screenshot from Kip Thorne / Walter Isaacson interview

  • LISA = Laser Interferometer Space Antenna
  • PTA = Pulsar Timing Array
  • CMB = Cosmic microwave background

See my 27 September 2016 post, “Space-based Gravity Wave Detection System to be Deployed by ESA,” for additional information on LISA.

Clearly, we’re just at the dawn of gravitational wave detection and analysis. With the advent of new and upgraded gravitational wave observatories during the next decade, there will be tremendous challenges in aligning theories with real data. Through this process, we’ll get a much better understanding of our Universe.

Long-duration Space Missions May Affect the Human Gut Microbiome

Peter Lobner

On 23 November 2016, Dr. Stanley Maloy gave the presentation, “Beneficial Microbes and Harmful Antibiotics,” (Talk #107) to the Lyncean Group. The focus of this presentation was on the nature of the human gut microbiome, its relationship to personal health, disruption of the gut microbiome by antibiotics and other causes, and how to restore a disrupted gut microbiome. You can find his presentation on the Past Meetings tab on the Lyncean home page or use the following direct link:

https://lynceans.org/talk-107-11232016/

In a story that’s related to Dr. Maloy’s presentation, a 3 February 2017 article by Megan Fellman entitled, “Changes in astronaut’s gut bacteria attributed to spaceflight,” provides the first results of a comparative analysis by Northwestern University researchers on changes in the gut microbiomes of NASA astronaut identical twins Scott and Mark Kelly. As part of a NASA experiment to examine the effects of long-duration space missions on humans, Scott Kelly was continuously in orbit on the International Space Station (ISS) for 340 days during 2015 – 2016, while Mark Kelly remained on Earth and served as the control subject.

Mark (left) and Scott Kelly (right). Source: NASA

The key points reported by Northwestern University researchers were:

  • There was a shift in the balance between the two dominant groups of bacteria (Firmicutes and Bacteroidetes) in Scott Kelly’s gastrointestinal (GI) tract when he was in space. The balance returned to pre-flight levels when Scott Kelly returned to Earth.
  • Fluctuations in the same bacterial groups were seen in Mark Kelly, the control on Earth, but the fluctuations were not as great as those seen in Scott Kelly in space.
  • The surprise finding was that an expected change in diversity of gut microbes (the number of different species) was not observed in Scott Kelly while in space.
  • “Right now, we do not see anything alarming….”

You can read the complete article on this Northwestern University research at the following link:

https://news.northwestern.edu/stories/2017/february/change-in-astronauts-gut-bacteria-attributed-to-spaceflight/

So far, it looks like the human gut microbiome may not be a limiting factor in long-duration spaceflight.

Architect David Fisher’s Dynamic Skyscraper

Peter Lobner

David Fisher is the leading proponent of dynamic architecture and the inventor of the shape-changing dynamic skyscraper. The shape-changing feature clearly differentiates the dynamic skyscraper from earlier symmetrical rotating high-rise buildings like Suite Vollard, the first rotating high-rise building, a unique residential building that opened in 2001 in Brazil.

David Fisher. Source: costruzionipallotta.it

GE.DI Group

The GE.DI Group is an Italian construction firm that has become a leading proponent of new construction systems, including David Fisher’s dynamic architecture.

“GE.DI. Group (GEstione DInamica stand for Dynamic Management) in 2008, decided to embark on a new era of architecture: the Dynamic Architecture, a project of the architect David Fisher for rotating towers, continually evolving: dynamic, ecological, made with industrial systems.”

“The revolution of Fisher put an end to the era of the static and immutable architecture and it inaugurates a new one, at the sign of the dynamism and the lifestyle. These buildings will become the symbol of a new philosophy that will change the image of our cities and the concept of living.”

More information on GE.DI Group is available at the following link:

http://www.costruzionipallotta.it/dynamic_architecture_en.htm

Concept of a Dynamic Skyscraper

Shape-changing rotating skyscraper concept. Source: costruzionipallotta.it

Three unique features of the dynamic skyscraper are:

1. Building exterior shape changes continuously: Each floor can rotate slowly through 360 degrees independently of the other floors, with control over speed and direction of rotation. Coordination of the rotating floors to produce the artistic building shapes shown above may not be implemented in some applications. Nonetheless, the building’s exterior shape now has a fourth dimension: time. The artistic possibilities of the dynamic skyscraper are shown (in time lapse) in the following 2011 video.

https://www.youtube.com/watch?v=QR2HukuFkQo

2. Prefabricated construction, except for the reinforced concrete core: After the reinforced concrete core has been completed and building services have been installed inside the core, factory manufactured prefabricated units will be transported to the construction site completely finished and will be hung from the central core. Connecting each rotating floor to electrical and water services in the stationary core will be an interesting engineering challenge. The extensive use of prefabricated construction (about 85% of total construction) greatly reduces site labor requirements, construction environmental impacts, and overall construction time. Read more on plans for prefabrication at the following link:

http://www.costruzionipallotta.it/prefabrication.htm

Assembly plan for a dynamic skyscraper. Source: costruzionipallotta.it

Prefabricated modules being lifted into place. Source: costruzionipallotta.it

3. Generates its own electric power: Horizontal wind turbine generators installed in the approximately two-foot gap between the rotating floors will be the building’s primary source of power. Roof-mounted solar panels on each floor also will be employed. Surplus power will be delivered to the grid; the building is expected to generate enough power to supply about five similarly sized buildings in the vicinity. Read more on the energy generating and energy saving features of the dynamic skyscraper at the following link:

http://www.costruzionipallotta.it/green_building.htm
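The source doesn’t publish turbine specifications, but the standard wind-power formula, P = ½ ρ A v³ Cp, bounds what one inter-floor gap could extract. A minimal sketch in which the swept area, wind speed, and power coefficient are all assumed round figures (not from the source); the developer’s much larger output claims would require different assumptions about turbine design and winds at altitude:

```python
def wind_power_w(area_m2, wind_speed_ms, rho=1.225, cp=0.35):
    """Extractable wind power: P = 0.5 * rho * A * v^3 * Cp,
    with air density rho and power coefficient Cp."""
    return 0.5 * rho * area_m2 * wind_speed_ms**3 * cp

# Assumed round figures (not from the article): a ~0.6 m gap spanning a
# 30 m building width gives ~18 m^2 of swept area; 10 m/s wind; Cp = 0.35.
p = wind_power_w(18.0, 10.0)
print(f"~{p / 1e3:.1f} kW per inter-floor gap at 10 m/s")
```

Because power scales with the cube of wind speed, doubling the assumed wind speed would multiply this estimate by eight.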

Wind turbine installation. Source: costruzionipallotta.it

The first dynamic skyscraper may be built in Dubai

In a 14 February 2017 article entitled, “Dubai Will Have the World’s First Rotating Skyscraper by 2020,” Madison Margolin reported on the prospects for an 80-story mixed-use (office, hotel, residential) rotating skyscraper in Dubai. You can read the complete article on the Motherboard website at the following link:

https://motherboard.vice.com/en_us/article/dubai-will-have-the-worlds-first-rotating-skyscraper-by-2020?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter

Da Vinci rotating tower, Dubai. Source: http://www.slideshare.net/swapnika15/dynamic-da-vincirotating-tower

Each floor of the 420 meter (1,378 ft.) Da Vinci Tower will consist of 40 factory-built modules hung from the load-bearing 22-meter (72.2 ft.) diameter reinforced concrete core. Each module will be cantilevered up to 15 meters (49.2 ft.) from the core.
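Those two dimensions imply the overall floor-plate geometry. A minimal sketch treating each floor as a full annulus from the core wall out to the maximum cantilever (an idealization; the actual module shapes will differ):

```python
import math

core_diameter_m = 22.0      # load-bearing core diameter (from the article)
cantilever_m = 15.0         # maximum module cantilever (from the article)

r_inner = core_diameter_m / 2
r_outer = r_inner + cantilever_m
floor_diameter = 2 * r_outer
annulus_area = math.pi * (r_outer**2 - r_inner**2)   # floor area outside the core

print(f"Overall floor diameter: {floor_diameter:.0f} m")
print(f"Idealized annular floor area: {annulus_area:.0f} m^2")
```

This gives a floor plate roughly 52 m across, with around 1,700 m² of area per floor outside the core, to be divided among the 40 prefabricated modules.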

Cantilevered rotating floors. Source: costruzionipallotta.it

The lower retail / office floors of the Da Vinci Tower will not rotate. The upper hotel and residential floors will rotate and each will require about 4 kW of power to rotate. Each residential floor can be configured into several individual apartments or a single “villa.” You’ll find a concept for a “luxury penthouse villa” at the following link:

http://www.costruzionipallotta.it/lifestyle.htm

You’ll find more details on the Da Vinci Tower in a slideshow at the following link:

http://www.slideshare.net/swapnika15/dynamic-da-vincirotating-tower

If it is built, the Da Vinci Tower will be the world’s first dynamic skyscraper. It also will be David Fisher’s first skyscraper.