Farewell Magnox: 1956 – 2015

Peter Lobner

Magnox reactors were the first generation of nuclear power plants developed in the UK. They were CO2-cooled, graphite-moderated power reactors with natural uranium metal fuel. The name Magnox refers to the magnesium-aluminum ("magnesium non-oxidizing") alloy cladding on the metal fuel rods.

The first two Magnox sites, Calder Hall and Chapelcross, each had four dual-purpose reactors primarily intended to produce nuclear material for the UK nuclear weapons program, with the secondary role of generating electricity for the UK’s national grid. The first unit at Calder Hall went critical in May 1956 and, on 27 August 1956, became the first UK nuclear power plant to connect to the national power grid. All subsequent Magnox plants were larger, two-unit commercial nuclear power plants. The UK’s fleet of Magnox plants reached a total of 26 units at 11 sites. On 30 December 2015, the final Magnox plant, Wylfa unit 1, ceased generation after operating for 44 years. This milestone ended 59 years of Magnox reactor operation in the UK.

The only remaining CO2-cooled, graphite-moderated commercial power reactors in the world are the UK’s Advanced Gas-cooled Reactors (AGRs). Other commercial operators of CO2-cooled, graphite-moderated reactors have retired their units: Italy in 1987, Spain in 1990, France in 1994, and Japan in 1998. North Korea operates a small CO2-cooled, graphite-moderated reactor that likely has been used for nuclear material production.

Following is a brief overview of this pioneering reactor type.

Overview of Magnox reactors:

A Magnox reactor has a large reactor core that operates at low power density (< 1 kW/liter) and relatively low temperatures, which enabled the use of uranium metal fuel and Magnox cladding. The relatively low operating pressure (typically 130 – 150 psig) of the primary circuit enabled the use of mild steel for the primary pressure boundary.
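To get a sense of the core size implied by that power density, here is a rough Python sizing sketch. The thermal rating and exact power density are round-number assumptions (actual Magnox ratings varied by station), and the PWR comparison value of roughly 100 kW/liter is typical for that reactor type:

```python
# Rough sense of the core size implied by "< 1 kW/liter".
# Assumed round numbers; actual Magnox ratings varied by station.
core_power_mwt = 500            # assumed reactor thermal power, MWt
power_density_kw_per_l = 0.8    # assumed, consistent with "< 1 kW/liter"

magnox_core_volume_m3 = core_power_mwt * 1000 / power_density_kw_per_l / 1000
print(f"Magnox core volume: ~{magnox_core_volume_m3:.0f} m^3")   # ~625 m^3

# A PWR core runs at roughly 100 kW/liter, so a core of the same thermal
# power would occupy only a few cubic meters.
pwr_core_volume_m3 = core_power_mwt * 1000 / 100 / 1000
print(f"Comparable PWR core volume: ~{pwr_core_volume_m3:.0f} m^3")   # ~5 m^3
```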

Here’s a comparison of some key parameters for the early Bradwell Magnox reactor and the early U.S. pressurized water reactor (PWR) of similar vintage, Yankee.

Magnox-PWR comparison

The basic gas and fluid flow paths in the earlier Magnox plants are shown in the following diagram, which shows one of four steam generators. In the closed-loop steel primary circuit, forced circulation of CO2 transfers heat from the reactor core to the steam generators, which in turn transfer heat into the secondary circuit. In the closed-loop secondary circuit, water delivered to the steam generators is heated and converted to steam, which drives turbine generators to produce electricity. The steam exhausted from the turbines is condensed in the main condensers and returned to the steam generators. An open- or closed-loop circulating water system transfers waste heat from the main condensers to a heat sink (i.e., cooling towers or a body of water).

Magnox reactor 1_IEE adapted

Image credit: adapted from The Institution of Electrical Engineers, London, ISBN 0 85296 581 8
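The size of the gas circulation task in the primary circuit can be estimated from a simple heat balance, Q = (mass flow) × (specific heat) × (temperature rise). Here is a minimal Python sketch with assumed round numbers for a generic Magnox unit, not data for any particular plant:

```python
# Illustrative CO2 primary-circuit heat balance; all values are assumptions.
q_mwt = 500.0        # assumed reactor thermal power, MWt
cp_co2 = 1.0         # assumed CO2 specific heat, kJ/(kg*K), at operating conditions
delta_t = 200.0      # assumed core gas temperature rise, K

m_dot = q_mwt * 1000.0 / (cp_co2 * delta_t)   # kg/s
print(f"Required CO2 circulation: ~{m_dot:.0f} kg/s")   # ~2,500 kg/s
```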

The first two 4-unit Magnox sites, Calder Hall and Chapelcross, were dual-use sites producing nuclear material for the military and electric power for the UK power grid. Calder Hall was the world’s first nuclear power plant capable of delivering “industrial scale” electric power (initially 35 MWe net per reactor, 140 MWe total), which far exceeded the generating capacities of the two nuclear plants that previously had connected to their local grids in Russia (Obninsk, 27 June 1954, 6 MWe) and the USA (Borax III, 17 July 1955, 500 kWe).

Calder Hall operated from 1956 to 2003 and produced weapons-grade plutonium until about 1995, when the UK government announced that the production of plutonium for weapons purposes had ceased. Chapelcross operated from 1959 to 2004. Two Chapelcross units produced tritium for the UK nuclear weapons program and required enriched uranium fuel.

The first two 2-unit Magnox commercial power stations were Berkeley and Bradwell, which were followed by seven more 2-unit Magnox stations in the UK. The physical arrangement of Magnox plants varied significantly from plant to plant, as designers revised gas circuit designs, refueling schemes (top or bottom refueling), and other features. The following diagrams show the differences between the Hinkley Point and later Sizewell gas circuits.

Magnox gas circuit-Hinkley Point

Hinkley Point with separate CO2 gas blower and steam generator.  Image credit: Nuclear Engineering, April 1965

Magnox gas circuit-Sizewell

Sizewell with CO2 gas blower integral with the steam generator. Image credit: Nuclear Engineering, April 1965

In the last two Magnox plants, Oldbury and Wylfa, the steel primary circuit pressure vessel, piping and external steam generators were replaced by an integral primary circuit housed inside a prestressed concrete reactor vessel (PCRV) with integral steam generators. The Oldbury PCRV was cylindrical and Wylfa’s was spherical. The physical arrangement for Wylfa’s primary circuit is shown in the following diagram, which shows a CO2 blower drive unit outside the PCRV.

Magnox gas circuit - Wylfa

Wylfa integral primary circuit.  Image credit: Nuclear Engineering, April 1965

The basic gas and fluid flow paths in the Oldbury and Wylfa Magnox plants are shown in the following diagram. Note the steam generator modules surrounding the reactor core inside the PCRV.

Magnox reactor 2_IEE adapted

Image credit: adapted from The Institution of Electrical Engineers, London, ISBN 0 85296 581 8

A generic issue for all Magnox plants was the corrosion of mild steel components by the high temperature CO2 coolant. To manage this issue, the average core outlet gas temperature was reduced from the original design temperature of 414 °C to 360 – 380 °C, with a corresponding decrease in net power output and thermal efficiency.

None of the Magnox reactors are enclosed in a pressure-retaining containment building, as is common practice for most other types of power reactors. In the early Magnox plants, only the reactor was inside an industrial-style building, while the steam generators and parts of the primary circuit piping were outside the building, as shown in the following diagram of a single unit at Calder Hall. The steam generators were enclosed in later plants, primarily to protect them from the weather.

Calder Hall

Image source: NKS-2, ISBN 87-7893-050-2

Accident conditions in a Magnox reactor are very different from those in a water-cooled reactor. Magnox reactors do not encounter coolant phase change during an accident or have a risk of the core becoming “uncovered” because of a loss of coolant through a breach in the primary circuit. The low core power density, the large heat capacity of the graphite moderator, and the availability of natural circulation flow paths for core cooling limit post-accident core temperatures. On this basis, Magnox reactors were permitted to operate with three barriers to the release of fission products to the atmosphere: the metal fuel matrix, the Magnox fuel cladding, and the mild steel primary circuit pressure boundary.

Export Magnox plants:

The following two single-unit Magnox nuclear power plants were exported to Italy and Japan.

Latina:       Operated from 1963 to 1987; originally rated at 210 MWe, derated to 160 MWe

Tōkai 1:     Operated from 1966 to 1998; 166 MWe

UK’s successor to the Magnox:

The UK’s second generation commercial power reactor is the Advanced Gas-cooled Reactor (AGR), which is a more advanced, higher-temperature, CO2-cooled, graphite moderated reactor with stainless steel clad, enriched UO2 fuel. The UK’s fleet of AGRs totals 14 units at 6 sites. All are currently operating. Retirement of the oldest units is expected to start in about 2023.

French CO2-cooled, graphite moderated reactors:

In the early 1950s, the UK and France arrived at the same basic fuel / coolant / moderator selection for their first generation power reactors:

  • Natural uranium metal fuel
  • CO2 coolant
  • Graphite moderator

This choice was driven largely by the desire for nuclear independence and the unavailability of enriched uranium, which the U.S. refused to export, and heavy water (moderator), which was not available in significant quantities.

In France, these reactors, known as UNGG (Uranium Naturel Graphite Gaz), were developed independently of the UK’s Magnox reactors. All UNGGs used magnesium-zirconium alloy fuel cladding instead of the Magnox magnesium-aluminum alloy.

The first UNGGs were the dual-use Marcoule G2 and G3 reactors, which produced nuclear material for the French nuclear weapons programs and also had a net electric power output of about 27 MWe. The horizontal reactor core was housed in a steel-lined PCRV with external steam generators.

Initial criticality of Marcoule G2 occurred on 21 July 1958, and it first generated electricity in April 1959. Marcoule G3 began operation in 1960. G2 was retired in 1980 and G3 in 1984.

Electricité de France (EDF) built six larger UNGG commercial nuclear power plants in three basic configurations:

  • The early Chinon A1 and A2 plants had a vertical reactor core in a steel primary circuit. These plants had net electrical outputs of 70 MWe (A1) and 200 MWe (A2). A1 operated for 10 years from 1963 to 1973. A2 operated longer, from 1965 to 1985.
  • The later 480 MWe Chinon A3 plant adopted a different design, with a vertical reactor core in a PCRV with external steam generators. A3 operated for 24 years, from 1966 to 1990.
  • The 480 – 515 MWe A1 and A2 plants at Saint Laurent-des-Eaux and the 540 MWe Bugey 1 plant adopted a more advanced and compact design, with an integral primary circuit in a steel-lined PCRV. Unlike the Wylfa Magnox plant, the steam generators were placed under the reactor core in a tall PCRV, as shown in the following diagram. This basic arrangement is similar to the U.S. Fort St. Vrain helium-cooled high-temperature gas-cooled reactor (HTGR) built in the 1970s. Saint Laurent A1 operated from 1969 to 1990, A2 from 1971 to 1994, and Bugey 1 from 1972 to 1994.

St Laurent gas circuit adapted

Image credit: adapted from Nuclear Engineering, Feb 1968

Export UNGG:

France exported to Spain one UNGG similar to the Saint Laurent-des-Eaux plant. This became the 508 MWe Vandellos unit 1 power plant, which operated from 1972 to 1990.

French successor to the UNGG:

After Bugey 1, France abandoned gas-cooled reactor technology for commercial nuclear power plants. Pressurized water reactor (PWR) technology was chosen for the next generation of French commercial power reactors: the CP0 900 MW PWR.

North Korean CO2-cooled, graphite moderated reactor:

North Korea’s Yongbyon nuclear plant is a CO2-cooled, graphite-moderated reactor with natural uranium fuel. This is a logical choice of reactor type because natural uranium and graphite are domestically available in North Korea. Yongbyon is believed to be a dual-use production reactor that is modeled after the UK’s Calder Hall Magnox reactor.

Yongbyon is believed to have a thermal power in the 20 – 25 MWt range and a net electrical output of about 5 MWe. Reactor operation began in 1986. Operation was halted in 2007, and the cooling tower was demolished in 2008 to comply with an international agreement aimed at preventing North Korean production of nuclear material. On 2 April 2013, North Korea announced it would restart Yongbyon. NTI reported that Yongbyon has been operating since September 2013. See details at the following link:

http://www.nti.org/learn/facilities/766/

Fukushima Daiichi Current Status and Lessons Learned

Peter Lobner

The International Atomic Energy Agency (IAEA) presents a great volume of information related to the 11 March 2011 Fukushima Daiichi accident and the current status of planning and recovery actions on their website at the following link:

https://www.iaea.org/newscenter/focus/fukushima

From this web page, you can navigate to many resources, including: Fukushima Daiichi Status Updates, 6 September 2013 – Present. Here is the direct link to the status updates:

https://www.iaea.org/newscenter/focus/fukushima/status-update

The IAEA’s voluminous 2015 report, The Fukushima Daiichi Accident, consists of the Report by the IAEA Director General and five technical volumes. The IAEA states that this report is the result of an extensive international collaborative effort involving five working groups with about 180 experts from 42 Member States with and without nuclear power programs and several international bodies. It provides a description of the accident and its causes, evolution and consequences based on the evaluation of data and information from a large number of sources.

IAEA Fukushima  Source: IAEA

You can download all or part of this report and its technical annexes at the following link to the IAEA website:

http://www-pub.iaea.org/books/IAEABooks/10962/The-Fukushima-Daiichi-Accident

There have been many reports on the Fukushima Daiichi accident and lessons learned. A few of the more recent notable documents are identified briefly below along with the web links from which you can download these documents.

Japan’s Nuclear Regulatory Authority (NRA):

A summary of the NRA’s perspective on Fukushima accident and lessons learned is the subject of the March 2014 presentation, “Lessons Learned from the Fukushima Dai-ichi Accident and Responses in New Regulatory Requirements.” You can download this presentation at the following link:

http://www-pub.iaea.org/iaeameetings/cn233p/OpeningSession/1Fuketa.pdf

National Academy of Sciences:

The U.S. Congress asked the National Academy of Sciences to conduct a technical study on lessons learned from the Fukushima Daiichi accident for improving safety and security of commercial nuclear power plants in the U.S. This study was carried out in two phases. The Phase 1 report, Lessons Learned from the Fukushima Nuclear Accident for Improving Safety of U.S. Nuclear Plants, was issued in 2014, and focused on the causes of the Fukushima Daiichi accident and safety-related lessons learned for improving nuclear plant systems, operations, and regulations exclusive of spent fuel storage.

NAP Fukushima Phase 1  Source: NAP

If you have a MyNAP account, you can download the Phase 1 report at the following link to the National Academies Press website:

http://www.nap.edu/catalog/18294/lessons-learned-from-the-fukushima-nuclear-accident-for-improving-safety-of-us-nuclear-plants

The Phase 2 report, Lessons Learned from the Fukushima Accident for Improving Safety and Security of U.S. Nuclear Plants: Phase 2, recently issued in 2016, focuses on three issues: (1) lessons learned from the accident for nuclear plant security, (2) lessons learned for spent fuel storage, and (3) reevaluation of conclusions from previous Academies studies on spent fuel storage.

NAP Fukushima Phase 2  Source: NAP

If you have a MyNAP account, you can download the Phase 2 report at the following link:

http://www.nap.edu/catalog/21874/lessons-learned-from-the-fukushima-nuclear-accident-for-improving-safety-and-security-of-us-nuclear-plants

U.S. Nuclear Regulatory Commission (NRC):

A summary of the U.S. NRC’s response to the Fukushima accident is contained in the May 2014 presentation, “NRC Update, Fukushima Lessons Learned.” You can download this presentation at the following link:

http://nnsa.energy.gov/sites/default/files/nnsa/07-14-multiplefiles/May%2013%20-%208_LAUREN%20GIBSON%20NRC%20Update%20-%20Fukushima%20Lessons%20Learned.pdf

U.S. Energy Information Administration’s (EIA) Early Release of a Summary of its Annual Energy Outlook (AEO) Provides a Disturbing View of Our Nation’s Energy Future

Peter Lobner

Each year, the EIA issues an Annual Energy Outlook that provides recent-year energy industry data and projections for future years. The 2016 AEO includes actual data for 2014 and 2015, and projections to 2040. These data include:

  • Total energy supply, disposition, and demand
  • Energy consumption by sector and source
  • Energy prices by sector and source
  • Key indicators and consumption by sector (Residential, Commercial, Industrial, Transportation)
  • Electricity supply, disposition, prices and emissions
  • Electricity generating capacity
  • Electricity trade

On 17 May 2016, EIA released a PowerPoint summary of AEO2016 along with the data tables used in this Outlook. The full version of AEO2016 is scheduled for release on 7 July 2016.

You can download EIA’s Early Release PowerPoint summary and any of the data tables at the following link:

http://www.eia.gov/forecasts/aeo/er/index.cfm

EIA explains that this Summary features two cases: the Reference case and a case excluding implementation of the Clean Power Plan (CPP).

  • Reference case: A business-as-usual trend estimate, given known technology and technological and demographic trends. The Reference case assumes Clean Power Plan (CPP) compliance through mass-based standards (emissions reduction in metric tons of carbon dioxide) modeled using allowances with cooperation across states at the regional level, with all allowance revenues rebated to ratepayers.
  • No CPP case: A business-as-usual trend estimate, but assumes that CPP is not implemented.

You can find a good industry assessment of the AEO2016 Summary on the Global Energy World website at the following link:

http://www.globalenergyworld.com/news/24141/Obama_Administration_s_Electricity_Policies_Follow_the_Failed_European_Model.htm

A related EIA document that is worth reviewing is Assumptions to the Annual Energy Outlook 2015, which you will find at the following link:

http://www.eia.gov/forecasts/aeo/assumptions/

This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in AEO2015. A 2016 edition of Assumptions is not yet available. The functional organization of NEMS is shown below.

EIA NEMS

The renewable fuels module in NEMS addresses solar (thermal and photovoltaic), wind (on-shore and off-shore), geothermal, biomass, landfill gas, and conventional hydroelectric.

The predominant renewable sources are solar and wind, both of which are intermittent sources of electric power generation. Except for the following statements, the EIA assumptions are silent on the matter of energy storage systems that will be needed to manage electric power quality and grid stability as the projected use of intermittent renewable generators grows.

  • All technologies except for storage, intermittents and distributed generation can be used to meet spinning reserves
  • The representative solar thermal technology assumed for cost estimation is a 100-megawatt central-receiver tower without integrated energy storage
  • Pumped storage hydroelectric, considered a nonrenewable storage medium for fossil and nuclear power, is not included in the supply

In my 4 March 2016 post, “Dispatchable Power from Energy Storage Systems Help Maintain Grid Stability,” I addressed the growing importance of such storage systems as intermittent power generators are added to the grid. In the context of the AEO, the EIA fails to address the need for these costly energy storage systems and fails to allocate their cost to the intermittent generators that are the source of the growing demand for them. As a result, the projected price of energy from intermittent renewable generators is unrealistically low in the AEO.

Oddly, NEMS does not include a “Nuclear Fuel Module.” Nuclear power is represented in the Electric Market Module, but receives no credit as a non-carbon producing source of electric power. As I reported in my posts on the Clean Power Plan, the CPP gives utilities no incentives to continue operating nuclear power plants or to build new nuclear power plants (see my 27 November 2015 post, “Is EPA Fudging the Numbers for its Carbon Regulation,” and my 2 July 2015 post, “EPA Clean Power Plan Proposed Rule Does Not Adequately Recognize the Role of Nuclear Power in Greenhouse Gas Reduction.”). With the current and expected future low price of natural gas, nuclear power operators are at a financial disadvantage relative to operators of large central station fossil power plants. This is the driving factor in the industry trend of early retirement of existing nuclear power plants.

The following 6 May 2016 announcement by Exelon highlights the current predicament of a high-performing nuclear power operator:

“Exelon deferred decisions on the future of its Clinton and Quad Cities plants last fall to give policymakers more time to consider energy market and legislative reforms. Since then, energy prices have continued to decline. Despite being two of Exelon’s highest-performing plants, Clinton and Quad Cities have been experiencing significant losses. In the past six years, Clinton and Quad Cities have lost more than $800 million, combined.“

“Exelon announced today that it will need to move forward with the early retirements of its Clinton and Quad Cities nuclear facilities if adequate legislation is not passed during the spring Illinois legislative session, scheduled to end on May 31 and if, for Quad Cities, adequate legislation is not passed and the plant does not clear the upcoming PJM capacity auction later this month.”

“Without these results, Exelon would plan to retire Clinton Power Station in Clinton, Ill., on June 1, 2017, and Quad Cities Generating Station in Cordova, Ill., on June 1, 2018.”

You can read Exelon’s entire announcement at the following link:

http://www.exeloncorp.com/newsroom/exelon-statement-on-early-retirement-of-clinton-and-quad-cities-nuclear-facilities

Together the Clinton and Quad Cities nuclear power plants have a combined Design Electrical Rating (DER) of 2,983 MWe from a non-carbon producing source. For the period 2013 – 2015, the U.S. nuclear power industry as a whole had a net capacity factor of 90.41%, meaning that the industry delivered 90.41% of the aggregate DER of all U.S. nuclear power plants. The three Exelon units being considered for early retirement exceeded this industry average performance, with the following net capacity factors: Quad Cities 1 @ 101.27%, Quad Cities 2 @ 92.68%, and Clinton @ 91.26%.

For the same 2013 – 2015 period, EIA reported the following net capacity factors for wind (32.96%), solar photovoltaic (27.25%), and solar thermal (21.25%). Using the EIA capacity factor for wind generators, the largest Siemens D7 wind turbine, which is rated at 7.0 MWe, delivers an average output of about 2.3 MWe. We would need more than 1,200 of these large wind turbines just to make up for the electric power delivered by the Clinton and Quad Cities nuclear power plants. Imagine the stability of that regional grid.
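As a back-of-the-envelope check on that comparison, here is a short Python sketch reproducing the arithmetic with the capacity factors quoted above. It is an average-output estimate only, ignoring intermittency, firming, and transmission, and it lands in the same ballpark as the figure cited:

```python
# Back-of-the-envelope check using the capacity factors quoted above.
# Average-output estimate only; ignores intermittency and firming needs.
nuclear_der_mwe = 2983        # Clinton + Quad Cities combined DER (MWe)
nuclear_cf = 0.9041           # 2013-2015 U.S. nuclear fleet net capacity factor
wind_cf = 0.3296              # 2013-2015 EIA wind net capacity factor
turbine_rating_mwe = 7.0      # large Siemens D7-class turbine

avg_nuclear_mwe = nuclear_der_mwe * nuclear_cf         # ~2,697 MWe average
avg_turbine_mwe = turbine_rating_mwe * wind_cf         # ~2.3 MWe average
turbines_needed = avg_nuclear_mwe / avg_turbine_mwe    # ~1,170 turbines

print(f"Average nuclear output: {avg_nuclear_mwe:.0f} MWe")
print(f"Average output per turbine: {avg_turbine_mwe:.2f} MWe")
print(f"Turbines needed (average-output basis): {turbines_needed:.0f}")
```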

CPP continues subsidies to renewable power generators. In time, the intermittent generators will reduce power quality and destabilize the electric power grid unless industrial-scale energy storage systems are deployed to enable the grid operators to match electricity supply and demand with reliable, dispatchable power.

As a nation, I believe we’re trending toward more costly electricity with lower power quality and reliability.

I hope you share my concerns about this trend.

Wave Glider Autonomous Vehicle Harvests Wave and Solar Power to Deliver Unique Operational Capabilities at Sea

Peter Lobner

The U.S. firm Liquid Robotics, Inc., in Sunnyvale, CA, designs, manufactures, and sells small unmanned surface vehicles (USVs) called Wave Gliders, which consist of two parts: an underwater “glider” that provides propulsion and a surface payload vehicle that houses electronics and a solar-electric power system. The physical arrangement of a Wave Glider is shown in the following diagrams. The payload vehicle is about 10 feet (305 cm) long. The glider is about 7 feet (213 cm) long and is suspended about 26 feet (800 cm) below the payload vehicle.

Wave Glider configuration  Source: Liquid Robotics. Note: 800 cm suspension distance is not to scale.

The payload vehicle is topped with solar panels and one or more instrumentation / communication / navigation masts. The interior modular arrangement of a Wave Glider is shown in the following diagram. Wave Glider is intended to be an open, extensible platform that can be readily configured for a wide range of missions.

Wave Glider configuration 2  Source: Liquid Robotics

The Wave Glider is propelled by wave power using the operational principle for wave power harvesting shown in the following diagram. Propulsion power is generated regardless of the heading of the Wave Glider relative to the direction of the waves, enabling sustained vehicle speeds of 1 to 3 knots.

Wave Glider propulsion scheme  Source: Liquid Robotics

The newer SV3 Wave Glider has a more capable electric power system than its predecessor, the SV2, enabling the SV3 glider to be equipped with an electric motor-driven propeller for supplementary solar-electric propulsion. SV3 also is capable of towing and supplying power to submerged instrument packages.

Autonomous navigation and real-time communications capabilities enable Wave Gliders to be managed individually or in fleets. The autonomous navigation capability includes programmable course navigation, including precise hold-station capabilities, and surface vessel detection and avoidance.

Originally designed to monitor whales, the Wave Glider has matured into a flexible, multi-mission platform for ocean environmental monitoring, maritime domain awareness / surveillance, oil and gas exploration / operations, and defense.

More information and short videos on the operation of the Wave Glider are available on the Liquid Robotics website at the following link:

http://www.liquid-robotics.com/platform/overview/

On 28 April 2016, the U.S. Navy announced that it was in the process of awarding Liquid Robotics a sole-source contract for Wave Glider USV hardware and related services. You can read the Notice of Intent at the following link:

https://www.fbo.gov/index?s=opportunity&mode=form&id=6abb899b3e3286bfcd861fc5dedfdb65&tab=core&_cview=0

As described by the Navy:

“The required USV is a hybrid sea-surface USV comprised of a submerged ‘glider’ that is attached via a tether to a surface float. The vehicle is propelled by the conversion of ocean wave energy into forward thrust, independent of wave direction. No electrical power is generated by the propulsion mechanism.”

Navy requirements for the Wave Glider USV include the following:

  • Mission: Capable of unsupported autonomous missions of up to ten months duration, with long distance transits of up to 1,000 nautical miles in the open ocean
  • Propulsion: Wave power harvesting at all vehicle-to-wave headings, with sustained thrust under its own propulsion sufficient to tow significant loads
  • Electric Power: Solar energy harvesting during daylight hours, with power generation / storage capabilities sufficient to deliver ten watts to instrumentation 24/7 (a simple power-budget sketch follows this list)
  • Instrumentation: Payload of 20 pounds (9.1 kg)
  • Navigation: Commandable vehicle heading and autonomous on-board navigation to a given and reprogrammable latitude/longitude waypoint on the ocean’s surface
  • Survivability: Sea states up to a rating of five and winds to 50 knots
  • Stealth: Minimal radar return, low likelihood of visual detectability, minimal radiated acoustic noise
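The Electric Power requirement implies a simple energy budget. Here is a minimal Python sketch: the ten-watt payload figure comes from the requirement above, while the darkness hours and harvest efficiency are illustrative assumptions of mine:

```python
# Minimal power-budget sketch for the "ten watts, 24/7" requirement above.
# Only the 10 W payload figure comes from the Navy notice; the rest is assumed.
payload_power_w = 10.0     # continuous payload power (from the requirement)
darkness_hours = 14.0      # assumed worst-case hours without useful sunlight
solar_hours = 24.0 - darkness_hours
harvest_efficiency = 0.85  # assumed solar harvest/charge efficiency

# The battery must carry the payload through the dark hours.
battery_reserve_wh = payload_power_w * darkness_hours                  # 140 Wh

# Daytime harvest must cover the full daily load plus charging losses.
daily_load_wh = payload_power_w * 24.0                                 # 240 Wh
required_harvest_w = daily_load_wh / (solar_hours * harvest_efficiency)

print(f"Battery reserve needed: {battery_reserve_wh:.0f} Wh")
print(f"Average solar harvest needed in daylight: {required_harvest_w:.0f} W")
```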

In my 11 April 2016 post, I discussed how large autonomous surface and underwater vehicles will revolutionize the ways in which the U.S. Navy conducts certain operational missions. Wave Glider is at the opposite end of the autonomous vehicle size range, but retains the capability to conduct long-duration, long-distance missions. It will be interesting to see how the Navy employs this novel autonomous vehicle technology.

Stunning Ultra High Resolution Images From the Google Art Camera

Peter Lobner

The Google Cultural Institute created the ultra high resolution Art Camera as a tool for capturing extraordinary digital images of two-dimensional artwork. The Institute states:

 “Working with museums around the world, Google has used its Art Camera system to capture the finest details of artworks from their collection.”

A short video at the following link provides a brief introduction to the Art Camera.

https://www.youtube.com/watch?v=dOrJesw5ET8

The Art Camera simplifies and speeds up the process of capturing ultra high resolution digital images, enabling a 1 meter square (39.4 inch square) piece of flat art to be imaged in about 30 minutes. Previously, this task took about a day using third-party scanning equipment.

The Art Camera is set up in front of the artwork to be digitized, the edges of the image to be captured are identified for the Art Camera, and then the camera proceeds automatically, taking ultra high-resolution photos across the entire surface within the identified edges. The resulting set of digital photos is processed by Google and converted into a single gigapixel file.
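For a rough feel for the numbers involved, here is a hedged Python calculation. The target resolution and per-frame figures are my assumptions for illustration, not published Art Camera specifications:

```python
# Rough arithmetic for a gigapixel capture; all numbers here are assumptions.
artwork_m = 1.0               # 1 m x 1 m piece, per the text
target_px_per_m = 40_000      # assumed ~40,000 pixels/m (~1,000 px per inch)
frame_px = 4_000              # assumed usable pixels per frame edge after overlap
capture_minutes = 30          # ~30 minutes per square meter, per the text

total_pixels = (artwork_m * target_px_per_m) ** 2        # 1.6 gigapixels
frames_per_edge = (artwork_m * target_px_per_m) / frame_px
total_frames = frames_per_edge ** 2                      # 100 frames
seconds_per_frame = capture_minutes * 60 / total_frames  # ~18 s per frame

print(f"{total_pixels/1e9:.1f} gigapixels from ~{total_frames:.0f} frames")
print(f"~{seconds_per_frame:.0f} seconds per frame in a 30-minute capture")
```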

Google has built 20 Art Cameras and is lending them out to institutions around the world at no cost to assist in capturing digital images of important art collections.

You can see many examples of artwork images captured by the Art Camera at the following link:

https://www.google.com/culturalinstitute/project/art-camera

Among the images on this site is the very detailed Chinese ink and color on silk image shown below. The original image measures about 39 x 31 cm (15 x 12 inches). The first image below is of the entire scene. Following are two images that show the higher resolution available as you zoom in on the dragon’s head and reveal the fine details of the original image, including the weave in the silk fabric.

Google cultural Institute image

GCI image detail 1

GCI image detail 2

Image credit, three images above: Google Cultural Institute/The Nelson-Atkins Museum of Art

In the following pointillist painting by Camille Pissarro, entitled Apple Harvest, the complex details of the artist’s brush strokes and points of paint become evident as you zoom in and explore the image. The original image measures about 74 x 61 cm (29 x 24 inches).

Pissaro Apple Harvest

Pissaro image detail 1

Pissaro image detail 2

Image credit, three images above: Google Cultural Institute/Dallas Museum of Art

Hopefully, art museums and galleries around the world will take advantage of Google’s Art Camera or similar technologies to capture and present their art collections to the world in this rich digital format.

5G is Coming, Slowly

Peter Lobner

The 5th generation of mobile telephony and data services, or 5G, soon will be arriving at a cellular service provider near each of us, but probably not this year. To put 5G in perspective, let’s start with a bit of telecommunications history.

1. Short History of Mobile Telephony and Data Services

0G non-cellular radio-telephone service:

  • 1946 – Mobile Telephone Service (MTS): This pre-cellular, operator assisted, mobile radio-telephone service required a full duplex VHF radio transceiver in the mobile user’s vehicle to link the mobile user’s phone to the carrier’s base station that connected the call to the public switched telephone network (PSTN) and gave access to the land line network. Each call was allocated to a specific frequency in the radio spectrum allocated for radio-telephone use. This type of access is called frequency division multiple access (FDMA). When the Bell System introduced MTS in 1946 in St. Louis, only three channels were available, later increasing to 32 channels.
  • 1964 – Improved Mobile Telephone Service (IMTS): This service provided full duplex UHF/VHF communications between a radio transceiver (typically rated at 25 watts) in the mobile user’s vehicle and a base station that covered an area 40 – 60 miles (64.3 – 96.6 km) in diameter. Each call was allocated to a specific frequency. The base station connected the call to the PSTN, which gave access to the land line network.

1G analog cellular phone service:

  • 1983 – Advanced Mobile Phone System (AMPS): This was the original U.S. fully automated, wireless, portable, cellular standard developed by Bell Labs. AMPS operated in the 800 MHz band and supported phone calls, but not data. The control link between the cellular phone and the cell site was a digital signal. The voice signal was analog. Motorola’s first cellular phone, DynaTAC, operated on the AMPS network.
    •  AMPS used FDMA, so each user call was assigned to a discrete frequency for the duration of the call. FDMA resulted in inefficient use of the carrier’s allocated frequency spectrum.
    • In Europe the comparable 1G standards were TACS (Total Access Communications System, based on AMPS) and NMT (Nordic Mobile Telephone).
    • The designation “1G” was retroactively assigned to analog cellular services after 2G digital cellular service was introduced. As of 18 February 2008, U.S. carriers were no longer required to support AMPS.

2G digital cellular phone and data services:

  • 1991 – GSM (Global System for Mobile), launched in Finland, was the first digital wireless standard. 2G supports digital phone calls, SMS (Short Message Service) text messaging, and MMS (Multimedia Messaging Service). 2G networks typically provide data speeds ranging from 9.6 kbits/s to 28.8 kbits/s, which is too slow to provide useful Internet access in most cases. Phone calls, messages, and the control link between the cellular phone and the cell site all are digital signals.
    •  GSM operates on the 900 and 1,800 MHz bands using TDMA (time division multiple access) to manage up to 8 users per frequency channel. Each user’s digital signal is parsed into discrete time slots on the assigned frequency and then reassembled for delivery (see the toy sketch after this list).
    • Today GSM is used in about 80% of all 2G devices
  • 1995 – Another important 2G standard is Interim Standard 95 (IS-95), which was the first code division multiple access (CDMA) standard for digital cellular technology. IS-95 was developed by Qualcomm and adopted by the U.S. Telecommunications Industry Association in 1995. In a CDMA network, each user’s digital signal is parsed into discrete coded packets that are transmitted and then reassembled for delivery. For a similar frequency bandwidth, a CDMA cellular network can handle more users than a TDMA network.
  • 2003 – EDGE (Enhanced Data Rates for GSM Evolution) is a backwards compatible evolutionary development of the basic 2G GSM system. EDGE generally is considered to be a pre-3G cellular technology. It uses existing GSM spectra and TDMA access and is capable of improving network capacity by a factor of about three.
  •  In the U.S., some cellular service providers plan to terminate 2G service by the end of 2016.
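To make the TDMA idea concrete, here is a toy Python sketch of slot interleaving and reassembly. It is my own illustration of the principle, not the actual GSM burst structure, which adds guard periods, training sequences, and channel coding:

```python
# Toy TDMA illustration: up to 8 users share one frequency channel by
# interleaving fixed time slots, one burst per user per frame.
SLOTS_PER_FRAME = 8

def tdma_multiplex(user_bursts):
    """Interleave each user's bursts into its assigned slot in every frame."""
    channel = []
    n_frames = max(len(bursts) for bursts in user_bursts.values())
    for frame in range(n_frames):
        for slot in range(SLOTS_PER_FRAME):
            bursts = user_bursts.get(slot, [])
            channel.append(bursts[frame] if frame < len(bursts) else None)
    return channel

def tdma_demultiplex(channel, slot):
    """Recover one user's bursts by sampling its slot in every frame."""
    return [b for i, b in enumerate(channel)
            if i % SLOTS_PER_FRAME == slot and b is not None]

# Three users, keyed by assigned slot, each with a sequence of bursts:
users = {0: ["A1", "A2"], 3: ["B1", "B2"], 7: ["C1", "C2"]}
shared_channel = tdma_multiplex(users)
print(tdma_demultiplex(shared_channel, 3))   # ['B1', 'B2'], reassembled
```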

3G digital cellular phone and data services:

  • 1998 – There are two main 3G standards for cellular data: IMT-2000 and CDMA2000. All 3G networks deliver higher data speeds of at least 200 kbits/s and lower latency (the amount of time it takes for the network to respond to a user command) than 2G. High Speed Packet Access (HSPA) technology enables even higher 3G data speeds, up to 3.6 Mbits/s. This enables a very usable mobile Internet experience with applications such as global positioning system (GPS) navigation, location-based services, video conferencing, and streaming mobile TV and on-demand video.
    •  IMT (International Mobile Telecommunications)-2000 accommodates three different access technologies: FDMA, TDMA and CDMA. Its principal implementations in Europe, Japan, Australia and New Zealand use wideband CDMA (W-CDMA) and are commonly known as the Universal Mobile Telecommunications System (UMTS). Service providers must install almost all new equipment to deliver UMTS 3G service. W-CDMA requires a larger available frequency spectrum than CDMA.
    • CDMA2000 is an evolutionary development of the 2G CDMA standard IS-95. It is backwards compatible with IS-95 and uses the same frequency allocation. CDMA2000 cellular networks are deployed primarily in the U.S. and South Korea.
  • 3.5G enhances performance further, bringing cellular Internet performance to the level of low-end broadband Internet, with peak data speeds of about 7.2 Mbits/s.

4G digital cellular phone and data services:

  • 2008 – IMT Advanced: This standard, adopted by the International Telecommunications Union (ITU), defines basic features of 4G networks, including all-IP (internet protocol) based mobile broadband, interoperability with existing wireless standards, and a nominal data rate of 100 Mbit/s while the user is moving at high speed relative to the station (i.e., in a vehicle).
  • 2009 – LTE (Long Term Evolution): The primary standard in use today is known as 4G LTE, which first went operational in Oslo and Stockholm in December 2009. Today, all four of the major U.S. cellular carriers offer LTE service.
    • In general, 4G LTE offers full IP services, with a faster broadband connection with lower latency compared to previous generations. The peak data speed typically is 1Gbps, which translates to between 1Mbps and 10Mbps for the end user.
    • There are different ways to implement LTE. Most 4G networks operate in the 700 to 800 MHz range of the spectrum, with some 4G LTE networks operating at 3.5 GHz.

2. The Hype About 5G

The goal of 5G is to deliver a superior wireless experience with speeds of 10Mbps to 100Mbps and higher, with lower latency, and lower power consumption than 4G. Some claim that 5G has the potential to offer speeds up to 40 times faster than today’s 4G LTE networks. In addition, 5G is expected to reduce latency to under a millisecond, which is comparable to the latency performance of today’s high-end broadband service.

With this improved performance, 5G will enable more powerful services on mobile devices, including:

  • Rapid downloads / uploads of large files; fast enough to stream “8K” video in 3-D. This would allow a person with a 5G smartphone to download a movie in about 6 seconds that would take 6 minutes on a 4G network (the arithmetic is sketched after this list).
  • Enable deployment of a wider range of IoT (Internet of Things) devices on networks where everything is connected to everything else and IoT devices are communicating in real-time. These devices include “smart home” devices and longer-lasting wearable devices, both of which benefit from 5G’s lower power consumption and low latency.
  • Provide better support for self-driving cars, each of which is a complex IoT node that needs to communicate in real time with external resources for many functions, including navigation, regional (beyond the range of the car’s own sensors) situation awareness, and requests for emergency assistance.
  • Provide better support for augmented reality / virtual reality and mobile real-time gaming, both of which benefit from 5G’s speed and low latency
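For concreteness, this minimal Python sketch reproduces the movie-download comparison from the first bullet. The 3 GB movie size and the effective link rates are assumptions chosen to match the quoted times (note that they imply roughly a 60x speedup, even more than the "40 times faster" claim above):

```python
# Hedged arithmetic behind the "6 seconds vs. 6 minutes" movie example.
# Movie size and effective link rates are illustrative assumptions.
movie_gb = 3.0                 # assumed movie size, gigabytes
movie_bits = movie_gb * 8e9

def download_seconds(rate_bps):
    return movie_bits / rate_bps

lte_rate = 67e6                # ~67 Mbps effective 4G rate -> ~6 minutes
fiveg_rate = 4e9               # ~4 Gbps effective 5G rate  -> ~6 seconds

print(f"4G: {download_seconds(lte_rate)/60:.1f} minutes")
print(f"5G: {download_seconds(fiveg_rate):.1f} seconds")
```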

3. So what’s the holdup?

5G standards have not yet been finalized and published. The ITU’s international standard is expected to be known as IMT-2020. Currently, the term “5G” doesn’t signify any particular technology.

5G development is focusing on use of super-high frequencies, as high as 73 GHz. Higher frequencies enable faster data rates and lower latency. However, at the higher frequencies, the 5G signals are usable over much shorter distances than 4G, and the 5G signals are more strongly attenuated by walls and other structures. This means that 5G service will require deployment of a new network architecture and physical infrastructure with cell sizes that are much smaller than 4G. Cellular base stations will be needed at intervals of perhaps every 100 – 200 meters (328 to 656 feet). In addition, “mini-cells” will be needed inside buildings and maybe even individual rooms.
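A simple free-space path-loss calculation illustrates why: at the same distance, a 73 GHz signal arrives roughly 40 dB weaker than a 700 MHz signal. Here is a minimal Python sketch using the standard Friis free-space formula; real-world millimeter-wave losses from walls, foliage, and weather are worse than this lower bound:

```python
# Free-space path-loss comparison suggesting why 5G cells must be small.
# FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45 (standard Friis form).
import math

def fspl_db(d_km, f_ghz):
    return 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45

for f in (0.7, 73.0):   # a typical 4G band vs. a candidate 5G mmWave band
    print(f"{f:5.1f} GHz at 200 m: {fspl_db(0.2, f):.1f} dB")
# Prints ~75 dB at 0.7 GHz vs. ~116 dB at 73 GHz: ~40 dB more loss.
```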

Fortunately, higher frequencies allow use of smaller antennae, so we should have more compact cellular hardware for deploying the “small cell” architecture. Get ready for new cellular nomenclature, including “microcells”, “femtocells” and “picocells”.

Because of these infrastructure requirements, deployment of 5G will require a significant investment and most likely will be introduced first in densely populated cities.

Initial introduction date is unlikely to be before 2017.

More details on 5G are available in a December 2014 white paper by GSMA Intelligence entitled, “Understanding 5G: Perspectives on Future Technological Advances in Mobile,” which you can download at the following link:

https://gsmaintelligence.com/research/?file=141208-5g.pdf&download

Note that 5G’s limitations inside buildings and the need for “mini-cells” to provide interior network coverage sound very similar to the limitations for deploying Li-Fi, which uses light instead of radio frequencies for network communications. See my 12 December 2015 post for information on Li-Fi technology.

15 July 2016 Update: FCC allocates frequency spectrum to facilitate deploying 5G wireless technologies in the U.S.

On 14 July 2016, the Federal Communications Commission (FCC) announced:

“Today, the FCC adopted rules to identify and open up the high frequency airwaves known as millimeter wave spectrum. Building on a tried-and-true approach to spectrum policy that enabled the explosion of 4G (LTE), the rules set in motion the United States’ rapid advancement to next-generation 5G networks and technologies.

The new rules open up almost 11 GHz of spectrum for flexible use wireless broadband – 3.85 GHz of licensed spectrum and 7 GHz of unlicensed spectrum. With the adoption of these rules, the U.S. is the first country in the world to open high-band spectrum for 5G networks and technologies, creating a runway for U.S. companies to launch the technologies that will harness 5G’s fiber-fast capabilities.”

You can download an FCC fact sheet on this decision at the following link:

https://www.fcc.gov/document/rules-facilitate-next-generation-wireless-technologies

These new rules should hasten the capital investments needed for timely 5G deployments in the U.S.

Landing a Reusable Booster Rocket on a Dime

Updated 18 March 2020

Peter Lobner

There are two U.S. firms that have succeeded in launching and recovering a booster rocket that was designed to be reusable. These firms are Jeff Bezos’ Blue Origin and Elon Musk’s SpaceX. Their booster rockets are designed for very different missions.

  • Blue Origin’s New Shepard booster and capsule are intended for brief, suborbital flights for space tourism and scientific research. The booster and capsule will be “man-rated” for passenger-carrying suborbital missions.
  • In contrast, SpaceX’s Falcon 9 booster rocket is designed to deliver a variety of payloads to Earth orbit. The payload may be the SpaceX Dragon capsule or a different civilian or military spacecraft. Currently, the Falcon 9 booster and Dragon capsule are not “man-rated” for orbital missions. SpaceX is developing a crewed version of the Dragon capsule that, in the future, will be used to deliver and return crewmembers for the International Space Station (ISS).

Both firms cite a cost advantage of recovering and reusing an expensive booster rocket and space capsule. Let’s see how they’re doing.

Blue Origin

The basic flight profiles of a single-stage, single-engine New Shepard booster and capsule are shown in the following diagram. The primary goals of each flight are to boost the capsule and passengers above 62.1 miles (100 km), safely recover the capsule and passengers, and safely recover the booster rocket. You can see in the diagram that the booster rocket and the capsule separate after the booster’s rocket engine is shut down and they are recovered separately. At separation, the booster and capsule are traveling at about Mach 3 (about 1,980 mph, 3,186 kph). The orientation of the booster rocket is controlled during descent and the rocket engine is restarted once at low altitude to bring the booster to a soft, vertical landing. Both the booster rocket and the capsule are designed for reuse.

Blue-origin-flight-profile  Source: Blue Origin

On 23 November 2015, Blue Origin made history when, on its first attempt, the New Shepard booster completed a suborbital flight that culminated with the autonomous landing of the booster rocket near the launch site in west Texas. The capsule landed nearby under parachutes. You can view a video of this historic flight at the following link:

https://www.youtube.com/watch?v=9pillaOxGCo

This same New Shepard booster was launched again on 22 January 2016, completed the planned suborbital flight, and again made an autonomous safe landing. This flight marked the first reuse of a booster rocket.

Again using the same hardware, New Shepard was launched on its third flight and safely recovered on 2 April 2016. On this flight, the rocket engine was re-started at a lower altitude (3,635 feet, 1,107 m) than on the previous flights to demonstrate the fast startup of the engine. The booster rocket made an on-target landing, touching down at a velocity of 4.8 mph (7.7 kph).

New Shepard landing 3  Source: Blue Origin

You can view a short video of the third New Shepard flight at the following link:

https://www.blueorigin.com/news/news/pushing-the-envelope#youtubeYU3J-jKb75g

In this video, the view from the capsule at 64.6 miles (104 km) above the Earth is stunning. As the landing of the booster rocket approaches, it is dropping like a stone until the rocket engine powers up, quickly stops the descent, and brings the booster rocket in for an accurate, soft, vertical landing.

So, the current score for Blue Origin is 3 attempts and 3 successful soft, vertical landings in less than 5 months. The same New Shepard booster was used all three times (i.e., it has been reused twice).

Refer to the Blue Origin website at the following link for more information.

https://www.blueorigin.com

SpaceX Falcon 9 (F9R)

The basic flight profile for a two-stage Falcon 9 recoverable booster on an orbital mission is shown in the following diagram. For ISS re-supply missions, the target for the Dragon capsule is in a near-circular orbit at an altitude of about 250 miles (403 km) and an orbital velocity of about 17,136 mph (27,578 kph). The first stage shuts down and separates from the second stage at an altitude of about 62.1 miles (100 km) and a speed of about 4,600 mph (7,400 kph, Mach 7). These parameters are for illustrative purposes only and will vary as needed to meet the particular mission requirements. The second stage continues into orbit with a Dragon capsule or other payload.
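As a quick sanity check on the quoted orbital velocity, the circular-orbit relation v = sqrt(GM/r) reproduces it closely. A minimal Python sketch:

```python
# Cross-check of the quoted ISS-orbit velocity using v = sqrt(GM/r).
import math

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m
altitude = 403e3            # ~250 miles, per the text

v = math.sqrt(GM_EARTH / (R_EARTH + altitude))   # circular orbital speed
print(f"{v:.0f} m/s = {v * 2.23694:.0f} mph")    # ~7,670 m/s, ~17,160 mph
```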

The nine-engine first stage carries extra fuel to enable some of its engines to re-start three times after stage separation to adjust trajectory, decelerate, and make a soft vertical landing on an autonomous recovery barge floating in the ocean 200 miles (320 km) or more downrange from the launch site.

The empty weight of the recoverable version of the Falcon 9 first stage (the F9R) is 56,438 pounds (25,600 kg), which is about 5,511 pounds (2,500 kg) more than that of the basic, non-recoverable version (V1.1). The added fuel and structural weight to enable recovery of the first stage reduces the payload mass that can be delivered to orbit.

Falcon flight profile to barge landing  Source: SpaceX

The autonomous “drone” barge is a very small target measuring about 170 ft. × 300 ft. (52 m × 91 m). It is equipped with azimuthal thrusters that provide precise positioning using GPS position data. The Falcon 9 booster knows where the drone barge should be. The Falcon 9’s four landing legs span 60 ft. (18 m), and all must land on the barge.

SpaceX_ASDS  Source: SpaceX

SpaceX made a series of unsuccessful attempts to land on a drone barge before their first successful landing:

  • 10 January 2015: First attempt; hard landing; booster destroyed.
  • 11 February 2015: High seas prevented use of the barge. Instead, the Falcon 9 first stage was flown to a soft, vertical landing in the ocean, simulating a barge landing.
  • 14 April 2015: Second attempt; successful vertical landing but the booster toppled, likely due to remaining lateral momentum.
  • 7 January 2016: Third attempt; successful vertical landing but the booster toppled, likely due to a mechanical failure in one landing leg.
  • 4 March 2016: Fourth attempt, with low fuel reserve and using only three engines; hard landing; booster destroyed.

On 8 April 2016, a Falcon 9 booster was launched from Cape Canaveral on an ISS re-supply mission. The first stage of this booster rocket became the first to make a successful landing on the drone barge downrange in the Atlantic.

A002_C002_0408A9  Source: SpaceX

You can view a short video of the Falcon 9 booster landing on the drone barge at the following link:

https://www.youtube.com/watch?v=RPGUQySBikQ

In the video, you will note the barge heaving in the moderate seas. After landing, the 156 foot (47.5 m) tall booster rocket is just balanced on its landing legs. Before the barge can be towed back to port, crew must board the barge and secure the booster. This is done by placing “shoes” over the landing feet and welding the shoes to the deck of the barge. Once back at Cape Canaveral, the booster will be examined and the rocket engine will be test fired to determine if the first stage can be reused.

Previously, on 21 December 2015, SpaceX successfully launched its Falcon 9 booster on an orbital mission and then landed the first stage back on the ground at Cape Canaveral. As shown in the diagram below, this involved a very different flight profile than for a Falcon 9 flight with a landing on the downrange drone barge. For the December 2015 flight, the Falcon 9 first stage had to reverse direction to fly back to Cape Canaveral from about 59 miles (95 km) downrange and then decelerate and maneuver for a soft, vertical landing about 10 minutes after launch.

Blue Origin-Falcon flight profile compared  Source: SpaceX

After recovering the booster, the Falcon 9 was inspected and the engines were successfully re-tested on 15 January 2016, on a launch pad at Cape Canaveral. I could not determine if this Falcon 9 first stage has been reused.

So, the current score for SpaceX is 6 attempts (not counting the February 2015 soft landing in the ocean) and 2 successes (one on land and one on the drone barge) in 15 months.

Refer to the SpaceX website at the following link for more information.

http://www.spacex.com

The bottom line

In the above diagram for the December 2015 Falcon 9 flight, the relative complexity of a typical New Shepard flight profile and the Falcon 9 flight profile with return to Cape Canaveral is clear. The Falcon 9 flight profile for a landing on the small, moving, down-range drone barge is even more complex.

The New Shepard sub-orbital mission is much less challenging than any Falcon 9 orbital mission. Nonetheless, both booster rockets face very similar challenges as they approach the landing site to execute an autonomous, soft, vertical landing.

Both Blue Origin and SpaceX have made tremendous technological leaps in demonstrating that a booster rocket can make an autonomous, soft, vertical landing and remain in a condition that allows its reuse in a subsequent mission. Blue Origin actually has reused their booster rocket and capsule twice, further demonstrating the maturity of reusable rocket technology.

It remains to be seen if this technology actually delivers the operating cost savings anticipated by Blue Origin and SpaceX. I hope it does. When space tourism becomes a reality, the hoped-for cost benefits of reusable booster rockets and spacecraft could affect my ticket price.

18 March 2020 Update:  Four years later

On 6 March 2020, SpaceX launched its 20th commercial resupply services mission (CRS-20) to the International Space Station (ISS).  The successful launch concluded with the 50th successful landing of the first stage of a Falcon 9 launch vehicle.  On this mission, the first stage flew back for a landing at Cape Canaveral in the windiest conditions encountered to date, 25 to 30 mph.  This was the last launch with the original cargo-only version of the Dragon capsule. Subsequent launches will use 2nd-generation Dragon capsules that are roomier and designed to also accommodate astronauts.

About two weeks later, on 18 March 2020, SpaceX launched another successful Falcon 9 mission, for the first time using a first stage that had flown on four prior missions.  The satellite payload was launched into the intended orbit.  However, a malfunction in one of nine first stage engines prevented recovery of the booster rocket.

On 11 December 2019, Blue Origin reported that New Shepard mission NS-12 was successfully completed.  “This was the 6th flight for this particular New Shepard vehicle. Blue Origin has so far reused two boosters five times each consecutively, so today marks a record with this booster completing its 6th flight to space and back.”

Booster reusability has become a reality for SpaceX and Blue Origin, and other firms are following their lead by developing new reusable launch vehicles.  These are encouraging steps toward more economic access to Earth orbit and beyond.  Both SpaceX and Blue Origin have advanced reusable launch vehicle technology significantly in the past four years.  Both soon will begin human space flight using their respective launch vehicles and space capsules.

Polymagnets® will Revolutionize the Ways in Which Magnets are Used

Peter Lobner

The U.S. firm Correlated Magnetics Research (CMR), Huntsville, AL, invented and is the sole manufacturer of Polymagnets®, which are precision-tailored magnets that enhance existing and new products with specific behaviors that go far beyond the simple attract-and-repel behavior of common magnets. CMR holds more than 100 patents on Polymagnet technology. You can visit their website at the following link:

http://www.polymagnet.com

CMR describes Polymagnets® as follows:

“Essentially programmable magnets, Polymagnets are the first fundamental advance in magnets in 180 years, since the introduction of electromagnets. With Polymagnets, new products can have softer ‘feel’ or snappier or crisper closing or opening behavior, and may be given the sensation of a spring or latch”.

On a conventional magnet, there is a North (N) pole on one surface and a South (S) pole on the opposite surface. Magnetic field lines flow around the magnet from pole to pole. On a Polymagnet®, many small, polarized (N or S) magnetic pixels (“maxels”) are printed in a desired pattern on the same surface. The magnetic field lines are completed between the maxels on that surface, resulting in a very compact, strong magnetic field. This basic concept is shown in the following figure.

Polymagnet field comparison

The mechanical 3-D behavior of a Polymagnet® is determined by the pattern and strength of the maxels embedded on the surface of the magnet. These customizable behaviors include spring, latch, shear, align, snap, torque, hold, twist, soften and release. The very compact magnetic field reduces magnetic interference with other equipment, opening new applications for Polymagnets® where a conventional magnet wouldn’t be suitable.

The above figure is a screenshot from the Smarter Every Day 153 video, which you can view at the following link. Thanks to Mike Spaeth for sending me this 10-minute video, which I think you will enjoy.

https://www.youtube.com/watch?v=IANBoybVApQ
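To see why the field lines close locally between maxels, consider a toy point-dipole model. The following Python sketch is my illustration of the principle only, with arbitrary numbers, not CMR's actual design method. It compares the axial field above a 4x4 grid of same-polarity dipoles (a stand-in for a conventional magnet face) with the same grid in an alternating maxel pattern; the alternating pattern's field dies off much faster with distance:

```python
# Toy point-dipole model of why a maxel array's external field is compact:
# alternating N/S maxels cancel in the far field. Illustration only.
import math

MU0 = 4e-7 * math.pi
PITCH = 1e-3     # assumed 1 mm maxel spacing
M = 1e-5         # assumed dipole moment per maxel (A*m^2), arbitrary

def bz(h, sign_of):
    """Axial B (tesla) at height h above one maxel of a 4x4 dipole grid."""
    x0 = y0 = -0.5 * PITCH   # evaluation axis directly over maxel (1, 1)
    total = 0.0
    for i in range(4):
        for j in range(4):
            dx = x0 - (i - 1.5) * PITCH
            dy = y0 - (j - 1.5) * PITCH
            r = math.sqrt(dx**2 + dy**2 + h**2)
            # z-oriented point dipole: Bz = (mu0*m/4pi)*(3h^2/r^5 - 1/r^3)
            total += sign_of(i, j) * (MU0 * M / (4 * math.pi)) * (
                3 * h**2 / r**5 - 1 / r**3)
    return total

uniform = lambda i, j: 1                 # conventional magnet face analogue
maxels = lambda i, j: (-1) ** (i + j)    # alternating N/S maxel pattern

for h_mm in (0.5, 1, 2, 5, 10):
    h = h_mm * 1e-3
    print(f"h = {h_mm:4.1f} mm: uniform {bz(h, uniform):+.2e} T, "
          f"maxel {bz(h, maxels):+.2e} T")
```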

More information on Polymagnet® technology, including short videos that demonstrate different mechanical behaviors, and a series of downloadable white papers, is available at the following link.

http://www.polymagnet.com/polymagnets/

This is remarkable new technology in search of novel applications. Many practical applications are identified on the Polymagnet® website. What are your ideas?

If you really want to look into this technology, you can buy a Polymagnet® demonstration kit at the following links:

https://www.magnetics.com/product.asp?ProductID=164

or,

http://www.mechanismsmarket.com/kits/

Polymagnet demo kit   Source: Mechanisms Market

Large Autonomous Vessels will Revolutionize the U.S. Navy

Peter Lobner

In this post, I will describe two large autonomous vessels that are likely to revolutionize the way the U.S. Navy operates. The first is the Sea Hunter, originally sponsored by the Defense Advanced Research Projects Agency (DARPA), and the second is Echo Voyager, developed by Boeing.

DARPA Anti-submarine warfare (ASW) Continuous Trail Unmanned Vessel (ACTUV)

ACTUV concept. Source: DARPA

DARPA explains that the program is structured around three primary goals:

  • Demonstrate the performance potential of a surface platform conceived originally as an unmanned vessel.
    • This new design paradigm reduces constraints on conventional naval architecture elements such as layout, accessibility, crew support systems, and reserve buoyancy.
    • The objective is to produce a vessel design that exceeds state-of-the art manned vessel performance for the specified mission at a fraction of the vessel size and cost.
  • Advance the technology for unmanned maritime system autonomous operation.
    • Enable independently deploying vessels to conduct missions spanning thousands of kilometers of range and months of duration under a sparse remote supervisory control model.
    • This includes autonomous compliance with maritime laws and conventions for safe navigation, autonomous system management for operational reliability, and autonomous interactions with an intelligent adversary.
  • Demonstrate the capability of an ACTUV vessel to use its unique sensor suite to achieve robust, continuous track of the quietest submarine targets over their entire operating envelope.

While DARPA states that the ACTUV vessel is intended to detect and trail quiet diesel-electric submarines, including air-independent submarines, which are rapidly proliferating among the world’s navies, that detect-and-trail capability should also be effective against quiet nuclear submarines. The ACTUV vessel will also have capabilities to conduct counter-mine missions.

The ACTUV program is consistent with the Department of Defense (DoD) “Third Offset Strategy,” which is intended to maintain U.S. military technical supremacy over the next 20 years in the face of increasing challenges from Russia and China. An “offset strategy” identifies particular technical breakthroughs that can give the U.S. an edge over potential adversaries. In the “Third Offset Strategy”, the priority technologies include:

  • Robotics and autonomous systems: capable of assessing situations and making decisions on their own, without constant human monitoring
  • Miniaturization: enabled by taking the human being out of the weapons system
  • Big data: data fusion, with advanced, automated filtering / processing before human involvement is required.
  • Advanced manufacturing: including composite materials and additive manufacturing (3-D printing) to enable faster design / build processes and to reduce traditionally long supply chains.

You can read more about the “Third Offset Strategy” at the following link:

http://breakingdefense.com/2014/11/hagel-launches-offset-strategy-lists-key-technologies/

You also may wish to read my 19 March 2016 post on Arthur C. Clarke’s short story “Superiority.” You can decide for yourself if it relates to the “Third Offset Strategy.”

Leidos (formerly SAIC) is the prime contractor for the ACTUV technology demonstrator vessel, Sea Hunter. In August 2012, Leidos was awarded a contract valued at about $58 million to design, build, and operationally test the vessel.

In 2014, Leidos used a 32-foot (9.8 meter) surrogate vessel to demonstrate the prototype maritime autonomy system designed to control all maneuvering and mission functions of an ACTUV vessel. The first voyage of 35 nautical miles (64.8 km) was conducted in February 2014. A total of 42 days of at-sea demonstrations were conducted to validate the autonomy system.

Sea Hunter is an unarmed, 145-ton full load displacement, diesel-powered, twin-screw, 132 foot (40 meters) long trimaran that is designed to operate in a wide range of sea conditions. It is designed to be operational up to Sea State 5 [moderate waves to 6.6 feet (2 meters) height, winds 17 – 21 knots] and to be survivable in Sea State 7 [rough weather with heavy waves up to 20 feet (6 meters) height]. The vessel is expected to have a range of about 3,850 miles (6,200 km) without maintenance or refueling and to be able to deploy on missions lasting 60 – 90 days.

Sea Hunter side view. Source: DARPA

Raytheon’s Modular Scalable Sonar System (MS3) was selected as the primary search and detection sonar for Sea Hunter. MS3 is a medium frequency sonar that is capable of active and passive search, torpedo detection and alert, and small object avoidance. In the case of Sea Hunter, the sonar array is mounted in a bulbous housing at the end of a fin that extends from the bottom of the hull, looking a bit like a modern, high-performance sailboat’s keel.

Sea Hunter will include sensor technologies to facilitate the correct identification of surface ships and other objects on the sea surface. See my 8 March 2015 post on the use of inverse synthetic aperture radar (ISAR) in such maritime surveillance applications.

During a mission, an ACTUV vessel will not be limited by its own sensor suite. The ACTUV vessel will be linked via satellite to the Navy’s worldwide data network, enabling it to be in constant contact with other resources (i.e., other ships, aircraft, and land bases) and to share data.

Sea Hunter was built at the Vigor Shipyard in Portland, Oregon. Construction price of the Sea Hunter is expected to be in the range from $22 to $23 million. The target price for subsequent vessels is $20 million.

You can view a DARPA time-lapse video of the construction and launch of Sea Hunter at the following link:

http://www.darpa.mil/attachments/ACTUVTimelapseandWalkthrough.mp4

Sea Hunter launch 1. Source: DARPA

Sea Hunter launch 2. Source: DARPA

In the above photo, you can see on the bottom of the composite hull, just forward of the propeller shafts, what appears to be a hatch. I’m just speculating, but this may be the location of a retractable sonar housing, which is shown in the first and second pictures, above.

You can get another perspective of the launch and the subsequent preliminary underway trials in the Puget Sound in the DARPA video at the following link:

http://www.darpa.mil/attachments/ACTUVTimelapseandWalkthrough.mp4

During the speed run, Sea Hunter reached a top speed of 27 knots. Following the preliminary trials, Sea Hunter was christened on 7 April 2016. Now the vessel starts an operational test phase to be conducted jointly by DARPA and the Office of Naval Research (ONR). This phase is expected to run through September 2018.

DARPA reported that it expects an ACTUV vessel to cost about $15,000 – $20,000 per day to operate. In contrast, a manned destroyer costs about $700,000 per day to operate.
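Taking the midpoint of the ACTUV estimate, a quick calculation shows the scale of the potential savings (rounded public estimates, not official Navy budget numbers):

    # Back-of-envelope ratio from the daily cost figures quoted above
    actuv_per_day = (15_000 + 20_000) / 2   # midpoint of DARPA's estimate, USD
    destroyer_per_day = 700_000             # quoted manned destroyer figure, USD
    print(f"One destroyer day costs about {destroyer_per_day / actuv_per_day:.0f}x an ACTUV day")

By these figures, one manned destroyer operating day costs about as much as 40 ACTUV operating days.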

The autonomous ship "Sea Hunter", developed by DARPA, is shown docked in Portland, Oregon before its christening ceremony. Source: DARPA

You can find more information on the ACTUV program on the DARPA website at the following link:

http://www.darpa.mil/news-events/2016-04-07

If ACTUV is successful in demonstrating the expected search and track capabilities against quiet submarines, it will become the bane of submarine commanders anywhere in the world. Imagine the frustration of a submarine commander who is unable to break the trail of an ACTUV vessel during peacetime. During a period of conflict, an ACTUV vessel may quickly become a target for the submarine being trailed. The Navy’s future conduct of operations may depend on having lots of ACTUV vessels.

28 July 2016 update: Sea Hunter ACTUV performance testing

On 1 May 2016, Sea Hunter arrived by barge in San Diego and then started initial performance trials in local waters.

ACTUV in San Diego Bay. Source: U.S. Navy

You can see a video of Sea Hunter in San Diego Bay at the following link:

https://news.usni.org/2016/05/04/video-navys-unmanned-sea-hunter-arrives-in-san-diego

On 26 July 2016, Leidos reported that it had completed initial performance trials in San Diego and that the ship met or surpassed all performance objectives for speed, maneuverability, stability, seakeeping, acceleration, deceleration and fuel consumption. These tests were the first milestone in the two-year test schedule.

Leidos indicated that upcoming tests will exercise the ship’s sensors and autonomy suite with the goals of demonstrating maritime collision regulations compliance capability and proof-of-concept for different Navy missions.
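Leidos has not published the internal logic of the autonomy suite, but the collision-regulations requirement gives a feel for what the software must encode. The Python sketch below is a deliberately toy classifier in the spirit of COLREGS Rules 13 – 15; it is in no way the actual Sea Hunter implementation:

    # Toy COLREGS-style encounter classifier (illustrative only)
    def colregs_action(relative_bearing_deg, target_aspect_deg):
        """relative_bearing_deg: bearing of the other vessel, measured clockwise
        from our own bow (0-360). target_aspect_deg: our bearing as seen from
        the other vessel's bow. Returns a coarse encounter type and obligation."""
        rb = relative_bearing_deg % 360
        aspect = target_aspect_deg % 360
        if 112.5 < rb < 247.5:
            # Other vessel approaches from more than 22.5 deg abaft our beam
            return "overtaking (Rule 13): the overtaking vessel keeps clear"
        if (rb <= 6 or rb >= 354) and (aspect <= 6 or aspect >= 354):
            return "head-on (Rule 14): both vessels alter course to starboard"
        if rb <= 112.5:
            return "crossing, target to starboard (Rule 15): we give way"
        return "crossing, target to port (Rule 15): we stand on"

    print(colregs_action(45, 300))   # contact crossing from our starboard bow
    print(colregs_action(2, 358))    # nearly reciprocal courses

A real implementation must also fuse uncertain sensor tracks and handle special cases (vessels restricted in their ability to maneuver, for example), which is why collision-regulations compliance is a test milestone in its own right.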

4 October 2018 update:  DARPA ACTUV program completed.  Sea Hunter testing and development is being continued by the Office of Naval Research

In January 2018, DARPA completed the ACTUV program and the Sea Hunter was transferred to the Office of Naval Research (ONR), which is continuing to operate the technology demonstration vessel under its Medium Displacement Unmanned Surface Vehicle (MDUSV) program.  You can read more about the transition of the DARPA program to ONR here:
 
 
It appears that ONR is less interested in the original ACTUV mission and more interested in a general-purpose “autonomous truck” that can be configured for a variety of missions while using the basic autonomy suite demonstrated on Sea Hunter.  In December 2017, ONR awarded Leidos a contract to build the hull structure for a second autonomous vessel that is expected to be an evolutionary development of the original Sea Hunter design.  You can read more about this ONR contract award here:
 

Echo Voyager Unmanned Underwater Vehicle (UUV)

Echo Voyager - front quarter view and top open. Source: Boeing

Echo Voyager is the third in a family of UUVs developed by Boeing’s Phantom Works. The first two are:

  • Echo Ranger (circa 2002): 18 feet (5.5 meters) long, 5 tons displacement; maximum depth 10,000 feet; maximum mission duration about 28 hours
  • Echo Seeker (circa 2015): 32 feet (9.8 meters) long; maximum depth 20,000 feet; maximum mission duration about 3 days

Both Echo Ranger and Echo Seeker are battery powered and require a supporting surface vessel for launch and recovery at sea and for recharging the batteries. They have successfully demonstrated the ability to conduct a variety of autonomous underwater operations and to navigate safely around obstacles.

Echo Voyager, unveiled by Boeing in Huntington Beach, CA on 10 March 2016, is a much different UUV. It is designed to deploy from a pier, autonomously conduct long-duration, long-distance missions and return by itself to its departure point or some other designated destination. Development of Echo Voyager was self-funded by Boeing.

Echo Voyager is a 50-ton displacement, 51 foot (15.5 meters) long UUV that is capable of diving to a depth of 11,000 feet (3,352 meters). It has a range of about 6,500 nautical miles (12,038 km), and is expected to be capable of autonomous operations for three months or more. The vessel is designed to accommodate various “payload sections” that can extend the length of the vessel up to a maximum of 81 feet (24.7 meters).

You can view a Boeing video on the Echo Voyager at the following link:

https://www.youtube.com/watch?v=L9vPxC-qucw

The propulsion system is a hybrid diesel-electric rechargeable system. Batteries power the main electric motor, enabling a maximum speed of about 8 knots. Electrically powered auxiliary thrusters can be used to precisely position the vessel at slow speed. When the batteries require recharging, Echo Voyager will rise toward the surface, extend a folding mast as shown in the following pictures, and operate the diesel engine with the mast serving as a snorkel. The mast also contains sensors and antennae for communications and satellite navigation.
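Boeing has not published Echo Voyager’s power budget, but the snorkel cycle just described is easy to sketch. In the following Python fragment, every number (battery capacity, submerged load, recharge rate) is an assumption chosen only to illustrate the submerged-discharge / snorkel-recharge rhythm:

    # Speculative duty-cycle sketch; all figures are illustrative assumptions
    battery_kwh = 1000.0        # assumed usable battery capacity
    submerged_draw_kw = 15.0    # assumed hotel + propulsion load while submerged
    diesel_charge_kw = 150.0    # assumed net recharge rate while snorkeling

    soc = battery_kwh           # state of charge; start fully charged
    mode = "submerged"
    snorkel_cycles = 0
    for hour in range(24 * 7):  # simulate one week in one-hour steps
        if mode == "submerged":
            soc -= submerged_draw_kw
            if soc <= 0.2 * battery_kwh:   # discharge to an assumed 20% floor
                mode = "snorkeling"        # rise, extend mast, start the diesel
        else:
            soc = min(battery_kwh, soc + diesel_charge_kw)
            if soc >= battery_kwh:         # fully recharged: submerge again
                mode = "submerged"
                snorkel_cycles += 1
    print(f"Snorkel cycles in one simulated week: {snorkel_cycles}")

With these made-up numbers, the vessel would snorkel for roughly six hours out of every two and a half days and spend the rest of the time fully submerged.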

Echo Voyager - mast extending and snorkeling. Source: screenshots from Boeing video at link above

The following image, also from the Boeing video, shows deployment of a payload onto the seabed.

Echo Voyager - emplacing a payload on the seabed. Source: screenshot from Boeing video at link above

Initial sea trials off the California coast were conducted in mid-2016.

Boeing currently does not have a military customer for Echo Voyager, but foresees the following missions as being well-suited for this type of UUV:

  • Surface and subsurface intelligence, surveillance, and reconnaissance (ISR)
  • ASW search and barrier patrol
  • Submarine decoy
  • Critical infrastructure protection
  • Mine countermeasures
  • Weapons platform

Boeing also expects civilian applications for Echo Voyager in offshore oil and gas, marine engineering, hydrography and other scientific research.

4 October 2018 update:  Progress in Echo Voyager development

Echo Voyager is based at a Boeing facility in Huntington Beach, CA.  In June 2018, Boeing reported that Echo Voyager had returned to sea for a second round of testing.  You can read more on Echo Voyager current status and the Navy’s plans for future large UUVs here:

http://www.latimes.com/business/la-fi-boeing-echo-voyager-20180623-story.html

Echo Voyager operating near the surface with mast extended. Source: Boeing

The Invisible Man may be Blind!

Peter Lobner

Metamaterials are a class of materials engineered to produce properties that don’t occur naturally.

The first working demonstration of an “invisibility cloak” was achieved in 2006 at the Duke University Pratt School of Engineering using the complex metamaterial-based cloak shown below.

Duke 2006 metamaterial cloak. Source: screenshot from YouTube link below.

The cloak deflected an incoming microwave beam around an object and reconstituted the wave fronts on the downstream side of the cloak with little distortion. To a downstream observer, the object inside the cloak would be hidden.

Effect of Duke metamaterial cloak. Source: screenshot from YouTube link below.

You can view a video of this Duke invisibility cloak at the following link:

https://www.youtube.com/watch?v=Ja_fuZyHDuk

In a paper published in the 18 September 2015 issue of Science, researchers at UC Berkeley reported creating an ultra-thin, metamaterial-based optical cloak that was successful in concealing a small-scale, three-dimensional object. The abstract of this paper, “An ultrathin invisibility skin cloak for visible light,” by Ni et al., is reproduced below.

“Metamaterial-based optical cloaks have thus far used volumetric distribution of the material properties to gradually bend light and thereby obscure the cloaked region. Hence, they are bulky and hard to scale up and, more critically, typical carpet cloaks introduce unnecessary phase shifts in the reflected light, making the cloaks detectable. Here, we demonstrate experimentally an ultrathin invisibility skin cloak wrapped over an object. This skin cloak conceals a three-dimensional arbitrarily shaped object by complete restoration of the phase of the reflected light at 730-nanometer wavelength. The skin cloak comprises a metasurface with distributed phase shifts rerouting light and rendering the object invisible. In contrast to bulky cloaks with volumetric index variation, our device is only 80 nanometer (about one-ninth of the wavelength) thick and potentially scalable for hiding macroscopic objects.”
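The paper’s key numbers hang together with a standard piece of reflection optics. At normal incidence, light reflected from a bump of height h(x, y) travels a round-trip path that is shorter by 2h(x, y), so a “skin” cloak must impose a compensating local phase shift. A hedged reconstruction of that condition (textbook carpet-cloak geometry, not the paper’s own derivation):

    \[
    \Delta\phi(x,y) \;=\; 2 k_0\, h(x,y) \;=\; \frac{4\pi}{\lambda}\, h(x,y),
    \qquad \lambda = 730\ \text{nm}
    \]

The quoted thickness is also consistent with the abstract’s “one-ninth of the wavelength”: 730 nm / 9 ≈ 81 nm, matching the stated 80 nanometers.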

If you have a subscription to Science, you can read the full paper at the following link:

http://science.sciencemag.org/content/349/6254/1310

Eric Grundhauser writes on the Atlas Obscura website about an interesting quandary for users of an optical invisibility cloak.

“Since your vision is based on the light rays that enter your eyes, if all of these rays were diverted around someone under an invisibility cloak, the effect would be like being covered in a thick blanket. Total darkness.”

So, the Invisible Man is likely to be less of a threat than he appeared in the movies. You should be able to locate him as he stumbles around a room, bumping into everything he can’t see at visible light frequencies. However, he may be able to navigate and sense his adversary at other electromagnetic and/or audio frequencies that are less affected by his particular invisibility cloak.

You can read Eric Grundhauser’s complete article, “The Problem With Invisibility is Blindness,” at the following link:

http://www.atlasobscura.com/articles/the-problem-with-invisibility-is-the-blindness?utm_source=howtogeek&utm_medium=email&utm_campaign=newsletter

Recognizing this inconvenient aspect of an invisibility cloak, researchers from Yunnan University, China, have been investigating the concept of a “reciprocal cloak,” which they describe as, “an intriguing metamaterial device, in which a hidden antenna or a sensor can receive electromagnetic radiation from the outside but its presence will not be detected.” One approach is called an “open cloak,” which includes a means to, “open a window on the surface of a cloak, so that exchanging information and matter with the outside can be achieved.”

You can read the complete 2011 paper, “Electromagnetic Reciprocal Cloak with Only Axial Material Parameter Spatially Variant,” by Yang et al., at the following link:

http://www.hindawi.com/journals/ijap/2012/153086/

An all-aspect, broadband (wide range of operational frequencies) invisibility cloak is likely to remain in the realm of fantasy and science fiction. A 10 March 2016 article entitled, “Invisibility cloaks can never hide objects from all observers,” by Lisa Zyga, explains:

“….limitations imposed by special relativity mean that the best invisibility cloaks would only be able to render objects partially transparent because they would suffer from obvious visible distortions due to motion. The result would be less Harry Potter and more like the translucent creatures in the 1987 movie Predator.”

You can read the complete article at the following link:

http://phys.org/news/2016-03-invisibility-cloaks.html

Further complications are encountered when applying an invisibility cloak to a very high-speed vessel. A 28 January 2016 article, also by Lisa Zyga, explains:

“When the cloak is moving at high speeds with respect to an observer, relativistic effects shift the frequency of the light arriving at the cloak so that the light is no longer at the operational frequency. In addition, the light emerging from the cloak undergoes a change in direction that produces a further frequency shift, causing further image distortions for a stationary observer watching the cloak zoom by.”

You can read the complete article, “Fast-moving invisibility cloaks become visible,” at the following link:

http://phys.org/news/2016-01-fast-moving-invisibility-cloaks-visible.html
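The size of this effect is easy to estimate with the standard relativistic Doppler formula. In the Python sketch below, the design wavelength is taken from the Berkeley skin cloak discussed earlier, and the speeds are arbitrary choices for illustration:

    # Relativistic Doppler blueshift seen by a cloak approaching head-on
    import math

    def observed_wavelength(lambda_nm, beta):
        # Head-on approach: wavelength shrinks by sqrt((1 - beta)/(1 + beta))
        return lambda_nm * math.sqrt((1 - beta) / (1 + beta))

    design_nm = 730.0  # assumed narrowband operating wavelength
    for v_km_s in (30, 300, 3000, 30000):  # 30 km/s is roughly Earth's orbital speed
        beta = v_km_s * 1e3 / 299_792_458
        print(f"v = {v_km_s:6d} km/s -> cloak illuminated at "
              f"{observed_wavelength(design_nm, beta):5.1f} nm")

Even at 3,000 km/s (one percent of the speed of light), the shift is only about 7 nm; the distortions Zyga describes become pronounced only at truly relativistic speeds.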

So, there you have it! The Invisible Man may be blind, the Predator’s cloak seems credible even when he’s moving, and a really fast-moving cloaked Klingon battlecruiser is vulnerable to detection.