[Editor's note: This is the final part of a three-part series. A version of this story appears in the December 2019 edition of Oil and Gas Investor.]

Forecasts are now commonly made, often rhetorically and without evidence, that we should expect a rapid decline in future costs for wind/solar/battery technologies that continues the gains already achieved. The first two decades of commercialization, after the 1980s, saw a greater than tenfold reduction in the cost of solar and wind hardware. But the path for further improvements won’t emulate the past. Instead, it now follows what mathematicians call an asymptote; or, put in economic terms, improvements are subject to a law of diminishing returns, where every incremental gain yields less progress than in the past (Figure 1).

This is a normal phenomenon in all physical systems. Throughout history, engineers have achieved big gains in the early years of a technology’s development, whether wind or gas turbines, steam or sailing ships, internal combustion or photovoltaic cells. Over time, engineers manage to approach nature’s limits. Bragging rights for gains in efficiency—or speed, or other equivalent metrics such as energy density (energy per unit of weight or volume)—then shrink from double-digit percentages to fractional percentage changes. Whether it’s solar, wind tech or aircraft turbines, the gains in performance are now all measured in single-digit percentages. Such progress is economically meaningful but is not revolutionary.


RELATED:

Part One: Debunking The New Energy Economy

Part Two: Batteries Cannot Save The Grid Or The Planet


The physics-constrained limits of energy systems are unequivocal. Solar arrays can’t convert more photons than those that arrive from the sun. Wind turbines can’t extract more energy than exists in the kinetic flows of moving air. Batteries are bound by the physical chemistry of the molecules chosen. Similarly, no matter how much better jet engines become, an A380 will never fly to the moon. An oil-burning engine can’t produce more energy than is contained in the physical chemistry of hydrocarbons.

Combustion engines have what’s called a Carnot Efficiency Limit, which is anchored in the temperature of combustion and the energy available in the fuel. The limits are long established and well understood. In theory, at a high enough temperature, 80% of the chemical energy in the fuel can be turned into power. Using today’s high-temperature materials, the best hydrocarbon engines convert about 50% to 60% of it to power. There’s still room to improve, but it’s nothing like the tenfold to nearly hundredfold revolutionary advances achieved in the first couple of decades after their invention. Wind/solar technologies are now at the same point on that asymptotic technology curve.
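
As a back-of-the-envelope check on those figures, the Carnot ceiling can be computed in a few lines of Python. This is a minimal sketch; the 1,700 K combustion temperature and 300 K ambient sink are illustrative assumptions, not numbers from the text.

```python
# Carnot efficiency: the hard ceiling for any heat engine, set only by
# the hot (combustion) and cold (ambient) temperatures, in kelvin.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

# Illustrative temperatures (assumed): ~1,700 K combustion, ~300 K sink.
print(f"{carnot_efficiency(1700, 300):.0%}")  # ~82%, near the ~80% ceiling cited
```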

For wind, the boundary is called the Betz Limit, which dictates how much of the kinetic energy in air a blade can capture; that limit is about 60%. Capturing all the kinetic energy would mean, by definition, no air movement and thus nothing left to capture: there must still be wind for the turbine to turn. Modern turbines already exceed 45% conversion. That leaves some real gains to be made but, as with combustion engines, nothing revolutionary. Another tenfold improvement is not possible.
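
For readers who want the arithmetic, here is a minimal sketch of the Betz calculation in Python. The 16/27 coefficient is standard physics; the 120-meter rotor and 10 m/s wind speed are assumed figures chosen purely for illustration.

```python
from math import pi

BETZ_LIMIT = 16 / 27  # ~59.3%: the maximum capturable share of wind energy

def wind_power_watts(rotor_diameter_m: float, wind_speed_ms: float,
                     air_density_kg_m3: float = 1.225) -> float:
    """Kinetic power flowing through the rotor disk: 0.5 * rho * A * v^3."""
    area_m2 = pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * air_density_kg_m3 * area_m2 * wind_speed_ms ** 3

# Assumed example turbine: 120 m rotor in a steady 10 m/s wind.
available = wind_power_watts(120, 10)
print(f"in the wind: {available / 1e6:.1f} MW; "
      f"Betz ceiling: {available * BETZ_LIMIT / 1e6:.1f} MW")
```

A machine converting 45% of the wind’s energy is thus already capturing roughly three-quarters of what the Betz Limit allows.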

For silicon photovoltaic (PV) cells, the physics boundary is called the Shockley-Queisser Limit: a maximum of about 33% of incoming photons are converted into electrons. State-of-the-art commercial PVs achieve just over 26% conversion efficiency—in other words, they are already near the boundary. While researchers keep unearthing new non-silicon options that offer tantalizing performance improvements, all face similar physics boundaries, and none is remotely close to manufacturability—never mind at low cost. There are no tenfold gains left.
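
The remaining headroom is simple arithmetic on the two numbers just cited; a minimal sketch:

```python
# Headroom between today's best commercial cells and the physics ceiling,
# using the two efficiencies cited above.
SQ_LIMIT = 0.33          # Shockley-Queisser ceiling for silicon PV
COMMERCIAL_BEST = 0.26   # state-of-the-art commercial conversion efficiency

headroom = SQ_LIMIT / COMMERCIAL_BEST - 1
print(f"maximum remaining relative gain: {headroom:.0%}")  # ~27%, not 10x
```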

Future advances in wind turbine and solar economics are now centered on incremental engineering improvements: economies of scale in making turbines enormous, taller than the Washington Monument, and in building similarly massive, square-mile utility-scale solar arrays. For both technologies, the underlying key components—concrete, steel and fiberglass for wind; silicon, copper and glass for solar—are already in mass production and well down asymptotic cost curves in their own domains.

While there are no surprising gains in economies of scale available in the supply chain, that doesn’t mean that costs are immune to improvement. In fact, all manufacturing processes experience continual improvements in production efficiency as volumes rise. This experience curve is called Wright’s Law. (That “law” was first documented in 1936, as it related to the challenge of manufacturing aircraft at costs that markets could tolerate. Analogously, while aviation took off and created a big, worldwide transportation industry, it didn’t eliminate automobiles or the need for ships.) Experience leading to lower incremental costs is to be expected; again, that’s not the kind of revolutionary improvement that could make a new energy economy even remotely plausible.
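
Wright’s Law is easy to state quantitatively: unit cost falls by a roughly fixed percentage each time cumulative production doubles. A minimal sketch, with an assumed 20% learning rate chosen purely for illustration:

```python
# Wright's Law: cost after n doublings = initial_cost * (1 - learning_rate)^n
def wright_cost(initial_cost: float, learning_rate: float,
                doublings: int) -> float:
    return initial_cost * (1 - learning_rate) ** doublings

# Assumed 20% learning rate applied to a $100 unit cost.
for d in (1, 5, 10):
    print(f"after {d:>2} doublings: ${wright_cost(100.0, 0.20, d):.1f}")
```

The catch is the x-axis: each successive doubling requires producing as much as all prior output combined, which is why the curve flattens for mature, high-volume industries.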

As for modern batteries, there are still promising options for significant improvements in their underlying physical chemistry. New non-lithium materials in research labs offer as much as a 200% or even 300% gain in inherent performance. Such gains nevertheless don’t constitute the kinds of tenfold or hundredfold advances seen in the early days of combustion chemistry. Prospective improvements will still leave batteries miles away from the real competition: petroleum.

There are no subsidies and no engineering from Silicon Valley or elsewhere that can close the physics-centric gap in energy densities between batteries and oil (Figure 2). The energy stored per pound is the critical metric for vehicles and, especially, aircraft. The maximum potential energy contained in oil molecules is about 1,500% greater, pound for pound, than the maximum in lithium chemistry. That’s why aircraft and rockets are powered by hydrocarbons. And that’s why a 20% improvement in oil propulsion (eminently feasible) is more valuable than a 200% improvement in batteries (still difficult).
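
The 20%-versus-200% comparison is worth making explicit. A minimal sketch, treating the “1,500% greater” figure as roughly a 16x gap pound for pound:

```python
# Pound-for-pound energy comparison behind the 20%-vs-200% claim.
battery_energy = 1.0              # normalize lithium chemistry to 1
oil_energy = 16 * battery_energy  # ~1,500% greater, per the text

improved_battery = battery_energy * (1 + 2.00)  # a 200% gain -> 3x
improved_oil = oil_energy * (1 + 0.20)          # a 20% gain -> 19.2x

print(f"oil still ahead by ~{improved_oil / improved_battery:.0f}x per pound")
```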

Finally, when it comes to limits, it is relevant to note that the technologies that unlocked shale oil and gas are still in the early days of engineering development, unlike the older technologies of wind, solar and batteries. Tenfold gains are still possible in how much energy a rig can extract from shale rock before approaching physics limits. That fact helps explain why shale oil and gas have added 2,000% more to U.S. energy production over the past decade than have wind and solar combined.

Digitalization won’t uberize the energy sector

While there are no new physics on the horizon to offer tenfold gains in any energy technology, a lot of hope and hype have been attached to what analytics and artificial intelligence could do to optimize things. Digital tools are already improving, and can further improve, all manner of efficiencies across entire swaths of the economy, and it is reasonable to expect that software will yet bring significant improvements both in the underlying efficiency of fabricating and using wind/solar/battery machines and in the efficiency of how such machines are integrated into infrastructures. Silicon logic has improved, for example, the control and thus the fuel efficiency of combustion engines, and it is doing the same for wind turbines. Similarly, software epitomized by Uber has shown that optimizing the use of expensive physical assets lowers costs. Uberizing all manner of capital assets is inevitable.

Uberizing the electric grid without hydrocarbons is another matter entirely.

The peak demand problem that software can’t fix

In the energy world, one of the most vexing problems is optimally matching electricity supply and demand (Figure 3). Here the data show that society and the electricity-consuming services that people like are generating a growing gap between the peaks and valleys of demand. The net effect for a hydrocarbon-free grid will be to increase the need for batteries to meet those peaks.

All this has relevance for encouraging elec­tric vehicles (EVs). In terms of managing the inconvenient cyclical nature of demand, shifting transportation fuel use from oil to the grid will make peak management far more challenging. People tend to refuel when it’s convenient; that’s easy to accommodate with oil, given the ease of storage. EV refueling will exacerbate the already-episodic nature of grid demand.

To ameliorate this problem, one proposal is to encourage or even require off-peak EV fueling. The jury is out on just how popular that will be or whether it will even be tolerated.

Although kilowatt-hours and cars—key targets in the new energy economy prescriptions—constitute only 60% of the energy economy, global demand for both is centuries away from saturation. Green enthusiasts make extravagant claims about the effect of Uber-like options and self-driving cars. However, the data show that the economic efficiencies from uberizing have so far increased the use of cars and urban congestion. Similarly, many analysts now see autonomous vehicles amplifying, not dampening, that effect.

That’s because people, and thus markets, are focused on economic efficiency, not energy efficiency. The former can be associated with reducing energy use; but it is also, and more often, associated with increased energy demand. A car uses more energy per mile than a horse, but it offers enormous gains in economic efficiency. Computers, similarly, use far more energy than pencil and paper.

Uberizing improves energy efficiencies but increases demand

Every energy conversion in our universe entails built-in inefficiencies—converting heat to propulsion, carbohydrates to motion, photons to electrons, electrons to data and so forth. All entail a certain energy cost, or waste, that can be reduced but never eliminated. But, in no small irony, history shows—as economists have often noted—that improvements in efficiency lead to increased, not decreased, energy consumption.

If, at the dawn of the modern era, affordable steam engines had remained as inefficient as those first invented, they would never have proliferated, nor would the attendant economic gains and the associated rise in coal demand have happened. We see the same thing with modern combustion engines. Today’s aircraft, for example, are three times as energy-efficient as the first commercial passenger jets of the 1950s. That efficiency didn’t reduce fuel use; it propelled air traffic to soar and, with it, a fourfold rise in the quantity of jet fuel burned.
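
The implied rebound is larger than it first appears. A minimal sketch of the arithmetic, using only the two figures in the paragraph above:

```python
# If fuel burned per unit of traffic fell ~3x while total fuel burned
# rose ~4x, traffic itself must have grown by roughly their product.
efficiency_gain = 3.0  # energy efficiency vs. 1950s passenger jets
fuel_growth = 4.0      # growth in total jet fuel burned
print(f"implied growth in air traffic: ~{efficiency_gain * fuel_growth:.0f}x")
```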

Similarly, it was the astounding gains in computing’s energy efficiency that drove the meteoric rise in data traffic on the internet—which resulted in far more energy used by computing. Global computing and communications, all told, now consume the energy equivalent of 3 billion barrels of oil per year, more than global aviation uses.

The purpose of improving efficiency in the real world, as opposed to the policy world, is to reduce the cost of enjoying the benefits of an energy-consuming engine or machine. So long as people and businesses want more of those benefits, declining cost leads to increased demand that, on average, outstrips any “savings” from the efficiency gains. Figure 4 shows how this efficiency effect has played out for computing and air travel.

Of course, the growth in demand for a specific product or service can subside in a (wealthy) society when limits are hit: the amount of food a person can eat, the miles per day an individual is willing to drive, the number of refrigerators or lightbulbs per household, etc. But a world of 8 billion people is a long way from reaching any such limits.

The macro picture of the relationship between efficiency and world energy demand is clear (Figure 5). Technology has continually improved society’s energy efficiency. But far from ending global energy growth, efficiency has enabled it. The improvements in cost and efficiency brought about through digital technologies will accelerate, not end, that trend.

Energy revolutions are still beyond the horizon

When the world’s poorest 4 billion people increase their energy use to just 15% of the per-capita level of developed economies, global energy consumption will rise by the equivalent of adding an entire United States’ worth of demand. In the face of such projections, there are proposals that governments should constrain demand, and even ban certain energy-consuming behaviors. One academic article proposed that the “sale of energy-hungry versions of a device or an application could be forbidden on the market, and the limitations could become gradually stricter from year to year, to stimulate energy-saving product lines.” Others have offered proposals to “reduce dependency on energy” by restricting the sizes of infrastructures or requiring the use of mass transit or car pools.
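
The scale of that first claim can be sanity-checked with round numbers. A minimal sketch, where the ~170 GJ per-capita figure for developed economies and the ~100 EJ figure for total U.S. primary energy are assumptions of roughly the right magnitude, not numbers from the text:

```python
# Rough scale check: 4 billion people moving to 15% of a developed-world
# per-capita energy level, compared with total U.S. primary energy use.
poorest_population = 4e9
developed_per_capita_gj = 170  # assumed: roughly an OECD-average figure
target_fraction = 0.15
us_total_ej = 100              # assumed: approximate U.S. annual demand

added_ej = poorest_population * target_fraction * developed_per_capita_gj / 1e9
print(f"added demand: ~{added_ej:.0f} EJ/yr vs. U.S. total of ~{us_total_ej} EJ/yr")
```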

The issue here is not only that poorer people will inevitably want to—and will be able to—live more like wealthier people but that new inventions continually create new demands for energy. The invention of the aircraft means that every $1 billion in new jets produced leads to some $5 billion in aviation fuel consumed over two decades to operate them. Similarly, every $1 billion in data centers built will consume $7 billion in electricity over the same period. The world is buying both at the rate of about $100 billion a year.
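
Those ratios compound. A minimal sketch of the energy spending locked in by a single year of purchases; the 50/50 split of the ~$100 billion between jets and data centers is an assumption for illustration only:

```python
# Energy spending implied by one year's purchases, using the article's
# $5-per-$1 (jets/fuel) and $7-per-$1 (data centers/electricity) ratios.
jets_bn = 50          # assumed half of the ~$100B annual total
datacenters_bn = 50   # assumed other half
fuel_bn = jets_bn * 5
electricity_bn = datacenters_bn * 7
print(f"~${fuel_bn + electricity_bn}B in energy consumed over two decades")
```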

The inexorable march of technological progress for things that use energy creates the seductive idea that something radically new is also inevitable in the ways we produce energy. But sometimes the old or established technology is the optimal solution and nearly immune to disruption. We still use stone, bricks and concrete, all of which date to antiquity. We do so because they’re optimal, not because they’re “old.” So are the wheel, water pipes, electric wires … the list is long. Hydrocarbons are, so far, the optimal way to power most of what society needs and wants.

More than a decade ago, Google focused its vaunted engineering talent on a project called “RE<C,” seeking to develop renewable energy cheaper than coal. After the project was shelved, Google’s lead engineers wrote in 2014: “Incremental improvements to existing [energy] technologies aren’t enough; we need something truly disruptive. … We don’t have the answers.” Those engineers rediscovered the physics and scale realities of energy systems.

An energy revolution will come only from the pursuit of basic sciences. Or, as Bill Gates has phrased it, the challenge calls for scientific “miracles.” These will emerge from basic research, not from subsidies for yesterday’s technologies. The internet didn’t emerge from subsidizing the dial-up phone, or the transistor from subsidizing vacuum tubes, or the automobile from subsidizing railroads.

However, 95% of private-sector R&D spending and the majority of government R&D are directed at “development” and not basic research. If policymakers want a revolution in energy tech, the single most important action would be to radically refocus and expand support for basic scientific research.

Hydrocarbons—oil, natural gas and coal—are the world’s principal energy resource today and will continue to be so in the foreseeable future. Wind turbines, solar arrays and batteries, meanwhile, constitute a small source of energy, and physics dictates that they will remain so. There is simply no possibility that the world is undergoing—or can undergo—a near-term transition to an entirely “new energy economy.”


Mark P. Mills is a senior fellow at the Manhattan Institute and a faculty fellow at Northwestern University’s School of Engineering and Applied Science. He is also a strategic partner with Cottonwood Venture Partners, an energy tech venture fund. He holds a degree in physics from Queen’s University in Ontario, Canada.