[Editor's Note: This is part 1 of a three-part series. A version of this story appears in the October 2019 edition of Oil and Gas Investor. Subscribe to the magazine here.]

A growing chorus of voices is exhorting the public, as well as government policymakers, to embrace the necessity—indeed, the inevitability—of society’s transition to a “new energy economy.” Advocates claim that rapid technological changes are becoming so disruptive and renewable energy is becoming so cheap, so fast, that there is no economic risk in accelerating the move to—or even mandating—a post-hydrocarbon world that no longer needs to use much, if any, oil, natural gas or coal.

Central to that worldview is the proposition that the energy sector is undergoing the same kind of technology disruptions that Silicon Valley tech has brought to so many other markets. Indeed, “old economy” energy companies are a poor choice for investors, according to proponents of the new energy economy, because the assets of hydrocarbon companies will soon become worthless, or “stranded.” Betting on hydrocarbon companies today is like betting on Sears instead of Amazon a decade ago.

“Mission Possible,” a 2018 report by the international Energy Transitions Commission, crystallized this growing body of opinion on both sides of the Atlantic. To “decarbonize” energy use, the report calls for the world to engage in three “complementary” actions: aggressively deploy renewables or so-called clean tech, improve energy efficiency and limit energy demand.

This prescription should sound familiar, as it is identical to a nearly universal energy-policy consensus that coalesced following the 1973–74 Arab oil embargo that shocked the world. But while the past half-century’s energy policies were animated by fears of resource depletion, the fear now is that burning the world’s abundant hydrocarbons releases dangerous amounts of CO2 into the atmosphere.

To be sure, history shows that grand energy transitions are possible. The key question today is whether the world is on the cusp of another.

The short answer is no. There are two core flaws with the thesis that the world can soon abandon hydrocarbons. The first: physics realities do not allow energy domains to undergo the kind of revolutionary change experienced on the digital frontiers. The second: no fundamentally new energy technology has been discovered or invented in nearly a century—certainly, nothing analogous to the invention of the transistor or the Internet.

Before these flaws are explained, it is best to understand the contours of today’s hydrocarbon-based energy economy and why replacing it would be a monumental, if not impossible, undertaking.

Moonshot policies and the challenge of scale

The universe is awash in energy. For humanity, the challenge has always been to deliver energy in a useful way that is both tolerable and available when it is needed, not when nature or luck offers it. Whether it be wind or water on the surface, sunlight from above, or hydrocarbons buried deep in the earth, converting an energy source into useful power always requires capital-intensive hardware.

Given the world’s population and the size of modern economies, scale matters. In physics, when attempting to change any system, one has to deal with inertia and various forces of resistance; it’s far harder to turn or stop a Boeing than it is a bumblebee. In a social system, it’s far more difficult to change the direction of a country than it is a local community.

Today’s reality: hydrocarbons—oil, natural gas and coal—supply 84% of global energy, a share that has decreased only modestly from 87% two decades ago (Figure 1). During those two decades, total world energy use rose by 50%, an amount equal to adding two entire United States’ worth of demand.

The small percentage-point decline in the hydrocarbon share of world energy use required over $2 trillion in cumulative global spending on alternatives during that period. Popular visuals of fields festooned with windmills and rooftops laden with solar cells don’t change the fact that these two energy sources today provide less than 2% of the global energy supply and 3% of the U.S. energy supply.

The scale challenge for any energy resource transformation begins with a description. Today, the world’s economies require an annual production of 35 billion barrels (Bbbl) of petroleum, plus the energy equivalent of another 30 Bbbl of oil from natural gas, plus the energy equivalent of yet another 28 Bbbl of oil from coal. In visual terms: if all that fuel were in the form of oil, the barrels would form a line from Washington, D.C., to Los Angeles, and that entire line would increase in height by one Washington Monument every week.
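
Tallying those figures is straightforward; here is a minimal back-of-envelope sketch in Python, using only the numbers cited above (the per-day conversion is added purely for scale):

```python
# Annual hydrocarbon demand cited above, in billion barrels of oil
# equivalent (Bbbl); the daily-rate conversion is simple arithmetic.
petroleum = 35    # Bbbl of petroleum per year
natural_gas = 30  # Bbbl-equivalent from natural gas
coal = 28         # Bbbl-equivalent from coal

total = petroleum + natural_gas + coal
print(f"{total} Bbbl-equivalent per year")                               # 93
print(f"{total * 1e9 / 365 / 1e6:.0f} million bbl-equivalent per day")  # ~255
```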

To completely replace hydrocarbons during the next 20 years, global renewable energy production would have to increase by at least ninetyfold. For context: it took a half-century for global oil and gas production to expand by tenfold. It is a fantasy to think, costs aside, that any new form of energy infrastructure could now expand nine times more than that in under half the time.

If the initial goal were more modest—say, to replace hydrocarbons only in the U.S. and only those used in electricity generation—the project would require an industrial effort greater than a World War II–level of mobilization. A transition to 100% non-hydrocarbon electricity by 2050 would require a U.S. grid construction program fourteenfold bigger than the grid build-out rate that has taken place during the past half-century. Then, to finish the transformation, this Promethean effort would need to be more than doubled to tackle nonelectric sectors, where 70% of U.S. hydrocarbons are consumed. And all that would affect a mere 16% of world energy use, America’s share.

This daunting challenge elicits a common response: “If we can put a man on the moon, surely we can [fill in the blank with any aspirational goal].” But transforming the energy economy is not like putting a few people on the moon a few times. It is like putting all of humanity on the moon—permanently.

The physics-driven cost realities of wind and solar

The technologies that frame the “new energy economy” vision distill to just three things: windmills, solar panels and batteries. While batteries don’t produce energy, they are crucial for ensuring that episodic wind and solar power is available for use in homes, businesses and transportation.

Yet windmills and solar power are themselves not “new” sources of energy. The modern wind turbine appeared 50 years ago and was made possible by new materials, especially hydrocarbon-based fiberglass. The first commercially viable solar technology also dates back a half-century, as does the invention of the lithium battery (by an Exxon researcher).

Over the decades, all three technologies have greatly improved and become roughly tenfold cheaper. Subsidies aside, that fact explains why, in recent decades, the use of wind/solar has expanded so much from a base of essentially zero.

Nonetheless, wind, solar and battery tech will continue to become better, within limits. Those limits matter a great deal—about which, more later—because of the overwhelming demand for power in the modern world and the realities of energy sources on offer from Mother Nature.

With today’s technology, $1 million worth of utility-scale solar panels will produce about 40 million kilowatt-hours (kWh) over a 30-year operating period. A similar metric is true for wind: $1 million worth of a modern wind turbine produces 55 million kWh over the same 30 years. Meanwhile, $1 million worth of hardware for a shale rig will produce enough natural gas over 30 years to generate more than 300 million kWh. That constitutes about 600% more electricity for the same capital spent on primary energy-producing hardware (Figure 2).
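
Those output figures can be compared directly; a quick sketch of the ratios (the text’s “about 600% more” sits between the two results below):

```python
# 30-year electricity output per $1 million of hardware, per Figure 2.
kwh_solar, kwh_wind, kwh_gas = 40e6, 55e6, 300e6

print(f"gas vs. solar: {kwh_gas / kwh_solar:.1f}x ({kwh_gas / kwh_solar - 1:.0%} more)")
print(f"gas vs. wind:  {kwh_gas / kwh_wind:.1f}x ({kwh_gas / kwh_wind - 1:.0%} more)")
# gas vs. solar: 7.5x (650% more); gas vs. wind: 5.5x (445% more)
```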

The fundamental differences between these energy resources can also be illustrated in terms of individual equipment. For the cost to drill a single shale well, one can build two 500-foot-high, 2-megawatt (MW) wind turbines. Those two wind turbines produce a combined output that, averaged over the years, is the energy equivalent of 0.7 bbl of oil per hour. The same money spent on a single shale rig produces 10 bbl of oil per hour, or its equivalent in natural gas, averaged over the decades.
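
The 0.7 bbl/hour figure can be reproduced with two illustrative assumptions that are not given in the text: a roughly 30% capacity factor and about 1.7 MWh of thermal energy per barrel of oil. A minimal sketch:

```python
# Average output of two 2-MW turbines, expressed in barrels of oil
# equivalent per hour. Capacity factor and MWh-per-barrel are assumptions.
turbines, nameplate_mw = 2, 2.0
capacity_factor = 0.30  # assumed long-run average
mwh_per_bbl = 1.7       # approx. thermal energy of one barrel of oil

avg_mwh_per_hour = turbines * nameplate_mw * capacity_factor  # 1.2 MWh/h
print(f"{avg_mwh_per_hour / mwh_per_bbl:.1f} bbl-equivalent per hour")  # ~0.7
```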

The huge disparity in output arises from the inherent differences in energy densities that are features of nature immune to public aspiration or government subsidy. The high energy density of the physical chemistry of hydrocarbons is unique and well understood, as is the science underlying the low energy density inherent in surface sunlight, wind volumes, and velocity. Regardless of what governments dictate that utilities pay for that output, the quantity of energy produced is determined by how much sunlight or wind is available during any period of time and the physics of the conversion efficiencies of photovoltaic cells or wind turbines.

These kinds of comparisons between wind, solar and natural gas illustrate the starting point in making a raw energy resource useful. But for any form of energy to become a primary source of power, additional technology is required. For gas, one necessarily spends money on a turbo-generator to convert the fuel into grid electricity. For wind/solar, spending is required for some form of storage to convert episodic electricity into utility-grade, 24/7 power.

The high cost of ensuring energy ‘availability’

Availability is the single most critical feature of any energy infrastructure, followed by price, followed by the eternal search for decreasing costs without affecting availability. Until the modern energy era, economic and social progress had been hobbled by the episodic nature of energy availability. That’s why, so far, more than 90% of America’s electricity, and 99% of the power used in transportation, comes from sources that can easily supply energy any time on demand.

In our data-centric, increasingly electrified society, always-available power is vital. But, as with all things, physics constrains the technologies and the costs for supplying availability. For hydrocarbon-based systems, availability is dominated by the cost of equipment that can convert fuel to power continuously for at least 8,000 hours a year, for decades. Meanwhile, it’s inherently easy to store the associated fuel to meet expected or unexpected surges in demand, or delivery failures in the supply chain caused by weather or accidents.

It costs less than $1 a barrel to store oil or natural gas (in oil-energy equivalent terms) for a couple of months. Storing coal is even cheaper. Thus, unsurprisingly, the U.S., on average, has about one to two months’ worth of national demand in storage for each kind of hydrocarbon at any given time.

Meanwhile, with batteries, it costs roughly $200 to store the energy equivalent of 1 bbl of oil. Thus, instead of months, barely two hours of national electricity demand can be stored in the combined total of all the utility-scale batteries on the grid plus all the batteries in the 1 million electric cars that exist today in America.
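
One way to arrive at a number of that order is to amortize battery capital cost over cycle life; the pack price and cycle count below are illustrative assumptions, not figures from the article:

```python
# Amortized cost of cycling a barrel of oil's worth of energy through
# batteries. Pack price and cycle life are illustrative assumptions.
pack_cost_per_kwh = 120.0  # $ per kWh of storage capacity (assumed)
cycle_life = 1000          # full charge-discharge cycles (assumed)
kwh_per_bbl = 1700.0       # approx. thermal energy in one barrel of oil

per_cycle = pack_cost_per_kwh / cycle_life  # $ per kWh stored, per cycle
print(f"${per_cycle * kwh_per_bbl:.0f} per bbl-equivalent stored")  # ~$204
```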

For wind/solar, the features that dominate cost of availability are inverted, compared with hydrocarbons. While solar arrays and wind turbines do wear out and require maintenance as well, the physics and thus additional costs of that wear-and-tear are less challenging than with combustion turbines. But the complex and comparatively unstable electrochemistry of batteries makes for an inherently more expensive and less efficient way to store energy and ensure its availability.

Since hydrocarbons are so easily stored, idle conventional power plants can be dispatched—ramped up and down—to follow cyclical demand for electricity. Wind turbines and solar arrays cannot be dispatched when there’s no wind or sun. As a matter of geophysics, both wind-powered and sunlight-energized machines produce energy, averaged over a year, about 25% to 30% of the time, often less. Conventional power plants, however, have very high “availability,” in the 80% to 95% range, and often higher.

A wind/solar grid would need to be sized both to meet peak demand and to have enough extra capacity beyond peak needs to produce and store additional electricity when sun and wind are available. This means, on average, that a pure wind/solar system would necessarily have to be about threefold the capacity of a hydrocarbon grid: i.e., one needs to build 3 kW of wind/solar equipment for every 1 kW of combustion equipment eliminated. That directly translates into a threefold cost disadvantage, even if the per-kW costs were all the same.
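
The threefold figure follows from the capacity-factor gap just described; a minimal check, using illustrative values drawn from the ranges cited above:

```python
# Nameplate capacity needed to replace 1 kW of dispatchable generation,
# given the availability and capacity-factor ranges cited earlier.
conventional_availability = 0.90   # hydrocarbon plants: 80-95% range
wind_solar_capacity_factor = 0.30  # wind/solar: 25-30% range

overbuild = conventional_availability / wind_solar_capacity_factor
print(f"{overbuild:.0f} kW of wind/solar per 1 kW of combustion replaced")  # 3
```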

Even this necessary extra capacity would not suffice. Meteorological and operating data show that average monthly wind and solar electricity output can drop as much as twofold during each source’s respective “low” season.

The myth of grid parity

How do these capacity and cost disadvantages square with claims that wind and solar are already at or near “grid parity” with conventional sources of electricity? The U.S. Energy Information Administration (EIA) and similar analyses report a “levelized cost of energy” (LCOE) for all types of electric power technologies. In the EIA’s LCOE calculations, electricity from a wind turbine or solar array is calculated as 36% and 46%, respectively, more expensive than electricity from a natural gas turbine—i.e., approaching parity.

But in a critical and rarely noted caveat, the EIA states: “The LCOE values for dispatchable and non-dispatchable technologies are listed separately in the tables because comparing them must be done carefully” (emphasis added). Put differently, the LCOE calculations do not take into account the array of real, if hidden, costs needed to operate a reliable 24/7, 365-day-per-year energy infrastructure—or, in particular, a grid that used only wind/solar.

The LCOE considers the hardware in isolation while ignoring real-world system costs essential to supply 24/7 power. Equally misleading, an LCOE calculation, despite its illusion of precision, relies on a variety of assumptions and guesses subject to dispute, if not bias.
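
For reference, the standard textbook form of the levelized-cost calculation makes those embedded assumptions explicit (this is the general formula, not EIA’s specific implementation):

```latex
% Levelized cost of energy: discounted lifetime costs over discounted output.
\mathrm{LCOE} \;=\;
\frac{\displaystyle\sum_{t=1}^{n} \frac{I_t + M_t + F_t}{(1+r)^{t}}}
     {\displaystyle\sum_{t=1}^{n} \frac{E_t}{(1+r)^{t}}}
```

Here I_t is capital spending in year t, M_t is operations and maintenance, F_t is the fuel bill (a forecast), E_t is the electricity produced (set by the assumed capacity factor), r is the discount rate and n is the plant life in years; each is one of the assumptions discussed below.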

For example, an LCOE assumes that the future cost of competing fuels—notably, natural gas—will rise significantly. But that means that the LCOE is more of a forecast than a calculation. This matters because a “levelized cost” uses such a forecast to calculate a purported average cost over a long period. The assumption that gas prices will go up is at variance with the fact that they have decreased over the past decade and with the evidence that low prices are the new normal for the foreseeable future. Adjusting the LCOE calculation to reflect a future where gas prices stay low radically increases the LCOE cost advantage of natural gas over wind/solar.

An LCOE incorporates an even more subjective feature, called the “discount rate,” which is a way of comparing the value of money today versus the future. A low discount rate has the effect of tilting an outcome to make it more appealing to spend precious capital today to solve a future (theoretical) problem. Advocates of using low discount rates are essentially assuming slow economic growth.

A high discount rate effectively assumes that a future society will be far richer than today (not to mention have better technology). Economist William Nordhaus’s work in this field, wherein he advocates using a high discount rate, earned him a 2018 Nobel Prize.
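
A toy LCOE calculation shows how much the fuel-price forecast and the discount rate move the answer. Every input below is hypothetical, chosen only to be in a plausible range for a nominal 1-kW gas-fired plant, not taken from EIA’s models:

```python
# Toy LCOE: discounted lifetime cost divided by discounted lifetime output.
# All inputs are hypothetical, for a nominal 1-kW gas-fired plant.
def lcoe(capex, om, fuel, energy_kwh, r, years=30):
    cost = capex + sum((om + fuel(t)) / (1 + r) ** t for t in range(1, years + 1))
    output = sum(energy_kwh / (1 + r) ** t for t in range(1, years + 1))
    return cost / output

flat = lambda t: 175.0                # $/yr fuel bill if gas prices stay flat
rising = lambda t: 175.0 * 1.03 ** t  # $/yr if gas prices escalate 3%/yr

for label, fuel in (("flat gas", flat), ("rising gas", rising)):
    for r in (0.03, 0.07):
        print(f"{label}, r={r:.0%}: ${lcoe(1000, 15, fuel, 7000, r):.3f}/kWh")
# the fuel-price path and the discount rate together move the
# result by tens of percent before any hardware cost changes
```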

An LCOE also requires an assumption about average multidecade capacity factors, the share of time the equipment actually operates (i.e., the real, not theoretical, amount of time the sun shines and wind blows). The EIA assumes, for example, 41% and 29% capacity factors, respectively, for wind and solar. But data collected from operating wind and solar farms reveal actual median capacity factors of 33% and 22%. The difference between assuming a 40% capacity factor and experiencing a 30% one means that, over the 20-year life of a 2-MW wind turbine, $3 million of energy production assumed in the financial models won’t exist—and that’s for a turbine with an initial capital cost of about $3 million.
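
The arithmetic behind that $3 million figure can be reconstructed as follows; the per-kWh value assigned to the lost output is an illustrative assumption:

```python
# Output lost over 20 years when a 2-MW turbine runs at a 30% rather than
# the assumed 40% capacity factor. The $/kWh value is an assumption.
nameplate_kw = 2000
assumed_cf, actual_cf = 0.40, 0.30
hours = 8760 * 20      # 20-year operating life
value_per_kwh = 0.085  # assumed average value of output, $

missing_kwh = nameplate_kw * (assumed_cf - actual_cf) * hours
print(f"{missing_kwh / 1e6:.0f} million kWh missing, "
      f"worth ~${missing_kwh * value_per_kwh / 1e6:.1f} million")
# ~35 million kWh, worth ~$3.0 million
```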

U.S. wind farm capacity factors have been getting better but at a slow rate of about 0.7% per year over the past two decades. Notably, this gain was achieved mainly by reducing the number of turbines per acre trying to scavenge moving air—resulting in average land used per unit of wind energy increasing by some 50%.

LCOE calculations do reasonably include costs for such things as taxes, the cost of borrowing, and maintenance. But here, too, mathematical outcomes give the appearance of precision while hiding assumptions. For example, assumptions about maintenance costs and performance of wind turbines over the long term may be overly optimistic. Data from the U.K., which is further down the wind-favored path than the U.S., point to far faster degradation (less electricity per turbine) than originally forecast.

To address at least one issue with using the LCOE as a tool, the International Energy Agency (IEA) recently proposed a “value-adjusted” LCOE, or VALCOE, that includes elements of flexibility and incorporates the economic implications of dispatchability. IEA calculations using the VALCOE method found coal power, for example, to be far cheaper than solar, with solar’s cost penalty widening as its share of a grid’s generation rises.

One would expect that, long before a grid is 100% wind/solar, the kinds of real costs outlined above should already be visible. As it happens, regardless of putative LCOEs, we do have evidence of the economic impact that arises from increasing the use of wind and solar energy.

The hidden costs of a ‘green’ grid

Subsidies, tax preferences and mandates can hide real-world costs, but when enough of them accumulate, the effect should be visible in overall system costs. And it is. In Europe, the data show that the higher the share of wind/solar, the higher the average cost of grid electricity (Figure 3).

Germany and Britain, well down the “new energy” path, have seen average electricity rates rise 60% to 110% during the past two decades. The same pattern—more wind/solar and higher electricity bills—is visible in Australia and Canada.

Since the per-capita share of wind power in the U.S. is still only a small fraction of that in most of Europe, the cost impacts on American ratepayers are less dramatic and less visible. Nonetheless, average U.S. residential electric costs have risen some 20% over the past 15 years. That should not have been the case: average electric rates should have gone down, not up.

Here’s why: coal and natural gas together supplied about 70% of electricity over that 15-year period. The price of fuel accounts for about 60% to 70% of the cost to produce electricity when using hydrocarbons. Thus, about half the average cost of America’s electricity depends on coal and gas prices. The price of both those fuels has gone down by more than 50% over that 15-year period. Utility costs, specifically, to purchase gas and coal are down some 25% over the past decade alone. In other words, cost savings from the shale-gas revolution have significantly insulated consumers, so far, from even higher rate increases.
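
The “about half” conclusion is simply the product of the two shares just cited; a one-line check:

```python
# Share of the average U.S. electricity cost tied to coal and gas prices.
hydrocarbon_share = 0.70   # coal + gas share of generation
fuel_share_of_cost = 0.65  # midpoint of the 60-70% range cited

print(f"{hydrocarbon_share * fuel_share_of_cost:.0%}")  # ~46%, about half
```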

The increased use of wind/solar imposes a variety of hidden, physics-based costs that are rarely acknowledged in utility or government accounting. For example, when large quantities of power are rapidly, repeatedly and unpredictably cycled up and down, the challenge and costs associated with “balancing” a grid (i.e., keeping it from failing) are greatly increased. OECD analysts estimate that at least some of those “invisible” costs imposed on the grid add 20% to 50% to the cost of grid kilowatt-hours.

Furthermore, flipping the role of the grid’s existing power plants from primary to backup for wind/solar leads to other real but unallocated costs that emerge from physical realities. Increased cycling of conventional power plants increases wear-and-tear and maintenance costs. It also reduces the utilization of those expensive assets, which means that capital costs are spread out over fewer kWh produced—thereby arithmetically increasing the cost of each of those kWh.
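
The arithmetic is simple: a plant’s capital charge is fixed, so halving its utilization doubles the capital cost buried in each kWh it sells. A minimal sketch with hypothetical inputs:

```python
# Per-kWh capital cost of a hypothetical 1,000-MW plant as its capacity
# factor falls when it is relegated to backing up wind/solar.
annual_capital_charge = 100e6  # $/yr, assumed

def capital_cost_per_kwh(cf):
    return annual_capital_charge / (cf * 1_000_000 * 8760)  # kW x hours

for cf in (0.80, 0.40):
    print(f"capacity factor {cf:.0%}: {capital_cost_per_kwh(cf) * 100:.1f} cents/kWh")
# 80% -> 1.4 cents/kWh; 40% -> 2.9 cents/kWh
```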

Then, if the share of episodic power becomes significant, the potential rises for complete system blackouts. That has happened twice in the state of South Australia, which derives more than 40% of its electricity from wind, after the wind died down unexpectedly (with some customers blacked out for days in some areas).

After South Australia’s total system blackout in 2016, Tesla, with much media fanfare, installed the world’s single largest lithium battery “farm” on that grid in 2017. For context, to keep South Australia lit for one half-day of no wind would require 80 such “world’s biggest” Tesla battery farms, and that’s on a grid that serves just 2.5 million people.

Engineers have other ways to achieve reliability: using old-fashioned giant diesel-engine generators as backup (engines essentially the same as those that propel cruise ships or that are used to back up data centers). Without fanfare, because of the rising use of wind, U.S. utilities have been installing grid-scale engines at a furious pace. The grid now has over $4 billion in utility-scale, engine-driven generators (enough for about 100 cruise ships), with lots more to come. Most burn natural gas, though many are oil-fired. Three times as many such big reciprocating engines have been added to America’s grid over the past two decades as over the half-century prior to that.

All these costs are real and are not allocated to wind or solar generators. But electricity consumers pay them. A way to understand what’s going on: managing grids with hidden costs imposed on non-favored players would be like levying fees on car drivers for the highway wear-and-tear caused by heavy trucks while simultaneously subsidizing the cost of fueling those trucks.

The issue with wind and solar power comes down to a simple point: they are impractical, on a national scale, as a major or primary source for generating electricity. As with any technology, pushing the boundaries of practical utilization is possible but usually not sensible or cost-effective. Helicopters offer an instructive analogy.

The development of a practical helicopter in the 1950s (four decades after its invention) inspired widespread hyperbole about that technology revolutionizing personal transportation. Today, the manufacture and use of helicopters is a multibillion-dollar niche industry providing useful and often-vital services. But one would no more use helicopters for regular Atlantic travel—though doable with elaborate logistics—than employ a nuclear reactor to power a train or photovoltaic systems to power a country.


Mark P. Mills is a senior fellow at the Manhattan Institute and a faculty fellow at Northwestern University’s School of Engineering and Applied Science. He is also a strategic partner with Cottonwood Venture Partners, an energy tech venture fund. He holds a degree in physics from Queen’s University in Ontario, Canada.

Part two of this series, “Batteries Cannot Save The Grid Or The Planet,” will appear in the November issue of Oil and Gas Investor.