A Collection of Articles by Prof Tom Murphy on Why Energy and Entropy Matters
Excerpted from: http://library.fora.tv/2011/10/26/Growth_Has_an_Expiration_Date
Tom Murphy is an associate professor of physics at the University of California, San Diego. He currently leads a project to test General Relativity by bouncing laser pulses off of the reflectors left on the Moon by the Apollo astronauts, achieving one-millimeter range precision.
Murphy’s keen interest in energy topics began with his teaching a course on energy and the environment for non-science majors at UCSD. He has explored the quantitatively convincing case that our pursuit of an ever-bigger scale of life faces gigantic challenges and carries significant risks.
Reproduced from: http://physics.ucsd.edu/do-the-math/2011/07/galactic-scale-energy/
Posted on 2011-07-12
Since the beginning of the Industrial Revolution, we have seen an impressive and sustained growth in the scale of energy consumption by human civilization. Plotting data from the Energy Information Administration on U.S. energy use since 1650 (1635-1945, 1949-2009, including wood, biomass, fossil fuels, hydro, nuclear, etc.) shows a remarkably steady growth trajectory, characterized by an annual growth rate of 2.9% (see figure). It is important to understand the future trajectory of energy growth because governments and organizations everywhere make assumptions based on the expectation that the growth trend will continue as it has for centuries—and a look at the figure suggests that this is a perfectly reasonable assumption. (See this update for nuances.)
Growth has become such a mainstay of our existence that we take its continuation as a given. Growth brings many positive benefits, such as cars, television, air travel, and iGadgets. Quality of life improves, health care improves, and, aside from a proliferation of passwords to remember, life tends to become more convenient over time. Growth also brings with it a promise of the future, giving reason to invest in future development in anticipation of a return on the investment. Growth is then the basis for interest rates, loans, and the finance industry.
Because growth has been with us for “countless” generations—meaning that everyone we ever met or our grandparents ever met has experienced it—growth is central to our narrative of who we are and what we do. We therefore have a difficult time imagining a different trajectory.
This post provides a striking example of the impossibility of continued growth at current rates—even within familiar timescales. As a matter of convenience, we lower the energy growth rate from 2.9% to 2.3% per year so that we see a factor of ten increase every 100 years. We start the clock today, with a global rate of energy use of 12 terawatts (meaning that the average world citizen has a 2,000 W share of the total pie). We will begin with semi-practical assessments, and then in stages let our imaginations run wild—even then finding that we hit limits sooner than we might think. I will admit from the start that the assumptions underlying this analysis are deeply flawed. But that becomes the whole point, in the end.
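A quick check of this rule of thumb (the 2.3% figure is from the text; the arithmetic is mine):

```python
rate = 0.023                        # the text's convenient growth rate
per_century = (1 + rate) ** 100     # ≈ 9.7, i.e. roughly a factor of ten per century
exact_rate = 10 ** (1 / 100) - 1    # rate giving exactly 10x per century: ≈ 2.33%
print(per_century, exact_rate)
```

So 2.3% is a slightly rounded version of the rate that delivers exactly one factor of ten per hundred years.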
A Race to the Galaxy
I have always been impressed by the fact that as much solar energy reaches Earth in one hour as we consume in a year. What hope such a statement brings! But let’s not get carried away—yet.
Only 70% of the incident sunlight enters the Earth’s energy budget—the rest immediately bounces off of clouds and atmosphere and land without being absorbed. Also, being land creatures, we might consider confining our solar panels to land, occupying 28% of the total globe. Finally, we note that solar photovoltaics and solar thermal plants tend to operate around 15% efficiency. Let’s assume 20% for this calculation. The net effect is about 7,000 TW, about 600 times our current use. Lots of headroom, yes?
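As a sanity check, the 7,000 TW figure follows from round numbers. The solar constant (≈1366 W/m²) and Earth's radius are my assumed inputs; the percentages are the text's:

```python
import math

solar_constant = 1366.0             # W/m^2 at the top of the atmosphere (assumed)
earth_radius = 6.371e6              # m (assumed)
intercepted = solar_constant * math.pi * earth_radius**2   # ≈ 1.7e17 W total
harvest = intercepted * 0.70 * 0.28 * 0.20  # absorbed x land fraction x panel efficiency
headroom = harvest / 12e12          # compare to the current 12 TW
print(harvest / 1e12, headroom)     # ≈ 6,800 TW, roughly 600 times current use
```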
When would we run into this limit at a 2.3% growth rate? Recall that we expand by a factor of ten every hundred years, so in 200 years, we operate at 100 times the current level, and we reach 7,000 TW in 275 years. 275 years may seem long on a single human timescale, but it really is not that long for a civilization. And think about the world we have just created: every square meter of land is covered in photovoltaic panels! Where do we grow food?
Now let’s start relaxing constraints. Surely in 275 years we will be smart enough to exceed 20% efficiency for such an important global resource. Let’s laugh in the face of thermodynamic limits and talk of 100% efficiency (yes, we have started the fantasy portion of this journey). This buys us a factor of five, or 70 years. But who needs the oceans? Let’s plaster them with 100% efficient solar panels as well. Another 55 years. In 400 years, we hit the solar wall at the Earth’s surface. This is significant, because biomass, wind, and hydroelectric generation derive from the sun’s radiation, and fossil fuels represent the Earth’s battery charged by solar energy over millions of years. Only nuclear, geothermal, and tidal processes do not come from sunlight—the latter two of which are inconsequential for this analysis, at a few terawatts apiece.
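All of these timescales follow from one rule: at 2.3% per year, reaching a factor f takes about 100·log10(f) years. A quick check (my arithmetic on the text's numbers):

```python
import math

def years_to_reach(factor):
    """Years of 2.3%-per-year growth to scale up by `factor` (ten-fold per century)."""
    return 100 * math.log10(factor)

land_wall = years_to_reach(7000 / 12)   # 12 TW -> 7,000 TW: ≈ 277 years (text: 275)
perfect_panels = years_to_reach(5)      # 20% -> 100% efficiency: ≈ 70 more years
add_oceans = years_to_reach(1 / 0.28)   # land-only -> whole surface: ≈ 55 more years
total = land_wall + perfect_panels + add_oceans   # ≈ 400 years to the solar wall
print(round(land_wall), round(total))
```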
But the chief limitation in the preceding analysis is Earth’s surface area—pleasant as it is. We only gain 16 years by collecting the extra 30% of energy immediately bouncing away, so the great expense of placing an Earth-encircling photovoltaic array in space is surely not worth the effort. But why confine ourselves to the Earth, once in space? Let’s think big: surround the sun with solar panels. And while we’re at it, let’s again make them 100% efficient. Never mind the fact that a 4 mm-thick structure surrounding the sun at the distance of Earth’s orbit would require one Earth’s worth of materials—and specialized materials at that. Doing so allows us to continue 2.3% annual energy growth for 1350 years from the present time.
At this point you may realize that our sun is not the only star in the galaxy. The Milky Way galaxy hosts about 100 billion stars. Lots of energy just spewing into space, there for the taking. Recall that each factor of ten takes us 100 years down the road. One-hundred billion is eleven factors of ten, so 1100 additional years. Thus in about 2500 years from now, we would be using a large galaxy’s worth of energy. We know in some detail what humans were doing 2500 years ago. I think I can safely say that I know what we won’t be doing 2500 years hence.
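The bookkeeping, for the curious (the 1350-year figure carries over from the sun-surrounding step above):

```python
import math

sun_wall = 1350                       # years until we need the whole sun's output
stars = 100e9                         # rough Milky Way star count from the text
extra = 100 * math.log10(stars)       # eleven factors of ten -> 1100 more years
print(sun_wall + extra)               # 2450: "about 2500 years from now"
```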
WHY SINGLE OUT SOLAR?
Some readers may be bothered by the foregoing focus on solar/stellar energy. If we’re dreaming big, let’s forget the wimpy solar energy constraints and adopt fusion. The abundance of deuterium in ordinary water would allow us to have a seemingly inexhaustible source of energy right here on Earth. We won’t go into a detailed analysis of this path, because we don’t have to. The merciless growth illustrated above means that within 1400 years, any source of energy we harness would have to outshine the sun.
Let me restate that important point. No matter what the technology, a sustained 2.3% energy growth rate would require us to produce as much energy as the entire sun within 1400 years. A word of warning: that power plant is going to run a little warm. Thermodynamics requires that if we generated sun-comparable power on Earth, the surface of the Earth—being smaller than that of the sun—would have to be hotter than the surface of the sun!
We can explore more exactly the thermodynamic limits to the problem. Earth absorbs abundant energy from the sun—far in excess of our current societal enterprise. The Earth gets rid of its energy by radiating into space, mostly at infrared wavelengths. No other paths are available for heat disposal. The absorption and emission are in near-perfect balance, in fact. If they were not, Earth would slowly heat up or cool down. Indeed, we have diminished the ability of infrared radiation to escape, leading to global warming. Even so, we are still in balance to within less than the 1% level. Because radiated power scales as the fourth power of temperature (when expressed in absolute terms, like Kelvin), we can compute the equilibrium temperature of Earth’s surface given additional loading from societal enterprise.
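We can sketch that computation with the Stefan–Boltzmann law for the effective radiating temperature. This ignores the greenhouse offset between surface and radiating temperatures, and the constants are my assumed round numbers, so treat the results as illustrative:

```python
import math

SIGMA = 5.67e-8                       # Stefan-Boltzmann constant, W m^-2 K^-4
R = 6.371e6                           # Earth radius, m (assumed)
absorbed = 0.70 * 1366.0 * math.pi * R**2      # sunlight entering the budget, ≈ 1.2e17 W
area = 4 * math.pi * R**2                      # Earth radiates from its whole surface

def equilibrium_temp(societal_power):
    """Effective radiating temperature (K) once civilization adds its own heat."""
    return ((absorbed + societal_power) / (SIGMA * area)) ** 0.25

print(equilibrium_temp(0))            # ≈ 255 K: the familiar no-greenhouse baseline
print(equilibrium_temp(1.2e13))       # today's 12 TW: no measurable change
print(equilibrium_temp(1.2e19))       # a millionfold increase (~600 years): ≈ 800 K
```

Fourth-power radiation means societal heat is negligible today, but becomes the dominant term within a handful of centuries of 2.3% growth.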
The result is shown above. From before, we know that if we confine ourselves to the Earth’s surface, we exhaust solar potential in 400 years. In order to continue energy growth beyond this time, we would need to abandon renewables—virtually all of which derive from the sun—for nuclear fission/fusion. But the thermodynamic analysis says we’re toasted anyway.
Stop the Madness!
The purpose of this exploration is to point out the absurdity that results from the assumption that we can continue growing our use of energy—even if doing so more modestly than the last 350 years have seen. This analysis is an easy target for criticism, given the tunnel-vision of its premise. I would enjoy shredding it myself. Chiefly, continued energy growth will likely be unnecessary if the human population stabilizes. At least the 2.9% energy growth rate we have experienced should ease off as the world saturates with people. But let’s not overlook the key point: continued growth in energy use becomes physically impossible within conceivable timeframes. The foregoing analysis offers a cute way to demonstrate this point. I have found it to be a compelling argument that snaps people into appreciating the genuine limits to indefinite growth.
Once we appreciate that physical growth must one day cease (or reverse), we can come to realize that all economic growth must similarly end. This last point may be hard to swallow, given our ability to innovate, improve efficiency, etc. But this topic will be put off for another post.
I thank Kim Griest for comments and for seeding the idea that in 2500 years, we use up the Milky Way galaxy, and I thank Brian Pierini for useful comments.
Can Economic Growth Last?
Posted on 2011-07-14
As we saw in the previous post, the U.S. has expanded its use of energy at a typical rate of 2.9% per year since 1650. We learned that continuation of this energy growth rate in any form of technology leads to a thermal reckoning in just a few hundred years (not the tepid global warming, but boiling skin!). What does this say about the long-term prospects for economic growth, if anything?
World economic growth for the previous century, expressed in constant 1990 dollars. For the first half of the century, the economy tracked the 2.9% energy growth rate very well, but has since increased to a 5% growth rate, outstripping the energy growth rate.
The figure at left shows the rate of global economic growth over the last century, as reconstructed by J. Bradford DeLong. Initially, the economy grew at a rate consistent with that of energy growth. Since 1950, the economy has outpaced energy, growing at a 5% annual rate. This might be taken as great news: we do not necessarily require physical growth to maintain growth in the economy. But we need to understand the sources of the additional growth before we can be confident that this condition will survive the long haul. After all, fifty years does not imply everlasting permanence.
The difference between economic and energy growth can be split into efficiency gains—we extract more activity per unit of energy—and “everything else.” The latter category includes sectors of economic activity not directly tied to energy use. Loosely, this could be thought of as non-manufacturing activity: finance, real estate, innovation, and other aspects of the “service” economy. My focus, as a physicist, is to understand whether the impossibility of indefinite physical growth (i.e., in energy, food, manufacturing) means that economic growth in general is also fated to end or reverse. We’ll start with a close look at efficiency, then move on to talk about more spritely economic factors.
Exponential vs. Linear Growth
First, let’s address what I mean when I say growth. I mean a steady rate of fractional expansion each year. For instance, 5% economic growth means any given year will have an economy 5% larger than the year before. This leads to exponential behavior, which is what drives the conclusions. If you object that exponentials are unrealistic, then we’re in agreement. But such growth is the foundation of our current economic system, so we need to explore the consequences. If you think we could save ourselves much of the mess by transitioning to linear growth, this indeed dramatically shifts the timeline—but it’s also a death knell for economic growth.
Let’s say we lock in today’s 5% growth and make it linear, so that we increase by a fixed absolute amount every year—not by a fixed fraction of that year’s level. We would then double in 20 years, and in a century would be six times as large (as opposed to 132 times larger under exponential 5% growth). But after just 20 years, the fractional growth rate has fallen to 2.5%, and after a century it is below 1%. So linear growth starves the economic beast, and would force us to abandon our current debt-based financial system of interest and loans. This post is all about whether we can maintain our current, exponential trajectory.
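The linear-versus-exponential comparison takes only a few lines to check (my arithmetic, using the text's 5% figure):

```python
base, step = 1.0, 0.05     # lock in 5% of today's size as a fixed yearly increment

def size(t):
    """Economy size after t years of linear growth."""
    return base + step * t

def growth_rate(t):
    """Fractional growth keeps shrinking as the base grows."""
    return step / size(t)

print(base / step)              # 20 years to double
print(size(100))                # 6.0, versus 1.05**100 ≈ 131.5 exponentially
print(growth_rate(20), growth_rate(100))   # 0.025, then well under 0.01
```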
Squeezing Efficiency: Rabbits out of the Hat
It seems clear that we could, in principle, rely on efficiency alone to allow continued economic growth even given a no-growth raw energy future (as is inevitable). The idea is simple. Each year, efficiency improvements allow us to drive further, light more homes, manufacture more goods than the year before—all on a fixed energy income. Fortunately, market forces favor greater efficiency, so that we have enjoyed the fruits of a constant drum-beat toward higher efficiency over time. To the extent that we could continue this trick forever, we could maintain economic growth indefinitely, and all the institutions that are built around it: investment, loans, banks, etc.
But how many times can we pull a rabbit out of the efficiency hat? Barring perpetual motion machines (fantasy) and heat pumps (real; discussed below), we must always settle for an efficiency less than 100%. This puts a bound on how much gain we might expect to accomplish. For instance, if some device starts out at 50% efficiency, there is no way to squeeze more than a factor of two out of its performance. To get a handle on how much there is to gain, and how fast we might expect to saturate, let’s look at what we have accomplished historically.
THE GOOD, THE BAD, AND THE AVERAGE
A few shining examples stand out. Refrigerators use half the energy that they did about 35 years ago. The family car that today gets 40 miles per gallon achieved half this value in the 1970s. Both cases point to a 2% per year improvement (doubling time of 35 years).
Not everything has seen such impressive improvements. The Boeing 747 established a standard for air travel efficiency in 1970 that has hardly budged since. Electric motors, pumps, battery charging, hydroelectric power, electricity transmission—among many other things—operate at near perfect efficiency (often around 90%). Power plants that run on coal, natural gas, or nuclear reactions have seen only marginal gains in efficiency in the last 35 years: well less than 1% per year.
Taken as a whole, we might then loosely guess that overall efficiency has improved by about 1% per year over the past few decades—being bounded by 0% and 2%. This corresponds to a doubling time of 70 years. How many more doublings might we expect?
POTENTIAL GAINS AND LIMITS
Many of our large-scale applications of energy use heat engines to extract useful energy out of combustion or other sources of heat. These include fossil-fuel and nuclear power plants operating at 30–40% efficiency, and automobiles operating at 15–25% efficiency. Heat engines therefore account for about two-thirds of the total energy use in the U.S. (27% in transportation, 36% in electricity production, a bit in industry). The requirement that the entropy of a closed system may never decrease sets a hard limit on how much efficiency one might physically achieve in any heat engine. The maximum theoretical efficiency, in percent, is given by 100×(Th−Tc)/Th, where Th and Tc denote absolute temperatures (in Kelvin) of the hot part of the heat engine and the “cold” environment, respectively. Engineering limitations prevent realization of the theoretical maximum. But in any case, a heat engine operating between 1500 K (hot for a power plant) and room temperature could at most achieve 80% efficiency. So a factor of two improvement is probably impractical in this dominant domain.
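The Carnot bound quoted here is simple enough to encode directly (temperatures are the text's: a 1500 K hot side against ~300 K room temperature):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat convertible to work between two temperatures (K)."""
    return (t_hot - t_cold) / t_hot

print(carnot_efficiency(1500, 300))   # 0.8: the 80% ceiling quoted in the text
```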
The reverse of a heat engine is a heat pump, which uses a little bit of energy to move a lot. Air conditioners, refrigerators, and some home heating systems use this technique. Somewhat magically, moving a certain quantity of heat energy can require less than that amount of energy to accomplish. For cooling applications, the thermodynamic limit to efficiency is given by 100×Tc/(Th−Tc), again expressing temperatures on an absolute scale. A refrigerator (usually a freezer with a piggybacked refrigerator) operating at room temperature can theoretically achieve 1100% efficiency. The Energy Efficiency Ratio (EER), which is displayed for most new cooling devices, is theoretically bounded by 3.4×Tc/(Th−Tc), which in this example is 36. Today’s refrigerators achieve EER values of about 12, so that only a factor of three remains. The same can be said for the Coefficient of Performance (COP) for heat pumps, which is bounded by Th/(Th−Tc). Like refrigerators, these are performing within a factor of 2–3 of theoretical limits.
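The same limits for heat pumps, using assumed temperatures of roughly 268 K inside a freezer and 293 K for the room (the text does not state its temperatures, but these reproduce its ~1100% and EER ≈ 36 figures):

```python
def cooling_cop(t_cold, t_hot):
    """Thermodynamic limit on heat moved per unit work, cooling mode."""
    return t_cold / (t_hot - t_cold)

def heating_cop(t_cold, t_hot):
    """Thermodynamic limit for a heat pump delivering heat at t_hot."""
    return t_hot / (t_hot - t_cold)

cop = cooling_cop(268.0, 293.0)    # ≈ 10.7, i.e. roughly 1100% "efficiency"
eer = 3.4 * cop                    # ≈ 36, the theoretical EER bound in the text
print(cop, eer, heating_cop(268.0, 293.0))
```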
Lighting has seen dramatic improvements in recent decades, going from incandescent performances of 14 lumens per Watt to compact fluorescent efficacies that are four times better, at 50–60 lumens per Watt. LED lighting currently achieves 60–80 lumens per Watt. An ideal light source emitting a spectrum we would call white (sharing the exact spectrum of daylight) but contrived to have no emission outside our visible range would have a luminous efficacy of 251 lm/W. The best LEDs are now within a factor of three of this hard limit.
The efficiency of gasoline-powered cars cannot easily improve by any large factor (see heat engines, above), but the effective efficiency can be improved significantly by transitioning to electric drive trains. While a car getting 40 m.p.g. may have a 20% efficient gasoline engine, a battery-powered drive train might achieve something like 70% efficiency (85% efficiency in charging batteries, 85% in driving the electric motor). The factor of 3.5 improvement in efficiency suggests effective mileage performance of 140 m.p.g. One caution, however: if the input electricity comes from a fossil-fuel power plant operating at 40% efficiency and 90% transmission efficiency, the effective fossil-to-locomotion efficiency is reduced to 25%, and is not such a significant step.
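Chaining the stated efficiencies (my arithmetic; note that 0.85 × 0.85 ≈ 0.72 is rounded down to 70% in the text):

```python
engine_eff = 0.20                  # gasoline engine efficiency, from the text
ev_chain = 0.85 * 0.85             # battery charging x electric motor ≈ 0.72
boost = 0.70 / engine_eff          # text rounds the chain to 70%, giving 3.5x
mpg_equiv = 40 * boost             # 140 m.p.g. effective mileage
fossil_chain = 0.40 * 0.90 * 0.70  # power plant x transmission x drive train ≈ 0.25
print(mpg_equiv, fossil_chain)
```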
As mentioned above, a broad swath of common devices already operate at close to perfect efficiency. Electrical devices in particular can be quite impressively frugal with energy. That which isn’t used constructively appears as waste heat, which is one way to quickly assess efficiency for devices that do not have heat generation as a goal: power plants are hot; car engines are hot; incandescent lights are hot. On the flip side, hydroelectric plants stay cool, LED lights are cool, and a car battery being charged stays cool.
SUMMING IT UP
Given that two-thirds of our energy resource is burned in heat engines, and that these cannot improve much more than a factor of two, more significant gains elsewhere are diminished in value. For instance, replacing the 10% of our energy budget spent on direct heat (e.g., in furnaces and hot water heaters) with heat pumps operating at their maximum theoretical efficiency effectively replaces a 10% expenditure with a 1% expenditure. A factor of ten sounds like a fantastic improvement, but the overall efficiency improvement in society is only 9%. Likewise with light bulb replacement: large gains in a small sector. We should still pursue these efficiency improvements with vigor, but we should not expect this gift to provide a form of unlimited growth.
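The 9% figure comes from simple bookkeeping; here is the toy version (the factor-of-ten heat-pump gain is an assumed round number standing in for maximum-theoretical performance):

```python
budget = 100.0            # society's energy use, arbitrary units
direct_heat = 10.0        # the 10% slice spent on furnaces and water heaters
pump_gain = 10.0          # assumed factor-of-ten gain at theoretical heat-pump limits
new_budget = budget - direct_heat + direct_heat / pump_gain
saving = 1 - new_budget / budget
print(saving)             # 0.09: a ten-fold sector gain is only a 9% overall saving
```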
On balance, the most we might expect to achieve is a factor of two net efficiency increase before theoretical limits and engineering realities clamp down. At the present 1% overall rate, this means we might expect to run out of gain this century. Some might quibble about whether the factor of two is too pessimistic, and might prefer a factor of 3 or even 4 efficiency gain. Such modifications may change the timescale of saturation, but not the ultimate result.
Faith in Technology
We have developed an unshakable faith in technology to address our problems. Its track record is most impressive. I myself can sit at my dining room table in California and direct a laser in New Mexico to launch pulses at the astronaut-placed reflectors on the moon and measure the distance to one millimeter. I built much of the system, so I am no stranger to technology, and embrace the possibilities it offers. And we’ve seen the future in our movies—it’s almost palpably real. But we have to be careful about faith, and periodically reexamine its validity or possible limits. Following are a few key examples.
What About Substitutions?
The previous discussion is rooted in the technologies of today: coal-fired power plants, for goodness sake! Any self-respecting analysis of the long term future should recognize the near-certainty that tomorrow’s solutions will look different than today’s. We may not even have a name yet for the energy source of the future!
First, I refer you to the previous post: the continued growth of any energy technology—if consumed on the planet—will bring us to a boil. Beyond that, we hit astrophysically nonsensical limits within centuries. So energy scale must cease growth. Likewise, efficiency limits will prevent us from increasing our effective energy available without bound.
Second, you might wonder: can’t we consider solar, wind and other renewables to be more efficient than fossil fuel power, since the energy has free delivery? It’s true that unlike the business model for the printer (cheap printer, expensive ink cartridges that ruin you in the end), the substantial cost for renewables is in the initial investment, with little in the way of consumables. But fossil fuels—although a limited-time offer—are also a free gift of nature. We do have to put effort into retrieving them (delivery not free), although far less than the benefit they deliver. The important metric on the energy/efficiency front is energy return on energy invested (EROEI). Fossil fuels have enjoyed EROEI values typically in the range of 20:1 to 100:1, meaning that less than 5% of the eventual benefit must be invested up front. Solar and wind are less, at 10:1 and 18:1, respectively. These technologies would avoid wasting a majority of the energy in heat engines, but the lower EROEI means it’s less of a freebie than the current juice. And yes, the 15% efficiency of many solar panels does mean that most of the remaining 85% goes to heating the dark panel.
What About Accomplishing the Same Tasks with Less?
One route to coping with a fixed energy income is to invent new devices or techniques that accomplish the same tasks using less energy, rather than incrementally improve on the efficiency of current devices. This works marvelously in some areas (e.g., generational changes in computers, cell phones, shift to online banking/news).
But some things are hard to shave down substantially. Global transportation means pushing through air or water over vast distances that will not shrink. Cooking means heating meal-sized portions of food and water. Heating a home against the winter cold involves a certain amount of thermal energy for a fixed-size home. A hot shower requires a certain amount of energy to heat a sufficient volume of water. Can all of these things be done more efficiently with better aero/hydrodynamics or traveling more slowly; foods requiring less heat to cook; insulation and heat pumps in homes; and taking showers using less water? Absolutely. Can this go on forever to maintain growth? No. As long as these physically-bounded activities comprise a finite portion of our portfolio, no amount of gadget refinement will allow indefinite economic growth. If it did, eventually economic activity would be wholly dominated by us “servicing” each other, and not the physical “stuff.”
What About Paying More to Use Less?
Owners of solar panels or Prius cars have elected to plunk down a significant amount of money to consume fewer resources. Sometimes these decisions are based on more than straight dollars and cents calculations, in that the payback can be very long term and may not be competitive against opportunity cost. Could social conscientiousness become fashionable enough to drive overall economic growth? I suppose it’s possible, but generally most people are only interested in this when the cost of energy is high to start with. Below, we’ll see that if the economy continues its growth trend after energy use flattens, the cost of energy becomes negligibly small—deflating the incentive to pay more for less.
The Unphysical Economy
In a future world where energy growth has ceased, and efficiency has been squeezed to a practical limit, can we still expect to grow our economy through innovation, technology, and services? One way to approach the problem is to demand that we maintain 5% economic growth over the long term, and see what fraction of economic activity has to come from the non-energy-demanding sector. Of course all economic activity requires some energy, so by “non-energy” or “unphysical,” I mean those activities that require minimal energy inputs and approach the economist’s dream of “decoupling.”
We start by setting energy to flatten out as a logistic function (standard S-curve in population studies), with an inflection point at the year 2000 (halfway along). We then let efficiency boost our effective energy at the present rate of 1% gain per year, ultimately saturating at a factor of two. The figure below provides a toy example of how this might look.
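A minimal sketch of this toy model, with assumed round parameters (logistic width, saturation factor) chosen only to reproduce the qualitative shape, not the figure's exact curves:

```python
import math

def logistic_energy(year, width=30.0, mid=2000):
    """Raw energy relative to the year-2000 level; S-curve flattening at 2x (toy)."""
    return 2.0 / (1.0 + math.exp(-(year - mid) / width))

def efficiency(year, rate=0.01, cap=2.0):
    """1% annual efficiency gain from 2000, saturating at a factor of two."""
    return min(cap, (1.0 + rate) ** max(0, year - 2000))

def other_fraction(year):
    """Share of a mandated 5%-growth economy that must be 'unphysical',
    assuming the energy-tied sector scales with effective energy."""
    effective_energy = logistic_energy(year) * efficiency(year)
    economy = 1.05 ** (year - 2000)
    return 1.0 - effective_energy / economy

for y in (2050, 2100, 2200):
    print(y, round(other_fraction(y), 3))
```

Whatever the exact parameters, the "other" fraction marches toward 100%: a 5% economy quickly dwarfs any energy supply that has stopped growing.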
The timescale is not the important feature of the figure. The important result is that trying to maintain a growth economy in a world of tapering raw energy growth (perhaps accompanied by leveling population) and diminishing gains from efficiency improvements would require the “other” category of activity to eventually dominate the economy. This would mean that an increasingly small fraction of economic activity would depend heavily on energy, so that food production, manufacturing, transportation, etc. would be relegated to economic insignificance. Activities like selling and buying existing houses, financial transactions, innovations (including new ways to move money around), fashion, and psychotherapy will be effectively all that’s left. Consequently, the price of food, energy, and manufacturing would drop to negligible levels relative to the fluffy stuff. And is this realistic—that a vital resource at its physical limit gets arbitrarily cheap? Bizarre.
This scenario has many problems. For instance, if food production shrinks to 1% of our economy, while staying at a comparable absolute scale as it is today (we must eat, after all), then food is effectively very cheap relative to the paychecks that let us enjoy the fruits of the broader economy. This would mean that farmers’ wages would sink far lower than they are today relative to other members of society, so they could not enjoy the innovations and improvements the rest of us can pay for. Subsidies, donations, or any other mechanism to compensate farmers more handsomely would simply undercut the “other” economy, preventing it from swelling to arbitrary size—and thus limiting growth.
Another way to put it is that since we all must eat, and a certain, finite fraction of our population must be engaged in the production of food, the price of food cannot sink to arbitrarily low levels. The economy is rooted in a physical world that has historically been joined at the hip to energy use (through food production, manufacturing, transport of goods in the global economy). It is fantastical to think that an economy can unmoor itself from its physical underpinnings and become dominated by activities unrelated to energy, food, and manufacturing constraints.
I’m not claiming that certain industries will not grow: there will always be growth in some sector. But net growth will be constrained. Winners will not outpace the losers. Nor am I claiming that some economic activities cannot exist virtually independent of energy. We can point to plenty of examples of this today. But these things can’t grow to 90%, then 99%, then 99.9%, etc. of the total economic activity—as would be mandated if economic growth is to continue apace.
Where Does this Leave Us?
Together with the last post, I have used physical analysis to argue that sustained economic growth in the long term is fantastical. Maybe for some, this is stating the obvious. After all, Adam Smith imagined a 200-year phase of economic growth followed by a steady state. But our mentality is currently centered on growth. Our economic systems rely on growth for investment, loans, and interest to make any sense. If we don’t deliberately put ourselves onto a steady state trajectory, we risk a complete and unchoreographed collapse of our economic institutions.
Admittedly, the argument that economic growth will stop is not as direct a result of physics as is the argument that physical growth will stop, and as such represents a stretch outside my usual comfort zone. But besides physical limits, I think we must also apply notions of common sense and human psychology. The artificial world that must be envisioned to keep economic growth alive in the face of physical limits strikes me as preposterous and untenable. It would be an existence far removed from demonstrated modes of human economic activity. Not everyone would want to participate in this whimsical society, preferring instead to spend their puffy paychecks on constrained physical goods and energy (which is now dirt cheap, by the way, so a few individuals could easily afford to own all of it!).
Recognizing the need to ultimately transition to a non-growth economy, I am personally disconcerted by the fact that we lack a tested economic system based on steady-state conditions. I would like to take a conservative, low-risk approach to the future and smartly place ourselves on a sustainable trajectory. There are well-developed steady-state economic models, pioneered by Herman Daly and others. There are even stepwise plans to transition our economy into a steady-state. But not one of those steps will be taken if people (who elect politicians) do not crave this result. The only way people will crave this result is if they understand (or experience) the impossibility of continued growth and the consequences of not acting soon enough. I hope we can collectively be smart enough to make this transition.
Note: A later post on the meaning of sustainability is a natural follow-on to this post.
Acknowledgments: Thanks to Brian Pierini for his review and comments.
Sustainable Means Bunkty to Me
Posted on 2011-10-05
What? Don’t know what bunkty means? Now you know how I feel about the word “sustainable.” My paper towels separate into smaller segments than they once did. It’s sustainable! These potato chips arrive in a box that says SUSTAINABLE in big letters on the side. I’m eating green! When I’m in a hotel, I hang the towel back up rather than throw it on the floor (would I ever do this anyway?) and the placard says I’m being sustainable. Can it be that easy? I claim that not one among our host of 7 billion really knows what our world would look like if we lived in a truly sustainable fashion. Let’s try to come to terms with what it might mean.
I think most would agree that the rapid depletion we currently witness in natural resources and services, climate stability, water availability, soil quality, and fisheries—to name a few—suggests that we do not live sustainably at present. We cannot expect to keep up our current practices with 7 billion people in this world without some major changes.
Sustainability, in Numbers
I have made the case in the past that growth—either in physical measures like population, energy use, etc., or in economic terms—cannot continue indefinitely in our finite world. This post rounds out the trilogy.
If we think about the fact that growth must one day end, we realize that an ultimate steady state would tend to reduce income inequalities. Given growth, we have little trouble rationalizing inequality, since those at the bottom have growth opportunity ahead of them. As long as the plight of the poor improves with time, the well-heeled among us can feel justified in living large. But without a growth argument, it would become morally awkward to perpetuate the inequality of two people contributing comparable time and energy to humanity’s steady-state upkeep.
Our dream is that the poor of the world can improve their standard of living toward first-world norms. The U.S. uses about 25% of the world’s annual energy resource (which I will use as a proxy for standard of living) while harboring about 5% of the population. Thus the average U.S. citizen uses energy at five times the rate of the average global citizen. For everyone to get where we are today in the U.S. would require a five-fold increase in the total energy expenditure of the planet. Make that 7-fold allowing the population to swell to 10 billion. And even that requires a freeze in growth at the top end (the U.S.). Since that’s not about to happen—at least not voluntarily—we should call it a ten-fold increase for everyone to get what they want.
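The scaling arithmetic in this paragraph is easy to verify. The snippet below is a sketch using the article's own round numbers (25% of world energy, 5% of world population, growth to 10 billion), which are illustrative figures rather than precise data:

```python
# Back-of-envelope check of the scaling argument, using the article's
# round numbers (assumptions, not precise statistics).
us_energy_share = 0.25       # U.S. share of world energy use
us_pop_share = 0.05          # U.S. share of world population
world_pop_now = 7e9
world_pop_future = 10e9

# Per-capita energy ratio: average American vs. average global citizen
per_capita_ratio = us_energy_share / us_pop_share
print(per_capita_ratio)      # 5.0 -> "five times the rate"

# Everyone at U.S. per-capita levels with today's population: 5x total.
# Allowing the population to swell to 10 billion:
scale_future = per_capita_ratio * world_pop_future / world_pop_now
print(round(scale_future, 1))  # ~7.1 -> "make that 7-fold"
```

The jump to "ten-fold" in the text is then the additional allowance for continued growth at the top end, not a computed quantity.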
If we are not sustainable today, how could we possibly achieve sustainability under the burden of a ten-fold increase in scale?
You may object that Americans don’t eat five times more food than the average Earthling today. True enough, but we eat a meat-rich diet that consumes much more land, grain, and energy than would a simpler diet by something close to that same factor. You might also object that Americans use energy at an obscene rate, and that this should not be the goal for the rest of the world. I won’t argue. This post is aimed at those who see nothing wrong with such an approach.
Yeah, But… and Other Clever Dodges
Might we not improve our efficiency in tandem with development so that standards of living could improve at a constant energy/impact? This is the subtext behind many of today’s sustainability drives. We just need to do things better and smarter and we’ll be fine.
I refer you to the post on the necessary end to economic growth, in which I demonstrate that efficiency improvements might gain us only a factor of two (taking about 70 years to do this at historical rates of efficiency improvement of 1% per year). And we have to watch the pernicious Jevons’ paradox, whereby efficiency improvements in the past have tended to lead to a greater expenditure of energy than before the “improvements.” But I’m feeling generous, so I’ll knock our prescribed ten-fold increase down to a factor of five to allow for efficiency improvements and other oversights.
So in the absence of anyone being able to define how we turn today’s unsustainable practices into sustainable ones at five times the present scale, you’ll forgive me if I remain skeptical. If we could demonstrate the ability to seize control of the current scale and live sustainably today, I might grant that we have some hope of managing a similar trick at five times that scale. Instead, we intend to race headlong into a bigger tomorrow without proving ourselves capable of handling today’s world.
A child who wants a pony might first be asked to demonstrate that he or she can feed and take care of a gerbil, then graduate to a kitten, a puppy, a goat, and finally a pony. Currently, we’re not taking care of our gerbil, so we have not demonstrated that we deserve a pony. Perhaps we also don’t deserve to be brandishing the term “sustainable” for chicken-scratch contributions.
Sure, hanging the towel back up on a multi-day hotel visit is definitely a step in the right direction, and I’m all for it. But if we don’t focus on the big picture, these little acts are mere distractions.
“Yeah, but don’t they add up?” Can a bunch of 1% solutions produce a 100% change (or the 500% change we seek)? Maybe in the same way that if you strapped enough gerbils together, you might get something you can ride like a pony. I think I just invented a new sport.
Blowing Through Our Inheritance
All of this would bother me less if we were at least living within our budget presently. But we are tearing through one-time resources like mineral deposits, aquifers, and the big one: fossil fuels. The easy stuff is grabbed first, and it gets harder and harder as time passes.
In fossil fuels, we found the Earth’s solar battery—charged over millions of years—and we promptly hooked up Las Vegas to help us burn through the resource in mere centuries. In fact, our social and political structures have typically worked to maximize the rate of growth, which has the effect of blowing through resources as fast as may be managed—albeit with an eye toward practical efficiencies of the day. I have always found it compelling to look at a graph of fossil fuel use over a very long time span. This puts it in perspective as a towering blip in the human experience. It’s only schematic, but in case you’re interested, I put the peak at 2050, with a width at the half-max point of 235 years (Gaussian σ = 100 years).
We know that the era before fossil fuels used firewood, animals, and human (often slave) labor as sources of energy. Some supplemental energy came from wind, water, and animal fats as well. In most cases, this arrangement was by definition sustainable in the true sense: living off of the yearly energy income provided by the sun. Even then, deforestation and hunting some animals to extinction (or to scarcity) still happened. The symmetry inherent in the graph raises the question of whether this same existence lies in store for us on the right-hand side. Our accumulated scientific knowledge has the potential to break that symmetry, but only if coupled with collective wisdom. The future is not yet written, and may not care what we imagine could happen.
Meanwhile, we sit roughly at the position of the star in the figure. We’re living large and feeling pretty heady about our cleverness and the promise of the future. Up, up, up! That’s the world we’ve known. Surely it will always be so, now that we finally got smart.
I love the metaphor—expressed in the documentary, The Corporation—that early attempts at flight always failed because the flying contraptions were not built on the aerodynamic principles of sustained flight. Nonetheless, the pilot wannabe would launch off the cliff and momentarily feel the wind in their hair, and indeed be airborne for some time—feeling magnificent. But then came the inevitable crash. Likewise, a civilization that is not built on a foundation of sustainable practices is doomed to over-reach and fail. Yet today we feel the wind in our hair, and it feels pretty good. It makes us think we can do anything, and that we’re too clever for words. But we have the same brains we had a few thousand years ago. What’s changed is a windfall of surplus energy.
So the big question is: can we transition to a truly sustainable lifestyle for the long haul at an energy level akin to what we enjoy today—or even several times higher? No one knows the answer, and thus a true understanding of “sustainable” remains elusive. The following graph schematically shows what a level-off at the energy scale of peak fossil fuels (this century) would look like.
The “go green” function is a logistic curve with an inflection at 1965: very similar to the best-fit logistic for the U.S. energy historical data.
This figure merely illustrates that we have to find a full-scale replacement for fossil fuels in a relatively short period, and sustain it indefinitely. Today, only 15% of our energy comes from non-fossil origins—almost all of it hydroelectric and nuclear. Only hydroelectric is renewable (ignoring the detail that dams silt up), but all the main prizes have been taken, so that this sector generally cannot be expanded by even a factor of two. Uranium limits nuclear fission to the short term (< 100 years), unless proliferation-prone breeder programs are adopted, or fusion pans out in time to make a difference. Of course solar and wind could become more prominent. But all these are primarily useful for electricity, and tend to be expensive or difficult options. The freebie days will end, and we’ll have to work harder to satisfy our energy demand year by year. Future Do the Math posts will dissect the possibilities in greater detail.
We have no historical precedent to tell us whether we can pull off the sustainable right-hand-side of the graph. Will we pull together a technological solution, or disappoint ourselves with a return to muscle power and firewood? It seems a preposterous question, and many readers are now steaming with indignation—much like we might expect a kid to throw a tantrum when told that they can’t have a pony. Are such readers perturbed because they have a crystal-ball vision of the unwritten future and how things will play out? I’ll pretend I’m from Missouri (borders on true), and demand: “show me!”
But I’m not done yanking chains yet. Leveling off near today’s global rate of energy use spells an eventual decline in the U.S. standard of living by a large factor. Remember our premise at the beginning: if the goal is to pull up the world population to American standards of living, we need something more like a factor of five increase in scale. And that’s after taking a factor of two haircut to account for efficiency improvements achieved in tandem. What does this look like?
I fittingly pick a “blue sky” color to represent this state of affairs, again using a logistic function, this time having an inflection in 2110. If anyone thought the green portion looked hard, the blue piece is a doozy. It makes the remarkable fossil fuel age look like some insignificant anomaly. Many people react by saying “exactly so,” believing that fossil fuels are merely the kick-start for something bigger, something grander. Better than a pony, even. We’ll be free of the infantile shackles of the Earth and expand into the limitless void. We’ve seen it in numerous TV shows and movies—what more evidence do we need? I’ll have to address this issue in a future post. For now, let’s focus on the here and now, and the serious challenges this century hurls at us as we are weaned from the lifeblood that started us on our industrial tear.
My skepticism that we can make it to the 5× sustainable future has led me to anticipate that Americans will have to reduce their energy, material, and dietary consumption. I have reacted by modifying my own behavior, and in so doing have proven to myself that the challenge is one that can be met at a personal level while maintaining a less-than-primitive lifestyle. Choices in diet, indoor temperature, transportation, hot water use, household appliances, etc. have reduced my home impact by a factor of four or more, and this gives me great hope. But I am cheating by riding on top of an energy-rich society. It is not as clear that an entire civilization can ratchet down by a similar factor and maintain today’s basic functionality.
Why Now is Special
Energy is not the only dimension to this problem. From a purely energetic point of view, we have enough solar input to allow sustained energy use at high rates (though not sustained growth). That’s the good news. But we would still strain the throughput of materials harvested from the planet. Pollution will continue to pile up; arable land will be lost to erosion, desertification, salinity increase, and exhaustion of ancient aquifers; fisheries will collapse; important metals will become ever harder to find and extract; we will learn too late that species driven to extinction by climate change and other human impositions are actually vital to our well-being. No one knows for sure what the ultimate carrying capacity of the Earth is: many estimates indicate that we have already exceeded it. And it is distressing that we do not have a plan for living within our means at today’s level of industrial activity, let alone a 5× expansion.
The basic point is that we are entering uncharted territory. This toothless statement has been true at every point in history. But I believe that this century is the one in which we must confront the thorniest issue ever presented to the human race. This moment is special because we have dramatically built up our population, technology, science, medicine, and democratic institutions as a direct result of vast amounts of surplus energy stemming from a one-time resource. The fossil fuel experience has made us dangerously confident about our cleverness and dominance over nature. What makes this century special, then, is that we will have to cope with a diminishing supply rate of the resource that has been of paramount importance to our high-tech existence.
Some will point out that folks 200 years ago could never have predicted the marvels of today, and that we should adopt a similar humility about the future. Fair point. We should also not assume that we won’t be protecting our food supplies by clubbing each other over the head with half-gnawed bones 200 years from now. Who, at the height of the fossil fuel age, could have predicted such a reversal of fate?! Did you see that coming? I’m all about exercising humility in our prognostications of the future, but this cuts both ways. Currently, we see an asymmetry in the glorious vs. disastrous prediction score. I’m merely providing counterbalance by pointing out that our recent, rapid ascent provides a compelling reason as to why this asymmetric “limitless” outlook might be expected at this moment in history (look at the star on the fossil fuel graph).
We talk with confidence about the pony we will one day own (eventually equipped with warp drive upgrade), but our gerbil is meanwhile gasping under our neglect. And the rush we have experienced on our fossil fuel binge has made us a bit loopy. Only by looking at the sober possibility that we risk reverting to a low-tech existence after the fossil fuels are spent can we make honest plans for our future. Those honest plans may well involve a substantial ratcheting-down of the lifestyle to which we have become accustomed. And that same honesty suggests refraining from using the term “sustainable” until we better understand what it actually means. I’m more attracted to the words: possible, practical, preservation, and price—oh—and pony.
Exponential Economist Meets Finite Physicist
Posted on 2012-04-10
Some while back, I found myself sitting next to an accomplished economics professor at a dinner event. Shortly after pleasantries, I said to him, “economic growth cannot continue indefinitely,” just to see where things would go. It was a lively and informative conversation. I was somewhat alarmed by the disconnect between economic theory and physical constraints—not for the first time, but here it was up-close and personal. Though my memory is not keen enough to recount our conversation verbatim, I thought I would at least try to capture the key points and convey the essence of the tennis match—with some entertainment value thrown in.
Cast of characters: Physicist, played by me; Economist, played by an established economics professor from a prestigious institution. Scene: banquet dinner, played in four acts (courses).
Note: because I have a better retention of my own thoughts than those of my conversational companion, this recreation is lopsided to represent my own points/words. So while it may look like a physicist-dominated conversation, this is more an artifact of my own recall capabilities. I also should say that the other people at our table were not paying attention to our conversation, so I don’t know what makes me think this will be interesting to readers if it wasn’t even interesting enough to others at the table! But here goes…
Act One: Bread and Butter
Physicist: Hi, I’m Tom. I’m a physicist.
Economist: Hi Tom, I’m [ahem..cough]. I’m an economist.
Physicist: Hey, that’s great. I’ve been thinking a bit about growth and want to run an idea by you. I claim that economic growth cannot continue indefinitely.
Economist: [chokes on bread crumb] Did I hear you right? Did you say that growth cannot continue forever?
Physicist: That’s right. I think physical limits assert themselves.
Economist: Well sure, nothing truly lasts forever. The sun, for instance, will not burn forever. On the billions-of-years timescale, things come to an end.
Physicist: Granted, but I’m talking about a more immediate timescale, here on Earth. Earth’s physical resources—particularly energy—are limited and may prohibit continued growth within centuries, or possibly much shorter depending on the choices we make. There are thermodynamic issues as well.
Economist: I don’t think energy will ever be a limiting factor to economic growth. Sure, conventional fossil fuels are finite. But we can substitute non-conventional resources like tar sands, oil shale, shale gas, etc. By the time these run out, we’ll likely have built up a renewable infrastructure of wind, solar, and geothermal energy—plus next-generation nuclear fission and potentially nuclear fusion. And there are likely energy technologies we cannot yet fathom in the farther future.
Physicist: Sure, those things could happen, and I hope they do at some non-trivial scale. But let’s look at the physical implications of the energy scale expanding into the future. So what’s a typical rate of annual energy growth over the last few centuries?
Economist: I would guess a few percent. Less than 5%, but at least 2%, I should think.
Physicist: Right, if you plot the U.S. energy consumption in all forms from 1650 until now, you see a phenomenally faithful exponential at about 3% per year over that whole span. The situation for the whole world is similar. So how long do you think we might be able to continue this trend?
Economist: Well, let’s see. A 3% growth rate means a doubling time of something like 23 years. So each century might see something like a 15–20× increase. I see where you’re going. A few more centuries like that would perhaps be absurd. But don’t forget that population was increasing during centuries past—the period on which you base your growth rate. Population will stop growing before more centuries roll by.
Physicist: True enough. So we would likely agree that energy growth will not continue indefinitely. But two points before we continue: First, I’ll just mention that energy growth has far outstripped population growth, so that per-capita energy use has surged dramatically over time—our energy lives today are far richer than those of our great-great-grandparents a century ago [economist nods]. So even if population stabilizes, we are accustomed to per-capita energy growth: total energy would have to continue growing to maintain such a trend [another nod].
Second, thermodynamic limits impose a cap to energy growth lest we cook ourselves. I’m not talking about global warming, CO2 build-up, etc. I’m talking about radiating the spent energy into space. I assume you’re happy to confine our conversation to Earth, foregoing the spectre of an exodus to space, colonizing planets, living the Star Trek life, etc.
Economist: More than happy to keep our discussion grounded to Earth.
Physicist: [sigh of relief: not a space cadet] Alright, the Earth has only one mechanism for releasing heat to space, and that’s via (infrared) radiation. We understand the phenomenon perfectly well, and can predict the surface temperature of the planet as a function of how much energy the human race produces. The upshot is that at a 2.3% growth rate (conveniently chosen to represent a 10× increase every century), we would reach boiling temperature in about 400 years. [Pained expression from economist.] And this statement is independent of technology. Even if we don’t have a name for the energy source yet, as long as it obeys thermodynamics, we cook ourselves with perpetual energy increase.
Economist: That’s a striking result. Could not technology pipe or beam the heat elsewhere, rather than relying on thermal radiation?
Physicist: Well, we could (and do, somewhat) beam non-thermal radiation into space, like light, lasers, radio waves, etc. But the problem is that these “sources” are forms of high-grade, low-entropy energy. Instead, we’re talking about getting rid of the waste heat from all the processes by which we use energy. This energy is thermal in nature. We might be able to scoop up some of this to do useful “work,” but at very low thermodynamic efficiency. If you want to use high-grade energy in the first place, having high-entropy waste heat is pretty inescapable.
Economist: [furrowed brow] Okay, but I still think our path can easily accommodate at least a steady energy profile. We’ll use it more efficiently and for new pursuits that continue to support growth.
Physicist: Before we tackle that, we’re too close to an astounding point for me to leave it unspoken. At that 2.3% growth rate, we would be using energy at a rate corresponding to the total solar input striking Earth in a little over 400 years. We would be consuming at a rate comparable to the output of the entire sun 1400 years from now. By 2500 years, we would use energy at the rate of the entire Milky Way galaxy—100 billion stars! I think you can see the absurdity of continued energy growth. 2500 years is not that long, from a historical perspective. We know what we were doing 2500 years ago. I think I know what we’re not going to be doing 2500 years hence.
Economist: That’s really remarkable—I appreciate the detour. You said about 1400 years to reach parity with solar output?
Physicist: Right. And you can see the thermodynamic point in this scenario as well. If we tried to generate energy at a rate commensurate with that of the Sun in 1400 years, and did this on Earth, physics demands that the surface of the Earth must be hotter than the (much larger) surface of the Sun. Just like 100 W from a light bulb results in a much hotter surface than the same 100 W you and I generate via metabolism, spread out across a much larger surface area.
Economist: I see. That does make sense.
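The numbers traded in Act One can be checked in a few lines. This is a hedged sketch: the starting power (~18 TW), the 240 W/m² solar flux, and the no-greenhouse boiling estimate are round-number assumptions, not the article's exact inputs:

```python
import math

# Round-number assumptions for checking the Act One claims.
P0 = 1.8e13            # current human power use, ~18 TW (assumed)
r = 0.023              # 2.3%/yr growth, i.e. 10x per century

# Doubling time at the historical ~3% rate mentioned earlier:
print(round(math.log(2) / math.log(1.03)))   # 23 -> "something like 23 years"

def years_until(target_power):
    """Years of compound growth until use reaches target_power (watts)."""
    return math.log(target_power / P0) / math.log(1 + r)

solar_input = 1.74e17          # total sunlight striking Earth, W
sun_output = 3.85e26           # solar luminosity, W
galaxy = 100e9 * sun_output    # ~100 billion sun-like stars

print(round(years_until(solar_input)))  # 404 -> "a little over 400 years"
print(round(years_until(sun_output)))   # 1350 -> "in 1400 years"
print(round(years_until(galaxy)))       # 2464 -> "by 2500 years"

# Boiling-point estimate: the surface must radiate sigma*T^4 per square
# meter; subtract the ~240 W/m^2 solar share to get the human contribution.
sigma_sb = 5.67e-8             # Stefan-Boltzmann constant, W/m^2/K^4
earth_area = 5.1e14            # Earth's surface area, m^2
human_flux = sigma_sb * 373**4 - 240   # flux needed at 100 C, no greenhouse
print(round(years_until(human_flux * earth_area)))  # 444 -> "about 400 years"
```

The boiling figure lands somewhat past 400 years with these particular inputs; the conclusion is insensitive to the exact constants, since exponential growth makes the answer depend only logarithmically on them.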
Act Two: Salad
Economist: So I’m as convinced as I need to be that growth in raw energy use is a limited proposition—that we must one day at the very least stabilize to a roughly constant yearly expenditure. At least I’m willing to accept that as a starting point for discussing the long term prospects for economic growth. But coming back to your first statement, I don’t see that this threatens the indefinite continuance of economic growth.
For one thing, we can keep energy use fixed and still do more with it in each passing year via efficiency improvements. Innovations bring new ideas to the market, spurring investment, market demand, etc. These are things that will not run dry. We have plenty of examples of fundamentally important resources in decline, only to be substituted or rendered obsolete by innovations in another direction.
Physicist: Yes, all these things happen, and will continue at some level. But I am not convinced that they represent limitless resources.
Economist: Do you think ingenuity has a limit—that the human mind itself is only so capable? That could be true, but we can’t credibly predict how close we might be to such a limit.
Physicist: That’s not really what I have in mind. Let’s take efficiency first. It is true that, over time, cars get better mileage, refrigerators use less energy, buildings are built more smartly to conserve energy, etc. The best examples tend to see factor-of-two improvements on a 35 year timeframe, translating to 2% per year. But many things are already as efficient as we can expect them to be. Electric motors are a good example, at 90% efficiency. It will always take 4184 Joules to heat a liter of water one degree Celsius. In the middle range, we have giant consumers of energy—like power plants—improving much more slowly, at 1% per year or less. And these middling things tend to be something like 30% efficient. How many more “doublings” are possible? If many of our devices were 0.01% efficient, I would be more enthusiastic about centuries of efficiency-based growth ahead of us. But we may only have one more doubling in us, taking less than a century to realize.
Economist: Okay, point taken. But there is more to efficiency than incremental improvement. There are also game-changers. Tele-conferencing instead of air travel. Laptop replaces desktop; iPhone replaces laptop, etc.—each far more energy frugal than the last. The internet is an example of an enabling innovation that changes the way we use energy.
Physicist: These are important examples, and I do expect some continuation along this line, but we still need to eat, and no activity can get away from energy use entirely. [semi-reluctant nod/bobble] Sure, there are lower-intensity activities, but nothing of economic value is completely free of energy.
Economist: Some things can get awfully close. Consider virtualization. Imagine that in the future, we could all own virtual mansions and have our every need satisfied: all by stimulative neurological trickery. We would still need nutrition, but the energy required to experience a high-energy lifestyle would be relatively minor. This is an example of enabling technology that obviates the need to engage in energy-intensive activities. Want to spend the weekend in Paris? You can do it without getting out of your chair. [More like an IV-drip-equipped toilet than a chair, the physicist thinks.]
Physicist: I see. But this is still a finite expenditure of energy per person. Not only does it take energy to feed the person (today at a rate of 10 kilocalories of energy input per kilocalorie eaten, no less), but the virtual environment probably also requires a supercomputer—by today’s standards—for every virtual voyager. The supercomputer at UCSD consumes something like 5 MW of power. Granted, we can expect improvement on this end, but today’s supercomputer eats 50,000 times as much as a person does, so there is a big gulf to cross. I’ll take some convincing. Plus, not everyone will want to live this virtual existence.
Economist: Really? Who could refuse it? All your needs met and an extravagant lifestyle—what’s not to like? I hope I can live like that myself someday.
Physicist: Not me. I suspect many would prefer the smell of real flowers—complete with aphids and sneezing; the feel of real wind messing up their hair; even real rain, real bee-stings, and all the rest. You might be able to simulate all these things, but not everyone will want to live an artificial life. And as long as there are any holdouts, the plan of squeezing energy requirements to some arbitrarily low level fails. Not to mention meeting fixed bio-energy needs.
Act Three: Main Course
Physicist: But let’s leave the Matrix, and cut to the chase. Let’s imagine a world of steady population and steady energy use. I think we’ve both agreed on these physically-imposed parameters. If the flow of energy is fixed, but we posit continued economic growth, then GDP continues to grow while energy remains at a fixed scale. This means that energy—a physically-constrained resource, mind—must become arbitrarily cheap as GDP continues to grow and leave energy in the dust.
Economist: Yes, I think energy plays a diminishing role in the economy and becomes too cheap to worry about.
Physicist: Wow. Do you really believe that? A physically limited resource (read scarcity) that is fundamental to every economic activity becomes arbitrarily cheap? [turns attention to food on the plate, somewhat stunned]
Economist: [after pause to consider] Yes, I do believe that.
Physicist: Okay, so let’s be clear that we’re talking about the same thing. Energy today is roughly 10% of GDP. Let’s say we cap the physical amount available each year at some level, but allow GDP to keep growing. We need to ignore inflation as a nuisance in this case: if my 10 units of energy this year cost $10,000 out of my $100,000 income, and next year that same amount of energy costs $11,000 while I make $110,000—I want to ignore such an effect as “meaningless” inflation: the GDP “growth” in this sense is not real growth, but just a re-scaling of the value of money.
Physicist: Then in order to have real GDP growth on top of flat energy, the fractional cost of energy goes down relative to the GDP as a whole.
Physicist: How far do you imagine this can go? Will energy get to 1% of GDP? 0.1%? Is there a limit?
Economist: There does not need to be. Energy may become of secondary importance in the economy of the future—like in the virtual world I illustrated.
Physicist: But if energy became arbitrarily cheap, someone could buy all of it, and suddenly the activities that comprise the economy would grind to a halt. Food would stop arriving at the plate without energy for purchase, so people would pay attention to this. Someone would be willing to pay more for it. Everyone would. There will be a floor to how low energy prices can go as a fraction of GDP.
Economist: That floor may be very low: much lower than the 5–10% we pay today.
Physicist: But is there a floor? How low are you willing to take it? 5%? 2%? 1%?
Economist: Let’s say 1%.
Physicist: So once our fixed annual energy costs 1% of GDP, the 99% remaining will find itself stuck. If it tries to grow, energy prices must grow in proportion and we have monetary inflation, but no real growth.
Economist: Well, I wouldn’t go that far. You can still have growth without increasing GDP.
Physicist: But it seems that you are now sold on the notion that the cost of energy would not naturally sink to arbitrarily low levels.
Economist: Yes, I have to retract that statement. If energy is indeed capped at a steady annual amount, then it is important enough to other economic activities that it would not be allowed to slip into economic obscurity.
Physicist: Even early economists like Adam Smith foresaw economic growth as a temporary phase lasting maybe a few hundred years, ultimately limited by land (which is where energy was obtained in that day). If humans are successful in the long term, it is clear that a steady-state economic theory will far outlive the transient growth-based economic frameworks of today. Forget Smith, Keynes, Friedman, and that lot. The economists who devise a functioning steady-state economic system stand to be remembered for a longer eternity than the growth dudes. [Economist stares into the distance as he contemplates this alluring thought.]
Act Four: Dessert
Economist: But I have to object to the statement that growth must stop once energy amount/price saturates. There will always be innovations that people are willing to purchase that do not require additional energy.
Physicist: Things will certainly change. By “steady-state,” I don’t mean static. Fads and fashions will always be part of what we do—we’re not about to stop being human. But I’m thinking more of a zero-sum game here. Fads come and go. Some fraction of GDP will always go toward the fad/innovation/gizmo of the day, but while one fad grows, another fades and withers. Innovation therefore will maintain a certain flow in the economy, but not necessarily growth.
Economist: Ah, but the key question is whether life 400 years from now is undeniably of higher quality than life today. Even if energy is fixed, and GDP is fixed once the cost of energy saturates at the lower bound, will quality of life continue to improve in objectively agreed-upon ways?
Physicist: I don’t know how objective such an assessment can be. Many today yearn for days past. Maybe this is born of ignorance or romanticism about the past (the 1950s often come up). It may be really exciting to imagine living in Renaissance Europe, until a bucket of nightsoil hurled from a window splatters off the cobblestone and onto your breeches. In any case, what kind of universal, objective improvements might you imagine?
Economist: Well, for instance, look at this dessert, with its decorative syrup swirls on the plate. It is marvelous to behold.
Physicist: And tasty.
Economist: We value such desserts more than plain, unadorned varieties. In fact, we can imagine an equivalent dessert with equivalent ingredients, but the decorative syrup unceremoniously pooled off to one side. We value the decorated version more. And the chefs will continue to innovate. Imagine a preparation/presentation 400 years from now that would blow your mind—you never thought dessert could be made to look so amazing and taste so delectably good. People would line the streets to get hold of such a creation. No more energy, no more ingredients, yet of increased value to society. That’s a form of quality of life improvement, requiring no additional resources, and perhaps costing the same fraction of GDP, or income.
Physicist: I’m smiling because this reminds me of a related story. I was observing at Palomar Observatory with an amazing instrumentation guru named Keith who taught me much. Keith’s night lunch—prepared in the evening by the observatory kitchen and placed in a brown bag—was a tuna-fish sandwich in two parts: bread slices in a plastic baggie, and the tuna salad in a small plastic container (so the tuna would not make the bread soggy after hours in the bag). Keith plopped the tuna onto the bread in an inverted container-shaped lump, then put the other piece of bread on top without first spreading the tuna. It looked like a snake had just eaten a rat. Perplexed, I asked if he intended to spread the tuna before eating it. He looked at me quizzically (like Morpheus in the Matrix: “You think that’s air you’re breathing? Hmm.”), and said—memorably, “It all goes in the same place.”
My point is that the stunning presentation of desserts will not have universal value to society. It all goes in the same place, after all. [I’ll share a little-known secret. It’s hard to beat a Hostess Ding Dong for dessert. At 5% the cost of fancy desserts, it’s not clear how much value the fancy things add.]
The evening’s after-dinner keynote speech began, so we had to shelve the conversation. Reflecting on it, I kept thinking, “This should not have happened. A prominent economist should not have to walk back statements about the fundamental nature of growth when talking to a scientist with no formal economics training.” But as the evening progressed, the original space in which the economist roamed got painted smaller and smaller.
First, he had to acknowledge that energy may see physical limits. I don’t think that was part of his initial virtual mansion.
Next, the efficiency argument had to shift away from straight-up improvements to transformational technologies. Virtual reality played a prominent role in this line of argument.
Finally, even having accepted the limits to energy growth, he initially believed this would prove to be of little consequence to the greater economy. But he ultimately had to admit to a floor on energy price and therefore an end to traditional growth in GDP—against a backdrop of fixed energy.
I got the sense that this economist’s view on growth met some serious challenges during the course of the meal. Maybe he was not putting forth the most coherent arguments that he could have made. But he was very sharp and by all measures seemed to be at the top of his game. I choose to interpret the episode as illuminating a blind spot in traditional economic thinking. There is too little acknowledgement of physical limits, and even the non-compliant nature of humans, who may make choices we might think to be irrational—just to remain independent and unencumbered.
I recently was motivated to read a real economics textbook: one written by people who understand and respect physical limitations. The book, called Ecological Economics, by Herman Daly and Joshua Farley, states in its Note to Instructors:
…we do not share the view of many of our economics colleagues that growth will solve the economic problem, that narrow self-interest is the only dependable human motive, that technology will always find a substitute for any depleted resource, that the market can efficiently allocate all types of goods, that free markets always lead to an equilibrium balancing supply and demand, or that the laws of thermodynamics are irrelevant to economics.
This is a book for me!
The conversation recreated here did challenge my own understanding as well. I spent the rest of the evening pondering the question: “Under a model in which GDP is fixed—under conditions of stable energy, stable population, steady-state economy: if we accumulate knowledge, improve the quality of life, and thus create an unambiguously more desirable world within which to live, doesn’t this constitute a form of economic growth?”
I had to concede that yes—it does. This often falls under the title of “development” rather than “growth.” I ran into the economist the next day and we continued the conversation, wrapping up loose ends that were cut short by the keynote speech. I related to him my still-forming position that yes, we can continue tweaking quality of life under a steady regime. I don’t think I ever would have explicitly thought otherwise, but I did not consider this to be a form of economic growth. One way to frame it is by asking if future people living in a steady-state economy—yet separated by 400 years—would always make the same, obvious trades? Would the future life be objectively better, even for the same energy, same GDP, same income, etc.? If the answer is yes, then the far-future person gets more for their money: more for their energy outlay. Can this continue indefinitely (thousands of years)? Perhaps. Will it be at the 2% per year level (factor of ten better every 100 years)? I doubt that.
So I can twist my head into thinking of quality of life development in an otherwise steady-state as being a form of indefinite growth. But it’s not your father’s growth. It’s not growing GDP, growing energy use, interest on bank accounts, loans, fractional reserve money, investment. It’s a whole different ballgame, folks. Of that, I am convinced. Big changes await us. An unrecognizable economy. The main lesson for me is that growth is not a “good quantum number,” as physicists will say: it’s not an invariant of our world. Cling to it at your own peril.
Note: This conversation is my contribution to a series at www.growthbusters.org honoring the 40th anniversary of the Limits to Growth study. You can explore the series here. Also see my previous reflection on the Limits to Growth work. You may also be interested in checking out and signing the Pledge to Think Small and consider organizing an Earth Day weekend house party screening of the GrowthBusters movie.
Reproduced from: http://physics.ucsd.edu/do-the-math/2012/06/heat-pumps-work-miracles/
Heat Pumps Work Miracles
Posted on 2012-06-12
Part of the argument that we cannot expect growth to continue indefinitely is that efficiency gains are capped. Many of our energy applications are within a factor of two of theoretical efficiency limits, so we can’t squeeze too much more out of this orange. After all, nothing can be more than 100% efficient, can it? Well, it turns out there is one domain in which we can gleefully break these bonds and achieve far better than 100% efficiency: heat pumps (includes refrigerators). Even though it sounds like magic, we still must operate within physical limits, naturally. In this post, I explain how this is possible, and develop the thermodynamic limit to heat engines and heat pumps. It’s a story of entropy.
Whole books can be written about the gnarly properties of entropy. Put simply, entropy is a measure of disorder. Strictly speaking, entropy is all about counting the number of quantum-mechanical states that can be occupied at a certain system energy. In this sense, the total entropy of a system is S = kBln(Ω), where Ω is the number of states available (a rather large number), ln(x) is the natural log function, and kB is the Boltzmann constant, having a value of 1.38×10−23 J/K (Joules per Kelvin) in SI units.
Okay, that’s deep and cool, but let’s not bog ourselves down counting states. The main purpose of the previous paragraph is to indicate that entropy has a fundamental prescription, and that it carries actual units. Mostly entropy is discussed in a hand-wavy way, but it can be pinned down.
Change Heat: Change Entropy
More relevant to our discussion is the thermodynamic result that if we add/subtract thermal energy (heat) to/from a thermal “bath” (large reservoir of thermal energy, like outside air, a body of water, rock) at a temperature T—measured on an absolute scale like Kelvin—the entropy changes according to:
ΔQ = TΔS
We read this to mean that adding an amount of heat (ΔQ: negative if removing heat) will result in a concomitant increase in entropy (decrease if negative) with the bath temperature as the proportionality constant. Looking at this equation, the units of J/K for entropy (S) should make more sense.
Wait a minute! Did I just allow for the condition that entropy could decrease? Isn’t one of the fundamental rules of thermodynamics that entropy can never go down?
Almost right. The entropy of a closed system cannot decrease. But it can easily decrease locally at the expense of an increase elsewhere. You can re-stack books on the shelves after an earthquake, restoring order. But via exertion, you transfer heat to the ambient air in the process—increasing its entropy.
A heat pump, rather than creating heat, simply moves heat. It may move thermal energy from cooler outdoor air into the warmer inside, or from the cooler refrigerator interior into the ambient air. It pushes heat in a direction counter to its normal flow (cold to hot, rather than hot to cold). Thus the word pump.
So let’s imagine I have a cold environment at temperature Tc and a hot environment at Th. Cold and hot are relative terms here: the “hot” environment could be uncomfortably cool—it just needs to be hotter than the “cold” environment.
If I pull an amount of heat, ΔQc out of the cold environment and put it into the warmer environment, I reduce the entropy in the cold region by ΔSc = ΔQc/Tc. Both ΔQc and ΔSc are negative in this case.
Inevitably, I have to run some machinery to effect this flow of heat against the natural gradient (pushing heat uphill). Let’s call the amount of work (energy) needed to force this thermal extraction ΔW. That mechanical/electrical/whatever energy also ultimately turns to heat, and if I cleverly send this additional energy to the hot place, I end up pumping an amount of heat into the hot environment that is ΔQh = −ΔQc + ΔW (just the sum of the two; as indicated by arrow thicknesses in the diagram above).
The entropy change in the hot environment is determined by ΔSh = ΔQh/Th. Because total entropy must increase, we need the sum of entropy changes to be positive: ΔSc + ΔSh > 0—remembering that ΔSc is negative.
So where does this leave us? If we’re trying to heat a home, we care about how much heat is delivered into the home: ΔQh = −ΔQc + ΔW. And we’d like to do as little work, ΔW, as possible to pull this off. So an appropriate figure of merit is ε = ΔQh/ΔW.
A little algebra with the relations above (the steps are shown in the following graphic) results in the maximum efficiency of a heat pump of ε < Th/ΔT, where ΔT = Th − Tc is the temperature difference between hot and cold baths.
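Since the graphic is not reproduced here, the steps can be reconstructed from the relations already established (this is my own sketch of the algebra, not the original figure):

```latex
\begin{align}
\Delta S_c + \Delta S_h &> 0 \\
\frac{\Delta Q_c}{T_c} + \frac{\Delta Q_h}{T_h} &> 0
  && \text{(using } \Delta S = \Delta Q / T \text{)} \\
\frac{-(\Delta Q_h - \Delta W)}{T_c} + \frac{\Delta Q_h}{T_h} &> 0
  && \text{(using } \Delta Q_c = -( \Delta Q_h - \Delta W)\text{)} \\
\frac{\Delta W}{T_c} &> \Delta Q_h\left(\frac{1}{T_c} - \frac{1}{T_h}\right)
  = \Delta Q_h\,\frac{T_h - T_c}{T_c T_h} \\
\varepsilon = \frac{\Delta Q_h}{\Delta W} &< \frac{T_h}{T_h - T_c} = \frac{T_h}{\Delta T}
\end{align}
```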
If, instead, you want to cool something down (refrigeration, A/C), the figure of merit is how much heat is removed from the cold zone divided by the input work: ε = −ΔQc/ΔW. In this case, the maximum efficiency works out to ε < Tc/ΔT.
As an aside, if we turn the heat flow around, so that ΔQh flows naturally out of the hot source (ΔQh is negative in this case) and a lesser ΔQc flows into the cold source (positive), the same entropy considerations lead us to derive a maximum amount of work that is extractable from the heat flow, and the efficiency, ε = ΔW/ΔQh works out to be no better than ΔT/Th. (The bolder among you may want to take up the algebraic challenge.) This is the familiar thermodynamic limit for the amount of work obtainable from a heat engine, like a car’s engine, a coal-fired power plant, or even a nuclear plant. The reason we hit a maximum efficiency is really all about not violating the second law of thermodynamics: that the total entropy of a system may never decrease.
The remarkable thing about the heat pump efficiencies we derived above is that ΔT is in the denominator! Since T is absolute temperature (Kelvin), typical situations will have T ≈ 300 K, and ΔT often a few tens of Kelvin—leading to efficiencies around 10×, or 1000%!! How can this possibly be true? It seems like a total cheat on nature.
The key is that unlike an electric coil or a flame, the heat pump does not create the thermal energy, it moves thermal energy that already exists. A heat pump is always moving thermal energy from a cooler environment to a warmer one. That means that a heat pump heating a house in the winter is grabbing heat from outside and shoving it inside. This may seem counter-intuitive, but I assure you, even freezing air has plenty of thermal energy, being hundreds of degrees above absolute zero. Capturing some of that energy and moving it can take a lot less energy than creating it directly.
One aspect of heat pump efficiency worthy of note is that the theoretical limit gets better as ΔT gets smaller. So a refrigerator in a hot garage will not only have to work harder to maintain a larger ΔT, but it becomes less efficient at the same time, compounding the problem. Likewise, heat pumps operate more efficiently in mild-winter climates than in extreme arctic zones. For instance, the theoretical efficiency of a heat pump operating between 293 K indoors (20°C, or 68°F) and freezing outside is 293/20 = 14.7, while a frigid −20°C (−4°F) would only allow a theoretical efficiency of 7—half as good.
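As a sanity check on these numbers, the theoretical limits can be evaluated directly (the function names here are my own, for illustration):

```python
def heatpump_cop_max(t_hot_k, t_cold_k):
    """Thermodynamic limit for heating: epsilon < Th / (Th - Tc)."""
    return t_hot_k / (t_hot_k - t_cold_k)

def cooling_cop_max(t_hot_k, t_cold_k):
    """Thermodynamic limit for cooling: epsilon < Tc / (Th - Tc)."""
    return t_cold_k / (t_hot_k - t_cold_k)

# The examples from the text: 20 C (293 K) indoors vs. freezing outside,
# and vs. a frigid -20 C (253 K) outside.
print(heatpump_cop_max(293.0, 273.0))  # ~14.7
print(heatpump_cop_max(293.0, 253.0))  # ~7.3: half as good
```

Note that the limit degrades quickly as ΔT grows, which is the whole story of the hot-garage refrigerator and the arctic heat pump.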
COP and EER
If shopping for heat pumps, one should look for the specification called the Coefficient of Performance, or COP, which is essentially the same ε = ΔQh/ΔW metric from before. Realized values are typically around 3–4. This is a factor of several below the theoretical limit, as is so often the case. But still, it’s rather impressive to me that I can add 4 J of heat energy into my home while expending only 1 J to make it happen (apply any energy unit you wish: kWh, Btu, etc. and get the same 4:1 ratio for a COP=4).
But before we get carried away, let’s say your electricity comes from natural gas turbines, converted to electricity at 40% efficiency (via a heat engine). Coupled with a heat pump achieving a COP of 3.5, each unit of energy injected at the natural gas plant yields 0.4×3.5 units of thermal energy in the home, for a net gain of 40% over just burning the gas directly in the home. I’ll take the gain, but the benefit goes from overwhelming to just plain whelming. If carbon intensity is your thing, then a heat-pump supplied with coal-fired electricity does worse than burning gas directly in a home furnace, since coal generates 70% more CO2 per unit of energy delivered than does gas, eating up the 40% margin previously detailed. We only get the full factor of 3.5 improvement if replacing electric heat.
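The chained-efficiency arithmetic is worth making explicit (a minimal sketch; the function name is mine):

```python
# Chained efficiency: a gas power plant (a heat engine) feeding a heat pump.
def delivered_heat_per_fuel_unit(plant_efficiency, cop):
    """Units of indoor heat per unit of fuel energy burned at the plant."""
    return plant_efficiency * cop

furnace = 1.0                                              # idealized direct gas burning at home
heat_pump_chain = delivered_heat_per_fuel_unit(0.40, 3.5)  # 0.4 x 3.5 = 1.4
net_gain = heat_pump_chain / furnace - 1.0                 # 0.4, i.e. the 40% in the text
```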
For cooling applications, one may also see a COP reported. But in the U.S., the efficiency metric is often the Energy Efficiency Ratio, or EER. The EER is a freak of nature, and I hope it asphyxiates on its own stupidity. It is the rate of heat extraction, in Btu/hr, divided by the electrical power supplied, in Watts. Geez—Btu/hr is already a power: 1 Btu/hr is 1055 J in 3600 s, or 0.293 J/s = 0.293 W. Why complicate things?! So multiply EER by 0.293 to get an apples-to-apples comparison, arriving at a COP for cooling that corresponds to our measure from before: ε = −ΔQc/ΔW. Air conditioners with EER values above about 11 qualify for Energy Star, corresponding to a COP above about 3.
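Since Btu/hr is itself a power, the EER-to-COP conversion is nothing more than a unit change, which a few lines make plain:

```python
# 1 Btu/hr = 1055 J delivered over 3600 s, i.e. about 0.293 W.
WATTS_PER_BTU_PER_HR = 1055.0 / 3600.0

def eer_to_cop(eer):
    """Convert EER (Btu/hr of cooling per Watt of input) to a plain COP."""
    return eer * WATTS_PER_BTU_PER_HR

energy_star_cop = eer_to_cop(11.0)  # a bit above 3, matching the text
```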
My Fridge Performance
I had a thought that I could test the > 100% efficiency delivered by a heat pump by watching my fridge go through a defrost cycle. The idea is that a bunch of heat is dumped into the coils periodically to melt accumulated ice. I noticed that the first cooling cycle after the defrost is always longer, as the deposited heat must be removed.
My fridge normally runs on an off-grid stand-alone photovoltaic (PV) system, and I record the system energy expenditures at 5-minute resolution. I see defrost cycles routinely in the data (every day or two). But because of the coarse sample rate and the indirectness of the measurement (measuring battery current, not AC power), I prefer to use TED (the energy detective) data, when available. During unusually cloudy periods, the PV system switches over to utility input, in which case the fridge is monitored by TED and I get one-minute samples of direct AC power. One such sequence is shown for my fridge in October 2011.
We can see in the figure above a baseload power of 108 W, two normal fridge cycles preceding the defrost pulse at 12:30 AM (and a partial cooling cycle on its leading edge). The first cooling cycle after the defrost deposit is obviously longer than the rest, and subsequent cycles may be slightly fatter and more frequent. The energy expenditure (above baseline) is reported for each cooling pulse in Watt-hours, as is the corresponding average cooling-cycle power measured from the start of one pulse to the start of the next.
Performing a careful accounting for the energy expended while cooling vs. the energy deposited during defrost, and projecting the rate of power use prior to defrosting (43 W) forward, we find that we would have expected to spend 163 Wh over the time interval between the first cooling cycle and the last (the pre-cycles are 19.7 Wh each), but the actual expenditure cooling was 184 Wh, leaving an extra 21 Wh-worth of cooling (roughly equivalent to one extra cycle). Meanwhile, the defrost activity expended 155 Wh in 23 minutes (400 W). So it took 21 Wh of wall-plug energy in cooling mode to remove 155 Wh of deposited thermal energy, implying a coefficient of performance around 7.5!
Impossibly high, methinks. One problem is that the defrost cycle puts energy into melting ice, which subsequently runs out to a drip pan below the refrigerator. So the refrigerator does not later need to remove this heat: it found another sort of exit. The heat of fusion barrier that must be overcome amounts to 334 J/g (compared to about 20 J/g to raise the temperature of water by 5°C, or ice by 10°C). If the defrost cycle produces two cupfuls of water (about a half-liter) each time, the investment is approximately 50 Wh of energy. This brings the COP estimate down to 5. Additionally, since time elapses after the defrost injection is complete, some portion of the heat no doubt diffuses out from the cold coils to the hot fins before the cooling kicks in.
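The accounting above can be laid out as a few lines of arithmetic (all numbers are from the text; the ice-melt correction assumes the half-liter estimate):

```python
# Defrost-cycle COP estimate, using the measurements quoted in the text.
expected_wh = 163.0   # projected cooling energy had no defrost occurred
actual_wh = 184.0     # measured cooling energy after the defrost pulse
defrost_wh = 155.0    # energy dumped into the coils during defrost

extra_wh = actual_wh - expected_wh   # 21 Wh of extra wall-plug cooling work
naive_cop = defrost_wh / extra_wh    # ~7.4: suspiciously high

# Correction: ~0.5 L of melted ice drains to the pan, carrying away its
# heat of fusion (334 J/g), so that energy never needs to be re-pumped out.
melt_wh = 500.0 * 334.0 / 3600.0                   # ~46 Wh
corrected_cop = (defrost_wh - melt_wh) / extra_wh  # ~5
```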
In retrospect, the defrost cycle is not the best way to experimentally determine the COP—despite the fact that the “experiment” runs all the time without my having to lift a finger.
A More Deliberate Experiment
Taking matters into my own hands, I rigged an incandescent light bulb operating on a timer and stuffed in the fridge (in a clip-light fixture). I set the timer to pop the light on from 3AM to 4AM, figuring the fridge would be perfectly quiescent (no door openings, etc. during that time). A couple of long, tapered pieces of wood provided a channel for the cord without compromising the door seal. I shifted the fridge over to utility for the night so TED would catch the action. It took three times to get a good result. “When are you going to take that light out of the fridge?!”
I placed the light high in the refrigerator, and shone the light onto aluminum foil on an otherwise empty glass shelf (the foil was to keep direct light off the food below). The data were beautifully collected, but the refrigerator stayed on the entire time the light was on. The bulb was rated at 60 W, and the refrigerator typically runs around 120 W, so instantly I knew the measurement indicated a COP less than 1. Not good.
I presume the bulb was near enough to the thermostat as to elevate the local temperature and fool the refrigerator into staying on the full time. Bet the ice cream got rock hard…
I moved the light to a lower portion of the refrigerator, hopefully well-enough baffled that the thermostat would not be impacted. This time, I was foiled by a sleepless wife, who turned on and off all manner of electrical devices during the course of the experiment. The refrigerator itself was not disturbed, and if pressed, I could still identify cooling cycles and extract useful data. Just like in astronomy, crummy nights produce crummy data, and you have to work much harder to get marginally useful results. Better to wait for a clear night, if you can. I could at least see that the fridge cycled this time during the light-on phase.
Fool me once, shame on you. Fool me twice, shame on me. Since the saying has no “thrice” aspect, I felt I had no choice but to make it work. Actually, I did nothing different (no straps to confine wife to bed—patience was already wearing thin on the interminable experiment in the refrigerator). But fortunately, a quiet night resulted in a clean dataset.
The bulb turned out to expend energy at a rate of 52 W. Subtracting a baseload of 44 W, the first five cycles averaged 35.4 W to keep the fridge cold. The light bulb was on for a bit over 56 minutes, depositing 49 Wh of energy. From the time the light bulb came on until the end of the last cooling pulse before the defrost cycle began (spanning 131 minutes), five cycles totaled 57 minutes of on-time, using 113 Wh of electrical energy. Yet we would have expected 35.4×131/60 = 77 Wh at the nominal rate. Thus the refrigerator consumed an extra 36 Wh of energy to remove the 49 Wh deposited by the light. We calculate a COP of ΔQc/ΔW = 49/36 = 1.36.
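For the record, here is the same bookkeeping spelled out (numbers taken directly from the measurements above):

```python
# Light-bulb experiment: COP from deposited heat vs. extra cooling work.
bulb_w, bulb_minutes = 52.0, 56.0
deposited_wh = bulb_w * bulb_minutes / 60.0   # ~49 Wh of heat from the bulb

nominal_w, span_minutes = 35.4, 131.0         # pre-bulb rate; analysis window
expected_wh = nominal_w * span_minutes / 60.0 # ~77 Wh at the nominal rate
actual_wh = 113.0                             # measured cooling energy
extra_wh = actual_wh - expected_wh            # ~36 Wh of additional work

cop = deposited_wh / extra_wh                 # ~1.36
```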
Hmmm. Not in the ballpark of 3. It’s bigger than 1, at least—indicating some degree of heat-pump magic. But I am disappointed in the result.
Reflections on the Experiments
My mode of testing certainly deviated from the intended operation of refrigerators. A concentrated pulse of constant heat is not quite the same as putting warm food into the refrigerator. It may also be that the freezer achieves a COP around 3 while the refrigerator volume does not. I would be curious to know how the COP is actually measured. Do we realize similar values in daily operation? After all, the light bulb test fell short. If I average the ice-melt-corrected defrost value and the light bulb value, I get a COP around 3, but I have no solid justification for performing this average.
Alternative tests may include placing a known thermal mass into the refrigerator and seeing how much energy is required to bring it to temperature. Door access is a problem, though.
Close the Door!
While I’m on the subject of refrigerators, how about a quick detour to assess how problematic it is to stand with the door open, or to repeatedly and inefficiently access items within. Should I be irked?
Let’s say the inner refrigerator volume is half a cubic meter (about 17 cubic feet; American freezer + fridge combos are often in the low 20s). Air has a specific heat capacity of 1000 J/kg/K. At a density of 1.2 kg/m³, we’re talking 0.6 kg of air, total. And let’s assume a ΔT of 20 K between ambient air and fridge air.
A complete air exchange then costs 12 kJ (3.3 Wh). Even at a COP of 1.0, the refrigerator will remove this amount of energy in 100 seconds at a power of 120 W. It’s a tiny fraction of the daily work of the refrigerator: 0.3%.
A more serious problem is condensation. If the outside air is saturated (100% humidity), containing about 20 g/m³ of water, we deposit 10 g into the fridge. The latent heat of vaporization means that 2257 J are deposited for every gram of water condensing, plus about 800 J to cool the water down. In total, we drop another 23 kJ (6.5 Wh) of energy into the fridge.
So depending on how moist the air is, we may drop anywhere from 12–35 kJ. Our 0.3% becomes a 1% effect. Open the fridge 20 times in a day, and you might have a significant issue.
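The door-opening budget above reduces to a few multiplications (a sketch using the values quoted in the text):

```python
# Cost of a complete air exchange when the fridge door is opened.
volume_m3 = 0.5    # interior volume
rho_air = 1.2      # kg/m^3
cp_air = 1000.0    # J/kg/K, typical for air
dT = 20.0          # K, ambient minus fridge interior

air_exchange_kj = volume_m3 * rho_air * cp_air * dT / 1000.0  # 12 kJ

# Worst case: saturated outside air condenses inside the fridge.
water_g = volume_m3 * 20.0        # ~20 g/m^3 of water in saturated air
latent_j = water_g * 2257.0       # heat released on condensation (J/g)
cooling_j = water_g * 4.184 * dT  # cooling the condensate itself
moist_kj = (latent_j + cooling_j) / 1000.0  # ~23 kJ on top of the air
```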
Another consideration is that each door opening may trip the thermostat before it ordinarily would have been. In doing so, the cooling “schedule” advances forward, and could result in more “on” activity than would otherwise occur.
Heat pumps are really cool, and seem to violate our sense that 100% is the best efficiency we can ever get. Cooling applications have little choice but to use heat pumps, as cooling inevitably involves getting rid of (moving; pumping) thermal energy. Heating applications can see a factor of three or more increase in efficiency over direct heating. Increasingly, the stable thermal mass of the ground is used as the “bath”—often erroneously referred to as geothermal.
So I’m generally a fan of seeing more use of heat pumps. Replacing direct electrical heating with a heat pump is a clear win. Replacing a gas furnace with a heat pump is a marginal win if your electricity comes from gas; not so much for coal-derived electricity. But heat pumps pave the way for efficient use of renewable energy sources, like solar or wind. In this sense, getting away from gas furnaces while promoting non-fossil electricity generation may be the best ticket—especially when coupled with concerns over global warming.
Reproduced from: http://physics.ucsd.edu/do-the-math/2013/05/elusive-entropy/
Elusive Entropy
Posted on 2013-05-28
We’ve all heard it. We think we understand it: entropy is a measure of disorder. Combined with the Second Law of Thermodynamics—that the total entropy of a closed system may never decrease—it seems we have a profound statement that the Universe is destined to become less ordered.
The consequences are unsettling. Sure, the application of energy can reverse entropy locally, but if our society enters an energy-scarce regime, how can we maintain order? It makes intuitive sense: an energy-neglected infrastructure will rust and crumble. And the Second Law stands as a sentinel, unsympathetic to deniers of this fact.
A narrative has developed around this theme that we take in low entropy energy and emit a high entropy wake of waste. That life displays marvelous order—permitted by continuous feeding of this low entropy energy—while death and decay represent higher entropy end states. That we extract low entropy concentrations of materials (ores) from the ground, then disperse the contents around the world in a higher entropy arrangement. The Second Law warns that there is no going back: at least not without substantial infusion of energy.
But wait just a minute! The preceding paragraph is mostly wrong! An unfortunate conflation of the concepts of entropy and disorder has resulted in widespread misunderstanding of what thermodynamic entropy actually means. And if you want to invoke the gravitas of the Second Law of Thermodynamics, you’d better make darned sure you’re talking about thermodynamic entropy—whose connection to order is not as strong as you might be led to believe. Entropy can be quantified, in Joules per Kelvin. Let’s build from there.
The Measure of Entropy
From a thermodynamic standpoint, the total entropy of a system has a simple definition. If I add an amount of energy ΔE (measured in Joules, say), to a system at temperature T (measured on an absolute scale, like Kelvin), the entropy changes according to ΔS = ΔE/T. The units are Joules per Kelvin.
This is very closely related to the heat capacity of a system or object. If we measure for a substance how much the temperature changes when we add a bit of energy, the ratio is the heat capacity. Divide by the object’s mass and we have a property of the material: the specific heat capacity. For example, the specific heat capacity of water is cp ≈ 4184 J/kg/K. If we heat one liter (1 kg) of water by 10°C (same as a change by 10 K), it takes 41,840 J of energy. Most everyday substances (air, wood, rock, plastic) have specific heat capacities around 1000 J/kg/K. Metals have lower specific heat capacities, typically in the few-hundred J/kg/K range.
So if we know the specific heat capacity as a function of temperature, and start the material out at absolute zero temperature, adding energy until it comes up to the temperature of interest (room temperature, in many cases), we can compute the total entropy by adding (integrating) all the little pieces ΔS = ΔE/T, where ΔE = cpmΔT, and m is the mass of the object of interest.
Most materials have a specific heat capacity dropping to zero at zero temperature, and rising to some nearly constant value at intermediate temperatures. The result of the integration for total entropy (sparing details) pencils out to approximately equal the heat capacity, cpm, within a factor of a few. For a kilogram of ordinary matter, the total entropy therefore falls into the ballpark of 1000 J/K.
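This integration is easy to carry out numerically. The sketch below uses a toy heat-capacity model of my own devising (rising as T³ at low temperature, Debye-style, then flattening) just to illustrate that the result lands in the quoted ballpark:

```python
# Numerically integrate S = sum of cp(T) * m * dT / T from 0 K to 300 K.
def cp(t, cp_inf=1000.0, t0=200.0):
    """Toy specific heat: ~T^3 below t0 (Debye-like), constant above."""
    return cp_inf * (t / t0) ** 3 if t < t0 else cp_inf

def total_entropy(t_final, mass=1.0, steps=10000):
    s, dt = 0.0, t_final / steps
    for i in range(steps):
        t = (i + 0.5) * dt  # midpoint rule, conveniently avoiding T = 0
        s += cp(t) * mass * dt / t
    return s

print(total_entropy(300.0))  # a few hundred J/K: the ~1000 J/K ballpark
```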
Because entropy and heat capacity are so intimately related, we can instantly order entropies of everyday substances: metals are lowest, followed by stuff like wood and rock, and liquids have the highest (water, especially), on a per-kilogram basis.
Where is the Disorder?
Note that we have managed to quantify entropy—at least in broad brush, order-of-magnitude style—without making reference to order.
Well, it turns out that if one can count the number of quantum mechanical states available to a system at a given (fixed) energy—in other words, counting all the possible configurations that result in the same total energy—and call this ginormous number Ω, then the absolute entropy can also be described as S = kBlnΩ, where kB = 1.38×10−23 J/K is the Boltzmann constant (note that it has units of entropy), and ln() is the natural logarithm function. This relation is inscribed on Boltzmann’s tomb.
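To get a feel for just how ginormous Ω is, invert the relation for an everyday entropy value (a back-of-envelope sketch):

```python
# Invert S = kB * ln(Omega) for S ~ 1000 J/K (a kilogram of ordinary matter).
kB = 1.38e-23          # Boltzmann constant, J/K
S = 1000.0             # J/K
ln_omega = S / kB      # ~7e25; Omega itself is e raised to this power
```

So Ω is roughly e to the 10²⁶: a number with tens of septillions of digits.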
It is this amazing relationship that forms the foundation of statistical mechanics, by which classical thermodynamics can be understood as the way energy distributes among microscopic states in the form of velocity distributions, collisions, vibrations, rotations, etc. Intuitively, the more ways energy can tuck into microscopic modes of motion, the less apparent it is to the outside world in the form of increased temperature. A system with deep pockets will not increase temperature as much for a given injection of energy. Substances with higher heat capacities have deep pockets, and therefore more ways to spread out the energy internally. The states of these systems require a greater amount of information to describe (e.g., rotational and vibrational modes of motion in addition to velocities, intermolecular interactions, etc.): they are a mess. This is the origin of the notion of entropy as disorder. But we must always remember that it is in the context of how energy can be distributed into the microscopic states (microstates) of a system.
The fans went wild. In 1949, Claude Shannon was characterizing information loss, and needed a term for the degree to which information is scrambled. Visiting mathematical physicist John von Neumann, he received the following advice:
You should call it entropy…nobody knows what entropy really is, so in a debate you will always have the advantage.
Gee, von Neumann couldn’t have been more right. The resulting duplicate use of the term “entropy” in both thermodynamic and information contexts has created an unfortunate degree of confusion. While they share some properties and mathematical relationships, only one is bound to obey the Second Law of Thermodynamics (can you guess which one?). But this does not stop folks from invoking entropy as a trump card in arguments—usually unchallenged.
But informational entropy does not generally transfer into the thermodynamic realm. A deck of cards has the same thermodynamic properties (including thermodynamic entropy) no matter how the cards are sequenced within the deck. A shuffled deck has increased informational entropy, but is thermodynamically identical to the ordered deck.
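The card-deck point can be made quantitative. In this minimal sketch (my numbers, not the article's), we convert the full informational entropy of a shuffled deck into thermodynamic units via k_B and see how spectacularly small the result is:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

orderings = math.factorial(52)            # 52! possible deck sequences
info_bits = math.log2(orderings)          # informational entropy, ~226 bits
thermo_equiv = K_B * math.log(orderings)  # same count expressed in J/K

print(f"{info_bits:.1f} bits ~ {thermo_equiv:.2e} J/K")
```

Roughly 10⁻²¹ J/K—some twenty-plus orders of magnitude below the deck's absolute thermodynamic entropy—so shuffling really does leave the deck thermodynamically unchanged.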
What’s the Difference?
To determine whether two different states of some real or imagined system are meaningfully different in thermodynamic entropy, ask yourself these questions:
- If I took the system to zero temperature and then added energy until getting back to the original temperature, would the amount of energy required be different for the two configurations?
- Is there an intrinsic physical process by which one state may evolve to the other spontaneously? In other words, are the items continuously jostling about and changing configuration via collisions or some other form of agitation?
The first question primarily boils down to whether the microscopic structure has been changed, so that the places energy gets stored will look different before and after. If the change has been chemical in nature, then almost certainly the micro-level energy storage properties will be different. If it’s just a matter of moving macroscopic pieces about, then any entropy change is probably too small to care about.
The second question concerns the relevance of entropic differences, and highlights the notion that entropy only really makes sense for systems in thermodynamic equilibrium. Salt grains and coffee grains sitting in two separate piles on a flat surface will sit that way indefinitely over any timescale we consider to be relevant. Absent air currents or other disturbances, there may be small thermal jostling that over billions of times the age of the Universe could work to mix the two populations. But such timescales lose meaning for practical situations. Likewise, books and papers heaped on the floor have no random process of self-rearrangement, so the configuration is not thermodynamically relevant. Applying the test outlined by the first question above would have the same thermodynamic result in either configuration.
Another way to say this is: it does not make sense to characterize the entropy of a given frozen arrangement. In the salt and coffee example, the mixed and separated configurations (with no barrier between them, let’s say) are both equally probable instances of the same system energy. Yes, there are myriad more ways to arrange a mixed state. But a particular mixed state is just as special as the separated piles. If we had a removable barrier between separated piles and provided a random agitating process by which grains could rearrange themselves on relevant timescales, then we could describe the entropy difference between the ensemble of separated states with a barrier and the ensemble of mixed states without a barrier. But we can’t really get away with discussing the entropy of a particular non-thermalized (static) arrangement.
Okay, after saying that configuration changes of macroscopic arrangements effectively carry no difference in thermodynamic entropy, I will make the tiniest retraction and clarify that this is not exactly true. Going back to the coffee/salt grains example, a system of two species of particles does carry a finite and quantifiable entropic change associated with mixing—assuming some agitating mechanism exists. In the case where the number of grains per unit area is the same for the two clusters, the post-mixed arrangement (occupying the same area as the initial separate piles) has an entropy change of

ΔS = k_B[N₁ ln((N₁ + N₂)/N₁) + N₂ ln((N₁ + N₂)/N₂)],

where k_B is the tiny Boltzmann constant, and N₁ and N₂ count the number of particles or grains in group 1 and group 2. Simplifying to the case where each group contains the same number of particles, N, just gives ΔS = 2Nk_B ln 2, or about 1.4Nk_B.
In atomic and molecular arrangements, we commonly deal with moles of particles, so that N ≈ 10²⁴ particles (the Avogadro number), and the mixing entropy comes out to something of order 10 J/K (compare to absolute entropy often around 1000 J/K). But dealing with macroscopic items, like grains of salt or coffee, we might have N ≈ 10,000, in which case the entropy difference in mixing is about 10⁻¹⁹ J/K.
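A quick numerical check of these figures (a sketch assuming the standard ideal-mixing formula for two distinguishable species, with equal group sizes):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mixing_entropy(n1: int, n2: int) -> float:
    """Ideal entropy of mixing two distinguishable species:
    dS = k_B * [n1*ln((n1+n2)/n1) + n2*ln((n1+n2)/n2)]."""
    total = n1 + n2
    return K_B * (n1 * math.log(total / n1) + n2 * math.log(total / n2))

# Equal groups reduce to dS = 2*N*k_B*ln(2):
print(mixing_entropy(10**24, 10**24))   # molar scale: ~19 J/K, "of order 10"
print(mixing_entropy(10_000, 10_000))   # salt/coffee grains: ~2e-19 J/K
```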
So there can be a real thermodynamic difference between the two states, some twenty orders of magnitude down from the gross thermodynamic entropy of the system. Why do I use the words “can be” and not the simpler “is?” Because question 2 comes in. If there is no statistical process by which the particles can thermalize (mix) over timescales relevant to our interest, then the entropy difference has no meaning. If we apply the test in question 1 to the pre-mixed and post-mixed piles, the procedure does not provide an opportunity for random rearrangements, and thus no measured change in system entropy will manifest itself in an observable way.
In order to clarify some mistaken themes relating to entropy, let’s look again at the third paragraph of the post, repeated here:
A narrative has developed around this theme that we take in low entropy energy and emit a high entropy wake of waste. That life displays marvelous order—permitted by continuous feeding of this low entropy energy—while death and decay represent higher entropy end states. That we extract low entropy concentrations of materials (ores) from the ground, then disperse the contents around the world in a higher entropy arrangement. The Second Law warns that there is no going back: at least not without substantial infusion of energy.
LOW ENTROPY ENERGY?
Characterizing an energy source as high or low entropy makes little sense. Take the Sun, for example. The surface of the Sun is about 5800 K. Every Joule of energy that leaves the Sun removes about 0.17 mJ/K of entropy from the Sun, according to ΔS = ΔE/T. In this way, the Sun’s total entropy actually decreases with time (internally, it consolidates micro-particles: hydrogen into helium; externally, it spews photons, neutrinos, and solar wind hither and yon). So the Sun is a prodigious exporter of entropy. Let’s say we catch this Joule of energy on Earth. When absorbed at a temperature of 300 K, we could say that we have deposited 3.3 mJ/K of entropy. So that Joule of energy does not have a fixed entropic price tag associated with it: 0.17 mJ/K became 3.3 mJ/K. If we cleverly divert the energy into a useful purpose, rather than letting it thermalize (heat something up), the Second Law requires that we at least increase terrestrial entropy by 0.17 mJ/K to balance the books. We are therefore mandated to deposit at least 0.17/3.3, or 5% (50 mJ) of the energy into thermal absorption, leaving 0.95 J free to do useful work. This results in a 95% efficiency, which is the standard thermodynamic limit associated with operation between 5800 K and 300 K (see related post on heat pumps).
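The entropy bookkeeping in this paragraph can be written out directly (a sketch using the same round-number temperatures of 5800 K and 300 K):

```python
T_SUN = 5800.0    # K, solar surface temperature
T_EARTH = 300.0   # K, terrestrial absorber temperature

E = 1.0                       # one Joule of energy leaving the Sun
s_removed = E / T_SUN         # entropy leaving the Sun: ~0.17 mJ/K
s_thermalized = E / T_EARTH   # entropy if fully thermalized at 300 K: ~3.3 mJ/K

# The Second Law demands depositing at least s_removed as heat at 300 K,
# i.e. a fraction T_EARTH/T_SUN of the energy; the remainder is the
# familiar Carnot limit on useful work.
heat_fraction = T_EARTH / T_SUN        # ~5.2%
max_efficiency = 1.0 - heat_fraction   # ~94.8%

print(f"{s_removed*1e3:.2f} mJ/K out; {s_thermalized*1e3:.2f} mJ/K in; "
      f"Carnot limit {max_efficiency:.1%}")
```

The 5% and 95% quoted in the text are these same numbers, rounded.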
The point is that rather than characterize solar input energy as low entropy (little meaning), we should just focus on the fact that we have a large temperature difference between Sun and Earth. It is the large temperature difference that allows a flow of energy from one to the other, and the Second Law allows diversion of some fraction of this energy into a non-thermal path without reducing overall system entropy.
By the way, the entropy of the Earth as a whole, like the Sun, also decreases in the long term, made possible by a net exodus of stored thermal energy and the lightest gases (hydrogen and helium).
THE QUICK AND THE DEAD
What about the entropy of living vs. dead things? If we drop our notion of which is more or less orderly, and think thermodynamics, it becomes easy. A 50 kg living person has lots of water content. The heat capacity is high. The entropy of this system is large. A dry, decaying corpse, let’s say also 50 kg, has a lower heat capacity, lacking liquids. So the thermodynamic entropy of the corpse is lower than that of the living thing.
This comparison may or may not be surprising, but it wasn’t necessarily fair. The living version of the 50 kg corpse had a larger living mass, and as the water evaporated, the entropy of the entire system (tracking all the mass) went up. It’s just that the solid remains, in a pound-for-pound comparison, end up at lower entropy. Note that this result does not respect our sense of “order” as low entropy. The presence of lots of improbably operational subsystems in the living organism does not translate to a lower entropy state, thermodynamically speaking.
A related matter is the notion that we eat low entropy food and produce high entropy waste. In this context we associate “usefulness” with entropy—or lack thereof. We can eat a useful burrito, but cannot derive sustenance by eating our solid waste. In a direct comparison, the solid waste (out of which our bodies remove as much water as possible) has lower thermodynamic entropy than the same mass of burrito—since the latter has more water content. Sorry to be gross here, but this makes the comparisons personally relevant. Sure, the system entropy increased in the process of digesting food (e.g., via respirated gases). But the measure of thermodynamic entropy for a “thing” is not a measure of its usefulness.
The story goes that we extract low entropy (i.e., concentrated) resources from the ground, and turn them into high entropy products. Sometimes this happens, but often it is the reverse. When we pull fossil fuels out of the ground and combust them into several species of gases, we increase entropy. All the king’s horses and all the king’s men will never put fossil fuels back together again. At least not at an energetic bargain.
But let’s look at another common case. Mineral ores are local concentrations of some material of value—like copper, aluminum, gold, etc. The ore is fantastically more concentrated than the average crustal abundance. Our information-entropy minds tag this as low entropy material. But the ore is still far from pure: maybe a few percent of the ore contains the metal we want. The rest is rock we care little about. Our quest is to purify (concentrate further) the material.
First, let’s compare a kilogram of copper ore and a kilogram of refined copper. The ore has low-heat-capacity metal (copper), plus higher-heat-capacity rock. The entropy in the ore is higher than the entropy in the product. So far, no one is perturbed, because the purity, or orderliness, has increased (wrong reason to think the copper is lower entropy, but okay). Now the copper is deposited on circuit boards in small traces and put into cell phones that get shipped around the world, many ending up in landfills. What is the entropy of the 1 kg of copper now, having been strewn across the planet? Thermodynamically, it’s the same. If we somehow contrived the test of adding energy to bring this globally distributed copper from 0 K to 300 K, the amount of energy required (performed quickly enough that we may ignore diffusion into surrounding media) would be the same for the block as for the distributed mass. Macroscopic configuration changes don’t contribute measurably to changes in thermodynamic entropy.
Note that if for some reason I happened to be interested in the material with higher heat capacity—mixed with lower heat capacity material—the process of separating the material would produce a chunk of pure material with a higher thermodynamic entropy than a similar mass of raw material. So it’s not the purification, or ordering, that makes the entropy go down. It’s the thermodynamic properties with respect to how readily energy is absorbed and distributed into microstates.
The other way to look at the ore situation is to take 100 kg of a 1% concentration ore, and separate it into 99 kg of rock and 1 kg of the target material. What is the entropy difference in the original ore and the separated piles? As long as the grain size of the good stuff is semi-macroscopic (well away from atomic scale), then the entropic difference is negligible. If it is chemically mixed at the atomic scale, like if we wanted to extract chlorine from salt, then the entropy difference could in principle go either way, depending on resultant material properties. But the sorting process has negligible impact on entropy.
The context of this discussion is mis-application of the Second Law of Thermodynamics to systems that might appear to exhibit entropy differences in the form of orderliness of macroscopic arrangements of matter. But many of these “intuitive” cases of entropy differences translate to little or no thermodynamic entropy differences, and therefore do not fall under the jurisdiction of the Second Law.
Simpler statements that are consistent with the laws of thermodynamics and bear on our societal options are:
- Energy may be extracted when temperature differences exist (e.g., combustion chamber compared to ambient environment; solar surface temperature compared to Earth surface). Entropy measures of the energy itself are not meaningful.
- Net energy from fossil fuels may only be extracted once.
- Efficiencies are capped at 100%, and often are theoretically much lower as a consequence of the Second Law.
Meanwhile, we should break the habit of invoking the Second Law to point to the irreversibility or even just the energy cost of restoring ordered arrangements of matter (as in mined ores and recycling). Even if the thermodynamic entropy of processed goods is higher than that of the feedstock (usually not the case, and at best negligibly different), the Second Law is not the primary barrier to reversing the process. As long as 10¹⁷ W flows from the Sun to the Earth, physics and entropy impose no fundamental barrier to reversing such processes. Our limitations are more on the practical than on the theoretical side.
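As a sanity check on that power figure, here is a back-of-the-envelope sketch (using the standard solar constant and mean Earth radius—my inputs, not the article's):

```python
import math

SOLAR_CONSTANT = 1361.0   # W/m^2, mean solar irradiance above the atmosphere
R_EARTH = 6.371e6         # m, mean Earth radius

# Earth intercepts sunlight over its cross-sectional disk, pi * R^2.
intercepted = SOLAR_CONSTANT * math.pi * R_EARTH**2
print(f"{intercepted:.2e} W")   # ~1.7e17 W, consistent with the 10^17 W quoted
```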
I thank Eric Michelsen, Kim Griest, and George Fuller for sharing insights in a fascinating discussion about the nature of entropy. It is said that when a group of scientists discusses entropy, they’ll be talking nonsense inside of ten minutes. I think we managed to steer clear of this common peril. I also learned from these links.