The Road to Entropy – Clausius undaunted

Have you ever experienced that wondrous “Eureka!” moment of insight when you’ve discovered some hidden secret of nature? Archimedes did when he realized that the volume of water displaced is equal to the volume of the body submerged. Kekulé did when he discovered benzene’s structure. Hubble did when he discovered that the galaxies are all moving away from us at speeds that increase with distance. And Rudolf Clausius arguably did when he realized that he could correct Sadi Carnot’s “flawed” masterpiece (here) by replacing the caloric theory of heat with James Joule’s theory of work-heat equivalence (here). Clausius’s 1850 publication on this topic gave us the 1st Law of Thermodynamics. I capture the essence of Clausius’s realization in this video.

Carnot’s original work together with Clausius’s 1850 publication are captured well in this book:

Carnot, Sadi, E. Clapeyron, and R. Clausius. 1988. Reflections on the Motive Power of Fire by Sadi Carnot and Other Papers on the Second Law of Thermodynamics by E. Clapeyron and R. Clausius. Edited with an introduction by E. Mendoza. Mineola, N.Y.: Dover.

I go into much more depth on Clausius’s 1850 publication in my book Block by Block – The Historical and Theoretical Foundations of Thermodynamics.


The Road to Entropy – James Joule and the power of his curiosity (video)

James Joule could have observed what he did and then done nothing with it. Instead, he became driven to understand and explain it and so discovered the mechanical equivalent of heat, a forerunner of the concept of energy and the 1st Law of Thermodynamics. His story is a good one, an inspiring one, an example of how good science is conducted and how a good scientist behaves. I share a piece of his story here in this video.

I go into much more detail about the life of James Joule, including his wonderful collaboration with William Thomson, later Lord Kelvin, in my book Block by Block – The Historical and Theoretical Foundations of Thermodynamics.

The Road to Entropy – Sadi Carnot’s use of analogy to create his “flawed” masterpiece (video)

The commercialization of high-pressure steam engines by the Cornish engineers of Britain inspired Sadi Carnot, a French military engineer, to analyze these engines and seek theories to guide their improvement.

If you’re interested in doing a deep dive into Sadi Carnot’s work, here are two excellent references.

Carnot, Sadi, E. Clapeyron, and R. Clausius. 1988. Reflections on the Motive Power of Fire by Sadi Carnot and Other Papers on the Second Law of Thermodynamics by E. Clapeyron and R. Clausius. Edited with an introduction by E. Mendoza. Mineola, N.Y.: Dover.
Carnot, Sadi. 1986. Reflexions on the Motive Power of Fire. Edited and translated by Robert Fox. University Press.

I go into much more depth on Sadi Carnot’s work, including a detailed analysis of his eponymous heat cycle, in my book Block by Block – The Historical and Theoretical Foundations of Thermodynamics.


The oldest surviving steam engine is on display at the Henry Ford Museum of Innovation in Michigan

I was traveling in Michigan this past week and took a day to visit the Henry Ford Museum of Innovation. All I can say is, WOW! Together with the adjacent Greenfield Village, well worth the visit.

The Innovation Museum offers great displays of engine technologies, including the oldest surviving steam engine in the world, a 1760-ish Newcomen engine, which was given to Henry Ford by the Earl of Stamford in 1929. The photograph is of me standing next to this large machine. The fact that these displays were just one part of the larger museum is remarkable. Again, well worth the visit.

The Road to Entropy – Phil Hosken on Richard Trevithick and the invention of the high-pressure steam engine (video)

As shared in my previous post (here), the historical road to entropy started with Denis Papin’s development of the piston-in-cylinder assembly and Thomas Newcomen’s and James Watt’s subsequent efforts to commercialize and continuously improve fire engines or atmospheric engines built around this assembly. Steam at atmospheric pressure was employed in these engines, not as a driving force but instead as a means to create a vacuum inside the cylinder (condensation via water spray), thus causing atmospheric air to drive the piston down into the cylinder and so generate useful work.

Around 1800 in Cornwall, England, Richard Trevithick transformed this technology by inventing an entirely new engine based on the (safe) use of pressurized steam as the driving force. Several technological breakthroughs were required, including the design and fabrication of the first shell-and-tube heat exchanger. The arrival of these steam engines, now accurately named, quickly attracted the entrepreneurial interests of other Cornish engineers, as these engines were proving themselves more efficient (quantified by work done divided by bushels of coal consumed) than those of Newcomen and Watt; rapid improvements followed, as best reflected by the eventual increase in steam pressure from 0 to 50 psig. Arthur Woolf commercialized one such improved design and his former business partner, Humphrey Edwards, brought the end product to France. The importance of this history? It was arguably Woolf’s design that inspired Sadi Carnot in 1824 to conduct the first theoretical analysis of the steam engine.

While writing my book on this topic, I had the good fortune to connect with Phil Hosken. Phil lives in Cornwall and is an expert on the life and times of Richard Trevithick, having once served as President of The Trevithick Society. As Trevithick played such a critical role in this historical timeline, I invited Phil to prepare the short video shown above. I think you’ll enjoy learning something new from this video, and I suspect questions will come to mind as you watch. Phil gladly welcomes such questions so please do not hesitate to engage. His email is: philip@htpbook.co.uk. His website is: http://www.htpbook.co.uk/.

If you’re interested in a deeper dive, Phil wrote the following two books on Richard Trevithick.

Hosken, Philip M. 2011. Oblivion of Richard Trevithick. Cornwall, England: Trevithick Society.
Hosken, Philip M. 2013. Genius, Richard Trevithick’s Steam Engines. Place of publication not identified: Footsteps Press.

I go into more detail about Richard Trevithick and the rise of the pressurized steam engine in my book Block by Block – The Historical and Theoretical Foundations of Thermodynamics.

The Road to Entropy – The Newcomen and Watt “Steam” Engines (videos)

The road to entropy began with the 18th century development of the “steam” engine by Thomas Newcomen and James Watt. But steam was not the driving force in these engines. So what was? And what was the purpose of the steam? Check out this video for the answers:

Note the shout-out in the video to Professor Bill Snyder, Bucknell University. Back in 1979, he demonstrated a pretty cool phenomenon in our chemical engineering class that I had fun repeating for this video.

For those of you interested in my kitchen experiment, I isolate it for you here:

I go into more detail about the origins of the steam engine in my book Block by Block – The Historical and Theoretical Foundations of Thermodynamics.

Riddle me this: why does dS = 0 for reversible, adiabatic expansion?

While attending an event in Syracuse, New York, I got to talking with an older chemical engineer who had once worked with my dad at Bristol-Myers Laboratories. I shared that I was writing a book on thermodynamics and we spoke some about this. At the conclusion, he looked at me and said, “You know, I never understood entropy.” I’m sure it wasn’t the first time this sentence has been spoken.

What is it about entropy that creates such a stumbling block to learning thermodynamics? More importantly, how can this stumbling block be removed? That’s the question in front of me and likely other educators right now. It’s a challenge that I am taking on, as you’ll see in my upcoming posts.

But before I start taking on that challenge, I must first better understand entropy myself. To this end, there’s a certain feature of entropy that I’ve never completely understood, a feature that has been one of my own stumbling blocks. In this post, I share this situation with you in the form of a Riddle me this question. Let me give you some context first, and then I’ll get to my question.

Entropy is a property of state that we can’t measure, feel, or viscerally understand

Entropy (S) is a property of state that quantifies the most probable distribution of a system of particles based on location and velocity (momentum) for different fixed properties of the system, such as volume (V) and energy (U). One could consider the following a valid statement: S = f(V,U). Unfortunately, we can’t directly measure or otherwise sense this property but instead must calculate it. Thus, our understanding of entropy must come from an abstract thought process rather than our gut feel. No wonder it’s so challenging to understand.

Given this, let’s start a discussion about the two commonly used approaches to understanding entropy. The first approach involves probability. The second approach involves the famous equation, dS = δQrev/T, and the corresponding fact that absolute entropy can be calculated by the integration of this equation from absolute zero to a given temperature, since the entropy of a given substance in its pure crystalline form is zero at absolute zero. The fascinating thing about these two seemingly different approaches is that they are fundamentally connected.

Entropy and probability

Have you ever put a drop of dye into a glass of water and watched what happens? The small drop spreads out and eventually disperses throughout the water, resulting in a nice, light, uniform shade of color. There’s no internal force making this happen. Instead, the uniform spread results from the random motion of the dye molecules in the water, no direction of motion being statistically more probable than any other direction.

To generalize this example, randomization in nature leads to events that are likely or “probable” to occur. The uniform spread of the dye molecules through water or the uniform spread of gas molecules through a room are both highly probable. The 50:50 distribution of heads and tails after the randomized flipping of a coin many times is highly probable. This is how randomization, probability, and large numbers work.

Taking a conceptual leap here, the absence of energy gradients in a system, such as those involving mechanical (pressure), thermal (temperature), and chemical (chemical potential) forms, is highly probable; the presence of such energy gradients is highly improbable, for they don’t naturally occur. When they are made to occur, such as when you drop a hot metal object into a cold lake, the temperature gradient between the two dissipates over time. You won’t see the gradient become larger over time; the metal object won’t become hotter. The probability-based mathematics involved with statistical mechanics shows why this is so.

Taking another, larger, conceptual leap, assume you have a system comprising a very large number of helium atoms for which the total energy (U) is simply the total number of atoms times their average kinetic energy. If you put all of these atoms into a single container of volume V and energy U, then, absent any external energy field like gravity, the atoms will spread themselves out to achieve a uniform density and a Maxwell-Boltzmann distribution of energies. (See illustration detail at right, from the larger illustration at the end of this post.)
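
For readers who want the equation behind that curve, a standard statement of the Maxwell-Boltzmann speed distribution (not specific to my illustration) is:

$$ f(v) = 4\pi \left(\frac{m}{2\pi k_B T}\right)^{3/2} v^2 \, e^{-m v^2 / 2 k_B T} $$

where m is the atomic mass, k_B is Boltzmann’s constant, T is absolute temperature, and f(v)dv gives the fraction of atoms with speeds between v and v + dv.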

While scope limitations prevent me from diving into the mathematics behind these distributions, suffice to say that while the presence of the moving atoms inside the system may seem very chaotic and random (if you could actually see them), the reality is actually the opposite. The distributions of the atoms by location and velocity (momentum) have beautiful structures, ones defined as being the most probable.

It was Ludwig Boltzmann who developed the mathematics that revealed these distributions. His mathematics involved no assumption of an acting force or of any kind of favoritism, only of probability, the probability of nature’s natural, random tendencies. And it was also Boltzmann who used these mathematics to explain the second approach to understanding entropy.

Entropy and δQ/T

Many of us learned about entropy by the following equation that Rudolf Clausius discovered deep in his theoretical analysis of Sadi Carnot’s heat engine:

dS = δQrev/T [1]

An infinitesimal change in entropy (dS) equals the infinitesimal change in energy of the system caused by reversible thermal energy exchange with another system (δQrev) divided by the absolute temperature (T). This equation enabled Clausius to complete the first (differentiated) fundamental equation of state when he started with the 1st Law of Thermodynamics

dU = δQ – δW

and made two substitutions, one with equation [1] above and the other by assuming a fluid system, for which δW = PdV:

dU = TdS – PdV

The arrival of [1] together with the later realization that the entropy of a substance in its pure crystalline form equals zero at absolute zero enabled calculation of absolute entropy at any temperature. Using heat capacity and phase-change data, [1] can be mathematically integrated from absolute zero to yield absolute entropy. In this way, entropy quantifies the total amount of thermal energy (Q), adjusted with division by T, required to construct the system of moving atoms and molecules, including all forms of motion such as translation, vibration, and rotation, at a given temperature and pressure. The integration also accounts for phase change and volume expansion [the heat capacity involved is Cp and thus allows for variable volume].
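
Written out, the integration I’m describing looks like the following (sketched for a substance that melts and boils on the way up to T, with Cp the constant-pressure heat capacity):

$$ S(T) = \int_0^{T_{fus}} \frac{C_p}{T'}\,dT' + \frac{\Delta H_{fus}}{T_{fus}} + \int_{T_{fus}}^{T_{vap}} \frac{C_p}{T'}\,dT' + \frac{\Delta H_{vap}}{T_{vap}} + \int_{T_{vap}}^{T} \frac{C_p}{T'}\,dT' $$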

One can thus think of entropy as the quantification of what I call the “structural energy” of a system. It’s the energy required to create the structure of moving atoms and molecules. But why is this concept connected with probability?

Boltzmann’s probabilistic reasoning explains why dS = δQ/T

The equality shown in [1] has always interested me. I searched for a physical explanation of why the equation itself actually works, but couldn’t find an answer until I read one of Boltzmann’s papers (here). His explanation made sense to me.

Two variables, location and velocity (momentum), play a critical role in Boltzmann’s mathematics. Let’s leave location aside for this discussion and focus on momentum, and more specifically, energy. Building on the illustration to the right, also taken from the larger illustration at the end of this post, Boltzmann proposed an infinite series of hypothetical buckets, each characterized by an infinitesimal range of energy, and then played the game of asking: how many different ways can atoms be placed into the buckets while satisfying the fixed total energy constraint? (Total energy = summation of # atoms in bucket × energy of bucket.) After much math, he discovered that by far the most commonly occurring placements or arrangements of the atoms aligned with the Maxwell-Boltzmann distribution. While all arrangements, no matter how improbable, were possible, only the most probable arrangement dominated. No other arrangement even came close. In the example at right, you can see how the Maxwell-Boltzmann distribution evolves with 7 balls (atoms). When you add 10²³ additional atoms plus a large number of buckets, the Maxwell-Boltzmann distribution locks in as the only answer. It clearly dominates based on pure probability.
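
To make Boltzmann’s counting game concrete, here’s a minimal sketch (my own illustration of the idea, not Boltzmann’s actual calculation) that enumerates every way to distribute 7 atoms across energy buckets 0 through 7 subject to a fixed total of 7 energy units, and finds the occupation pattern with the largest number of arrangements:

```python
from math import factorial
from itertools import combinations_with_replacement

N, E_TOTAL = 7, 7              # 7 atoms sharing 7 quanta of energy
LEVELS = range(E_TOTAL + 1)    # energy "buckets" 0, 1, ..., 7

best_w, best_occ = 0, None
# Each multiset of per-atom energies summing to E_TOTAL is one occupation pattern.
for energies in combinations_with_replacement(LEVELS, N):
    if sum(energies) != E_TOTAL:
        continue
    occ = [energies.count(level) for level in LEVELS]  # atoms per bucket
    w = factorial(N)
    for n in occ:
        w //= factorial(n)     # number of arrangements: W = N!/(n0! n1! ...)
    if w > best_w:
        best_w, best_occ = w, occ

print("most probable occupation:", best_occ, "with W =", best_w)
# -> [3, 2, 1, 1, 0, 0, 0, 0] with W = 420, already decaying toward
#    the exponential (Boltzmann) shape even with only 7 atoms
```

Scale this up to 10²³ atoms and the most probable pattern doesn’t just win; it dominates so completely that no other pattern is ever observed.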

Let’s turn our attention to the number of buckets involved in Boltzmann’s mathematics, and more specifically, consider how many of the buckets are actually accessible. At absolute zero, there’s only one accessible energy bucket and thus one way to arrange the atoms. [Note: The equation linking entropy to the number of arrangements (W) is the famed S = k ln(W). When W = 1, S = 0.] As temperature increases, the atoms start bubbling up into the higher energy buckets, meaning that the number of accessible buckets increases. And as temperature approaches infinity, so too does the number of accessible buckets approach infinity. In this way, temperature defines the number of energy buckets accessible for the distribution of atoms.

Now consider that the value of δQ quantifies an incremental amount of energy added to the system (from thermal energy exchange), and then take this to the next step: δQ also quantifies an incremental number of accessible buckets. So δQ/T quantifies, in a way, a fractional increase in accessible buckets and thus the infinitesimal increase in entropy. In his publication, Boltzmann demonstrated this connection between his probability-based mathematics and [1], which had been derived from classical thermodynamics. This is rather fascinating, the connection between these two very different concepts of entropy. And perhaps this is also why entropy is confusing. Entropy is related to the amount of thermal energy entering a system, and it’s also related to the number of different ways that the system can be constructed. It’s not obvious how the two are connected. They are, as shown by Boltzmann. It’s just not obvious.

But then why does dS exactly equal zero for adiabatic reversible work?

Based on the above discussion, I can see why dS equals δQ/T from a physics and statistical mechanics viewpoint. And I can further understand why this works regardless of the substance involved; temperature alone governs the number of energy buckets accessible for the distribution of atoms and molecules. The nature of the substance plays no role in [1].

But what doesn’t make sense to me is the other important feature of entropy that we learn at university: entropy experiences no change (none!) during adiabatic reversible work. Consider that S = f(U,V). During reversible adiabatic expansion, such as occurs in a work-generating turbine, energy (U) decreases and so decreases entropy, while volume (V) increases and so increases entropy. This process is labeled isentropic, meaning that entropy remains constant, which means that the two changes in entropy exactly cancel each other. Not almost. Exactly. That’s what is meant by saying dS = 0 during reversible adiabatic expansion. I just don’t understand how the physics involved results in such an exact trade-off between energy and volume. Is there something fundamental that I’m missing here? Is there a clean proof that this must be so?
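
To show what “exactly cancel” means, here’s the textbook ideal-gas case (one mole, so PV = RT), which at least makes the cancellation explicit, even if it doesn’t answer my deeper question about the physics:

$$ dS = \frac{C_V}{T}\,dT + \frac{R}{V}\,dV $$

For a reversible adiabatic step, the 1st Law with δQ = 0 gives C_V dT = −P dV = −(RT/V) dV, so that

$$ \frac{C_V}{T}\,dT = -\frac{R}{V}\,dV \quad\Rightarrow\quad dS = 0 $$

The two terms are exact negatives of each other by construction. The mathematics is clean; it’s the physical picture behind it that I’m after.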

In my reading of Rudolf Clausius’s works, I never saw him state that reversible adiabatic expansion has no impact on entropy. His focus was primarily on the steps in Carnot’s ideal heat engine cycle involved in the transfer of heat into (Qin) and out of (Qout) the working substance. He sought to understand how heat is transformed into work (W), and it was through this search that he learned that Qin/Thot = Qout/Tcold and also that W = Qin – Qout. These discoveries led to his defining the maximum efficiency for the continuous transformation of heat into work [Wmax / Qin = (Thot – Tcold)/Thot] and also led to the discovery of the new state function, entropy (S), and its main feature, dS = δQ/T. But in this, I did not find any discussion around the proof that dS = 0 for reversible adiabatic expansion. I did see discussion that because δQ is zero during the reversible adiabatic expansion, then dS also is zero. However, this, to me, is not proof. What if the changes in entropy for the two adiabatic volume changes in Carnot’s cycle are both non-zero and also cancel each other?

This then is my question to you.

Have you ever read an explanation based on physics of why entropy experiences zero change during reversible adiabatic expansion? I welcome any information you have on this topic, hopefully in documented form. Thank you in advance.

My journey continues.

The below figure is from Block by Block – The Historical and Theoretical Foundations of Thermodynamics

Illustrated by Carly Sanker

References

Sharp, Kim, and Franz Matschinsky. 2015. “Translation of Ludwig Boltzmann’s Paper ‘On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium’ Sitzungberichte Der Kaiserlichen Akademie Der Wissenschaften. Mathematisch-Naturwissen Classe. Abt. II, LXXVI 1877, Pp 373-435 (Wien. Ber. 1877, 76:373-435). Reprinted in Wiss. Abhandlungen, Vol. II, Reprint 42, p. 164-223, Barth, Leipzig, 1909.” Entropy 17 (4): 1971–2009.

Cercignani, Carlo. 2006. Ludwig Boltzmann: The Man Who Trusted Atoms. Oxford: Oxford Univ. Press.

Thermodynamics: What is “heat”? (video)

The word “heat” can be very confusing to those trying to learn and understand thermodynamics. I created the below video to help clarify things. I go into more detail about this topic and many others in my book Block by Block – The Historical and Theoretical Foundations of Thermodynamics.

Riddle me this: what is the physical significance of T∆S in Gibbs’ maximum work equation?

Remember this?

Maximum work = -∆Grxn = -(∆Hrxn – T∆Srxn)

At some point toward the end of undergraduate thermodynamics, we were taught this equation.  Unfortunately, most of us, myself included, graduated without actually understanding it.  Why?  You already know the answer, just by looking at it.  Because entropy is involved.  While many have a reasonable understanding of heat of reaction (∆Hrxn) and temperature (T), few understand the physical meaning of entropy, and fewer still the physical meaning of T∆Srxn.

In the many references I researched while writing Block by Block – The Historical and Theoretical Foundations of Thermodynamics, I did not find a single one that offered a physical interpretation of T∆Srxn and so ended up proposing my own.  At the end of this post I share my proposal and, in the spirit of the Riddle me this theme, invite you to check it out and let me know what you think.

Background – What problem did the arrival of this equation solve?

In the mid-1800s, technical analysis of the steam engine led to the discovery of two properties of matter, energy and entropy, and so laid the foundation for the science of thermodynamics.  In this new way of thinking, the steam engine transforms chemical energy (the burning of coal) to mechanical energy (the generation of work as defined by weight times change in vertical height lifted).  Furthermore, the maximum energy gained by the latter can’t be greater than the energy lost by the former.  The conservation of energy dictates this.  But what’s the mathematical equation to accompany this statement?  That was the challenge.  In other words, how would one go about calculating the maximum work possible from the combustion of one bushel of coal?

In one of the first attempts to answer this question, Danish chemist Julius Thomsen in 1854 and separately French chemist Marcellin Berthelot in 1864 proposed that maximum work is determined by the energy difference between reactants and products as quantified by the heat of reaction (-∆Hrxn) measured in a calorimeter.  [Note:  Exothermic reactions require heat removal in a calorimeter, thus resulting in a negative value for ∆Hrxn].  They further reasoned that when a reaction is exothermic (∆Hrxn < 0), it can generate work on its own, without added energy, and must therefore be spontaneous.  Conversely, when a reaction is endothermic, then it requires energy to make it go and so can’t be spontaneous.  This was their thinking at least.

Despite its lack of a theoretical underpinning, Thomsen and Berthelot’s thermal theory of affinity, as it became known, worked reasonably well for many processes.  But not all of them.  Sometimes all it takes is a single data point to eliminate a theory.  In this case, the data point was the spontaneous endothermic reaction.  According to Thomsen and Berthelot, it wasn’t supposed to happen, and yet it did. 

It was left to J. Willard Gibbs to show us the way.  In his 300-page “paper” (1875-78), Gibbs created a new property of matter, G = H – TS (G = Gibbs Energy, H = enthalpy = U + PV, U = internal energy, P = pressure, V = volume, T = absolute temperature, S = entropy), and showed how this property enabled calculation of maximum work through the following:

Maximum work = -∆Grxn = -(∆Hrxn – T∆Srxn)       (constant temperature and pressure)

Gibbs additionally proved that it is ∆Grxn < 0 that defines a spontaneous reaction and not ∆Hrxn < 0.
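
For readers who want the one-line reasoning behind the maximum-work claim, here is a standard derivation (not Gibbs’ own notation): write the 1st Law with a useful-work term, δWuseful = δQ – dU – PdV, and apply the 2nd Law bound δQ ≤ TdS for a process at constant T and P:

$$ \delta W_{useful} = \delta Q - dU - P\,dV \;\le\; T\,dS - dU - P\,dV = -dG $$

with equality holding only for a reversible path, hence “maximum” work.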

Unfortunately, Gibbs’ maximum work equation, as powerful as it proved to be, arrived absent a physical explanation.  Yes, per Gibbs’ argument, it works based on the macroscopic thermodynamics he developed, but how does it work based on the microscopic world of atoms and molecules?

A deep dive into Gibbs’ maximum work equation

As you’ve likely heard somewhere before, the entropy of an isolated system increases to a maximum.  It’s a fascinating characteristic of entropy.  What this means in the world of statistical mechanics is that the atoms and molecules of the system always move toward the most probable distribution based on location and momentum (velocity).  What this means physically, without getting into the details, is that—absent external energy fields like gravity—pressure, temperature, and chemical potential (of each species) equilibrate within a system such that each is constant throughout. 

Chemical potential, an invention of Gibbs, quantifies the interactions between the electrons and protons in a system, not directly, but instead through two manifestations of their interactions:

Type I – energy associated with orbital electron distribution around the proton-containing nucleus.

Type II – energy associated with intermolecular interactions (attraction and repulsion) between atoms and molecules, reflective of the fact that molecules have varying degrees of polarity. 

The fact that the chemical potential of each species evolves to equality throughout the system becomes especially relevant in the concept of phase and reaction equilibria, wherein each species and the atoms of each species distribute themselves across multiple phases, e.g., solid, liquid, or gas, and between reactants and products.

As regards chemical reactions, a common approach to analyzing them is to assume that they operate at constant temperature and pressure as this is consistent with many industrial processes.  Conveniently, these assumptions greatly facilitate calculations because they remove equilibration of temperature and pressure from consideration and put the focus solely on the equilibration of chemical potential.  But which aspect of chemical potential, the electron orbitals (Type I) or the intermolecular interactions (Type II) or both?  And how are these concepts connected with Gibbs’ maximum work equation, and especially the main topic of this post, the physical meaning of T∆Srxn?

Energy changes in both Type I and II contribute to temperature change in a chemical reaction

A chemical reaction occurs spontaneously when the reactant electrons can move to a more probable distribution in the products, which is all about Type I energy changes.  The most probable distribution of a set of particles occurs when larger numbers of them populate the lower energy levels.  (The relationship between distribution and electron orbital energy levels is addressed in statistical mechanics—alas, outside the scope of this post.)  The decrease in energy resulting from the movement of electrons from high potential to low means that, due to conservation principles, energy must increase somewhere else.  But where?  One would think that an immediate response would be an increase in kinetic energy of the products, which would result in an increase in temperature.  But this clearly can’t always be the case, because how then could the spontaneous endothermic reaction be possible?

While the Type I energy changes described above determine whether or not a reaction spontaneously occurs, it’s the sum of all energy changes that affects temperature, for at the moment of reaction, the Type I redistribution of orbital electron energies from high to low causes an immediate temperature-affecting change in what I call “system-structure” energy. To me, system-structure energy comprises the Type II intermolecular interactions in addition to the degrees of freedom governing molecular motion (translation, rotation, vibration) and the number of molecules present. As the number of molecules changes at constant pressure, system volume changes, resulting in P∆V work along with a corresponding change in temperature. Each of these system-structure energy changes contributes to temperature change and thus to ∆Hrxn. Whether the change in system-structure energy results in a heating or a cooling requirement at constant temperature and pressure depends on the specific reaction involved. And it’s the size of this change relative to that for the Type I energy change that then determines whether the reaction is deemed exothermic or endothermic in the calorimeter.

Thermodynamic difference between the calorimeter and the electrochemical cell

In the reaction calorimeter, changes in Type I and system-structure energies occur together, inseparable, both contributing to the total heat release (∆Hrxn), neither capable of being isolated and studied.  In the electrochemical cell, on the other hand, the two changes are inadvertently kept separate (just a feature of the design), which favorably reveals the physical meaning of Gibbs’ maximum work equation in the cell’s operation.  In this cell, reactants are separated from each other by a porous membrane into two half-cells, and the complete cell is maintained at constant temperature via a thermal bath.  Once set up, this cell establishes a voltage difference between the two half-cells.  Electrons flow from high voltage to low through an external wire to convert reactants to products.

Assigning the variables in Gibbs’ maximum work equation to Type I and system-structure energy changes

With this background, let’s now consider the thermodynamics of the electrochemical cell as a means to assign the variables in Gibbs’ maximum work equation to Type I and system-structure energy changes and so position ourselves to gain insight into the physical meaning of T∆Srxn (see illustration below).  Once converted into energy units, the cell voltage difference quantifies the maximum work possible for the chemical reaction, for if you ran the wire through a motor, you could employ the voltage difference to lift a weight and generate useful work.  This voltage difference reflects a difference in orbital electron energies (Type I).  Recall earlier that maximum work is defined as -∆Grxn.  Thus, the physical meaning of -∆Grxn can be understood based on Type I energy changes.
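
In equation form, this correspondence between cell voltage and maximum work is the standard electrochemical relation (with n the moles of electrons transferred and F Faraday’s constant, 96,485 C/mol):

$$ W_{max} = n F E_{cell} = -\Delta G_{rxn} $$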

Also recall that, considering the chemical reaction as a whole, the heat released by the reaction corresponds to the value of ∆Hrxn as quantified in the calorimeter.  Thus, the value of ∆Hrxn is determined by the sum of the two energy changes: Type I plus system-structure.

This now leaves us with the T∆Srxn term in Gibbs’ equation and also with system-structure energy changes.  These two must be linked as they’re the only terms remaining.  But how?  And how are they, in turn, connected with the electrochemical process?  Again, I couldn’t find the answer to these questions in my references.  Fortunately, Gibbs guided me toward a hypothesis.

My hypothesis on the physical meaning of T∆Srxn

While Gibbs intentionally didn’t speculate about the underlying physics behind his equation, especially since the atomic theory of matter had not yet arrived, he did point in the direction to follow by suggesting that the value of T∆Srxn shows up in the heating and cooling requirements of the constant temperature bath of industrial electrochemical processes.  Based on this, I proposed in Block by Block that the link between T∆Srxn and system-structure energy change must exist and that it makes sense that it does, because entropy itself is a function of system-structure energy.

The entropy of a system is quantified by integrating δQrev/T from absolute zero to a given temperature, for which δQrev encompasses the total thermal energy added (reversibly) to the system to not only put the particles in motion (and so increase temperature) but also to push the particles away from each other—think  phase change—overcoming intermolecular attraction in so doing.  At constant temperature, I propose that the difference in entropy between reactants and products (∆Srxn) times temperature (T) quantifies the difference in system-structure energy, which I define as that energy required to establish the location and momentum of each particle since these are the two sources of entropy in Boltzmann’s statistical mechanics.  When reactants transform to products, the resulting atoms and molecules accelerate or decelerate to their new positions in response to the change in intermolecular interactions, the change in degrees of freedom, and to the change in volume. Each motion contributes to temperature change and the resulting heating/cooling requirements of the reaction system.

While Gibbs’ equation isn’t evident in the calorimeter, it is in the electrochemical cell.  And because of this, Gibbs was able to explain the thermodynamics of this cell.  It was this contribution that enabled the chemistry community to finally accept entropy as a thermodynamic property of matter.

To share a concluding piece of trivia, the term ∆Grxn came to be called the “free” energy available to do useful work while T∆Srxn was called unavailable “bound” energy.  Helmholtz originally used these descriptors but without the physics-based “structural energy” reasoning I used above.

What are your thoughts on my hypothesis?

There remains more work to do with my above hypothesis.  I need to validate it with a deeper analysis.  If you have any thoughts on the matter, please leave a public comment or contact me directly at:  rthanlon@mit.edu.

My journey continues.

Thanks to Andreas Trupp and Ken Dill for providing helpful comments on the original version of this post in February 2021.

Like a Bird – Flying Balloons on Other Planets

An artistic concept of a balloon flying at Venus. Image courtesy of Tibor Balint, used with permission.
You can see a cool video on Venus Variable Altitude Balloons produced by Tibor Balint here.

* * *

For this post I invited back fellow thermodynamics enthusiast Mike Pauken, principal engineer at NASA’s Jet Propulsion Laboratory and author of Thermodynamics for Dummies, to complete this 3-part series related to his work on developing balloons for Venus. His first post covered the developmental history of balloons while his second dove into the fundamental reasons why balloons float to begin with. For his third and final post, Mike provides a general discussion of how his group at NASA designs balloons to fly on other planets and especially Venus. Please extend a warm return welcome to Mike! – Bob Hanlon

* * *

Getting to fly a balloon on another planet is not an easy venture. The first step involves telling a compelling story on why it’s important to fly a balloon at places such as Venus, Mars or even Titan. It’s not the designer or builder of balloons that gets to tell this story. Rather, the argument for scientific exploration using a balloon may come from a collective agreement within a community of planetary scientists interested in studying how planets are similar to or differ from each other. Or it may come from political will: a demonstration, as a matter of national pride, that a nation can pursue and accomplish something very difficult and so display superiority among nations.

A compelling story of exploration, whether scientifically or politically motivated, weighs the risks and benefits, looks at all options, and makes recommendations on the best path forward to accomplish the objectives of a new mission. This story must reach the right ears and eyes. It has to reach those who are capable of funding and supporting it and carrying it out. Once that is done, the design, building, and testing of a planetary balloon mission can begin.

A good story though, needs artwork to illustrate new concepts and help people see the vision. One of my friends loves to create artwork of space exploration. His concept of a balloon flight at Venus is shown as the front image of this post. I hope it caught your attention.

Earth is not the only planet in our solar system where balloons can fly. In fact, two balloons flew at Venus in 1985. The Soviet Vega missions carried a balloon as part of their payload. You can click on the Vega missions link to read more. I think the fact that the Soviets flew balloons on Venus in the Cold War era was a matter of political will, to demonstrate superiority, and they did a great job in discovering much of what we know about Venus today.

The primary motivations today for exploring other planets are to improve our understanding of how the universe was formed and whether life exists in places other than Earth. There are amazing telescopes today that allow us to detect planets around faraway stars. These are called exoplanets. But how can we tell what they are like? Well, for starters, let’s get a better understanding of the exoplanets right in our own backyard: the solar system! It’s likely many star systems are similar to our solar system. But there are also some that are very much different. What causes these differences? How did stars and their solar systems form? Can we detect signatures of life at other stars or planets? There are many questions to ask and in the course of scientific discovery, it seems that for every question we find an answer for, we get at least two or more new questions popping up.

There are many ways to explore the solar system: telescopes on the ground or in orbit around Earth were our first eyes outward. These were followed by spacecraft probes that orbit other planets/moons/asteroids, and space vehicles that land on the surface and deploy science instruments in situ to make measurements. Some of these are stationary landers, others are mobile vehicles. Some vehicles are capable of exploration in the “air” of other planets. There are many ways we have explored the universe around us. But for now, our focus is on one particular mode of exploration: the Planetary Balloon.

Planetary Balloons

When developing a balloon for other planets, the first question that needs to be addressed is “Why a balloon?” What are the advantages and risks of flying a balloon compared to other means of collecting data? The competition for balloons includes everything we have in our Earthly airspace, such as airplanes, helicopters, gliders, and blimps. Just as we have many choices of aerial vehicles on Earth, each fills a specific need, and each has its own advantages and disadvantages.

Without going into specific details on each type of aerial vehicle, I will describe the major strengths and weaknesses of balloons for exploring planetary atmospheres. Balloons do not need to expend power to stay aloft like airplanes and helicopters. Since power can be limited on planetary missions, it is an advantage to not need much power for staying up in the air. Another strength: balloons can act as tracers of the atmospheric winds, providing in situ data on wind conditions around a planet. But this is also a weakness for balloons, for they are at the mercy of the wind and cannot fly directly to any specific location. Blimps can stay aloft without much power and can move about relative to the wind and may be able to reach specific target locations. However, blimps are significantly more complex to deploy in the atmosphere of another planet compared to balloons.

Even a balloon has competition with other balloons. The simplest balloon has a fixed volume and will have a fixed gas density inside. It would be helpful to overfill the balloon and have some reserve gas inside to extend the lifetime of the balloon. All balloons will develop leaks over time and slowly descend. If the balloon gas has a fixed density, it will generally float at the same altitude all the time except if vertical winds push it up or down. The Soviet Vega balloons experienced vertical winds quite frequently in their 2-day missions, indicating that Venus has a turbulent atmosphere. More complex balloons are able to change the density of the gas inside by either compressing the lift gas into a storage vessel or by compressing atmospheric gas into a bladder within the balloon. These allow the balloon to change altitude, and since wind may change direction at different altitudes, it could be possible to perform a bit of balloon “steering” by taking advantage of differing winds.

Making Decisions

Let’s say we’ve managed to put together a compelling story about exploring the atmosphere of another planet and we’ve convinced ourselves that a balloon mission is the best means for accomplishing our objectives. There are still a lot of decisions that need to be made to convince a panel of reviewers to recommend our project for funding a space exploration mission. The biggest thing in our story is, what kind of science are we going to do? What instruments do we need to accomplish our science objectives? We start our balloon mission concept by developing a list of instruments. We figure out how much power each one consumes, how much they weigh, how much space they occupy. In addition to the science instruments, the balloon payload will need many support systems: mechanical structure, thermal protection, electrical power, onboard computer processing, data storage, and telecommunications.

We sum up the estimates for cost, mass, volume and power to determine if it fits within the scope of the project. If not, then we must make decisions on what and where to make cuts. Once we have these figured out, we can determine what size the balloon needs to be to support the payload at the altitude range we desire to conduct our mission. We then figure out how all of the systems are going to be packaged inside a vehicle that will deliver the balloon and payload to the planet…working our way backwards, we figure out how big of a rocket is needed to get us on our way. It takes a lot of iteration back and forth to figure all these things out.

Even within the balloon design itself there are many decisions. What materials should be used? How will the balloon be assembled and tested? What lift gas should we use? Helium is generally safer to work with, hydrogen provides a bit more lift. Are there other factors that need to be considered? All of these questions and more need to be answered in designing a balloon system. Let’s move on from here and go over a bit of detail in developing planetary balloons.

Fundamentals of Balloon Design

What are the differences between flying a balloon on another planet compared to Earth? We will start with the thermodynamic equations describing balloons and jump in from there. In my previous post I showed the lift force of a balloon can be calculated from gas density, gravity, and volume using this equation:

FL = V·g·(ρa – ρb)

Where FL is the lift force, V is the balloon gas volume, g is the gravitational acceleration, ρa is the density of the atmospheric gas, and ρb is the density of the balloon gas.

The other force acting on a balloon is the mass gravitational force, FB, which is the weight of the balloon plus its payload. We calculate the balloon mass gravitational force, FB, using the following equation:

FB = g·(mb + mp)

where g is the gravitational acceleration, mb is the mass of the balloon, and mp is the mass of the payload the balloon carries.
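
Setting these two forces equal at neutral buoyancy, where the balloon floats at constant altitude, is worth doing once, because the gravitational acceleration cancels out of the payload calculation (a point we’ll return to below):

$$ F_L = F_B \;\Rightarrow\; V g (\rho_a - \rho_b) = g\,(m_b + m_p) \;\Rightarrow\; m_p = V(\rho_a - \rho_b) - m_b $$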

Physics Holds True Across the Solar System

These equations apply to any planet or moon with an atmosphere. The values you’ll use for the gravitational acceleration and atmospheric density will depend on the properties of the planet or moon. In Table 1, I’ve summarized a comparison of the gravitational acceleration, g, atmospheric gas composition and molar mass between 4 places in our solar system where one might want to fly a balloon:

| Destination | Gravitational Acceleration, m/s² | Atmosphere Composition (approximate) | Molar Mass, g/mol |
| --- | --- | --- | --- |
| Venus | 8.87 | 96% CO2, 4% N2 | 43.4 |
| Earth | 9.81 | 78% N2, 21% O2 | 28.97 |
| Mars | 3.71 | 95% CO2, 3% N2 | 42.6 |
| Titan | 1.35 | 97% N2, 3% CH4 | 27.6 |

Table 1. Comparison of gravitational acceleration, atmospheric composition and molar mass at different planets/moons in our solar system

These things affect the size of the payload that can be carried by a specific size balloon on each planet/moon. One of the fun things about fantasizing about designing balloons for other planets and moons is getting to use different values of the physical parameters for balloon flight than what we customarily use here on the home planet. Let’s take a walk through our solar system and figure out how balloons or payloads will differ depending on the place we wish to fly. The first thing we’ll want to understand on this journey is the pressure/temperature and density profiles of different atmospheres as a function of altitude. It is convenient that density is defined by absolute pressure, temperature and molecular mass, so we can use density to compare the atmospheric profiles. We can then worry about temperature and pressure profiles later.
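
The density relation being used here is just the ideal gas law rearranged, with M the molar mass and R = 8.314 J/(mol·K):

$$ \rho = \frac{P M}{R T} $$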

If we are to fly balloons at other places in the solar system, it may be useful to fly in conditions that are similar to our experiences on Earth. I put together a graph of the atmospheric density profiles of Mars, Earth, Titan and Venus in Figure 1 to illustrate how the density of the atmospheres of the different planets/moons compare with each other. Titan is a moon of Saturn with a wonderful atmosphere for flight. I placed black marks on the density profile for Earth that are equivalent to the density of the Mars atmosphere. Flying a balloon on Mars from 0 to 5 km altitude is like flying a balloon on Earth between 30 and 32 km altitude as I marked on the y-axis in the figure. This is the region of stratospheric Earth balloon flights where balloons the size of stadiums carry science payloads that weigh several thousand pounds.

Figure 1. A comparison of atmospheric density with altitude for Mars, Earth, Titan and Venus.

In contrast, flying a balloon on Earth from 0 to 10 km altitude would be equivalent to flying a balloon on Titan at an altitude range from 30 to 48 km as shown by the blue marks on the Titan profile. For Venus, the balloon altitude would be from 52 to 62 km illustrated with red marks on the Venus profile. If we are to design, build, and test balloons for other planets/moons, we need to consider where we would do this testing in our atmosphere. Testing a balloon destined for Mars would be performed in our stratosphere, while testing for Titan or Venus would be accomplished in the Earth’s troposphere. You can imagine testing in the stratosphere is more difficult than testing in the troposphere.

Let’s Design Balloons for Other Planets

We will compare balloon flight on different planets/moons by determining how much payload a particular balloon could carry on Mars, Titan and Venus. I suggest we use a 10-meter (33 feet) diameter balloon to fly our payload to make our comparisons. We can set the atmospheric density for our balloon flights on Earth, Titan and Venus at 1 kg/m3. This gas density occurs at approximately 2 km on Earth, 34 km on Titan and 54 km on Venus. The atmosphere of Mars does not reach this density, so we’ll do a separate comparison for Mars using a lower atmospheric density of 0.014 kg/m3. This density occurs around 0.8 km on Mars and 33 km on Earth. We could use this density on Titan and Venus, but the altitudes would be over 80 km on Venus and 100 km on Titan. This is so high in their atmospheres it is not of current scientific interest and most observations at these altitudes can be accomplished by an orbiting satellite anyway. Furthermore, making high altitude balloons for other planets is much more difficult than designing for flights at lower altitudes.

We will calculate the lift force, FL, for Mars, Earth, Titan and Venus to compare our payload capacities for each location. The 10-m diameter balloon fixes the volume at 524 cubic meters. The gravitational acceleration is shown in Table 1 above and the density of the atmosphere is fixed at 1 kg/m3 except at Mars where we are using 0.014 kg/m3. All we need now is the density of the lift gas to complete our comparisons. If we choose helium as our lift gas we can compute the density of the helium for each destination very simply by using the molecular mass values. We don’t need to know what the temperature and pressure are of the atmospheres to do this; instead, the density of the balloon lift gas, ρb, can be found using:

ρb = (Mb/Ma)·ρa

Where Mb is the molecular mass of the lift gas, which for helium is 4, and Ma is the molecular mass of the atmosphere gases which are listed in Table 1 above.

The other data we need is the mass estimate of the balloon. For robust balloons designed for Venus we would use a material for the envelope that has an area density of around 150 grams/m2. If we assume for simplicity we use the same area density for Earth and Titan, the balloon mass would be around 57 kg allowing for features besides the balloon material such as fittings, tapes for seams, load lines, et cetera, that add up to the mass of a balloon. But this mass is much too heavy for Mars. In fact, the mass of a balloon for Mars needs to be 1/10 of that for other destinations in order to work. This is the strategy used for Earth’s high altitude balloons: very thin films for balloon envelopes. We will set the Mars balloon mass at 5.7 kg for this example.
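
As a sketch of the arithmetic behind the comparisons that follow (my own sanity check using the numbers above; expect small round-off differences from Table 2):

```python
import math

DIAMETER = 10.0                        # balloon diameter, m
V = (math.pi / 6) * DIAMETER**3        # sphere volume, ~524 m^3
M_HELIUM = 4.0                         # molar mass of helium, g/mol

# destination: (g [m/s^2], atmosphere molar mass [g/mol],
#               atmospheric density [kg/m^3], balloon mass [kg])
destinations = {
    "Earth's Troposphere": (9.81, 28.97, 1.0,   57.0),
    "Venus":               (8.87, 43.4,  1.0,   57.0),
    "Titan":               (1.35, 27.6,  1.0,   57.0),
    "Mars":                (3.71, 42.6,  0.014, 5.7),
}

for name, (g, M_atm, rho_a, m_balloon) in destinations.items():
    rho_b = (M_HELIUM / M_atm) * rho_a         # helium density: (Mb/Ma)*rho_a
    lift_force = V * g * (rho_a - rho_b)       # FL = V*g*(rho_a - rho_b)
    payload = V * (rho_a - rho_b) - m_balloon  # from FL = FB; g cancels
    print(f"{name}: FL = {lift_force:.0f} N, payload = {payload:.0f} kg")
```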

The results of the comparisons are shown in Table 2. Despite the differences in gravity between Earth, Venus and Titan, the amount of payload carried by a 10-m balloon is about the same for each: 405 kg +/- 15 kg, less than 10% variation. If you look at the two equations we used to calculate the payload mass above, in the force balance between the lift force, FL, and the mass gravitational force, FB, the gravity drops out and we find that the variation in payload mass is due only to the differences in the helium density at the planets/moon.

| Destination | Altitude, km | He Density, kg/m³ | Lift Force, N | Net Lift Mass, kg |
| --- | --- | --- | --- | --- |
| Earth’s Troposphere | 2 | 0.139 | 4420 | 394 |
| Venus | 54 | 0.092 | 4220 | 419 |
| Titan | 34 | 0.145 | 605 | 391 |
| Earth’s Stratosphere | 33 | 0.0021 | 66 | 1.1 |
| Mars | 0.8 | 0.0014 | 26 | 1.5 |

Table 2. Comparison of payload capacity of a 10-m diameter balloon between Earth, Venus, Titan and Mars.

You will notice that even though we picked an atmospheric density of 1 kg/m3 for Earth, Venus and Titan, the helium density is not the same for each. The molecular mass of each atmosphere is different because they vary in composition. Earth is mostly nitrogen and oxygen. Venus is mostly carbon dioxide with some nitrogen. In fact, Venus has 4 times more nitrogen (by mass) in its atmosphere than Earth. Titan’s atmosphere is mostly nitrogen with a small amount of methane. These differing atmospheric compositions have a density of 1 kg/m3 at temperatures and pressures different than on Earth as shown in Table 3. We assume here, for simplicity, that the helium inside the balloon is at the same temperature and pressure as the atmosphere. In reality, the gas inside the balloon will have a different temperature and possibly different pressure than the atmosphere due to heating from the sun (in daylight hours) and exchanging energy with the atmosphere, ground and sky surrounding the balloon.

| Location | Altitude, km | Pressure, kPa | Temperature, °C | Density, kg/m³ |
| --- | --- | --- | --- | --- |
| Earth, Troposphere | 2.0 | 79.5 | 2.0 | 1 |
| Venus | 54.3 | 59.3 | 36.9 | 1 |
| Titan | 34.1 | 21.9 | -201.5 | 1 |
| Earth, Stratosphere | 32.4 | 0.98 | -41 | 0.014 |
| Mars | 0.8 | 0.65 | -31.8 | 0.014 |

Table 3. The pressure and temperature of the atmosphere and helium at different planets/moons for density fixed at 1 kg/m³ or 0.014 kg/m³.

It’s very interesting that the pressure and temperature conditions at Venus are not widely different from those here on Earth. This makes Venus a compelling destination for a balloon flight. While Titan is also a very good place to fly a balloon, since the thermodynamics are suitable, the materials challenges are significant. The temperature of the atmosphere is cryogenic. The Mars atmosphere is very much like the Earth’s stratosphere, and there have been plenty of balloon flights in our stratosphere. The chief difficulty is making very lightweight balloons that can be deployed at Mars reliably. This is not an easy thing to do.

Verifying VEGA Balloon Data from Science Papers

I would like to close by presenting some actual data on the 1985 Vega balloon missions to Venus and show that the calculations we use for designing balloons on Earth hold for other planets as well. There are two significant publications with pertinent information about the balloons flown at Venus; I give the references at the end. The first publication is “Overview of VEGA Venus Balloon in Situ Meteorological Measurements” by Sagdeev et al. (1986) and the second is “VEGA Balloon System and Instrumentation” by Kremnev et al. (1986). Sagdeev’s paper includes several plots of the temperature and pressure of the Venus atmosphere, and I’ve snipped a small section in Figure 2 to collect specific flight parameter data.

Figure 2. Pressure, temperature and altitude data collected from the VEGA balloon missions in 1985. The red circled areas were used to select a point for verifying the specific details of the balloon mission.

In Figure 2, the pressure starts out high (almost 900 mbar) and the altitude is low, nearly 50 km. This is the very beginning of the balloon mission. The balloon starts the mission out at a lower altitude because it has heavy tanks of compressed helium attached for inflating the balloon. When the helium is fully injected into the balloon, the tanks are dropped and the balloon rises up to its float altitude of roughly 53.6 km shown here. Data is transmitted intermittently, so there are gaps. I put a red circle about 7 hours into the mission to grab a place where we can use the data to calculate the balloon lift and compare it to the balloon mass. I estimate, from these plots, the pressure is 550 mbar (55 kPa) and the temperature is 311 K (38°C). From the ideal gas law, the atmospheric density is 0.92 kg/m3 under these conditions. The balloon is on the night side of the planet so we can assume the balloon is not heated by sunlight.
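
As a quick check, that density follows from the ideal gas law given earlier, using the Venus atmosphere’s molar mass of 43.4 g/mol from Table 1:

$$ \rho_a = \frac{P M}{R T} = \frac{55{,}000 \times 0.0434}{8.314 \times 311} \approx 0.92\ \text{kg/m}^3 $$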

The 1986 paper by Kremnev gives us information about the balloon size and mass: diameter 3.54 meters, mass of balloon system: 12 kg, mass of payload: 6.9 kg. There are some disparities in the actual balloon size and mass. Some sources report the balloon diameter is 3.4 m and the balloon system mass of 12.5 kg. Either way, we’re just looking to get into the ballpark here with the data we have on hand. The pressure of the helium inside the balloon was not equal to the atmospheric pressure. It was slightly pressurized to provide a constant density balloon and provide some reserve helium to account for leakage over time. The balloon pressure was about 3 kPa over the atmospheric pressure. Using a pressure of 58 kPa and temperature of 311K, the helium density inside the balloon is about 0.09 kg/m3. This density is confirmed also by the report that 2.1 kg of helium was used to inflate the balloon which nominally has a volume of 23.2 m3.

Using our balloon lift equation, we can calculate how much lift force the VEGA balloon had at Venus:

FL = V·g·(ρa – ρb)

FL = 23.2m3 · 8.87m/s2 · (0.92 – 0.09)kg/m3 = 171N

And we can compare this to our mass gravitational force of the VEGA balloon and payload at Venus:

FB = g·(mb + mp)

FB = 8.87m/s2 · (12.5 + 6.9) kg = 172N

There you have it! We’ve been able to pick out data from scientific papers and run the numbers through the well known balloon equations and verify they even work on other worlds, not just Earth!

References:

Sagdeev, R. Z., et al. “Overview of VEGA Venus Balloon in Situ Meteorological Measurements.” Science, vol. 231, no. 4744, 1986, pp. 1411–1414. JSTOR, http://www.jstor.org/stable/1696344. Accessed 29 Dec. 2020.

Kremnev, R. S., et al. “VEGA Balloon System and Instrumentation.” Science, vol. 231, no. 4744, 1986, pp. 1408–1411. JSTOR, http://www.jstor.org/stable/1696343. Accessed 29 Dec. 2020.