Remember this?

Maximum work = -∆G_{rxn} = -(∆H_{rxn} – T∆S_{rxn})

At some point toward the end of undergraduate thermodynamics, we were taught this equation. Unfortunately, most of us, myself included, graduated without actually understanding it. Why? You already know the answer, just by looking at it. Because entropy is involved. While many have a reasonable understanding of heat of reaction (∆H_{rxn}) and temperature (T), few understand the physical meaning of entropy, and fewer still the physical meaning of T∆S_{rxn}.

In the many references I researched while writing *Block by Block – The Historical and Theoretical Foundations of Thermodynamics*, I did not find a single one that offered a physical interpretation of T∆S_{rxn} and so ended up proposing my own. At the end of this post I share my proposal and, in the spirit of the *Riddle me this* theme, invite you to check it out and let me know what you think.

*Background – What problem did the arrival of this equation solve?*

In the mid-1800s, technical analysis of the steam engine led to the discovery of two properties of matter, energy and entropy, and so laid the foundation for the science of thermodynamics. In this new way of thinking, the steam engine transforms chemical energy (the burning of coal) to mechanical energy (the generation of work as defined by weight times change in vertical height lifted). Furthermore, the maximum energy gained by the latter can’t be greater than the energy lost by the former. The conservation of energy dictates this. But what’s the mathematical equation to accompany this statement? That was the challenge. In other words, how would one go about calculating the maximum work possible from the combustion of one bushel of coal?

In one of the first attempts to answer this question, Danish chemist Julius Thomsen in 1854 and separately French chemist Marcellin Berthelot in 1864 proposed that maximum work is determined by the energy difference between reactants and products as quantified by the heat of reaction (-∆H_{rxn}) measured in a calorimeter. [Note: Exothermic reactions require heat removal in a calorimeter, thus resulting in a negative value for ∆H_{rxn}]. They further reasoned that when a reaction is exothermic (∆H_{rxn} < 0), it can generate work on its own, without added energy, and must therefore be spontaneous. Conversely, when a reaction is endothermic, then it requires energy to make it go and so can’t be spontaneous. This was their thinking at least.

Despite its lack of a theoretical underpinning, Thomsen and Berthelot’s *thermal theory of affinity*, as it became known, worked reasonably well for many processes. But not all of them. Sometimes all it takes is a single data point to eliminate a theory. In this case, the data point was the spontaneous *endothermic* reaction. According to Thomsen and Berthelot, it wasn’t supposed to happen, and yet it did.

It was left to J. Willard Gibbs to show us the way. In his 300-page “paper” (1875-78), Gibbs created a new property of matter, G = H – TS (G = Gibbs Energy, H = enthalpy = U + PV, U = internal energy, P = pressure, V = volume, T = absolute temperature, S = entropy), and showed how this property enabled calculation of maximum work through the following:

Maximum work = -∆G_{rxn} = -(∆H_{rxn} – T∆S_{rxn}) (constant temperature and pressure)

Gibbs additionally proved that it is ∆G_{rxn} < 0 that defines a spontaneous reaction and not ∆H_{rxn} < 0.
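Gibbs' criterion can be checked numerically against the very case that defeated the thermal theory of affinity: a spontaneous endothermic process. Here is a minimal sketch, using approximate textbook values for dissolving ammonium nitrate in water (the cold-pack process):

```python
# Sketch: Gibbs' criterion resolves the spontaneous endothermic puzzle.
# Approximate textbook values for dissolving ammonium nitrate in water:
dH_rxn = +25.7e3   # J/mol, endothermic (heat absorbed)
dS_rxn = +108.7    # J/(mol*K), entropy increases on dissolution
T = 298.15         # K

dG_rxn = dH_rxn - T * dS_rxn   # Gibbs: dG = dH - T*dS at constant T, P

# Thomsen and Berthelot would predict "not spontaneous" (dH > 0),
# yet dG < 0 correctly predicts that dissolution proceeds on its own.
print(f"dG_rxn = {dG_rxn/1000:.1f} kJ/mol")
```

Despite absorbing heat, the process has ∆G_{rxn} ≈ −6.7 kJ/mol because the T∆S_{rxn} term outweighs ∆H_{rxn}.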

Unfortunately, Gibbs’ maximum work equation, as powerful as it proved to be, arrived absent a physical explanation. Yes, per Gibbs’ argument, it works based on the macroscopic thermodynamics he developed, but how does it work based on the microscopic world of atoms and molecules?

*A deep dive into Gibbs’ maximum work equation*

As you’ve likely heard somewhere before, the entropy of an isolated system increases to a maximum. It’s a fascinating characteristic of entropy. What this means in the world of statistical mechanics is that the atoms and molecules of the system always move toward the most probable distribution based on location and momentum (velocity). What this means physically, without getting into the details, is that—absent external energy fields like gravity—pressure, temperature, and chemical potential (of each species) equilibrate within a system such that each is constant throughout.

Chemical potential, an invention of Gibbs, quantifies the interactions between the electrons and protons in a system, not directly, but instead through two manifestations of their interactions:

Type I – energy associated with orbital electron distribution around the proton-containing nucleus.

Type II – energy associated with intermolecular interactions (attraction and repulsion) between atoms and molecules, reflective of the fact that molecules have varying degrees of polarity.

The fact that the chemical potential of each species evolves to equality throughout the system becomes especially relevant in the concept of phase and reaction equilibria, wherein each species and the atoms of each species distribute themselves across multiple phases, e.g., solid, liquid, or gas, and between reactants and products.

As regards chemical reactions, a common approach to analyzing them is to assume that they operate at constant temperature and pressure as this is consistent with many industrial processes. Conveniently, these assumptions greatly facilitate calculations because they remove equilibration of temperature and pressure from consideration and put the focus solely on the equilibration of chemical potential. But which aspect of chemical potential, the electron orbitals (Type I) or the intermolecular interactions (Type II) or both? And how are these concepts connected with Gibbs’ maximum work equation, and especially the main topic of this post, the physical meaning of T∆S_{rxn}?

*Energy changes in both Type I and II contribute to temperature change in a chemical reaction*

A chemical reaction occurs spontaneously when the reactant electrons can move to a more probable distribution in the products, which is all about Type I energy changes. The most probable distribution of a set of particles occurs when larger numbers of them populate the lower energy levels. (The relationship between distribution and electron orbital energy levels is addressed in statistical mechanics—alas, outside the scope of this post.) The decrease in energy resulting from the movement of electrons from high potential to low means that, by conservation of energy, energy must increase somewhere else. But where? One might expect the immediate answer to be an increase in the kinetic energy of the products, which would result in an increase in temperature. But this clearly can't always be the case, for how then could the spontaneous endothermic reaction be possible?
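The population claim above can be sketched with a Boltzmann distribution for a hypothetical two-level system (the energy values here are invented purely for illustration):

```python
import math

# Sketch: equilibrium (most probable) populations of a hypothetical
# two-level system follow Boltzmann weights exp(-E/kT), so the lower
# energy level always holds the larger share of particles.
k_B = 1.380649e-23      # J/K, Boltzmann constant
T = 298.15              # K
E = [0.0, 4.0e-21]      # J, assumed energy levels (ground, excited)

weights = [math.exp(-Ei / (k_B * T)) for Ei in E]
Z = sum(weights)                       # partition function
populations = [w / Z for w in weights]

print(populations)  # ground-state share exceeds excited-state share
```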

While the Type I energy changes described above determine whether or not a reaction occurs spontaneously, it's the sum of all energy changes that affects temperature, for at the moment of reaction, the Type I redistribution of orbital electron energies from high to low causes an immediate temperature-affecting change in what I call "system-structure" energy. To me, system-structure energy comprises the Type II intermolecular interactions along with the degrees of freedom governing molecular motion (translation, rotation, vibration) and the number of molecules present. As the number of molecules changes at constant pressure, system volume changes, resulting in P∆V work along with a corresponding change in temperature. Each of these system-structure energy changes contributes to temperature change and thus to ∆H_{rxn}. Whether the change in system-structure energy drives the temperature up or down depends on the specific reaction involved. And it's the size of this change relative to that of the Type I energy change that determines whether the reaction registers as exothermic or endothermic in the calorimeter.
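The P∆V contribution mentioned above can be made concrete with the ideal-gas relation P∆V = ∆n·RT at constant temperature and pressure. A minimal sketch, assuming a reaction that produces one extra mole of gas:

```python
# Sketch: P*dV work when the mole count of gas changes at constant T, P,
# via the ideal-gas relation P*dV = dn*R*T.
R = 8.314       # J/(mol*K), gas constant
T = 298.15      # K
dn_gas = 1.0    # mol of gas created by the reaction (assumed for illustration)

PdV_work = dn_gas * R * T   # J, work done by the system pushing back the atmosphere
print(f"P*dV = {PdV_work/1000:.2f} kJ")
```

About 2.5 kJ per extra mole of gas at room temperature, a real, if modest, contribution to the overall energy balance.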

*Thermodynamic difference between the calorimeter and the electrochemical cell*

In the reaction calorimeter, changes in Type I and system-structure energies occur together, inseparable, both contributing to the total heat release (∆H_{rxn}), neither capable of being isolated and studied. In the electrochemical cell, on the other hand, the two changes are inadvertently kept separate, simply as a feature of the design, and this separation conveniently reveals the physical meaning of Gibbs' maximum work equation in the cell's operation. In this cell, reactants are separated from each other by a porous membrane into two half-cells, and the complete cell is maintained at constant temperature via a thermal bath. Once set up, the cell establishes a voltage difference between the two half-cells, and electrons flow from high voltage to low through an external wire, converting reactants to products.

*Assigning the variables in Gibbs' maximum work equation to Type I and system-structure energy changes*

With this background, let’s now consider the thermodynamics of the electrochemical cell as a means to assign the variables in Gibbs’ maximum work equation to Type I and system-structure energy changes and so position ourselves to gain insight into the physical meaning of T∆S_{rxn} (see illustration below). Once converted into energy units, the cell voltage difference quantifies the maximum work possible for the chemical reaction, for if you ran the wire through a motor, you could employ the voltage difference to lift a weight and generate useful work. This voltage difference reflects a difference in orbital electron energies (Type I). Recall earlier that maximum work is defined as -∆G_{rxn}. Thus, the physical meaning of -∆G_{rxn} can be understood based on Type I energy changes.
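The voltage-to-work conversion described above follows the standard electrochemical relation, maximum work = −∆G_{rxn} = nFE. A minimal sketch using textbook numbers for the Daniell cell (Zn + Cu²⁺ → Zn²⁺ + Cu):

```python
# Sketch: converting a cell voltage into maximum work via -dG = n*F*E,
# using standard textbook numbers for the Daniell cell.
n = 2            # electrons transferred per formula unit
F = 96485.0      # C/mol, Faraday constant
E_cell = 1.10    # V, standard cell potential

max_work = n * F * E_cell   # J/mol of reaction, the maximum useful work
dG_rxn = -max_work          # J/mol, Gibbs energy change of the reaction

print(f"dG_rxn = {dG_rxn/1000:.0f} kJ/mol")
```

Two electrons moving through a 1.10 V difference, per formula unit reacted, add up to roughly 212 kJ of available work per mole.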

Also recall that, considering the chemical reaction as a whole, the heat released by the reaction corresponds to the value of ∆H_{rxn} as quantified in the calorimeter. Thus, the value of ∆H_{rxn} is determined by the sum of the two energy changes: Type I plus system-structure.

This now leaves us with the T∆S_{rxn} term in Gibbs’ equation and also with system-structure energy changes. These two must be linked as they’re the only terms remaining. But how? And how are they, in turn, connected with the electrochemical process? Again, I couldn’t find the answer to these questions in my references. Fortunately, Gibbs guided me toward a hypothesis.
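One way to see the link numerically: since the cell voltage fixes −∆G_{rxn} and the calorimeter fixes ∆H_{rxn}, the T∆S_{rxn} term can be backed out as their difference. A sketch for the Daniell cell reaction, using approximate standard formation values taken from common thermodynamic tables (the table values are assumptions of this illustration, not from the post):

```python
# Sketch: backing out T*dS_rxn as dH_rxn - dG_rxn for the Daniell cell,
# Zn + Cu2+ -> Zn2+ + Cu, with approximate standard formation values
# (kJ/mol) for the aqueous ions; the pure metals are zero by convention.
dHf = {"Zn2+": -153.9, "Cu2+": +64.8}
dGf = {"Zn2+": -147.1, "Cu2+": +65.5}

dH_rxn = dHf["Zn2+"] - dHf["Cu2+"]   # products minus reactants
dG_rxn = dGf["Zn2+"] - dGf["Cu2+"]
TdS_rxn = dH_rxn - dG_rxn            # the "bound" energy term in Gibbs' equation

print(f"T*dS_rxn = {TdS_rxn:.1f} kJ/mol")
```

The small residual (a few kJ/mol) is the heat duty that the constant-temperature bath must handle, which is exactly where Gibbs said T∆S_{rxn} would show up.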

*My hypothesis on the physical meaning of T∆S_{rxn}*

While Gibbs intentionally didn't speculate about the underlying physics behind his equation (understandably, since the atomic theory of matter had not yet been established), he did point in the direction to follow by suggesting that the value of T∆S_{rxn} shows up in the heating and cooling requirements of the constant-temperature bath of industrial electrochemical processes. Based on this, I proposed in *Block by Block* that a link between T∆S_{rxn} and system-structure energy change must exist, and that it makes sense that it does, because entropy itself is a function of system-structure energy.

The entropy of a system is quantified by integrating δQ_{rev}/T from absolute zero to a given temperature, where δQ_{rev} encompasses the total thermal energy added (reversibly) to the system, not only to put the particles in motion (and so increase temperature) but also to push the particles away from each other (think phase change), overcoming intermolecular attraction in so doing. At constant temperature, I propose that the difference in entropy between reactants and products (∆S_{rxn}) times temperature (T) quantifies the difference in system-structure energy, which I define as the energy required to establish the location and momentum of each particle, since these are the two sources of entropy in Boltzmann's statistical mechanics. When reactants transform to products, the resulting atoms and molecules accelerate or decelerate to their new positions in response to the change in intermolecular interactions, the change in degrees of freedom, and the change in volume. Each motion contributes to temperature change and thus to the heating/cooling requirements of the reaction system.
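The two contributions to the entropy integral described above, heating and phase change, can be sketched for one mole of water near its melting point (the heat capacity figure is approximate and assumed constant over this small range):

```python
import math

# Sketch: two pieces of the integral of dQ_rev/T for one mole of water --
# heating (putting particles in motion) and melting (pushing particles apart).
Cp_ice = 38.0        # J/(mol*K), approximate heat capacity of ice near 0 C
dH_fus = 6010.0      # J/mol, enthalpy of fusion at the melting point
T_melt = 273.15      # K

# Reversible heating from 263.15 K to the melting point: integral of Cp/T dT
dS_heating = Cp_ice * math.log(T_melt / 263.15)
# Melting at constant temperature: Q_rev / T for the phase change
dS_fusion = dH_fus / T_melt

print(dS_heating, dS_fusion)
```

The fusion term (about 22 J/(mol·K)) dwarfs the heating term, illustrating how much of δQ_{rev} goes into rearranging structure rather than raising temperature.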

While Gibbs’ equation isn’t evident in the calorimeter, it is in the electrochemical cell. And because of this, Gibbs was able to explain the thermodynamics of this cell. It was this contribution that enabled the chemistry community to finally accept entropy as a thermodynamic property of matter.

To share a concluding piece of trivia, the term ∆G_{rxn} came to be called the “free” energy available to do useful work while T∆S_{rxn} was called unavailable “bound” energy. Helmholtz originally used these descriptors but without the physics-based “structural energy” reasoning I used above.

*What are your thoughts on my hypothesis?*

There remains more work to do with my above hypothesis. I need to validate it with a deeper analysis. If you have any thoughts on the matter, please leave a public comment or contact me directly at: rthanlon@mit.edu.

My journey continues.

*Thanks to Andreas Trupp and Ken Dill for providing helpful comments on the original version of this post in February 2021.*