Newton: On whose shoulders did he stand?

No Newton, no Principia. That much is clear. But did Newton do it alone? He was naturally exposed to the ideas of such predecessors as Descartes and Galileo and such contemporaries as Leibniz and Huygens. That these thinkers influenced Newton is reflected in his own words: “If I have seen further it is by standing on the shoulders of giants.” But the larger question regarding the Principia remains. Did Newton do it alone? The answer: not entirely.

Motion and change in motion

Thanks to Galileo’s work, motion, and especially change in motion, remained the central focus of science during the 17th century, and the need to resolve these concepts, especially as they pertained to planetary motion, energized many coffeehouse discussions. Rising to the top of these discussions were the concepts of action-at-a-distance, circular motion, and the relation between the two.

1665-66 annus mirabilis

While to many, action-at-a-distance was impossible, to Newton, it wasn’t. Indeed, Newton embraced this concept when he began developing his theory of force during his annus mirabilis (miracle year). This was one of the most famous periods in science, and it began in 1665 when Isaac Newton (1642-1727), seeking to get as far away from the Great Plague of London as possible, departed Cambridge University, where he was a student, taking all of his books to his family home in the countryside. In his one year of isolation, Newton “voyag[ed] through strange seas of thought alone” [1] and uncovered the logic and laws of many different phenomena in nature. He was all of 24 years old at the time! [What a great example of Cal Newport’s Deep Work!]

The challenge was circular motion

For Newton, though, circular motion remained a puzzle. Prior to 1679, Newton, along with many others, incorrectly viewed circular motion as an equilibrium between two opposing forces: an inward gravitational force that pulls a circling body toward the center and an apparent outward force that pushes a circling body away from the center. These are referred to as “centripetal” (center-seeking) and “centrifugal” (center-fleeing) forces, respectively.

1679 – Robert Hooke re-frames circular motion for Newton

But Newton’s mistaken view changed in 1679 when Robert Hooke (1635-1703) properly re-framed the issue. In his letters to Newton, Hooke proposed that planetary orbits follow motions caused by a central attraction continually diverting a straight-line inertial motion into a closed orbit. To Hooke, a single unbalanced force is at work in circular motion, specifically an inverse-square attraction of the sun for the planets, which leads to acceleration as opposed to the non-acceleration involved with Newton’s equilibrium view. Frustrated with the inability of his equilibrium view to describe nature, Newton immediately latched onto Hooke’s concept as the critical missing piece to his evolving philosophy.

Thank goodness for Hooke’s shoulders!

Without the insight provided by Hooke, Newton’s Principia probably would not have happened. It was Hooke who came along and sliced away the confusion by “exposing the basic dynamic factors [of circular motion] with striking clarity.” [2] It was Hooke who corrected the misconceptions about circular motion. It was Hooke who properly framed the problem. It was Hooke’s conceptual insight and mentoring that “freed Newton from the inhibiting concept of an equilibrium in circular motion.” [3] With his newfound clarity, Newton let go of the concept of the fictitious centrifugal force, embraced the concept of his newly created centripetal force (Universal Gravitation) pulling toward the center, and so changed the world of science. The year 1679 was a crucial turning point in Newton’s intellectual life and Hooke was the cause.

Why not Hooke?

Given all this, why Newton and not Hooke? Why didn’t Hooke author the Principia? Why all the acclaim to Newton? The short answer, according to science historian Richard Westfall, is that the bright idea is overrated when compared to the demonstrated theory. While Hooke did indeed provide a critical insight to Newton, the overall problem of motion, including a fundamental understanding of Universal Gravitation, remained unsolved, and the reason was that no one, including Hooke, could work out the math. Well, no one except Newton. You see, of Newton’s many annus mirabilis breakthroughs, one of the most impactful was his invention of calculus, and it was his insightful use of calculus that enabled him to quantify time-varying parameters, such as instantaneous rates-of-change.

Unfortunately, Newton had intentionally kept these breakthrough ideas of 1665-66 away from the public, a result of his “paralyzing fear of exposing his thoughts.” [4] They remained in his private notebooks, sitting on the sidelines, a tool waiting to be used.

1687 – The Principia

It wasn’t until 20 years after his miracle year that Newton finally sat down and created his famous Philosophiae Naturalis Principia Mathematica, since shortened to the Principia. Published in 1687 by the then 45-year-old Newton and eventually hailed by the scientific community as a masterpiece, the Principia presented the foundation of a new physics of motion, based on his Laws of Motion and Universal Gravitation, that we now call Classical Mechanics.

Let’s not forget Halley’s shoulders!

But why then? What happened to trigger Newton’s massive undertaking? Here we meet the other person critical to the creation of the Principia, namely Edmond Halley (1656-1742). Halley recognized that Newton had something vital to share with the science community; Halley recognized Newton’s genius. And so it was Halley who travelled to Newton in 1684 to call him forward to solve the yet unsolved problems of motion and change in motion.

Thank goodness for Halley! He is one of the heroes in this story. What would have happened had he not been present? He was responsible for lighting the fire within Newton. And he did it with skill, approaching Newton with ego-stroking flattery as reflected by his even more direct, follow-up request in 1687: “You will do your self the honour of perfecting scientifically what all past ages have but blindly groped after.” [5]

And so the furnace was lit; “a fever possessed [Newton], like none since the plague years.” [6] Through Halley’s gentle but firm push, Newton shifted his intense focus away from his other pursuits–Newton was Professor of Mathematics at Cambridge at the time–and towards the cosmos. Newton drew upon his volume of unpublished annus mirabilis work and his later Hooke-inspired insights and pursued–slowly, methodically–the answer. And when it was all done, the Principia was born.

Clearly, no Hooke, no Halley, no Principia. But even more clearly, no Newton, no Principia.

The landscape has been so totally changed, the ways of thinking have been so deeply affected, that it is very hard to get hold of what it was like before. It is very hard to realize how total a change in outlook [Newton] produced – Hermann Bondi [7]

Both Joseph-Louis Lagrange and Pierre-Simon, Marquis de Laplace regretted that there was only one fundamental law of the universe, the law of universal gravitation, and that Newton had lived before them, foreclosing them from the glory of its discovery – I. Bernard Cohen and Richard S. Westfall [8]

[1] Wordsworth, William. 1850. The Prelude. Book Third. Residence at Cambridge, Lines 58-63. “And from my pillow, looking forth by light/Of moon or favouring stars, I could behold/The antechapel where the statue stood/Of Newton with his prism and silent face/The marble index of a mind for ever/Voyaging through strange seas of Thought, alone.”

[2] Westfall, Richard S. 1971. Force in Newton’s Physics: The Science of Dynamics in the Seventeenth Century. American Elsevier, New York, p. 426.

[3] Westfall, p. 433.

[4] Cohen, I. Bernard, and Richard S. Westfall, eds. 1995. Newton: Texts, Backgrounds, Commentaries. 1st ed. A Norton Critical Edition. New York, NY: W.W. Norton. p. 314. Referenced to John Maynard Keynes, “Newton, the Man,” in The Royal Society Newton Tercentenary Celebrations (1947). “[Newton’s] deepest instincts were occult, esoteric, semantic—with profound shrinking from the world, a paralyzing fear of exposing his thoughts, his beliefs, his discoveries in all nakedness to the inspection and criticism of the world.”

[5] Gleick, James. 2003. Isaac Newton. 1st ed. New York: Pantheon Books, p. 129.

[6] Gleick, p. 124.

[7] Bondi, Hermann. 1988. “Newton and the Twentieth Century—A Personal View.” In Let Newton Be! A New Perspective on His Life and Works (Editors: R. Flood, J. Fauvel, M. Shortland, R. Wilson).

[8] Cohen and Westfall, p. xiv-xv.

Thank you for reading my post. I go into much greater detail about the life and accomplishments of Sir Isaac Newton (1642-1727) in my book, Block by Block – The Historical and Theoretical Foundations of Thermodynamics. Energy came to be viewed through the Classical Mechanics paradigm created by Newton in the Principia. An excellent account of Newton’s work can be found in Richard Westfall’s Force in Newton’s Physics: The Science of Dynamics in the Seventeenth Century (American Elsevier, New York).

How did Galileo measure time?

Galileo, perhaps more than any other single person, was responsible for the birth of modern science – Stephen Hawking [1]

Galileo was fascinated by motion and continually experimented with pendulums, cannons, and rolling balls to understand why bodies move the way they do. The arguable culmination of these efforts occurred in 1604 when he discovered what became known as “The Law of Fall”: the vertical distance travelled from rest (h) during free fall increases with the square of time (t).

h ∝ t² (Galileo’s Law of Fall)

Galileo went on to assert that given the Law of Fall and given that the distance fallen (h) equals average speed (v) multiplied by time, then speed itself must be proportional to time.

v ∝ t

Combining the two relationships, Galileo arrived at one of the most significant discoveries in history.

h ∝ v²
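The chain of reasoning above can be made explicit with a short derivation in modern notation, using g for the constant acceleration and an overbar for average speed (symbols Galileo himself did not use):

```latex
\begin{aligned}
v &= g t && \text{(speed grows linearly with time)} \\
h &= \bar{v}\, t = \frac{v}{2}\, t = \tfrac{1}{2} g t^{2}
  && \Rightarrow\; h \propto t^{2} \\
h &= \tfrac{1}{2} g \left(\frac{v}{g}\right)^{2} = \frac{v^{2}}{2g}
  && \Rightarrow\; h \propto v^{2}
\end{aligned}
```

The key step is that for uniform acceleration from rest, the average speed is half the final speed, so eliminating t between the first two lines yields the relation between h and v².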

Simply put, these findings were momentous, being the first to 1) identify the importance of v² in science, and 2) discover the trade-off between what would become known as kinetic energy (1/2 mv²) and potential energy (mgh), as formalized in the conservation of mechanical energy (m = mass, g = gravitational acceleration) that was established around 1750.

1/2 mv² + mgh = constant (Conservation of Mechanical Energy)

With this background, let’s now look at how Galileo accomplished this great feat.

The Law of Fall determined by an ingenious experiment

Hold a ball straight out in front of you. Then drop it. Watch how fast it accelerates and hits the ground. How could you possibly quantify this event, especially if you lived back in the 1600s absent of any kind of timing device? The fact that Galileo figured this out fascinates me. Here’s how he did it.

Galileo first focused on what he could directly measure: time and distance. But how did he measure these? When objects fall, they fall fast. So he slowed things down. He let balls roll down an inclined plane, which decreased the force in the direction of motion (look to your right). In this way he was able to mark on the plane distance from rest at fixed time increments. 

But wait a minute! Fixed time increments? Yes! How do we know? Because we have the original data! One would have thought all of Galileo’s papers would have been analyzed by the 20th century, but in 1973, Stillman Drake, a leading Galileo expert, discovered otherwise [2]. While going through Galileo’s own notebooks, he unearthed, to his surprise, the experimental data supporting the Law of Fall (look to your left). Galileo had published the result but not the data leading to the result.

But wait another minute! How did Galileo measure those fixed time increments, especially in an era when the necessary timing devices didn’t even exist? Ah! This is where things get interesting, because Galileo didn’t say. Into this void stepped Drake. Drake suggested that since Galileo was raised in a musical world, he likely had a deep respect for the strong internal rhythm innate to human beings. He proposed that Galileo made use of this by singing a song or reciting a poem and using the cadence to mark time, placing movable “frets” along the incline to create audible bumps when the ball passed over them. By adjusting or tuning these frets, Galileo was able to accurately sync the bump sounds with his internal cadence, thus providing a means to achieve equal divisions of small time increments. This proposed approach is strongly supported by the fixed time increments in the data; to Drake, the only method that would result in accurate fixed time increments would be a fixed cadence. “But wait!” you say, yet again. “How could this possibly provide the necessary accuracy?” Well, just observe yourself listening to live music when the drummer is but a fraction of a second off-beat. You cringe, right? This is because your innate rhythm is that strong.
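The fixed-time-increment signature is easy to check numerically: if h ∝ t², the marks made at equal time ticks sit at distances growing as 1, 4, 9, 16, …, so the gaps between successive marks grow as the odd numbers 1, 3, 5, 7, … A minimal sketch, with the effective acceleration chosen arbitrarily since the incline merely rescales it:

```python
# Positions from rest under constant acceleration, sampled at equal time ticks.
# The incline only rescales the acceleration, so its value here is arbitrary.
a = 2.0  # effective acceleration along the incline (arbitrary units)

ticks = range(1, 9)
positions = [0.5 * a * t**2 for t in ticks]          # h ∝ t²
intervals = [positions[0]] + [
    positions[i] - positions[i - 1] for i in range(1, len(positions))
]

# Gaps between successive marks, relative to the first gap,
# follow the odd numbers 1, 3, 5, 7, ...
ratios = [d / positions[0] for d in intervals]
print(ratios)  # [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```

This odd-number pattern is exactly what one would listen for in the fret bumps: equal time, steadily widening spacing.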

Now let’s take a step back and consider the larger impact that Galileo had on science.

Galileo’s discoveries, including The Law of Fall, led to the rise of modern science. Here are some reasons why.

The dawn of a new variable to science – time

Galileo was one of the first to use the concept of time as a dimension in a mathematical relationship. As noted by science historian Charles Gillispie [3], “Time eluded science until Galileo.” Linking time with another dimension, distance, opened the door to developing more complex relationships involving speed and acceleration.

Galileo brought mathematics into physics

Historically, physicists and mathematicians didn’t interact. Physicists resisted the use of math since professors in this area were invariably philosophers and not mathematicians. Galileo joined the two fields together by using a mathematical approach to describe and quantify the physical world and so test the hypotheses he formed. Moreover, he believed that “[the universe] is written in the language of mathematics” [4], and thus that mathematics could be used to describe all natural phenomena and, conversely, that all natural phenomena must follow mathematical behavior. In his search for the Law of Fall, for example, he believed that a simple equation existed and then found the equation.

The scientific method

Although we may not recognize it, we work today in a world largely created by Galileo. We make observations of some process or phenomenon and make a hypothesis, e.g., a mathematical model, to explain it. We then design experiments to generate data to test the hypothesis. Is it right or wrong? This approach is built on Galileo’s approach that favors data over preconceived ideas.

Galileo and the launch of the scientific revolution

[Thus] the study of nature entered on the secure methods of a science, after having for many centuries done nothing but grope in the dark. – Kant in reference to Galileo and others using experiments to understand nature. [5]

The scientific revolution arguably started, if one had to pick a year, in 1543 when Copernicus put the sun back at the center of the solar system where it belonged. But while Copernicus may have started the revolution, Galileo clearly fanned its flames. His conviction that practical and controlled experimental observation, enhanced by mathematical theory and quantification, was the key to a more satisfying understanding of nature became an inspiration to those who followed.

So what did Galileo truly do that made the difference?

Between Galileo and Aristotle there were just a lot of guys with theories they never bothered to test – Helen Monaco’s character in Philip Kerr’s Prayer: A Novel [6]

So what did Galileo truly do that made the difference? Data! It’s fascinating to know that while everyone from Aristotle to those immediately preceding Galileo thought about all sorts of things, many of them the same things that Galileo was to think about, none of them took any measurements. Galileo measured while others thought. We see this around us today: much thinking, proposing, and speculating, but without measurements, it really doesn’t mean much. As a former boss of mine once wisely said, “One data point is worth one thousand opinions.” Rarely has this been better put.

Thank you for reading my post. I go into much greater detail about the life and accomplishments of Galileo Galilei (1564-1642) in my book, Block by Block – The Historical and Theoretical Foundations of Thermodynamics. It was Galileo’s work that eventually helped lead to the 1st Law of Thermodynamics, based on energy and its conservation.

The above illustrations are from my book. My thanks to Carly Sanker for bringing her great skill to creating them from my ideas. She is an excellent artist.

END

[1] Hawking, Stephen W., 1988, A Brief History of Time: From the Big Bang to Black Holes. A Bantam Book. Toronto: Bantam Books, p. 179.

[2] Drake, Stillman. 1973. “Galileo’s Discovery of the Law of Free Fall.” Scientific American 228 (5): pp. 84–92; 1975. “The Role of Music in Galileo’s Experiments.” Scientific American 232 (6): pp. 98–104.

[3] Gillispie, Charles Coulston. 1990. The Edge of Objectivity: An Essay in the History of Scientific Ideas. 10. paperback printing and first printing with the new preface. Princeton, NJ: Princeton Univ. Press. p. 42.

[4] Popkin, Richard Henry, ed. 1966. The Philosophy of the Sixteenth and Seventeenth Centuries. New York: The Free Press. p. 65.

[5] Kant, Immanuel. 1896. Immanuel Kant’s Critique of Pure Reason: In Commemoration of the Centenary of Its First Publication. Macmillan. p. 692.

[6] Kerr, Philip. 2015. Prayer: A Novel. G.P. Putnam’s Sons. p. 73.

Science and the power of multiple hypotheses

When asked my opinion on various science-related topics that are in the news, my usual reply is, “I don’t know.” It’s not that I’m incapable of knowing. It’s that I haven’t studied the topics in enough detail to have a well-grounded opinion. My scientific expertise lies elsewhere, in a less popular news cycle.

HOWEVER

If I were asked to develop a well-grounded opinion and had the time to do so, I would follow an approach that has withstood the test of time: the scientific method. My take is that while many have heard of this approach, only a few truly understand it, and fewer still employ it to its full capability. So my objectives here are to 1) share what this method entails, drawing largely from John Platt’s excellent article titled “Strong Inference” (1964), 2) provide examples from the evolution of thermodynamics to highlight key points, and 3) encourage you to embrace this approach in your own work.

Briefly speaking, the first step in the scientific method is INDUCTION. One gathers data, experiences, and observations and then induces a hypothesis to explain it all. In the second step, called DEDUCTION, one assumes the hypothesis to be true and then follows a rigorous cause-effect progression of thought to arrive at an array of consequences that have not yet been observed. The consequences inferred in this way cannot be false if the starting hypothesis is true (and no mistakes are made).

Thermodynamics generally evolved as laid out above. Rudolf Clausius reviewed years of data and analyses, especially including Sadi Carnot’s theoretical analysis of the steam engine and James Joule’s extensive work-heat experiments, and induced: dU = TdS – PdV. J. Willard Gibbs took this equation, assumed it to be true, and then deduced 300 pages of consequences, all the while excluding assumptions to ensure no weak links in his strong cause-effect chain of logic. To the best of my knowledge, he made no mistakes. Multiple experiments challenged his deduced consequences; none proved them false. Gibbs’ success led scientists to view Clausius’ induced hypothesis as being true.

In parallel to the above efforts, which established classical thermodynamics, was the work of Clausius, James Clerk Maxwell, and Ludwig Boltzmann, among others, to establish statistical mechanics. One of my favorite examples of the scientific method in practice came from this work. Based on the induced assumption of the existence of gaseous atoms, Maxwell, an expert mathematician, deduced a kinetic model of gas behavior that predicted the viscosity of gas to be independent of pressure, a consequence that he simply couldn’t believe. But being a firm adherent of the scientific method, he fully understood the need to test the consequence. So he rolled up his sleeves and, together with his wife Katherine, assembled a large apparatus in their home to conduct a series of experiments that showed… the viscosity of gas to be independent of pressure! This discovery was a tremendous contribution to experimental physics and a wonderful example validating the worth of the scientific method.

HOWEVER

There’s a critical weakness in the above illustration. Can you spot it? It’s the thought that a single hypothesis is all you should strive toward when seeking to solve a problem.

Be honest with yourself. What happens when you come up with your own reason for why something happens the way it does? You latch onto it. You protect it. It’s your baby. It’s human nature because, when all is said and done, you want to be right. Ah, the ego at work! And it’s exactly this situation that can do great damage to science. People become wedded to their singular “I have the answer!” moments and then go forward, ‘cherry picking’ evidence that supports their theory while selectively casting aside evidence that doesn’t. And it is exactly this situation that inspired John Platt to take the scientific method to a higher level: strong inference.

Platt proposes that the induction process, illustrated below, should lead not to one but to multiple hypotheses, as many as one can generate that could explain the data. The act of proposing “multiple” ensures that scientists don’t become wedded to “one.” The subsequent deduction process assumes that each hypothesis is true, whereupon the resulting consequences are tested. Each hypothesis must be testable in this process, with the objective of the test being to effectively kill the hypothesis with a definitive experiment.

Recall that a hypothesis can’t be proven correct but can be proven false. All it takes is a single data point. If by logical reasoning and accompanying experimentation the proposed hypothesis doesn’t lead to the specific consequence, then the hypothesis is assumed to be false and must be removed from consideration. As Richard Feynman famously stated, “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” Only the hypothesis that cannot be proven false, the last one standing, is taken to be the correct hypothesis. Even then, this does not constitute a proof. A hypothesis is only taken to be correct, for the time being, if it offers a means to be tested and if those tests can’t prove it incorrect.

While the illustration below suggests a linear process, in reality, the process is more likely to be iterative. The initiating step typically occurs once a problem or an unexplainable observation is detected. At this point, it is critical that a statement of the problem be written out to ensure clarity and bring focus. As more is learned about the problem, as hypotheses are proposed and tested, as some hypotheses are eliminated and others are expanded to multiple sub-hypotheses, the entire process, with evolving and more detailed problem statements, may repeat itself, over and over, until a single detailed testable hypothesis remains.
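As a caricature, the elimination logic of strong inference can be sketched in a few lines of code. Everything below, the hypotheses, the data, and the predicate functions, is invented purely for illustration:

```python
# Toy illustration of strong inference: keep only hypotheses that survive
# every test. Hypotheses are modeled as predicate functions; each observation
# is a chance to falsify. All names and data here are fabricated.

observations = [(1, 1), (2, 4), (3, 9)]  # (t, h) pairs, made-up data

hypotheses = {
    "h ∝ t":  lambda t, h: h == t,
    "h ∝ t²": lambda t, h: h == t**2,
    "h ∝ t³": lambda t, h: h == t**3,
}

# A single contrary data point kills a hypothesis; survivors are only
# "not yet falsified", never proven.
surviving = {
    name: rule
    for name, rule in hypotheses.items()
    if all(rule(t, h) for t, h in observations)
}
print(list(surviving))  # ['h ∝ t²']
```

The point of the sketch is the shape of the loop: many candidates in, definitive tests applied to each, and whatever is left standing held only provisionally.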

Returning to this post’s opening, while I don’t have the time to invest in researching the various sciences being debated today, I do have the time to read those who are doing the research. My criteria for trusting their conclusions? Whether or not they followed Platt’s strong inference model. I want to see the collected data, ensuring that no cherry picking or selective elimination has occurred. I want to see that dissent was encouraged and not ignored. I want to see multiple hypotheses laid out on the table. I want to see an intelligent experimental attack on each and every hypothesis. I want to see the reasoning that leaves hypotheses standing or falling. If I see all of this, then I trust.

I encourage all scientists, no matter the field, to embrace strong inference. Yes, it takes time. But thinking that this process could be short-circuited because you believe you know the answer will eventually lead to problems. As a PhD engineer and friend of mine once said, “some of my biggest errors were when I didn’t follow the methodology.”

A fitting conclusion to this post is the below wonderful quote from Louis Pasteur that captures the essence of Platt’s strong inference model.

“What I am here asking of you, and what you in turn will ask of those whom you will train, is the most difficult thing the inventor has to learn. To believe that one has found an important scientific fact and to be consumed by desire to announce it, and yet to be constrained to combat this impulse for days, weeks, sometimes years, to endeavor to ruin one’s own experiments, and to announce one’s discovery only after one has laid to rest all the contrary hypotheses, yes, that is indeed an arduous task. But when after all these efforts one finally achieves certainty, one feels one of the deepest joys it is given to the human soul to experience.” – Louis Pasteur, Nov. 14, 1888, in a speech given at the inauguration of the Pasteur Institute in Paris.

Thank you for reading my post. The above illustrations are from my book, Block by Block – The Historical and Theoretical Foundations of Thermodynamics. My thanks to Carly Sanker for bringing her great skill to creating them from my ideas. She is an excellent artist.

Special thanks to Jim Faler and Brian Stutts for introducing me to John Platt’s work and also for their contributions to this post

Joule-Thomson Effect (Part 2) – my hypothesis

In a previous video (here), I stated my belief that a better understanding of thermodynamics is available by identifying the connections between the micro-world of moving and interacting atoms and the macro-world of classical thermodynamics. My goal is to do just this. My starting point? The Joule-Thomson effect, which is the temperature change that occurs in a gas stream as it is slowly depressurized. In this post I share my hypothesis as to what I believe is happening at the physical level to cause this effect.

Back in the mid-1800s, James Joule discovered that the temperature of a gas changes upon depressurization through a porous plug. At room temperature, most but not all gases cool down. Hydrogen and helium heat up.

Richard Feynman is my guiding light in trying to figure out the physical cause of this effect.

So let’s look at this. What happens as atoms approach each other? Well, at a large distance, nothing. They really don’t “see” each other since the forces of attraction and repulsion are insignificant. The motion is thus “free” and the gas can be modeled as an “ideal gas” with no intermolecular interactions.

As the atoms come closer toward each other, the attractive interaction becomes significant. This interaction happens when the electrons of one atom are attracted to the protons of the other. The atoms accelerate and their speeds increase.

At a certain point, closer still, the electrons of the two atoms repel each other and the interaction switches from attraction to strong repulsion. The atoms decelerate and their speeds decrease.

Since temperature is related to the average speed of atoms and molecules, let’s take a closer look at how these interactions affect the speed of atoms and thus the temperature of the gas as a whole.

Generally speaking, relative to “free motion”, when the attraction interaction is significant, atoms will be moving at higher speeds, and when the repulsion interaction is significant, atoms will be moving at slower speeds.

Gas temperature is related to the time-averaged kinetic energy of the atoms and thus depends on the relative amount of time the atoms spend in each of these categories. At low pressure when large distances separate atoms, the interactions are insignificant and “free motion” dominates. At high pressure when small distances separate the atoms, the interactions are significant. So whether heating or cooling occurs during Joule-Thomson expansion from high to low pressure depends on which interaction dominates at high pressure, attraction or repulsion. Per below, attraction dominance leads to cooling, while repulsion dominance leads to heating.
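A common way to model this attraction/repulsion crossover in molecular dynamics is the Lennard-Jones 12-6 potential; the post itself doesn’t name a potential, so this is my assumed choice for illustration. A minimal sketch, in reduced units, showing the pair force changing sign with separation:

```python
# Lennard-Jones 12-6 pair force: repulsive at short range, attractive at
# long range, negligible at large range. Reduced units (epsilon = sigma = 1).

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Force between an atom pair; positive = repulsive, negative = attractive."""
    return 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r

# Close approach: electron clouds repel, atoms decelerate.
print(lj_force(0.9) > 0)          # True (repulsion)
# Moderate separation: attraction dominates, atoms accelerate.
print(lj_force(1.5) < 0)          # True (attraction)
# Large separation: force is negligible -> "free" ideal-gas motion.
print(abs(lj_force(3.0)) < 0.02)  # True (nearly free)
```

Which regime the atoms spend most of their time in at a given pressure is exactly the question the Joule-Thomson hypothesis above turns on.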

So there’s my hypothesis. Now it’s time to test it. A small group of us is working to employ molecular dynamics simulation to model the above scenarios and, in so doing, uncover why some gases cool while others heat, and also why an inversion point exists. Stay tuned!

Goggins, Full Capability, and “Atoms First” Thermodynamics

David Goggins, ex-Navy SEAL, now ultra-athlete and motivational speaker, shared in a popular YouTube video (JRE #1212) something that I found incredibly motivating. His biggest fear, and I paraphrase here, is that he arrives at the gates of Heaven and sees God there with a clipboard, holding a list of many great accomplishments. Goggins’ fear is that he looks at this list and says to God, “But that’s not me. I didn’t do those things,” and the all-knowing God replies, “That’s correct. That’s who you could have been. This is a list of all those things you were capable of doing.”

What does Goggins have to do with thermodynamics and, more generally, the subject of education? Nothing and everything. While he has accomplished much in his life, I’m not sure he ever turned his eyes toward thermodynamics. But that’s not the point here. My point is the provocative question that rises from Goggins’ fear. How many students graduate from K-12, college, or grad school without having achieved what they are capable of? How many graduate with a significant gap between who they are and who they could have been? As I recently shared (video here), my short answer for the specific world of university-level thermodynamics is, “too many.” This is unacceptable.

To this end, I believe that each and every university student enrolled in a thermodynamics course is capable of graduating from that course with a solid understanding of thermodynamics. So why isn’t this happening? To me, one of the major reasons lies in the first rung of the multi-step education process: the teacher must understand the material. I don’t believe this is happening for the simple reason that we as educators don’t fully understand thermodynamics.

“The deepest understanding of thermodynamics comes, of course, from understanding the actual machinery underneath.” – Richard Feynman

The understanding I’m referring to is not with the thermodynamic equations and their use to solve problems. The majority of teachers and textbooks already do a very good job teaching this material. What I’m referring to instead is the deep understanding of what the equations physically mean. We simply aren’t there yet as best manifested by the fact that there is no single textbook I’m aware of that presents a physical explanation of thermodynamics based on the motions and interactions of atoms and molecules. Because of this, students learn the equations without understanding what they mean and end up viewing thermodynamics as an indecipherable black box, leaving them intimidated by and so hesitant to use this powerful science. This is their loss as they fall short of who they could be, and this is society’s loss as real-world problems remain unsolved.

The opportunity exists to create a thermodynamics curriculum based on atoms. Employing such an “atoms first” approach will enable students to better learn, better understand, and more confidently employ thermodynamics in a proactive and creative way. The challenge in front of us is to develop this curriculum. Most of the material is already out there in the pages of books and journals and in the minds of many. It needs to be assimilated into one single place. And some of the material remains to be discovered. If students are to reach their full capabilities, we need to gather and create, where needed, this content. This is the task in front of us. Time to start.

Thermodynamic “pain point” results – here are your responses


I believe that a better understanding of thermodynamics is available by explaining the connections between the micro-world of moving and colliding atoms that attract and repel each other and the macro-world of classical thermodynamics. My goal is to identify and clarify such micro-to-macro connections. To ensure that I’m addressing true needs of the science community, I reached out to you all at the beginning of this year (here) to seek your personal “pain points” with thermodynamics. I asked, what are the stumbling blocks you encounter when trying to teach or learn the physical meanings behind thermodynamic equations and phenomena? Presented below are your responses. My thanks to those who engaged in this exercise.

If you feel you can address any items on the list with supportive references, could you please let me know? rthanlon@mit.edu

Based on a review of these responses, and subsequent discussions with some of you, I have decided to begin this journey by focusing on a single, specific phenomenon, the Joule-Thomson effect, number 13 in the list below. I hope to have some results to share with you in my next post.
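Before the molecular-level explanation arrives, the Joule-Thomson effect can at least be framed numerically. Here is a minimal sketch, not part of the original post, using the standard low-pressure van der Waals approximation for the Joule-Thomson coefficient, μ_JT ≈ (2a/RT − b)/Cp, with illustrative handbook values for nitrogen:

```python
# Rough Joule-Thomson estimate from the van der Waals equation in the
# low-pressure limit: mu_JT ~ (2a/(R*T) - b) / Cp.
# The a, b, and Cp values for nitrogen are illustrative handbook numbers.

R = 8.314          # J/(mol K), gas constant
a = 0.1370         # Pa m^6/mol^2, van der Waals 'a' for N2
b = 3.87e-5        # m^3/mol, van der Waals 'b' for N2
Cp = 29.1          # J/(mol K), N2 near room temperature

def mu_jt(T):
    """Joule-Thomson coefficient in K/Pa (van der Waals, low-pressure limit)."""
    return (2 * a / (R * T) - b) / Cp

# The coefficient changes sign at the inversion temperature T_inv = 2a/(R*b):
# above it, attraction no longer dominates and throttling heats the gas.
T_inv = 2 * a / (R * b)

print(f"mu_JT at  300 K: {mu_jt(300.0) * 1e5:+.3f} K/bar (cooling)")
print(f"mu_JT at 1000 K: {mu_jt(1000.0) * 1e5:+.3f} K/bar (heating)")
print(f"van der Waals inversion temperature: {T_inv:.0f} K")
```

Note the limits of the sketch: it reproduces the sign change and the right order of magnitude (roughly +0.25 K/bar for N2 at room temperature), but the van der Waals inversion temperature of about 850 K overshoots the measured value for nitrogen, which is closer to 620 K. What it cannot do, of course, is answer the question posed above: what is physically happening at the molecular level.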

A final note. One responder asked me, how can you animate thermodynamic concepts so that students can understand them? How can you translate physical chemistry and thermodynamics into practical real-world audiovisual content? If any of you has ideas on this, please let me know.

_ _ _ _ _

  1. Explain the physical meaning of not only temperature, but also energy, entropy, enthalpy, exergy, Gibbs energy, and the dreaded fugacity.
  2. Speaking of which, what exactly is fugacity and how does it relate to the material world?
  3. What is the physical mechanism behind the existence of the critical point?
  4. What does the concept of energy minimization physically mean, and how is it applied in the form of Gibbs free energy minimization in protein folding?
  5. Reversibility: What is it (really) and why is it important?
  6. What is the fundamental physical cause of the temperature effects that result when you depressurize a gas cylinder?
  7. Explain the presence of heterogeneous azeotropes.
  8. How deep a vacuum is it worthwhile to pursue on steam turbines? How can this be more quickly understood and appreciated?
  9. Van der Waals equation. Why do long-range attraction and short-range repulsion give a liquid (i.e., a phase transition), beyond just saying, “It’s in the math”? Does this same phenomenon hold for colloids and polymers in solution, i.e., do they undergo a vapor-liquid-type phase transition?
  10. Column of gas in a gravitational field. Is it isothermal or is there a temperature gradient, and why? James Clerk Maxwell and Ludwig Boltzmann assumed the former, Josef Loschmidt the latter. Who was right?
  11. Where does thermodynamics begin? Is a perfect vacuum really a thermodynamic system? Without molecules, do we have pressure, temperature, Q, W, S & H?
  12. Gas Phase Behavior – ideal and non-ideal. Given that all atoms/molecules attract one another via London dispersion forces, what is it that makes a gas behave ideally, such that this attraction has negligible influence? What is the dividing line between an “ideal gas” and a “non-ideal gas”? Some have suggested that the formation of dimers causes the deviation from the ideal gas law, but I have yet to find conclusive evidence of this in the literature.
  13. Joule-Thomson Effect – explain this. The effect is presumably related to the combined effects of intermolecular attraction and repulsion. But how exactly does this work at the molecular level? How does this explain, for example, the absence of the effect for an ideal gas, the heating effect for hydrogen, and the existence of an inversion temperature?
  14. Gas – flow. Explain in plain English Bernoulli’s equation, especially the trade-off between pressure and flow velocity.
  15. Photons. When are photons released? Solely with the acceleration of charges? Does this always release photons? As an unbound electron accelerates towards a single proton, are photons released? How about during chemical reactions? Because chemical reactions involve a change in energy level of electrons, are photons always released? If so, should photons be included in reaction equations? When photons are absorbed, heat is generated in the form of an increase in temperature. What is the energy balance around photon absorption (and emission)? What is the physical event that leads to an increase in kinetic energy of the atoms that absorb the photon? When does the presence of photons influence reaction equilibrium?
  16. Explain the micro-physics behind the Stefan-Boltzmann T⁴ law of radiation.
  17. Explain the micro-physics behind the existence of a supercritical fluid.
  18. Explain the Clausius-Clapeyron Equation in plain English. Why is it what it is?
  19. Gas Phase Reactions – Walk through exactly what happens at the atomic scale during reaction. For example, picture two hydrogen atoms. Long-range attraction draws each towards the other. But up close, the strong electron repulsion pushes them apart. How is this repulsive force overcome so that reaction occurs? Also, “heat” is generated when two hydrogen atoms combine to form molecular hydrogen. What exactly does this mean? What specific physical events lead to an increase in the kinetic energy of the atoms comprising the H-atom gas system when they react? Also, are photons emitted as a result of this reaction?
  20. What exactly does the change in Gibbs energy of a chemical reaction quantify? Is it simply the total change in energy of the orbital electrons?
  21. Why isn’t the distribution of orbital electrons included in the Boltzmann definition of entropy? If a chemical reaction is really the distribution of orbital electrons into their most probable distribution, shouldn’t the change in entropy account for this?
  22. Phase Change – Vapor/Liquid (a similar discussion applies to liquid/solid). How does phase change occur? Walk through each step involved in the energy balance. Also, walk through condensation. How does an atom/molecule slow down enough to be ‘captured’ by another? Do the slow atoms/molecules at the left end (slower speeds) of the statistical distribution condense first? (The same could be asked of chemical reactions: do the fast atoms/molecules at the right end of the distribution react first?) When an atom escapes from liquid to vapor, what velocity does it end up with? Is the resulting vapor initially at a very low temperature because of the escape, and is this why some thermal energy is needed to bring the escaped gas up to the temperature of the liquid? Also, does the average velocity of particles in the vapor fall short of their average velocity in the liquid, especially when the liquid is a solution and its vapor pressure at a given temperature is hence reduced?
  23. Absolute zero. Yes, the entropy of a pure crystal is zero at absolute zero. But aren’t the electrons still in motion? And wouldn’t this mean that the polarity of the atoms is not constant at absolute zero but instead varies, and wouldn’t this result in varying attractions and hence in motion, which is inconsistent with the concept of absolute zero? So what does matter physically look like at absolute zero?
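A side note on item 18: while the plain-English “why” of the Clausius-Clapeyron equation is exactly the kind of micro-to-macro question this project targets, its integrated form, ln(P2/P1) = −(ΔHvap/R)(1/T2 − 1/T1), is easy to exercise numerically. Here is a minimal sketch (not from the original post), assuming a constant enthalpy of vaporization and using handbook values for water:

```python
import math

# Integrated Clausius-Clapeyron equation, assuming the enthalpy of
# vaporization is constant over the temperature range:
#   ln(P2/P1) = -(dHvap/R) * (1/T2 - 1/T1)

R = 8.314        # J/(mol K), gas constant
dHvap = 40.7e3   # J/mol, water near its boiling point (handbook value)

P1, T1 = 101325.0, 373.15   # anchor: water boils at 1 atm and 373.15 K

def vapor_pressure(T2):
    """Estimated vapor pressure (Pa) at T2, anchored at the normal boiling point."""
    return P1 * math.exp(-(dHvap / R) * (1.0 / T2 - 1.0 / T1))

print(f"Estimated vapor pressure of water at 80 C: "
      f"{vapor_pressure(353.15) / 1000:.1f} kPa")
```

The estimate at 80 °C comes out near 48 kPa, close to the tabulated value of about 47.4 kPa, which is the equation doing its macroscopic job; the open question in the list is what the equation is telling us about molecules.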

END

What are your personal “pain points” with thermodynamics?

What are your personal “pain points” with thermodynamics? What are the stumbling blocks you encounter when trying to understand the physical meaning behind such thermodynamic equations and phenomena as Gibbs Free Energy, Joule-Thomson expansion, phase change, and even the physical properties of matter, including heat capacity and absolute temperature? Could you please share these with me in the comments section below or via direct email (rthanlon@mit.edu), and I’ll add them to my own list of stumbling blocks and unanswered questions. My objective in doing this is as follows.

I believe that a better understanding of thermodynamics is available by explaining the connections between the micro-world of moving and colliding atoms and the macro-world of classical thermodynamics. My goal is to identify and clarify the micro-to-macro connections for the final list of “pain points” generated here, this list serving to ensure that I’m addressing true needs of the science community. I remain undecided on how best to share the results back with you all; it may be a second book, a dedicated YouTube channel, or some other form. Regardless, a long journey awaits, and I’m looking forward to it.

If you have ideas on where best to locate well-documented micro-to-macro connections, please let me know. My starting point is Richard Feynman’s excellent “Lectures on Physics,” but even there, while some of my own pain points are indeed addressed, many aren’t.

Professors & teachers – please consider sharing this with your colleagues and also with your current or past students. I’d be very interested to hear their take on things.

Wishing each of you well for 2022.

Thank you,
Bob

The Road to Entropy – Boltzmann and his probabilistic entropy

Ludwig Boltzmann (1844-1906) brought his mastery of mathematics to the kinetic theory of gases and provided us with our first mechanical understanding of entropy. To Boltzmann, his work proved that entropy ALWAYS increases or remains constant. But to others, most notably Josef Loschmidt (1821-1895), his work contained a paradox that needed to be addressed. Loschmidt asked a provocative question about this paradox that motivated Boltzmann to transform his mathematics from mechanics to probability. The end result was a probabilistic entropy: entropy ALMOST ALWAYS increases. What was the question? Watch to find out.
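Boltzmann’s probabilistic turn can be made concrete with his own relation S = k ln W. The toy model below (my illustration, not from the video) counts the ways W of placing n of N gas particles in the left half of a box, W(n) = C(N, n): the even split so overwhelmingly dominates that entropy “almost always” increases toward it, yet a spontaneous decrease is never strictly impossible, which is precisely the loophole Loschmidt’s objection forced into the open.

```python
import math

# Boltzmann's S = k ln W for a toy system: n of N identical gas particles
# in the left half of a box, with W(n) = C(N, n) microstates.

k = 1.380649e-23   # J/K, Boltzmann constant

def entropy(N, n):
    """S = k ln W for n of N particles in the left half of the box."""
    return k * math.log(math.comb(N, n))

N = 100

# Entropy rises monotonically toward the even split...
print(entropy(N, 50) > entropy(N, 30) > entropy(N, 10) > entropy(N, 0))

# ...and the probability that all N particles spontaneously gather on one
# side is vanishingly small, but not zero -- Loschmidt's point.
print(0.5 ** N)
```

Even at N = 100, the all-on-one-side probability is about 10⁻³⁰; at Loschmidt’s own number of molecules per cubic centimeter, “almost always” becomes indistinguishable from “always” in practice, which is how Boltzmann’s probabilistic entropy recovers the second law.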

For an excellent in-depth analysis of the development of the kinetic theory of gases and Boltzmann’s connection of entropy to the movement of the hypothesized atoms and molecules, I highly recommend Stephen Brush’s The Kinetic Theory of Gases: An Anthology of Classic Papers with Historical Commentary.

I delve into the mathematical details of Boltzmann’s work, and also the personal details of his battle to defend his work, in my book.

The Road to Entropy – The kinetic theory of gases & heat capacity

I believe that an improved approach to teaching thermodynamics can be created by starting with the atomic theory of matter and then explaining the connections between this theory and macroscopic thermodynamic phenomena. This micro-to-macro approach arguably began in the late 19th century when a small group of scientists, namely Rudolf Clausius, James Clerk Maxwell, and Ludwig Boltzmann, successfully developed the kinetic theory of gases, which eventually became the key bridge from classical thermodynamics to statistical mechanics. The prediction of heat capacity played a crucial role in this effort, and an interesting related story was the absence of heat capacity predictions for monatomic elements. Why was this? To me, the reason had to do with the structural difference between a sphere and an atom. Check out the below video in which I lay out my argument.
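For readers who want the heat-capacity prediction itself, here is a minimal sketch (my addition, using the standard equipartition result the kinetic theorists applied to di- and polyatomic gases): each quadratic degree of freedom contributes R/2 to the molar heat capacity, so Cv = (f/2)R and, for an ideal gas, γ = Cp/Cv = (f + 2)/f.

```python
# Equipartition sketch: each quadratic degree of freedom contributes R/2
# to the molar heat capacity at constant volume, so Cv = (f/2)*R and
# gamma = Cp/Cv = (f + 2)/f for an ideal gas (Cp = Cv + R).

R = 8.314  # J/(mol K), gas constant

def cv(f):
    """Molar heat capacity at constant volume, J/(mol K)."""
    return f / 2 * R

def gamma(f):
    """Heat capacity ratio Cp/Cv for an ideal gas with f degrees of freedom."""
    return (f + 2) / f

# Monatomic gas, translation only (f = 3): gamma = 5/3 ~ 1.67, the value
# later measured for mercury vapor and the noble gases.
print(round(gamma(3), 2))

# Rigid diatomic, 3 translations + 2 rotations (f = 5): gamma = 1.4,
# matching air at room temperature.
print(round(gamma(5), 2))
```

The historical puzzle discussed in the video is that the f = 3 line was, as far as I can tell, never written down for a real monatomic gas by the pioneers themselves; the prediction was absent, not wrong.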

Addendum: It’s a real challenge trying to understand what the early theorists were truly thinking as they developed thermodynamics. I wish each had written an autobiography sharing the thoughts that weren’t suitable for publication: their own private views of what nature looked like. Especially Gibbs!

Regarding the heat capacity of gas atoms, it’s not that they attempted a prediction and made a mistake; it’s that they never attempted one to begin with, and this is what struck me when I read their papers. I believe the main reason is that they couldn’t conceive of an atom that didn’t spin, or that had no energy associated with spin. They couldn’t conceive of an atom comprised mostly of empty space, with all the mass concentrated in a central nucleus.

For an excellent in-depth analysis of the development of the kinetic theory of gases, I highly recommend Stephen Brush’s The Kinetic Theory of Gases: An Anthology of Classic Papers with Historical Commentary.

I delve into the successful development of the kinetic theory of gases, with successive chapters on Clausius, then Maxwell, and finally Boltzmann, in my book.

The Road to Entropy – Clausius, Gibbs, and increasing entropy

At the conclusion of his famed 1865 paper announcing the discovery of a new property of matter that he named entropy, Rudolf Clausius stated: the entropy of the universe tends to a maximum. This statement came as a total surprise to me as there was no prior supportive discussion behind it, and it had me wondering whether or not Clausius truly understood its meaning. Fortunately for us, someone else did understand its meaning as manifested by the fact that this person evolved this statement into one more critically relevant to thermodynamics: the entropy of an isolated system increases to a maximum. The person? J. Willard Gibbs. To understand how this came about, check out this video.

I go into much more depth on the combined work of Clausius and Gibbs in my book Block by Block – The Historical and Theoretical Foundations of Thermodynamics.
