Full access to Block by Block’s Introduction, and then some (they certainly share a lot!), is available on Google Books here (pp. xiii-xx). The Introduction shares the motivations that drove me along with the structure I created to guide me. In keeping with the intent of the series of posts I plan to publish for the foreseeable future, which is to highlight a single idea from each chapter of my book, I want to draw your attention to the following quote that I used in the Introduction:
“People say, Think if we hadn’t discovered Emily Dickinson. I say, Think of all the Emily Dickinsons we’ve never discovered” – Catherine Thomas Hale, a character in Mark Helprin’s In Sunlight and in Shadow
I chose this quote for two reasons, the first being that Mark Helprin is one of my favorite authors. A Soldier of the Great War, Winter’s Tale, Refiner’s Fire. Beautiful, magical writing. Of the many, many quotes I could have used, I chose the above because of my second reason: it captured the historical reality of thermodynamics.
I feel for those who toiled away at the experimentalist’s lab bench or the theoretician’s desk and generated results that were ignored by history. So many individuals were involved in creating the new field of thermodynamics, but we only see the few. This post is a simple but deeply felt acknowledgment to all of those “Emily Dickinsons” we never discovered.
Me: I think I want to write a book.
Friend: Does that mean you’re going to write it?
Me: (to myself) Oh no.
Me: (with sweating palms) Yes.
I shared a someday-maybe dream with a friend and she challenged me. “Are you going to do it or not?” When I said, with some fear of the task in front of me, “Yes!”, she pulled out a scrap piece of paper and wrote, “Write that book!” I’ve kept it in my wallet ever since (see photo above). An ongoing inspiration to this day.
Consider being a listener for your friends’ dreams. Be ready to challenge them to commit. If they’re thinking about doing something that they’re passionate about, do what you can to have them transform the thought into action. They’ll appreciate it.
To my friend from a long time ago: Thank you!
P.S. Some would argue that the work isn’t done until you follow through and help your friends to achieve their dreams, whatever that might look like for you.
One of my objectives in creating a more effective approach to teaching thermodynamics is to bring clarity to some of the confusing terms and concepts embedded in this field. Initially I focused on the concept of heat by pointing out that there is no such thing. I now turn toward free energy.
As a very (very) brief historical summary, Rudolf Clausius created what became known as the 1st Law of Thermodynamics, which I wrote about here, based on energy and its conservation, when he wrote the equation dU = TdS – PdV [1]. J. Willard Gibbs then built upon this by creating a new property of matter, later given the symbol G after Gibbs, for which G = H – TS [2]. This energy term became very useful in thermodynamic analyses of physical phenomena and industrial processes that occur at constant temperature and pressure. Of relevance to this post, Gibbs showed that the change in G at constant T,P quantifies the maximum amount of work that can be generated by a given process such as a chemical reaction.
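To make that maximum-work statement concrete, here is a minimal Python sketch using commonly tabulated standard-state values for the formation of liquid water from hydrogen and oxygen; the reaction and the numbers are my illustrative choices, not an example from the book:

```python
# Maximum useful (non-expansion) work from a reaction at constant T and P
# is -dG, where dG = dH - T*dS. The values below are commonly tabulated
# standard-state numbers for H2(g) + 1/2 O2(g) -> H2O(l) at 298.15 K.
T = 298.15       # temperature, K
dH = -285.8e3    # enthalpy change of reaction, J/mol
dS = -163.3      # entropy change of reaction, J/(mol*K)

dG = dH - T * dS            # Gibbs free energy change, J/mol
print(f"dG = {dG / 1e3:.1f} kJ/mol")              # ~ -237.1 kJ/mol
print(f"maximum work = {-dG / 1e3:.1f} kJ/mol")   # work the reaction can deliver
```

This is exactly the calculation behind, for example, the theoretical voltage of a hydrogen fuel cell.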
Prominent thermodynamics textbooks, such as Lewis and Randall [3, p. 158] and Smith and van Ness [4, p. 170], named G free energy. Today we often refer to G as Gibbs free energy and associate it with the amount of energy that is free to do useful work.
HOWEVER
Naming G free energy always confused me. While certain thermodynamic properties such as temperature, pressure, and mass are absolute, meaning that they can be referenced to zero, the property internal energy (U) is not. There is no zero for internal energy, which is why the primary focus in thermodynamics is on changes and not absolutes. We’re largely concerned with changes in energy; absolute energy doesn’t exist. Thus, Gibbs’s G property, which is based on energy since it includes internal energy U, i.e., G = H – TS = U + PV – TS, is meaningless on its own. This is the reason for my confusion with naming G free energy. Free energy has meaning, while G itself does not.
Consider the intent of the two founders of “free energy” – Gibbs & Helmholtz [5]
To Gibbs, it’s the change in G that’s meaningful, not G itself. What mattered to Gibbs was the distance between a given body’s non-equilibrated energy—non-equilibrated in the sense that the body is either not internally equilibrated or not equilibrated with the environment or both—and its equilibrium state energy. He called this distance, which quantified change, “available energy” [6, pp. 49-54]. Today we view available energy and free energy as synonyms.
Hermann von Helmholtz created his own energy term, A = U – TS, which served a purpose similar to Gibbs’s G, but for constant temperature processes as opposed to Gibbs’s constant temperature and pressure. It was Helmholtz who coined the term “free energy,” as shown here from his publication on the matter [7]:
It has long been known that there are chemical processes which occur spontaneously and proceed without external force, and in which cold is produced. Of these processes the customary theoretical treatment, which deals only with the heat developed as the measure of the work-value of the chemical forces of affinity, can give no satisfactory account.
Here Helmholtz is referring to the Thomsen-Berthelot theory of thermal affinity, for which the “heat developed” is quantified by ∆H, the enthalpy change of reaction. This theory suggested that cold-producing endothermic reactions (∆H > 0) should not happen; and yet they did. Continuing…
If we now take into consideration that chemical forces can produce not merely heat but also other forms of energy… then it appears to me unquestionable that… a distinction must be made between the parts of their forces of affinity capable of free transformation into other forms of work, and the parts producible only as heat. In what follows I shall, for the sake of brevity, distinguish these two parts of the energy as the “free” and as the “bound” energy.
It is clear to me that Helmholtz sought to replace ∆H with a term that quantified his concept of “free” energy. This term had to be similar to ∆H in that it had to quantify change as opposed to an absolute value. This is how he arrived at ∆A.
In Sum: Free energy was founded on change
That both Gibbs and Helmholtz based their respective concepts of free energy on change, as opposed to absolutes, supports my contention that G should be known as Gibbs energy and ∆G should be known as Gibbs free energy. In other words: Gibbs free energy (∆G) is the change in Gibbs energy (G).
I propose that textbooks make clear these definitions, especially since some confusingly refer to G as both Gibbs energy and Gibbs free energy. Is the above argument strong enough to justify this? What do you think?
[1] dU = Q – W = TdS – PdV. Q = thermal energy added to system = TdS, W = work done by system = PdV, U = internal energy, T = temperature, S = entropy, P = pressure, V = volume. If no thermal energy (i.e., heat) is added to the system and if no work is done by the system, then the internal energy of the system does not change, i.e., dU = 0.
[2] G = H – TS. H = enthalpy = U + PV. The change in G is thus: dG = dH – d(TS) = dU + PdV + VdP – TdS – SdT. For a constant temperature and pressure process, dT = dP = 0. If the system of interest is equilibrated then dU + PdV – TdS = 0, and thus dG = 0. The property G is particularly useful when considering phase equilibrium. Consider two phases, A and B, in equilibrium with each other. The values of G for the two phases are equal. If you change temperature and pressure together so as to maintain a two-phase system, again dU + PdV – TdS = 0 as it’s an equilibrated system, and so dG = VdP – SdT. One can show that dG for A must equal dG for B. With some re-arrangement, dP/dT = (S_B – S_A)/(V_B – V_A), which is a version of the famed Clapeyron and Clausius-Clapeyron equations.
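As a quick illustration of where that slope equation leads, here is a minimal Python sketch estimating dP/dT for boiling water at 1 atm; the latent heat is a commonly tabulated value, and neglecting the liquid volume while treating the vapor as an ideal gas are my simplifying assumptions:

```python
# Clausius-Clapeyron estimate of the vapor-pressure slope for water at 1 atm.
# Assumptions: liquid volume neglected, vapor treated as an ideal gas.
R = 8.314         # gas constant, J/(mol*K)
T = 373.15        # normal boiling point of water, K
P = 101325.0      # pressure, Pa
dH_vap = 40.66e3  # latent heat of vaporization, J/mol (tabulated value)

dV = R * T / P               # molar volume of the vapor, m^3/mol (~0.031)
dPdT = dH_vap / (T * dV)     # Clapeyron slope, using dS = dH_vap / T
print(f"dP/dT ~ {dPdT / 1000:.1f} kPa/K")   # ~3.6 kPa/K near 100 C
```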
[3] Lewis, Gilbert Newton, and Merle Randall. 1923. Thermodynamics and the Free Energy of Chemical Species. New York: McGraw-Hill Book Company, Inc.
[4] Smith, J. M., and H. C. Van Ness. 1975. Introduction to Chemical Engineering Thermodynamics. 3d ed. McGraw-Hill Chemical Engineering Series. New York: McGraw-Hill.
[5] Gibbs cited the influence of François Massieu on his work that included the creation of G = H – TS.
[6] Gibbs, J. Willard. 1993. The Scientific Papers of J. Willard Gibbs. Volume One: Thermodynamics. Woodbridge, Conn: Ox Bow Press. p. 51.
[7] Helmholtz, H. von, On the thermodynamics of chemical processes, Physical memoirs selected and translated from foreign sources 1 (1882): 43-97.
In my book, Block by Block, I wrote about the attraction and repulsion forces between atoms. For the former, I stated that attraction results from the fact that atoms act like spinning magnets; they contain a positive charge (proton) that is separated from an orbiting negative charge (electron). The quickly varying dipole of one atom acts upon that of another atom, thereby inducing dipoles that are in-phase with each other. The electrons of atoms attract the protons of other atoms, resulting in an attractive force between all atoms. This phenomenon is all very nicely laid out in a paper by F. W. London [1], who coined the term “dispersion effect.” But this wasn’t where I made a mistake.
The mistake was with my supposed understanding of the repulsion forces. On page 46 of my book I wrote that repulsion is caused by “the electromagnetic repulsion forces between electrons and between protons.” This might not look like a mistake to you, but it does to me, or at a minimum it looks like an incomplete answer, because I now know something I didn’t fully understand then. You see, I understood that Pauli exclusion was somehow involved in the repulsion effect but didn’t know why. I originally wondered whether or not it was a repulsive force itself. When I couldn’t find the answer I was seeking in the literature, I decided to go straight to the top for help: Professor Steven Weinberg at the University of Texas at Austin. I was pleasantly shocked when he very kindly replied! In response to my question regarding whether or not Pauli exclusion is a force, Professor Weinberg replied, “It’s not a force.” To this day I remember that one line from his reply. He continued:
“There is no particle transmitted between the electrons in an atom other than the photons, which mediate the electromagnetic force. But the Pauli principle requires that the wave function of the atom be antisymmetric in the electron coordinates, and this has effects like a force – – – in particular, it prevents two electrons from occupying the same state.” [2]
As I recently began work on book #2, seeking to connect the micro-world of atoms to the macro-world of classical thermodynamics, I knew I had to resolve this issue as I consider it a very fundamental building block for the larger structure I want to create. Over the past two weeks I have done a deep dive into the literature, and here’s what I have found.
First things first. The Pauli exclusion principle states that no two electrons can have the same four quantum numbers, which translates into the fact that no more than two electrons are permitted in any given orbital, and that the two must have opposite spins.
Next. Electrostatic interactions, e.g., attraction and repulsion, are determined by the electron distribution. Again, the attraction force is caused by London dispersion, which is the induced shift in electron distributions that occurs when two atoms come near each other.
Regarding repulsion
Let’s start with two simple hydrogen atoms approaching each other. Each atom consists of one proton and one electron. As they approach, their respective electron clouds overlap and the individual atomic orbitals of each merge into a bonding orbital. Both electrons populate this single orbital and then spend most of their time between the two nuclei, attracting both nuclei toward them and toward each other. A covalent bond results. No bond is stronger, as emphasized by Richard Feynman: “It now becomes clear why the strongest and most important attractive forces arise when there is a concentration of charge between two nuclei. The nuclei on each side of the concentrated charge are each strongly attracted to it.” [3]
Now consider two helium atoms approaching each other. Each atom consists of two protons, two neutrons, and two electrons. As they approach, their respective electron clouds overlap, and their individual orbitals merge into a bonding orbital, which is then populated by two of the electrons, one from each atom.
Now comes the critical part of this post. What happens to the next two electrons? They can’t enter into this same bonding orbital with the others because of the Pauli exclusion principle. So they must enter the next orbital available up the energy ladder, which is an anti-bonding orbital. This orbital forms at the same time as the bonding orbital. The anti-bonding orbital is so named because the electrons in this orbital accumulate outside the region between the nuclei and so can’t contribute to bonding. Instead they contribute to anti-bonding, since their location effectively decreases the pull of the two positively-charged nuclei toward the negative electrons between them, leaving the proton-proton repulsion force to dominate the situation.
The anti-bond negates the covalent bond, leaving a very weak net bond. This is what happens when atoms “collide.” The bond between them is weak and they repel each other. Electron-electron interactions contribute to the net repulsion, but it’s the proton-proton interactions that dominate. Two helium atoms collide (and don’t react) because there is room in the bonding orbital for only one pair of electrons; the other pair must occupy the anti-bonding orbital.
The same general principles apply to all closed-shell molecules. Individual electron orbitals combine to form two orbitals, one bonding and one anti-bonding. Because the atoms are closed-shell, each offers up two electrons. Two enter the bonding orbital, the other two the anti-bonding orbital, leaving a weak net bond that is easily broken by the strong proton-proton repulsion.
In the end, while Pauli exclusion doesn’t directly cause repulsion, and is not a repulsive force, it is because of Pauli exclusion that repulsion occurs.
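The Lennard-Jones 12-6 pair potential packages this attraction-repulsion balance into a single formula: the r⁻⁶ term comes from London dispersion (see [1] below), while the steep r⁻¹² term is an empirical stand-in for the Pauli-driven repulsion just described. Here is a minimal Python sketch; the helium parameters are commonly quoted values, used purely for illustration:

```python
# Lennard-Jones 12-6 potential: U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
# The r^-6 term models London dispersion attraction; the r^-12 term stands in
# for the steep repulsion that Pauli exclusion gives rise to, as described above.
# Parameters are commonly quoted values for helium (an illustrative assumption).
EPS = 10.22    # well depth divided by Boltzmann's constant, in kelvin
SIGMA = 2.556  # separation where U crosses zero, in angstroms

def lj(r):
    """Pair potential U/kB in kelvin at separation r in angstroms."""
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (sr6 * sr6 - sr6)

for r in (2.2, 2.556, 2.87, 4.0, 8.0):
    print(f"r = {r:5.3f} A  ->  U/kB = {lj(r):+8.3f} K")
# Output: strong repulsion inside ~2.5 A, a shallow attractive well near
# 2^(1/6)*sigma ~ 2.87 A, and a negligible interaction at long range.
```

The shallowness of helium’s well (about 10 K in temperature units) is why two colliding helium atoms simply bounce apart rather than stick.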
I list three excellent references below that go into much greater detail on this matter: Henry Margenau [4], H. C. Longuet-Higgins [5], and Richard Bader [6]. I saw that Professor Bader’s 2007 paper was relatively recent and sought to contact him. I learned that he had passed away in 2012, but noted that one of his students had co-authored a relevant paper with him. It was through this path that I met that student, Chérif F. Matta. Chérif is Professor and Chair/Head of the Department of Chemistry and Physics at Mount Saint Vincent University in Halifax. He enthusiastically responded to my email inquiry and helped me with my understanding of the above topic; any mistakes above are all mine! I publicly thank him here for his contribution and look forward to further discussions with him.
(1) London, F. W., The General Theory of Molecular Forces, Trans. Faraday Soc., 1937, 33, 8b-26. This paper, by the way, shows the origin of the r^6 attraction term in the Lennard-Jones potential equation.
(2) Weinberg, Steven, personal communication, 01 July 2009.
(3) Feynman, R. P., Forces in Molecules, Physical Review, 56, 15 August 1939, pp. 340-343.
(4) Margenau, Henry, The Nature of Physical Reality, McGraw Hill, 1950. Chapter 20, The Exclusion Principle, pp. 427-447.
(5) Longuet-Higgins, H. C., Intermolecular Forces, Spiers Memorial Lecture, received 23rd September 1965, published in Discuss. Faraday Soc., 1965, 40, 7-18.
(6) Bader, R. F. W., J. Hernández-Trujillo, and F. Cortés-Guzmán, Chemical Bonding: From Lewis to Atoms in Molecules, J. Comput. Chem., 2007, 28(1), 4-14.
When I give thermodynamics presentations to high school and college students, I begin with a 10-minute discussion about career decision-making based on my own experiences. I now share this discussion with you, both to provide you with helpful and hopefully inspiring ideas and to also seek your feedback. Do your thoughts align with mine? Let me know! [Note: the examples I use are from my academic years to align with the students, but the process I lay out applies to my entire career.]
Overwhelm
Are you trying to decide what to do for your summer internship, your first job, the opportunity to switch jobs, to retire, or to accept an overseas assignment? If you are, I’m guessing you’re experiencing some degree of overwhelm. Here’s why.
Choice!
“…one of the most difficult types of emotional labor is staring into the abyss of choice and picking a path.” – Seth Godin, Linchpin, p. 57.
“Fear of living without a map is the main reason people are so insistent that we tell them what to do.” – Seth Godin, Linchpin, p. 125
Because you have options, you have choice, and that can be a source of overwhelm. Choice is both a blessing and a curse. Wouldn’t life be so much simpler if someone told you what to do, what decisions to make? Yes. But wouldn’t that be rather boring? And who exactly would tell you what to do? Who knows you better than you? Sure, your parents might weigh in early in life, but at some point, it’s your choice, and that choice can be overwhelming, as manifested by this scene with Robin Williams from Moscow on the Hudson. The goal then is to develop an approach to reduce the overwhelm by transforming the decision-making process from scary to exciting. Here’s how I did this. Maybe it will work for you.
Maslow’s Pyramid
In 1943 American psychologist Abraham Maslow proposed a priority sequence for human motivation. As later illustrated by others (to the right), humans are motivated to move up the pyramid, stage by stage, only once they feel satisfied at the stage they’re in. It’s very hard to worry about feelings of accomplishment when you’re worried about where your next meal is coming from.
Survive then Thrive
For my career decision-making, I took Maslow’s pyramid and simplified it. I wasn’t aware that this is what I was doing in my early years. I’m only recognizing it now. When I had to make a decision, my first priority was survival. Once I felt comfortable with that, and narrowed down the field of options, I then made my final decision based on my desire to thrive. Let me explain this process in more detail.
Survive
For each individual career decision, I had a range of options to choose from. At times, yes, it was overwhelming. But in the end I was able to narrow down the number of options by first passing them through my “survive” filter. To me, survival meant financial independence. Each career decision took me toward financial independence, the point at which I would no longer have to work for somebody else. Some of these decisions didn’t earn me more money but instead earned me experiences that I knew would lead to higher-earning opportunities later on. Note the additional criteria I added to my “survive” filter: whatever it was I chose had to be something I was good at and enjoyed doing.
Thrive
There is no intellectualizing what resonates with you… When it reveals itself, you feel it. – Ryder Carroll, The Bullet Journal Method, p. 146.
The now-smaller list of options met up with my second “thrive” filter. This filter was governed by what I was passionate about, and the part of my body that best understood this was my gut. When a select group of “survive” choices came in front of me, I invariably knew, without necessarily knowing why, the one I wanted…as well as the one(s) I didn’t want. The one I wanted resonated with me, especially if it offered me the opportunity to journey on the road less traveled.
At some point, you have to make the decision
The important point I emphasize in the above illustration is the final red dot. At some point in the process I realized I had to make a decision, both to move forward and to gain experience. Sometimes the decision may have been to stay put, as this is always an option. But even with this seemingly non-decision decision, my choice to stay put was often accompanied by a decision to make a stronger commitment to what I was then doing.
The “thrive” decision was admittedly hard at times. Why? Because I often felt that a “right” decision existed and that my life would be forever damaged if I didn’t choose it. I now realize, in hindsight, that this is false. There is no “right” decision in my “thrive” filter. Once I realized this, it helped keep “analysis paralysis” at bay. Each decision leads down a different path, and for the most part, each path will work out just fine. They’ll just be different, that’s all.
Why I chose Bucknell University
For example, I considered a range of undergraduate universities: Bucknell, Lehigh, Clarkson, RPI. They were all good. I would have enjoyed any of them, each in a different way. Why did I choose Bucknell? Well, because when I visited the campus during high school spring break with my parents, Bucknell’s cherry blossoms were in full bloom–in hindsight, I think the groundskeepers somehow ensured this, as it was spring-break visit week!–and this sold me. Something in me clicked. My gut told me that Bucknell would work for me. I couldn’t list the reasons. The beauty of those trees played a role, perhaps. But I think there was much more to it than that. Sometimes decisions from the gut bypass the brain. All the experiences that I had in my life up until then, including the conversations with others and especially my parents, led me to that decision.
More experiences = better gut feel
This brings me to the curved arrow going from decision to experience. To me, the more decisions one makes, the more experiences one gains, and the better the gut feel develops. Gut feel doesn’t develop in a vacuum.
When you come to a fork in the road, take it – wisdom shared by a friend
Listening to your gut feel is so important when making career decisions. When considering the final decision from a range of options, you often just simply know deep down which decision you want. You feel it in your body. Trust this feeling. Use it to guide whether to do something… or not.
Consider the following, as described by Russ Roberts in his engaging book Wild Problems (p. 44). If you have to decide between two options, flip a coin, and while the coin is still spinning in the air, note which side you are hoping will come up. In that moment, you’ll realize that you don’t even need to see the outcome, because your decision will have already become clear to you. Trust your emotions. You don’t need to explain them to yourself or to others.
Why I went to Karlsruhe
As a final example of how this process worked for me, consider my decision to do a post-doctorate research project in Karlsruhe, Germany.
Remember those various display cases lining university hallways? They contain all sorts of interesting information. It was a rare occasion when I would stop and read, but all it took was once. I was walking down the MIT hallway, thinking about what company I was interested in joining upon graduation, when, for some unknown reason, I stopped at a case similar to the one on the right and actually read what was in it. A flyer spoke of scholarships offered by the German government to do post-graduate work at one of their universities. Bam! It hit me. I had never considered this before then. And all of a sudden it went to the top of my list.
Where did this decision come from? It came from everything, all of my experiences. My conversations with foreign students at MIT, the movies I watched, the stories from my dad about his international travel for Bristol-Myers, my interest in taking the fork, the road less traveled, the once-in-a-lifetime opportunity to live in a foreign country, not with a group, but on my own, knowing it would force me to learn the language. So many different experiences primed my gut to tell me, “Apply”. And I did. And I went. And I never looked back. This decision provided me with another experience, a big experience, that further developed my gut feel for the decisions I would be making later on in my life.
Final thoughts
The survive-then-thrive approach indeed helped me to manage the overwhelm during my career decision-making process. Along the way I learned to trust my gut more and more. How did you approach your own decision-making process? The same? Different? If you do try out any of these ideas, please let me know. In the meantime, thank you for reading my post. While I don’t specifically discuss the above concepts in my recently published book Block by Block – The Historical and Theoretical Foundations of Thermodynamics, they do make for a stimulating starting point for an engaging conversation about what motivated the early thermodynamics scientists in the directions they took in their own lives.
Publisher: “Before we go to print, we just wanted to make sure you got permissions for the epigraphs in your book.”
Me: “What’s an epigraph?”
As I was traveling through the final stages of publishing my book, I learned that there are two approaches to using a quote. One is to embed the quote in the paragraph. The other is to use the quote at the beginning of a chapter, or a section within a chapter, in order to suggest its theme; this type of quote is called an epigraph. Much to my dismay, I learned that while the former doesn’t require permission from the publisher (so long as it is appropriately referenced), the latter does. This would have been fine… had I not liberally sprinkled well over one hundred epigraphs throughout my book!
So I sat down and wrote to many publishers, asking for permission to use select quotes from their material as epigraphs. And all said “yes” with no fee, except for one: Penguin Random House. They controlled the rights to Kurt Vonnegut’s Player Piano, and specifically to the quote, “Out on the edge you see all the kinds of things you can’t see from the center.” They wanted $100. Once I realized that my pleading wouldn’t move them, I decided to pay. The sentiment that Vonnegut expressed was really important to me. Here’s why.
In writing my book, I learned that many discoveries and insights in thermodynamics occurred when someone with strength in one technical area moved, with curiosity, to the edge of that area to check out what was happening in a different technical area. And it was there, at the interface, where they found opportunity.
Consider the case of Sadi Carnot and his theoretical analysis of the steam engine. He was educated at the prestigious École Polytechnique but then spent most of his adult years outside academia as an officer amongst his fellow engineers within the French military. And consider Galileo and his experimental and theoretical work on motion. He worked at the interface between craftsmanship and academia. For both, their respective exposures to a world apart from academia helped enable them to approach problems differently, if not uniquely.
Or consider James Joule, expert brewer, expert reader of thermometers, and amateur physicist. His work helped lay the foundation of the conservation of energy, alongside the efforts of Julius Robert von Mayer. Both Joule and Mayer were academic outsiders; neither was raised under the influence of the caloric theory of heat; neither was trapped by the academic paradigms that couldn’t grasp the concept of energy. Perhaps the value of being at the edge is just this. It’s where creative tension lies. One can bring the fresh-eyes look of an outsider, with no paradigm attachments, to catalyze a breakthrough in thinking.
As manifested in the table on the right, many of those responsible for contributing to the rise of thermodynamics achieved their respective successes by working at the interface between at least two different fields of study. Look, for example, at J. Willard Gibbs. He applied his expertise in mathematics to the study of heat, work, and equilibrium, and so helped lay the foundation of classical thermodynamics and also statistical mechanics. The success of these individuals and their approaches helped encourage others in subsequent years to explore the interface between different “silos” of science, business, art, and so on. It’s at the interface where creative opportunity exists.
The life of Bob Langer, Institute Professor at the Massachusetts Institute of Technology, is a great contemporary demonstration of the power of this approach. Bob brought his ScD in Chemical Engineering at MIT into a different field, specifically the field of medicine and biotechnology, and transformed, among other things, the world of drug delivery.
No Newton, no Principia. That much is clear. But did Newton do it alone? He was naturally exposed to the ideas of such predecessors as Descartes and Galileo and such contemporaries as Leibniz and Huygens. That this collective influenced Newton is reflected in his own writing, “If I have seen further it is by standing on the shoulders of giants.” But the larger question regarding the Principia remains. Did Newton do it alone? The answer: not entirely.
Motion and change in motion
Motion and especially change in motion, thanks to Galileo’s work, remained the central focus in science during the 17th century, and the need to resolve these concepts, especially as they pertained to planetary motion, energized many coffeehouse discussions. Rising to the top of these discussions were the concepts of action-at-a-distance, circular motion, and the relation between the two.
1665-66 annus mirabilis
While to many action-at-a-distance was impossible, to Newton it wasn’t. Indeed, Newton embraced this concept when he began developing his theory of force during his annus mirabilis (miracle year). This was one of the most famous periods in science and it began in 1665 when Isaac Newton (1642-1727), seeking to get as far away from the Great Plague of London as possible, departed Cambridge University, where he was a student, taking all of his books with him to his family home in the countryside. In his one year of isolation, Newton “voyag[ed] through strange seas of thought alone” [1] and uncovered the logic and laws of many different phenomena in nature. He was all of 24 years old at the time! [What a great example of Cal Newport’s Deep Work!]
The challenge was circular motion
For Newton, though, circular motion remained a puzzle. Prior to 1679, Newton, along with many others, incorrectly viewed circular motion as an equilibrium between two opposing forces: an inward gravitational force that pulls a circling body toward the center and an apparent outward force that pushes a circling body away from the center. These are referred to as “centripetal” (center-seeking) and “centrifugal” (center-fleeing) forces, respectively.
1679 – Robert Hooke re-frames circular motion for Newton
But Newton’s mistaken view changed in 1679 when Robert Hooke (1635-1703) properly re-framed the issue. In his letters to Newton, Hooke proposed that planetary orbits follow motions caused by a central attraction continually diverting a straight-line inertial motion into a closed orbit. To Hooke, a single unbalanced force is at work in circular motion, specifically an inverse-square attraction of the sun for the planets, which leads to acceleration as opposed to the non-acceleration involved with Newton’s equilibrium view. Frustrated with the inability of his equilibrium view to describe nature, Newton immediately latched onto Hooke’s concept as the critical missing piece to his evolving philosophy.
Thank goodness for Hooke’s shoulders!
Without the insight provided by Hooke, Newton’s Principia probably would not have happened. It was Hooke who came along and sliced away the confusion by “exposing the basic dynamic factors [of circular motion] with striking clarity.” [2] It was Hooke who corrected the misconceptions about circular motion. It was Hooke who properly framed the problem. It was Hooke’s conceptual insight and mentoring that “freed Newton from the inhibiting concept of an equilibrium in circular motion.” [3] With his newfound clarity, Newton let go of the concept of the fictitious centrifugal force, embraced the concept of his newly created centripetal force (Universal Gravitation) pulling toward the center, and so changed the world of science. The year 1679 was a crucial turning point in Newton’s intellectual life and Hooke was the cause.
Why not Hooke?
Given all this, why Newton and not Hooke? Why didn’t Hooke author the Principia? Why all the acclaim to Newton? The short answer, according to science historian Richard Westfall, is that the bright idea is overrated when compared to the demonstrated theory. While Hooke did indeed provide a critical insight to Newton, the overall problem of motion, including a fundamental understanding of Universal Gravitation, remained unsolved, and the reason was that no one, including Hooke, could work out the math. Well, no one except Newton. You see, of Newton’s many annus mirabilis breakthroughs, one of the most impactful was his invention of calculus, and it was his insightful use of calculus that enabled him to quantify time-varying parameters, such as instantaneous rates-of-change.
Unfortunately, Newton had intentionally kept these breakthrough ideas of 1665-66 away from the public, a result of his “paralyzing fear of exposing his thoughts.” [4] They remained in his private notebooks, sitting on the sidelines, a tool waiting to be used.
1687 – The Principia
It wasn’t until 20 years after his miracle year that Newton finally sat down and created his famous Philosophiae Naturalis Principia Mathematica, since shortened to the Principia. Published in 1687 by the then-45-year-old and eventually hailed by the scientific community as a masterpiece, the Principia presented the foundation of a new physics of motion, based on his Laws of Motion and Universal Gravitation, that we now call Classical Mechanics.
Let’s not forget Halley’s shoulders!
But why then? What happened to trigger Newton’s massive undertaking? Here we meet the other person critical to the creation of the Principia, namely Edmond Halley (1656-1742). Halley recognized that Newton had something vital to share with the science community; Halley recognized Newton’s genius. And so it was Halley who travelled to Newton in 1684 to call him forward to solve the yet unsolved problems of motion and change in motion.
Thank goodness for Halley! He is one of the heroes in this story. What would have happened had he not been present? He was responsible for lighting the fire within Newton. And he did it with skill, approaching Newton with ego-stroking flattery as reflected by his even more direct, follow-up request in 1687: “You will do your self the honour of perfecting scientifically what all past ages have but blindly groped after.” [5]
And so the furnace was lit; “a fever possessed [Newton], like none since the plague years.” [6] Through Halley’s gentle but firm push, Newton shifted his intense focus away from his other pursuits–Newton was Professor of Mathematics at Cambridge at the time–and towards the cosmos. Newton drew upon his volume of unpublished annus mirabilis work and his later Hooke-inspired insights and pursued–slowly, methodically–the answer. And when it was all done, the Principia was born.
Clearly, no Hooke, no Halley, no Principia. But even more clearly, no Newton, no Principia.
The landscape has been so totally changed, the ways of thinking have been so deeply affected, that it is very hard to get hold of what it was like before. It is very hard to realize how total a change in outlook [Newton] produced – Hermann Bondi [7]
Both Joseph-Louis Lagrange and Pierre-Simon, Marquis de Laplace regretted that there was only one fundamental law of the universe, the law of universal gravitation, and that Newton had lived before them, foreclosing them from the glory of its discovery – I. Bernard Cohen and Richard S. Westfall [8]
[1] Wordsworth, William. 1850. The Prelude. Book Third. Residence at Cambridge, Lines 58-63. “And from my pillow, looking forth by light/Of moon or favouring stars, I could behold/The antechapel where the statue stood/Of Newton with his prism and silent face/The marble index of a mind for ever/Voyaging through strange seas of Thought, alone.”
[2] Westfall, Richard S. 1971. Force in Newton’s Physics: The Science of Dynamics in the Seventeenth Century. American Elsevier, New York, p. 426.
[3] Westfall, p. 433.
[4] Cohen, I. Bernard, and Richard S. Westfall, eds. 1995. Newton: Texts, Backgrounds, Commentaries. 1st ed. A Norton Critical Edition. New York, NY: W.W. Norton. p. 314. Referenced to John Maynard Keynes, “Newton, the Man,” in The Royal Society Newton Tercentenary Celebrations (1947). “[Newton’s] deepest instincts were occult, esoteric, semantic—with profound shrinking from the world, a paralyzing fear of exposing his thoughts, his beliefs, his discoveries in all nakedness to the inspection and criticism of the world.”
[5] Gleick, James. 2003. Isaac Newton. 1st ed. New York: Pantheon Books, p. 129.
[6] Gleick, p. 124.
[7] Bondi, Hermann. 1988. “Newton and the Twentieth Century—A Personal View.” In Let Newton Be! A New Perspective on His Life and Works, edited by R. Flood, J. Fauvel, M. Shortland, and R. Wilson.
[8] Cohen and Westfall, p. xiv-xv.
Thank you for reading my post. I go into much greater detail about the life and accomplishments of Sir Isaac Newton (1642-1727) in my book, Block by Block – The Historical and Theoretical Foundations of Thermodynamics. Energy came to be viewed through the Classical Mechanics paradigm created by Newton in the Principia. An excellent account of Newton’s work can be found in Richard Westfall’s Force in Newton’s Physics: The Science of Dynamics in the Seventeenth Century. American Elsevier, New York.
Galileo, perhaps more than any other single person, was responsible for the birth of modern science – Stephen Hawking [1]
Galileo was fascinated by motion and continually experimented with pendulums, cannons, and rolling balls to understand why bodies move the way they do. The arguable culmination of these efforts occurred in 1604 when he discovered what became known as “The Law of Fall”: the vertical distance travelled from rest (h) during free fall increases with the square of time (t).
h ∝ t² (Galileo’s Law of Fall)
Galileo went on to assert that given the Law of Fall and given that the distance fallen (h) equals average speed (v) multiplied by time, then speed itself must be proportional to time.
v ∝ t
Combining the two relationships, Galileo arrived at one of the most significant discoveries in history.
h ∝ v²
Simply put, these findings were momentous, being the first to 1) identify the importance of v² in science, and 2) discover the trade-off between what would become known as kinetic energy (½mv²) and potential energy (mgh), as formalized in the conservation of mechanical energy (m = mass, g = gravitational acceleration) that was established around 1750.
½mv² + mgh = constant (Conservation of Mechanical Energy)
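As a quick numerical check of how these relationships fit together, here is a minimal Python sketch of free fall from rest; the specific values are illustrative and modern SI units are assumed:

```python
import numpy as np

g = 9.81                       # gravitational acceleration, m/s^2
m = 1.0                        # mass, kg (illustrative)
t = np.linspace(0.0, 3.0, 7)   # times since release, s

h = 0.5 * g * t**2             # Law of Fall: h grows as t^2
v = g * t                      # speed grows in proportion to t

# Eliminating t gives Galileo's h proportional to v^2: h = v^2 / (2g).
assert np.allclose(h, v**2 / (2.0 * g))

# Energy bookkeeping: the kinetic energy gained, (1/2)mv^2, exactly equals
# the potential energy given up over the drop, mgh, so their sum is constant.
assert np.allclose(0.5 * m * v**2, m * g * h)
print("Law of Fall and the energy trade-off check out.")
```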
With this background, let’s now look at how Galileo accomplished this great feat.
The Law of Fall determined by an ingenious experiment
Hold a ball straight out in front of you. Then drop it. Watch how fast it accelerates and hits the ground. How could you possibly quantify this event, especially if you lived back in the 1600s, absent any kind of timing device? The fact that Galileo figured this out fascinates me. Here’s how he did it.
Galileo first focused on what he could directly measure: time and distance. But how did he measure these? When objects fall, they fall fast. So he slowed things down. He let balls roll down an inclined plane, which decreased the force in the direction of motion (look to your right). In this way he was able to mark on the plane the distance from rest at fixed time increments.
But wait a minute! Fixed time increments? Yes! How do we know? Because we have the original data! One would have thought all of Galileo’s papers would have been analyzed by the 20th century, but in 1973, Stillman Drake, a leading Galileo expert, discovered otherwise [2]. He was going through Galileo’s own notebooks and surprisingly unearthed the experimental data supporting the Law of Fall (look to your left). Galileo published the result but not the data leading to the result.
But wait another minute! How did Galileo measure those fixed time increments, especially in an era when the necessary timing devices didn’t even exist? Ah! This is where things get interesting, because Galileo didn’t say. Into this void stepped Drake. Drake suggested that since Galileo was raised in a musical world, he likely had a deep respect for the strong internal rhythm innate to human beings. He proposed that Galileo made use of this by singing a song or reciting a poem and using the cadence to mark time, placing rubber “frets” along the incline during the experiments to create audible bumps when the ball passed by. By adjusting or tuning these frets, Galileo was able to accurately sync the bump sounds with his internal cadence, thus providing a means to achieve equal divisions of small time increments. This proposed approach is strongly supported by the fixed time increments in the data; to Drake, the only method that would result in accurate fixed time increments would be a fixed cadence. “But wait!” you say, yet again. “How could this possibly provide the necessary accuracy?” Well, just observe yourself listening to live music when the drummer is but a fraction of a second off-beat. You cringe, right? This is because your innate rhythm is that strong.
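If the Law of Fall holds, marks laid down at equal time ticks must follow a fixed arithmetic signature: distances from rest scale as 1, 4, 9, 16, …, so the gaps between successive marks grow as the odd numbers 1, 3, 5, 7, … A tiny Python sketch (in arbitrary distance units) makes the point:

```python
# Positions of the marks at equal time ticks, if distance grows as t^2.
# Units are arbitrary; only the ratios matter when checking a cadence.
ticks = range(1, 9)
marks = [t * t for t in ticks]                     # 1, 4, 9, 16, 25, ...
gaps = [b - a for a, b in zip([0] + marks, marks)]

print("marks:", marks)  # distances from rest at each tick
print("gaps: ", gaps)   # 1, 3, 5, 7, ... the odd-number signature
```

Seeing this odd-number spacing in notebook data is exactly the kind of evidence that points to equal time increments.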
Now let’s take a step back and consider the larger impact that Galileo had on science.
Galileo’s discoveries, including The Law of Fall, led to the rise of modern science. Here are some reasons why.
The dawn of a new variable to science – time
Galileo was one of the first to use the concept of time as a dimension in a mathematical relationship. As noted by science historian Charles Gillispie [3], “Time eluded science until Galileo.” Linking time with another dimension, distance, opened the door to developing more complex relationships involving speed and acceleration.
Galileo brought mathematics into physics
Historically, physicists and mathematicians didn’t interact. Physicists resisted the use of math since professors in this area were invariably philosophers and not mathematicians. Galileo joined the two fields together by using a mathematical approach to describe and quantify the physical world and so test the hypotheses he formed. Moreover, he believed that “[The universe] is written in the language of mathematics” [4], and thus that mathematics could be used to describe all natural phenomena, and conversely that all natural phenomena must follow mathematical behavior. In his search for the Law of Fall, for example, he believed that a simple equation existed and then found the equation.
The scientific method
Although we may not recognize it, we work today in a world largely created by Galileo. We make observations of some process or phenomenon and make a hypothesis, e.g., a mathematical model, to explain it. We then design experiments to generate data to test the hypothesis. Is it right or wrong? This approach is built on Galileo’s approach that favors data over preconceived ideas.
Galileo and the launch of the scientific revolution
[T]he study of nature entered on the secure methods of a science, after having for many centuries done nothing but grope in the dark. – Kant, in reference to Galileo and others using experiments to understand nature [5]
The scientific revolution arguably started, if one had to pick a year, in 1543, when Copernicus put the sun back at the center of the solar system where it belonged. But while Copernicus may have started the revolution, Galileo clearly fanned its flames. His conviction that practical and controlled experimental observation, enhanced by mathematical theory and quantification, was the key to a more satisfying understanding of nature became an inspiration to those who followed.
So what did Galileo truly do that made the difference?
Between Galileo and Aristotle there were just a lot of guys with theories they never bothered to test – Helen Monaco’s character in Philip Kerr’s Prayer: A Novel [6]
So what did Galileo truly do that made the difference? Data! It’s fascinating to know that while everyone from Aristotle to those immediately preceding Galileo thought about all sorts of things, many of the same things that Galileo was to think about, none of them took any measurements. Galileo measured while others thought. We see this around us today. Much thinking, proposing, and speculating. But without measurements, it really doesn’t mean anything. As a former boss of mine once wisely said, “One data point is worth one thousand opinions.” Rarely has this been better put.
Thank you for reading my post. I go into much greater detail about the life and accomplishments of Galileo Galilei (1564-1642) in my book, Block by Block – The Historical and Theoretical Foundations of Thermodynamics. It was Galileo’s work that eventually helped lead to the 1st Law of Thermodynamics based on energy and its conservation.
The above illustrations are from my book. My thanks to Carly Sanker for bringing her great skill to creating them from my ideas. She is an excellent artist.
[1] Hawking, Stephen W., 1988, A Brief History of Time: From the Big Bang to Black Holes. A Bantam Book. Toronto: Bantam Books, p. 179.
[2] Drake, Stillman. 1973. “Galileo’s Discovery of the Law of Free Fall.” Scientific American 228 (5): pp. 84–92; 1975. “The Role of Music in Galileo’s Experiments.” Scientific American 232 (6): pp. 98–104.
[3] Gillispie, Charles Coulston. 1990. The Edge of Objectivity: An Essay in the History of Scientific Ideas. Princeton, NJ: Princeton University Press. p. 42.
[4] Popkin, Richard Henry, ed. 1966. The Philosophy of the Sixteenth and Seventeenth Centuries. New York: The Free Press. p. 65.
[5] Kant, Immanuel. 1896. Immanuel Kant’s Critique of Pure Reason: In Commemoration of the Centenary of Its First Publication. Macmillan. p. 692.
[6] Kerr, Philip. 2015. Prayer: A Novel. G.P. Putnam’s Sons. p. 73.
When asked my opinion on various science-related topics that are in the news, my usual reply is, “I don’t know.” It’s not that I’m incapable of knowing. It’s that I haven’t studied the topics in enough detail to have a well-grounded opinion. My scientific expertise lies elsewhere, in a less popular news cycle.
HOWEVER
If I were asked to develop a well-grounded opinion and had the time to do so, I would follow an approach that has withstood the test of time: the scientific method. My take is that while many have heard of this approach, only a few truly understand it, and fewer still employ it to its full capability. So my objectives here are to 1) share what this method entails, drawing largely from John Platt’s excellent article titled “Strong Inference” (1964), 2) provide examples from the evolution of thermodynamics to highlight key points, and 3) encourage you to embrace this approach in your own work.
Briefly speaking, the first step in the scientific method is INDUCTION. One gathers data, experiences, and observations and then induces a hypothesis to explain it all. In the second step, called DEDUCTION, one assumes the hypothesis to be true and then follows a rigorous cause-effect progression of thought to arrive at an array of consequences that have not yet been observed. The consequences inferred in this way cannot be false if the starting hypothesis is true (and no mistakes are made).
Thermodynamics generally evolved as laid out above. Rudolf Clausius reviewed years of data and analyses, especially including Sadi Carnot’s theoretical analysis of the steam engine and James Joule’s extensive work-heat experiments, and induced: dU = TdS – PdV. J. Willard Gibbs took this equation, assumed it to be true, and then deduced 300 pages of consequences, all the while excluding assumptions to ensure no weak links in his strong cause-effect chain of logic. To the best of my knowledge, he made no mistakes. Multiple experiments challenged his hypotheses; none succeeded. Gibbs’ success led scientists to view Clausius’ induced hypothesis as being true.
In parallel to the above efforts, which established classical thermodynamics, was the work of Clausius, James Clerk Maxwell, and Ludwig Boltzmann, among others, to establish statistical mechanics. One of my favorite examples of the scientific method in practice came from this work. Based on the induced assumption of the existence of gaseous atoms, Maxwell, an expert mathematician, deduced a kinetic model of gas behavior that predicted the viscosity of gas to be independent of pressure, a consequence that he simply couldn’t believe. But being a firm adherent of the scientific method, he fully understood the need to test the consequence. So he rolled up his sleeves and, together with his wife Katherine, assembled a large apparatus in their home to conduct a series of experiments that showed… the viscosity of gas to be independent of pressure! This discovery was a tremendous contribution to experimental physics and a wonderful example validating the worth of the scientific method.
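For the curious, a textbook kinetic-theory estimate (not Maxwell’s exact treatment) shows why the pressure dependence cancels. The viscosity of a dilute gas scales as

η ≈ (1/3) ρ v̄ λ

where ρ is the gas density, v̄ the mean molecular speed, and λ the mean free path. Since λ varies inversely with ρ, the density cancels from the product ρλ, leaving η dependent only on v̄, that is, on temperature rather than pressure.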
HOWEVER
There’s a critical weakness in the above illustration. Can you spot it? It’s the thought that a single hypothesis is all you should strive toward when seeking to solve a problem.
Be honest with yourself. What happens when you come up with your own reason for why something happens the way it does? You latch onto it. You protect it. It’s your baby. It’s human nature because, when all is said and done, you want to be right. Ah, the ego at work! And it’s exactly this situation that can do great damage to science. People become wedded to their singular “I have the answer!” moments and then go forward, ‘cherry picking’ evidence that supports their theory while selectively casting aside evidence that doesn’t. And it is exactly this situation that inspired John Platt to take the scientific method to a higher level: strong inference.
Platt proposes that the induction process, illustrated below, should lead to not one but instead to multiple hypotheses, as many as one can generate that could explain the data. The act of proposing “multiple” ensures that scientists don’t become wedded to “one.” The subsequent deduction process assumes that each hypothesis is true, whereupon the resulting consequences are tested. Each hypothesis must be testable in this process, with the objective of the test being to effectively kill the hypothesis with a definitive experiment. Recall that a hypothesis can’t be proven correct but can be proven false. All it takes is a single data point. If by logical reasoning and accompanying experimentation the proposed hypothesis doesn’t lead to the specific consequence, then the hypothesis is assumed to be false and must be removed from consideration. As Richard Feynman famously stated, “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” Only the hypothesis that cannot be proven false, the last one standing, is taken to be the correct hypothesis. Even then, this does not constitute a proof. A hypothesis is only taken to be correct, for the time being, if it offers a means to be tested and if those tests can’t prove it incorrect.
While the illustration below suggests a linear process, in reality, the process is more likely to be iterative. The initiating step typically occurs once a problem or an unexplainable observation is detected. At this point, it is critical that a statement of the problem be written out to ensure clarity and bring focus. As more is learned about the problem, as hypotheses are proposed and tested, as some hypotheses are eliminated and others are expanded to multiple sub-hypotheses, the entire process, with evolving and more detailed problem statements, may repeat itself, over and over, until a single detailed testable hypothesis remains.
Returning to this post’s opening, while I don’t have the time to invest in researching the various sciences being debated today, I do have the time to read those who are doing the research. My criteria for trusting their conclusions? Whether or not they followed Platt’s strong inference model. I want to see the collected data, ensuring that no cherry picking or selective elimination has occurred. I want to see that dissent was encouraged and not ignored. I want to see multiple hypotheses laid out on the table. I want to see an intelligent experimental attack on each and every hypothesis. I want to see the reasoning that leaves hypotheses standing or falling. If I see all of this, then I trust.
I encourage all scientists, no matter the field, to embrace strong inference. Yes, it takes time. But thinking that this process could be short-circuited because you believe you know the answer will eventually lead to problems. As a PhD engineer and friend of mine once said, “some of my biggest errors were when I didn’t follow the methodology.”
A fitting conclusion to this post is the wonderful quote below from Louis Pasteur, which captures the essence of Platt’s strong inference model.
“What I am here asking of you, and what you in turn will ask of those whom you will train, is the most difficult thing the inventor has to learn. To believe that one has found an important scientific fact and to be consumed by desire to announce it, and yet to be constrained to combat this impulse for days, weeks, sometimes years, to endeavor to ruin one’s own experiments, and to announce one’s discovery only after one has laid to rest all the contrary hypotheses, yes, that is indeed an arduous task. But when after all these efforts one finally achieves certainty, one feels one of the deepest joys it is given to the human soul to experience.” – Louis Pasteur, Nov. 14, 1888, in a speech given at the inauguration of the Pasteur Institute in Paris.
In a previous video (here), I stated my belief that a better understanding of thermodynamics is available by identifying the connections between the micro-world of moving and interacting atoms and the macro-world of classical thermodynamics. My goal is to do just this. My starting point? The Joule-Thomson effect, which is the temperature change that occurs in a gas stream as it is slowly depressurized. In this post I share my hypothesis as to what I believe is happening at the physical level to cause this effect.
Back in the mid-1800s, James Joule discovered that the temperature of a gas changes upon depressurization through a porous plug. At room temperature, most but not all gases cool down. Hydrogen and helium heat up.
Richard Feynman is my guiding light in trying to figure out the physical cause of this effect.
So let’s look at this. What happens as atoms approach each other? Well, at a large distance, nothing. They really don’t “see” each other since the forces of attraction and repulsion are insignificant. The motion is thus “free” and the gas can be modeled as an “ideal gas” with no intermolecular interactions.
As the atoms come closer toward each other, the attractive interaction becomes significant. This interaction happens when the electrons of one atom are attracted to the protons of the other. The atoms accelerate and their speeds increase.
At a certain point, closer still, the electrons of the two atoms repel each other and the interaction switches from attraction to strong repulsion. The atoms decelerate and their speeds decrease.
Since temperature is related to the average speed of atoms and molecules, let’s take a closer look at how these interactions affect the speed of atoms and thus the temperature of the gas as a whole.
Generally speaking, relative to “free motion”, when the attraction interaction is significant, atoms will be moving at higher speeds, and when the repulsion interaction is significant, atoms will be moving at slower speeds.
Gas temperature is related to the time-averaged kinetic energy of the atoms and thus depends on the relative amount of time the atoms spend in each of these categories. At low pressure when large distances separate atoms, the interactions are insignificant and “free motion” dominates. At high pressure when small distances separate the atoms, the interactions are significant. So whether heating or cooling occurs during Joule-Thomson expansion from high to low pressure depends on which interaction dominates at high pressure, attraction or repulsion. Per below, attraction dominance leads to cooling, while repulsion dominance leads to heating.
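To make the speed-up/slow-down picture concrete ahead of the full study, here is a minimal one-dimensional Python sketch, not the planned molecular dynamics simulation itself: one atom is launched toward a fixed partner under a Lennard-Jones force, in reduced units (sigma = epsilon = mass = 1), with all parameters chosen purely for illustration:

```python
def lj(r):
    """Lennard-Jones U(r) = 4(r^-12 - r^-6) and force F = -dU/dr, reduced units."""
    u = 4.0 * (r**-12 - r**-6)
    f = 24.0 * (2.0 * r**-13 - r**-7)
    return u, f

dt, r, v = 1e-4, 4.0, -1.0    # timestep, separation, inward velocity
speeds, turning = {}, r
for _ in range(120_000):      # velocity-Verlet integration
    _, f = lj(r)
    r += v * dt + 0.5 * f * dt**2
    _, f_new = lj(r)
    v += 0.5 * (f + f_new) * dt
    turning = min(turning, r)
    for mark in (3.0, 1.5, 1.12):  # record speed at selected separations (first pass)
        if mark not in speeds and abs(r - mark) < 5e-4:
            speeds[mark] = abs(v)

print(speeds)                 # speed climbs as the attractive well deepens
print(f"turning point near r = {turning:.2f}")  # repulsion halts, then reverses, the approach
```

The bookkeeping behind the output is simple: potential energy surrendered to the attractive well shows up as extra speed, and the steep repulsive wall takes it all back at the turning point.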
So there’s my hypothesis. Now it’s time to test it. A small group of us is working to employ molecular dynamics simulation to model the above scenarios and, in so doing, uncover why some gases cool while others heat, and also why an inversion point exists. Stay tuned!