Career decision making – trust your gut

When I give thermodynamics presentations to high school and college students, I begin with a 10-minute discussion about career decision-making based on my own experiences. I now share this discussion with you, both to provide you with helpful and hopefully inspiring ideas and also to seek your feedback. Do your thoughts align with mine? Let me know! [Note: the examples I use are from my academic years to align with the students, but the process I lay out applies to my entire career.]


Are you trying to decide what to do for your summer internship, your first job, the opportunity to switch jobs, to retire, or to accept an overseas assignment? If you are, I’m guessing you’re experiencing some degree of overwhelm. Here’s why.


“…one of the most difficult types of emotional labor is staring into the abyss of choice and picking a path.” – Seth Godin, Linchpin, p. 57.

“Fear of living without a map is the main reason people are so insistent that we tell them what to do.” – Seth Godin, Linchpin, p. 125.

Because you have options, you have choice, and that can be a source of overwhelm. Choice is both a blessing and a curse. Wouldn’t life be so much simpler if someone told you what to do, what decisions to make? Yes. But wouldn’t that be rather boring? And who exactly would tell you what to do? Who knows you better than you? Sure, your parents might weigh in early in life, but at some point, it’s your choice, and that choice can be overwhelming, as is well manifested by this scene with Robin Williams from Moscow on the Hudson. The goal then is to develop an approach to reduce the overwhelm by transforming the decision-making process from scary to exciting. Here’s how I did this. Maybe it will work for you.

Maslow’s Pyramid

In 1943 American psychologist Abraham Maslow proposed a priority sequence for human motivation. As later illustrated by others (to the right), humans are motivated to move up to each successive stage of the pyramid only once they feel satisfied with the stage they’re in. It’s very hard to worry about feelings of accomplishment when you’re worried about where your next meal is coming from.

Survive then Thrive

For my career decision-making, I took Maslow’s pyramid and simplified it. I wasn’t aware that this is what I was doing in my early years. I’m only recognizing it now. When I had to make a decision, my first priority was survival. Once I felt comfortable with that, and narrowed down the field of options, I then made my final decision based on my desire to thrive. Let me explain this process in more detail.


For each individual career decision, I had a range of options to choose from. At times, yes, it was overwhelming. But in the end I was able to narrow down the number of options by first passing them through my “survive” filter. To me, survival meant financial independence. Each career decision took me toward financial independence, the point at which I no longer had to work for somebody else. Some of these decisions didn’t earn me more money but instead earned me experiences that I knew would lead to higher-earning opportunities later on. Note the additional criteria I added to my “survive” filter: whatever it was I chose had to be something I was good at and enjoyed doing.


There is no intellectualizing what resonates with you… When it reveals itself, you feel it. – Ryder Carroll, The Bullet Journal Method, p. 146.

The now-smaller list of options then met my second “thrive” filter. This filter was governed by what I was passionate about, and the part of my body that best understood this was my gut. When a select group of “survive” choices came in front of me, I invariably knew, without necessarily knowing why, the one I wanted…as well as the one(s) I didn’t want. The one I wanted resonated with me, especially if it offered me the opportunity to journey on the road less traveled.

At some point, you have to make the decision

The important point I emphasize in the above illustration is the final red dot. At some point in the process I realized I had to make a decision, both to move forward and to gain experience. Sometimes the decision may have been to stay put, as this is always an option. But even with this seemingly non-decision decision, my choice to stay put was often accompanied by a decision to make a stronger commitment to what I was then doing.

Failure is guaranteed if you never begin – Ryder Carroll, The Bullet Journal Method, p. 125.

There is no “right” decision

The “thrive” decision was admittedly hard at times. Why? Because I often felt that a “right” decision existed and that my life would be forever damaged if I didn’t choose it. I now realize, in hindsight, that this is false. There is no “right” decision in my “thrive” filter. Once I realized this, it helped keep “analysis paralysis” at bay. Each decision leads down a different path, and for the most part, each path will work out just fine. They’ll just be different, that’s all.

Why I chose Bucknell University

For example, I considered a range of undergraduate universities: Bucknell, Lehigh, Clarkson, RPI. They were all good. I would have enjoyed any of them, each in a different way. Why did I choose Bucknell? Well, because when I visited the campus during high school spring break with my parents, Bucknell’s cherry blossoms were in full bloom–in hindsight, I think the groundskeepers somehow ensured this as it was spring break visit week!–and this sold me. Something in me clicked. My gut told me that Bucknell would work for me. I couldn’t list the reasons. The beauty of those trees played a role, perhaps. But I think there was much more to it than that. Sometimes decisions from the gut bypass the brain. All the experiences that I had in my life up until then, including the conversations with others and especially my parents, led me to that decision.

More experiences = better gut feel

This brings me to the curved arrow going from decision to experience. To me, the more decisions one makes, the more experiences one gains, and the better the gut feel develops. Gut feel doesn’t develop in a vacuum.

When you come to a fork in the road, take it – wisdom shared by a friend

Listening to your gut feel is so important when making career decisions. When considering the final decision from a range of options, you often just simply know deep down which decision you want. You feel it in your body. Trust this feeling. Use it to guide whether to do something… or not.

Consider the following, as described by Russ Roberts in his engaging book Wild Problems (p. 44). If you have to decide between two options, flip a coin, and while the coin is still spinning in the air, note which side you are hoping will come up. In that moment, you’ll realize that you don’t even need to see the outcome, because your decision will have already become clear to you. Trust your emotions. You don’t need to explain them to yourself or to others.

Why I went to Karlsruhe

As a final example of how this process worked for me, consider my decision to do a post-doctorate research project in Karlsruhe, Germany.

Remember those various display cases lining university hallways? They contain all sorts of interesting information. It was a rare occasion when I would stop and read, but all it took was once. I was walking down the MIT hallway, thinking about what company I was interested in joining upon graduation, when, for some unknown reason, I stopped at a case similar to the one on the right and actually read what was in it. A flyer spoke of scholarships offered by the German government to do post-graduate work at one of their universities. Bam! It hit me. I had never considered this before then. And all of a sudden it went to the top of my list.

Where did this decision come from? It came from everything, all of my experiences. My conversations with foreign students at MIT, the movies I watched, the stories from my dad about his international travel for Bristol-Myers, my interest in taking the fork, the road less traveled, the once-in-a-lifetime opportunity to live in a foreign country, not with a group, but on my own, knowing it would force me to learn the language. So many different experiences primed my gut to tell me, “Apply”. And I did. And I went. And I never looked back. This decision provided me with another experience, a big experience, that further developed my gut feel for the decisions I would be making later on in my life.

Final thoughts

The survive-then-thrive approach indeed helped me to manage the overwhelm during my career decision-making process. Along the way I learned to trust my gut more and more. How did you approach your own decision-making process? The same? Different? If you do try out any of these ideas, please let me know. In the meantime, thank you for reading my post. While I don’t specifically discuss the above concepts in my recently published book Block by Block – The Historical and Theoretical Foundations of Thermodynamics, they do make for a stimulating starting point for an engaging conversation around what motivated the early thermodynamics scientists in the directions they took in their own lives.

Why I paid $100 for a Vonnegut quote

Publisher: “Before we go to print, we just wanted to make sure you got permissions for the epigraphs in your book.”

Me: “What’s an epigraph?”

As I was traveling through the final stages of publishing my book, I learned that there are two approaches to using a quote. One is to embed the quote in the paragraph. The other is to use the quote at the beginning of a chapter, or a section within a chapter, in order to suggest its theme; this type of quote is called an epigraph. Much to my dismay, I learned that while the former doesn’t require permission from the publisher (so long as it is appropriately referenced), the latter does. This would have been fine… had I not liberally sprinkled well over one hundred epigraphs throughout my book!

So I sat down and wrote to many publishers, asking for permission to use select quotes from their material as epigraphs. And all said “yes” with no fee, except for one. Penguin Random House. They controlled the rights to Kurt Vonnegut’s Player Piano, and specifically to the quote, “Out on the edge you see all the kinds of things you can’t see from the center.” They wanted $100. Once I realized that my pleading wouldn’t move them, I decided to pay. The sentiment that Vonnegut expressed was really important to me. Here’s why.

In writing my book, I learned that many discoveries and insights in thermodynamics occurred when someone with strength in one technical area moved, with curiosity, to the edge of that area to check out what was happening in a different technical area. And it was there, at the interface, where they found opportunity.

Consider the case of Sadi Carnot and his theoretical analysis of the steam engine. He was educated at the prestigious École Polytechnique but then spent most of his adult years outside academia as an officer amongst his fellow engineers within the French military. And consider Galileo and his experimental and theoretical work on motion. He worked at the interface between craftsmanship and academia. For both, their respective exposures to a world apart from academia helped enable them to approach problems differently, if not uniquely.

Or consider James Joule, expert brewer, expert reader of thermometers, and amateur physicist. His work helped lay the foundation of the conservation of energy, alongside the efforts of Julius Robert von Mayer. Both Joule and Mayer were academic outsiders; neither was raised under the influence of the caloric theory of heat; neither was trapped by the academic paradigms that couldn’t grasp the concept of energy. Perhaps the value of being at the edge is just this. It’s where creative tension lies. One can bring the fresh-eyes look of an outsider, with no paradigm attachments, to catalyze a breakthrough in thinking.

As manifested in the table on the right, many of those responsible for contributing to the rise of thermodynamics achieved their respective successes by working at the interface between at least two different fields of study. Look, for example, at J. Willard Gibbs. He applied his expertise in mathematics to the study of heat, work, and equilibrium, and so helped lay the foundation of classical thermodynamics and also statistical mechanics. The success of these individuals and their approaches helped encourage others in subsequent years to explore the interface between different “silos” of science, business, art, and so on. It’s at the interface where creative opportunity exists.

The life of Bob Langer, Institute Professor at the Massachusetts Institute of Technology, is a great contemporary demonstration of the power of this approach. Bob brought his MIT ScD in Chemical Engineering into a different field, specifically medicine and biotechnology, and transformed, among other things, the world of drug delivery.

Thank you for reading my post. I go into much greater detail about the power that exists at Kurt Vonnegut’s edge in Block by Block – The Historical and Theoretical Foundations of Thermodynamics.

Newton: On whose shoulders did he stand?

No Newton, no Principia. That much is clear. But did Newton do it alone? He was naturally exposed to the ideas of such predecessors as Descartes and Galileo and such contemporaries as Leibniz and Huygens. That this collective influenced Newton is reflected in his own writing, “If I have seen further it is by standing on the shoulders of giants.” But the larger question regarding the Principia remains. Did Newton do it alone? The answer: not entirely.

Motion and change in motion

Motion and especially change in motion, thanks to Galileo’s work, remained the central focus in science during the 17th century, and the need to resolve these concepts, especially as they pertained to planetary motion, energized many coffeehouse discussions. Rising to the top of these discussions were the concepts of action-at-a-distance, circular motion, and the relation between the two.

1665-66 annus mirabilis

While to many, action-at-a-distance was impossible, to Newton, it wasn’t. Indeed, Newton embraced this concept when he began developing his theory of force during his annus mirabilis (miracle year). This was one of the most famous periods in science, and it began in 1665 when Isaac Newton (1642-1727), seeking to get as far away from the Great Plague of London as possible, departed Cambridge University, where he was a student, taking all of his books with him to his family home in the countryside. In his one year of isolation, Newton “voyag[ed] through strange seas of thought alone” [1] and uncovered the logic and laws of many different phenomena in nature. He was all of 24 years old at the time! [What a great example of Cal Newport’s Deep Work!]

The challenge was circular motion

For Newton, though, circular motion remained a puzzle. Prior to 1679, Newton, along with many others, incorrectly viewed circular motion as an equilibrium between two opposing forces, an inward gravitational force that pulls a circling body toward the center and an apparent outward force that pushes a circling body away from the center. These are referred to as “centripetal”–center seeking–and “centrifugal”–center fleeing–forces, respectively.

1679 – Robert Hooke re-frames circular motion for Newton

But Newton’s mistaken view changed in 1679 when Robert Hooke (1635-1703) properly re-framed the issue. In his letters to Newton, Hooke proposed that planetary orbits result from a central attraction continually diverting a straight-line inertial motion into a closed orbit. To Hooke, a single unbalanced force is at work in circular motion, specifically an inverse-square attraction of the sun for the planets, which leads to acceleration, as opposed to the non-acceleration involved in Newton’s equilibrium view. Frustrated with the inability of his equilibrium view to describe nature, Newton immediately latched onto Hooke’s concept as the critical missing piece of his evolving philosophy.

Thank goodness for Hooke’s shoulders!

Without the insight provided by Hooke, Newton’s Principia probably would not have happened. It was Hooke who came along and sliced away the confusion by “exposing the basic dynamic factors [of circular motion] with striking clarity.” [2] It was Hooke who corrected the misconceptions about circular motion. It was Hooke who properly framed the problem. It was Hooke’s conceptual insight and mentoring that “freed Newton from the inhibiting concept of an equilibrium in circular motion.” [3] With his newfound clarity, Newton let go of the concept of the fictitious centrifugal force, embraced the concept of his newly created centripetal force (Universal Gravitation) pulling toward the center, and so changed the world of science. The year 1679 was a crucial turning point in Newton’s intellectual life and Hooke was the cause.

Why not Hooke?

Given all this, why Newton and not Hooke? Why didn’t Hooke author the Principia? Why all the acclaim to Newton? The short answer, according to science historian Richard Westfall, is that the bright idea is overrated when compared to the demonstrated theory. While Hooke did indeed provide a critical insight to Newton, the overall problem of motion, including a fundamental understanding of Universal Gravitation, remained unsolved, and the reason was that no one, including Hooke, could work out the math. Well, no one except Newton. You see, of Newton’s many annus mirabilis breakthroughs, one of the most impactful was his invention of calculus, and it was his insightful use of calculus that enabled him to quantify time-varying parameters, such as instantaneous rates-of-change.

Unfortunately, Newton had intentionally kept these breakthrough ideas of 1665-66 away from the public, a result of his “paralyzing fear of exposing his thoughts.” [4] They remained in his private notebooks, sitting on the sidelines, a tool waiting to be used.

1687 – The Principia

It wasn’t until 20 years after his miracle year that Newton finally sat down and created his famous Philosophiae Naturalis Principia Mathematica, since shortened to the Principia. Published in 1687 by the 45-year-old Newton and eventually hailed by the scientific community as a masterpiece, the Principia presented the foundation of a new physics of motion, based on his Laws of Motion and Universal Gravitation, that we now call Classical Mechanics.

Let’s not forget Halley’s shoulders!

But why then? What happened to trigger Newton’s massive undertaking? Here we meet the other person critical to the creation of the Principia, namely Edmund Halley (1656-1742). Halley recognized that Newton had something vital to share with the science community; Halley recognized Newton’s genius. And so it was Halley who travelled to Newton in 1684 to call him forward to solve the yet unsolved problems of motion and change in motion.

Thank goodness for Halley! He is one of the heroes in this story. What would have happened had he not been present? He was responsible for lighting the fire within Newton. And he did it with skill, approaching Newton with ego-stroking flattery as reflected by his even more direct, follow-up request in 1687: “You will do your self the honour of perfecting scientifically what all past ages have but blindly groped after.” [5]

And so the furnace was lit; “a fever possessed [Newton], like none since the plague years.” [6] Through Halley’s gentle but firm push, Newton shifted his intense focus away from his other pursuits–Newton was Professor of Mathematics at Cambridge at the time–and towards the cosmos. Newton drew upon his volume of unpublished annus mirabilis work and his later Hooke-inspired insights and pursued–slowly, methodically–the answer. And when it was all done, the Principia was born.

Clearly, no Hooke, no Halley, no Principia. But even more clearly, no Newton, no Principia.

The landscape has been so totally changed, the ways of thinking have been so deeply affected, that it is very hard to get hold of what it was like before. It is very hard to realize how total a change in outlook [Newton] produced – Hermann Bondi [7]

Both Joseph-Louis Lagrange and Pierre-Simon, Marquis de Laplace regretted that there was only one fundamental law of the universe, the law of universal gravitation, and that Newton had lived before them, foreclosing them from the glory of its discovery – I. Bernard Cohen and Richard S. Westfall [8]

[1] Wordsworth, William. 1850. The Prelude. Book Third. Residence at Cambridge, Lines 58-63. “And from my pillow, looking forth by light/Of moon or favouring stars, I could behold/The antechapel where the statue stood/Of Newton with his prism and silent face/The marble index of a mind for ever/Voyaging through strange seas of Thought, alone.”

[2] Westfall, Richard S. 1971. Force in Newton’s Physics: The Science of Dynamics in the Seventeenth Century. American Elsevier, New York, p. 426.

[3] Westfall, p. 433.

[4] Cohen, I. Bernard, and Richard S. Westfall, eds. 1995. Newton: Texts, Backgrounds, Commentaries. 1st ed. A Norton Critical Edition. New York, NY: W.W. Norton. p. 314. Referenced to John Maynard Keynes, “Newton, the Man,” in The Royal Society Newton Tercentenary Celebrations (1947). “[Newton’s] deepest instincts were occult, esoteric, semantic—with profound shrinking from the world, a paralyzing fear of exposing his thoughts, his beliefs, his discoveries in all nakedness to the inspection and criticism of the world.”

[5] Gleick, James. 2003. Isaac Newton. 1st ed. New York: Pantheon Books, p. 129.

[6] Gleick, p. 124.

[7] Bondi, Hermann. 1988. “Newton and the Twentieth Century—A Personal View.” In Let Newton Be! A New Perspective on His Life and Works (Editors: R. Flood, J. Fauvel, M. Shortland, R. Wilson).

[8] Cohen and Westfall, p. xiv-xv.

Thank you for reading my post. I go into much greater detail about the life and accomplishments of Sir Isaac Newton (1642-1727) in my book, Block by Block – The Historical and Theoretical Foundations of Thermodynamics. Energy came to be viewed through the Classical Mechanics paradigm created by Newton in the Principia. An excellent account of Newton’s work can be found in Richard Westfall’s Force in Newton’s Physics: The Science of Dynamics in the Seventeenth Century. American Elsevier, New York.

How did Galileo measure time?

Galileo, perhaps more than any other single person, was responsible for the birth of modern science – Stephen Hawking [1]

Galileo was fascinated by motion and continually experimented with pendulums, cannons, and rolling balls to understand why bodies move the way they do. The arguable culmination of these efforts occurred in 1604 when he discovered what became known as “The Law of Fall”: the vertical distance travelled from rest (h) during free fall increases with the square of time (t).

h ∝ t²   Galileo’s Law of Fall

Galileo went on to assert that given the Law of Fall and given that the distance fallen (h) equals average speed (v) multiplied by time, then speed itself must be proportional to time.

v ∝ t

Combining the two relationships, Galileo arrived at one of the most significant discoveries in history.

h ∝ v²

Simply put, these findings were momentous, being the first to 1) identify the importance of v² in science, and 2) discover the trade-off between what would become known as kinetic energy (½mv²) and potential energy (mgh), as formalized in the conservation of mechanical energy (m = mass, g = gravitational acceleration) that was established around 1750.

½mv² + mgh = constant   Conservation of Mechanical Energy
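These relationships are easy to check numerically. The short sketch below is my own illustration, not from the original text, and the drop height and sample times are arbitrary; it verifies that for a body falling freely from rest, v ∝ t, h ∝ t², their combination h ∝ v², and the per-unit-mass conservation of mechanical energy all hold at every instant.

```python
# Numerical check of Galileo's proportionalities and mechanical energy
# conservation for free fall from rest. Illustrative values only.
g = 9.81   # gravitational acceleration, m/s^2
H = 20.0   # arbitrary drop height, m

for t in [0.5, 1.0, 1.5]:
    v = g * t              # speed grows linearly with time: v ∝ t
    d = 0.5 * g * t**2     # distance fallen grows as t²: h ∝ t²
    # Combining the two: v²/h equals the same constant (2g) at every
    # instant, i.e., h ∝ v².
    assert abs(v**2 / d - 2 * g) < 1e-9
    # Per unit mass, ½v² + g·(height above ground) stays fixed at g·H:
    # kinetic energy gained exactly matches potential energy lost.
    assert abs(0.5 * v**2 + g * (H - d) - g * H) < 1e-9
```

The second assertion is the conservation statement above divided through by the mass m, which is why m never appears in the check.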

With this background, let’s now look at how Galileo accomplished this great feat.

The Law of Fall determined by an ingenious experiment

Hold a ball straight out in front of you. Then drop it. Watch how fast it accelerates and hits the ground. How could you possibly quantify this event, especially if you lived back in the 1600s, without any kind of timing device? The fact that Galileo figured this out fascinates me. Here’s how he did it.

Galileo first focused on what he could directly measure: time and distance. But how did he measure these? When objects fall, they fall fast. So he slowed things down. He let balls roll down an inclined plane, which decreased the force in the direction of motion (look to your right). In this way he was able to mark on the plane the distance from rest at fixed time increments.

But wait a minute! Fixed time increments? Yes! How do we know? Because we have the original data! One would have thought all of Galileo’s papers would have been analyzed by the 20th century, but in 1973, Stillman Drake, a leading Galileo expert, discovered otherwise [2]. While going through Galileo’s own notebooks, he unearthed the experimental data supporting the Law of Fall (look to your left). Galileo published the result but not the data leading to the result.

But wait another minute! How did Galileo measure those fixed time increments, especially in an era when the necessary timing devices didn’t even exist? Ah! This is where things get interesting, because Galileo didn’t say. Into this void stepped Drake. Drake suggested that since Galileo was raised in a musical world, he likely had a deep respect for the strong internal rhythm innate to human beings. He proposed that Galileo made use of this by singing a song or reciting a poem to mark time, while rubber “frets” placed along the incline created audible bumps when the ball passed by. By adjusting or tuning these frets, Galileo was able to accurately sync the bump sounds with his internal cadence, thus providing a means to achieve equal divisions of small time increments. This proposed approach is strongly supported by the fixed time increments in the data. To Drake, the only method that would result in accurate fixed time increments would be a fixed cadence. “But wait!,” you say, yet again. “How could this possibly provide the necessary accuracy?” Well, just observe yourself listening to live music when the drummer is but a fraction of a second off-beat. You cringe, right? This is because your innate rhythm is that strong.

Now let’s take a step back and consider the larger impact that Galileo had on science.

Galileo’s discoveries, including The Law of Fall, led to the rise of modern science. Here are some reasons why.

The dawn of a new variable to science – time

Galileo was one of the first to use the concept of time as a dimension in a mathematical relationship. As noted by science historian Charles Gillispie [3], “Time eluded science until Galileo.” Linking time with another dimension, distance, opened the door to developing more complex relationships involving speed and acceleration.

Galileo brought mathematics into physics

Historically, physicists and mathematicians didn’t interact. Physicists resisted the use of math since professors in this area were invariably philosophers and not mathematicians. Galileo joined the two fields together by using a mathematical approach to describe and quantify the physical world and so test the hypotheses he formed. Moreover, he believed that “[The universe] is written in the language of mathematics” [4], and thus that mathematics could be used to describe all natural phenomena and, conversely, that all natural phenomena must follow mathematical behavior. In his search for the Law of Fall, for example, he believed that a simple equation existed and then found the equation.

The scientific method

Although we may not recognize it, we work today in a world largely created by Galileo. We make observations of some process or phenomenon and make a hypothesis, e.g., a mathematical model, to explain it. We then design experiments to generate data to test the hypothesis. Is it right or wrong? This approach is built on Galileo’s approach that favors data over preconceived ideas.

Galileo and the launch of the scientific revolution

[T]he study of nature entered on the secure methods of a science, after having for many centuries done nothing but grope in the dark. – Kant, in reference to Galileo and others using experiments to understand nature. [5]

The scientific revolution arguably started, if one had to pick a year, in 1543, when Copernicus put the sun back at the center of the solar system where it belonged. But while Copernicus may have started the revolution, Galileo clearly fanned its flames. His conviction that practical and controlled experimental observation, enhanced by mathematical theory and quantification, was the key to a more satisfying understanding of nature became an inspiration to those who followed.

So what did Galileo truly do that made the difference?

Between Galileo and Aristotle there were just a lot of guys with theories they never bothered to test – Helen Monaco’s character in Philip Kerr’s Prayer: A Novel [6]

So what did Galileo truly do that made the difference? Data! It’s fascinating to know that while everyone from Aristotle to those immediately preceding Galileo thought about all sorts of things, many of them the same things that Galileo was to think about, none of them took any measurements. Galileo measured while others thought. We see this around us today. Much thinking, proposing, and speculating. But without measurements, it really doesn’t mean anything. As a former boss of mine once wisely said, “One data point is worth one thousand opinions.” Rarely has this been better put.

Thank you for reading my post. I go into much greater detail about the life and accomplishments of Galileo Galilei (1564-1642) in my book, Block by Block – The Historical and Theoretical Foundations of Thermodynamics. It was Galileo’s work that helped eventually lead to the 1st Law of Thermodynamics, based on energy and its conservation.

The above illustrations are from my book. My thanks to Carly Sanker for bringing her great skill to creating them from my ideas. She is an excellent artist.


[1] Hawking, Stephen W., 1988, A Brief History of Time: From the Big Bang to Black Holes. A Bantam Book. Toronto: Bantam Books, p. 179.

[2] Drake, Stillman. 1973. “Galileo’s Discovery of the Law of Free Fall.” Scientific American 228 (5): pp. 84–92; 1975. “The Role of Music in Galileo’s Experiments.” Scientific American 232 (6): pp. 98–104.

[3] Gillispie, Charles Coulston. 1990. The Edge of Objectivity: An Essay in the History of Scientific Ideas. 10. paperback printing and first printing with the new preface. Princeton, NJ: Princeton Univ. Press. p. 42.

[4] Popkin, Richard Henry, ed. 1966. The Philosophy of the Sixteenth and Seventeenth Centuries. New York: The Free Press. p. 65.

[5] Kant, Immanuel. 1896. Immanuel Kant’s Critique of Pure Reason: In Commemoration of the Centenary of Its First Publication. Macmillan. p. 692.

[6] Kerr, Philip. 2015. Prayer: A Novel. G.P. Putnam’s Sons. p. 73.

Science and the power of multiple hypotheses

When asked my opinion on various science-related topics that are in the news, my usual reply is, “I don’t know.” It’s not that I’m incapable of knowing. It’s that I haven’t studied the topics in enough detail to have a well-grounded opinion. My scientific expertise lies elsewhere, in a less popular news cycle.


If I were asked to develop a well-grounded opinion and had the time to do so, I would follow an approach that has withstood the test of time: the scientific method. My take is that while many have heard of this approach, only a few truly understand it, and fewer still employ it to its full capability. So my objectives here are to 1) share what this method entails, drawing largely from John Platt’s excellent article titled “Strong Inference” (1964), 2) provide examples from the evolution of thermodynamics to highlight key points, and 3) encourage you to embrace this approach in your own work.

Briefly speaking, the first step in the scientific method is INDUCTION. One gathers data, experiences, and observations and then induces a hypothesis to explain it all. In the second step, called DEDUCTION, one assumes the hypothesis to be true and then follows a rigorous cause-effect progression of thought to arrive at an array of consequences that have not yet been observed. The consequences inferred in this way cannot be false if the starting hypothesis is true (and no mistakes are made).

Thermodynamics generally evolved as laid out above. Rudolf Clausius reviewed years of data and analyses, most notably Sadi Carnot’s theoretical analysis of the steam engine and James Joule’s extensive work-heat experiments, and induced: dU = TdS – PdV. J. Willard Gibbs took this equation, assumed it to be true, and then deduced 300 pages of consequences, all the while avoiding assumptions that would introduce weak links into his cause-effect chain of logic. To the best of my knowledge, he made no mistakes. Multiple experiments challenged his deduced consequences; none overturned them. Gibbs’ success led scientists to view Clausius’ induced hypothesis as true.
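To give a small flavor of what that deductive step looks like, here is one consequence that follows purely from treating Clausius’ equation as an exact differential. This is a standard textbook example, not a line taken from Gibbs’ own pages:

```latex
% Since dU = T\,dS - P\,dV is an exact differential in S and V,
% the mixed second derivatives of U must be equal, which yields
% the Maxwell relation
\left(\frac{\partial T}{\partial V}\right)_{S}
  = -\left(\frac{\partial P}{\partial S}\right)_{V}
```

Each such relation connects measurable quantities and is therefore open to experimental challenge, which is exactly how the induced hypothesis earns its keep.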

In parallel to the above efforts, which established classical thermodynamics, was the work of Clausius, James Clerk Maxwell, and Ludwig Boltzmann, among others, to establish statistical mechanics. One of my favorite examples of the scientific method in practice came from this work. Based on the induced assumption of the existence of gaseous atoms, Maxwell, an expert mathematician, deduced a kinetic model of gas behavior that predicted the viscosity of gas to be independent of pressure, a consequence that he simply couldn’t believe. But being a firm adherent of the scientific method, he fully understood the need to test the consequence. So he rolled up his sleeves and, together with his wife Katherine, assembled a large apparatus in their home to conduct a series of experiments that showed… the viscosity of gas to be independent of pressure! This discovery was a tremendous contribution to experimental physics and a wonderful example validating the worth of the scientific method.
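The back-of-the-envelope version of Maxwell’s surprising result runs as follows (a hard-sphere textbook sketch, not his original derivation): viscosity in the kinetic model scales with the number density of molecules times their mean free path, and the mean free path scales inversely with number density, so density, and hence pressure, cancels out.

```latex
% Kinetic-theory estimate of gas viscosity (hard-sphere sketch):
% n = number density, m = molecular mass, \bar{v} = mean speed,
% \lambda = mean free path, d = molecular diameter.
\eta \approx \tfrac{1}{3}\, n m \bar{v} \lambda,
\qquad
\lambda = \frac{1}{\sqrt{2}\,\pi d^{2} n}
\quad\Longrightarrow\quad
\eta \approx \frac{m \bar{v}}{3\sqrt{2}\,\pi d^{2}}
% n cancels: at fixed temperature, viscosity is independent of pressure.
```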


There’s a critical weakness in the above illustration. Can you spot it? It’s the thought that a single hypothesis is all you should strive toward when seeking to solve a problem.

Be honest with yourself. What happens when you come up with your own reason for why something happens the way it does? You latch onto it. You protect it. It’s your baby. It’s human nature because, when all is said and done, you want to be right. Ah, the ego at work! And it’s exactly this situation that can do great damage to science. People become wedded to their singular “I have the answer!” moments and then go forward, ‘cherry picking’ evidence that supports their theory while selectively casting aside evidence that doesn’t. And it is exactly this situation that inspired John Platt to take the scientific method to a higher level: strong inference.

Platt proposes that the induction process, illustrated below, should lead not to one hypothesis but to multiple hypotheses, as many as one can generate that could explain the data. The act of proposing “multiple” ensures that scientists don’t become wedded to “one.” The subsequent deduction process assumes each hypothesis to be true, whereupon the resulting consequences are tested. Each hypothesis must be testable, the objective of each test being to kill the hypothesis with a definitive experiment. Recall that a hypothesis can’t be proven correct, but it can be proven false; all it takes is a single data point. If by logical reasoning and accompanying experimentation a hypothesis doesn’t lead to its predicted consequence, it is taken to be false and removed from consideration. As Richard Feynman famously stated, “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” Only the hypothesis that cannot be proven false, the last one standing, is taken to be correct. Even then, this does not constitute proof. A hypothesis is only taken to be correct, for the time being, if it offers a means of being tested and those tests fail to prove it incorrect.
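The elimination logic above can be sketched as a small loop. This is purely an illustrative toy (the hypotheses, data, and function names are invented for this sketch), but it captures Platt’s key asymmetry: one disagreeing data point removes a hypothesis, while no amount of agreement proves one.

```python
# Illustrative sketch of strong inference as a falsification loop.
# All names and data here are hypothetical, invented for this example.

def strong_inference(hypotheses, experiments):
    """Return the hypotheses whose predictions survive every experiment.

    hypotheses:  dict mapping name -> prediction function
    experiments: list of (input, observed_outcome) pairs
    """
    surviving = dict(hypotheses)
    for x, observed in experiments:
        for name, predict in list(surviving.items()):
            # A single disagreeing data point falsifies a hypothesis.
            if predict(x) != observed:
                del surviving[name]
    return surviving  # the "last ones standing": unrefuted, not proven

# Toy question: what rule governs which integers pass some test?
hypotheses = {
    "always even": lambda x: x % 2 == 0,
    "always positive": lambda x: x > 0,
}
experiments = [(4, True), (7, False), (-2, True)]

print(sorted(strong_inference(hypotheses, experiments)))  # ['always even']
```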

While the illustration below suggests a linear process, in reality, the process is more likely to be iterative. The initiating step typically occurs once a problem or an unexplainable observation is detected. At this point, it is critical that a statement of the problem be written out to ensure clarity and bring focus. As more is learned about the problem, as hypotheses are proposed and tested, as some hypotheses are eliminated and others are expanded to multiple sub-hypotheses, the entire process, with evolving and more detailed problem statements, may repeat itself, over and over, until a single detailed testable hypothesis remains.

Returning to this post’s opening, while I don’t have the time to invest in researching the various sciences being debated today, I do have the time to read those who are doing the research. My criteria for trusting their conclusions? Whether or not they followed Platt’s strong inference model. I want to see the collected data, ensuring that no cherry picking or selective elimination has occurred. I want to see that dissent was encouraged and not ignored. I want to see multiple hypotheses laid out on the table. I want to see an intelligent experimental attack on each and every hypothesis. I want to see the reasoning that leaves hypotheses standing or falling. If I see all of this, then I trust.

I encourage all scientists, no matter the field, to embrace strong inference. Yes, it takes time. But thinking that this process could be short-circuited because you believe you know the answer will eventually lead to problems. As a PhD engineer and friend of mine once said, “some of my biggest errors were when I didn’t follow the methodology.”

A fitting conclusion to this post is the wonderful quote below from Louis Pasteur, which captures the essence of Platt’s strong inference model.

“What I am here asking of you, and what you in turn will ask of those whom you will train, is the most difficult thing the inventor has to learn. To believe that one has found an important scientific fact and to be consumed by desire to announce it, and yet to be constrained to combat this impulse for days, weeks, sometimes years, to endeavor to ruin one’s own experiments, and to announce one’s discovery only after one has laid to rest all the contrary hypotheses, yes, that is indeed an arduous task. But when after all these efforts one finally achieves certainty, one feels one of the deepest joys it is given to the human soul to experience.” – Louis Pasteur, Nov. 14, 1888, in a speech given at the inauguration of the Pasteur Institute in Paris.

Thank you for reading my post. The above illustrations are from my book, Block by Block – The Historical and Theoretical Foundations of Thermodynamics. My thanks to Carly Sanker for bringing her great skill to creating them from my ideas. She is an excellent artist.

Special thanks to Jim Faler and Brian Stutts for introducing me to John Platt’s work and also for their contributions to this post.

Joule-Thomson Effect (Part 2) – my hypothesis

In a previous video (here), I stated my belief that a better understanding of thermodynamics is available by identifying the connections between the micro-world of moving and interacting atoms and the macro-world of classical thermodynamics. My goal is to do just this. My starting point? The Joule-Thomson effect, which is the temperature change that occurs in a gas stream as it is slowly depressurized. In this post I share my hypothesis as to what I believe is happening at the physical level to cause this effect.

Back in the mid-1800s, James Joule discovered that the temperature of a gas changes upon depressurization through a porous plug. At room temperature, most but not all gases cool down. Hydrogen and helium heat up.
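For reference, the size and sign of this temperature change is captured by the Joule-Thomson coefficient, a standard thermodynamic result that also makes the ideal-gas case easy to check:

```latex
% Joule-Thomson coefficient (temperature change per unit pressure
% drop at constant enthalpy):
\mu_{JT} \equiv \left(\frac{\partial T}{\partial P}\right)_{H}
  = \frac{1}{C_P}\left[\,T\left(\frac{\partial V}{\partial T}\right)_{P} - V\,\right]
% For an ideal gas, V = nRT/P, so T(\partial V/\partial T)_P = V and
% \mu_{JT} = 0: no temperature change at all. Cooling (\mu_{JT} > 0)
% or heating (\mu_{JT} < 0) upon depressurization is therefore purely
% a consequence of intermolecular interactions.
```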

Richard Feynman is my guiding light in trying to figure out the physical cause of this effect.

So let’s look at this. What happens as atoms approach each other? Well, at a large distance, nothing. They really don’t “see” each other since the forces of attraction and repulsion are insignificant. The motion is thus “free” and the gas can be modeled as an “ideal gas” with no intermolecular interactions.

As the atoms come closer to each other, the attractive interaction becomes significant. This interaction arises because the electrons of one atom are attracted to the protons of the other. The atoms accelerate and their speeds increase.

At a certain point, closer still, the electrons of the two atoms repel each other and the interaction switches from attraction to strong repulsion. The atoms decelerate and their speeds decrease.

Since temperature is related to the average speed of atoms and molecules, let’s take a closer look at how these interactions affect the speed of atoms and thus the temperature of the gas as a whole.

Generally speaking, relative to “free motion”, when the attractive interaction is significant, atoms will be moving at higher speeds, and when the repulsive interaction is significant, atoms will be moving at slower speeds.

Gas temperature is related to the time-averaged kinetic energy of the atoms and thus depends on the relative amount of time the atoms spend in each of these categories. At low pressure when large distances separate atoms, the interactions are insignificant and “free motion” dominates. At high pressure when small distances separate the atoms, the interactions are significant. So whether heating or cooling occurs during Joule-Thomson expansion from high to low pressure depends on which interaction dominates at high pressure, attraction or repulsion. Per below, attraction dominance leads to cooling, while repulsion dominance leads to heating.

So there’s my hypothesis. Now it’s time to test it. A small group of us is working to employ molecular dynamics simulation to model the above scenarios and, in so doing, uncover why some gases cool while others heat, and also why an inversion point exists. Stay tuned!

Goggins, Full Capability, and “Atoms First” Thermodynamics

David Goggins, ex-Navy SEAL, now ultra-athlete and motivational speaker, shared in a popular YouTube video (JRE #1212) something that I found incredibly motivating. His biggest fear, and I paraphrase here, is that he arrives at the gates of Heaven and sees God there with a clipboard, holding a list of many great accomplishments. Goggins’ fear is that he looks at this list and says to God, “But that’s not me. I didn’t do those things,” and the all-knowing God replies, “That’s correct. That’s who you could have been. This is a list of all those things you were capable of doing.”

What does Goggins have to do with thermodynamics and, more generally, the subject of education? Nothing and everything. While he has accomplished much in his life, I’m not sure he ever turned his eyes toward thermodynamics. But that’s not the point here. My point is the provocative question that rises from Goggins’ fear. How many students graduate from K-12, college, or grad school without having achieved what they are capable of? How many graduate with a significant gap between who they are and who they could have been? As I recently shared (video here), my short answer for the specific world of university-level thermodynamics is, “too many.” This is unacceptable.

To this end, I believe that each and every university student enrolled in a thermodynamics course is capable of graduating from that course with a solid understanding of thermodynamics. So why isn’t this happening? To me, one of the major reasons lies in the first rung of the multi-step education process: the teacher must understand the material. I don’t believe this is happening for the simple reason that we as educators don’t fully understand thermodynamics.

“The deepest understanding of thermodynamics comes, of course, from understanding the actual machinery underneath.” – Richard Feynman

The understanding I’m referring to is not of the thermodynamic equations and their use to solve problems; the majority of teachers and textbooks already do a very good job teaching this material. What I’m referring to instead is a deep understanding of what the equations physically mean. We simply aren’t there yet, as best evidenced by the fact that I’m aware of no textbook that presents a physical explanation of thermodynamics based on the motions and interactions of atoms and molecules. Because of this, students learn the equations without understanding what they mean and come to view thermodynamics as an indecipherable black box, leaving them intimidated by, and hesitant to use, this powerful science. This is their loss, as they fall short of who they could be, and society’s loss, as real-world problems remain unsolved.

The opportunity exists to create a thermodynamics curriculum based on atoms. Employing such an “atoms first” approach will enable students to better learn, better understand, and more confidently employ thermodynamics in a proactive and creative way. The challenge in front of us is to develop this curriculum. Most of the material is already out there in the pages of books and journals and in the minds of many. It needs to be assimilated into one single place. And some of the material remains to be discovered. If students are to reach their full capabilities, we need to gather and create, where needed, this content. This is the task in front of us. Time to start.

Thermodynamic “pain point” results – here are your responses

I believe that a better understanding of thermodynamics is available by explaining the connections between the micro-world of moving and colliding atoms that attract and repel each other and the macro-world of classical thermodynamics. My goal is to identify and clarify such micro-to-macro connections. To ensure that I’m addressing true needs of the science community, I reached out to you all at the beginning of this year (here) to seek your personal “pain points” with thermodynamics. I asked, what are the stumbling blocks you encounter when trying to teach or learn the physical meanings behind thermodynamic equations and phenomena? Presented below are your responses. My thanks to those who engaged in this exercise.

If you feel you can address any items on the list with supportive references, could you please let me know?

Based on a review of these responses, and subsequent discussions with some of you, I have decided to begin this journey by focusing on a single, specific phenomenon, the Joule-Thomson effect, number 13 in the list below. I hope to have some results to share with you in my next post.

A final note. One responder asked me, how can you animate thermodynamic concepts so that students can understand them? How can you translate physical chemistry and thermodynamics into practical real-world audiovisual content? If any of you has ideas on this, please let me know.

_ _ _ _ _

  1. Explain the physical meaning of not only temperature, but also energy, entropy, enthalpy, exergy, Gibbs energy, and the dreaded fugacity.
  2. Speaking of which, what exactly is fugacity and how does it relate to the material world?
  3. What is the physical mechanism behind the existence of the critical point?
  4. What does the concept of energy minimization physically mean, and how is this applied in the form of Gibbs free energy minimization of protein folding?
  5. Reversibility: What is it (really) and why is it important?
  6. What is the fundamental physical cause of the temperature effects that result when you depressurize a gas cylinder?
  7. Explain the presence of heterogeneous azeotropes.
  8. How deep a vacuum on steam turbines is worthwhile to pursue? How can this be more quickly understood and appreciated?
  9. Van der Waals equation. Why do long-range attraction and short-range repulsion give a liquid (i.e., a phase transition), beyond just saying, “It’s in the math”? Does this same phenomenon hold for colloids and polymers in solution, i.e., do they undergo a “vapor-liquid type” phase transition?
  10. Column of gas in a gravitational field. Is it isothermal or is there a temperature gradient, and why? James Clerk Maxwell and Ludwig Boltzmann assumed the former, Josef Loschmidt the latter. Who was right?
  11. Where does thermodynamics begin? Is a perfect vacuum really a thermodynamic system? Without molecules, do we have pressure, temperature, Q, W, S & H?
  12. Gas Phase Behavior – ideal and non-ideal. Given that all atoms/molecules attract all atoms/molecules via London dispersion forces, what is it that makes a gas behave ideally, such that the attraction has negligible influence? What is the dividing line between “ideal gas” and “non-ideal gas”? Some have suggested that the formation of dimers causes the deviation from the ideal gas law, but I have yet to find conclusive evidence of this in the literature.
  13. Joule-Thomson Effect – explain this. This effect is naturally related to the combined effects of intermolecular attraction and repulsion. But how exactly does this work at the molecular level? How does this explain, for example, no effect for ideal gas, heating effect for hydrogen, and the presence of an inversion temperature?
  14. Gas – flow. Explain in plain English Bernoulli’s equation, especially the trade-off between pressure and flow velocity.
  15. Photons. When are photons released? Solely with the acceleration of charges? Does this always release photons? As an unbound electron accelerates towards a single proton, are photons released? How about during chemical reactions? Because chemical reactions involve a change in energy level of electrons, are photons always released? If so, should photons be included in reaction equations? When photons are absorbed, heat is generated in the form of an increase in temperature. What is the energy balance around photon absorption (and emission)? What is the physical event that leads to an increase in kinetic energy of the atoms that absorb the photon? When does the presence of photons influence reaction equilibrium?
  16. Explain the micro-physics behind the Stefan-Boltzmann T4 law of radiation.
  17. Explain the micro-physics behind the existence of a supercritical fluid.
  18. Explain the Clausius-Clapeyron Equation in plain English. Why is it what it is?
  19. Gas Phase Reactions – Walk through exactly what happens at the atomic scale during reaction. For example, picture two hydrogen atoms. Long-range attraction draws each towards the other. But up close, the strong electron repulsion pushes them apart. How is this repulsive force overcome so that reaction occurs? Also, “heat” is generated when two hydrogen atoms combine to form molecular hydrogen. What exactly does this mean? What specific physical events lead to an increase in the kinetic energy of the atoms comprising the H-atom gas system when they react? Also, are photons emitted as a result of this reaction?
  20. What exactly does the change in Gibbs energy of a chemical reaction quantify? Is it simply the total change in energy of the orbital electrons?
  21. Why isn’t the distribution of orbital electrons included in the Boltzmann definition of entropy? If a chemical reaction is really the distribution of orbital electrons into their most probable distribution, shouldn’t the change in entropy account for this?
  22. Phase Change – Vapor/Liquid (similar discussion for liquid/solid). How does phase change occur? Walk through each step involved in energy balance. Also, walk through condensation. How does an atom/molecule slow down enough to be ‘captured’ by another atom/molecule? Do the slow atoms/molecules at the left end (slower speed) of the statistical distribution condense first? (Same could be asked of chemical reactions. Do the fast atoms/molecules at the right end of the statistical distribution react first?) When an atom escapes from liquid to vapor, what velocity does it end with? Is the resulting vapor initially at a very low temperature due to escape and then is this why some thermal energy is needed to bring the escaped gas up to temperature of the liquid? Also, does the average velocity of particles in vapor fall short of their average velocity in liquid, especially in case the liquid is a solution and the vapor pressure at a given temperature of the liquid is hence reduced?
  23. Absolute zero. Yes, the entropy of a pure crystal is zero at absolute zero. But aren’t the electrons still in motion? And wouldn’t this mean that the polarity of the atoms is not constant at absolute zero, and instead varies, and wouldn’t this result in a variation of attraction and hence result in motion, which is inconsistent with the concept of absolute zero? So what does matter physically look like at absolute zero?


What are your personal “pain points” with thermodynamics?

What are your personal “pain points” with thermodynamics? What are the stumbling blocks you encounter when trying to understand the physical meaning behind such thermodynamic equations and phenomena as Gibbs free energy, Joule-Thomson expansion, phase change, and even the physical properties of matter, including heat capacity and absolute temperature? Please share them with me in the comments section below or via direct email, and I’ll add them to my own list of stumbling blocks and unanswered questions. My objective in doing this is as follows.

I believe that a better understanding of thermodynamics is available by explaining the connections between the micro-world of moving and colliding atoms and the macro-world of classical thermodynamics. My goal is to identify and clarify the micro-to-macro connections for the final list of “pain points” generated here, this list serving to ensure I’m addressing true needs of the science community. I remain undecided on how best to share the results back with you all. It may be a second book, a dedicated YouTube channel, or some other form. Regardless, a long journey awaits, and I’m looking forward to it.

If you have ideas on where best to locate well documented micro-to-macro connections, please let me know. My starting point is Richard Feynman’s excellent “Lectures on Physics” but even there, while some of my own pain points are indeed addressed, many aren’t.

Professors & teachers – please consider sharing this with your colleagues and also with your current or past students. I’d be very interested to hear their take on things.

Wishing each of you well for 2022.

Thank you,

The Road to Entropy – Boltzmann and his probabilistic entropy

Ludwig Boltzmann (1844-1906) brought his mastery of mathematics to the kinetic theory of gases and provided us with our first mechanical understanding of entropy. To Boltzmann, his work proved that entropy ALWAYS increases or remains constant. But to others, most notably Josef Loschmidt (1821-1895), his work contained a paradox that needed to be addressed. Loschmidt asked a provocative question about this paradox that motivated Boltzmann to transform his mathematics from mechanics to probability. The end result was a probabilistic entropy: entropy ALMOST ALWAYS increases. What was the question? Watch to find out.

For an excellent in-depth analysis of the development of the kinetic theory of gases and Boltzmann’s connection of entropy to the movement of the hypothesized atoms and molecules, I highly recommend Stephen Brush’s The Kinetic Theory of Gases: An Anthology of Classic Papers with Historical Commentary.

I delve into the mathematical details of Boltzmann’s work, and also the personal details of his battle to defend his work, in my book.