Riddle me this: why does a gas deviate from ideal behavior?

Years ago, during on-campus interview season at college, a friend of mine majoring in electrical engineering told of how difficult one of his interviews was.  “The interviewer asked me how an oscilloscope worked, and I carefully explained how to plug in the different wires and then how to adjust the knobs and so on.  He then said, ‘No, that’s not what I meant.  I meant, how does the internal circuitry of the oscilloscope work?’”  Yikes!

I wrote Block by Block – The Historical and Theoretical Foundations of Thermodynamics to explain how the “internal circuitry” of thermodynamics works, seeking to link the microscopic world of atoms to the macroscopic world of thermodynamic phenomena.  While I explained much of what I wanted to, I was left with some unanswered questions, and these motivated me to create this website and continue my journey.

This post is my first in an anticipated “Riddle me this” series to start sharing my questions with you along with a crowd-sourcing invitation to help me answer them.  If you know the answers to any of my questions, please respond, either via my social media accounts (LinkedIn, Facebook, Twitter) or in the “Leave a Reply” section at the bottom of this post.  As I’m seeking documented evidence supporting the answers, the more detailed and referenced your response, the better.  Thank you in advance.  Now let’s get started.

Riddle me this: why exactly does a gas deviate from ideal behavior?

I’m very interested in understanding exactly why a gas deviates from ideal behavior.  Consider the ideal gas law:

                PV = nRT

where P = pressure, V = volume, n = number of moles, R = gas constant, and T = absolute temperature

Rudolf Clausius showed how this equation could be derived from first principles based on the kinetic theory of gases.  It is exact and contains no adjustable parameters.  Deviation is typically observed by comparing the actual measured pressure against that predicted by the equation, re-arranged as this:

                P (ideal) = nRT / V

In my own research on the cause of the deviation, the reason most often cited is the presence of attractive forces.  But this is like my friend’s initial answer about how an oscilloscope works.  It doesn’t really explain anything. 

Some of my contacts suggested to me that the answer lies in Van der Waals’ (VDW) equation.

                  P (VDW) = [nRT/(V – nb)] – a(n²/V²)

                  where

a is a constant related to the attractive forces, and

b is a constant related to the volume of the atoms or molecules

Now I can study this equation, plug in numbers, and see indeed that for real gases the equation provides much greater accuracy than the ideal gas law.  I can also see how changes in both intermolecular attractive force (a) and atomic volume (b) from one gas system to another based on separate physical property data result in accurate predictions of corresponding changes in pressure.  I can sort of logically see how this equation works.  And yet, what I can’t see is an explanation of exactly what’s happening at the molecular level to cause the pressure to deviate from that predicted by the ideal gas law.  Why exactly does intermolecular attraction result in a change in pressure, all other things being equal?  The VDW equation doesn’t explain this.
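To make "plug in numbers" concrete, here is a minimal sketch comparing the two equations for one mole of CO2. The VDW constants a and b come from standard tables; the container size and temperature are illustrative choices of mine, not values from any particular experiment.

```python
# Compare the ideal gas law against the Van der Waals equation for
# 1 mole of CO2 in a 1-liter container at 300 K.
# VDW constants for CO2 from standard tables:
#   a ~ 3.640 L^2·bar/mol^2, b ~ 0.04267 L/mol

R = 0.083145  # gas constant in L·bar/(mol·K)

def p_ideal(n, V, T):
    """Ideal gas pressure in bar (V in liters)."""
    return n * R * T / V

def p_vdw(n, V, T, a, b):
    """Van der Waals pressure in bar (V in liters)."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

n, V, T = 1.0, 1.0, 300.0
a_co2, b_co2 = 3.640, 0.04267

print(f"P (ideal) = {p_ideal(n, V, T):.2f} bar")
print(f"P (VDW)   = {p_vdw(n, V, T, a_co2, b_co2):.2f} bar")
```

Running this shows the VDW pressure coming out a couple of bar below the ideal prediction, i.e., the attractive-force term a(n²/V²) dominates the excluded-volume correction at these conditions. The equation quantifies the deviation, but, as argued above, it does not explain it.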

As an aside, it’s interesting to read about the origin of the VDW equation.  As suggested by Kipnis et al. in their book Van der Waals and Molecular Science (Chapter 3), in the late 1800s Dutch scientist Johannes Diderik van der Waals first arrived at his equation, based on “elementary reasoning and some experimental information,” and then afterward rationalized its physical meaning.  In other words, the equation lacks a fundamental underpinning. While the empirical nature of the equation was recognized by scientists of the day, such as James Clerk Maxwell and Ludwig Boltzmann, who commented that Van der Waals found his equation “to some extent by inspiration,” this did not detract from their applause of Van der Waals’ work, for they made immediate use of it in their own research efforts.  Indeed, in 1905 Boltzmann and Josef Nabl described Van der Waals as “the Newton of the theory of the deviations of gases from ideality.”  In the end, the VDW equation played a critical role in the early days of thermodynamics.  And yet, its derivation did not tell us why the deviations occurred.

So where does this leave us?  This is the question.  I’ve not been able to find many who have sought links between non-ideal gas behavior and the physical phenomena that cause it.  I’ve read one explanation claiming the deviation occurs because gas molecules near the wall boundary experience greater “pull back” attractive forces from the rest of the system and so don’t strike the wall as hard, but this never made sense to me because it would also lower the temperature at the wall.  The deviation from the ideal gas law I’m referring to occurs when temperature, volume, and number of moles are held constant.

Does the formation of dimers result in the deviation?

Based on a suggestion from a colleague of mine, Tom Kinney, I found an explanation that made sense to me in David Oxtoby et al.’s Principles of Modern Chemistry (2012, 7th edition, p. 420).  Pressure can decrease from that predicted by the ideal gas law due to the temporary attraction of molecules to form dimers, “so reducing the rate of collisions with the walls.”  In their review (1976) of this topic, Blaney and Ewing defined these gas phase dimers, which became known as Van der Waals molecules, as “weakly bound complexes of small atoms or molecules held together, not by chemical bonds, but by intermolecular attractions.”  These dimers, whose existence has been validated by spectroscopy, form whenever two atoms or molecules have sufficiently low kinetic energies and sufficiently close proximity to capture each other in the very early stages of condensation.  Other researchers proposed the same cause-effect phenomenon as Oxtoby et al., stating that low temperatures cause monomers to form dimers, which effectively lowers the pressure of the system.

While the mathematical models involved in these studies, which primarily focus on the virial equation of state and specifically on the second virial coefficient, support the hypothesis that the occurrence of dimers influences the observed deviation from the ideal gas law, such support falls short of definitively demonstrating the cause-effect link.

With this background, I conclude this post with a comment and then some questions.  First, the comment.  It’s interesting to me that I never read about this dimer hypothesis in my thermodynamics textbooks even though I spent many hours working through problems involving equations of state, such as the VDW equation and its successor, the Redlich-Kwong equation.  I wish that a textbook had been available to explain such micro-to-macro connections.  Perhaps it’s time to write it.

Regarding my questions, these are meant for you, the reader.  Simply put, I don’t know what causes the deviation.  But while I don’t know, I’ve still thought about it.  It seems to me that if the dimer hypothesis is true, then modeling efforts should focus on the “n” variable in the ideal gas law, which quantifies the number of moles in the system, since the formation of dimers directly reduces n and so reduces the predicted pressure.  Have you read of anyone considering this or otherwise studying this?  Conversely, have you read of anyone who has tried to refute the hypothesis by quantifying deviation in the absence of dimers?  I have not found such published research.  Finally, I’m working with one hypothesis here.  If you know of another, please let me know.  I welcome your thoughts on this riddle.
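To illustrate the "n" argument above with arithmetic (my own back-of-the-envelope sketch, not a published model): every two monomers that pair into a dimer remove one particle from the count, so if a fraction x of the original molecules are transiently bound in dimers, the effective mole count and hence the predicted pressure both shrink by x/2. The 2% dimer fraction below is a purely hypothetical number.

```python
# Dimer-hypothesis bookkeeping: two monomers -> one dimer, so a fraction
# x of molecules bound in dimers gives an effective mole count of
#   n_eff = n * (1 - x/2)
# Since P = nRT/V at fixed T and V, pressure drops by the same factor.

def effective_moles(n, dimer_fraction):
    """Moles of independent particles after a fraction pair into dimers."""
    return n * (1.0 - dimer_fraction / 2.0)

n = 1.0    # moles if no dimers formed
x = 0.02   # hypothetical: 2% of molecules transiently bound in dimers

n_eff = effective_moles(n, x)
print(f"n_eff = {n_eff:.3f} mol, pressure reduced by {100 * (1 - n_eff / n):.1f}%")
```

So a 2% dimer population would depress the pressure by 1% relative to the ideal prediction, a magnitude that could in principle be checked against measured second virial coefficients.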

Why Do Balloons Float?

An experimental balloon takes its inaugural flight in August 2020. This particular balloon can change altitude by shortening or lengthening a cord attached to the top and bottom of the balloon. Shortening the cord compresses the balloon, which makes it descend, while lengthening the cord expands the balloon, allowing it to ascend. Photo courtesy of Thin Red Line Aerospace, used with permission.

* * *

For this post I invited back fellow thermodynamics enthusiast Mike Pauken, principal engineer at NASA’s Jet Propulsion Laboratory and author of Thermodynamics for Dummies, to continue this series related to his work on developing balloons for Venus. His first post covered the developmental history of balloons. This second post dives into the fundamental reasons why balloons float to begin with. For his third and final post, to be published early next year, Mike will provide a general discussion of balloon flight on other planets. Please extend a warm return welcome to Mike! – Bob Hanlon

* * *

Why do Balloons Float? It almost seems like a question a child would ask. In fact, many children have asked this question. The short answer, which may not quite satisfy a child, is: balloons float because there is a force pushing them upwards. In 1687 Isaac Newton published his laws of motion in Principia Mathematica, which showed us that things like balloons move in response to forces applied to them. For a balloon to move at all, some force must act on it. A balloon ascends or descends in response to two basic forces: the lift force, FL, and the mass gravitational force on the balloon, FB. You can imagine that a balloon also responds to other forces, such as those from wind, but we shall ignore these complexities for now.

The lift force on a balloon is created when the gas inside the balloon has a lower density than the gas outside the balloon. But density alone isn’t sufficient to make a lift force for a balloon floating in the air. Gravity is needed as well. Gravity has a way of sorting things out with lighter gases working their way above heavier gases. This was known to the Montgolfier brothers and Jacques Charles in 1783 as I described in my previous post in which I showed the lift force of a balloon can be calculated from gas density, gravity, and volume using this equation:

FL = V·g·(ρa – ρb)

Where FL is the lift force, V is the balloon gas volume, g is the gravitational acceleration, ρa is the density of the atmospheric gas, and ρb is the density of the balloon gas.

The other force acting on a balloon is the mass gravitational force, FB, which is just a fancy way of saying the weight of the balloon plus its payload. We calculate the balloon mass gravitational force, FB, using the following equation:

FB = g·(mb + mp)

where g is the gravitational acceleration, mb is the mass of the balloon envelope, and mp is the mass of the payload the balloon carries.

How a balloon responds to both the lift force, FL, and the mass gravitational force, FB, depends on which force is greater. In the lifetime of a balloon, it rises if the lift force exceeds the mass gravitational force, stops rising when they are equal, and descends when the lift force is smaller than the mass gravitational force. Mathematically these are written as:

FL > FB – rising balloon

FL = FB – stable balloon altitude

FL < FB – descending balloon.
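The force balance above can be sketched numerically. The densities, volume, and masses below are illustrative assumptions for a small helium balloon at sea level, not values from an actual flight.

```python
# Two-force balance on a balloon: lift FL = V*g*(rho_a - rho_b)
# versus mass gravitational force FB = g*(m_b + m_p).
# All numbers are illustrative assumptions.

g = 9.81          # gravitational acceleration, m/s^2
rho_air = 1.20    # atmospheric air density, kg/m^3
rho_he = 0.166    # helium density, kg/m^3
V = 10.0          # balloon gas volume, m^3
m_envelope = 2.0  # balloon envelope mass, kg
m_payload = 5.0   # payload mass, kg

F_L = V * g * (rho_air - rho_he)    # lift force, N
F_B = g * (m_envelope + m_payload)  # mass gravitational force, N

if F_L > F_B:
    state = "rising"
elif F_L < F_B:
    state = "descending"
else:
    state = "stable altitude"

print(f"F_L = {F_L:.1f} N, F_B = {F_B:.1f} N -> {state}")
```

With these numbers the lift force comfortably exceeds the weight, so the balloon rises; add enough payload mass and the comparison flips.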

Briefly, this is why and how balloons float, but there is so much more to the story, so read on.

As you can see, both gravity and density show up in the lift force equation:

FL = V·g·(ρa – ρb)

Density is one of the quantities that can be derived from the famous Ideal Gas Law, which took centuries of diligent effort by brilliant people to figure out, as briefly described in my previous post.  The Ideal Gas Law shows us that there are a couple of ways in which one gas can have a lower density than another, and it explains how both helium and hydrogen balloons and hot air balloons work.

Let’s take a look at the Ideal Gas Law: P·V = n·R·T, to understand what properties of a gas affect its density. In the ideal gas law, the variables in the equation are defined as:

P = absolute pressure

V = volume

n = number of moles of gas

R = universal gas constant

T = absolute temperature

The number of moles, n, is defined by the mass of the gas, m, divided by its molar mass, M: n = m/M

Then, making a substitution for n: P·V = (m/M)·R·T

Density, ρ, is defined by mass per unit volume: ρ = m/V

Finally, we can make substitutions for m and V and rearrange the equation to solve for density: ρ = P·M/(R·T)

What this equation tells us is that the density of a gas inside a balloon depends on three different properties: pressure, molar mass, and absolute temperature, connected together through the universal gas constant.  Let’s go over the contents of the Ideal Gas Law to get a better understanding of how it applies to our atmosphere and flying balloons.
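As a quick sanity check of the rearranged equation, here is a short sketch computing the density of air at sea-level conditions; it should land near the familiar value of about 1.2 kg/m³.

```python
# Gas density from the rearranged Ideal Gas Law: rho = P*M/(R*T).

R = 8.314  # universal gas constant, J/(mol·K)

def gas_density(P, M, T):
    """P in pascals, M in kg/mol, T in kelvin -> density in kg/m^3."""
    return P * M / (R * T)

# Air (molar mass 28.97 g/mol) at 101.3 kPa and 20 C (293.15 K):
rho_air = gas_density(101_325, 0.02897, 293.15)
print(f"air density at sea level, 20 C: {rho_air:.3f} kg/m^3")
```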

The Universal Gas Constant

The universal gas constant of the Ideal Gas Law, as its name implies, applies universally to all gases. It allows us to take the proportional relationship between density and the other properties (specifically pressure, temperature, and molar mass) and turn it into an equation that quantifies density in absolute terms. Thus, the gas constant does not affect density as pressure, temperature, or molar mass change. It is simply an important anchor! So I’ll leave it there for now and move on to the other variables.

A Primer on Molar Mass

The molar mass is, in effect, a count of the protons and neutrons in a molecule, expressed in grams per mole. Density is directly proportional to molar mass: the higher the molar mass of a gas molecule, the higher its density will be. A hydrogen atom has only 1 proton, so H2 has two protons and no neutrons for a molar mass of 2 grams/mole. Helium has two protons and two neutrons; its molar mass is 4 grams/mole. Air is a mixture of mostly nitrogen (~79%) and oxygen (~21%). We will ignore the contributions of the other gases in air, like water vapor, carbon dioxide, and methane, when estimating the molar mass of air. Nitrogen has 7 protons and 7 neutrons, so combining two atoms to make N2 gives a molar mass of 28 grams/mole. Oxygen atoms have 8 protons and 8 neutrons, so the oxygen molecule O2 has a molar mass of 32 grams/mole. If we apply the mixture ratio of N2 and O2 in air to the individual molecular molar masses, we end up with an apparent molar mass of 28.97 grams/mole for air.

The Ideal Gas Law shows that the density ratio of H2 to air (holding pressure and temperature constant) is only 2/28.97 = 7%, and that of helium to air is 4/28.97 = 14%. We can use the densities of hydrogen and helium to determine how much lift they provide in a balloon using the balloon lift force equation. Leaving the math to the reader as a homework exercise, the lift force of H2 is 10.9 N/m3 while that of helium is 10.1 N/m3, assuming a pressure of 100 kilopascals and a temperature of 20°C (293 K). So even though hydrogen has half the density of helium, it can only lift about 8% more than helium.
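For readers who want to check the homework exercise, here is one way to sketch it; depending on the constants used for g and the molar masses, the results land within about 1% of the figures quoted above.

```python
# Lift per cubic meter of balloon volume: F_L / V = g * (rho_a - rho_b),
# with densities from rho = P*M/(R*T) at 100 kPa and 293.15 K.

R, g = 8.314, 9.81          # J/(mol·K), m/s^2
P, T = 100_000.0, 293.15    # pascals, kelvin

def density(M):
    """Density in kg/m^3 for molar mass M in kg/mol."""
    return P * M / (R * T)

rho_air = density(0.02897)   # air
rho_h2 = density(0.002016)   # hydrogen
rho_he = density(0.004003)   # helium

lift_h2 = g * (rho_air - rho_h2)  # N per m^3 of balloon volume
lift_he = g * (rho_air - rho_he)

print(f"H2 lift: {lift_h2:.1f} N/m^3, He lift: {lift_he:.1f} N/m^3")
print(f"H2 lifts about {100 * (lift_h2 / lift_he - 1):.0f}% more than He")
```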

Talk about Pressure!

Pressure and density go hand-in-hand. If pressure increases, so does density. If pressure decreases, density follows suit. Here I will discuss the effects of pressure on hot air balloons and helium/hydrogen balloons.

Hot Air Balloons: The pressure inside hot air balloons is the same as the atmospheric pressure outside the balloon. You probably already know that the pressure of the atmosphere decreases with altitude. (I’ll discuss this in more detail below.) You can conclude then that the air density also decreases with altitude because of the decreasing pressure. As a balloon ascends, the pressure inside the balloon decreases to match the atmospheric pressure. For a hot air balloon, let’s assume the burner keeps the hot air at roughly a constant temperature. In reality, the temperature inside the balloon rises when the burner is on and cools when the burner is off, so the temperature actually oscillates around an average value. Furthermore, we can assume the balloon volume remains the same as it rises or descends: the balloon stays the same size as it goes up or down.

We can use the Ideal Gas Law, P·V = n·R·T, to show that if the pressure, P, is decreasing with altitude, and the volume, V, and the absolute temperature, T, are roughly constant, then the only variable left that can change is the number of moles of gas, n, inside the balloon. If the number of moles of gas inside the balloon decreases with altitude, then this means that some gas inside the balloon leaves through the opening at the bottom of the balloon as the altitude rises. Removing gas from the balloon means the density of the gas inside has decreased as it rises in the atmosphere. As a hot air balloon descends, air enters the opening, thus increasing the number of moles of gas, and thus the gas density, inside the balloon.
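A rough numerical sketch of this bookkeeping follows; the envelope volume is an illustrative assumption, and the 1-km pressure is taken from a standard-atmosphere approximation.

```python
# Moles of gas in a constant-volume, constant-temperature hot air balloon
# at two altitudes, via n = P*V/(R*T). As pressure drops with altitude,
# n must drop too, so gas spills out the bottom opening.

R = 8.314              # J/(mol·K)
V = 2800.0             # assumed envelope volume, m^3
T = 373.15             # assumed constant gas temperature (~100 C), K

def moles(P):
    """Moles of gas at pressure P (pascals)."""
    return P * V / (R * T)

n_ground = moles(101_325)  # sea-level pressure
n_1km = moles(89_900)      # approximate pressure at ~1 km altitude

print(f"at ground: {n_ground:.0f} mol, at 1 km: {n_1km:.0f} mol")
print(f"gas expelled: {n_ground - n_1km:.0f} mol")
```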

Helium Balloons: In a helium (or hydrogen) filled balloon, the balloon is closed at the bottom, sealing it off from the atmosphere. There are several different scenarios possible for how pressure affects helium balloons. If the balloon is made from a non-stretchable material, it is usually only partially inflated on the ground as shown in Figure 1. As the balloon rises, the balloon volume grows until it reaches its maximum volume and the balloon will stop rising any further as shown in Figure 2. This means that the gas density inside the balloon decreases as it rises up. The mass of helium or hydrogen inside the balloon doesn’t change, but the volume grows which decreases the density.

If the balloon is filled to its maximum volume on the ground, as Jacques Charles did with his first hydrogen balloon, the pressure inside the balloon will change as the air temperature around the balloon changes. But the density of the helium or hydrogen remains constant because the mass inside the balloon and the volume of the balloon are constant. The density of the gas inside the balloon cannot increase or decrease with altitude or temperature changes. If the balloon is anchored to the ground and the temperature rises during the day, the pressure inside the balloon will rise. If the balloon is set free and rises up in the atmosphere, its pressure will drop as the temperature decreases with altitude. The tricky part is that the pressure difference between the inside and outside of the balloon will grow as the altitude increases. Unless the balloon is very strong, it can burst, which is what happened to Jacques Charles’ first balloon. When the balloon pressure is more than the atmospheric pressure, the balloon is known as a super-pressure balloon. I will talk about these kinds of balloons in my next post on balloon flights on other planets.

Figure 1. A helium filled balloon launched from the ground, only a few hundred meters up, is not fully inflated, as shown in this photo. The balloon looks like a jellyfish with a long tail. Author’s personal photo.
Figure 2. Once the balloon reaches a high altitude, it fully expands into a spherical shape, reaching its maximum volume. You can see when the balloon is at a very high altitude, like this one at 32 km, (105,000 ft) the sky is black. Author’s personal photo.

If a balloon is made of a stretchy material like latex rubber, you can fill it with as much gas as you want as long as you don’t fill it up to the point where it will burst. The more gas you put in, the more it will lift up, but with one caveat. The more weight you try to lift up, the lower the altitude the balloon will be able to reach. As the balloon rises higher, the volume of the balloon will grow. If the balloon rises too high, it will reach a bursting point and then your balloon will fall down to the ground. The density of the gas inside this kind of balloon decreases as it rises in the atmosphere because the mass of gas inside is constant but the volume grows larger.

Temperature Effects on Density

In a hot air balloon, the gas density inside the balloon is less than the air density outside the balloon, even though the pressure inside equals the pressure outside, by virtue of it being hot compared to the atmosphere. For a typical nylon fabric hot air balloon, the average gas temperature is around 100°C. We can calculate the density of the hot air inside the balloon using the ideal gas law. At the conditions used above (100 kilopascals and 20°C ambient air), the lifting force of a hot air balloon is only about 2.5 N/m3 (compare to ~10-11 N/m3 for helium or hydrogen). This is why hot air balloons are so large: they can’t lift nearly as much as helium or hydrogen balloons on a volume basis. If you go back and review my last post, you’ll see that Charles’ hydrogen balloon was a lot smaller than the Montgolfier brothers’ hot air balloon. Fun fact: if we were to replace 100°C hot air with a mythical ideal gas with the same density (hence lifting capacity), it would have a molar mass of around 23 grams/mole.
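The hot-air figure can be checked the same way as the hydrogen and helium ones, using the same ambient conditions as the earlier comparison (100 kPa, 20°C ambient air, and 100°C air inside the balloon).

```python
# Lift per cubic meter for 100 C hot air in 20 C ambient air at 100 kPa.
# Inside and outside pressures are equal; only temperature differs.

R, g = 8.314, 9.81
P, M_air = 100_000.0, 0.02897  # pascals, kg/mol

def rho(T):
    """Air density at absolute temperature T (kelvin)."""
    return P * M_air / (R * T)

lift_hot_air = g * (rho(293.15) - rho(373.15))
print(f"hot-air lift: {lift_hot_air:.2f} N/m^3")
```

This reproduces the roughly 2.5 N/m³ quoted above, about a quarter of what hydrogen or helium provides per cubic meter.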

I would like to point out that the gas temperature inside a hot air balloon is not uniform. It is hotter at the top of the balloon than at the bottom, as shown in the picture below in Figure 3. We used the temperature around the middle of the balloon as the average to make our lift force calculation above. I want to take a moment here to explain why there is a temperature gradient inside a balloon, or even inside a building, as you may have noticed if you’ve ever climbed up near the ceiling.

If we heat a fixed quantity of air (n is constant) and keep the pressure, P, constant, the air volume, V, expands as predicted by our good buddy, the Ideal Gas Law: P·V = n·R·T. If volume increases, the density decreases, because recall that density is mass per unit volume: ρ = m/V.

Density is affected by both pressure and temperature. You can decrease density by reducing pressure or by increasing temperature. In a hot air balloon, both effects are taking place simultaneously. The pressure at the top of the balloon is less than at the bottom of the balloon, just as the atmospheric pressure is less at the top than at the bottom. The temperature at the top of the balloon is higher than the temperature at the bottom of the balloon. Density gradients are formed in a gravitational field in which density decreases as elevation increases.

Figure 3. The air inside the balloon is hotter at the top than at the bottom which demonstrates that hot air rises above cooler air!
The website of Thermal Image UK has some very interesting photos of hot air balloons.
Images used with permission.

The Density of the Atmosphere

At sea level, the Earth’s atmospheric pressure is 101.3 kilopascals (or 14.7 pounds per square inch). In Figure 4, I’ve made a graph of the air temperature, pressure, and density in the Earth’s atmosphere. As you rise higher in altitude, the pressure drops by about half every 5 kilometers. The absolute temperature decreases slightly with a rise in altitude. You can see in the graph that pressure decays more rapidly than absolute temperature. This means the density of the air decreases with altitude. The x-axis is a log scale since the temperature, pressure, and density have widely different magnitudes, making them hard to plot on the same scale.

Figure 4. Profiles of temperature, pressure and density of the Earth’s atmosphere up to an altitude of 50 km.

Since balloons float in the air as a result of a density difference between the balloon gas and the atmosphere, a balloon will reach a maximum altitude where the gravity force acting downward on the balloon equals the buoyancy force acting upward on the balloon. The maximum float altitude doesn’t necessarily occur at the altitude where the density inside the balloon equals the density outside the balloon. Because the balloon has to support its own mass plus the mass of anything hanging under it, the internal density will always be less than the external air density.

Digging Deeper: The Molecular Point of View

Some readers may stop here and go about their business. Maybe you’re one of them and already satisfied by this point. But if you wish to dive deeper into the topic and get a certificate of deep knowledge, continue on reading…

Thus far, my description of why balloons float focuses only on observations made by measuring properties using mechanical instruments such as thermometers, pressure gauges, graduated cylinders and balances. These observations of physics are merely that – just observations! They describe what we are able to measure mechanically. These kinds of observations do not explain why molecules, in the form of a gas, have the properties we call pressure, temperature and density connected together through the Ideal Gas Law. To understand how molecules can possess pressure, temperature and density, we must become like a molecule ourselves. This is something that the father of the famed physicist Dr. Richard Feynman emphasized to him – look at things from different perspectives to really understand something. (As an aside, Richard mentored his younger sister, Joan, to do the same. Joan became a well-known research scientist at JPL). Melville Feynman told young Richard: Just because you know the name of a bird, doesn’t mean you know anything about it. You need to look at a bird for a long time to know more about it.

Statistical Mechanics

In 1975, Dr. David Goodstein, a professor at Caltech, published a book called “States of Matter.”  He starts off Chapter One with the following alarming introduction: “Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics. Perhaps it will be wise to approach the subject cautiously.” With that warning, let me just say, we will only dip our pinky toe into the deep waters of statistical mechanics of a perfect gas.

Daniel Bernoulli published his classic work, written in Latin, “Hydrodynamica” in 1738, in which he suggested the pressure exerted by a gas on the wall of a container is due to the impact of the particles (now known as atoms or molecules) against the sides of the container.  Bernoulli used the illustration in Figure 5 to explain his theory of what happens inside a closed container full of air.  The closed cylinder has a moveable piston with a weight on top to maintain a constant pressure inside the container.  Bernoulli described his theory of pressure arising from the motion of molecules as follows: “let the cavity contain very minute corpuscles, which are driven hither and thither with a very rapid motion; so that these corpuscles, when they strike against the piston and sustain it by their repeated impacts, form an elastic fluid which will expand of itself if the weight is removed or diminished…”

Bernoulli was ahead of his time, and most scientists in those days rejected his idea, holding on to the hypothesis that gas molecules do not move about but instead repel one another from a distance.

Figure 5. Bernoulli’s Illustration of a gas exerting a pressure inside a cylinder with a moveable piston.

Over a Century Later

A number of scientists contributed to the development of the kinetic theory of gases over the next 120 years, including James Joule, who calculated the molecular velocity required to produce the observed pressures for a number of gases in his 1851 paper “Some Remarks on Heat and the Constitution of Elastic Fluids.” This demonstrated that scientists were finally coming around to accepting Bernoulli’s hypothesis on molecules in motion as the source of pressure in a gas.

In the late 1850s, James Clerk Maxwell started to work on Bernoulli’s hypothesis, accepting the notion that gas molecules move as perfectly elastic particles and obey Newton’s laws of motion, bouncing off each other and off surfaces with straight-line trajectories between collisions. Maxwell did not believe that gas molecules all moved at the same velocity, as James Joule’s analysis suggested, even if the container holding them is at a uniform temperature. Maxwell held that temperature is simply an indicator of the mean speed of molecules (more accurately, temperature indicates the mean of the square of the speeds of a gas molecule population). Furthermore, he realized that knowledge of the position and velocity of every molecule at every instant in time was not necessary to describe how molecules produce pressure and other properties such as viscosity and heat capacity.

What was needed was a mathematical representation of the distribution of molecular velocities, and the bell-shaped curve made the most sense. Some molecules will move slowly, others will move very fast, but most will be somewhere in the middle. Similar speed statistics are observed in marathons, or other races, where only a few, who practically sprint the whole time, cross the finish line first, then a big wave of “average” runners cross the finish, and finally the race tapers down to the folks who walked the distance to complete the course.

Maxwell’s work on this topic culminated in his 1860 paper “Illustrations of the Dynamical Theory of Gases.”  He continued working on the kinetic theory, and in his 1867 paper “On the Dynamical Theory of Gases” he concluded that molecules do not really collide with each other; rather, they repel one another with a force whose magnitude is inversely proportional to the fifth power of the distance between them.

The kinetic theories of gases were advanced even further in the 1870s through the genius of Ludwig Boltzmann. Boltzmann reworked Maxwell’s analysis to include various degrees of freedom that describe in more detail the way molecules move. Maxwell only looked at the energy from the linear velocity of molecules. Boltzmann included the energy associated with rotation and vibration of molecules.

The Speed of Gas Molecules

The speed of gas molecules allows us to define the properties we know as temperature and pressure.  From pressure, temperature, and molecular mass, we can define density.  One important result of Maxwell and Boltzmann’s work on the Kinetic Theory of Gases is an equation, shown below, describing the probability of gas molecules having a certain speed in terms of the molecular mass and the bulk gas temperature.  While this equation looks complicated and would take a blog post all by itself to discuss, I want to point out that there are three variables that are very important in the equation: velocity, v, absolute temperature, T, and molecular mass, m.  The other variables you see here are π, which takes care of the spherical geometry of velocity space, and kB, the Boltzmann constant.

                f(v) = 4π·(m/(2π·kB·T))^(3/2) · v² · e^(−m·v²/(2·kB·T))

In this equation, the first term (in parentheses) scales the equation in terms of molecular mass and absolute temperature. It acts like a constant for a given temperature and gas species. The second term, v², accounts for the motion of the molecules through a spherical geometry representing a system of molecules. This term dominates at low velocities. The last term, with the exponent, is a mathematical formulation showing that the number of molecules with a high speed gets smaller as the speed gets higher. This term dominates at high velocities. The Maxwell-Boltzmann speed distributions comparing air (at 0°C and 100°C), helium, and hydrogen are plotted in Figure 6. What you can see in this graph is that the speed ranges of small molecules like hydrogen and helium are higher than the speed range of air molecules. You will also see that as the temperature of the gas increases (comparing the 0°C air to the 100°C air), the speed range moves to the right; that is, the speeds get faster. The speed of the molecules has very important implications for gas pressure and consequently gas density. This factors into the change in pressure and density with altitude in our atmosphere.

Figure 6. The Maxwell-Boltzmann Speed Distribution for air, hot air, helium and hydrogen

Although the Maxwell-Boltzmann speed distribution is a significant improvement over James Joule’s single-speed interpretation of gas molecules, it turns out that there is a single speed in the distribution that is representative of all the gas molecules and is useful for computing properties like kinetic energy and pressure. This reaffirms the usefulness of Joule’s representation of molecular speeds as a single value. This speed is known as the Root-Mean-Square (RMS) speed. I marked the location of the RMS speed for each gas in Figure 6. The RMS speed is not the most probable speed, which would be at the top of the curve. Nor is it the average speed, which is slightly faster than the most probable speed. The RMS speed defines the average kinetic energy of the gas molecule population. The kinetic energy depends on the square of the speed, so higher-speed molecules have a disproportionately higher kinetic energy than molecules with below-average speed. The RMS speed is the square root of the average of the squared speeds.
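For readers who want numbers, here is a short sketch computing the three characteristic speeds of the Maxwell-Boltzmann distribution (most probable, mean, and RMS) from their standard closed-form expressions, shown for helium at 20°C.

```python
# Characteristic speeds of the Maxwell-Boltzmann distribution:
#   most probable: v_mp   = sqrt(2*kB*T/m)
#   mean:          v_mean = sqrt(8*kB*T/(pi*m))
#   RMS:           v_rms  = sqrt(3*kB*T/m)
# where m is the mass of one molecule.

import math

kB = 1.380649e-23   # Boltzmann constant, J/K
NA = 6.02214076e23  # Avogadro's number, 1/mol

def speeds(molar_mass, T):
    """Characteristic speeds (m/s) for molar mass in kg/mol, T in kelvin."""
    m = molar_mass / NA  # mass of one molecule, kg
    v_mp = math.sqrt(2 * kB * T / m)
    v_mean = math.sqrt(8 * kB * T / (math.pi * m))
    v_rms = math.sqrt(3 * kB * T / m)
    return v_mp, v_mean, v_rms

v_mp, v_mean, v_rms = speeds(0.004003, 293.15)  # helium at 20 C
print(f"He: v_mp = {v_mp:.0f}, v_mean = {v_mean:.0f}, v_rms = {v_rms:.0f} m/s")
```

Note the ordering that the prose describes: the most probable speed is the slowest of the three, the mean is slightly faster, and the RMS speed is faster still.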

Microscopic View of Temperature

Molecules move with a linear speed because they possess a property known as kinetic energy. Kinetic energy connects the molecular mass and the RMS speed of molecules to the property we physically measure as temperature. The average kinetic energy of a molecule depends only on its absolute temperature, T, and is calculated using the Boltzmann constant, kB, in this equation: K.E. = (3/2)·kB·T. All gas molecules at the same temperature have the same average kinetic energy, regardless of whether the molecules are helium, hydrogen or air. The kinetic energy of a gas molecule population determines the RMS speed of the molecules through the molecular mass in this equation: K.E. = (1/2)·m·v²rms. Therefore, we can define temperature in terms of the molecular mass and RMS speed of molecules with the following equation:

T = m·v²rms / (3·kB)

From this equation we can see that for a given temperature, gases with a lower molecular mass, m, will have higher RMS speeds, as shown in the Maxwell-Boltzmann speed distribution plots in Figure 6. When helium or hydrogen molecules are in the air, they speed around much faster than the nitrogen and oxygen molecules even though they are at the same temperature. I would like to point out that molecules not only move in a linear fashion, they also rotate and vibrate. But these additional kinds of molecular motion are not normally accounted for in the kinetic-energy definition of temperature.
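The mass dependence is easy to check numerically. A short sketch, using assumed molecular masses for air, helium, and hydrogen:

```python
import math

KB = 1.38065e-23  # Boltzmann constant, J/K
T = 273.15        # 0 C, in kelvin

# Assumed molecular masses in kg (average for air; He and H2 from standard molar masses)
masses = {"air": 4.80992e-26, "helium": 6.6465e-27, "hydrogen": 3.3474e-27}

for gas, m in masses.items():
    v_rms = math.sqrt(3.0 * KB * T / m)  # lighter molecules -> higher RMS speed
    print(f"{gas}: v_rms = {v_rms:.0f} m/s")
```

At 0°C this gives roughly 485 m/s for air, about 1300 m/s for helium, and about 1840 m/s for hydrogen, consistent with the curves in Figure 6.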

Microscopic View of Pressure

Daniel Bernoulli was correct in thinking that pressure is a result of molecules colliding with each other and with surfaces that constrain gases. Pressure depends on the number of molecules per unit volume and their RMS speed. This is clear from the Ideal Gas Law written to solve for pressure: P = (n/V)·R·T, where (n/V) is the number of moles per unit volume, and we now know that the absolute temperature, T, determines the RMS speed of the molecules. Pressure is defined as a force over an area. Forces are the result of a mass experiencing an acceleration. Acceleration is simply a change in velocity, which arises from either a change in speed or a change in direction. When a molecule collides with a surface, it rebounds from the surface and changes its velocity: in an elastic collision it keeps the same speed and just changes its direction. This change in direction is an acceleration. When a large number of molecules collide with surfaces, we can measure the force of these collisions and determine the property we call pressure.

The Ideal Gas Law can be rewritten to determine the pressure of a gas from the RMS speed of its molecules and the number of molecules contained in a unit volume. If we consider only the vertical velocity component of the gas, the pressure is calculated from the RMS vertical velocity with the following equation:

P = (N/V)·m·v²z

In this equation, N/V is the number of gas molecules per unit volume, m is the mass of a gas molecule, and vz is the RMS vertical velocity. You will note that I have switched from speed, which does not refer to any direction, to velocity, which indicates direction. I use the z-axis to define the vertical direction. Also, we do not have to use only the vertical velocity component to compute pressure; we can also formulate the problem to use all three axes of velocity. I'm using the vertical velocity so that I can demonstrate the effects of altitude later on.

The two properties in this equation that can decrease the pressure are the RMS speed of the molecules and the number of molecules per unit volume, N/V. Both the temperature (and hence the RMS speed of the molecules) and the number of gas particles per volume fall as altitude rises in our atmosphere, and both factors contribute to the reduction in atmospheric pressure. The vertical velocity component is calculated using the following equation:

vz = (kB·T/m)^0.5

You can look up the standard atmospheric pressure at sea level and at 0°C and find the pressure is 101,325 Pascals. We can compute this pressure using the number density of molecules and the vertical velocity component as follows:

kB is the Boltzmann constant: 1.38065E-23 J/K

T is absolute temperature: 273.15K = 0°C

m is the mass of an air molecule: 4.80992E-26 kg

vz = 280.01 m/s

If you compare this value to the molecular RMS speed in Figure 6 for air at 0°C, you will see that the RMS speed is about 485 m/s, which is a lot faster than what we just computed here. This is because we are only considering the vertical velocity component. To compute the molecular speed, you'll need to account for the velocity components in the x and y directions too. If the speed distribution is independent of direction, then vx = vy = vz = 280 m/s, and adding all the velocity components correctly gives the RMS speed as:

vRMS = (vx² + vy² + vz²)^0.5 = (3·280²)^0.5 ≈ 485 m/s.

Getting back to computing the standard sea-level pressure of the atmosphere: for air at standard temperature and pressure, the molar density is 44.615 mol/m³. Multiplying this by Avogadro's number, 6.02214E+23, gives a molecular density of 2.6868E+25 molecules/m³. We confirm that the atmospheric pressure, based on the RMS vertical velocity component of the Maxwell-Boltzmann distribution function, is:

p = (2.6868E+25 molecules/m³)·(4.80992E-26 kg)·(280.01 m/s)² = 101,325 Pa.
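The sea-level numbers above can be reproduced in a few lines, using the constants quoted in the text:

```python
import math

KB = 1.38065e-23  # Boltzmann constant, J/K
T = 273.15        # 0 C, in kelvin
m = 4.80992e-26   # mass of an air molecule, kg
n = 2.6868e25     # molecular number density at STP, molecules/m^3

v_z = math.sqrt(KB * T / m)    # RMS vertical velocity component, ~280 m/s
v_rms = math.sqrt(3.0) * v_z   # full RMS speed, ~485 m/s
p = n * m * v_z**2             # pressure from vertical momentum transfer

print(f"v_z = {v_z:.2f} m/s, v_rms = {v_rms:.0f} m/s, p = {p:.0f} Pa")
```

The computed pressure lands within a few pascals of the standard value of 101,325 Pa, with the small residual due only to rounding in the constants.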

Pressure and Altitude

We observe that the pressure of the atmosphere decreases with altitude. Why is this the case? If you think about gas molecules moving around, they are all affected by the force of gravity. The vertical component of their velocity is altered by gravity, which slows them down as they move upwards and speeds them up as they travel downwards. Molecules moving vertically continually exchange kinetic energy for potential energy. As they move up, kinetic energy decreases, velocity decreases, and thus so does temperature. Moving down has the opposite effect.

We can demonstrate that if we allow the gas molecules to move upward by 1 m, we can compute the pressure at this altitude and compare it to the hydrostatic formula for pressure. Let’s assume a population of air molecules travel 1 m vertically. The time, t, it takes to travel 1 m is: t = (1m)/(280 m/s) = 0.00357 sec

The velocity of the molecules after traveling upwards for 1 m is: v2 = v1 – g·t, where the acceleration of gravity, g, is 9.81 m/s².

v2 = 280.01 m/s – (9.81 m/s²)·(0.00357 sec) = 279.97 m/s

This is a very small change in velocity, but it has a noticeable effect on pressure! At 1-m altitude, the pressure will be

p = (2.6868E+25 molecules/m³)·(4.80992E-26 kg)·(279.97 m/s)² = 101,312.3 Pa.

The pressure decreased by 12.7 Pa over a 1-m altitude change from sea level. We can compare this result to what we obtain using the hydrostatic formula:

Δp = ρ·g·h

For a 1-m height change, the change in static pressure is:

Δp = (1.2923 kg/m³)·(9.81 m/s²)·(1 m) = 12.7 Pa

which agrees with the pressure change we computed using the decrease in molecular vertical velocity. I should point out that only the temperature (kinetic energy) was allowed to change here; the molecular density remained constant. A more accurate formulation would account for both properties decreasing simultaneously.
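The hydrostatic side of the comparison is a one-liner; here is a sketch using the number density and molecular mass given earlier in the post:

```python
n = 2.6868e25    # molecular number density, molecules/m^3
m = 4.80992e-26  # mass of an air molecule, kg
g = 9.81         # gravitational acceleration, m/s^2
h = 1.0          # height change, m

rho = n * m       # mass density of air, ~1.29 kg/m^3
dp = rho * g * h  # hydrostatic pressure drop over 1 m
print(f"rho = {rho:.4f} kg/m^3, dp = {dp:.1f} Pa")
```

This reproduces the roughly 12.7 Pa drop per meter of altitude quoted above.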

Why do Light Gases Rise in the Atmosphere?

Imagine yourself as a helium molecule in the atmosphere, illustrated in Figure 7 as the small red circle. Below you is a crowd of nitrogen and oxygen molecules; above you is a slightly smaller crowd. If you happen to be heading downward, eventually you'll collide with a nitrogen or oxygen molecule and may find yourself moving upwards. It's getting less crowded now, but you'll still bump into other molecules that send you back down. Overall, you are working your way up: maybe three bumps up for every two bumps down. There are fewer and fewer nitrogen and oxygen molecules to push you downward as you rise in the atmosphere. Collisions between molecules of widely differing masses in a gravity field sort themselves out with lower-mass molecules ending up above higher-mass molecules.

Figure 7. An imaginary group of molecules in the atmosphere moving about. A helium molecule is in the mix as a small red circle. Can you find it? The blue line is an imaginary plane that we use for balancing momentum and kinetic energy of the molecules.

Different gas species at the same temperature have the same average kinetic energy, but not the same RMS molecular speed and not the same momentum. When molecules collide with one another they conserve both kinetic energy ((1/2)·m·v²) and momentum (m·v). Kinetic energy and momentum are frequently confused with each other. Momentum is proportional to velocity, while kinetic energy is proportional to the square of velocity. The mass of an “air” molecule (air being approximately 79% N2 and 21% O2) is 7.2 times that of a helium molecule. If helium and air are at the same temperature, the RMS speed of the helium is about 7.2^0.5 = 2.7 times faster than that of the air molecules. Despite the higher speed of the helium molecules, air molecules carry about 2.7 times more momentum. In collisions between air and helium molecules, air molecules will impart large changes to the speed of helium molecules.
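The speed and momentum ratios quoted above follow directly from the mass ratio; a quick check, taking the 7.2 mass ratio from the text:

```python
import math

mass_ratio = 7.2                            # m_air / m_helium, from the text
speed_ratio = math.sqrt(mass_ratio)         # helium is this many times faster at equal temperature
momentum_ratio = mass_ratio / speed_ratio   # air carries this much more momentum per molecule

print(f"speed ratio ~ {speed_ratio:.1f}, momentum ratio ~ {momentum_ratio:.1f}")
```

Both ratios come out to about 2.7; it is a neat coincidence of the square-root relationship that they are equal.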

In Figure 7, I’ve drawn an imaginary line that the molecules are crossing. In general terms the number of molecules per volume (molecular density) is higher below the line. Many of these molecules are moving upwards. The upward momentum of these molecules is a product of the mass of molecules moving up times their velocity. Above the line, the molecular density is lower and many of them are moving downwards. The downward momentum is also a product of the mass of molecules moving down times their velocity. The velocity of molecules moving up is a bit slower than the average molecular speed because of the downward pull of gravity. Likewise, the velocity of the molecules moving down is a bit faster than the average molecular speed. At the line I’ve drawn, the downward momentum of the molecules above the line equals the upward momentum of the molecules below the line. Mathematically this is expressed as:

mdown·vdown = mup·vup

Since gravity makes the average upward velocity, vup, less than the average downward velocity, vdown, the mass of molecules moving up, mup, must be greater than the mass of molecules moving down, mdown. This creates the decreasing density gradient with altitude that we observe in the atmosphere.

If we introduce a population of helium molecules as red circles, as shown in Figure 8, there are two situations we must consider. We assume the helium molecules are at the same temperature as the air molecules so they have the same kinetic energy but not the same momentum. First consider the helium molecules below the top line. If the molecular density of molecules is the same on both sides of the line, the air molecules will push the helium molecules downward because the air has more momentum.

Now look at the helium molecules above the lower line. The air molecules will push the helium upward because the air has more upward momentum than the helium has downward momentum. So where is the helium going if the top is being pushed downward while the bottom is pushed upward? Across the interval between the two lines, the pressure of the atmosphere decreases in the upward direction. The decrease in atmospheric pressure means the downward momentum of the air at the upper line is less than the upward momentum of the lower line. The lines need not be separated by a great distance. Even at the molecular level, there is a pressure gradient and a momentum gradient.

Figure 8. An illustration of helium molecules, red circles, in the atmosphere among nitrogen and oxygen molecules, black circles.

In this thought experiment, we’ve shown how the helium molecules will rise in the atmosphere because of the difference in molecular momentum between air and helium molecules.

But what about hot air? Does it work the same way as a low-molar-mass gas? Recall that temperature is proportional to the square of molecular velocity. At constant pressure, volume is proportional to temperature, so volume is also proportional to velocity squared. Did you follow that argument? Hopefully it didn't come across as circular reasoning. Momentum, however, is proportional to plain old velocity. So as air is heated, its volume increases at a much faster rate than its momentum: the larger volume means a particle density that falls off as velocity squared, while each molecule's momentum grows only in proportion to velocity. The end result is that hot air has less momentum per unit volume than cold air, even though it has more kinetic energy per molecule. Less momentum per unit volume is the same effect we observed for low-molar-mass molecules like helium and hydrogen. Thus hot air rises because the momentum balance pushes it upwards.
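The hot-air argument can be made numeric. A sketch comparing the momentum per unit volume of air at 0°C and 100°C at the same pressure, using the proportionalities described above:

```python
import math

T_cold, T_hot = 273.15, 373.15  # 0 C and 100 C, in kelvin

# At constant pressure, particle density scales as 1/T (ideal gas law),
# while characteristic molecular speed scales as sqrt(T).
density_ratio = T_cold / T_hot           # hot-to-cold particle density
speed_ratio = math.sqrt(T_hot / T_cold)  # hot-to-cold molecular speed

# Momentum per unit volume ~ density * speed
momentum_density_ratio = density_ratio * speed_ratio
print(f"hot air carries {momentum_density_ratio:.2f}x the momentum per volume of cold air")
```

The ratio comes out below 1 (about 0.86 for this temperature pair), so heated air indeed carries less momentum per unit volume than the surrounding cold air, which is why the momentum balance pushes it upward.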

If we collect the hot air or helium molecules inside a bag, we are merely replacing the imaginary lines we've drawn in Figure 8 with a physical barrier. Let's call this barrier a balloon for the sake of convenience. The air on the bottom of the balloon exerts an upward pressure force while the air on the top of the balloon exerts a downward pressure force. The upward force will out-muscle the downward force until the momentum of the captured helium molecules balances the momentum of the atmosphere's molecules. This happens when the density of the gas inside the balloon is roughly equal to the density of the atmosphere outside the balloon. I say roughly equal because, of course, we have to take into account the mass of the balloon itself holding the helium. Now, dear children, this is the end of our story on why balloons float.

If you read this post to the end, send me an email if you want to get your certificate of deep knowledge on why balloons float.

How to conduct powerful science? Check your ego at the door.

When I was a graduate student, I was asked to review a paper submitted to a technical journal.  The authors proposed a theory I didn’t agree with, and so I read the paper with anything but an open mind, selectively ignoring the good data they presented while seeking other data to support my way of thinking.  In my review, I stated that I didn’t agree with the theory and presented what I thought were valid arguments as to why it was wrong.  In their rebuttal, the authors pointed out a critical error in my argument.  I quickly realized, to my shame, that their point was absolutely correct. 

There’s a reason why I’m sharing this embarrassing story.  It shows what can happen when you become attached to your own ideas, and in the world of science, how damaging the results can be. 

Given this, what do we do?  How can we possibly generate an idea, or hypothesis, about why a specific aspect of nature behaves the way it does without becoming attached to it? Read on.

Science begins with observation

How do we discover nature’s truths?  Well, we could simply think deep thoughts.  But this alone won’t work.  Why not?  Because what deep thoughts would we think about in the absence of outside stimulus?  So what’s needed is observation.  We learn about nature by first observing it—sight, sound, smell, touch, taste. This is how Aristotle and the Ancient Greeks brought structure to science.  With curious inquiry, logic-based reasoning, and pure thought, they observed nature and proposed causes for what they saw, making significant contributions to many different areas in so doing.

There was one problem with this approach, however.  Measurement, or more accurately, lack thereof.  The Ancient Greeks came up with theories that they didn’t test.  It took many years for some to question the Ancients’ theories with experiments, Galileo arguably being the first.

Sitting at the intersection of academia and practical engineering, Galileo started observing, not passively but actively.  He got his hands dirty by building equipment and conducting experiments to better understand nature’s inner workings.  Swinging pendulums, firing cannons, rolling balls on inclined planes.  He then used mathematics to analyze the data he gathered, believing the universe to be “written in the language of mathematics.”  While Aristotle sought cause, Galileo ignored it, preferring measurement and analysis instead.  What Galileo discovered contradicted Aristotle and so helped launch the scientific revolution.

A former boss of mine once told me something that left a deep imprint on me.  He said, “one data point is worth one thousand opinions.”  Many may sit and passively think about why something is happening, but to those like Galileo who roll up their sleeves, go out into nature, and take a data point, I say, “Wonderful!”.  Because in that moment when the data point arrives, one thousand “opinions” evaporate, replaced by a fact. But what do you then do with the fact? Read on.

Induction follows observation

Once you gather (accurate) observations and data from your experiences and experiments, do you stop there?  Well, if you’re curious, you don’t.  You can’t.  Your curiosity won’t let you.  You’ll want to understand the cause behind the effects you’ve observed.  This step is called induction, and the cause that’s proposed is called a hypothesis.

But is this the end of the process?  Will other scientists listen to your induced hypothesis and say, “Boy, that sure sounds good to us!  Great job!”?  No.  This would be highly unlikely.  The more likely response would be something like, “Prove it!”  And this is entirely appropriate (although it could be stated more politely).  This is how good science flows, because how do you really know if your hypothesis is correct based on observation only?  Perhaps, for example, you’re witnessing correlation and not causation.  Perhaps you’re only seeing a part of the story and not the whole story.  The point is that you don’t really know.  This was the problem with the induced hypotheses of the Ancient Greeks; they were based on observation only.  They didn’t really know.

Deduction follows induction

How do you prove a hypothesis?  Well, the short answer is that you can’t.  You can’t prove a theory.  Many thought Newton’s Theory of Universal Gravitation was “proven” until Einstein arrived with his General Theory of Relativity.  And who really knows if this is the final chapter on gravity.

But while you can’t prove a hypothesis, you can still conduct experiments to test it.  The approach is to assume that the induced hypothesis is true, deduce a consequence or a prediction based on it, and then experimentally validate (or not) the prediction.  The more surprising and unexpected the prediction, the more powerful the test.  In other words, a prediction that doesn’t separate the new hypothesis from a former way of thinking doesn’t really demonstrate anything.  Ideally, the prediction should be of something that hasn’t yet been observed.

Francis Bacon and, later, Karl Popper took this deduction process to an even higher level.  Recognizing that no number of experiments can prove a hypothesis and that only one experiment is required to disprove a hypothesis, they proposed testing a hypothesis by deducing ways to eliminate it.  This approach, which took the required level of intellectual creativity also to a higher level, embraced the become-your-own-worst-critic mentality.  If your hypothesis can withstand your best shots, then perhaps it is correct.

One of my favorite historical “best practice” examples of how this induction-deduction process works occurred when James Clerk Maxwell challenged Rudolf Clausius’s kinetic theory of gases by first assuming the theory’s correctness, then applying a higher level of mathematics involving statistical distributions to the theory, and finally using the model to predict how gas viscosity varies with density.  Everyone knew that the greater the density, the greater the friction, and thus the higher the viscosity, right?  It made absolute sense, even to Maxwell, for when the model told him that density has no effect on viscosity, he set to work right away to build an experimental apparatus to disprove the “absurd” conclusion.  Except it didn’t end up that way.  He, together with his wife, built an excellent piece of equipment and found—well, and found that the mathematical model was absolutely correct.  This was a wonderful moment in the history of thermodynamics, for this single data point greatly increased confidence among the world’s scientists in both the kinetic theory of gases and the atomic theory of matter on which it was based.

The Scientific Method

Maxwell’s example demonstrated the power of what became known as the scientific method: observe nature, induce a hypothesis, deduce a consequence, experimentally support or refute the consequence.  While this description is admittedly brief, it captures the essence of the method.  The top two panels in the figure below (from my book Block by Block – The Historical and Theoretical Foundations of Thermodynamics) illustrate this sequential process.  Rudolf Clausius induced his hypothesis of the 1st Law of Thermodynamics from the data of, among others, Sadi Carnot and James Joule. J. Willard Gibbs then deduced 300 pages worth of consequences of Clausius’s induced hypothesis. Their combined work helped establish the new field of thermodynamics.

This approach sounds pretty good.  What’s wrong with it?

It’s hard to find the Achilles’ Heel in the scientific method as stated above, isn’t it?  But it’s there, implicit in its use of the singular “hypothesis.”  The way science evolved often rested on the proposal and validation of a single hypothesis.  And this approach chalked up some major successes.  But it also chalked up some major failures.  Which brings me back to the opening of this post.  When you propose a single hypothesis, you become attached to it, even in the face of mounting evidence that it’s wrong.  Worse yet, you run the risk of becoming guilty of what’s known as confirmation bias, the act of seeking only those data that support your hypothesis and ignoring those that don’t.  You embrace and defend your hypothesis because it’s yours.

Consider some incorrect hypotheses that enjoyed long lives: the Earth-centered universe, Aristotle’s theories on motion, the phlogiston theory of fire, the caloric theory of heat, the anti-atom theory of matter.  While those who held on tightly to such theories may not have come up with them, they believed in and even staked their reputations on them, resisting contrary evidence and unfortunately hindering progress.  They embraced their chosen hypotheses to the grave, which led to Max Planck’s famous quote, “Science advances one funeral at a time.”

Why am I bringing this up?  In the world of science, our job is to discover nature’s truths.  When we bring our egos into the mix, it sets us on a collision course with learning the truth.  The challenge we face is, how do we neutralize our instinct to love our own ideas?

How do you neutralize the ego?  Multiple hypotheses!

T.C. Chamberlain, a late-1800s geologist and educator, proposed a path out of this dilemma.  Acknowledging the attachment problem inherent to the single hypothesis, Chamberlain recommended proposing many hypotheses.  Observe nature, take measurements, and then propose as many hypotheses as you possibly can that are consistent with the data.  In this way, you shift the focus from a negative conflict between scientists, each embracing his or her own individual hypothesis, to a positive, exciting, and team-based conflict between ideas in which technical debate among those with differing perspectives is encouraged in order to learn and not to win.  The creativity needed to propose and then attempt to eliminate many hypotheses can be very liberating, releasing you from the instinct to protect your own single hypothesis and energizing you toward true discovery.

Combining Bacon, Popper, and Chamberlain into the strong inference method of scientific discovery

John R. Platt wove the approaches of Bacon, Popper, and Chamberlain into his strong inference method (Science 1964). As illustrated in the bottom panel of the below illustration, strong inference embraces the creation of multiple hypotheses followed by a strategically designed attack on each and every one until only one—the most likely hypothesis—is left standing.

Once seen, strong inference is hard to un-see.  It makes so much sense. And it opens your mind toward a critical assessment of how science is being carried out today.  Look around you at the many science-related problems that need solving.  How are we going about doing this?  Do you hear multiple hypotheses being proposed and then attacked until one remains?  This is what strong inference is all about.

Strong inference is a powerful place for science to live.  By becoming our own critics, leaving our egos on the sidelines, and embracing and attacking multiple hypotheses, we will arrive at an answer that has withstood the best attacks we could offer.  This answer then becomes that which guides us forward, toward whatever goal it is that we’re trying to achieve.  Why would we choose to conduct science any other way than this?  Why would we accept any other answer than the one that results from such a process?

I end this post with a quote by Louis Pasteur that embodies the essence of strong inference.

What I am here asking of you, and what you in turn will ask of those whom you will train, is the most difficult thing the inventor has to learn.  To believe that one has found an important scientific fact and to be consumed by desire to announce it, and yet to be constrained to combat this impulse for days, weeks, sometimes years, to endeavor to ruin one’s own experiments, and to announce one’s discovery only after one has laid to rest all the contrary hypotheses, yes, that is indeed an arduous task.  But when after all these efforts one finally achieves certainty, one feels one of the deepest joys it is given to the human soul to experience.  Louis Pasteur, Nov. 14, 1888, in a speech given at the inauguration of the Pasteur Institute in Paris.

Illustration below from Block by Block – The Historical and Theoretical Foundations of Thermodynamics

Illustration by Robert Hanlon and Carly Sanker

Many thanks to Jim Faler and Brian Stutts for their helpful contributions to this post.

Carrying the Dreams of the Montgolfier Brothers to Other Worlds

Balloons – Early Thermodynamics Machines

A team of JPL engineers tests whether a large balloon can measure earthquakes from the air. The team proposes to measure “Venus-quakes” from the upper atmosphere of Venus, using an armada of balloons. The author is on the left holding a fan to inflate the solar balloon. Image Credit: NASA/JPL-Caltech

For this post I invited fellow thermodynamics enthusiast Mike Pauken, principal engineer at NASA’s Jet Propulsion Laboratory and author of Thermodynamics for Dummies, to share with us his involvement with the design of balloons for Venus. He kindly accepted my offer. Please extend a warm welcome to Mike! – Bob Hanlon

Before getting to his post, allow me to give you some more background on Mike

Mike Pauken is a principal engineer at NASA’s Jet Propulsion Laboratory, operated by the California Institute of Technology. He was first introduced to the world of ballooning while teaching at Washington University in St. Louis, Missouri. Steve Fossett, a Wash U. alum, was attempting to be the first person to fly around the world solo in a balloon, and the university was serving as his mission control center for his flight attempts. Fossett ditched his balloon in Russia in 1997 because his cabin was freezing, and Mark Wrighton, the university chancellor, promised Fossett that he’d have his mechanical engineering faculty look at improving his cabin heater. Mike was one of the faculty tapped to fix this problem. A few years later, when Mike was working at JPL, the lab was looking to add experienced balloon engineers to a Mars balloon technology development project. Knowing Mike had helped Steve Fossett, they thought he was an expert balloon technologist. Mike protested that he knew nothing about balloons, but that didn’t matter: Mike was signed on to the Mars balloon team anyway. Now, twenty years later, Mike’s primary research area is developing planetary aerial vehicles. He is currently working on balloon concepts for flying in the upper Venus atmosphere. In addition to developing Venus balloons, Mike is working with a team to develop an instrument that would fly on a Venus balloon to detect infrasound waves generated by Venus seismic activity. We do not yet know how seismically active Venus is compared to Earth and Mars, but understanding the seismic activity levels of rocky planets is a key element in figuring out how terrestrial bodies form. More information about this research is available here.

Mike’s work in planetary balloon technology development and his expertise in thermodynamics have resulted in this mini-series of posts on balloons and their development in the context of thermodynamics, with a vision for the future of balloons to explore other planets in our solar system. This 3-part series starts with the historical context of balloons amid the rise of thermodynamics, then explores the fundamental physics behind the concepts of buoyancy and hydrostatic pressure to explain why balloons work, and concludes with a general discussion of balloon flight on other planets.

Series Title: Carrying the Dreams of the Montgolfier Brothers to Other Worlds

Part 1: Balloons – Early Thermodynamic Machines

Part 2: Why do balloons float?

Part 3: Like a Bird on Venus

Part 1: Balloons – Early Thermodynamic Machines

« Jacques, est-ce que tu te souviens quand nous étions enfants et que nous rêvions de voler comme des oiseaux et de voir le monde d’en haut ? » demanda Étienne. « Eh bien, j’ai une idée… » (“Jacques, do you remember when we were children and we dreamed of flying like birds and seeing the world from above?” asked Étienne. “Well, I have an idea…”)

“Do you notice that smoke from a fire rises up? How do clouds float so high? Imagine if we could capture the clouds or smoke from a fire and put it into a bag, may the bag not fly upwards?” Étienne (Stephen) and Jacques (John) Montgolfier may have had such a conversation in the summer of 1782 in Annonay, France.

By November 1782, Étienne fabricated a rectangular bag, about 40 cubic feet in volume, from fine silk. He burned paper under the open bottom of the bag to create “rarefied air” and soon the bag ascended rapidly to the ceiling. A short while later, after this initial success, the brothers repeated the experiment outdoors and the silk bag rose to about 70 feet before returning back to the ground as the gas cooled. Delighted with the success of these experiments, the Montgolfier brothers resolved to build a larger machine. This second prototype had a volume of about 650 cubic feet and after the fires underneath warmed the air inside the balloon, it broke loose from its mooring and ascended 600 feet in the air before returning to the ground.

Gaining confidence in their new invention, the brothers built a third machine, this time with a diameter of about 35 feet. They did not have a name for this new machine yet. On April 25, 1783, they lit the fires under this large envelope and again the ropes holding it down gave way and it rose more than 1000 feet before returning back to Earth about three quarters of a mile away. Having privately achieved these successes, Étienne and Jacques were ready for a public display of their new invention.

On Thursday, June 5, 1783, a crowd assembled to witness the new aerostatic experiment. The enormous linen bag could hold over 23,000 cubic feet of gas when filled. By now the brothers were able to calculate that the experiment could lift about 490 pounds. As straw and wood burned under the platform holding the bag above the fire, it soon filled out to a spherical shape. Eight men held it down. When the ropes were set free, the machine rose to about 6000 feet in 10 minutes and then landed about a mile and a half away, to the astonishment of the viewers. Thus began the race for ascending into the air and seeing the world as birds view it.

Soon after the news of the Montgolfier brothers’ achievement reached Paris, the scientific community there began thinking of ways to do the experiment themselves. The information from Annonay reported that the Montgolfier machine was filled with a gas that was half as heavy as common air.

Short break for some quick calculations

Let’s pause a moment here and do some quick calculations on the Montgolfier balloon using a volume of 23,000 cubic feet, 490 pounds of lift, and a density half that of common air. It’s always good to do a bit of fact checking, especially these days; we can’t just take anyone’s word for it. The lift force of a balloon is determined from this relatively simple equation:

F = V·g·(ρa – ρb)

Where F is the lift force, V is the balloon gas volume, g is the gravitational acceleration, ρa is the density of the atmospheric air, and ρb is the density of the balloon gas. In this system the balloon gas density is the hardest to measure, so let’s assume this is the greatest unknown and solve for it. I prefer working in SI units (the metric system), and we have a lift of 2180 newtons, a volume of 650 cubic meters, an air density of 1.2 kilograms per cubic meter, and a gravitational acceleration of 9.8 meters per second squared. The gas density inside the balloon is estimated by solving the following equation for ρb:

2180 N = 650 m³ · 9.8 m/s² · (1.2 – ρb) kg/m³

Where we find that ρb is approximately 0.85 kg/m³, which is about 70% as dense as air, not half as dense. As an exercise using the Gay-Lussac gas law, the reader may calculate that the bulk average gas temperature inside the balloon would have been about 136°C (278°F). If the lift capacity were unknown and the half-of-air density were taken as true, we would find a lift capacity on the order of 860 pounds, much higher than the Montgolfier brothers’ estimate, and an average gas temperature of about 313°C (595°F). Silk burns at about 206°C, so we know the balloon gas wasn’t this hot and its density could not have been nearly half that of ordinary air.
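These numbers are easy to check by machine. The short Python sketch below reproduces the calculation; the inputs (2180 N of lift, 650 m³ of volume, air at 1.2 kg/m³) come directly from the text, while the 20°C (293 K) ambient temperature is my assumption.

```python
# Fact-check of the Montgolfier balloon numbers reported in the text.
G = 9.8            # gravitational acceleration, m/s^2
V = 650.0          # balloon volume, m^3 (~23,000 ft^3)
RHO_AIR = 1.2      # density of ambient air, kg/m^3
T_AMBIENT = 293.0  # ambient temperature, K (assumed, ~20 C)

def balloon_gas_density(lift_newtons):
    """Solve F = V*g*(rho_a - rho_b) for the balloon gas density rho_b."""
    return RHO_AIR - lift_newtons / (V * G)

def gas_temperature(rho_balloon):
    """Constant-pressure ideal gas: T2 = T1 * rho1 / rho2."""
    return T_AMBIENT * RHO_AIR / rho_balloon

rho = balloon_gas_density(2180.0)  # 490 lb of lift = 2180 N
print(f"balloon gas density: {rho:.2f} kg/m^3")  # ~0.86, i.e. ~70% of air
print(f"implied temperature: {gas_temperature(rho) - 273:.0f} C")  # roughly 136-137 C

# If the gas really were half as dense as air (0.6 kg/m^3):
lift_half_lb = V * G * (RHO_AIR - 0.6) / 4.448   # newtons -> pounds-force
print(f"lift at half density: {lift_half_lb:.0f} lb")  # ~860 lb
print(f"temperature at half density: {gas_temperature(0.6) - 273:.0f} C")  # ~313 C
```

The half-density case immediately fails the sanity check against silk's ignition temperature, just as argued above.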

Continuing – The First Hydrogen Balloon

Getting back on track in 1783: the scientists in Paris imagined that a previously unknown gas had been discovered, one heavier than inflammable air (hydrogen) yet lighter than common air. The scientists concluded that inflammable air would be a better gas for the experiment than that used by the Montgolfier brothers. A subscription to fund the Paris experiment was quickly raised, and Jacques Charles (of subsequent Charles’s Law fame) was appointed to oversee the work. But how to build a large bag to hold inflammable gas? It leaked through paper and silk; there was no such thing as plastic or latex. It was decided to build the bag from lutestring silk, a material with a very fine thread and a plain weave, and then varnish it with a dissolved elastic gum to make it as impermeable as they could.

At this point the inflated bag resembled a giant ball and it was thus called a Balloon. The new balloon was about 13 feet in diameter and weighed only about 25 pounds including a valve at the bottom to seal in the inflammable gas. The second problem to overcome was generating the required volume of inflammable gas. Such a large quantity had never previously been produced. The first attempt to produce the inflammable gas consisted of using a chest of lead-lined drawers filled with iron filings and dilute vitriolic (sulfuric) acid. The chest of drawers had a pipe connecting it to the balloon. It turned out that this arrangement wasted more gas than went into the balloon.

A second apparatus was set up using a cask filled with dilute acid, and iron filings were poured into it through a bung hole. The gas was conducted to the balloon through a varnished leather tube. Dilute sulfuric acid reacting with iron produces only hydrogen and iron sulfate, but as the reaction generates a significant amount of heat, much water evaporates, and thus the Charles balloon was filled with a mixture of hydrogen and water vapor. The balloon and pipe were prone to overheating, and water was pumped over them to keep them cool. The water vapor thus condensed, and the water inside the balloon was intermittently drained.

The process to fill the 13-foot diameter hydrogen balloon took 3 days compared to the tens of minutes it took to fill the 35-foot diameter hot air balloon of the Montgolfier brothers. The gas-filled Charles balloon was ready for a public demonstration only 83 days after the first public viewing of the Montgolfier balloon; truly an astonishing accomplishment! During the night of August 27, 1783, the inflated balloon was moved 2 miles on a cart from the Place of Victories to the Camp of Mars in Paris. The balloon was topped off with hydrogen in front of a crowd of onlookers, giving them an idea of how the balloon was filled.

At 5 pm, the balloon was launched in a rainstorm and rose over 3000 feet in two minutes. The balloon was lost in dark clouds for about 45 minutes before it finally came down in a field about 15 miles away. It had ruptured at high altitude, which caused it to come down after such a short flight. The expansion of the gas as the balloon rose had not been accounted for prior to the flight, but upon examining the balloon afterwards, Jacques Charles realized this had caused the bursting and learned not to overfill the balloon on the ground. You can read about how well the balloon was received by the group of farmers who found it soon after it landed if you do a little investigating on your own.

The reader might be interested in seeing how the Montgolfier brothers and Jacques Charles inflated and launched their balloons. The sketch below is in Tiberius Cavallo’s book describing the method of launching hot air and hydrogen balloons in the 1780s. A description of the figure is available in the Public Domain from Digital Science History.

Image credit: Cavallo, Tiberius. “Plate II: Illustrating the Chemical Apparatus and Balloons Used for Hydrogen Generation.” In The History and Practice of Aerostation. London, England: Printed for the author and sold by C. Dilly, P. Elmsly, and J. Stockdale, 1785. https://digital.sciencehistory.org/works/n296x033r.

Lift Gases

For the first 140 years of balloon flight, hot air and hydrogen were the dominant lift gases in use. Hydrogen was known as inflammable air in the early days of ballooning. A mere 17 years prior to Charles making his first hydrogen balloon, in 1766, Henry Cavendish determined the density of inflammable air and found it was between 7 and 11 times lighter than air. (You can imagine how hard it would be to measure hydrogen’s density accurately in that day. I suspect his samples may have also contained some water vapor, as Jacques Charles found when he inflated his balloon. Hydrogen is actually about 14 times lighter than air.) The idea of making a vessel to contain hydrogen such that it would float in air arose soon after this discovery. Dr. Joseph Black, who had worked with hydrogen for years, showing how it burns and explodes, realized it could be used to create a vessel that could float in the air. Dr. Black thought that making such a vessel would be an amusing experiment for his students in 1767. But what would you use to make a vessel thin enough to contain the hydrogen? Dr. Black decided to make the vessel from the allantois of a calf, basically the fetal sac of a cow. He never did find time to perform the experiment, even though he went to the trouble of actually obtaining an allantois for it. The hydrogen container proved to be a hard problem to solve, but Jacques Charles was the first to succeed in a big way in 1783.

Helium, the most common balloon gas today, was not discovered until 85 years after the first hydrogen balloon was flown. It was first observed in spectral lines of the Sun during an eclipse in 1868, and Norman Lockyer named the newly discovered element “helium” after the Greek Titan of the Sun, Helios, adding the “ium” ending because he thought it was similar to the alkali metals, which all share that ending, e.g. sodium, potassium, etc. It wasn’t found on Earth until 1895, outgassing from uranium ore. Helium generation from uranium ore is not productive enough to be a useful source for filling balloons. Large deposits of helium were discovered in 1903 in natural gas reservoirs, which is where we obtain helium for today’s uses. The first use of helium in an airship was for the U.S.S. Shenandoah, a 2-million-cubic-foot rigid airship commissioned in 1923. Filling it consumed practically the entire U.S. government supply of helium at the time.

Balloons in the Thermodynamic Timeline: Relative to the Gas Laws

If we place ourselves in the year 1783, we do not have the same perspectives on the thermodynamics of balloon flight as we do today, or even 50 or 100 years ago. A floating balloon touches on buoyancy, density, pressure, temperature, the ideal gas law, and the kinetic theory of gases. Furthermore, a balloon also performs work, and in some cases converts heat directly to work. The development of the balloon occurred in the heyday of advancements in understanding the workings of machines through thermodynamics. Let’s take a brief look at the state of thermodynamics in the context of the invention of the balloon.

Tiberius Cavallo wrote “The History and Practice of AEROSTATION” in 1785, providing a contemporary account of the early days of balloon development, including a review of relevant knowledge from a thermodynamic point of view. Every detail I described above about the Montgolfier balloon and the Charles balloon came from this valuable resource, and I recommend the reader download this volume for their own benefit. It was known that the density and volume of air could be changed by “means of fire” or by removing pressure, as demonstrated by the invention of the air pump in the mid-1600s. Quantification of the relationship between pressure and volume was first determined from experiments performed by Robert Boyle and published in 1662. Cavallo writes: doubling the pressure decreases the volume of air by half; heat expands the air while cold contracts it; one degree of heat, according to the scale of Fahrenheit’s thermometer, seems to expand the air about one five hundredth part. Not too bad actually, since today we know it is one part in 460.
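Cavallo's figure is easy to compare against the modern one. This quick sketch assumes the expansion is reckoned per degree Fahrenheit relative to the volume at 0°F, with absolute zero lying at −459.67°F:

```python
# Fractional volume expansion of an ideal gas per degree Fahrenheit,
# relative to the volume at 0 F (ideal gas: V is proportional to absolute T).
ABS_ZERO_F = -459.67                         # absolute zero on the Fahrenheit scale
expansion_per_degF = 1.0 / (0 - ABS_ZERO_F)  # = 1/459.67, about 0.00218

cavallo = 1.0 / 500.0  # Cavallo's 1785 figure: "one five hundredth part"
print(f"modern value : 1 part in {1 / expansion_per_degF:.0f}")
print(f"Cavallo 1785 : 1 part in {1 / cavallo:.0f}")
# Cavallo's 1785 estimate is within roughly 8% of the modern value.
```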

Working with the hydrogen balloon, Jacques Charles observed the change in volume with temperature when the balloon was cooled with water. This led him, four years later (1787), to experiment with gas volumes and temperature changes. Details of these experiments are hard to find since Charles did not publish them. Charles wasn’t the first to make this observation, nor the last. Joseph Louis Gay-Lussac (also an avid balloonist) discussed with Charles his experiments measuring the volume of 5 different gases at constant pressure over a temperature range from 0 to 80°C. A bit critical of Charles’ apparatus, Gay-Lussac made improvements upon the experimental method, and he gave Jacques Charles credit for his work in his publication of the results. The details of the volume-temperature experiments are all thoroughly described (in French) in: “Sur la dilatation des gaz et des vapeurs,” Annal. Chim. 43 (1), 137–175, 1802. You can find a sketch of Gay-Lussac’s experimental apparatus here.

Having touched upon Boyle’s Law and Gay-Lussac’s Law, I should mention that in July 1811 Amedeo Avogadro published his hypothesis that samples of different gases at the same volume, pressure, and temperature contain the same number of molecules. This hypothesis was the last element needed to formulate the ideal gas law. Although all the pieces for assembling the ideal gas law were in place by 1811, it wasn’t formally put together until 1834, by Benoît Clapeyron. That so much time elapsed from the initial formulation of the various relationships between pressure, volume, temperature, and the number of molecules in a gas (from 1662 to 1811) to their formalization into the ideal gas law is due to the difficulty of seeing the big picture of how these separate laws relate to each other. Furthermore, getting new ideas accepted in the science community is not an easy task. Usually it takes several influential investigators conveying the message to make the case stick. Using the kinetic theory of gases, the ideal gas law was independently derived by August Krönig in 1856 and Rudolf Clausius in 1857. Thus it still took nearly 25 years, and multiple proofs, before the ideal gas law we use today without question was formally defined and accepted.

The connection between the kinetic theory of gases, first described by Daniel Bernoulli in 1738, and balloons is not intuitively apparent. But consider that a balloon is a volume containing a gas at a specific pressure, temperature and density; we want to understand why a balloon has buoyancy from a molecular or microscopic point of view. The kinetic theory of gases is the roadmap connecting the micro-scale phenomena to the macro-scale effects. We will look at why there is such a thing as hydrostatic pressure and buoyancy starting with the kinetic theory of gases in more detail in a second blog. Please come back next month to hear the rest of the story on understanding the cause of buoyancy and hydrostatic pressure from kinetic theory.

If you’re not up for doing a French to English translation, the first sentence of this blog reads:

“John, do you remember when we were kids and we dreamed of flying like birds and seeing the world from above?” asked Stephen. “Well, I have an idea …”

Electric Cars – Is “zero emissions” a valid claim?

I just read an article about an electric vehicle having zero CO2 emissions and thought it’d be an opportune moment to emphasize the value of thermodynamics in critically assessing such claims. Let’s walk through how this is done, starting first with a recap of the foundational mass & energy conservation laws.

The conservation laws for mass and energy define what is and is not possible

Man’s failed attempts at alchemy and perpetual motion revealed the underlying mass and energy conservation laws that prevented each from happening. Alchemy, the effort to magically transform one substance into another, never succeeded, but it led to the wonderful research performed by Antoine Lavoisier in his Parisian laboratory in the late 1700s. In a series of experiments involving chemical reactions of gaseous species such as oxygen, nitrogen, hydrogen, and water, Lavoisier monitored the weights of the reactants and products very meticulously and so quantified highly accurate reaction stoichiometries. It was in the perfection of his mass balance methodology that he concluded, “nothing is created either in the operations of the laboratory, or in those of nature, and one can affirm as an axiom that, in every operation, there is an equal quantity of matter before and after the operation.” In addition to discovering the conservation of mass, his results also helped found modern chemistry and, later, helped validate the atomic theory, since not only is mass conserved but so too are the numbers of atoms of each element involved (ignoring nuclear reactions).

As for perpetual motion, for hundreds of years, scientists sought in vain to create mechanically operated and gravity-driven devices and machines that could work forever. By around the mid-1700s, the frustration with repeated failures evolved into the realization that perpetual motion is impossible. A rudimentary form of the conservation of mechanical energy then emerged in which the sum of kinetic energies and potential energies (gravitational) remains constant for a system of moving and interacting bodies. It would take another 100 years for heat to be added to this summation based on the work of a small group of scientists. In 1850 Rudolf Clausius quantified these concepts with his famed equation, dU (change in energy) = Q (heat) – W (work). The equation became known as the 1st Law of Thermodynamics and solidified energy as the central property in the new field of thermodynamics.

The Mass & Energy Balance (M&EB)

Lavoisier’s and Clausius’s separate works eventually merged into a seemingly simple equation for any system with a defined boundary: Accumulation = IN – OUT (Figure 1). Since both mass and energy are conserved, this equation applies to each property individually and thus became the core of what we now call the Mass & Energy Balance (M&EB). Scientists use the M&EB as a necessary (but not sufficient) reality check on their work. If the equation fails, then scientists conclude that something must be wrong with their data and not the equation, prompting them to review their assumptions, their calculations, their equipment, and so on. The power of the M&EB revealed itself in 1925 when experimental results on beta-decay didn’t make sense from an energy-balance perspective, casting doubt on the conservation of energy itself. It was Wolfgang Pauli who, “desperate” for a solution to save the energy law, proposed in 1930 the existence of a new, difficult-to-detect particle to account for the missing energy. This particle, the anti-neutrino, was discovered in 1956 and not only provided yet more supporting evidence for the conservation of energy but also demonstrated the active use of this law as a reality check on experimental research.

While the M&EB can be applied to any system with a defined boundary, it’s usually applied to a continuous process. Since most continuous processes operate at steady state, with no accumulation, whatever you put IN to the process must eventually come OUT.

The M&EB is often used in the direction of either IN to OUT or OUT to IN. Here’s what I mean by this. In the former, you know what you put IN and so make sure that everything you measure OUT is accounted for. For example, a single reactant flows into a reactor and many products flow out, some of which might be hard to detect. You focus your attention on all of your measuring devices for flow and composition to make sure that all of the IN atoms are accounted for in the OUT. The same logic applies for energy. If you know the IN energy, then you know what the OUT must be, and if you’re not arriving at that answer, then something’s wrong. This was the logic used in the beta-decay discussion above. The power of this tool explains why it became one of the engineer’s guiding principles: in equals out at steady state.
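As a toy illustration of this reality check, here is a minimal sketch of the steady-state balance test; the stream values and the 1% tolerance are invented for the example.

```python
# A minimal steady-state mass-balance reality check: at steady state the
# accumulation term is zero, so total IN must equal total OUT to within
# measurement error. A failure points at the data, not at the law.
def balance_closes(in_streams, out_streams, rel_tol=0.01):
    """Return True if sum(IN) matches sum(OUT) within rel_tol (fractional)."""
    total_in, total_out = sum(in_streams), sum(out_streams)
    return abs(total_in - total_out) <= rel_tol * total_in

# Example: a reactor fed 100 kg/h, with three measured product streams.
print(balance_closes([100.0], [62.0, 30.0, 7.8]))  # closes within 1%
print(balance_closes([100.0], [62.0, 30.0, 2.0]))  # 6 kg/h unaccounted for
```

A failed check is the cue to go hunting for an unmeasured stream or a faulty instrument, exactly as in the beta-decay story above.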

The other approach for using the M&EB is when you know the OUT that you want. If you know you have to produce so much of a given OUT material, then you can work backwards to determine the IN required to do this. With regards to the automobile, for example, you know the OUT you want to achieve. It’s the energy required to move a person from one place to another against the resistant forces of wind and road friction. You know what OUT looks like and so can then work backwards to determine the IN to get there.

What is the appropriate boundary for the Mass & Energy Balance when evaluating CO2 reduction options?

While use of the M&EB seems pretty straightforward, there’s one complication involved. Notice the previously used words, “defined boundary.” Where exactly do you define the boundary? The answer is that it depends on what your objective is, which brings us back to the electric vehicle.

Consider the typical gasoline car and our desire to reduce transportation CO2 emissions. Where would you draw the M&EB boundary to analyze this situation? Well, you could draw it around the car itself (Figure 2) and then consider how much CO2 flows out of the tailpipe. Assuming steady-state operation, the carbon contained in the gasoline IN stream determines the carbon contained in the CO2 OUT stream. Simple enough.
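To make that carbon balance concrete, here is a small sketch that approximates gasoline as octane (C8H18), an assumption of mine rather than something from the text, and computes the CO2 OUT per kilogram of fuel IN:

```python
# Carbon balance across the gasoline-car boundary (Figure 2): at steady state,
# assuming complete combustion, every carbon atom entering as fuel leaves as
# CO2. Gasoline is approximated here as octane, C8H18.
M_C, M_H, M_O = 12.011, 1.008, 15.999   # atomic masses, g/mol
m_octane = 8 * M_C + 18 * M_H           # ~114.2 g/mol
m_co2 = M_C + 2 * M_O                   # ~44.0 g/mol

# C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O: 8 mol of CO2 per mol of fuel
kg_co2_per_kg_fuel = 8 * m_co2 / m_octane
print(f"{kg_co2_per_kg_fuel:.2f} kg CO2 per kg gasoline")  # ~3.1
```

The striking point is that the OUT stream outweighs the fuel IN stream by a factor of three, because most of the CO2 mass comes from atmospheric oxygen crossing the boundary.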

Now consider an electric car (Figure 3). What are the corresponding IN and OUT streams? Here you would have a continuous flow of electricity IN and zero CO2 emissions OUT. Whereas for the gasoline car, you have a certain amount of CO2 emitted, for the electric car, you have zero CO2 emitted. Is this a valid comparison on which to draw a conclusion about which option is more desirable? Again, the goal is to reduce CO2 emissions caused by driving. What is the real boundary to consider?

The answer is that the boundary should encompass not just the operation of the vehicle itself but also everything required to create both the vehicle and its energy source (Figure 4). Only in this way can you quantify which option results in the lowest total CO2. For example, are the energy source options used to power the automobile sitting in nature, ready to use? Clearly not. There’s no lake of gasoline in nature, nor is there a “lake” of electricity. When you plug a cord into the electric outlet, electricity isn’t just sitting there waiting to flow. Each of these resources must be manufactured or generated, and so the system boundary should be drawn all the way back to the resources that are sitting there in nature waiting to be used. The crude oil, coal, or raw nuclear fuels buried underground. The waterfall, the wind, the sunlight. Such a system boundary would then justifiably account for the construction and operation of the infrastructure required to extract the natural resources and refine them into usable forms.

And that’s not all. Don’t forget that there’s no “lake” of cars sitting around either. You need to account for the building and continuous re-charging of the cars, again starting with natural resources. You might think that a car’s a car and so there’d be no need to account for this when comparing options. But the material inputs used to manufacture a gasoline-powered car are quite different from those used to manufacture a battery-powered electric car. For each automobile option there will be many and often different natural-resource IN stream components. Also don’t forget that the options must be compared using the same rule book, meaning, for example, that the same regulatory criteria as regards safety and environmental impact must be applied to all activities comprising each option.

Life Cycle Analysis (LCA)

This example is very real and calls for an accurate M&EB analysis to determine which option is best for reducing total CO2 emissions associated with driving. This type of all-encompassing analysis is called a Life Cycle Analysis (LCA) as it covers every single cradle-to-grave step in the process, including the final step of handling the used, no-longer-working automobiles themselves, with either recycling or disposal. Each step of each option can be characterized by its own CO2 emission OUT stream, the sum of which becomes the metric by which to make an informed decision.

The discovery of mass, energy, and the conservation of each helped lay the foundation on which thermodynamics was built, including the Mass & Energy Balance and the Life Cycle Analysis. These methodologies offer the means by which informed decisions can be made by ensuring that all is accounted for and nothing ignored. I do not offer an opinion here about which car option is best as regards CO2 emissions but I do suggest that the final decision can only be made after a thorough LCA is completed.

Illustrations by Carly Sanker

Happy birthday, Henrietta Leavitt!

You’ve likely heard of the Big Bang theory and the name of Edwin Hubble associated with it. But a person you may not have heard of is Henrietta Leavitt. Leavitt played a critical role in enabling Hubble’s accomplishment. Seeing as today’s her birthday, let’s celebrate her, her achievement, and her impact on astronomy and cosmology.

Born on the 4th of July, 1868, in Lancaster, Massachusetts, to a church minister, Henrietta Swan Leavitt attended Oberlin College, transferred to Harvard University’s women’s college (later to become Radcliffe), studied a broad curriculum, and received her bachelor’s degree in 1892. In her final year, she signed up for a course on astronomy that took her life in a new direction. She began working for the Harvard College Observatory, then under the direction of Edward Charles Pickering, and after years of this work, interspersed with some travel and teaching elsewhere, she became a permanent member of the Observatory in 1902.

Leavitt’s focus at the Harvard Observatory was on photographic plates, specifically the measuring and cataloging of the brightness of stars. It was in this role that she saw something no one else had yet seen. While that moment was significant in its own right, its subsequent impact on the field of astronomy was tremendous. To fully understand why, one must first understand the scientific context in which Leavitt’s discovery was made.

In Leavitt’s time, scientists didn’t yet suspect that the stars were all moving away from Earth. They seemed to be just sitting there in the sky. Yes, they rotated across the sky throughout the night, but we have known since the time of Copernicus that this is because the Earth itself spins on its axis. And yes, some objects among the stars were moving, but these were explained as “wanderers,” that is, planets, along with some asteroids. But the huge mass of stars? They were really just sitting there. Who knew they were moving away from us?

This is how things stood until the early 1900s when scientists first started pointing spectroscopes towards the stars to see what they were made of, these devices providing a fascinating means of capturing electron-orbital fingerprints and so identifying elemental composition. But the curious astronomers eventually noticed that the resulting spectral lines didn’t line up exactly with those measured on Earth and thus concluded that the stars must be moving relative to Earth, causing a “Doppler” shift in the lines, with a shift towards the blue end of the spectrum signifying motion “toward” and a shift towards the red signifying motion “away.” Vesto Slipher, a leader in this field, took the first set of accurate line-shift measurements and noted that the majority of the objects he measured were red-shifted, thus moving away from Earth. But there was a large paradigm gap between these observations and the thought that the universe itself was expanding and so moving everything away from everything else. The missing piece of data needed to discover this expansion was distance, specifically the distance from Earth to the stars. Without it, attempts to better understand any structure behind the motion of the stars were impossible.

The primary method available for measuring the distance from Earth to the stars when Leavitt started her work was the parallax method, which relied on the observed change in position of a star against a non-moving background of more distant stars as the Earth revolved around the Sun. But this method could only be used on stars closer than about 100 light years from Earth, and most stars and other galaxies are beyond this distance. Clearly another method was needed.
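The parallax method boils down to a one-line formula: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds. The sketch below shows how a measurement floor of roughly 0.03 arcseconds, my stand-in figure for the ground-based instruments of that era, caps the method at roughly the 100-light-year range mentioned above:

```python
# Parallax distance: d [parsecs] = 1 / p [arcseconds].
LY_PER_PARSEC = 3.26  # one parsec is about 3.26 light years

def parallax_distance_ly(parallax_arcsec):
    """Distance in light years from an annual parallax angle in arcseconds."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# The smallest parallax plausibly measurable from the ground in Leavitt's
# era was roughly 0.03 arcseconds (assumed value), which limits the method to:
print(f"{parallax_distance_ly(0.03):.0f} light years")  # ~109
```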

Consider a flashlight. Flip it on and have someone walk it away from you. Its brightness decreases. Using basic physics, you could calculate its distance from you based on its observed brightness. Ah, but you would have to know one more variable. How inherently bright is the flashlight when it’s right next to you? You need to know the relation between observed brightness and inherent brightness to calculate distance. In astronomy, the challenge in knowing inherent star brightness lies in the fact that there are many different types of stars, each with its own inherent brightness, which itself could vary depending on other variables related to that star. What was needed was a way to identify a known type of star with known variables and thus known inherent brightness.
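The flashlight reasoning is the inverse-square law: observed flux F = L / (4πd²), where L is the inherent luminosity, so knowing L lets you invert for distance. A minimal sketch, in arbitrary units:

```python
import math

# Inverse-square law: observed flux F = L / (4*pi*d^2), where L is the
# source's inherent luminosity. Knowing L and measuring F gives the distance.
def distance_from_brightness(luminosity, observed_flux):
    """Distance at which a source of the given luminosity shows the given flux."""
    return math.sqrt(luminosity / (4 * math.pi * observed_flux))

# Sanity check: a source of luminosity 100 viewed from distance 10 produces
# flux 100 / (4*pi*10^2); inverting that flux should recover the distance.
flux = 100 / (4 * math.pi * 10**2)
print(distance_from_brightness(100, flux))  # recovers 10.0
```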

We now return to Henrietta Leavitt. The Harvard College Observatory had a treasure trove of photographic plate images of stars taken by astronomers. Pickering assigned Leavitt the task of studying the plates to identify “variable stars” in the Small and Large Magellanic Clouds. These are stars whose brightness swings back and forth from bright to dim to bright on a regular basis of hours, days, or weeks. (This swing was later determined to be caused by the ionization and de-ionization of helium in the pulsating star atmosphere.) Leavitt’s strong work ethic and exceptional skill led her to discover 1,777 variable stars in these clouds and, by the end of her career, more than 2,400 such stars, about half the total known at that time.

Within this effort, Leavitt narrowed her focus to a specific type of variable star called a Cepheid and started pursuing her interest in the relation between the observed maximum brightness and the rate of variation in brightness. Recall that observed brightness depends on both inherent brightness and distance. For her research, Leavitt wanted to remove distance as a variable and so needed to identify a set of these Cepheid variables all at the same distance from Earth. And this she did. She discovered 25 Cepheid variables in the Small Magellanic Cloud, assumed that they were all the same distance from the Earth, and for each compared maximum brightness against the period of variation.

Have you ever experienced that eureka moment of excitement after unveiling some aspect of nature for the first time? Stuart Firestein describes this well in his book Ignorance – How It Drives Science (p. 160). “I am afraid that it is impossible to convey completely the excitement of discovery, of seeing the result of an experiment and knowing that you know something new, something fundamental, and that for this moment at least, only you, in the entire world, knows it.” I can imagine that this is how Leavitt must have felt when she plotted her data, because the correlation was right there in front of her: as the rate of variation slows down (longer period), maximum brightness increases. What a wonderful moment in science this was.

Leavitt’s discovery was not the final step in the process of measuring inter-galactic distances. Instead, it was the penultimate step. The reason? Her data weren’t calibrated to the distance from the Small Magellanic Cloud (SMC) to Earth. This situation was rectified when Ejnar Hertzsprung and later Harlow Shapley measured the distance from Earth to a single Cepheid outside the SMC and close enough for the parallax method to be used. With this distance in hand, and with the distance between the Cepheid and the SMC calculated based on maximum brightness difference (for the same variation rate), astronomers could calculate the distance from Earth to the SMC and so convert Leavitt’s brightness scale from “observed” to “inherent.” This then enabled the following process: identify a distant cloud, find a Cepheid variable in that cloud, measure its rate of brightness variation, determine its inherent brightness from Leavitt, Hertzsprung, and Shapley’s work, compare this with the observed brightness, and thus calculate distance from Earth to that Cepheid.
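The procedure in the paragraph above can be condensed into a few lines of code. The period-luminosity coefficients below are illustrative placeholders in the general form modern calibrations take, not Leavitt's, Hertzsprung's, or Shapley's actual numbers.

```python
import math

# Distance-ladder sketch: pulsation period -> inherent brightness (Leavitt's
# period-luminosity relation, calibrated by Hertzsprung and Shapley) ->
# distance via the distance modulus m - M = 5*log10(d / 10 pc).
A, B = -2.8, -1.4  # placeholder slope and intercept (illustrative only)

def absolute_magnitude(period_days):
    """Inherent brightness (absolute magnitude) from the pulsation period."""
    return A * math.log10(period_days) + B

def distance_parsecs(apparent_mag, period_days):
    """Distance from observed brightness plus the period-luminosity relation."""
    M = absolute_magnitude(period_days)
    return 10 ** ((apparent_mag - M + 5) / 5)

# A hypothetical Cepheid with a 10-day period observed at apparent magnitude 14:
print(f"{distance_parsecs(14.0, 10.0):.0f} parsecs")
```

Note the magnitude convention: brighter means a more negative magnitude, so the negative slope A encodes Leavitt's finding that longer-period Cepheids are inherently brighter.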

The creation of this new distance-measuring methodology from Leavitt’s work was a true breakthrough moment in the fields of not only astronomy but also cosmology, for in 1929-1931 Edwin Hubble combined Slipher’s work with Leavitt’s to reach deep into the universe and gather the data used to support the hypothesis of an expanding universe, which then led to the Big Bang theory. The figure below from my book Block by Block illustrates this process.

Henrietta Leavitt unfortunately didn’t live to see Hubble’s amazing results. She died of stomach cancer on December 12, 1921, at the age of 53. In her obituary, fellow astronomer Solon I. Bailey shared the following:

“She took life seriously. Her sense of duty, justice and loyalty was strong. For light amusements she appeared to care little. She was a devoted member of her intimate family circle, unselfishly considerate in her friendships, steadfastly loyal to her principles, and deeply conscientious and sincere in her attachment to her religion and church. She had the happy faculty of appreciating all that was worthy and lovable in others, and was possessed of a nature so full of sunshine that to her all of life became beautiful and full of meaning… Miss Leavitt was of an especially quiet and retiring nature, and absorbed in her work to an unusual degree. She had the highest esteem of all her associates at the Harvard Observatory, where her loss is keenly felt.” (Popular Astronomy, 30, no. 4, April 1922, pp. 197-199)

And so on this July 4, her birthday, let us recognize Henrietta Leavitt for her curiosity, intuition, intelligence, and roll-up-your-sleeves perseverance in collecting and interpreting plate after countless plate of data, thereby contributing to the most wondrous of human achievements, the discovery of the Big Bang origin of our universe. Here’s to Henrietta Leavitt, a true scientist and role model.

From Block by Block – The Historical and Theoretical Foundations of Thermodynamics

The 170th Anniversary of the 1st Law of Thermodynamics — A Tribute to Rudolf Clausius

Upon publishing my book, Block by Block – The Historical and Theoretical Foundations of Thermodynamics, Oxford University Press kindly invited me to write a post related to my book for their academic blog. I gladly accepted and chose as my topic the creation of the 1st Law of Thermodynamics by Rudolf Clausius’ work of 1850.

Here is a link to my post. I hope you’ll enjoy it.

Here’s Why I Wrote “Block by Block” (video)

I’m very excited to share in the video below why I wrote Block by Block – The Historical and Theoretical Foundations of Thermodynamics. As you’ll see, I explain my motivation and also the book’s structure. It’s a readable account of both the history and science of thermodynamics. Enjoy!