
1 is more important than 9: Benford’s Law

August 26, 2015

Let’s start like this: think of some number that describes nature, or any object in it.  It can be any mathematical or physical constant or measurement, in any system of units.

Got one?

I predict, using my psychic powers, that you were much more likely to have thought of a number that begins with 1, 2, or 3 rather than a number that begins with 7, 8, or 9.

As it turns out, the probability is about four times higher. In fact, the probability of having a particular first digit decreases monotonically with the value of the digit (1 is a more common first digit than 2, 2 is more common than 3, and so on).   And the odds of you having picked a number that starts with 1 are about seven times higher than the odds of you having picked a number that starts with 9.


This funny happenstance is part of a larger observation called Benford’s law.  Broadly speaking, Benford’s law says that the lower counting numbers (like 1, 2, and 3) are disproportionately likely to be the first digit of naturally-occurring numbers.

In this post I’ll talk a little bit about Benford’s law, its quantitative form, and how one can think about it.

But first, as a fun exercise, I decided to see whether Benford’s law holds for the numbers I personally tend to use and care about.

(Here I feel I must pause to acknowledge how deeply, ineluctably nerdy that last sentence reveals me to be.)

So I made a list of the physical constants that I tend to think about — or, at least, of the ones that occurred to me at the moment of making the list.  These are presented below in no particular order, and with no particular theme or guarantee for completeness and non-redundancy (i.e., some of the constants on this list can be made by combining others).


After a quick look-over, it’s pretty clear that this table has a lot more numbers starting with 1 than numbers starting with 9.  A histogram of first digits in this table looks like this:

[Figure: histogram of first digits in my list of constants]

Clearly, there are more small digits than large digits.  (And somehow I managed to avoid any numbers that start with 4.  This is perhaps revealing about me.)


As far as I can tell, there is no really satisfying proof of Benford’s law.  But if you want to get some feeling for where it comes from, you can notice that those numbers on my table cover a really wide range of values: ranging in scale from 10^{-35} (the Planck length) to 10^{30} (the sun’s mass).  (And no doubt they would cover a wider range if I were into astronomy.)  So if you wanted to put all those physical constants on a single number line, you would have to do it in logarithmic scale.  Like this:

[Figure: the constants placed on a logarithmic number line]

The funny thing about a logarithmic scale, though, is that it distorts the real line, giving more length to numbers beginning with lower integers.  For example, here is the same line from above, zoomed in to the interval between 1 and 10:

[Figure: the same number line, zoomed in to the interval between 1 and 10]

You can see in this picture that the interval from 1 to 2 is much longer than the interval from 9 to 10.  (And, just to remind you, the general rule for logarithmic scales is that the same interval separates any two numbers with the same ratio.  So, for example, 1 and 3 are as far from each other as 2 and 6, or 3 and 9, or 500 and 1500.)  If you were to choose a set of numbers by randomly throwing darts at a logarithmic scale, you would naturally get more 1’s and 2’s than 8’s and 9’s.

What this implies is that if you want a quantitative form for Benford’s law, you can just compare the lengths of the different intervals on the logarithmic scale.  This gives:

P(d) = \log_{10}(1 + 1/d),

where d is the value of the first digit and P(d) is the relative abundance of that digit.
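To make the numbers concrete, here is a quick sketch (my own, not from the post) that evaluates this formula for each possible first digit.  The nine probabilities sum to 1, and a leading 1 comes out roughly seven times more likely than a leading 9, as promised above.

```python
# Evaluate Benford's law, P(d) = log10(1 + 1/d), for each possible first digit d.
import math

for d in range(1, 10):
    print(f"P({d}) = {math.log10(1 + 1 / d):.3f}")   # P(1) ~ 0.301 ... P(9) ~ 0.046

print("sum =", round(sum(math.log10(1 + 1 / d) for d in range(1, 10)), 6))  # 1.0
```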


If you have a large enough data set, this quantitative form of Benford’s law tends to come through pretty clearly.  For example, if you take all 335 entries from the list of physical constants provided by NIST, then you find that the abundance of different first digits is described by the formula above with pretty good quantitative accuracy:



Now, if you don’t like the image of choosing constants of nature by throwing darts at a logarithmic scale, let me suggest another way to see it: Benford’s law is what you’d get as the result of a random walk using multiplicative steps.

In the conventional random walk, the walker steps randomly to the right or left with steps of constant length, and after a long time ends up at a random position on the number line.  But imagine instead that the random walker takes steps of constant multiplicative value — for example, at each “step” the walker could have his position multiplied by either 2/3 or 3/2.   This would correspond to steps that appeared to have constant length on the logarithmic scale.  Consequently, after many steps the walker would have a random position on the logarithmic axis, and so would be more likely to end up in one of those wider 1–2 bins than in one of the narrower 8–9 bins.

The upshot is that one way to think about Benford’s law is that the numbers we have arise from a process of multiplying many other “randomly chosen” numbers together.  This multiplication naturally skews our results toward numbers that begin with low digits.
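If you want to see this mechanism in action, here is a rough simulation (my own sketch, not code from the post).  For faster convergence it draws each multiplicative factor uniformly between 1/2 and 2 rather than using the fixed 2/3-or-3/2 steps described above, but the idea is the same: after many multiplicative steps, the first digits of the results land very close to the Benford distribution.

```python
# Random walk with multiplicative steps: tally the first digit of the final position.
import random
from collections import Counter
from math import log10

random.seed(1)
N_WALKS, N_STEPS = 50_000, 50
counts = Counter()
for _ in range(N_WALKS):
    x = 1.0
    for _ in range(N_STEPS):
        x *= random.uniform(0.5, 2.0)   # one multiplicative step
    counts[f"{x:e}"[0]] += 1            # first significant digit of the result

for d in "123456789":
    print(d, f"observed {counts[d] / N_WALKS:.3f}   Benford {log10(1 + 1 / int(d)):.3f}")
```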


By the way, for me the notion of “randomly multiplying numbers together” immediately brings to mind the process of doing homework as an undergraduate.  This inspired me to grab a random physics book off my shelf (which happened to be Tipler and Llewellyn’s Modern Physics, 3rd edition) and check the solutions to the homework problems in the back.

Sure enough:

[Figure: first-digit histogram of the homework answers]

So the next time you find yourself trying to randomly guess answers, remember Benford’s law.



A children’s picture-book introduction to quantum field theory

August 20, 2015

Note: the following is the second in a series of blog posts that I was invited to contribute to the blog ribbonfarm.  You can go here to see the original post, which contains quite a bit of discussion in the comments.


First of all, don’t panic.

I’m going to try in this post to introduce you to quantum field theory, which is probably the deepest and most intimidating set of ideas in graduate-level theoretical physics.  But I’ll try to make this introduction in the gentlest and most palatable way I can think of: with simple-minded pictures and essentially no math.

To set the stage for this first lesson in quantum field theory, let’s imagine, for a moment, that you are a five-year-old child.  You, the child, are talking to an adult, who is giving you one of your first lessons in science.  Science, says the adult, is mostly a process of figuring out what things are made of.  Everything in the world is made from smaller pieces, and it can be exciting to find out what those pieces are and how they work.  A car, for example, is made from metal pieces that fit together in specially-designed ways.  A mountain is made from layers of rocks that were pushed up from inside the earth.  The earth itself is made from layers of rock and liquid metal surrounded by water and air.

This is an intoxicating idea: everything is made from something.

So you, the five-year-old, start asking audacious and annoying questions. For example:

What are people made of?
People are made of muscles, bones, and organs.
Then what are the organs made of?
Organs are made of cells.
What are cells made of?
Cells are made of organelles.
What are organelles made of?
Organelles are made of proteins.
What are proteins made of?
Proteins are made of amino acids.
What are amino acids made of?
Amino acids are made of atoms.
What are atoms made of?
Atoms are made of protons, neutrons, and electrons.
What are electrons made of?
Electrons are made from the electron field.
What is the electron field made of?

And, sadly, here the game must come to an end, eight levels down.  This is the hard limit of our scientific understanding.  To the best of our present ability to perceive and to reason, the universe is made from fields and nothing else, and these fields are not made from any smaller components.

But it’s not quite right to say that fields are the most fundamental thing that we know of in nature.  Because we know something that is in some sense even more basic: we know the rules that these fields have to obey.  Our understanding of how to codify these rules came from a series of truly great triumphs in modern physics.  And the greatest of these triumphs, as I see it, was quantum mechanics.

In this post I want to try and paint a picture of what it means to have a field that respects the laws of quantum mechanics.  In a previous post, I introduced the idea of fields (and, in particular, the all-important electric field) by making an analogy with ripples on a pond or water spraying out from a hose.  These images go surprisingly far in allowing one to understand how fields work, but they are ultimately limited in their correctness because the implied rules that govern them are completely classical.  In order to really understand how nature works at its most basic level, one has to think about a field with quantum rules.


The first step in creating a picture of a field is deciding how to imagine what the field is made of.  Keep in mind, of course, that the following picture is mostly just an artistic device.  The real fundamental fields of nature aren’t really made of physical things (as far as we can tell); physical things are made of them.  But, as is common in science, the analogy is surprisingly instructive.

So let’s imagine, to start with, a ball at the end of a spring.  Like so:


This is the object from which our quantum field will be constructed.  Specifically, the field will be composed of an infinite, space-filling array of these ball-and-springs.


To keep things simple, let’s suppose that, for some reason, all the springs are constrained to bob only up and down, without twisting or bending side-to-side.  In this case the array of springs can be called, using the jargon of physics, a scalar field.  The word “scalar” just means a single number, as opposed to a set or an array of multiple numbers.  So a scalar field is a field whose value at a particular point in space and time is characterized only by a single number.  In this case, that number is the height of the ball at the point in question.  (You may notice that what I described in the previous post was a vector field, since the field at any given point was characterized by a velocity, which has both a magnitude and a direction.)

In the picture above, the array of balls-and-springs is pretty uninteresting: each ball is either stationary or bobs up and down independently of all others.  In order to make this array into a bona fide field, one needs to introduce some kind of coupling between the balls.  So, let’s imagine adding little elastic bands between them:


Now we have something that we can legitimately call a field.  (My quantum field theory book calls it a “mattress”.)  If you disturb this field – say, by tapping on it at a particular location – then it will set off a wave of ball-and-spring oscillations that propagates across the field.  These waves are, in fact, the particles of field theory.  In other words, when we say that there is a particle in the field, we mean that there is a wave of oscillations propagating across it.
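For readers who like to tinker, here is a crude numerical cartoon (my own sketch, and only a cartoon) of a one-dimensional version of this mattress: a chain of balls on springs, each coupled to its neighbors, integrated forward in time after a single tap in the middle.  All of the constants are arbitrary choices for illustration.

```python
# A 1D "mattress": each ball has its own restoring spring (OMEGA0) and is tied to its
# neighbors by elastic bands (COUPLING). Tapping one ball launches a wave that spreads
# outward -- the field-theory cartoon of a particle.
N = 200            # number of balls in the chain (periodic boundary)
OMEGA0 = 1.0       # each ball's own spring
COUPLING = 4.0     # strength of the elastic bands between neighbors
DT = 0.01          # time step

phi = [0.0] * N    # displacement of each ball
vel = [0.0] * N
vel[N // 2] = 1.0  # "tap" the field once, in the middle

for _ in range(4000):
    acc = [-OMEGA0**2 * phi[i]
           + COUPLING * (phi[(i + 1) % N] - 2 * phi[i] + phi[(i - 1) % N])
           for i in range(N)]
    vel = [v + a * DT for v, a in zip(vel, acc)]   # semi-implicit Euler step
    phi = [p + v * DT for p, v in zip(phi, vel)]

# Crude snapshot: '#' marks balls that are noticeably displaced right now.
print("".join("#" if abs(p) > 0.005 else "." for p in phi))
active = [i for i in range(N) if abs(phi[i]) > 0.005]
print(f"disturbance spans sites {min(active)}..{max(active)} (the tap was at site {N // 2})")
```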

These particles (the oscillations of the field) have a number of properties that are probably familiar from the days when you just thought of particles as little points whizzing through empty space.  For example, they have a well-defined propagation velocity, which is related to the weight of each of the balls and the tightness of the springs and elastic bands.  This characteristic velocity is our analog of the “speed of light”.  (More generally, the properties of the springs and masses define the relationship between the particle’s kinetic energy and its propagation velocity, like the  KE = \frac{1}{2}mv^2 of your high school physics class.)  The properties of the springs also define the way in which particles interact with each other.  If two particle-waves run into each other, they can scatter off each other in the same way that normal particles do.

(A technical note: the degree to which the particles in our field scatter upon colliding depends on how “ideal” the springs are.  If the springs are perfectly described by Hooke’s law, which says that the restoring force acting on a given ball is linearly proportional to the spring’s displacement from equilibrium, then there will be no interaction whatsoever.  For a field made of such perfectly Hookean springs, two particle-waves that run into each other will just go right through each other.  But if there is any deviation from Hooke’s law, such that the springs get stiffer as they are stretched or compressed, then the particles will scatter off each other when they encounter one another.)

Finally, the particles of our field clearly exhibit “wave-particle duality” in a way that is easy to see without any philosophical hand-wringing.  That is, our particles by definition are waves, and they can do things like interfere destructively with each other or diffract through a double slit.

All of this is very encouraging, but at this point our fictitious field lacks one very important feature of the real universe: the discreteness of matter.  In the real world, all matter comes in discrete units: single electrons, single photons, single quarks, etc.  But you may notice that for the spring field drawn above, one can make an excitation with completely arbitrary magnitude, by tapping on the field as gently or as violently as one wants.  As a consequence, our (classical) field has no concept of a minimal piece of matter, or a smallest particle, and as such it cannot be a very good analogy to the actual fields of nature.


To fix this problem, we need to consider that the individual constituents of the field – the balls mounted on springs – are themselves subject to the laws of quantum mechanics.

A full accounting of the laws of quantum mechanics can take some time, but for the present pictorial discussion, all you really need to know is that a quantum ball on a spring has two rules that it must follow.  1) It can never stop moving, but instead must be in a constant state of bobbing up and down.  2) The amplitude of the bobbing motion can only take certain discrete values.
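(For reference, both of these rules follow from the textbook quantization of a harmonic oscillator, which the post does not spell out: a quantum ball-and-spring with natural frequency \omega can only have the energies

E_n = \hbar \omega (n + 1/2), \qquad n = 0, 1, 2, \ldots

so even the lowest state keeps bobbing with energy \hbar \omega / 2, and energy can only be added to it in whole lumps of \hbar \omega.)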

[Figure: the quantized oscillation amplitudes of a ball on a spring]

This quantization of the ball’s oscillation has two important consequences.  The first consequence is that, if you want to put energy into the field, you must put in at least one quantum.  That is, you must give the field enough energy to kick at least one ball-and-spring into a higher oscillation state.  Arbitrarily light disturbances of the field are no longer allowed. Unlike in the classical case, an extremely light tap on the field will produce literally zero propagating waves.  The field will simply not accept energies below a certain threshold.  Once you tap the field hard enough, however, a particle is created, and this particle can propagate stably through the field.

This discrete unit of energy that the field can accept is what we call the rest mass energy of particles in a field.  It is the fundamental amount of energy that must be added to the field in order to create a particle.  This is, in fact, how to think about Einstein’s famous equation  E = mc^2  in a field theory context.  When we say that a fundamental particle is heavy (large mass  m ), it means that a lot of energy has to be put into the field in order to create it.  A light particle, on the other hand, requires only a little bit of energy.

(By the way, this is why physicists build huge particle accelerators whenever they want to study exotic heavy particles.  If you want to create something heavy like the Higgs boson, you have to hit the Higgs field with a sufficiently large (and sufficiently concentrated) burst of energy to give the field the necessary one quantum of energy.)

The other big implication of imposing quantum rules on the ball-and-spring motion is that it changes pretty dramatically the meaning of empty space.  Normally, empty space, or vacuum, is defined as the state where no particles are around.  For a classical field, that would be the state where all the ball-and-springs are stationary and the field is flat.  Something like this:

[Figure: a flat, motionless field (the classical vacuum)]

But in a quantum field, the ball-and-springs can never be stationary: they are always moving, even when no one has added enough energy to the field to create a particle.  This means that what we call vacuum is really a noisy and densely energetic surface:


This random motion (called vacuum fluctuations) has a number of fascinating and eminently noticeable influences on the particles that propagate through the vacuum.  To name a few, it gives rise to the Casimir effect (an attraction between parallel surfaces, caused by vacuum fluctuations pushing them together) and the Lamb shift (a shift in the energy of atomic orbits, caused by the electron getting buffeted by the vacuum).

In the jargon of field theory, physicists often say that “virtual particles” can briefly and spontaneously appear from the vacuum and then disappear again, even when no one has put enough energy into the field to create a real particle.  But what they really mean is that the vacuum itself has random and indelible fluctuations, and sometimes their influence can be felt by the way they kick around real particles.

That, in essence, is a quantum field: the stuff out of which everything is made.  It’s a boiling sea of random fluctuations, on top of which you can create quantized propagating waves that we call particles.

I only wish, as a primarily visual thinker, that the usual introduction to quantum field theory didn’t look quite so much like this.  Because behind the equations of QFT there really is a tremendous amount of imagination, and a great deal of wonder.


Where do electric forces come from?

June 23, 2015

Note: what follows is a reproduction of a post I wrote for the blog ribbonfarm.  You can go here to see the original post, which contains some additional discussion in the comments.


There’s a good chance that, at some point in your life, someone told you that nature has four fundamental forces: gravity, the strong nuclear force, the weak nuclear force, and the electromagnetic force.

This factoid is true, of course.

But what you probably weren’t told is that, at the scale of just about any natural thing that you are likely to think about, only one of those four forces has any relevance.  Gravity, for example, is so obscenely weak that one has to collect planet-sized balls of matter before its effect becomes noticeable.  At the other extreme, the strong nuclear force is so strong that it can never go unneutralized over distances larger than a few times the diameter of an atomic nucleus ( \sim 10^{-15} meters); any larger object will essentially never notice its existence.  Finally, the weak nuclear force is extremely short-ranged, so that it too has effectively no influence over distances larger than  \sim 10^{-15} meters.

That leaves the electromagnetic force, or, in other words, the Coulomb interaction.  This is the familiar law that says that like charges repel each other and opposites attract.  This law alone dominates the interactions between essentially all objects larger than an atomic nucleus ( 10^{-15} meters) and smaller than a planet ( 10^{7} meters).  That’s more than twenty powers of ten.

But not only does the “four fundamental forces” meme give a false sense of egalitarianism between the forces, it is also highly misleading for another reason.  Namely, in physics forces are not considered to be “fundamental”.  They are, instead, byproducts of the objects that really are fundamental (to the best of our knowledge): fields.


Where does magnetism come from?

April 19, 2015

Magnets are complicated.

This fact is pretty well illustrated by how hard it is to predict whether a given material will make a good magnet or not. For example, if someone tells you the chemical composition of some material and then asks you “will it be magnetic?”, you (along with essentially all physicists) will have a hard time answering.  That’s because whether a material behaves like a magnet usually depends very sensitively on things like the crystal structure of the material, the valence of the different atoms, and what kind of defects are present.  Subtle changes to any of these things can make the difference between having a strong magnet and having an inert block.  Consequently, a whole scientific industry has grown up around the question “will X material be magnetic?”, and it keeps thousands of scientists gainfully employed.

On the other hand, there is a pretty simple answer to the more general question “where does magnetism come from?”  And, in my experience, this answer is not very well-known, despite the obvious public outcry for an explanation.



So here’s the answer:  Magnetism comes from the exchange interaction.

In this post, I want to explain what the “exchange interaction” is, and outline the essential ingredients that make magnets work.  Unless you’re a physicist or a chemist, these ingredients are probably not what you expect.  (And, despite some pretty good work by talented science communicators, I haven’t yet seen a popular explanation that introduces both of them.)


Ingredient #1: Electrons are little magnets


As I discussed in a recent post, individual electrons are themselves like tiny magnets.  They create magnetic fields around themselves in the same way that a tiny bar magnet would, or that a spinning charged sphere would.  In fact, the “north pole” of an electron points in the same direction as the electron’s “spin”.

(This is not to say that an electron actually is a little spinning charged sphere.  My physics professors always told me not to think about it that way… but sometimes I get away with it anyway.)

The magnetic field created by one electron is relatively strong at very short distances: at a distance of one angstrom, it is as strong as 1 Tesla.  But the strength of that field decays quickly with distance, as 1/(\text{distance})^3.  At any distance \gtrsim 3 nm away the magnetic field produced by one electron is essentially gone — it’s weaker than the Earth’s magnetic field (tens of microTesla).
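If you want to check those numbers, here is a quick back-of-envelope script (my own sketch, using the standard on-axis dipole-field formula and standard constants):

```python
# On-axis magnetic field of a single electron moment (one Bohr magneton) at distance r.
from math import pi

MU_0 = 4e-7 * pi            # vacuum permeability, T*m/A
MU_B = 9.274e-24            # Bohr magneton, J/T (the electron's magnetic moment)

def dipole_field(r):
    """B = (mu_0 / 4*pi) * 2*mu / r^3, in tesla."""
    return MU_0 / (4 * pi) * 2 * MU_B / r**3

print(dipole_field(1e-10))  # ~1.9 T at 1 angstrom
print(dipole_field(3e-9))   # ~7e-5 T at 3 nm, comparable to the Earth's field
```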

What this means is that if you want a collection of electrons to act like a magnet then you need to get a large fraction of them to point their spins (their north poles) in the same direction.  For example, if you want a permanent magnet that can create a field of 1 Tesla (as in the internet-famous neodymium magnets), then you need to align about one electron per cubic angstrom (which is more than one electron per atom).

So what is it that makes all those electrons align their spins with each other?  This is the essential puzzle of magnetism.


Before I tell you the right answer, let me tell you the wrong answer.  The wrong answer is probably the one that you would first think of.  Namely, that all magnets have a natural pushing/pulling force on each other: they like to align north-to-south.  So too, you might think, will these same electrons push each other into alignment through their north-to-south attraction.  Your elementary/middle school teacher may have even explained magnets to you by showing you a picture that looks like this:

(Never mind the fact that the picture on the right, in addition to its aligned North and South poles, has a bunch of energetically costly North/North and South/South side-by-side pairs.  So it’s not even clear that it has a lower energy than the picture on the left.)

But if you look a little more closely at the magnetic forces between electrons, you quickly see that they are way too weak to matter.  Even at about 1 Angstrom of separation (which is the distance between two neighboring atoms), the energy of magnetic interaction between two electrons is less than 0.0001 electronVolts (or, in units more familiar to chemists, about 0.001 kcal/mol).
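The same kind of back-of-envelope check works for that interaction energy (again my own sketch, with standard constants): the characteristic dipole-dipole energy scale at one angstrom is indeed a few times 10^{-5} electron volts.

```python
# Characteristic magnetic dipole-dipole energy of two electron moments at 1 angstrom.
from math import pi

MU_0 = 4e-7 * pi            # vacuum permeability, T*m/A
MU_B = 9.274e-24            # Bohr magneton, J/T
EV = 1.602e-19              # joules per electron volt
r = 1e-10                   # 1 angstrom, in meters

U = MU_0 / (4 * pi) * MU_B**2 / r**3
print(U / EV)               # ~5e-5 eV, i.e. less than 0.0001 eV
```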

The lives of electrons, on the other hand, are played out on the scale of about 10 electronVolts: about 100,000 times stronger than the magnetic interaction.  At this scale, there are really only two kinds of energies that matter: the electric repulsion between electrons (which goes hand-in-hand with the electric attraction to nuclei) and the huge kinetic energy that comes from quantum motion.

In other words, electrons within a solid material are feeling enormous repulsive forces due to their electric charges, and they are flying around at speeds of millions of miles per hour.  They don’t have time to worry about the puny magnetic forces that are being applied.

[Figure: “ain’t nobody got time for that”]

So, in cases where electrons decide to align their spins with each other, it must be because it helps them reduce their enormous electric repulsion, and not because it has anything to do with the little magnetic forces.


Ingredient #2: Same-spin electrons avoid each other


The basic idea behind magnetism is like this: since electrons are so strongly repulsive, they want to avoid each other as much as possible.  In principle, the simplest way to do this would be to just stop moving, but this is prohibited by the rules of quantum mechanics.  (If you try to make an electron sit still, then by the Heisenberg uncertainty principle its momentum becomes very uncertain, which means that it acquires a large velocity.)  So, instead of stopping, electrons try to find ways to avoid running into each other.  And one clever method is to take advantage of the Pauli exclusion principle.

In general, the Pauli principle says that no two electrons can have the same state at the same time.  There are lots of ways to describe “electron state”, but one way to state the Pauli principle is this:

No two electrons can simultaneously have the same spin and the same location.

This means that any two electrons with the same spin must avoid each other.  They are prohibited by the most basic laws of quantum mechanics from ever being in the same place at the same time.

Or, from the point of view of the electrons, the trick is like this: if a pair of electrons points their spin in the same direction, then they are guaranteed to never run into each other.


It is this trick that drives magnetism.  In a magnet, electrons point their spins in the same direction, not because of any piddling magnetic field-based interaction, but in order to guarantee that they avoid running into each other.  By not running into each other, the electrons can save a huge amount of repulsive electric energy.  This saving of energy by aligning spins is what we (confusingly) call the “exchange interaction”.

To make this discussion a little more quantitative, one can talk about the probability P(r) that a given pair of electrons will find themselves with a separation r.  For electrons with opposite spin (in a metal), this probability distribution looks pretty flat: electrons with opposite spin are free to run over each other, and they do.

But electrons with the same spin must never be at the same location at the same time, and thus the probability distribution P(r) must go to zero at r = 0; it has a “hole” in it at small r.  The size of that hole is given by the typical wavelength of electron states: the Fermi wavelength \lambda_F \sim n^{-1/3} (where n is the concentration of electrons).  This result makes some sense: after all, the only meaningful way to interpret the statement “two electrons can’t be in the same place at the same time” is to say that “two electrons can’t be within a distance R of each other, where R is the electron size”.  And the only meaningful definition of the electron size is the electron wavelength.

If you plot the two probability distributions together, they look something like this:


If you want to know how much energy the electrons save by aligning their spins, then you can integrate the distribution P(r) multiplied by the interaction energy law V(r) over all possible distances r, and compare the result you get for the two cases.  The “hole” in the orange curve is sometimes called the “exchange hole”, and it means that electrons with the same spin have a weaker average interaction with each other.  This is what drives magnetism.

(In slightly more technical language, the most useful version of the probability P(r) is called the “pair distribution function”, which people spend a lot of time calculating for electron systems.)
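For the curious, here is a small sketch of that function (my own code, using the standard Hartree-Fock result for a uniform electron gas, which is only an approximation to a real metal).  The same-spin distribution vanishes at zero separation and recovers over a distance of order the Fermi wavelength, while in this approximation the opposite-spin distribution is completely flat.

```python
# Pair distribution function g(r) in the Hartree-Fock approximation for a free
# electron gas. x = k_F * r, where k_F is the Fermi wavevector (so 1/k_F is of the
# order of the Fermi wavelength).
from math import sin, cos

def g_same_spin(x):
    """Same-spin pairs: the 'exchange hole' at small separation."""
    if x == 0.0:
        return 0.0
    j1_over_x = (sin(x) - x * cos(x)) / x**3     # spherical Bessel j_1(x) / x
    return 1.0 - (3.0 * j1_over_x) ** 2

for x in (0.0, 0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"k_F r = {x:3.1f}   same spin {g_same_spin(x):.3f}   opposite spin 1.000")
```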


Epilogue: So why isn’t everything magnetic?


To recap, there are two ingredients that produce magnetism:

  1. Electrons themselves are tiny magnets.  For a bulk object to be a magnet, a bunch of the electrons have to point their spins in the same direction.
  2. Electrons like to point their spins in the same direction, because this guarantees that they will never run into each other, and this saves them a lot of electric repulsion energy.

These two features are more-or-less completely generic.  So now you can go back to the first sentence of this post (“Magnets are complicated”) and ask, “wait, why are they complicated?  If electrons universally save on their repulsive energy by aligning their spins, then why doesn’t everything become a magnet?”

The answer is that there is an additional cost that comes when the electrons align their spins.  Specifically, electrons that align their spins are forced into states with higher kinetic energy.

You can think about this connection between spin and kinetic energy in two ways.  The first is that it is completely analogous to the problem of atomic orbitals (or the simpler quantum particle in a box).  In this problem, every allowable state for an electron can hold only two electrons, one in each spin direction.  But if you start forcing all electrons to have the same spin, then each energy level can only hold one electron, and a bunch of electrons get forced to sit in higher energy levels.

The other way to think about the cost of spin polarization is to notice that when you give electrons the same spin, and thereby force them to avoid each other, you are really confining them a little bit more (by constraining their wavefunctions to not overlap with each other).  This extra bit of confinement means that their momentum has to go up (again, by the Heisenberg uncertainty principle), and so they start moving faster.

Either way, it’s clear that aligning the electron spins means that the electrons have to acquire a larger kinetic energy.  So when you try to figure out whether the electrons actually will align their spins, you have to weigh the benefit (having a lower interaction energy) against the cost (having a higher kinetic energy).  A quantitative weighing of these two factors can be difficult, and that’s why so many scientific types can make a living by it.


But the basic driver of magnetism is really as simple as this: like-spin electrons do a better job of avoiding each other, and when electrons line up their spins they make a magnet.

So the next time someone asks you “magnets: how do they work?”, you can reply “by the exchange interaction!”  And then you can have a friendly discussion without resorting to profanity or name-calling.



1. The simplest quantitative description of the tradeoff between the interaction energy gained by magnetism and the kinetic energy cost is the so-called Stoner model.  I can write a more careful explanation of it some time if anyone is interested.

2. One thing that I didn’t explicitly bring up (but which most popular descriptions of magnetism do bring up) is that electrons also create magnetic fields by virtue of their orbits around atomic nuclei.  This makes the story a bit more complicated, but doesn’t change it in a fundamental way.  In fact, the magnetic fields created by those orbits are nearly equal in magnitude to the ones created by the electron spin itself, so thinking about them doesn’t change any order-of-magnitude estimates.  (But you will definitely need to think about them if you want to predict the exact strength of the magnetic field in a material.)

3. There is a pretty simple version of magnetization that occurs within individual atoms.  This is called Hund’s rule, which says that when you have a partially-filled atomic orbital, the electrons within the orbital will always arrange themselves so as to maximize the amount of spin alignment.  This “magnetization of a single atom” happens for the same reason that I outlined above: when electron spins align, they do a better job of avoiding each other, and their energy is lower.

4. If I were a good popularizer of science, then I would really go out of my way to emphasize the following point.  The existence of magnetism is a visible manifestation of quantum mechanics.  It cannot be understood without the Pauli exclusion principle, or without thinking about the electron spin.  So if magnets feel a little bit like magic, that’s partly because they are a startling manifestation of quantum mechanics on a human-sized scale.

On Godlessness

April 16, 2015

Today, April 16, is the one day in the year when I allow myself to co-opt this blog for very personal purposes.  Usually I use this time as a way to remember Virginia Tech, and my time there.  Today, however, I want to be a little more audacious. 

Forgive me, I want to talk about God and godlessness.

The theme and the day have something of a connection, of course.  But, as before, I will steadfastly refuse to make it.

If you’re here for physics-related content, I apologize; a new post should be up within a couple days.



My childhood was a very religious one.

I mean this not just in the sense that I spent a lot of time in church (which I did), or that religious doctrines played an outsized role in shaping and constraining the events of my life (which they did), but in the sense that religion felt very important to me.  From a very early age, the religious ideas to which I was exposed felt intensely valuable and deeply moving.

I was a Christian (a Mormon, in fact), and the doctrines of Christianity fascinated and moved me in a way that few things in my life have.  I felt very personally the Christian call to strive for a particular ideal of Christ-likeness, one defined by patience, charity, loving kindness, and faith.  It is probably not an exaggeration to say that I took some inspiration from this ideal almost every day of my life, starting from my earliest moments of literacy through the first year or two of college.  Christianity, to me, felt challenging and profound; I’m sure I expected that my religion would be the most important thing I would ever study.  Through deep intellectual thought, I supposed, and through the cultivation of very personal emotional experiences, I would become ever closer to this Christ-like person.  And to do so felt not just like a rewarding challenge, but like a moral imperative.

In short: I was a serious-minded kid, and my religion was perhaps the thing I took most seriously.

I was also a pretty analytical-minded kid.  I loved building things, solving puzzles, and learning how things work.  I relished those moments of wonder when you first realize that something seemingly mundane is much bigger or more intricate than you first supposed.

And, during childhood, those analytical proclivities seemed very much compatible with my religious feelings.  In fact, they usually mixed together in a truly beautiful way.  I remember, for example, the first time I looked at a cell.  I had peeled the thin skin off an onion from the fridge, and I lay down on my stomach on the prickly June grass and looked at the onion skin with a little mirror-lit microscope.  That was a moment of real wonder, as I’m sure it is for many a child: the first time you get a visual sense of the impossible intricacy of life.  And for me, those feelings were compounded and deepened by my awe at their creator.

In those years, every piece of learning or exploration of Nature made me feel like I was getting a little bit closer to God, like I was learning to see the universe just a little bit more like He was able to see it.  And that was exhilarating.

In addition to my seriousness and my nerdiness, my childhood was generally defined by happiness.  I had (and continue to have) a wonderful family, and from them I received the gift of feeling safe and feeling loved in a very fundamental way.  These feelings, not surprisingly, also got entangled with my gratitude for God, as the ostensible source from which love and protection flows.  And gratitude is a beautiful feeling when it is felt deeply and recognized for what it is.

In this way my religion provided the frame for the deepest feelings and experiences in my life.  It was the channel through which I experienced wonder and gratitude, and it provided the context through which I interpreted the love and security that I felt in life.

It was a beautiful way to live.


Of course, I realize in retrospect that there was something dangerous about this religious way of experiencing life.  Specifically, there is a particular trick commonly played by religion (if I can phrase it this way without ascribing bad intention to any particular person) for which I fell completely.

The trick goes like this: in most churches one is taught, quite explicitly, and from the youngest possible age, that God loves you and is watching out for you.  Therefore, the lesson continues, you can feel loved and safe in the world.  It’s a comforting lesson, and when felt deeply it is quite moving.  Implicit in this lesson, however, is something like a threat.  Namely, that if for some reason you don’t know that there is a God who loves you and protects you, then you will have no grounds for feeling loved and safe in the world.

Even without being explicitly taught that unhappy converse statement of the “God loves you” lesson, I accepted it.  I doubt that I even realized how deeply I had accepted it; it just seemed natural.  This was, perhaps, the most dangerous idea that I absorbed from my religion: that I was dependent on my religious belief for the most valuable emotions in my life.  (And maybe this is always the case, that the most dangerous religious lessons are the ones that are absorbed through implication rather than through explicit statement.)

I remember, for example, somewhere in my early teenage years, having a gentle creationism argument with an atheist.  The things that this person was saying (basic things, like “there is no God” — this was a very primitive discussion) seemed, to me, impossible to accept.  Not because they were illogical, but because it seemed intolerable to believe that human life was a cosmic accident with no purpose.  Such an idea simply could not (or should not) be accepted, I thought, because it would rob one of the ability to feel things that were essential for happiness.

As I aged through my teenage years, however, I found it increasingly difficult to be a “religious person.”  Maintaining religious belief and religious feeling required a constant struggle, whose gains were made by concentrated acts of studying, prayer, and willing oneself into a state of religious feeling.  Indeed, the very nature of religion was often described explicitly as a struggle, or a battle: a “good fight” that must be won.  But as I got older, this fight got increasingly difficult.  It involved grappling with deeper and more personal questions, and staving off doubts and alternative ideologies that seemed increasingly natural (and which I often kept at bay by equating their acceptance with a kind of moral laziness).

This process was largely terrifying, and intensely stressful.  At the conscious level, I propelled myself forward by appealing to the moral imperative associated with being a Christ-like force for good in the world.  (Again, this type of motivation, and the “battle” imagery that accompanied it, was often made explicit at church.)  But at the unconscious level, I was very afraid that if I lost my religion then I would lose completely the channel through which I experienced happiness.

This struggle for my own soul left me stressed, dour, and judgmental.

Eventually, it all fell apart.  Somewhere during my first years of college I became unable (or, as I would reproachfully describe myself, lazily unwilling) to maintain real religious belief.  My level of certainty about religious ideas shrank and shrank over time, until one day I found that I had nothing at all – no strong belief, and essentially no religious feeling.

That realization was a dark moment of real despair, and I became terribly depressed.  I felt hopeless, and isolated from my own family.  I felt as if I had a shameful secret that I had to keep from them.  Perhaps even worse, I felt isolated from a part of myself that I had loved dearly and that had made me happy.  I felt like a person whom I could no longer like or respect.


Eventually, though, after at least a year in this kind of state, a truly wonderful thing happened.  I remember very clearly: I was in the middle of a long, solo road trip, driving through the staggering mountains of Colorado along I-70 west of Denver.  The radio didn’t work in my car, so I had nothing to do for entertainment but to provoke myself to internal argument.  Suddenly, during the course of one of these arguments, and among the mountains of Colorado beneath a beautiful bright sky, I had an epiphany.

All my life my religion had exhorted me to seek truth, which was to be obtained through that painful “good fight” process, under the premise that real knowledge of deep truths would enable me to be a happy and a good person.  But what I realized, in that flash under the Colorado sky, was that I could be perfectly happy with no knowledge about any deep truths at all.

What a beautiful moment that was — it was one of the happiest instants of my life. (And, ironically, it felt exactly the way that I had always been taught religious revelation would feel.)

What I realized in that moment is that my happiness in life did not come from, or rely upon, any religious idea.  It was not dependent on any particular idea of God or any specific narrative about my place in the universe.  My feelings of being fundamentally safe and loved in the world were gifts that had been given to me by my family; even if I interpreted them in a religious way, they were never inherently religious feelings.  Today I am just as capable as ever of feeling like a good and a happy person, even though I lack any absolute standard for “good” or any plausible ultimate source from whom that happiness flows.  My religion had always taught me to credit its god for those feelings, but I realize now that they exist whether I believe in Him or not.

I can’t tell you how beautifully liberating I find that realization to be.


I should make clear, in closing, that I am not trying to make an anti-religious statement.  In general, I don’t feel qualified to make any large-scale comments about the “value” of religion.  Whether religion (in some particular form) is more “moral” than atheism, whether religion in general does more good than harm in the world, or whether any particular person will profit from adopting a particular set of religious beliefs – these are all hard questions that are best left to someone who has given them much more thought than I have.

But the point of this post is that I do have one comment that I think is worth making, directed to anyone who may find themselves unable to hold on to their religious feelings:

Though it may seem impossible, there is still wonder on the other side of belief.  Even without a God around whom to focus your wonder.  There is still gratitude, even without a God to whom you can direct it.  There is still love and kindness, and the world can still be deeply moving.  It is still possible to be happy and to feel like a good person, even without an ultimate arbiter to give your life meaning or to tell you what “good” means.

To those of you who have never constructed your lives around religion, these statements may sound obvious to the point of being asinine.  But to me they were among the most valuable and difficult ideas that I ever learned.




I was inspired to write this little essay after reading this wonderful blog post, which includes the lines:

If one thinks of creationism as a sequoia with God in the towering trunk and the various aspects of the natural world as branches going outward at every height, then [in contrast] science in general and evolution in particular is a web of unimaginable richness, with connections in every conceivable direction, splitting and rejoining and looping in almost infinite variety.  The strength of the sequoia is its enormous trunk, a monolithic invulnerability; that of the web is its deep interconnectedness, so that even if a few of its strands are found to be flawed (and they surely are, from time to time), the overall structure retains its integrity with room to spare.


How big is an electron?

April 11, 2015

In the year 1908, while the great minds of natural philosophy were puzzling over how to understand the structure of atoms in terms of the recently-discovered electron, Vladimir Lenin (yes, that Vladimir Lenin) declared that:

The electron is as inexhaustible as the atom

Lenin was making a philosophical point, but you can view his declaration as something like a scientific prediction.  Just as scientists of his era were discovering that atoms were made from smaller constituents like electrons, so too, he said, would we eventually find that the electron was made from even smaller components.  And on the cycle would go, with each new piece of matter being an “inexhaustible” source of new discovery.

It was a pretty reasonable expectation at the time, given the relentless progress of reductionist science.  But so far, despite more than a century of work by scientists (including many presumably extra-motivated Soviets), we still have no indication that the electron is made from anything else, or that it has any internal structure.  Students of physics in the 21st century are taught that the electron is a point in space with certain properties — mass, charge, and spin — but that it cannot be thought of as a spinning sphere or anything else that has a size and shape.  The electron looks pretty exhausted.

In this post, though, I want to take Lenin’s side, and ask the heretical question: “If the electron actually is a real, physical object with a finite size, then how big is it?”  Not surprisingly, there is no clear answer to this question, but some of the candidate answers turn out to be pretty interesting.

Option #1: The Bohr radius

If you’re a physicist, and someone asks you “how big is an electron?”, then the most canonically correct thing to say is “There is no concept of an electron size other than the spatial extent of the electron wavefunction.  The size of the electron wavefunction is the electron size.”

This opinion essentially amounts to telling someone that they need to stop trying to think about a quantum electron as a physical object that you could hold in your hand like a baseball if only it was enlarged 10^{10} times.  People who profess this opinion are presumably also the ones who get annoyed by the popular declaration that “matter is 99.999% empty space.”

So, if such a person were pressed to give a numerical value for the “size of the electron”, they might say something like “Well, most electrons in the universe are bound into atoms.  So the typical ‘size’ of an electron is about the same as the typical size of an atom.”

As has been discussed here before, the typical size of an atom is given by the Bohr radius:

a_B \sim 4 \pi \epsilon_0 \hbar^2/me^2.

[Here, \hbar is the reduced Planck constant, \epsilon_0 is the vacuum permittivity, m is the electron mass, and e is the electron charge.]

Just to remind you how this length scale appears, you can figure out the rough size of an atom by remembering that an electron bound to an atom exists in a balance between two competing energies.  First, there is the attractive potential energy U \sim -e^2/4 \pi \epsilon_0 r that pulls the electron toward the nucleus, and that gets stronger as the electron size r (which is about the same as the average distance between the electron and the nucleus) gets smaller.  Second, there is the kinetic energy of the electron KE \sim p^2/2m.  The typical momentum of the electron gets larger as r gets smaller, as dictated by the Heisenberg uncertainty principle, p \sim \hbar/r, which means that the electron kinetic energy gets larger as the atom size r shrinks: KE \sim \hbar^2/mr^2.

In the balanced state that is an atom, PE and KE are about the same, which means that e^2/4 \pi \epsilon_0 r \sim \hbar^2/mr^2.  Solve for r, and you get that the electron size is about equal to the Bohr radius, which numerically works out to about 1 Angstrom, or  10^{-10} m.
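If you want to check the arithmetic, here is a quick evaluation (my own sketch, with standard values of the constants); it gives about half an angstrom, consistent with the ~1 Angstrom scale quoted above.

```python
# Bohr radius: a_B = 4*pi*eps0*hbar^2 / (m*e^2)
from math import pi

HBAR = 1.0546e-34   # reduced Planck constant, J*s
EPS0 = 8.854e-12    # vacuum permittivity, F/m
M_E  = 9.109e-31    # electron mass, kg
E    = 1.602e-19    # electron charge, C

print(4 * pi * EPS0 * HBAR**2 / (M_E * E**2))   # ~5.3e-11 m
```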


So now we have our first candidate answer to the question “how big is an electron?”.  If someone asks you this question, then you can sort of roll your eyes and then say “usually, about 10^{-10} meters”.

But let’s keep going, and see if there are any other concepts of an “electron size”.  Perhaps you don’t really like the answer that “the size of the electron wavefunction is the electron size”, because the electron wavefunction can be different in different situations, which means that this definition of “size” isn’t really an immutable property of all electrons.  Maybe you prefer to think about the electron as a tiny little ball, and the wavefunction as a sort of probability distribution that describes where the ball tends to be from time to time.  If you insist on this point of view, then what can you say about the size of the ball?


Option #2: The classical electron radius

If you’re going to think about the electron as a tiny charged ball, then there is one thing that should bother you: that ball will have a lot of energy.

To see why this is true, imagine the hypothetical process of assembling your tiny charged ball from a bunch of smaller pieces, each with a fraction of the total charge.  Since the pieces have an electric repulsion from each other, and since you are bringing them very close to each other, the ball will be very hard to put together.


In fact, the energy required to “build” the electron is E \sim e^2/4 \pi \epsilon_0 r, where r is the electron size.  So the smaller the electron is, the harder it is to build, and the more energy gets stored in the form of electric repulsion between all the pieces.  The large self-energy of the  electron also means that if the electron were to get “broken”, then all the tiny pieces would fly apart from each other, and release a tremendous amount of energy.

But, in fact, by the end of the first decade of the 20th century, people already knew that a single electron stored a lot of energy.  Einstein’s famous equation, E = mc^2, suggested that even a very small piece of matter was really an intensely concentrated form of energy.  One electron represents about 10^{-13} Joules (500,000 electron volts), which means that it only takes a few cups’ worth of electrons to have enough rest mass energy to equal the energy of an atomic bomb.  (Of course, if you ever actually removed all the electrons from a few cups’ worth of matter, you would create something much worse than an atomic bomb.)

So one of the early attempts to estimate a size for the electron was to equate these two ideas.  Maybe, the logic goes, the large “mass energy” of an electron is actually the same as the energy stored in the electric repulsion of its constituent pieces.  Equating those two expressions for the energy, E = e^2/4\pi \epsilon_0 r = mc^2 gives an estimate for r that is called the “classical radius of the electron”:

r_c = e^2/(4 \pi \epsilon_0 m c^2) \sim 10^{-15} m

So that’s our first estimate for the intrinsic electron size: about  10^{-15} m.
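Again, the arithmetic is quick to check (my own sketch, standard constants); along the way it reproduces the ~10^{-13} J rest mass energy quoted above.

```python
# Classical electron radius: set e^2/(4*pi*eps0*r) equal to m*c^2 and solve for r.
from math import pi

EPS0 = 8.854e-12    # vacuum permittivity, F/m
M_E  = 9.109e-31    # electron mass, kg
E    = 1.602e-19    # electron charge, C
C    = 2.998e8      # speed of light, m/s

rest_energy = M_E * C**2
print(rest_energy)                              # ~8.2e-14 J, about 511,000 eV
print(E**2 / (4 * pi * EPS0 * rest_energy))     # ~2.8e-15 m
```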

This classical radius, by the way, works out to be about the same size as the typical size of an atomic nucleus.  And this is, perhaps, where we get the typical middle school picture of the atom: if you think about the electron using its classical radius, then the picture you get is of a tiny negatively-charged speck orbiting another equally tiny positively-charged speck, with about a factor 10^5 difference between the speck sizes and the orbit sizes.

Option #3: The Compton Wavelength

This picture of the electron as a little charged ball with size 10^{-15} meters seemed relatively okay up until the 1920s, when it was discovered that the electron had more properties than just its charge and its mass.  The electron also has what we now call a “spin”.

Basically, the concept of spin comes down to the fact that the electron is magnetic: it has a north pole and a south pole, and it creates a magnetic field around itself that is as large as about 1 Tesla at a distance of 1 Angstrom away, and that decays in strength as 1/(\text{distance})^3.  Another way of saying this same thing is that the electron has a “magnetic moment” whose value is equal to the “Bohr magneton”, \mu_B = e \hbar/2m.  (As it turns out, this magnetic moment is essentially the same as the magnetic moment created by the orbit of an electron around a nucleus.)

At first sight, this magnetic-ness of the electron doesn’t seem like a problem.  A charged sphere can create a magnetic field around itself, as long as the sphere is spinning.  Remember, for example, that electrical currents create magnetic fields:

and that current is just moving electric charge.  So a spinning charged sphere also creates magnetic fields by virtue of the movement of its charge-containing surface:

This sort of image is, in fact, why the electron’s magnetic-ness was first called “spin”.  People imagined that the little electron was actually spinning.

A problem arises with this picture pretty quickly, though.  Namely, if the electron is very small, then in order for it to create a noticeable magnetic field it has to be spinning really quickly.  In particular, the magnetic moment of a spinning sphere with charge e is something like \mu \sim e \omega r^2, where \omega is the sphere’s rotation frequency.  If the sphere is spinning very quickly, then \omega is large, and the equator of the sphere is moving at a very fast speed v \sim \omega r.

We know, however, that nothing can move faster than the speed of light (including, presumably, the waistline of an electron).  This puts a limit on how fast the electron can spin, which translates to a limit on how small the radius of the electron can be if we have any hope of explaining the observed electron magnetic field in terms of a physical rotation.

In particular, if you set \mu = e \omega r^2 = e v r = \mu_B, and require that v < c, then you get that the size of the electron r must be bigger than

\lambda_c \sim \hbar/(m c) \approx 10^{-12} m.

In other words, if you hope to explain the electron magnetic field as coming from an actual spinning motion, then the size of the electron needs to be at least 10^{-12} m.  This is about a thousand times larger than the classical electron radius.

Coincidentally, this value \lambda_c is called the “Compton wavelength”, and it has another important meaning.  The Compton wavelength is more or less the smallest distance to which you can confine an electron.  If you try to squeeze the electron into an even smaller distance, then its momentum will become so large (via the uncertainty principle) that its kinetic energy will be larger than mc^2.  In this case, there will be enough energy to create (from the vacuum) a new electron-positron pair, and the newly-created positron can just annihilate the trapped electron while the newly-created electron flies away.
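Numerically (my own sketch, standard constants), this length works out as follows; the value printed is the reduced Compton wavelength \hbar/mc, which is the same order of magnitude as the 10^{-12} m quoted above.

```python
# Reduced Compton wavelength of the electron: hbar / (m*c)
HBAR = 1.0546e-34   # reduced Planck constant, J*s
M_E  = 9.109e-31    # electron mass, kg
C    = 2.998e8      # speed of light, m/s

print(HBAR / (M_E * C))   # ~3.9e-13 m
```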

Option #4: the empiricist’s view

At this point you may start to feel like none of the above estimates for the electron size seems meaningful.  (Although don’t be too harsh in discarding them: the concepts of the Bohr radius, the classical electron radius, and the Compton wavelength appear over and over again in physics.)

So let’s consider a purely empirical view: what do experiments tell us about how big an electron is?

I know of two types of experiments that qualify as saying something about the electron size.  The first is a measurement of the electric dipole moment of the electron.  The idea is that, perhaps, the electron does not always have its charge arranged in a perfectly spherical way.  Maybe its “shape” can be slightly asymmetric, like this:


[Figure: an asymmetric electron]

In this case the electron would want to align its “head” with an external electric field.  The strength of the electron asymmetry is quantified by the electron dipole moment d, which is defined as something like d \sim (charge in the bottom half of the electron - charge in the top half of the electron) \times (electron size).

So far, however, experiments looking for a finite value of the electron dipole moment have not found one, and they place an upper limit of d < 10^{-30} \text{ electron charges} \times \text{meters}.  This means that either the electron is an extremely symmetric object (for example, a very perfect sphere) or its size is smaller than about 10^{-30} meters.


Another set of experiments looks for corrections to the way that the electron interacts with the vacuum.  At a very conceptual level, an electron can absorb and re-emit photons from the vacuum, and this slightly alters its magnetic moment in a way that is mind-bogglingly well described by theory.  If the electron had a finite size, then its interaction with the vacuum would be altered a little bit, and the magnetic moment would shift very slightly relative to the prediction for size-less electrons.

So far, however, experiments have seen no evidence of such an effect.  The accuracy of the experimental observations places an apparent upper limit on the electron size of about 10^{-18} m.


Option #5: The Planck length

At this point, seeing the apparent failure of the classical description to produce a coherent picture, and seeing the experimental appearance of such spectacularly small numbers as 10^{-18} and 10^{-30}, you may be willing to abandon Lenin’s hope of an “inexhaustible” electron and simply declare that the electron really is size-less.  From now on, you may resolve, when you draw an electron, you’ll draw it as a single pixel.  But only because you can’t draw it as a half-pixel.

But this brings up the last, and strangest, question: what is the smallest conceivable length that anything can have?  In other words, if the universe has a fundamental “pixel size”, then what is it?

Of course, I don’t know the answer to this question.  Whether there really is a “smallest possible length” is interesting to consider, but at the moment it can only be addressed with speculation.  Nonetheless, we do know that there is a length scale below which our most basic theories of the universe stop making sense.  This is called the “Planck length”, \ell_p.  At distances smaller than the Planck length, we are unable to describe even what empty space is like.

As I understand it, the Planck length problem can be viewed like this.  Our modern understanding of the vacuum (i.e., of empty space) is that it contains one photon mode for every possible photon wavelength.  This means that empty space is essentially full of photons of all conceivable energies and with all conceivable wavelengths.

However, photons with very small wavelength \ell have very large energy, E \sim \hbar c/\ell.  It should therefore be possible to convert that energy, if only for a brief instant, into a large mass M = E/c^2. (After a short time t \sim \hbar/E, the mass will have to disappear again and give its energy back to the vacuum).  If that mass is large enough, though, it can create a small black hole.  A black hole gobbles up all other photons around it if they have a wavelength smaller than the black hole’s Schwarzschild radius, R_s \sim G M/c^2.  (Here, G is Newton’s gravitational constant).  This starts to be a real problem when R_s gets as large as the wavelength of the photon that first created the black hole, because then the black hole can consume the original photon and all other photons with smaller wavelength.

Do you see the problem with that (semi-contorted) logical sequence?  If very short length scales exist, then very high energy photons exist.  But if high energy photons exist, then they should be able to create, for a brief moment, very high masses.  Those high masses will create black holes.  And those black holes will eat up all the high energy photons.

It’s sort of a logical inconsistency.

As mentioned above, this problem first arises when the Schwarzschild radius R_s becomes equal to the wavelength \ell of the photon that created it.  If you work through the chain of algebra above, this will bring you to a length scale of

\ell_p = \sqrt{\hbar G/c^3} \approx 10^{-35} m.
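If you would like to verify that the chain of estimates really does land there, here is a tiny numeric check (a Python sketch using scipy.constants, with all order-one factors dropped):

```python
from scipy.constants import hbar, G, c

# A photon confined to a length l has energy E ~ hbar*c/l, hence a transient
# mass M ~ E/c^2, hence a Schwarzschild radius R_s ~ G*M/c^2 = G*hbar/(l*c^3).
# The trouble described above starts when R_s ~ l, i.e. when l^2 ~ hbar*G/c^3.
l_p = (hbar * G / c**3) ** 0.5
print(f"Planck length ~ {l_p:.1e} m")   # about 1.6e-35 m
```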

At any length scale smaller than \ell_p, we don’t know what’s going on.  At such small length scales either quantum theory should be different from what we know, or photons should be different, or gravity should be different.

Or maybe at such small length scales there is no good notion of continuous space at all.  Such thinking, as I understand it, gives rise to lots of picturesque ideas about “quantum foam”, and is the playground of (as yet mostly non-existent) theories of quantum gravity.

This picture is supposed to illustrate the idea that space might be smooth and continuous until you try to look at it at the scale of the Planck length.


What shall we say about Lenin?

So in the end, was Lenin right about the electron being “inexhaustible”?  For the moment, it looks like the answer is no, in the sense that there isn’t really any serious candidate for the intrinsic size of the electron.  In that sense, we could all have saved some time by just accepting the standard dogma that an electron is a sizeless point in space.

But I personally tend to resist dogmatism in all its forms, even the kind that is almost certainly correct.  Because sometimes those heretical questions lead you through all sorts of interesting ideas, ranging from 10^{-10} meters down to 10^{-35} meters.

And if it becomes clear some day that Lenin’s statement really only works on the Planck scale, then we can probably say that his prediction came several centuries before its time.

Pedestrians as interacting particles

February 8, 2015

I generally don’t like to use this blog to discuss my own scientific projects.  (Physics as a whole is much more interesting than my own meager contributions to it.)  But there is a recent project I was involved in that is getting a decent amount of attention from the popular press.  So I thought there might be some value in giving a description of it from the horse’s mouth (or, at least, the mouth of one of the horses).


The basic question behind this project is this:
Is it possible to model a crowd of people as a collection of interacting particles?
If so, what kind of “particles” are they?

Now, this question may seem silly to you.  Obviously, a human being isn’t a “particle” like an electron or a billiard ball.  Human motion (in most cases) arises from the workings of the human brain, and not from random physical forces.  So what would motivate someone to talk about humans as interacting particles?

The answer, at least for me personally, is that the motivation comes from watching videos like this one:


Basically, when you watch the movement of crowds at a large enough scale, the motion starts to look beautiful and familiar.  Perhaps something like the flow of a liquid:


Or maybe a granular fluid:


These apparent similarities are pretty exciting to a person like me (and to many others who are perhaps not a lot like me), in part because they imply the possibility of bringing old knowledge to a new frontier.  We have centuries of knowledge that was built to describe the physics of fluids and many-body systems, and now there is the hope that we can adapt it to say something useful about human crowds.

But all that those videos really demonstrate is that crowds and particle systems are visually similar, and so the question remains: is there really a good analogy between particle systems and human crowds?  If so, what kind of “particles” are we?


When you come down to it, the defining feature of a particle system is the “interaction law”, which is the equation that relates the energy E of interaction between two particles to their relative positions.  For example, for two electrons the interaction law is the Coulomb law E \propto e^2/r, while for two neutral atoms it is something like the Lennard-Jones potential, E \propto C_1/r^{12} - C_2/r^6.
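For concreteness, here is what those two example interaction laws look like as code (a sketch in Python; the prefactors are arbitrary placeholder values, not physical constants):

```python
def coulomb_energy(r, q=1.0):
    # Two like charges q separated by a distance r: E ∝ q^2 / r (arbitrary units).
    return q ** 2 / r

def lennard_jones_energy(r, c1=1.0, c2=1.0):
    # Two neutral atoms: E = c1/r^12 - c2/r^6, with constants c1 and c2.
    return c1 / r ** 12 - c2 / r ** 6
```

Notice that both laws depend only on the separation r at a given instant, which, as we'll see below, is exactly the property that fails for pedestrians.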

So, in some sense, the question of “how do we describe human crowds as particle systems?” comes down to the question “what is the interaction law between pedestrians?”.


In fact, scientists have been interested in that question for a few decades now.  And, generally, the way they have approached it is to make some hypothesis about how the interaction law E(r) should look, and then make a big computer simulation of pedestrians following that law and see if the simulation behaves correctly.

This approach has gotten us pretty far — it has helped to save lives in Mecca and given us fantastic CGI crowd animations in movies and video games.  But it has also given rise to a sort of messy situation, scientifically, in which there are many competing models for crowds and their interaction law, with no generally accepted way of adjudicating between them.

What my colleagues and I eventually figured out is that we could take a different approach to this problem.  Instead of guessing what we thought the interaction law should be and then checking how well our guess worked, we decided to look first at real data and see what the data was telling us.  If there is, indeed, a universally correct equation for the interaction law between people, then it should be encoded in the data.

I’ll spare you the technical details of our data-digging, but the basic idea is like this.  In many-body physics, there is a general rule (the Boltzmann law) that relates energy to probability.  This law says, in short, that in a large system, any configuration of particles having a large energy is exponentially rare: its probability is proportional to \exp(-E\times \text{const.}).  So, for a crowd of people, looking at the relative abundance of different configurations of people tells you something about how much “energy” is associated with that configuration.  This allows you to infer the correct quantitative form of the interaction energy, by correlating the properties of different configurations with their relative abundance in the dataset.
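In code, the inference step looks very roughly like the sketch below.  This is only an illustration of the Boltzmann-law logic, not our actual analysis pipeline; in particular, the choice of a “non-interacting” baseline (pairs of trajectories that could not have influenced each other) is an assumption I'm making for the sake of the example.

```python
import numpy as np

def energy_from_abundance(observed, baseline, bins=50):
    """Estimate an interaction energy, up to an additive constant and an
    overall 'temperature' factor, from how often configurations occur.

    observed : values of some configuration variable (say, the projected
               time to collision) measured in real crowd data
    baseline : the same variable for a non-interacting reference, e.g.
               pairs of pedestrians recorded at different times, so that
               they could not have influenced each other
    """
    edges = np.histogram_bin_edges(np.concatenate([observed, baseline]), bins=bins)
    p_obs, _ = np.histogram(observed, bins=edges, density=True)
    p_ref, _ = np.histogram(baseline, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ok = (p_obs > 0) & (p_ref > 0)
    # Boltzmann law: p(x) ∝ exp(-const * E(x)), so E(x) ∝ -log[ p_obs(x) / p_ref(x) ]
    return centers[ok], -np.log(p_obs[ok] / p_ref[ok])
```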

So what we did is to first amass a large amount of crowd data.  This data was generally in the form of digitized video footage of people walking around crowded areas.  For example, we had data from students milling around on college campuses, shoppers walking around shopping streets, and even a few controlled experiments where people were recorded walking out of a crowded room.


An example of some of the data we looked at, showing different pedestrian “tracks”. The color corresponds to the time-averaged pedestrian density.


When we finished the analysis, there were two results that jumped out from the data very clearly: one obvious, and one surprising.

The first result is that the interaction law between pedestrians is not a function of their relative distance r alone.

In hindsight, this result is pretty obvious.  For example, two people walking headfirst into each other will feel a large “force” that compels them to move out of each other’s way.  On the other hand, two people walking side-by-side may feel no such force, even if they are relatively close to each other.

The implication of this result, though, is important.  It means that human pedestrians are very different from other, non-sentient “particles.”  An electron, for example, feels a force that is based on the physical proximity of other electric charges to itself at that particular instant.  Humans, on the other hand, respond to the world not as it is currently, but as they anticipate it to be in the near future.


Given the first, obvious result — that humans are not particles — the second result is much more surprising.  What we found is that there is, in fact, a very consistent interaction law between pedestrians in a crowd.  And it looks like this:

(\text{interaction energy}) \propto 1/(\text{projected time to collision})^2.

In other words, as pedestrians navigate around each other, they base their movements not on the physical distance between them, but on the extrapolated time \tau to an upcoming collision.  What’s more, their interaction energy takes the very simple form E \propto 1/\tau^2.  That this interaction law could have such a remarkably simple form was quite surprising, and that it holds across a whole range of different environments, densities, and cultures was even more surprising.
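As a concrete illustration, here is how one might compute the projected time to collision for a pair of pedestrians and the corresponding 1/\tau^2 energy (a sketch; the disc radius and the prefactor k are placeholder values of mine, not fitted parameters from the paper):

```python
import numpy as np

def time_to_collision(x1, v1, x2, v2, radius=0.3):
    """Time until two pedestrians, modeled as discs of the given radius (in meters),
    would touch if both kept their current velocities; np.inf if they never would."""
    d = np.asarray(x2, float) - np.asarray(x1, float)   # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    a, b, c = v @ v, d @ v, d @ d - (2 * radius) ** 2
    if a == 0:
        return np.inf
    disc = b * b - a * c
    if disc < 0:
        return np.inf          # straight-line paths never bring them into contact
    tau = (-b - np.sqrt(disc)) / a
    return tau if tau > 0 else np.inf

def interaction_energy(tau, k=1.0):
    """E ∝ 1/tau^2, with an arbitrary prefactor k."""
    return k / tau ** 2 if np.isfinite(tau) else 0.0

# Two people walking head-on toward each other, 4 m apart at 1 m/s each:
tau = time_to_collision([0, 0], [1, 0], [4, 0], [-1, 0])
print(tau, interaction_energy(tau))    # collision projected in ~1.7 s
```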

The interaction energy E as a function of the projected time to collision \tau.

It’s remarkable that something so mathematically simple could describe what is essentially a psychological phenomenon.

That, in a nutshell, was the finding of our paper.  My colleagues went on to show how this simple rule could immediately be used to make fast and accurate simulations of pedestrian crowds.  You can check out some of their simulation videos here, but I’ll also embed one of my favorite ones:

Here, two groups of people are asked to walk perpendicularly past each other.  They manage to resolve their imminent collisions by spontaneously forming diagonal “stripes” that cut through each other.


What’s nice about this result is that it gives us, with some confidence, the first major building block needed for making a real theory of human crowds.  Now that we know the nature of the interaction between two individuals, we can start putting together a kinetic theory of how crowds move in the aggregate.  This has all sorts of practical importance, in terms of understanding and predicting crowd disasters before they happen, but it also opens up a variety of fun problems to the language of condensed matter physics.  Maybe, in addition to describing the bulk “flow” of crowds, we can talk about the emergent features (“quasiparticles“) of crowds, like lane formation and mosh pit vortices.

It will be fun to see how this field develops in the near future.



Of course, most of the credit for this work belongs to my co-authors, Ioannis Karamouzas and Stephen Guy, at the Applied Motion Lab at the University of Minnesota.  They did the hard work of suggesting the problem, doing (most of) the data analysis, and writing the computer simulations.  My role was mostly to insist on making the problem “sound more like physics.”


