
Pedestrians as interacting particles

February 8, 2015

I generally don’t like to use this blog to discuss my own scientific projects.  (Physics as a whole is much more interesting than my own meager contributions to it.)  But there is a recent project I was involved in that is getting a decent amount of attention from the popular press.  So I thought there might be some value in giving a description of it from the horse’s mouth (or, at least, the mouth of one of the horses).

 

The basic question behind this project is this:
Is it possible to model a crowd of people as a collection of interacting particles?
If so, what kind of “particles” are they?

Now, this question may seem silly to you.  Obviously, a human being isn’t a “particle” like an electron or a billiard ball.  Human motion (in most cases) arises from the workings of the human brain, and not from random physical forces.  So what would motivate someone to talk about humans as interacting particles?

The answer, at least for me personally, is that the motivation comes from watching videos like this one:

 

Basically, when you watch the movement of crowds at a large enough scale, the motion starts to look beautiful and familiar.  Perhaps something like the flow of a liquid:

 

Or maybe a granular fluid:

 

These apparent similarities are pretty exciting to a person like me (and to many others who are perhaps not a lot like me), in part because they imply the possibility of bringing old knowledge to a new frontier.  We have centuries of knowledge that was built to describe the physics of fluids and many-body systems, and now there is the hope that we can adapt it to say something useful about human crowds.

But all that those videos really demonstrate is that crowds and particle systems are visually similar, and so the question remains: is there really a good analogy between particle systems and human crowds?  If so, what kind of “particles” are we?

 

When you come down to it, the defining feature of a particle system is the “interaction law”, which is the equation that relates the energy E of interaction between two particles to their relative positions.  For example, for two electrons the interaction law is the Coulomb law E \propto e^2/r, while for two neutral atoms it is something like the Lennard-Jones potential, E \propto C_1/r^{12} - C_2/r^6.

So, in some sense, the question of “how do we describe human crowds as particle systems?” comes down to the question “what is the interaction law between pedestrians?”.

 

In fact, scientists have been interested in that question for a few decades now.  And, generally, the way they have approached it is to make some hypothesis about how the interaction law E(r) should look, and then make a big computer simulation of pedestrians following that law and see if the simulation behaves correctly.

This approach has gotten us pretty far — it has helped to save lives in Mecca and given us fantastic CGI crowd animations in movies and video games.  But it has also given rise to a sort of messy situation, scientifically, in which there are many competing models for crowds and their interaction law, with no generally accepted way of adjudicating between them.

What my colleagues and I eventually figured out is that we could take a different approach to this problem.  Instead of guessing what we thought the interaction law should be and then checking how well our guess worked, we decided to look first at real data and see what the data was telling us.  If there is, indeed, a universally correct equation for the interaction law between people, then it should be encoded in the data.

I’ll spare you the technical details of our data-digging, but the basic idea is like this.  In many-body physics, there is a general rule (the Boltzmann law) that relates energy to probability.  This law says, in short, that in a large system, any configuration of particles having a large energy is exponentially rare: its probability is proportional to \exp(-E\times \text{const.}).  So, for a crowd of people, looking at the relative abundance of different configurations of people tells you something about how much “energy” is associated with that configuration.  This allows you to infer the correct quantitative form of the interaction energy, by correlating the properties of different configurations with their relative abundance in the dataset.
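To make that recipe slightly more concrete, here is a minimal sketch of the inversion step (my own illustration, with my own function and variable names — this is not the paper’s actual analysis pipeline): compare the observed distribution of some configuration variable against a reference distribution for non-interacting pedestrians, and read the energy off the log of their ratio.

```python
import numpy as np

def inferred_energy(observed, reference, bins=50):
    """Infer an interaction energy, up to an overall constant, from the
    Boltzmann law P(config) ~ exp(-E * const):  E(x) is proportional to
    -log[ P_observed(x) / P_reference(x) ].  `observed` holds values of
    some configuration variable measured in the real crowd; `reference`
    holds the same variable for a hypothetical non-interacting crowd
    (e.g. from randomly re-paired trajectories)."""
    edges = np.histogram_bin_edges(np.concatenate([observed, reference]), bins=bins)
    p_obs, _ = np.histogram(observed, bins=edges, density=True)
    p_ref, _ = np.histogram(reference, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ok = (p_obs > 0) & (p_ref > 0)          # skip empty bins to avoid log(0)
    return centers[ok], -np.log(p_obs[ok] / p_ref[ok])
```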

So what we did is to first amass a large amount of crowd data.  This data was generally in the form of digitized video footage of people walking around crowded areas.  For example, we had data from students milling around on college campuses, shoppers walking around shopping streets, and even a few controlled experiments where people were recorded walking out of a crowded room.

[Image: pedestrian trajectories]

An example of some of the data we looked at, showing different pedestrian “tracks”. The color corresponds to the time-averaged pedestrian density.

 

When we finished the analysis, there were two results that jumped out from the data very clearly: one obvious, and one surprising.

The first result is that the interaction law between pedestrians is not a function of their relative distance r alone.

In hindsight, this result is pretty obvious.  For example, two people walking headfirst into each other will feel a large “force” that compels them to move out of each other’s way.  On the other hand, two people walking side-by-side may feel no such force, even if they are relatively close to each other.

The implication of this result, though, is important.  It means that human pedestrians are very different from other, non-sentient “particles.”  An electron, for example, feels a force that is based on the physical proximity of other electric charges to itself at that particular instant.  Humans, on the other hand, respond to the world not as it is currently, but as they anticipate it to be in the near future.

 

Given the first, obvious result — that humans are not particles — the second result is much more surprising.  What we found is that there is, in fact, a very consistent interaction law between pedestrians in a crowd.  And it looks like this:

(\text{interaction energy}) \propto 1/(\text{projected time to collision})^2.

In other words, as pedestrians navigate around each other, they base their movements not on the physical distance between each other, but on the extrapolated time \tau to an upcoming collision.  What’s more, the interaction energy has the very simple form E \propto 1/\tau^2.  That this interaction law could have such a remarkably simple form was quite surprising, and that it holds across a whole range of different environments, densities, and cultures was even more surprising.
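To make the quantity \tau concrete, here’s a sketch of how you might compute it for one pair of pedestrians, under a constant-velocity, disk-shaped-walker assumption (the paper’s definition is in the same spirit, but the radius and the prefactor k below are illustrative parameters of mine, not values from the paper):

```python
import numpy as np

def time_to_collision(x1, v1, x2, v2, radius=0.3):
    """Projected time until two walkers, extrapolated along straight lines
    at their current velocities, would first come within touching distance.
    Each walker is modeled as a disk; the 0.3 m radius is an illustrative
    guess.  Returns inf if the extrapolated paths never collide."""
    dx = np.asarray(x2, float) - np.asarray(x1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    a = dv @ dv
    b = 2.0 * (dx @ dv)
    c = dx @ dx - (2.0 * radius) ** 2
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return np.inf            # no relative motion, or paths that miss
    tau = (-b - np.sqrt(disc)) / (2.0 * a)
    return tau if tau > 0.0 else np.inf

def interaction_energy(tau, k=1.0):
    """The measured law: E proportional to 1/tau^2 (k sets arbitrary units)."""
    return k / tau ** 2

# Two people walking head-on, 5 m apart at 1 m/s each:
tau = time_to_collision([0, 0], [1, 0], [5, 0], [-1, 0])
print(tau, interaction_energy(tau))      # tau of roughly 2.2 seconds
```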

[Image: the measured interaction energy E plotted against the time-to-collision \tau]

It’s remarkable that something so mathematically simple could describe what is essentially a psychological phenomenon.

That, in a nutshell, was the finding of our paper.  My colleagues went on to show how this simple rule could immediately be used to make fast and accurate simulations of pedestrian crowds.  You can check out some of their simulation videos here, but I’ll also embed one of my favorite ones:

Here, two groups of people are asked to walk perpendicularly past each other.  They manage to resolve their imminent collisions by spontaneously forming diagonal “stripes” that cut through each other.

 

What’s nice about this result is that it gives us, with some confidence, the first major building block needed for making a real theory of human crowds.  Now that we know the nature of the interaction between two individuals, we can start putting together a kinematic theory of how crowds move in the aggregate.  This has all sorts of practical importance, in terms of understanding and predicting crowd disasters before they happen, but it also opens up a variety of fun problems to the language of condensed matter physics.  Maybe, in addition to describing the bulk “flow” of crowds, we can talk about the emergent features (“quasiparticles”) of crowds, like lane formation and mosh pit vortices.

It will be fun to see how this field develops in the near future.

 

Credits:

Of course, most of the credit for this work belongs to my co-authors, Ioannis Karamouzas and Stephen Guy, at the Applied Motion Lab at the University of Minnesota.  They did the hard work of suggesting the problem, doing (most of) the data analysis, and writing the computer simulations.  My role was mostly to insist on making the problem “sound more like physics.”

 

How strong would a magnetic field have to be to kill you?

January 12, 2015

There’s a great joke in Futurama, the cartoon comedy show, about a horror movie for robots.  In the movie, a planet of robots is terrorized by a giant “non-metallic being” (a monsterified human).  The human is finally defeated by a makeshift spear, which prompts the robot general to say:

“Funny, isn’t it?  The human was impervious to our most powerful magnetic fields, yet in the end he succumbed to a harmless sharpened stick.”

The joke, of course, is that the human body might seem much more fragile than a metallic machine, but to a robot our ability to withstand enormous magnetic fields would be like invincibility.

But this got me thinking: how strong would a magnetic field have to be before it killed a human?

 


Unlike a computer hard drive, the human body doesn’t really make use of any magnetic states — there is nowhere in the body where important information is stored as a static magnetization.  This means that there is no risk that an external magnetic field could wipe out important information, the way that it would for, say, a credit card or a hard drive.  So, for example, it’s perfectly safe for a human (with no metal in their body) to have an MRI scan, during which the magnetic fields reach several Tesla, which is about 10^5 times stronger than the normal magnetic fields produced by the Earth.

 

A computer hard drive stores information in a sequence of magnetically aligned segments.

 

But even without any magnetic information to erase, a strong enough magnetic field must have some effect.  Generally speaking, magnetic fields create forces that push on moving charges.  And the body has plenty of moving charges inside it: most notably, the electrons that orbit around atomic nuclei.

As I’ll show below, a large enough magnetic field would push strongly enough on these orbiting electrons to completely change the shape of atoms, and this would ruin the chemical bonds that give our body its function and its structural integrity.


What atoms look like

Before I continue, let me briefly recap the cartoon picture of the structure of the atom, and how to think about it.  An atom is the bound state of at least one electron to a positively charged nucleus.  The electric attraction between the electron and the nucleus pulls the electron inward, while the rules of quantum mechanics prevent the electron from collapsing down completely onto the nucleus.

 

[Image: an electron orbiting a hydrogen nucleus]

In this case, the relevant “rule of quantum mechanics” is the Heisenberg uncertainty principle, which says that if you confine an electron to a volume of size r, then the electron’s momentum must become at least as large as p \sim \hbar/r.  The corresponding kinetic energy is p^2/2m \sim \hbar^2/mr^2, which means that the more tightly you try to confine an electron, the more kinetic energy it gets.  [Here, \hbar is the reduced Planck constant, and m is the electron mass.]  This kinetic energy is often called the “quantum confinement energy.”

In a stable atom, the quantum confinement energy, which favors having a large electron orbit, is balanced against the electric attraction between the electron and the nucleus, which pulls the electron inward and has energy \sim -e^2/\epsilon_0 r. [Here e is the electron charge and \epsilon_0 is the vacuum permittivity].  In the balanced state, these two energies are nearly equal to each other, which means that r \sim \hbar^2 \epsilon_0/me^2 \sim 10^{-10} meters.

This is the quick and dirty way to figure out the answer to the question: “how big is an atom?”.

The associated velocity of the electron in its orbit is v \sim p/m \sim \hbar/m r, which is about 10^6 m/s (or about two million miles per hour).  The attractive force between the electron and the nucleus is about F_E \sim e^2/\epsilon_0 r^2 \sim m^2 e^6/\hbar^4 \epsilon_0^3, which comes to ~100 nanoNewtons.
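If you like, you can check these back-of-envelope numbers with a few lines of code.  (This version keeps the factors of 4\pi that the scaling argument drops, which is why it lands close to the textbook values.)

```python
hbar = 1.055e-34   # reduced Planck constant (J s)
m    = 9.109e-31   # electron mass (kg)
e    = 1.602e-19   # electron charge (C)
k_e  = 8.988e9     # Coulomb constant, 1/(4 pi eps_0), in N m^2 / C^2

r = hbar**2 / (m * k_e * e**2)   # balance: hbar^2/(m r^2) ~ k_e e^2/r
v = hbar / (m * r)               # p ~ hbar/r, so v ~ p/m
F = k_e * e**2 / r**2            # electric force at that radius

print(f"r ~ {r:.2e} m")          # ~5e-11 m: the Bohr radius
print(f"v ~ {v:.2e} m/s")        # ~2e6 m/s
print(f"F ~ {F:.2e} N")          # ~8e-8 N, i.e. ~100 nanoNewtons
```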


Who pulls harder: the nucleus, or the magnetic field?

Now that I’ve reminded you what an atom looks like, let me remind you what magnetic fields do to free charges.

They pull them into circular orbits, like this:

[Image: a magnetic field pulling a moving charge into a circular orbit]

 

The force with which a magnetic field pulls on a charge is given by F_B \sim e v B, where B is the strength of the field.  For an electron moving at a couple million miles per hour, as inside an atom, this works out to a few tenths of a picoNewton per Tesla of magnetic field.

Now we can consider the following question.  Who pulls harder on the electron: the nucleus, or the external magnetic field?

The answer, of course, depends on the strength of the magnetic field.  Looking at the numbers above, one can see that for just about any realistic situation, the force provided by the magnetic field is much much smaller than the force from the nucleus, so that the magnetic field essentially does nothing to perturb the electrons in their atomic orbitals.  However, if the magnetic field were to get strong enough, then the force it produces would be enough to start significantly bending the electron trajectories, and the shape of the electron orbits would get distorted.

Setting F_B > F_E from above gives the estimate that this kind of distortion happens only when B \gtrsim 10^5  Tesla.  Given that the strongest static magnetic fields we can create artificially are only about 100 Tesla, it’s probably safe to say that you are unlikely to experience this any time soon.  Just don’t wander too close to any magnetars.
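Continuing the little script from above, the threshold field is just the ratio of the electric force to the magnetic force per Tesla:

```python
# Magnetic force per Tesla on an electron at atomic speed, and the field
# at which it catches up to the electric pull of the nucleus:
F_per_tesla = e * v                  # ~3.5e-13 N/T, a fraction of a picoNewton
B_crit = F / F_per_tesla
print(f"B_crit ~ {B_crit:.1e} T")    # ~2e5 T, i.e. B of order 10^5 Tesla
```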


Distorted atoms

But supposing that you did wander into a magnetic field of 100,000 Tesla, what would happen?

The strong magnetic forces would start to squeeze the electron orbits in all the atoms in your body.  The result would look something like this:

 

[Image: a hydrogen atom before (spherical) and after (squeezed) a strong magnetic field is applied]

 

So, for example, an initially spherical hydrogen atom (on the left) would have its orbit squeezed in the directions perpendicular to the magnetic field, and would end up instead looking like the picture on the right.  This squeezing would get more and more pronounced as the field is turned up, so that all the atoms in your body would go from roughly spherical to “cigar-shaped,” and then to “needle-shaped”.

Needless to say, the molecules that make up your body are only able to hold together when they are made from normal shaped atoms, and not needle-shaped atoms.  So once the atomic orbitals got sufficiently distorted, their chemistry would change dramatically and these molecules would start to fall apart.  And your body would presumably be reduced to a dusty, incoherent mess (artist’s conception).

 

But for those of us who stay away from neutron stars, it is probably safe to assume that death by magnetic field-induced disintegration is pretty unlikely.  So you can continue lording your invincibility over your robot coworkers.

 

UPDATE:

A number of people have pointed out, correctly, that if you really subjected a body to strong magnetic fields, something would probably go wrong biologically far before the field got as ludicrously large as 100,000 Tesla.  For example, the motion of ions through ion channels, which is essential for nerve firing, might be affected.  Sadly, I probably don’t know enough biology to give you a confident speculation about what, exactly, might go wrong.

There is another possible issue, though, that can be understood at the level of cartoon pictures of atoms.  An electron orbiting around a nucleus is, in a primitive sense, like a tiny circular electric current.  As a result, the electron creates its own little magnetic field, with a “north pole” and “south pole” determined by the direction of its orbital motion.  Like so:

[Image: an electron orbit acting like a tiny bar magnet, with north and south poles along the orbit axis]

Normally, these little electron orbits all point in more or less random directions.  But in the presence of a strong enough external magnetic field, the electron orbit will tend to get aligned so that its “north pole” points in the same direction as the magnetic field.  By my estimate, this would happen at a few hundred Tesla.
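I won’t spell out the full estimate here, but presumably it amounts to the usual comparison of the magnetic alignment energy of one orbital moment (of order the Bohr magneton \mu_B) against the thermal energy k_B T: alignment wins when \mu_B B \gtrsim k_B T.  A quick check under that assumption:

```python
k_B  = 1.381e-23   # Boltzmann constant (J/K)
mu_B = 9.274e-24   # Bohr magneton, the scale of one orbit's magnetic moment (J/T)
T    = 310.0       # body temperature (K)

B_align = k_B * T / mu_B    # field at which alignment energy beats thermal energy
print(f"B_align ~ {B_align:.0f} T")   # ~460 T: a few hundred Tesla
```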

In other words, a few hundred Tesla is what it would take to strongly magnetize the human body.  This isn’t deformation of atoms, just alignment of their orbits in a consistent direction.

Once the atomic orbits were all pointed in the same direction, the chemistry of atomic interactions might start to be affected.  For example, some chemical processes might start happening at different rates when the atoms are “side by side” as compared to when they are “front to back.”  I can imagine this subtle alteration of chemical reaction rates having a big effect over a long enough time.

Maybe this is why, as commenter cornholio pointed out below, a fruit fly that grows up in a ~ 10 Tesla field appears to get mutated.

 

 

Footnote

I have been assuming, of course, that we are talking only about static magnetic fields.  Subjecting someone to a magnetic field that changes quickly in time is the same thing as bombarding them with radiation.  And it is not at all difficult to microwave someone to death.

[Update: A number of people have brought up transcranial magnetic stimulation, which has noticeable biological effects at relatively small field strengths.  But this  works only because it applies a time-dependent magnetic field, which can induce electric currents in the brain.]

Some equations are more equal than others

August 2, 2014


Here’s a strange math problem that I encountered as an undergraduate:

What is the solution to the following equation?

x^{x^{x^{...}}} = 2

[Note: The order of exponents here is such that the upper ones are taken first.  For example, you should read 2^{3^2} as 2^{(3^2)} = 512 and not as (2^3)^2 = 64.]

As it happens, there’s a handy trick for solving this equation, and that’s to use both sides as an exponent for x.  This gives

x^{x^{x^{x^{...}}}} = x^2

From the first equation, though, the left hand side is just 2.  So now we’re left with simply x^2 = 2, which means x = \sqrt{2}.

Not bad, right?  Apparently the conclusion is that

\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = 2

Where things get weird is when you try to solve an almost identical variant of this problem.  In particular, let’s try to solve:

x^{x^{x^{...}}} = 4

We can do the same trick as before, using both sides of the equation as an exponent for x, and this gives

x^{x^{x^{x^{...}}}} = x^4

so that we’re left with 4 = x^4.  The solution to this equation is, again, x = \sqrt{2}.

But now you should be worried, because apparently we have reached the conclusion that

\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = 4

So which is it?  What is the correct value of \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}}?  Is it 2, or is it 4?

Yes, but which one is the real answer?

Maybe in the world of purely abstract mathematics, it’s not a problem to have two different answers to a single straightforward mathematical operation.  But in the real world this is not a tolerable situation.

The reasoning above raised a straightforward question — what is \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}}? — and provided two conflicting answers: \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = 2 and \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = 4.  Both of these equations are correct, but which one should you really believe?

Suppose that you don’t really believe either of those two equations (which, at this point, you probably shouldn’t), and you want to figure out for yourself what the value of \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} is.  How would you do it?

One simple protocol that you could do with a calculator or a spreadsheet is this:

  • Make an initial guess for what you think is the correct value of \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}}.
  • Take your guess and raise \sqrt{2} to that power.
  • Take the answer you get and raise \sqrt{2} to that power.
  • Repeat that last step a bunch of times.

Try this process out, and you will almost certainly get one of two answers:

If you initially guessed that \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} was any number less than 4, then you will arrive at the conclusion that \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = 2.

If your initial guess was something larger than 4, though, you will instead get to the conclusion that \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = \infty.
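Here is a small script that runs the protocol, if you’d rather not punch it into a calculator (the names are mine; nothing beyond the standard library is needed).  Amusingly, in ordinary floating-point arithmetic even a starting guess of exactly 4 should eventually run away to infinity, because \sqrt{2}^4 rounds to a number slightly above 4 — exactly the fragility described below.

```python
import math

def iterate_tower(guess, steps=500):
    """Run the protocol: repeatedly raise sqrt(2) to the current value."""
    x = guess
    for _ in range(steps):
        try:
            x = math.sqrt(2) ** x
        except OverflowError:
            return math.inf        # the iteration has run off to infinity
    return x

for guess in [1.0, 3.9, 4.0, 4.1]:
    print(f"start at {guess}: end at {iterate_tower(guess)}")
```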

The situation can be illustrated something like this:

[Image: the fixed points of the iteration, with 2 stable and 4 unstable]

Only if your initial guess was exactly 4, and if your calculator gave you the exact correct answer at every step, will you ever see the solution \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = 4.  A single error in the 16th decimal place anywhere along the way will instead lead you to a final answer of either 2 or \infty.

In this sense \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = 2 is a much better answer than \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = 4.  The latter is true only in a hypothetical world of perfect exactness, while the former is true even if your starting conditions are a little uncertain, or if your calculator makes mistakes along the way, or (most importantly) there’s some small additional factor that you haven’t taken into consideration.
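For the calculus-inclined, there is a one-line way to see this difference.  The protocol above iterates the map f(y) = \sqrt{2}^y, and a fixed point y^* of an iterated map is stable only if |f'(y^*)| < 1.  Here f'(y) = \ln(\sqrt{2}) \cdot \sqrt{2}^y, which gives f'(2) = \ln 2 \approx 0.69 at the fixed point y^* = 2 (small errors shrink) and f'(4) = 2\ln 2 \approx 1.39 at y^* = 4 (small errors grow).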

Scientists adjudicate between mathematical realities

For the most part, this has been a silly little exercise.  But it actually does illustrate something that is part of the job of a physicist, or anyone else who uses math as a tool.  Physicists spend a lot of time solving equations that describe (or are supposed to describe) the physical world.  But finding a solution to some equations is not the end of the process.  We also have to check whether the solution we came up with is meaningful in the real world, which is full of inexactnesses.  For example, the equation describing the forces acting on a pencil on my desktop will tell me that the pencil can be stationary either when lying on its side or when balanced on its point.  But only one of those two situations really deserves to be called a “solution”.

So, as for me, if you ask me whether \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = 2 or \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{...}}} = 4, I’ll go with 2.

Because, as Napoleon the pig understood, some equations are more equal than others.

Boston Marathon 2014

April 25, 2014

As promised:

[Image: Boston Marathon 2014 race photo]

A letter to the donors who helped me at Virginia Tech

April 16, 2014

Every year on April 16, I like to remember my time at Virginia Tech.

So far, the memories I have written about have been ones of reverence, or anger, or sadness.  But I haven’t been explicit about the predominant emotion I feel when reflecting on my undergraduate years at VT: gratitude.

It seems to me that the defining feature of my life so far is that I have been the beneficiary of great and undeserved kindness.  My time at VT was certainly no exception.  One of the most concrete examples I have of this kindness is the numerous privately-endowed scholarships that helped to pay my tuition.  I can’t imagine what motivates a private individual to give thousands of dollars every year for the benefit of some unknown (and, in my case, thoroughly immature) kid.  But it’s humbling and puzzling to think that such people considered it worthwhile to pay me to study whatever I was interested in, without knowing me and without getting anything in return.  I benefited enormously from their generosity.

I wish that I had made more diligent and more sincere efforts to thank those people.  I’m sure I would be embarrassed if I had to read through the meager thank-you letters that I wrote every year.

As it happens, though, I do have one of those letters in my possession.  The last scholarship I received at VT was the H. Y. Loh Award, which I was given just before my graduation in 2007.  I wrote a thank-you letter to the donor shortly after graduation, but, sadly, the donor passed away before the letter arrived and it was returned to me.

Just today I finally worked up the courage to open the envelope and read what I had written.  It is a little embarrassing to read, and it reflects my own insecurities as much as anything else, but I like it because it stands as a record of who I was at the time and of the people who helped me get there.

Below is the letter itself.  I have blanked out the donor’s name, but maybe it can stand as an open thank-you to all of those who helped me at Virginia Tech, and to those who continue to help out immature kids like the one I was.

 

[Images: the two pages of the thank-you letter]

The parable of the perfectly symmetric ass

April 10, 2014

I would like to introduce a phrase into the lexicon of science and everyday life, based on the following ridiculous story that was taught to me at the CERN summer school.


Imagine a perfectly symmetric ass, standing atop a perfectly symmetric hill (…I’m talking about a donkey here, folks).  Placed on either side of the hill, at perfectly equidistant locations, are two perfectly identical piles of hay.

The ass is hungry, but it feels itself pulled toward each pile of hay with exactly equal and opposite forces.

Given the staggering symmetry of the setup, the only logical conclusion is that the ass is doomed to inaction and will eventually starve.


As it turns out, this silly story is a famous satire of the assertion by the French philosopher Jean Buridan that

Should two courses be judged equal, then the will cannot break the deadlock, all it can do is to suspend judgement until the circumstances change, and the right course of action is clear.

The poor starving donkey above is thus called “Buridan’s ass.”

(As is often the case, the original version of this satirical argument actually belongs to Aristotle.)


Since hearing this story, I have noticed, with increasing frequency, situations in which it seems like a good depiction of myself or someone else.  So I would like to suggest using the phrase “to be a perfectly symmetric ass” as a description of someone who is being paralyzed into inaction by symmetry.  In particular, I see two good targets for this phrase:

1. Scientific arguments that invoke symmetry at the expense of energy minimization

For example, suppose someone asked you to predict what will happen if you apply a large voltage between a small inner sphere and a large outer sphere that is filled with a weakly-conducting plasma.  Most of us who had Gauss’s law arguments trained into us would immediately say that an electrical current will flow out from the inner sphere in a radially-symmetric way, and consequently that the total current flow will be very small.  But most of us would be wrong, because what actually happens is this:

[Just watch from the 0:09 mark until 0:12 or so.]

In short: the system figures out very quickly that there is a much lower-energy way to move its current from inner to outer surface.  Namely, by creating sharp (symmetry-breaking) pathways with intense current, which produce dielectric breakdown of the plasma and allow the current to flow easily.

If you allow symmetry to fool you into thinking that the current will flow slowly and radially, then you are “being a perfectly symmetric ass.”

2. Everyday situations in which opportunities are missed because of an inability to choose between two good options

Suppose, for example, that you are at an ice cream shop and you are standing at the counter, unable to make up your mind about which of the various fantastic flavors you will get.  As the line starts to build up behind you, you eventually get flustered and say “never mind, I’ll just get chocolate.”

In that situation, you are “being a perfectly symmetric ass.”

Or, maybe, you have “made a perfectly symmetric ass of yourself.”


I say it lovingly, of course, because I make a perfectly symmetric ass of myself all the time.

 

There’s nothing particularly “spooky” about avoided crossing

April 8, 2014

Coming to terms with quantum mechanics is no easy task.  The quantum world has its own unique atmosphere, its own set of laws and tendencies — its own tao, if you will — that seem far removed from the lifetime of solid-feeling intuition that all of us develop naturally.  So gaining an ease and familiarity with the “quantum way” takes time.  It’s a bit like living in a foreign country: you have to spend a lot of time immersed in it before you start to feel like you can move easily through its streets.

That said, those of us who spend a good chunk of our lifetimes thinking about quantum mechanics run a peculiar risk.  Namely, we start to feel like quantum mechanics is everything, and that every result and every feature that appears in the quantum world must be understood on its own terms.  In other words, we forget that some of the things that show up in the quantum world show up just as easily in the “person-sized” classical world, too.

One particular example is the phenomenon of “avoided crossing.”

"No crossing": a fundamental law in the quantum world

In this post I’ll explain what avoided crossing is in its standard quantum form.  Then I’ll show you that it can just as easily rear its head in the classical world, too.

Avoided crossing: quantum version

In its simplest form, the quantum phenomenon of avoided crossing goes something like this:

Imagine that there are two places where a quantum particle (say, an electron) can sit: a site on the left, and a site on the right.  Suppose also that the site on the left has lower energy than the site on the right, and that an electron is sitting there.  Like this:

[Image: two sites, with the electron sitting on the lower-energy left site]

Now suppose that you start to slowly raise the energy of the left site and lower the energy of the right site.  Eventually, the two energy levels will pass by each other, and after a long enough time the left site will have a high energy and the right site will have a low energy.  You would expect that during this process the electron will ride on the left site, so that its energy increases steadily.  Like this:

[Image: the electron energy rising steadily as the left site is raised]

But that’s not what happens.  If you raise/lower the energies of the two sites slowly enough, then what happens is something like this:

[Image: the electron energy staying low while the electron transfers to the right site]

In other words, the electron energy (the blue line) stays low, and never even gets as high as the point where the left and right site energies cross.  What’s more, at long enough times the electron manages to transfer itself from the left site to the right site.

You can now ask the question: what would have happened if the electron started on the right site?  Clearly, in this case the energy should be large to start, and should start decreasing with time as the energy of the right site drops.  And that is indeed what happens, except that when the energies of the two sites get close to each other, something funny happens again:

[Image: the electron energy staying high as the electron transfers from right to left]

The electron in this case manages to transfer itself from right to left, keeping its energy high, and never reaches the point where the two energies are supposed to cross.

So now if you make a plot of “energies that an electron can have” as a function of how far you’ve shifted the left and right site energies, you’ll get something like this:

[Image: the two possible electron energies as a function of the energy shift, showing an avoided crossing]

This is the phenomenon of avoided crossing, or “level repulsion”.  In short: you can never push two energy levels through each other.  If you try, you’ll find that the two energy levels always get “repelled” from each other a little bit, and that the low-energy states remain smoothly connected to other low-energy states, while high-energy states remain connected to other high energy states.

So what causes avoided crossing?  As you could probably guess, its existence depends crucially on the ability of the electron to jump from one site to the other.  In other words, the avoided crossing arises from quantum tunneling.  When the two sites have very different energy, you can say that the electron almost definitely resides in either one site or the other.  Right at the crossing point, however, the electron finds itself spread between the two in a way that apparently involves the “spooky” laws of quantum mechanics.
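If you want to see the math behind pictures like these, the standard toy model (not spelled out above) is a 2\times 2 Hamiltonian: site energies \pm\epsilon on the diagonal and a tunneling amplitude t connecting the two sites.  Its eigenvalues are \pm\sqrt{\epsilon^2 + t^2}, which come no closer than 2t to each other.  A quick numerical check:

```python
import numpy as np

t = 0.1                                    # tunneling amplitude (arbitrary units)
for eps in np.linspace(-1.0, 1.0, 9):      # sweep the site-energy difference
    H = np.array([[eps,  t],
                  [t,  -eps]])             # two-site Hamiltonian
    E_low, E_high = np.linalg.eigvalsh(H)  # exact: -/+ sqrt(eps**2 + t**2)
    print(f"eps = {eps:+.2f}:  E = {E_low:+.4f}, {E_high:+.4f}")
# The gap E_high - E_low = 2*sqrt(eps**2 + t**2) is never smaller than 2*t,
# so the two levels repel instead of crossing.
```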

A pause for philosophizing

Let me pause here to make a more general comment concerning how we think about quantum mechanics.

In general, when one first encounters some strange phenomenon in quantum mechanics, like the avoided crossing outlined above, there are two courses of action, philosophically.  One possibility is to just learn the phenomenon mathematically without grasping for a physical/mechanical way of thinking about it.  The people who advocate this approach (as, for example, here) generally use the argument that all macroscopic “physical” objects really emerge from quantum mechanical laws applied across large scales, so trying to think about the quantum world in terms of mechanical objects is backwards and nonsensical.

This is true, of course.  But it also strikes me as somewhat defeatist.  Science, in my opinion, is never a business of compiling true statements.  It is only a business of compiling useful concepts and models that give some predictive power.  And for an idea to be useful, it has to be able to stick in your mind in a firm and conceptual way.  An idea that consists of arbitrary laws or fiats is unlikely to stick in your mind (or at least in my mind) in this way, even if such fiats constitute a very correct way of stating the idea.

For me, at least, the only ideas that really stick in my mind are ones that can be thought of physically, i.e., ones that have some accompanying picture of how one thing pushes or shakes or distorts another.  Even if these pictures are “not correct,” they are, to me, essential for scientific reasoning.

So let me not take the defeatist attitude of correctness, and try to come up with a “mechanical” way to think about the strange quantum business of avoided crossing.

Avoided crossing: classical version

Imagine for a moment that you have two springs, one on the left and one on the right, each connected to one of two equal masses.  Let’s say that the spring on the left is fairly loose, while the spring on the right is tight.  This means that if you excite the mass on the left it will vibrate slowly, like this:

 

On the other hand, if you excite the mass on the right, it will vibrate more quickly, like this:

[Animation: right mass vibrating quickly]

In this classical example, the vibration frequencies are the analogues of the electron energies in the quantum example above: to begin with, one is small and one is large.

Now let’s imagine the process of slowly tightening the spring on the left and simultaneously loosening the spring on the right.  This is like the simultaneous raising/lowering of the electron energies in the quantum example.

If the two springs are completely disconnected from each other, then nothing interesting will happen.  The left spring frequency will gradually increase and the right spring frequency will gradually decrease.  Like this:

[Image: the two frequencies crossing when the springs are uncoupled]

 

But things become more interesting if you introduce a small coupling between the two springs.  Suppose, for example, that we put a very weak spring in the middle, connecting the two masses.  Like this:

[Image: two masses on springs, connected by a weak middle spring]

Now, in principle, whenever either of the two masses moves it affects the other.  But if the middle spring is very weak, then we can still excite mostly the left spring only or mostly the right spring only. For example, if you excite the loose left spring then the tight right spring won’t move very much.  But what happens if you slowly tighten the left spring while simultaneously loosening the right spring?

As it turns out, this happens:

[Animation: the oscillation transferring from the left spring to the right spring]

What you’re seeing here is that the oscillation starts out more or less entirely on the left, but as the two springs exchange roles (go from loose to tight and vice-versa), the oscillation moves to the right side.  So you start with a slow, left-heavy oscillation at the beginning, go through a phase where both are oscillating equally, and end up with a slow, right-heavy oscillation at the end.

What would have happened if we had started with the oscillation on the right side?

This:

[Animation: the oscillation transferring from the right spring to the left spring]

You’ll notice that the same thing is happening here, but in reverse.  A fast oscillation in the tight spring on the right is eventually turned into a fast oscillation in the tight spring on the left.

If you plot what is happening to the oscillation frequency as a function of time, it looks like this:

[Image: the two oscillation frequencies versus time, showing an avoided crossing]

 

Now that we have this picture, and a movie of what’s happening to the springs over time, we can talk about what, exactly, is the meaning of those funny “avoided crossing” states in the middle.  These states correspond to the moment where the two springs are identically tight (the left and right spring are right at the point of exchanging roles).  Apparently at this moment there are two possible frequencies that the system can have.  If you started with the loose left spring oscillating, then by the time you get to the equality point the system will be doing this:

[Animation: symmetric mode, the two masses moving together]

On the other hand, if you started with the right, tight spring oscillating, then at the equality point the system will be doing this:

[Animation: antisymmetric mode, the two masses moving in opposite directions]

The first situation, where the two springs move together, has a lower frequency.  This is generally called a “symmetric mode”, and it doesn’t involve any stretching of the central spring.  The second situation is called an “antisymmetric mode.”  It involves substantial stretching of the central spring and therefore has a larger frequency.  (You can think that at any given moment, each mass in the antisymmetric mode has two springs pulling on it, while in the symmetric mode the central spring isn’t doing anything and each mass only has one spring pulling on it.)

This is, essentially, the main point that produces avoided crossing in classical systems.  When two oscillating things have some connection to each other, even if it’s weak, you can’t think anymore about exciting just one of them.  Every oscillation you put into the system becomes a joint oscillation, and there are always two independent ways of making joint oscillations: a symmetric way and an antisymmetric way.  These two independent ways will have different frequencies, because they necessarily place different demands on the connecting object (here, the central spring).
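For the record, here is a small numerical version of the spring picture (a sketch assuming two equal unit masses; the spring constants are illustrative).  The squared mode frequencies are the eigenvalues of the stiffness matrix, and the symmetric/antisymmetric split at the matching point comes entirely from the coupling spring:

```python
import numpy as np

def mode_frequencies(k_left, k_right, k_couple, mass=1.0):
    """Normal-mode frequencies of two equal masses, each on its own spring,
    joined by a weak middle spring.  omega^2 = eigenvalues of K/mass."""
    K = np.array([[k_left + k_couple, -k_couple],
                  [-k_couple, k_right + k_couple]])
    return np.sqrt(np.linalg.eigvalsh(K) / mass)

# Tighten the left spring while loosening the right one:
for s in np.linspace(-1.0, 1.0, 5):
    w_slow, w_fast = mode_frequencies(2.0 + s, 2.0 - s, k_couple=0.1)
    print(f"detuning {s:+.1f}:  omega = {w_slow:.3f}, {w_fast:.3f}")
# At zero detuning the frequencies don't meet: the symmetric mode (central
# spring unstretched) has omega = sqrt(2.0), while the antisymmetric mode
# (central spring stretched) has omega = sqrt(2.2) -- an avoided crossing.
```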

So how does this teach me anything about electrons?

If you start looking for commonalities between the two examples given here, one of the first you’ll see is that avoided crossing is associated with the transfer of something from one place to another.  In the classical case, it was the transfer of an oscillation from one spring to another.  In the quantum case, it was the transfer of an electron from one site to another.

At the moment when the two electron sites (or two springs), are made equal to each other, the electron (oscillation) becomes indifferent about which site to sit on.  This means that the electron will sit on both sites equally.  You can think that the electron finds itself jumping back and forth between the two sites.  But there are two ways to do this: the electron can jump back and forth in a “symmetric way” or in an “antisymmetric way.”  The “symmetric way” will always have lower energy.

Now, you can ask what is the meaning of calling the electron jumping “symmetric” or “antisymmetric.”  The technically correct answer is that they relate to properties of the electron wavefunction, which is the mathematical function that describes the probability for the electron to occupy different places in space.  In the symmetric state, the electron wavefunction is literally a symmetric function, while in the antisymmetric state the wavefunction is antisymmetric.

But let me try to be a bit more pictorial.  It seems rude to invoke mathematical functions in a friendly discussion.

Quantum mechanics, in the end, is the theory of quantum fields.  These fields are something like fluids that fill all of space, and electrons (or any other particles) are like little floating beads (or little floating bugs) that make a small disturbance in the field around them and are likewise pushed along by the field’s ripplings and frothings.

I’m not going to lie, sometimes my mental image for an electron looks kind of like this.

When we say that the electron sits at a given site, what we mean is that the rippling of the field is calm enough (or the confining environment is sturdy enough) that field ripples don’t take the electron very far from a particular point.  But when two sites have very similar energy, the field can readily slosh back and forth between those two sites and take the electron with it.  The more strongly connected these two sites are (for example, if they are physically very close), the faster will be the frequency with which the field sloshes back and forth, and thus the faster will be the frequency with which the electron jumps back and forth between the sites.

In my mind, the “symmetric way” for the electron to move is to go with the field — for example, to always ride on the crest of a “wave.”  The “antisymmetric way” is to go against the field, which would imply that with each jump from one site to another the electron passes through a wave crest going in the opposite direction.  Because the field is disturbed by the electron itself, the motion of the electron with or against the field alters the sloshing motion (this is something like my picture of Calvin in the bathtub).  And thus the symmetric and antisymmetric states for the electron have different frequencies.


If you’re a pragmatic, quantitative-minded person, these supposed similarities between electrons and springs and sloshing waterbugs might all seem a bit wishy-washy.  And at some level, they are.  But for people who have to manipulate concepts in the quantum world, these kinds of “visual” similarities can be very useful for building up a feeling for how the quantum world behaves.  Real predictions require calculations, of course, but knowing which calculations are worth doing and what to expect requires feeling.  And for me, at least, cartoonish pictures go a long way toward creating those feelings.


It occurs to me, by the way, that if I ever become an old crackpot I might be pretty tempted to write a bizarre quantum mechanics textbook entitled “the electron as a water strider bug.”

 
