
What would you teach if you could teach absolutely anything?

September 28, 2015

Suppose someone told you the following:

You are invited to teach a class to a group of highly-motivated high school students.  It can be about absolutely any topic, and can last for as little as 5 minutes or as long as 9 hours.

What topic would you choose for your class?


As it happens, this is not just a hypothetical question for me at the moment.  In November MIT is hosting its annual MIT Splash event, and the call for volunteer teachers is almost exactly what is written in the quote above.  Students, staff, and faculty from all over MIT are invited to teach short courses on a topic of their choosing, and the results are pretty wild.

A few of my favorite courses from last year:

  • The History of Video Game Music
  • How to Create a Language
  • Cryptography for People Without a Computer
  • Build a Mini Aeroponic Farm
  • Calculating Pi With a Coconut
  • Advanced Topics in Murder


So now I would like to turn to you, dear blog readers, for help.

What should I teach about?  Please let me know, in the comments, what you think about either of these two questions:

  1. If you were a high school student, what kind of class would you want to go to?
  2. If you were in my position, what kind of class would you want to teach?


The two ideas that come to mind immediately are:

  • Quantum Mechanics with middle school math

Use Algebra 1-level math to figure out answers to questions like: What is wave/particle duality? How big is an atom? How do magnets work?  What is quantum entanglement?

  • The Math Behind Basketball Strategy

Learn about some of the difficult strategic decisions that basketball teams are faced with, and see how they can be described with math.  Then solve a few of them yourself!

Imagining yourself as a high school student, which of those two sounds better to you?  Any suggestions for alternative ideas or refinements?


Samuel Beckett’s Guide to Particles and Antiparticles

September 24, 2015

As part of an ongoing series of posts about particles and fields, I have contributed another post to the blog ribbonfarm.  This one takes a sort of dark (Beckettian) viewpoint of the nature of particles and antiparticles, and advances the image of particles as defects in the smooth fields that make up the vacuum.

You can read the post here.

You will be rewarded with a number of pictures, and the claim that the physical universe is like a screwed-up zipper.

Draw me a picture of a Cooper pair

September 13, 2015

As part of a friendly blog post exchange, I just contributed a guest post to the condensed matter physics blog This Condensed Life.  My goal was to explain how to think about Cooper pairs, which are the paired states of electrons that enable superconductivity.

You can read the post here.


As a teaser, I’ll give you my preferred picture of a Cooper pair:

a standing wave, which I think is an upgrade over the typical illustrations.


Guest Post: If You Walk in A Closed Loop, Do You End Up Where You Started?

September 10, 2015

Editor’s note: The following is a guest post contributed by Anshul Kogar.  Anshul is a postdoctoral researcher in experimental condensed matter physics who currently splits his time between Argonne National Laboratory and the University of Illinois at Urbana-Champaign.  He maintains the blog This Condensed Life, which discusses conceptual ideas and recent developments in condensed matter physics.


This may surprise you, but the answer to the question in the title leads to some profound quantum mechanical phenomena that fall under the umbrella of the Berry or geometric phase, the topic I’ll be addressing here. There are also classical manifestations of the same kind of effects, a prime example being the Foucault pendulum.

Let’s now return to the original question. Ultimately, whether or not one returns to oneself after walking in a closed loop depends on what you are and where you’re walking, as I’ll describe below. The geometric phase is easier to visualize classically, so let me start there. Let’s consider a boy, Raj, who is pictured below:


Now, Raj, for whatever reason, wants to walk in a rectangle (a closed loop). But he has one very strict constraint: he can’t turn/twist his body while he’s walking. So if Raj starts his walk in the top right corner of the rectangle (pictured below) and then walks forward normally, he has to start side-stepping when he reaches the bottom right corner to walk to the left. Similarly, on the left side of the rectangle, he is constrained to walk backwards, and on the top side of the rectangle, he must side-step to the right. The arrow in the diagram below indicates the direction Raj faces as he walks.

When Raj returns to the top right corner, he ends up exactly as he started, in the same place and facing the same direction. Not very profound at all! But now let’s consider the case where Raj is not walking on a plane, but walking on the surface of a sphere:

Again, the arrow is supposed to indicate the direction that Raj is facing. This time, Raj starts his trek at the north pole, heads to Quito, Ecuador on the equator, then continues his walk along the equator and heads back up to the north pole. Notice that on this journey, even though Raj obeys the non-twisting constraint, he ends up facing a different direction when he returns to the north pole! Even though he has returned to the same position, something is slightly different. We call this difference anholonomy.

Why did anholonomy result in the spherical case and not the flat rectangle case? Amazingly, it turns out to be related to the hairy ball theorem (don’t ask me how it got its name). Crudely speaking, the hairy ball theorem states that if you have a ball covered with hairs, you can’t comb the hairs straight without leaving at least one little bald spot or tuft. In the image below you can see a little tuft on both the top and bottom of the sphere:

The sphere which Raj traversed had a little bald spot at the north pole, leading to the anholonomy.

Now, before moving on to the Foucault pendulum, I want to explicitly state the items that were critical in obtaining the anholonomy for the spherical case: (i) the object that is transported must have a direction (i.e. be a vector); (ii) the object must be transported on a surface on which one cannot properly comb hair.
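
To make this concrete, here is a minimal numerical sketch in Python (my assumptions: a unit sphere, and an equatorial leg one quarter of the equator long, as in the figure) that parallel-transports a direction vector around Raj's loop using the small-step rule "take a step, then project the vector back into the tangent plane":

    import numpy as np

    def arc(a, b, n=2000):
        """Great-circle arc from unit vector a to unit vector b (a simple slerp)."""
        t = np.linspace(0, 1, n)[:, None]
        w = np.arccos(np.dot(a, b))
        return (np.sin((1 - t) * w) * a + np.sin(t * w) * b) / np.sin(w)

    def anholonomy_angle(path, v):
        """Parallel-transport tangent vector v along path; return its net rotation."""
        v0 = v / np.linalg.norm(v)
        v = v0.copy()
        for q in path[1:]:
            v = v - np.dot(v, q) * q          # project out the new surface normal
            v = v / np.linalg.norm(v)
        return np.degrees(np.arccos(np.clip(np.dot(v0, v), -1, 1)))

    north = np.array([0.0, 0.0, 1.0])          # north pole
    quito = np.array([1.0, 0.0, 0.0])          # a point on the equator
    east  = np.array([0.0, 1.0, 0.0])          # a quarter of the equator away

    loop = np.vstack([arc(north, quito), arc(quito, east), arc(east, north)])
    print(anholonomy_angle(loop, np.array([0.0, 1.0, 0.0])))   # ~90 degrees

The rotation angle is exactly the solid angle enclosed by the loop (one octant of the sphere, or \pi/2), which is also why the flat rectangle, which encloses no curvature, produced no anholonomy.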

Now, how does the Foucault pendulum get tied up in all this? Well, the Foucault pendulum, swinging in Paris, does not oscillate in the same plane after the earth makes a full 24-hour rotation. This difference in angle between the original and the next-day plane of oscillation is also an anholonomy, except the earth rotates instead of the pendulum taking a walk. Check out this great animation from Wikipedia below:

If the pendulum was at the north (south) pole, the pendulum would come back to itself after a 24-hour rotation of the earth, \theta=360^o (\theta=-360^o). If the pendulum was at the equator, the pendulum would not change its oscillation plane at all (\theta=0^o). Now, depending on the latitude in the northern hemisphere where a Foucault pendulum is set up, the pendulum will make an angle between 0^o and 360^o after the earth makes its daily rotation. The north pole case, in light of the animation above, can be inferred from this image by imagining the earth rotating about its vertical axis:
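
For latitudes in between, the standard result is that the oscillation plane turns by 360^o \times \sin(\mathrm{latitude}) per day, which reproduces both limiting cases above. A quick check in Python:

    import numpy as np

    # Foucault anholonomy per day: 360*sin(latitude) degrees
    # (360 at the pole, 0 at the equator, ~271 for Paris)
    for place, lat in [("North Pole", 90.0), ("Paris", 48.85), ("Quito", 0.0)]:
        print(f"{place:>10}: {360 * np.sin(np.radians(lat)):6.1f} degrees/day")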

The concept of anholonomy in the quantum mechanical case can actually be pictured quite similarly to the classical case described above. In quantum mechanics, we describe particles using a wavefunction, which in a very basic sense is also a vector. The vector does not exist in “real space” but in what physicists refer to as “Hilbert space”. Nonetheless, the geometrical game I played with classical anholonomy can also be played in this abstract “Hilbert space”. The main difference is that the anholonomy angle becomes a phase factor in the quantum realm. The correspondence is as follows:

(Classical)     \theta \rightarrow \mathrm{e}^{i\theta}     (Quantum)

The expression on the right is precisely the Berry/geometric phase.  In the quantum case, regarding the two criteria above, we already have (i) a “vector” in the form of the wavefunction — so all we need is (ii) an appropriate surface on which hair cannot be combed straight. It turns out that in quantum mechanics, there are many ways to do this, but the most famous is undoubtedly the case of the Aharonov-Bohm effect.

In the Aharonov-Bohm experiment, one prepares a beam of electrons, splits the beam, and passes the two halves on either side of a solenoid, recombining them on the other side. A schematic of this experimental setup is shown below:


In the image, B labels the magnetic field and A is the vector potential. While many readers are probably familiar with the magnetic field, the vector potential may not be as household a concept. The vector potential was originally thought up by Maxwell, and considered to be a mathematical oddity. He realized that one could obtain B by measuring the curl of A at each point in space, but A was not given any physical meaning.

Now, it doesn’t immediately seem like this experiment would give one a geometric phase, especially considering the fact that the magnetic field outside the solenoid is zero. But let’s take a look at the pattern of the vector potential outside the solenoid (top-down view):

Interestingly, the vector potential outside the solenoid looks like it would have a tuft in the center! Criterion number (ii) may therefore potentially be met. The one question left to be answered is this: can the vector potential actually “rotate” the electron wavefunction (or “vector”)? The answer to that question deserves a post to itself, and perhaps Brian or I can fill that hole in the future, but the answer seems to be emphatically in the affirmative.

The equation describing the relationship between the anholonomy angle and the vector potential is:

\theta = \alpha \oint \textbf{A}(\textbf{r})\cdot d\textbf{r}

where \theta is the rotation (or anholonomy) angle, the integral is over the closed loop of the electron path, and \alpha is just a proportionality constant.

The way to think about the equality is as follows: d\textbf{r} is an infinitesimal “step” that the electron takes, much in the way that Raj took steps earlier. At each step, the wavefunction is rotated a little compared to the previous step by an amount \textbf{A}(\textbf{r})\cdot d\textbf{r}, dictated by the vector potential. When I add up all the little rotations caused by \textbf{A}(\textbf{r}) over the entire path of the electrons, I get the integral around the closed loop.
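
To see the loop integral at work, here is a short numerical sketch in Python. It assumes the standard textbook form of the vector potential outside an ideal solenoid carrying total flux \Phi, namely A = \Phi/(2\pi r) pointing in the azimuthal direction; the integral then returns the enclosed flux for every loop that encircles the solenoid, whatever its radius:

    import numpy as np

    # Loop integral of A around circles of different radii.
    # Outside an ideal solenoid with flux Phi: A = Phi/(2*pi*r^2) * (-y, x).
    Phi = 2.5                                # arbitrary flux through the solenoid
    t = np.linspace(0, 2 * np.pi, 10_000)    # parametrize the loop

    for R in [1.0, 3.0, 10.0]:
        x, y = R * np.cos(t), R * np.sin(t)
        Ax = -Phi * y / (2 * np.pi * (x**2 + y**2))
        Ay =  Phi * x / (2 * np.pi * (x**2 + y**2))
        dxdt, dydt = -R * np.sin(t), R * np.cos(t)
        print(R, np.trapz(Ax * dxdt + Ay * dydt, t))   # ~Phi, independent of R

The fact that the answer depends only on the flux enclosed, and not on the shape or size of the loop, is precisely what makes the resulting phase “geometric”.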

Now that we have the anholonomy angle, we need to use the classical \rightarrow quantum relation from above. This gives us a relative phase factor of \mathrm{e}^{i\theta} between the electrons that go to the right and left of the solenoid. Whenever there is a non-zero phase difference, one should always be able to measure it using an interference experiment — and this is indeed the case here.

An experiment consisting of an electron beam fired at a double-slit interference setup, coupled with a solenoid, demonstrates this interference effect most profoundly. On the setup to the left, the usual interference pattern is formed due to the path length difference of the electrons. On the setup to the right, the entire pattern is shifted because of the extra phase factor from the anholonomy angle.

AB Double Slit

Again, let me emphasize that there is no magnetic field in the region where the electrons travel. This effect is due purely to the geometric effect of the anholonomy angle, a.k.a. the Berry phase, and the geometric effect arises in relation to the swirly tuft of hair!

So next time you’re taking a long walk, think about how much the earth has rotated while you’ve been walking and whether you really end up where you started — chances are that something’s just a little bit different.

1 is more important than 9: Benford’s Law

August 26, 2015

Let’s start like this: think of some number that describes nature, or any object in it.  It can be any mathematical or physical constant or measurement, in any system of units.

Got one?

I predict, using my psychic powers, that you were much more likely to have thought of a number that begins with 1, 2, or 3 rather than a number that begins with 7, 8, or 9.

As it turns out, the probability is about four times higher. In fact, the probability of having a particular first digit decreases monotonically with the value of the digit (1 is a more common first digit than 2, 2 is more common than 3, and so on).  And the odds of you having picked a number that starts with 1 are about seven times higher than the odds of you having picked a number that starts with 9.


This funny happenstance is part of a larger observation called Benford’s law.  Broadly speaking, Benford’s law says that the lower counting numbers (like 1, 2, and 3) are disproportionately likely to be the first digit of naturally-occurring numbers.

In this post I’ll talk a little bit about Benford’s law, its quantitative form, and how one can think about it.

But first, as a fun exercise, I decided to see whether Benford’s law holds for the numbers I personally tend to use and care about.

(Here I feel I must pause to acknowledge how deeply, ineluctably nerdy that last sentence reveals me to be.)

So I made a list of the physical constants that I tend to think about — or, at least, of the ones that occurred to me at the moment of making the list.  These are presented below in no particular order, and with no particular theme or guarantee for completeness and non-redundancy (i.e., some of the constants on this list can be made by combining others).


After a quick look-over, it’s pretty clear that this table has a lot more numbers starting with 1 than numbers starting with 9.  A histogram of first digits in this table looks like this:

my constants

Clearly, there are more small digits than large digits.  (And somehow I managed to avoid any numbers that start with 4.  This is perhaps revealing about me.)


As far as I can tell, there is no really satisfying proof of Benford’s law.  But if you want to get some feeling for where it comes from, you can notice that the numbers in my table cover a really wide range of scales: from 10^{-35} (the Planck length) to 10^{30} (the sun’s mass).  (And no doubt they would cover a wider range if I were into astronomy.)  So if you wanted to put all those physical constants on a single number line, you would have to do it in logarithmic scale.  Like this:

number line

The funny thing about a logarithmic scale, though, is that it distorts the real line, giving more length to numbers beginning with lower integers.  For example, here is the same line from above, zoomed in to the interval between 1 and 10:

number line - zoomed

You can see in this picture that the interval from 1 to 2 is much longer than the interval from 9 to 10.  (And, just to remind you, the general rule for logarithmic scales is that the same interval separates any two numbers with the same ratio.  So, for example, 1 and 3 are as far from each other as 2 and 6, or 3 and 9, or 500 and 1500.)  If you were to choose a set of numbers by randomly throwing darts at a logarithmic scale, you would naturally get more 1’s and 2’s than 8’s and 9’s.

What this implies is that if you want a quantitative form for Benford’s law, you can just compare the lengths of the different intervals on the logarithmic scale.  This gives:

P(d) = \log_{10}(1 + 1/d),

where d is the value of the first digit and P(d) is the relative abundance of that digit.
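
Plugging the nine possible first digits into this formula shows how lopsided the distribution is, and reproduces the “four times” and “seven times” figures quoted at the top of this post:

    import math

    # Benford's law: P(d) = log10(1 + 1/d)
    P = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    for d in range(1, 10):
        print(f"P({d}) = {P[d]:.3f}")       # from P(1) ~ 0.301 down to P(9) ~ 0.046

    print(P[1] / P[9])                      # ~6.6: a 1 beats a 9 about seven-fold
    print(sum(P[d] for d in (1, 2, 3))
          / sum(P[d] for d in (7, 8, 9)))   # ~3.9: the "four times" claim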


If you have a large enough data set, this quantitative form of Benford’s law tends to come through pretty clearly.  For example, if you take all 335 entries from the list of physical constants provided by NIST, then you find that the abundance of different first digits is described by the formula above with pretty good quantitative accuracy:



Now, if you don’t like the image of choosing constants of nature by throwing darts at a logarithmic scale, let me suggest another way to see it: Benford’s law is what you’d get as the result of a random walk using multiplicative steps.

In the conventional random walk, the walker steps randomly to the right or left with steps of constant length, and after a long time ends up at a random position on the number line.  But imagine instead that the random walker takes steps of constant multiplicative value — for example, at each “step” the walker could have his position multiplied by either 2/3 or 3/2.  This would correspond to steps that appear to have constant length on the logarithmic scale.  Consequently, after many steps the walker would have a random position on the logarithmic axis, and so would be more likely to end up in one of those wider 1–2 bins than in the narrower 8–9 bins.

The upshot is that one way to think about Benford’s law is that the numbers we have arise from a process of multiplying many other “randomly chosen” numbers together.  This multiplication naturally skews our results toward numbers that begin with low digits.
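
Here is a small simulation of that multiplicative walk in Python (the 10,000 walkers and 300 steps are arbitrary choices); the resulting first-digit frequencies land quite close to the formula above:

    import math, random
    from collections import Counter

    def first_digit(x):
        return int(f"{x:e}"[0])         # first character in scientific notation

    counts = Counter()
    walkers, steps = 10_000, 300
    for _ in range(walkers):
        x = 1.0
        for _ in range(steps):
            x *= random.choice([2 / 3, 3 / 2])   # one multiplicative step
        counts[first_digit(x)] += 1

    for d in range(1, 10):
        print(d, counts[d] / walkers, round(math.log10(1 + 1 / d), 3))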


By the way, for me the notion of “randomly multiplying numbers together” immediately brings to mind the process of doing homework as an undergraduate.  This inspired me to grab a random physics book off my shelf (which happened to be Tipler and Llewellyn’s Modern Physics, 3rd edition) and check the solutions to the homework problems in the back.

Sure enough:

homework_problems

So the next time you find yourself trying to randomly guess answers, remember Benford’s law.



A children’s picture-book introduction to quantum field theory

August 20, 2015

Note: the following is the second in a series of blog posts that I was invited to contribute to the blog ribbonfarm.  You can go here to see the original post, which contains quite a bit of discussion in the comments.


First of all, don’t panic.

I’m going to try in this post to introduce you to quantum field theory, which is probably the deepest and most intimidating set of ideas in graduate-level theoretical physics.  But I’ll try to make this introduction in the gentlest and most palatable way I can think of: with simple-minded pictures and essentially no math.

To set the stage for this first lesson in quantum field theory, let’s imagine, for a moment, that you are a five-year-old child.  You, the child, are talking to an adult, who is giving you one of your first lessons in science.  Science, says the adult, is mostly a process of figuring out what things are made of.  Everything in the world is made from smaller pieces, and it can be exciting to find out what those pieces are and how they work.  A car, for example, is made from metal pieces that fit together in specially-designed ways.  A mountain is made from layers of rocks that were pushed up from inside the earth.  The earth itself is made from layers of rock and liquid metal surrounded by water and air.

This is an intoxicating idea: everything is made from something.

So you, the five-year-old, start asking audacious and annoying questions. For example:

What are people made of?
People are made of muscles, bones, and organs.
Then what are the organs made of?
Organs are made of cells.
What are cells made of?
Cells are made of organelles.
What are organelles made of?
Organelles are made of proteins.
What are proteins made of?
Proteins are made of amino acids.
What are amino acids made of?
Amino acids are made of atoms.
What are atoms made of?
Atoms are made of protons, neutrons, and electrons.
What are electrons made of?
Electrons are made from the electron field.
What is the electron field made of?

And, sadly, here the game must come to an end, eight levels down.  This is the hard limit of our scientific understanding.  To the best of our present ability to perceive and to reason, the universe is made from fields and nothing else, and these fields are not made from any smaller components.

But it’s not quite right to say that fields are the most fundamental thing that we know of in nature.  Because we know something that is in some sense even more basic: we know the rules that these fields have to obey.  Our understanding of how to codify these rules came from a series of truly great triumphs in modern physics.  And the greatest of these triumphs, as I see it, was quantum mechanics.

In this post I want to try and paint a picture of what it means to have a field that respects the laws of quantum mechanics.  In a previous post, I introduced the idea of fields (and, in particular, the all-important electric field) by making an analogy with ripples on a pond or water spraying out from a hose.  These images go surprisingly far in allowing one to understand how fields work, but they are ultimately limited in their correctness because the implied rules that govern them are completely classical.  In order to really understand how nature works at its most basic level, one has to think about a field with quantum rules.


The first step in creating a picture of a field is deciding how to imagine what the field is made of.  Keep in mind, of course, that the following picture is mostly just an artistic device.  The real fundamental fields of nature aren’t really made of physical things (as far as we can tell); physical things are made of them.  But, as is common in science, the analogy is surprisingly instructive.

So let’s imagine, to start with, a ball at the end of a spring.  Like so:


This is the object from which our quantum field will be constructed.  Specifically, the field will be composed of an infinite, space-filling array of these ball-and-springs.


To keep things simple, let’s suppose that, for some reason, all the springs are constrained to bob only up and down, without twisting or bending side-to-side.  In this case the array of springs can be called, using the jargon of physics, a scalar field.  The word “scalar” just means a single number, as opposed to a set or an array of multiple numbers.  So a scalar field is a field whose value at a particular point in space and time is characterized only by a single number.  In this case, that number is the height of the ball at the point in question.  (You may notice that what I described in the previous post was a vector field, since the field at any given point was characterized by a velocity, which has both a magnitude and a direction.)

In the picture above, the array of balls-and-springs is pretty uninteresting: each ball is either stationary or bobs up and down independently of all others.  In order to make this array into a bona fide field, one needs to introduce some kind of coupling between the balls.  So, let’s imagine adding little elastic bands between them:


Now we have something that we can legitimately call a field.  (My quantum field theory book calls it a “mattress”.)  If you disturb this field – say, by tapping on it at a particular location – then it will set off a wave of ball-and-spring oscillations that propagates across the field.  These waves are, in fact, the particles of field theory.  In other words, when we say that there is a particle in the field, we mean that there is a wave of oscillations propagating across it.
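
You can watch this happen numerically. Below is a bare-bones sketch in Python of a one-dimensional version of the mattress (a single line of balls rather than a space-filling array, with unit masses and spring constants chosen for convenience), in which a single tap launches waves that propagate outward:

    import numpy as np

    # A 1D "mattress": each ball feels its own spring (-h) plus elastic
    # bands pulling it toward its neighbors (the discrete second derivative).
    N, dt = 200, 0.05
    h = np.zeros(N)              # height of each ball
    v = np.zeros(N)              # velocity of each ball
    h[N // 2] = 1.0              # tap the field once, in the middle

    for _ in range(2000):
        force = -h + (np.roll(h, 1) - 2 * h + np.roll(h, -1))
        v += force * dt          # update velocities...
        h += v * dt              # ...then positions (semi-implicit Euler)

    # h now contains wave packets that have traveled away from the tap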

These particles (the oscillations of the field) have a number of properties that are probably familiar from the days when you just thought of particles as little points whizzing through empty space.  For example, they have a well-defined propagation velocity, which is related to the weight of each of the balls and the tightness of the springs and elastic bands.  This characteristic velocity is our analog of the “speed of light”.  (More generally, the properties of the springs and masses define the relationship between the particle’s kinetic energy and its propagation velocity, like the  KE = \frac{1}{2}mv^2 of your high school physics class.)  The properties of the springs also define the way in which particles interact with each other.  If two particle-waves run into each other, they can scatter off each other in the same way that normal particles do.

(A technical note: the degree to which the particles in our field scatter upon colliding depends on how “ideal” the springs are.  If the springs are perfectly described by Hooke’s law, which says that the restoring force acting on a given ball is linearly proportional to the spring’s displacement from equilibrium, then there will be no interaction whatsoever.  For a field made of such perfectly Hookean springs, two particle-waves that run into each other will just go right through each other.  But if there is any deviation from Hooke’s law, such that the springs get stiffer as they are stretched or compressed, then the particles will scatter off each other when they encounter one another.)
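
In terms of the one-dimensional sketch above, this non-Hookean stiffening amounts to one extra term in the force law (the coefficient 0.1 is an arbitrary choice):

    # springs that stiffen at large displacement: wave packets now scatter
    # off each other instead of passing through cleanly
    force = -h - 0.1 * h**3 + (np.roll(h, 1) - 2 * h + np.roll(h, -1))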

Finally, the particles of our field clearly exhibit “wave-particle duality” in a way that is easy to see without any philosophical hand-wringing.  That is, our particles by definition are waves, and they can do things like interfere destructively with each other or diffract through a double slit.

All of this is very encouraging, but at this point our fictitious field lacks one very important feature of the real universe: the discreteness of matter.  In the real world, all matter comes in discrete units: single electrons, single photons, single quarks, etc.  But you may notice that for the spring field drawn above, one can make an excitation with completely arbitrary magnitude, by tapping on the field as gently or as violently as one wants.  As a consequence, our (classical) field has no concept of a minimal piece of matter, or a smallest particle, and as such it cannot be a very good analogy to the actual fields of nature.


To fix this problem, we need to consider that the individual constituents of the field – the balls mounted on springs – are themselves subject to the laws of quantum mechanics.

A full accounting of the laws of quantum mechanics can take some time, but for the present pictorial discussion, all you really need to know is that a quantum ball on a spring has two rules that it must follow.  1) It can never stop moving, but instead must be in a constant state of bobbing up and down.  2) The amplitude of the bobbing motion can only take certain discrete values.
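
In more standard language, these two rules say that each ball-and-spring is a quantum harmonic oscillator, with a discrete ladder of allowed energies

E_n = \hbar \omega (n + 1/2), \qquad n = 0, 1, 2, \ldots

Rule 2 is the integer n, and rule 1 is the leftover \hbar \omega / 2 of zero-point motion that persists even in the lowest state.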

oscillator_quanta

This quantization of the ball’s oscillation has two important consequences.  The first consequence is that, if you want to put energy into the field, you must put in at least one quantum.  That is, you must give the field enough energy to kick at least one ball-and-spring into a higher oscillation state.  Arbitrarily light disturbances of the field are no longer allowed. Unlike in the classical case, an extremely light tap on the field will produce literally zero propagating waves.  The field will simply not accept energies below a certain threshold.  Once you tap the field hard enough, however, a particle is created, and this particle can propagate stably through the field.

This discrete unit of energy that the field can accept is what we call the rest mass energy of particles in a field.  It is the fundamental amount of energy that must be added to the field in order to create a particle.  This is, in fact, how to think about Einstein’s famous equation  E = mc^2  in a field theory context.  When we say that a fundamental particle is heavy (large mass  m ), it means that a lot of energy has to be put into the field in order to create it.  A light particle, on the other hand, requires only a little bit of energy.

(By the way, this is why physicists build huge particle accelerators whenever they want to study exotic heavy particles.  If you want to create something heavy like the Higgs boson, you have to hit the Higgs field with a sufficiently large (and sufficiently concentrated) burst of energy to give the field the necessary one quantum of energy.)

The other big implication of imposing quantum rules on the ball-and-spring motion is that it changes pretty dramatically the meaning of empty space.  Normally, empty space, or vacuum, is defined as the state where no particles are around.  For a classical field, that would be the state where all the ball-and-springs are stationary and the field is flat.  Something like this:

flat_field

But in a quantum field, the ball-and-springs can never be stationary: they are always moving, even when no one has added enough energy to the field to create a particle.  This means that what we call vacuum is really a noisy and densely energetic surface:


This random motion (called vacuum fluctuations) has a number of fascinating and eminently noticeable influences on the particles that propagate through the vacuum.  To name a few, it gives rise to the Casimir effect (an attraction between parallel surfaces, caused by vacuum fluctuations pushing them together) and the Lamb shift (a shift in the energy of atomic orbits, caused by the electron getting buffeted by the vacuum).

In the jargon of field theory, physicists often say that “virtual particles” can briefly and spontaneously appear from the vacuum and then disappear again, even when no one has put enough energy into the field to create a real particle.  But what they really mean is that the vacuum itself has random and indelible fluctuations, and sometimes their influence can be felt by the way they kick around real particles.

That, in essence, is a quantum field: the stuff out of which everything is made.  It’s a boiling sea of random fluctuations, on top of which you can create quantized propagating waves that we call particles.

I only wish, as a primarily visual thinker, that the usual introduction to quantum field theory didn’t look quite so much like a dense wall of equations.  Because behind the equations of QFT there really is a tremendous amount of imagination, and a great deal of wonder.


Where do electric forces come from?

June 23, 2015

Note: what follows is a reproduction of a post I wrote for the blog ribbonfarm.  You can go here to see the original post, which contains some additional discussion in the comments.


There’s a good chance that, at some point in your life, someone told you that nature has four fundamental forces: gravity, the strong nuclear force, the weak nuclear force, and the electromagnetic force.

This factoid is true, of course.

But what you probably weren’t told is that, at the scale of just about any natural thing that you are likely to think about, only one of those four forces has any relevance.  Gravity, for example, is so obscenely weak that one has to collect planet-sized balls of matter before its effect becomes noticeable.  At the other extreme, the strong nuclear force is so strong that it can never go unneutralized over distances larger than a few times the diameter of an atomic nucleus ( \sim 10^{-15} meters); any larger object will essentially never notice its existence.  Finally, the weak nuclear force is extremely short-ranged, so that it too has effectively no influence over distances larger than  \sim 10^{-15} meters.

That leaves the electromagnetic force, or, in other words, the Coulomb interaction.  This is the familiar law that says that like charges repel each other and opposites attract.  This law alone dominates the interactions between essentially all objects larger than an atomic nucleus ( 10^{-15} meters) and smaller than a planet ( 10^{7} meters).  That’s more than twenty powers of ten.
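
To put numbers on that hierarchy, here is a quick comparison in Python of the electric repulsion and the gravitational attraction between two electrons (standard SI constants; the separation cancels out of the ratio):

    # electric vs. gravitational force between two electrons
    k_e = 8.988e9       # Coulomb constant (N m^2 / C^2)
    G   = 6.674e-11     # gravitational constant (N m^2 / kg^2)
    e   = 1.602e-19     # electron charge (C)
    m_e = 9.109e-31     # electron mass (kg)

    ratio = (k_e * e**2) / (G * m_e**2)   # the 1/r^2 factors cancel
    print(f"{ratio:.1e}")                 # ~4.2e42: electricity wins every time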

But not only does the “four fundamental forces” meme give a false sense of egalitarianism between the forces, it is also highly misleading for another reason.  Namely, in physics forces are not considered to be “fundamental”.  They are, instead, byproducts of the objects that really are fundamental (to the best of our knowledge): fields.

Read more…

