
Samuel Beckett’s Guide to Particles and Antiparticles

September 24, 2015

Note: the following is the third in a series of five blog posts that I was invited to contribute to the blog ribbonfarm on the topic of particles and fields.  You can go here to see the original post, which contains some discussion in the comments.

 

I was 12 years old when I first encountered this quote by Samuel Beckett:

“Every word is like an unnecessary stain on silence and nothingness.”

That quote impressed me quite a bit at the time. It appeared to my young self to be simultaneously profound, important, and impossible to understand. Now, nineteen years later, I’m still not sure I understand what Beckett meant by that short sentence. But I nonetheless find that its dark Zen has worked itself into me indelibly.

The Beckett quote comes to mind in particular as I sit down to write again about quantum field theory (QFT). QFT, to recap, is the science of describing particles, the most basic building blocks of matter. QFT concerns itself with how particles move, how they interact with each other, how they arise from nothingness, and how they disappear into nothingness again. As a framing idea or motif for QFT, I can’t resist presenting an adaptation of Beckett’s words as they might apply to the idea of particles and fields:

“Every particle is an unnecessary defect in a smooth and featureless field.”

Of course, it is not my intention to depress anyone with existential philosophy. But in this post I want to introduce, in a pictorial way, the idea of particles as defects. The discussion will allow me to draw some fun pictures, and also to touch on some deeper questions in physics like “what is the difference between matter and antimatter?”, “what is meant by rest mass energy?”, “what are fermions and bosons?”, and “why does the universe have matter instead of nothing?”

***

Let’s start by imagining that you have screwed up your zipper.

A properly functioning zipper, in the pictorial land of this blog post, looks like this:

smooth_zipper

But let’s say that your zipper has become dysfunctional, perhaps because of an overly hasty zip, and now looks more like this:

screwed_zipper

This zipper is in a fairly unhappy state. There is a zipping defect right in the middle of it: the two teeth above the letter “B” have gotten twisted around each other, and now all the zipper parts in the neighborhood of that pair are bending and bulging with stress. You could relieve all that stress, with a little work, by pulling the two B teeth back around each other.

But maybe you don’t want to fix it. You could, instead, push the two teeth labeled “A” around each other, and similarly for the two “C” teeth. Then the zipper would end up in a state like this:

zipper_defects-apart

Now you may notice that your zipping error looks not like one defect, but like two defects that have become separated from each other. The first defect is a spot where two upper teeth are wedged between an adjacent pair of lower teeth (this is centered more or less around the number “1”). The second defect is a spot where two lower teeth are wedged between two upper teeth (“2”).

You can continue the process of moving the defects away from each other, if you want. Just keep braiding the teeth on the outside of each defect around each other. After a long while of this process, you might end up with something that looks like this:

zipper_defects-far_apart

In this picture the two types of defects have been moved so far from each other that you can sort of forget that they came from the same place. You can now describe them independently, if you want, in terms of how hard it is to move them around and how much stress they create in the zipper. If you ever bring them back together again, though, the two defects will eliminate each other, and the zipper will be healed.

My contention in this post is that what we call particles and antiparticles are something like those zipper defects. Empty space (the vacuum) is like an unbroken zipper, with all the teeth sewed up in their proper arrangement. In this sense, empty space can be called “smooth”, or “featureless”, but it cannot really be said to have nothing in it. The zipper is in it, and with the zipper comes the potential for creating pairs of equal and opposite defects that can move about as independent objects. The potential for defects, and all that comes with them, is present in the zipper itself.

Like the zipper, the quantum fields that pervade all of space encode within themselves the potential for particles and antiparticles, and dictate the rules of how they behave. Creating those particles and antiparticles may be difficult, just as moving two teeth around each other in the zipper can be difficult, and such creation results in lots of “stress” in the field. The total amount of stress created in the field is the analog of the rest mass energy of a particle (as defined by Einstein’s famous  E = mc^2 , which says that a particle with large mass takes a lot of energy to create). Once created, the particles and antiparticles can move away from each other as independent objects, but if they ever come back together all of their energy is released, and the field is healed.

Since the point of this post is to be a “picture book”, let me offer a couple more visual analogies for particles and antiparticles.  While the zipper example is more or less my own invention, the following examples come from actual field theory.

Imagine now a long line of freely-swinging pendulums, all affixed to a central axle.  And let’s say that you tie the ends of adjacent pendulums together with elastic bands. Perhaps something like this:

pendulum_line-cropped

In its rest state, this field will have all its pendulums pointing downward. But consider what would happen if someone were to grab one of the pendulums in the middle of the line and flip it around the axle. This process would create two defects, or “kinks”, in the line of pendulums. One defect is a 360 degree clockwise flip around the axis, and the other is a 360 degree counterclockwise flip. Something like this:

pendulum_soliton

As with the zipper, each of these kinks represents a sort of frustrating state for the field. The universe would prefer for all those pendulums to be pulled downward with gravity, but when there is a kink in the line this is impossible. Consequently, there is a large (“rest mass”) energy associated with each kink, and this energy can only be released when two opposite kinks are brought together.

(By the way, the defects in this “line of pendulums” example are examples of what we call solitons.  Their motion is described by the so-called Sine-Gordon equation.  You can go on YouTube and watch a number of videos of people playing with these kinds of things.)
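For the curious, the equation is simple enough to write down. In dimensionless units, with \phi the winding angle of the pendulum line and the wave speed set to one, the Sine-Gordon equation and its kink solution read

\partial_t^2 \phi - \partial_x^2 \phi + \sin\phi = 0, \qquad \phi_{\text{kink}}(x,t) = 4 \arctan\left[ \exp\left( \pm \frac{x - v t}{\sqrt{1 - v^2}} \right) \right],

where the \pm sign distinguishes a kink (a full 360^\circ twist one way) from an antikink (a twist the other way), and v is the speed at which the defect glides along the line.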

In case you’re starting to worry that these kinds of particle-antiparticle images are only possible in one dimension, let me assuage your fears by offering one more example, this time in two dimensions. Consider a field that is made up of arrows pointing in the 2D plane. These arrows have no preferred direction that they like to point in, but each arrow likes to point in the same direction as its neighbors. In other words, there is an energetic cost to having neighboring arrows point in different directions. Consequently, the lowest energy arrangement for the field looks something like this:

aligned_arrows

If an individual arrow is wiggled slightly out of alignment with its neighbors, the situation can be righted easily by nudging it back into place. But it is possible to make big defects in the field that cannot be fixed without a painful, large-scale rearrangement. Like this vortex:

vortex

Or this configuration, which is called an antivortex:

antivortex

The reason for the name antivortex is that a vortex and antivortex are in a very exact sense opposite partners to each other, meaning that they are created from the vacuum in pairs and they can destroy each other when brought together. Like this:

 vortex-antivortex

(A wonky note: this “field of arrows” is what one calls a vector field, as opposed to a scalar field. What I have described is known as the XY model. It will make an appearance in my next post as well.)
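If you'd like to draw these arrow patterns yourself, a minimal sketch is to give every point an arrow angle equal to the winding number times the polar angle; the grid size and plotting details below are just arbitrary choices.

import numpy as np
import matplotlib.pyplot as plt

# Arrow pattern for a defect with winding number q:
# q = +1 gives a vortex, q = -1 gives an antivortex.
def arrow_field(q, n=15, extent=3.0):
    x, y = np.meshgrid(np.linspace(-extent, extent, n),
                       np.linspace(-extent, extent, n))
    angle = q * np.arctan2(y, x)          # arrow direction at each grid point
    return x, y, np.cos(angle), np.sin(angle)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, q, title in zip(axes, [+1, -1], ["vortex", "antivortex"]):
    x, y, u, v = arrow_field(q)
    ax.quiver(x, y, u, v, pivot="mid")
    ax.set_title(title)
    ax.set_aspect("equal")
plt.show()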

***

Now that you have some pictures, let me use them as a backdrop for some deeper and more general ideas about QFT.

The first important idea that you should remember is that in a quantum field, nothing is ever allowed to be at rest. All the pieces that make up the field are continuously jittering back and forth: the teeth of the zipper are rattling around and occasionally twisting over each other; the pendulums are swinging back and forth, and on rare occasion swinging all the way over the axle; the arrows are shivering and occasionally making spontaneous vortex-antivortex pairs. In this way the vacuum is never quiet. In fact, it is completely correct to say that from the vacuum there are always spontaneously arising particle-antiparticle pairs, although these usually annihilate each other quickly after appearing. (Which is not to say that they never make their presence felt.)

If you read the previous post, you might also notice a big difference between the way I talk about fields here and the way I talked about them before. The previous post employed much more pastoral language, going on about gentle “ripples” in an infinite quantum “mattress”. But this post uses the harsher imagery of “unhappy defects” that cannot find rest. (Perhaps you should have expected this, since the last post was “A children’s picture book”, while this one is “Samuel Beckett’s Guide”.) But the two types of language were actually chosen to reflect a fundamental dichotomy of the fields of nature.

In particular, the previous post was really a description of what we call bosonic fields (named after the great Indian physicist Satyendra Bose). A bosonic field houses quantized ripples that we call particles, but it admits no concept of antiparticles. All excitations of a bosonic field are essentially the same as all others, and these excitations can blend with each other and overlap and interfere and, generally speaking, happily coincide at the same place and the same time. Equivalently, one can say that bosonic particles are the same as bosonic antiparticles.  In a bosonic field with many excitations, all particles are one merry slosh and there is literally no way of saying how many of them you have. In the language of physics, we say that for bosonic fields the particle number is “not conserved”.

The pictures presented in this post, however, were of fermionic fields (after the Italian Enrico Fermi). The particles of fermionic fields — fermions — are very different objects than bosons. For one thing, there is no ambiguity about their number: if you want to know how many you have, you just need to count how many “kinks” or “vortices” there are in your field (their number is conserved). Fermions also don’t share space well with each other – there is really no way to put two kinks or two vortices on top of each other, since they each have hard “cores”. These properties of fermions, together, imply that they are much more suitable for making solid, tangible matter than bosons are. You don’t have to worry about a bunch of fermions constantly changing their number or collapsing into a big heap. Consequently, it is only fermions that make up atoms (electrons, protons, and neutrons are all fermions), and it is only fermions that typically get referred to as matter.

Of course, bosonic fields still play an important role in nature.  But they appear mostly in the form of so-called force carriers. Specifically, bosons are usually seen only when they mediate interactions between fermions.  This mediation is basically a process in which some fermion slaps the sloshy sea of a bosonic field, and thereby sets a wave in motion that ends up hitting another fermion. It is in this way that our fermionic atoms get held together (or pushed apart), and fermionic matter abides.

Finally, you might be bothered by the idea that particles and antiparticles are always created together, and are therefore seemingly always on the verge of destruction. It is true, of course, that a single particle by itself is perfectly stable. But if every particle is necessarily created together with the agent of its own destruction, an antiparticle, then why isn’t any given piece of matter subject to being annihilated at any moment? Why do the solid, matter-y things that we see around us persist for so long? Why isn’t the world plagued by randomly-occurring atomic blasts?

In other words, where are all the anti-particles?

The best I can say about this question is that it is one of the biggest puzzles of modern physics. (It is often, boringly, called the “baryon asymmetry” problem; I might have called it the “random atomic bombs” problem.) To use the language of this post, we somehow ended up in a universe, or at least a neighborhood of the universe, where there are more “kinks” than “antikinks”, or more “vortices” than “antivortices”.  This observation brings up a rabbit hole of deep questions. For example, does it imply that there is some asymmetry between matter and antimatter that we don’t understand? Or are we simply lucky enough to live in a suburb of the universe where one type of matter predominates over the other? How unlikely would that have to be before it seems too unlikely to swallow?

And, for that matter, are we even allowed to use the fact of our own existence as evidence for a physical law? After all, if matter were equally common as antimatter, then no one would be around to ask the question.

And perhaps Samuel Beckett would have preferred it that way.

beckett

 

Draw me a picture of a Cooper pair

September 13, 2015

As part of a friendly blog post exchange, I just contributed a guest post to the condensed matter physics blog This Condensed Life.  My goal was to explain how to think about Cooper pairs, which are the paired states of electrons that enable superconductivity.

You can read the post here.

 

As a teaser, I’ll give you my preferred picture of a Cooper pair:

standing_wave

which I think is an upgrade over the typical illustrations.

 

Guest Post: If You Walk in A Closed Loop, Do You End Up Where You Started?

September 10, 2015

Editor’s note: The following is a guest post contributed by Anshul Kogar.  Anshul is a postdoctoral researcher in experimental condensed matter physics who currently splits his time between Argonne National Laboratory and the University of Illinois at Urbana-Champaign.  He maintains the blog This Condensed Life, which discusses conceptual ideas and recent developments in condensed matter physics.

 

This may surprise you, but the answer to the question in the title leads to some profound quantum mechanical phenomena that fall under the umbrella of the Berry or geometric phase, the topic I’ll be addressing here. There are also classical manifestations of the same kind of effects, a prime example being the Foucault pendulum.

Let’s now return to the original question. Ultimately, whether or not one returns to oneself after walking in a closed loop depends on what you are and where you’re walking, as I’ll describe below. The geometric phase is easier to visualize classically, so let me start there. Let’s consider a boy, Raj, who is pictured below:

Raj

Now, Raj, for whatever reason, wants to walk in a rectangle (a closed loop). But he has one very strict constraint: he can’t turn/twist his body while he’s walking. So if Raj starts his walk in the top right corner of the rectangle (pictured below) and then walks forward normally, he has to start side-stepping when he reaches the bottom right corner to walk to the left. Similarly, on the left side of the rectangle, he is constrained to walk backwards, and on the top side of the rectangle, he must side-step to the right. The arrow in the diagram below is supposed to indicate the direction which Raj faces as he walks. RectangleWhen Raj returns to the top right corner, he ends up exactly in the same place that he did when he started — not very profound at all! But now let’s consider the case where Raj is not walking on a plane, but walking on the surface of a sphere:Sphere

Again, the arrow is supposed to indicate the direction that Raj is facing. This time, Raj starts his trek at the north pole, heads to Quito, Ecuador on the equator, then continues his walk along the equator and heads back up to the north pole. Notice that on this journey, even though Raj obeys the non-twisting constraint, he ends up facing a different direction when he returns to the north pole! Even though he has returned to the same position, something is slightly different. We call this difference anholonomy.
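(A quantitative aside, not strictly needed for what follows: the size of the mismatch has a tidy geometric formula. A vector carried around a closed loop on a sphere without twisting comes back rotated by an angle equal to the solid angle enclosed by the loop, \theta = \Omega, with the solid angle in steradians and the angle in radians. So if Raj’s route covered a quarter of the equator, his loop encloses one eighth of the sphere, \Omega = 4\pi/8 = \pi/2, and he returns rotated by 90^\circ.)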

Why did anholonomy result in the spherical case and not the flat rectangle case? Amazingly, it turns out to be related to the hairy ball theorem (don’t ask me how it got its name). Crudely speaking, the hairy ball theorem states that if you have a ball covered with hairs, you can’t comb the hairs straight without leaving at least one little bald spot or tuft. In the image below you can see a little tuft on both the top and bottom of the sphere:

The sphere which Raj traversed had a little bald spot at the north pole, leading to the anholonomy.

Now before moving on to the Foucault pendulum, I want to explicitly state the items that were critical in obtaining the anholonomy for the spherical case: (i) The object that is transported must have a direction (i.e. be a vector); (ii) The object must be transported on a surface on which one cannot properly comb hair.

Now, how does the Foucault pendulum get tied up in all this? Well, the Foucault pendulum, swinging in Paris, does not oscillate in the same plane after the earth makes a full 24-hour rotation. This difference in angle between the original and the next-day plane of oscillation is also an anholonomy, except the earth rotates instead of the pendulum taking a walk. Check out this great animation from Wikipedia below:

If the pendulum were at the north (south) pole, it would come back to itself after a 24-hour rotation of the earth, \theta=360^\circ (\theta=-360^\circ). If the pendulum were at the equator, it would not change its oscillation plane at all (\theta=0^\circ). Now, depending on the latitude in the northern hemisphere where a Foucault pendulum is set up, the pendulum will make an angle between 0^\circ and 360^\circ after the earth makes its daily rotation. The north pole case, in light of the animation above, can be inferred from this image by imagining the earth rotating about its vertical axis:
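Quantitatively, the standard result is that the daily precession interpolates between those two extremes as the sine of the latitude. Here is a quick sketch (my own illustration, with the listed locations chosen just as examples):

import math

# Daily precession angle of a Foucault pendulum at a given latitude:
# 360 degrees at the pole, 0 at the equator, 360*sin(latitude) in between.
def daily_precession(lat_degrees):
    return 360.0 * math.sin(math.radians(lat_degrees))

for place, lat in [("North pole", 90.0), ("Paris", 48.85), ("Quito", 0.0)]:
    print(f"{place:12s} {daily_precession(lat):6.1f} degrees per day")
# Paris comes out near 271 degrees per day, i.e. about 11 degrees per hour.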

The concept of anholonomy in the quantum mechanical case can actually be pictured quite similarly to the classical case described above. In quantum mechanics, we describe particles using a wavefunction, which in a very basic sense is also a vector. The vector does not exist in “real space” but in what physicists refer to as “Hilbert space”. Nonetheless, the geometrical game I played with classical anholonomy can also be played in this abstract “Hilbert space”. The main difference is that the anholonomy angle becomes a phase factor in the quantum realm. The correspondence goes as follows:

(Classical)     \theta \rightarrow \mathrm{e}^{i\theta}     (Quantum)

The expression on the right is precisely the Berry/geometric phase.  In the quantum case, regarding the two criteria above, we already have (i) a “vector” in the form of the wavefunction — so all we need is (ii) an appropriate surface on which hair cannot be combed straight. It turns out that in quantum mechanics, there are many ways to do this, but the most famous is undoubtedly the case of the Aharonov-Bohm effect.

In the Aharonov-Bohm experiment, one prepares a beam of electrons, splits the beam and passes the two halves on either side of a solenoid, recombining them on the other side. A schematic of this experimental setup is shown below:

ABEffect

In the image, B labels the magnetic field and A is the vector potential. While many readers are probably familiar with the magnetic field, the vector potential may not be as household a concept. The vector potential was originally thought up by Maxwell, and considered to be a mathematical oddity. He realized that one could obtain B by measuring the curl of A at each point in space, but A was not given any physical meaning.

Now, it doesn’t immediately seem like this experiment would give one a geometric phase, especially considering the fact that the magnetic field outside the solenoid is zero. But let’s take a look at the pattern for the vector potential outside the solenoid (top-down view):

Interestingly, the vector potential outside the solenoid looks like it would have a tuft in the center! Criterion (ii) may therefore be met. The one question left to be answered is this: can the vector potential actually “rotate” the electron wavefunction (or “vector”)? The answer to that question deserves a post to itself, and perhaps Brian or I can fill that hole in the future, but the answer seems to be emphatically in the affirmative.

The equation describing the relationship between the anholonomy angle and the vector potential is:

\theta = \alpha \oint \textbf{A}(\textbf{r})\cdot d\textbf{r}

where \theta is the rotation (or anholonomy) angle, the integral is over the closed loop of the electron path, and \alpha is just a proportionality constant.

The way to think about the equality is like this: d\textbf{r} is an infinitesimal “step” that the electron takes, much in the way that Raj took steps earlier. At each step the wavefunction is rotated a little compared to the previous step by an amount \textbf{A}(\textbf{r})\cdot d\textbf{r}, dictated by the vector potential. When I add up all the little rotations caused by \textbf{A}(\textbf{r}) over the entire path of the electrons, I get the integral around the closed loop.
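You can check the spirit of this loop integral numerically. The sketch below (my own illustration, not part of the original argument) uses the vector potential of an idealized thin solenoid, whose magnitude falls off as 1/r outside the core; the integral around any circular loop enclosing the solenoid comes out equal to the enclosed flux, regardless of the loop’s radius:

import numpy as np

flux = 2.0   # enclosed magnetic flux, an arbitrary illustrative value

def loop_integral(radius, n_steps=200000):
    """Accumulate A(r) . dr around a circle of the given radius."""
    t = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    dt = 2.0 * np.pi / n_steps
    x, y = radius * np.cos(t), radius * np.sin(t)                # points on the loop
    dx, dy = -radius * np.sin(t) * dt, radius * np.cos(t) * dt   # the little steps dr
    # Vector potential outside an ideal solenoid: A = (flux / 2 pi r^2) * (-y, x)
    r2 = x ** 2 + y ** 2
    Ax = -flux * y / (2.0 * np.pi * r2)
    Ay = flux * x / (2.0 * np.pi * r2)
    return np.sum(Ax * dx + Ay * dy)

for radius in [0.5, 1.0, 3.0]:
    print(radius, loop_integral(radius))   # each prints ~2.0, the enclosed flux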

Now that we have the anholonomy angle, we need to use the classical \rightarrow quantum relation from above. This gives us a phase difference of \mathrm{e}^{i\theta} between the electrons that go to the right and left of the solenoid. Whenever there is a non-zero phase difference, one should always be able to measure it using an interference experiment — and this is indeed the case here.

An experiment consisting of an electron beam fired at a double-slit interference setup coupled with a solenoid demonstrates this interference effect most profoundly. On the setup to the left, the usual interference pattern is set up due to the path length difference of the electrons. On the setup to the right, the entire pattern is shifted because of the extra phase factor from the anholonomy angle.

AB Double Slit

Again, let me emphasize that there is no magnetic field in the region where the electrons travel. This effect is due purely to the geometric effect of the anholonomy angle, a.k.a. the Berry phase, and the geometric effect arises in relation to the swirly tuft of hair!

So next time you’re taking a long walk, think about how much the earth has rotated while you’ve been walking and whether you really end up where you started — chances are that something’s just a little bit different.

1 is more important than 9: Benford’s Law

August 26, 2015

Let’s start like this: think of some number that describes nature, or any object in it.  It can be any mathematical or physical constant or measurement, in any system of units.

Got one?

I predict, using my psychic powers, that you were much more likely to have thought of a number that begins with 1, 2, or 3 rather than a number that begins with 7, 8, or 9.

As it turns out, the probability is about four times higher. In fact, the probability of having a particular first digit decreases monotonically with the value of the digit (1 is a more common first digit than 2, 2 is more common than 3, and so on).  And the odds of you having picked a number that starts with 1 are about seven times higher than the odds of you having picked a number that starts with 9.

 

This funny happenstance is part of a larger observation called Benford’s law.  Broadly speaking, Benford’s law says that the lower counting numbers (like 1, 2, and 3) are disproportionately likely to be the first digit of naturally-occurring numbers.

In this post I’ll talk a little bit about Benford’s law, its quantitative form, and how one can think about it.

But first, as a fun exercise, I decided to see whether Benford’s law holds for the numbers I personally tend to use and care about.

(Here I feel I must pause to acknowledge how deeply, ineluctably nerdy that last sentence reveals me to be.)

So I made a list of the physical constants that I tend to think about — or, at least, of the ones that occurred to me at the moment of making the list.  These are presented below in no particular order, and with no particular theme or guarantee for completeness and non-redundancy (i.e., some of the constants on this list can be made by combining others).

constants_table

After a quick look-over, it’s pretty clear that this table has a lot more numbers starting with 1 than numbers starting with 9.  A histogram of first digits in this table looks like this:

my constants

Clearly, there are more small digits than large digits.  (And somehow I managed to avoid any numbers that start with 4.  This is perhaps revealing about me.)

 

As far as I can tell, there is no really satisfying proof of Benford’s law.  But if you want to get some feeling for where it comes from, you can notice that those numbers on my table cover a really wide range of values: ranging in scale from 10^{-35} (the Planck length) to 10^{30} (the sun’s mass).  (And no doubt they would cover a wider range if I were into astronomy.)  So if you wanted to put all those physical constants on a single number line, you would have to do it in logarithmic scale.  Like this:

number line

The funny thing about a logarithmic scale, though, is that it distorts the real line, giving more length to numbers beginning with lower integers.  For example, here is the same line from above, zoomed in to the interval between 1 and 10:

number line - zoomed

You can see in this picture that the interval from 1 to 2 is much longer than the interval from 9 to 10.  (And, just to remind you, the general rule for logarithmic scales is that the same interval separates any two numbers with the same ratio.  So, for example, 1 and 3 are as far from each other as 2 and 6, or 3 and 9, or 500 and 1500.)  If you were to choose a set of numbers by randomly throwing darts at a logarithmic scale, you would naturally get more 1’s and 2’s than 8’s and 9’s.

What this implies is that if you want a quantitative form for Benford’s law, you can just compare the lengths of the different intervals on the logarithmic scale.  This gives:

P(d) = \log_{10}(1 + 1/d),

where d is the value of the first digit and P(d) is the relative abundance of that digit.
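Plugging numbers into this formula shows where the odds quoted at the top of the post come from. A quick check (my own addition, nothing deep):

import math

def benford(d):
    return math.log10(1 + 1 / d)    # probability that the first digit is d

print([round(benford(d), 3) for d in range(1, 10)])
# [0.301, 0.176, 0.125, 0.097, 0.079, 0.067, 0.058, 0.051, 0.046]
print(benford(1) / benford(9))      # about 6.6: "seven times higher" for 1 versus 9
print(sum(benford(d) for d in (1, 2, 3)) / sum(benford(d) for d in (7, 8, 9)))
# about 3.9: "four times higher" for {1, 2, 3} versus {7, 8, 9}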

 

If you have a large enough data set, this quantitative form of Benford’s law tends to come through pretty clearly.  For example, if you take all 335 entries from the list of physical constants provided by NIST, then you find that the abundance of different first digits is described by the formula above with pretty good quantitative accuracy:

NIST-Benford

 

Now, if you don’t like the image of choosing constants of nature by throwing darts at a logarithmic scale, let me suggest another way to see it: Benford’s law is what you’d get as the result of a random walk using multiplicative steps.

In the conventional random walk, the walker steps randomly to the right or left with steps of constant length, and after a long time ends up at a random position on the number line.  But imagine instead that the random walker takes steps of constant multiplicative value — for example, at each “step” the walker could have his position multiplied by either 2/3 or 3/2.  This would correspond to steps that appeared to have constant length on the logarithmic scale.  Consequently, after many steps the walker would have a random position on the logarithmic axis, and so would be more likely to end up in one of those wider 1–2 bins than in the shorter 8–9 bins.

The upshot is that one way to think about Benford’s law is that the numbers we have arise from a process of multiplying many other “randomly chosen” numbers together.  This multiplication naturally skews our results toward numbers that begin with low digits.
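Here is a bare-bones simulation of that multiplicative random walk (the step factors of 2/3 and 3/2, and the number of walkers and steps, are all arbitrary choices); the tally of first digits should land close to the Benford distribution:

import math
import random
from collections import Counter

random.seed(0)

def first_digit(x):
    return int(f"{x:e}"[0])      # leading digit of x written in scientific notation

counts = Counter()
n_walkers, n_steps = 20000, 200
for _ in range(n_walkers):
    position = 1.0
    for _ in range(n_steps):
        position *= random.choice([2.0 / 3.0, 3.0 / 2.0])   # multiplicative step
    counts[first_digit(position)] += 1

for d in range(1, 10):
    print(d, counts[d] / n_walkers, round(math.log10(1 + 1 / d), 3))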

 

By the way, for me the notion of “randomly multiplying numbers together” immediately brings to mind the process of doing homework as an undergraduate.  This inspired me to grab a random physics book off my shelf (which happened to be Tipler and Llewellyn’s Modern Physics, 3rd edition) and check the solutions to the homework problems in the back.

Sure enough:

homework_problems

So the next time you find yourself trying to randomly guess answers, remember Benford’s law.

 

 

A children’s picture-book introduction to quantum field theory

August 20, 2015

Note: the following is the second in a series of blog posts that I was invited to contribute to the blog ribbonfarm.  You can go here to see the original post, which contains quite a bit of discussion in the comments. 

 

First of all, don’t panic.

I’m going to try in this post to introduce you to quantum field theory, which is probably the deepest and most intimidating set of ideas in graduate-level theoretical physics.  But I’ll try to make this introduction in the gentlest and most palatable way I can think of: with simple-minded pictures and essentially no math.

To set the stage for this first lesson in quantum field theory, let’s imagine, for a moment, that you are a five-year-old child.  You, the child, are talking to an adult, who is giving you one of your first lessons in science.  Science, says the adult, is mostly a process of figuring out what things are made of.  Everything in the world is made from smaller pieces, and it can be exciting to find out what those pieces are and how they work.  A car, for example, is made from metal pieces that fit together in specially-designed ways.  A mountain is made from layers of rocks that were pushed up from inside the earth.  The earth itself is made from layers of rock and liquid metal surrounded by water and air.

This is an intoxicating idea: everything is made from something.

So you, the five-year-old, start asking audacious and annoying questions. For example:

What are people made of?
People are made of muscles, bones, and organs.
Then what are the organs made of?
Organs are made of cells.
What are cells made of?
Cells are made of organelles.
What are organelles made of?
Organelles are made of proteins.
What are proteins made of?
Proteins are made of amino acids.
What are amino acids made of?
Amino acids are made of atoms.
What are atoms made of?
Atoms are made of protons, neutrons, and electrons.
What are electrons made of?
Electrons are made from the electron field.
What is the electron field made of?

And, sadly, here the game must come to an end, eight levels down.  This is the hard limit of our scientific understanding.  To the best of our present ability to perceive and to reason, the universe is made from fields and nothing else, and these fields are not made from any smaller components.

But it’s not quite right to say that fields are the most fundamental thing that we know of in nature.  Because we know something that is in some sense even more basic: we know the rules that these fields have to obey.  Our understanding of how to codify these rules came from a series of truly great triumphs in modern physics.  And the greatest of these triumphs, as I see it, was quantum mechanics.

In this post I want to try and paint a picture of what it means to have a field that respects the laws of quantum mechanics.  In a previous post, I introduced the idea of fields (and, in particular, the all-important electric field) by making an analogy with ripples on a pond or water spraying out from a hose.  These images go surprisingly far in allowing one to understand how fields work, but they are ultimately limited in their correctness because the implied rules that govern them are completely classical.  In order to really understand how nature works at its most basic level, one has to think about a field with quantum rules.

***

The first step in creating a picture of a field is deciding how to imagine what the field is made of.  Keep in mind, of course, that the following picture is mostly just an artistic device.  The real fundamental fields of nature aren’t really made of physical things (as far as we can tell); physical things are made of them.  But, as is common in science, the analogy is surprisingly instructive.

So let’s imagine, to start with, a ball at the end of a spring.  Like so:

mass_on_a_spring

This is the object from which our quantum field will be constructed.  Specifically, the field will be composed of an infinite, space-filling array of these ball-and-springs.

spring_field

To keep things simple, let’s suppose that, for some reason, all the springs are constrained to bob only up and down, without twisting or bending side-to-side.  In this case the array of springs can be called, using the jargon of physics, a scalar field.  The word “scalar” just means a single number, as opposed to a set or an array of multiple numbers.  So a scalar field is a field whose value at a particular point in space and time is characterized only by a single number.  In this case, that number is the height of the ball at the point in question.  (You may notice that what I described in the previous post was a vector field, since the field at any given point was characterized by a velocity, which has both a magnitude and a direction.)

In the picture above, the array of balls-and-springs is pretty uninteresting: each ball is either stationary or bobs up and down independently of all others.  In order to make this array into a bona fide field, one needs to introduce some kind of coupling between the balls.  So, let’s imagine adding little elastic bands between them:

mattress

Now we have something that we can legitimately call a field.  (My quantum field theory book calls it a “mattress”.)  If you disturb this field – say, by tapping on it at a particular location – then it will set off a wave of ball-and-spring oscillations that propagates across the field.  These waves are, in fact, the particles of field theory.  In other words, when we say that there is a particle in the field, we mean that there is a wave of oscillations propagating across it.

These particles (the oscillations of the field) have a number of properties that are probably familiar from the days when you just thought of particles as little points whizzing through empty space.  For example, they have a well-defined propagation velocity, which is related to the weight of each of the balls and the tightness of the springs and elastic bands.  This characteristic velocity is our analog of the “speed of light”.  (More generally, the properties of the springs and masses define the relationship between the particle’s kinetic energy and its propagation velocity, like the  KE = \frac{1}{2}mv^2 of your high school physics class.)  The properties of the springs also define the way in which particles interact with each other.  If two particle-waves run into each other, they can scatter off each other in the same way that normal particles do.

(A technical note: the degree to which the particles in our field scatter upon colliding depends on how “ideal” the springs are.  If the springs are perfectly described by Hooke’s law, which says that the restoring force acting on a given ball is linearly proportional to the spring’s displacement from equilibrium, then there will be no interaction whatsoever.  For a field made of such perfectly Hookean springs, two particle-waves that run into each other will just go right through each other.  But if there is any deviation from Hooke’s law, such that the springs get stiffer as they are stretched or compressed, then the particles will scatter off each other when they encounter one another.)
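In symbols, the distinction in that note is between a purely Hookean spring energy and one with a small stiffening correction (the \lambda u^4 term below is just a generic way of writing “deviation from Hooke’s law”):

V(u) = \tfrac{1}{2} k u^2 \quad \text{(ideal spring: particle-waves pass straight through each other)}

V(u) = \tfrac{1}{2} k u^2 + \lambda u^4 \quad \text{(stiffening spring: particle-waves scatter off each other)}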

Finally, the particles of our field clearly exhibit “wave-particle duality” in a way that is easy to see without any philosophical hand-wringing.  That is, our particles by definition are waves, and they can do things like interfere destructively with each other or diffract through a double slit.

All of this is very encouraging, but at this point our fictitious field lacks one very important feature of the real universe: the discreteness of matter.  In the real world, all matter comes in discrete units: single electrons, single photons, single quarks, etc.  But you may notice that for the spring field drawn above, one can make an excitation with completely arbitrary magnitude, by tapping on the field as gently or as violently as one wants.  As a consequence, our (classical) field has no concept of a minimal piece of matter, or a smallest particle, and as such it cannot be a very good analogy to the actual fields of nature.

***

To fix this problem, we need to consider that the individual constituents of the field – the balls mounted on springs – are themselves subject to the laws of quantum mechanics.

A full accounting of the laws of quantum mechanics can take some time, but for the present pictorial discussion, all you really need to know is that a quantum ball on a spring has two rules that it must follow.  1) It can never stop moving, but instead must be in a constant state of bobbing up and down.  2) The amplitude of the bobbing motion can only take certain discrete values.

oscillator_quanta

This quantization of the ball’s oscillation has two important consequences.  The first consequence is that, if you want to put energy into the field, you must put in at least one quantum.  That is, you must give the field enough energy to kick at least one ball-and-spring into a higher oscillation state.  Arbitrarily light disturbances of the field are no longer allowed. Unlike in the classical case, an extremely light tap on the field will produce literally zero propagating waves.  The field will simply not accept energies below a certain threshold.  Once you tap the field hard enough, however, a particle is created, and this particle can propagate stably through the field.
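The rule being invoked here is the standard quantization of a harmonic oscillator: a ball-and-spring with natural frequency \omega is only allowed the energies

E_n = \hbar \omega \left( n + \tfrac{1}{2} \right), \qquad n = 0, 1, 2, \ldots

so the smallest amount of energy you can add at any one spot is a single quantum, \hbar \omega; anything less, and the field simply doesn’t take it.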

This discrete unit of energy that the field can accept is what we call the rest mass energy of particles in a field.  It is the fundamental amount of energy that must be added to the field in order to create a particle.  This is, in fact, how to think about Einstein’s famous equation E = mc^2 in a field theory context.  When we say that a fundamental particle is heavy (large mass  m ), it means that a lot of energy has to be put into the field in order to create it.  A light particle, on the other hand, requires only a little bit of energy.

(By the way, this is why physicists build huge particle accelerators whenever they want to study exotic heavy particles.  If you want to create something heavy like the Higgs boson, you have to hit the Higgs field with a sufficiently large (and sufficiently concentrated) burst of energy to give the field the necessary one quantum of energy.)

The other big implication of imposing quantum rules on the ball-and-spring motion is that it changes pretty dramatically the meaning of empty space.  Normally, empty space, or vacuum, is defined as the state where no particles are around.  For a classical field, that would be the state where all the ball-and-springs are stationary and the field is flat.  Something like this:

flat_field

But in a quantum field, the ball-and-springs can never be stationary: they are always moving, even when no one has added enough energy to the field to create a particle.  This means that what we call vacuum is really a noisy and densely energetic surface:

choppy_field

This random motion (called vacuum fluctuations) has a number of fascinating and eminently noticeable influences on the particles that propagate through the vacuum.  To name a few, it gives rise to the Casimir effect (an attraction between parallel surfaces, caused by vacuum fluctuations pushing them together) and the Lamb shift (a shift in the energy of atomic orbits, caused by the electron getting buffeted by the vacuum).

In the jargon of field theory, physicists often say that “virtual particles” can briefly and spontaneously appear from the vacuum and then disappear again, even when no one has put enough energy into the field to create a real particle.  But what they really mean is that the vacuum itself has random and indelible fluctuations, and sometimes their influence can be felt by the way they kick around real particles.

That, in essence, is a quantum field: the stuff out of which everything is made.  It’s a boiling sea of random fluctuations, on top of which you can create quantized propagating waves that we call particles.

I only wish, as a primarily visual thinker, that the usual introduction to quantum field theory didn’t look quite so much like this.  Because behind the equations of QFT there really is a tremendous amount of imagination, and a great deal of wonder.

 

Where do electric forces come from?

June 23, 2015

Note: what follows is a reproduction of a post I wrote for the blog ribbonfarm.  You can go here to see the original post, which contains some additional discussion in the comments.

 

There’s a good chance that, at some point in your life, someone told you that nature has four fundamental forces: gravity, the strong nuclear force, the weak nuclear force, and the electromagnetic force.

This factoid is true, of course.

But what you probably weren’t told is that, at the scale of just about any natural thing that you are likely to think about, only one of those four forces has any relevance.  Gravity, for example, is so obscenely weak that one has to collect planet-sized balls of matter before its effect becomes noticeable.  At the other extreme, the strong nuclear force is so strong that it can never go unneutralized over distances larger than a few times the diameter of an atomic nucleus ( \sim 10^{-15} meters); any larger object will essentially never notice its existence.  Finally, the weak nuclear force is extremely short-ranged, so that it too has effectively no influence over distances larger than  \sim 10^{-15} meters.

That leaves the electromagnetic force, or, in other words, the Coulomb interaction.  This is the familiar law that says that like charges repel each other and opposites attract.  This law alone dominates the interactions between essentially all objects larger than an atomic nucleus ( 10^{-15} meters) and smaller than a planet ( 10^{7} meters).  That’s more than twenty powers of ten.

But not only does the “four fundamental forces” meme give a false sense of egalitarianism between the forces, it is also highly misleading for another reason.  Namely, in physics forces are not considered to be “fundamental”.  They are, instead, byproducts of the objects that really are fundamental (to the best of our knowledge): fields.


Where does magnetism come from?

April 19, 2015

Magnets are complicated.

This fact is pretty well illustrated by how hard it is to predict whether a given material will make a good magnet or not. For example, if someone tells you the chemical composition of some material and then asks you “will it be magnetic?”, you (along with essentially all physicists) will have a hard time answering.  That’s because whether a material behaves like a magnet usually depends very sensitively on things like the crystal structure of the material, the valence of the different atoms, and what kind of defects are present.  Subtle changes to any of these things can make the difference between having a strong magnet and having an inert block.  Consequently, a whole scientific industry has grown up around the question “will X material be magnetic?”, and it keeps thousands of scientists gainfully employed.

On the other hand, there is a pretty simple answer to the more general question “where does magnetism come from?”  And, in my experience, this answer is not very well-known, despite the obvious public outcry for an explanation.

magnets

 

So here’s the answer:  Magnetism comes from the exchange interaction.

In this post, I want to explain what the “exchange interaction” is, and outline the essential ingredients that make magnets work.  Unless you’re a physicist or a chemist, these ingredients are probably not what you expect.  (And, despite some pretty good work by talented science communicators, I haven’t yet seen a popular explanation that introduces both of them.)

 

Ingredient #1: Electrons are little magnets

 

As I discussed in a recent post, individual electrons are themselves like tiny magnets.  They create magnetic fields around themselves in the same way that a tiny bar magnet would, or that a spinning charged sphere would.  In fact, the “north pole” of an electron points in the same direction as the electron’s “spin”.

(This is not to say that an electron actually is a little spinning charged sphere.  My physics professors always told me not to think about it that way… but sometimes I get away with it anyway.)

The magnetic field created by one electron is relatively strong at very short distances: at a distance of one angstrom, it is as strong as 1 Tesla.  But the strength of that field decays quickly with distance, as 1/(\text{distance})^3.  At any distance \gtrsim 3 nm away the magnetic field produced by one electron is essentially gone — it’s weaker than the Earth’s magnetic field (tens of microTesla).
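Those numbers come straight from a back-of-the-envelope dipole estimate, which you can reproduce in a few lines (this sketch uses the on-axis dipole formula; other directions differ only by factors of order one):

import math

mu_0 = 4 * math.pi * 1e-7    # vacuum permeability, in T*m/A
mu_B = 9.274e-24             # Bohr magneton, the electron's magnetic moment, in J/T

def dipole_field(r):
    """On-axis field of a magnetic dipole with moment mu_B, at distance r in meters."""
    return mu_0 * 2 * mu_B / (4 * math.pi * r ** 3)

print(dipole_field(1e-10))   # ~1.9 T at one angstrom
print(dipole_field(3e-9))    # ~7e-5 T at 3 nm: tens of microtesla, Earth-field territory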

What this means is that if you want a collection of electrons to act like a magnet then you need to get a large fraction of them to point their spins (their north poles) in the same direction.  For example, if you want a permanent magnet that can create a field of 1 Tesla (as in the internet-famous neodymium magnets), then you need to align about one electron per cubic angstrom (which is more than one electron per atom).

So what is it that makes all those electrons align their spins with each other?  This is the essential puzzle of magnetism.

 

Before I tell you the right answer, let me tell you the wrong answer.  The wrong answer is probably the one that you would first think of.  Namely, that all magnets have a natural pushing/pulling force on each other: they like to align north-to-south.  So too, you might think, will these same electrons push each other into alignment through their north-to-south attraction.  Your elementary/middle school teacher may have even explained magnets to you by showing you a picture that looks like this:

https://i1.wp.com/www.cdn.sciencebuddies.org/Files/6626/11/magetic-domains.jpg

(Never mind the fact that the picture on the right, in addition to its aligned North and South poles, has a bunch of energetically costly North/North and South/South side-by-side pairs.  So it’s not even clear that it has a lower energy than the picture on the left.)

But if you look a little more closely at the magnetic forces between electrons, you quickly see that they are way too weak to matter.  Even at about 1 Angstrom of separation (which is the distance between two neighboring atoms), the energy of magnetic interaction between two electrons is less than 0.0001 electronVolts (or, in units more familiar to chemists, about 0.001 kcal/mol).
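The same kind of estimate backs up the “way too weak” claim (again, just an order-of-magnitude dipole-dipole energy, ignoring numerical factors that depend on how the two moments are oriented):

import math

mu_0, mu_B, eV = 4 * math.pi * 1e-7, 9.274e-24, 1.602e-19

r = 1e-10                                      # one angstrom of separation, in meters
U = mu_0 * mu_B ** 2 / (4 * math.pi * r ** 3)  # dipole-dipole energy scale, in joules
print(U / eV)                                  # ~5e-5 eV at one angstrom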

The lives of electrons, on the other hand, are played out on the scale of about 10 electronVolts: about 100,000 times stronger than the magnetic interaction.  At this scale, there are really only two kinds of energies that matter: the electric repulsion between electrons (which goes hand-in-hand with the electric attraction to nuclei) and the huge kinetic energy that comes from quantum motion.

In other words, electrons within a solid material are feeling enormous repulsive forces due to their electric charges, and they are flying around at speeds of millions of miles per hour.  They don’t have time to worry about the puny magnetic forces that are being applied.

nobody_got_time

So, in cases where electrons decide to align their spins with each other, it must be because it helps them reduce their enormous electric repulsion, and not because it has anything to do with the little magnetic forces.

 

Ingredient #2: Same-spin electrons avoid each other

 

The basic idea behind magnetism is like this: since electrons are so strongly repulsive, they want to avoid each other as much as possible.  In principle, the simplest way to do this would be to just stop moving, but this is prohibited by the rules of quantum mechanics.  (If you try to make an electron sit still, then by the Heisenberg uncertainty principle its momentum becomes very uncertain, which means that it acquires a large velocity.)  So, instead of stopping, electrons try to find ways to avoid running into each other.  And one clever method is to take advantage of the Pauli exclusion principle.

In general, the Pauli principle says that no two electrons can have the same state at the same time.  There are lots of ways to describe “electron state”, but one way to state the Pauli principle is this:

No two electrons can simultaneously have the same spin and the same location.

This means that any two electrons with the same spin must avoid each other.  They are prohibited by the most basic laws of quantum mechanics from ever being in the same place at the same time.

Or, from the point of view of the electrons, the trick is like this: if a pair of electrons points their spin in the same direction, then they are guaranteed to never run into each other.

exchange_interaction_comics

It is this trick that drives magnetism.  In a magnet, electrons point their spins in the same direction, not because of any piddling magnetic field-based interaction, but in order to guarantee that they avoid running into each other.  By not running into each other, the electrons can save a huge amount of repulsive electric energy.  This saving of energy by aligning spins is what we (confusingly) call the “exchange interaction”.

To make this discussion a little more quantitative, one can talk about the probability P(r) that a given pair of electrons will find themselves with a separation r.  For electrons with opposite spin (in a metal), this probability distribution looks pretty flat: electrons with opposite spin are free to run over each other, and they do.

But electrons with the same spin must never be at the same location at the same time, and thus the probability distribution P(r) must go to zero at r = 0; it has a “hole” in it at small r.  The size of that hole is given by the typical wavelength of electron states: the Fermi wavelength \lambda_F \sim n^{-1/3} (where n is the concentration of electrons).  This result makes some sense: after all, the only meaningful way to interpret the statement “two electrons can’t be in the same place at the same time” is to say that “two electrons can’t be within a distance R of each other, where R is the electron size”.  And the only meaningful definition of the electron size is the electron wavelength.
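To attach a number to the size of that hole, here is a rough estimate for a free-electron metal (the copper-like density below is just a representative assumed value):

import math

n = 8.5e28                               # electron concentration in m^-3, roughly copper
k_F = (3 * math.pi ** 2 * n) ** (1 / 3)  # Fermi wavevector of a free-electron gas
lambda_F = 2 * math.pi / k_F             # Fermi wavelength, which scales as n^(-1/3)
print(lambda_F)                          # ~4.6e-10 m: the hole is a few angstroms across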

If you plot two probability distributions together, they look something like this:

exchange_probabilities

If you want to know how much energy the electrons save by aligning their spins, then you can integrate the distribution P(r) multiplied by the interaction energy law V(r) over all possible distances r, and compare the result you get for the two cases.  The “hole” in the orange curve is sometimes called the “exchange hole”, and it means that electrons with the same spin have a weaker average interaction with each other.  This is what drives magnetism.

(In slightly more technical language, the most useful version of the probability P(r) is called the “pair distribution function”, which people spend a lot of time calculating for electron systems.)

 

Epilogue: So why isn’t everything magnetic?

 

To recap, there are two ingredients that produce magnetism:

  1. Electrons themselves are tiny magnets.  For a bulk object to be a magnet, a bunch of the electrons have to point their spins in the same direction.
  2. Electrons like to point their spins in the same direction, because this guarantees that they will never run into each other, and this saves them a lot of electric repulsion energy.

These two features are more-or-less completely generic.  So now you can go back to the first sentence of this post (“Magnets are complicated”) and ask, “wait, why are they complicated?  If electrons universally save on their repulsive energy by aligning their spins, then why doesn’t everything become a magnet?”

The answer is that there is an additional cost that comes when the electrons align their spins.  Specifically, electrons that align their spins are forced into states with higher kinetic energy.

You can think about this connection between spin and kinetic energy in two ways.  The first is that it is completely analogous to the problem of atomic orbitals (or the simpler quantum particle in a box).  In this problem, every allowable state for an electron can hold only two electrons, one in each spin direction.  But if you start forcing all electrons to have the same spin, then each energy level can only hold one electron, and a bunch of electrons get forced to sit in higher energy levels.

The other way to think about the cost of spin polarization is to notice that when you give electrons the same spin, and thereby force them to avoid each other, you are really confining them a little bit more (by constraining their wavefunctions to not overlap with each other).  This extra bit of confinement means that their momentum has to go up (again, by the Heisenberg uncertainty principle), and so they start moving faster.

Either way, it’s clear that aligning the electron spins means that the electrons have to acquire a larger kinetic energy.  So when you try to figure out whether the electrons actually will align their spins, you have to weigh the benefit (having a lower interaction energy) against the cost (having a higher kinetic energy).  A quantitative weighing of these two factors can be difficult, and that’s why so many scientific types can make a living by it.

 

But the basic driver of magnetism is really as simple as this: like-spin electrons do a better job of avoiding each other, and when electrons line up their spins they make a magnet.

So the next time someone asks you “magnets: how do they work?”, you can reply “by the exchange interaction!”  And then you can have a friendly discussion without resorting to profanity or name-calling.

 


Footnotes:

1. The simplest quantitative description of the tradeoff between the interaction energy gained by magnetism and the kinetic energy cost is the so-called Stoner model.  I can write a more careful explanation of it some time if anyone is interested.

2. One thing that I didn’t explicitly bring up (but which most popular descriptions of magnetism do bring up) is that electrons also create magnetic fields by virtue of their orbits around atomic nuclei.  This makes the story a bit more complicated, but doesn’t change it in a fundamental way.  In fact, the magnetic fields created by those orbits are nearly equal in magnitude to the ones created by the electron spin itself, so thinking about them doesn’t change any order-of-magnitude estimates.  (But you will definitely need to think about them if you want to predict the exact strength of the magnetic field in a material.)

3. There is a pretty simple version of magnetization that occurs within individual atoms.  This is called Hund’s rule, which says that when you have a partially-filled atomic orbital, the electrons within the orbital will always arrange themselves so as to maximize the amount of spin alignment.  This “magnetization of a single atom” happens for the same reason that I outlined above: when electron spins align, they do a better job of avoiding each other, and their energy is lower.

4. If I were a good popularizer of science, then I would really go out of my way to emphasize the following point.  The existence of magnetism is a visible manifestation of quantum mechanics.  It cannot be understood without the Pauli exclusion principle, or without thinking about the electron spin.  So if magnets feel a little bit like magic, that’s partly because they are a startling manifestation of quantum mechanics on a human-sized scale.
