Note: the following is the fourth in a series of five blog posts that I was invited to contribute to the blog ribbonfarm on the topic of particles and fields.  You can go here to see the original post, which contains some discussion in the comments.

Writ large, science is the process of identifying and codifying the rules obeyed by nature. Beyond this general goal, however, science has essentially no specificity of topic.  It attempts to describe natural phenomena on all scales of space, time, and complexity: from atomic nuclei to galaxy clusters to humans themselves.  And the scientific enterprise has been so successful at each and every one of these scales that at this point its efficacy is essentially taken for granted.

But, by just about any a priori standard, the extent of science’s success is extremely surprising. After all, the human brain has a very limited capacity for complex thought. We humans tend to think (consciously) only about simple things in simple terms, and we are quickly overwhelmed when asked to simultaneously keep track of multiple independent ideas or dependencies.

As an extreme example, consider that human thinking struggles to describe even individual atoms with real precision. How is it, then, that we can possibly have good science about things that are made up of many atoms, like magnets or tornadoes or eukaryotic cells or planets or animals? It seems like a miracle that the natural world can contain patterns and objects that lie within our understanding, because the individual constituents of those objects are usually far too complex for us to parse.

You can call this occurrence the “miracle of emergence”.  I don’t know how to explain its origin. To me, it is truly one of the deepest and most wondrous realities of the universe: that simplicity continuously emerges from the teeming of the complex.

But in this post I want to try and present the nature of this miracle in one of its cleanest and most essential forms. I’m going to talk about quasiparticles.

Let’s talk for a moment about the most world-altering scientific development of the last 400 years: electronics.

When humans learned how to harness the flow of electric current, it completely changed the way we live and our relationship to the natural world. In the modern era, the idea of electricity is so fundamental to our way of living that we are taught about it within the first few years of elementary school. That teaching usually begins with pictures that look something like this:

That is, we are given the image of electrons as little points that flow like a river through some conducting material. This image more or less sticks around, with relatively little modification, all the way through a PhD in physics or electrical engineering.

But there is a dirty secret behind that image: it doesn’t make any sense.

And an even deeper secret: it isn’t electrons that carry electric current. Instead, the current is carried by much larger and more nuanced objects called “electrons”.

Let me explain.

To see the problem with the standard picture, think for a moment about what metals are actually made of. Let’s even take the simplest possible example of a metal: metallic lithium, with three electrons per atom. A single lithium atom looks something like this:

Those points are meant to show the probability density for the electrons inside the atom.  Two of the electrons (the yellow points) are closely bound to the nucleus, while the third (the blue points) is more loosely bound. The exact arrangement of electrons around the nucleus is actually a difficult question, because the three electrons are continually pushing on each other (via very strong electric forces) as they orbit around the nucleus. So the precise structure of even this simple atom does not have an easy solution.

The situation gets exponentially messier, though, when you bring a whole bunch of atoms together to make a block of lithium. Inside that block, the atoms are packed together very tightly, something like this:

This picture may look tidy, but consider it from the point of view of an electron traveling through the metal. Such an electron has no hope for a smooth and simple trajectory. Instead, it gets continuously buffeted around by the enormous forces coming from the other nearby electrons and from the nuclei. So, for example, if you injected an electron into one side of a piece of lithium, it would absolutely not just sail smoothly across to the other side. It would quickly adopt a completely chaotic trajectory, and any information about its initial direction or speed would be lost.

(Of course, this is all to say nothing about how messy things are in a more typical metal like copper. In copper each atom has 29 electrons swirling around it in complicated orbits, rather than just 3. I can’t even draw you a good picture of a copper atom.)

So thinking about individual electrons is hard – much too hard to be useful for any simple human reasoning. As it turns out, if you want to make any headway thinking about electric current, it actually makes sense to forget about the electrons’ individuality and just imagine clouds of probability density around each nucleus. Now, when you inject an electron into one side of the metal, it just adds some probability to the electron clouds on that side. And you can imagine that over time this probability moves on down the line, like so:

So this is how electric current is really carried. Not by free-sailing electrons, but by waves of probability density that are themselves made from the swirling, chaotic trajectories of many different electrons.

Messy, right? Well, now comes the miracle.

The key insight is that you can think about all those swirling, chaotic electron trajectories as a quantum field of electric charge, conceptually similar to the quantum fields out of which the fundamental particles arise. And now you can ask the question: what do the ripples on that field look like?

The answer: they look almost identical to real electrons.

In fact, those waves of electric charge density look so similar to “bare” electrons flying through free space that we even call them “electrons”. But they are not electrons as God made them. These “electrons” are instead an emergent concept: a collective movement of many jumbled and densely-packed God-given electrons, all pushing on each other and flying around at millions of miles per hour in chaotic trajectories.

But the emergent wave, that so-called “electron”, is startling in its simplicity. It moves through the crystal in straight lines and with a constant speed, like a ghost that can travel through walls. It carries with it the exact same charge as a single electron (and the exact same quantum-mechanical spin). It has the same type of kinetic energy, $KE = \frac{1}{2} mv^2$, and for all the world behaves like a naked electron moving through empty space. In fact, the only way you could tell, from a distance, that the “electron” is not really an electron, is that its mass is different. The “electron” that emerges from the sea of chaotic electrons feels either a bit heavier, or a bit lighter, than a bare electron. (And sometimes it is as much as 100 times heavier or lighter, depending on the details of the atomic orbitals and the atom spacing.)
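As a side note for the mathematically curious: the altered mass of the “electron” has a standard textbook definition in solid-state physics, which I’ll sketch here. Near the bottom of an energy band, the quasiparticle’s energy depends on its wavevector $k$ in the same parabolic way as a free particle’s, only with a renormalized “effective mass” $m^*$ set by the curvature of the band:

```latex
% Quasiparticle dispersion near a band minimum: parabolic, like a free
% particle, but with an effective mass m^* in place of the bare mass m.
E(k) \approx E_0 + \frac{\hbar^2 k^2}{2 m^*},
\qquad
m^* = \hbar^2 \left( \frac{d^2 E}{d k^2} \right)^{-1}.
% A nearly flat band (small curvature) gives a heavy quasiparticle;
% a sharply curved band gives a light one.
```

A flat band means a heavy “electron”, and a sharply curved band means a light one; this is how the atomic orbitals and the atom spacing, which shape $E(k)$, end up dialing the emergent mass up or down by factors as large as 100.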

This discovery – that the fundamental emergent excitation from a soup of electrons looks and acts just like a real, solitary electron – was one of the great triumphs of 20th century physics. The theory of these excitations is called Fermi liquid theory, and was pioneered by the near-mythological Soviet physicist Lev Landau. Landau called these emergent waves “quasiparticles”, because they behave just like free, unimpeded particles, even though the electrons from which they are made are very much neither free nor unimpeded.

To my mind, this discovery emphasizes the essence of what is beautiful about physical science. Complexity, by itself, has no inherent beauty. But there is something beautiful about observing a very simple thing emerging from an environment that initially appears to be a complicated mess. It gives the same fundamental pleasure as, say, watching waves roll onto the seashore (or, to a lesser extent, seeing people in a stadium do the wave).

A good deal of physics (especially condensed matter physics, my own specialty) is built on the pattern that Landau set for us. Its practitioners spend much of their time combing through the physical universe in search of quasiparticles, those little miracles that allow us to understand the whole, even though we have no hope of understanding the sum of its parts.

And at this point, we have amassed a fair collection of them. I’ll give you a few examples, in table form:

Each item in this list has its own illustrious history and deserves to have its story told on its own. But what they all have in common are the traits that make them quasiparticles. They all move through their host material in simple straight lines, as if they were freely traveling through empty space. They are all stable, meaning that they live for a long time without decaying back into the field from which they arose. They all come in discrete, indivisible units. And they all have simple laws determining how they interact with each other.

They are, in short, particles. It’s just that we understand what they’re made of.

One could ask, finally, why it is that these quasiparticles exist in the first place. Why do we keep finding simple emergent objects wherever we look?

As I alluded to in the beginning, there is no really satisfying answer to that question. But there is a hint. All of these objects have a mathematically simple structure. It somehow seems to be a rule of nature that if you can write a simple and aesthetically pleasing equation, somewhere in nature there will be a manifestation of the equation you wrote down. And the simpler and more beautiful the equation you can manage to write, the more manifestations you will find.

So physicists have slowly learned that one of the best pathways to discovery is through mathematical parsimony. We write the simplest equation that we think could possibly describe the object we want to understand. And, in no small number of instances, nature finds a way to realize exactly that equation.

Of course, the big question remains: why should nature care about man’s mathematics, or his sense of beauty? How can the same sorts of simple equations keep appearing at every scale of nature that we look for them? How is it that math, seemingly an invention by feeble human brains, is capable of transcending so thoroughly the understanding of its creators?

These questions seem to have no good answers. But they are, to me, continually awe-inspiring. And they swirl around the heart of the mystery of why science is possible.

1. November 13, 2015 6:42 pm

Nice lithium atom! And excellent explanation of quasi-particles! Thank you, they were mostly just a phrase to me.

I think your reply in the comments answers this, but in your table, should “photons” be in double quotes like “electrons”? When I saw the table I thought you were suggesting photons are all quasi-particles, never god-made ones. Your reply to the comment suggests otherwise, though. I was quite off-balance there for a moment! 😀

November 14, 2015 1:01 am

You’re right, it’s probably unfair to call photons “quasiparticles” rather than just “particles”. But photons are unusual historically, in the sense that we understood the (electric and magnetic) fields before we understood that those fields admitted quantized particles. Usually for the fundamental particles the situation worked in reverse: we observed the particles before we understood the fields from which they arose.

I should probably admit that I stole the picture of the lithium atom from here: http://www.sciencephoto.com/media/2067/view#

My illustration skills don’t go much beyond powerpoint.

3. April 14, 2016 4:52 am

Thanks for a fascinating post. It could almost be a law of nature that extreme simplicity on one level of a system necessitates extreme complexity elsewhere. Could this also apply to atomic nuclei? I’m reminded of Matthew Strassler’s article: “Protons and Neutrons: the Massive Pandemonium in Matter.” As a non-mathematician, I’m amazed by the simple arithmetic of three quarks determining electric charge. Two Up-quarks and one Down make exactly one unit of positive charge of a proton, precisely balancing the negative charge of any electron. Change one Up to a Down and they add up to the perfect zero charge of a neutron. However, the masses of these nucleons are far greater than the sum of the three quarks. The best explanation (according to Strassler) is that each nucleon contains a turbulent ‘sea’ of countless other quarks, antiquarks and gluons, continuously appearing and annihilating, and somehow generating the relatively enormous mass-energy of the whole nucleon. Despite the apparent chaos, the resultant masses remain constant and stable potentially for ever, and the tiny difference between proton and neutron mass seems to be caused simply by that one quark difference, Down being a bit heavier than Up. And there is no evidence that the same three singleton quarks sit out the ‘wild dance party’ for billions of years; it may just be a rule of the dance that different quarks take turns not swirling about with a partner. Could it be that the three ‘valence’ quarks are quasiparticles, like the miraculous “electrons” in your article?