A letter to the donors who helped me at Virginia Tech

April 16, 2014

Every year on April 16, I like to remember my time at Virginia Tech.

So far, the memories I have written about have been ones of reverence, or anger, or sadness.  But I haven’t been explicit about the predominant emotion I feel when reflecting on my undergraduate years at VT: gratitude.

It seems to me that the defining feature of my life so far is that I have been the beneficiary of great and undeserved kindness.  My time at VT was certainly no exception.  One of the most concrete examples I have of this kindness is the numerous privately-endowed scholarships that helped to pay my tuition.  I can’t imagine what motivates a private individual to give thousands of dollars every year for the benefit of some unknown (and, in my case, thoroughly immature) kid.  But it’s humbling and puzzling to think that such people considered it worthwhile to pay me to study whatever I was interested in, without knowing me and without getting anything in return.  I benefited enormously from their generosity.

I wish that I had made more diligent and more sincere efforts to thank those people.  I’m sure I would be embarrassed if I had to read through the meager thank-you letters that I wrote every year.

As it happens, though, I do have one of those letters in my possession.  The last scholarship I received at VT was the H. Y. Loh Award, which I was given just before my graduation in 2007.  I wrote a thank-you letter to the donor shortly after graduation, but, sadly, the donor passed away before the letter arrived and it was returned to me.

Just today I finally worked up the courage to open the envelope and read what I had written.  It is a little embarrassing to read, and it reflects my own insecurities as much as anything else, but I like it because it stands as a record of who I was at the time and of the people who helped me get there.

Below is the letter itself.  I have blanked out the donor’s name, but maybe it can stand as an open thank-you to all of those who helped me at Virginia Tech, and to those who continue to help out immature kids like the one I was.

 

[images: the thank-you letter, pages 1 and 2]

The parable of the perfectly symmetric ass

April 10, 2014

I would like to introduce a phrase into the lexicon of science and everyday life, based on the following ridiculous story that was taught to me at the CERN summer school.


Imagine a perfectly symmetric ass, standing atop a perfectly symmetric hill (…I’m talking about a donkey here, folks).  Placed on either side of the hill, at perfectly equidistant locations, are two perfectly identical piles of hay.

The ass is hungry, but it feels itself pulled toward each pile of hay with exactly equal and opposite forces.

Given the staggering symmetry of the setup, the only logical conclusion is that the ass is doomed to inaction and will eventually starve.


As it turns out, this silly story is a famous satire of the assertion by the French philosopher Jean Buridan that

“Should two courses be judged equal, then the will cannot break the deadlock, all it can do is to suspend judgement until the circumstances change, and the right course of action is clear.”

The poor starving donkey above is thus called “Buridan’s ass.”

(As is often the case, the original version of this satirical argument actually belongs to Aristotle.)


Since hearing this story, I have noticed more and more occasions when it seems like a good depiction of myself or of someone else.  So I would like to suggest using the phrase “to be a perfectly symmetric ass” as a description of someone who is being paralyzed into inaction by symmetry.  In particular, I see two good targets for this phrase:

1. Scientific arguments that invoke symmetry at the expense of energy minimization

For example, suppose someone asked you to predict what will happen if you apply a large voltage between a small inner sphere and a large outer sphere, with the space between them filled with a weakly-conducting plasma.  Most of us who had Gauss’s law arguments trained into us would immediately say that an electrical current will flow out from the inner sphere in a radially-symmetric way, and consequently that the total current flow will be very small.  Most of us would be wrong, however, because what actually happens is this:

[Video.  Just watch from the 0:09 mark until 0:12 or so.]

In short: the system figures out very quickly that there is a much lower-energy way to move its current from the inner to the outer surface: namely, by creating sharp (symmetry-breaking) pathways of intense current, which produce dielectric breakdown of the plasma and allow the current to flow easily.

If you allow symmetry to fool you into thinking that the current will flow slowly and radially, then you are “being a perfectly symmetric ass.”

2. Everyday situations in which opportunities are missed because of an inability to choose between two good options

Suppose, for example, that you are at an ice cream shop and you are standing at the counter, unable to make up your mind about which of the various fantastic flavors you will get.  As the line starts to build up behind you, you eventually get flustered and say “never mind, I’ll just get chocolate.”

In that situation, you are “being a perfectly symmetric ass.”

Or, maybe, you have “made a perfectly symmetric ass of yourself.”


I say it lovingly, of course, because I make a perfectly symmetric ass of myself all the time.

 

There’s nothing particularly “spooky” about avoided crossing

April 8, 2014

Coming to terms with quantum mechanics is no easy task.  The quantum world has its own unique atmosphere, its own set of laws and tendencies — its own tao, if you will — that seem far removed from the lifetime of solid-feeling intuition that all of us develop naturally.  So gaining an ease and familiarity with the “quantum way” takes time.  It’s a bit like living in a foreign country: you have to spend a lot of time immersed in it before you start to feel like you can move easily through its streets.

That said, those of us who spend a good chunk of our lifetimes thinking about quantum mechanics run a peculiar risk.  Namely, we start to feel like quantum mechanics is everything, and that every result and every feature that appears in the quantum world must be understood on its own terms.  In other words, we forget that some of the things that show up in the quantum world show up just as easily in the “person-sized” classical world, too.

One particular example is the phenomenon of “avoided crossing.”

[figure: “No crossing”: a fundamental law in the quantum world]

In this post I’ll explain what avoided crossing is in its standard quantum form.  Then I’ll show you that it can just as easily rear its head in the classical world, too.

Avoided crossing: quantum version

In its simplest form, the quantum phenomenon of avoided crossing goes something like this:

Imagine that there are two places where a quantum particle (say, an electron) can sit: a site on the left, and a site on the right.  Suppose also that the site on the left has lower energy than the site on the right, and that an electron is sitting there.  Like this:

[figure: left_and_right_sites]

Now suppose that you start to slowly raise the energy of the left site and lower the energy of the right site.  Eventually, the two energy levels will pass by each other, and after a long enough time the left site will have a high energy and the right site will have a low energy.  You would expect that during this process the electron will ride on the left site, so that its energy increases steadily.  Like this:

[figure: lifting]

But that’s not what happens.  If you raise/lower the energies of the two sites slowly enough, then what happens is something like this:

[figure: ac-from_below]

In other words, the electron energy (the blue line) stays low, and never even gets as high as the point where the left and right site energies cross.  What’s more, at long enough times the electron manages to transfer itself from the left site to the right site.

You can now ask the question: what would have happened if the electron started on the right site?  Clearly, in this case the energy should be large to start, and should start decreasing with time as the energy of the right site drops.  And that is indeed what happens, except that when the energies of the two sites get close to each other, something funny happens again:

[figure: ac-from_above]

The electron in this case manages to transfer itself from right to left, keeping its energy high, and never reaches the point where the two energies are supposed to cross.

So now if you make a plot of “energies that an electron can have” as a function of how far you’ve shifted the left and right site energies, you’ll get something like this:

[figure: ac]

This is the phenomenon of avoided crossing, or “level repulsion”.  In short: you can never push two energy levels through each other.  If you try, you’ll find that the two energy levels always get “repelled” from each other a little bit, and that the low-energy states remain smoothly connected to other low-energy states, while high-energy states remain connected to other high-energy states.

So what causes avoided crossing?  As you could probably guess, its existence depends crucially on the ability of the electron to jump from one site to the other.  In other words, the avoided crossing arises from quantum tunneling.  When the two sites have very different energy, you can say that the electron almost definitely resides in either one site or the other.  Right at the crossing point, however, the electron finds itself spread between the two in a way that apparently involves the “spooky” laws of quantum mechanics.
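If you want to see this happen numerically, here is a minimal sketch (a toy model with made-up numbers, not tied to any particular physical system): diagonalize a two-site Hamiltonian whose on-site energies sweep past each other while a small tunneling amplitude t0 connects the sites.  The two allowed energies never approach each other more closely than 2 t0.

import numpy as np

t0 = 0.1                                  # small tunneling amplitude (a toy value)
detunings = np.linspace(-1.0, 1.0, 201)   # left site at energy -d, right site at +d

levels = []
for d in detunings:
    H = np.array([[-d, -t0],
                  [-t0, d]])              # two-site Hamiltonian with tunneling
    levels.append(np.linalg.eigvalsh(H))  # the two allowed energies, in ascending order
levels = np.array(levels)

# The two branches repel: they never get closer than 2*t0.
print("minimum splitting:", (levels[:, 1] - levels[:, 0]).min())   # -> 0.2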

A pause for philosophizing

Let me pause here to make a more general comment concerning how we think about quantum mechanics.

In general, when one first encounters some strange phenomenon in quantum mechanics, like the avoided crossing outlined above, there are two courses of action, philosophically.  One possibility is to just learn the phenomenon mathematically without grasping for a physical/mechanical way of thinking about it.  The people who advocate this approach (as, for example, here) generally use the argument that all macroscopic “physical” objects really emerge from quantum mechanical laws applied across large scales, so trying to think about the quantum world in terms of mechanical objects is backwards and nonsensical.

This is true, of course.  But it also strikes me as somewhat defeatist.  Science, in my opinion, is never a business of compiling true statements.  It is only a business of compiling useful concepts and models that give some predictive power.  And for an idea to be useful, it has to be able to stick in your mind in a firm and conceptual way.  An idea that consists of arbitrary laws or fiats is unlikely to stick in your mind (or at least in my mind) in this way, even if such fiats constitute a very correct way of stating the idea.

For me, at least, the only ideas that really stick in my mind are ones that can be thought of physically, i.e., ones that have some accompanying picture of how one thing pushes or shakes or distorts another.  Even if these pictures are “not correct,” they are, to me, essential for scientific reasoning.

So let me not take the defeatist attitude of correctness, and try to come up with a “mechanical” way to think about the strange quantum business of avoided crossing.

Avoided crossing: classical version

Imagine for a moment that you have two springs, one on the left and one on the right, each connected to one of two equal masses.  Let’s say that the spring on the left is fairly loose, while the spring on the right is tight.  This means that if you excite the mass on the left it will vibrate slowly, like this:

[figure: the loose left spring vibrating slowly]

On the other hand, if you excite the mass on the right, it will vibrate more quickly, like this:

[figure: rightmode]

In this classical example, the vibration frequencies are the analogues of the electron energies in the quantum example above: to begin with, one is small and one is large.

Now let’s imagine the process of slowly tightening the spring on the left and simultaneously loosening the spring on the right.  This is like the simultaneous raising/lowering of the electron energies in the quantum example.

If the two springs are completely disconnected from each other, then nothing interesting will happen.  The left spring frequency will gradually increase and the right spring frequency will gradually decrease.  Like this:

[figure: crossing-springs]

 

But things become more interesting if you introduce a small coupling between the two springs.  Suppose, for example, that we put a very weak spring in the middle, connecting the two masses.  Like this:

[figure: coupledsprings]

Now, in principle, whenever either of the two masses moves it affects the other.  But if the middle spring is very weak, then we can still excite mostly the left spring only or mostly the right spring only. For example, if you excite the loose left spring then the tight right spring won’t move very much.  But what happens if you slowly tighten the left spring while simultaneously loosening the right spring?

As it turns out, this happens:

[figure: sym-transfer]

What you’re seeing here is that the oscillation starts out more or less entirely on the left, but as the two springs exchange roles (go from loose to tight and vice-versa), the oscillation moves to the right side.  So you start with a slow, left-heavy oscillation at the beginning, go through a phase where both are oscillating equally, and end up with a slow, right-heavy oscillation at the end.

What would have happened if we had started with the oscillation on the right side?

This:

[figure: asym-transfer]

You’ll notice that the same thing is happening here, but in reverse.  A fast oscillation in the tight spring on the right is eventually turned into a fast oscillation in the tight spring on the left.

If you plot what is happening to the oscillation frequency as a function of time, it looks like this:

[figure: ac-springs]

 

Now that we have this picture, and a movie of what’s happening to the springs over time, we can talk about what, exactly, is the meaning of those funny “avoided crossing” states in the middle.  These states correspond to the moment when the two springs are equally tight (the left and right springs are right at the point of exchanging roles).  Apparently at this moment there are two possible frequencies that the system can have.  If you started with the loose left spring oscillating, then by the time you get to the equality point the system will be doing this:

[figure: symmetric]

On the other hand, if you started with the right, tight spring oscillating, then at the equality point the system will be doing this:

[figure: antisymmetric]

The first situation, where the two springs move together, has a lower frequency.  This is generally called a “symmetric mode”, and it doesn’t involve any stretching of the central spring.  The second situation is called an “antisymmetric mode.”  It involves substantial stretching of the central spring and therefore has a larger frequency.  (You can think that at any given moment, each mass in the antisymmetric mode has two springs pulling on it, while in the symmetric mode the central spring isn’t doing anything and each mass only has one spring pulling on it.)

This is, essentially, the main point that produces avoided crossing in classical systems.  When two oscillating things have some connection to each other, even if it’s weak, you can’t think anymore about exciting just one of them.  Every oscillation you put into the system becomes a joint oscillation, and there are always two independent ways of making joint oscillations: a symmetric way and an antisymmetric way.  These two independent ways will have different frequencies, because they necessarily place different demands on the connecting object (here, the central spring).
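Here is the same statement in numbers, if you want to play with it: a minimal sketch of exactly this spring system, with parameter values invented for illustration.  The normal-mode frequencies are the square roots of the eigenvalues of the 2x2 dynamical matrix, and they repel just like the energy levels above.

import numpy as np

m, k0, kc = 1.0, 1.0, 0.05    # mass, average outer-spring stiffness, weak middle spring

def mode_frequencies(s):
    """Mode frequencies when the left spring is k0 - s and the right spring is k0 + s."""
    kL, kR = k0 - s, k0 + s
    D = np.array([[(kL + kc) / m, -kc / m],
                  [-kc / m, (kR + kc) / m]])   # dynamical matrix of the coupled pair
    return np.sqrt(np.linalg.eigvalsh(D))

for s in (-0.5, 0.0, 0.5):                     # sweep the two springs past each other
    print(s, mode_frequencies(s))

# At s = 0 the two modes are the symmetric one, omega = sqrt(k0/m), and the
# antisymmetric one, omega = sqrt((k0 + 2*kc)/m), which stretches the middle spring.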

So how does this teach me anything about electrons?

If you start looking for commonalities between the two examples given here, one of the first you’ll see is that avoided crossing is associated with the transfer of something from one place to another.  In the classical case, it was the transfer of an oscillation from one spring to another.  In the quantum case, it was the transfer of an electron from one site to another.

At the moment when the two electron sites (or two springs) are made equal to each other, the electron (oscillation) becomes indifferent about which site to sit on.  This means that the electron will sit on both sites equally.  You can think that the electron finds itself jumping back and forth between the two sites.  But there are two ways to do this: the electron can jump back and forth in a “symmetric way” or in an “antisymmetric way.”  The “symmetric way” will always have lower energy.

Now, you can ask what is the meaning of calling the electron jumping “symmetric” or “antisymmetric.”  The technically correct answer is that they relate to properties of the electron wavefunction, which is the mathematical function that describes the probability for the electron to occupy different places in space.  In the symmetric state, the electron wavefunction is literally a symmetric function, while in the antisymmetric state the wavefunction is antisymmetric.

But let me try to be a bit more pictorial.  It seems rude to invoke mathematical functions in a friendly discussion.

Quantum mechanics, in the end, is the theory of quantum fields.  These fields are something like fluids that fill all of space, and electrons (or any other particles) are like little floating beads (or little floating bugs) that make a small disturbance in the field around them and are likewise pushed along by the field’s ripplings and frothings.

I’m not going to lie, sometimes my mental image for an electron looks kind of like this.

When we say that the electron sits at a given site, what we mean is that the rippling of the field is calm enough (or the confining environment is sturdy enough) that field ripples don’t take the electron very far from a particular point.  But when two sites have very similar energy, the field can readily slosh back and forth between those two sites and take the electron with it.  The more strongly connected these two sites are (for example, if they are physically very close), the faster will be the frequency with which the field sloshes back and forth, and thus the faster will be the frequency with which the electron jumps back and forth between the sites.

In my mind, the “symmetric way” for the electron to move is to go with the field — for example, to always ride on the crest of a “wave.”  The “antisymmetric way” is to go against the field, which would imply that with each jump from one site to another the electron passes through a wave crest going in the opposite direction.  Because the field is disturbed by the electron itself, the motion of the electron with or against the field alters the sloshing motion (this is something like my picture of Calvin in the bathtub).  And thus the symmetric and antisymmetric states for the electron have different frequencies.


If you’re a pragmatic, quantitative-minded person, these supposed similarities between electrons and springs and sloshing waterbugs might all seem a bit wishy-washy.  And at some level, they are.  But for people who have to manipulate concepts in the quantum world, these kinds of “visual” similarities can be very useful for building up a feeling for how the quantum world behaves.  Real predictions require calculations, of course, but knowing which calculations are worth doing and what to expect requires feeling.  And for me, at least, cartoonish pictures go a long way toward creating those feelings.


It occurs to me, by the way, that if I ever become an old crackpot I might be pretty tempted to write a bizarre quantum mechanics textbook entitled “the electron as a water strider bug.”

 

The Bohr model, Landau quantization, and “truth” in science

October 26, 2013

A few years ago, at a big physics conference, I was party to an argument about whether we should be teaching the Bohr model of the atom in lower-level physics classes.  The argument in favor was that the Bohr model is easy to teach and gives a simple way to think about the structure of atoms.  The argument against was that the Bohr model is completely outdated, conceptually inaccurate, and has long been superseded by a more correct theory.  The major statement of the opposition argument was that it doesn’t do anyone much good to learn an idea that’s wrong.

How strongly I disagree with that statement!

I, personally, love the Bohr model.  It’s founded on a cartoonishly simple way of thinking about quantum mechanical effects, but it can give you a surprisingly solid way of thinking about quantum problems for very little effort.  In other words, even when the Bohr model doesn’t give you the exact right answer, it is very good at teaching you how to feel about a quantum system.

The purpose of this post is, more or less, to be a defense of the Bohr model.  After outlining what the Bohr model is, I’ll show how the exact same logic gives a very quick and surprisingly accurate sketch of another major phenomenon in the quantum world: Landau quantization.  Then, at the end, I’ll wax philosophical a bit about why it’s a mistake to try and teach only “true” ideas in science.

The Bohr model of the atom

The essence of the Bohr model approach is to start by thinking about the problem using only classical physics, and figure out what different states look like.  Then, once you’re done, remember that quantum mechanics only allows certain particular ones of those states.

This way of thinking developed very naturally, because at the time the Bohr model was developed (1913) there was no quantum mechanics.  So as people were puzzling about how to describe the hydrogen atom, they started with the eighteenth and nineteenth-century physics that they knew, and then tried to figure out how it might be modified by funny “new” stuff.

To see how this works, start by thinking about the hydrogen atom using only high school-level physics.  You have a single electron running around a single proton, and the picture that emerges is that the electron should orbit around the proton in the same way that the earth orbits around the sun.  Like this:

[figure: H-orbit]

You can work out everything about this orbit (by balancing the attractive force between the charges with the centripetal acceleration), and what you’ll find is that orbits with any radius r are possible.  Orbits with small r have a large momentum (high speed) and a deeply negative energy, while orbits with a large radius have a small momentum and an energy close to zero.

More specifically, the momentum is p = mv = (me^2/4 \pi \epsilon_0 r)^{1/2} and the energy is E = -e^2/8 \pi \epsilon_0 r.  [Here, e is the electron charge, m is the electron mass, and \epsilon_0 is the electric constant.]

Once you’ve figured out everything that would happen for classical electrons, you can remember that quantum mechanics only allows certain kinds of trajectories to be stable.  The key idea, which was developed only slowly and painfully during the first few decades of the 20th century, is that moving particles have a wavelength associated with them, called the “de Broglie wavelength” \lambda.  A larger momentum p implies a shorter wavelength: \lambda = 2 \pi \hbar/p, where \hbar is the reduced Planck constant.  [My way of thinking about the wavelength \lambda is that fast-moving particles make short, choppy waves in the quantum field, while slow-moving particles make gentle, long-wavelength ripples.]  For a trajectory to be stable, the orbit of the electron needs to contain an integer number of wavelengths around its circumference.  Otherwise, the trajectory gets unsettled by ripples in the quantum field.  This stability is often demonstrated with pictures like this:

[figure: The orbit on the left is stable, while the one on the right is not.]

My own personal image for the stability/instability of different quantum trajectories comes from all the time I spent playing in the bathtub as a little kid.  Like many kids, I imagine, I used to try and slide back and forth in the tub to get the water sloshing from side to side in big dramatic waves. Like this:

[animation: water sloshing back and forth in a bathtub]

What I found is that making these big “tidal waves” requires you to rock back and forth with just the right frequency.  If you continue to rock with that frequency, then you get one big wave moving back and forth, and you can slide around the tub while staying inside the biggest part of the wave as it shifts from one side to the other.  But if you try to change your frequency, then suddenly you find yourself colliding with the tidal wave and water goes flying everywhere.  This is something like what happens with electrons in the Bohr model.  If they travel around their orbits at just the right speed, then they move together with the ripples in the quantum field.  But moving at other speeds leads to some kind of unstable mess, and not a stable atom.

…I wonder whether I can go back and explain to my parents that all that water on the floor was really just an important part of my training for quantum mechanics.
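In symbols, this stability condition says that the circumference of the orbit must hold an integer number of de Broglie wavelengths (this is just a restatement of the pictures above, using the same symbols):

2 \pi r = n \lambda = n \times 2 \pi \hbar/p, \quad \text{or equivalently} \quad p r = n \hbar.

Combining this with the classical momentum p = (me^2/4 \pi \epsilon_0 r)^{1/2} found above is all it takes to pin down the allowed radii.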

Anyway, applying the Bohr stability condition to the classical electron trajectories gives the result that the orbit radius r can only have the following specific values:

r = n^2 \times 4 \pi \epsilon_0 \hbar^2/(m e^2) = n^2 \times 0.53 \text{\AA},

where n = 1, 2, 3, ... (and 0.53 \text{\AA} = 5.3 \times 10^{-11} meters).  Correspondingly, the energy can only have the values

E = -m e^4/[2 (4 \pi \epsilon_0)^2 \hbar^2] \times (1/n^2) = -13.6 \text{ eV} \times (1/n^2).
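If you want to check these numbers yourself, here is a quick numerical sketch (standard constants only; nothing here beyond the two formulas above):

import numpy as np

hbar = 1.054571817e-34    # reduced Planck constant, J*s
m    = 9.1093837015e-31   # electron mass, kg
e    = 1.602176634e-19    # electron charge, C
eps0 = 8.8541878128e-12   # electric constant, F/m

a0 = 4 * np.pi * eps0 * hbar**2 / (m * e**2)            # smallest orbit radius (n = 1)
E1 = -m * e**4 / (2 * (4 * np.pi * eps0)**2 * hbar**2)  # deepest energy level (n = 1)

print(f"r(1) = {a0 * 1e10:.3f} Angstroms")   # -> 0.529
print(f"E(1) = {E1 / e:.1f} eV")             # -> -13.6
for n in (2, 3):
    print(n, f"r = {n**2 * a0 * 1e10:.2f} A", f"E = {E1 / (e * n**2):.2f} eV")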


Now, the Bohr model is not a true representation of the inside of an atom.  The movement of electrons around a nucleus is not nearly so simple as the circular orbits I drew above. The Bohr model also doesn’t tell you anything about how many electrons you can fit on different orbits.  And you certainly couldn’t use the Bohr model to predict subtle effects like the Lamb shift.  But the Bohr model very quickly tells you some important things: how big the atom is, how deep the energy levels are, and how the energy levels are arranged.  And in this case, it happens to get those answers exactly right.

To a certain degree, Bohr was lucky that this line of very approximate reasoning got him the exact right answers.  But I think it is often underappreciated how useful the Bohr model is as a paradigm for approaching quantum problems.  To illustrate this point, let me use the same exact thinking on another problem that wasn’t figured out until decades after the Bohr model.

Landau levels

As it happens, there is another kind of problem where charges run around in circular orbits, which you are also likely to learn about in a first or second-year physics course.  This is the problem of electrons in a magnetic field.  As you might remember, a magnetic field pushes on moving charges, bending their trajectories into circles.  Like this:

[figure: A beam of moving electrons (the purple streak) is pulled into a circular trajectory by a magnetic field.]

A magnetic field makes a force that is always perpendicular to the velocity of a moving charge, causing the charges inside the field to run in closed circles.  In the classical world, those circles can have any size, but quantum mechanics should select only some of them to be stable.

So what kind of quantum states can electrons in a magnetic field have?

Following the Bohr model philosophy, we can approach this problem by first working everything out as if quantum mechanics did not exist.  The physics of charges in a magnetic field is a few hundred years old, and fairly simple.  You can use it to figure out that faster charges have bigger orbit radii, according to the relation r = m v/eB, where B is the strength of the magnetic field.  The kinetic energy E of the electron is E = mv^2/2, which in terms of the radius means E = e^2 B^2 r^2/2m.  So, there is a whole range of classical trajectories with different radii.  Those trajectories with larger radius correspond to faster electron speed and larger energy.
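(That relation between speed and radius is just force balance, spelled out: the magnetic force e v B supplies the centripetal force, so

e v B = m v^2/r \implies r = m v/(e B),

and substituting v = eBr/m back into E = mv^2/2 gives the expression for the energy above.)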

Now we can examine this classical picture through the lens of Bohr’s stability criterion, which says that only trajectories with just the right radius can be stable.  In particular, only trajectories whose length 2 \pi r is an integer multiple of the de Broglie wavelength \lambda can survive as stable orbits (remember the “sloshing in the bathtub” analogy).  Applying this condition gives:

r = \sqrt{n} \times \sqrt{\hbar / eB},

where, again, n = 1, 2, 3, ....  If you put this result for r into the kinetic energy, you get:

E = n \times (\hbar e B / 2m).

These discrete energies are called “Landau levels” (after the Soviet Union’s legendary alpha physicist).  And while the quantization of magnetic trajectories is not quite as widely known as the hydrogen atom, it has been the source of just as many strange scientific observations, and has kept many scientists (myself included) gainfully employed for more than half a century.

Here the Bohr model approach again provides a quick and easy guide to these energy levels.  First, one can see that different energy levels come with a uniform spacing in energy, \sim \hbar \omega_c, where \omega_c = eB/m is the “cyclotron frequency.”  Second, the radius of the corresponding trajectories grows with the square root of the energy.  Finally, the smallest possible cyclotron trajectory has a radius \sim \sqrt{\hbar / eB}, which is called the “magnetic length.”

These are important (and correct) results which can lead you quite far in conceptual thinking about what magnetic field does to electronic states.  And while they were really only appreciated in the second half of the twentieth century (after quantum mechanics had come into full bloom), they could have been mostly derived as early as 1913 using Bohr’s way of thinking.
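As a sanity check on the scales involved, here is the same kind of numerical sketch as for the hydrogen atom (the field strength B = 10 Tesla is just a value I picked for illustration):

import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m    = 9.1093837015e-31  # electron mass, kg
e    = 1.602176634e-19   # electron charge, C
B    = 10.0              # magnetic field, Tesla (illustrative value)

omega_c = e * B / m               # cyclotron frequency
l_B = np.sqrt(hbar / (e * B))     # magnetic length: the smallest orbit radius

print(f"magnetic length: {l_B * 1e9:.1f} nm")                  # -> 8.1 nm
print(f"hbar * omega_c:  {1e3 * hbar * omega_c / e:.2f} meV")  # -> 1.16 meV
for n in (1, 2, 3):
    # Bohr-style estimates from the formulas above (see the correction note below)
    print(n, f"r = {np.sqrt(n) * l_B * 1e9:.1f} nm",
             f"E = {1e3 * n * hbar * omega_c / (2 * e):.2f} meV")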

[As it happens, the formulas above are not exactly correct, unlike the corresponding Bohr model results for hydrogen.  The correct result for the energy is

E = \hbar \omega_c (n - 1/2).

So the "Bohr model" type approach gives the exact right answer for the lowest energy level, but is wrong about the spacing between levels by a factor of two.

UPDATE: Here's a simple addition you can make to get the answer exactly right.]

What is “truth,” really?

I have no qualifications as a philosopher of science or of anything else.  That much should be emphasized.  But nonetheless I can’t resist making a larger comment here about the idea that certain scientific ideas shouldn’t be taught because they are “not true.”

Science, as I see it, is not really a business of figuring out what’s true.  As a scientist, it is best to take the perspective that no scientific theory, model, or idea is really “true.”  A theory is just a collection of ideas that can stick in the human mind as a useful way of imagining the natural world.

Given enough time, every scientific theory will ultimately be replaced by a more correct one.  And often, the more correct theory feels entirely different philosophically from the one it replaces.  But the ultimate arbiter of what makes good science is not whether the idea is true, but only whether it is useful for predicting the outcome of some future event.  (It is, of course, that predictive power that allows us to build things, fix things, discover things, and generally improve the quality of human life.)

It is undeniable at this point that the Bohr model is decidedly not true.  But, as I hope I have shown, it is also undoubtedly very useful for scientific thinking.  And that alone justifies its presence in scientific curricula.

Beta Decay

August 12, 2013

An even nerdier update on an old joke:

[figure: beta_decay]

Fun fact: a free neutron only lives about 15 minutes before it decays into a proton, an electron, and an electron antineutrino.
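Written out as a reaction, with the decay products spelled explicitly:

n \rightarrow p + e^- + \bar{\nu}_e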

Spare me the math: Raman scattering

August 12, 2013

For most of my (still nascent) scientific career, I have worked on the physics of materials.  This likely sounds pretty humdrum to you.  To me, at least, the terms “materials” or “materials science” conjure up images of stodgy old nerds meticulously optimizing the chemical composition of some slurry to be used in one or another mind-numbingly specific manufacturing process.

“Hmm, perhaps we should add another 250 ppm of Vanadium…”

[The likely source of this prejudice is my stodgy old materials science professor from college, who made us spend 4 months memorizing the iron-carbon phase diagram.]

But for a physicist, the study of materials can be something significantly more dramatic and imaginative.  In short, every new material is like a new, synthetic universe.  It has its own quantum fields that arise from the motions and interactions of the atoms in the material.  And these fields give rise to their own kinds of particles, which may look a lot like the electrons, atoms, and photons we’re used to, or which may have completely different rules of engagement.

For example, the recently-discovered graphene is essentially a two-dimensional universe where electrons have no mass and the speed of light is 300 times smaller than its normal value.  For a physicist, that’s a dramatic thing.

The downside to working in materials is that it’s often hard to see what’s going on, and to know whether the material you have is the same as the material you think you have.  For this reason materials scientists become dependent on a barrage of characterization methods, each of which probes some slightly different aspect of the material’s properties.

In this second edition of Spare Me the Math, I want to describe one of the most crucial and ubiquitous of the material characterization tools: Raman scattering.


In Raman scattering, you shine light on a material and see some of it get reflected back with a different frequency (that is, a different color).   Since light is made of individual photons, and the energy of these photons is proportional to their frequency, this shift in frequency means that the light is gaining or losing some of its energy inside the material.

As it turns out, that lost energy goes into exciting a vibration in the material.  (Or, conversely, the light can gain energy by stealing some existing vibrational energy from the material).  Thus, the shift of the light tells you something about the way the material vibrates, and therefore about what the material is made of.
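In photon bookkeeping, that statement reads

\hbar \omega_{out} = \hbar \omega_{in} \pm \hbar \Omega,

where \Omega is the frequency of the vibration involved: the minus sign corresponds to the light creating a vibration (losing energy), and the plus sign to the light absorbing one.  (These two processes are conventionally called Stokes and anti-Stokes scattering.)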

But how, exactly, is the light frequency getting shifted?  How does light energy get mixed up with vibrational energy?

Usually the process of Raman scattering is explained only in terms of some “electron-phonon scattering” and accompanied by opaque diagrams like this one:

[figure: Raman_vertex]

or this one.  But what is really going on?

In this post I want to try and demystify this process a little bit, by explaining how, exactly, the light frequency gets shifted.  It turns out that pretty much everything can be understood by imagining that your material is made of stretchy metal balls.


Imagine, for a moment, a metal ball.  Like this:

[figure: ball]

This metal ball will stand for a molecule; you should imagine that there are bazillions of little metal balls making up your material.

A metal ball is like a molecule in the sense that it can rearrange its electrons a bit to adapt to the presence of an electric field: the ball is polarizable.  So when an electric field gets applied, the ball moves some positive charge to one side and some negative charge to the other side.  Like this:

[figure: ball-E]

In polarizing this way, the ball creates its own electric field that partially counteracts some of the applied electric field.

When an oscillating electric field (a beam of light) is applied to the ball, the electric charge on the ball redistributes to point in the direction of the light’s electric field.  Like this:

[figure: polarized_ball]

In this picture, the arrow above shows the direction and magnitude of the electric field coming from the light, and the colors show the induced charge density (red for positive, blue for negative).

It is common to say that by this process of polarizing back and forth, the metal ball “scatters” some of the incident light.  But one can just as well say that in sloshing its charge back and forth, the ball creates its own light waves that emanate outward.

(I like this language a little better.  When the sky is a beautiful vibrant blue, I like to think how all the little molecules in the air are getting excited by the sun and glowing bright blue light in my direction, like bioluminescent algae.)

In the picture above, however, the ball’s electric charge is oscillating in lock-step with the applied electric field, which means that its own radiated light is at exactly the same frequency \omega as the incoming light.  To get the frequency shift implied by Raman scattering, we have to make the ball a bit stretchy.


So imagine now that the metal ball is a bit squishy, and can get stretched out in different directions the same way that a real molecule can.  Say, like this:

[figure: stretched_ball]

Crucially, you should notice that in its stretched-out state, the ball responds differently to an applied electric field.  Basically, it puts positive and negative charge further apart, and in doing so it does a better job of screening out the applied field.  Like this:

[figure: stretched_ball-E]

Since the ball is elastic, however, it can easily start wobbling back and forth when it gets stretched out.  Like this:

[figure: wobbling_ball]

The frequency of this wobbling, which I call \Omega, depends on how stiff the ball is.  Every molecule has its own characteristic wobbling frequency (or rather, its own set of frequencies, one for each different way it can be excited.)

Now, the essence of Raman scattering can be understood by thinking about what happens if you try to apply an oscillating electric field while the ball is wobbling.  Basically, the electric polarization frequency and the wobbling frequency both get involved in determining how light is radiated by the ball.  The picture is something like this:

[figure: wp_ball]

Here you can see how the light frequency and the wobbling frequency get mixed up.  The ball is sometimes at its fattest while the electric field is strongest (making an enhanced dipole) and sometimes at its thinnest while the electric field is strong (making a weakened dipole).  As a result, the ball radiates some light at the original frequency \omega and some light at the shifted frequencies \omega + \Omega and \omega - \Omega.  It’s just like beats in sound waves: when something oscillates at two frequencies simultaneously, you also see (hear) the sum and the difference of the two frequencies.
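If you want to see this beat arithmetic explicitly, here is a toy numerical sketch (all the numbers are invented; only the structure matters): a polarizability that wobbles at \Omega, multiplying a field that oscillates at \omega, radiates at \omega and at \omega \pm \Omega.

import numpy as np

# Toy model: the ball's polarizability alpha(t) wobbles at Omega while the
# applied field oscillates at omega. All values are arbitrary illustration units.
omega, Omega = 50.0, 3.0
alpha0, alpha1, E0 = 1.0, 0.2, 1.0

t = np.linspace(0, 20 * np.pi, 2**14, endpoint=False)
alpha = alpha0 + alpha1 * np.cos(Omega * t)   # the ball gets fatter and thinner
p = alpha * E0 * np.cos(omega * t)            # induced dipole, which radiates

# The radiated spectrum has peaks at omega and at omega +/- Omega:
spectrum = np.abs(np.fft.rfft(p))
freqs = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(np.round(freqs[spectrum > 0.05 * spectrum.max()], 1))   # -> [47. 50. 53.]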


And that’s how you can shine light on a material and see some of it come back to you at a different frequency.  The point is that the molecules inside the material are like squishy metal balls: they polarize, and they wobble. And so the light that they radiate has information about the frequencies of both processes.  By applying light with a known frequency, you can figure out how quickly the material’s constituent molecules wobble, and thereby say something about what they are made of.


Footnote

You may well ask “what gets the ball wobbling in the first place?”  It turns out there are two possible answers.  First, the ball can start wobbling just by random kicks that it gets from its thermal environment.  In this case the intensity of the scattered light is proportional to the temperature multiplied by the square of the incident electric field.

On the other hand, if the temperature is small or the intensity of the applied light is large, then the ball can start wobbling due to the stretching forces it feels from the light itself.  (Briefly, when a molecule gets electrically polarized, it feels a force pushing it apart that is proportional to the strength of the applied field.)  This is called “stimulated Raman scattering,” and it produces a scattered light intensity that is proportional to the fourth power of the incident electric field.

What if I were 1% charged?

May 22, 2013

In case you hadn’t heard, the universe is governed by four fundamental forces.  But when it comes to understanding nature at almost any level larger than a nucleus and smaller than a planet, only one of them really matters: the Coulomb interaction.

The Coulomb interaction — the pushing and pulling force between electric charges — is almost incomprehensibly strong.  One common way to express this strength is by considering the forces that exist between two electrons.  Two electrons in an otherwise empty space will feel pulled together by their mutual gravitational attraction and pushed apart by the Coulomb repulsion.  The Coulomb repulsion, however, is stronger than gravity by 4,000,000,000,000,000,000,000,000,000,000,000,000,000,000 times.  (For two protons, this ratio is a more pedestrian 10^{36} times.)
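That ratio is easy to check, since the separation cancels out of the comparison (both forces fall off as 1/r^2).  A quick sketch with standard constants:

# Coulomb repulsion vs. gravitational attraction between two identical particles.
k = 8.9875517923e9        # Coulomb constant, N*m^2/C^2
G = 6.67430e-11           # gravitational constant, N*m^2/kg^2
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg
m_p = 1.67262192369e-27   # proton mass, kg

print(f"two electrons: {k * e**2 / (G * m_e**2):.1e}")   # -> 4.2e+42
print(f"two protons:   {k * e**2 / (G * m_p**2):.1e}")   # -> 1.2e+36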

When I was a TA, I enjoyed demonstrating this point in the following way.  Take a balloon, and rub it against the top of your head until your hair starts to stand on end.  Then stick the balloon to the ceiling, where it stays without falling due to static electricity.  Now consider the forces acting on the balloon.  Pulling up on the balloon are electric forces between the relatively few electrons I just rubbed off from my hair and the opposite charge that they induce in the ceiling.  Pulling down on the balloon are gravitational forces coming from the pull of the entire mass of the Earth.  Apparently the electric force created by those few (something like 10^{10}) electrons is more than enough to counterbalance the gravitational pull coming from every proton, neutron and electron in the planet below it (something like 10^{51}).

[figure: balloon_balancing]

So electric forces are strong.  Why is it, then, that we can go about our daily lives without worrying about them buffeting us back and forth?

The short answer is that they do buffet us back and forth.  Pretty much any time you feel yourself being pushed or pulled by something (say, the ground beneath your feet or the muscles tied to your skeleton), the electric repulsion between microscopic charges is ultimately to blame.

But a better answer is that the very strength of electric forces is responsible for their seeming quietude.  Electric forces are so tremendously strong that nature will not abide having a large amount of electric charge collect in one place.  And so electric forces, at the scale of people-sized objects, are largely neutralized.

But what if they weren’t?


When I was a TA I got to walk my students through the following morbid little problem, which helped them see why it is that electric forces don’t really appear on the human scale.  Perhaps you will enjoy it.  Like most good physics problems, it is thoroughly contrived and, for a new student of physics, at least, its message is completely memorable.

The problem goes like this:

What would happen if your body suddenly lost 1% of its electrons?


Now, 1% may not sound like a big deal.  After all, there is almost no reason for excitement or concern when you lose 1% of your total mass.  But losing 1% of your electrons, without at the same time losing an equal number of protons, means that suddenly, within your body, there is an enormous amount of positive, unneutralized electric charge.  And nature will not abide its strongest force being so unrequited.

I’ll use my own body as an example.  My body has a mass of about 80 kg, which means that it contains something like 2 \times 10^{28} protons, and an almost exactly equal number of electrons.  Losing 1% of those electrons would mean that my body acquires an electric charge of 2 \times 10^{26} electron charges, or about 4 \times 10^9 Coulombs.

Now, 4 billion Coulombs is a silly amount of charge.  It is about 300 million times more than what gets discharged by a lightning bolt, for example.  So, in some sense, losing 1% of your electrons would be like getting hit by 300 million lightning bolts at the same time.

Things get even more dramatic if you start to think about the forces involved.

Suppose, for example, that in their rush to escape my body, those 4 billion Coulombs split in half and flowed to opposite extremities.  Say, each hand suddenly acquired a charge of 2 billion Coulombs.  The force between those two hands (spread apart, about 6 feet) would be 10^{27} Newtons, which translates to about 10^{26} pounds.  Needless to say, my body would not retain its structural integrity.

Of course, in addition to the forces pushing the extremities of my body apart, there would also be a force similar in magnitude pulling me toward the ground.  You may recall that when an electric charge is next to a grounded surface (like, say, the ground) it induces some opposite charge on that surface in a way that acts like an “image charge” of opposite sign.  In my case, the earth would accumulate a huge amount of negative charge around my feet so as to create a force like that of an “image me.”

[figure: image_me]

Because of my 4 billion Coulombs, the force between myself and my “image self” would be something like 10^{23} tons.  To give that some perspective, consider that something with the same mass as the planet earth weighs only about 10^{21} tons.  So the force pulling me toward the earth would be something like the force of a collision between the earth and the planet Saturn.

But my hypercharged self would not only crush the earth.  It would also break open the vacuum itself.  At the instant of losing those 1% of electrons, the electric potential at the edge of my body would be about 40 exavolts.  This is much larger than the voltage required to rip apart the vacuum and create electron-positron pairs.  So my erstwhile body would be the locus of a vacuum instability, in which electrons were sucked in while positrons were blasted out.

In short, if I lost 1% of my electrons, I would not be a person anymore.  I would be a bomb.  A Coulomb bomb, if you will, with an energy equivalent to that of ten trillion (modern) atomic bombs.  Which would surely destroy the planet.  All by removing just 1 out of every 100 of my electrons.


The moral of this story, of course, is that nothing of observable size will ever get 1% charged.  The Coulomb interaction cannot be thus toyed with.  All of chemistry and biology function by the interactions between just a few charges at a time, and their effects are plenty strong as they are.


Footnote

As a PhD student, I worked on all sorts of problems that involved the Coulomb interaction, and occasionally my proposed solution would be very wrong.  The worst kind of wrong was the one that made my advisor remark “What you just created is a Coulomb bomb,” which meant that I had proposed something that wasn’t neutral on the large scale.

It’s one thing to feel like you just solved a problem incorrectly.  It’s another to feel like your proposed solution would destroy the planet.
