
A plug for poetry

April 30, 2013

I have never been a great creative writer.  But there was a time in my life when I devoted a fair amount of effort to it.  And it did me a great deal of good.

My creative writing came almost entirely during college, which was as turbulent a time for me as I imagine it is for most people.  Creative writing, and poetry in particular, was valuable during those years because it gave me a channel through which I could plainly state the way I perceived and felt about certain ideas.  Once those feelings were committed to paper I could begin to understand them (or at least be aware of them), and see how they meshed or conflicted with my intellectual understanding of the same ideas.  Eventually I came to realize that there was a very specific set of things that I found simultaneously moving and irreconcilable.  And I saw that there was something very particular that I wanted to say, and which felt very important, but that I couldn’t manage to put into words.  Finally, after over a year hovering around the same ideas (even when I tried not to), I wrote the best poem of my life.  I don’t know how other people would feel about it, but to me it was exactly the thing that I wanted to see written as black lines on white paper.  And it made me quite happy.

So, in short, I have a great and very personal appreciation for the potential benefits of poetry.  Today is April 30, which means there are technically a few hours remaining in National Poetry Month (as Brian Tung has been reminding me).  So I wanted to write a short post endorsing poetry in general.

In particular, I wanted to suggest two simple exercises for the person who has an interest in trying to write a poem, but has a hard time knowing where to start.  I credit both of these exercises to my wife (who got them from a class she took with Jude Nutter), although I imagine that they are pretty common in creative writing classes.


Exercise 1: Thirteen Ways

Wallace Stevens’ “Thirteen Ways of Looking at a Blackbird” is a pretty famous poem, and for many it may bring back bad memories of high school English class.  But it’s also surprisingly fun to play with.  In case you’ve never seen it before, the original poem is this:

Thirteen Ways of Looking at a Blackbird

I
Among twenty snowy mountains,
The only moving thing
Was the eye of the blackbird.

II
I was of three minds,
Like a tree
In which there are three blackbirds.

III
The blackbird whirled in the autumn winds.
It was a small part of the pantomime.

IV
A man and a woman
Are one.
A man and a woman and a blackbird
Are one.

V
I do not know which to prefer,
The beauty of inflections
Or the beauty of innuendoes,
The blackbird whistling
Or just after.

VI
Icicles filled the long window
With barbaric glass.
The shadow of the blackbird
Crossed it, to and fro.
The mood
Traced in the shadow
An indecipherable cause.

VII
O thin men of Haddam,
Why do you imagine golden birds?
Do you not see how the blackbird
Walks around the feet
Of the women about you?

VIII
I know noble accents
And lucid, inescapable rhythms;
But I know, too,
That the blackbird is involved
In what I know.

IX
When the blackbird flew out of sight,
It marked the edge
Of one of many circles.

X
At the sight of blackbirds
Flying in a green light,
Even the bawds of euphony
Would cry out sharply.

XI
He rode over Connecticut
In a glass coach.
Once, a fear pierced him,
In that he mistook
The shadow of his equipage
For blackbirds.

XII
The river is moving.
The blackbird must be flying.

XIII
It was evening all afternoon.
It was snowing
And it was going to snow.
The blackbird sat
In the cedar-limbs.

–Wallace Stevens

What’s great about this poem is how it takes a relatively common object — a blackbird — and constructs thirteen short images around it, each of which makes you see the object in a new light.  Trying to emulate this approach (as literally as you want) with your own chosen object can be surprisingly gratifying.

Here, for example, is my own go at it:

Thirteen Ways of Standing at a Bridge

I
Standing on a bridge, a body becomes aware
of its tenderness for open air.
A body becomes aware of all that it cannot be.

II
She was so small.
She was distant.
She walked as if in place.
But I saw her, on a straight line,
And the bridge was my telescope.
The bridge was my ravine
to channel the love I devoutly wished
would fill the space between us.

III
A bridge is a joining of triangles.
A bridge is geometry.
A bridge is not perfect.
A bridge is the feeling of being perfect.

IV
Above: thin and light whispering, tenuous fluid.
Below: dark and crashing roar, inexorable fluid.
I am held, just here, by the bridge

V
I am writing, I am resolute. I am filling the page with my hand.
My mind is here.
My mind is not here.
I am building a thin, gray bridge across the white paper.

VI
A bridge, sometimes, is everything.
A bridge is not alive.

VII
Listen: urine will cover the concrete.
The walls will writhe with vulgarness.
The traffic will echo, always from above.
But you,
sir,
you can live here.
This bridge will keep you dry.

VIII
When the sun rose, the bridge was yellow.
The bridge was yellow
in the dark night.

IX
In those days of rain, the floodwaters
culminated.
They embarrassed us deeply.
We watched, together, a trailer house go sliding down the river:
A white bird alighted on the roof.
Together, they passed beneath the bridge.

X
Every atom is a stone.
It is held by others —
together, they must hold.
This bridge is my shoulder.
A shadow travels over it.

XI
In the evening, I sit.  I think of them.
They are silent.  I sit in fear.
My fear is a dark, black bridge that stands against an impossible sky.

XII
A child, in the world, is small.
A man, on a bridge, is a child.
A child, in the world, is small.

XIII
You did not know my courage
when I stepped into the kitchen.
You looked at me.  I stopped.
I was feeling the crack of a great, great bridge.

–Brian Skinner

If you’re curious, I wrote this poem largely thinking about the Stone Arch Bridge in Minneapolis:

[Photo: the Stone Arch Bridge]

Exercise 2: The Single Sentence

One of the things that a great poem can do is to surprise you by pulling you in a sudden and unexpected direction.  Here, for example, is a good one that does exactly that:

Where You Go When She Sleeps

What is it when a woman sleeps, her head bright
In your lap, in your hands, her breath easy now as though it had never been
Anything else, and you know she is dreaming, her eyelids
Jerk, but she is not troubled, it is a dream
That does not include you, but you are not troubled either,
It is too good to hold her while she sleeps, her hair falling
Richly on your hands, shining like metal, a color
That when you think of it you cannot name, as though it has just
Come into existence, dragging you into the world in the wake
Of its creation, out of whatever vacuum you were in before,
And you are like the boy you heard of once who fell
Into a silo full of oats, the silo emptying from below, oats
At the top swirling in a gold whirlpool, a bright eddy of grain, the boy
You imagine, leaning over the edge to see it, the noon sun breaking
Into the center of the circle he watches, hot on his back, burning
And he forgets his father’s warning, stands on the edge, looks down,
The grain spinning, dizzy, and when he falls his arms go out, too thin
For wings, and he hears his father’s cry somewhere, but is gone
Already, down in a gold sea, spun deep in the heart of the silo,
And when they find him, he lies still, not seeing the world
Through his body but through the deep rush of grain
Where he has gone and can never come back, though they drag him
Out, his father’s tears bright on both their faces, the farmhands
Standing by blank and amazed – you touch that unnamable
Color in her hair and you are gone into what is not fear or joy
But a whirling of sunlight and water and air full of shining dust
That takes you, a dream that is not of you but will let you
Into itself if you love enough, and will not, will never let you go.

– T. R. Hummer

One of the impressive things about this poem is that it is all one sentence, but it never feels like a run-on.  The writer pulls you fluidly from one image and one emotion to another, returning at the end to where you started.  But by the end the starting place feels very different than it did at the beginning of the poem.

This is also a fun poem to try and emulate.  That is, you can try to write a poem that tells a story with a single sentence.  When I tried this, I chose to do it from a similar first person view, but I chose for my subject someone that I thought would be more likely to think in long, unbroken sentences.

[Untitled]

When your vantage point is low to the ground, everything is edible —
the angled, cylindrical surface of a pencil
that yields to the teeth, leaving flecks in the mouth;
the cool and sharp-tasting metal of a dime,
with its muscular face and rippled edges to tickle the tongue;
the round, plastic coating of electrical wire;
the animalesque foot of a wooden table;
the hairy tendrils of a scuffed-up ball of carpet,
which feel like danger, but also like knowledge –,
in short, such a world is meant to be explored with the mouth,
each small or large thing encountered, somehow by chance,
and carefully weighed for taste and texture,
which together allow you to make a judgment
(a judgment only — not an opinion,
not an image,
not a coherent set of emotions,
and not a list of rules or guidelines)
of a world that is being presented to you
through a constant, dizzying sensorial flow,
a world that cannot be encountered in any way except
from the floor
in a state of great wonder,
in frequent pauses, where the head lolls
for trying to take in uncountable surroundings,
and through short, exultant bursts of motion,
rife with the joy of what it means
to be a living body
(the kind of joy that exists only
while “joy” is not its own word
or a category of certain things
to be sought or shunned or understood as a controlled part of being alive),
until finally, with little warning,
that feeling of being powerful and lost,
of being central to an inscrutable universe,
becomes intolerable
and you lie there, twinging,
feeling the body’s reaction to oppression,
and the depth of your own helplessness against it,
until that moment when, somehow, you are lifted
from your own terrible weight of being
and restored to comfort and familiarity
by a gentle but firm compression, a soft encirclement,
and the reassuring, nurturing warmth
of a round breast.

–Brian Skinner

My poem is, of course, in all ways a worse poem than T. R. Hummer’s; it’s uglier and less smooth, with weaker imagery.  But the point of this post is that you don’t have to be T. R. Hummer or Wallace Stevens to gain something by writing poetry.  So if you’re the kind of person who feels a vague interest in it, maybe you can use the last day of National Poetry Month as an excuse to try one of these exercises.

Of course, feel free to suggest other good exercises in the comments.


This is probably pretty immature, but at the end of this post I feel a need to wash away some of the artsy stink of pretentiousness that (unfairly) comes with talking about and writing poetry.  So feel free to enjoy this quote from Paul Dirac:

The aim of science is to make difficult things understandable in a simpler way; the aim of poetry is to state simple things in an incomprehensible way. The two are incompatible.

–P. A. M. Dirac

And this hilarious sketch from Stephen Fry and Hugh Laurie:

[Embedded video]

On the myth of the gay-straight dichotomy

April 29, 2013

Today marks the first time that an active athlete in a major American team sport has come out as gay.  It’s a fairly monumental moment, one that would have been unimaginable even during my (not-so-distant) childhood.  The tide of public sentiment is very firmly and dramatically changing.

As it happens, the athlete in question is Jason Collins, an NBA center who plays for the Washington Wizards.  I have long had a personal liking for Jason Collins, probably because he was an NBA center (which is my own dream job) who went to Stanford (where I wish I could have gone) and who listed in his official biography that his favorite book was Faulkner’s Light in August (which I wish I had understood well enough to appreciate).  I even met him once (very briefly and incidentally) at the Radio Shack in the Minneapolis Skyway.

Today Sports Illustrated published an excellent article written by Collins about his coming out.  If you haven’t seen it yet, I highly recommend reading it.


There is one feature of his article, though, that bothers me.  This is, in fact, something that stems from a larger problem I have with the way we think about homosexuality.  Namely, Jason Collins goes out of his way to say that “Being gay is not a choice.”

This is a sentiment that is repeated by just about everyone who comes out publicly: “I didn’t choose to be this way.”  Of course, I don’t doubt that this is a true and even good thing to say.  But it bothers me for two reasons:

  1. It oversimplifies the complex and highly diverse nature of human sexuality, and it creates a misleading impression that people are born as either “gay” or “straight.”
  2. It implies that homosexuality is okay only because its practitioners have no choice in the matter.

Homosexuality is becoming more accepted; that much is very clear from polling data.  But I am worried that it is being accepted only because people can be sympathetic to a very particular, and for the most part very untrue, narrative.  This is the narrative of the person born without the ability to feel attracted to the opposite sex, who is forced by nature to develop homosexual rather than heterosexual relationships.  This is the person who announces “I didn’t choose to be this way,” as if to say “I would have been straight if I could have been, but I was born gay so the decision wasn’t mine to make.”

But what would have been wrong if the person could have decided?  If some hypothetical person felt equally attracted to both genders, what would be wrong with them choosing to pursue a homosexual rather than heterosexual relationship?  It seems to me that this is a narrative that we, as a society, are still not willing to accept.

And I think we should get over that.  Because, ultimately, the more freedom people are given to choose their partners, the more easily they will be able to form happy and stable relationships.  And I strongly suspect that, in a world where value judgments were not associated with the gender of one’s partner, gender would be a significant part of that choice for a great many people.


Let me try to say it this way.  The idea of the “gay person” is a myth, and so is the idea of the “straight person.”  Every individual is attracted to a particular set of features and attributes, both physical and personal, that they want in a potential partner.  For most people, members of the opposite gender are much more likely to have an attractive combination of those attributes than a member of the same gender.  But for no one is every member of the opposite gender more attractive than every member of the same gender.  In other words, no one is 100% straight, and similarly, no one is 100% gay.

To make this discussion a little more concrete, let me introduce a hypothetical measure of a person’s sexual preference.  If I were asked to design such a metric, it might go something like this.  For a given individual, randomly select one person from the opposite gender and one person from the same gender (if you want, choose both of them to be within the individual’s age group).  Have the individual spend some time with each of the two people, and then report an honest assessment of which person the individual found more attractive (hypothetically, you could get this information from something like plethysmography).  The probability that the individual will be more attracted to the person of their same gender could be called the “gay preference ratio.”

I am virtually certain that for no individual would this ratio be exactly 0 or exactly 1.  Instead, I imagine that its distribution across a large population would be something like this:

[Figure: a hypothetical (logit-normal) distribution of the “gay preference ratio” across a large population]

You can fairly say that people on the far left and on the far right of this distribution have essentially no choice in which gender to date: they are virtually never attracted to one of the two sexes.  All those people in the middle, however, could theoretically have a happy relationship with a person from either of the two genders.

Most people (most Americans, anyway) who are publicly gay probably do correspond to the far right of this distribution, and in this sense their proclamations of “I didn’t choose to be gay” are likely very honest and heartfelt.  After all, until quite recently (arguably), societal prejudice has made being a gay American so difficult that only those who have nearly no alternative would choose it.   (And in most places this is still the case, to varying degrees.)  I suspect, however, that for every publicly gay American there are a dozen straight Americans who, in a completely free society, could easily have settled down in a homosexual relationship had a good one presented itself.

Thus, while gay rights are advancing at an impressive rate, we are still a long way from granting the sort of casual non-judgmentalism that would benefit people in the middle of this distribution.  Their existence just doesn’t fit the narrative that we are willing to follow in order to accept homosexuality.  Perhaps in the near future our gay rights discussion will shift toward eliminating this completely false narrative, and advancing the idea that everyone should be free to choose who they want to be with, regardless of whether gender is part of that choice.


Footnotes

1.  As I read stories about Jason Collins, I can’t help but contrast them with the story of another professional basketball player, Sheryl Swoopes.  Swoopes was married in 1995 to a man and had a son.  Then in 2005 she “came out” as gay, saying that she had fallen in love with her former (female) assistant coach.  That relationship ultimately didn’t last, and in 2011 she became engaged to a man again.

While her coming out in 2005 was widely reported, I haven’t read anything about her since.  Somehow, it’s easier to champion the courage of someone who is gay than of someone who is capable of being attracted to both men and women.  I hope that in the near future stories like hers become more commonplace and more easily acceptable.

2.  The schematic distribution drawn above is actually a logit-normal distribution.  It is, of course, completely hypothetical.
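
If you want to see what such a curve looks like, here is a minimal sketch that samples a logit-normal distribution.  The location and spread parameters below are arbitrary choices for illustration, not anything fit to data.

```python
import numpy as np
import matplotlib.pyplot as plt

# A logit-normal variable is sigmoid(X) for normally distributed X.
# mu and sigma are arbitrary illustrative choices; mu < 0 puts most
# of the population at a low "gay preference ratio."
rng = np.random.default_rng(0)
mu, sigma = -2.5, 1.5
samples = 1.0 / (1.0 + np.exp(-rng.normal(mu, sigma, size=100_000)))

# The density vanishes at exactly 0 and exactly 1, but every value
# in between occurs with some nonzero probability.
plt.hist(samples, bins=200, density=True)
plt.xlabel("hypothetical gay preference ratio")
plt.ylabel("probability density")
plt.show()
```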

3.  There seems to be much hand-wringing about whether people are born gay, or whether they develop it through some factors present in their upbringing.  Personally, I can’t imagine why this distinction matters.  I love sports, but who cares whether that love arises primarily from genetic or environmental factors?

UPDATE:  4.  In this post, I have not bothered to draw a distinction between “gender” and “sex,” and have mostly used the former.  I understand that’s not very correct, but for the point I’m trying to make it doesn’t matter which way you decide to look at the word “gender.”

5.  Of course, it was silly of me not to bring up the Kinsey Scale in this post, which was the first real attempt to make a quantitative measure of “gay preference ratio.”  I hope no one got the impression that I consider myself the first person to realize that sexual orientation comes on a continuous spectrum.

Where the periodic table ends

April 27, 2013

[Image: periodic_table_ends, after Shel Silverstein’s Where the Sidewalk Ends]

There is a wonderful story in physics, with a rich history, that begins with this question:  What is the biggest possible atomic number?

In other words, where does the periodic table end?  We (as a species) have managed to observe or create nuclei with atomic number ranging from 1 to as large as 118.  But how far, in theory, could we keep going?


As it turns out, there is a scientific law that says that nuclei whose charge Z is greater than some particular critical value Z_c cannot exist.  What’s more, this critical value Z_c is related to the fine structure constant \alpha = e^2/\hbar c \approx 1/137, one of the most fundamental and mysterious constants of nature.  (Here, e is the electron charge, \hbar is Planck’s constant divided by 2 \pi, and c is the speed of light.)

In particular, the periodic table should end at Z \approx 1/\alpha \approx 137.

In this post I’ll explain where this law comes from, and why it is that no point object can have a charge greater than \sim 137.


To begin with, imagine a point in space at which is localized a very large positive charge Z e.  Like this:

[Figure: Z_nucleus, a large positive charge Ze localized at a point]

I’ll call this point the nucleus.  You can now ask the question of what happens if you release an electron in the neighborhood of this nucleus.  Obviously, the electron gets strongly bound to the nucleus, and it settles into a compact state with some size r around the nucleus.  To figure out how big r should be, you can remember that its value is determined by a balance between the typical energy of attraction -Ze^2/r between the electron and the nucleus and the kinetic energy K associated with confining the electron to within the distance r (as was explained here for the hydrogen atom).

The tricky part here is that for a very large nuclear charge Z the electron kinetic energy gets big, and the electron ends up moving with speeds close to the speed of light.  To see this, consider that when the nuclear charge Z is large, the electron becomes tightly bound, which means r is small.  From the uncertainty principle, confining the electron to within the distance r gives it a momentum p \sim \hbar/r.  If p is big enough (or r is small enough) that p \gg m c (where m is the electron mass), then the kinetic energy can be described using the relativistic formula K = p c \sim \hbar c/r.

Now if you put together the potential and kinetic energy, you’ll find that the total energy as a function of r is

E \sim \hbar c/r - Z e^2/r.

This is a disconcerting formula.  Unlike for the hydrogen atom (where the electron moves much slower than the speed of light), this energy has no minimum value as a function of r.  In particular, if Z is large enough that Z \gtrsim \hbar c/e^2 \sim 1/\alpha \sim 137, then the energy just keeps getting lower as r is made smaller.

What this means is that for Z \gtrsim 137, the electron state is completely unstable, and the electron collapses onto the nucleus.
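
Here is a quick numerical sketch of that sign change.  It just evaluates E(r) \sim (1 - Z\alpha)\hbar c/r on either side of the critical charge, in units where \hbar c = 1; the two Z values and the radii are illustrative choices, nothing more.

```python
import numpy as np

ALPHA = 1 / 137.036  # fine structure constant, e^2/(hbar*c)

def total_energy(r, Z):
    """Schematic ultrarelativistic energy E ~ hbar*c/r - Z*e^2/r,
    i.e. (1 - Z*alpha)/r in units where hbar*c = 1."""
    return (1.0 - Z * ALPHA) / r

radii = np.logspace(0, -6, num=4)  # shrink r toward zero
for Z in (100, 170):
    print(f"Z = {Z}:", total_energy(radii, Z))
# For Z = 100 the energy climbs without bound as r shrinks, so the
# collapse is arrested.  For Z = 170 it plunges toward -infinity:
# nothing stops the electron from falling onto the nucleus.
```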


This may not seem like a particularly big problem to you.  You may think that perhaps one can just keep electrons away from the nucleus (at least for a little while), and the nuclear charge Z > 137 will sit happily in space.

However, when Nature wants electrons badly enough, it finds a way to get them.

In this case,  the nuclear charge Z > 137 creates an electron binding energy that is so large, it becomes even larger than the rest mass energy of the electron, mc^2.  With such a large energy at stake, the nucleus can literally rip apart the vacuum and pull an electron from it.

Or, more correctly, the nucleus can wait until random fluctuations of the electromagnetic field produce an electron and positron pair (which under normal circumstances would immediately disappear again), then greedily suck in the electron and spit out the positron.  Like this:

[Figure: greedy_nucleus, the nucleus sucking in the electron of a vacuum fluctuation and spitting out the positron]

[This process is similar to the perhaps more famous phenomenon of Hawking radiation at the edge of a black hole.  In black holes the enormous gravitational field rips antiparticles from the vacuum and sucks them in, spitting out their (normal) particle partners.  The difference is that Hawking radiation is an extremely slow process, whereas the process described above would be nearly instantaneous.]

The ripping and devouring of vacuum electrons by the large nucleus continues until the charge Z has been reduced to the point where it becomes smaller than \sim 137, and everything settles down again.

It’s a fascinating instability of the vacuum itself, and its result is to prohibit too much charge from existing at any one location.

This means an end to the periodic table.


Footnotes

1.  The derivation in this post was pretty schematic, and all I showed was that the critical value Z_c of the nuclear charge is proportional to 1/\alpha \approx 137.  Up until the 1970s it was believed that Z_c was exactly 137.  More recent works, however, have put this number closer to 170.

2.  Sadly, there’s no easy way to observe this vacuum instability at Z > 1/\alpha; making nuclei with charge 137 is no simple matter.  So one can think of this as yet another interesting fable of physics relegated to trivia by the fact that in our universe \alpha just happens to be a small number.  However, there are synthetic systems (like graphene) where the effective fine structure constant happens to not be a small number.  In such cases even charges as small as Z = 2 cannot exist stably.

3.  The picture, and the title, at the top of this post is of course taken from Shel Silverstein’s Where the Sidewalk Ends.

Nothing lasts forever

April 25, 2013

Quick, what’s the integral from zero to infinity of  \sin x?


If you’re a good math student, you’ll tell me that the answer is undefined, since \sin x oscillates forever and so the integral doesn’t converge.

I, on the other hand, am not a good math student, so I am free to tell you that I know the answer.

The integral from zero to infinity of \sin x is 1.

[Figure: What’s the total area under this curve (red – blue)?  I say it’s 1.]


Let me explain where this answer comes from, and why I’m so confident that it’s right.  In doing so, perhaps I will demonstrate a little bit about the relationship that physics has with math.

First of all, as a physical-minded person I should interpret what it means to write \int_0^\infty \sin (x) dx.  In my mind, the only meaningful interpretation of the question “what is \int_0^\infty \sin (x) dx?” is something like “what is the net (integrated) effect of something that oscillates for a very long time?”  The \infty in the integral means that when someone asks “how long do you mean by ‘very long’?”, the correct answer is “as long as I want.”


Now I should answer the question, and I can do so as long as I hold to one belief: In the real world, nothing actually lasts forever.

That is, I don’t know how long, exactly, the \sin x in the integral will keep going, but I do know that it should die off eventually.  So let me assume that the amplitude of the sine wave dies off very slowly (“How slowly?”  “As slowly as I want.”).  Then I can calculate the integral, get a perfectly well defined answer, and verify that my answer doesn’t depend on how slowly I killed off the sine wave.

For example, say I kill off the oscillations of the sine function exponentially, by replacing \sin x in the integral with \sin(x) e^{-x/L}, where L is a very large number.  Then I can calculate the integral, and check what happens when L gets arbitrarily large.  You can do this exercise for yourself (by hand if you’re diligent, using Wolfram Alpha if you’re lazy) and you’ll find that when L \gg 1 the answer is very close to 1.  (“How close?”  “As close as you want.”).
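
If you’d rather have a computer do the bookkeeping, here is a small sympy version of exactly that exercise (sympy standing in for the Wolfram Alpha route mentioned above):

```python
import sympy as sp

x, L = sp.symbols("x L", positive=True)

# Damp the oscillations with exp(-x/L), integrate, then let L -> infinity
regulated = sp.integrate(sp.exp(-x / L) * sp.sin(x), (x, 0, sp.oo))
print(sp.simplify(regulated))         # L**2/(L**2 + 1)
print(sp.limit(regulated, L, sp.oo))  # 1
```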


The more precise mathematical statement goes like this:

\int_0^\infty \sin(x) dx = \lim_{L \rightarrow \infty} \int_0^\infty e^{-x/L} \sin(x) dx = 1.

(This is a mathematical trick that I use, in one form or another, all the time.)


So, to recap, what is the integral of \sin x from zero to infinity?

My math textbook, my math teacher, and all my math software say that the answer is undefined.  But as long as you grant me that nothing actually lasts forever, I’ll tell you that the answer is 1.


Footnotes

1.  It is not really my intention to bash on mathematics or math teachers.  For example, I found that the most “hard core” math course that I took in college — real analysis — was thoroughly grounded in intuitive and physical thinking of the sort I am advocating here.

2.  One way to think about the final answer, \int_0^\infty \sin(x) dx = 1, is that the area under the curve above (red – blue) depends on how many red bumps and how many blue bumps you count.  Every red bump contributes an area +2 and every blue bump has area -2.  So as you count them from left to right, your final tally for the area will go back and forth between +2 and 0.  The correct answer, 1, is the average of these two, which you might expect from any process that slowly washes out your counting procedure.

3.  If you’re curious, \int_0^\infty \cos(x) dx = 0.  (This can of course be worked out using the same mathematical trick as above.)

4.  If you’re really curious, \int_0^\infty \sin(x + \phi) dx =\cos(\phi).  So the integral from zero to infinity of an oscillating wave can take any value from -1 to 1, depending on its phase when it started.  This result could be anticipated from the simple argument in Footnote 2.

5.  Just to reassure you, there is nothing magical about the choice of an exponential cutoff e^{-x/L}.  I usually use it because it’s easy to work with.  But you’ll find that any slow damping of the oscillations will give the same result.

I suspect, in fact, that there is some nice theorem here.  Like:

For any continuous function f(y) such that f(y \rightarrow \infty) \rightarrow 0 [UPDATE: and f(0) = 1], \lim_{L \rightarrow \infty} \int_0^\infty \sin(x) f(x/L) dx = 1.

If I were a smarter person I could probably prove this theorem and generalize it to any oscillating function (with zero mean).  Can any of my more mathematically inclined readers shed light on the subject?  Maybe there are other necessary constraints on f(y)?
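
I can’t prove anything here, but a quick numerical experiment is at least consistent with the conjecture.  This is only a sketch: the two cutoff functions and the values of L are arbitrary choices, and scipy’s oscillatory quadrature does the heavy lifting.

```python
import numpy as np
from scipy.integrate import quad

def damped_sine_integral(f, L):
    """Compute int_0^infinity sin(x) * f(x/L) dx using QUADPACK's
    Fourier-integral routine (weight='sin' on a semi-infinite range)."""
    val, _ = quad(lambda x: f(x / L), 0, np.inf,
                  weight="sin", wvar=1, limlst=200)
    return val

# Two cutoff shapes, both with f(0) = 1 and f(y) -> 0 as y -> infinity
cutoffs = {
    "exponential": lambda y: np.exp(-y),
    "gaussian": lambda y: np.exp(-y**2),
}
for name, f in cutoffs.items():
    vals = [damped_sine_integral(f, L) for L in (3, 30, 300)]
    print(name, [f"{v:.5f}" for v in vals])
# Both families creep toward 1 as L grows, whatever the cutoff shape.
```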

Spare me the math: the Lamb Shift

April 24, 2013

To kick off SMTM, I’ll look at a topic that I never really understood when I was in graduate school: the Lamb shift.

The Lamb Shift: what it is

In its most commonly-discussed form, the Lamb shift is a small effect.  In fact, it’s a very small effect (which is probably why I never bothered to learn it in the first place).  The Lamb shift is a minuscule change in (some of) the energy levels of the hydrogen atom relative to where it seems like they should be.  For example, the binding energy of an electron to the hydrogen nucleus (a proton) is about 13.6 electron volts.  The Lamb shift is a phenomenon that changes this energy level by about 4\times10^{-6} eV, or about 0.00003%.  But the existence of this shift was a serious puzzle to physicists in the 1940s and 50s, and its final resolution provided a beautiful piece of physics that helped spur the development of quantum electrodynamics, one of the most spectacularly successful scientific theories in history.

The essence of the Lamb shift can be stated like this: it is the energy of interaction between hydrogen and empty space.

The dominant contribution, of course, to the energy of the hydrogen atom is the interaction of the electron with the proton it’s orbiting.  If you want a really quick way to derive the energy of the hydrogen atom, all you need to remember is that the size of the electron cloud around the proton has some characteristic size a.  Confining the electron to within this cloud costs some kinetic energy \sim \hbar^2 /m a^2, and it buys you some energy of attraction, \sim -e^2/a (here I’m being too lazy to write out the 4\pi\epsilon_0s that come in SI units).  So the total energy is something like E \sim \hbar^2/ma^2 - e^2/a.  If you minimize E with respect to a (take the derivative and set it to zero), you’ll find that

a \sim \hbar^2/me^2

and

E \sim -me^4/\hbar^2 \sim -e^2/a.

The constant a \approx 0.5 Angstroms is the Bohr radius, which is the typical size of the hydrogen atom (and, roughly speaking, any atom).  The energy -e^2/a \equiv R is the Rydberg energy.
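
As a quick sanity check, you can put numbers into these two scales using nothing but scipy’s table of fundamental constants.  This sketch works in SI units, so the 4\pi\epsilon_0s reappear, and it keeps the exact factor of 1/2 that the schematic estimate drops:

```python
from scipy.constants import hbar, m_e, e, epsilon_0, pi

# Bohr radius a ~ hbar^2/(m e^2); in SI, e^2 -> e^2/(4 pi epsilon_0)
a = 4 * pi * epsilon_0 * hbar**2 / (m_e * e**2)

# Exact ground-state binding energy carries a factor 1/2 that the
# schematic estimate e^2/a drops
E_bind = e**2 / (8 * pi * epsilon_0 * a)

print(f"a ~ {a * 1e10:.3f} Angstroms")  # ~0.529
print(f"E ~ {E_bind / e:.2f} eV")       # ~13.6
```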

The Lamb shift comes from the way this balanced state between electron and proton is influenced by the slight, random buffetings from the vacuum itself.


The hydrogen atom

There are two players in this story: the hydrogen atom, and empty space.  I’ll describe the former first, since the latter (paradoxically) is considerably harder.

In reality, the Lamb shift is most easily observed in excited states of hydrogen (the P states, see Footnote 1 at the bottom), but for the purpose of this discussion it’s easiest to think about the ground state.  In terms of the electron probability cloud, the ground state of the hydrogenic electron looks like this:

[Figure: ugly_hydrogen, the ground-state electron probability cloud of hydrogen]

It has a peak right at the middle of the atom, and it falls off exponentially.

I know there is a lot of trickiness associated with whether to think about an electron as a particle or as a wave (my own favorite take is here — in short, an electron is a particle that surfs on a wave), but for this discussion it’s easiest to think of an electron as a point object that just happens to arrange itself in space according to the probability density plotted above.


The vacuum

There is a lot going on in empty space.  If this crazy idea is completely new to you, I would (humbly) suggest reading my post on the Casimir effect.  The upshot of it is that all of space is filled by endlessly boiling quantum fields, and one of these, the electromagnetic field, is responsible for conveying electromagnetic forces.  As a result of its indelible boiling, however, the electromagnetic field can push on charged objects, like our electron, even when there are no other charged objects around to seemingly initiate the pushing.

To get a better description of the electromagnetic field in vacuum, it will be helpful to imagine that our electron sits inside a large metal box with size L.  Inside this metal box are lots of randomly-arising electromagnetic waves (“virtual photons”).  Something like this:

[Figure: metal_box, randomly-arising electromagnetic waves inside a metal box of size L]

When dealing with quantum fields, a good rule of thumb is to expect that, in vacuum, every possible oscillatory mode will be occupied by one quantum of energy.  In this case, it means that for every possible vector k = 2\pi / \lambda, where \lambda = 2L, 2L/2, 2L/3, 2L/4, ... is a permissible photon wavelength, there will be roughly one photon present inside the box.  This photon has an energy E_k = \hbar c k, where c is the speed of light.  One can estimate the typical magnitude of the electric field the photon creates, |\vec{\mathcal{E}_k}|, by remembering that |\vec{\mathcal{E}_k}|^2 gives the energy density of an electric field.  Since the photon fills the whole box, this means that |\vec{\mathcal{E}_k}|^2 \times L^3 \sim E_k, or |\mathcal{E}_k|^2 \sim \hbar c k/L^3.  This electric field oscillates with a frequency ck.

So now the stage is set.  The hydrogen atom sits inside a “large box” (which we’ll do away with later), and inside the box is a huge mess of random electric fields that can push on the electron.  Now we should figure out how all this pushing affects the hydrogen energy.

[By the way, you may be bothered by the fact that all these randomly-arising photons seem to endow the interior of the box with an infinite amount of energy.  If that is the case, then there’s nothing much I can say except that you and I are in the same club, with only speculation to assuage our uneasiness.]


How the vacuum pushes on hydrogen

The essence of the Lamb shift is that the random electric fields push on the electron, and in doing so they move it slightly further away from the proton, on average, than it would otherwise be.  Another way to say this is that the distribution of the electron’s position gets blurred over some particular (small) length scale \delta r.  In particular, the sharp peak in the distribution near the center of the atom should get slightly rounded, like this:

[Figure: smeared_hydrogen, the electron distribution with its central peak rounded off.  The “smearing length” δr is greatly exaggerated in this picture.]

The resulting shift of the electron distribution away from the center weakens the binding between the electron and the proton.  To estimate the amount of energy that the electron loses, you can think that in those moments when the electron happens to be within a distance \delta r of the nucleus, it frequently finds itself getting pushed outward by an amount \delta r.  As a result of this outward push it loses an energy e^2/\delta r.  This means that the Lamb shift energy

\Delta E \sim (e^2/\delta r) \times [\text{fraction of time the electron spends within } \delta r \text{ of the nucleus}]

\Delta E \sim (e^2/\delta r) \times (\delta r)^3/a^3

\Delta E \sim e^2 (\delta r)^2/a^3

Now all that’s left is to estimate \delta r.


The trick here is to realize that all of those photons within the metal “box” are independently shaking the electron, and each push is in a random direction.  So if some photon with wave vector k_1 produces, by itself, a displacement of the electron (\delta r)_{k_1}, then the total displacement (\delta r) satisfies

(\delta r)^2 = (\delta r)^2_{k_1} + (\delta r)^2_{k_2} + (\delta r)^2_{k_3} ...

[This is a general rule of statistics: independently-contributing things add together in quadrature.]

[Figure: a photon is essentially just an electric field that keeps reversing sign.]

In our case, each (\delta r)_k comes from the influence of a photon, which has electric field \vec{\mathcal{E}_k}.  The simplest way to estimate (\delta r)_k is to imagine that the electric field \vec{\mathcal{E}_k} pushes on the electron for a time \tau \sim 1/kc (the period of its oscillation), after which it reverses its direction (as pictured above).  During that time, the acceleration of the electron is something like |\vec{A}| \sim |\vec{F}|/m, where \vec{F} = e\vec{\mathcal{E}_k} is the force of the electric field pushing on the electron, and its net displacement (\delta r)_k \sim |\vec{A}| \tau^2.  This means

(\delta r)_k \sim e |\vec{\mathcal{E}_k}|/mk^2 c^2.


Now we should just add up all the (\delta r)^2_ks.  Since there is a very large number of photons with a nearly continuous range of energies (due to the very large size of the confining “box”), we can replace a sum over all k‘s with an integral: \sum_k (\delta r)^2_k\rightarrow L^3 \int d^3 k (\delta r)^2_k.  Inserting the expressions for (\delta r)_k and |\vec{\mathcal{E}_k}|^2 gives the following:

(\delta r)^2 \sim (e^2 \hbar/m^2 c^3) \int (1/k) dk.

You can notice that the size L of the box drops out of the expression, which is good because the box was completely fictitious anyway.
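
If you want to double-check that cancellation, here is a sympy sketch of the sum-to-integral step, keeping the O(1) factors only loosely (the 4\pi from d^3k = 4\pi k^2 dk is the only prefactor retained):

```python
import sympy as sp

e, hbar, m, c, k, L = sp.symbols("e hbar m c k L", positive=True)

E_k_sq = hbar * c * k / L**3                    # photon intensity |E_k|^2
dr_k_sq = e**2 * E_k_sq / (m * k**2 * c**2)**2  # single-mode (delta r)_k^2

# Sum over modes -> L^3 * int d^3k, with d^3k = 4*pi*k^2 dk
integrand = sp.simplify(L**3 * 4 * sp.pi * k**2 * dr_k_sq)
print(integrand)  # 4*pi*e**2*hbar/(c**3*k*m**2): the box size L is gone
```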

The only remaining thing to figure out is what to do with the integral \int(1/k) dk, which is technically equal to infinity.  In physics, however, when you get an infinite answer, it means that you forgot to stop counting things that shouldn’t actually count.  In this case, we should stop counting photons whose wavelength is either too short or too long to affect the electron.  On the long wavelength side, we should stop counting photons when their wavelength gets bigger than the size of the atom, a.  Such long wavelength photons make an electric field that oscillates so slowly that by the time it has changed sign, the electron has likely moved to a completely different part of the atom, and the net effect is zero.  On the short wavelength side, we should stop counting photons when their wavelength gets shorter than the Compton wavelength.  Such photons are super-energetic, with energy larger than mc^2, which means their energy is so high that they don’t push around electrons anymore: they spontaneously create new electrons from the vacuum.  [In essence, it doesn’t make sense to talk about electron position at length scales shorter than the Compton wavelength.]

Using these two wavelengths as the upper and lower cutoffs of the integral gives \int (1/k) dk = ln(1/\alpha).  Here, \alpha = e^2/\hbar c \approx 1/137 is the much-celebrated fine structure constant.

[It is perhaps worth pausing to note, as so many before me have done, what a strange and interesting object the fine structure constant is.  \alpha contains only the most fundamental constants of electricity, quantum mechanics, and relativity, (e, \hbar, and c), and they combine to produce exactly one dimensionless number.  How strange that this number should be as large as 137.  As a general rule, fundamental numbers produced by the universe are usually close to 1.]


Now we have all the pieces necessary to assemble a result for the Lamb shift.  And actually, if you like the fine structure constant, then you’ll love the final answer.  It looks like this:

\Delta E/R \sim \alpha^3 \ln(1/\alpha).
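
Putting numbers in (order-of-magnitude only, since every O(1) prefactor was dropped along the way):

```python
import math

alpha = 1 / 137.036   # fine structure constant
rydberg_eV = 13.6     # hydrogen binding energy

ratio = alpha**3 * math.log(1 / alpha)
print(f"Delta E / R ~ {ratio:.1e}")  # ~1.9e-6, i.e. roughly 1/500,000
print(f"Delta E ~ {ratio * rydberg_eV:.1e} eV")
# ~3e-5 eV at this schematic level; the dropped prefactors bring the
# measured 2S-2P splitting down to the ~4e-6 eV quoted above.
```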


At the beginning of this post I mentioned that the Lamb shift is very small — only about 1/500,000 of the energy of hydrogen (the Rydberg energy, R).   Now, if you want to know why the Lamb shift is so small, the best answer I have is that the Lamb shift is proportional to \alpha^3, and in our universe the fine structure constant \alpha just happens to be a small number.

It’s interesting to note that if we somehow lived in a universe where \alpha was not so small, then the Lamb shift would get big, and those random fluctuations of the quantum field would get large enough to completely knock the electron off the nucleus.  This would be a universe without atoms, and consequently, without you and me.


Footnotes

[Figure: probability clouds for different hydrogen states.  Different states along the same row are supposed to have the same energy, but the Lamb shift splits the S states from the P and D states.]

1)  You might notice that the Lamb shift appeared only because the electron probability cloud had a peak at the center of the atom.  If it didn’t have a peak — say, if it went to zero near the center — then there would be no Lamb shift.  This is in fact exactly how the Lamb shift was discovered.  Certain excited states of the hydrogen atom have a peak near the center and others go to zero.  So while normal quantum mechanics predicts that, say, the 2S and 2P states (pictured above) have the same energy, in fact the Lamb shift makes a small difference between them.  This difference can be observed as a faint microwave signal from interstellar hydrogen.

2)  It’s probably worth noting that if you increased the fine structure constant \alpha, you would have run into bigger problems long before you started fussing about the Lamb shift.

3) Also, while \alpha is a small number in our normal world, it’s not hard to imagine synthetic worlds (the interior of certain materials) where \alpha is not a small number.  For example, graphene is a material inside of which there is an effective speed of light that is 300 times smaller than c.  This makes \alpha very close to 1, and the sort of effects discussed in this post get very real.  This is part of why there has been so much ado about graphene among physicists: it’s quite an exciting (and frustrating) playground for people like me.

UPDATE:

4)  I just came across this video of Freeman Dyson (one of my personal favorite physicists) explaining the Lamb shift and some of the history behind it.  His conceptual summary of it starts at 2:43.

FURTHER UPDATE:

5) A reader points out to me that the lovely qualitative argument presented here was first put forward by American physicist Ted Welton, in 1948.

Sadly, Welton doesn’t have a Wikipedia article in English (although he does in German).  Would anyone like to create one for him?

Spare me the math

April 24, 2013

Part of what scuttled my blogging during the past couple years was the fact that every post took me such a long time to write.  Crafting a careful, conceptually clear, and readable blog post takes (me) a lot of work: I would estimate that the average physics-related post took me 12 (non-consecutive) hours to write.  As a consequence, my blogging became infrequent, and I was often too daunted to try and delve into some interesting scientific topic.

In this second time around, I’m going to try and fix that a little bit.  I’ve decided to confront my fear of writing sloppy or incomplete posts by, in short, writing sloppy and incomplete posts.  I’m hoping that the increased breadth of material will compensate, to at least some of my readers, for the potentially decreased quality.

I’ve also resolved to have less trepidation about delving into more “hard core” topics from condensed matter physics, which is my own subfield.  These topics aren’t conceptually more difficult than, say, the second law of thermodynamics or the double-slit experiment; in fact they’re usually simpler.  It’s just that they are generally discussed only by a more professional audience, and so they can seem daunting to someone who is uninitiated.  But it seems to me that for many of these topics there is a real dearth of qualitative discussion.  And a lot of them are really pretty cool.

In light of these two resolutions, I’ve decided to institute a new series of posts here at G&L, called “Spare me the math” (SMTM).  The idea of SMTM is to look at some topic from “advanced” physics in a brief and very conceptual way, with the goal of bringing some “feel” for a physical phenomenon that can be hard to get from typical textbooks or Wikipedia articles.  Where possible, I will try to include just enough equations to codify the most important physical relations at work.  If I do my job well, then there should be enough information to put together a schematic derivation of the primary results, at least to within numerical factors.  As usual, I use math as a tool for remembering and reasoning with basic dependencies, and not as a series of exact statements.

So Spare Me The Math will not actually be “math-free,” in the sense that you can still expect to see very basic equations.  But it should be “hard math-free,” or at least “formality-free.”  Or in other words, the focus will be on big picture ideas, with a little algebra (and maybe a little calculus) to piece them together and help you keep track of them.

It has been my experience, by the way, that such simple derivations are much more useful for scientific thinking than more formal ones; so it’s unfortunate that textbooks (and academic papers) are almost always dominated by the latter.  I am always pleasantly surprised by how much easier it is to talk science one-on-one with someone than it is to read their papers.  That’s because in a one-on-one conversation a scientist will talk to you in the language that s/he uses to think about the problem, whereas when writing a paper everyone gets paranoid that they’ll say something incorrect and be called out for it.  But as my undergraduate advisor used to say, “what’s a factor of \pi between friends?”


I should, of course, reiterate the caveat that my explanations may be unsatisfying, incomplete, or just plain wrong.  They represent only the best way I have to think about the problem.  If something sounds rotten to you, please say so in the comments.

The Boston Marathon, and Gratitude

April 21, 2013

As it happens, I was fortunate enough to be able to run in the 2013 Boston Marathon.

A lot has been written (and is still being written) about the marathon and its aftermath, and in many ways there is no point in me adding my own commentary on top of all that.  But when I think about the day of April 15, 2013, I have very particular way of understanding and feeling about it, which I feel a need to write down.

[As Eugene Wigner (one of my personal favorite figures in 20th century physics) said, “I have a weakness for reflection, and I want to leave some small record of the signal events of my life.”]


My strongest memory of the Boston Marathon is of being overwhelmed by feelings of gratitude.

At 10:00am on April 15, 2013, right after the starting gun fired for the Boston Marathon, I was nervous.  I had trained quite hard (if I may say so) to get to Boston, and now that I was lined up in that starting corral I was a bit overwhelmed by its undeniable “big league” feel.  If you ever want to meet an impressive, intimidatingly motivated group of people, you should ride the bus to the starting line of the Boston Marathon.

My nervousness lasted as we spurted past the starting line (in that nervous, stuttering start-stop motion that runners who have lined up in a corral understand) and all throughout the tense, overly-fast first five miles of the race.  But somewhere around mile 6, something surprising happened.

The first thing I remember was having no awareness of my own feet.  During a gentle uphill, where I found myself in the middle of a fairly dense pack, I remember looking down and being very aware of all the feet of the people around me going step-step-step against the ground.  And it suddenly seemed strange to me, all this step-step-stepping, because I was completely unaware of the process of lifting and dropping my own feet.  That is, I was running without a conscious awareness of the process of running.

Of course, there’s nothing unusual about this; after all, most of us have been walking around without putting any conscious thought into it since we were toddlers.  But for me, in that moment, to think that I was running (…fast, …in the Boston Marathon!) without having to actually think about the process of running brought an incredibly beautiful and inspiring feeling.  I can’t quite explain it, other than to say that it felt like some kind of magic, like I was being propelled at breakneck speed down the center of a beautiful road on some kind of magic carpet that floated just a few feet off the ground.  Or like I was some kind of animal, a pronghorn antelope maybe, that lives to run without understanding what it’s doing.

And people were cheering for me.  They lined the road on either side, holding signs and noisemaking devices, looking happy and excited for me.

And suddenly I felt an elated and overwhelming gratitude.  Gratitude for my own life, for this moment of being a running animal, and for all the people who, inexplicably, lined the road just to give me the gift of feeling like magic.  I spent the next few miles with a huge smile on my face, feeling happy to the point of laughter, and high-fiving every child who held their hand out (and there were a lot!).


There were plenty of other good moments in the marathon, but I will remember that moment of gratitude most strongly.


And now I wish that I could stop writing here.  I wish I could say “I loved the 2013 Boston Marathon” and be done.  I wish I could refuse to acknowledge that someone else altered or tainted the way we feel about that day.  But I’m afraid there is a little more that needs to be said.

In the immediate aftermath of the bombings, a lot of people (myself included) responded with very real and very personal anger.  For many of my fellow participants, this anger was expressed through sentiments like “you [the perpetrators] don’t understand our resolve, whose demonstration was our very purpose in gathering.  We will continue to run and we will refuse to be cowed.”

My feelings about the bombings were perhaps slightly different.  I saw them not as an attack on runners or on the sport of running. To me, they were something far more heinous: they felt like an attack on those people who had gathered to celebrate and cheer for the accomplishments of a stranger.  All I could think about were those people who had gathered along the road to cheer for me, a random stranger, as I went by: those small children, those college students, those grandmothers and grandfathers who had taken delight in my running, and who had allowed me to feel like magic.

Probably many people felt this way.  I know that many people shared the same boiling anger that came with it.


Immediately after crossing the finish line, I had imagined that 2013 would be my last and only time running the Boston Marathon.  But now I am convinced that I need to return next year.  Just to show my own gratitude, and to contribute to the city’s demonstration that we will not be scared away from the things and people we love.

My plan is to run dressed in an American Revolution costume (it is Patriots’ Day, after all), complete with tri-cornered hat.

Thank you, Boston.
