
What if I were 1% charged?

May 22, 2013

In case you hadn’t heard, the universe is governed by four fundamental forces.  But when it comes to understanding nature at almost any level larger than a nucleus and smaller than a planet, only one of them really matters: the Coulomb interaction.

The Coulomb interaction — the pushing and pulling force between electric charges — is almost incomprehensibly strong.  One common way to express this strength is by considering the forces that exist between two electrons.  Two electrons in an otherwise empty space will feel pulled together by their mutual gravitational attraction and pushed apart by the Coulomb repulsion.  The Coulomb repulsion, however, is stronger than gravity by 4,000,000,000,000,000,000,000,000,000,000,000,000,000,000 times.  (For two protons, this ratio is a more pedestrian 10^{36} times.)
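If you want to check these ratios yourself, the arithmetic is short, because the distance between the two particles cancels out of the ratio.  Here is a minimal sketch in Python, using textbook values for the constants:

```python
# Ratio of Coulomb repulsion to gravitational attraction for two electrons
# and for two protons, using textbook SI values for the constants.
k = 8.99e9        # Coulomb constant, N m^2 / C^2
G = 6.674e-11     # gravitational constant, N m^2 / kg^2
e = 1.602e-19     # elementary charge, C
m_e = 9.109e-31   # electron mass, kg
m_p = 1.673e-27   # proton mass, kg

# The separation cancels: F_Coulomb / F_gravity = k e^2 / (G m^2)
print(f"electrons: {k * e**2 / (G * m_e**2):.1e}")   # about 4e42
print(f"protons:   {k * e**2 / (G * m_p**2):.1e}")   # about 1e36
```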

When I was a TA, I enjoyed demonstrating this point in the following way.  Take a balloon, and rub it against the top of your head until your hair starts to stand on end.  Then stick the balloon to the ceiling, where it stays without falling due to static electricity.  Now consider the forces acting on the balloon.  Pulling up on the balloon are electric forces between the relatively few electrons I just rubbed off from my hair and the opposite charge that they induce in the ceiling.  Pulling down on the balloon are gravitational forces coming from the pull of the entire mass of the Earth.  Apparently the electric force created by those few (something like 10^{10}) electrons is more than enough to counterbalance the gravitational pull coming from every proton, neutron and electron in the planet below it (something like 10^{51}).
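You can put rough numbers on this balance with a few more lines of arithmetic.  In the sketch below, the electron count and the Earth estimate come from the paragraph above, but the balloon mass and the distance between the rubbed-off charge and the ceiling are guesses of mine, so treat the output as an order-of-magnitude check only:

```python
# Rough balance of forces on the balloon.  The electron count and the Earth
# estimate are from the text above; the balloon mass and the charge-to-ceiling
# distance are guesses, so this is only an order-of-magnitude check.
k = 8.99e9            # Coulomb constant, N m^2 / C^2
e = 1.602e-19         # elementary charge, C
g = 9.8               # gravitational acceleration, m/s^2

n_electrons = 1e10    # charge rubbed onto the balloon (from the text)
d = 5e-4              # guessed distance from the charge to the ceiling, ~0.5 mm
m_balloon = 0.003     # guessed balloon mass, ~3 grams

q = n_electrons * e
pull_up = k * q**2 / (2 * d)**2     # attraction to the induced "image" charge
pull_down = m_balloon * g
print(f"electric pull up: ~{pull_up:.3f} N, gravity down: ~{pull_down:.3f} N")

# Nucleons in the Earth: roughly its mass divided by the proton mass
print(f"particles in the Earth: ~{5.97e24 / 1.67e-27:.0e}")   # ~4e51, i.e. ~10^51
```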

[Figure: balloon_balancing]

So electric forces are strong.  Why is it, then, that we can go about our daily lives without worrying about them buffeting us back and forth?

The short answer is that they do buffet us back and forth.  Pretty much any time you feel yourself being pushed or pulled by something (say, the ground beneath your feet or the muscles tied to your skeleton), the electric repulsion between microscopic charges is ultimately to blame.

But a better answer is that the very strength of electric forces is responsible for their seeming quietude.  Electric forces are so tremendously strong that nature will not abide having a large amount of electric charge collect in one place.  And so electric forces, at the scale of people-sized objects, are largely neutralized.

But what if they weren’t?

\hspace{1mm}

When I was a TA I got to walk my students through the following morbid little problem, which helped them see why it is that electric forces don’t really appear on the human scale.  Perhaps you will enjoy it.  Like most good physics problems, it is thoroughly contrived and, for a new student of physics, at least, its message is completely memorable.

The problem goes like this:

What would happen if your body suddenly lost 1% of its electrons?

\hspace{1mm}

Now, 1% may not sound like a big deal.  After all, there is almost no reason for excitement or concern when you lose 1% of your total mass.  But losing 1% of your electrons, without at the same time losing an equal number of protons, means that suddenly, within your body, there is an enormous amount of positive, unneutralized electric charge.  And nature will not abide its strongest force being so unrequited.

I’ll use my own body as an example.  My body has a mass of about 80 kg, which means that it contains something like 2 \times 10^{28} protons, and an almost exactly equal number of electrons.  Losing 1% of those electrons would mean that my body acquires an electric charge of 2 \times 10^{26} electron charges, or about 4 \times 10^9 Coulombs.

Now, 4 billion Coulombs is a silly amount of charge.  It is about 300 million times more than what gets discharged by a lightning bolt, for example.  So, in some sense, losing 1% of your electrons would be like getting hit by 300 million lightning bolts at the same time.

Things get even more dramatic if you start to think about the forces involved.

Suppose, for example, that in their rush to escape my body, those 4 billion Coulombs split in half and flowed to opposite extremities.  Say, each hand suddenly acquired a charge of 2 billion Coulombs.  The force between those two hands (spread apart, about 6 feet) would be 10^{27} Newtons, which translates to about 10^{26} pounds.  Needless to say, my body would not retain its structural integrity.

Of course, in addition to the forces pushing the extremities of my body apart, there would also be a force similar in magnitude pulling me toward the ground.  You may recall that when an electric charge is next to a grounded surface (like, say, the ground) it induces some opposite charge on that surface in a way that acts like an “image charge” of opposite sign.  In my case, the earth would accumulate a huge amount of negative charge around my feet so as to create a force like that of an “image me.”

[Figure: image_me]

Because of my 4 billion Coulombs, the force between myself and my “image self” would be something like 10^{23} tons.  To give that some perspective, consider that something with the same mass as the planet earth weighs only about 10^{21} tons.  So the force pulling me toward the earth would be something like the force of a collision between the earth and the planet Saturn.

But my hypercharged self would not only crush the earth.  It would also break open the vacuum itself.  At the instant of losing that 1% of electrons, the electric potential at the edge of my body would be about 40 exavolts.  This is much larger than the voltage required to rip apart the vacuum and create electron-positron pairs.  So my erstwhile body would be the locus of a vacuum instability, in which electrons were sucked in while positrons were blasted out.
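For the curious, here is a rough check of that potential in Python.  The 4 billion Coulombs is the number quoted above; treating my body as a point charge seen from about a meter away is my own simplification:

```python
# Rough check of the electric potential at the edge of a body carrying the
# 4 billion Coulombs quoted above, treated as a point charge seen from ~1 m.
k = 8.99e9      # Coulomb constant, N m^2 / C^2
Q = 4e9         # net charge from the text, C
r = 1.0         # assumed "edge of the body" distance, m

V = k * Q / r
print(f"potential: ~{V:.1e} V")   # about 3.6e19 V, i.e. tens of exavolts

# For comparison, creating an electron-positron pair costs about 2 x 511 keV,
# so the voltage scale relevant for "ripping apart the vacuum" is only ~1e6 V.
print(f"pair-creation scale: ~{2 * 511e3:.1e} V")
```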

In short, if I lost 1% of my electrons, I would not be a person anymore.  I would be a bomb.  A Coulomb bomb, if you will, with an energy equivalent to that of ten trillion (modern) atomic bombs.  Which would surely destroy the planet.  All by removing just 1 out of every 100 of my electrons.

\hspace{1mm}

The moral of this story, of course, is that nothing of observable size will ever get 1% charged.  The Coulomb interaction cannot be thus toyed with.  All of chemistry and biology function by the interactions between just a few charges at a time, and their effects are plenty strong as they are.

\hspace{1mm}

\hspace{1mm}

Footnote

As a PhD student, I worked on all sorts of problems that involved the Coulomb interaction, and occasionally my proposed solution would be very wrong.  The worst kind of wrong was the one that made my advisor remark “What you just created is a Coulomb bomb,” which meant that I had proposed something that wasn’t neutral on the large scale.

It’s one thing to feel like you just solved a problem incorrectly.  It’s another to feel like your proposed solution would destroy the planet.

The Fibonacci sequence, under duress

May 14, 2013

Physics and math have a complicated relationship, and I mean that in almost exactly the same way that your Facebook friends mean it.

[Figure: its_complicated]

Allow me to elaborate.

One very legitimate way to view mathematics is as the exploration of a pristine and entirely non-physical universe of numbers and relationships.  In this view, which is largely the view of the academic mathematician, the universe of math exists in parallel to our own, real, universe.  Each mathematical theorem is a discovery of some feature of that universe, and its correctness does not depend in any way on the physical features of reality.  [In the (paraphrased) words of G.H. Hardy, it is true that 2 + 3 = 5, regardless of whether you think "2" stands for "two apples" or "two pennies", or anything else.]   In other words, pure mathematics exists completely independently of the human brain and its interests, and mathematicians are merely its explorers.

Physics, on the other hand, is a much more blue-collar pursuit.  The goal of physics is only to describe past observation so that we can predict the outcome of future observations.  Of course, such predictions can bring a tremendous amount of practical power, and they provide the foundation for nearly all technological innovation.  Physics can also be tremendously interesting, and even aesthetically pleasing (at least to suitably eccentric people like myself).  Still, by design physics makes no claim about absolute or human-independent truth, and indeed, the idea of truth outside of observable reality is fundamentally abhorrent to the discipline of physics.

Given this difference in philosophy, it may seem odd that physics has been so hopelessly entangled with mathematics for hundreds of years.  The reason for this extended liaison can be seen as a consequence of the remarkable parallels that keep emerging between our own real universe and the mathematical universe.  The discoveries of mathematicians keep proving to be useful, for no particularly apparent reason, in creating descriptions of the real universe, and so we continue to exploit them.  Still, to a physicist, there is nothing sacred about the use of mathematics.  Math is a tool that is useful only insomuch as it can be used as a highly-accurate metaphor for physical reality.  Math deals only with exact statements about a “fictitious” universe.  But physics must make approximate statements about a “real” universe.  If getting a useful descriptive/predictive statement requires abusing the purity and exactitude of mathematics along the way, then so be it.

In short, physicists view math in much the same way that politicians view philosophy.  You use it earnestly when you can, and you twist it to suit your own purposes when you can’t.

Part of becoming a physicist is learning to get comfortable with this ethos of exploitation, to one degree or another.  One has to get “familiar” with abuses of mathematics, and develop “intuition” as to how far it can be stretched before it yields, under duress, an answer that is wrong, or worse, not even wrong.

\hspace{1mm}

Lately I’ve been on a streak of talking about examples where “intuitive” manipulation of mathematics can lead to answers while straightforward calculation is difficult (namely, integrating sin(x) to infinity and deriving the Pythagorean theorem).  In this post I thought I would share one more of my favorite abuses of mathematics.  This is a derivation of the formula for the Fibonacci sequence.

[Figure: Fibonacci pineapples]

Calculating the Fibonacci sequence.

[I apologize if what follows is a bit stream-of-consciousness-y, but I thought it might help to illustrate the sort of intuitive line of thinking that one (I, at least) would actually follow to get the answer.]

The Fibonacci sequence, in case you have never encountered it, is the sequence of numbers that results from writing first 0 and 1, and then adding the previous two numbers to get the next one in the sequence.  The resulting sequence goes on like this:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, ...

Famously, as you go to high numbers in the sequence, the ratio of two successive numbers approaches the golden ratio

\varphi = (1 + \sqrt{5})/2 \approx 1.618.

What is perhaps less well-known is that you don’t have to count through the sequence one number at a time in order to figure it out.  There is a simple formula, f(n), for the sequence, which can tell you any term that you want to know.  For example, I can say without doing any tedious addition that the 91st term of the Fibonacci sequence is 4,660,046,610,375,530,309 (about 4 quintillion).

If you want to derive this formula f(n) the way a physicist would, you should start with the following two steps:

1) Write down the exact relation that defines the sequence:  f(n) = f(n-1) + f(n-2).

2) Squint at it until it starts to look like something you already know how to deal with.

In my own personal case, this “squinting” involves thinking about what happens when n gets really large.  Clearly, at large n the sequence f(n) grows very quickly; by n = 91, f(n) is already at 4 quintillion!  Usually, when something grows that quickly, it means there is some kind of exponential dependence.  Exponential dependencies arise when your rate of growth is proportional to your size (just as logarithmic dependencies arise when your rate of growth is inversely proportional to how long you have been growing).  So now there is a lead to follow: is the rate of growth of f(n) in fact proportional to its own value?

In fact, it is, and the simple way to see this is by first shifting the index in the definition of the sequence above (replace n with n+1 everywhere) and then rearranging, so that you have

f(n) = f(n+1) - f(n-1).

Now, if you really consider n to be large, then you can think of n \pm 1 as n \pm \delta n, where \delta n is something small (compared to n).  Then the right-hand side of the equation above really looks like a derivative: f(n+1) - f(n-1) = [f(n+\delta n) - f(n - \delta n)]/(\delta n).  This is all the evidence that you need to confirm your suspicion that f(n) is indeed proportional to its own derivative, at least at large n, and so it should be exponential.

Now you can make an educated guess for f(n): it should be something exponential, like f(n) = A e^{k n}, where A and k are some unknown constants.  This means f(n+1) = A e^{k} e^{k n} and f(n-1) = A e^{-k} e^{k n}.  Put these into the equation above, and what you’ll find is

1 = e^{k} - 1/e^{k}.

You can solve for k, but it’s actually more interesting (and easy) to solve directly for e^k.  Multiplying through by e^k turns the relation into a quadratic, (e^k)^2 - e^k - 1 = 0, and the quadratic formula then gives two solutions:

e^k = (1 \pm \sqrt{5})/2.

The first one of those two solutions (the plus sign), is the golden ratio, \varphi!  I hope this gives you another feeling of being on the right track.

The fact that there are two solutions for e^k — let’s call them \varphi = (1 + \sqrt{5})/2 \approx 1.618 and \psi = (1 - \sqrt{5})/2 \approx -0.618 — means that there are two different kinds of solutions for the governing equation f(n) = f(n+1) - f(n-1).  Namely, these solutions are A \varphi^n and B \psi^n.  Any combination of these two satisfies the same “Fibonacci relation”, so you can write

f(n) = A \varphi^n + B \psi^n.

Now all that’s left is to figure out the values of A and B by applying the conditions f(0) = 0 and f(1) = 1.  The first condition gives B = -A, and the second then gives A(\varphi - \psi) = A \sqrt{5} = 1, so that A = -B = 1/\sqrt{5}.  This gives the final result:

f(n) = (\varphi^n - \psi^n)/\sqrt{5}.
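If you don’t want to take my algebra on faith, here is a minimal Python sketch (the function names are my own) comparing this closed form against direct iteration of the defining relation:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2   # golden ratio, the "+" root for e^k
psi = (1 - sqrt(5)) / 2   # the "-" root

def fib_closed(n):
    """The closed form derived above (floating point, so best at modest n)."""
    return (phi**n - psi**n) / sqrt(5)

def fib_iter(n):
    """Direct iteration of f(n) = f(n-1) + f(n-2), starting from f(0) = 0, f(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(round(fib_closed(10)), fib_iter(10))   # both give 55
print(fib_iter(91))                          # 4660046610375530309, as quoted above

# The complex value at non-integer n, mentioned in the footnotes below:
print((phi**1.5 - complex(psi)**1.5) / sqrt(5))   # about (0.92 + 0.22j)

# Order of magnitude of the 1776th term, quoted just below:
digits = str(fib_iter(1776))
print(f"{digits[0]}.{digits[1]}e{len(digits) - 1}")   # about 6.5e370
```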

\hspace{1mm}

So there you have it.  Now you can impress your friends by telling them that the 1776th Fibonacci number is approximately 6.5 \times 10^{370}.

You can also see why the ratio of two successive Fibonacci numbers at large n gives you the golden ratio: since \psi is smaller than 1, at large n its contribution gets completely eliminated from the sequence, and all you’re left with is f(n) \sim \varphi^n, so that f(n+1)/f(n) \sim \varphi.

\hspace{1mm}

\hspace{1mm}

Footnotes

1.  You can make your own “pseudo-Fibonacci” sequence by starting with any two numbers of your choosing, rather than 0 and 1, and then following the rule of adding the previous two to get the next number.  The same formula as above will hold, except that the coefficients A and B will be different.  And the ratio of two subsequent pseudo-Fibonacci numbers will still approach \varphi at large n (regardless of whether you choose to start your sequence with positive, negative, or even imaginary numbers).

2.  It is perhaps funny to notice that since \psi is negative, the quantity \psi^n only gives a real answer when n is an integer.  That means that if you think of f(x) as a continuous function, then its value lives in the complex plane and only crosses the real axis when x is an integer.  Like this.

UPDATE:

3.  The fact that f(n) is not real at non-integer n means that if someone asks you “what’s the 1.5th term of the Fibonacci sequence?”, you can answer “0.92 + 0.22 i”.

4. You may have noticed that when I wrote f(n) = [f(n+\delta n) - f(n - \delta n)]/(\delta n), the right-hand side looks like twice the derivative.  This means that at large n you should have f(n) \approx 2 f'(n).  In fact, if you work it out, the exact answer is f(n) = f'(n)/\ln(\varphi) \approx 2.07 f'(n).

5.  If I were only slightly less mature, the second sentence of this post would have been:

Namely, Physics uses Math for \int e^x.

The simplest derivation of the Pythagorean theorem

May 8, 2013

Sometimes I am amazed by the permanence of mathematical discovery.  Math, it seems to me, is quite unique among the creative intellectual pursuits (science, art, engineering) for the seemingly unlimited lifetime of its innovations.

For example, Aristotle was a brilliant natural philosopher, as much a genius as just about any modern scientist, and he advanced (what would become) physics tremendously during the 4th century BC.  But by now his theory of the five elements is completely unnecessary for anyone to learn.  While it produced an important advancement in our thinking, it has been replaced by more correct physical theories.  Thus, Aristotle suffered the same fate that meets seemingly every scientist or inventor eventually: further discoveries made him obsolete.

Pythagoras, on the other hand, who lived roughly 200 years before Aristotle, is someone whose major contribution to mathematics is still used every day.  I literally could not do my job without the Pythagorean theorem, and neither could just about any scientist or engineer.  Unlike nearly all other kinds of innovations, it has very much not been replaced.

After 2,500 years, a^2 + b^2 is still equal to c^2.

What’s important to notice is not just that Pythagoras’s result is still important, but that the type of reasoning that leads to his result is still important.  Put simply, a good scientist or engineer needs to be capable of understanding and reproducing a derivation of the 2500-year-old Pythagorean theorem, not just because the theorem is important, but because that level of logical thinking is necessary for his/her job.

So in this post I think it’s worth sharing my own favorite derivation of the Pythagorean theorem.  This derivation is the simplest one I know of, and it doesn’t require any tremendous geometric cleverness (like a tangram puzzle) or complicated diagrams.  Instead, it relies only on a very basic use of scaling arguments.

\hspace{1mm}

\hspace{1mm}

Scaling arguments are among the simplest and most powerful tools in theoretical physics.  They allow you to reach remarkably concrete conclusions about a problem even when you don’t know essentially any details about the system in question.  The key idea is to imagine scaling the system up or down in size, and then saying something about how it should change as you do so.

For example, suppose you don’t know anything about triangles except that they have an area.  Since area is measured in units of length squared, you can immediately say that if you take some triangle and make its length X times bigger, then its area must get X^2 times larger.

In other words, if the following triangle has area A

[Figure: small_triangle]

then the triangle below, which is the same as the previous one only magnified two times, must have an area 2^2 A.

[Figure: triangle]

Meanwhile, all the side lengths of the bigger triangle are exactly two times longer than those of the smaller one.

What all this means is that, for a given triangle, the area is proportional to the square of any one of its side lengths.  I know this because as I make the triangle X times bigger, the side lengths all get X times longer, and the area gets X^2 times bigger.  So if I want I can write

A = (\text{something}) \times (\text{hypotenuse length})^2.

The “something” in that equation depends on the angles in the triangle, but for now let’s assume that I am more or less completely ignorant about triangles and I can’t tell you what it is.  Luckily enough for ignorant me, it turns out I don’t need to know what the “something” is in order to prove the Pythagorean theorem.

The key trick is to divide the large triangle into two smaller and completely equivalent triangles.  That is, take this triangle:

[Figure: triangle-with_angles]

and draw one line (an altitude through the right angle) so that it gets divided into two smaller triangles, like this:

[Figure: triangle_divided]

You can tell that the two newly-created triangles are just scaled-down versions of the original one, because they have all the same angles.  This means that the original triangle can be written as the sum of two smaller triangles that have exactly the same shape as the original.  Like this:

[Figure: triangle_equality]

Finally, to prove the Pythagorean theorem, we just have to invoke the one equation in this post, A = (\text{something}) \times (\text{hypotenuse length})^2 for each triangle.  This gives:

(\text{something}) \times c^2 = (\text{something}) \times a^2 + (\text{something}) \times b^2.

Since all three triangles have the same shape, all the “something”s are also the same, which means

a^2 + b^2 = c^2.

Not bad, eh?
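If you’d like to see the argument in action with concrete numbers, here is a small Python sketch for a 3-4-5 right triangle (the specific numbers are my own choice).  The “something” really is the same for all three similar triangles, and the two sub-areas really do add up to the whole:

```python
from math import sqrt

# A concrete 3-4-5 right triangle, just to watch the scaling argument work.
a, b = 3.0, 4.0
c = sqrt(a**2 + b**2)        # hypotenuse of the big triangle

area_big = a * b / 2         # area of the big triangle

# The altitude from the right angle splits the big triangle into two smaller
# ones with the same angles; their hypotenuses are a and b, and they stand on
# pieces of the original hypotenuse of length a^2/c and b^2/c.
h = a * b / c
area_1 = (a**2 / c) * h / 2
area_2 = (b**2 / c) * h / 2

print(area_1 + area_2, area_big)   # the two pieces add up to the whole (both ~6.0)
print(area_big / c**2, area_1 / a**2, area_2 / b**2)   # the same "something" (~0.24) for all three
```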

\hspace{1mm}

I don’t know whether you found the above proof “aesthetic,” but I certainly did.  And it’s a pretty nice feeling to think that an insight had by someone more than 2,500 years ago can still feel beautiful to someone like me.  And even more remarkably, that my life (and professional career) continue to profit from it.

\hspace{1mm}

\hspace{1mm}

Footnote

I learned the proof above from Leonid Levitov.  As it happens, he presented it during a talk about atomic collapse!

How we measure our happiness

May 6, 2013

Take one moment and try to answer, for yourself, the following question:

How happy are you?

Try to rate it on a scale from 1 to 10.  I’ll wait until you’re done.

\hspace{1mm}

\hspace{1mm}

Now let’s talk about how you came up with your answer.

On the face of it, the question “how happy are you?” is both difficult and almost impossibly ill-defined.  Nonetheless, I bet that you were able to come up with a number that felt reasonably accurate for you.  This number almost certainly didn’t come from any formula or numerical weighing of different factors, but rather from an instinctive overall feeling of satisfaction with your life.

But what determines this overall feeling?  This question, it seems to me, is an important one.  Our perception of our own lives has a very real effect on our happiness.  So it’s worth trying to figure out what it is that we measure our lives against when we assess their quality.

\hspace{1mm}

One way to examine this issue is to look for a correlation between wealth and self-reported happiness.  After all, nearly all of us put a lot of effort into obtaining money, so apparently money should be a significant contributor to happiness.

And in fact, it’s fairly clear that there is a correlation between wealth and perceived happiness.  For example, recent data collected by researchers at the University of Michigan characterizes the relationship like this:

[Figure: wealth-happiness]

[As reported by The Economist's Daily Chart blog: here.]

This study looked at 13 different countries, but I should say first off that using the data to comment on the relative happiness levels of different countries is an almost entirely meaningless exercise; Steven Landsburg describes why pretty well here.  What I do think is interesting, though, is the way happiness depends on income within a given country.  For simplicity, during the remainder of this post I’ll focus on the USA.

Most people, including the authors of the study in question, will take as the primary conclusion of the above graph that more money equals more happiness, with no sign of satiation.  For me, though, what’s more interesting (and more accurate to say) is that self-reported happiness grows logarithmically with income.

Here, for example, is the same data above for the USA extrapolated to cover a wider range of income:

[Figure: happiness-income-log]

I should emphasize, in case it’s unclear, that this is a very slow growth.  For example, the difference between a $10,000/year income (in the US, this is the bottom 6%) and $100,000/year (the top 20%) is only about 1 point of “happiness.”  The far left side of the plot is a $1,000/year household income, and the right side is $10 million/year.

Here is that same curve plotted in a normal (non-logarithmic) scale [UPDATE: These are the exact same lines, just shown with a non-distorted x-axis]:

[Figure: happiness-income-linear]

[In lieu of a stern and much-needed warning about the danger of such extreme extrapolation, I'll just post this:

Nonetheless I will continue to take the apparent logarithmic dependence seriously.]

One excellent, and not terribly surprising, feature that jumps out from the data above is that every income group rates itself as happier than average (> 5).  You have to extrapolate the curve all the way down to $700/year of household income in order to arrive at a hypothetical demographic group that would consider itself less happy than average.  This, it seems to me, is a clear manifestation of the “Lake Wobegon Effect,” a psychological bias nearly everyone has toward considering themselves above average (named after the fictional town of Lake Wobegon, Minnesota, “where all the children are above average.”)  But the scale from 1-10 is arbitrary anyway, so whether the scale effectively starts from zero or from 5 doesn’t really matter.

The real question is, what does this logarithmic growth of “life satisfaction” with income imply about how we assess our happiness?

In general, logarithmic growth occurs when something is measured relative to itself.  For example, the plot above suggests that doubling someone’s income will have, on average, the same expected effect on their happiness, regardless of what the person’s salary was to start with.  That is, a “poor” person who has their annual salary increased from $10,000 to $20,000 will gain as much in happiness as a “rich” person who has their salary increased from $100,000 to $200,000  (about 0.4 points in each case).

In other words, as the wealth of a person increases, their standards for what constitutes a “better life” seem to increase proportionally.  And this is the fundamental reason why happiness increases only logarithmically with an improved standard of living.
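To make the “logarithmic” claim concrete, here is a toy version of such a relationship in Python.  The functional form and both coefficients are mine, chosen only to roughly match the numbers quoted above (about 0.4 points per doubling, and a curve that drops below 5 somewhere near $700/year); this is not the study’s actual fit:

```python
from math import log2

def happiness(income, base=6.4, points_per_doubling=0.4):
    """Toy model: a fixed number of "happiness points" per doubling of income.
    The coefficients are illustrative guesses, not the study's fit."""
    return base + points_per_doubling * log2(income / 10_000)

for income in [700, 10_000, 20_000, 100_000, 200_000]:
    print(f"${income:>9,}/yr -> {happiness(income):.1f}")
```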

\hspace{1mm}

If I were a very cynical or very idealistic person, who was inclined to interpret the world through a moral (or religious) lens, I would conclude here by making some ethical or spiritual point.  But for me, the elusively shifting standard of human happiness is something interesting rather than depressing.  It seems to me, for example, that an alien theorizing about human happiness would anticipate that, since money has a fixed purchasing power, a person should gain a constant amount of happiness from a constant amount of money.  But a real person will not be surprised to learn that this is not at all the case.  Humans, in some sense, are wired with a constant drive for accomplishment.  With each accomplishment a person gains some happiness, and some ability.  And as that person’s abilities and prior accomplishments grow, their standard for further accomplishment also grows.

This seems to be a beautiful design of evolution to keep our species alive and at the top of the food chain.  And I think it deserves to be celebrated as much as it deserves to be decried.  It is part of what it means to be human.

Most of all, our proportional measuring of happiness deserves to be recognized and to be understood, especially if we are to attempt to maximize our individual and collective well-being.

\hspace{1mm}

Footnotes

1. I personally think that our perception of the passage of time is also a logarithmic process, for similar reasons: psychologically, we weigh lengths of time against our own age.

2.  I wonder whether there is something very biologically programmed about our ability to appreciate increases only in proportion.  Our physical senses, for example, are subject to the Weber-Fechner law, which says that our sensitivity to small changes decreases in proportion to the magnitude of the sensory stimulus.

For example, you can hear a slight whisper during a silent scene in a movie theater, but in a loud rock concert you won’t be able to perceive anything quieter than a freight train.  Similar relations hold for our sense of sight (think of trying to see a faint light in a dark room versus a bright afternoon), touch, smell, and taste.

3.  It is perhaps instructive to compare the happiness-vs-income plot to the actual distribution of income in the US.  In the same scale as the plot above, that distribution looks like this:

[Figure: income_distribution]

(Data from The US Census Bureau.)

The takeaway message from combining these two plots is this:  If you live in the US, then there is a 90% chance that you belong to a demographic group whose average self-reported happiness is between 6.5 and 8.0.

Please note, by the way, that all income numbers in this post are total household income, and not the salaries of individual jobs.

4. Here is a fun fact related to the “Lake Wobegon Effect”:  93% of Americans consider themselves above-average drivers.

A plug for poetry

April 30, 2013

I have never been a great creative writer.  But there was a time in my life where I devoted a fair amount of effort to it.  And it did me a great deal of good.

My creative writing came almost entirely during college, which was as turbulent a time for me as I imagine it is for most people.  Creative writing, and poetry in particular, was valuable during those years because it gave me a channel through which I could plainly state the way I perceived and felt about certain ideas.  Once those feelings were committed to paper I could begin to understand them (or at least be aware of them), and see how they meshed or conflicted with my intellectual understanding of the same ideas.  Eventually I came to realize that there was a very specific set of things that I found simultaneously moving and irreconcilable.  And I saw that there was something very particular that I wanted to say, and which felt very important, but that I couldn’t manage to put into words.  Finally, after over a year hovering around the same ideas (even when I tried not to), I wrote the best poem of my life.  I don’t know how other people would feel about it, but to me it was exactly the thing that I wanted to see written as black lines on white paper.  And it made me quite happy.

So, in short, I have a great and very personal appreciation for the potential benefits of poetry.  Today is April 30, which means there are technically a few hours remaining in National Poetry Month (as Brian Tung has been reminding me).  So I wanted to write a short post endorsing poetry in general.

In particular, I wanted to suggest two simple exercises for the person who has an interest in trying to write a poem, but has a hard time knowing where to start.  I credit both of these exercises to my wife (who got them from a class she took with Jude Nutter), although I imagine that they are pretty common in creative writing classes.

\hspace{1mm}

Exercise 1: Thirteen Ways

Wallace Stevens’ “Thirteen Ways of Looking at a Blackbird” is a pretty famous poem, and for many it may bring back bad memories of high school English class.  But it’s also surprisingly fun to play with.  In case you’ve never seen it before, the original poem is this:

Thirteen Ways of Looking at a Blackbird

I
Among twenty snowy mountains,
The only moving thing
Was the eye of the blackbird.

II
I was of three minds,
Like a tree
In which there are three blackbirds.

III
The blackbird whirled in the autumn winds.
It was a small part of the pantomime.

IV
A man and a woman
Are one.
A man and a woman and a blackbird
Are one.

V
I do not know which to prefer,
The beauty of inflections
Or the beauty of innuendoes,
The blackbird whistling
Or just after.

VI
Icicles filled the long window
With barbaric glass.
The shadow of the blackbird
Crossed it, to and fro.
The mood
Traced in the shadow
An indecipherable cause.

VII
O thin men of Haddam,
Why do you imagine golden birds?
Do you not see how the blackbird
Walks around the feet
Of the women about you?

VIII
I know noble accents
And lucid, inescapable rhythms;
But I know, too,
That the blackbird is involved
In what I know.

IX
When the blackbird flew out of sight,
It marked the edge
Of one of many circles.

X
At the sight of blackbirds
Flying in a green light,
Even the bawds of euphony
Would cry out sharply.

XI
He rode over Connecticut
In a glass coach.
Once, a fear pierced him,
In that he mistook
The shadow of his equipage
For blackbirds.

XII
The river is moving.
The blackbird must be flying.

XIII
It was evening all afternoon.
It was snowing
And it was going to snow.
The blackbird sat
In the cedar-limbs.

–Wallace Stevens

What’s great about this poem is how it takes a relatively common object — a blackbird — and constructs thirteen short images around it, each of which makes you see the object in a new light.  Trying to emulate this approach (as literally as you want) with your own chosen object can be surprisingly gratifying.

Here, for example, is my own go at it:

Thirteen Ways of Standing at a Bridge

I
Standing on a bridge, a body becomes aware
of its tenderness for open air.
A body becomes aware of all that it cannot be.

II
She was so small.
She was distant.
She walked as if in place.
But I saw her, on a straight line,
And the bridge was my telescope.
The bridge was my ravine
to channel the love I devoutly wished
would fill the space between us.

III
A bridge is a joining of triangles.
A bridge is geometry.
A bridge is not perfect.
A bridge is the feeling of being perfect.

IV
Above: thin and light whispering, tenuous fluid.
Below: dark and crashing roar, inexorable fluid.
I am held, just here, by the bridge

V
I am writing, I am resolute. I am filling the page with my hand.
My mind is here.
My mind is not here.
I am building a thin, gray bridge across the white paper.

VI
A bridge, sometimes, is everything.
A bridge is not alive.

VII
Listen: urine will cover the concrete.
The walls will writhe with vulgarness.
The traffic will echo, always from above.
But you,
sir,
you can live here.
This bridge will keep you dry.

VIII
When the sun rose, the bridge was yellow.
The bridge was yellow
in the dark night.

IX
In those days of rain, the floodwaters
culminated.
They embarrassed us deeply.
We watched, together, a trailer house go sliding down the river:
A white bird alighted on the roof.
Together, they passed beneath the bridge.

X
Every atom is a stone.
It is held by others –
together, they must hold.
This bridge is my shoulder.
A shadow travels over it.

XI
In the evening, I sit.  I think of them.
They are silent.  I sit in fear.
My fear is a dark, black bridge that stands against an impossible sky.

XII
A child, in the world, is small.
A man, on a bridge, is a child.
A child, in the world, is small.

XIII
You did not know my courage
when I stepped into the kitchen.
You looked at me.  I stopped.
I was feeling the crack of a great, great bridge.

–Brian Skinner

If you’re curious, I wrote this poem largely thinking about the Stone Arch Bridge in Minneapolis:

Exercise 2: The Single Sentence

One of the things that a great poem can do is to surprise you by pulling you in a sudden and unexpected direction.  Here, for example, is a good one that does exactly that:

Where You Go When She Sleeps

What is it when a woman sleeps, her head bright
In your lap, in your hands, her breath easy now as though it had never been
Anything else, and you know she is dreaming, her eyelids
Jerk, but she is not troubled, it is a dream
That does not include you, but you are not troubled either,
It is too good to hold her while she sleeps, her hair falling
Richly on your hands, shining like metal, a color
That when you think of it you cannot name, as though it has just
Come into existence, dragging you into the world in the wake
Of its creation, out of whatever vacuum you were in before,
And you are like the boy you heard of once who fell
Into a silo full of oats, the silo emptying from below, oats
At the top swirling in a gold whirlpool, a bright eddy of grain, the boy
You imagine, leaning over the edge to see it, the noon sun breaking
Into the center of the circle he watches, hot on his back, burning
And he forgets his father’s warning, stands on the edge, looks down,
The grain spinning, dizzy, and when he falls his arms go out, too thin
For wings, and he hears his father’s cry somewhere, but is gone
Already, down in a gold sea, spun deep in the heart of the silo,
And when they find him, he lies still, not seeing the world
Through his body but through the deep rush of grain
Where he has gone and can never come back, though they drag him
Out, his father’s tears bright on both their faces, the farmhands
Standing by blank and amazed – you touch that unnamable
Color in her hair and you are gone into what is not fear or joy
But a whirling of sunlight and water and air full of shining dust
That takes you, a dream that is not of you but will let you
Into itself if you love enough, and will not, will never let you go.

– T. R. Hummer

One of the impressive things about this poem is that it is all one sentence, but it never feels like a run-on.  The writer pulls you fluidly from one image and one emotion to another, returning at the end to where you started.  But by the end the starting place feels very different than it did at the beginning of the poem.

This is also a fun poem to try and emulate.  That is, you can try to write a poem that tells a story with a single sentence.  When I tried this, I chose to do it from a similar first person view, but I chose for my subject someone that I thought would be more likely to think in long, unbroken sentences.

[Untitled]

When your vantage point is low to the ground, everything is edible –
the angled, cylindrical surface of a pencil
that yields to the teeth, leaving flecks in the mouth;
the cool and sharp-tasting metal of a dime,
with its muscular face and rippled edges to tickle the tongue;
the round, plastic coating of electrical wire;
the animalesque foot of a wooden table;
the hairy tendrils of a scuffed-up ball of carpet,
which feel like danger, but also like knowledge –,
in short, such a world is meant to be explored with the mouth,
each small or large thing encountered, somehow by chance,
and carefully weighed for taste and texture,
which together allow you to make a judgment
(a judgment only — not an opinion,
not an image,
not a coherent set of emotions,
and not a list of rules or guidelines)
of a world that is being presented to you
through a constant, dizzying sensorial flow,
a world that cannot be encountered in any way except
from the floor
in a state of great wonder,
in frequent pauses, where the head lolls
for trying to take in uncountable surroundings,
and through short, exultant bursts of motion,
rife with the joy of what it means
to be a living body
(the kind of joy that exists only
while “joy” is not its own word
or a category of certain things
to be sought or shunned or understood as a controlled part of being alive),
until finally, with little warning,
that feeling of being powerful and lost,
of being central to an inscrutable universe,
becomes intolerable
and you lie there, twinging,
feeling the body’s reaction to oppression,
and the depth of your own helplessness against it,
until that moment when, somehow, you are lifted
from your own terrible weight of being
and restored to comfort and familiarity
by a gentle but firm compression, a soft encirclement,
and the reassuring, nurturing warmth
of a round breast.

–Brian Skinner

My poem is, of course, in all ways a worse poem than T. R. Hummer’s; it’s uglier and less smooth, with weaker imagery.  But the point of this post is that you don’t have to be T. R. Hummer or Wallace Stevens to gain something by writing poetry.  So if you’re the kind of person who feels a vague interest in it, maybe you can use the last day of National Poetry Month as an excuse to try one of these exercises.

Of course, feel free to suggest other good exercises in the comments.

\hspace{1mm}

\hspace{1mm}

This is probably pretty immature, but at the end of this post I feel a need to wash away some of the artsy stink of pretentiousness that (unfairly) comes with talking about and writing poetry.  So feel free to enjoy this quote from Paul Dirac:

The aim of science is to make difficult things understandable in a simpler way; the aim of poetry is to state simple things in an incomprehensible way. The two are incompatible.

–P. A. M. Dirac

And this hilarious sketch from Stephen Fry and Hugh Laurie:

On the myth of the gay-straight dichotomy

April 29, 2013

Today marks the first time that an athlete in a major American sport has come out as gay.  It’s a fairly monumental moment, one that would have been unimaginable even during my (not-so-distant) childhood.  The tide of public sentiment is very firmly and dramatically changing.

As it happens, the athlete in question is Jason Collins, an NBA center who plays for the Washington Wizards.  I have long had a personal liking for Jason Collins, probably because he was an NBA center (which is my own dream job) who went to Stanford (where I wish I could have gone) and who listed in his official biography that his favorite book was Faulkner’s Light in August (which I wish I had understood well enough to appreciate).  I even met him once (very briefly and incidentally) at the Radio Shack in the Minneapolis Skyway.

Today Sports Illustrated published an excellent article written by Collins about his coming out.  If you haven’t seen it yet, I highly recommend reading it.

\hspace{1mm}

There is one feature of his article, though, that bothers me.  This is, in fact, something that stems from a larger problem I have with the way we think about homosexuality.  Namely, Jason Collins goes out of his way to say that “Being gay is not a choice.”

This is a sentiment that is repeated by just about everyone who comes out publicly: “I didn’t choose to be this way.”  Of course, I don’t doubt that this is a true and even good thing to say.  But it bothers me for two reasons:

  1. It oversimplifies the complex and highly diverse nature of human sexuality, and it creates a misleading impression that people are born as either “gay” or “straight.”
  2. It implies that homosexuality is okay only because its practitioners have no choice in the matter.

Homosexuality is becoming more accepted; that much is very clear from the chart above.  But I am worried that it is being accepted only because people can be sympathetic to a very particular, and for the most part very untrue, narrative.  This is the narrative of the person born without the ability to feel attracted to the opposite sex, who is forced by nature to develop homosexual rather than heterosexual relationships.  This is the person who announces “I didn’t choose to be this way,” as if to say “I would have been straight if I could have been, but I was born gay so the decision wasn’t mine to make.”

But what would have been wrong if the person could have decided?  If some hypothetical person felt equally attracted to both genders, what would be wrong with them choosing to pursue a homosexual rather than heterosexual relationship?  It seems to me that this is a narrative that we, as a society, are still not willing to accept.

And I think we should get over that.  Because, ultimately, the more freedom people are given to choose their partners, the more easily they will be able to form happy and stable relationships.  And I strongly suspect that, in a world where value judgments were not associated with the gender of one’s partner, gender would be a significant part of that choice for a great many people.

\hspace{1mm}

Let me try to say it this way.  The idea of the “gay person” is a myth, and so is the idea of the “straight person.”  Every individual is attracted to a particular set of features and attributes, both physical and personal, that they want in a potential partner.  For most people, members of the opposite gender are much more likely to have an attractive combination of those attributes than a member of the same gender.  But for no one is every member of the opposite gender more attractive than every member of the same gender.  In other words, no one is 100% straight, and similarly, no one is 100% gay.

To make this discussion a little more concrete, let me introduce a hypothetical measure of a person’s sexual preference.  If I were asked to design such a metric, it might go something like this.  For a given individual, randomly select one person from the opposite gender and one person from the same gender (if you want, choose both of them to be within the individual’s age group).  Have the individual spend some time with each of the two people, and then report an honest assessment of which person the individual found more attractive (hypothetically, you could get this information from something like plethysmography).  The probability that the individual will be more attracted to the person of their same gender could be called the “gay preference ratio.”

I am virtually certain that for no individual would this ratio be exactly 0 or exactly 1.  Instead, I imagine that its distribution across a large population would be something like this:

You can fairly say that people on the far left and on the far right of this distribution have essentially no choice in which gender to date: they are virtually never attracted to one of the two sexes.  All those people in the middle, however, could theoretically have a happy relationship with a person from either of the two genders.

Most people (most Americans, anyway) who are publicly gay probably do correspond to the far right of this distribution, and in this sense their proclamations of “I didn’t choose to be gay” are likely very honest and heartfelt.  After all, until quite recently (arguably), societal prejudice has made being a gay American so difficult that only those who have nearly no alternative would choose it.   (And in most places this is still the case, to varying degrees.)  I suspect, however, that for every publicly gay American there are a dozen straight Americans who, in a completely free society, could easily have settled down in a homosexual relationship had a good one presented itself.

Thus, while gay rights are advancing at an impressive rate, we are still a long way from granting the sort of casual non-judgmentalism that would benefit people in the middle of this distribution.  Their existence just doesn’t fit the narrative that we are willing to follow in order to accept homosexuality.  Perhaps in the near future our gay rights discussion will shift toward eliminating this completely false narrative, and advancing the idea that everyone should be free to choose who they want to be with, regardless of whether gender is part of that choice.

\hspace{1mm}

\hspace{1mm}

Footnotes

1.  As I read stories about Jason Collins, I can’t help but contrast them with the story of another professional basketball player, Sheryl Swoopes.  Swoopes was married in 1995 to a man and had a son.  Then in 2005 she “came out” as gay, saying that she had fallen in love with her former (female) assistant coach.  That relationship ultimately didn’t last, and in 2011 she became engaged to a man again.

While her coming out in 2005 was widely-reported, I haven’t read anything about her since.  Somehow, it’s easier to champion the courage of someone who is gay than of someone who is capable of being attracted to both men and women.  I hope that in the near future stories like hers become more commonplace and more easily acceptable.

2.  The schematic distribution drawn above is actually a logit-normal distribution.  It is, of course, completely hypothetical.

3.  There seems to be much hand-wringing about whether people are born gay, or whether they develop it through some factors present in their upbringing.  Personally, I can’t imagine why this distinction matters.  I love sports, but who cares whether that love arises primarily from genetic or environmental factors?

UPDATE:  4.  In this post, I have not bothered to draw a distinction between “gender” and “sex,” and have mostly used the former.  I understand that’s not very correct, but for the point I’m trying to make it doesn’t matter which way you decide to look at the word “gender.”

5.  Of course, it was silly of me not to bring up the Kinsey Scale in this post, which was the first real attempt to make a quantitative measure of “gay preference ratio.”  I hope no one got the impression that I consider myself the first person to realize that sexual orientation comes on a continuous spectrum.

Where the periodic table ends

April 27, 2013

[Figure: periodic_table_ends]

There is a wonderful story in physics, with a rich history, that begins with this question:  What is the biggest possible atomic number?

In other words, where does the periodic table end?  We (as a species) have managed to observe or create nuclei with atomic number ranging from 1 to as large as 118.  But how far, in theory, could we keep going?

\hspace{1mm}

As it turns out, there is a scientific law that says that nuclei whose charge Z is greater than some particular critical value Z_c cannot exist.  What’s more, this critical value Z_c is related to the mysterious fine structure constant \alpha = e^2/\hbar c \approx 1/137, one of the most fundamental and mysterious constants of nature.  (Here, e is the electron charge, \hbar is Planck’s constant divided by 2 \pi, and c is the speed of light.)

In particular, the periodic table should end at Z \approx 1/\alpha \approx 137.

In this post I’ll explain where this law comes from, and why it is that no point object can have a charge greater than \sim 137 (in units of the electron charge).

\hspace{1mm}

\hspace{1mm}

To begin with, imagine a point in space at which is localized a very large positive charge Z e.  Like this:

[Figure: Z_nucleus]

I’ll call this point the nucleus.  You can now ask the question of what happens if you release an electron in the neighborhood of this nucleus.  Obviously, the electron gets strongly bound to the nucleus, and it settles into a compact state with some size r around the nucleus.  To figure out how big r should be, you can remember that its value is determined by a balance between the typical energy of attraction -Ze^2/r between the electron and the nucleus and the kinetic energy K associated with confining the electron to within the distance r (as was explained here for the hydrogen atom).

The tricky part here is that for a very large nuclear charge Z the electron kinetic energy gets big, and the electron ends up moving with speeds close to the speed of light.  To see this, consider that when the nuclear charge Z is large, the electron becomes tightly bound, which means r is small.  From the uncertainty principle, confining the electron to within the distance r gives it a momentum p \sim \hbar/r.  If p is big enough (or r is small enough) that p \gg m c (where m is the electron mass), then the kinetic energy can be described using the relativistic formula K = p c \sim \hbar c/r.

Now if you put together the potential and kinetic energy, you’ll find that the total energy as a function of r is

E \sim \hbar c/r - Z e^2/r.

This is a disconcerting formula.  Unlike for the hydrogen atom (where the electron moves much slower than the speed of light), this energy has no minimum value as a function of r.  In particular, if Z is large enough that Z \gtrsim \hbar c/e^2 \sim 1/\alpha \sim 137, then the energy just keeps getting lower as r is made smaller.

What this means is that for Z \gtrsim 137, the electron state is completely unstable, and the electron collapses onto the nucleus.
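In SI units the same scaling estimate reads Z_c \sim 4 \pi \epsilon_0 \hbar c / e^2, which is just 1/\alpha.  Here is a minimal numerical check with textbook constants (this reproduces only the schematic estimate above, not the full relativistic calculation):

```python
from math import pi

# Textbook SI values
hbar = 1.055e-34    # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
e = 1.602e-19       # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m

alpha = e**2 / (4 * pi * eps0 * hbar * c)   # fine structure constant
print(f"alpha ~ 1/{1 / alpha:.1f}")          # about 1/137
print(f"critical Z ~ {1 / alpha:.0f}")       # the scaling estimate: Z_c ~ 137
```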

\hspace{1mm}

\hspace{1mm}

This may not seem like a particularly big problem to you.  You may think that perhaps one can just keep electrons away from the nucleus (at least for a little while), and the nuclear charge Z > 137 will sit happily in space.

However, when Nature wants electrons badly enough, it finds a way to get them.

In this case,  the nuclear charge Z > 137 creates an electron binding energy that is so large, it becomes even larger than the rest mass energy of the electron, mc^2.  With such a large energy at stake, the nucleus can literally rip apart the vacuum and pull an electron from it.

Or, more correctly, the nucleus can wait until random fluctuations of the electromagnetic field produce an electron and positron pair (which under normal circumstances would immediately disappear again), then greedily suck in the electron and spit out the positron.  Like this:

[Figure: greedy_nucleus]

[This process is similar to the perhaps more famous phenomenon of Hawking radiation at the edge of a black hole.  In black holes the enormous gravitational field rips antiparticles from the vacuum and sucks them in, spitting out their (normal) particle partners.  The difference is that Hawking radiation is an extremely slow process, whereas the process described above would be nearly instantaneous.]

The ripping and devouring of vacuum electrons by the large nucleus continues until the charge Z has been reduced to the point where it becomes smaller than \sim 137, and everything settles down again.

It’s a fascinating instability of the vacuum itself, and its result is to prohibit too much charge from existing at any one location.

This means an end to the periodic table.

\hspace{1mm}

\hspace{1mm}

Footnotes

1.  The derivation in this post was pretty schematic, and all I showed was that the critical value Z_c of the nuclear charge is proportional to 1/\alpha \approx 137.  Up until the 1970s it was believed that Z_c was exactly 137.  More recent works, however, have put this number closer to 170.

2.  Sadly, there’s no easy way to observe this vacuum instability at Z > 1/\alpha; making nuclei with charge 137 is no simple matter.  So one can think of this as yet another interesting fable of physics relegated to trivia by the fact that in our universe \alpha just happens to be a small number.  However, there are synthetic systems (like graphene) where the effective fine structure constant happens to not be a small number.  In such cases even charges as small as Z = 2 cannot exist stably.

3.  The picture, and the title, at the top of this post is of course taken from Shel Silverstein’s Where the Sidewalk Ends.
