
Where the periodic table ends

April 27, 2013

[Image: "where the periodic table ends," after Shel Silverstein's Where the Sidewalk Ends (see Footnote 3).]

There is a wonderful story in physics, with a rich history, that begins with this question:  What is the biggest possible atomic number?

In other words, where does the periodic table end?  We (as a species) have managed to observe or create nuclei with atomic number ranging from 1 to as large as 118.  But how far, in theory, could we keep going?


As it turns out, there is a scientific law that says that nuclei whose atomic number Z is greater than some particular critical value Z_c cannot exist.  What’s more, this critical value Z_c is related to the fine structure constant \alpha = e^2/\hbar c \approx 1/137, one of the most fundamental and mysterious constants of nature.  (Here, e is the electron charge, \hbar is Planck’s constant divided by 2 \pi, and c is the speed of light.)

In particular, the periodic table should end at Z \approx 1/\alpha \approx 137.

In this post I’ll explain where this law comes from, and why it is that no point object can have a charge greater than \sim 137 \, e.


To begin with, imagine a point in space at which is localized a very large positive charge Z e.  Like this:

[Figure: a point nucleus carrying charge Ze.]

I’ll call this point the nucleus.  You can now ask the question of what happens if you release an electron in the neighborhood of this nucleus.  Obviously, the electron gets strongly bound to the nucleus, and it settles into a compact state with some size r around the nucleus.  To figure out how big r should be, you can remember that its value is determined by a balance between the typical energy of attraction -Ze^2/r between the electron and the nucleus and the kinetic energy K associated with confining the electron to within the distance r (as was explained here for the hydrogen atom).

The tricky part here is that for a very large nuclear charge Z the electron kinetic energy gets big, and the electron ends up moving with speeds close to the speed of light.  To see this, consider that when the nuclear charge Z is large, the electron becomes tightly bound, which means r is small.  From the uncertainty principle, confining the electron to within the distance r gives it a momentum p \sim \hbar/r.  If p is big enough (or r is small enough) that p \gg m c (where m is the electron mass), then the kinetic energy can be described using the relativistic formula K = p c \sim \hbar c/r.

Now if you put together the potential and kinetic energy, you’ll find that the total energy as a function of r is

E \sim \hbar c/r - Z e^2/r.

This is a disconcerting formula.  Unlike for the hydrogen atom (where the electron moves much slower than the speed of light), this energy has no minimum value as a function of r.  In particular, if Z is large enough that Z \gtrsim \hbar c/e^2 \sim 1/\alpha \sim 137, then the energy just keeps getting lower as r is made smaller.

What this means is that for Z \gtrsim 137, the electron state is completely unstable, and the electron collapses onto the nucleus.
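To see the collapse with your own eyes, here is a rough numeric sketch in Python.  It is not from the original argument above, just an illustration of it: it uses the same uncertainty-principle estimate p \sim \hbar/r, but with the full relativistic kinetic energy \sqrt{(pc)^2 + (mc^2)^2} - mc^2, which crosses over between the non-relativistic and relativistic limits discussed above.  Everything is measured in units of mc^2 and the Compton wavelength \hbar/mc.

```python
import numpy as np

alpha = 1 / 137.036  # fine structure constant

def total_energy(r, Z):
    """E(r) in units of m c^2, with r in units of the Compton wavelength
    hbar/(m c).  Kinetic energy uses p ~ hbar/r with the full relativistic
    dispersion, so it behaves like 1/(2 r^2) at large r and like 1/r at
    small r; the potential is -Z e^2 / r = -Z alpha / r in these units."""
    kinetic = np.sqrt(1.0 + 1.0 / r**2) - 1.0
    potential = -Z * alpha / r
    return kinetic + potential

r = np.logspace(-4, 4, 20001)
for Z in (1, 100, 137, 170):
    E = total_energy(r, Z)
    i = np.argmin(E)
    if 0 < i < len(r) - 1:
        print(f"Z = {Z:3d}: minimum at r = {r[i]:9.3e}, E = {E[i]:+.3e} m c^2")
    else:
        print(f"Z = {Z:3d}: no minimum -- E falls without bound as r -> 0")
```

For Z = 1 the minimum sits at r \approx 137 Compton wavelengths (the Bohr radius), with E \approx -2.7 \times 10^{-5} \, mc^2 \approx -13.6 eV.  For Z = 170 the minimum is gone entirely: the energy just keeps falling as r \rightarrow 0.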


This may not seem like a particularly big problem to you.  You may think that perhaps one can just keep electrons away from the nucleus (at least for a little while), and the nuclear charge Z > 137 will sit happily in space.

However, when Nature wants electrons badly enough, it finds a way to get them.

In this case, the nuclear charge Z > 137 creates an electron binding energy that is so large, it exceeds the cost 2mc^2 of creating an electron-positron pair from nothing.  With such a large energy at stake, the nucleus can literally rip apart the vacuum and pull an electron from it.

Or, more correctly, the nucleus can wait until random fluctuations of the electromagnetic field produce an electron and positron pair (which under normal circumstances would immediately disappear again), then greedily suck in the electron and spit out the positron.  Like this:

[Figure: the nucleus waits for a virtual electron-positron pair to appear, sucks in the electron, and spits out the positron.]

[This process is similar to the perhaps more famous phenomenon of Hawking radiation at the edge of a black hole.  In black holes the enormous gravitational field rips antiparticles from the vacuum and sucks them in, spitting out their (normal) particle partners.  The difference is that Hawking radiation is an extremely slow process, whereas the process described above would be nearly instantaneous.]

The ripping and devouring of vacuum electrons by the large nucleus continues until the charge Z has been reduced to the point where it becomes smaller than \sim 137, and everything settles down again.

It’s a fascinating instability of the vacuum itself, and its result is to prohibit too much charge from existing at any one location.

This means an end to the periodic table.


Footnotes

1.  The derivation in this post was pretty schematic, and all I showed was that the critical value Z_c of the nuclear charge is proportional to 1/\alpha \approx 137.  Up until the 1970s it was believed that Z_c was exactly 137.  More recent works, however, which account for the finite size of the nucleus, have put this number closer to 170.

2.  Sadly, there’s no easy way to observe this vacuum instability at Z > 1/\alpha; making nuclei with charge 137 is no simple matter.  So one can think of this as yet another interesting fable of physics relegated to trivia by the fact that in our universe \alpha just happens to be a small number.  However, there are synthetic systems (like graphene) where the effective fine structure constant happens to not be a small number.  In such cases even charges as small as Z = 2 cannot exist stably.

3.  The picture and the title at the top of this post are, of course, taken from Shel Silverstein’s Where the Sidewalk Ends.

Nothing lasts forever

April 25, 2013

Quick, what’s the integral from zero to infinity of  \sin x?


If you’re a good math student, you’ll tell me that the answer is undefined, since \sin x oscillates forever and so the integral doesn’t converge.

I, on the other hand, am not a good math student, so I am free to tell you that I know the answer.

The integral from zero to infinity of \sin x is 1.

[Figure: a sine curve, with the humps above the axis shaded red and the humps below shaded blue.]  What’s the total area under this curve (red − blue)?  I say it’s 1.


Let me explain where this answer comes from, and why I’m so confident that it’s right.  In doing so, perhaps I will demonstrate a little bit about the relationship that physics has with math.

First of all, as a physically-minded person I should interpret what it means to write \int_0^\infty \sin (x) dx.  In my mind, the only meaningful interpretation of the question “what is \int_0^\infty \sin (x) dx?” is something like “what is the net (integrated) effect of something that oscillates for a very long time?”  The \infty in the integral means that when someone asks “how long do you mean by ‘very long’?”, the correct answer is “as long as I want.”


Now I should answer the question, and I can do so as long as I hold to one belief: In the real world, nothing actually lasts forever.

That is, I don’t know how long, exactly, the \sin x in the integral will keep going, but I do know that it should die off eventually.  So let me assume that the amplitude of the sine wave dies off very slowly (“How slowly?”  “As slowly as I want.”).  Then I can calculate the integral, get a perfectly well defined answer, and verify that my answer doesn’t depend on how slowly I killed off the sine wave.

For example, say I kill off the oscillations of the sine function exponentially, by replacing \sin x in the integral with \sin(x) e^{-x/L}, where L is a very large number.  Then I can calculate the integral, and check what happens when L gets arbitrarily large.  You can do this exercise for yourself (by hand if you’re diligent, using Wolfram Alpha if you’re lazy) and you’ll find that when L \gg 1 the answer is very close to 1.  (“How close?”  “As close as you want.”).
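In case you’d rather not do even that much by hand, here’s a quick numeric check in Python (a sketch, not a proof).  For this particular cutoff the antiderivative gives exactly L^2/(L^2+1), so the two columns should agree:

```python
import numpy as np
from scipy.integrate import quad

# \int_0^\infty e^{-x/L} sin(x) dx for increasingly gentle damping.
# weight='sin' switches quad to QUADPACK's special routine for
# oscillatory integrals over an infinite interval.
for L in (1, 10, 100, 1000):
    value, _ = quad(lambda x: np.exp(-x / L), 0, np.inf,
                    weight='sin', wvar=1.0, limlst=200)
    print(f"L = {L:5d}:  integral = {value:.6f}   exact: {L**2 / (L**2 + 1):.6f}")
```

By L = 1000 the integral is 0.999999: as close to 1 as you want.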


The more precise mathematical statement goes like this:

\int_0^\infty \sin(x) dx = \lim_{L \rightarrow \infty} \int_0^\infty e^{-x/L} \sin(x) dx = 1.

(This is a mathematical trick that I use, in one form or another, all the time.)


So, to recap, what is the integral of \sin x from zero to infinity?

My math textbook, my math teacher, and all my math software say that the answer is undefined.  But as long as you grant me that nothing actually lasts forever, I’ll tell you that the answer is 1.


Footnotes

1.  It is not really my intention to bash on mathematics or math teachers.  For example, I found that the most “hard core” math course that I took in college — real analysis — was thoroughly grounded in intuitive and physical thinking of the sort I am advocating here.

2.  One way to think about the final answer, \int_0^\infty \sin(x) dx = 1, is that the area under the curve above (red – blue) depends on how many red bumps and how many blue bumps you count.  Every red bump contributes an area +2 and every blue bump has area -2.  So as you count them from left to right, your final tally for the area will go back and forth between +2 and 0.  The correct answer, 1, is the average of these two, which you might expect from any process that slowly washes out your counting procedure.

3.  If you’re curious, \int_0^\infty \cos(x) dx = 0.  (This can of course be worked out using the same mathematical trick as above.)

4.  If you’re really curious, \int_0^\infty \sin(x + \phi) dx =\cos(\phi).  So the integral from zero to infinity of an oscillating wave can take any value from -1 to 1, depending on its phase when it started.  This result could be anticipated from the simple argument in Footnote 2.

5.  Just to reassure you, there is nothing magical about the choice of an exponential cutoff e^{-x/L}.  I usually use it because it’s easy to work with.  But you’ll find that any slow damping of the oscillations will give the same result.

I suspect, in fact, that there is some nice theorem here.  Like:

For any continuous function f(y) such that f(y \rightarrow \infty) \rightarrow 0 [UPDATE: and f(0) = 1], \lim_{L \rightarrow \infty} \int_0^\infty \sin(x) f(x/L) dx = 1.

If I were a smarter person I could probably prove this theorem and generalize it to any oscillating function (with zero mean).  Can any of my more mathematically inclined readers shed light on the subject?  Maybe there are other necessary constraints on f(y)?
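In lieu of a proof, here is a quick numeric experiment (under the theorem’s assumptions: f continuous, f(0) = 1, decaying at infinity; the four cutoff shapes below are just ones I picked for illustration):

```python
import numpy as np
from scipy.integrate import quad

L = 200.0  # the damping scale: "as slowly as I want"

cutoffs = {
    "exponential": lambda y: np.exp(-y),
    "gaussian":    lambda y: np.exp(-y**2),
    "lorentzian":  lambda y: 1.0 / (1.0 + y**2),
    "linear ramp": lambda y: np.maximum(0.0, 1.0 - y),
}

# \int_0^\infty f(x/L) sin(x) dx for several cutoff shapes f
for name, f in cutoffs.items():
    value, _ = quad(lambda x: f(x / L), 0, np.inf,
                    weight='sin', wvar=1.0, limlst=200)
    print(f"{name:12s}: {value:.4f}")
```

All four come out within about 1/L of 1, consistent with the conjecture.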

Spare me the math: the Lamb Shift

April 24, 2013

To kick off SMTM, I’ll look at a topic that I never really understood when I was in graduate school: the Lamb shift.

The Lamb Shift: what it is

In its most commonly-discussed form, the Lamb shift is a small effect.  In fact, it’s a very small effect (which is probably why I never bothered to learn it in the first place).  The Lamb shift is a minuscule change in (some of) the energy levels of the hydrogen atom relative to where it seems like they should be.  For example, the binding energy of an electron to the hydrogen nucleus (a proton) is about 13.6 electron volts.  The Lamb shift is a phenomenon that changes this energy level by about 4\times10^{-6} eV, or about 0.00003%.  But the existence of this shift was a serious puzzle to physicists in the 1940s and 50s, and its final resolution provided a beautiful piece of physics that helped spur the development of quantum electrodynamics, one of the most spectacularly successful scientific theories in history.

The essence of the Lamb shift can be stated like this: it is the energy of interaction between hydrogen and empty space.

The dominant contribution to the energy of the hydrogen atom is, of course, the interaction of the electron with the proton it’s orbiting.  If you want a really quick way to derive the energy of the hydrogen atom, all you need to remember is that the electron cloud around the proton has some characteristic size a.  Confining the electron to within this cloud costs some kinetic energy \sim \hbar^2 /m a^2, and it buys you some energy of attraction, \sim -e^2/a (here I’m being too lazy to write out the 4\pi\epsilon_0s that come in SI units).  So the total energy is something like E \sim \hbar^2/ma^2 - e^2/a.  If you minimize E with respect to a (take the derivative and set it to zero), you’ll find that

a \sim \hbar^2/me^2

and

E \sim -me^4/\hbar^2 \sim -e^2/a.

The constant a \approx 0.5 Angstroms is the Bohr radius, which is the typical size of the hydrogen atom (and, roughly speaking, of any atom).  The energy scale e^2/a \equiv R is the Rydberg energy (13.6 eV, up to the numerical factors we’re dropping).
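As a sanity check, here’s a little Python sketch that does this minimization numerically with the real constants (restoring the 4\pi\epsilon_0 this time).  Since the schematic formula drops numerical factors, the output should land within a factor of order one of the textbook values:

```python
from scipy.constants import hbar, m_e, e, epsilon_0, pi
from scipy.optimize import minimize_scalar

k = e**2 / (4 * pi * epsilon_0)   # the "e^2" of the text, in SI units

def energy(a):
    # confinement (kinetic) cost minus Coulomb attraction, as in the text
    return hbar**2 / (m_e * a**2) - k / a

res = minimize_scalar(energy, bounds=(1e-12, 1e-9), method='bounded')
a_min = res.x
print(f"a ~ {a_min * 1e10:.2f} Angstrom   (Bohr radius: 0.53 Angstrom)")
print(f"E ~ {energy(a_min) / e:.1f} eV    (Rydberg: -13.6 eV)")
```

The minimum comes out at a \approx 1.1 Angstroms and E \approx -6.8 eV: off by the promised factors of 2, but with exactly the right dependence on \hbar, m, and e.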

The Lamb shift comes from the way this balanced state between electron and proton is influenced by the slight, random buffetings from the vacuum itself.


The hydrogen atom

There are two players in this story: the hydrogen atom, and empty space.  I’ll describe the former first, since the latter (paradoxically) is considerably harder.

In reality, the Lamb shift is most easily observed in excited states of hydrogen (the P states, see Footnote 1 at the bottom), but for the purpose of this discussion it’s easiest to think about the ground state.  In terms of the electron probability cloud, the ground state of the hydrogenic electron looks like this:

[Figure: the ground-state electron probability cloud of hydrogen, peaked at the center of the atom and decaying outward.]

It has a peak right at the middle of the atom, and it falls off exponentially.

I know there is a lot of trickiness associated with whether to think about an electron as a particle or as a wave (my own favorite take is here — in short, an electron is a particle that surfs on a wave), but for this discussion it’s easiest to think of an electron as a point object that just happens to arrange itself in space according to the probability density plotted above.


The vacuum

There is a lot going on in empty space.  If this crazy idea is completely new to you, I would (humbly) suggest reading my post on the Casimir effect.  The upshot of it is that all of space is filled by endlessly boiling quantum fields, and one of these, the electromagnetic field, is responsible for conveying electromagnetic forces.  As a result of its indelible boiling, however, the electromagnetic field can push on charged objects, like our electron, even when there are no other charged objects around to seemingly initiate the pushing.

To get a better description of the electromagnetic field in vacuum, it will be helpful to imagine that our electron sits inside a large metal box with size L.  Inside this metal box are lots of randomly-arising electromagnetic waves (“virtual photons”).  Something like this:

[Figure: an electron inside a large metal box filled with randomly-arising electromagnetic waves (“virtual photons”).]

When dealing with quantum fields, a good rule of thumb is to expect that, in vacuum, every possible oscillatory mode will be occupied by one quantum of energy.  In this case, it means that for every possible wave vector k = 2\pi / \lambda, where \lambda = 2L, 2L/2, 2L/3, 2L/4, ... is a permissible photon wavelength, there will be roughly one photon present inside the box.  This photon has an energy E_k = \hbar c k, where c is the speed of light.  One can estimate the typical magnitude of the electric field the photon creates, |\vec{\mathcal{E}_k}|, by remembering that |\vec{\mathcal{E}_k}|^2 gives the energy density of an electric field.  Since the photon fills the whole box, this means that |\vec{\mathcal{E}_k}|^2 \times L^3 \sim E_k, or |\mathcal{E}_k|^2 \sim \hbar c k/L^3.  This electric field oscillates with a frequency ck.
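To put a number on this, here’s a tiny Python snippet that evaluates these estimates for one concrete mode (my own arbitrary choice: a visible-wavelength photon in a one-meter box, with the \epsilon_0 that the schematic formula drops restored):

```python
import numpy as np
from scipy.constants import hbar, c, e, epsilon_0

L = 1.0        # box size in meters (fictitious; it drops out of the final answer)
lam = 500e-9   # one concrete mode: a visible-light wavelength

k = 2 * np.pi / lam
E_k = hbar * c * k                          # one quantum in this mode
field = np.sqrt(E_k / (epsilon_0 * L**3))   # from epsilon_0 |E|^2 L^3 ~ E_k
print(f"E_k = {E_k / e:.2f} eV, |E_k| ~ {field:.2e} V/m, frequency ck ~ {c * k:.2e} rad/s")
```

One visible-light quantum spread over a cubic meter makes a field of only \sim 10^{-4} V/m, but there is one such photon for every mode of the box.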

So now the stage is set.  The hydrogen atom sits inside a “large box” (which we’ll do away with later), and inside the box is a huge mess of random electric fields that can push on the electron.  Now we should figure out how all this pushing affects the hydrogen energy.

[By the way, you may be bothered by the fact that all these randomly-arising photons seem to endow the interior of the box with an infinite amount of energy.  If that is the case, then there's nothing much I can say except that you and I are in the same club, with only speculation to assuage our uneasiness.]


How the vacuum pushes on hydrogen

The essence of the Lamb shift is that the random electric fields push on the electron, and in doing so they move it slightly further away from the proton, on average, than it would otherwise be.  Another way to say this is that the distribution of the electron’s position gets blurred over some particular (small) length scale \delta r.  In particular, the sharp peak in the distribution near the center of the atom should get slightly rounded, like this:

[Figure: the same probability cloud, with the sharp peak at the center slightly rounded off.  The “smearing length” \delta r is greatly exaggerated in this picture.]

The resulting shift of the electron distribution away from the center lowers the interaction energy of the electron with the proton.  To estimate the amount of energy that the electron loses, you can think that in those moments where the electron happens to be within a distance \delta r of the nucleus, it frequently finds itself getting pushed outward by an amount \delta r.  As a result of this outward push it loses an energy e^2/\delta r.  This means that the Lamb shift energy

\Delta E \sim (e^2/\delta r) \times [\text{fraction of time the electron spends within } \delta r \text{ of the nucleus}]

\Delta E \sim (e^2/\delta r) \times (\delta r)^3/a^3

\Delta E \sim e^2 (\delta r)^2/a^3

Now all that’s left is to estimate \delta r.


The trick here is to realize that all of those photons within the metal “box” are independently shaking the electron, and each push is in a random direction.  So if some photon with wave vector k_1 produces, by itself, a displacement of the electron (\delta r)_{k_1}, then the total displacement (\delta r) satisfies

(\delta r)^2 = (\delta r)^2_{k_1} + (\delta r)^2_{k_2} + (\delta r)^2_{k_3} ...

[This is a general rule of statistics: independently-contributing things add together in quadrature.]
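If you don’t believe that rule, it takes a minute to check it with random sampling.  A toy sketch (my own, with three pushes of made-up sizes and random signs):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = np.array([1.0, 2.0, 3.0])   # magnitudes of three independent pushes

# a random sign for each push, over many trials
pushes = rng.choice([-1.0, 1.0], size=(200_000, 3)) * sizes
total = pushes.sum(axis=1)

print(np.mean(total**2))   # ~ 14, i.e. 1^2 + 2^2 + 3^2: addition in quadrature
```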

[Figure: a photon is essentially just an electric field that keeps reversing sign.]

In our case, each (\delta r)_k comes from the influence of a photon, which has electric field \vec{\mathcal{E}_k}.  The simplest way to estimate (\delta r)_k is to imagine that the electric field \vec{\mathcal{E}_k} pushes on the electron for a time \tau \sim 1/kc (the period of its oscillation), after which it reverses its direction (as pictured above).  During that time, the acceleration of the electron is something like |\vec{A}| \sim |\vec{F}|/m, where \vec{F} = e\vec{\mathcal{E}_k} is the force of the electric field pushing on the electron, and its net displacement is (\delta r)_k \sim |\vec{A}| \tau^2.  This means

(\delta r)_k \sim e |\vec{\mathcal{E}_k}|/mk^2 c^2.


Now we should just add up all the (\delta r)^2_k’s.  Since there is a very large number of photons with a nearly continuous range of energies (due to the very large size of the confining “box”), we can replace a sum over all k’s with an integral: \sum_k (\delta r)^2_k \rightarrow L^3 \int d^3 k \, (\delta r)^2_k.  Inserting the expressions for (\delta r)_k and |\vec{\mathcal{E}_k}|^2 gives the following:

(\delta r)^2 \sim (e^2 \hbar/m^2 c^3) \int (1/k) dk.

You can notice that the size L of the box drops out of the expression, which is good because the box was completely fictitious anyway.

The only remaining thing to figure out is what to do with the integral \int(1/k) dk, which is technically equal to infinity.  In physics, however, when you get an infinite answer, it means that you forgot to stop counting things that shouldn’t actually count.  In this case, we should stop counting photons whose wavelength is either too short or too long to affect the electron.  On the long wavelength side, we should stop counting photons when their wavelength gets bigger than the size of the atom, a.  Such long wavelength photons make an electric field that oscillates so slowly that by the time it has changed sign, the electron has likely moved to a completely different part of the atom, and the net effect is zero.  On the short wavelength side, we should stop counting photons when their wavelength gets shorter than the Compton wavelength, \hbar/mc.  Such photons are super-energetic, with energy larger than mc^2, which means their energy is so high that they don’t push around electrons anymore: they spontaneously create new electron-positron pairs from the vacuum.  [In essence, it doesn't make sense to talk about electron position at length scales shorter than the Compton wavelength.]

Using these two wavelengths as the upper and lower cutoffs of the integral gives \int (1/k) dk = \ln(1/\alpha).  Here, \alpha = e^2/\hbar c \approx 1/137 is the much-celebrated fine structure constant.
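Putting numbers into these estimates (a hedged sketch with real constants; all the \sim’s above mean the factors of order one shouldn’t be trusted):

```python
import numpy as np
from scipy.constants import hbar, c, m_e, e, epsilon_0, pi

k_coul = e**2 / (4 * pi * epsilon_0)   # the "e^2" of the text, in SI units
alpha = k_coul / (hbar * c)            # fine structure constant ~ 1/137
a = hbar**2 / (m_e * k_coul)           # Bohr radius: the long-wavelength cutoff
lam_C = hbar / (m_e * c)               # Compton wavelength: the short-wavelength cutoff

log_factor = np.log(a / lam_C)         # \int (1/k) dk between the two cutoffs
dr2 = (k_coul * hbar / (m_e**2 * c**3)) * log_factor
print(f"ln(1/alpha) = {log_factor:.2f}")
print(f"delta r ~ {np.sqrt(dr2):.1e} m, i.e. {np.sqrt(dr2) / a:.1e} of the atom's size")
```

The smearing length comes out around 10^{-13} m: much bigger than the nucleus, but roughly a thousand times smaller than the atom.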

[It is perhaps worth pausing to note, as so many before me have done, what a strange and interesting object the fine structure constant is.  \alpha contains only the most fundamental constants of electricity, quantum mechanics, and relativity, (e, \hbar, and c), and they combine to produce exactly one dimensionless number.  How strange that this number should be as large as 137.  As a general rule, fundamental numbers produced by the universe are usually close to 1.]


Now we have all the pieces necessary to assemble a result for the Lamb shift.  And actually, if you like the fine structure constant, then you’ll love the final answer.  It looks like this:

\Delta E/R \sim \alpha^3 \ln(1/\alpha).


At the beginning of this post I mentioned that the Lamb shift is very small — only about 1/500,000 of the energy of hydrogen (the Rydberg energy, R).   Now, if you want to know why the Lamb shift is so small, the best answer I have is that the Lamb shift is proportional to \alpha^3, and in our universe the fine structure constant \alpha just happens to be a small number.
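For the record, here is that number.  (It comes out to \sim 3 \times 10^{-5} eV, while the measured shift quoted at the top is about 4 \times 10^{-6} eV; an order-of-magnitude agreement is all the dropped numerical factors allow us to promise.)

```python
import numpy as np

alpha = 1 / 137.036
R = 13.6  # Rydberg energy in eV

ratio = alpha**3 * np.log(1 / alpha)
print(f"Delta E / R ~ {ratio:.1e}   (about 1 part in {1 / ratio:,.0f})")
print(f"Delta E ~ {ratio * R:.1e} eV")
```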

It’s interesting to note that if we somehow lived in a universe where \alpha was not so small, then the Lamb shift would get big, and those random fluctuations of the quantum field would get large enough to completely knock the electron off the nucleus.  This would be a universe without atoms, and consequently, without you and me.


Footnotes

[Figure: probability clouds for different hydrogen states.  Different states along the same row are supposed to have the same energy, but the Lamb shift splits the S states from the P and D states.]

1)  You might notice that the Lamb shift appeared only because the electron probability cloud had a peak at the center of the atom.  If it didn’t have a peak — say, if it went to zero near the center — then there would be no Lamb shift.  This is in fact exactly how the Lamb shift was discovered.  Certain excited states of the hydrogen atom have a peak near the center and others go to zero.  So while normal quantum mechanics predicts that, say, the 2S and 2P states (shown in the figure above) have the same energy, in fact the Lamb shift makes a small difference between them.  This difference can be observed as a faint microwave signal from interstellar hydrogen.

2)  It’s probably worth noting that if you increased the fine structure constant \alpha, you would have run into bigger problems long before you started fussing about the Lamb shift.

3) Also, while \alpha is a small number in our normal world, it’s not hard to imagine synthetic worlds (the interior of certain materials) where \alpha is not a small number.  For example, graphene is a material inside of which there is an effective speed of light that is 300 times smaller than c.  This makes \alpha very close to 1, and the sort of effects discussed in this post get very real.  This is part of why there has been so much ado about graphene among physicists: it’s quite an exciting (and frustrating) playground for people like me.

UPDATE:

4)  I just came across this video of Freeman Dyson (one of my personal favorite physicists) explaining the Lamb shift and some of the history behind it.  His conceptual summary of it starts at 2:43.

Spare me the math

April 24, 2013

Part of what scuttled my blogging during the past couple years was the fact that every post took me such a long time to write.  Crafting a careful, conceptually clear, and readable blog post takes (me) a lot of work: I would estimate that the average physics-related post took me 12 (non-consecutive) hours to write.  As a consequence, my blogging became infrequent, and I was often too daunted to try and delve into some interesting scientific topic.

In this second time around, I’m going to try and fix that a little bit.  I’ve decided to confront my fear of writing sloppy or incomplete posts by, in short, writing sloppy and incomplete posts.  I’m hoping that the increased breadth of material will compensate, to at least some of my readers, for the potentially decreased quality.

I’ve also resolved to have less trepidation about delving into more “hard core” topics from condensed matter physics, which is my own subfield.  These topics aren’t conceptually more difficult than, say, the second law of thermodynamics or the double-slit experiment; in fact they’re usually simpler.  It’s just that they are generally discussed only by a more professional audience, and so they can seem daunting to someone who is uninitiated.  But it seems to me that for many of these topics there is a real dearth of qualitative discussion.  And a lot of them are really pretty cool.

In light of these two resolutions, I’ve decided to institute a new series of posts here at G&L, called “Spare me the math” (SMTM).  The idea of SMTM is to look at some topic from “advanced” physics in a brief and very conceptual way, with the goal of bringing some “feel” for a physical phenomenon that can be hard to get from typical textbooks or Wikipedia articles.  Where possible, I will try to include just enough equations to codify the most important physical relations at work.  If I do my job well, then there should be enough information to put together a schematic derivation of the primary results, at least to within numerical factors.  As usual, I use math as a tool for remembering and reasoning with basic dependencies, and not as a series of exact statements.

So Spare Me The Math will not actually be “math-free”: you can still expect to see very basic equations.  But it should be “hard math-free,” or at least “formality-free.”  Or in other words, the focus will be on big picture ideas, with a little algebra (and maybe a little calculus) to piece them together and help you keep track of them.

It has been my experience, by the way, that such simple derivations are much more useful for scientific thinking than more formal ones; so it’s unfortunate that textbooks (and academic papers) are almost always dominated by the latter.  I am always pleasantly surprised by how much easier it is to talk science one-on-one with someone than it is to read their papers.  That’s because in a one-on-one conversation a scientist will talk to you in the language that s/he uses to think about the problem, whereas when writing a paper everyone gets paranoid that they’ll say something incorrect and be called out for it.  But as my undergraduate advisor used to say, “what’s a factor of \pi between friends?”


I should, of course, reiterate the caveat that my explanations may be unsatisfying, incomplete, or just plain wrong.  They represent only the best way I have to think about the problem.  If something sounds rotten to you, please say so in the comments.

The Boston Marathon, and Gratitude

April 21, 2013

As it happens, I was fortunate enough to be able to run in the 2013 Boston Marathon.

A lot has been written (and is still being written) about the marathon and its aftermath, and in many ways there is no point in me adding my own commentary on top of all that.  But when I think about the day of April 15, 2013, I have a very particular way of understanding and feeling about it, which I feel a need to write down.

[As Eugene Wigner (one of my personal favorite figures in 20th century physics) said, "I have a weakness for reflection, and I want to leave some small record of the signal events of my life."]


My strongest memory of the Boston Marathon is of being overwhelmed by feelings of gratitude.

At 10:00am on April 15, 2013, right after the starting gun fired for the Boston Marathon, I was nervous.  I had trained quite hard (if I may say so) to get to Boston, and now that I was lined up in that starting corral I was a bit overwhelmed by its undeniable “big league” feel.  If you ever want to meet an impressive, intimidatingly motivated group of people, you should ride the bus to the starting line of the Boston Marathon.

My nervousness lasted as we spurted past the starting line (in that nervous, stuttering start-stop motion that runners who have lined up in a corral understand) and all throughout the tense, overly-fast first five miles of the race.  But somewhere around mile 6, something surprising happened.

The first thing I remember was having no awareness of my own feet.  During a gentle uphill, where I found myself in the middle of a fairly dense pack, I remember looking down and being very aware of all the feet of the people around me going step-step-step against the ground.  And it suddenly seemed strange to me, all this step-step-stepping, because I was completely unaware of the process of lifting and dropping my own feet.  That is, I was running without a conscious awareness of the process of running.

Of course, there’s nothing unusual about this; after all, most of us have been walking around without putting any conscious thought into it since we were toddlers.  But for me, in that moment, to think that I was running (…fast, …in the Boston Marathon!) without having to actually think about the process of running brought an incredibly beautiful and inspiring feeling.  I can’t quite explain it, other than to say that it felt like some kind of magic, like I was being propelled at breakneck speed down the center of a beautiful road on some kind of magic carpet that floated just a few feet off the ground.  Or like I was some kind of animal, a pronghorn antelope maybe, that lives to run without understanding what it’s doing.

And people were cheering for me.  They lined the road on either side, holding signs and noisemaking devices, looking happy and excited for me.

And suddenly I felt an elated and overwhelming gratitude.  Gratitude for my own life, for this moment of being a running animal, and for all the people who, inexplicably, lined the road just to give me the gift of feeling like magic.  I spent the next few miles with a huge smile on my face, feeling happy to the point of laughter, and high-fiving every child who held their hand out (and there were a lot!).


There were plenty of other good moments in the marathon, but I will remember that moment of gratitude most strongly.


And now I wish that I could stop writing here.  I wish I could say “I loved the 2013 Boston Marathon” and be done.  I wish I could refuse to acknowledge that someone else altered or tainted the way we feel about that day.  But I’m afraid there is a little more that needs to be said.

In the immediate aftermath of the bombings, a lot of people (myself included) responded with very real and very personal anger.  For many of my fellow participants, this anger was expressed through sentiments like “you [the perpetrators] don’t understand our resolve, whose demonstration was our very purpose in gathering.  We will continue to run and we will refuse to be cowed.”

My feelings about the bombings were perhaps slightly different.  I saw them not as an attack on runners or on the sport of running. To me, they were something far more heinous: they felt like an attack on those people who had gathered to celebrate and cheer for the accomplishments of a stranger.  All I could think about were those people who had gathered along the road to cheer for me, a random stranger, as I went by: those small children, those college students, those grandmothers and grandfathers who had taken delight in my running, and who had allowed me to feel like magic.

Probably many people felt this way.  I know that many people shared the same boiling anger that came with it.


Immediately after crossing the finish line, I had imagined that 2013 would be my last and only time running the Boston Marathon.  But now I am convinced that I need to return next year.  Just to show my own gratitude, and to contribute to the city’s demonstration that we will not be scared away from the things and people we love.

My plan is to run dressed in an American Revolution costume (it is Patriots’ Day, after all), complete with tri-cornered hat.

Thank you, Boston.


“Gravity is a habit that is hard to shake off”

April 20, 2013

It turns out that I couldn’t stay away.

About 15 months ago I decided that it was time to close down this blog.  My reasoning, more or less, was that I needed to be more “serious.”  I had just finished my PhD and started my first postdoc, and I reasoned that if I really hoped to “make it” as a physicist then I couldn’t afford to waste time writing long, rambling posts about physics and semi-physics topics that are outside of my real research.

But I have missed blogging during the last year+.  And I have come to realize that blogging was more than just a fun way to spend idle time.  In fact, I think blogging provided me with something really valuable that I will need going forward.

It seems to be true, at least for me, that the only way to really learn something is to teach it to someone else.  From my perspective, teaching an interesting idea to someone else has three important effects on the teacher:

  • First, teaching forces the teacher to sharpen their own thinking: to identify the features of the idea that are most essential, to develop multiple parallel ways of understanding and explaining the idea, and to tie the idea firmly to a wider base of knowledge.
  • Second, teaching cements the idea in the teacher’s own memory.  There is no better way to learn a story than to become the storyteller.
  • Third, and perhaps most importantly, teaching allows one to reconnect in a personal way with the excitement behind the idea being taught, and to rekindle one’s love for the topic.

In short, this blog has been my outlet for teaching ideas that I love.  And I have realized that such teaching is immensely valuable for me, not just as a hobby, but as a tool for professional and personal development.  I love physics, and I want to make a career as a physicist.  This is a surprisingly daunting goal sometimes, but, in my final analysis, it turns out to be precisely the reason why I need to “waste time” blogging about physics.

So Gravity and Levity is back!  Try to contain your excitement.


I think the time has also come for me to make a slight shift in policy.  I had originally imagined that Gravity and Levity would serve as a locus for conceptual discussion of ideas in upper-level physics, and for this the idea of preserving my own anonymity (and that of other commenters) was valuable.  Physics students are often quite insecure, after all.

But now I have come to understand that this blog is by necessity a very personal endeavor.  To be simplistic, Gravity and Levity is not really a blog about physics; it is a blog about myself and the way I think and feel about physics.  And so it makes sense to acknowledge that personalness directly, and to explicitly tie this blog to my own identity.  It was never much of a secret anyway.


So let me introduce myself.

My name is Brian Skinner.  I am a 29-year-old postdoc in theoretical condensed matter physics at the University of Minnesota, where I also completed my PhD.  My undergraduate years were spent at Virginia Tech, where I studied physics and mechanical engineering.  My childhood was spent in lots of different places, since my father was a fighter pilot in the US Air Force and our family moved every year or two.  I have an arthritic right knee and a receding hairline.

To the right is a picture of me looking like I’ve never seen a camera before.

One benefit that will perhaps come of introducing myself directly is that it will allow me to communicate in a more personal way the uncertainty and fear that comes from trying to make a career as a scientist.  My job is one that makes me feel inadequate every day.  More or less every day I feel insufficiently intelligent, insufficiently motivated, and insufficiently hardworking to achieve my goal of becoming a competent physicist and/or physics professor.  And I suspect that many other hopeful scientists feel this way.  I truly don’t know whether (or to what degree) I will “make it” as a physicist, but perhaps some public documentation of my own attempts to do so will provide a bit of catharsis to others who feel similarly inadequate.

Finally, I think the future of this blog will also contain less hesitancy about getting “off topic.”  To whatever readers I may have, be warned that I intend to consider Gravity and Levity as my outlet for discussing and developing any ideas that seem interesting and/or profound to me.  Such ideas will mostly have to do with physics, but I consider myself to have no allegiance to any particular discipline or professional banner.


I look forward to the future of G&L!  Thank you to all of you who read it.  It is a pleasure to meet you.


[The quote in the title of this post comes from Terry Pratchett's novel Small Gods, which I have never actually read.]

Zero Gravity (and Levity)

January 28, 2012

On that final note, I think it’s time to bring this blog to a close.

Now that I’ve moved on from graduate student to post-doc, my priorities have shifted, and you may have noticed that I haven’t been posting enough to keep this blog respectable.  This seems like more of a permanent change in my work habits than a temporary increase in busyness, so I think that now is the right time to close down Gravity and Levity.

It’s been a tremendous amount of fun, and along the way I somehow attracted a dazzlingly intelligent set of readers, who left equally intelligent comments.  So thank you.

I leave unfilled a fairly significant list of half-started blog posts and half-developed ideas for future posts.  So if anyone ever wants to invite me to write a guest blog post, I will likely be highly tempted to do so.  If for whatever reason you would like to make such an invitation, feel free to say so in the comments and I will respond by email.

Since this post will sit at the top of Gravity and Levity for the foreseeable future, I’ll close with a list of my 16 personal favorite blog posts.  Thanks again, everyone.


Parenting and the feeling of time: My eight lifetimes

In which I speculate about how we live in logarithmic time.

The fastest possible mile

I search for an asymptote in the progression of the mile world record and come up with 3:39.6.

Finding the hot (and cold) hand at a local gym

With a statistical analysis rebuked, Mrs. G&L and I head to the gym in search of the “hot hand.”  We find instead only evidence for the cold hand.

When Nature plays Skee-ball: the meaning of free energy

I explain free energy by imagining a four year old girl playing Skee-ball.

Braess’s Paradox and the Ewing Theory

An analogy between highway traffic and basketball might explain why your favorite team can get better when its best player is sitting out.

Friedel oscillations: wherein we learn that the electron has a size

Wherein Friedel oscillations are explained using the following sentence: “It’s a bit like letting the richest men in America decide the tax code: it may be right for the guys up front, but it’s too damn much for the people that come later!”

The most important idea in science, and why it’s true

Explaining atomism and the Lennard-Jones law using cheap hand drawings and a youtube video.

Does your culture really affect the gender distribution?

In which I play King Solomon with a suggestion made by science author Matt Ridley.

Your body wasn’t built to last: a lesson from human mortality rates

What we can learn about the human body from the mathematics of mortality.  This post is responsible for more than 1/3 of all web traffic to G&L.

The path integral: calculating the future from an unknown past

Using a life-or-death situation for ants to illustrate the power of path integrals.

A story about quasiparticles on the beach

A good story in science will affect your summer vacation.  In the best possible way.

Being pushed around by empty space: the Casimir effect

Dancing witches produce the Casimir effect.  True story.

“So is the universe made of tiny springs or isn’t it?!”

A memory of being exasperated in quantum field theory class.  Also, Freeman Dyson is much smarter than me.

Feynman’s Ratchet and the perpetual motion gambling scheme

Can you spot a (thermodynamic) scam when you see one?

LeBron James and the Lottery in Babylon

Jorge Luis Borges explains beautifully why we are so drawn to sports.

This is not a story about irony

In which I remember “Paul”, who felt strongly about his calling in life.
