I never intended for Gravity and Levity to have so much talk about basketball. But it’s becoming increasingly clear that my hobby (nerdy problems in basketball) is much more popular than my real job (nerdy problems in “real” life). Sadly, my real research has never been written up in Science News.
As it turns out, though, there really are a surprising number of interesting theoretical problems that come up when thinking about basketball. The answer to the question “what is the best strategy for my team?”, for example, includes elements of network theory and game theory, in addition to good old-fashioned probability theory. And if you’re a nerd like me as well as a basketball fan, those are pretty exciting things to dabble in. So basketball continues to be a source of fun for me even though I was never good enough to play it beyond the high school level. Probably not as fun as actually being a basketball player, but I suppose that in life we each work with what we’re given.
A month or so ago I was on an airplane and feeling restless, so I started thinking ambitious thoughts about constructing a great optimization theory for basketball. As I thought more, though, I realized that there were some very basic questions to which I didn’t know the answer that would have to be addressed first. This process of slowly realizing the depths of my own ineptitude continued, and after multiple rounds of mental simplifying I finally ended up with a comically over-simplified model of decision-making in basketball that I still couldn’t immediately write down the answer to. Specifically, I came up with the following question:
Suppose that your team runs an offense with exactly one play. Every time your team has the ball, you run the play, and at the end of the play you arrive at a shot opportunity. The quality of that opportunity varies: sometimes you end up with an easy layup that is almost certain to go in, and other times you end up with a contested jump shot that has essentially no chance. At the moment of the shot opportunity, your team has to make a decision: should you take the shot or should you reset the offense and run the play again in hopes of arriving at a higher-percentage opportunity?
Now suppose that the quality of the shot opportunity, as characterized by the probability p for the shot to go in, is uniformly distributed between 0 and 1. How high must the probability p be before you should take the shot?
It sounds like a pretty simple question, right? It’s tempting to say right away that you should shoot whenever p > 1/2, which amounts to saying “take only above-average shots”.
But the right answer is actually a bit more complicated. For one thing, the answer depends very much on how much time you have (say, before the shot clock expires). If you have a lot of time, for example, you can just keep resetting the play whenever it doesn’t give you an easy layup. Having more time gives you a greater capacity to be selective.
So, generally speaking, suppose that you have enough time to run the play n times. What is the lower cutoff f(n) for the shot quality, such that your team should shoot whenever the shot opportunity has p > f(n)? This function, f(n), is what I call “the shooter’s sequence”. Posing the problem of f(n) seemed so simple when I first wrote it down on the airplane; I was surprised that I couldn’t immediately say what the answer was.
n = 1
Actually, if you start at the beginning, it’s not hard to write down some answers right away. For example, if your team has only enough time to run the play once, then you should take any shot that presents itself. This means that when n = 1, you should shoot whenever p > 0. So the first term in the sequence is f(1) = 0.
As a consequence, your average shooting percentage will be equal to the average shot quality, 1/2, so that you score half the time.
n = 2
Now suppose that you have time to run the play twice. If you don’t like the first shot opportunity that comes up, you can reset and run again. If you choose to reset, you can expect to score half the time (as explained above). So, it’s not hard to see that the first shot opportunity should have p > 1/2 in order to be worth taking. This means f(2) = 1/2.
If the team shoots only when p > 1/2, then on the first attempt its shooting percentage will be 3/4 (the average of the interval (1/2, 1)). Thus, the team’s combined shooting percentage for the whole possession is:
(probability that the team shoots the first time around) × (shooting percentage on the first attempt) + (probability that the team holds and waits for the second play) × (shooting percentage on the second attempt) = (1/2) × (3/4) + (1/2) × (1/2) = 5/8.
n = 3
Now it should be fairly easy to see that if you have enough time to run the play three times, on the first attempt you should only take shots with p > 5/8. If the shot opportunity on your first time through is worse than that, you can hold, because with enough time for two plays you will on average score 5/8 of the time. This means f(3) = 5/8.
In this way you can build a recursive sequence for finding out when your team should shoot, given that there is enough time for n plays. The general recursion rule looks like this: f(n+1) = (1 + f(n)²)/2. (It comes from the same reasoning as the n = 2 calculation: with cutoff f(n), the expected score is (1 − f(n)) × (1 + f(n))/2 + f(n) × f(n) = (1 + f(n)²)/2, and this expected score is exactly the cutoff you should use when one more play remains.)
Along with the condition f(1) = 0, this equation defines the “shooter’s sequence”. Writing down the first few terms is actually kind of surprising:
0, 1/2, 5/8, 89/128, 24305/32768, …
It’s a pretty strange sequence of numbers to come out of such a simple problem, right? And, in fact, there is no analytical expression for the sequence. It can only be defined recursively, or evaluated approximately for large n.
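Since there’s no closed form, the natural thing is to just iterate the recursion. Here’s a minimal sketch in Python (the function name is mine; exact rational arithmetic comes free with the standard library):

```python
from fractions import Fraction

def shooters_sequence(n_max):
    """Return the cutoffs f(1), ..., f(n_max), where f(n) is the shot
    quality above which you should shoot when there is time for n plays.
    Recursion: f(1) = 0, f(n+1) = (1 + f(n)**2) / 2."""
    f = Fraction(0)
    seq = [f]
    for _ in range(n_max - 1):
        # The new cutoff equals the expected score of a possession
        # with one fewer play remaining.
        f = (1 + f * f) / 2
        seq.append(f)
    return seq

print([str(x) for x in shooters_sequence(5)])
# ['0', '1/2', '5/8', '89/128', '24305/32768']
```

Notice how the denominators roughly square at each step, which is one way to see why no tidy closed form falls out.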
And, of course, you can expand on the problem by making the problem statement a little less specific. I ended up calculating some more variations, including the effects of turnovers and different ranges of shot quality, and eventually it became a short paper (although I’m still not sure what kind of journal, if any, would publish it).
It’s still far from clear how much (if any) practical knowledge can be gained from this “shooter’s sequence”. But I know that at moments like this I’m glad that I grew up to love nerdy problems in math and physics. Because if you like this sort of thing, then fun problems are everywhere.
Just look around you:
Why are the players at a sporting event more famous than the halftime show performers?
This question, which was posed in a recent column on ESPN.com (and which features one of my personal favorite performers, the Red Panda Acrobat), sounds a little dumb at first, almost tautological — the halftime show is, by definition, not the main event.
But try this slightly different version of the question: How many professional athletes can you name? How many circus performers can you name? Why are those two numbers so different?
If you were observing the human species from afar, you would probably be surprised by the discrepancy. After all, both categories of people are entertainers. Both base their livelihood on some elite level of physical skill. And it’s hard to believe that athletes have more natural talent or devotion to their craft than, say, this guy. But we (myself included) seem to be much more captivated by athletes than by circus performers. Why the big difference?
There are lots of ways to explain why humans love sports so much. My personal favorite, though, can be found in “The Lottery in Babylon”, a short story written by Jorge Luis Borges in 1941. I came across this story during high school and was immediately hooked by this killer first paragraph:
Like all the men of Babylon, I have been proconsul; like all, I have been a slave. I have known omnipotence, ignominy, imprisonment. Look here– my right hand has no index finger. Look here–through this gash in my cape you can see on my stomach a crimson tattoo–it is the second letter, Beth. On nights when the moon is full, this symbol gives me power over men with the mark of Gimel, but it subjects me to those with the Aleph, who on nights when there is no moon owe obedience to those marked with the Gimel. In the half-light of dawn, in a cellar, standing before a black altar, I have slit the throats of sacred bulls. Once, for an entire lunar year, I was declared invisible–I would cry out and no one would heed my call, I would steal bread and not be beheaded. I have known that thing the Greeks knew not–uncertainty. In a chamber of brass, as I faced the strangler’s silent scarf, hope did not abandon me; in the river of delights, panic has not failed me. Heraclides Ponticus reports, admiringly, that Pythagoras recalled having been Pyrrhus, and before that, Euphorbus, and before that, some other mortal; in order to recall similar vicissitudes, I have no need of death, nor even of imposture.
I owe that almost monstrous variety to an institution–the Lottery– which is unknown in other nations, or at work in them imperfectly or secretly.
(You can read the full text here.)
“The Lottery in Babylon” is, like so many Borges stories, an exploration of a mathematical idea in a fantastical setting. This particular one (it seems to me) is about how we perceive chance and randomness in life. In the story, The Lottery is an all-encompassing, secretive, cult-like religious institution that dictates almost every facet of people’s lives in a way that is supposedly both random and just.
To me, the most interesting part of the story is the telling of how The Lottery came to be. The narrator speculates that The Lottery began just like any other simple gambling game: some people bet money and a few won based on the roll of a die. But:
Naturally, those so-called “lotteries” were a failure. They had no moral force whatsoever; they appealed not to all a man’s faculties, but only to his hopefulness.
In the story, the key breakthrough for the lottery system is the inclusion of both prizes and penalties as the outcomes of lottery drawings. Knowing that a lottery player risks some kind of serious punishment changes entirely the emotional tone of the lottery. That is, playing the lottery becomes not just a matter of blind hope for gain, but something that involves courage and one’s sense of justice.
Babylonians flocked to buy tickets. The man who bought none was considered a pusillanimous wretch, a man with no spirit of adventure. In time, this justified contempt found a second target: not just the man who didn’t play, but also the man who lost.
The rest of “The Lottery in Babylon” describes how The Lottery grows increasingly out of control: penalties and prizes become more severe and fundamentally non-monetary in nature, all drawings and decisions are conducted in secret, playing the Lottery becomes compulsory, etc. If you have any nerdy inclinations at all, the story is a pretty good investment of ten minutes of your leisure time.
But I have been thinking lately about “The Lottery in Babylon”, and about our love of sports. It seems to me that we love sports for a lot of the same reasons that the Babylonians loved The Lottery: it appeals not just to our sense of hope — as in, “Oh, I hope my favorite team wins” — but also to our sense of morality. A sporting event is, to a very large extent, a controlled series of random events. But when fans talk about sports we don’t do it because we love probability theory (okay, maybe a small fraction of us have this motivation on occasion). We talk about sports because we love to exalt the virtues and decry the vices of the players. We love to watch our heroes succeed. We love to watch our villains fail. We love to see how normal people become heroes and villains through competition.
But the truth is that all these athletes are, to a very large extent, playing a lottery. Those virtues and vices we talk about are largely things that we project onto the players based on random events. As Matt at hooptheory said, “The biggest fallacy in sports is that the better team will win.” In a sense, our athletes are so famous and so well-paid because we pay them to live a life where their glory or ignominy is dictated by random events. Watching those events unfold and judging the character of the participants is entertaining and moving to us for some profound psychological reason (that I don’t understand).
In contrast, when you go to watch a circus performer, there is no real chance of failure. Sure, the performer might fall briefly or drop a juggled ball. But ultimately, you can be pretty assured that their performance will make only one point: you get to be awed by some display of great skill. Unlike sports, the performance does not appeal to your sense of justice or of character development. You, as a spectator, have no real opportunity to judge the character of the performer: the show has only (apparent) heroes, and no villains. The circus, in other words, is like a “lottery” without penalties, and as such it can only appeal to a limited range of emotions.
These days, I think in particular about LeBron James. Now that the mighty Miami Heat have fallen in the NBA finals, LeBron is considered something of a “pusillanimous wretch”. His biggest sin, committed on the biggest stage, was an apparent refusal to play the game as wholeheartedly as we wanted him to. It seems to me, though, that had a few shots gone differently (say, if Dirk Nowitzki’s layups at the end of Games 2 and 4 had rolled out), we would be talking about LeBron James very differently right now. So many hundreds of articles extolling Dallas’s heroic nature and decrying Miami’s flawed nature would be completely reversed. In other words, our sense of the “true character” of the participants in this sporting contest is based very largely on the outcome of two shots.
As a more extreme example, you can notice that Michael Jordan would have lost his most famous game had it not been for two blown shot-clock violation calls earlier in the game. Without that last-second, out-in-a-blaze-of-glory, second-threepeat-winning shot, the ultra-heroified Michael Jordan might have a significantly different public image.
What if this shot hadn’t mattered at all?
All this is not to say that we are somehow barbarians or idiots for being sports fans. I personally love watching sports, and I love the process of evaluating the character of the players involved, and I don’t feel at all guilty for that. For whatever reason, it’s an endlessly enjoyable process, and for the most part no one is being exploited in the name of my entertainment. But I do think it’s good to be aware of what it means to be an athlete and what it means to be a spectator. Namely, that we pay to watch our athletes play a modern-day version of the Lottery in Babylon.
…I would definitely include the following problem:
You live in the dorms and your upstairs neighbor, LeBrian Skinner, is a serious basketball player. He is about to declare for the NBA draft, but he fears that his merely average height will put him at a disadvantage. To compensate for his relative shortness, LeBrian decides that he needs to have a vertical jump of at least 36 inches.
In the evening you can hear LeBrian practicing his vertical leap, since he lives directly above you: you hear a loud creak when he first jumps followed by a loud thump when he lands again. You use a stopwatch to time the interval between the moment he first leaves the floor and the moment when he lands again. You measure this interval as 0.8 seconds.
Assuming that LeBrian lands with his legs fully extended (in the same position as when he leaves the floor), how high is he jumping? Is it enough?
For those who are curious, the solution is after the page break.
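For those who would rather check the numbers than grind through the algebra, the standard kinematics fits in a few lines of Python (assuming g = 9.8 m/s² and no air resistance):

```python
# Numerical check of the hang-time problem. Assumptions: the jump is
# symmetric (LeBrian leaves and lands in the same position), so half
# the hang time is spent going up and half coming down.

G = 9.8  # gravitational acceleration, m/s^2

def jump_height_from_hang_time(t_hang):
    """Peak height for a symmetric jump: h = (1/2) * g * (t_hang / 2)**2."""
    return 0.5 * G * (t_hang / 2) ** 2

h_meters = jump_height_from_hang_time(0.8)
h_inches = h_meters / 0.0254  # meters to inches
print(round(h_meters, 3), round(h_inches, 1))
# 0.784 meters, or about 30.9 inches -- short of the 36-inch goal
```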
Ants are not smarter than humans (even the ones that are good at path integrals). Nonetheless, the list of their cultural and technological “inventions” is pretty impressive.
Consider, for example, the following abbreviated list of things that were invented by ants long before they were invented by humans:
- livestock cultivation
- a caste system
- large-scale warfare
- (altruistic) suicide
- slave uprisings
That’s quite a list for an animal with a brain 340,000 times smaller than a human’s. And that’s to say nothing of the things that ants can do that we can’t, like make bridges and shelters by linking their bodies together.
Today, for no particular reason, I feel like extolling the wonders (and/or horrors) of the formicidae family. I’ll do so by listing a few of my favorite ant genera (which, as I learned only fairly recently, is the plural of “genus”), along with pictures and brief descriptions of the advanced things they do. I hope to leave you with the beautiful sense that ants have fully-formed civilizations and economies of their own, and that it is by no means a stretch to compare them with those of humans.
My guide in this whimsical exercise will be the fairly incredible book Ants of North America: A Guide to the Genera, which was an excellent birthday present a few years ago. Quotes and images below come from that book. And, of course, I use plenty of Wikipedia.
If there are any entomologists that happen across this blog post, I would be glad for your comments/corrections.
Atta: fungus growers
Ants from the genus Atta are farmers. For some reason it had never occurred to me that ants could farm, even though I had spent many childhood hours watching leaf-cutter ants carry away chopped up pieces of plants and I must have realized that ants probably can’t eat leaves.
In fact, those ants were carrying away the leaf fragments to use them as fertilizer for their vast, underground fungus gardens. In these gardens the ants cultivate various species of fungus for food, carefully keeping them free of pests and molds. This is agriculture on an enormous scale: Atta live in giant colonies of millions of individuals and have an intricate division of labor maintaining their economy.
Pyramica: Beneficiaries of the Farming Economy
The enormous economic output from the huge farming colonies of Atta (and the similarly agricultural genera Acromyrmex and Trachymyrmex) creates opportunities for more humble, specialized genera like Pyramica.
Ants in the Pyramica genus are specialized predators of little flea-like bugs called Collembola, or “springtails”. Collembola are a prevalent parasite in the fungus colonies of growers like Atta: they eat the fertilizer that the ants have collected. So the Pyramica perform a service for their farming neighbors by hunting the Collembola through the fungus gardens. Pyramica also benefit from the carefully climate-controlled environment that Atta maintain in their fungus gardens.
Nomamyrmex: Specialized predators of Atta
One of the striking things about ants is that they seem to fill every economic niche that one could imagine. Every time there is an opportunity for a living to be made in the insect world, ants find a way to exploit that opportunity.
Nomamyrmex is a great example. These are specialized predators of the workers in the enormous Atta colonies. They’re not much bigger than Atta, but they are heavily armored with powerful mandibles. They hunt underground in orderly battalions that are said to “resemble a Panzer division on the march.”
[Update: Apparently, when you hunt Atta, you've got to be very careful.]
Labidus: Underground hunters
Ants that live underground have little use for eyes. So it is with Labidus, a genus of mostly subterranean army ants with powerful, barrel-shaped heads and no vision whatsoever. Labidus ants live in enormous colonies and make their living by sweeping through underground tunnels searching for bugs, worms, and whatever else they come across.
Prenolepis: “Human food storage”
Prenolepis ants are the ultimate foragers. They are hardy ants that live comfortably from California to Mexico, and are known for their ability to forage even in very cold temperatures. They also have the ability to store fat in their own enormously bloated abdomens and regurgitate it to feed others when the colony becomes dormant during the hot summer.
This is similar to the behavior of the more famous “honeypot ants” of Australia and the deserts of North America.
Crematogaster: Livestock tenders
Agriculture in the ant world does not stop at fungus tending. A number of ant genera, including Crematogaster, tend aphids and coccids as livestock. Most of the time, the ants are essentially dairy farmers: they protect the aphids in exchange for a syrupy secretion from the aphids called “honeydew”. The ants even have a way of “milking” the aphids by tapping the aphids on the back with their antenna to get them to secrete.
The ants can have other relationships with their livestock as well. Sometimes they play the role of poultry farmers, tending the aphids and collecting their eggs for food. Other times they simply eat the aphids.
Anergates: identity thieves
One way to make a living in the world is by hard work and industry. Another way is to steal from somebody else.
The genus Anergates is, apparently, poorly equipped to make a living. There are no workers, individuals are relatively puny, and the males of the genus are so helpless that they are described as “pupoidal — yellow to cream in color, wingless, and barely able to walk.”
Since there are no members of Anergates capable of procuring food, the females can only survive by invading the nests of other species, killing the queen, and then posing as the queen and being waited on by the other members of the host colony. In fact, Anergates has a preferred target for this identity theft: the species Tetramorium caespitum. Apparently no other ant species is weak enough to be susceptible to this one-ant invasion. It is said that “the best way to collect Anergates is to remove the queen from a T. caespitum colony in the spring and come back the following year”.
Ironically, since the new Anergates faux-queen is unable to produce any new workers, she can only survive as long as the youngest workers in the host colony. Eventually the colony dies out due to a lack of reproduction, and then the Anergates female dies out as well.
Formica: Slavers and Slaves
It was pretty shocking to me to learn that humans were not the original inventors of slavery. The ants of the Formica genus apparently have a varied range of slave-making behaviors. Ants of the sanguinea group, for example, raid the colonies of other Formica groups, carrying off some of their members for labor in the sanguinea group’s own colonies.
Sometimes adult workers are taken captive, and other times the slave raids are more violent: all adults in the colony are killed and only the developing pupae are carried away into a life of subjugation to their captors.
Some Formica groups can enslave multiple other groups of ants, but Formica sanguinea has a particular target: the Formica pallidefulva group:
The Formica pallidefulva group are omnivorous foraging ants who live in relatively small colonies (in the ant world, a few thousand counts as small). Unfortunately, they spend much of their lives enslaved by the slightly larger sanguinea group.
Of course, enslaved ants aren’t always docile. In recent years entomologists have documented cases of violent slave uprisings conducted by slave ants that were abducted as pupae. Here’s one summary of this incredible finding.
Cephalotes and Camponotus colobopsis: Plugheads
These genera of ants are so strange, you almost have to see them to believe it. They live exclusively in trees, digging nests for themselves in hollow branches or sticks. Since they are relatively small and few in number, they have developed a unique form of defense. Namely, they have large, flat, plug-shaped heads that they use to block the entries to the nest. When some foreign danger presents itself, the soldier caste quickly mobilizes to block all entries against intruders, like this.
Bonus — Discothyrea: ???
Just so you don’t leave thinking that all the mysteries of the ant family are solved, I’ll leave you with one of the strangest genera of all. This is Discothyrea, a tiny ant described as “like nothing else in the solar system”. It has weak, toothless mandibles, useless vestigial eyes, and very strangely-shaped antennae. It has been collected in rotten logs and in humus, but almost nothing is known about how it lives.
At a recent concert I attended (in a tiny, campy theater in Wisconsin), the great guitarist (and ant enthusiast) Leo Kottke quipped that Discothyrea must survive by confusing potential attackers with its smooth, featureless knob of a head. “Some ants survive by attacking and some survive by hiding. This one seems to survive by suddenly appearing and posing an existential question.” [Kottke then went on to play the song "Disco", and I finally understood why the song sounded nothing at all like disco music.]
In summary, ants are awesome.
In lieu of a more intelligent conclusion than that, let me leave you with my favorite ant-themed song of all time.
It is not my intention to talk about irony.
On this particular day I feel a need to talk about Virginia Tech, and to tell a story about the time I spent there. But I want to make it extremely clear that I do not consider this story to be ironic. Neither is it poetic or instructive. It is certainly not uplifting.
Nonetheless, it happens to be true, and for that reason alone it feels important enough to relate.
Today’s post is of course entirely outside the normal scope of this blog. Feel free to skip past it if you like.
On the first day of my senior year at Virginia Tech, August 21, 2006, there was a manhunt. The campus was locked down and we were all ordered to stay home as hundreds of policemen swarmed around the school grounds. I remember feeling mildly annoyed.
Of course, annoyance was not my primary emotion a few days later, when I found out what had actually happened. A prisoner had escaped from a nearby hospital and gone into hiding along the beautiful wooded trail between the hospital and the campus. In the process he had killed two people. I later met both of their families: they had wives and pretty young daughters.
As it turns out, I knew the prisoner also. His name was William Morva, and I had met him during my freshman year flirtations with liberalism and war protesting. He was a young guy, about my age, who showed up without fail to all the liberal events on campus (and there were a lot of them during the year leading up to the Iraq war). I remember him for his almost religious enthusiasm and his relentless, semi-demented smile. I remember him standing behind a lectern wearing his cab driver hat and demanding accountability from some poor member of the Young Republicans club. I remember watching him dance in front of the community theater while someone played the djembe. I didn’t know at the time that he was homeless (a lot of people were going for that look). I also didn’t know that he had the capacity to kill people.
The police eventually found him, of course. He was curled up on the ground inside a bush, stripped down to his boxer shorts. He’s in prison now, awaiting execution.
Anyway, the story I mean to tell is not about murderers. It’s about a different guy I knew during my senior year at Virginia Tech, a sophomore named “Paul” (this is a fake name, since I have no right to speak about the real person).
Paul and I were classmates in Nikki Giovanni‘s Introduction to Creative Writing class during the fall semester of 2006. We were by no means real friends; our acquaintanceship was only at the level of saying hello when we passed each other on campus. Nonetheless, we knew each other better than most acquaintances, since we each listened to the other read fairly personal essays on a semi-weekly basis.
Paul was a serious guy. That’s not entirely true. Paul was a guy who felt a great responsibility in life to be serious. Much more than most people his age (or any age), Paul seemed to have a clear ideal of the man he wanted to become: that man was competent, deliberate, and upright. Despite his purposefulness, though, Paul seemed to have as much capacity for simple, boyish happiness as anyone else. Most of the time during class he would be sitting up straight in his chair, listening stoically. But every now and then something would make him laugh, and in those moments it became clear that he would be a great guy to share a joke with. To say that a person’s smile is “infectious” is pretty trite; I’ll just say that Paul had a smile that was disarmingly genuine.
Perhaps I am projecting things onto Paul, but that’s how I remember it.
Many of the world’s more intense people, it seems to me, have conversion stories; I have one or two of my own. Paul’s particular conversion was to military service. As a very young man (middle school age, as I remember), Paul had felt strongly moved by the call to protect his country by joining the military. Since that age he had taken every opportunity to follow the path that would best prepare him for the life of a military officer. I think that’s part of what drew Paul to Virginia Tech: the chance to join the Corps of Cadets, Virginia Tech’s academy-style military training program. I know that the promise of military authority can sometimes attract unpleasant people (believe me, I spent most of my childhood on military bases), but Paul’s feelings about its importance were genuine, and they were a recurring theme in his writing.
When I think about Paul as I knew him during that semester, the following fact stands out to me: In the days after the manhunt of August 21, 2006, Paul was angry. His anger stemmed not from the violence itself or from how the situation was handled, but from his own inability to help anyone while it was taking place. I remember him venting that anger during the class’s first writing assignment. I remember, with surprising clarity, how he said that he had joined the Corps for the one goal of helping to protect people during dangerous times. And then, he said, when such a time came he was ordered to stay inside his dorm room and just wait for news of how it all ended. I remember how he was frustrated to the point of rage.
Paul is dead now. He was one of thirty-two people killed during the shootings at Virginia Tech that next semester.
I remember sitting inside my apartment on April 16, 2007, infuriatingly helpless, waiting to hear news of how it all ended.
I don’t know anything about what Paul’s life was like during that last semester, or during those last few days or those last few minutes. I don’t know how he felt in that terrible moment when the situation became clear to the students inside that classroom. I don’t know what actions he took. I don’t know anything. And I don’t care to fill the holes in my knowledge with my own imagination.
But I will say this: I reject, categorically, the idea that Paul’s death forms part of some larger narrative. I refuse to call it ironic or poetic. I refuse to imagine God, watching the scene from above with a sad smile, knowing that this is part of some larger plan. The idea of someone describing that moment using the words “purpose” or “reason” makes me furious.
But I don’t know what better words to use.
And in that sense, there’s no point at all to this story, since I refuse to believe that there was any point in Paul’s death. I don’t know why I’m writing this all down, or why I’m calling it a “story”. I feel guilty for stringing the words together in a particular order as if to say that things make sense if you look at them in a particular way.
But on this particular day I feel a need to say something that I know is true. And at the moment all I have is this: I wish that Paul were not dead.
I came across this article in the BBC this morning, which posed the question “when, if ever, will humans run a sub-2-hour marathon?”.
Expert opinion seems to be somewhat divided on this question. The runners themselves seem to think that it’s possible, but not likely within the next few decades. One of the “leading authorities on marathon running in the US” says that it isn’t, while a kinesiology professor from the University of Montreal used some extrapolation formula to predict that the 2 hour mark will be broken in 2028. Just about everyone seemed to agree that 2 hours, 2 minutes is within reach (the current world record stands at 2:03:59).
I’m certainly no expert on distance running, but I did develop something of a method for addressing this question that I used to predict the “fastest possible mile” time (3 minutes, 39 seconds by my estimation).
So I decided to apply the same method here. My conclusion, surprisingly, is that even a marathon time of 2 hours, 2 minutes is far from given. In fact, my prediction is that the “fastest possible marathon” is 2:02:43, only 76 seconds faster than the current world record.
Below I quickly repeat the arguments/procedure I used for the mile and I show the graphical results. (Warning: If you haven’t read my earlier post, the following might not make very much sense).
Here is the progression of the marathon world record over the past 100 years (data from Wikipedia):
If the marathon world record is plotted as a function of “person-years” since 1908, when the world record was first kept, it shows a pretty convincing exponential decay to a particular value: 2:02:43.
Translated back into real time, the progression of the world record marathon time looks like this:
It’s not a rock-solid analysis, but I think the data is actually pretty convincing.
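For the curious, the fitting procedure amounts to a three-parameter exponential decay toward an asymptote. I don’t have the Wikipedia record data embedded here, so the sketch below fits synthetic data generated from the same kind of decay; the function name `fit_asymptote` and all the numbers are illustrative, not the actual analysis.

```python
import numpy as np

def fit_asymptote(x, t, taus):
    """For each candidate decay constant tau, solve the linear
    least-squares problem t ~ t_inf + a*exp(-x/tau) and keep the
    best fit; returns (t_inf, a, tau)."""
    best = None
    for tau in taus:
        A = np.column_stack([np.ones_like(x), np.exp(-x / tau)])
        coef, *_ = np.linalg.lstsq(A, t, rcond=None)
        sse = np.sum((A @ coef - t) ** 2)  # sum of squared residuals
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], tau)
    return best[1], best[2], best[3]

# Hypothetical "record vs. person-years" data generated from a known
# decay toward 7363 s (2:02:43), to check the fit recovers that limit.
x = np.linspace(0.0, 5e6, 40)
t = 7363.0 + 3100.0 * np.exp(-x / 1.2e6)

t_inf, a, tau = fit_asymptote(x, t, taus=np.linspace(0.5e6, 2e6, 61))
print(round(t_inf))  # prints 7363, the limiting time in seconds
```

The grid search over the decay constant keeps the fit linear at each step, which is a simple, dependency-free alternative to a full nonlinear fitter.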
So count me among those who are skeptical that a 2 hour marathon is possible. In fact, count me among that rare (nonexistent?) group of people who are skeptical that a 2 hour, 2 minute marathon is possible.
I hope I’m not right — I love watching humans break records as much as anyone else. But either way, you should take a moment to enjoy watching Haile Gebrselassie’s record-setting performance from 2008. He may have been running to within 1.01% of maximum human capacity.
When I was a brand new graduate student, I attended one of those “get to know each other” lunches at the beginning of the first semester. Such events are usually pretty painful. I hate to perpetuate stereotypes, but physics graduate students (myself included) are by and large pretty bad at making pleasantries and light conversation. Half of us are loath to engage in it and the other half seem blissfully unaware of their own conversational deficiencies. Thus discussions among physics students frequently devolve into one or two people monologuing vigorously while the rest look down at their feet and feel awkward.
At this particular event, however, there was one attendee who was well above average at making pleasant conversation, and who therefore found himself at the center of much of our discussion. He had been teaching physics in high school for the last few years and had decided to return to school to get a graduate degree. Teaching high school is more “real world” experience than most of us had, so I was enjoying listening to him explain how his classes had been organized and how he had designed various projects to keep the students engaged.
Until he told one particular story that really bothered me. One of his projects, he explained, was an assignment to “start a fire using only physics.”
I was mildly horrified.
The intent of the assignment, I suppose, was good — he was trying to get his students to understand how combustion reactions can be started without using ready-made chemical fuel sources. But by telling his students to “use only physics”, this teacher was spreading the somewhat damaging idea that the world can be compartmentalized into distinct blocks of independent knowledge. In other words, such an assignment helps solidify the false sense that every phenomenon belongs to some particular class in school: “when a rocket orbits the earth, that’s physics”, or “when you burn gasoline in your engine, that’s chemistry”, or “when cells divide, that’s biology”, or “when a person has brain damage, that’s psychology”, etc.
Nature is Nature, and it doesn’t care what we call ourselves when we describe it.
Richard Feynman said it this way:
…the full appreciation of natural phenomena, as we see them, must go beyond physics in the usual sense. We make no apologies for making these excursions into other fields, because the separation of fields, as we have emphasized, is merely a human convenience, and an unnatural thing. Nature is not interested in our separations, and many of the interesting phenomena bridge the gaps between fields.
–The Feynman Lectures on Physics, Vol. I, Ch. 35
It seems to me somewhat of a disservice to teach your students that physics/chemistry/biology etc. is a set of things in the universe rather than a language or a set of tools to describe things in the universe.
It was only much later that I realized that I myself was guilty of employing a similar false dichotomy in my thinking. What’s more, all the professional scientists around me seemed just as guilty. For years I had been given assignments by my professors that seemed very much equivalent to “start a fire using only physics”.
Namely, every problem I solve in physics begins by me declaring which of the objects in the problem are classical and which are quantum-mechanical.
Let me elaborate a little bit. Quantum mechanics (QM), as a theory, is built fundamentally around a sharp distinction between “quantum” objects and “classical” objects. QM prescribes a particular set of rules for how quantum objects interact with each other — these are the Schrodinger equation, the Dirac equation, the uncertainty principle, etc. — and how quantum objects interact with classical objects — these are called “measurements”. The set of rules for the two types of interactions (quantum-quantum and quantum-classical) are very different. I am therefore required to declare at the beginning of the problem which objects are quantum and which objects are classical (i.e. when and where the “measurements” are) so that I know which set of rules to use.
What’s more, QM deliberately refuses to make any predictions about a quantum system that does not interact with a classical object. QM is a theory whose only goal is to predict the outcome of measurements, and measurements by definition involve changing the state of a classical object. QM is decidedly agnostic about any situation involving only quantum objects.
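To make the two rule sets concrete, here they are in standard textbook form (my summary, not part of the original argument):

```latex
\text{quantum--quantum:}\quad
i\hbar\,\frac{\partial}{\partial t}|\psi\rangle = \hat{H}\,|\psi\rangle
\qquad \text{(deterministic, reversible)}

\text{quantum--classical:}\quad
P(a) = |\langle a|\psi\rangle|^{2}
\qquad \text{(probabilistic, irreversible)}
```

The first line (the Schrodinger equation) evolves the state smoothly for as long as only quantum objects are involved; the second (the Born rule) is invoked the moment a classical object gets a reading, and nothing in the theory tells you where to draw the line between the two.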
Now I don’t mean to make QM sound ugly or arbitrary. It isn’t (at least not any more than most physical theories). QM can be quite elegant, and it is completely capable of reproducing classical physics (i.e. the laws of Newton, Hamilton, Lagrange, etc.) in the appropriate limits. For example, the laws that describe a quantum object of mass m become equivalent to the laws of classical mechanics when m becomes very large.
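One standard way to see this classical limit, which I’ll add here for concreteness, is Ehrenfest’s theorem for a particle of mass $m$ in a potential $V$:

```latex
\frac{d\langle \hat{x} \rangle}{dt} = \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d\langle \hat{p} \rangle}{dt} = -\,\langle V'(\hat{x}) \rangle .
```

When the wavepacket is narrow enough that $\langle V'(\hat{x})\rangle \approx V'(\langle \hat{x}\rangle)$, as it is for a heavy object, these two equations combine into Newton’s second law, $m\, d^2\langle \hat{x}\rangle/dt^2 = -V'(\langle \hat{x}\rangle)$, for the average position.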
But that doesn’t mean that QM can stand on its own without relying on classical mechanics. This is a very unusual situation. Try contrasting this behavior with, for example, the theory of relativity. Einstein’s theory of general relativity is an entirely self-sufficient description and is completely independent of the theory it replaced: the laws of Newtonian mechanics and Newtonian gravity. It is capable of reproducing Newton’s laws when pushed to the right limit (say, when describing slow-moving bodies), but you are absolutely not required to accept anything Newton ever said in order to use the theory of general relativity.
This seems natural; by comparison, QM is quite strange.
Lev Landau can probably explain this better than I can. Below is an excerpt from the first few pages of the Landau-Lifshitz Course on Theoretical Physics, Volume 3 (Non-relativistic quantum mechanics).
A more general theory can usually be formulated in a logically complete manner, independently of a less general theory which forms a limiting case of it. Thus, relativistic mechanics can be constructed on the basis of its own fundamental principles, without any reference to Newtonian mechanics. It is in principle impossible, however, to formulate the basic concepts of quantum mechanics without using classical mechanics.
… The possibility of a quantitative description of the motion of an electron [for example] requires the presence also of physical objects which obey classical mechanics to a sufficient degree of accuracy. If an electron interacts with such a “classical object”, the state of the latter is, generally speaking, altered. The nature and magnitude of this change depend on the state of the electron, and therefore may serve to characterize it quantitatively.
In this connection the “classical object” is usually called “apparatus”, and its interaction with the electron is spoken of as “measurement”. However, it must be emphasized that we are here not discussing a process of measurement in which the physicist-observer takes part. By “measurement”, in quantum mechanics, we understand any process of interaction between classical and quantum objects, occurring apart from and independently of any observer.
Thus quantum mechanics occupies a very unusual place among physical theories: it contains classical mechanics as a limiting case, yet at the same time it requires this limiting case for its own formulation.
–Landau & Lifshitz, Course of Theoretical Physics, Vol. 3, Ch. 1
I should make an attempt to show you how ridiculous this sounds. Imagine that I approached one of my physics professors and asked “Why does it hurt to put my hand on a hot stove?”.
The professor might say something like this:
“Oh, well the stove is hot because the atoms that make up its surface have a high kinetic energy: they are vibrating around very quickly. When you put your hand on the stove these hot stove atoms begin to collide with the atoms of your hand, thereby transmitting some of that kinetic energy to your hand [of course atoms don’t actually physically touch; they just interact by the electromagnetic force, which transfers momentum from one set of atoms to another]. The atoms on the surface of your hand collide with atoms deeper down in your hand, and so on, until the heat is transferred a few millimeters past the surface. Then some nerve fibers in your hand respond to the heat, and the rest is biology.”
You might well object to that phrase “the rest is biology”; it is probably used as a substitute for “I don’t really know what happens next”. You might want to say: The whole world is made up of atoms, right? Atoms obey the laws of physics, right? So surely there must be a “physics” explanation to what’s happening that doesn’t allow the professor to cop out and say “the rest is biology.”
And of course, there is, even though it might be complicated. If pressed to elaborate, the professor might be able to say something about how the large kinetic energy of individual molecules in my hand causes cell walls to break apart (these walls were previously held together by strong electrostatic bonds, but the bonds are now weaker than their constituent molecules’ kinetic energy). When the cell walls break, chemicals inside the now-broken cells can diffuse around. When the right chemical diffuses into the right nerve cell, it triggers a channel in the nerve cell to open and let in sodium ions from the surrounding salt water. This in turn triggers nearby channels to open, and there is a wave of nerve channel openings that travels up a long nerve fiber into my brain. [Biologists out there: I apologize if I got some of the details wrong here. Please correct me.]
My point is this: in principle, my question can be completely answered by describing the motion of individual atoms (or, if you prefer, electrons and quarks) as they push and pull on each other by fundamental forces. Even the expletive-laden thoughts that cross my mind as I pull my hand off the stove can be described as a process of many atoms being pushed one way or another in my brain.
There is absolutely nothing real about the distinction between “physics” and “biology” in this explanation. There is only one reality; you can call yourself what you want as you describe it.
Now imagine that I asked the professor a different, more quantum question. For example, “What happens when I fire a single photon at a thin slab of metal?”
The professor might say something like this:
“Well, with a particular probability the photon is reflected, with some other probability the photon is transmitted through the slab, and with a third probability the photon is absorbed by the slab. [The professor can then calculate all these probabilities if I ask him to.] If the photon is absorbed, then the slab moves forward with the momentum it gained from the photon, as dictated by the laws of classical mechanics.”
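The professor’s bookkeeping, sketched out (the individual probabilities depend on the slab, so take the symbols as placeholders):

```latex
R + T + A = 1,
\qquad
p_{\text{photon}} = \frac{E}{c} = \frac{\hbar\omega}{c},
```

so that if the photon (frequency $\omega$) is absorbed, a slab of mass $M$ recoils classically with velocity $v = \hbar\omega / (Mc)$.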
Here I might object to the professor’s phrase “as dictated by the laws of classical mechanics” in the same way I objected above to the phrase “the rest is biology”. The slab might be big, but it is still made of electrons and protons and neutrons: each of these are quantum objects. So doesn’t the photon (a quantum object) just alter the quantum states of all the constituent particles in the slab?
For that matter, I myself am made of electrons and protons and neutrons. When I interact with the metal slab (perhaps just by watching its motion) doesn’t this change the quantum states of all my little constituent particles? Can’t you tell me what is happening in a consistent way where everything that we know to be a quantum object behaves like a quantum object?
The answer is no. The professor can only tell me what will happen when I let quantum objects interact with classical ones. If I refuse to let him tell me that the metal slab (or myself) is a “classical object” and I insist that everything is ultimately made of tiny particles that must respect QM, then the professor has nothing to tell me. QM refuses to make any statements about a “reality” independent of classical objects.
In short, QM cannot describe a universe made only of quantum objects, even though that is apparently the case in reality.
What a strange and impertinent way to be treated by history’s most successful scientific theory.
Personally, I find it very disconcerting that such a strong dichotomy — quantum versus classical — should be so central to our thinking as physicists. We even have an obvious identifier to tell the two types of objects apart: quantum objects always have Planck’s constant somewhere in their description and classical objects never do.
Somehow this doesn’t fit with my sense of aesthetics, which says that there can be only one universe that doesn’t care what we call classical and what we call quantum. I am left to speculate, or perhaps just to hope, that ultimately QM will be replaced by some theory that can stand on its own and make independent claims about reality. Because a theory that “contains classical mechanics as a limiting case, yet at the same time requires this limiting case for its own formulation” is a hard pill to swallow.
Perhaps this should teach me to be a little more kind to science teachers.