# Nothing lasts forever

Quick, what’s the integral from zero to infinity of $\sin(x)$?

If you’re a good math student, you’ll tell me that the answer is undefined, since $\sin(x)$ oscillates forever and so the integral doesn’t converge.

I, on the other hand, am not a good math student, so I am free to tell you that I know the answer.

The integral from zero to infinity of $\sin(x)$ is $1$.

Let me explain where this answer comes from, and why I’m so confident that it’s right. In doing so, perhaps I will demonstrate a little bit about the relationship that physics has with math.

First of all, as a physical-minded person I should interpret what it means to write $\int_0^\infty \sin(x)\,dx$. In my mind, the only meaningful interpretation of the question “what is $\int_0^\infty \sin(x)\,dx$?” is something like “what is the net (integrated) effect of something that oscillates for a very long time?” The $\infty$ in the integral means that when someone asks “how long do you mean by ‘very long’?”, the correct answer is “as long as I want.”

Now I should answer the question, and I can do so as long as I hold to one belief: In the real world, nothing actually lasts forever.

That is, I don’t know how long, exactly, the $\sin(x)$ in the integral will keep going, but I do know that it should die off eventually. So let me assume that the amplitude of the sine wave dies off very slowly (“How slowly?” “As slowly as I want.”). Then I can calculate the integral, get a perfectly well defined answer, and verify that my answer doesn’t depend on how slowly I killed off the sine wave.

For example, say I kill off the oscillations of the sine function exponentially, by replacing $\sin(x)$ in the integral with $e^{-x/L} \sin(x)$, where $L$ is a very large number. Then I can calculate the integral, and check what happens when $L$ gets arbitrarily large. You can do this exercise for yourself (by hand if you’re diligent, using Wolfram Alpha if you’re lazy) and you’ll find that when $L \gg 1$ the answer is very close to $1$. (“How close?” “As close as you want.”).

The more precise mathematical statement goes like this:

$$\lim_{L \to \infty} \int_0^\infty e^{-x/L} \sin(x)\, dx = \lim_{L \to \infty} \frac{L^2}{1 + L^2} = 1$$

(This is a mathematical trick that I use, in one form or another, all the time.)
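If you’d rather check the calculation than trust Wolfram Alpha, here is a short Python sketch (my addition, not part of the original post) that evaluates the damped integral both from its closed form and by a brute-force Riemann sum; the function name `damped_sine_integral` is my own invention:

```python
import math

def damped_sine_integral(L):
    """Closed form of the integral of exp(-x/L) * sin(x) from 0 to
    infinity; two integrations by parts give L^2 / (1 + L^2)."""
    return L**2 / (1 + L**2)

# The closed form creeps up on 1 as the cutoff scale L grows:
for L in (10, 1_000, 1_000_000):
    print(L, damped_sine_integral(L))

# Brute-force cross-check for L = 10: a plain Riemann sum out to 40*L,
# beyond which the damped tail contributes less than exp(-40).
L, h = 10.0, 1e-3
total = sum(math.exp(-k * h / L) * math.sin(k * h) * h
            for k in range(int(40 * L / h)))
print(total)  # should be close to 100/101 ≈ 0.990099
```

The brute-force sum and the closed form agree to several decimal places, which is the “verify that my answer doesn’t depend on the details” step made concrete.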

So, to recap, what is the integral of $\sin(x)$ from zero to infinity?

My math textbook, my math teacher, and all my math software say that the answer is undefined. But as long as you grant me that nothing actually lasts forever, I’ll tell you that the answer is $1$.

### Footnotes

1. It is not really my intention to bash on mathematics or math teachers. For example, I found that the most “hard core” math course that I took in college — real analysis — was thoroughly grounded in intuitive and physical thinking of the sort I am advocating here.

2. One way to think about the final answer, $1$, is that the area under the curve of $\sin(x)$ (picture its positive humps colored red and its negative humps colored blue) depends on how many red bumps and how many blue bumps you count. Every red bump contributes an area $+2$ and every blue bump has area $-2$. So as you count them from left to right, your running tally for the area will go back and forth between $2$ and $0$. The correct answer, $1$, is the average of these two, which you might expect from any process that slowly washes out your counting procedure.

3. If you’re curious, $\int_0^\infty \cos(x)\,dx = 0$. (This can of course be worked out using the same mathematical trick as above.)

4. If you’re *really* curious, $\int_0^\infty \sin(x + \phi)\,dx = \cos(\phi)$. So the integral from zero to infinity of an oscillating wave can take any value from $-1$ to $1$, depending on its phase when it started. This result could be anticipated from the simple argument in Footnote 2.

5. Just to reassure you, there is nothing magical about the choice of an exponential cutoff $e^{-x/L}$. I usually use it because it’s easy to work with. But you’ll find that any slow damping of the oscillations will give the same result.

I suspect, in fact, that there is some nice theorem here. Like:

For any continuous function $f(x)$ such that [UPDATE: $f(0) = 1$ and $f(x) \to 0$ as $x \to \infty$], $\lim_{L \to \infty} \int_0^\infty \sin(x)\, f(x/L)\,dx = 1$.

If I were a smarter person I could probably prove this theorem and generalize it to any oscillating function (with zero mean). Can any of my more mathematically inclined readers shed light on the subject? Maybe there are other necessary constraints on $f(x)$?
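For what it’s worth, here is a quick numerical check of the claims in footnotes 4 and 5 (my own sketch, not part of the original post): it damps the oscillation with a Gaussian cutoff $e^{-(x/L)^2}$ instead of the exponential one, and allows a starting phase $\phi$. The name `gaussian_damped_integral` is made up for this illustration.

```python
import math

def gaussian_damped_integral(phi, L, h=2e-3):
    """Riemann-sum approximation of the integral of
    sin(x + phi) * exp(-(x/L)^2) from 0 to infinity.
    The Gaussian tail is negligible beyond about 6*L."""
    n = int(6 * L / h)
    return sum(math.sin(k * h + phi) * math.exp(-((k * h) / L) ** 2) * h
               for k in range(n))

# Footnote 5: a completely different slow cutoff still gives 1 at phi = 0.
print(gaussian_damped_integral(0.0, L=50))

# Footnote 4: with a starting phase, the answer moves toward cos(phi).
for phi in (0.0, math.pi / 3, math.pi / 2, math.pi):
    print(round(phi, 3), gaussian_damped_integral(phi, L=50), math.cos(phi))
```

With $L = 50$ the results already sit within about $10^{-3}$ of $\cos(\phi)$, supporting the claim that the answer does not care which slow damping you chose.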

### Comments

Does a photon travelling without colliding with anything eventually fade out or decay? If not, then would it essentially last as long as the Universe exists, which is in effect infinite?

That’s kind of a zen question! If you think about a photon as a classical electromagnetic wave, then you’re right that it should keep propagating forever. Of course, no EM wave is perfectly focused, so it will gradually spread out over time and its amplitude will decay.

Still, the point is that even photons can’t go on *literally* forever. Eventually the photon will interact with something and get absorbed or scattered, or it will run into the finite size of the universe. What I’m saying in this post is that sometimes just knowing that *something* brings it to an end is enough to give you an answer, even if you don’t know what that something is or how ludicrously far away it is.

Your definite integral at x = infinity only exists because exp(-x/L) goes to zero in this limit. But then you take the L = infinity limit to justify the final answer, which would invalidate the exponential going to zero in the definite integral, since it is no longer the exponential of something infinitely large but that of something infinitely large divided by something also infinitely large. So I’m not comfortable with the validity of this argument; it doesn’t seem to be a valid procedure to take limits in whatever order is convenient so that one thing vanishes but not another.

In other words I think the L->infinity limit of the integral of your modified decaying sin function is undefined for the same reason the integral of the original sine function is also undefined.

Well, if you don’t like the limit $L \to \infty$, then just make $L$ very large and see what happens. For example, set $L$ equal to one thousand and you’ll get $\approx 0.999999$ for the integral. Set $L$ equal to one million and you’ll get $\approx 0.999999999999$. So it’s clear that the answer is getting arbitrarily close to $1$ as you cut off the sine very slowly.

Not really, because your definite-integral formula assumes an L smaller than x, no matter how large a value of L you actually plug in. So yes, it gets arbitrarily close to 1 when you make L one million or something very large, but it doesn’t converge to that as L gets infinitely large, because the definite integral will no longer be well defined. Basically, showing that for large L it gets close to 1 is not the same thing as saying that the limit as L goes to infinity is 1. The former has an answer; the latter is undefined.

That’s not to say you aren’t correct that the integral of a slowly decaying sine function will converge to 1, but it doesn’t follow that the integral of a sine function is 1, because the limit as L goes to infinity that is required to show this doesn’t exist.

This is, perhaps, the whole philosophy that I am trying to advocate here. In the real world, every sine function *is* a slowly decaying sine function. So the answer I get by taking the limit outside the integral is perfectly valid for any physical problem.

The business of pure and correct mathematics is not one in which I am employed. This whole blog, if you want, adopts the perspective of math as a tool to describe the universe, and not the perspective of math as a universe unto itself.

This is sort of why mathematicians dislike physicists. And real analysis isn’t grounded in intuitive and physical thinking. It’s grounded in formalized logical thinking. Intuition usually lets you down.

Your proposed theorem is not true. Consider the function f(x) = 1/x, which is continuous everywhere except at the origin; that doesn’t matter, because sin(x)/x only has a removable discontinuity there. We then have f(x/L) = L/x. Therefore the limit as L tends to infinity of the integral of sin(x)f(x/L) equals the limit as L tends to infinity of L times the integral of sin(x)/x, which is L times pi/2, and that of course tends to infinity, not 1.

Even if you assume that f(0)=1, the theorem still fails. Let f(x)=1 for x in [0,1] and f(x)=1/x for x larger than 1. You still get infinity in the limit.

Actually, I think your piecewise function $f(x) = 1$ in $[0, 1]$ and $f(x) = 1/x$ in $(1, \infty)$ gives the right answer. The integral becomes $\int_0^L \sin(x)\,dx + L \int_L^\infty \frac{\sin(x)}{x}\,dx$. I don’t know how to prove it, but it’s fairly easy to see numerically that this converges to $1$.
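That numerical check needs nothing fancier than a Riemann sum, if you average the running integral at two consecutive zeros of sin(x) to wash out the oscillating tail (the same averaging idea as in footnote 2). This is my own sketch, and `piecewise_f` and `averaged_integral` are names I made up for it:

```python
import math

def piecewise_f(u):
    """The f from the comment above: 1 on [0, 1], and 1/u beyond."""
    return 1.0 if u <= 1.0 else 1.0 / u

def averaged_integral(L, crossing=3000, h=0.01):
    """Riemann sum of sin(x) * piecewise_f(x/L), stopped at two
    consecutive zeros of sin(x) (x = N*pi and x = (N+1)*pi); the two
    partial integrals are then averaged to kill the oscillating tail,
    in the spirit of the bump-counting average in footnote 2."""
    X1, X2 = crossing * math.pi, (crossing + 1) * math.pi
    total, partial_at_X1 = 0.0, 0.0
    for k in range(int(X2 / h)):
        x = k * h
        total += math.sin(x) * piecewise_f(x / L) * h
        if abs(x - X1) < h / 2:   # remember the running sum at x = N*pi
            partial_at_X1 = total
    return 0.5 * (partial_at_X1 + total)

results = {L: averaged_integral(L) for L in (10, 100)}
print(results)  # both near 1, with a finite-L correction of order sin(L)/L
```

The residual drift away from $1$ shrinks roughly like $\sin(L)/L$ as $L$ grows, consistent with the theorem holding for this $f$.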

And as for real analysis, maybe I never got far enough to really see the point where “intuition lets you down.” I found real analysis to be, not physical exactly, but concrete enough to be helpful in developing physical reasoning. I particularly enjoyed the part (very early on) where we learned that A is equal to B as long as you can show that the difference between A and B is as small as you want it to be.

The best example I can think of in real analysis where intuition lets you down is something like a Weierstrass function, which is continuous everywhere but differentiable nowhere. These actually make up “most” continuous functions, but there’s not an intuitive reason that they exist. They’re very counterintuitive.

I guess I abandoned ship before we got anywhere that crazy. 🙂

My piecewise was a bad example. I have to think about that aspect a little more.

Another issue: suppose you integrate sin(x) over all of the real numbers instead of only over the positive ones. This integral doesn’t exist, but you can at least assign it a Cauchy principal value of zero:

The limit as a -> infinity of the integral of sin(x) from -a to a equals 0.

Given your weird reasoning, you might as well say that the integral over just the positive numbers is plus or minus one half of the Cauchy principal value, which is zero. This allows you to give the integral a defined value without using any potentially specious dampening functions. Also, this makes more intuitive sense of the cancelling which is supposedly going on.

Ok, so insisting that f(0)=1 is not enough. Let f(x)=1/(x+1). In this case, you still get the same infinite limit as you get with f(x)=1/x. The problem is that the exponential functions you’ve chosen to use are integrable on the positive numbers, and f(x)=1/(x+1) is not integrable over the positive numbers. In order to prove what you want, you need the function to be bounded by an integrable function, and the dominated convergence theorem would then prove what you want.

It does start to get a little fishy, yes. But I would argue that since sin(x) at negative x is like an “upside down” version of sin(x) at positive x, then I can say from the beginning that the integral from -a to zero should be the negative of the integral from zero to a. So taking $\int_0^\infty \sin(x)\,dx$ to be one half of $\int_{-\infty}^{\infty} \sin(x)\,dx$ is a mistake.

I disagree. I think that $\int_0^\infty \sin(x)\,dx = -\tfrac{1}{2} \int_{-\infty}^{\infty} \sin(x)\,dx$.

I agree that it should not be 1/2 of the big integral. It should be -1/2 of it, which still gives you 0, a better answer than 1.

The integral that I want is $\int_0^\infty \sin(x)\,dx$. By the anti-symmetry of $\sin(x)$, I can expect that $\int_{-a}^{0} \sin(x)\,dx = -\int_{0}^{a} \sin(x)\,dx$. This doesn’t tell me anything about what $\int_0^\infty \sin(x)\,dx$ is.

Am I misunderstanding your argument?

I’m not sure if you’re understanding my argument, but it goes as follows:

If you’re in the business of assigning finite values to integrals with no clear mathematical value, you should be choosing the most parsimonious method available. The Cauchy principal value method is more parsimonious than introducing and choosing an arbitrary class of dampening functions to integrate against (i.e., if you asked my Calculus II students what the value of the integral of sin(x) from zero to infinity was, they would say “zero,” and be on firmer intuitive ground than if they said “one”). So if you get zero by looking at the principal value of the integral of sin(x) from negative infinity to infinity, and this integral enjoys similar cancellation/symmetry properties to integrating sin(x) over just the positive numbers, you should also get zero for this value as well. In fairness, the theorem you suggest might be true, and I unfortunately didn’t have time today to prove it or find an appropriate counterexample.

My main objection is that you shouldn’t be taking an entirely mathematical object like the integral of sin(x) from zero to infinity and suggesting it has a finite value based on your intuition about the finiteness of the natural world. The mathematical definition is entirely clear and shows that it admits no single definite value. The physics you think the situation should take into consideration does not have anything to do with the mathematics involved.

Fair enough. I suppose my tone was unnecessarily authoritative. You’re right that I have no right to make claims about purely mathematical objects. So let me modify my claim to the following:

In a physical problem, when solving for a measurable quantity, if you find yourself staring at something like the integral $\int_0^\infty \sin(x)\,dx$, the answer that you want is almost certainly $1$.

Ok yes. If you’re explicit that the argument x of sin(x) is a physical variable, then I think you’re on reasonable ground, at least physically. This is an interesting blog, btw.

Thanks!

I think in order for the theorem to be true, you need the function f to be differentiable at zero. The proof is just to integrate by parts: you get an L in the denominator of the new integral, and cosine times f′(x/L) in the numerator. If the function is differentiable at zero, everything vanishes in the limit except for the boundary term f(0) = 1. A friend of mine came up with a nonsmooth counterexample, which is not really relevant to this discussion.
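Spelling that argument out (my own transcription of the comment’s sketch, with the decay assumptions made explicit):

$$\int_0^\infty \sin(x)\, f(x/L)\,dx = \Big[-\cos(x)\, f(x/L)\Big]_0^\infty + \frac{1}{L}\int_0^\infty \cos(x)\, f'(x/L)\,dx .$$

If $f(x) \to 0$ at infinity, the boundary term contributes $\cos(0) f(0) = 1$. Substituting $u = x/L$ turns the leftover term into $\int_0^\infty \cos(Lu)\, f'(u)\,du$, which goes to zero as $L \to \infty$ (by the Riemann-Lebesgue lemma) provided $f'$ is integrable.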

Ok, so I’m not exactly sure why you deleted my comments; maybe you didn’t like my tone. I’ll assume you had your reasons. I was just trying to clear up your confusion on this subject.

My apologies. I thought I had been deleted.

Sorry, I was just slow in approving them! I think this is your first time commenting here, so I had to approve you manually. Things should be fine from now on.

By the way, in life I have learned to abide by the general rule: “Never ascribe to malice what can be adequately explained by incompetence.”

🙂

By the way, everyone, if you want to put math in your comments you can use LaTeX code. Just write “$latex”, then some valid sequence of latex commands, then another “$”.

You assume that you can pass the limit as L goes to infinity inside your integral. But the integrand does not converge uniformly to sin(x). So you are wrong. Read *Principles of Mathematical Analysis*, chapter 7.

You’re right that what I wrote above generally violates the rules that you learn in textbooks (including the rules about when you can exchange limits and integrals). But my contention is that in any real-life system, where oscillations always decay, something that looks like $\int_0^\infty \sin(x)\,dx$ will be correctly interpreted as equal to $1$.

If you want, this is a post about what it means to write an integral to infinity when you’re trying to describe a real system. It might look like it doesn’t converge, but the real integrand, whatever it happens to describe, will eventually decay to zero and give you a convergent answer.

This actually cropped up while calculating the scattering cross-section for the Coulomb potential and left a bad aftertaste for the rest of the day.

Thank you so much for clarifying this!