Thursday, May 13, 2010

Of Parallax and Paradox

13  And the evening and the morning were the third day.
14  And God said, Let there be lights in the firmament of the heaven to divide the day from the night; and let them be for signs, and for seasons, and for days, and years;
15  And let them be for lights in the firmament of the heaven to give light upon the earth; and it was so.
16  And God made two great lights; the greater light to rule the day and the lesser light to rule the night; he made the stars also.
17  And God set them in the firmament of the heaven to give light upon the earth,
18  And to rule over the day and over the night, and to divide the light from the darkness; and God saw that it was good.
19  And the evening and the morning were the fourth day.

Genesis, Chapter 1, verses 13-19

Last week, in part two of the current series, I told the story of how astronomer Ole Roemer first discovered that light does not move from one place to another instantaneously, and how he made his initial calculation of the actual speed at which light travels.  In retrospect, I should perhaps have given more emphasis to the many additional experiments that have been conducted to measure the speed of light in the centuries since Roemer's time.  These measurements, involving ever-more-accurate devices and improved experimental techniques, are the reason that we have such an exact value (299,792.458 kilometers per second) today.  Some discussion of these experiments would have established, beyond a shadow of a doubt, that 299,792.458 kilometers per second is not a "hypothesis" or a "theory" about the behavior of light that is subject to debate, or that could be overturned in some future experiment.  299,792.458 kilometers per second is a concrete fact, as real as the hand in front of your face.

The importance of the fact that light's speed (physicists call it "c") is a far cry from infinite will become clear a little later on.  Right now, I want to begin a different topic with a simple question:  In all the times in your life that you've been outside at night, looking up at the planets and the stars, have you ever wondered how, and when, people began to figure out how far away they are?  Determining their distances certainly wasn't easy.  For the vast majority of the time that we humans have been around, we lacked the right perspective to even frame the question correctly.  That's because, just as the Sun appears to revolve around the Earth each day (an illusion which made it devilishly difficult to understand that it is really the Earth that is rotating), the night sky looks like a two-dimensional surface, at a fixed distance from the Earth, with little lights plastered across it.  Imprisoned, constrained, and completely fooled by this compelling illusion, the ancients proposed all sorts of colorful beliefs about the night sky, all of which were premised on the assumption that the heavenly lights were all the same fixed distance away.  One of my personal favorites held that a black curtain surrounds the Earth, and the stars are holes in the cloth through which the dazzling brilliance of heaven shines.

It was the Greeks who started to part the curtain by employing a straightforward mathematical tool called trigonometric parallax.  To get a quick and dirty feel for the tool, without the messy mathematical details, all you need to do is carry out the following simple exercise.  Position yourself directly in front of your television, at a comfortable distance away.  Close your left eye.  Raise the index finger of your right hand straight up in the air.  Position your index finger directly in front of, and about six inches away from, your nose.  Make sure your finger is located directly in front of the middle of your television screen.

Next, without moving your finger, open your left eye and close your right eye simultaneously.  You’ll see your finger “jump” quite a ways to the right, maybe even all the way off your TV screen (if your screen is small enough, and you are far enough away).  Now, repeat the exercise, but extend your arm all the way so that your right index finger is as far as it can possibly get from your nose.  You will still see your finger “jump” when you switch eyes, but the distance covered by the jump will be noticeably reduced, small enough that your finger probably stays well inside the boundary of the TV screen.

The change in the position of your finger with respect to the background TV is simply due to the fact that you are looking at your finger (and the more distant TV screen) from slightly different positions when you switch eyes.  The closer your finger is to your eyes, the greater the difference those two positions make, and the more your finger will appear to jump against the screen.  This is where the mathematics comes in.  If you know the exact distance between your right eye and your left eye (the two viewing points), and you measure the angle by which your finger appeared to shift against the TV screen in the background, you can use simple high school trigonometry, invented by the Greeks, to calculate exactly how far your finger is from your eyes.
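
For the mathematically inclined, here is a minimal sketch of that calculation in Python, using made-up illustrative numbers (a typical eye separation of about 6.5 centimeters, and an apparent jump of 25 degrees):

```python
import math

def distance_from_parallax(baseline_m, shift_deg):
    """Distance to a foreground object, from the apparent angular shift
    seen between two viewpoints separated by baseline_m.  Splitting the
    shift symmetrically gives two right triangles, so that
    tan(shift/2) = (baseline/2) / distance."""
    half_shift_rad = math.radians(shift_deg) / 2.0
    return (baseline_m / 2.0) / math.tan(half_shift_rad)

# Illustrative, made-up numbers: eyes ~6.5 cm apart, and the finger
# appears to jump ~25 degrees against the TV screen.
print(distance_from_parallax(0.065, 25.0))   # ~0.15 m -- about six inches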

What’s the take-home message from our little example?  As long as you can observe a foreground object (like your finger) from two separate locations (like your left and right eyes), and you have background objects much further away from you than the foreground object, then by measuring the amount that the foreground object appears to shift against the background, when observed from the two locations, you can recover the crucial third dimension, and determine exactly how far away the foreground object is from you.

In the thousands of years since this "trigonometric parallax" method was discovered, it has been used routinely to solve all kinds of useful problems, like determining how far a ship is from the shore.  Military types used to need that information in order to figure out how to fire their cannons to hit enemy vessels.  Not to be outdone, both the ancient Greeks and the first generation of astronomers to have access to telescopes used trigonometric parallax to calculate the distance to objects within our own solar system.  For example, at about the same time that Ole Roemer was making his first observations of Io, Jupiter's moon, Italian astronomer Giovanni Cassini was making careful observations of where the planet Mars was located with respect to nearby background stars from his location in Paris.  Moreover, Cassini had a research assistant make the same measurements from Cayenne, in French Guiana, several thousand kilometers away.  Through trigonometric parallax, Cassini determined a distance to Mars that was only 7% off of the currently accepted value, which we get far more directly by bouncing radar signals off the planet and measuring how long they take to return.
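
Because the shift is so tiny compared to the distance, the full trigonometry collapses into a simple division (the "small-angle approximation").  Here is a rough sketch with round, illustrative values, not Cassini's actual figures:

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)

def distance_small_angle(baseline_km, parallax_arcsec):
    """Small-angle parallax: distance ~ baseline / parallax (in radians),
    valid because the shift is tiny compared to the distance."""
    return baseline_km / (parallax_arcsec * ARCSEC_TO_RAD)

# Round, illustrative values (not Cassini's actual figures): a
# Paris-to-Cayenne baseline of ~7,000 km and a shift of ~25 arcseconds.
print(f"{distance_small_angle(7_000, 25):.2e} km")   # ~5.8e7 km to Mars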

Could trigonometric parallax work as a tool to measure the distance to the stars?  Certainly not by placing observers at a known distance from each other on the Earth's surface, the way Cassini did for Mars; the shift in a star's location against a background of more distant stars would be far too tiny to measure.  Every six months, however, the Earth itself moves in its orbit from one side of the Sun to the other, covering about 186 million miles in the process.  Perhaps that was a displacement large enough for the trigonometric parallax method to work?  Starting around the beginning of the 18th century, astronomers began to pick candidate stars and, with the help of telescopes, note their exact positions, relative to other stars in their vicinity, on, say, December 21st, and then again on June 21st, six months later.  If one of the target stars was observed to move against the other stars in its vicinity, the movement had to be due to the fact that the foreground star was being observed from two different locations, just as your finger was observed from two eyes, except that here the two locations were separated by those 186 million miles!  If the size of the shift in position could be measured reliably, astronomers would have all the information needed to calculate the distance to the foreground object (the target star).
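
By convention, astronomers quote a star's "parallax" as half of the total six-month shift, i.e., the shift corresponding to a baseline of one Earth-Sun distance (one "astronomical unit").  A minimal sketch of the resulting calculation, assuming that convention:

```python
import math

AU_KM = 1.495978707e8            # one astronomical unit, in kilometers
LY_KM = 9.4607e12                # one light-year, in kilometers
ARCSEC_TO_RAD = math.pi / (180 * 3600)

def stellar_distance_ly(parallax_arcsec):
    """Distance to a star from its annual parallax angle (half the
    total six-month shift, i.e., the shift over a 1 AU baseline)."""
    d_km = AU_KM / math.tan(parallax_arcsec * ARCSEC_TO_RAD)
    return d_km / LY_KM

# A star shifting by a full arcsecond would lie ~3.26 light-years away;
# as we'll see, even the nearest stars shift by less than that.
print(stellar_distance_ly(1.0))   # ~3.26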

Simple, right?  Well, yeah, except that for the longest time, no matter what star they selected as the target, the poor astronomers couldn't measure any shift.  Luckily for us all (all, that is, except those who cling to the literal truth of Genesis), failure was not an option, and by the early 19th century several astronomers were competing to be the first to measure stellar parallaxes and compute stellar distances.  Finally, in 1838, an anal German scientist (is there any other sort?) named Friedrich Bessel won the race, and reported a reliable parallax shift for 61 Cygni (a relatively unassuming star in the constellation Cygnus).  Bessel had had good reason to suspect that this particular star might be a relatively near neighbor of the Sun (how Bessel made that determination is interesting in its own right, but beyond the scope of this blog), and was therefore a good candidate to reveal a parallax.  That parallax was exceedingly tiny, though, so small that when Bessel computed the distance to 61 Cygni, it came out to a jaw-dropping 60 trillion miles.
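
Plugging Bessel's published figure of roughly 0.314 arcseconds into the same small-angle arithmetic shows where that staggering number comes from:

```python
import math

AU_MILES = 9.2956e7              # one astronomical unit, in miles
ARCSEC_TO_RAD = math.pi / (180 * 3600)

# Bessel's 1838 parallax for 61 Cygni: roughly 0.314 arcseconds.
d_miles = AU_MILES / math.tan(0.314 * ARCSEC_TO_RAD)
print(f"{d_miles:.2e} miles")    # ~6.1e13 -- about 60 trillion miles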

Suddenly, everything clicked into place.  As more stars had their distances determined, it became clear that the trigonometric parallax method had failed for so long simply because the stars are so mind-numbingly distant, which means that the amount they appear to shift is exceedingly tiny.  Other stars were eventually found to be closer than 61 Cygni, but none closer than Alpha Centauri, about 26 trillion miles away.

Time to connect some dots.  At a speed of almost 300,000 kilometers per second, how long does it take light to cross the space between the stars and reach our eyes and our telescopes?  In the case of even the closest star, Alpha Centauri, the answer is about 4.2 years.  In the case of 61 Cygni, the answer is closer to 10 years.  For other familiar stars, like Vega, we're talking more like 25 years.
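
The arithmetic behind those travel times is nothing more than distance divided by speed; here is a quick sketch using the rounded distances above (the small differences from the figures in the text just reflect that rounding):

```python
C_KM_S = 299_792.458                       # the measured speed of light
KM_PER_MILE = 1.609344
LY_KM = C_KM_S * 365.25 * 24 * 3600        # distance light covers in a year

def travel_time_years(trillions_of_miles):
    """Years for light to cross a distance given in trillions of miles."""
    return trillions_of_miles * 1e12 * KM_PER_MILE / LY_KM

print(travel_time_years(26))   # Alpha Centauri: ~4.4 years
print(travel_time_years(60))   # 61 Cygni: ~10 years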

According to the Genesis version of creation, these stars all popped into existence in the course of a single day, the fourth day of creation, and were lighting up the night sky by the time that day came to an end.  But if we take into account the speed of light, and the distance of these stars from the Earth, this is flatly impossible.  On the assumption that the stars were approximately the same distance from the Earth 6,000 years ago as they were when Bessel and his colleagues first measured their parallaxes (and, to a first order of approximation, that is the case), their light would have just started its journey to Earth on the day they were created, and would not have reached the Earth for years.  At the risk of belaboring this point: the night sky would have been completely bereft of stars for the first 4.2 years, and then one single, solitary point of light, the star we now call Alpha Centauri, would have winked on.  It would have taken the entire lifetimes of Adam and Eve for all the familiar stars in our sky to gradually become visible.

All right, you’re saying.  God has a lot of tricks up His sleeve.  Maybe he got around the pesky little speed-of-light problem by pulling a fast one on us.  Maybe, when He created the stars that would be visible to Adam and Eve in the night sky, He initially placed them all well inside the confines of our own solar system, close enough that they would all be visible on the evening of the fourth day (after all, he had up to 24 hours of wiggle room, so he could put them as far as 16 billion miles out in space, way past the orbit of Pluto, and their light would still have reached the earth before His deadline).   Then, maybe God employed some rather adroit stellar dynamics to whisk these stars out of the solar system fast enough that they would reach their present gargantuan distances in time for Bessel and his colleagues to measure them, 6000 years later. 

Fair enough.  After all, God is omnipotent, so there is really nothing beyond His power to accomplish if He sets His mind to it.  But even this incredible kluge cannot save Genesis, because, embedded in the speed of light and the distances to celestial objects, lurks a far more profound challenge to the story.  That challenge will be the subject of the next blog.  Until then, would anybody like to try to trump me and hazard a guess as to what it is?

Thursday, May 6, 2010

Science and Religion at the Speed of Light


In the last blog, I promised to demolish the Genesis account of creation without referencing Darwin, evolution, or anything to do with Earth’s geology.  Instead, I’m going to build my case with ironclad facts from the fields of physics and astronomy, facts that have been established over the course of hundreds of years of painstaking inquiry into the nature of physical reality. 

Although no one knew it at the time, Genesis began to unravel the very first time somebody looked at the planet Jupiter through a telescope.  As most of you know, the person in question was Galileo, and the time was the beginning of the 17th century.  To his great astonishment, Galileo observed four points of light surrounding the giant planet.  Observing them over time, he noted that the positions of the lights changed in ways that made it obvious that they were satellites, orbiting Jupiter in the same way that the planets orbit the Sun.

One of these lights, Jupiter’s moon Io, is so close to Jupiter that it completes a full orbit in only a couple of days, going behind Jupiter for a brief period each time (in astronomy-speak, Jupiter “occults” Io).  Later on in the 17th century, the prominent Danish astronomer Ole Roemer systematically recorded the timing of these occultations over an extended period stretching from 1671 to 1677.   Combining his observations with those of some of his contemporaries, Roemer discovered a remarkably systematic pattern: the time between occultations gets steadily shorter as the Earth’s own orbital motion brings us closer to Jupiter, and then lengthens again as the Earth moves farther away.  Reporting his results to the French Academy of Sciences, Roemer hypothesized that

This… appears to be due to light taking some time to reach us from the satellite; light seems to take about ten to eleven minutes [to cross] a distance equal to the half-diameter of the terrestrial orbit
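
Roemer's pattern is easy to reproduce in a toy model.  The sketch below is a deliberate oversimplification (circular, coplanar orbits, with Jupiter held fixed at 5.2 astronomical units): it tags each occultation with the light travel time across the current Earth-Jupiter separation, and the observed gaps between occultations duly shrink and stretch by a dozen or so seconds over the year:

```python
import math

C_KM_S = 299_792.458
AU_KM = 1.496e8
IO_PERIOD_S = 1.769 * 86400      # Io's orbital period, ~1.77 days
YEAR_S = 365.25 * 86400

def earth_jupiter_km(t):
    """Earth on a circular 1 AU orbit; Jupiter fixed at 5.2 AU."""
    angle = 2 * math.pi * t / YEAR_S
    return math.hypot(5.2 * AU_KM - AU_KM * math.cos(angle),
                      AU_KM * math.sin(angle))

# Occultation n happens at Jupiter at t = n * P, but we *see* it only
# after the light has crossed the current Earth-Jupiter separation.
seen = [n * IO_PERIOD_S + earth_jupiter_km(n * IO_PERIOD_S) / C_KM_S
        for n in range(200)]
gaps = [b - a for a, b in zip(seen, seen[1:])]
print(min(gaps) - IO_PERIOD_S, max(gaps) - IO_PERIOD_S)  # roughly -15 s, +15 s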

Clever fellow that he was, Roemer combined this time difference with early estimates of the "half-diameter of the terrestrial orbit" (the distance from the Earth to the Sun), from which he could easily calculate the difference in the Earth-Jupiter distance between the two planets' closest and furthest approaches, to compute the speed of light for the first time.  (Prior to that point, many prominent scientists, Descartes included, believed the speed of light was infinite.)
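
A back-of-the-envelope reconstruction, assuming Roemer's roughly eleven minutes and a 17th-century estimate of the Earth-Sun distance of about 140 million kilometers (both figures approximate), lands close to his published value:

```python
AU_ESTIMATE_KM = 140e6      # a 17th-century estimate of the Earth-Sun distance
light_delay_s = 11 * 60     # Roemer's ~11 minutes for light to cross it

print(f"{AU_ESTIMATE_KM / light_delay_s:,.0f} km/s")   # ~212,000 km/s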

The value Roemer obtained, about 211,000 kilometers per second, is considerably lower than the currently accepted value of 299,792.458 kilometers per second.  That's partly because the radius of the Earth's orbit wasn't yet known with any precision in his time, and partly because his ten-to-eleven-minute estimate of the light delay was itself on the high side (the modern figure is closer to eight minutes).  Still, Roemer's calculation was in the right ballpark, and the realization that the speed of light had a fixed, finite value, instead of being either immeasurably high or infinite, marked a major advance in the history of science and our understanding of the natural world.  For example, we've already seen how fundamental a role the finite speed of light plays in generating time dilation effects.  In subsequent blogs, we'll discover just how big a nail the speed of light drives into the coffin of the biblical story of creation.  But first, we have to identify another important nail, in the form of the distance between the stars and us.  That is the topic of the next blog.