If we cannot actually see a planet, how can we possibly know that it exists? There are two methods. First, it is not strictly accurate to say that a planet orbits a star; the two bodies orbit their common center of mass. That means that if the orbit lies at right angles to our line of sight to the star, the star's apparent position in the sky will vary over the period of the planetary year. The change will be tiny, but if the planet is large, the movement of the star may be large enough to measure.
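To see why the planet must be big, consider a worked example. The short Python sketch below estimates the wobble of a Sun-like star tugged by a Jupiter-like planet, viewed from an assumed distance of ten parsecs; the numbers are illustrative, not taken from the chronicles.

```python
# Back-of-envelope astrometric wobble for a Jupiter-like planet
# around a Sun-like star.  All values are assumed, illustrative numbers.
m_ratio = 1 / 1047        # planet-to-star mass ratio (Jupiter/Sun)
a_planet_au = 5.2         # planet's orbital radius, in AU
distance_pc = 10.0        # assumed distance from Earth, in parsecs

# The star circles the common center of mass at a much smaller radius.
a_star_au = a_planet_au * m_ratio

# By the definition of the parsec, angle in arcseconds =
# (orbit size in AU) / (distance in parsecs).
wobble_arcsec = a_star_au / distance_pc
print(f"Star's wobble: {wobble_arcsec * 1000:.2f} milliarcseconds")
# ~0.50 milliarcseconds: measurable only if the planet is massive.
```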
The other (and to date more successful) method of detection relies on the periodic shift in the wavelengths of light that we receive from a star and planet orbiting their common center of mass. When the star is approaching us, because the planet on the far side of that center is receding, the star's light will be shifted toward the blue. When the star is receding, because the planet is approaching, the light will be shifted toward the red. The tiny alternation between these two cases allows us, from the wavelength changes in the star's light, to infer the existence of a planet in orbit around it.
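A rough sketch again gives a feel for the size of the effect. Using assumed Sun-and-Jupiter figures, the star's reflex speed and the resulting fractional wavelength shift work out as follows:

```python
import math

# Reflex velocity of a Sun-like star due to a Jupiter-like planet.
# Illustrative, assumed values only.
AU_M = 1.496e11                  # meters per astronomical unit
YEAR_S = 3.156e7                 # seconds per year
C = 2.998e8                      # speed of light, m/s

a_star_m = (5.2 / 1047) * AU_M   # star's orbit about the barycenter
period_s = 11.86 * YEAR_S        # Jupiter's orbital period, in seconds

v_star = 2 * math.pi * a_star_m / period_s   # star's orbital speed
shift = v_star / C                           # fractional shift, dl/l

print(f"Star's speed: {v_star:.1f} m/s")     # ~12.5 m/s
print(f"Fractional wavelength shift: {shift:.1e}")   # ~4e-8
```

A shift of a few parts in a hundred million is why only a Jupiter-sized companion announces itself this way.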
Since both methods of detection depend for their success on the planet’s mass being an appreciable fraction of the star’s mass, it is no surprise that we are able to detect only the existence of massive planets, Jupiter-sized or bigger. And so far as rogue worlds are concerned, far from any stellar primary, our methods for the detection of extra-solar planets are no use at all.
The solar focus.
We go to general relativity again. According to that theory, the gravitational field of the Sun will bend light beams that pass by it (Newtonian theory, it turns out, also predicts a similar effect, smaller by a factor of two). Rays of light coming from a source at infinity and just missing the Sun will be bent the most, and they will converge at a distance from the Sun of 550 astronomical units, which is about 82 billion kilometers. To gain a feeling for that number, note that the average distance of the planet Pluto from the Sun is 5.9 billion kilometers; the solar focus, as the convergence point is known, is a fair distance out.
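The 550 AU figure can be checked from the relativistic bending angle. For a ray with closest approach b, general relativity gives a deflection of 4GM/(c²b); for so small an angle, the ray crosses the axis at about F = b/θ. A quick Python check, taking b as the solar radius:

```python
# Distance to the solar focus for light grazing the Sun's limb.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.963e8      # solar radius, m (closest approach b)
AU_M = 1.496e11      # meters per astronomical unit

theta = 4 * G * M_SUN / (C**2 * R_SUN)   # GR bending angle, radians
focus_m = R_SUN / theta                  # small-angle focal distance

print(f"Bending angle: {theta:.2e} rad")        # ~8.5e-6 rad (1.75 arcsec)
print(f"Solar focus: {focus_m / AU_M:.0f} AU")  # ~548 AU
print(f"            {focus_m / 1e12:.1f} billion km")
```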
Those numbers apply for a spherical Sun. Since Sol rotates and so has a bulge at its equator, the Sun considered as a lens is slightly astigmatic.
If the source of light (or radio signal, which is simply another form of electromagnetic wave) is not at infinity, but closer, then the rays will still be converged in their passage by the Sun, but they will be drawn to a point at a different location. As McAndrew correctly points out in the eighth chronicle, a standard result in geometrical optics applies. If a lens converges a parallel beam of light at a distance F from the lens, then light starting at a distance S from the lens will be converged at a distance D beyond it, where 1/F = 1/S + 1/D.
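A minimal sketch of that formula in Python, solving for D given F and S (the 5,500 AU source distance is just an assumed example):

```python
def focus_distance(f_au: float, s_au: float) -> float:
    """Image distance D from the lens formula 1/F = 1/S + 1/D, all in AU."""
    return 1.0 / (1.0 / f_au - 1.0 / s_au)

F = 550.0      # solar focus for a source at infinity, AU
S = 5500.0     # assumed distance of the source, AU

print(f"Focus falls at {focus_distance(F, S):.0f} AU")   # ~611 AU
```

As S grows without bound, D falls back to the 550 AU figure for a source at infinity, as it should.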
This much is straightforward. The more central element of this chronicle involves far more speculation. When (or, if you prefer, if) will it be possible to produce an artificial intelligence, an “AI,” that rivals or surpasses human intelligence?
How you answer that question depends on which writers you believe. Some, such as Hans Moravec, have suggested that this will happen in fifty years or less. Others, while not committing to any specific date, still feel that it is sure to come to pass. Our brains are, in Marvin Minsky’s words, “computers made of meat.” It may be difficult and take a long time, but eventually we will have an AI able to think as well as, or better than, we do.
However, not everyone accepts this. Roger Penrose, whom we have already mentioned in connection with energy extraction from kernels, has argued that an AI will never be achieved by the further development of computers as we know them today, because the human brain is “non-algorithmic.”
In a difficult book that was a surprising best-seller, The Emperor’s New Mind (1989), he claims that some functions of the human brain will never be duplicated by computers developed along today’s lines. The brain, he asserts, performs some functions for which no computer program can be written.
This idea has been received with skepticism and even outrage by many workers in the field of AI and computer science. So what does Penrose say that is so upsetting to so many? He argues that human thought employs physics and procedures drawn from the world of quantum theory. In Penrose’s words, “Might a quantum world be required so that thinking, perceiving creatures, such as ourselves, can be constructed from its substance?”
His answer to his own question is, yes, a quantum world-view is required. In that world, a particle does not necessarily have a well-defined spin, speed, or position. Rather, it has a number of different possible positions or speeds or spins, and until we make an observation of it, all we can know are the probabilities associated with each possible spin, speed, and position. Only when an observation is made does the particle occupy a well-defined state, in which the measured variable is precisely known. This change, from undefined to well-defined status, is called the “collapse of the quantum mechanical wave function.” It is a well-known, if not well-understood, element of standard quantum theory.
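The idea can be illustrated with a toy two-state system. The sketch below (illustrative Python, not anyone’s model of the brain) holds a “spin” in a superposition of two states; the squared amplitudes give the probabilities, and a measurement collapses the superposition to one definite value.

```python
import random

# Toy wave-function collapse for a two-state "spin" system.
# Amplitudes for spin-up and spin-down; probabilities are amplitude^2.
amp_up, amp_down = 0.6, 0.8      # 0.36 + 0.64 = 1, so the state is normalized

p_up = amp_up**2                 # probability of observing "up"

def measure() -> str:
    """One observation: the superposition collapses to a single
    definite state, with the quantum-mechanical probabilities."""
    return "up" if random.random() < p_up else "down"

print(f"Before measurement: P(up) = {p_up:.2f}, P(down) = {1 - p_up:.2f}")
print(f"After measurement: spin is definitely {measure()}")
```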
What Penrose suggests is that the human brain itself is a kind of quantum device. In particular, the same processes that collapse the quantum mechanical wave function in sub-atomic particles are at work in the brain. When humans are considering many different possibilities, Penrose argues, we are operating in a highly parallel, quantum mechanical mode. Our thinking resolves, and “collapses to a thought,” when the wave function collapses; at that moment the many millions or billions of possibilities become a single definite idea.
This is certainly a peculiar notion. However, when quantum theory was introduced in the 1920s, most of its ideas seemed no less strange. Now they are accepted by almost all physicists. Who is to say that in another half-century, Penrose will not be equally accepted when he asserts, “there is an essential non-algorithmic ingredient to (conscious) thought processes” and “I believe that (conscious) minds are not algorithmic entities”?
Meanwhile, almost everyone in the AI community (who, it might be argued, are hardly disinterested parties) listens to what Penrose has to say, then dismisses it as just plain wrong. Part of the problem is Penrose’s suggestion as to the mechanism employed within the brain, which seems bizarre indeed.
As he points out in a second book, Shadows of the Mind (Penrose, 1994), he is not the first to suggest that quantum effects are important to human thought. Herbert Fröhlich noted in 1968 that there is high-frequency microwave activity in the brain, produced, he said, by a biological quantum resonance. In 1992, John Eccles proposed a brain structure called the presynaptic vesicular grid, a kind of crystalline lattice in the brain’s pyramidal cells, as a suitable site for quantum activity.
Penrose himself favors a different location and mechanism. He suggests, though not dogmatically, that the quantum world is evoked in elements of a neuron known as microtubules. A microtubule is a tiny tube, with an outer diameter of about twenty-five nanometers and an inner diameter of fourteen nanometers. The tube is made up of peanut-shaped objects called tubulin dimers, each containing about ten thousand atoms. Penrose proposes that each dimer is a basic computational unit, operating through quantum effects. If he is right, treating the neuron as the brain’s basic computing element grossly underestimates the brain’s power: there are about ten million dimers per neuron, and because of their tiny size each one ought to operate about a million times as fast as a neuron can fire. Only with such a mechanism, Penrose argues, can the rather complex behavior of a single-celled animal such as a paramecium (which totally lacks a nervous system) be explained.
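Taking the numbers in that argument at face value, the implied gain in raw computing power is easy to tally. A back-of-envelope sketch, using Penrose’s estimates as quoted above rather than any measured figures:

```python
# How much raw computing power the microtubule picture would add,
# using the rough figures quoted in the text (Penrose's estimates).
dimers_per_neuron = 1e7      # ~ten million tubulin dimers per neuron
speed_factor = 1e6           # each dimer ~a million times faster than
                             # a neuron can fire

raw_gain_per_neuron = dimers_per_neuron * speed_factor
print(f"Raw operations per neuron: x{raw_gain_per_neuron:.0e}")
# x1e+13: thirteen orders of magnitude beyond the neuron-as-switch view.
```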