Everything posted by Time02112

  1. RE: The omega man, something they didn't teach you in math school! Sliced-up pi 25 Nov 00 Colin Percival, a 19-year-old mathematics student at Simon Fraser University in British Columbia, has set a new record by working out the quadrillionth binary digit of pi, the ratio of a circle's circumference to its diameter. Percival used a formula that expresses pi as an infinite sum, and combined the results from 1734 computers in more than 50 countries linked via the Internet. "By splitting the sum into sub-ranges-first million terms, next million terms, and so on-it was easy to split the calculation between machines," he explains. From New Scientist magazine, vol 168 issue 2266, 25/11/2000, page 11
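The digit-extraction formula Percival actually ran differs in detail, but the divide-and-share idea he describes can be sketched with the well-known Bailey-Borwein-Plouffe series for pi, whose terms are independent and so can be summed in sub-ranges by separate machines (a minimal Python sketch, not Percival's code):

```python
import math

# Bailey-Borwein-Plouffe series: pi = sum over k of
# (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)) / 16^k
def bbp_term(k):
    return (4 / (8*k + 1) - 2 / (8*k + 4)
            - 1 / (8*k + 5) - 1 / (8*k + 6)) / 16**k

def partial_sum(lo, hi):
    # one "machine" sums the terms in its assigned sub-range
    return sum(bbp_term(k) for k in range(lo, hi))

# three hypothetical machines, each taking its own sub-range of terms
estimate = sum(partial_sum(lo, hi) for lo, hi in [(0, 4), (4, 8), (8, 12)])
```

Because each term depends only on its index k, the sub-range totals can be computed in any order, on any machine, and simply added at the end.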
  2. Universe in the balance 16 Dec 00 At last we know just how much the cosmos weighs. The answer shows that theories of the Universe's origin are spot on, says cosmologist Jeff Peterson. Trouble is, we still haven't a clue what most of the stuff is made from HOW do you weigh the Universe? Astronomers have been asking this question for decades, and using every trick they can think of to get at the answer. Frustratingly, the results never added up. Different techniques gave different answers. Now a new cosmic weight-scale has been pressed into service to try to resolve the conundrum. It's the faint afterglow of the big-bang fireball in which the Universe was born. This glow can still be seen in every part of the sky. Map its structure, the idea goes, and you can work out the cosmic mass. It isn't as easy as it sounds. The structure in this afterglow-the cosmic microwave background (CMB)-is very subtle. What's more, from the surface of the Earth the faint features of the CMB are obscured by the dirty window of our damp, cloudy atmosphere. To get round this, researchers have set up shop in some of the most arid deserts on the planet: the Atacama plateau in Chile, for instance, and high on the dry, icy plateau at the South Pole, site of the telescope built by my research team. Others have suspended their telescopes from helium-filled balloons and floated them high into the stratosphere, above most of the water vapour that causes the problems. This year, all these efforts are finally bearing fruit. Thanks to a flurry of results published in the past 18 months or so, we finally know what the Universe weighs. And the answer is great news for theorists. It tallies with their long-held conviction that the Universe began with a dramatic expansion known as inflation. However, there's bad news too. The new results imply that our Universe is dominated by strange forms of matter that we can't see and don't understand. 
It was back in 1981 that Alan Guth from the Massachusetts Institute of Technology first proposed that an episode of energy release that he called "inflation" happened in the first minute or so of the Universe's existence. During inflation, the part of the Universe we can see today swelled by a factor of 10^60. Then, so the theory goes, the Universe's expansion slowed to a more normal rate. Why propose something that sounds so strange? Well, it solves lots of thorny puzzles in cosmology. In particular, it explains why the Universe seems to be flat, rather than curved. It's hard to picture a three-dimensional universe being curved, but space in any number of dimensions can have positive curvature, like a ball, or negative curvature, like a saddle. Whether the Universe is flat or curved depends on what it weighs-or more precisely, on its density. If the density is just right, the Universe will be flat. If it's higher than this critical value, the gravitational pull of the matter forces space to have positive curvature. If it's lower than the critical value, space is negatively curved. And here's the problem cosmologists faced before the inflation idea appeared. If you start off with a perfectly flat universe early on, it stays flat forever. If, on the other hand, space starts off slightly curved, it quickly becomes dramatically more curved. It's almost impossible for a universe to hover close to flatness for any length of time unless it has no curvature at all. Even in 1981, the signs seemed to be that the density of the Universe was at least somewhat close to the critical value. So some process early on must have made the Universe flat. Inflation fits the bill perfectly. It automatically creates a flat Universe because it stretches out any wrinkles in the curvature-just as blowing up a balloon flattens out its surface. Inflation fills space with material whose density has precisely the critical value. 
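The critical density mentioned above follows from a standard formula, rho_c = 3H^2/(8*pi*G), where H is the Hubble constant. A quick sketch, assuming a Hubble constant of about 70 km/s/Mpc (a value not given in the article):

```python
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22     # assumed ~70 km/s/Mpc, converted to s^-1

# critical density separating positive from negative curvature
rho_c = 3 * H0**2 / (8 * math.pi * G)
print(rho_c)  # roughly 9e-27 kg per cubic metre
```

That works out to only a few hydrogen atoms per cubic metre, which is why weighing the Universe is so hard in the first place.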
So theorists assumed that inflation must have happened and that the Universe must be at its critical density. In their view, all that was left to do was confirm this by observation. The trouble is that for decades optical observations have thrown up results that fall short of the critical density. In their efforts to inventory all the matter in the Universe, astronomers have mapped the rotational velocities of galaxies to see how much matter was holding them together. They have also looked at clusters of galaxies, and even measured how light is bent by gravity as it passes massive objects on its way to Earth. Over and again, they measured a density that was close to, but still crucially shy of, the critical value. There seemed to be only 30 per cent of the expected matter out there. That's where the microwave background comes in. Imprinted upon it are the frozen images of a time when the Universe rang with vibrations. These vibrations are the key to weighing the Universe. A hundred thousand years after the start of the big bang, conditions were similar to those inside the Sun today. An almost uniform plasma of electrons and hydrogen and helium ions filled the entire Universe, all bathed in a brilliant glow of light-the blaze of the big bang itself. At this early stage, the free electrons played a key role. They scattered the photons so that they careened from free electron to free electron like a relativistic pinball machine, rendering the Universe opaque. Meanwhile, throughout the Universe matter was gradually gathering around the areas of slightly higher density that were eventually to become the galaxies and clusters that we see in the Universe today. Pulled by gravity, matter fell towards these slightly denser regions. But, bombarded by the scattering photons, it was forced out again. In and out the plasma bounced, never fully collapsing, but never quite pulling out of these gravitational hot spots. 
The material of the early Universe quivered like a shaken bowl of jelly. Then, 300,000 years after the big bang, the slowly falling temperature of the Universe reached 4500 kelvin. Electrons no longer had enough energy to resist being captured by nuclei. Atoms formed, and because photons had no more free electrons to scatter off, the Universe became transparent. But the photons did not disappear, they simply continued in whatever direction their last scattering sent them. Some of these photons happened to scatter in our direction and we can still detect them today. They make up the CMB and they have been travelling unimpeded towards us for almost 12 billion years. Imprinted on this afterglow should be an image of the compressed and rarefied regions frozen at age 300,000 years, showing up as bright and dim regions on the sky. Measure that pattern, the idea goes, and you learn the density of the Universe. Here's how it works. Different-sized regions had different periods of oscillation-the smaller the region, the faster it oscillated. For instance the largest patches had not even completed their first "bounce" when the Universe became transparent, and the smallest patches had been through several cycles. It's the regions that were exactly halfway through their first oscillation cycle when the free electrons disappeared that should show up most strongly in the microwave background. "Halfway through a cycle" describes the point at which the material was at its maximum compression, giving the strongest contrast against the sky. Theorists have worked out exactly how big such regions would have been 300,000 years after the big bang. Knowing how the Universe has expanded, they can also work out how big the same regions should appear on the sky today. Here's where the connection with the Universe's weight comes in. Those regions of compression look bigger to us than they would if the Universe were low-density. 
That's because matter exerts a gravitational pull on light, curving its trajectory. As the microwave background photons travelled towards us, their paths were bent by the matter in the Universe. The more matter there is in the Universe, the more the light paths are bent and the bigger the regions will appear on the sky. So to weigh the Universe, all you have to do is calculate how big those oscillating regions must have been, see how big they actually look in the microwave background, and work out how much matter is needed to create that distortion in the image (see "Good vibrations"). During the 1990s a series of CMB observations began to show that the sky did indeed contain the signature of those ancient wobbles. But for most of the early microwave telescopes, the images were too smeared-out to resolve the individual bright and dim patches. Then in 1998, my telescope at the South Pole-called Viper-and the Mobile Anisotropy Telescope in the Atacama Desert each mapped out a few square degrees of sky with much higher resolution. In both sets of results, the half-cycle regions seemed to be present. But the observations covered very little sky and it was hard to tell if the structure they were finding was truly representative. Then in April and May this year results from two balloon-borne telescopes, Boomerang and MAXIMA, were reported. Launched from the McMurdo Station on Ross Island, Antarctica, the Boomerang telescope spent 10 days riding the polar stratospheric vortex in a long arc about the South Pole. By the time it returned, it had mapped a whopping 400 square degrees-around one per cent of the sky-which is plenty enough to see whether the results are representative. In addition, even though the MAXIMA telescope only had a one-night flight from Palestine, Texas, the team succeeded in mapping 100 square degrees. In the data from each of these two experiments the half-cycle regions stand out strongly (see Graph). 
Between them, the two projects have enough data to make an accurate determination of the density of the Universe. (Convert this to a weight by considering the volume of the visible Universe and you get 100 trillion trillion trillion trillion tonnes, give or take a few kilograms.) The measured density of the Universe matches the critical value to within about 6 per cent. It looks as though the balloon projects have nailed it: the Universe is flat, and the theorists and their ideas about inflation seem to be right. So is cosmology now all figured out? Far from it. Our cosmological models are full of gaps. For one thing, there is the discrepancy between the results from optical observers and the microwave background telescopes. It's not necessarily a conflict-they may both be right. The optical observations focus on concentrations of matter such as stars and galaxies. In contrast, the cosmic background reveals the average density not just of matter, but of energy too. Energy exerts a gravitational pull on the paths of CMB photons just as matter does. And the latest idea is that the missing component of the Universe's weight comes from some type of dark energy (New Scientist, 11 April 1998). Still, nobody knows for sure what this energy is, or why it has the value it does. Then there's the problem that optical observers can't explain the nature of all the matter they measure. They know that some of it is just ordinary stuff like stars and planets. But they also require at least five times as much exotic "dark" matter as ordinary matter to explain the way that galaxies rotate, and to explain the fast orbits of galaxies within clusters. Could the Universe really have two mysterious ingredients, dark matter and dark energy? There are also many open issues within inflation theory. Even if current observations point to an inflationary episode in the history of the Universe, they don't tell us how inflation occurred or at what temperature. 
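The quoted figure of 100 trillion trillion trillion trillion tonnes (10^50) can be sanity-checked by multiplying the critical density by the volume of a sphere of roughly 15 billion light years. A rough sketch with assumed round-number inputs:

```python
import math

G = 6.674e-11                   # m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22       # assumed ~70 km/s/Mpc, in s^-1
rho_c = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3

ly = 9.46e15                    # metres in one light year
radius = 15e9 * ly              # visible Universe, taken as ~15 billion ly
volume = 4 / 3 * math.pi * radius**3

mass_tonnes = rho_c * volume / 1000
print(f"{mass_tonnes:.1e}")     # on the order of 1e50 tonnes
```

The answer lands on the order of 10^50 tonnes, matching the article's figure (give or take a few kilograms).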
So far, we don't have a hint as to what sent the big bang booming. Some cosmologists are so dissatisfied with all these mysterious ingredients they prefer to question the laws of gravity. Stacy McGaugh of the University of Maryland has recently shown that we can understand our Universe without the need for exotic dark matter if we accept that gravity might be slightly higher at low accelerations than Newton or Einstein predict. However, even with a modified theory of gravity, McGaugh needs some kind of dark energy to explain the cosmic observations. It looks as though it will be some time yet before the Universe gives up all its secrets. Good vibrations IN THE hot, early Universe, matter tried to collapse into regions of higher density, where the gravitational pull was stronger. But the pressure of photons left over from the big bang pushed the matter outwards again. In and out it bounced, in a series of well-defined oscillations. When the Universe had cooled enough to become transparent, the photons trapped in the hot plasma were suddenly free to travel through space. Frozen into this microwave background-the faint afterglow of the big bang-is the pattern of oscillations that existed when the Universe reached the critical temperature. Researchers can measure the imprint of these oscillations in the microwave background. Small regions oscillated more quickly than large ones, so different sized regions were at different points in their "bounces" when the imprints were frozen in. Because the cycles tended to die down in amplitude after the initial collapse and rebound, a region which we see at the maximum compression of its first bounce will show up more strongly than one at the same point of its second cycle. Hidden in images of the microwave background are different frequencies corresponding to oscillations by different-sized regions. Researchers use mathematical techniques to filter out these frequencies. 
The result of this process is called a power spectrum, a plot displaying the amount of each frequency present. Searching for a particular oscillation amounts to searching for a peak in the power spectrum. Results published earlier this year from two experiments-Boomerang and MAXIMA-show a strong first peak in the power spectrum (see Graph). This corresponds to regions that had gone through half a cycle when the imprint was frozen into the microwave background. Because they were caught at their maximum compression, and hence their maximum density contrast, they are the easiest to see. By measuring the frequency of this peak, researchers can work out how big the half-cycle regions now look to us on the sky. They look bigger than they would in a low-density universe, because the light has been bent on its way to us by the gravitational pull of intervening material. That shifts the peak to a lower frequency. Thus the position of the peak indicates how much material there is in the Universe, and hence how much it weighs. In the future it should be possible to confirm this result, and learn much more, by looking for more peaks in the spectrum. To do this, NASA has built the Microwave Anisotropy Probe. Scheduled for launch in the middle of next year, MAP will produce a detailed image of the entire sky. Hopefully, it will spot the second peak in the spectrum, corresponding to regions that had gone through one entire cycle and then overshot slightly. If so, that will help pin down the nature of the dark matter in the Universe. It may also see the third peak-due to regions that were at maximum compression on their second oscillation. This would improve our measurement of the density of the Universe and fill in details of what the conditions were when the vibrations started. Meanwhile, the telescopes at less lofty altitudes continue the quest. Another Antarctic balloon launch, carrying the "Top Hat Telescope", is planned for January. 
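The filtering step described above is, in essence, a Fourier transform: different-sized regions contribute different frequencies, and the strongest contribution shows up as a peak in the power spectrum. A toy sketch using a naive discrete Fourier transform on a made-up one-dimensional "sky" signal (not the teams' actual analysis pipeline):

```python
import cmath, math

def power_spectrum(signal):
    # naive discrete Fourier transform: power in each frequency bin
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n))) ** 2
            for f in range(n // 2)]

# toy sky map: a pure oscillation at 5 cycles across the sample
n = 64
signal = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
spectrum = power_spectrum(signal)
peak = max(range(1, n // 2), key=lambda f: spectrum[f])
print(peak)  # 5: the dominant oscillation stands out as a spectral peak
```

Reading off the position of the peak is the one-dimensional analogue of measuring where the first peak falls in the CMB power spectrum.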
Two new interferometer telescopes, DASI at the South Pole and CBI at Atacama, have been collecting data and should weigh in soon with their results. And a new receiver, ACBAR, is being installed on Viper which will give it a resolution three times greater than MAP. Further down the road, ESA plans to launch the Planck satellite in 2007. Planck will image the entire sky with a sensitivity better than MAP, and with twice the resolution. Jeff Peterson From New Scientist magazine, vol 168 issue 2269, 16/12/2000, page 26
  3. Black whole 13 Jan 01 Someone has cleverly weighed the Universe at last (16 December 2000, p 26). Last time I tried, I couldn't get it to stay on the weighbridge. It just kept floating away. I take it that what you really meant was that someone has determined its mass. Recently, I estimated the mass. I was out by a factor of 2. Sorry about that. Using the latest determination, if we were to squeeze all of that mass into a black hole, the Schwarzschild radius of the hole (the distance from its centre to its effective edge) would be about 15.6 billion light years. The latest thinking puts the age of the Universe at about 15 billion years. Not long ago, therefore-and possibly still-the Universe was inside a black hole. The matter inside a black hole collapses to a singularity. The Universe is supposed to have been expanding ever since the big bang. Will someone please explain this flat contradiction? Due to an editing error, we did indeed say "weight" on page 29, when of course we meant "mass". The person responsible has been taken out and shot-Ed P. Warlow From New Scientist magazine, vol 169 issue 2273, 13/01/2001, page 55
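Warlow's figure is easy to check: the Schwarzschild radius is r_s = 2GM/c^2. A quick sketch, taking the visible Universe's mass to be roughly 10^53 kg (an assumed round number consistent with the preceding article's 10^50 tonnes):

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
ly = 9.46e15     # metres per light year

M = 1e53                     # assumed mass of the visible Universe, kg
r_s = 2 * G * M / c**2       # Schwarzschild radius in metres
print(r_s / ly / 1e9)        # ~15.7 billion light years
```

With these inputs the radius comes out close to the 15.6 billion light years in the letter.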
  4. There's no such time as the present 13 Jan 01 THE perception of "now" varies from person to person, researchers in Britain have found. Jim Stone and his colleagues at the University of Sheffield looked at differences in the time it takes for audio and visual stimuli to reach people's consciousness. They showed volunteers a red light and played a tone, with anything up to a quarter of a second between the two. Sometimes the light came first, sometimes the sound and sometimes they occurred simultaneously. The test was repeated 1000 times for each of the 17 volunteers, who were asked to say whether the light and the sound happened at exactly the same time. Stone was surprised to find that some people reported the events as simultaneous when the light preceded the sound by up to 150 milliseconds. Others did so when the sound came before the light. To find out if people take into account the time it takes sounds to travel to them from distant sources, Stone repeated the experiment with the sound coming from about 4 metres away, taking an extra 11 milliseconds to reach the volunteers. None of them took the extra distance into account when reporting simultaneous events, Stone found. But he was astonished by how consistent each individual's judgements remained-exactly 11 milliseconds off their original judgements. It doesn't seem to matter if different people have different ideas about whether events are simultaneous, Stone concludes-but personal consistency is vital. "It should be rock-solid stable," he says, "otherwise you wouldn't be able to play ping-pong." Alison Motluk From New Scientist magazine, vol 169 issue 2273, 13/01/2001, page 17 Further reading: More at: Proceedings of the Royal Society B (vol 268, p 31)
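The extra 11 milliseconds in Stone's follow-up experiment is just the sound's travel time over the added distance, at roughly 343 m/s in air (an assumed textbook value, not one given in the article):

```python
speed_of_sound = 343.0   # m/s in air at room temperature (assumed)
distance = 4.0           # metres, as in the experiment
delay_ms = distance / speed_of_sound * 1000
print(round(delay_ms, 1))  # ~11.7 ms, the article's "extra 11 milliseconds"
```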
  5. Does the key to quantum computing lie in freezing a light beam? A PULSE of light can be stopped dead, and then sent on its way again at the flick of a switch, say two American research teams. Their achievement takes us a step closer to quantum computers, because it provides a way to pluck quantum information from a beam of light without having to keep individual atoms in a fragile quantum state. Light travels through empty space at 300,000 kilometres per second, or somewhat slower in a dense medium such as glass or water. In 1999, Lene Hau of Harvard University stunned physicists by slowing light to a few metres per second (New Scientist, 20 February 1999, p 10). Now Hau has gone one step further and brought light to a complete standstill in a specially prepared gas of cold sodium atoms. At the same time, a team at the Harvard-Smithsonian Center for Astrophysics has reported achieving similar results in a hot gas of rubidium atoms. According to Ron Walsworth of the Harvard-Smithsonian team, similar techniques could play a key role in future super-fast quantum computers. Such machines will need to transfer quantum information from light beams to atoms for processing. Previous attempts to do this have used light to push individual atoms into an excited state. But these states are so delicate they are liable to be destroyed by background noise. In the latest experiments, when the light stops, the information in its electromagnetic fields is stored in the arrangement of many gas atoms. "We have over 10^12 atoms, which makes the state very robust," says David Phillips of the Harvard-Smithsonian team. This means the information can be retrieved with 100 per cent efficiency. The key to stopping light is to nudge the gas atoms into a "dark state" in which their electrons are unable to jump up to higher energy levels. 
This means that the atoms cannot absorb light, so when the researchers shine a pulse of light into the gas it interacts with the "spin" of the gas nuclei instead. This is what slows the pulse down. Both groups used a second carefully tuned laser beam, known as the coupling beam, to create a gas in a dark state. The light pulse's speed depends on the intensity of the coupling beam. The dimmer the beam, the slower the pulse travels, and switching off the coupling beam brings the light to a complete stop. The researchers found that they could set the trapped light pulse moving again by restoring the coupling beam. The tricky part is switching off the coupling beam without destroying the dark state, says Mikhail Lukin of Harvard-Smithsonian, who led the theoretical work which inspired both experiments. But Hau says her team found that "you can slam it on and off." Either way, "everybody thought it was pretty wild," says Seth Lloyd, a quantum computing engineer from the Massachusetts Institute of Technology who attended the Physics of Quantum Electronics conference in Utah last week where Lukin presented his experimental results. Engineers like Lloyd would prefer to be able to make their quantum computers out of a solid, rather than a gas. Phil Hemmer of the Air Force Research Laboratory at Hanscom in Massachusetts may have the answer. He has slowed light in a crystal of yttrium silicate, and is about to try stopping it completely using the new technique. "Now they've shown it's possible, the next step is to show it's practical," he says. Their success isn't guaranteed. In a solid, some atoms won't settle into a dark state and could absorb the pulse. Hemmer plans to use a third laser beam to dump the uncooperative atoms out of the way into a different energy level. Eugenie Samuel From New Scientist magazine, vol 169 issue 2275, 27/01/2001, page 4 Further reading: More at: Nature (vol 409, p 490), Physical Review Letters (vol 86, p 783)
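The coupling-beam behaviour described above can be caricatured with a toy scaling in which the group velocity shrinks as the coupling beam dims and vanishes when it is switched off. This is an illustrative sketch only, not the teams' actual equations, and the parameter values are invented:

```python
c = 2.998e8  # vacuum speed of light, m/s

def group_velocity(coupling_intensity, atom_light_coupling):
    # toy scaling: a dimmer coupling beam means a slower pulse,
    # and v_g -> 0 as the coupling intensity -> 0 (the pulse "stops")
    return c * coupling_intensity / (coupling_intensity + atom_light_coupling)

slow = group_velocity(1.0, 1e7)      # dim coupling beam: a few tens of m/s
stopped = group_velocity(0.0, 1e7)   # coupling beam switched off: 0.0
print(slow, stopped)
```

The point of the sketch is only the qualitative behaviour: restoring the coupling intensity restores a nonzero group velocity, which is how the trapped pulse is set moving again.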
  6. Dangerous reflections 10 Feb 01 Space mirrors that reflect sunlight back to Earth, such as Russia's Znamya satellite, could blind skywatchers on the ground, say Canadian astronomers. James Laframboise of York University and Ralph Chou of the University of Waterloo calculated that Znamya would appear as bright as the Sun to someone near the centre of the beam. If they were gazing through a telescope or binoculars at the time there would be "a serious risk of eye damage", the researchers say in the Journal of the Royal Astronomical Society of Canada (vol 94, p 237). From New Scientist magazine, vol 169 issue 2277, 10/02/2001, page 19
  7. Rivers in Time. This man thinks it's all over for us! He Thinks it's All Over 17 Feb 01 Rivers in Time by Peter Ward, Columbia University Press, $29.95, ISBN 0231118627 IS THE Earth going through a mass-extinction event? Peter Ward from the University of Washington thinks so, and sets out to make sense of the present by unravelling the past. In Rivers in Time, Ward looks back at three mass extinctions that have shaken the living world, before trying to make sense of a possible modern extinction event. Catastrophism is in vogue again. Although the blurb on the back cover carefully conceals the fact, Ward plays a straight bat in the preface when he admits that the book is really an updated edition of his 1992 publication The End of Evolution. He then gets on with the admirable job of conveying the geological evidence for mass extinctions, clearly and concisely. Ward's writing style is mostly easy to read: I felt as though I was taking part in a conversation, rather than listening to a lecture. But I was disappointed by the scanty, bullet-pointed list of thoughts with which he ends the book, in which he looks at what may be in store for us in the near future. Ward's contentious message appears to be one of complacency. No matter what happens to the rest of the animal kingdom, he says, we human beings are "at the pinnacle of biodiversity" and therefore "extinction-proof". Stuart Clark is director of public astronomy education at the University of Hertfordshire From New Scientist magazine, vol 169 issue 2278, 17/02/2001, page 53
  8. The omega man, something they didn't teach you in math school! CHEW ON THIS! The omega man 10 Mar 01 He shattered mathematics with a single number. And that was just for starters, says Marcus Chown TWO plus two equals four: nobody would argue with that. Mathematicians can rigorously prove sums like this, and many other things besides. The language of maths allows them to provide neatly ordered ways to describe everything that happens in the world around us. Or so they once thought. Gregory Chaitin, a mathematics researcher at IBM's T. J. Watson Research Center in Yorktown Heights, New York, has shown that mathematicians can't actually prove very much at all. Doing maths, he says, is just a process of discovery like every other branch of science: it's an experimental field where mathematicians stumble upon facts in the same way that zoologists might come across a new species of primate. Mathematics has always been considered free of uncertainty and able to provide a pure foundation for other, messier fields of science. But maths is just as messy, Chaitin says: mathematicians are simply acting on intuition and experimenting with ideas, just like everyone else. Zoologists think there might be something new swinging from branch to branch in the unexplored forests of Madagascar, and mathematicians have hunches about which part of the mathematical landscape to explore. The subject is no more profound than that. The reason for Chaitin's provocative statements is that he has found that the core of mathematics is riddled with holes. Chaitin has shown that there are an infinite number of mathematical facts but, for the most part, they are unrelated to each other and impossible to tie together with unifying theorems. If mathematicians find any connections between these facts, they do so by luck. "Most of mathematics is true for no particular reason," Chaitin says. "Maths is true by accident." 
This is particularly bad news for physicists on a quest for a complete and concise description of the Universe. Maths is the language of physics, so Chaitin's discovery implies there can never be a reliable "theory of everything", neatly summarising all the basic features of reality in one set of equations. It's a bitter pill to swallow, but even Steven Weinberg, a Nobel prizewinning physicist and author of Dreams of a Final Theory, has swallowed it. "We will never be sure that our final theory is mathematically consistent," he admits. Chaitin's mathematical curse is not an abstract theorem or an impenetrable equation: it is simply a number. This number, which Chaitin calls Omega, is real, just as pi is real. But Omega is infinitely long and utterly incalculable. Chaitin has found that Omega infects the whole of mathematics, placing fundamental limits on what we can know. And Omega is just the beginning. There are even more disturbing numbers-Chaitin calls them Super-Omegas-that would defy calculation even if we ever managed to work Omega out. The Omega strain of incalculable numbers reveals that mathematics is not simply moth-eaten, it is mostly made of gaping holes. Anarchy, not order, is at the heart of the Universe. Chaitin discovered Omega and its astonishing properties while wrestling with two of the most influential mathematical discoveries of the 20th century. In 1931, the Austrian mathematician Kurt Gödel blew a gaping hole in mathematics: his Incompleteness Theorem showed there are some mathematical theorems that you just can't prove. Then, five years later, British mathematician Alan Turing built on Gödel's work. Using a hypothetical computer that could mimic the operation of any machine, Turing showed that there is something that can never be computed. There are no instructions you can give a computer that will enable it to decide in advance whether a given program will ever finish its task and halt. 
To find out whether a program will eventually halt-after a day, a week or a trillion years-you just have to run it and wait. He called this the halting problem. Decades later, in the 1960s, Chaitin took up where Turing left off. Fascinated by Turing's work, he began to investigate the halting problem. He considered all the possible programs that Turing's hypothetical computer could run, and then looked for the probability that a program, chosen at random from among all the possible programs, will halt. The work took him nearly 20 years, but he eventually showed that this "halting probability" turns Turing's question of whether a program halts into a real number, somewhere between 0 and 1. Chaitin named this number Omega. And he showed that, just as there are no computable instructions for determining in advance whether a computer will halt, there are also no instructions for determining the digits of Omega. Omega is uncomputable. Some numbers, like pi, can be generated by a relatively short program which calculates its infinite number of digits one by one-how far you go is just a matter of time and resources. Another example of a computable number might be one that comprises 200 repeats of the sequence 0101. The number is long, but a program for generating it only need say: "repeat `01' 400 times". There is no such program for Omega: in binary, it consists of an unending, random string of 0s and 1s. "My Omega number has no pattern or structure to it whatsoever," says Chaitin. "It's a string of 0s and 1s in which each digit is as unrelated to its predecessor as one coin toss is from the next." The same process that led Turing to conclude that the halting problem is undecidable also led Chaitin to the discovery of an unknowable number. "It's the outstanding example of something which is unknowable in mathematics," Chaitin says. An unknowable number wouldn't be a problem if it never reared its head. 
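Chaitin's compressibility example above is easy to verify directly: a short description generates a long, patterned string, which is exactly what makes the number computable.

```python
# the "repeat '01' 400 times" program: a few characters of description
# that generate an 800-digit string
number = "01" * 400

assert number == "0101" * 200   # the same string as 200 repeats of "0101"
print(len(number))  # 800
```

Omega is the opposite case: no description shorter than the digits themselves exists, so no program can generate them.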
But once Chaitin had discovered Omega, he began to wonder whether it might have implications in the real world. So he decided to search mathematics for places where Omega might crop up. So far, he has only looked properly in one place: number theory. Number theory is the foundation of pure mathematics. It describes how to deal with concepts such as counting, adding, and multiplying. Chaitin's search for Omega in number theory started with "Diophantine equations"-which involve only the simple concepts of addition, multiplication and exponentiation (raising one number to the power of another) of whole numbers. Chaitin formulated a Diophantine equation that was 200 pages long and had 17,000 variables. Given an equation like this, mathematicians would normally search for its solutions. There could be any number of answers: perhaps 10, 20, or even an infinite number of them. But Chaitin didn't look for specific solutions, he simply looked to see whether there was a finite or an infinite number of them. He did this because he knew it was the key to unearthing Omega. Mathematicians James Jones of the University of Calgary and Yuri Matijasevic of the Steklov Institute of Mathematics in St Petersburg had shown how to translate the operation of Turing's computer into a Diophantine equation. They found that there is a relationship between the solutions to the equation and the halting problem for the machine's program. Specifically, if a particular program doesn't ever halt, a particular Diophantine equation will have no solution. In effect, the equations provide a bridge linking Turing's halting problem-and thus Chaitin's halting probability-with simple mathematical operations, such as the addition and multiplication of whole numbers. Chaitin had arranged his equation so that there was one particular variable, a parameter which he called N, that provided the key to finding Omega. 
When he substituted numbers for N, analysis of the equation would provide the digits of Omega in binary. When he put 1 in place of N, he would ask whether there was a finite or infinite number of whole number solutions to the resulting equation. The answer gives the first digit of Omega: a finite number of solutions would make this digit 0, an infinite number of solutions would make it 1. Substituting 2 for N and asking the same question about the equation's solutions would give the second digit of Omega. Chaitin could, in theory, continue forever. "My equation is constructed so that asking whether it has finitely or infinitely many solutions as you vary the parameter is the same as determining the bits of Omega," he says. But Chaitin already knew that each digit of Omega is random and independent. This could only mean one thing. Because finding out whether a Diophantine equation has a finite or infinite number of solutions generates these digits, each answer to the equation must therefore be unknowable and independent of every other answer. In other words, the randomness of the digits of Omega imposes limits on what can be known from number theory-the most elementary of mathematical fields. "If randomness is even in something as basic as number theory, where else is it?" asks Chaitin. He thinks he knows the answer. "My hunch is it's everywhere," he says. "Randomness is the true foundation of mathematics." The fact that randomness is everywhere has deep consequences, says John Casti, a mathematician at the Santa Fe Institute in New Mexico and the Vienna University of Technology. It means that a few bits of maths may follow from each other, but for most mathematical situations those connections won't exist. And if you can't make connections, you can't solve or prove things. All a mathematician can do is aim to find the little bits of maths that do tie together. 
"Chaitin's work shows that solvable problems are like a small island in a vast sea of undecidable propositions," Casti says. Take the problem of perfect odd numbers. A perfect number is one whose divisors (excluding the number itself) add up to the number. For example, 6 is perfect because its divisors are 1, 2 and 3, and their sum is 6. There are plenty of even perfect numbers, but no one has ever found an odd number that is perfect. And yet, no one has been able to prove that an odd number can't be perfect. Unproved hypotheses like this and the Riemann hypothesis, which has become the insecure foundation of many other theorems (New Scientist, 11 November 2000, p 32), are examples of things that should be accepted as unprovable but nonetheless true, Chaitin suggests. In other words, there are some things that scientists will always have to take on trust.

Unsurprisingly, mathematicians had a difficult time coming to terms with Omega. But there is worse to come. "We can go beyond Omega," Chaitin says. In his new book, Exploring Randomness (New Scientist, 10 January, p 46), Chaitin has now unleashed the "Super-Omegas". Like Omega, the Super-Omegas also owe their genesis to Turing. He imagined a God-like computer, much more powerful than any real computer, which could know the unknowable: whether a real computer would halt when running a particular program, or carry on forever. He called this fantastical machine an "oracle". And as soon as Chaitin discovered Omega-the probability that a random computer program would eventually halt-he realised he could also imagine an oracle that would know Omega. This machine would have its own unknowable halting probability, Omega′. But if one oracle knows Omega, it's easy to imagine a second-order oracle that knows Omega′. This machine, in turn, has its own halting probability, Omega′′, which is known only by a third-order oracle, and so on. According to Chaitin, there exists an infinite sequence of increasingly random Omegas.
"There is even an all-seeing infinitely high-order oracle which knows all other Omegas," he says. He kept these numbers to himself for decades, thinking they were too bizarre to be relevant to the real world. Just as Turing looked upon his God-like computer as a flight of fancy, Chaitin thought these Super-Omegas were fantasy numbers emerging from fantasy machines. But Veronica Becher of the University of Buenos Aires has shown that Chaitin was wrong: the Super-Omegas are both real and important. Chaitin is genuinely surprised by this discovery. "Incredibly, they actually have a real meaning for real computers," he says. Becher has been collaborating with Chaitin for just over a year, and is helping to drag Super-Omegas into the real world. As a computer scientist, she wondered whether there were links between Omega, the higher-order Omegas and real computers. Real computers don't just perform finite computations, doing one or a few things, and then halt. They can also carry out infinite computations, producing an infinite series of results. "Many computer applications are designed to produce an infinite amount of output," Becher says. Examples include Web browsers such as Netscape and operating systems such as Windows 2000. This example gave Becher her first avenue to explore: the probability that, over the course of an infinite computation, a machine would produce only a finite amount of output. To do this, Becher and her student Sergio Daicz used a technique developed by Chaitin. They took a real computer and turned it into an approximation of an oracle. The "fake oracle" decides that a program halts if-and only if-it halts within time T. A real computer can handle this weakened version of the halting problem. "Then you let T go to infinity," Chaitin says. This allows the shortcomings of the fake to diminish as it runs for longer and longer. 
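The cutoff trick can be sketched in a few lines. The toy machine below is an invented stand-in (a prefix-free language in which the program '1' * k + '0' runs for k steps and then halts), not the real computer Becher and Daicz used, but it shows how a time-limited "fake oracle" yields ever-better lower bounds on a halting probability:

```python
from fractions import Fraction

def halts_within(k: int, T: int) -> bool:
    """Toy machine (an assumption for illustration): the program
    '1' * k + '0' runs for k steps and then halts."""
    return k <= T

def omega_approx(T: int, max_k: int = 64) -> Fraction:
    """Chaitin-style lower bound on the toy halting probability:
    each program p that is *seen* to halt within the time cutoff T
    contributes 2 ** -len(p); here len(p) = k + 1."""
    total = Fraction(0)
    for k in range(max_k):
        if halts_within(k, T):
            total += Fraction(1, 2 ** (k + 1))
    return total

# Raising T only ever adds terms, so the fake oracle's estimate grows
# towards the true value as T goes to infinity-exactly the idea above.
print(omega_approx(3))                      # -> 15/16
print(omega_approx(20) > omega_approx(3))   # -> True
```

For this deliberately simple machine the limit is computable; the force of Chaitin's result is that for a universal computer no amount of waiting tells you how close the bound has got.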
Using variations on this technique, Becher and Daicz found that the probability that an infinite computation produces only a finite amount of output is the same as Omega′, the halting probability of the oracle. Going further, they showed that Omega′′ is equivalent to the probability that, during an infinite computation, a computer will fail to produce an output-for example, get no result from a computation and move on to the next one-and that it will do this only a finite number of times. These might seem like odd things to bother with, but Chaitin believes this is an important step. "Becher's work makes the whole hierarchy of Omega numbers seem much more believable," he says. Things that Turing-and Chaitin-imagined were pure fantasy are actually very real.

Now that the Super-Omegas are being unearthed in the real world, Chaitin is sure they will crop up all over mathematics, just like Omega. The Super-Omegas are even more random than Omega: if mathematicians were to get over Omega's obstacles, they would face an even higher barrier as they confronted Becher's results. And that has knock-on effects elsewhere. Becher and Chaitin admit that the full implications of their new discoveries have yet to become clear, but mathematics is central to many aspects of science. Certainly any theory of everything, as it attempts to tie together all the facts about the Universe, would need to jump an infinite number of hurdles to prove its worth. The discovery of Omega has exposed gaping holes in mathematics, making research in the field look like playing a lottery, and it has demolished hopes of a theory of everything. Who knows what the Super-Omegas are capable of? "This," Chaitin warns, "is just the beginning."

Marcus Chown

From New Scientist magazine, vol 169 issue 2281, 10/03/2001, page 28

Further reading:
Exploring Randomness by G. J. Chaitin, Springer-Verlag (2001)
A Century of Controversy Over the Foundations of Mathematics by G. J. Chaitin, Complexity, vol 5, p 12 (2000)
The Unknowable by G. J. Chaitin, Springer-Verlag (1999)
Randomness everywhere by C. S. Calude and G. J. Chaitin, Nature, vol 400, p 319 (1999)
http://www.cs.umaine.edu/~chaitin/
  9. Tricks of the light 07 Apr 01

A 21st-century conjuror is revealing secrets that have lain hidden for 5000 years. Michael Brooks marvels at the magic

TOM MALZBENDER is a master of illusion. Using a strange black dome, he can alter the appearance of any object he chooses. He can instantly coat it with shiny liquid metal. He can even illuminate it from angles that are physically impossible. Malzbender works at the visual computing department of Hewlett-Packard Laboratories in Palo Alto, California. His job is to map the texture of rough surfaces using digital photographs and reproduce those surfaces perfectly as computer graphics in a virtual world. Because of his weird powers, there is something of a queue forming outside his door-a line of researchers who see his abilities as a potential money-spinner. So far, however, Malzbender has eschewed the promise of a profitable future in favour of illuminating the past. He is helping archaeologists revisit the dawn of the written word-by using his powers to reveal long-lost details of ancient civilisations hidden on badly eroded clay tablets.

Malzbender first got the idea of applying his skills to archaeology in 1999, when he attended a lecture given by Bruce Zuckerman, director of the University of Southern California's West Semitic Research Project in Los Angeles. Zuckerman has spent twenty years developing high-resolution photographic techniques for reading cuneiform inscriptions-the earliest known form of writing, dating back as far as 3000 BC-written on clay tablets. Babylonian scribes used sharpened reeds to write on wet clay, but the surfaces of the tablets that Zuckerman is studying have worn away over the millennia and the inscriptions have faded. Even with his new techniques Zuckerman was finding it almost impossible to discover what the scribes had written.
During Zuckerman's talk, Malzbender realised that the technology he and his colleague Dan Gelb had been developing at Hewlett-Packard might help to reveal the hidden details of Zuckerman's cuneiform inscriptions. So after the talk, Zuckerman arranged to give Malzbender a tablet from the university's collection to test the technique. Malzbender quickly set to work. His tools are simple: a computer and a specially designed, lightproof plastic dome about a metre across, with a digital camera mounted in the top. Inside the dome are 50 computer-controlled flash bulbs arranged to provide illumination from a variety of angles. But this simple set-up allows Malzbender to create a computer model of the surface of the tablet that's exact in every detail.

It works like this. Malzbender places the dome over the tablet and flicks a switch. The camera then takes 50 colour digital photographs of the object, each one lit with white light by a different bulb. Small marks in the surface of the tablet reflect and scatter different wavelengths of light in different directions. So by controlling which flash bulbs fire on each shot, every image will show the tablet illuminated with different patterns of light and colour. Malzbender and Gelb's image-processing software then divides each photograph into six million pixels, each pixel corresponding to a single point on the surface of the tablet. The software records the relative brightness and the spectrum of light scattered from each point, and then combines data from all 50 photographs to produce a detailed map of the tablet's response to light.

Reveal the invisible

By correlating the colour and brightness of each pixel with the lighting angle, the software can work out the exact orientation of the tablet's surface at that point. Malzbender then uses software to build a computer model of the tablet from this information.
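Malzbender and Gelb's actual fitting is more sophisticated, but the core step-recovering a surface orientation from brightnesses under known lights-can be sketched as a least-squares problem. All the numbers below are invented for illustration:

```python
import numpy as np

# Unit vectors pointing from the tablet towards three of the dome's
# flash bulbs (a real capture would use all 50).
L = np.array([
    [0.0, 0.0, 1.0],
    [0.8, 0.0, 0.6],
    [0.0, 0.8, 0.6],
])

# Ground truth for one pixel: surface normal and albedo (reflectivity).
true_n = np.array([0.2, -0.1, 0.9])
true_n /= np.linalg.norm(true_n)
albedo = 0.7

# Lambertian model: brightness under each flash = albedo * (light . normal).
I = albedo * (L @ true_n)

# Invert the model: solve L g = I for g = albedo * n, then split the
# magnitude (albedo) from the direction (the surface orientation).
g, *_ = np.linalg.lstsq(L, I, rcond=None)
est_albedo = np.linalg.norm(g)
est_n = g / est_albedo

print(np.allclose(est_n, true_n), round(float(est_albedo), 3))  # -> True 0.7
```

With 50 bulbs the system is heavily overdetermined, so the fit also averages away sensor noise; repeating it for every one of the six million pixels yields the orientation map described above.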
Since this model can tell him exactly how the real tablet responds to light, he can manipulate and study it under virtual lighting conditions to extract extra details from the faded inscriptions. To get the most from his virtual tablet, the first thing he does is alter its appearance, transforming its crumbly, dull clay surface into a highly polished reflective one-as if he had coated it with an ultra-thin layer of molten metal. To do this he simply makes every point on the virtual tablet reflect light more strongly. The human eye can tell a lot more about subtle surface shape if a material is highly reflective. Scratches in metal, for instance, are much more obvious than the same scratches on dull plastic. Then he waves a virtual spotlight across the tablet and looks for markings. Malzbender and Gelb have discovered that they can get even more information from a faded tablet if the pixels only create reflections when the spotlight is aimed directly at them. Now a tiny change in lighting angle dramatically alters the amount of visible detail-if you move the light source slightly to one side, the pixel will go dark. Knowing the orientation of each point on the clay also enables the software to defy the laws of physics. Sometimes, by accentuating shadows, an oblique lighting angle gives the best view of the marks and lines on the tablet, for example. But in the real world there's only so far you can go: once the light source is below the edge of the tablet, nothing will reach its surface. With his virtual tablet, though, Malzbender is not bound by natural laws. "We can put the light source in physically implausible locations," he says. They can move it round the tablet to illuminate it from behind, for example. The team can even simulate the effect of a light suspended between the walls of a millimetre-wide scratch. The results are stunning. Writing that was almost invisible to the naked eye now practically jumps off the clay at them. 
"We were even able to note the fingerprints of the scribe who held the clay while it was still wet," Zuckerman recalls. He was so impressed that last August he persuaded Malzbender to perform his tricks for Walter Bodine, an expert in Babylonian writing at Yale University. The Yale Babylonian Collection comprises 40,000 clay tablets covered with cuneiform script. Because the work is so difficult and time-consuming, the majority of them have not been closely examined or transcribed. Bodine had spent six years painstakingly deciphering and transcribing the faint, crumbling text inscribed on one particular tablet. However, there were some details that simply couldn't be discerned. Until, that is, Malzbender arrived to perform his magic. Bodine says they were able to recover new details almost immediately. Those details are themselves illuminating. Bodine's tablet was a draft contract: around 3100 BC, a Sumerian trader named Ur Ningal was selling slaves. The contract stipulated that, if the goods proved faulty, the buyer could return them for a full refund. Who'd have thought the money-back guarantee had such ancient origins? Bodine is very excited by the information Malzbender's device has revealed. "This has given me a whole new set of data," he says. "I have found quite a number of things I haven't been able to see before." Using the technology is child's play: Malzbender and his team have developed an archaeologist-friendly software interface for their prototype dome. Bodine is looking forward to ditching his anglepoise lamp and magnifying glass. "I sure hope we get one of these devices at Yale," he says. "This will cut the work of years down to months." He could be out of luck, however. "We do plan to continue our collaboration with Bruce Zuckerman and the people at Yale," says Malzbender. But the market probably isn't sufficiently large to warrant Hewlett-Packard opening a production line for the hardware. 
The software, on the other hand, is certain to be exploited. The team is already looking into developing their image-manipulation techniques as plug-ins for commercial image-processing software packages. Managers at General Motors hope the techniques will help in the development of photo-realistic computer graphics for their car designers, allowing a computer to show, for instance, how spray-painted surfaces will look under different lighting conditions. And Malzbender could soon be using his illusions to catch thieves. He is talking with police agencies about applications in forensic science. Would-be criminals beware: Malzbender has already exposed a set of fingerprints that no one had seen for 5000 years. Revealing modern prints-from feet or fingers-might prove as easy as switching on the light. Michael Brooks Malzbender's 3D image enhancer From New Scientist magazine, vol 170 issue 2285, 07/04/2001, page 38
  10. What's the big rush? 07 Apr 01

Light from the oldest supernova ever seen suggests the Universe is expanding faster and faster

THE most distant supernova ever observed appears to have blown its top when the expansion of the Universe was slowing down. Ironically, this observation boosts the idea that the Universe is filled with "dark energy" that stretches space and is now making it expand faster. Adam Riess of the Space Telescope Science Institute in Baltimore and Peter Nugent of the Lawrence Berkeley National Laboratory in California spotted the explosion in Hubble Space Telescope data. Because the Universe is expanding, distant stars and galaxies recede from Earth and their light is stretched out, pushing it towards the red end of the spectrum. Hence, assuming the Universe expands predictably, you can judge how far away a star is by the size of its red shift. Two years ago, two teams of astronomers reported that distant stellar explosions known as type Ia supernovae, which always have the same brightness, appeared about 25 per cent dimmer from Earth than expected from their red shifts. That implied that the expansion of the Universe has accelerated. This is because the supernovae were farther away than they ought to have been if the Universe had been expanding at a steady rate for the billions of years since the stars exploded. But some researchers have argued that other phenomena might dim distant supernovae. Intergalactic dust might soak up their light, or type Ia supernovae from billions of years ago might not conform to the same standard brightness they do today. This week's supernova finding seems to have dealt a severe blow to these arguments against an accelerating Universe. The new supernova's red shift implies it is 11 billion light years away, but it is roughly twice as bright as it should be. Hence it must be significantly closer than it would be had the Universe expanded steadily.
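The arithmetic behind that argument is just the inverse-square law for a standard candle. A back-of-envelope sketch (the flux ratios are the article's rounded figures, not the teams' measured values):

```python
import math

def distance_ratio(flux_ratio: float) -> float:
    """A standard candle's flux falls off as 1/d**2, so a source whose
    observed/expected flux is f sits sqrt(1/f) times the expected distance."""
    return math.sqrt(1.0 / flux_ratio)

# The supernovae reported two years ago: ~25 per cent dimmer than their
# red shifts implied, i.e. ~15 per cent farther away -> acceleration.
print(round(distance_ratio(0.75), 3))   # -> 1.155

# The new record-holder: roughly twice as bright, hence ~30 per cent
# closer than steady expansion would place it -> earlier deceleration.
print(round(distance_ratio(2.0), 3))    # -> 0.707
```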
Neither dust nor changes in supernova brightness can easily explain the brightness of the explosion. Dark energy can, however. When the Universe was only a few billion years old, galaxies were closer together and the pull of their gravity was strong enough to overcome the push of dark energy and slow the expansion. A supernova that exploded during this period would thus be closer than its red shift suggests. Only after the galaxies grew farther apart did dark energy take over and make the Universe expand faster. So astronomers should see acceleration change to deceleration as they look farther back in time. "This transition from accelerating to decelerating is really the smoking gun for some sort of dark energy," Riess says. More data is needed to clinch the case, says Ira Wasserman of Cornell University in Ithaca, New York. "I'd be a little reluctant to put too much faith in one supernova," he says. But the lone observation is enough to rule out other ideas, says Michael Turner of the University of Chicago: "This supernova has driven a stake through the heart of more conventional explanations that try to avoid cosmic speed-up." Adrian Cho From New Scientist magazine, vol 170 issue 2285, 07/04/2001, page 6
  11. Chasing shadows 21 Apr 01 The idea that invisible galaxies haunt the Universe has got astronomers peering into the darkness. But what they find may dismay them, says Stuart Clark GHOSTS gather in the shadows. Just outside the cosy circle of our own galaxy-which is lit by the fires of a hundred billion stars-is a host of wraith-like galaxies made of almost nothing but exotic invisible matter. Instead of shining like celestial beacons across the Universe, they are virtually indistinguishable from the blackness of space. "Dark galaxies might outnumber normal galaxies by a hundred to one," says Neil Trentham of Cambridge University. Trentham, like most astronomers, believes dark galaxies must be out there somewhere. He is attempting the almost impossible task of trying to see these sinister clouds of darkness. But other astronomers warn that the search is a waste of time. Like ghosts, dark galaxies may be a figment of the imagination. If so, our theories of how galaxies form are wrong, and we may have to change our ideas about what makes up most of the matter in the Universe, or rewrite the story of the Universe's first moments. Galaxies are thought to have formed from the sea of gas left behind by the big bang. If some patches of gas were slightly denser than others, their gravity would have pulled in surrounding material. But with the gravity of ordinary gas alone, this process would have taken far too long-galaxies would still be forming even today. So cosmologists have been forced to assume that there is also a lot of invisible matter in the Universe, outweighing the normal matter many times. This "dark matter" can also explain how galaxies spin so fast without breaking apart. A large spherical halo of dark matter surrounding each galaxy could provide enough gravity to balance the spin, gluing the galaxy together. 
Physicists' attempts to merge the fundamental forces of nature have thrown up any number of candidates for the stuff of dark matter (New Scientist, 16 January 1999, p 24), called weakly interacting massive particles, or WIMPs. These hypothetical exotic particles are supposed to have been formed along with normal matter in the furnace of the big bang. They are invisible because they don't feel all the forces that ordinary matter does, and light just ignores them. Astrophysicists usually assume that these dark matter particles are "cold"-that is, fairly heavy and slow moving, so they tend to clump together. According to the conventional theory, this clumpiness allowed cold dark matter to give birth to galaxies. Way back in what astronomers call the dark ages, when the Universe was a mere billion or so years old, cold dark matter gathered itself under gravity into giant blobs called haloes. These then attracted normal matter to form stars, turning into bona fide galaxies.

The problem is that in its simplest form, this theory says there should be an awful lot of little galaxies-the failed relics of galaxy formation that never managed to grow into giant elliptical galaxies or majestic spirals like the Milky Way. Astronomers already know of two species of galactic minnow. The small round galaxies called dwarf spheroidals and their untidier cousins, the dwarf irregulars, both weigh in at about ten million times the mass of the Sun, or only one ten thousandth that of the Milky Way. But there are far too few of them to agree with the theoretical models of cold dark matter, which predict ten or a hundred times as many. To save the theory, astrophysicists assume that these small galaxies do exist-only they're invisible. Somehow, most of the smaller dark-matter haloes must have been unable to form stars. "The smallest dwarf galaxies could be the very rare, one-in-a-hundred cases which, for some reason, do form stars," says Trentham. So what stops stars forming in the remainder?
Here, dark-galaxy pundits split into two camps. Either something stops the gas entering the dark matter halo, or it falls in and then is somehow prevented from making stars. Trentham believes that dark galaxies failed to attract any normal matter because they missed out on a feeding frenzy during the dark ages, when gas was plentiful. "To produce normal galaxies, dark matter haloes grow in the early Universe, pulling in more dark matter and gas," he says. And according to his simulations, still unpublished, the smaller haloes would be celestial late developers. "Imagine a small dark halo growing five to ten billion years after the big bang. The gas between the galaxies has been heated up by the light from stars, and it is now moving so fast that it cannot be pulled in." Others are not convinced. Frazer Pearce of Durham University believes that even long after the dark ages, there was plenty of cold gas around to be captured. Astronomers see traces of dark gas clouds throughout intergalactic space, revealed because they absorb some of the light from the distant celestial beacons known as quasars. And these clouds are relatively cool. "The gas is at 20,000 K," says Pearce. And that's too cool to escape even a modest gravitational pull. "Even when you have gas at tens of millions of kelvin, you can't keep it out of haloes forever." Pearce thinks that these intergalactic clouds are already sitting inside their own very small haloes of dark matter. So what stops them from forming stars? The tiniest galaxies might suffer a kind of boom-bust economy, he suggests. When the small dark haloes form, they attract some gas and stars form. The most massive of these burn quickly and explode as supernovae after just a few million years. That heats the remaining gas and flings it in all directions. A few supernovae could make the gas hot enough to eject it back into intergalactic space. 
"We are hoping that supernovae will blow these clouds to pieces and stop stars forming," says Pearce. The hot, scattered gas will eventually radiate its excess energy, slow down and get pulled in again by the halo, beginning another boom-bust cycle. Each short period of intense star formation, lasting a few tens of millions of years, would be followed by a dormant period of up to a billion years. But once again, there are dissenters. "The idea that it is really easy to blow the gas out of galaxies is problematic," says George Lake of the University of Washington. "People have tried to do simulations, overestimating the effect of supernovae, and they still can't get the stuff flung out. I think the whole idea is misguided." Lake thinks dark galaxies are a mirage. "It is not that these dark galaxies lie below current detection limits, they are just not there. They do not exist at all." The only way to settle the argument is to look for these spectral galaxies. Astronomers have a few ideas about how to catch them (see "Ghost hunting"), but what if, after the trawling is over, they return with empty nets?

Warm dark matter

Pearce believes that the conventional theory of galaxy formation could be repaired. Instead of cold dark matter, the Universe may be filled with another, slightly different strange substance. The lighter a particle is, the faster it is likely to be moving. And fast-moving particles are much less likely to clump together. Very lightweight particles such as neutrinos would be a kind of "hot dark matter". These zippy particles would be so resistant to clumping that they would tend to smooth out structures even on the scale of large galaxies-so they can't make up most of the dark matter in the Universe, or we wouldn't be here. Instead, Pearce thinks a dearth of little dark galaxies could be explained by "warm dark matter", of an intermediate mass and speed.
Warm dark matter would happily form big clumps that make ordinary galaxies, but would move too fast to be captured by the weak gravity of small haloes. The mass of a warm dark matter particle would need to be around 10⁻³³ kilograms-a millionth the mass of a proton. This presents no problem for particle physicists, who can tweak their speculative theories to produce particles of virtually any mass required. But Lake thinks this running repair is useless. He points out that the conventional theory of structure formation also predicts too many biggish objects, larger than galaxies but smaller than the most common kind of galaxy cluster, which contains hundreds of galaxies. It will take more than tinkering to repair this anomaly. Theorists may be able to use warm dark matter to wash out little galaxies, but it won't get rid of these heavier structures. Lake believes that there is something fundamentally wrong with our theories about the early Universe. For structures to form at all in the Universe, there must be some initial variations in the density of gas. Cosmologists think that these variations were created by quantum fluctuations in the first fraction of a second after the big bang, and then magnified by a process called inflation (New Scientist, 16 December 2000, p 26). But how big were these fluctuations? The usual assumption is that, like a fractal pattern, the Universe was as lumpy on small scales as it was on large scales-a "scale-free fluctuation spectrum". This is a catch-all solution that Lake thinks is used to mask our almost total ignorance of the early Universe. There is no strong evidence for it, and yet it has become entrenched in cosmological orthodoxy. "The strange thing is that people now treat it as though it were a unique prediction of inflation." Lake believes this is where the problem lies, because the assumption that there are no special scales doesn't fit the shortage of small galaxies and small clusters.
Galaxies and galaxy clusters mark distinct peaks in the fluctuation spectrum, rather than being part of a smooth continuum of structures, he says. "It seems like we have some notes or harmonics in the Universe." If he is right, then we can use this knowledge to work out just how the fluctuations formed. This could sound the death knell for dark matter. Powerful peaks in the fluctuation spectrum suggest bigger variations in gas density on the right scale to produce galaxies. And with a better head start, the gravity of ordinary gas would have been enough to make galaxies, obviating the need for dark matter. That still leaves the problem of why some galaxies manage to spin so fast without falling apart, of course, but there may be another explanation for that. Some researchers maintain that Newton's law of gravity might be slightly different on large scales. According to the theory of Modified Newtonian Dynamics, gravity pulls a little harder at large distances than conventional wisdom dictates, enabling it to hold fast-spinning galaxies together. If Lake is right, astronomers will have to let the whole notion of dark matter slip away quietly into the night. The ghosts will have been banished for good.

Ghost hunting

How do you look for a black cloud in space? It's a riddle astronomers will have to answer if they want to find dark galaxies. There may already be some circumstantial evidence for these shadowy objects. A few isolated galaxies look as though an invisible rival is pulling them to bits. UGC 10214, for example, has a conspicuous bridge of material extending into space towards-apparently-nothing. There are also the small, faint objects called Blue Compact Galaxies, which are furiously forming stars. They cannot have been building stars so quickly for long, otherwise they would be packed with stars and therefore shining far more brightly.
Trentham suggests that perhaps a dark matter halo has passed by each of these galaxies, causing gas clouds to collapse prematurely and form stars. If nothing else, these shreds of evidence could help to narrow down the search for dark galaxies to a few promising sites. And astronomers will need all the help they can get. If dark galaxies hold some gas or a few dead stars, conventional methods might just work. Brown dwarfs-small, failed stars-might collect within a dark galaxy, softly glowing with infrared light. The next generation of infrared satellites, such as NASA's Space Infrared Telescope Facility, will survey the Universe in the right wavelength range, and could spot them. A few white dwarfs, the cores of extinct stars, might also be around, but they would be almost impossible to spot with any existing or planned instruments. However, Neil Trentham of Cambridge University thinks that there will be little or no ordinary matter in dark galaxies. If so, the search becomes fiendishly difficult. Gravitational lensing might be the only way. A dark-matter halo would bend light slightly with its gravity. If it drifted between us and a more distant source, it would slightly distort the image of the source. Unfortunately, the technique used for spotting these distortions is still far too crude to see dark matter halos. But all may not be lost. As the light rays detour through the dark matter halo, they take paths of different lengths. So rays of light emitted simultaneously no longer reach the Earth at the same instant. If the original source is variable, those changes will be staggered when viewed from Earth through the lens. A rapidly varying source would be essential to detect these short time delays. Nial Tanvir of the University of Hertfordshire thinks that distant explosions called gamma ray bursts may do the job. When seen through a dark galaxy, rays from a burst would rise to a peak of brightness, dim a little and brighten again as the late-comers arrived. 
The problem is that gamma ray bursts are few and far between, so we'll have to wait a very long time before one happens to explode right behind a dark galaxy. However, if in the future more sensitive gamma-ray satellites prove that there is a plethora of weaker, currently undetectable gamma ray bursts, then this method might be in business. Stuart Clark is an astronomy writer and Director of Public Astronomy Education at the University of Hertfordshire From New Scientist magazine, vol 170 issue 2287, 21/04/2001, page 38 Further reading: Completely dark galaxies, their existence, properties, and strategies for finding them, by Neil Trentham and others, at:] http://xxx.lanl.gov/abs/astro-ph/0010545 The Bigger Bang, by James E. Lidsey, Cambridge University Press (2000)
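The staggered light-curve signature Tanvir describes (rays taking paths of different lengths through the lens, so a single burst is seen to rise, dim, and brighten again) can be sketched with a toy model. The pulse shape, delay, and flux ratio below are made-up illustrative numbers, not measurements:

```python
import math

def source_flux(t):
    """A single brief flare from the distant source (a hypothetical
    Gaussian pulse peaking at t = 10, in arbitrary time units)."""
    return math.exp(-0.5 * (t - 10.0) ** 2)

def observed_flux(t, delay=5.0, ratio=0.6):
    """Flux seen through the lens: image A arrives directly, while
    image B took a longer path, arriving `delay` units later and
    dimmed to `ratio` of the first image (illustrative values)."""
    return source_flux(t) + ratio * source_flux(t - delay)

# The observer sees the burst peak near t = 10, dip near t = 12.5,
# and brighten again near t = 15 as the late-comers arrive.
```

The double peak is the whole signal here: a lone burst gives one pulse, a lensed one gives an echo whose spacing encodes the path-length difference through the dark halo.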
  12. Seeing the dark side 21 Apr 01 ASTRONOMERS in the US have mapped a cluster of invisible galaxies for the first time. They spotted the cluster by analysing the effect its gravity had on the light from more distant galaxies. The discovery opens a new window on the Universe, because a large part of its mass may be "dark matter". Last year, a team of European astronomers caught a glimpse of a dark galaxy as it distorted light from the more distant bright ones they were imaging. But astronomers have never been able to confirm such sightings or work out what the galaxies were like. "It was a true mass detection but difficult to confirm," admits Peter Schneider of the University of Bonn, a member of the team. Now Tony Tyson, David Wittman and colleagues at Lucent Technologies' Bell Labs in New Jersey have made a similar discovery, which they later confirmed by picking up very faint light from the cluster. They looked at 31,000 distant galaxies within a square patch of sky half a degree across using the Blanco 4-metre telescope at the Cerro Tololo Inter-American Observatory in Chile. They then entered data on the apparent shapes of the galaxies into a computer and combined them to produce an average shape. They reasoned that because most galaxies are elliptical, the average of many galaxies with different orientations should be circular. In fact the average was an ellipse, indicating that dark matter in front of the galaxies was distorting the images. Analysing what sort of bodies would produce such distortion, the team was able to construct a three-dimensional map of the positions of 26 dark galaxies in a cluster. Because the Universe is expanding, light from distant objects gets stretched, shifting it to redder wavelengths. By comparing the "red shifts"-which are proportional to distance-of the background galaxies to the amount the dark cluster distorts their light, Wittman says he was able to estimate the distance to the dark cluster. 
At the moment his claim is controversial. "I don't think you can get the red shift using <such> data," says Schneider. But if the team is right, astronomers may soon be able to produce 3D maps of truly dark galaxies that cannot be seen any other way. Tyson believes the only way to test current theories about dark matter is to study such galaxies. "Astronomers need no longer be biased towards what glows in the dark," he says. Eugenie Samuel From New Scientist magazine, vol 170 issue 2287, 21/04/2001, page 11 Further reading: More at:] http://xxx.lanl.gov/abs/astro-ph?0104094
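The shape-averaging trick in the article above (randomly oriented galaxies should average to a circle, so any leftover ellipticity betrays a lensing distortion) can be sketched in a few lines. Everything here is a toy with made-up numbers, not the team's actual pipeline:

```python
import math
import random

def mean_ellipticity(n_galaxies, shear, intrinsic=0.3, seed=1):
    """Average the shapes of many background galaxies.

    Each galaxy gets a randomly oriented intrinsic ellipticity,
    written as a complex number e = |e| * exp(2i*phi); a common
    gravitational shear is added to every one. The random orientations
    average toward zero, so the mean shape reveals the shear.
    All values are illustrative, not survey data.
    """
    rng = random.Random(seed)
    total = 0j
    for _ in range(n_galaxies):
        phi = rng.uniform(0.0, math.pi)  # random orientation on the sky
        e_intrinsic = intrinsic * complex(math.cos(2 * phi), math.sin(2 * phi))
        total += e_intrinsic + shear
    return total / n_galaxies

# With no lens in front, the average shape comes out (nearly) circular:
no_lens = mean_ellipticity(20000, 0j)
# A foreground mass adds a small common distortion that survives averaging:
lensed = mean_ellipticity(20000, complex(0.05, 0.0))
```

abs(no_lens) lands near zero while lensed retains the injected distortion, which is the essence of how a coherent signal can be pulled out of tens of thousands of galaxy images even though each individual shape is dominated by its random intrinsic ellipticity.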
  13. rebelboy, I recommend that you begin with NT (Network Technology) programs, as they're the jobs of the future, and if anyone hasn't guessed by now, most jobs today will eventually require most of their employees to enroll in some form of these associated programs, just to keep their existing jobs. (So there's one for starters.) Also, it wouldn't hurt to take up reading on physics; particle physics is a much-needed basis for understanding the basic fundamentals that make up existing forms that contain mass, and their relationship to energy. Of course this is perhaps an understatement, but it is as brief as I could describe it. *Electronics courses are also a key element if you wish to develop prototype designs of your own; it would be a big help to enhance your endeavours.
  14. RE: Who's the older in here? Were you talkin' 'bout me, dob? Ya, that's right, I have posted much here, and been here for a very long "Time" as well.
  15. "I WANT A PONY!"...The "Tail" Of Two Brothers. (One brother being a "Pessamist" and the other being an "Optimist") One day the pessamist brother said to the optimist brother, "Hey, come look inside the barn dear brother, I have something to show you." The optimistic brother said " O.K." and proceeded to follow his pessamistic brother inside of the barn, and as he looked inside, he happened to notice all kinds of "Horse Manure" littered all over the place, and said, HMmm? "There must be a PONY in here somwhere?"
  16. From: "Roan Carratu" ([email protected]) To: "Art Bell" ([email protected]) Subject: Strange Email Date: Sat, 19 May 2001 03:52:55 -0600 X-Mailer: Microsoft Outlook Express 5.50.4522.1200 I received this as an email with no sender address. I thought I had better send it to you. Interestingly, the message had these words at the top of it, just as you see them. Dear Art Bell, This message will get to you on Saturday, May 19, 2001. We send it from the date, your calendar, January 2, 6451. To send this, the energy released from the dissolution of a galactic black hole caused a micro-puncture in time sidescales for nineteen milliseconds. Eleven thousand sixteen uninhabited star systems vaporized in order to send this message. We are inducing a packet distortion in the email band of your internet to send this message in that media. We send this message exactly as our history shows it was sent, in the words recorded in your time. The events between the year 2001 and 2005 determine the future of the whole planet and the Human species component of the biosphere. Analysis of that time in history shows the beginning development of Human awareness of it's niche in the ecology of First Planet. The result transforms the Human species from a rogue species into the planetary nervous system it has become on all the planets it has terraformed since then. This period, between 2001 and 2005, is the most important period of all Human history. All previous moments of time pivot the moments beyond that time. The results create the multitudes of side scales of time, each time scale proceeding from a different moment in the total actions of every component of the biosphere on that world. All the two hundred trillion side scales from that period in history, between 2001 and 2005, your calendar, show no life surviving on First Planet, with no expansion to the thousands of solar systems now occupied in our single primary side scale. 
Therefore, only our primary scale of time can send this message back, and this message is pivot for our primary time scale's existence. And therefore, the words in this message are necessary for the survival of the Ecology of First Planet and its Human component, during that crucial time. This reason alone motivated the Humans now to create this message to send it into our ancient past, to your time. The words needed for survival are: Humanity is Universe being aware of itself, and specifically, the ecology of First Planet being aware of itself. Most of the conceptuality common to Human society has no survival value. Almost everything Humans think and do has a destructive result in the future of First Planet. Whole systems of thought, commonly affecting Human behavior in your period of history, exist as commonly held delusions and do not reflect Human relationship with itself or the planetary ecology. Some of these are: Politics, Government, Economics, Formal Organized Education, and Military. Most concepts of individuality do not reflect the reality of interdependence, and cause a blindness to the value of other human beings and the ecology. Since Humanity exists as a primary component of an ecology, originating within it, surviving through it, learning everything from it, and flourishing only in a healthy environment, the change in consciousness dissolving the previously mentioned conceptual delusions creates the actions which allow our single time scale to come into existence. Please allow us to exist. You would like it. There is no war, no hunger, no ignorance, no disease or death, and maximum individual freedom possible. It is not a Utopia, but certainly our problems do not threaten existence or a viable future. The oldest known Human is two thousand fifty-six First Planet years old, and climbs the highest mountain on Second Planet every year. You called it Mars. 
She headed the task group which dismantled the abandoned orbital city around First Planet six hundred years ago. Her remotest known direct ancestor came from an area of the African sub-continent known as the Gold Coast, from a mud hut village rebuilt and modernized by the Planetary Effort in 2017. Between 2027 and 2104 in multitudes of other side scales, Mass Extinction Events eliminate the biosphere, some from asteroids, some from diseases, and most from the nuclear, biological, and chemical result of social dissolution on the planetary ecology. In our ancient history, carefully retained through electronic records, an asteroid in 2030 is diverted by a mass effort of the Human component of First Planet, and our primary scale proceeds from that time. But the seeds of that mass effort come from the period 2001 through 2005, and this message read on the Art Bell show to a mass population, according to our historical records. We realize we exist as a result of a temporal paradox. But if it takes the destruction of a galaxy and a temporal paradox to create what we have now, we consider the result worth it.
  17. "Time on our side" From:] New Scientist Magazine Website... http://www.newscientist.com/ns/19991030/letters7.html Julian Barbour isn't the only one who thinks time is an illusion (16 October, p 29). He has distinguished predecessors. (Julian Barbour is an independent theoretical physicist who lives near Oxford.) BOOK:] Julian Barbour's The End of Time is published by Weidenfeld & Nicolson. "Time" Our minisite produced in collaboration with the National Physical Laboratory, the UK's National Standards Laboratory. Website:] http://www.newscientist.com/nsplus/insight/time/time.html Parmenides (540 BC) said: "The true world is one of permanence. Change is also an illusion." Plato said: "Time is a mental impression to which nothing in the real world corresponds." Einstein said: "For us physicists the distinction between past, present and future signifies only an obstinate illusion." The above were supplied to me a few years ago--after a discussion in which I proposed that time was illusory--by Peter Landsberg of the University of Southampton maths department, who wrote The Enigma of Time (Adam Hilger, 1982). As we seem to be having difficulty understanding this concept more than 2500 years after it was first noted, I decided to study photonic crystals instead. GREG PARKER University of Southampton I am fascinated to learn from Barbour's article that we may "soon" be able to prove the non-existence of time. It would be good to know roughly "when" this might happen, so I can work as much overtime as possible while the concept is still valid. P&ARING;L VIDAR NYDAHL [email protected] next letter From New Scientist, 30 October 1999 Additional Sources... The Quantum Inquisition °°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°° http://www.newscientist.com/ns/19991030/thequantum.html Entangled photons could provide deep insights into our world that nobody, not even physicists, expected. Michael Brooks spoke to the chief inquisitor AFTER BATTLING THE STRANGENESS of time and space... 
TIME The Serpent in the Garden of Sentience... http://members.aol.com/chaque/time/time.htm (More to come later.)...... ---"12"
  18. Maybe for some of those magicians who claimed to "walk through walls" it was no magic at all? And the cloak around them was to keep everyone from seeing what was really going on as the magician passed through, not because it was necessarily a "trick" per se, but more importantly, since a magician's oath of secrecy is to never reveal their secret, perhaps this was in fact a means to conceal from the public that the magician was for real, molecularly altering matter into energy on a level that permitted matter to pass through matter, as the body actually "passed through" the wall! Perhaps this was the biggest secret of all? We all know that there are many facets of social indifference towards one another as it is; can you imagine how these magicians would have been treated if they demonstrated a "TRUE" Active Power like this??? I know that if it was me performing this in front of a live audience & on live television, and during the moment my body was beginning to "pass through" the wall, all of a sudden the cape, or blanket covering, fell away, revealing to everyone's disbelieving eyes my body passing through this wall, I would be running away in fear of my life & what the spooks may do to me if I were caught! Becoming a lab rat & all, and yes that does happen! All the more reason for secrecy. Why do you think in the portrayal of Marvel Comic Superheroes & such, they always had to conceal their identity? Exactly the same reason for so much secrecy in Real Life!!!
  19. What about incorporating nanotechnology, to encode peptide particles, and "piggy-backing" them onto FTL migratory QED particle waves? BTW, "Darkmatter" is also part of the electromagnetic spectrum of "Pillars" which are left over fuel from the Big Bang. Don't bother to ask me how I know all this, because I would have to write a book first, then shoot you later... (Ha!)
  20. Hey corran, didn't the "Moody Blues" write something about that in their lyrics?
  21. Thank You Sivertempest, I was just curious, yet curiouser still, I would like to know what type of materials you are using to construct your device with; if you feel uncomfortable with discussing the details here, you can email me in private if you don't mind telling me more... [email protected] (Fully encrypted email, you might consider getting one for yourself at "Hushmail" .com)
  22. Is it possible to travel through time? Is it possible to jump, shift, or displace objects or people from a position on our current timeline to some position in the past or future? Many of us have either read science fiction stories on time travel or have seen movies depicting travel through time. We have also read of the paradoxes associated with traveling backward in time and altering events that would produce paradoxical consequences. Our biological clocks tick just as the clock on the wall and everything seems to move forward in some inexorable dance and we can't seem to reverse it, step out of it, or jump over it. We are in our own body time machines: Our hearts beat, our lungs breathe, and our bodies sync with numerous circadian rhythms. We seem trapped in time, unable to move freely through a dimension that enslaves us unlike the dimensions of space that we move so freely through. What are the possibilities of moving through time at a faster rate than time? It doesn't seem very hopeful. Most would dismiss such a notion as impossible. Here is how one person describes the physical process... Goto:] http://home.earthlink.net/~skywatcher22/TimeTravel.html
  23. Actually, in reference to the molecular suggestions, we would be examining potential methods to break down the molecular components of the body, and reassemble it into the bubble via a means of quantum injections that are sent in packets very much like the data that is being transmitted to deliver this message over the internet. This quantum bubble would be shielded to withstand any outside influences from interacting with the contents (the traveller) and be connected by a series of broadband quantum Time~Streams which contain all the information connected to the originating worldline from whence it departed, so as to ensure zero divergence, thus enabling the traveller to return to their original worldline from any given point in Time of their coordinated destination. (CCS-Q-S) = "Closed Circuit Quantum Time-Stream"
  24. (LoL) Can bread travel faster than the speed of water?
  25. A Crack In The Theory Of Everything? Physicists Announce Possible Violation Of Standard Model Of Particle Physics... http://www.bnl.gov/bnlweb/pubaf/pr/bnlpr020801.htm UPTON, NY -- Scientists at the U.S. Department of Energy's Brookhaven National Laboratory, in collaboration with researchers from 11 institutions in the U.S., Russia, Japan, and Germany, today announced an experimental result that directly confronts the so-called Standard Model of particle physics. "This work could open up a whole new world of exploration for physicists interested in new theories, such as supersymmetry, which extend the Standard Model," says Boston University physicist Lee Roberts, co-spokesperson for the experiment. The Standard Model is an overall theory of particle physics that has withstood rigorous experimental challenge for 30 years. The Brookhaven finding -- a precision measurement of something called the anomalous magnetic moment of the muon, a type of subatomic particle -- deviates from the value predicted by the Standard Model. This indicates that other physical theories that go beyond the assumptions of the Standard Model may now be open to experimental exploration. The results were reported today at a special colloquium at Brookhaven Lab and have been submitted to Physical Review Letters. Scientists at Brookhaven, doing research at an experiment dubbed the muon g-2 (pronounced gee-minus-two), have been collecting data since 1997. Until late last week, they did not know whether their work would confirm the prediction of the Standard Model. "We are now 99 percent sure that the present Standard Model calculations cannot describe our data," says Brookhaven physicist Gerry Bunce, project manager for the experiment. 
The Standard Model, in development since the 1960s, explains and gives order to the menagerie of subatomic particles discovered throughout the 1940s and 1950s at particle accelerators of ever-increasing power at Brookhaven and other locations in the United States and Europe. The theory encompasses three of the four forces known to exist in the universe -- the strong force, the electromagnetic force, and the weak force -- but not the fourth force, gravity. The g-2 values for electrons and muons are among the most precisely known quantities in physics -- and have been in good agreement with the Standard Model. The g-2 value measures the effects of the strong, weak, and electromagnetic forces on a characteristic of these particles known as "spin" -- somewhat similar to the spin of a toy top. Using Standard Model principles, theorists have calculated with great precision how the spin of a muon, a particle similar to but heavier than the electron, would be affected as it moves through a magnetic field. Previous experimental measurements of this g-2 value agreed with the theorists' calculations, and this has been a major success of the Standard Model. The scientists and engineers at Brookhaven, however -- using a very intense source of muons, the world's largest superconducting magnet, and very precise and sensitive detectors -- have measured g-2 to a much higher level of precision. The new result is numerically greater than the prediction. "There appears to be a significant difference between our experimental value and the theoretical value from the Standard Model," says Yale physicist Vernon Hughes, who initiated the new measurement and is co-spokesperson for the experiment. "There are three possibilities for the interpretation of this result," he says. "Firstly, new physics beyond the Standard Model, such as supersymmetry, is being seen. Secondly, there is a small statistical probability that the experimental and theoretical values are consistent. 
Thirdly, although unlikely, the history of science in general has taught us that there is always the possibility of mistakes in experiments and theories." "Many people believe that the discovery of supersymmetry <a theory that predicts the existence of companion particles for all the known particles> may be just around the corner," Roberts says. "We may have opened the first tiny window to that world." All the physicists agree that further study is needed. And they still have a year's worth of data to analyze. "When we analyze the data from the experiment's year 2000 run, we'll reduce the level of error by a factor of 2," says physicist William Morse, Brookhaven resident spokesperson for g-2. The team expects that analysis to come within the next year. Furthermore, Hughes adds, substantial additional data that have not yet been used in evaluating the theoretical value of g-2 are now available from accelerators in Russia, China, and at Cornell University. These data could reduce significantly the error in the theoretical value. This research was funded by the U.S. Department of Energy, the U.S. National Science Foundation, the German Bundesminister für Bildung und Forschung, and the Russian Ministry of Science, and through the U.S.-Japan Agreement in High Energy Physics. The U.S. Department of Energy's Brookhaven National Laboratory creates and operates major facilities available to university, industrial and government personnel for basic and applied research in the physical, biomedical and environmental sciences and in selected energy technologies. The Laboratory is operated by Brookhaven Science Associates, a not-for-profit research management company, under contract with the U.S. Department of Energy. D.O.E. they say? Hmmm??? Aren't they the same ones responsible for NOT pouring any funding into the development of more efficient energy resources? We've HAD the Technology for years! So the question still remains... Why have they not "DEVELOPED" any of this stuff? 
Now that we are facing rolling blackouts, and the fact that Gov. "Gray" Davis & Co. has finally stepped forward announcing criminal allegations are sufficient to press charges? Ya Right, that'll be the day! These energy cartels will do whatever the HELL they wish until "We the Sheeple" do something to put a stop to this nonsense, once & FOR all! And to the D.O.E.... Get off your asses & do something right for a change... D E V E L O P ! And develop something that is truly beneficial, and don't rely on fossil fuels! Please Get it right this "Time"
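On the g-2 article above: the "99 percent sure" figure corresponds to a discrepancy of roughly 2.6 standard deviations. Here is how such a confidence level follows from a simple Gaussian error model; the numbers fed in are illustrative stand-ins, not the experiment's published values:

```python
import math

def confidence_of_discrepancy(measured, predicted, sigma):
    """Probability that a Gaussian fluctuation would be smaller than
    the observed gap, i.e. the confidence that the discrepancy between
    measurement and theory is not a statistical fluke."""
    z = abs(measured - predicted) / sigma
    return math.erf(z / math.sqrt(2.0))

# Illustrative stand-in numbers: a gap of 43 units with a combined
# uncertainty of 16.5 units is about a 2.6-sigma effect, which works
# out to roughly 99 percent confidence.
confidence = confidence_of_discrepancy(43.0, 0.0, 16.5)
```

This is also why Hughes lists a small statistical probability of consistency among his three interpretations: a 2.6-sigma effect still leaves about a one-in-a-hundred chance of a fluke, which is far short of the 5-sigma bar physicists usually demand for a discovery.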