Supernovae occur when a giant star, one much bigger than our own Sun, collapses and then explodes, releasing in an instant the energy of a hundred billion suns, burning for a time brighter than all the stars in its galaxy. In fact, most are so distant that their light reaches us as no more than the faintest twinkle. For the month or so that they are visible, all that distinguishes them from the other stars in the sky is that they occupy a point of space that wasn’t filled before.
The term supernova was coined in the 1930s by an odd astrophysicist named Fritz Zwicky. Born in Bulgaria and raised in Switzerland, Zwicky came to the California Institute of Technology in the 1920s and there at once distinguished himself by his abrasive personality and erratic talents. But Zwicky was also capable of insights of the most startling brilliance.
In the early 1930s, he turned his attention to a question that had long troubled astronomers: the appearance in the sky of occasional unexplained points of light, new stars. He wondered if the neutron—the subatomic particle that had just been discovered in England by James Chadwick, and was thus both novel and rather fashionable—might be at the heart of things. It occurred to him that if a star collapsed to the sort of densities found in the core of atoms, the result would be a compacted core. Atoms would literally be crushed together, their electrons forced into the nucleus, forming neutrons and a neutron star. After the collapse of such a star there would be a huge amount of energy left over—enough to make the biggest bang in the universe. He called these resultant explosions supernovae. They would be — they are — the biggest events in creation.
On January 15, 1934, the journal Physical Review published a very concise abstract of a presentation that had been given by Zwicky and Baade at Stanford University. Despite its extreme brevity—one paragraph of twenty-four lines—the abstract contained an enormous amount of new science: it provided the first reference to supernovae and to neutron stars; explained their method of formation; calculated the scale of their explosiveness; and, as a kind of concluding bonus, connected supernova explosions to the production of a mysterious new phenomenon called cosmic rays, which had recently been found swarming through the universe.
These ideas were revolutionary to say the least. Neutron stars wouldn’t be confirmed for thirty-four years. The cosmic rays notion, though considered plausible, hasn’t been verified yet. Altogether, the abstract was, in the words of Caltech astrophysicist Kip S. Thorne, “one of the most prescient documents in the history of physics and astronomy.”
Zwicky had almost no understanding of why any of this would happen. According to Thorne, “he did not understand the laws of physics well enough to be able to substantiate his ideas.” Zwicky’s talent was for big ideas. Others—Baade mostly—were left to do the mathematical sweeping up. Zwicky was also the first to recognize that there wasn’t enough visible mass in the universe to hold galaxies together and that there must be some other gravitational influence—what we now call dark matter. One thing he failed to see was that if a neutron star shrank enough it would become so dense that even light couldn’t escape its immense gravitational pull. You would have a black hole.
Unfortunately, Zwicky was held in such disdain by most of his colleagues that his ideas attracted almost no notice. When, five years later, the great Robert Oppenheimer turned his attention to neutron stars in a landmark paper, he made not a single reference to any of Zwicky’s work even though Zwicky had been working for years on the same problem in an office just down the hall. Zwicky’s deductions concerning dark matter wouldn’t attract serious attention for nearly four decades.
In a single blinding pulse, a moment of glory much too swift and expansive for any form of words, the singularity assumes heavenly dimensions, space beyond conception. In the first second (a second that many cosmologists will devote careers to shaving into ever-finer wafers) gravity and the other forces that govern physics were produced. In less than a minute the universe became a million billion miles across and was growing fast. There was a lot of heat now, ten billion degrees of it, enough to begin the nuclear reactions that create the lighter elements—principally hydrogen and helium, with a dash (about one atom in a hundred million) of lithium. In three minutes, 98 percent of all the matter there is or will ever be was produced. And it was all done in about the time it takes to make a sandwich.
When this moment happened is a matter of some debate. Cosmologists have long argued over whether the moment of creation was 10 billion years ago or twice that or something in between. The consensus seems to be heading for a figure of about 13.7 billion years, but these things are notoriously difficult to measure. All that can really be said is that at some indefinite point in the very distant past, for reasons unknown, there came the moment known to science as t = 0.
The notion of the Big Bang is quite a recent one. The idea had been around since the 1920s, when Georges Lemaître, a Belgian priest-scholar, first tentatively proposed it, but it didn’t really become an active idea in cosmology until the mid-1960s when two young radio astronomers made an extraordinary and inadvertent discovery. In 1965, Arno Penzias and Robert Wilson were trying to make use of a large communications antenna owned by Bell Laboratories at Holmdel, New Jersey, but they were troubled by a persistent background noise — a steady, steamy hiss that made any experimental work impossible. The noise was unrelenting and unfocused. It came from every point in the sky, day and night, through every season. For a year the young astronomers did everything they could think of to track down and eliminate the noise: they tested every electrical system, they rebuilt instruments, checked circuits, thoroughly cleaned the antenna. Nothing they tried worked.
Unknown to them, just thirty miles away at Princeton University, a team of scientists led by Robert Dicke was working on how to find the very thing they were trying so diligently to get rid of. The Princeton researchers were pursuing an idea that had been suggested in the 1940s by the Russian-born astrophysicist George Gamow that if you looked deep enough into space you should find some cosmic background radiation left over from the Big Bang. Gamow calculated that by the time it crossed the vastness of the cosmos, the radiation would reach Earth in the form of microwaves. In a more recent paper he had even suggested an instrument that might do the job: the Bell antenna at Holmdel. Unfortunately, neither Penzias and Wilson nor any of the Princeton team had read Gamow’s paper.
The noise that Penzias and Wilson were hearing was the noise that Gamow had postulated. They had found the edge of the universe, or at least the visible part of it, 90 billion trillion miles away. They were “seeing” the first photons—the most ancient light in the universe—though time and distance had converted them to microwaves, just as Gamow had predicted. Still unaware of what caused the noise, Wilson and Penzias phoned Dicke at Princeton and described their problem to him in the hope that he might suggest a solution. Dicke realized at once what the two young men had found.
Soon afterward the Astrophysical Journal published two articles: one by Penzias and Wilson describing their experience with the hiss, the other by Dicke’s team explaining its nature. Although Penzias and Wilson had not been looking for cosmic background radiation, didn’t know what it was when they had found it, and hadn’t described or interpreted its character in any paper, they received the 1978 Nobel Prize in physics. Neither Penzias nor Wilson altogether understood the significance of what they had found until they read about it in the New York Times. The Princeton researchers got only sympathy.
Back in December 1959, future Nobel laureate Richard Feynman gave a visionary and now oft-quoted talk entitled “There’s Plenty of Room at the Bottom”. Although Feynman did not intend it, his 7,000 words were a defining moment in nanotechnology, long before anything ‘nano’ appeared on the horizon.
Manipulating and controlling things on a small scale is what is now referred to as nanotechnology. The breadth of Feynman’s vision is staggering. In that lecture about 50 years ago he anticipated a spectrum of scientific and technical fields that are now well established, among them electron-beam and ion-beam fabrication, nanoimprint lithography, projection electron microscopy, atom-by-atom manipulation and spin electronics.
Today there is a nanotechnology gold rush. Nearly every major funding agency for engineering and science has announced its own thrust into the field. But in all honesty, it should be admitted that much of what invokes the hallowed prefix ‘nano’ falls a bit short of Feynman’s vision. We have only just begun to take the first steps towards his grand vision of assembling complex machines and circuits atom by atom. What can be done now is extremely rudimentary. We are certainly nowhere near being able to commercially mass-produce nanosystems – integrated multicomponent nanodevices that have the complexity and range of functions readily provided by modern microchips. This new science concerns the properties and behaviour of aggregates of atoms and molecules at scales not yet large enough to be considered macroscopic but far beyond what can be called microscopic. It is the science of the mesoscale, and until we understand it, practical devices will be difficult to realize.
Nanotechnology is slowly creeping into popular culture, but not in a way that most scientists will like. The idea that nanotechnology will lead to tiny robotic submarines navigating our bloodstream is ubiquitous, and images like that are frequently used to illustrate stories about nanotechnology in the press. Yet today’s products of nanotechnology are much more mundane: stain-resistant trousers, better sun creams and tennis rackets reinforced with carbon nanotubes. There is an almost surreal gap between what the technology is believed to promise and what it actually delivers.
The reason for this disparity is that most definitions of nanotechnology are impossibly broad. They assume that any branch of technology that results from our ability to control and manipulate matter on length scales of 1–100 nm can be counted as nanotechnology. However, many successes that are attributed to nanotechnology are merely the result of years of research into conventional fields like materials or colloid science. It is therefore helpful to break up the definition of nanotechnology a little.
What we could call "incremental nanotechnology" involves improving the properties of many materials by controlling their nano-scale structure. These are the sorts of commercially available products that are said to be based on nanotechnology. However, they do not really represent a decisive break from the past.
In "evolutionary nanotechnology" we move beyond simple materials that have been redesigned at the nano-scale to actual nano-scale devices that can, for example, sense the environment, process information or convert energy from one form to another. Taken together, incremental and evolutionary nanotechnology are driving the current excitement in industry and academia for all things nano-scale.
But where does this leave the original vision of nanotechnology as articulated by Eric Drexler? Back in 1986 Drexler published an influential book called Engines of Creation: The Coming Era of Nanotechnology, in which he imagined sophisticated nano-scale machines that could operate with atomic precision. We might call this goal "radical nanotechnology". Drexler envisaged a particular way of achieving radical nanotechnology, which involved using hard materials like diamond to fabricate complex nano-scale structures by moving reactive molecular fragments into position. His approach was essentially mechanical, whereby tiny cogs, gears and bearings are integrated to make tiny robot factories, probes and vehicles.
Drexler's most compelling argument that radical nanotechnology must be possible is that cell biology gives us endless examples of sophisticated nano-scale machines. Drexler argued that if biology works as well as it does, researchers ought to be able to do much better. Surely we can create what are, in effect, synthetic life forms that can reproduce and adapt to the environment and overcome "normal" life in the competition for resources.
Scientists almost always greatly overestimate how much can be done over a 10-year period, but underestimate what can be done in 50 years. Which design philosophy of radical nanotechnology will prevail: Drexler’s original “diamondoid” vision, or something closer to the marvellous contrivances of cell biology?