Energy Supply, World, combined resources by which the nations of the world attempt to meet their energy needs. Energy is the basis of industrial civilization; without energy, modern life would cease to exist. During the 1970s the world began a painful adjustment to the vulnerability of energy supplies. In the long run, conserving energy resources may provide the time needed to develop new sources of energy, such as hydrogen fuel cells, or to further develop alternative energy sources, such as solar energy and wind energy. While this development occurs, however, the world will continue to be vulnerable to disruptions in the supply of oil, which, after World War II (1939-1945), became the most favored energy source.
BACKGROUND OF TODAY’S SITUATION
Wood was the first and, for most of human history, the major source of energy. It was readily available, because extensive forests grew in many parts of the world and the amount of wood needed for heating and cooking was relatively modest. Certain other energy sources, found only in localized areas, were also used in ancient times: asphalt, coal, and peat from surface deposits and oil from seepages of underground deposits.
This situation changed when wood began to be used during the Middle Ages to make charcoal. The charcoal was heated with metal ore to break up chemical compounds and free the metal. As forests were cut and wood supplies dwindled at the onset of the Industrial Revolution in the mid-18th century, charcoal was replaced by coke (produced from coal) in the reduction of ores. Coal, which also began to be used to drive steam engines, became the dominant energy source as the Industrial Revolution proceeded.
Growth of Petroleum Use
Although for centuries petroleum (also known as crude oil) had been used in small quantities for purposes as diverse as medicine and ship caulking, the modern petroleum era began when a commercial well was brought into production in Pennsylvania in 1859. The oil industry in the United States expanded rapidly as refineries sprang up to make oil products from crude oil. The oil companies soon began exporting their principal product, kerosene—used for lighting—to all areas of the world. The development of the internal-combustion engine and the automobile at the end of the 19th century created a vast new market for another major product, gasoline. A third major product, heavy oil, began to replace coal in some energy markets after World War II.
The major oil companies, most of which were based in the United States, initially found ample oil supplies at home. Oil companies from other countries—especially Britain, the Netherlands, and France—therefore searched for oil in many other parts of the world, especially the Middle East. The British brought the first field there (in Iran) into production just before World War I (1914-1918). During World War I, the U.S. oil industry produced two-thirds of the world’s oil supply from domestic sources and imported another one-sixth from Mexico. At the end of the war and before the discovery of the productive East Texas fields in 1930, however, the United States, with its reserves strained by the war, became a net oil importer for a few years.
During the next three decades, with occasional federal support, the U.S. oil companies were enormously successful in expanding in the rest of the world. By 1955 the five major U.S. oil companies produced two-thirds of the oil for the world oil market (not including North America and the Soviet bloc). Two British-based companies produced almost one-third of the world’s oil supply, and the French produced a mere one-fiftieth. The next 15 years were a period of serenity for energy supplies. The seven major U.S. and British oil companies provided the world with increasing quantities of cheap oil. The world price was about a dollar a barrel, and during this time the United States was largely self-sufficient, with its imports limited by a quota.
Formation of OPEC
Two series of events coincided to change this secure supply of cheap oil into an insecure supply of expensive oil. In 1960, enraged by unilateral cuts in oil prices by the seven big oil companies, the governments of the major oil-exporting countries formed the Organization of the Petroleum Exporting Countries (OPEC). OPEC’s goal was to prevent further cuts in the price that the member countries—Venezuela and four countries around the Persian Gulf—received for oil. They succeeded, but for a decade they were unable to raise prices. In the meantime, increasing oil consumption throughout the world, especially in Europe and Japan, where oil displaced coal as a primary source of energy, caused an enormous expansion in the demand for oil products.
The Energy Crisis
The year 1973 brought an end to the era of secure, cheap oil. In October, as a result of the Arab-Israeli War, the Arab oil-producing countries cut back oil production and embargoed oil shipments to the United States and the Netherlands. Although the Arab cutbacks represented a loss of less than 7 percent in world supply, they created panic on the part of oil companies, consumers, oil traders, and some governments. Wild bidding for crude oil ensued when a few producing nations began to auction off some of their oil. This bidding encouraged the OPEC nations, which now numbered 13, to raise the price of all their crude oil to a level as high as eight times that of a few years earlier. The world oil scene gradually calmed, as a worldwide recession brought on in part by the higher oil prices trimmed the demand for oil. In the meantime, most OPEC governments took over ownership of the oil fields in their countries.
In 1978 a second oil crisis began when, as a result of the revolution that eventually drove the Shah of Iran from his throne, Iranian oil production and exports dropped precipitously. Because Iran had been a major exporter, consumers again panicked. A replay of 1973 events, complete with wild bidding, again forced up oil prices during 1979. The outbreak of war between Iran and Iraq in 1980 gave a further boost to oil prices. By the end of 1980 the price of crude oil stood at 19 times what it had been just ten years earlier.
The very high oil prices again contributed to a worldwide recession and gave energy conservation a big push. As oil demand slackened and supplies increased, the world oil market slumped. Significant increases in non-OPEC oil supplies, such as those in the North Sea, Mexico, Brazil, Egypt, China, and India, pushed oil prices even lower. Production in the Soviet Union reached 11.42 million barrels per day by 1989, accounting for 19.2 percent of world production in that year.
Despite the low world oil prices that have prevailed since 1986, concern over disruption has continued to be a major focus of energy policy in the industrialized countries. The short-term increases in prices following Iraq’s invasion of Kuwait in 1990 reinforced this concern. Owing to its vast reserves, the Middle East will continue to be the major source of oil for the foreseeable future. However, new discoveries in the Caspian Sea region suggest that countries such as Kazakhstan may become major sources of petroleum in the 21st century.
In the 1990s, oil production by non-OPEC countries remained strong and production by OPEC countries rebounded. The result at the end of the 20th century was a world oil surplus and prices (when adjusted for inflation) that were lower than in 1972.
Experts are uncertain about future oil supplies and prices. Low prices have spurred greater oil consumption, and experts question how long world petroleum reserves can keep pace with increased demand. Many of the world’s leading petroleum geologists believe the world oil supply will peak around 80 million barrels per day between 2010 and 2020. (In 1998 world consumption was approximately 70 million barrels per day.) On the other hand, many economists believe that even modestly higher oil prices might lead to greater supply, since the oil companies would then have the economic incentive to exploit less accessible oil deposits.
Natural gas may be increasingly used in place of oil for applications such as power generation and transportation. One reason is that world reserves of natural gas have doubled since 1976, in part because of the discovery of major deposits of natural gas in Russia and in the Middle East. New facilities and pipelines are being constructed to help process and transport this natural gas from production wells to consumers.
PETROLEUM AND NATURAL GAS
Petroleum (crude oil) and natural gas are found in commercial quantities in sedimentary basins in more than 50 countries in all parts of the world. The largest deposits are in the Middle East, which contains more than half the known oil reserves and almost one-third of the known natural-gas reserves. The United States contains only about 2 percent of the known oil reserves and 3 percent of the known natural-gas reserves.
Geologists and other scientists have developed techniques that indicate the possibility of oil or gas being found deep in the ground. These techniques include taking aerial photographs of special surface features, sending shock waves through the earth and reflecting them back into instruments, and measuring the earth’s gravity and magnetic field with sensitive meters. Nevertheless, the only way to confirm the presence of oil or gas is to drill a hole into the reservoir. In some cases oil companies spend many millions of dollars drilling in promising areas, only to find dry holes. For a long time, most wells were drilled on land, but after World War II drilling commenced in shallow water from platforms supported by legs that rested on the sea bottom. Later, floating platforms were developed that could drill at water depths of 1,000 m (3,300 ft) or more. Large oil and gas fields have been found offshore: in the United States, mainly off the Gulf Coast; in Europe, primarily in the North Sea; in Russia, in the Barents Sea and the Kara Sea; and off Newfoundland and Brazil. Most major finds in the future may be offshore.
As crude oil or natural gas is produced from an oil or gas field, the pressure in the reservoir that forces the material to the surface gradually declines. Eventually, the pressure will decline so much that the remaining oil or gas will not migrate through the porous rock to the well. When this point is reached, most of the gas in a gas field will have been produced, but less than one-third of the oil will have been extracted. Part of the remaining oil can be recovered by using water or carbon dioxide gas to push the oil to the well, but even then, one-fourth to one-half of the oil is usually left in the reservoir. In an effort to extract this remaining oil, oil companies have begun to use chemicals to push the oil to the well, or to use fire or steam in the reservoir to make the oil flow more easily. New techniques that allow operators to drill horizontally, as well as vertically, into very deep structures have dramatically reduced the cost of finding natural gas and oil supplies.
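The recovery fractions quoted above can be combined into a rough material-balance sketch. The fractions used here (roughly one-third of the oil recovered by natural reservoir pressure, and one-fourth to one-half still trapped even after water or carbon dioxide injection) are the ballpark ranges from the text, not data for any real field.

```python
def recovered_oil(oil_in_place_bbl, primary_fraction=0.30, residual_fraction=0.35):
    """Rough estimate of cumulative recovery after primary and secondary phases.

    primary_fraction:  share pushed out by natural reservoir pressure
    residual_fraction: share still trapped after water or CO2 injection
    Both defaults are illustrative midpoints of the ranges in the text.
    """
    primary = oil_in_place_bbl * primary_fraction
    total = oil_in_place_bbl * (1.0 - residual_fraction)
    secondary = total - primary  # the extra oil pushed out by injection
    return primary, secondary, total

p, s, t = recovered_oil(1_000_000)  # barrels originally in place
print(f"primary: {p:,.0f} bbl; secondary: {s:,.0f} bbl; total: {t:,.0f} bbl")
# primary: 300,000 bbl; secondary: 350,000 bbl; total: 650,000 bbl
```

Even under these optimistic assumptions, about a third of the oil stays in the ground, which is why enhanced-recovery chemistry and steam injection attract so much investment.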
Crude oil is transported to refineries by pipelines, barges, or giant oceangoing tankers. Refineries contain a series of processing units that separate the different constituents of the crude oil by heating them to different temperatures, chemically modifying them, and then blending them to make final products. These final products are principally gasoline, kerosene, diesel oil, jet fuel, home heating oil, heavy fuel oil, lubricants, and feedstocks, or starting materials, for petrochemicals.
Natural gas is transported, usually by pipelines, to customers who burn it for fuel or, in some cases, make petrochemicals from chemicals extracted, or “stripped,” from it. Natural gas can be liquefied at very low temperatures and transported in special ships. This method is much more costly than transporting oil by tanker. Oil and natural gas compete in a number of markets, especially in generating heat for homes, offices, factories, and industrial processes.
In its early days, the oil industry generated considerable environmental pollution. Through the years, however, under the dual influences of improved technology and more stringent regulations, it has become much cleaner. The effluents from refineries have decreased greatly and, although well blowouts still occur, new technology has tended to make them relatively rare. The policing of the oceans, on the other hand, is much more difficult. Oceangoing ships are still a major source of oil spills. In 1990 the Congress of the United States passed legislation requiring tankers to be double hulled by the end of the decade.
Another source of pollution connected with the oil industry is the sulfur in crude oil. Regulations of national and local governments restrict the amount of sulfur dioxide that can be discharged by factories and utilities burning fuel oil. Because removing sulfur is expensive, however, regulations still allow some sulfur dioxide to be discharged into the air.
Many scientists believe that another potential environmental problem from refining and burning large amounts of oil and other fossil fuels (such as coal and natural gas) occurs when carbon dioxide (a by-product of the burning of fossil fuels), methane (which exists in natural gas and is also a by-product of refining petroleum), and other by-product gases accumulate in the atmosphere. These gases are known as greenhouse gases, because they trap some of the energy from the Sun that penetrates Earth’s atmosphere. This energy, trapped in the form of heat, maintains Earth at a temperature that is hospitable to life. Certain amounts of greenhouse gases occur naturally in the atmosphere. However, the immense quantities of petroleum, coal, and other fossil fuels burned during the world’s rapid industrialization over the last 200 years are a contributing source of higher levels of carbon dioxide in the atmosphere. During that time period, these levels have increased by about 28 percent. This increase in atmospheric carbon dioxide, coupled with the continuing loss of the world’s forests (which absorb carbon dioxide), has led many scientists to predict a rise in global temperature. This increase in global temperature might disrupt weather patterns, disrupt ocean currents, lead to more violent storms, and create other environmental problems.
In 1992 representatives of over 150 countries convened in Rio de Janeiro, Brazil, and agreed on the need to reduce the world’s emissions of greenhouse gases. In 1997 world delegations again convened, this time in Kyōto, Japan. During the Kyōto meeting, representatives of 160 nations signed an agreement known as the Kyōto Protocol, which would require 38 industrialized nations to limit emissions of greenhouse gases to levels that are an average of 5 percent below the emission levels of 1990.
In order to reduce their fossil fuel emissions to achieve these levels, the industrialized nations would have to shift their energy mix toward energy sources that do not produce as much carbon dioxide, such as natural gas, or to alternative energy sources, such as hydroelectric energy, solar energy, wind energy, or nuclear energy. While the governments of some industrialized nations have ratified the Kyōto Protocol, others have not, including that of the United States.
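The 28 percent increase cited above can be turned into a quick back-of-the-envelope check. The pre-industrial baseline of roughly 280 parts per million (ppm) of carbon dioxide is a widely cited value, not a figure stated in this article.

```python
PRE_INDUSTRIAL_PPM = 280.0  # commonly cited baseline, ca. 1750 (assumption)
INCREASE = 0.28             # the ~28 percent rise cited in the text

# Applying the quoted percentage increase to the assumed baseline:
modern_ppm = PRE_INDUSTRIAL_PPM * (1.0 + INCREASE)
print(f"implied late-20th-century CO2 level: {modern_ppm:.0f} ppm")
# implied late-20th-century CO2 level: 358 ppm
```

The result is consistent with atmospheric measurements from the late 1990s, which hovered in the mid-360s ppm.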
OIL SHALE AND TAR SANDS
Oil shale, heavy oil deposits, and tar sands are the most prevalent forms of petroleum found in the world. Reserves of these sources are many times more abundant than the world’s total known reserves of crude oil. Because of the high cost of converting shale oil and tar sands into usable petroleum products, however, only a small percentage of the available material is processed commercially. An industry to make oil products from tar sands has been started in Canada, and Venezuela is looking at the prospects of developing the vast reserves of tar sands in its Orinoco River basin. Nevertheless, the quantity of oil products produced from these two raw materials is small compared with the total production of conventional crude oil. Until world petroleum prices increase, the quantity of oil produced from oil shale and tar sands will likely remain small relative to the production of conventional crude oil.
COAL
Coal is a general term for a wide variety of solid materials that are high in carbon content. Most coal is burned by electric utility companies to produce steam to turn their generators. Some coal is used in factories to provide heat for buildings and industrial processes. A special, high-quality coal is turned into metallurgical coke for use in making steel.
The world’s coal reserves are vast. The amount of coal (as measured by energy content) that is technically and economically recoverable under present conditions is five times as large as the reserves of crude oil. Just four regions contain about 70 percent of the world’s recoverable coal reserves: the United States, 24 percent; the countries of the former Soviet Union, 24 percent; China, 11 percent; and Western Europe, 10 percent.
In industrialized countries, the greater convenience and lower costs of oil and gas in the earlier 20th century virtually forced coal out of the market for heating homes and offices and driving locomotives. Oil and gas also ate heavily into the industrial market for coal. Only an expanding utility market enabled coal output in the United States, for example, to remain relatively constant between 1948 and 1973. Even in the utility market, as oil and gas captured a greater share, coal’s contribution to the total energy picture dropped dramatically—in the United States, for instance, from about one-half to less than one-fifth. The dramatic jumps in oil prices after 1973, however, gave coal a major cost advantage for utilities and large industrial customers, and coal began to recapture some of its lost markets. In contrast to the industrialized countries, developing countries that have large coal reserves (such as China and India) continue to use coal for industrial and heating purposes.
The average price of coal has remained virtually unchanged since the early 1980s and is forecast to decline in the early part of the 21st century. However, in industrialized countries the need to comply with stricter environmental regulations has made burning coal more costly.
Despite coal’s relative cheapness and huge reserves, the growth in the use of coal since 1973 has been much less than expected, because coal is associated with many more environmental problems than is oil. Underground mining can result in black lung disease for miners, the sinking of the land over mines, and the drainage of acid into water tables. Surface mining requires careful reclamation, or the unrestored land will remain scarred and unproductive. In addition, the burning of coal causes emission of sulfur dioxide, nitrogen oxides, particulates, and other impurities. Acid rain—rainfall and other forms of precipitation with a relatively high acidity that is damaging lakes and some forests in many regions—is believed to be caused in part by such emissions (see Air Pollution). The U.S. Clean Air Act of 1970 (revised in 1977 and 1990) provides the federal legal basis for controlling air pollution. This legislation has significantly reduced emissions of sulfur oxides—known as acid gases. For example, the Clean Air Act requires facilities such as coal-burning power plants to burn low-sulfur coal. In the 1990s concern over the possible warming of the planet as a result of the greenhouse effect caused many governments to consider policies to reduce the carbon dioxide emissions produced by burning coal, oil, and natural gas. During the world’s rapid industrialization through the 19th and 20th centuries, levels of carbon dioxide in the atmosphere increased approximately 28 percent from pre-industrial levels.
Solving these problems is costly, and who should pay is a matter of controversy. As a result, coal consumption may continue to grow more slowly than would otherwise be expected. The vast coal reserves, the improved technologies to reduce pollution, and the further development of coal gasification (see Gases, Fuel) still indicate, however, that the market for coal will increase in coming years.
SYNTHETIC FUELS
Synthetic fuels do not occur in nature but are made from natural materials. Gasohol, for example, is a mixture of gasoline and alcohol made from sugars produced by living plants. Although making various types of fuel from coal is possible, the large-scale production of fuel from coal will likely be limited by high costs and pollution problems, some of which are not yet known. The manufacture of alcohol fuels in large quantities will likely be restricted to regions, such as parts of Brazil, where a combination of low-cost labor and land, plus a long growing season, make it economical. Thus, synthetic fuels are unlikely to make an important contribution to the world’s energy supply anytime soon.
NUCLEAR ENERGY
Nuclear energy is generated by the splitting, or fissioning, of atoms of uranium or heavier elements. The fission process releases heat, which is used to produce steam to drive a turbine to generate electricity. The operation of a nuclear reactor and the related electricity-generating equipment is only one part of an interconnected set of activities. The production of a reliable supply of electricity from nuclear fission requires mining, milling, and transporting uranium; enriching uranium (increasing the percentage of the uranium isotope U-235) and packing it in appropriate form; building and maintaining the reactor and associated generating equipment; and treating and disposing of spent fuel. These activities require extremely sophisticated and interactive industrial processes and many specialized skills.
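The scale of the heat released by fission can be illustrated with a short calculation. Each fission of a U-235 nucleus yields about 200 million electron volts (MeV); the constants below are standard physical values, not figures from this article.

```python
AVOGADRO = 6.022e23              # atoms per mole
MEV_TO_JOULES = 1.602e-13        # conversion factor
ENERGY_PER_FISSION_MEV = 200.0   # typical yield per U-235 fission
U235_MOLAR_MASS_G = 235.0

# Number of U-235 atoms in one kilogram, then the total fission energy:
atoms_per_kg = 1000.0 / U235_MOLAR_MASS_G * AVOGADRO
joules_per_kg = atoms_per_kg * ENERGY_PER_FISSION_MEV * MEV_TO_JOULES
print(f"complete fission of 1 kg of U-235 releases ~{joules_per_kg:.1e} J")
# roughly 8e13 J, on the order of a million times the heat from burning 1 kg of coal
```

This enormous energy density is why a reactor is refueled with tonnes of fuel rather than the trainloads of coal a comparable fossil plant consumes.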
Britain took an early lead in developing nuclear power. By the mid-1950s, several nuclear reactors were producing electricity in that country. The first nuclear reactor to be connected to an electricity distribution network in the United States began operation in 1957 at Shippingport, Pennsylvania. Six years later, the first order was placed for a commercial nuclear power plant to be built without a direct subsidy from the federal government. This order marked the beginning of an attempt to convert rapidly the world’s electricity-generating systems from reliance on fossil fuels to reliance on nuclear energy. By 1970, 90 nuclear power plants were operating in 15 countries. In 1980, 253 nuclear power plants were operating in 22 countries. However, the attempt to move from fossil fuels to nuclear energy faltered because of rapidly increasing costs, regulatory delays, declining demand for electricity, and a heightened concern for safety.
Questions about the safety and economy of nuclear power created perhaps the most emotional battle fought over energy. As the battle heated during the late 1970s, nuclear advocates argued that no realistic alternative existed to increased reliance on nuclear power. They recognized that some problems remain but maintained that solutions would be found. Nuclear opponents, on the other hand, emphasized a number of unanswered questions about the environment: What are the effects of low-level radiation over long periods? What is the likelihood of a major accident at a nuclear power plant? What would be the consequences of such an accident? How can nuclear power’s waste products, which will remain dangerous for centuries, be permanently isolated from the environment? These safety questions helped cause changes in specifications for and delays in the construction of nuclear power plants, further driving up costs. They also helped create a second controversy: Is electricity from nuclear power plants less costly, equally costly, or more costly than electricity from coal-fired plants? Despite rapidly escalating oil and gas prices in the late 1970s and early 1980s, these political and economic problems caused an effective moratorium in the United States on new orders for nuclear power plants. This moratorium took effect even before the 1979 partial meltdown (melting of the nuclear fuel rods) at the Three Mile Island nuclear power plant near Harrisburg, Pennsylvania, and the 1986 explosion and fire at the Chernobyl’ plant north of Kyiv in Ukraine (see Chernobyl’ Accident). The latter accident caused some fatalities and cases of radiation sickness, and it released a cloud of radioactivity that traveled widely across the northern hemisphere.
In 1998 a total of 437 nuclear plants operated worldwide. Another 35 reactors were under construction. Eighteen countries generate at least 20 percent of their electricity from nuclear power. The largest nuclear power industries are located in the United States (107 reactors), France (59), Japan (54), Britain (35), Russia (29), and Germany (20). In the United States, no new reactors have been ordered for more than 20 years. Public opposition, high construction costs, strict building and operating regulations, and high costs for waste disposal make nuclear power plants much more expensive to build and operate than plants that burn fossil fuels.
In some industrialized countries, the electric power industry is being restructured to break up monopolies (the provision of a commodity or service by a single seller or producer) at the generation level. Because this trend is pressuring nuclear plant owners to cut operating expenses and become more competitive, the nuclear power industry in the United States and other western countries may continue to decline if existing nuclear power plants are unable to adapt to changing market conditions.
Asia is widely viewed as the only likely growth area for nuclear power in the near future. Japan, South Korea, Taiwan, and China all had plants under construction at the end of the 20th century. Conversely, a number of European nations were rethinking their commitments to nuclear power.
Sweden’s political parties have committed to phasing out nuclear power by 2010, after Swedish citizens voted in 1980 against future development of this energy source. However, industry is challenging the policy in court. In addition, critics argue that Sweden cannot fulfill its commitment to reducing emissions of greenhouse gases without relying on nuclear power.
France generates 80 percent of its electricity from nuclear power. However, it has canceled several planned reactors and may replace aging nuclear plants with fossil-fuel plants for environmental reasons. As a result, the government-owned electricity utility, Electricité de France, plans to diversify the country’s electricity-generating sources.
The German government announced in 1998 a plan to phase out nuclear power. However, as in Sweden, nuclear plant owners may take the government to court to seek compensation for plants shut down before the end of their operating lives.
In Japan, several accidents at nuclear facilities in the mid-1990s have undercut public support for nuclear power. Japan’s growing stockpile of plutonium and its shipments of spent nuclear fuel to Europe have drawn international criticism.
China, which currently operates only three nuclear power plants, has plans to expand its nuclear capabilities. However, whether China will be able to obtain sufficient financing or whether it can develop the necessary skilled work force to expand is uncertain.
A number of eastern European countries—including Russia, Ukraine, Bulgaria, the Czech Republic, Hungary, Lithuania, and Slovakia—generate electricity from Soviet-designed nuclear reactors that have various safety flaws. Some of these reactors have the same design as the Chernobyl reactor that exploded in 1986. The United States and other western countries are working to address these design problems and to improve operations, maintenance, and training at these plants.
SOLAR ENERGY
Solar energy does not refer to a single energy technology but rather covers a diverse set of renewable energy technologies powered by the Sun. Some solar energy technologies, such as heating with solar panels, utilize sunlight directly. Other types of solar energy, such as hydroelectric energy and fuels from biomass (wood, crop residues, and dung), rely on the Sun’s ability to evaporate water and grow plant material, respectively. The common feature of solar energy technologies is that, unlike oil, gas, coal, and present forms of nuclear power, solar energy is inexhaustible. Solar energy can be divided into three main groups—heating and cooling applications, electricity generation, and fuels from biomass.
Heating and Cooling
The Sun has been used for heating for centuries. The Mesa Verde cliff dwellings in Colorado were constructed with rock projections that provide shade from the high (and hot) summer Sun but allow the rays of the lower winter Sun to penetrate. Today a design with few or no moving parts that takes advantage of the Sun is called passive solar heating. Beginning in the late 1970s, architects increasingly became familiar with passive solar techniques. In the future, more and more new buildings will be designed to capture the Sun’s winter rays and keep out the summer rays.
Active solar heating and solar hot-water heating are variations on one theme, differing principally in cost and scale. A typical active solar-heating unit consists of tubes installed in panels that are mounted on a roof. Water (or sometimes another fluid) flowing through the tubes is heated by the Sun and is then used as a source of hot water and heat for the building. Although the number of active solar-heating installations has grown rapidly since the 1970s, the industry has encountered simple installation and maintenance problems, involving such commonplace occurrences as water leakage and air blockage in the tubes. Solar cooling requires a higher technology installation in which a fluid is cooled by being heated to an intermediate temperature so that it can be used to drive a refrigeration cycle. To date, relatively few commercial installations have been made.
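A rough sizing estimate shows what a rooftop collector like the one described above can deliver. The insolation and efficiency figures below are illustrative assumptions for a sunny climate, not values from the article.

```python
def daily_heat_kwh(panel_area_m2, insolation_kwh_per_m2=5.0, efficiency=0.5):
    """Heat delivered per day by a flat-plate solar collector.

    insolation_kwh_per_m2: solar energy striking the panel per day (assumed)
    efficiency:            fraction captured as useful hot water (assumed)
    """
    return panel_area_m2 * insolation_kwh_per_m2 * efficiency

# A typical 4 m^2 rooftop panel under the assumptions above:
print(daily_heat_kwh(4.0))  # 10.0 kWh per day
```

Ten kilowatt-hours is on the order of a household's daily hot-water demand, which is why modest rooftop panels are practical for water heating even where they cannot carry the full space-heating load.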
Generation of Electricity
Electricity can be generated by a variety of technologies that ultimately depend on the effects of solar radiation. Windmills and waterfalls (themselves very old sources of mechanical energy) can be used to turn turbines to generate electricity. The energies of wind and falling water are considered forms of solar energy, because the Sun’s heating power creates wind and replenishes the water in rivers and streams. Most existing windmill installations are relatively small, containing ten or more windmills in a grid configuration that takes advantage of wind shifts. In contrast, most electricity from hydroelectric installations comes from giant dams. Many sites suitable for large dams have already been tapped, especially in the industrialized nations. However, during the 1970s small dams used years earlier for mechanical energy were retrofitted to generate electricity.
Large-scale hydroelectric projects are still being pursued in many developing countries. The simplest form of solar-powered electricity generation is the use of an array of collectors that heat water to produce steam to turn a turbine. Several of these facilities are in existence.
Other sources of Sun-derived electricity involve high-technology options that remain unproven commercially on a large scale. Photovoltaic cells (see Photoelectric Effect; Solar Energy), which convert sunlight directly into electricity, are currently being used for remote locations to power orbiting space satellites, gates at unattended railroad crossings, and irrigation pumps. Progress is needed to lower costs before widespread use of photovoltaic cells is possible. The commercial development of still other methods seems far in the future. Ocean thermal energy conversion (OTEC) would generate electricity on offshore platforms by using the temperature difference between warm surface water and cold seawater pumped up from great depths to drive a turbine. Also still highly speculative is the notion of using space satellites to beam electricity via microwaves down to Earth.
Fuels from biomass encompass several different forms, including alcohol fuels (mentioned earlier), dung, and wood. Wood and dung are still major fuels in some developing countries, and high oil prices have caused a resurgence of interest in wood in industrialized countries. Researchers are giving increasing attention to the development of so-called energy crops (perennial grasses and trees grown on agricultural land). There is some concern, however, that heavy reliance on agriculture for energy could drive up prices of both food and land.
The total amount of solar energy now being used may never be accurately estimated, because some sources are not recorded. In the early 1980s, however, two main sources of solar energy, hydroelectric energy and biomass, contributed more than twice as much as nuclear energy to the world energy supply. Nevertheless, these two sources are limited by the availability of dam sites and the availability of land to grow trees and other plant materials, so the future development of solar energy will depend on a broad range of technological advances.
The potential of solar energy, with the exception of hydroelectricity, will remain underutilized well beyond the year 2000, because solar energy is still much more expensive than energy derived from fossil fuels. The long-term outlook for solar energy depends heavily on whether the prices of fossil fuels increase and whether environmental regulations become stricter. For example, stricter environmental controls on burning fossil fuels may increase coal and oil prices, making solar energy a less expensive energy source in comparison.
Geothermal energy, an aspect of the science known as Geothermics, is based on the fact that the earth is hotter the deeper one drills below the surface. Water and steam circulating through deep hot rocks, if brought to the surface, can be used to drive a turbine to produce electricity or can be piped through buildings as heat. Some geothermal energy systems use naturally occurring supplies of geothermal water and steam, whereas other systems pump water down to the deep hot rocks. Although theoretically limitless, in most habitable areas of the world this subterranean energy source lies so deep that drilling holes to tap it is very expensive.
ENERGY EFFICIENCY IMPROVEMENTS
In addition to the development of alternative energy sources, energy supplies can be extended through conservation (the planned management) of currently available resources. Three types of possible energy conservation practices may be described. The first type is curtailment, that is, doing without—for example, closing factories to reduce the amount of power consumed or cutting back on travel to reduce the amount of gasoline burned. The second type is overhaul, that is, changing the way people live and the way goods and services are produced—for example, slowing further suburbanization of society, using less energy-intensive materials in production processes, and decreasing the amount of energy consumed by certain products (such as automobiles). The third type involves the more efficient use of energy, that is, adjusting to higher energy costs—for example, investing in cars that go farther per unit of fuel, capturing waste heat in factories and reusing it, and insulating houses. This third option requires the least drastic changes in lifestyle, so governments and societies most commonly adopt it over the other two.
By 1980 many people had come to recognize that increased energy efficiency could help the world energy balance in the short and middle term, and that productive conservation should be considered as no less an energy alternative than the energy sources themselves. Substantial energy savings began to occur in the United States in the 1970s, when, for example, the federal government imposed a nationwide automobile efficiency standard and offered tax deductions for insulating houses and installing solar energy panels. Substantial additional energy savings from conservation measures appear possible without dramatically affecting the way people live.
A number of obstacles stand in the way, however. One major roadblock to productive conservation is its highly fragmented and unglamorous character; it requires hundreds of millions of people to do mundane things such as turning off lights and keeping tires properly inflated. Another barrier has been the price of energy. When adjusted for inflation, the cost of gasoline in the United States was lower in 1998 than it was in 1972. Low energy prices make it difficult to convince people to invest in energy efficiency. From 1973 to the mid-1980s, when oil prices increased in the United States, energy consumption per person dropped about 14 percent, in large part due to conservation measures. However, because oil has become cheaper during the 1990s, the U.S. Energy Department predicts that by the year 2000 energy use in the United States will increase to within 2 percent of 1973 levels. Over time, improvements in energy efficiency more than pay for themselves. However, they require large capital investments, which are not attractive when energy prices are low. Major areas for such improvements are described below.
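The payback logic behind such investments can be sketched with a few lines of arithmetic. The dollar figures and energy savings below are invented purely for illustration:

```python
# Illustrative payback calculation for an energy-efficiency investment.
# All figures are hypothetical, chosen only to show why low energy
# prices lengthen the payback period.

def payback_years(upfront_cost, annual_energy_saved, energy_price):
    """Years until accumulated energy savings cover the upfront cost."""
    annual_savings = annual_energy_saved * energy_price
    return upfront_cost / annual_savings

# Insulating a house: $2,000 upfront, saving 500 therms of heat per year.
print(payback_years(2000, 500, 0.80))  # high energy price: 5.0 years
print(payback_years(2000, 500, 0.40))  # low energy price: 10.0 years
```

Halving the energy price doubles the payback period, which is why efficiency investments stall when energy is cheap.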
Although transportation uses only 25 percent of the total energy consumed in the United States, it accounts for 66 percent of the oil used in the United States. Cars built in other countries have long tended to be more efficient than American cars, partly because of the pressures of heavy taxes on gasoline. In 1975 the U.S. Congress passed a law that mandated doubling the fuel efficiency of new cars by 1985. This law, coupled with gasoline shortages in 1974 and 1979 and substantially higher gasoline prices (especially since 1979), caused the average efficiency of all U.S. cars to improve by about 40 percent between 1975 and 1990. However, much of this improvement has been offset by dramatic increases in the number of cars on the road and by the growth in sales of sport utility vehicles and light trucks (which are not covered by federal efficiency standards). By 1996 the number of automobiles used worldwide had grown to 652 million vehicles. This number is expected to increase to nearly 1 billion by 2018. Experts predict that unless more efficient technologies are developed, this growth will raise demand for gasoline by over 20 million barrels per day. Automobile manufacturers have the technical capability today to build cars with a much higher fuel efficiency than that mandated by Congress. Mass production of cars with this efficiency would require vast capital investments, however. New engine technologies that rely on electric batteries or highly efficient fuel cells, as well as engines that run on natural gas, may play a much greater role in the early 21st century. Increases in the prices of gasoline and parking have encouraged two other modes of transportation: ride sharing (either van or car pools) and public transportation. These methods can be highly efficient, but the sprawling character of many U.S. cities can make their use difficult.
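How fleet growth can swamp efficiency gains is a matter of simple arithmetic. The fleet sizes, annual mileage, and fuel-economy figures below are illustrative assumptions, not historical data:

```python
# Rough sketch of how growth in the number of cars can offset
# improvements in fuel economy. All inputs are invented for illustration.

def fleet_fuel_demand(vehicles, miles_per_year, miles_per_gallon):
    """Total gasoline demand of a fleet, in gallons per year."""
    return vehicles * miles_per_year / miles_per_gallon

# Earlier fleet: fewer cars at a lower fuel economy.
before = fleet_fuel_demand(300e6, 10_000, 15)
# Later fleet: fuel economy up 40 percent, but twice as many cars.
after = fleet_fuel_demand(600e6, 10_000, 15 * 1.4)
print(after / before)  # ratio > 1: demand still rises despite better mpg
```

Under these assumptions total demand grows by roughly 43 percent even though every individual car burns less fuel per mile, which mirrors the offsetting effect described above.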
Profit-conscious business managers increasingly emphasize the modification of products and manufacturing processes in order to save energy. The industrial sector, in fact, has recorded more significant improvements in efficiency than either the residential or the transportation sector. Improvements in manufacturing can be classified into three broad, somewhat overlapping, categories: improved housekeeping—doing routine maintenance on furnaces and using only necessary lighting; recovery of waste—recovering heat and recycling waste by-products; and technological innovation—redesigning products and processes to embody more efficient technologies.
In the 1950s and 1960s efficient energy use was often neglected in constructing buildings and houses, but the high energy prices of the 1970s changed that. Some office buildings built since 1980 use only a fifth of the energy used in buildings constructed just ten years earlier. Techniques to save energy include designing and siting buildings to use passive solar heat, using computers to monitor and regulate the use of electricity, and investing in more efficient lighting and in improved heating and cooling systems. A life-cycle approach, which takes into account the total costs over the entire life of the building rather than merely the initial construction cost or sales price, is encouraging greater efficiency. Also, the retrofitting of old buildings, in which new components and equipment are used in existing structures, has been successful.
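The life-cycle approach can be illustrated with a toy comparison of two buildings. The construction and energy costs below are hypothetical, and discounting of future costs is ignored for simplicity:

```python
# Life-cycle cost comparison of two hypothetical buildings: one cheap
# to build but costly to operate, one costlier up front but efficient.
# All dollar amounts are invented for illustration; future costs are
# not discounted, a deliberate simplification.

def life_cycle_cost(construction_cost, annual_energy_cost, lifetime_years):
    """Total cost over the building's entire life, not just the sales price."""
    return construction_cost + annual_energy_cost * lifetime_years

conventional = life_cycle_cost(1_000_000, 50_000, 40)  # 3,000,000
efficient = life_cycle_cost(1_200_000, 10_000, 40)     # 1,600,000
print(efficient < conventional)  # True: efficiency wins over the life cycle
```

A buyer comparing only the initial prices would pick the conventional building; the life-cycle view reverses that choice, which is the point of the approach.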
Chemistry, History of
Chemistry, History of, history of the study of the composition, structure, and properties of material substances, of the interactions between substances, and of the effects on substances of the addition or removal of energy in any of its several forms. From the earliest recorded times, humans have observed chemical changes and have speculated as to their causes. By following the history of these observations and speculations, the gradual evolution of the ideas and concepts that have led to the modern science of chemistry can be traced.
ANCIENT TECHNOLOGY AND PHILOSOPHY
The first known chemical processes were carried out by the artisans of Mesopotamia, Egypt, and China. At first the smiths of these lands worked with native metals such as gold or copper, which sometimes occur in nature in a pure state, but they quickly learned how to smelt metallic ores (primarily metallic oxides and sulfides) by heating them with wood or charcoal to obtain the metals. The progressive use of copper, bronze, and iron gave rise to the names that have been applied to the corresponding ages by archaeologists. A primitive chemical technology also arose in these cultures as dyers discovered methods of setting dyes on different types of cloth, and as potters learned how to prepare glazes, and, later, to make glass.
Most of these craftspeople were employed in temples and palaces, making luxury goods for priests and nobles. In the temples, the priests especially had time to speculate on the origin of the changes they saw in the world about them. Their theories often involved magic, but they also developed astronomical, mathematical, and cosmological ideas, which they used in attempts to explain some of the changes that are now considered chemical.
GREEK NATURAL PHILOSOPHY
The first culture to consider these ideas scientifically was that of the Greeks. From the time of Thales, about 600 bc, Greek philosophers were making logical speculations about the physical world rather than relying on myth to explain phenomena. Thales himself assumed that all matter was derived from water, which could solidify to earth or evaporate to air. His successors expanded this theory into the idea that four elements composed the world: earth, water, air, and fire. Democritus thought that these elements were composed of atoms, minute particles moving in a vacuum. Others, especially Aristotle, believed that the elements formed a continuum of mass and therefore a vacuum could not exist. The atomic idea quickly lost ground among the Greeks, but it was never entirely forgotten. When it was revived during the Renaissance, it formed the basis of modern atomic theory (see Atom).
Aristotle became the most influential of the Greek philosophers, and his ideas dominated science for nearly two millennia after his death in 322 bc. He believed that four qualities were found in nature: heat, cold, moisture, and dryness. The four elements were each composed of pairs of these qualities; for example, fire was hot and dry, water was cold and moist, air was hot and moist, and earth was cold and dry. These elements with their qualities combined in various proportions to form the components of the earthly world. Because it was possible for the amounts of each quality in an element to be changed, the elements could be changed into one another; thus, it was thought possible also to change the material substances that were built up from the elements—lead into gold, for example.
ALCHEMY: RISE AND DECLINE
Aristotle's theory was accepted by the practical artisans, especially at Alexandria, Egypt, which after 300 bc became the intellectual center of the ancient world. They thought that metals in the earth sought to become more and more perfect and thus gradually changed into gold. It seemed to them that they should be able to carry out the same process more rapidly in their own workshops and so artificially to transmute common metals into gold. Beginning about ad 100 this idea dominated the minds of the philosophers as well as the metalworkers, and a large number of treatises were written on the art of transmutation, which became known as alchemy. Although no one ever succeeded in making gold, a number of chemical processes were discovered in the search for the perfection of metals.
At almost the same time, and probably independently, a similar alchemy arose in China. Here, also, the aim was to make gold, although not because of the monetary value of the metal. The Chinese believed that gold was a medicine that could confer long life or even immortality on anyone who consumed it. As did the Egyptians, the Chinese gained practical chemical knowledge from incorrect theories.
Dispersal of Greek Thought
After the decline of the Roman Empire, Greek writings were less openly studied in western Europe, and even in the eastern Mediterranean they were largely neglected. In the 6th century, however, a sect of Christians known as the Nestorians, whose language was Syriac, spread their influence throughout Asia Minor. They established a university at Edessa in Mesopotamia and translated a large number of Greek philosophical and medical writings into Syriac for use among scholars.
In the 7th and 8th centuries Arab conquerors spread Islamic culture over much of Asia Minor, North Africa, and Spain. The caliphs at Baghdād became active patrons of science and learning. The Syriac translations of Greek texts were again translated, this time into Arabic, and along with the rest of Greek learning the ideas and practice of alchemy once again flourished.
The Arabic alchemists were also in contact with China in the East, thus receiving the concept of gold as a medicine, as well as the Greek idea of gold as a perfect metal. A specific agent, the philosopher's stone, was thought to stimulate transmutation, and this became the object of the alchemists' search. The alchemists now had an added incentive to study chemical processes, for they might lead not only to wealth but also to health. The study of chemicals and chemical apparatus made steady progress. Such important reagents as the caustic alkalis (see Alkali Metals) and ammonium salts (see Ammonia) were discovered, and distillation apparatus was steadily improved. An early realization of the need for more quantitative methods also appeared in some Arabic recipes, where specific instructions were given regarding the amounts of reagents to be employed.
The Late Middle Ages
A great intellectual reawakening began in western Europe in the 11th century. This was stimulated in part by the cultural exchanges between Arabs and Western scholars in Sicily and Spain. Schools of translators were established, and their translations transmitted Arabic philosophical and scientific ideas to European scholars. Thus, knowledge of Greek science, passed through the intermediate languages of Syriac and Arabic, was disseminated in the scholarly tongue of Latin and so eventually came to all parts of Europe. Many of the manuscripts most eagerly read were those concerning alchemy.
These manuscripts were of two types: Some were almost purely practical, and some attempted to apply theories of the nature of matter to alchemical problems. Among the practical subjects discussed was distillation. The manufacture of glass had been greatly improved, particularly in Venice, and it now became possible to construct even better distillation apparatus than the Arabs had made and to condense the more volatile products of distillation. Among the important products obtained in this way were alcohol and the mineral acids: nitric, aqua regia (a mixture of nitric and hydrochloric), sulfuric, and hydrochloric. Many new reactions could be carried out using these powerful reagents. Word of the Chinese discovery of nitrates and the manufacture of gunpowder also came to the West through the Arabs. The Chinese at first used gunpowder for fireworks, but in the West it quickly became a major part of warfare. An effective chemical technology existed in Europe by the end of the 13th century.
The second type of alchemical manuscript transmitted by the Arabs was concerned with theory. Many of these writings reveal a mystical character that contributed little to the advancement of chemistry, but others sought to explain transmutation in physical terms. The Arabs had based their theories of matter on Aristotle's ideas, but their thinking tended to be more specific than his. This was especially true of their ideas concerning the composition of metals. They believed that metals consisted of sulfur and mercury—not the familiar substances with which they were perfectly well acquainted, but rather the “principle” of mercury, which conferred the property of fluidity on metals, and the “principle” of sulfur, which made substances combustible and caused metals to corrode. Chemical reactions were explained in terms of changes in the amounts of these principles in material substances.
During the 13th and 14th centuries the influence of Aristotle on all branches of scientific thought began to weaken. Actual observation of the behavior of matter cast doubt on the relatively simple explanations Aristotle had given; such doubts spread rapidly after the invention around 1450 of printing with movable type. After 1500 printed alchemical works appeared in increasing numbers, as did works devoted to technology. The result of this increasing knowledge became apparent in the 16th century.
The Rise of Quantitative Methods
Among the influential books that appeared at this time were practical works on mining and metallurgy. These treatises devoted much space to assaying ores for their content of valuable metals, work that required the use of the laboratory balance, or scale, and the development of quantitative methods (see Chemical Analysis). Workers in other fields, especially medicine, began to recognize the need for greater precision. Physicians, some of whom were alchemists, needed to know the exact weight or volume of the doses they administered. Thus, they used chemical methods for preparing medicines.
These methods were combined and forcefully promoted by the eccentric Swiss physician Theophrastus von Hohenheim, generally called Paracelsus. He grew up in a mining region and became familiar with the properties of metals and their compounds, which he believed were superior to the herbal remedies used by orthodox physicians. He spent most of his life in violent quarrels with the medical establishment of the day, and in the process he founded the science of iatrochemistry (the use of chemical medicines), the forerunner of pharmacology. He and his followers discovered many new compounds and chemical reactions. He modified the old sulfur-mercury theory of the composition of metals by adding a third component, salt, the earthy part of all substances. He declared that when wood burns “that which burns is sulfur, that which vaporizes is mercury, and that which turns to ashes is salt.” As with the sulfur-mercury theory, these were principles and not the material substances. His emphasis on combustible sulfur was important for the later development of chemistry. The iatrochemists who followed Paracelsus modified some of his wilder ideas and collected his and their own recipes for preparing chemical remedies. Finally, at the end of the 16th century, Andreas Libavius published his Alchemia, which organized the knowledge of the iatrochemists and is frequently called the first textbook of chemistry.
In the first half of the 17th century a few men began to study chemical reactions experimentally, not because they were useful in other disciplines, but rather for their own sake. Jan Baptista van Helmont, a physician who left medical practice to devote himself to the study of chemistry, used the balance in an important experiment to show that a definite quantity of sand could be fused with excess alkali to form water glass, and that when this product was treated with acid, it regenerated the original amount of sand (silica). Thus were laid the foundations of the law of conservation of mass. Van Helmont also showed that in a number of reactions an aerial fluid was liberated. He called this substance “gas.” A new class of substances with its own physical properties was shown to exist.
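In modern notation, which van Helmont did not have, the mass balance he demonstrated can be sketched as a pair of reactions. The choice of sodium hydroxide as the alkali and hydrochloric acid as the acid is an assumption made here for illustration; his actual reagents were likely potash-based:

```latex
\mathrm{SiO_2 + 2\,NaOH \longrightarrow Na_2SiO_3 + H_2O}
\quad \text{(sand fused with alkali yields water glass)}
```
```latex
\mathrm{Na_2SiO_3 + 2\,HCl \longrightarrow SiO_2 + 2\,NaCl + H_2O}
\quad \text{(acid regenerates the silica)}
```

Every silicon atom entering the first reaction reappears in the recovered sand, so its weight matches that of the original sample; this is the regularity van Helmont's balance revealed.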