Introduction to When the Earth Moves
Earthquakes dominated the news in late 1999. The deadliest earthquake of the year killed more than 17,000 people in and around the town of Izmit, Turkey, on August 17. Three weeks later, on September 7, the strongest quake to hit Athens, Greece, in 100 years killed at least 135 people in that city and left some 60,000 homeless. And on September 21, the year's most powerful earthquake struck the island of Taiwan, claiming more than 2,300 lives. On November 12, the death toll in Turkey rose when a strong aftershock of the August earthquake killed nearly 700 more people.
Although earthquakes as deadly as the one that struck Izmit are unusual events, earthquakes are a daily occurrence on our planet. Most are too slight to be noticed, even by sophisticated monitoring devices, but about 30,000 occur each year that are strong enough to be felt by people. Of that number, about 60 are large enough and close enough to populated areas to cause significant damage and loss of life.
On average, some 10,000 people die each year from earthquakes, but the number is sometimes much higher. The earthquakes of 1999 caused more than 20,000 deaths, well above the average, though not extraordinarily high in comparison with some of the quakes that have been recorded. In 1556, for example, a devastating earthquake in China killed at least 830,000 people. But catastrophic death tolls are not confined to the distant past. In 1976, a great earthquake shook Tangshan, China. According to the Chinese government, 255,000 people died, but because the government denied independent observers access to the affected area, the actual extent of the damage remains poorly known.
Because earthquakes can be so deadly and destructive, seismologists (scientists who study earthquakes) have been searching for ways to reliably predict when and where they will occur. As of 2000, their efforts remained unsuccessful. However, their research has accumulated a wealth of knowledge about earthquakes that has helped reduce the damage and loss of life they cause. That knowledge has led to new methods for designing and constructing buildings, roads, and other structures that have made them less vulnerable to earthquakes. It has also resulted in long-term regional forecasts of earthquake risk and warning systems designed to detect the first sign of an earthquake, trigger an alarm, and inform emergency response teams about the location and severity of the quake.
Early Theories and Research
People have feared earthquakes and puzzled over their cause since the dawn of civilization. Many ancient cultures developed mythological explanations for earth tremors, often involving angry gods or enormous beasts lumbering about the underworld. Some of the earliest attempts to explain earthquakes scientifically came from the ancient Greeks. In China and Japan, people kept detailed records of earthquakes as early as 3,000 years ago, and by the A.D. 100's the Chinese had developed primitive seismoscopes (instruments that detect ground motion).
The modern era of seismology began in the mid-1700's, driven in large measure by a catastrophic earthquake in Lisbon, Portugal, in 1755 that killed some 70,000 people and practically destroyed the city. In the middle and late 1800's, there were several major developments in the field of seismology. They included the seismometer, an instrument that detects earthquakes and records the time they occurred, and the seismograph, which records the input from a seismometer onto a moving sheet of paper.
In the United States, earthquake researchers in 1910 founded the Seismological Society of America in the wake of the great San Francisco earthquake of 1906. At least 3,000 people died as a result of that earthquake—most of them perishing in the fires that raged for three days after the quake. Despite the relatively small death toll, seismologists consider the San Francisco earthquake one of the most significant quakes of all time because of the amount of ground movement it caused. What seismologists learned from the San Francisco quake helped them to unlock many of the mysteries about the nature and cause of earthquakes.
The researchers discovered, for instance, that the quake had occurred on a fault—a fracture in the ground along which land on one side moves relative to the other side. The fault was 430 kilometers (270 miles) long and located west of the city. In some places, land on the western side of the fault had slipped an incredible 7 meters (23 feet) northward. This fracture was later recognized as a segment of the San Andreas Fault, a 970-kilometer (600-mile) fault that stretches from off the northwestern California coast to the southeastern part of the state.
Seeking A Better Understanding of Earthquakes
It was the study of the great San Francisco earthquake that led an American geologist, Harry Fielding Reid, to formulate the elastic-rebound theory in 1910. According to this theory, which is still accepted by scientists, most earthquakes are caused by the sudden release of stress from a portion of the earth's crust, or rocky outer shell, where masses of rock are slowly moving in opposing directions. As two sections of crust grind against each other, resistance to their motion causes them to lock together. As a result, stress gradually builds up in the rock. Eventually, the rock reaches its breaking point and fractures, allowing the sections to momentarily break free. Land areas on either side of the fault line rebound, or snap into a new position, which relieves the stress. The rebounding releases energy in the form of vibrations called seismic waves. Seismic waves are responsible for the shaking of the earth that characterizes an earthquake.
Geologists have identified several types of faults and classified them into three broad categories based on the direction of displacement: normal faults, reverse faults, and strike-slip faults. Normal faults occur in areas where two blocks of earth move apart and one drops to a lower level. Reverse faults, also called thrust faults, result when two blocks of earth collide and one block is forced down and beneath the other. Strike-slip faults occur in areas where two blocks slide past each other.
The Development of Measurement Scales
While some seismologists were trying to understand why earthquakes occur, others were devising ways to measure their strength. Before the introduction of modern seismometers, earthquakes were measured in terms of their intensity—the observable effects of ground shaking at a specific point. Of the many intensity scales that were developed to measure the force of earthquakes, the best-known in the U.S., Europe, and Japan is the modified Mercalli intensity scale. This scale, developed in 1931 by two American seismologists, Harry Wood and Frank Neumann, measures the effects of the shaking on people, artificial structures, and the landscape at a certain location. The scale identifies 12 increasing levels of intensity, each designated by a Roman numeral. A level of VI on the Mercalli scale represents a small amount of observable damage, such as the cracking of weak plaster walls and ceilings; level XII signifies total destruction.
The major disadvantage of the Mercalli scale is that it does not have a physical basis. It is based on reports from witnesses, which can be subjective, and damage estimates for an earthquake, which can vary widely. Nonetheless, the Mercalli scale has proved helpful in determining which parts of quake-prone areas experience the worst shaking, and so it continues to be used by building engineers and city planners.
By the mid-1900's, seismologists had developed magnitude scales to measure the actual strength of earthquakes. These scales were made possible by new seismic monitoring devices that were sensitive enough to detect vibrations in the Earth from thousands of kilometers away. A magnitude scale calculates the amount of energy released at the hypocenter (point of origin within the Earth) of the earthquake. It is based on how much the ground moves at the site of the seismograph as a seismic wave passes through. Although the hypocenter is the actual center point of an earthquake, most people are more familiar with the term epicenter, which refers to the point on the surface directly above the hypocenter.
The best-known earthquake magnitude scale was developed in 1935 by Charles F. Richter, a seismologist at the California Institute of Technology. Although the Richter scale is theoretically open ended, nature seems to have placed an upper limit on the power of an earthquake at about magnitude 9. Each number on the Richter scale represents a 10-fold increase in ground motion and a 32-fold increase in the amount of energy released. Earthquakes measuring 1 or 2 on the Richter scale are usually so small that only seismometers can detect them, and those measuring 3.9 or less are generally called minor. Light to moderate earthquakes measure 4 to 5.9 on the Richter scale, and strong quakes measure between 6 and 6.9. Earthquakes of magnitude 7 to 7.9 are considered major earthquakes, and those measuring 8 and above are classified as great earthquakes.
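The ratios described above follow directly from the scale's logarithmic definition, and can be checked with a short calculation (Python is used here purely for illustration; the function names are my own):

```python
def amplitude_ratio(m1, m2):
    """Ratio of recorded ground motion between two Richter magnitudes.

    Each whole-number step on the scale corresponds to a 10-fold
    increase in the amplitude of the seismic waves.
    """
    return 10 ** (m2 - m1)

def energy_ratio(m1, m2):
    """Approximate ratio of energy released between two magnitudes.

    Energy grows by a factor of 10**1.5 (about 32) per whole-number
    step, matching the rule of thumb given in the text.
    """
    return 10 ** (1.5 * (m2 - m1))

# A magnitude-7 quake compared with a magnitude-5 quake:
print(amplitude_ratio(5, 7))         # 100-fold ground motion
print(round(energy_ratio(5, 7)))     # roughly 1,000-fold energy release
```

Note that the per-step energy factor is really 10^1.5 ≈ 31.6; the "32-fold" figure in the text is the conventional rounding.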
To measure the largest earthquakes, seismologists began in the 1970's to use the moment magnitude scale, which relies on data recorded by instruments that are more sophisticated than those available in Richter's time. Seismic moment is a quantity related to the total energy released by an earthquake. It is calculated from the amount of slip on the fault and the area of the fault surface that ruptured. Moment magnitude and Richter magnitude are about the same for earthquakes up to magnitude 7, but the moment magnitude scale is considered more accurate for gauging major and great earthquakes.
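As a sketch of how slip and rupture area translate into a magnitude, the calculation below uses the standard Hanks-Kanamori conversion formula and a rock rigidity of 30 GPa. The formula and all numerical values are standard illustrations, not details taken from this article:

```python
import math

def seismic_moment(rigidity_pa, area_m2, slip_m):
    """Seismic moment M0 = rock rigidity x fault area x average slip (N*m)."""
    return rigidity_pa * area_m2 * slip_m

def moment_magnitude(m0_newton_meters):
    """Hanks-Kanamori moment magnitude: Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# Illustrative values (not measurements of any real quake):
# rigidity ~30 GPa, a 100 km x 20 km rupture, 3 m of average slip.
m0 = seismic_moment(3.0e10, 100e3 * 20e3, 3.0)
print(round(moment_magnitude(m0), 1))  # about 7.4 -- a major earthquake
```

A rupture of these dimensions thus falls in the "major" range of the scale, which is consistent with the scale's role in gauging large quakes.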
Why the Earth Moves: Plate Tectonics
As the study of seismology progressed, one major question remained unanswered: Why do the huge masses of rock in the Earth's crust move about so as to create the stresses that lead to earthquakes? Researchers realized that earthquakes tend to occur frequently in some places but rarely or never in others. Maps showing the location of earthquakes throughout history showed that most earthquake activity is concentrated in narrow zones that extend around the Earth. Most of these zones coincide with prominent geologic features such as mountain ranges (both on land and in the ocean), island chains, and volcanoes. This observation led to a new theory to explain the underlying cause of most earthquakes: plate tectonics.
The theory of plate tectonics was based on the “continental drift” hypothesis first proposed by the German meteorologist Alfred Wegener in 1912. Wegener theorized that all of Earth's land masses were once part of a single “supercontinent” that he called Pangaea. About 200 million years ago, the theory states, Pangaea began breaking up into smaller fragments that gradually drifted to their present positions. This theory explains why the shapes of some land masses—the eastern coast of the Americas and the western coast of Africa, for instance—seem to fit together like pieces of a jigsaw puzzle. But Wegener included no explanation for why the continents might have drifted, and most geologists dismissed his theory. In the 1960's, however, a growing body of evidence began to generate support for the idea of plate tectonics, a detailed version of continental drift.
According to the theory of plate tectonics, Earth's lithosphere—the layer of the planet that includes the crust and the region directly below it, called the upper mantle—is a mosaic of continental (land-surface) plates and oceanic (sea-floor) plates. These rocky tectonic plates, which average about 80 kilometers (50 miles) thick, float on a layer of partially molten rock. Researchers have identified about 30 tectonic plates, all of which are moving constantly in relation to one another, driven mainly by currents of molten material rising up from the lower mantle.
In the 1960's, geologists recognized that geologic features called mid-ocean ridges and subduction zones are related to the motion of crustal plates. These discoveries were crucial to the recognition of plate tectonics. Mid-ocean ridges are regions of the sea floor where plates are pulling apart, allowing molten material to well up from below and create new crust. Subduction zones form in areas where an oceanic plate and a continental plate are colliding. The collision forces the heavier, denser oceanic plate downward, beneath the lighter, less-dense continental plate. It is this sort of resistance to tectonic motion at points where plates come into contact that leads to earthquakes.
Although tectonic activity can account for most earthquakes, there are notable exceptions. Several large earthquakes that struck near New Madrid, Missouri, in 1811 and 1812, for example, occurred far from any plate boundary. Such intraplate quakes may occur when tectonic stresses from a distant plate boundary are transmitted through the crust and reach an area far from the fault, strongly affecting a region where the crust has some preexisting weakness. Such a weakness could, for example, be the result of an ancient fault.
How Earthquakes Cause Damage
Seismologists have investigated several ways that earthquakes cause destruction. Most of their studies have focused on ground motion and soil liquefaction, a phenomenon in which soil behaves much like a liquid.

Ground motion is the most obvious source of earthquake damage. The energy-carrying seismic waves of an earthquake radiate in all directions from the hypocenter. Scientists have learned that there are two major types of seismic waves—body waves and surface waves. Body waves, which travel directly through the Earth, are short, sharp motions that usually cause little damage away from the epicenter. Surface waves, which travel along the surface of the Earth, cause the most destruction. Surface waves can demolish buildings and trigger landslides and avalanches far from the epicenter.
The force of the ground motion at a particular location depends on the magnitude of the earthquake, the distance of the site from the epicenter, and the surface geology in the area. Surface waves travel at different speeds through different types of material. When passing from rock to soil, for example, the waves slow down but get bigger. Areas with soft soil therefore shake more violently than rocky areas the same distance from the epicenter. Structures built atop loose soil layers are thus more likely to be damaged than nearby structures built on bedrock. For instance, when the 1989 Loma Prieta earthquake struck the San Francisco Bay area, the greatest damage was caused to San Francisco's Marina District, the Bay Bridge linking San Francisco and Oakland, and Interstate 880 in Oakland, all of which were built on loose soil layers.
Surface soil is also vulnerable to liquefaction. This phenomenon occurs in saturated soils, in which the tiny spaces between individual soil particles are filled with water. Under normal conditions, such soil seems solid because the individual particles are in contact with one another. The intense shaking of an earthquake can cause the particles to pull apart and float freely in the surrounding water, temporarily giving the ground the consistency of quicksand. Structures on liquefied soil can sink or tilt and end up permanently stuck when the shaking stops and the soil resolidifies. A classic example of liquefaction damage occurred in 1964 during an earthquake in Niigata, Japan. Several apartment buildings in that city sank into the soil or toppled over.
Earthquakes occurring on the ocean floor, even in areas far from civilization, can also bring catastrophe. When a significant portion of the sea floor is displaced by an earthquake, it can cause a huge volume of water to shift suddenly, forming a large, fast-moving wave called a tsunami. A tsunami can travel thousands of kilometers, causing devastation far from the quake that created it. The wave is often not noticeable on the open ocean, but it can become very large when it reaches shallow coastal areas. As it surges toward the shore, the huge mass of water piles up, reaching heights of 30 meters (100 feet) or more. On May 22, 1960, an immense subduction-type earthquake shook the floor of the Pacific Ocean off the coast of Chile, spawning several tsunamis. The waves killed more than 2,000 people, including 138 in Japan. As of 2000, this earthquake was the most powerful ever recorded, measuring 9.5 on the moment magnitude scale.
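A tsunami's open-ocean speed can be estimated with the standard shallow-water wave formula, v = sqrt(g x depth), which applies because a tsunami's wavelength far exceeds the ocean's depth. This formula and the sample distances below are standard physics used for illustration, not figures taken from this article:

```python
import math

def tsunami_speed_kmh(depth_m, g=9.81):
    """Shallow-water wave speed v = sqrt(g * depth), converted to km/h.

    Valid for tsunamis even in mid-ocean, because their wavelength
    is far greater than the water depth.
    """
    return math.sqrt(g * depth_m) * 3.6  # m/s -> km/h

def travel_time_hours(distance_km, speed_kmh):
    """Hours for a wave to cover a given distance at constant speed."""
    return distance_km / speed_kmh

# Over a 4,000-m-deep ocean basin the wave moves at roughly 700 km/h:
print(round(tsunami_speed_kmh(4000)))                               # about 713
# Crossing 5,000 km of open ocean at that speed takes about 7 hours:
print(round(travel_time_hours(5000, tsunami_speed_kmh(4000)), 1))   # about 7.0
```

The higher speeds quoted for tsunamis (up to roughly 970 kilometers per hour) correspond to crossings of the deepest parts of the ocean, where the formula yields a faster wave.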
Seeking Ways to Predict Earthquakes
Scientists have become expert at measuring the intensity of earthquakes and pinpointing their location. They have been far less successful, however, in finding ways to predict when earthquakes will occur. In the early 1970's, seismologists were optimistic that they could find reliable warning signs of an impending earthquake. They studied a number of natural phenomena that they thought might precede earthquakes, including geological dilatancy (an increase in the volume of rock caused by the growth of tiny cracks under stress), emissions of the radioactive gas radon from the ground, changes in the flow of underground water, and even unusual animal behavior. Unfortunately, none of these things proved to be reliable short-term predictors of earthquakes.
Another possible earthquake predictor that temporarily raised seismologists' hopes was the phenomenon of foreshocks, small tremors that originate in a fault zone and often precede an earthquake. On the basis of foreshocks, Chinese scientists in 1975 predicted that an earthquake would occur near Haicheng in northern China. Officials evacuated people from buildings in the area, and just hours later an earthquake struck. Despite that dramatic success, however, seismologists have since come to the conclusion that foreshocks are not a reliable way to predict earthquakes. Many large earthquakes strike with no foreshocks or other warning, and small tremors often occur that are not followed by an earthquake.
After many years with no clear or reliable success, many seismologists came to believe that earthquake prediction was impossible. However, in 2000, researchers using computer models (simulations) of earthquakes expressed renewed hope that prediction might become a reality. Working with simplified, computer-simulated fault zones, these researchers found patterns in the evolution of stress that might also be identifiable in an actual, known fault zone. Such patterns could lead to the discovery of a reliable earthquake indicator—some subtle but detectable change in a fault that would signal that an earthquake is about to occur.
Forecasts Versus Predictions
While some seismologists continued to search for ways to predict earthquakes, others concentrated on developing reliable earthquake forecasting methods and detection and warning systems. Seismologists make a distinction between predictions and forecasts. While earthquake prediction seeks to specify the time, place, magnitude, and probability of an anticipated earthquake, forecasting aims simply at making general statements about the future possibility of an earthquake in a particular area. Even though they are not specific enough to prompt evacuations, such forecasts can heighten public awareness about earthquake risks and help communities prepare for the day when a quake does occur.
One potential forecasting method for earthquakes relies on the seismic gap hypothesis. This hypothesis states that the region on a fault most likely to experience a large earthquake in the near future is the zone that has had large earthquakes in the past, but not recently. For instance, the 1999 Izmit earthquake on the North Anatolian Fault came as no surprise because it occurred in a seismic gap where no major earthquakes had occurred since 1967. Many seismologists, however, doubt the validity of such forecasts, so seismic gap forecasting remained controversial in 2000.
Most research on earthquake forecasting in 2000 was confined to identifying areas of above-average risk for an earthquake. These can be areas near known faults or simply regions where many earthquakes have occurred in the past. For areas with little or no record of past earthquakes, seismologists can examine the geologic record. For example, by carefully dating signs of ground movement, such as disturbed layers in an exposed rock formation, a seismologist can create a timeline of major earthquakes in an area covering many thousands of years. Geologic analysis can also help in making predictions about the probable effects of a future earthquake. Such analysis can reveal whether an area is prone to landslides or liquefaction, for example, or if it is likely to be shaken by strong seismic waves.
New Technologies and Hope For the Future
New technologies are also being used in earthquake forecasting. For instance, seismologists have used the Global Positioning System (GPS), a network of satellites orbiting the Earth, to enhance their monitoring of seismic activity. The satellites beam down a continuous stream of data that enables a person with a GPS receiver to determine his or her position to within a few centimeters. By equipping seismic monitoring stations with GPS receivers, seismologists can measure changes in the location of points on the surface that may be signals of a coming large earthquake.
Another approach to forecasting, one that was still experimental in 2000, also uses satellites to study earthquake-prone areas. This technique involves radar-mapping satellites, which make three-dimensional maps of a region by bouncing radar beams off the surface and timing their return. Using a computer, researchers combine two radar maps made at different times. Variations between the maps can reveal horizontal ground movement of as little as a few millimeters. This information can help researchers identify areas where stresses are building toward an earthquake.
Technology has already played a major role in developing early-warning systems for earthquakes, which focus on alerting people that an earthquake has begun. In several earthquake-prone regions, such as southern California, seismologists have set up networks of stations that continuously monitor ground movement. These stations can transmit an alert by radio or telephone line after the first tremors occur. The alert gives distant communities time to take emergency steps—sound alarms and turn off gas mains, for example—before the quake's full effects hit.
Earthquake warning systems may also issue tsunami warnings, which can be extremely valuable because tsunamis can be so devastating. Although a tsunami can travel at up to 970 kilometers (600 miles) per hour, the vastness of the oceans makes it possible to alert areas in the path of a wave long before it hits. A tsunami generated by an earthquake off the coast of Alaska, for example, would take about 5 hours to reach Hawaii. A warning from Alaska could thus give officials in Hawaii ample time to evacuate coastal regions ahead of the wave's arrival.

At the start of the 2000's, earthquake prediction was still not possible, but advances in seismology and construction engineering had helped reduce the devastation and loss of life caused by earthquakes. Many experts, however, worried that the rapid growth of the human population would lead to uncontrolled building in earthquake zones around the world. Even so, they expressed the hope that, with further understanding of the Earth's movements, communities in even the riskiest areas could be made less vulnerable to earthquakes.
