The magnitude 9.0 2011 Tohoku, Japan, and 2004 Sumatra-Andaman, Indonesia, earthquakes were black swans1 to many earthquake scientists because of their unexpectedly large magnitudes. The devastating tsunamis they triggered killed more than 250,000 people and caused massive economic losses. With these two megaquakes as a wake-up call, scientists and businesses want to know the maximum size of earthquake that can occur in a region, so that the seismic hazard can be better addressed and businesses can be better prepared.
The Tohoku and Sumatra-Andaman earthquakes occurred in subduction zones, where one of the tectonic plates that make up Earth’s surface dives beneath another plate. The subduction process causes the largest earthquakes in the world. Why were the maximum earthquake sizes in both regions underestimated? Can we find a way to estimate them better? With these questions, FM Global collaborated with Prof. David D. Jackson of the University of California, Los Angeles (UCLA), a renowned expert in earthquake research, to evaluate the earthquake magnitude limits along subduction zones.
Maximum earthquake magnitude
The maximum earthquake magnitude (mmax) is defined as the largest possible earthquake that could happen on a fault or in a region. In seismic hazard analysis, it is usually treated as a “hard” cut-off magnitude: earthquakes larger than mmax are not considered in the analysis. As a consequence, the choice of mmax can significantly affect seismic hazard and risk results. The use of mmax is also convenient for engineers and insurers when making decisions on construction standards or insurance policies. The most intuitive way to estimate mmax is to examine the earthquake history of a region and find the historical maximum earthquake magnitude. However, the period of historical earthquake observation may be short compared with the return time of large earthquakes, so the largest earthquakes may not have been recorded or may not have happened yet. Thus mmax can be severely underestimated using this method. For example, none of the known historical earthquakes in the Tohoku region before the 2011 event was larger than magnitude 8.4. As a result, most prior seismic hazard estimates for the Tohoku region limited mmax to below 8.5.
Another way to estimate mmax is by examining the known faults. A fault is a fracture in the Earth’s crust along which movement can take place, causing an earthquake. The size of a fault can limit the size of earthquakes occurring on it—large earthquakes require large faults. But earthquake ruptures can jump from one fault to another and create earthquakes larger than a single fault can hold. For instance, the rupture of the 2002 magnitude 7.9 Denali Fault earthquake in Alaska started on a previously unknown fault, now called the Susitna Glacier fault, continued on the Denali Fault, and finally terminated on the Totschunda fault. At present, we don’t know what ultimately stops such multi-fault ruptures.
In fact, mmax is ill-defined because it does not specify the time interval over which it is valid. With the available historical earthquake data, it is possible neither to determine nor to test mmax over the whole 4.5-billion-year geologic history of the Earth. Moreover, given an absolute mmax, it is difficult to argue why an earthquake with a magnitude just slightly larger is not possible. To overcome these predicaments, we introduced the concept of the probable maximum earthquake magnitude within a time period (T) of interest, mp(T). The new concept not only conveys the earthquake magnitude but also how frequently earthquakes of that size can happen. In this study, we determine not only the median values of mp(T) but also their uncertainty.
The “Ring of Fire,” and estimating probable maximum magnitude
One of the simplest and most useful statistical relationships in seismology is the Gutenberg-Richter (GR) law. It relates the magnitude and frequency of earthquakes: the number of earthquakes decreases exponentially as magnitude increases, so a plot of magnitude vs. the logarithm of earthquake frequency follows a straight line. The slope of that line is called the b-value; it governs the ratio of the number of small to large earthquakes. The GR law is amazingly robust. The b-value is typically about 1.0 in seismically active regions. For example, on average about 1,300 magnitude 5–5.9, 130 magnitude 6–6.9, and 15 magnitude 7–7.9 earthquakes are observed around the globe each year. However, the number of events usually drops more quickly at larger magnitudes, which the simple GR distribution does not capture. To overcome that shortcoming, scientists have devised ways to modify the distribution’s large-magnitude tail.
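The GR law can be written as log10 N(≥m) = a − b·m, where N(≥m) is the annual number of earthquakes at or above magnitude m. A minimal sketch of the law (the a-value below is illustrative, chosen so that b = 1.0 roughly reproduces the global rates quoted above; it is not a value from the study):

```python
# Gutenberg-Richter law: log10 N(>=m) = a - b*m, where N(>=m) is the
# annual number of earthquakes at or above magnitude m.  The a-value
# is illustrative, picked so b = 1.0 roughly matches global rates.
a, b = 8.16, 1.0

def annual_count(m_low, m_high):
    """Expected annual number of earthquakes with m_low <= m < m_high."""
    return 10 ** (a - b * m_low) - 10 ** (a - b * m_high)

for lo in (5.0, 6.0, 7.0):
    print(f"magnitude {lo}-{lo + 0.9}: about {annual_count(lo, lo + 1.0):.0f} per year")
```

With b = 1.0 the counts fall by a factor of ten per magnitude unit, which is why roughly 1,300 magnitude-5 events correspond to about 130 magnitude-6 events each year.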
One of the best such modifications is the Tapered Gutenberg-Richter (TGR) distribution, proposed by Yan Y. Kagan, a famous statistical seismologist at UCLA. The TGR distribution adds a parameter called the corner magnitude, above which the frequency of earthquakes falls off exponentially relative to the GR distribution. Using TGR distributions, we can determine the probable maximum magnitude corresponding to different time periods of interest. We applied this methodology to the circum-Pacific “Ring of Fire,” a belt of subduction zones marked by earthquakes and volcanoes surrounding the Pacific Ocean. About 90 percent of the world’s earthquakes occur in the Ring of Fire (Figure 1).
To populate the TGR distributions, we first need to delineate the subduction zones on a map. We use the Flinn-Engdahl zones (blue polygons in Figure 1), defined by the scientists of those names back in 1965. We then determine the b-value and corner magnitude for each of the zones surrounding the Pacific Ocean. We get the b-value by examining the number of events of different magnitudes in each zone. However, the available history (just over one hundred years for most of the zones) is too short for determining the corner magnitude, because that determination depends mainly on the magnitude distribution of large earthquakes, and not enough large earthquakes have occurred during the historical period.
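The article does not spell out how the b-value is extracted from the event counts; one standard choice, shown here as a sketch rather than the study's actual procedure, is the Aki/Utsu maximum-likelihood estimator b = log10(e) / (m̄ − (mmin − Δm/2)) for a catalog complete above magnitude mmin:

```python
import math
import random

def b_value_mle(mags, m_min, dm=0.1):
    """Aki/Utsu maximum-likelihood estimate of the Gutenberg-Richter
    b-value for a catalog complete above m_min, with magnitudes rounded
    to bins of width dm (use dm=0 for continuous magnitudes)."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

# Sanity check on a synthetic catalog drawn from a GR distribution
# with a true b-value of 1.0 (magnitudes above m_min are exponential).
random.seed(0)
m_min = 5.0
beta = 1.0 * math.log(10)  # exponential rate implied by b = 1.0
mags = [m_min + random.expovariate(beta) for _ in range(20000)]
b_hat = b_value_mle(mags, m_min, dm=0.0)
print(f"estimated b-value: {b_hat:.2f}")  # close to the true value of 1.0
```

The estimator needs only the mean magnitude above the completeness threshold, which is why a century of moderate earthquakes constrains the b-value well even though it cannot constrain the corner magnitude.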
To deal with the daunting task of estimating the corner magnitude, we employed a theory called the seismic moment conservation principle. Seismic moment is another quantity used by earthquake scientists to measure the size of an earthquake. It has dimensions of energy and can be tied to earthquake magnitude: if the magnitude increases by one unit, the seismic moment increases about 30-fold. The advantage of using seismic moment is that moment is additive: a collection of earthquakes produces a total seismic moment equal to the sum of the individual moments. Furthermore, seismic moment can be related to faulting. As tectonic plates push against one another, the stress in rocks increases. Because of the friction on a fault between the plates, the two plates cannot slide freely. Instead, they deform elastically and store energy, like a compressed spring. When the stress exceeds the friction or the breaking strength of the rocks, the rocks break or slide in an earthquake, and in a short period of time the elastic energy stored in the deformed rock is released. We refer to this built-up energy as the “tectonic moment,” which can be estimated from the characteristics of subduction zone faults and the motion of tectonic plates. Over a long period of time, the total seismic moment released by earthquakes in a region should be equal to or less than the tectonic moment. This forms the basis of the seismic moment conservation principle.
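The magnitude-moment tie mentioned above is conventionally the Hanks-Kanamori scale, M0 = 10^(1.5·m + 9.05) in newton-meters (the constant 9.05 is the common convention, not a number from the article). A short sketch of the "one unit ≈ 30 times" rule and of moment additivity:

```python
def moment_from_magnitude(m):
    """Seismic moment in newton-meters from moment magnitude, using the
    common Hanks-Kanamori convention M0 = 10**(1.5*m + 9.05)."""
    return 10 ** (1.5 * m + 9.05)

# One unit of magnitude multiplies the moment by 10**1.5 ~ 31.6 --
# the "about 30 times" quoted in the text.
ratio = moment_from_magnitude(7.0) / moment_from_magnitude(6.0)
print(f"one magnitude unit = {ratio:.1f}x the moment")

# Because moment is additive, cumulative release can be compared
# directly: a thousand magnitude-7 earthquakes carry the same total
# moment as a single magnitude-9 event.
total_m7 = 1000 * moment_from_magnitude(7.0)
one_m9 = moment_from_magnitude(9.0)
print(f"1000 x m7 / 1 x m9 = {total_m7 / one_m9:.2f}")
```

This additivity is what makes the conservation argument work: the moment budget supplied by plate motion caps the summed moment of all earthquakes, however it is partitioned among events.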
Using the earthquake statistics with the constraint of the estimated tectonic moment rate, we obtained the parameters for constructing a TGR distribution for each subduction zone. The TGR distributions tell us the occurrence rate of earthquakes at each magnitude. The reciprocal of the occurrence rate is the earthquake return period. Accordingly, a TGR curve gives mp for any period of interest. The uncertainty of mp varies among zones; for most zones, however, it is within ±0.2 to 0.3 magnitude units. Figure 2 illustrates the median values of mp for 100-, 500-, and 10,000-year periods for each of the subduction zones. For comparison, we also plotted the largest historical magnitudes since 1900. Our results show that most of the circum-Pacific subduction zones can generate m ≥ 8.8 earthquakes over a 500-year interval and m ≥ 9.0 earthquakes over a 10,000-year interval. Furthermore, most of the zones have not experienced their probable maximum magnitude earthquakes in the past 110 years, the period of historical observation.
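Putting the pieces together: under a TGR distribution (written here in its usual moment form, with moment exponent β = 2b/3), mp(T) is simply the magnitude whose annual rate equals 1/T. The sketch below uses purely illustrative zone parameters, not values from the study:

```python
import math

def moment(m):
    """Seismic moment (N*m) from magnitude, Hanks-Kanamori convention."""
    return 10 ** (1.5 * m + 9.05)

def tgr_rate(m, rate_t, m_t, beta, m_corner):
    """Annual rate of earthquakes with magnitude >= m under a tapered
    Gutenberg-Richter distribution: a power law in moment with exponent
    beta (= 2/3 of the b-value), rolled off exponentially above the
    corner magnitude.  rate_t is the rate at threshold magnitude m_t."""
    big_m, big_mt, big_mc = moment(m), moment(m_t), moment(m_corner)
    return rate_t * (big_mt / big_m) ** beta * math.exp((big_mt - big_m) / big_mc)

def probable_max_magnitude(T, rate_t, m_t, beta, m_corner):
    """Median probable maximum magnitude mp(T): the magnitude whose
    return period (1 / annual rate) equals T years, found by bisection."""
    lo, hi = m_t, 10.5  # rate(lo) >> 1/T and rate(hi) << 1/T
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if tgr_rate(mid, rate_t, m_t, beta, m_corner) > 1.0 / T:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical subduction zone: 5 events/yr at m >= 5.8, b-value 1.0
# (beta = 2/3), corner magnitude 8.7 -- illustrative numbers only.
for T in (100, 500, 10000):
    mp = probable_max_magnitude(T, rate_t=5.0, m_t=5.8, beta=2.0 / 3.0, m_corner=8.7)
    print(f"mp({T} yr) = {mp:.2f}")
```

Note how mp keeps growing with T even above the corner magnitude, just more slowly: the taper makes very large events rare, not impossible, which is exactly the conceptual difference between mp(T) and a hard mmax.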
Because the traditional methods for estimating mmax are inadequate and mmax itself is ill-defined, we introduced the probable maximum magnitude concept by attaching an effective time period to mmax. After estimating the probable maximum magnitude for each of the circum-Pacific subduction zones, we concluded that most of the zones can generate mega earthquakes over time periods of societal interest. There is hope that earthquake scientists, including seismic hazard modelers, will discard the ill-defined absolute maximum earthquake magnitude and embrace the new concept of probable maximum magnitude.
The available historical earthquake catalogs are too short to validate our results for periods of more than a few hundred years. Here, paleoseismic data—the geologic record of ancient earthquakes—can help. Along the Cascadia subduction zone in the U.S. Pacific Northwest, paleoseismic studies have unearthed an approximately 10,000-year history of great earthquakes. We will discuss how to use those data to further constrain our results in part two of this article in the next issue of Reason.
1Black swan is a metaphor, developed by Nassim Nicholas Taleb in his 2007 book, The Black Swan, for an event that is considered rare and unpredictable, has a massive impact, but is retrospectively predictable.
Yufang Rong, Ph.D., is a senior research scientist at FM Global.