Sedimentary rocks are laid down and accumulate on the surface of the Earth under normal temperatures and pressures. The particles that make up the sediments (clays, silts, sands, pebbles, and boulders) are formed from pre-existing rocks by weathering and are then transported by various agents of erosion until they are finally deposited in a permanent resting place. Loaded then by further sedimentary layers deposited on top of them, the sediments are compressed, so that any water lying between the grains is squeezed out (dewatering), often leaving behind the minerals it carried in solution to form cements that bind the grains together. Eventually the compaction and cementation turn the original sediment into rock. The process is called lithification.
If, in the process, organisms or traces of organisms have been buried along with the sedimentary particles, those remains may be preserved well enough to be exposed, recognized, and collected as fossils. We will return to the process of fossilization later.
We need to be able to place any fossils we find in the context of the rocks we find them in. The evidence allows us to make two different kinds of interpretation: the environment in which the rocks were laid down, and their age.
Example: interpretation of a desert shoreline 3.5 billion years ago (the Warrawoona rocks of Australia, where the oldest fossils on Earth, cyanobacteria, were discovered).
Another concept is considerably weaker: the principle of original horizontality. It suggests that most surfaces of sediments are roughly horizontal, so any new layer of sediment must inevitably be roughly horizontal too. It is easy to think of situations where this is not true. What that means is that one cannot really trust rocks that look similar to be the same age at any great distance from the place one is studying. There may be exceptions, but they have to be demonstrated.
So a geologist may observe a sequence of rocks in one road cut: mudstones, sandstones, limestones, etc., and another sequence in the next road cut a mile up the road. If the sequences are identical, the geologist might begin to form the hypothesis that they are the same rock beds, and so of the same age, and this might allow him or her to infer that the rock beds are continuous between the two road cuts, even though they run under the ground, grass, and poison oak and so cannot be seen. The whole science of studying rock layers is stratigraphy, and the process of matching rocks laid down at the same time is correlation.
Unfortunately, sequences of common kinds of sedimentary rock frequently look alike. If there is a special, rather distinctive or rare sediment in the sequence, that would act as a "marker bed" to allow the geologist to be more confident in identifying two sequences as identical. Examples might be beds laid down by a tsunami (a wave kicked up in the ocean by an earthquake), or a volcanic ash bed. These marker beds are not only distinctive, but are formed practically instantaneously, so that they are precise time markers. We do not know how old they are in thousands or millions of years, but we do know that they formed on the same day!
So simply matching rock types (lithostratigraphy) can be a powerful method of correlation. However, there is a more powerful way, based on the existence of specific time markers in rock sequences. The same problem arises in archaeology, of course. One must find markers that change with time.
Archaeologists use pottery, coins, or weapons as time markers, but these too can be diachronous (they may not mark exactly the same time level everywhere). Even so, things that people have dropped often work well: pottery fragments in archaeological sites, or Coke bottles and beer cans of various designs and metals in landfills and ocean sediments.
It would be nice for paleontologists if Klingon spaceships had dropped color-coded BBs on Earth every million years or so: each layer of BBs would mark a different time in the rock record. What paleontologists have instead are fossils that change with time. Since life first formed on this planet, it has evolved. The organisms alive at any time were different from their ancestors, and different from their descendants. What we have to do is to examine the sequence of fossils that occur in sedimentary rocks, and (laboriously) to collect up and down the layers, noting the changes, and getting to know which fossils are the best ones for telling us the age at which they were alive. This process is biostratigraphy (see below).
(Just as an aside, you don't *have* to believe in evolution to make this work. All you have to recognize is that fossils have changed through time. You can believe that fossils were planted by Klingons if you like, and you can still be an effective biostratigrapher. In fact, biostratigraphy began, and was very successful, long before Darwin explained the mechanism for evolution.) Change through time is a fact, and, of course, change through time is evolution. You might want to argue about the way it happened, but evolution itself is a fact, and oil companies bet millions of dollars on it every year. Here's how:
Large fossils (ones you can see with the naked eye) are desirable, but an oil company drilling holes in rocks could not expect to hit any given large fossil. Instead, oil companies rely on microfossils, which are often the preserved shells of single-celled organisms that live floating on the surface waters of the ocean. These creatures fulfil many of the criteria for good zone fossils: they are abundant, they float around so that their shells drop into all kinds of sediment, and they typically evolve fast. And if they are present by the thousands in sediment layers, a drill bit will certainly hit them, and a geologist will be able to gather them from the drill cores taken from the drill hole.
Oil companies may spend $100,000 a day to drill an oil well, so it is important to know precisely what age of rock the drill bit has reached. If it has gone beyond (below) the level at which the company expected to find oil, the hole is a "dry" one and it is time to stop. A trained paleontologist will be able to tell the age of the rock at the base of the hole within a matter of hours, so may be able to save the company more than her annual salary, perhaps several times a year!
Biostratigraphy denotes the use of fossils in stratigraphy. The process of evolution continually molds species of organisms, and new species evolve while old species become extinct. This gives a directionality to time, and since successive sets of species living in a region over time are never exactly alike, the flow of time is marked by the deposition of assemblages of fossils that can be differentiated by a skilled paleontologist if the preservation is good enough. By comparing sets of fossils over wide areas, one can gradually define a system of time-dependent guide fossils that will denote a particular time in relative terms.
Of course, not all organisms evolve rapidly, and not all organisms preserve well as fossils, and not all organisms are abundant enough in the fossil record to be used consistently as guide fossils. Some organisms are better guides to environment than to time. Thus, while biostratigraphy is at present the most widely used, most reliable, and cheapest way of telling the age of a rock formation, it is not easy and it is not free from ambiguity. There are other, powerful time markers: see section on absolute dating (below). Biostratigraphy is only one of a series of methods used in trying to refine the history of a set of rocks.
In the end, we have been able to divide up the rock record based on the fossils found in it. There is a hierarchy of names for time periods in Earth history: we will concentrate on the larger ones, Eons (e.g. the Phanerozoic), Eras (e.g. the Mesozoic), and Periods (e.g. the Devonian, the Jurassic). Stages and zones are finer subdivisions, down to intervals that may be as short as 100,000 years (more precise than radiometric age dates).
Here is the way these divisions came about. Early geologists in Western Europe collected fossils in the extensive chalk deposits that extend from England and France all the way to Russia. The fossils from these rocks (named Cretaceous, after the Latin word for chalk) were generally alike, and they were different from those collected in the rock sequences just below (named Jurassic, after the Jura Mountains of eastern France). So rather quickly, geologists began to set up a sequence of time periods, during which certain sedimentary sequences of rocks were laid down, each period with a recognizable set of fossils that reliably occurred in the rocks and defined the period. Geologists competed to discover and name these sets of rocks-with-their-own-fossils, and they were often named after the areas in which they were defined. So Devonian rocks were named after the county of Devon, in southern England; Permian rocks were named after the city of Perm, in Russia. Overall, the geological time scale has now been stabilized, codified, and refined to the point where a good fossil collection from any particular rock bed can usually be identified as belonging to a particular time. Often we can do much better than "period": periods are typically divided into stages, and each stage is divided into fossil zones.
Some fossils are more useful than others for telling time. Animals that evolve slowly are obviously not as useful for telling time as those that evolve quickly, and organisms that are widespread in geography and in habitat are more useful than those that are restricted in their range. Those that are abundant are more useful than those that are rare. It helps if the evolutionary changes that occur are easily recognised. So certain fossils (called zone fossils) have come to be used more than others. A set or assemblage of fossils is better than a single guide fossil.
Potassium (K) is an element that is a vital ingredient of the common mineral potassium feldspar (K-feldspar), which occurs in many kinds of igneous rocks. As a K-feldspar crystal forms, precipitating from a molten magma, it will contain a large proportion of "ordinary" K, and a small but fixed proportion of radioactive K-40. Over time, that K-40 breaks down to form atoms of argon-40, Ar-40, which is an inert gas that simply sits there within the crystal.
After geological time, that crystal may be collected by a geologist who wishes to know its age. By measuring the amount of Ar-40 in the crystal, one can calculate the time since crystallization. The principle is simple, though the techniques are often laborious. K-40 breaks down to form Ar-40 at a rate such that half of it has gone in about 1.3 billion years. If we measure the Ar-40 in a K-feldspar crystal, and find that half the original amount of K-40 has gone, then the age of the crystal is about 1300 Ma.
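The arithmetic behind this can be sketched in a few lines of Python. This is an illustrative snippet, not part of any geochemistry package: it uses the rounded 1.3-billion-year half-life quoted above and, like the simplified account in the text, assumes that every decayed K-40 atom became Ar-40 and that no argon has leaked from the crystal (the real method also corrects for the fraction of K-40 that decays to calcium-40 instead).

```python
import math

K40_HALF_LIFE_MA = 1300.0  # rounded K-40 half-life, in millions of years

def k_ar_age(k40_now, ar40_now):
    """Age of a crystal in Ma, from measured K-40 and radiogenic Ar-40.

    Simplifying assumptions: all decayed K-40 became Ar-40, and the
    crystal kept all of its argon (no reheating or leakage).
    """
    # Original K-40 = what remains plus what has decayed away.
    k40_original = k40_now + ar40_now
    # N(t) = N0 * (1/2)**(t / half_life)  =>  t = half_life * log2(N0 / N)
    return K40_HALF_LIFE_MA * math.log2(k40_original / k40_now)

# Half the original K-40 gone means exactly one half-life has passed:
print(k_ar_age(50.0, 50.0))  # → 1300.0 (Ma)
```

Note that the units of the two measurements cancel out: only the ratio of remaining K-40 to accumulated Ar-40 matters, which is part of what makes the method practical.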
One problem is that the rate of breakdown is so very slow that it is difficult to measure in a lab. That rate is what gives time to the clock, so we may be as much as 1% off because we can't easily determine how fast the clock runs. However, other radioactive clocks exist (using uranium, for example), and usually when a crystal is dated using two or three different methods, the results come out very close to one another. Therefore, though we know there are errors in the method, they are small if the geologist and the chemist did their work well. As a result, for example, we know that a huge extinction event at the end of the Permian period occurred at 250 ± 2 million years ago (250 Ma), and the end of the dinosaurs occurred at 65 Ma ± 1 m.y.
Perhaps the best-known method is the radiocarbon method, based on the breakdown of C-14, carbon-14. However, the half-life of C-14 is only about 5,730 years, so only a tiny fraction of the original C-14 is left after about 40,000 years. Therefore C-14 dating is valuable to archeologists and historians, but not to most geologists. There are other problems with the C-14 method, too.
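To see why roughly 40,000 years is a practical limit, here is a quick check (an illustrative snippet, assuming simple exponential decay with the 5,730-year half-life):

```python
# Fraction of the original C-14 left after t years of exponential decay.
def c14_fraction_remaining(t_years, half_life=5730.0):
    return 0.5 ** (t_years / half_life)

print(c14_fraction_remaining(40_000))  # → about 0.008: under 1% remains
```

With less than 1% of the original C-14 surviving, the signal approaches the limits of measurement, which is why the method fades out at roughly that age.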
Naturally, there are problems. Crystals may have been reheated, or even recrystallized, setting their radioactive clocks back to zero well after the time the rock originally formed. Chemical alteration of the rock may have removed some of the newly produced element, also giving a date younger than the true age.
Worst of all, however, most elements used for radioactive age dating are not used by living organisms to build shells or bones. Usually, we cannot date fossils directly. Instead, we have to measure the age of a lava flow or volcanic ash layer as close to the fossil-bearing bed as possible. Sometimes reliable dates can be obtained this way, but, for example, many of the arguments about hominid evolution in East Africa have resulted from problems in assessing age dates from lavas and ashes close to hominid remains. Absolute methods are also expensive and time-consuming, and an oil company drilling a well at tens of thousands of dollars a day may not like to wait several days to find out the age of the rock it is drilling through, when a paleontologist can do it faster.
By convention, absolute ages in millions of years are given in megayears (Ma), while time periods or intervals are expressed in millions of years (m.y.).
This mini-essay last revised January 9, 2003.