In the summer of 1921, a small team of physicists and geologists (William P. Haseman, J. Clarence Karcher, Irving Perrine and Daniel W. Ohern) performed a historic experiment near the Vines Branch area in south-central Oklahoma. Using a dynamite charge as a seismic source and a special instrument called a seismograph (Figure 1), the team recorded seismic waves that had traveled through the subsurface of the earth. Analysis of the recorded data showed that seismic reflections from a boundary between two underground rock layers had been detected. Further analysis of the data produced an image of the subsurface — called a seismic reflection profile (Figure 2a) — that agreed with a known geologic feature. That result is widely regarded as the first proof that an accurate image of the earth’s subsurface could be made using reflected seismic waves.

Figure 1. Schematic diagram of the seismograph used to detect and record seismic reflections in the 1921 Vines Branch experiment. “TRANS” refers to amplifier circuits, and the seismic sensors are labeled “microphones” (Schriever, GEOPHYSICS 1952).

The Vines Branch experiment was motivated by the possibility of using seismic reflections as a tool in exploration for oil and gas. With the experiment’s success, one would have expected immediate financial support for follow-up work. Alas, interest in the new method soon waned because of a coincidental, precipitous, but short-lived drop in the price of oil. Reports vary, but apparently the price fell to somewhere between 5 and 15 cents per barrel. It wasn’t until about 1929, after oil prices had recovered and further experiments were done, that seismic reflections became an accepted method of prospecting for oil.

Business boomed once reflection seismology was a proven technique for finding hydrocarbons. By 1934, Geophysical Service Inc. (GSI), one of the pioneer seismic reflection oil service companies, had more than 30 seismic crews exploring for oil and gas. Today, reflection seismology is a thriving business. Seismic reflection data are acquired worldwide in both land and marine environments. The acquisition and processing of seismic reflection data generate billions of dollars in revenue for modern oil service companies.

The Vines Branch experiment, because of its simplicity, can be used to describe the basic concepts of reflection seismology, and the contrast between this first seismic reflection profile and a present-day profile illustrates the vast amount of technology that brought the science from one point to the other.

Fundamental challenges
Reflection seismology is based on a simple, familiar phenomenon: echoes. When a compressional seismic wave travels through a material — whether solid, liquid or gas — part of the wave reflects wherever a change in acoustic impedance occurs. Thus, we can think of the Vines Branch experiment as creating and measuring echoes from below ground. The experiment had five main components: a seismic source, devices called receivers to detect the reflections, a recording system to make a permanent record of the reflections, a plan specifying the placement of sources and receivers, and an analysis of the data to produce the final result.

Figure 2. Sketch of the 1921 Vines Branch experiment. (a) The very first seismic reflection profile identified a dipping reflecting boundary between two rock layers, the Sylvan shale and the Viola limestone. (b) A plan view of the Vines Branch experiment shows the positions of the shots and receivers and an overhead view of the reflection raypath for the fifth shot in the survey. (Image courtesy of WesternGeco)
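The strength of an echo depends on the size of the impedance contrast across the boundary. The following is a minimal sketch of that relationship; the rock properties are generic illustrative values, not measurements from Vines Branch.

```python
# Normal-incidence reflection coefficient at a boundary between two rock
# layers. The impedance values below are generic illustrations, not
# measurements from the Vines Branch experiment.

def reflection_coefficient(z1, z2):
    """Fraction of incident wave amplitude reflected at a boundary
    between layers with acoustic impedances z1 (above) and z2 (below)."""
    return (z2 - z1) / (z2 + z1)

# Acoustic impedance = density (kg/m^3) * velocity (m/s).
shale = 2400.0 * 3000.0      # a hypothetical shale layer
limestone = 2700.0 * 6000.0  # a faster, denser limestone beneath it

print(f"R = {reflection_coefficient(shale, limestone):.2f}")  # R = 0.38
# A coefficient of 0.38 means a strong echo; boundaries like shale over
# limestone are exactly what a reflection survey detects.
```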

Early practitioners of reflection seismology soon faced three tough problems: obtaining reflection signals of sufficient quality, constructing accurate subsurface images and determining the optimal geometric pattern for the shots and receivers. These obstacles have driven most of the major technological innovations made in the seismic reflection method since the Vines Branch experiment.

The experimenters analyzed the vibrations — the electric impulses from each receiver — by identifying the arrivals of the reflected impulses in the seismic traces and using the timing signal to determine the elapsed time between these arrivals and the shot firing times. These time measurements were then transformed into distances using the propagation rate of the seismic waves in the upper rock layer, which had been determined in an earlier experiment. Using the known source and receiver positions and the measured distances traversed by the reflections, the experimenters constructed the seismic profile of the reflecting horizon (Figure 2a).
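A minimal sketch of that arithmetic, assuming a single constant-velocity layer over a flat reflector; the numbers below are illustrative, not the actual 1921 measurements.

```python
import math

def reflector_depth(t_twoway, velocity, offset):
    """Depth of a flat reflector below the shot/receiver midpoint.

    t_twoway : two-way traveltime of the reflection (s)
    velocity : wave speed in the upper rock layer (ft/s)
    offset   : shot-to-receiver distance (ft)
    """
    half_path = velocity * t_twoway / 2.0  # one-way path length
    half_offset = offset / 2.0
    return math.sqrt(half_path ** 2 - half_offset ** 2)

# Hypothetical example: a 0.25 s echo through 6000 ft/s rock recorded
# 900 ft from the shot gives a reflector about 600 ft down.
print(f"depth = {reflector_depth(0.25, 6000.0, 900.0):.0f} ft")
```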

Seismic sources and receivers
The seismic source, or shot, was a small dynamite charge detonated in a shallow hole. Figure 2b shows a plan view of the shot and receiver positions for the Vines Branch experiment. The shots were set off one by one. For each shot, two receivers were placed in the ground 900 ft (about 275 m) away along a line perpendicular to the shot line.

At the time, explosives were the only sources capable of yielding signals strong enough to produce useful seismic shot records. On land, explosives were compact, mobile and (theoretically, at least) simple to handle and store; they also created a nearly ideal impulsive seismic wavelet, which enhanced seismic resolution. Dynamite was later largely replaced with VibroSeis equipment (Figure 3).

The next major components used in the Vines Branch survey were seismic receivers (also called sensors, detectors or seismometers), which are transducers that convert mechanical energy in a seismic wave into an electrical signal. The transducers used on land and in water are called geophones and hydrophones, respectively. The Vines Branch experimenters tested several types of transducers. Interestingly, two of them — a movable coil of wire suspended in a magnetic field and a stack of piezoelectric quartz crystals — are, in much improved forms, the most widely used seismometers today.

Figure 3. A comparison between records from an early vibrator and dynamite shots. Both datasets were acquired in the same area of southeast Montana. (a) shows the correlated vibrator records; (b) shows the dynamite shot records. Note the improved signal-to-noise in the vibrator records indicated by the clearly visible reflections annotated A, B and C. (Image courtesy of ConocoPhillips)

Vintage 1930s electromagnetic geophones were large, heavy devices. The only permanently magnetic materials available then were weak compared to those of today. Thus, large magnets and coils were needed to obtain the necessary sensitivity. Damping was accomplished mechanically by suspending the moving parts in viscous oil. By 1965, improved materials and the use of electrical damping reduced the size and weight of geophones dramatically.

Seismic survey design
For about the first 30 years of seismic exploration, reflection surveys were single-fold. That is, each subsurface point was probed by just one geometric arrangement of a shot and receiver. Many subsurface points are required to map a subsurface structure; this was accomplished by recording each shot simultaneously at a number of geophone positions and then relocating the source and geophones to new positions for the next shot. Initially, the number of traces per shot record was fewer than 10, but by the late 1940s that had grown to 20 traces or more. The surveys, however, were still single-fold.

By the late 1930s geophone arrays with 3-6 elements were in use. By the late 1940s geophone arrays with 100 or more elements were used in some noisy areas. Shot arrays were used along with geophone arrays for especially intractable noise problems.

In 1950, Harry Mayne of Petty Geophysical Engineering patented the idea that each subsurface reflecting point could be probed by a sequence of shot/receiver pairs spanning a continuous range of source-to-receiver offsets. Mayne called the method “common reflection point” (CRP) acquisition, but it is also known as common midpoint (CMP) and common depth point (CDP) acquisition. The number of shot/receiver pairs that probe the same subsurface point is called the fold. At the time CMP acquisition was invented, there was no practical way of using it: it involved too many data from both an acquisition and a processing point of view. Later developments, however, would eventually revolutionize the way seismic surveys are acquired and processed using Mayne’s CMP method.
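A minimal sketch of the bookkeeping behind the method; the roll-along geometry below is hypothetical and serves only to show how midpoints are binned and fold is counted.

```python
from collections import Counter

def cmp_fold(pairs, bin_size):
    """Map each (shot_x, receiver_x) pair on a 2-D line to a midpoint
    bin and count the traces per bin; that count is the fold."""
    fold = Counter()
    for shot_x, recv_x in pairs:
        midpoint = (shot_x + recv_x) / 2.0
        fold[int(midpoint // bin_size)] += 1
    return fold

# A toy roll-along survey: each shot is recorded at the next four
# receiver stations, then the whole spread moves up one station.
station = 100.0  # ft between stations
pairs = [(s * station, (s + g) * station)
         for s in range(8)       # 8 shots
         for g in range(1, 5)]   # 4 receiver stations past each shot
print(sorted(cmp_fold(pairs, bin_size=station / 2.0).items()))
# Interior bins show fold 2: two different shot/receiver pairs probed
# the same midpoint, which is exactly Mayne's idea.
```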

Seismic data recorders
For about the first 30 years of seismic exploration, data recording systems were purely analog. That is, the electrical output from a sensor or an array of sensors was treated as a continuous signal. The signals passed through analog amplifiers and filters and were recorded as continuous traces on media such as film and paper. To use a modern term, these systems were WYSIWYG (what you see is what you get): once a trace was recorded on film, there was no mechanism for changing the amplification or removing noise. Noise problems were addressed by applying bandpass analog filters to the signals before they were recorded. Optimum filter parameters were determined in the field by analyzing a sequence of test shots recorded with different filter settings.
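Today the equivalent job is done digitally after recording. As a rough stand-in for those analog field filters, here is a sketch of a digital Butterworth band-pass; the corner frequencies and the synthetic trace are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                  # samples per second
t = np.arange(0.0, 2.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 30.0 * t)        # a "reflection" near 30 Hz
trace += 0.8 * np.sin(2 * np.pi * 4.0 * t)  # low-frequency ground roll
trace += 0.3 * np.random.default_rng(0).standard_normal(t.size)  # noise

# Pass the 10-60 Hz band where the reflection energy lives; an analog
# field filter did the same job before the trace hit the film.
b, a = butter(4, [10.0, 60.0], btype="band", fs=fs)
filtered = filtfilt(b, a, trace)            # zero-phase band-pass
```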

In the early 1960s, the introduction of digital technology drastically changed the nature of seismic instrumentation. In digital recording, signals are no longer continuous. Instead, they are discretely sampled at predetermined and fixed time intervals.
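A brief sketch of what fixed-interval sampling implies; the 2 ms interval is a common modern choice, assumed here for illustration.

```python
import numpy as np

dt = 0.002                   # sample interval: one sample every 2 ms
nyquist = 1.0 / (2.0 * dt)   # highest recoverable frequency, 250 Hz
t = np.arange(0.0, 1.0, dt)  # 500 discrete sample times for a 1 s trace

# A decaying 40 Hz wavelet, stored only at the discrete sample times.
# Energy above the Nyquist frequency would alias, so it must be
# filtered out before digitization.
trace = np.exp(-t / 0.3) * np.sin(2 * np.pi * 40.0 * t)
print(f"{t.size} samples, Nyquist frequency = {nyquist:.0f} Hz")
```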

The final component in producing a seismic reflection image is processing the recorded field data into an accurate picture of the earth’s subsurface. In the earliest days of reflection seismology, data processing consisted mostly of three key steps: static corrections, velocity determination and image construction. These were accomplished by a “computer,” which was the job title of the individual on a seismic crew who performed these functions using pencil and paper. The measured traveltimes for reflection events were adjusted for weathering and elevation. Next, velocity profiles were constructed based either on direct well measurements or on the observed changes in the arrival times of reflections as the distance between sources and receivers was varied. Time-versus-depth tables were built from the velocity information and used to convert reflection times to depth. Finally, images were constructed by correlating depth horizons over the area being surveyed. Ambiguous correlations of depth horizons caused by spatial sparseness of the data were resolved by recording and analyzing additional shots.
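A minimal sketch of that time-to-depth bookkeeping; the layer velocities are hypothetical stand-ins for the well or moveout measurements described above.

```python
# (two-way time at top of layer [s], interval velocity [ft/s]);
# hypothetical values, not from any actual survey
layers = [(0.0, 6000.0), (0.5, 8000.0), (1.0, 12000.0)]

def time_to_depth(t_twoway):
    """Convert a two-way reflection time to depth by summing the
    one-way distance traveled in each layer (hence the / 2)."""
    depth = 0.0
    for i, (t_top, v) in enumerate(layers):
        t_base = layers[i + 1][0] if i + 1 < len(layers) else float("inf")
        dt = min(t_twoway, t_base) - t_top
        if dt <= 0.0:
            break
        depth += v * dt / 2.0
    return depth

# The crew "computer" would tabulate this once, then read depths off
# the table for every picked reflection time.
for t in (0.2, 0.6, 1.2):
    print(f"t = {t:.1f} s  ->  depth = {time_to_depth(t):.0f} ft")
```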

Geophysicists of that era recognized the shortcomings of their data processing methods. Their problem was that they just didn’t have the tools to do much better. During the 1950s, clever enhancements including inverse filtering using analog computers and migration using special drafting tools were made to some seismic data processing functions. However, seismic data processing as we know it today began in the 1960s when transistors and solid-state integrated circuits made digital computers affordable.

Reflection seismologists have had — and probably always will have — an insatiable hunger for computer processing power. They were quick to understand and use the advantages of array processors, faster CPUs, denser recording media, larger memories, parallel architectures, more flexible and reliable operating systems, and Linux-based clusters of microprocessor nodes. If seismic data processing has a search for a holy grail, then surely it must be the search for the perfect imaging method. The search started at the very beginning of seismic exploration, and continues unabated today.

This article is a condensed version of “A Historical Reflection on Reflections” and is published by permission of SEG. The article originally appeared in The Leading Edge, 2005, October supplement, pages S46–S70.