Seismic Reservoir Characterization Laboratory
Reservoir
characterization is a crucial prerequisite to predict the economic potential of
a hydrocarbon reservoir or to examine different production scenarios.
Unfortunately, it is impossible to determine the exact reservoir properties at
the required scale. Seismic data, the most abundant kind, have a resolution of
around 100 ft. Wells resolve the reservoir down to the centimeter scale, but
only vertically and at a few locations. Instead, one resorts to statistical
methods to fill in the small-scale variations in the reservoir. The Seismic Reservoir
Characterization Laboratory (SRCL) is a new research program in the Department
of Geological Sciences at Virginia Tech. Its objective is to develop methods to
determine the parameters for the statistical reservoir models from seismic data.
Dr. M. G. Imhof
Derring Hall 4044 (0420)
Dept. of Geological Sciences
Virginia Tech
Blacksburg, VA 24061-0420
phone: 540-231-6004
fax: 540-231-3386
email: mgi@vt.edu
Reservoir
characterization is an essential step in exploration and development of a new
petroleum or natural gas reservoir. Typically, the process starts with a
thorough analysis of a potential region with respect to geology, probability for
finding hydrocarbons, and economic factors such as risk or investment needed.
Once it has been decided to explore for resources in a given region, a seismic
exploration survey is acquired in which controlled sound sources set off
acoustic waves. These waves penetrate the earth and propagate, reflect, and
refract until they return to the surface, where they are recorded by geophones
or hydrophones. Figure 1 presents a
schematic of the seismic method.
Figure 1: Schematic of the seismic method. A truck
generates a sound wave which penetrates the earth. At boundaries between
different layers, a portion of the sound wave is reflected back to the surface
where it is recorded with geophones.
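The strength of the reflection sketched in Figure 1 depends on the acoustic impedance contrast across the layer boundary. As a minimal illustration (the rock densities and velocities below are assumed, not taken from the text):

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence reflection coefficient at a boundary between
    two layers: R = (Z2 - Z1) / (Z2 + Z1), where Z = rho * v is the
    acoustic impedance of each layer."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# illustrative contrast: shale (2.2 g/cc, 2500 m/s) over sand (2.4 g/cc, 3000 m/s)
r = reflection_coefficient(2.2, 2500, 2.4, 3000)
```

Only a fraction of the incident energy returns from each boundary; the rest continues downward and reflects off deeper layers, which is what makes imaging the whole subsurface column possible.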

Figure 2: Seismic datacube with interpretation
overlaid. Shown in green is a layer that has been displaced by the red and yellow
faults. The pink layer exemplifies the 3D nature of geology, as demonstrated by
the top surface cutting through some topographically high spots.

After
a great deal of data processing, an accurate image of the subsurface can be
obtained. An example is shown in Figure 2. This
subsurface image is carefully examined for potential accumulation points of
hydrocarbons. Indicators include bright amplitudes and structural highs.
Currently, however, there exists no surefire method to locate the accumulation
points. To prove the presence of hydrocarbons, a well needs to be drilled. Only
one in three wells encounters hydrocarbons! Moreover, finding hydrocarbons does
not mean that they can be produced economically. First, the reservoir might not
be porous enough to contain large amounts of fluid. Second, even where there are
enough pores, they are often only partially filled with hydrocarbons. Lastly, if
the pores are not connected with each other, the hydrocarbons cannot be produced
even if there is a large amount of pore space filled to the brim with oil! These
three constraints are quantified by porosity, hydrocarbon saturation, and
permeability. If they were uniform throughout the reservoir, measurements made
in the exploratory well would be enough to determine the economic feasibility of
the reservoir. Unfortunately, they vary greatly over the reservoir extension.
The process of determining these spatially varying parameters is known as
reservoir characterization.
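Porosity and hydrocarbon saturation enter directly into the standard volumetric reserves estimate; a sketch with made-up field numbers:

```python
def oil_in_place(area_acres, thickness_ft, porosity, oil_saturation, bo=1.2):
    """Volumetric original-oil-in-place (OOIP) estimate in stock-tank
    barrels: 7758 * A * h * phi * So / Bo. The constant 7758 converts
    acre-feet to barrels; Bo is the oil formation volume factor (an
    assumed value here)."""
    return 7758.0 * area_acres * thickness_ft * porosity * oil_saturation / bo

# hypothetical reservoir: 1000 acres, 50 ft thick, 20% porosity, 70% oil-filled
ooip = oil_in_place(1000, 50, 0.20, 0.70)
```

Note that permeability does not appear in this volume estimate; it controls instead how much of that volume can actually flow to a well, which is why all three quantities matter for economic feasibility.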
Reservoir characterization is a critical aspect of reservoir development and
future production management. Knowing the details of the reservoir allows
simulation of different scenarios. The problem, however, is to define an
accurate and suitable reservoir model including small-scale heterogeneity.
Currently, the most abundant data about the reservoir, i.e. the seismic data, do
not have sufficient resolution. The typical resolution of seismic data is on the
order of 100 ft or more. Figure 3 illustrates
the seismic resolution issue very nicely.
Figure 3: Photograph of one of the cliffs exposing
the Roda sandstone in Spain. The geologic interpretation overlain by a typical
seismic sound pulse is displayed on the right-hand side to illustrate the
differences between seismic and geological resolution.
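The 100 ft figure quoted above follows from the common quarter-wavelength (tuning) criterion for vertical resolution; a quick check with plausible, assumed numbers:

```python
def vertical_resolution(velocity_ft_s, frequency_hz):
    """Quarter-wavelength (tuning) limit on vertical seismic resolution:
    lambda / 4 = v / (4 * f)."""
    return velocity_ft_s / (4.0 * frequency_hz)

# e.g. 10,000 ft/s sediments and a 25 Hz dominant frequency
res = vertical_resolution(10_000, 25)   # -> 100.0 ft
```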

Boreholes yield an excellent
description of the vertical heterogeneity at scales ranging from centimeters to
hundreds of meters, albeit only locally and at a very high price. The lateral
component, however, can rarely be derived from well data alone because there are
seldom enough wells. This scarce information is usually not sufficient for
building a deterministic reservoir model including all the small-scale
variations (Figure 4).
Figure 4: Although the wells yield a perfect
measurement of the heterogeneity, they only capture the vertical components at
a few locations. In between wells, the reservoir model needs to be filled in
based on statistical and geological data.

A
viable alternative is to construct a deterministic framework using seismic data
and well data. In a second step, small heterogeneities are filled in using
statistical methods. For any statistical model, many different realizations can
be generated which all exhibit the same statistical properties. This trait is
actually an advantage, since the uncertainties of the different scenarios can be
studied by generating a large number of equally possible reservoir models.
Obviously, their validity depends highly on the geological realism. Three
different models of stochastic small-scale reservoir heterogeneity appear
suitable to describe the Zuata field:
 1. variogram-based,
 2. object-based, and
 3. geologic process response.
The variogram relates the distance between any two locations to the statistical
probability that the facies at these points differ. Each point of the reservoir
is assigned a facies such that the true facies is honored at known locations,
e.g. along a borehole, while the statistics of the overall reservoir model
approximate the chosen variogram. Parameters to be defined include, among
others, the average distances between facies changes, which typically differ in
the vertical and horizontal directions.
Figure 5: Two possible realizations of a reservoir
sharing the same statistics. Warm colors represent clean sands, while cooler
ones correspond to clays.
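The distance-dependent probability that facies differ can be estimated directly from a facies log. Below is a minimal sketch of an empirical indicator variogram; the synthetic sand/shale log is purely illustrative:

```python
import numpy as np

def indicator_variogram(facies, max_lag):
    """Empirical indicator semivariogram of a 1D facies log:
    gamma(h) = 0.5 * P(facies differ at separation h)."""
    return np.array([0.5 * np.mean(facies[:-h] != facies[h:])
                     for h in range(1, max_lag + 1)])

# synthetic sand (1) / shale (0) log with beds 10 samples thick
rng = np.random.default_rng(0)
log = np.repeat(rng.integers(0, 2, size=50), 10)
gamma = indicator_variogram(log, 30)
```

The lag at which the curve levels off reflects the average bed thickness, which is exactly the kind of parameter the statistical model requires.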

Using the object-based approach, the reservoir model is generated from objects
that have some genetic significance rather than being built up one elementary
pixel or voxel at a time as in the variogram approach. For each lithofacies, a
geometric object is selected, e.g. a half-ellipsoid representing a river
channel. A realization of the reservoir is generated by randomly placing these
objects in the model until the prescribed overall proportions for the different
lithofacies are attained. Parameters to be defined include geometric factors,
proportions, or how different objects are positioned with respect to others.
Figure 6: An object-based reservoir model with
three channels.
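The random-placement loop described above can be sketched in a few lines; the channel geometry, proportions, and grid size below are hypothetical:

```python
import numpy as np

def object_model(nx, nz, sand_fraction, rng):
    """Drop half-ellipse 'channel' cross-sections into a 2D shale grid
    until the prescribed sand proportion is reached."""
    grid = np.zeros((nz, nx), dtype=int)      # 0 = shale background
    x = np.arange(nx)[None, :]
    z = np.arange(nz)[:, None]
    while grid.mean() < sand_fraction:
        cx = rng.uniform(0, nx)               # channel centre (x)
        cz = rng.uniform(0, nz)               # channel top depth
        half_width = rng.uniform(5, 15)
        thickness = rng.uniform(2, 5)
        ellipse = ((x - cx) / half_width) ** 2 + ((z - cz) / thickness) ** 2 <= 1
        grid[ellipse & (z >= cz)] = 1         # lower half-ellipse = channel fill
    return grid

model = object_model(nx=200, nz=50, sand_fraction=0.3, rng=np.random.default_rng(1))
```

Realizations differ with the random seed while honoring the same proportions, which is what enables uncertainty studies over many equally possible models.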

The
geologic process-response approach simulates the sedimentological processes that
formed the reservoir over geologic time. Simulated processes include discharge
of clastic sediments, transport, deposition, erosion, compaction, and
subsidence. Because there are so many of them, the input parameters are often
difficult to specify, and many are related to each other in complex ways.
Figure 7: Geological process model of a progressing
delta.

Figure 8: Geostatistical distribution of ancient
river channels, filled in by a geostatistical simulation of small-scale
heterogeneity: a succession of transgression, regression, and an alluvial
channel system.
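A common simplification in such process models treats sediment transport as diffusion of the topographic profile plus a sediment supply term; a toy sketch with made-up coefficients:

```python
import numpy as np

def evolve_profile(h, kappa=0.25, supply=0.01, steps=500):
    """Toy process-response model: the elevation profile h(x) evolves by
    dh/dt = kappa * d2h/dx2 + supply (transport as topographic diffusion
    plus a uniform sediment supply); explicit scheme with dx = dt = 1,
    stable for kappa <= 0.5."""
    h = h.astype(float)
    for _ in range(steps):
        lap = np.zeros_like(h)
        lap[1:-1] = h[:-2] - 2.0 * h[1:-1] + h[2:]
        h += kappa * lap + supply
    return h

# initial shelf-to-basin ramp from 100 ft down to 0 ft
x = np.linspace(0.0, 1.0, 101)
final = evolve_profile(100.0 * (1.0 - x))
```

Even in this toy form, the free parameters (diffusivity, supply rate, duration) interact, hinting at why calibrating full process-response models is difficult.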

Clearly, all three approaches can be used in conjunction. As illustrated, all
these descriptions of reservoir heterogeneity contain free parameters which need
to be determined. In many instances, such quantitative information is provided
by measurements performed on outcropping formations considered geologically
analogous to the subsurface case study. For example, the distribution of
channels within a reservoir and their size may be derived from measurements
performed on a deltaic outcrop for which the depositional environment and the
sequence stratigraphy context are analogous to the reservoir. Other sources for
these parameters include analogous mature reservoirs with a dense well
spacing, additional shallow boreholes, pressure and production tests, or
horizontal wells. The most abundant data, however, stem from reflection seismic
surveys. Ordinarily, seismic data is only used to delineate deterministic
features or to constrain the stochastic modeling procedure. Instead, we propose
to use seismic data to parametrize the stochastic reservoir model. Obviously,
the validity of forecasts derived from such an approach depends highly on the
geological realism of the statistical models. The first problem is the choice of
the statistical model. The second problem is to choose a set of suitable
parameters. Typically, the correct kind of statistical model can be determined
by interpreting well and seismic data. Its parameters are often found by
studying outcropping formations considered geologically analogous to the
reservoir, or from analogous mature reservoirs with sufficiently dense well spacing.
The key observation is that the parameters are usually not determined in situ
from the reservoir of interest but from some analogous one. As previously
stated, there are large quantities of in situ data available: the seismic data.
Although they do not have enough resolution to resolve small features, they can be
used to determine stochastic model parameters. The Seismic Reservoir
Characterization Laboratory (SRCL) is being established to investigate how these
parameters can be inferred from the seismic data, i.e., how can one determine
the parameters needed to generate the statistical reservoir models shown in
Figure 5 from
seismic data as shown in Figure 2.
The Seismic
Reservoir Characterization Laboratory (SRCL) is being established in the
Geological Sciences Department at Virginia Tech to examine how seismic data
relates to reservoir heterogeneities at scales below the typical resolution of
100 ft. Currently, the laboratory consists of Dr. Imhof and a graduate student.
Two additional graduate students are expected to join within a year. The
laboratory will be funded from both industry and government organizations in the
form of research grants, sponsored research, and in the future, an industry
consortium.
Figure 9: Schematic of the seismic reservoir
characterization project: field or synthetic data are preconditioned by
seismic data processing using industry-standard software available to SRCL.
The development of the heterogeneity-parameter estimation software and its
underlying theory is the objective of this initiative. Wave-equation simulators
to generate synthetic test and research data are readily available, e.g.,
developed by our group or contained in the seismic processing software.

SRCL
owns or has access to a range of UNIX workstations and other peripherals:
 Sun Ultra 30s and 60s
 Sun SparcStations
 SGI Indigos
 SGI Octane
 OpenStep PCs (UNIX)
 HP 755cm, Versatec V36, and OYO GS 612 plotters
 Printers
SRCL uses a number of commercial and in-house software packages.
Among others, SRCL has the
following datasets available for research:
 3D poststack datacube: Green Canyon protraction area, Gulf of Mexico.
 3D poststack datacube: South Timbalier, Gulf of Mexico.
 3D poststack datacube and well data: South Marshal Island Field, Gulf of Mexico.
 3D poststack datacube: Neptune, Gulf of Mexico.
 3D pre- and poststack data: Niger Delta, West Africa.
Matthias Georg Imhof
1999-12-10