A joint venture in technology development between researchers at Texas A&M University and the Department of Energy has produced a new computer tool designed to help recover a portion of the estimated 218 billion barrels of bypassed oil remaining in mature domestic fields. For comparison, the nation's current proven reserves total 21 billion barrels.


Two companies have already adopted the technology. As a result of widespread interest in advancing it, A&M researchers now lead an ongoing industry research and development consortium funded by eight oil production and service companies and have won a grant from the National Science Foundation.


The groundbreaking Texas A&M technology was developed in a project managed by the Office of Fossil Energy's National Energy Technology Laboratory, which provided the research funding for this effort. Total investment in the three-year research project was US $890,000, with a government share of $630,000 and university cost-sharing of $160,000.


This approach adapts sophisticated reservoir modeling to the personal computer using "Generalized Travel Time Inversion" technology. The resulting software makes reservoir-modeling capabilities feasible for small domestic producers, saving time and money in predicting the location of bypassed oil and in planning its recovery.
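

At its core, generalized travel time inversion reduces the mismatch between a reservoir model and field data to a single time shift per well: the shift that best aligns the simulated production response with the observed one. The sketch below illustrates that idea; the function name, synthetic Gaussian responses, and simple grid search are illustrative assumptions, not the A&M implementation.

```python
import numpy as np

def generalized_travel_time(t, observed, simulated, max_shift, step):
    """Time shift that best aligns simulated and observed responses.

    Scans candidate shifts and keeps the one that maximizes the squared
    correlation coefficient between the observed data and the
    time-shifted simulated response. The grid search is purely
    illustrative; published formulations solve this more efficiently.
    """
    best_shift, best_r2 = 0.0, -1.0
    for s in np.arange(-max_shift, max_shift + step, step):
        shifted = np.interp(t - s, t, simulated)  # simulated response moved by s
        r = np.corrcoef(observed, shifted)[0, 1]
        if r * r > best_r2:
            best_r2, best_shift = r * r, s
    return best_shift

# Synthetic example: the simulated water cut breaks through 50 days late,
# so a shift of about -50 days best aligns it with the observations.
t = np.linspace(0.0, 1000.0, 201)
observed = np.exp(-((t - 420.0) / 90.0) ** 2)
simulated = np.exp(-((t - 470.0) / 90.0) ** 2)
print(generalized_travel_time(t, observed, simulated, 200.0, 5.0))  # -50.0
```

In a full field study, the squared shifts from all wells are combined into a single misfit that the inversion drives toward zero by updating reservoir properties such as permeability.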


More than two-thirds of all the oil discovered in America to date remains in the ground, economically unrecoverable with current technology. About 218 billion barrels of it, a volume approaching the proven reserves of Saudi Arabia, lies at depths of less than 5,000 ft (1,524 m). This bypassed oil represents a huge target for the roughly 7,000 independent producers active in the thousands of mature US fields, which cumulatively account for a significant share of the country's crude oil supply.


Much bypassed oil lies in difficult-to-access pockets. Predicting the location and size of these elusive, compartmentalized deposits is costly because it often requires complex computing capabilities. Many independent producers cannot commit the personnel or afford the expensive supercomputer time required to build and operate the models needed to find and produce these overlooked stores of oil.


The A&M research effort engineered a cost-effective way to streamline computer-generated reservoir models, providing significant savings in computation time and manpower.


Reservoir characterization identifies "unswept" regions of these mature fields that still contain high oil or gas saturation. In this process, geoscientists first employ computer models to develop an accurate picture, or characterization, of a productive oil reservoir. "History matching" is then used to calibrate the model by correlating its predictions of oil and gas production with the reservoir's actual production history.
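

History matching is, in essence, an inverse problem: adjust model parameters until simulated production reproduces the recorded history. The sketch below makes that concrete with a toy exponential-decline curve standing in for a reservoir simulator and SciPy's least-squares solver doing the calibration; the decline model and all names and numbers are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(params, t):
    # Toy "simulator": exponential production decline, rate = q0 * exp(-D * t).
    # A real history match would run a full reservoir simulation here.
    q0, decline = params
    return q0 * np.exp(-decline * t)

def residuals(params, t, observed):
    # History-matching misfit: simulated minus observed production rates.
    return simulate(params, t) - observed

t = np.arange(0.0, 36.0)                          # months on production
observed = simulate((1000.0, 0.05), t)            # synthetic "field history"
observed += np.random.default_rng(0).normal(0.0, 10.0, t.size)  # noise

fit = least_squares(residuals, x0=(800.0, 0.1), args=(t, observed),
                    bounds=([0.0, 0.0], [np.inf, 1.0]))
print(fit.x)  # calibrated (q0, D), close to the true (1000, 0.05)
```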


A key input to the history-matching process is data from tracer tests, in which traceable gases or liquids are injected into a well to determine the paths and velocities of fluids as they move through the reservoir. This information helps reservoir engineers calculate how much oil remains in the reservoir and determine the most efficient methods to sweep this residual oil from the reservoir.
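

A first-pass interpretation of such a test can be as simple as dividing the injector-producer spacing by the concentration-weighted mean arrival time of the produced tracer. The snippet below sketches that moment-based estimate under strong simplifying assumptions (uniform sampling, full tracer recovery, no dispersion correction); every name in it is illustrative.

```python
import numpy as np

def mean_tracer_velocity(times, concentrations, well_spacing):
    """Crude interwell velocity estimate from a produced-tracer curve.

    Uses the first temporal moment (the concentration-weighted mean
    arrival time) of the breakthrough curve. Assumes uniformly sampled
    times and complete tracer recovery.
    """
    t = np.asarray(times, dtype=float)
    c = np.asarray(concentrations, dtype=float)
    mean_arrival = np.sum(t * c) / np.sum(c)
    return well_spacing / mean_arrival

# Synthetic breakthrough curve peaking near day 200 for wells 400 m apart.
t = np.arange(0.0, 500.0, 5.0)
c = np.exp(-((t - 200.0) / 40.0) ** 2)
print(mean_tracer_velocity(t, c, 400.0))  # roughly 2 m per day
```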


In the Texas A&M project, researchers developed a novel, computerized method for rapidly interpreting field tracer tests. This innovation promises a cost-effective, time-saving solution for estimating how much oil remains in bypassed reservoir compartments. The new method integrates computer simulations with history-matching techniques, allowing scientists to design tracer tests and interpret the data using practical PC-based software, a process much faster than conventional history matching. The cost and time savings, coupled with the streamlined model and accessible PC-based tools, make the technology feasible for small independent producers.
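

Taken together, the workflow is a loop: run the simulation, measure the travel-time shift against field data, update the model, and repeat until the shift vanishes. The sketch below mimics that loop with a one-parameter stand-in for a simulator and a peak-arrival shift in place of the full correlation-based travel time; the response model and update rule are invented for illustration and are not the project's algorithm.

```python
import numpy as np

t = np.linspace(0.0, 1000.0, 501)
observed = np.exp(-((t - 420.0) / 80.0) ** 2)   # field breakthrough data

def run_simulation(perm_mult):
    # Stand-in for a reservoir simulator: in this invented response model,
    # higher permeability means proportionally earlier breakthrough.
    arrival_time = 500.0 / perm_mult
    return np.exp(-((t - arrival_time) / 80.0) ** 2)

perm_mult = 1.0
for iteration in range(20):
    simulated = run_simulation(perm_mult)
    # Travel-time shift from peak arrivals, a crude substitute for the
    # correlation-based generalized travel time sketched earlier.
    shift = t[np.argmax(simulated)] - t[np.argmax(observed)]
    if abs(shift) < 2.0:            # converged to the grid's time resolution
        break
    # Move the simulated arrival toward the observed one and back out the
    # permeability multiplier that produces that arrival time.
    perm_mult = 500.0 / (500.0 / perm_mult - shift)

print(round(perm_mult, 3))  # about 1.19: the model needed ~19% more permeability
```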