As production decline rates accelerate in low-permeability reservoirs such as those in U.S. shale plays, machine-learning neural networks can offer a simpler, faster and similarly accurate alternative to traditional decline curve analysis for production forecasting, analysts say.
A common model used to forecast decline curves—the Arps model—has its challenges, according to Alexandre Ramos-Peon, senior analyst of shale research for Rystad Energy. Chief among them are the model's fitted parameters, whose appropriate values depend on factors—such as varying well designs and completion techniques—that can skew production modeling.
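The Arps hyperbolic form is the standard expression of the model he describes; the sketch below shows why parameter choice matters (the values of qi, di and b here are illustrative, not from the webinar):

```python
def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic decline: q(t) = qi / (1 + b*di*t)**(1/b).

    qi: initial rate, di: initial decline rate, b: decline exponent.
    Picking good values for these is the hard part Ramos-Peon notes;
    b in particular shifts with rock quality and completion design.
    """
    return qi / (1.0 + b * di * t) ** (1.0 / b)

# Forecast rate after 12 months for a hypothetical well
q12 = arps_hyperbolic(t=12.0, qi=1000.0, di=0.1, b=0.9)
```

Small changes to b compound over time, which is why long-horizon forecasts are so sensitive to the fitted parameters.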
“You need to come up with good parameters if you want to have reasonable forecasts,” Ramos-Peon said during a recent webinar.
The topic was addressed as production from legacy wells continues to decline. Oil production from legacy wells in the Permian Basin, for example, was projected to drop by 236,697 barrels per day in December, according to the U.S. Energy Information Administration’s latest Drilling Productivity Report. Legacy gas production was also projected to fall by about 352 million cubic feet per day.
Rather than relying on the theoretical, traditional decline curve model, Ramos-Peon explained, the statistical approach works directly on accumulated empirical data, exploiting billions of data points drawn from observed production.
“We have a lot of data points and we assume that if the well has produced long enough it will behave as other wells have performed in the past. … A neural network is a way that allows us to do this in some way,” he said, adding that the technique only seems to work for wells with a sufficiently long production history of at least nine months.
He used wells in the Bakken as an example. Data from thousands of horizontal shale wells there is preprocessed to remove segments that do not correspond to natural flow, such as production bumps caused by refracturing or dips caused by well shut-ins, he said. Such “abnormal behavior” is stripped out during preprocessing, and production in the first few months before peak production is truncated. The data is then smoothed with a Savitzky-Golay filter, removing the “undesired noise” of production wiggling up or down at certain points.
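The smoothing step he describes is a standard call in SciPy; a minimal sketch, with a synthetic decline series standing in for real Bakken data (the noise level and window settings are assumptions):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic monthly production (bbl/d): a smooth decline plus noise,
# standing in for a real well's history.
rng = np.random.default_rng(0)
months = np.arange(36)
clean = 1000.0 / (1.0 + 0.08 * months) ** 1.2
noisy = clean + rng.normal(0.0, 25.0, size=clean.shape)

# Savitzky-Golay: fit a low-order polynomial in a sliding window.
# window_length must be odd and larger than polyorder.
smoothed = savgol_filter(noisy, window_length=7, polyorder=2)
```

The filter preserves the overall decline shape while damping the month-to-month wiggles, which is exactly the trade-off wanted before training.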
Using the historical production data, a supervised learning dataset is constructed by taking the ratio of each month’s average production to the well’s maximum monthly average production. “If I know production at month 9, 10 and 11, I should be able to guess production at month 12,” Ramos-Peon said.
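That windowing scheme can be sketched in a few lines; the function name and window size below are illustrative, not Rystad's actual pipeline:

```python
import numpy as np

def build_dataset(production, window=3):
    """Turn one well's monthly production into (features, target) pairs.

    Each sample is `window` consecutive months of production,
    normalized by the well's peak month, used to predict the next
    month — e.g. months 9, 10 and 11 predicting month 12.
    """
    ratios = np.asarray(production, dtype=float) / np.max(production)
    X, y = [], []
    for i in range(len(ratios) - window):
        X.append(ratios[i:i + window])
        y.append(ratios[i + window])
    return np.array(X), np.array(y)

# Toy declining well, peak in month 2
X, y = build_dataset([900, 1000, 800, 650, 560, 500], window=3)
```

Normalizing by the peak lets wells of very different sizes contribute to one training set on a common scale.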
In the artificial neural network machine learning algorithm, several data points are fed in to produce an expected output, with each input assigned a weight. “You do this several times with different weights. Each of these lines is called a neuron because it kind of mimics the behavior in the brain. Then you apply the so-called nonlinear activation function,” he said, adding that the objective is to train the computer to identify the weight values that produce an output closest to the desired one.
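The weighted-combination-plus-activation structure he describes can be written out directly; this is a generic one-hidden-layer forward pass with random (untrained) weights, purely to illustrate the mechanics:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # the nonlinear activation function

def forward(x, W1, b1, W2, b2):
    """Each row of W1 is one 'neuron': a weighted combination of the
    inputs, passed through the activation before the output layer."""
    hidden = relu(W1 @ x + b1)
    return W2 @ hidden + b2

rng = np.random.default_rng(1)
x = np.array([0.9, 0.8, 0.7])            # three months of normalized production
W1 = rng.normal(size=(8, 3)); b1 = np.zeros(8)
W2 = rng.normal(size=(1, 8)); b2 = np.zeros(1)
pred = forward(x, W1, b1, W2, b2)        # one number: next month's ratio
```

Training then means adjusting W1, b1, W2 and b2 so `pred` lands close to the observed next-month value across many wells.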
“You just throw all of this data to the computer and it will train itself somehow to guess the best value so the accuracy is as high as possible. … There is a mathematics theorem that tells you that any continuous function can be approximated to any degree of accuracy by such a device,” Ramos-Peon said. “Another feature of these devices is that they’re able to capture the functional relationships that are hidden in the data in a nonlinear way.”
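The train-and-forecast loop can be reproduced end to end with an off-the-shelf network; the architecture and synthetic decline data below are assumptions for illustration, not Rystad's actual configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic normalized decline curve standing in for training data
months = np.arange(48)
ratios = 1.0 / (1.0 + 0.08 * months) ** 1.2

# Sliding windows: three months of history predict the next month
window = 3
X = np.array([ratios[i:i + window] for i in range(len(ratios) - window)])
y = ratios[window:]

# Small network; lbfgs converges well on tiny datasets like this
model = MLPRegressor(hidden_layer_sizes=(16,), activation="relu",
                     solver="lbfgs", max_iter=2000, random_state=0)
model.fit(X, y)

# Forecast the month after the last observed window
pred = model.predict(ratios[-window:].reshape(1, -1))
```

In practice the training set would span many wells, so the network learns the family of decline shapes rather than one curve.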
Rystad has trained the network on 13,000 wells, with promising results. Ramos-Peon cited statistics on how the neural network model performed on a set of wells with more than 30 months of production. “We forecast within 15% of accuracy at the maximum time horizon,” he said.
The method often outperforms classical statistical forecasting methods, according to Ramos-Peon.
“This conceptually simpler approach might deliver comparable results and is actually less computationally intensive,” he added.
Velda Addison can be reached at firstname.lastname@example.org.