Against the backdrop of the oil industry’s historical supply/demand fluctuations, every upswing typically brings new challenges as well as open-ended opportunities. As shareholders of super-major oil companies evaluate the balance between production and reserves, rising consumption continues to pressure these companies to keep pace. As a result, seismic data processors are finding themselves with an abundance of work.


As companies scramble to identify and prove out untapped reserves, geophysical service companies are interpreting new data and re-interpreting old data at a fast-forward pace.


New geophysical environment


What primarily separates this cycle from previous ones is a technological leap: the proliferation of geophysical electronic data files, which did not occur until the last decade. Further, data re-interpretation is being driven by the development of customized, optimized software; as might be said in field jargon, “Now you can read more, see more and hear more.” Data has become a real growth industry.


As a result, many leases are being re-evaluated, since today’s technology is more than 1,000 times more granular than that of even 5 or 10 years ago. Large volumes of seismic data are therefore being processed for the first time and re-processed in an effort to increase reserves. This is precisely where a major problem crops up: seismic shops can run only as many jobs as their IT infrastructure has capacity for. Geologists may therefore be unable to obtain critical seismic information on time, or at all. To an oil producer, unprocessed data can mean a delay in the recognition of reserves.


Evolution of COD


It became clear that a solution to the data-capacity question was needed for data processing to become predictable and continuous; hence computing-on-demand (COD). Its advent allowed companies to perform reservoir modeling and seismic processing and to run simulations during times of high demand related to exploration or other business-driven activities. But COD did not enter the marketplace without some bumps in the road.


Oil companies were faced with building out infrastructure to process the abundance of seismic data inundating their geological teams. As more companies needed secure, reliable, high-performance computing, providing a core infrastructure became a viable business. Running the types of applications and jobs found in the geophysical field requires a critical mass of infrastructure, and most companies did not have it.


A second issue involved the provisioning, or actual set-up. One of the challenges in the Linux world was that software and performance had been optimized for specific types of hardware: companies had created customized software to run only on particular equipment. Within each company’s own controlled and standardized environment, that posed little problem. But when a third party was brought in, other issues arose, such as ensuring that the appropriate operating system was in place and that the software could actually run on that particular platform. While a major cost component for COD was incurred internally for equipment procurement, an even larger cost centered on the operation and hosting of the equipment.
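To make that compatibility problem concrete, the sketch below, in Python, shows the kind of pre-flight check a hosting provider might run before scheduling a customer’s job on a node. It is an illustration only: the requirement values and function names are hypothetical, not drawn from any vendor’s provisioning system.

import platform

# Hypothetical requirements a customer's seismic application might declare;
# a real provisioning system would carry a much richer manifest.
JOB_REQUIREMENTS = {
    "system": "Linux",      # application built only for Linux
    "machine": "x86_64",    # compiled for one specific architecture
    "min_kernel": (2, 6),   # needs at least this kernel series
}

def kernel_version():
    """Parse the running kernel release, e.g. '2.6.18-smp' -> (2, 6)."""
    parts = platform.release().split(".")
    return tuple(int(p.split("-")[0]) for p in parts[:2])

def node_can_run_job(req):
    """Return True only if this node satisfies the job's declared needs."""
    return (platform.system() == req["system"]
            and platform.machine() == req["machine"]
            and kernel_version() >= req["min_kernel"])

if node_can_run_job(JOB_REQUIREMENTS):
    print("Node is compatible; the job may be scheduled here.")
else:
    print("Node rejected: operating system, architecture or kernel mismatch.")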


Revolutionized model for COD


A new service offering needed to be developed to complement and enhance the existing COD offering. Its purpose was multifold: to help with customization and provisioning so that a job could be quickly installed, brought up and running, and then be “cleaned off” and removed in a secure manner. By doing that expeditiously, another customer or application could come in and take advantage of those cycles.
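As an illustration of that install, run, and secure-teardown lifecycle, here is a minimal Python sketch. The function names and the single-pass overwrite are hypothetical simplifications; a real COD operation would use audited wiping procedures rather than this toy loop.

import os
import shutil
import tempfile

def provision(job_name, payload):
    """Stage a customer job into an isolated scratch area (hypothetical)."""
    workdir = tempfile.mkdtemp(prefix=job_name + "-")
    with open(os.path.join(workdir, "input.dat"), "wb") as f:
        f.write(payload)
    return workdir

def run_job(workdir):
    """Placeholder for the actual seismic-processing run."""
    print("processing data staged in", workdir, "...")

def secure_teardown(workdir):
    """Overwrite files before deletion so the next tenant sees nothing."""
    for root, _dirs, files in os.walk(workdir):
        for name in files:
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            with open(path, "r+b") as f:
                f.write(b"\0" * size)  # single zero pass, illustration only
    shutil.rmtree(workdir)             # frees the cycles for the next customer

workdir = provision("seismic-batch", b"example seismic trace bytes")
run_job(workdir)
secure_teardown(workdir)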


CyrusOne partnered with Appro, a well-known storage cluster solutions provider, to develop a new model for COD that meets these customer needs. This new approach overcomes the traditional disadvantages of COD: infrastructure, provisioning and costs.


Instead of an oil company spending capital dollars on building an infrastructure to support data processing, the company leverages the existing infrastructure of a data center. By opting for this economic model, companies reduce the provisioning costs and equipment purchases that would traditionally depreciate over 3 years. Spikes in data processing are affordable and no longer a problem, because customers pay for what they use as they go, “on demand.” Overall, seismic data can be interpreted immediately without a major capital investment in hardware, software or additional data center space.
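To illustrate the economics, the back-of-the-envelope Python comparison below contrasts amortized ownership with pay-as-you-go. Every figure in it is an assumed placeholder, not a quoted price from CyrusOne or anyone else.

# All numbers below are illustrative assumptions, not vendor pricing.
CLUSTER_CAPEX = 1_500_000   # assumed cost to buy and install a cluster
DEPRECIATION_YEARS = 3      # straight-line, per the common 3-year schedule
ON_DEMAND_RATE = 2.50       # assumed dollars per node-hour on a hosted cluster
NODES = 64

def owned_cost_per_year():
    """Capital cost amortized over the depreciation period (simplified:
    ignores power, cooling, staffing and data-center space)."""
    return CLUSTER_CAPEX / DEPRECIATION_YEARS

def on_demand_cost_per_year(busy_hours):
    """Pay-as-you-go cost for only the hours the cluster is actually busy."""
    return NODES * busy_hours * ON_DEMAND_RATE

for busy_hours in (500, 1500, 3000):
    print("%5d busy hours/yr: owned $%.0f vs on-demand $%.0f"
          % (busy_hours, owned_cost_per_year(), on_demand_cost_per_year(busy_hours)))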


From a business perspective as well as a technological one, this new model makes considerably better sense. Even in times of US $70-per-bbl oil, seismic data processing groups remain understandably cost-conscious and project-centric. The model lets them focus on software and their core skill of interpretation while leveraging clusters at a megacenter on an “as needed” basis.


Super-major validation


In 2005, this new model was tested by a super-major oil company in a pilot program. The oil company simply wanted to keep up with project workloads without greatly expanding its hardware or building more data centers. CyrusOne created the external infrastructure, including the financial, business and provisioning model, and was then able to spin out those capabilities on a scaled-down basis for a wide range of seismic and data processing companies and groups.


Ultimately, COD is becoming a highly effective data solution within the geophysical field for numerous reasons. One involves managing capacity and demand, including the ability to control capacity, which is extremely important with clusters that consume 20 times the power of a standard computer rack.

The COD bottom line could scarcely be clearer. Most seismic customers experience processing downtime because of backlogged, uninterpreted data, a lag that impacts both productivity and revenue.

In many ways, COD also reflects a greater sense of fiscal responsibility among super-majors, and indeed companies of all sizes, than prevailed 10 or 15 years ago. They have become more cautious, realizing that extra-high oil prices will not last forever, so the ability to reduce COD costs, even by a few cents, has a significant impact on their bottom line.

Consider, finally, the time factors that have made COD an attractive service offering: approximately 12 months are required to build a data center, followed by 6 months to provision, install and set up equipment before going live operationally.
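As a rough illustration of why capacity control and lead time matter, the short calculation below applies the 20-times power multiple and the 12-plus-6-month build-out cited above; the 5 kW baseline rack draw and the facility power budget are assumed values, not figures from this article.

# The 20x multiple and 18-month build-out come from the text above;
# the baseline rack draw and facility budget are assumptions.
STANDARD_RACK_KW = 5.0
CLUSTER_MULTIPLE = 20
FACILITY_BUDGET_KW = 400.0

cluster_kw = STANDARD_RACK_KW * CLUSTER_MULTIPLE
max_clusters = int(FACILITY_BUDGET_KW // cluster_kw)
build_months = 12 + 6   # construction plus provisioning, per the text

print("each cluster rack draws ~%.0f kW" % cluster_kw)
print("a %.0f kW hall can host only %d such clusters" % (FACILITY_BUDGET_KW, max_clusters))
print("building your own capacity takes ~%d months end to end" % build_months)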


If COD were still in the conceptual stage, many companies might still take a wait-and-see approach. The successful experience of a super-major, however, made such questions moot, validating COD as a viable business model for seismic data processing and interpretation that companies of all sizes can employ.