Visualization centers were the shiny new toys of the late 1990s and early 2000s, but their luster dimmed as companies began to question the return on investment. Moving to a separate “collaboration center” that required setup and data transfer also was not conducive to the way people wanted to do their jobs.

With the arrival of new computing technology – GPUs for data processing, remote visualization, and cloud computing – companies like NetApp, NVIDIA, Cisco, and NICE are suggesting a new paradigm for visualization in the upstream oil and gas industry. Instead of keeping the visualization center separate from the data center, these companies recommend blending the two environments to reflect the increased integration taking place within the industry. This would bring CPUs and GPUs into closer contact, a growing necessity with the explosion of data in recent years.

Graphics processing in the data center

GPUs, originally developed for gaming, are increasingly being used to speed up time-intensive tasks such as seismic data processing and reservoir simulation. NVIDIA has taken the lead in GPU design, not only for gaming systems but also for industries such as automotive, medicine, and oil and gas.

“In the area of seismic processing, we’ve taken simulations that would have taken weeks and dramatically reduced the amount of time it takes to process the data,” said Jen-Hsun Huang, co-founder, president, and CEO of NVIDIA, at Schlumberger’s SIS Global Forum earlier this year. “Geologists are looking at gigabytes of data, and we render it by turning pixels into volumetric geometries called voxels. This creates a volumetric impression of the subsurface.”
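To make that idea concrete, the sketch below shows, in simplified form, how a block of seismic amplitudes might be mapped to voxel opacities on a GPU. It is a minimal illustration only; the kernel name, the linear transfer function, and the toy volume size are assumptions made for this example, not details of NVIDIA’s or any vendor’s rendering code.

```cuda
// amplitude_to_voxels.cu -- illustrative sketch only; the kernel name, transfer
// function, and volume size are assumptions made for this example.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// One GPU thread per voxel: convert a seismic amplitude sample into an
// opacity value with a simple linear transfer function.
__global__ void amplitudeToOpacity(const float* amp, unsigned char* opacity,
                                   int nx, int ny, int nz,
                                   float ampMin, float ampMax)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= nx || y >= ny || z >= nz) return;

    size_t idx = ((size_t)z * ny + y) * nx + x;         // flatten (x,y,z)
    float t = (amp[idx] - ampMin) / (ampMax - ampMin);  // normalize to [0,1]
    t = fminf(fmaxf(t, 0.0f), 1.0f);
    opacity[idx] = (unsigned char)(t * 255.0f);
}

int main()
{
    const int nx = 128, ny = 128, nz = 128;             // small synthetic volume
    const size_t n = (size_t)nx * ny * nz;
    std::vector<float> hostAmp(n, 0.5f);                // placeholder amplitudes

    float* dAmp = nullptr;
    unsigned char* dOpacity = nullptr;
    cudaMalloc((void**)&dAmp, n * sizeof(float));
    cudaMalloc((void**)&dOpacity, n);
    cudaMemcpy(dAmp, hostAmp.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    dim3 block(8, 8, 8);
    dim3 grid((nx + 7) / 8, (ny + 7) / 8, (nz + 7) / 8);
    amplitudeToOpacity<<<grid, block>>>(dAmp, dOpacity, nx, ny, nz, 0.0f, 1.0f);
    cudaDeviceSynchronize();

    std::vector<unsigned char> hostOpacity(n);
    cudaMemcpy(hostOpacity.data(), dOpacity, n, cudaMemcpyDeviceToHost);
    printf("opacity of the first voxel: %d\n", (int)hostOpacity[0]);

    cudaFree(dAmp);
    cudaFree(dOpacity);
    return 0;
}
```

In a real workflow the resulting opacity volume would be handed to a renderer, but the point here is simply that every voxel can be computed independently and in parallel.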

Combining data centers with visualization centers enables better collaboration. (Image courtesy of NetApp)

He added that over the next few years this technology will be expanded into massively parallel implementations of rendering. WesternGeco already has more than 10,000 GPUs in its data centers.

Next-generation data centers will use GPUs for computing as well as for remote visualization. Huang said GPUs are ideal for the intensive compute needs of the oil and gas industry. “We built our GPU to be everything a CPU is not,” he said. “For instance, in a CPU, only a small percentage is dedicated to mathematics. The throughput design of a GPU makes it a better tool for mathematical calculations.” The parallel design of a GPU allows it to execute many threads concurrently. Its cores can process arrays of data elements the same way arrays of pixels are manipulated, which accelerates almost any data-parallel application.
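As a rough illustration of that one-thread-per-element model, the CUDA sketch below applies a simple gain to an array of samples, with each GPU thread handling exactly one element. The names and the operation are illustrative assumptions, not any particular production workload.

```cuda
// scale_samples.cu -- minimal sketch of the thread-per-element model; the
// names and the gain operation are assumptions made for this example.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Each GPU thread handles exactly one array element, much as one pixel
// would be shaded independently of its neighbors.
__global__ void applyGain(float* samples, int n, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global element index
    if (i < n)
        samples[i] *= gain;
}

int main()
{
    const int n = 1 << 20;                          // ~1 million samples
    std::vector<float> host(n, 1.0f);

    float* dev = nullptr;
    cudaMalloc((void**)&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    applyGain<<<blocks, threads>>>(dev, n, 2.0f);
    cudaDeviceSynchronize();

    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first sample after gain: %f\n", host[0]);

    cudaFree(dev);
    return 0;
}
```

The launch configuration simply creates enough thread blocks to cover every element, the same pattern a renderer uses to touch every pixel or voxel.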

Despite these advantages, the industry has been somewhat slow to embrace GPU processing because of memory limitations and programming issues. The OpenACC specification addresses some of the programming problems, and GPU manufacturers are working to combine GPUs and CPUs on a single processor board.

Remote visualization

GPUs also aid in the use of remote visualization technologies. Given the mobility of the industry’s workforce, remote visualization is important because it gives remote workers full access to the company’s IT resources. Developments in this area are enabling companies to deliver higher resolution and higher performance while using less bandwidth.

According to Keith Cockerham, development manager for the energy sector at NVIDIA, the ability to virtualize GPUs with low-latency remote display capabilities is enabling enterprise companies to deploy graphics-intensive applications to all users.

“With these capabilities, we can extend and enhance the integrated infrastructure to fully incorporate compute, storage, networking, and graphics with standard commercial hypervisors like Citrix XenServer to deliver a scalable and agile IT infrastructure,” Cockerham said. “These GPU and remote graphics technologies make advanced visualization available when and where it’s needed, which will improve interdisciplinary collaboration and enable workflows that may cross geographical boundaries. They also will alleviate concerns that arise due to steadily increasing dataset sizes and the increased level of precision needed for unconventional plays. Companies will be able to deliver these solutions while allowing strategic data to remain within an optimized, secure, and central location.”

Eni has recognized the value of remote visualization and has a strategy to consolidate its applications on centralized infrastructure, with a data center in Milan that handles the bulk of its European visualization and its worldwide HPC and a visualization hub in Houston.

The company has experienced significant advantages from remote visualization, including cost savings as it moved from expensive workstations to lower-cost desktops. The current hardware requires no specialized software to be installed locally, which reduces desktop IT support.

It also has benefited from the use of virtual teams, which ensures the best skill sets are being applied to the problems at hand.

Integration with the cloud

“Virtualization and Cloud technology will make it possible to integrate and dynamically share all the compute, GPU, networking, and storage resources necessary for computation, visualization, and interpretation as part of a next-generation upstream oil and gas data center,” said Peter Ferri, energy industry director at NetApp. “These new data centers will deliver greater flexibility, efficiency, and economies of scale while improving computational performance, optimizing data management, and facilitating the use of critical visualization capabilities by experts and decision-makers.”

The ultimate goal is an interconnection between local and remote facilities so that oil and gas data centers can use remote IT resources, including public clouds, to meet compute and visualization requirements.

Cloud computing is still a rather new and sometimes scary concept for the oil and gas industry, which worries about relying on a public cloud to manage sensitive data. Ferri said, however, that public cloud options actually are preferable in numerous scenarios.

For instance, several public providers already offer GPU and other HPC resources, including Amazon EC2, Nimbix, Peer1, and Penguin Computing. This opens up the option of using public resources in addition to, or even instead of, internal resources.

One of the primary concerns is simply the volume of data managed by the oil and gas industry. Uploading critical data to a public cloud would not only raise questions about its security but also could be time-prohibitive. Companies that offer HPC resources provide services such as “private clouds” within their infrastructure to help mitigate these concerns, along with high-speed networking to move data more quickly, Ferri said.

Public clouds make the most sense in specific locations where access to internal resources is affected by latency or connectivity issues. They also are useful for small, shared projects that have a short timeline.

An IT infrastructure for the long term

Bringing GPUs into the data center will have a long-lasting impact on the way business is done in the oil and gas industry, Ferri said. “This new IT infrastructure will lead to further changes in workflows and practices for better collaboration, faster deployment of new joint venture projects, and provisioning of advanced IT capabilities to remote operations,” he said.

For more detailed information on this topic, download the new E&P white paper titled “Next-generation data center architecture for advanced visualization in upstream oil and gas” at epmag.com/whitepapers.