Monday, 17 August 2015

Connected barrels: Transforming oil and gas strategies with the Internet of Things

dupress.com

After years in which high and rising oil prices held above $100 per barrel, new extraction technologies have opened up fresh sources of supply that suggest a new price equilibrium some $20 to $30 per barrel lower.1,2 This new normal of lower oil prices will not only lay bare inefficient oil and gas (O&G) companies but also push even the efficient ones to find ways to preserve their top and bottom lines. Luckily for the O&G industry, a new suite of technologies promises to help companies tackle these challenges.

The Internet of Things (IoT), which integrates sensing, communications, and analytics capabilities, has been simmering for a while. But it is ready to boil over: the core enabling technologies have improved to the point that widespread adoption seems likely. The IoT’s promise lies not in helping O&G companies directly manage their existing assets, supply chains, or customer relationships; rather, IoT technology creates an entirely new asset: information about these elements of their businesses.

In an industry as diverse as O&G, it is no surprise that there is no one-size-fits-all IoT solution. But there are three business objectives relevant to IoT deployments in the O&G industry: improving reliability, optimizing operations, and creating new value. Each O&G segment can find the greatest benefit from its initial IoT efforts in one of these categories, which are enabled by new sources of information. With this in mind, this article provides segment-level perspectives, aimed at helping companies understand how information creates value, the impediments to value creation and how they can be addressed effectively, and how companies can position themselves to capture their fair share of that value.3

Upstream companies (e.g., exploration and production) focused on optimization can gain new operational insights by analyzing diverse sets of physics, non-physics, and cross-disciplinary data. Midstream companies (e.g., transportation, such as pipelines and storage) eyeing higher network integrity and new commercial opportunities will tend to find significant benefit by building a data-enabled infrastructure. Downstream players (e.g., petroleum products refiners and retailers) should see the most promising opportunities in revenue generation by expanding their visibility into the hydrocarbon supply chain and targeting digital consumers through new forms of connected marketing.

TAPPING A GUSHER OF DATA

A new period of much lower prices is taking hold in the O&G industry, putting heavily indebted O&G companies on credit-rating agencies’ watchlists and derailing the capital-expenditure and distribution plans of even the most efficient ones.4 Addressing this structural weakness in oil prices requires more than financial adjustments. It demands a change in the industry’s approach to technology: from using operational technologies to locate and exploit complex resources, to using information from those technologies to make hydrocarbon extraction and every successive stage before sale more efficient and even revenue-generating.

Enabling this shift to information-based value creation are the falling costs and increasing functionality of sensors, the availability of advanced wireless networks, and ever more powerful and ubiquitous computing, which have collectively opened the floodgates for the amount of data the industry can swiftly collect and analyze. Sensor prices have tumbled from $2 in 2006 to about 40 cents, bandwidth costs are a small fraction of what they were even five years ago, and the individual data sets the industry amasses now run to petabytes.5,6

The industry is hardly resistant to adopting new technologies. During the past five decades, it has developed or applied an array of cutting-edge advances, including geophones, robots, satellites, and advanced workflow solutions. However, these technologies function primarily at the asset level and are rarely integrated across disciplines or combined with business information. According to MIT Sloan Management Review and Deloitte’s 2015 global study of digital business, the O&G industry’s digital maturity is among the lowest, at 4.68 on a scale of 1 to 10, where 1 is least mature and 10 is most mature. “Less digitally mature organizations tend to focus on individual technologies and have strategies that are decidedly operational in focus,” according to the study.7

O&G companies can reap considerable value by developing an integrated IoT strategy with an aim to transform the business. It has been estimated that only 1 percent of the information gathered is being made available to O&G decision makers.8 Increased data capture and analysis can likely save millions of dollars by eliminating as many as half of a company’s unplanned well outages and boosting crude output by as much as 10 percent over a two-year period.9 In fact, IoT applications in O&G can literally influence global GDP. Industry-wide adoption of IoT technology could increase global GDP by as much as 0.8 percent, or $816 billion during the next decade, according to Oxford Economics.10

LINKING WITH BUSINESS PRIORITIES

Deploying technology does not automatically create economic value. To do so, companies must link IoT deployments, like any technology deployment, with specific business priorities, which can be described, broadly, using three categories of increasing scope. In the narrowest sense, companies seek to minimize the risks to health, safety, and environment by reducing disruptions (improving reliability). Next, companies seek to improve the cost and capital efficiency of operations by increasing productivity and optimizing the supply chain (optimizing operations). At the largest scope, companies seek to explore new sources of revenue and competitive advantage that transform the business (creating new value) (see figure 1).

[Figure 1]

Upstream players have taken great strides in enhancing the safety of their operations, especially in the five years since the Macondo incident.11,12,13 Although technologies will continue to play an important role in improving the safety record of exploration and production (E&P) firms, lower oil prices are driving companies to place a higher business priority on optimization, where IoT applications are relatively immature. Improving operational efficiency is more complex than ever, given the increased diversity of the resource base being developed: conventional onshore and shallow water, deepwater, shale oil and gas, and oil sands.

The midstream segment traditionally has been a stable business connecting established demand and supply centers. Not any longer: The rise of US shale has altered supply-demand dynamics—including the growing exports of liquids and natural gas—and increased midstream companies’ business complexity. To serve this newfound growth and dynamism effectively, midstream companies are focusing on maintaining and optimizing their networks, a priority for which technology exists but which midstream companies have yet to fully integrate across their full network of pipelines and associated infrastructure.

By contrast, downstream players are relatively mature in monitoring risks and optimizing operations because of their standardized operations and long history of automation and process-control systems. But slowing demand growth worldwide, rising competition from new refineries in the Middle East and Asia, and changing and volatile feedstock and product markets are pressuring downstream players to explore new areas of optimization and extend their value beyond the refinery.

Regardless of the business priority served by new sources of data, the way in which the resulting information creates value can be understood using a common analytical framework: the Information Value Loop (see the sidebar, “The Information Value Loop”). It is the flow of information around this loop that creates value, and the magnitude of the information, the risk associated with that flow, and the time it takes to complete a circuit determine how much value is created. Organizations should design IoT deployments to create a flow of information around the value loop most relevant to a given business priority. Impediments to that flow can be thought of as bottlenecks in the value loop, so a key challenge in realizing the value of any IoT deployment is correctly identifying and effectively addressing any bottlenecks that materialize.

The Information Value Loop
The suite of technologies that enables the Internet of Things promises to turn almost any object into a source of information about that object. This creates both a new way to differentiate products and services and a new source of value that can be managed in its own right. Realizing the IoT’s full potential motivates a framework that captures the series and sequence of activities by which organizations create value from information: the Information Value Loop.
[Figure: The Information Value Loop]

For information to complete the loop and create value, it passes through the loop’s stages, each enabled by specific technologies. An act is monitored by a sensor that creates information; that information passes through a network so that it can be communicated; and standards—be they technical, legal, regulatory, or social—allow that information to be aggregated across time and space. Augmented intelligence is a generic term for all manner of analytical support used to analyze information. The loop is completed via augmented behavior technologies that either enable automated action or shape human decisions in a manner that leads to improved action.
Getting information around the value loop allows an organization to create value; how much value is created is a function of the value drivers, which capture the characteristics of the information as it makes its way around the loop. These drivers fall into three categories: magnitude, risk, and time.
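As a rough illustration of the loop’s structure, the sketch below models the five stages and three value drivers as plain data types. The stage and driver names follow the framework described above; the scoring logic and the numbers are purely illustrative assumptions, not part of the framework itself.

```python
# Illustrative model of the Information Value Loop. The stage and driver names
# follow the framework described above; the scoring logic and numbers are
# invented purely for demonstration.
from dataclasses import dataclass
from typing import Optional

STAGES = ["create", "communicate", "aggregate", "analyze", "act"]

@dataclass
class ValueDrivers:
    magnitude: float  # scope, scale, and frequency of the information (0-1)
    risk: float       # security, reliability, and accuracy concerns (0-1, higher is worse)
    time: float       # how long a full circuit of the loop takes (0-1, higher is slower)

def loop_value(drivers: ValueDrivers, bottleneck: Optional[str] = None) -> float:
    """Toy score: value grows with magnitude and shrinks with risk and delay.
    A bottleneck at any stage stops the circuit, so no value is created."""
    if bottleneck in STAGES:
        return 0.0
    return drivers.magnitude * (1 - drivers.risk) * (1 - drivers.time)

if __name__ == "__main__":
    drivers = ValueDrivers(magnitude=0.8, risk=0.2, time=0.3)
    print(round(loop_value(drivers), 3))                # healthy loop -> 0.448
    print(loop_value(drivers, bottleneck="aggregate"))  # e.g., no open standards -> 0.0
```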

UPSTREAM: ASSIMILATING DIVERSE DATA SETS

The fall in crude prices and the push to optimize operations come as E&P players face a period of rising technical and operational complexity. Players are placing more equipment on the seabed and developing systems able to operate at pressures of 20,000 pounds per square inch and withstand temperatures of up to 350°F, particularly in deepwater; increasing downhole intensity and above-ground activity in shales; moving to hostile and remote locations where safety is key; and producing from old fields that have significant maintenance needs.14

This increased complexity, captured by the tens of thousands of new sensors now deployed, has driven a data explosion in the E&P segment; by some estimates, large integrated O&G companies now generate more than 1.5 terabytes of internal data a day.15 This data surge, however, has yet to generate the hoped-for economic benefits. “The upstream industry loses $8 billion per year in non-productive time (NPT) as engineers spend 70 percent of their time searching for and manipulating data,” according to Teradata.16

On the one hand, the growing scale and frequency of hydrocarbon reservoir data (physics-based data that follow established scientific principles) are challenging E&P companies’ data-processing capabilities. On the other, the need to expand the scope of data (to include non-physics-based data, which are independent of scientific principles and add assumptions, conditions, uncertainties, and scenarios, as well as cross-disciplinary data that cut across exploration, development, and production) is restricted by companies’ weak data-management capabilities. “Ample opportunities exist for upstream oil and gas companies to improve performance via advanced analytics, but weak information management is inhibiting the progress for many,” according to Gartner.17

Companies are struggling to alleviate these bottlenecks, in large part because a lack of open standards limits the flow of data at the aggregation stage and thus limits analysis. For example, a company operating several thousand gas wells in Colorado’s Piceance basin wanted to upgrade its supervisory control and data acquisition system to manage growing operational complexity. Because the old system used a vendor-proprietary data-communications format, the new vendor had to write a driver from scratch to communicate with it, costing the operator $180,000.18 In some cases, not even this sort of additional investment is enough, and data flow comes to a standstill, choking process flows as well. To eliminate such costs across the industry, users, vendors, and industry councils (e.g., the Standards Leadership Council) could collaborate to create open standards, enabling compatibility and interoperability.

In addition, oilfield service (OFS) companies could play a larger role in standardizing and integrating data. Their deep understanding of physics-based data and long history of working with data-management and IT service providers position them well to play a de facto standardizing role in the industry’s value loop. Building on this expertise might allow OFS companies to create a new revenue stream and help them fend off advances from IT service providers that are beginning to vertically integrate and market their developed OFS capabilities directly to E&P companies.19

Even insights from aggregated data may have no value if they reach decision makers late or if the data overload a company’s infrastructure. The data explosion—coupled with bandwidth challenges—increasingly calls for a complementary, localized data-processing infrastructure that pre-processes information closer to where it is generated and transmits only selected data to the cloud. While moving network intelligence closer to the source has broader uses, it is especially well suited to remote locations that generate terabytes of data and demand predictable latency.
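The paragraph above describes an edge-computing pattern: pre-process readings near the source and send only summaries and exceptions onward. Below is a minimal sketch of that filtering step, assuming a hypothetical well tag, pressure limit, and a stubbed-out uplink function; none of these are a real operator’s values.

```python
# Hypothetical edge-gateway filter: summarize raw sensor readings locally and
# forward only aggregates and out-of-range exceptions to the cloud. The well
# tag, pressure limit, and uplink stub are assumptions for illustration.
from statistics import mean

PRESSURE_LIMIT_PSI = 18000  # illustrative alarm threshold, not a real spec

def send_to_cloud(payload):
    # Stand-in for an MQTT/HTTP uplink; here we simply print the payload.
    print("uplink:", payload)

def process_batch(well_id, pressure_readings_psi):
    exceptions = [p for p in pressure_readings_psi if p > PRESSURE_LIMIT_PSI]
    summary = {
        "well": well_id,
        "count": len(pressure_readings_psi),
        "mean_psi": round(mean(pressure_readings_psi), 1),
        "max_psi": max(pressure_readings_psi),
        "exceptions": exceptions,  # raw values kept only for out-of-range points
    }
    send_to_cloud(summary)

if __name__ == "__main__":
    process_batch("well-042", [17250.0, 17400.5, 18120.2, 17310.8])
```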

No matter what data-processing architecture a company erects, it must analyze that data if it is to optimize existing operations and, more importantly, identify new areas of performance improvement. For E&P companies, the analysis of standardized data will likely most affect production, followed by development and exploration. By some projections, IoT applications could reduce production and lifting costs by more than $500 million for a large integrated O&G company with annual production of 270 million barrels.20 For example:

Production: The opportunity to automate thousands of wells spread across regions (a large company handles more than 50,000 wells) and monitor multiple pieces of equipment per well (a single pump failure can cost $100,000 to $300,000 a day in lost production) makes production the biggest potential O&G beneficiary of IoT applications.21
Development: Smart sensors, machine-to-machine connections, and big data analytics can increase active rig time, while a connected supply chain dependent on networked mobility and big data can reduce cost inflation and delays in new projects.
Exploration: Advancements in seismic data acquisition (4D, micro-seismic) and computing power have already improved E&P companies’ understanding of subsurface geology by providing more and better data about what lies beneath.22 However, still greater opportunity lies in faster processing of existing seismic data and transforming them into surface models.
Beyond these technical advantages, if common data standards can integrate diverse sets of data, companies can likely gain insights into previously invisible aspects of operations and adjust how they make decisions. For example, analytics applied to a variety of physics-based data at once—seismic, drilling, and production data—could help reservoir engineers map changes in reservoirs over time and provide insights for production engineers making changes in lifting methods.23 Similarly, a company could generate savings by analyzing non-physics-based data, such as the impact that choices made during a well’s development phase have on the design and effectiveness of production decisions.

For example, Apache Corp., a large US E&P company, in collaboration with an analytics software firm, not only improved the performance of its electrical submersible pumps (ESPs) but also developed the ability to predict a field’s production capacity in three steps. The first step used hybrid and multi-disciplinary data about pumps, production, completion, and subsurface characteristics to predict submersible-pump failure with prescriptions to avoid future failures. The second step enabled Apache to use the additional data generated in the first stage to prescribe the optimal pump configuration for the next well. The third step helped the company to use these additional ESP performance data to evaluate fields’ potential production capacity before acquiring them.24

This “compounding effect,” in which one level of data analytics provides insights that can then lead to additional analytics, promises to give E&P companies new operational insights that simply were never before available or visible.
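As a rough illustration of the first step in that kind of analysis, flagging a pump that is drifting toward failure based on its own sensor history, here is a minimal rule-based screen. The tags, readings, thresholds, and the use of vibration as the health signal are assumptions for illustration; this is not Apache’s actual method.

```python
# Hypothetical rule-based screen for electrical submersible pump (ESP) health:
# flag any pump whose latest vibration reading sits more than three standard
# deviations above its own history. Tags, readings, and the use of vibration
# as the health signal are illustrative assumptions, not an operator's method.
from statistics import mean, stdev

def flag_anomalous_pumps(history):
    """history maps pump_id -> list of vibration readings (mm/s), oldest first."""
    flagged = []
    for pump_id, readings in history.items():
        if len(readings) < 10:
            continue  # not enough history to form a baseline
        baseline, spread = mean(readings[:-1]), stdev(readings[:-1])
        if spread and readings[-1] > baseline + 3 * spread:
            flagged.append(pump_id)
    return flagged

if __name__ == "__main__":
    data = {
        "esp-101": [2.1, 2.0, 2.2, 2.1, 2.3, 2.2, 2.1, 2.0, 2.2, 2.1, 4.9],  # drifting up
        "esp-102": [1.8, 1.9, 1.8, 2.0, 1.9, 1.8, 1.9, 2.0, 1.9, 1.8, 1.9],  # stable
    }
    print(flag_anomalous_pumps(data))  # ['esp-101']
```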

MIDSTREAM: PIPELINES OF INFORMATION

Since the start of the US shale boom, pipeline companies have seen their business shift from a simple business model—transporting limited grades of liquids and natural gas between fixed supply and demand centers—to a complex and more dynamic model of transporting variable volumes and grades of products from multiple locations to new end users and markets.

This rising business complexity—combined with aging pipeline networks, legacy and manual monitoring and control devices, and the ongoing challenge of service differentiation—presents both challenges and opportunities for midstream companies. With annual losses of approximately $10 billion due to fuel leaks and thefts in the United States alone,25 companies face considerable upside in improving pipeline safety and reliability.

Installing more operational hardware and software with limited pre-defined tags (e.g., pressure, temperature, volume, vibration) and following rules-based approaches (e.g., statistical, historical) would likely do little to reduce risks or improve a network’s reliability. What may be needed is a shift toward data-enabled infrastructure—in other words, getting started on the Information Value Loop by investing in sensors that create new data. “Midstream energy companies lag far behind what other industries invest in information technology,” according to Oil and Gas Monitor.26


Enbridge, TransCanada, and PG&E, for example, are relieving this bottleneck by creating data about potential pipeline breaches from advanced sensors installed inside or outside the pipeline. TransCanada and Enbridge are testing four technologies that essentially see, feel, smell, and hear various aspects of their oil pipelines: vapor-sensing tubes that “see” bitumen spilled by shooting air down a tube; a fiber-optic distributed temperature sensing system that “feels” fluctuations in temperature caused by bitumen leaking into ambient soil; hydrocarbon sensing cables that send electric signals to “smell” hydrocarbons; and a fiber-optic distributed acoustic sensing system that “hears” sound variations and can indicate a pipeline leak.27,28
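To make the “feel” technique concrete, the sketch below screens a synthetic distributed temperature sensing trace for localized hot spots that could indicate warm bitumen leaking into the surrounding soil. The window size, threshold, and trace values are illustrative assumptions, not vendor parameters.

```python
# Toy screen over a distributed temperature sensing (DTS) trace: flag fiber
# positions whose temperature deviates sharply from the local baseline, a
# possible signature of warm bitumen leaking into the surrounding soil.
# Window size, threshold, and the synthetic trace are illustrative assumptions.
from statistics import median

def find_hot_spots(trace_c, window=25, threshold_c=3.0):
    """trace_c holds one temperature reading (deg C) per metre of fiber."""
    suspects = []
    for i, temp in enumerate(trace_c):
        lo, hi = max(0, i - window), min(len(trace_c), i + window + 1)
        baseline = median(trace_c[lo:hi])
        if temp - baseline > threshold_c:
            suspects.append(i)
    return suspects

if __name__ == "__main__":
    trace = [12.0] * 200          # quiet pipeline section
    trace[141] = 17.8             # simulated leak signature at ~141 m
    print(find_hot_spots(trace))  # [141]
```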

PG&E, along with research institutions and government agencies, is testing many non-invasive, three-dimensional (3D) imaging technologies such as the 3D toolbox, first developed for the dental industry, which accurately identifies and measures dents, cracks, and corrosion on the pipeline’s outer surface. The system automatically collects and feeds images into calculation tools to generate an assessment within minutes, helping engineers to put together a corrective-action plan immediately. Similarly, PG&E is adapting NASA’s airborne laser-based system for methane leak detection, in which leaks’ GPS coordinates are automatically stored and the data captured can be correlated with variables such as temperature, time, and pipeline configuration for improved monitoring and control.29

Enhancing pipeline safety is in all players’ interest, since a spill by any single operator can lead to higher costs and tighter regulations for the entire industry. As a result, companies are joining forces to develop a data-enabled monitoring infrastructure: the industry-wide benefit of this collaboration outweighs any single company’s competitive or commercial advantage. Ensuring safety and minimizing risks are table stakes—to truly differentiate itself in the midstream segment, a company often must go further.

In fact, a midstream company would likely accrue a larger competitive and commercial advantage by analyzing product and flow data more comprehensively all along its network—similar to the way US electric companies are analyzing energy data using smart devices and meters. According to some estimates, every 150,000 miles of pipeline generates 10 terabytes of data, roughly equal to the entire printed collection of the Library of Congress.30

The “midstream majors” are well positioned to create insights from these new volumes of data because of their diverse portfolios and integrated networks.31 A big midstream company can leverage data across its pipelines, helping shippers find the best paths to market and charging them differently for route optionality in their contracts. Forecasting algorithms run on historical transported volumes can reveal ways in which a midstream major might use pricing incentives to induce producers and end users to smooth volumes.32 Similarly, a real-time analysis of changing volumes across its network of shale plays can alert the company to new price differentials.
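A minimal sketch of the kind of volume analysis described above: compare each route’s latest shipped volume with its historical baseline and flag large swings worth a pricing or capacity review. The routes, volumes, and tolerance are hypothetical.

```python
# Hypothetical screen over shipped volumes by route: flag routes whose most
# recent volume deviates sharply from the historical average, a cue to revisit
# pricing incentives or capacity allocation. Routes, volumes, and the tolerance
# are invented for illustration.
from statistics import mean

def flag_volume_swings(history_by_route, tolerance=0.20):
    """history_by_route maps route -> monthly volumes (thousand barrels), oldest first."""
    flags = {}
    for route, volumes in history_by_route.items():
        baseline = mean(volumes[:-1])
        swing = (volumes[-1] - baseline) / baseline
        if abs(swing) > tolerance:
            flags[route] = round(swing, 2)
    return flags

if __name__ == "__main__":
    history = {
        "Bakken to Cushing": [410, 420, 415, 405, 520],  # surge in the latest month
        "Permian to Gulf":   [300, 310, 305, 300, 308],  # stable
    }
    print(flag_volume_swings(history))  # {'Bakken to Cushing': 0.26}
```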

The pipeline data, when combined in a timely manner with growing data from an expanding network of export facilities, markets, marine terminals, and product grades, can give rise to a data-equipped midstream enterprise. “Forward-thinking, innovative midstream organizations can take advantage of the unprecedented volume of new types of data. Emerging types of data, such as machine and sensor data, geolocation data, weather data, and log data become valuable at high volumes, especially when correlated against other data sets,” according to Hortonworks.33

DOWNSTREAM: FROM INSIDE OUT TO OUTSIDE IN

Crude-oil refining is a mature business with few recent innovations in processing technology. This, and the highly commoditized nature of petroleum products, make refining the most commercially challenging part of the energy value chain. Consequently, refiners worldwide have traditionally focused on running refineries as efficiently as possible and seeking to increase the yield of higher-value products.

Avoiding shutdowns is a critical part of increasing refinery output. Between 2009 and 2013, there were more than 2,200 unscheduled refinery shutdowns in the United States alone, an average of 1.3 incidents per day.34 These shutdowns cost global process industries 5 percent of their total production, equivalent to $20 billion per year.35 Ineffective maintenance practices also result in unscheduled downtime that costs global refiners on average an additional $60 billion per year in operating costs.36

Typically, refiners schedule maintenance turnarounds for the entire refinery or for individual units on a pre-set schedule to allow coordination of inspection and repair activities and to plan for alternative product-supply arrangements. For individual components, refiners routinely pull devices into the workshop for inspection and overhaul, without much information about a particular device’s expected condition, perhaps wasting efforts on devices that need not be repaired. But now non-intrusive smart devices (sensors), advanced wireless mesh networks (network), open communication protocols (standards), and integrated device and asset-management analytics (augmented intelligence) are driving a shift away from time-based preventive planning to condition-based predictive maintenance strategies.
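The contrast between the two maintenance philosophies can be sketched as a pair of trigger rules: one fires on the calendar, the other only when a measured condition indicator crosses a limit. The device tags, the indicator (valve travel deviation), and the limits below are assumptions for illustration.

```python
# Toy comparison of time-based and condition-based maintenance triggers for a
# set of refinery devices. The tags, the condition indicator (valve travel
# deviation), and the limits are invented for illustration.
def time_based_due(days_since_service, interval_days=365):
    # Calendar rule: pull the device after a fixed interval, healthy or not.
    return days_since_service >= interval_days

def condition_based_due(travel_deviation_pct, limit_pct=15.0):
    # Condition rule: pull the device only when its measured deviation exceeds the limit.
    return travel_deviation_pct >= limit_pct

if __name__ == "__main__":
    fleet = [
        {"tag": "CV-1203", "days": 400, "deviation_pct": 4.0},   # healthy, yet "due" by calendar
        {"tag": "CV-1177", "days": 150, "deviation_pct": 18.5},  # degrading ahead of schedule
    ]
    for device in fleet:
        print(device["tag"],
              "time-based:", time_based_due(device["days"]),
              "| condition-based:", condition_based_due(device["deviation_pct"]))
```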

For example, a crude unit of Phillips 66 was subject to preheat train fouling (accumulation of unwanted material reducing plant equipment’s efficiency). There were no data to quantify how much energy was being lost, or which exchangers to clean or when to clean them. Using wireless temperature and flow-measurement sensors, the refiner was able to predict the health of exchangers by correlating these measurements with production and environmental data. Such integrated analytics helped the refiner quickly spot where and when energy loss could exceed the target, providing estimated annual savings of $55,000 per exchanger. Most importantly, it helped the refiner identify periods of best performance and define best practices by comparing the performance of exchangers across units, which in turn allowed the company to improve performance across the plant.37
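As an illustration of this type of exchanger analysis, the sketch below estimates each exchanger’s current heat duty from flow and temperature readings using the standard sensible-heat relation (Q = m_dot x cp x delta T), compares it with a clean baseline, and prices the lost duty. The tag names, baselines, specific heat, and fuel cost are hypothetical; they are not Phillips 66’s figures.

```python
# Hypothetical fouling monitor: estimate each exchanger's current heat duty from
# flow and temperature readings (Q = m_dot * cp * delta_T), compare it with a
# clean baseline, and price the lost duty. Tags, baselines, specific heat, and
# fuel cost are assumptions, not Phillips 66 figures.
CP_KJ_PER_KG_K = 2.4       # rough specific heat for a crude stream (assumption)
ENERGY_COST_PER_GJ = 6.0   # illustrative fuel cost, USD per GJ
HOURS_PER_YEAR = 8000

def duty_kw(mass_flow_kg_s, t_in_c, t_out_c):
    return mass_flow_kg_s * CP_KJ_PER_KG_K * (t_out_c - t_in_c)

def annual_loss_usd(current_duty_kw, clean_duty_kw):
    lost_gj = max(0.0, clean_duty_kw - current_duty_kw) * 3600 * HOURS_PER_YEAR / 1e6
    return lost_gj * ENERGY_COST_PER_GJ

if __name__ == "__main__":
    exchangers = [
        {"tag": "E-101", "flow": 20.0, "t_in": 120.0, "t_out": 165.0, "clean_kw": 2400},
        {"tag": "E-102", "flow": 20.0, "t_in": 120.0, "t_out": 158.0, "clean_kw": 2400},
    ]
    for ex in exchangers:
        q = duty_kw(ex["flow"], ex["t_in"], ex["t_out"])
        print(ex["tag"], f"duty={q:.0f} kW",
              f"estimated loss=${annual_loss_usd(q, ex['clean_kw']):,.0f}/yr")
```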

This seems like a fairly straightforward example of deploying sensors to create new data and generate value. Given the many similar examples, why have so many refiners thus far failed to fully capitalize on these sorts of IoT-enabled improvements? In many instances, data capture and analytics (the flow of information) happen mostly at an asset level or, to some extent, at an overall plant level. What has been less common is analysis of data across the system (including pre- and post-refinery links in logistics and distribution) and, moreover, across the ecosystem (adding external variables such as consumer profiles and behavior) (see figure 2).

[Figure 2]

Optimizing the supply chain by streamlining the planning and scheduling process is one area where IT service providers’ automated software and hardware solutions have already made significant inroads. Using visibility into the full hydrocarbon supply chain as a system to enhance refining operations and flexibility is another: integrated information can help refiners create and capture new value. This may make particular sense for US refiners, which are rapidly changing their crude sourcing strategy from mostly buying medium and heavy crude under long-term contracts (following a typical supply-chain process) to buying a greater range of light, medium, and heavy crude blends in the spot market (requiring greater supply-chain dynamism to reap the benefits).

One US refiner, for example, wanted to properly value its future crude purchases, especially cheap crude available for immediate purchase on the spot market. However, the refiner had limited data on future operating and maintenance costs for the various crudes it buys and processes; varying sulfur and bitumen content in a crude can lead to additional operating and maintenance expense that nullifies the price benefit. The refiner first installed pervasive sensors on refinery equipment, which allowed it to gather data on the impact of processing various crudes. Once collected and analyzed, the sensor data were integrated with market data on crudes (cargo availability, price, grade, etc.) on a central hub, allowing the refiner to bid effectively and in a timely manner for its future crude cargoes.38

This analysis, if extended and combined with information on variations in oil delivery times, dock and pipeline availability, storage and inventory levels, and so on (scope), could help the refiner come up with several what-if scenarios, making its crude sourcing more dynamic and competitive.
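A minimal sketch of that valuation logic, assuming hypothetical grades and figures: net each cargo’s headline price discount against the incremental operating and maintenance cost predicted for processing that grade, so a “cheap” but costly-to-run crude ranks appropriately.

```python
# Hypothetical net-value ranking of spot crude cargoes: net each cargo's headline
# price discount against the incremental operating and maintenance cost predicted
# for processing that grade. All grades and figures are invented.
def net_value_per_bbl(price_discount_usd, predicted_om_cost_usd):
    return price_discount_usd - predicted_om_cost_usd

if __name__ == "__main__":
    cargoes = [
        # discount vs. benchmark crude, predicted extra O&M from sulfur/TAN history
        {"grade": "Heavy sour A",  "discount": 6.50, "om_cost": 5.20},
        {"grade": "Light sweet B", "discount": 1.20, "om_cost": 0.30},
        {"grade": "Medium sour C", "discount": 3.40, "om_cost": 1.70},
    ]
    ranked = sorted(cargoes,
                    key=lambda c: net_value_per_bbl(c["discount"], c["om_cost"]),
                    reverse=True)
    for c in ranked:
        print(c["grade"], f"net value ${net_value_per_bbl(c['discount'], c['om_cost']):.2f}/bbl")
```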

The challenges of efficiency and data handling do not stop at the inbound logistics of crude-oil sourcing; there is also the outbound logistics of product distribution to consider. The distribution ecosystem includes not only refining and marketing companies but also the customers to which they sell. The rapid innovation and proliferation of consumer personal-communication technologies—smart handheld devices and in-vehicle telematics systems—have led to the emergence of connected consumers who, by extension, are demanding a connected fueling experience. So how should fuels retailers think about competing in a digitally enabled consumer’s world?


Automotive companies, with a head start on IoT-based connected applications, provide telling clues. Toyota, for example, has developed, with SAP and VeriFone, a prototype solution that simplifies a driver’s fueling experience.39 Currently, drivers need to deal with multiple systems to find the “right” gas station—locating the station, swiping the card, punching in a memorized PIN, and, if required, keeping a record of receipts. The prototype is aimed at providing consumers a one-touch, one-screen solution that can aggregate information on a vehicle’s location, route, and, most importantly, fuel level using the SAP HANA cloud platform and Bluetooth Low Energy wireless standard; the system aims to navigate the driver to the closest “enrolled” gas station, authorize an automatic payment using VeriFone’s point-of-sale solution, and send personalized coupons and offers.
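As a toy illustration of the aggregation step in such a prototype, the sketch below combines a vehicle’s fuel level and position with a list of enrolled stations to decide whether, and where, to prompt the driver. The station data, fuel economy, and reserve threshold are invented; this is not the actual SAP/VeriFone design.

```python
# Toy "connected fueling" helper: given the vehicle's fuel level and position,
# suggest the nearest enrolled station when the remaining range drops below a
# comfort margin. Station coordinates, fuel economy, and thresholds are invented;
# this is not the actual SAP/VeriFone design.
import math

ENROLLED_STATIONS = [
    {"name": "Station Alpha", "lat": 29.76, "lon": -95.37},
    {"name": "Station Beta",  "lat": 29.81, "lon": -95.42},
]

def km_between(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate for short in-city distances.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6371

def suggest_station(lat, lon, fuel_liters, km_per_liter=12.0, reserve_km=40.0):
    if fuel_liters * km_per_liter > reserve_km:
        return None  # plenty of range left, do not prompt the driver
    return min(ENROLLED_STATIONS, key=lambda s: km_between(lat, lon, s["lat"], s["lon"]))

if __name__ == "__main__":
    print(suggest_station(29.77, -95.38, fuel_liters=3.0))  # low fuel -> Station Alpha
```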

At this level, the challenges companies face are large and not entirely technical. While data can be brought together and displayed using existing communications and telematics, the greatest bottleneck is in getting consumers to act. The interface must be designed as augmented behavior that complements natural human decision processes, or it risks being rejected by consumers as “dictatorial,” “creepy,” or “distracting.” Beyond the technical challenges, designing such a system requires deep insight into human behavior.

However, if a company can design a workable and secure system, the benefits may be immense. At a minimum, fuels retailers can boost sales at their gas stations and convenience stores by partnering with, or at least enrolling in, such connected-car programs. At the next level, they can add appeal to their traditional loyalty and reward programs, which incentivize customers with discounts or redeemable points. Today, the use of customer information in analytics is minimal, constrained by the limited buying-behavior data available for any individual customer at pumps and linked convenience stores; aggregating data across sources promises more useful information.

The future of retail marketing lies in correlating consumer profiles with fuel and in-store purchases across a retailer’s owned stations and franchisees, mashing up existing petro-card data with data collected by cloud-enabled telematics solutions, and combining data from sources such as status updates and notifications on social-media networks to enable behavioral marketing and predictive analytics. By industry estimates, about 33 percent of IoT-derived benefits for an integrated refiner/marketer could come from connected marketing.40
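A minimal sketch of that kind of cross-source correlation, assuming hypothetical record layouts: join fuel transactions, in-store baskets, and loyalty profiles into a per-customer view that a marketer could segment against.

```python
# Hypothetical join of fuel transactions, in-store baskets, and loyalty profiles
# into a per-customer view that a marketer could segment against. Record layouts
# and values are invented for illustration.
from collections import defaultdict

fuel_txns = [{"card": "C1", "litres": 45}, {"card": "C2", "litres": 30}, {"card": "C1", "litres": 50}]
store_txns = [{"card": "C1", "basket_usd": 12.40}, {"card": "C2", "basket_usd": 3.10}]
profiles = {"C1": {"segment": "commuter"}, "C2": {"segment": "occasional"}}

def build_customer_view():
    view = defaultdict(lambda: {"litres": 0, "store_usd": 0.0, "segment": None})
    for txn in fuel_txns:
        view[txn["card"]]["litres"] += txn["litres"]
    for txn in store_txns:
        view[txn["card"]]["store_usd"] += txn["basket_usd"]
    for card, profile in profiles.items():
        view[card]["segment"] = profile["segment"]
    return dict(view)

if __name__ == "__main__":
    for card, summary in build_customer_view().items():
        print(card, summary)
```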

COMPLETING THE LOOP

Facing the new normal of lower oil prices, the O&G industry is beginning to see the IoT’s importance to future success. But it’s not as simple as adding more sensors: Creating and capturing value from IoT applications requires clearly identifying primary business objectives before implementing IoT technology, ascertaining new sources of information, and clearing bottlenecks that limit the flow of information (see table 1).

[Table 1]

Upstream players focused on optimization can gain new operational insights by standardizing the aggregated physics and non-physics data and running integrated analytics across the functions (exploration, development, and production).
Midstream players targeting higher network integrity and new commercial opportunities can benefit by investing in sensors that touch every aspect of their facilities and analyzing volume data more comprehensively all along their network.
Downstream players operating at an ecosystem level can create new value by expanding their visibility into the complete hydrocarbon supply chain to enhance core refining economics and targeting new digital consumers through new forms of connected marketing.
Investing in IoT applications is just one aspect. Companies need to closely monitor IoT deployments and results to keep applications on track, at least in the initial few years. Both IT and C-suite executives must regularly ask and answer questions: Is the IoT creating the necessary momentum and learning across businesses and employees? What future costs and complexities are associated with retrofitting applications and keeping them interoperable? What security shortcomings emerge in light of new developments?

For a given company, the choice between proprietary and shared development of IoT applications will determine the time to commercialization and the magnitude of realizable benefits. Building proprietary capabilities, although essential for competitive advantage in some cases, can slow the pace of development and restrict a company’s ability to realize the IoT’s transformative benefits. “We can’t do all of this [development of technology] alone. We believe that in the future we will have to be far more collaborative,” said BP Chief Operating Officer James Dupree.41 Collaborative business models can enable the industry not only to address current challenges but also to take the intelligence on fuels to a molecular level and extend the IoT’s reach from cost optimization to capital efficiency and mega-project management in the long term.42


By reinforcing the importance of information for all aspects of the business and elevating information to the boardroom agenda, a company can fundamentally change how it does business rather than just optimizing what it has always done (see figure 3).
[Figure 3]
