June 2019, Vol. 246, No. 6

Features

Reducing Leakage Risks Using Advanced Digitalization

By John Nixon and William Hahn, Siemens PLM Software

Stricter regulations, price volatility, and emerging technologies are disrupting conventional operational approaches in today’s midstream pipeline industry, even as transport demand has never been greater. Therein lie both challenges and opportunities.

First, let’s look at the challenges. While pipelines remain a much safer and more economical means of transporting oil, gas, and liquid gas products compared to truck and rail, the industry must continue addressing safety concerns, often stemming from leaks caused by sub-optimal maintenance.

In response, the industry adapted the safety management system (SMS) model from other industries to create American Petroleum Institute (API) Recommended Practice (RP) 1173, released in 2015 and still being implemented.

As readers know, managing complex energy pipeline operations requires the coordination of many stakeholders – owner-operators, pipeline builders, OEMs, service firms, and regulators – each with different responsibilities. That’s why implementing an SMS over long pipelines can be so difficult. Fortunately, API RP 1173 provides a framework of the following 10 elements that can help standardize the work involved:

  1. Leadership and management commitment
  2. Stakeholder engagement
  3. Risk management
  4. Operational controls
  5. Incident investigation, evaluation, and lessons learned
  6. Safety assurance
  7. Management review and continuous improvement
  8. Emergency preparedness and response
  9. Competence, awareness, and training
  10. Documentation and record keeping

Although these API RP 1173 elements focus on safety management, they can serve to improve overall operations management, too. Generally, industrial enterprises with good safety records are also well-run and have higher quality outputs and greater profitability.

Now, let’s look at the opportunities. Fortunately for midstream operators, digital technologies – backed by open standards – can thread together all of these areas in many ways and provide auditable traceability. Benefits include cost savings, new efficiencies, and greater end-to-end operational visibility, both across the entire length of a pipeline and across multiple pipelines. And reducing operational risks through digitalization can potentially lower insurance premiums, too.

Digitalization can also help eliminate the protracted, costly, and error-prone compliance audits now done manually. Automated report generation for regulatory audits can dramatically reduce the risk of fines while improving the traceability and accountability of relevant operator activities. OPEX-based, pay-as-you-go “compliance-as-a-service” models make it possible to manage the key performance indicator (KPI) data that regulators would want to see in audits or investigations.

But the many asset lifecycle management capabilities and benefits that end-to-end digitalization can offer pipeline operators don’t stop there. Consider the emergence of digital twins – virtual proxies of physical assets, developed as early as the pre-FEED project stage. They can support operational optimization and continuous improvement throughout the many decades of a pipeline’s service lifecycle, thus improving safety, compliance, efficiency and, ultimately, profitability.

Documentation can be similarly handled, using a systems engineering approach that can eliminate engineering silos and the errors and omissions they can cause. Another example is the closed-loop refinement of an asset’s design and operations optimization over its entire lifecycle. 

This capability draws trusted data from field-level smart sensors – calibrated via correlated diagnostic and fault alarm data – then securely sends the data to a cloud-based suite of simulation and analytics software for continuous design and operational improvements.   

Data Complexity

By definition, midstream pipelines are extremely complex, distributed operations: their infrastructure of pipes and pumping stations typically spans hundreds, if not thousands, of miles, with terminals and tank farms to manage at either or both ends or at junctions with other pipelines.

That’s what makes managing them and achieving the visibility of a “single point-of-truth” so difficult, especially when operations involve so many roles and responsibilities within the operator’s organization, not to mention those of its many OEM suppliers and service providers.

On top of that, in today’s traditional operational models – despite extensive application of digital technologies over the years – too much data is still collected manually, a time-consuming and error-prone process.

Aggregating and reporting from this data suffer the same issues and worse: they create latencies that can limit effective decision-making and responsiveness to serious operational issues – ones that can cause costly disruptions to operations, health, safety, and environment (HSE) violations, or both.

Then there’s the problem of data silos. These can consist of various spreadsheets, documents, and databases on individuals’ hard drives or on department file-sharing platforms, whether premise-based or, increasingly, in the cloud. Version control can be a nightmare or even a practical impossibility.

Premise-based silos can be left vulnerable by poor data protection practices (i.e., weak security and backup-and-recovery). Meanwhile, cloud-based data-sharing platforms might provide backup-and-recovery protection but still be subject to cyberthreats, especially when workers share their login credentials.

The data problems for midstream operators can be worse than the proverbial needle in a haystack. For example, a midstream operator might have a thousand inspection forms on a shared drive, but each form might contain scores or hundreds of procedures. If one procedure’s data is needed for a compliance audit or an incident investigation, a manual search could take days – if the data could be found at all.

Or, when advanced in-line inspection tools – so-called smart pigs – are used, a single run can generate a spreadsheet some 200,000-plus lines long, recording several thousand to tens of thousands of corrosion instances. Each instance requires a decision, then documentation of that decision and of what action, if any, was taken. And, as in the previous example, if a compliance audit or incident investigation is required, how can that be done in an effective, timely way?
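
One way to make that timely is programmatic screening. The Python sketch below scans a hypothetical ILI export (saved as CSV) for metal-loss features deep enough to warrant an engineering decision. The column names and the 40% wall-loss threshold are assumptions for illustration only; vendor formats vary, and real acceptance criteria come from engineering assessment standards.

    # ILI triage sketch: screen an exported spreadsheet (saved as CSV) for
    # corrosion features needing review. Column names and the 40% wall-loss
    # threshold are assumptions; real criteria come from engineering standards.
    import csv

    ACTION_DEPTH_PCT = 40.0

    def features_needing_review(path):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if float(row["depth_pct"]) >= ACTION_DEPTH_PCT:
                    yield row["feature_id"], row["odometer_m"], row["depth_pct"]

    # Hypothetical file name for an ILI run export
    for feature in features_needing_review("ili_run_2019.csv"):
        print(feature)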

The good news is that advanced digital software tools can eliminate manual steps in midstream operational workflows, along with the data silos that manual steps tend to create.

For example, one software solution addressing documentation and recordkeeping (#10 on API RP 1173’s list) can pull KPI and other operational and engineering data out of the various file types where they now reside and aggregate them in a single, global database that is always updated with the latest content versions and always backed up. An authorized change made in one record immediately propagates throughout the system.
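
A minimal sketch of that single-database idea, assuming a simple SQLite store and an invented KPI schema, might look like this in Python. A production system would use an enterprise-grade database, but the principle of one authoritative record per data point is the same.

    # Single point-of-truth sketch: upsert KPI records into one database so a
    # change made once is the version every consumer sees. Schema is invented.
    import sqlite3

    conn = sqlite3.connect("pipeline_kpis.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS kpi (
        asset TEXT, name TEXT, ts TEXT, value REAL,
        PRIMARY KEY (asset, name, ts))""")

    def upsert(asset, name, ts, value):
        # INSERT OR REPLACE keeps exactly one current record per key
        conn.execute("INSERT OR REPLACE INTO kpi VALUES (?, ?, ?, ?)",
                     (asset, name, ts, value))
        conn.commit()

    # Hypothetical asset tag and KPI name
    upsert("PS-07/pump-1", "discharge_kPa", "2019-06-01T00:00:00Z", 5210.0)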

The ultimate result is a single point-of-truth, presented via browser-based dashboards, both in real time and with historical rollbacks readily available. And this data is securely accessible around the clock, via role-based privileges, to those needing to know, whether inside or outside a pipeline operator’s organization.

What’s more, access can be available from any intelligent device – desktop or laptop PCs, tablets, or smartphones – anywhere an Internet connection is available, over a secure virtual private network (VPN). If compliance audits are required, regulators’ evidentiary needs can be met in seconds.

Technology in Action

Of course, end-to-end digitalization of an entire pipeline or network of pipelines is easier said than done, but the vision is not impossible to achieve. Eventually, the economic and competitive benefits of doing so will be too compelling to ignore, making implementation an imperative, not an option. 

Fact is, today’s midstream operators have already started their end-to-end digitalization journeys, whether they realize it or not. Just as they likely have many silos of legacy manual data collection, aggregation, and reporting in their various workflows and activities, they also have digital islands operating in their control rooms, pump stations, data centers, and elsewhere in their enterprises.

A best-practice approach for extending and interconnecting those islands of digitalization is to identify virtually adjacent manual workflows and activities that could be interconnected, then proceed with the following six straightforward steps, using a pump station example as the target for greater digitalization:    

Measure: Acquire the KPI parameters from sensors at the pumps, compressors, motors, and balance-of-plant equipment in the pump station. Technology is available to make existing sensors “smart,” or they can be replaced with sensors that have built-in intelligence. Preferred sensors would have the Open Platform Communications Unified Architecture (OPC UA) protocol built in – a machine-to-machine communication protocol for industrial automation, developed by the OPC Foundation, that enables third-party devices to share data. Relay the data over a SCADA network to an edge computing device on the pump station premises.
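
As a minimal illustration, the Python sketch below polls one value from an OPC UA-enabled sensor using the open-source python-opcua client library (one of several OPC UA clients). The endpoint address and node identifier are hypothetical placeholders, not references to any particular vendor’s equipment.

    # Minimal OPC UA polling sketch using the open-source python-opcua library.
    # The endpoint URL and node ID below are hypothetical placeholders.
    from opcua import Client

    client = Client("opc.tcp://pump-station-07.example.com:4840")
    client.connect()
    try:
        # Read the current value of a (hypothetical) pump flow-rate tag
        flow_node = client.get_node("ns=2;s=Pump1.FlowRate")
        print("flow rate (m3/h):", flow_node.get_value())
    finally:
        client.disconnect()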

Aggregate: Perform initial mathematical operations and data processing onsite, using one or more edge computing devices, to reduce bandwidth and data-streaming upload requirements. Edge processing can also be used to optimize pump station performance, such as minimizing transient pressure changes during a batch change, as shown in Figure 1.
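
The following Python sketch illustrates the idea of edge-side data reduction: a window of raw once-per-second readings is collapsed into a single summary record before upload. The sampling rate, window size, and values are illustrative assumptions.

    # Edge-aggregation sketch: reduce raw 1 Hz readings to one record per
    # minute, cutting upload bandwidth roughly 60-fold while preserving
    # min/max excursions. Sampling rate and window size are assumptions.
    from statistics import mean

    def summarize(window):
        return {"mean": round(mean(window), 1),
                "min": min(window),
                "max": max(window),
                "samples": len(window)}

    # Stand-in for one minute of live pump discharge-pressure samples (kPa)
    raw = [5210.0 + 0.5 * i for i in range(60)]
    print(summarize(raw))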

Another use is identifying KPI anomalies, such as in flow rates, that might indicate upstream pipeline leaks needing investigation or remediation. In addition, pump station data can be shared in real time with upstream and downstream stations to automate motor controls across stations, better managing energy use and avoiding utility ratchet charges. This application can save millions of dollars a year while helping midstream operators reduce their carbon footprints.
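
One simple way to flag such flow-rate anomalies is a volume-balance check, sketched below in Python. Real leak-detection systems rely on far more sophisticated transient hydraulic models, and the 2% tolerance used here is purely an assumed illustration.

    # Naive volume-balance leak indicator. Real systems use transient
    # hydraulic models; the 2% tolerance below is an assumed value.
    def possible_leak(upstream_m3h, downstream_m3h, tolerance=0.02):
        imbalance = (upstream_m3h - downstream_m3h) / upstream_m3h
        return imbalance > tolerance

    print(possible_leak(1200.0, 1150.0))  # True: ~4.2% loss exceeds tolerance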

Upload: Upload the data securely into a cloud environment using an IoT gateway that encrypts the data before transmission using Transport Layer Security (TLS) 1.2 or better, as defined by the Internet Engineering Task Force (IETF). TLS is the successor to secure sockets layer (SSL) encryption and provides greater data security. Advanced data analytics tools residing in the cloud platform can then perform pattern recognition and other statistical techniques to provide operational insights.
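
For illustration, this standard-library Python sketch enforces the “TLS 1.2 or better” requirement when opening a connection to a gateway. The host name, port, certificate path, and payload are hypothetical.

    # TLS upload sketch using only the Python standard library.
    # Host, port, CA certificate path, and payload are hypothetical.
    import json, socket, ssl

    HOST, PORT = "iot-gateway.example.com", 8883

    context = ssl.create_default_context(cafile="gateway-ca.pem")
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # TLS 1.2 or better

    payload = json.dumps({"station": "PS-07", "flow_m3h": 1150.0}).encode()

    with socket.create_connection((HOST, PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            tls.sendall(payload)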

Persist: Keep data in the cloud so it remains available for future use cases. For example, further processing can be performed using artificial intelligence and machine learning algorithms to identify behavioral anomalies that can indicate potential leaks, or conditions that might need remediation to prevent leaks from occurring. Historian databases can also reside in the cloud, accessible for compliance audits and forensic investigations.
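
As a hedged illustration of that machine-learning step, the sketch below trains scikit-learn’s IsolationForest model on synthetic “normal” pressure-and-flow history, then scores a new reading. The KPI values and contamination rate are invented for the example.

    # Anomaly-detection sketch using scikit-learn's IsolationForest.
    # The KPI values and contamination rate are synthetic/illustrative.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Stand-in for persisted history: rows of [pressure_kPa, flow_m3h]
    history = rng.normal([5200.0, 1200.0], [15.0, 10.0], size=(1000, 2))

    model = IsolationForest(contamination=0.01, random_state=0).fit(history)

    # -1 flags an outlier: a simultaneous pressure drop and flow loss is a
    # classic signature worth investigating as a potential leak
    print(model.predict([[5100.0, 1100.0]]))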

Optimize: Map field data into optimization models to compute optimal pipeline product configurations for batch scheduling. The data can also feed sophisticated hydraulic models to determine the best use of drag reduction agents (DRAs) and to automate other pipeline operational optimization models.
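
The DRA question lends itself to a simple cost-minimization sketch. The Python example below uses SciPy to find the dosage that minimizes combined pumping and DRA cost; the cost model and every coefficient in it are invented for illustration and bear no relation to actual field economics.

    # DRA dosage cost-minimization sketch. The cost model and all
    # coefficients are invented for illustration only.
    from math import exp
    from scipy.optimize import minimize_scalar

    PUMP_COST = 10_000.0      # $/day pumping cost at zero DRA (assumed)
    DRA_COST_PPM = 120.0      # $/day per ppm of DRA injected (assumed)

    def daily_cost(ppm):
        # Diminishing-returns drag reduction: each added ppm saves less
        drag_reduction = 0.6 * (1.0 - exp(-0.08 * ppm))
        return PUMP_COST * (1.0 - drag_reduction) + DRA_COST_PPM * ppm

    best = minimize_scalar(daily_cost, bounds=(0.0, 50.0), method="bounded")
    print(f"optimal dosage ~ {best.x:.1f} ppm, cost ~ ${best.fun:,.0f}/day")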

Apply: Deliver optimization recommendations to pipeline control room operators and make them available, via secure credentials, to company analysts and even OEM suppliers, so they too can use the data on a need-to-know basis for continuous improvement, remote equipment diagnostics, mitigation, and remediation. This model can also support remote condition monitoring, which provides the basis for cost-saving condition-based maintenance approaches.

Figure 1 shows how pump-station digitalization, using on-premise edge processing of four KPI parameters – pump pressure (kPa), mechanical power (kW), motor speed (rpm), and flow rate (m³/h) – can help control-room operators minimize spikes in a pipeline’s transient pressures during batch changes; over time, such spikes can cause the inside pipe walls to deteriorate and eventually leak.

Conclusion

As North America’s oil and gas industry accelerates a massive buildout of needed midstream transport infrastructure in the coming years, pipeline operators’ implementation of SMSs in accord with API RP 1173 will mark a milestone in the industry’s evolution.

Not only will safety be better assured in a more systematic and standardized way – addressing regulatory and public concerns – but efficiency and profitability can also be enhanced as a result.

Digital technologies can and will help, with end-to-end applications becoming increasingly common, driven by their economic and competitive benefits. Early adopters will benefit soonest, of course, providing them with cost and operational advantages that can enable them to gain and extend their market leadership. P&GJ


John Nixon is senior director, Energy Sector at Siemens PLM Software and has 26 years’ experience in energy and utilities. He has been awarded patents for pipeline repair technologies and served on several boards for energy startups, as well as being co-founder of the National Corrosion Center. 

William Hahn is a solutions consultant at Siemens PLM Software. He previously worked for a major pipeline operator, focusing on asset integrity engineering and operations. He has also worked with Booz Allen on operations planning and systems engineering for the International Space Station at NASA’s Johnson Space Center.

 
