Refining India 2025
Smart control, real impact: AI-driven autonomy cuts energy use, emissions, and operator burden
Jagadesh Donepudi and Michelle Wicmandy, KBC (A Yokogawa Company); Nitin Soni, Yokogawa, India; Hiroaki Kanokogi, Yokogawa, Japan
As refiners contend with increasing complexity, ageing control systems, and growing sustainability pressures, AI-driven autonomy is shifting from ambition to reality. Reinforcement learning (RL) – a branch of artificial intelligence (AI) that learns through interaction – offers a way to navigate this complexity without relying on predefined models or manual tuning. Think of it as a skilled operator who learns by doing, adapting to what works and adjusting decisions in real time without a rulebook.
Unlike traditional proportional-integral-derivative (PID) or advanced process control (APC) systems, which require extensive engineering and struggle in non-linear environments, RL adapts in real time. According to Energies Media, the oil and gas sector faces challenges such as defect detection, cybersecurity, and logistics network optimisation. However, AI offers a powerful advantage: it converts raw data into actionable insights, enhancing operations while reducing costs. The World Economic Forum adds that digital technologies could lower global industrial emissions by at least 4% by 2030, positioning RL as a high-impact tool for efficiency and decarbonisation.
RL learns via real-time interaction, making it well suited to dynamic, multivariable environments like refineries. This article explores the real-world application of Factorial Kernel Dynamic Policy Programming (FKDPP), an RL algorithm developed by Yokogawa and the Nara Institute of Science and Technology. Tested and deployed in live plant operations, FKDPP acts like an experienced process engineer who never sleeps: constantly learning, adjusting, and keeping the system in balance even when conditions change.
Inefficiencies in Traditional Control Methods
Traditional control strategies, including PID control and APC, are well established but exhibit limitations when applied to highly non-linear and dynamic industrial environments.
Tuning PID loops for non-linear valve behaviour remains difficult and often leads to suboptimal performance under disturbances or feed variations. Furthermore, APC implementation cycles are lengthy, typically requiring seven to eight months, and necessitate extensive step testing for model identification. These factors limit the responsiveness of traditional controls to real-time changes.
Reinforcement learning has been explored as a solution; however, conventional RL algorithms (such as Deep Q-Networks) demand a large number of trials and are highly sensitive to the training dataset. Such characteristics make them unsuitable for real-world process control, where each trial carries operational risk, safety implications, and cost. The industry requires an RL algorithm that is both sample-efficient and robust across variable process conditions.
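The valve-tuning problem described above can be made concrete. The sketch below is illustrative only – the valve constants (`cv`, `r`) and the two operating points are assumed, not taken from any plant – but it shows why a single fixed PID tuning cannot fit an equal-percentage valve across its range: the local process gain the loop sees changes by an order of magnitude with valve opening.

```python
import math

# Illustrative sketch: equal-percentage valve, flow = Cv * R**(x - 1)
# for fractional opening x in [0, 1]. Cv and R are assumed values.
def flow(x, cv=4.0, r=50.0):
    return cv * r ** (x - 1.0)

# Local process gain d(flow)/dx = flow(x) * ln(R): it grows
# exponentially with opening, so the loop gain seen by a fixed-gain
# PID controller depends on where the valve happens to be operating.
def local_gain(x, cv=4.0, r=50.0):
    return flow(x, cv, r) * math.log(r)

ratio = local_gain(0.8) / local_gain(0.2)  # equals R**0.6, about 10.5 for R = 50
# A PID tuned near 20% opening is therefore roughly 10x too aggressive
# near 80% opening, which is why fixed-gain loops oscillate or crawl
# as feed or demand moves the operating point.
```

Gain scheduling or periodic retuning can patch this, but every patch is exactly the kind of manual engineering effort that a self-adapting controller aims to remove.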
Figure 1 FKDPP AI model lifecycle, from data collection to field deployment via Control AI Station. [The diagram shows four steps: Step 1, collection of two years of process history data at the customer site; Step 2, generation of a data-driven simulator from that data by deep learning at the Yokogawa office (SaaS application); Step 3, reinforcement learning by the AI Control Model Generator (agent) against the data-driven simulator (environment), producing an AI control model and performance report; Step 4, deployment of the model at the Control AI Station at the customer site.]
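The lifecycle in Figure 1 – learn a simulator from plant history, then train a control policy against it – can be sketched in miniature. The toy below is emphatically NOT the FKDPP algorithm (its kernel-based dynamic policy programming is omitted entirely); it only mirrors the model-then-learn structure, and every state, action, and reward in it is invented for illustration.

```python
# Minimal sketch of the Figure 1 structure, NOT the FKDPP algorithm:
# (1-2) build a surrogate process model from "historical" transitions,
# (3) compute a control policy against that surrogate by value iteration,
# (4) the resulting policy is what would be pushed to the control station.

STATES = range(11)        # discretised level of a toy process variable
ACTIONS = (-1, 0, 1)      # nudge down / hold / nudge up
TARGET = 7                # desired operating point

# Steps 1-2: "learn" the transition model from logged (s, a, s') triples.
# This toy log happens to cover every pair, so the model is a lookup table.
log = [(s, a, max(0, min(10, s + a))) for s in STATES for a in ACTIONS]
model = {(s, a): s2 for s, a, s2 in log}

# Step 3: value iteration on the surrogate; the reward penalises
# distance from the target operating point (discount factor 0.9).
V = {s: 0.0 for s in STATES}
for _ in range(200):
    V = {s: max(-abs(model[(s, a)] - TARGET) + 0.9 * V[model[(s, a)]]
                for a in ACTIONS) for s in STATES}

def policy(s):
    """Greedy action against the surrogate model and learned values."""
    return max(ACTIONS, key=lambda a: -abs(model[(s, a)] - TARGET)
                                      + 0.9 * V[model[(s, a)]])
```

The point of the structure is the one the article makes: the expensive trial-and-error happens against the learned simulator, not the live plant, so no exploratory move ever puts real product or equipment at risk.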
To overcome these limitations, FKDPP was developed as a data-driven, self-adaptive control model requiring significantly fewer learning trials than conventional RL techniques. In simulation environments, it achieved stable control of complex processes such as distillation and decantation within about 30 learning iterations, demonstrating rapid convergence. Unlike data-sensitive deep learning models, FKDPP is designed for resilience against process variability, enabling deployment in real-world industrial systems without retraining for every operational change. Figure 1 illustrates this four-step process.
FKDPP's architecture is optimised to reduce sample complexity and maintain control objectives under disturbances. Integration into existing control systems is achieved through a dedicated Control AI Station operating at Level 2.5/3 of the automation architecture, interfacing with the distributed control system (DCS) via Open Platform Communications (OPC) protocols.
Case Study: Autonomous Operation of a Distillation Column
A chemical production unit operated by ENEOS in Japan served as the testbed for FKDPP deployment. The unit includes a distillation column previously managed via manual intervention, especially under varying ambient conditions and feed disturbances. The objective was to assess whether the AI controller could maintain product quality, reduce energy use, and stabilise operations under these dynamic inputs.
Following training and validation, the FKDPP model was implemented in the plant's CENTUM™ VP control environment. During a controlled trial, the AI system operated the column autonomously for 35 consecutive days. While Figure 2 highlights overall achievements, the FKDPP system also delivered specific improvements in level control, heat recovery, and disturbance rejection, as outlined below:
• Precise maintenance of liquid levels within the distillation column.
• Optimised use of waste heat, reducing reboiler energy consumption.
• Elimination of off-spec product batches.
• Real-time response to weather-induced process disturbances.
The AI controller achieved these results without operator intervention, demonstrating autonomous handling of setpoint adjustments and disturbance rejection. Importantly, when the plant underwent routine shutdown and subsequent restart, the AI model resumed operation without requiring retraining, underscoring its robustness.
Results
Quantitative results from the ENEOS case study highlight the value of FKDPP in refinery and chemical operations:
Energy savings: Steam consumption for the controlled column was reduced by approximately 40%.
Emissions reduction: A corresponding reduction in CO₂ emissions was achieved through optimised heat recovery.
Operational stability: Variability in key process variables was minimised despite fluctuations in feed composition and external temperature.
Product quality: 100% of batches met required specifications, eliminating losses associated with reprocessing or disposal.
Operator workload: Autonomous operation reduced the burden on control room personnel, enabling a shift towards higher-level supervisory tasks.
The algorithm's ability to maintain control without re-tuning further reduces maintenance overhead, making it a sustainable solution with long-term benefits.
Conclusion
The successful deployment of FKDPP demonstrates the viability of reinforcement learning as a control strategy in operational refinery environments. By addressing key limitations of traditional and earlier AI approaches – namely, sample inefficiency and sensitivity to process variation – FKDPP offers a path forward for AI-enabled automation.
Beyond technical performance, the AI system delivers measurable improvements in ROI, sustainability, and process resilience. Energy efficiency gains directly impact operating costs and carbon intensity, aligning with broader industry goals for decarbonisation. Additionally, the system enables consistent quality and throughput, positioning it as a tool for maintaining competitiveness in volatile markets.
The integration of FKDPP into refinery operations represents a significant step toward autonomous process control. As further applications are explored, ranging from three-phase separators to upstream and midstream assets, this technology is poised to play a key role in the next generation of smart manufacturing systems.
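As a back-of-envelope sense check on how a steam saving of the reported magnitude translates into CO₂, the arithmetic below uses a baseline steam rate and a boiler emission factor that are assumed purely for illustration – they are not data from the ENEOS trial, and only the ~40% saving fraction comes from the reported results.

```python
# Back-of-envelope only: mapping a steam saving to avoided CO2.
# The baseline steam rate and boiler emission factor below are ASSUMED
# illustrative inputs, not measurements from the ENEOS deployment.

baseline_steam_tph = 10.0     # assumed baseline steam to the column, t/h
saving_fraction = 0.40        # ~40% reduction reported for the controlled column
ef_t_co2_per_t_steam = 0.20   # assumed boiler emission factor, t CO2 per t steam

hours_per_year = 8760
steam_saved_tpy = baseline_steam_tph * saving_fraction * hours_per_year
co2_avoided_tpy = steam_saved_tpy * ef_t_co2_per_t_steam
# With these assumed inputs: 35,040 t/y of steam saved,
# roughly 7,000 t/y of CO2 avoided for a single column.
```

The exercise shows why a single well-controlled column can matter at site level: the saving compounds over every operating hour, with no capital modification to the column itself.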
Figure 2 Summary of operational benefits observed from FKDPP deployment at ENEOS, including yield, safety, and automation gains. [The diagram highlights: areas that previously could not be controlled with PID control and APC were autonomously controlled by reinforcement learning-based AI (the FKDPP algorithm); operation was managed and controlled with the CENTUM VP integrated production control system; over two years, only high-quality products were produced, eliminating the fuel, labour, and time losses caused by off-spec production; and safe operation and improved productivity were achieved simultaneously, with stable quality, high yield, and energy saving.]
Contact: jagadesh.donepudi@kbc.global