The gap in refinery optimisation
Figure 2 Evolution of process optimisation. The chart plots computer processing power (floating-point operations per second, FLOPS) against five eras of optimisation: the earliest optimisation, 1920–1940 (calculus optima: calculus-based formulae, Pierre de Fermat; iterative methods to search for an optimum, Isaac Newton and Johann C.F. Gauss); early classic optimisation, 1940–1950 (linear programming, nonlinear programming, constrained optimisation, single- and multi-objective optimisation, goal programming, dynamic programming); first generation optimisation, 1950–1970 (local, sequential, gradient-reliant methods; CPLEX, LINDO, GAMS, SNOPT, MATLAB); second generation optimisation, 1970–1990 (the metaheuristic approach: particle swarm optimisation, ant colony optimisation, tabu search, artificial bee colony); and third generation optimisation, 1990–2021 (the AI-based approach: supervised, unsupervised and reinforcement ML; random forests, neural networks, gradient boosting)
learning model between the design variables and objectives/constraints. Just as internet companies create high value by using data analytics to optimise commercial assets in the consumer market, industrial AI solutions can generate high value by optimising process assets through real-time analytics of the process data acquired from operations and production. Algorithmic advances in machine learning and deep learning, significant increases in computing power and storage, and the huge amounts of data generated by digital transformation efforts make AI a game changer in hydrocarbon processing (see Figure 2). AI solutions can play an even more important role in achieving sustainable development, from sustainable production, pollution control, and water efficiency to industrial safety, clean air, and healthcare. These advanced digital tools integrate real-time data and advanced analytics for better decision making, and underpin applications that can dramatically improve process control behaviour, efficiency, and sustainability.
The purpose of the Modcon-AI package is to provide process engineers with a set of modern optimisation tools that enable connectivity, validation, and prediction of the main KPIs, so that the right decisions can be taken to maintain and improve the management of industrial processes. The artificial neural network (NN) dynamic models implemented in this solution enable engineers to calculate and predict physical properties and chemical compositions for different process streams, and to propose the set points required to achieve the calculated predictions (see Figure 3).
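As a simple illustration of how such a property predictor can be used, the sketch below trains a small neural network on synthetic historian data to predict a stream quality and then searches a grid of candidate operating conditions for set points that meet a target value. The variables, ranges, target, and the scikit-learn surrogate are assumptions made purely for this example and do not represent the Modcon-AI implementation.

# Minimal sketch (not the Modcon-AI implementation): a neural-network surrogate
# that predicts a stream property from process measurements and then searches
# candidate set points that reach a target prediction. Variables, ranges, and
# the synthetic "process" are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic historian data: reactor temperature (degC), pressure (bar), feed rate (t/h)
X = rng.uniform([330.0, 10.0, 80.0], [370.0, 20.0, 120.0], size=(2000, 3))
# Hypothetical product quality index, with measurement noise
y = 0.15 * X[:, 0] + 0.8 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0.0, 0.3, 2000)

# Train the dynamic property predictor
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# Propose set points: search the operating window for the target property value
target = 63.0
temps = np.linspace(330.0, 370.0, 41)
press = np.linspace(10.0, 20.0, 21)
grid = np.array([[t, p, 100.0] for t in temps for p in press])  # feed rate held at 100 t/h
pred = model.predict(grid)
best = grid[np.argmin(np.abs(pred - target))]
print(f"Proposed set points: T={best[0]:.1f} degC, P={best[1]:.2f} bar "
      f"(predicted quality {model.predict(best.reshape(1, -1))[0]:.2f})")

In practice the predictor would be trained on validated plant and analyser data rather than a synthetic function, and the set-point search would be subject to operating and equipment constraints.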
Deep Reinforcement Learning
Deep reinforcement learning (DRL) is a powerful machine learning technique that can be used effectively to optimise industrial processes for different strategic goals, allowing the user to shift focus intelligently and confidently. DRL is rooted in reinforcement learning, a long-standing machine learning framework closely associated with optimal control. Its mechanism can be summarised as an agent seeking an optimal policy by interacting with its environment through feedback between observed states and quantified rewards, a formulation that traces back to the Markov Decision Process (Sutton, 1998). Reinforcement learning uses algorithms that do not rely only on historical datasets to learn to make a prediction or perform a task; much as humans learn by trial and error, so do these algorithms. A reinforcement learning agent is given a set of actions it can apply to its environment in order to obtain rewards or reach a certain goal. These actions change the state of the agent and the environment, and the agent receives rewards according to how its actions bring it closer to its goal. DRL agents can start knowing nothing about their environment, selecting actions at random. Because reinforcement learning systems figure things out through trial and error, they work best in situations where an action, or a sequence of actions, must be chosen to reach a defined goal.
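The interaction loop described above can be made concrete with a toy example. The sketch below uses tabular Q-learning, a simple precursor of DRL, on a hypothetical discretised set-point task: the agent starts knowing nothing, explores by trial and error, and gradually learns a policy that steers the process towards its goal. The states, reward, and parameters are invented purely for illustration and do not represent the DRL algorithm deployed in Modcon-AI.

# Toy tabular Q-learning sketch of the reinforcement-learning loop: an agent
# with no prior knowledge tries actions, receives rewards, and learns a policy.
# The "process" (ideal operating state 7 on an 11-point scale) is hypothetical.
import numpy as np

n_states, actions = 11, (-1, 0, +1)      # discretised set-point positions; nudge down/hold/up
ideal = 7                                # hypothetical optimal operating point
Q = np.zeros((n_states, len(actions)))   # value table; the agent starts knowing nothing
alpha, gamma, eps = 0.1, 0.95, 0.2       # learning rate, discount, exploration rate

rng = np.random.default_rng(1)
for episode in range(500):
    s = int(rng.integers(n_states))      # random initial state
    for _ in range(20):
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore (trial and error)
        a = int(rng.integers(len(actions))) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + actions[a], 0, n_states - 1))
        reward = -abs(s_next - ideal)    # moving closer to the goal earns a higher reward
        # Q-learning update: feedback between observed states and quantified rewards
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# After training, the greedy policy steers any state towards the ideal set point
print([int(np.argmax(Q[s])) - 1 for s in range(n_states)])   # -1 = down, 0 = hold, +1 = up

In a full DRL implementation, the value table is replaced by a deep neural network, which is what allows the same loop to scale to the high-dimensional states and continuous actions encountered in real process data.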
Figure 3 Modcon-AI dynamic model architecture: economic optimisation and DRL, refinery planning (days to months), refinery scheduling and optimisation (minutes to days), process units optimisation (seconds to minutes), and process control across process units 1 to N