Page 9 - Paramount PR Report - April 2025
P. 9

Currently, ‘AI data/model poisoning,’ has emerged as a critical challenge, which
               occurs when malicious actors deliberately manipulate an AI or machine learning
               model’s training data, compromising its reliability.


               Mostly, this attack exploits the predictive or narrow AI solutions (task-focused AI
               systems) within the MLOps cycle (deploying machine learning models). Within
               generative AI solutions this poisoning appears in RAG (answers using retrieved
               knowledge) and knowledge graphs (connected data for understanding) rather than at
               the model levels. Most concerning is impact on the Agentic AI, where AI poisoning
               not only corrupts the output but also influences autonomous actions.


               For instance, consider an oil and gas company is using AI from predictive
               maintenance. Here, if a malicious actor subtly injects manipulated sensor readings
               into its training data, the AI might fail identify genuine warning signs or even falsely
               flag healthy equipment. This could lead to unexpected shutdowns, costly repairs and
               potential safety hazards.

               Similarly, in the financial sector, corrupted  data in a bank’s investment agent’s
               knowledge graph could prompt the agentic AI to make poor investment choices,
               resulting significant losses for customers. “AI poisoning undermines the reliability of
               AI-driven decisions, making companies vulnerable to operational disruptions and
               financial losses. This highlights the significance of protecting the integrity of data to
               ensure the continued safe and efficient operation of assets,” Premchand Kurup, CEO
               of Paramount, told Khaleej Times in an interview.

               Data poisoning presents a grave threat to enterprises, particularly in sensitive
               sectors such as finance and cybersecurity. For instance, when attackers corrupt
               training data in a bank’s AI-powered fraud detection system, it may fail to identify real
               fraud, causing significant financial losses. Similarly, in cybersecurity, poisoned
               malware detection system might misclassify threats as safe, leaving systems
               vulnerable to attack. The consequences could extend beyond immediate loses, as
               data poisoning can erode customer trust and cause reputational harm. “Detecting
               these sophisticated attacks requires robust AI cybersecurity framework and
               protection mechanisms. An inefficiency in these resources may lower organisations’
               trust in AI initiatives,” Kurup said.

               As AI integration accelerates companies need to adopt a comprehensive AI
               Framework for Cybersecurity to ensure that the technology is implemented safely
               and responsibly, Kurup said. “The first component of this framework is AI
               governance, which establishes clear guidelines for responsible AI usage, addressing
               data privacy concerns and legal liabilities while boosting productivity. Second,
               securing AI systems to protect AI models from external threats through a
               comprehensive AI lifecycle approach, from data collection to deployment and
               retirement. This ensures AI integrity and resilience by reducing the risks of system
               exploitation,” Kurup said.












                                                     khaleejtimes.com
   4   5   6   7   8   9   10   11   12   13   14