Thought Leadership
Eng Choon shares insights into how organisations can secure AI across its lifecycle, from development to deployment.
March 2026
Artificial Intelligence (AI) is no longer experimental: it is transforming workflows, boosting productivity and driving business results across industries. One in three CEOs globally reports revenue gains from AI, and nearly nine in ten employees use AI at work. Global AI spending is surging and is projected to exceed S$800 billion, according to IDC. Meanwhile, agentic AI is emerging, enabling autonomous, self-learning capabilities that expand what AI can achieve.
But as organisations race to scale AI, security is not keeping pace. From decision-making to operations, AI is increasingly widespread, yet only a fraction of organisations have robust measures in place to assess and secure their AI systems. The World Economic Forum (WEF) reported that while 66% of organisations expect AI to significantly impact cybersecurity, only 37% have processes to assess the security of AI tools before deployment. As AI systems grow more powerful and pervasive, so do the risks.
Threat actors are actively targeting AI through model evasion, data poisoning and privacy attacks. Their tactics mislead models, corrupt training data, extract sensitive information or repurpose models for harm. Even a tiny fraction of poisoned training data (less than 0.1%) can cause misclassification, undermining trust and leading to faulty outputs that impact operations.
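The data-poisoning risk described above can be sketched in a few lines. This is a toy illustration only: the synthetic data, the nearest-centroid classifier and all numbers are hypothetical choices for demonstration, not any production AI system or attack. Mislabelling roughly 0.25% of the training points here is enough to drag one class's centroid off its true cluster and sharply degrade accuracy.

```python
import random

random.seed(0)

def make_data(n):
    """Synthetic 2D points: class 0 centred at x=0, class 1 at x=4."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = (4.0 if label else 0.0) + random.gauss(0, 1)
        y = random.gauss(0, 1)
        data.append(((x, y), label))
    return data

def centroids(data):
    """Per-class mean point of the training set."""
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (x, y), label in data:
        sums[label][0] += x
        sums[label][1] += y
        sums[label][2] += 1
    return {k: (sx / c, sy / c) for k, (sx, sy, c) in sums.items()}

def accuracy(train, test):
    """Nearest-centroid classification accuracy on the test set."""
    cents = centroids(train)
    def predict(p):
        return min(cents, key=lambda k: (p[0] - cents[k][0]) ** 2
                                        + (p[1] - cents[k][1]) ** 2)
    return sum(predict(p) == label for p, label in test) / len(test)

train = make_data(2000)
test = make_data(500)
clean_acc = accuracy(train, test)

# Poison: inject just 5 far-out points (~0.25% of the training data)
# mislabelled as class 1, dragging that class's centroid away from
# its true cluster so genuine class-1 points are misclassified.
poisoned = train + [((1000.0, 0.0), 1)] * 5
poisoned_acc = accuracy(poisoned, test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Real-world poisoning attacks against large models are subtler than this label-flipping sketch, but the mechanism is the same: a handful of adversarial training samples can shift what the model learns, which is why training-data integrity checks belong alongside traditional controls.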
Securing AI is a shared responsibility across the AI value chain.
Does this mean slowing down AI adoption? Absolutely not. But it does mean taking deliberate steps to protect AI. Just as organisations safeguard intellectual property and operational resilience, securing AI should be integral to maintaining competitive advantage. AI security must be built into the initial phases of model development and deployment, much like cybersecurity in traditional IT systems. If your organisation has yet to put in place an AI security framework, now is the time.
Organisations can build on core cybersecurity practices, such as vulnerability management, anomaly detection and data protection, as a foundation for AI-specific safeguards. While existing controls remain relevant, AI introduces unique risks that require additional capabilities.
For organisations deploying and operating AI systems, three key areas demand attention:
These steps lay a strong foundation, but more safeguards will be needed as AI adoption matures. That’s why securing AI across its lifecycle, from development to deployment, is fast becoming a priority. Organisations that act early will protect their operations, maintain trust and unlock AI’s full potential as a driver of growth. Our commitment is clear: to help organisations innovate confidently, operate securely and stay ahead in an increasingly intelligent and interconnected world.
ST Engineering’s Cyber business is an industry leader in cybersecurity with over 25 years of proven expertise, offering a trusted portfolio of solutions designed to enhance cyber resilience for governments, critical infrastructure owners, and commercial enterprises. Our comprehensive offerings protect Information Technology (IT), Operational Technology (OT), and cloud environments by leveraging cutting-edge technology, AI-driven capabilities and our award-winning innovations.

Goh Eng Choon,
President, Cyber