Realising AI’s Promise Depends on Protecting its Foundations

Thought Leadership

Eng Choon shares insights into how organisations can secure AI across its lifecycle, from development to deployment.


March 2026



Artificial Intelligence (AI) is no longer experimental; it is transforming workflows, boosting productivity and driving business results across industries. One in three CEOs globally report revenue gains from AI, and nearly nine in ten employees use AI at work. Global AI spending is surging and is projected to exceed S$800 billion, according to IDC. Meanwhile, agentic AI is emerging, enabling autonomous, self-learning capabilities that expand what AI can achieve.

But as organisations race to scale AI, security is not keeping pace. From decision-making to operations, AI is increasingly widespread, yet only a fraction of organisations have robust measures to assess and secure their AI systems. The World Economic Forum (WEF) reported that while 66% of organisations expect AI to significantly impact cybersecurity, only 37% have processes to assess the security of AI tools before deployment. As AI systems grow more powerful and pervasive, so do the risks.

The Growing Need for AI Security

Threat actors are actively targeting AI through model evasion, data poisoning and privacy attacks. These tactics mislead models, corrupt training data, extract sensitive information or repurpose models for harmful ends. Even a tiny fraction of poisoned training data (less than 0.1%) can cause misclassification, undermining trust and producing faulty outputs that disrupt operations.

A New Strategic Priority

Securing AI is a shared responsibility across the AI value chain. 

  • AI providers (those who develop and train models) must embed security during design and development.
  • AI users (organisations deploying and operating AI) must ensure robust safeguards during implementation and ongoing use.
  • Partners and regulators play critical roles in setting standards and ensuring compliance.

Does this mean slowing down AI adoption? Absolutely not. But it does mean taking deliberate steps to protect AI. Just as organisations safeguard intellectual property and operational resilience, securing AI should be integral to maintaining competitive advantage. AI security must be built into the initial phases of model development and deployment, much like cybersecurity in traditional IT systems. If your organisation has yet to put in place an AI security framework, now is the time.

Building AI Security on a Strong Foundation

Organisations can build on core cybersecurity practices, such as vulnerability management, anomaly detection and data protection, as a foundation for AI-specific safeguards. While existing controls remain relevant, AI introduces unique risks that require additional capabilities.

For organisations deploying and operating AI systems, three key areas demand attention: 

  1. Protect data integrity 
    Training data is the lifeblood of AI. Verifying its authenticity and integrity is critical to prevent tampering and ensure models behave as intended. AI models and agents must also be tested for vulnerabilities; for AI users, this means benchmarking resilience and understanding your organisation’s security posture. Risks differ depending on how AI is adopted: organisations developing their own models must manage data quality, model transparency and internal access controls, while those relying on third‑party systems must ensure vendors uphold strong security and privacy standards.

  2. Ensure operational continuity 
    As more processes and solutions rely on AI, any disruption can cause service outages. Maintain functionality and accessibility through continuous monitoring, performance measurement and traceability across the AI stack. When errors occur, being able to pinpoint the part of the system at fault enables faster recovery and continuity. 

  3. Prevent abuse and ensure safe use 
    AI systems embedded in operations require continuous oversight. Mechanisms to detect harmful behaviour, correct inappropriate model outputs and enforce accountability are essential. Pair cybersecurity controls with governance and ethics frameworks to embed responsible use throughout operations. Our own AI Governance Framework formalises principles, roles and escalation paths, supported by risk classification, control checklists and training to build workforce readiness.

The Road Ahead

These steps lay a strong foundation, but more safeguards will be needed as AI adoption matures. That’s why securing AI across its lifecycle, from development to deployment, is fast becoming a priority. Organisations that act early will protect their operations, maintain trust and unlock AI’s full potential as a driver of growth. Our commitment is clear: to help organisations innovate confidently, operate securely and stay ahead in an increasingly intelligent and interconnected world.



ST Engineering’s Cyber business is an industry leader in cybersecurity with over 25 years of proven expertise, offering a trusted portfolio of solutions designed to enhance cyber resilience for governments, critical infrastructure owners, and commercial enterprises. Our comprehensive offerings protect Information Technology (IT), Operational Technology (OT), and cloud environments by leveraging cutting-edge technology, AI-driven capabilities and our award-winning innovations.


Goh Eng Choon,

President, Cyber