Security in the Age of AI


As AI technology advances at pace, it's important that tech leadership keeps a close eye on both opportunity and risk, and evolves its approach to meet both. Here's my take, including three priorities for CTOs right now.

Well, that AI-scalated quickly.

The hype around ChatGPT and DALL-E has taken AI into the mainstream in a way not seen since Iron Man came out and J.A.R.V.I.S. became the vision for the AI personal assistant (inspiring open-source Jarvis personal assistant projects).

So what to make of this at the enterprise security level?

There's both opportunity and risk here, of course, and as AI technology advances at pace it's important that tech leadership keeps a good eye on both.


First, the risks:

Adversarial attacks: AI-generated adversarial attacks, designed to bypass security systems, will be more difficult to detect and can cause serious damage to a system.

Malware: AI can be used to create new types of malware that can evade traditional antivirus software. Malware can be designed to automatically adapt to the security measures that are put in place, making it difficult to defend against.

Deepfakes: AI-generated deepfakes - videos or images - can be used to spread misinformation or conduct social engineering attacks. Because deepfakes can convincingly impersonate people, it becomes difficult to know whether content is real.

Password cracking: AI can be used to crack passwords, making it easier for cybercriminals to gain unauthorised access to systems - especially dangerous when those systems hold sensitive data.

Social engineering attacks: AI can be used to conduct more sophisticated social engineering attacks, such as spear phishing, generating more convincing emails and making it more likely that someone will click a link or hand over sensitive information.


And the opportunities:

Threat detection and response: AI can analyse large volumes of data from multiple sources in real time, detecting anomalous behaviour and potential security threats that human analysts may miss. AI-powered security tools can also respond automatically - blocking access or quarantining files - reducing the time it takes to respond to a threat.
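To make the idea concrete, here's a deliberately minimal sketch of the kind of statistical anomaly detection such tools build on - pure Python, with hypothetical per-hour login-failure counts and a made-up `flag_anomalies` helper; real products use far richer models:

```python
import statistics

def flag_anomalies(event_counts, threshold=2.0):
    """Flag readings whose z-score exceeds `threshold`.

    event_counts: list of (label, count) pairs, e.g. events per hour.
    Returns the labels whose count is a statistical outlier.
    """
    counts = [c for _, c in event_counts]
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [label for label, c in event_counts
            if abs(c - mean) / stdev > threshold]

# Hypothetical per-hour login-failure counts; 03:00 spikes suspiciously.
hourly = [("00:00", 4), ("01:00", 5), ("02:00", 3), ("03:00", 120),
          ("04:00", 6), ("05:00", 4)]
print(flag_anomalies(hourly))  # → ['03:00']
```

The point isn't the maths - it's that this kind of check runs continuously across every log stream at once, which no human analyst can do.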

Vulnerability management: AI can be used to identify vulnerabilities in enterprise systems and applications, including those that are not yet known to humans. By using machine learning algorithms to scan for and analyse potential vulnerabilities, AI can help identify and mitigate risks before they can be exploited by attackers.

User behaviour analytics: AI can be used to analyse user behaviour across enterprise systems, identifying patterns that may indicate a security risk. For example, AI can detect when a user is accessing systems outside normal working hours, or accessing data that isn't relevant to their role - both potential signs of a security threat.
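The off-hours and out-of-role signals above can be sketched as simple policy checks - the rule-based baseline that behavioural-analytics products then extend with learned models. Everything here (the `POLICY` table, users, scopes) is hypothetical:

```python
from datetime import datetime

# Hypothetical per-user policy: allowed working hours and data scopes.
POLICY = {
    "alice": {"hours": range(8, 18), "scopes": {"billing", "crm"}},
    "bob":   {"hours": range(9, 17), "scopes": {"hr"}},
}

def risk_flags(user, scope, when):
    """Return simple risk flags for one access event."""
    policy = POLICY.get(user)
    if policy is None:
        return ["unknown-user"]
    flags = []
    if when.hour not in policy["hours"]:
        flags.append("off-hours")
    if scope not in policy["scopes"]:
        flags.append("out-of-role-data")
    return flags

# bob pulling billing data at 23:40 trips both rules.
print(risk_flags("bob", "billing", datetime(2023, 5, 2, 23, 40)))
```

In practice the "normal" baseline is learned per user rather than hard-coded, but the alerting shape is the same.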

Fraud detection: AI can be used to detect fraud and financial crime within enterprise systems, including detecting and preventing fraudulent transactions and identifying patterns of behaviour that may indicate fraudulent activity.

Compliance monitoring: AI can be used to monitor compliance with internal security policies and external regulations, such as GDPR or HIPAA. By automatically detecting violations and notifying security teams, AI can help ensure that compliance is maintained and risks are minimized.
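Automated compliance monitoring often starts with rule checks like this minimal, hypothetical sketch, which flags log lines that appear to leak an email address - the sort of PII that policies such as GDPR typically require to be masked. The regex and log format are illustrative only:

```python
import re

# Rough pattern for something that looks like an email address.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def compliance_violations(log_lines):
    """Return (line_number, line) pairs that appear to contain an email."""
    return [(i, line) for i, line in enumerate(log_lines)
            if EMAIL_RE.search(line)]

logs = [
    "user=****@**** action=login ok",            # properly masked
    "user=jane.doe@example.com action=export rows=5000",  # leaked PII
]
for lineno, line in compliance_violations(logs):
    print(f"line {lineno}: possible PII leak -> {line}")
```

AI's role is to go beyond patterns like this - classifying free text that regexes miss - but the notify-the-security-team loop is identical.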

So, what is a CTO to do?

It's partly about staying up to date with tech changes, but also about driving a cultural shift in how security is approached within your organisation.

Here are our top three takeaways:

Develop a comprehensive AI governance framework: CTOs need to establish an AI governance framework that includes policies, processes, and standards for the development, deployment, and management of AI systems within their organisations. This framework should cover ethical and legal issues as well as data privacy and security concerns, and be formed with a clear view of the myriad tech tools and platforms in use (and being considered) in your organisation.

Implement robust AI security controls: CTOs need to ensure that appropriate security controls are implemented throughout the AI development and deployment process. This includes secure coding practices, vulnerability assessments, and penetration testing of AI systems, as well as encryption of sensitive data and secure storage and transmission of data.

Bake security in - from culture to process: CTOs need to create a culture of security by keeping security risk and awareness training front of mind for all employees. This is about security being deeply embedded in how the organisation operates - from development to integration to execution. It's time to go deep in embedding security in your technical DNA.


If you read this article and muttered "that's ML, not AI" a couple of times - I hear you. We're using AI as the umbrella term, which, when the technology isn't quite so intelligent, gets softened to "models" and "learning".
