Source: Google
AI systems are vulnerable to attack in many ways. For example, they can be tricked into making incorrect decisions by feeding them maliciously crafted data, and they can be compromised to expose sensitive data or to take control of the system itself. Attacks on AI systems affect not only the security of the system but can also cause harm to users or violate their privacy.
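To make the first kind of attack concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), a well-known technique for crafting malicious inputs that flip a classifier's decision. It assumes a PyTorch classifier that outputs logits; the function name and epsilon value are illustrative, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Craft an adversarial input: nudge each feature of x in the
        direction that most increases the model's loss on label y."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # A small signed step along the loss gradient is often enough
        # to change the prediction while looking unchanged to a human.
        return (x + epsilon * x.grad.sign()).detach()

An input perturbed this way is typically indistinguishable to a human reviewer, which is one reason defenses against malicious data belong in the governance and review processes described next.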
It’s therefore essential to take steps to secure AI systems. This includes ensuring that AI systems are fully part of cyber governance, that they’re protected from malicious attacks, and that their security is regularly reviewed. In addition to securing AI systems, it’s also important to ensure that AI is used in a secure way, as even a securely developed platform can be used in an insecure manner.
Naturally, the approach to securing AI depends heavily on the type of AI (generative AI has its own threat scenarios), your AI use cases (AI writing marketing copy carries different risks than AI writing production code), and your role in the AI ecosystem (using consumer-grade AI and developing your own AI applications call for different security safeguards). As a result, the discussion should focus on specific AI use cases, and each new use case merits a fresh risk assessment, as sketched below.
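As an illustration of what a per-use-case assessment might capture, the hypothetical sketch below records the factors called out above: the type of AI, the use case, and your role in the ecosystem. The field names and risk levels are assumptions for illustration, not a standard from the paper.

    from dataclasses import dataclass, field

    @dataclass
    class AIUseCaseRiskAssessment:
        """Hypothetical record for assessing one AI use case.
        Field names and levels are illustrative assumptions."""
        use_case: str        # e.g. "LLM drafts marketing copy"
        ai_type: str         # e.g. "generative", "predictive"
        ecosystem_role: str  # "consumer-grade user" vs. "app developer"
        threats: list[str] = field(default_factory=list)
        risk_level: str = "unassessed"  # "low" / "medium" / "high"

    # Each new use case gets its own assessment rather than
    # inheriting the conclusions of an earlier one.
    copywriting = AIUseCaseRiskAssessment(
        use_case="LLM drafts marketing copy",
        ai_type="generative",
        ecosystem_role="consumer-grade user",
        threats=["prompt injection", "data leakage via prompts"],
        risk_level="medium",
    )

    codegen = AIUseCaseRiskAssessment(
        use_case="LLM writes production code",
        ai_type="generative",
        ecosystem_role="app developer",
        threats=["insecure generated code", "supply-chain poisoning"],
        risk_level="high",
    )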
This paper from Google focuses primarily on security.