AI Act | What Cybersecurity Leaders Need to Know
The EU AI Act is likely to become a global benchmark for regulating artificial intelligence. For cybersecurity leaders — especially CISOs and technical teams — it introduces new responsibilities, risk models, and strategic imperatives. This article unpacks the core impacts on cybersecurity and what organisations must do to prepare.

A New Risk-Based Model for AI
The AI Act classifies AI systems into four risk categories: minimal, limited, high, and unacceptable risk. High-risk systems, such as those used in biometric identification, credit scoring, hiring, and critical infrastructure, will face stringent regulatory scrutiny. These systems must implement a lifecycle risk management framework, maintain performance and security monitoring, and ensure human oversight of decision-making. Before such a system is placed on the market, it must undergo a conformity assessment and be registered in an EU database. In short, the Act adopts a “secure-by-design and by-default” philosophy for high-risk AI. This raises the bar for cybersecurity: systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle.
Providers (meaning developers or suppliers of AI systems) carry most of these responsibilities. However, deployers (organisations using AI systems) are also accountable for implementing appropriate oversight, using the systems in compliance with their intended purpose, and reporting incidents. A deployer can also become subject to provider obligations, for example by purchasing an AI/ML system and making substantial modifications to it.
This structured, risk-based approach aligns closely with core cybersecurity principles: identifying critical assets, assigning proportional controls, and maintaining accountability throughout the technology lifecycle. Below we take a look at the implications for cybersecurity.
Core Implications for Cybersecurity Functions
Translating the AI Act’s risk framework into practice means rethinking how cybersecurity teams approach their core functions. The following sections explore how traditional security activities, such as penetration testing, risk assessment, and product security, along with related themes such as privacy and security monitoring, must evolve to meet new expectations.
Penetration Testing
Traditional penetration testing must evolve to account for a new class of AI-specific threats. Rather than solely probing application logic or infrastructure and code vulnerabilities, security teams now need to consider how machine learning models themselves can be manipulated. Testing methodologies must expand to evaluate model behavior under attack, ensuring that AI systems are robust against manipulation and resilient by design against threats such as:
- Adversarial attacks that mislead AI models (e.g., image manipulation to fool computer vision)
- Data poisoning that corrupts training datasets
- Model inversion or extraction that compromises confidentiality
- Prompt injection in generative AI systems
If you are a provider of an AI system, you also need to take into account the risk of the model being backdoored, or otherwise manipulated through its data or code, so that it produces output aligned with a malicious actor’s intentions rather than the system’s intended behavior.
Cybersecurity teams need to incorporate adversarial testing methodologies and adapt existing tooling to evaluate model behavior, not just application or infrastructure vulnerabilities. This comes in addition to testing the AI systems themselves (and adjacent infrastructure) for conventional security vulnerabilities.
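To make this concrete, the sketch below shows one way an adversarial robustness check could be scripted, using the fast gradient sign method (FGSM) to perturb inputs to a PyTorch classifier and comparing accuracy before and after. The model, data, and epsilon value are placeholders, not part of any specific methodology; a real engagement would use the system under test and representative evaluation data.

```python
# Minimal sketch of an adversarial robustness check using the fast gradient
# sign method (FGSM). The model and data below are placeholders; in practice
# you would load the system under test and representative evaluation data.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon, loss_fn=nn.CrossEntropyLoss()):
    """Return inputs perturbed in the direction that maximises the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

def robustness_report(model, x, y, epsilon):
    """Compare accuracy on clean inputs vs. FGSM-perturbed inputs."""
    model.eval()
    clean_acc = (model(x).argmax(dim=1) == y).float().mean().item()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    adv_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
    return {"clean_accuracy": clean_acc, "adversarial_accuracy": adv_acc}

if __name__ == "__main__":
    # Placeholder model and data purely for illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(64, 1, 28, 28)      # dummy "images"
    y = torch.randint(0, 10, (64,))    # dummy labels
    print(robustness_report(model, x, y, epsilon=0.1))
```

Similar harnesses can be built for data poisoning, model extraction, and prompt injection tests, typically on top of dedicated tooling rather than from scratch.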
Risk Assessments
Security risk assessments should expand to account for algorithmic integrity, data governance, and socio-technical impact. For example:
- Could an attacker manipulate inputs to trigger harmful decisions?
- Does the AI system rely on sensitive data governed by GDPR?
- Is there adequate human oversight to detect misuse?
Collaboration between cyber, privacy, risk, and engineering teams will be essential.
In addition, companies should adopt the risk assessment approach set out in the AI Act in their core project methodology, so that they can capture and classify high-risk AI applications early. As the obligations on these systems are significant, it is important to budget and plan for them at an early stage of application development.
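As an illustration, that screening step can be as simple as tagging each new project against paraphrased high-risk areas during intake, as in the sketch below. The category names, data structure, and function are illustrative assumptions only and do not constitute a legal classification under the Act.

```python
# Illustrative sketch of an AI Act screening step for project intake.
# The categories below loosely paraphrase high-risk areas; this is not legal
# advice and does not replace a proper conformity assessment.
from dataclasses import dataclass, field

HIGH_RISK_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_exams",
    "employment_and_hiring",
    "essential_services_and_credit_scoring",
    "law_enforcement",
    "migration_and_border_control",
    "justice_and_democratic_processes",
}

@dataclass
class AIUseCase:
    name: str
    description: str
    areas: set = field(default_factory=set)  # tags chosen during intake

    def likely_high_risk(self) -> bool:
        """Flag the use case for a full AI Act assessment if it touches
        any of the paraphrased high-risk areas."""
        return bool(self.areas & HIGH_RISK_AREAS)

screening = AIUseCase(
    name="CV screening assistant",
    description="Ranks job applications before human review",
    areas={"employment_and_hiring"},
)
if screening.likely_high_risk():
    print(f"'{screening.name}' requires a full AI Act risk assessment")
```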
Product Security
Many high-risk AI systems will be embedded in digital products. These systems must be developed securely and comply with both the AI Act and the upcoming Cyber Resilience Act (CRA).
Security teams must:
- Risk assess new systems
- Enforce secure development lifecycle (SDLC) principles
- Vet third-party AI components
- Maintain traceability of AI training data and logic
Post-deployment monitoring and vulnerability management will also become mandatory for many AI-powered products.
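One building block for traceability is a provenance record produced by the training pipeline, tying a released model to the exact data and code version that built it. The sketch below assumes the dataset and model artifact exist as files; the paths, field names, and metadata are placeholders for your own pipeline.

```python
# Minimal sketch of recording provenance for an AI training run, so that a
# released model can be traced back to the exact data and code that built it.
# File paths and metadata fields are placeholders for your own pipeline.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def provenance_record(dataset: Path, model_artifact: Path, code_version: str) -> dict:
    return {
        "created_at": datetime.now(timezone.utc).isoformat(),
        "dataset": {"path": str(dataset), "sha256": sha256_of(dataset)},
        "model": {"path": str(model_artifact), "sha256": sha256_of(model_artifact)},
        "code_version": code_version,  # e.g. the git commit of the training code
    }

if __name__ == "__main__":
    record = provenance_record(
        Path("data/train.csv"), Path("models/model.bin"), code_version="abc1234"
    )
    Path("provenance.json").write_text(json.dumps(record, indent=2))
```

Records like this can be stored alongside release artifacts so that conformity documentation and incident investigations can reference exactly which data and code produced a given model version.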
Privacy and GDPR
For AI systems that process personal data, the AI Act does not replace the GDPR — it complements it. This means cybersecurity and privacy leaders must account for dual obligations: ensuring AI systems are both technically secure and legally compliant with data protection principles.
AI systems that perform profiling, automated decision-making, or biometric identification will often qualify as high-risk under the AI Act and trigger GDPR requirements such as lawful basis for processing, data minimisation, transparency, and individual rights. Security teams must collaborate with privacy officers to ensure:
- AI inputs and outputs involving personal data are adequately protected
- Logs and monitoring do not inadvertently collect excessive or sensitive information
- Risk assessments for high-risk AI systems also cover data protection impact assessments (DPIAs)
- Individuals are informed when they interact with AI systems, as required by both regulations
Additionally, personal data used to train AI must be accurate, relevant, and obtained lawfully. This creates a need for secure and traceable data pipelines, supported by robust data governance.
By embedding privacy-by-design into the AI development lifecycle and aligning AI risk assessments with DPIAs, organisations can create a unified compliance approach that satisfies both GDPR and the AI Act.
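For the logging and monitoring point above, one practical control is to redact obvious personal data before AI prompts and outputs reach audit logs. The sketch below uses simple pattern matching for emails and phone-like numbers; it is a baseline illustration only, and real deployments would combine it with data minimisation and dedicated PII detection tooling.

```python
# Simple sketch of redacting obvious personal data (emails, phone-like numbers)
# before AI prompts and outputs are written to logs. Pattern-based redaction is
# a baseline only and will not catch all personal data.
import logging
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    text = PHONE.sub("[REDACTED_PHONE]", text)
    return text

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

def log_interaction(prompt: str, output: str) -> None:
    """Write an audit log entry with personal data redacted."""
    log.info("prompt=%s output=%s", redact(prompt), redact(output))

log_interaction(
    "Assess the credit application from jane.doe@example.com, phone +46 70 123 45 67",
    "Application flagged for manual review",
)
```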
From Compliance to Operational Readiness
Six Key Actions
- Inventory, risk assess and classify AI systems across your organisation to determine exposure under the AI Act.
- Expand threat models and risk assessments to include adversarial attacks, data integrity, and model behavior.
- Augment penetration testing practices to evaluate AI-specific vulnerabilities.
- Integrate AI security controls into SDLC, including secure data pipelines, access controls, and logging.
- Update incident response playbooks to reflect AI-related failure modes and reporting obligations.
- Establish cross-functional governance that brings together cybersecurity, legal, privacy, product, and engineering teams.
Why It Matters Now
The AI Act’s technical and operational requirements are already shaping the regulatory landscape, and the practical steps outlined above are critical to preparing your organisation. Security teams that start early will not only reduce compliance risk but also strengthen resilience across the board.
Full enforcement of the AI Act will begin in phases from 2025 onwards. Waiting, however, is not an option:
- Technical debt in insecure AI systems is harder and more expensive to remediate later
- AI projects that are starting or already ongoing will be subject to regulatory scrutiny down the road
- Early movers will shape industry best practices
- Aligning with the AI Act now boosts customer trust and audit readiness
The AI Act is not only about risk mitigation — it is about enabling safe innovation. Organisations that treat it as an opportunity to elevate cybersecurity maturity will gain a long-term advantage.
A Product Regulation
Finally, the AI Act is a product regulation. You can compare it to how selling electrical or wireless products in the EU requires lab testing and conformity reports in order to attain the CE mark on your product. Without the CE mark you are not allowed to sell your product in the EU. Similar processes exist in the US, for example FCC certification for wireless and electronic products. Regulatory oversight also includes a mandate for the supervisory authority to stop sales of the product in the EU if a non-conformity is detected.
In other words, AI engineers, product development teams, cybersecurity teams and data scientists should consider this novel and potentially business-ending risk when acting as a provider (developer) of AI systems: non-conformity may lead to a full stop of revenue from the AI product, along with add-on risks such as litigation from clients and other interested parties.
There are many unknowns, though. Because AI is essentially data and code that evolve over time, it is harder to apply the same process as for electronics. How the EU will practically and legally enforce the Act remains to be seen, but it cannot be discounted; it is the first attempt to regulate AI on a wide scale.
Implementation of the AI Act
At Advisense, we help organisations adapt their cybersecurity, risk, and product assurance functions to meet the evolving demands of AI regulation. From governance design to secure implementation and incident readiness, our experts are here to support your journey toward responsible and resilient AI. We can help you risk assess your AI initiatives, penetration test your models and algorithms, establish AI governance, and more.
For more information on the EU AI Act, please visit our designated AI site.

Navigating AI with Legal Expertise & Technical Assurance
Our multidisciplinary team combines deep regulatory expertise with technical assurance to support you in AI Governance, Risk Management, and Compliance. From AI Act Gap Analysis and Establishment of AI Governance to AI Literacy Programs and AI Security Verification, we ensure responsible and effective AI management.
