AI and Cybersecurity: HR Directors on the front line of digital trust

In just a few years, artificial intelligence tools have found their way into every corner of HR. Automated CV screening tools, assistants for drafting job adverts, interview transcript analysis, personalised learning path recommendations: the HR function now relies heavily on AI to gain speed, analytical depth and a greater ability to steer its activity.

But behind this promise of efficiency lies a less visible, yet crucial issue: the security of employee data.

The more AI is used, the more employee data flows from one tool to another, is cross-referenced, and sometimes stored with several providers. And the more this data circulates, the more vulnerable it becomes. AI and cybersecurity are no longer just topics for the IT department; they are now a strategic issue for the HR Director.

AI in HR: technological acceleration, increased vulnerability

AI has given HR an unprecedented ability to process volumes of information that previously remained under-exploited. Repetitive tasks have been automated, decisions are better informed thanks to data, and some analyses that used to take days can now be completed in a matter of minutes. From this angle, adopting AI seems like an obvious choice.

But this same AI runs on a single type of fuel: HR data. Profiles, career histories, pay information, performance reviews and feedback from managers or employees are all now used to feed algorithms and generate recommendations. If tools have not been vetted through a cybersecurity lens before selection, this sensitive information can end up in insufficiently controlled environments.

At the beginning, many teams tested and adopted AI solutions with enthusiasm, in an experimental mindset. Today, awareness is growing: it is no longer possible to separate “AI-driven HR transformation” from “rigorous protection of employee data”. The two must be designed together.

AI and cybersecurity: why HR data has become a critical issue

HR databases have always been sensitive, but AI has made them even more strategic. They bring together a set of personal, sometimes highly intimate information, which is of interest both to business functions and to attackers. They contain identification data, elements relating to career paths, pay information and, in some cases, health data or indicators relating to personal circumstances.

For a cybercriminal, this type of database is extremely attractive because it is reliable, structured and regularly updated. Once compromised, this data can be used to support identity theft, targeted phishing campaigns against senior leaders or key functions, or attempts to gain access to other corporate systems.

The issue is not limited to technical aspects. A leak of salary data, performance reviews or health-related information can create lasting tensions, fuel misunderstandings and profoundly undermine trust. It can, for example:

  • call into question the sense of internal fairness if pay gaps are suddenly exposed;
  • weaken employees whose sensitive information ends up being disclosed;
  • damage the employer brand by projecting the image of an organisation that is careless about protecting its people.

In this context, the security of HR data becomes a legal, social and reputational issue all at once.

AI in day-to-day work: small actions, big risks

Risks linked to AI in HR do not come solely from sophisticated attacks. They often arise from everyday actions, carried out in good faith, by professionals who are simply trying to save time.

We frequently see recruiters copy and paste lists of candidates into a public AI assistant to draft more personalised messages. HR professionals submit interview notes to an AI tool to obtain a quick summary. Learning and development teams export employees’ training histories to test a new online analytics service, without really considering where the data will be stored or how it will be reused.

Taken in isolation, each of these actions may seem trivial. Taken together, they create a constant flow of sensitive data towards tools that may never have been audited or approved by the IT department or the DPO. It is this diffuse, barely visible exposure that constitutes one of the main risks in the AI era.

In response, purely technical measures are not enough. It is essential to develop a genuine culture of responsible AI use within HR teams, with simple, practical messages:

  • which types of data are considered sensitive;
  • which tools are authorised or prohibited;
  • which questions to ask before pasting employee information into an external service.
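Those practical messages can even be backed by lightweight tooling. The sketch below is purely illustrative, not a substitute for a vetted data loss prevention solution: it shows one way a pre-submission filter could mask the most obvious identifiers (email addresses and phone-like numbers) before text is pasted into an external AI service. The patterns and placeholder tokens are assumptions for the example; names, employee IDs and free-text context would need far broader coverage in practice.

```python
import re

# Illustrative sketch only: mask obvious identifiers before text leaves
# the organisation. Real deployments need broader coverage (names, IDs,
# contextual details) and a properly vetted DLP tool.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)   # emails first, so their digits are gone
    text = PHONE.sub("[PHONE]", text)   # then phone-like digit runs
    return text

note = "Contact Jane at jane.doe@example.com or +44 20 7946 0958 about the offer."
print(redact(note))
# → Contact Jane at [EMAIL] or [PHONE] about the offer.
```

Even a crude filter like this makes the rule concrete for HR teams: sensitive identifiers should never reach an unapproved external service in clear text.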

AI and cybersecurity: putting in place AI governance tailored to HR issues

To avoid managing cybersecurity reactively, it is necessary to structure AI governance. The aim is not to multiply constraints, but to define a shared framework that prevents misuse, duplication or blind spots.

Robust governance is usually based on close collaboration between the IT department, Legal / the DPO and HR. Each brings a specific perspective: technical risks for one, regulatory requirements for another, and human impact for HR. Together, they can clarify the rules of the game and define a few guiding principles, for example:

  • which types of AI projects in HR must systematically be reviewed before deployment;
  • which minimum security, hosting and data management criteria must be required from suppliers;
  • in which cases specific information for employees is necessary, or even consultation with employee representative bodies;
  • how to manage the data lifecycle (collection, use, retention, deletion) when AI is involved.
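The last principle, managing the data lifecycle, lends itself to simple codification. The sketch below is a hypothetical illustration, not a compliance tool: it shows one way to encode a retention period per data category so that deletion can be checked systematically rather than ad hoc. The categories and durations are invented for the example; actual retention periods must come from Legal and the DPO.

```python
from datetime import date

# Illustrative sketch: retention periods per HR data category.
# Durations here are made-up examples, not legal guidance.
RETENTION_DAYS = {
    "candidate_application": 2 * 365,  # hypothetical: 2 years after the process ends
    "performance_review": 5 * 365,     # hypothetical
    "training_history": 5 * 365,       # hypothetical
}

def past_retention(category: str, last_used: date, today: date) -> bool:
    """True if a record has exceeded its retention period and is due for deletion."""
    return (today - last_used).days > RETENTION_DAYS[category]

# A 4-year-old candidate application exceeds the hypothetical 2-year limit.
print(past_retention("candidate_application", date(2020, 1, 1), date(2024, 1, 1)))
# → True
```

Encoding the rule once, per category, is what turns "retention and deletion" from a policy statement into something that can actually be audited.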

On the HR side, substantive work is also needed to map the tools already in place. Many organisations discover, when they dig deeper, that they are using far more AI solutions than they imagined: ATS with automatic scoring, recommendation engines integrated into learning platforms, employee listening tools based on algorithmic models… Understanding this ecosystem is a prerequisite for regaining control.
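That mapping exercise does not require sophisticated software; even a structured inventory makes unreviewed data flows visible. The sketch below is a minimal illustration of the idea, with invented field names and example entries, assuming the DPO review status is tracked per tool.

```python
# Illustrative sketch: a minimal inventory of AI-enabled HR tools.
# Entries and field names are invented for the example.
tools = [
    {"name": "ATS scoring module", "data": ["CVs", "interview notes"], "dpo_reviewed": True},
    {"name": "Learning recommender", "data": ["training history"], "dpo_reviewed": False},
    {"name": "Employee listening platform", "data": ["survey free text"], "dpo_reviewed": False},
]

def needs_review(inventory):
    """Return the tools that process HR data but have not yet been reviewed."""
    return [t["name"] for t in inventory if not t["dpo_reviewed"]]

print(needs_review(tools))
# → ['Learning recommender', 'Employee listening platform']
```

The value is less in the code than in the discipline: once every AI-enabled tool is listed with the data it touches, regaining control becomes a tractable task.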

Making cybersecurity a lasting HR capability

AI is not about to disappear from HR; on the contrary, it will continue to extend into new processes. The question is therefore not whether to adopt it, but how to deploy it safely and responsibly.

For the HR Director, this implies assuming a dual role. On the one hand, that of innovation facilitator, by identifying relevant AI use cases for teams and managers, and supporting projects that genuinely create value. On the other, that of employee protector, by ensuring that their data is handled with care, secured throughout its lifecycle, and never used for opaque or disproportionate purposes.

In practice, this entails a few key actions, which should be seen as a long-term investment rather than a one-off project:

  • systematically integrating a “data and cybersecurity” component into HR projects involving AI;
  • embedding training on these issues for HR teams over time, with regular refreshers and practical case studies;
  • tracking incidents and near-misses, and using them to adjust rules and practices rather than downplaying what happened;
  • making transparency towards employees a guiding principle: explaining what is done with their data, why, and with which safeguards.

By placing data security on the same level as tool performance, the HR function strengthens its credibility and legitimacy. It shows that it is capable of combining innovation and responsibility, efficiency and respect for individuals. In the age of AI, this is undoubtedly one of the essential conditions for continuing to attract, engage and retain talent over the long term.