Cases

Public institution: AI Act compliance on recruitment

Sector
Public sector
Size
5,000+ employees
Date
February 25, 2026
Problem
Achieve AI Act compliance across the entire recruitment chain without degrading time-to-fill on critical positions.

Public sector · AI Act · Recruitment


Context

The setting: a large French public institution in the research and higher education sector, with more than eight thousand permanent employees and several thousand associated non-permanent staff. The institution makes about 1,200 recruitments per year across very varied profiles: research professors, research engineers, administrative and technical staff, and fixed-term contractors on funded projects.

In 2023 the institution deployed a candidate management platform with built-in decision-support features: candidate pre-screening, relevance scoring against the job description, and follow-up recommendations for inactive candidates. These features, provided by a European HR tech vendor, rely on proprietary machine learning models.

The progressive entry into force of the AI Act, beginning in February 2025, required the institution to bring this chain into compliance with the obligations applicable to AI systems classified as high-risk in the recruitment context.

Problem

The problem was twofold.

On the one hand, the institution had to achieve compliance on four specific requirements: transparency toward candidates about the use of the models; documented human oversight of any decision taken with model support; the ability for candidates to request an individual explanation; and a fundamental rights impact assessment for high-risk uses. At project start, none of these requirements had an operational mechanism behind it.

On the other hand, the institution had to carry out this compliance work without degrading already tight time-to-fill on critical positions, particularly research professor positions in competitive disciplines. The HR department feared, rightly, that procedural overload would lengthen these delays and lose candidates to competing institutions subject to fewer regulatory constraints.

Intervention

The intervention ran over twelve months, steered by a mixed committee bringing together HR, the legal department, the data protection officer, an IT department representative, and an independent member appointed by the presidency.

The first stage, over three months, was the fundamental rights impact assessment. Conducted with the support of a specialized external firm, the study examined each of the platform's features for risks of algorithmic bias, indirect discrimination, and decision opacity. It found limited risk on two features, significant risk requiring remediation on a third, and high risk requiring withdrawal on a fourth; that feature was deactivated within a month of the study's conclusion.
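The assessment's conclusions amount to a simple mapping from risk level to required action. A minimal sketch, with hypothetical feature names since the case does not say which finding applied to which feature:

```python
from enum import Enum

class RiskLevel(Enum):
    LIMITED = "limited"          # acceptable as-is
    SIGNIFICANT = "significant"  # remediation required
    HIGH = "high"                # withdrawal required

# Hypothetical mapping: the case reports two limited, one
# significant, and one high-risk finding, without naming features.
fria_findings = {
    "feature_a": RiskLevel.LIMITED,
    "feature_b": RiskLevel.LIMITED,
    "feature_c": RiskLevel.SIGNIFICANT,
    "feature_d": RiskLevel.HIGH,
}

to_deactivate = [f for f, r in fria_findings.items() if r is RiskLevel.HIGH]
print(to_deactivate)  # -> ['feature_d']
```

In the actual intervention the high-risk feature was deactivated within a month; the mapping above only illustrates the decision rule.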

The second stage, over four months, instrumented human oversight. The platform was reconfigured to produce, for each model-assisted decision, a documented trace including the supervisor's identity, the rationale for the decision, and the score elements considered determining. Validation processes were rewritten to incorporate this traceability.
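The trace described above can be sketched as a record type. Field names are assumptions drawn from the elements the case lists, not the vendor's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionTrace:
    """One documented trace per model-assisted decision (sketch)."""
    candidate_ref: str                    # pseudonymized candidate reference
    supervisor_id: str                    # identity of the human supervisor
    decision: str                         # e.g. "shortlisted", "rejected"
    rationale: str                        # motive recorded by the supervisor
    determining_scores: dict[str, float]  # score elements deemed determining
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

trace = DecisionTrace(
    candidate_ref="cand-0042",
    supervisor_id="hr-017",
    decision="shortlisted",
    rationale="Strong match on the required research profile",
    determining_scores={"job_description_relevance": 0.87},
)
```

Making the record immutable (`frozen=True`) reflects the audit purpose of the trace: once written, it should not be silently edited.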

The third stage, over three months, set up the mechanism for responding to candidates' explanation requests. A dedicated channel was opened, with a target response time of fifteen working days. Recruitment teams were trained to handle these responses, drawing on the traces produced by the platform.
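Tracking the fifteen-working-day target requires a deadline calculation. A minimal sketch that skips weekends only; a production version would also skip French public holidays, which are omitted here:

```python
from datetime import date, timedelta

def response_deadline(received: date, working_days: int = 15) -> date:
    """Date by which the explanation must be sent, counting only
    weekdays after `received`. Holidays are deliberately ignored
    in this sketch."""
    d = received
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# A request received on Monday 2026-03-02 is due on 2026-03-23.
print(response_deadline(date(2026, 3, 2)))
```

The same function can flag overdue requests by comparing the deadline with today's date.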

The fourth stage, conducted in parallel with the previous three, updated the information notices shown to candidates before they apply and published on the institutional website a concise description of the models used and their limits.

Outcome

At the end of the twelve months, the institution had achieved operational compliance on all four requirements. A self-assessment, conducted according to the national supervisory authority's method, raised no major non-compliance. Time-to-fill, measured over the full year, had not increased significantly: the theoretical lengthening due to the new procedures was offset by improved pre-screening quality.
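Measuring time-to-fill over the year amounts to aggregating the delay between opening and filling each position. A minimal sketch; the dates are invented and the median is an assumed aggregation, not the institution's stated KPI definition:

```python
from datetime import date
from statistics import median

def time_to_fill_days(positions: list[tuple[date, date]]) -> float:
    """Median days between a position opening and its filling."""
    return median((filled - opened).days for opened, filled in positions)

# Invented sample data for illustration only.
sample = [
    (date(2026, 1, 5), date(2026, 3, 2)),    # 56 days
    (date(2026, 1, 12), date(2026, 2, 20)),  # 39 days
    (date(2026, 2, 2), date(2026, 4, 10)),   # 67 days
]
print(time_to_fill_days(sample))  # -> 56
```

Computing the same metric before and after the compliance project, on comparable position types, is what supports the "no significant increase" claim.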

More unexpectedly, the number of applications for the most sensitive positions rose slightly, despite a tight sector-wide market. The HR department attributes this effect to the transparent publication about the models used, which may have reassured candidates sensitive to these issues.

Lessons

The HR director drew three lessons, presented as feedback at the research institutions conference.

The first lesson concerns the calendar. AI Act compliance on recruitment is not a best-efforts obligation but an operational one, requiring several months of effective work. Delaying the project eventually forces rushed and costly choices. Starting now, including for organizations not yet directly bound, is the prudent choice.

The second lesson concerns the unanticipated benefit of transparency. Publishing the models used and their limits sent a positive signal about employer attractiveness, in a sector where distrust of automated tools is traditionally high. Transparency is not only a constraint; it can become a lever.

The third lesson concerns the composition of the steering committee. Including an independent member defused several internal debates by bringing a respected outside perspective. This kind of composition deserves wider adoption for projects of this type, including in private organizations not required to adopt it.