Archives

At the heart of the criminal investigation: from the crime scene to the criminal court trial

In 2024, our unique literary concept combining crime fiction and educational writing finally came to life. It is the result of many months of work, fascinating encounters with seasoned professionals, the sharing of expertise, and true immersion in the daily lives of numerous experts within the judicial sphere. Our ambition with this book was to explore every stage of a criminal investigation, revealing to the general public the many layers of the vast judicial system—from the discovery of a violent crime scene to the verdict delivered by the criminal court. We extend our heartfelt thanks to all the experts who took part in this project and whose testimonies lend the book its authenticity. It has been a remarkable journey!

1 – To begin, could you briefly describe your background and what motivated you to write this book?

Sébastien Aguilar: I have been working in the Forensic Police of the Paris Police Prefecture for thirteen years. In 2017, I had the opportunity to co-author a first book on forensic science and to found ForenSeek®, a platform dedicated to forensic disciplines, which also offers a training program for the competitive examination to become a Forensic Science Technician (Technicien de Police Technique et Scientifique). Since my first assignment, I have always enjoyed sharing insights about this extraordinary profession, which, in my view, remains largely unknown to the general public. The inner workings of a judicial investigation are often unsuspected, and I have witnessed firsthand how investigators sacrifice part of their personal lives to bring cases to completion—sometimes over several days or even weeks. With this new book, our goal was to shed light on the full complexity of a criminal investigation: the overwhelming quantity of evidence to collect, the necessity of organizing all this information, and the importance of interpreting it correctly to uncover the truth. For us, it was a way to pay tribute to all those who work behind the scenes, whose efforts are essential—especially for the victims.

Justine Picard: My career path is somewhat atypical. I spent nearly ten years working in marketing and communications. As I approached my thirties, driven by a strong desire to pursue the profession that had always fascinated me, I decided to take the entrance examination for the Forensic Police of the French Police. In 2019, I joined the intervention unit of the SRPTS in Paris, marking a complete 180-degree career change! I discovered a fascinating, highly technical, and demanding field. Throughout the various cases I have worked on, I quickly began to feel a certain frustration. Within forensic science, we have our own protocols, our own methods, and our own way of working. At crime scenes, we collaborate closely with investigators, but soon after, we lose visibility on the subsequent progress of the case. It’s understandable—this is how the judicial process operates, and everyone must play their part to move things forward as quickly and efficiently as possible. Yet while I accept this professionally, on a personal level, it leaves me with a lingering sense of incompleteness. That’s what motivated me to embark on this literary project: Who? When? What? How? To know and understand every aspect of a criminal investigation, to delve into the daily work of those experts who operate in the shadows, and more broadly, to grasp the inner workings of our country’s judicial system.

2 – What makes your book stand out from other works on criminology and criminal investigations?

Justine Picard: Mainly the format we chose: finding the perfect balance between the technical narrative and the fictionalized storytelling. There are many books devoted to the National Police, the Gendarmerie, or other justice system professionals—some take the form of testimonies, others of detective novels or technical manuals—but none truly bridges these different worlds. For us, it was a way to engage the reader, to hold their attention, while guiding them through the entire judicial process with well-sourced information and key witness accounts. In this market, books tend to be one or the other—but rarely both!

Sébastien Aguilar: Our ambition was to create a book that is both educational and captivating, moving away from the somewhat austere format of traditional criminal law textbooks. We therefore chose to diversify our approach: by including sections dedicated to specific forensic specialities, interviews with various judicial actors (magistrates, experts, lawyers, psycho-criminologists, jurors of the cour d’assises, etc.), and concrete materials such as call detail records, official reports, autopsy findings, and forensic police reports. The idea was to immerse the reader in the heart of the investigation—to show, as vividly as possible, how a case is built step by step, and what tools investigators use along the way. I was particularly moved that Dominique Rizet, a seasoned judicial reporter, praised in his foreword the “educational, well-documented, and comprehensive” nature of this book, describing it as “truly one of a kind.”

3 – Why did you choose to tell this story in the form of a crime novel?

Justine Picard: Above all, we wanted to bring suspense to the narrative and move away from a purely technical approach. Another important point for us was to reach a wide audience—both “specialists” and “non-specialists”—by allowing them to immerse themselves more easily in a complex investigation involving multiple technical procedures. The plot twists, witness statements, and the reader’s desire to find out what happens next all serve as tools to gently introduce complex forensic and judicial concepts. Our aim was for the reader to finish the book with the satisfaction of a well-crafted story while also gaining a solid educational foundation through the insights of real experts and the many technical details presented.

Sébastien Aguilar: We chose a narrative format because it allows readers to experience the intensity and emotion inherent in this kind of investigation. This storytelling allowed us to convey powerful messages—such as the confrontation with death, the crucial role of the forensic autopsy, or the chronic fatigue affecting every individual involved in the investigation. Behind the forensic police expert’s coverall, the magistrate’s or lawyer’s robe, the pathologist’s lab coat, or the investigator’s computer screen, there are men and women with their own strengths and weaknesses. Writing it as a crime novel enabled us to highlight this deeply human dimension, too often overshadowed by the purely technical side of criminal investigation.

4 – Is the case presented in your book entirely fictional, or does it include real investigative elements and techniques?

Sébastien Aguilar / Justine Picard: Around 30% of the story is inspired by a real criminal case, to which we added numerous original elements to illustrate the diversity and modernity of current investigative techniques. We’re sometimes asked whether we’re concerned about revealing too much information that might benefit criminals. In reality, everything we describe in this book is already publicly accessible—through the internet, films, or television series. Nowadays, everyone knows they can be betrayed by their fingerprints, DNA, scent, clothing fibers, digital data, or even shoeprints left at the scene. To put it simply: the best way not to get caught is still not to commit a crime…

5 – What are the key insights or most surprising discoveries readers will find in “At the Heart of the Criminal Investigation”?

Sébastien Aguilar / Justine Picard: In At the Heart of the Criminal Investigation, we reveal fascinating developments that are set to transform investigative methods in the years to come. For instance, we explore emerging forms of digital trace evidence—such as connected devices, next-generation vehicles, and intelligent video surveillance—that are poised to play a decisive role in future investigations. These new sources of evidence already make it possible to reconstruct crime scenes with remarkable precision. We also break down how DNA analyses are conducted: How are they performed? What criteria are used to compare genetic profiles? Through this book, readers will gain insight into the inner workings of forensic genetics laboratories and understand how a single biological sample can completely change the course of an investigation.

6 – Your book doesn’t stop at the criminal investigation—it also includes a section on the trial before the cour d’assises. Why did you make that choice?

Justine Picard: The trial represents a crucial stage of the judicial process. All the work carried out beforehand by the various forensic and investigative experts takes on its full meaning in court, when the accused are confronted with the body of evidence gathered against them. That’s where everything comes together! We also felt it was important to shed light on how the justice system functions—something often misunderstood by the general public—and to clearly explain the roles of its key players (lawyers, prosecutors, investigating judges, etc.).

Sébastien Aguilar: Having attended several trials before the cour d’assises, I’ve always been struck by their almost theatrical staging and by the ability of certain investigators and experts, when called to the stand, to testify for hours on end without interruption or notes. It was important for us to show how such a trial unfolds: How are jurors selected? Who appears before the court? Should one address the presiding judge as “Your Honour”? Do lawyers ever interrupt one another with an “Objection, Your Honour!”? How does the deliberation phase take place? And so on.

7 – If you had to describe your book in one word?

Justine Picard: Immersive!
Sébastien Aguilar: Thrilling!

8 – To conclude, could you share a short anecdote?

Sébastien Aguilar: In this fictional case, I actually went to the banks of the Seine—the location where the victim’s body is discovered—where I carried out a sample collection that was later analyzed by a captain from the Institut de Recherche Criminelle de la Gendarmerie Nationale (IRCGN). The results of that analysis proved decisive in our investigation. This book was also an opportunity to feature, through interviews and immersive accounts, contributions from real specialists in criminal investigation, including:

  • Jacques Dallest, honorary Attorney General, author of Cold Case and Sur les chemins du crime (Éditions Mareuil)
  • Christian Sainte, Director of the National Criminal Police (DNPJ)
  • Valérie-Odile Dervieux, Presiding Judge of the Investigative Chamber, Paris Court of Appeal
  • Delphine Blot, Judge of Liberties and Detention, Paris Judicial Court
  • Fatiha Touili, Investigating Judge, Bobigny Judicial Court
  • Thana Nanou, embalmer, author of Les yeux qu’on ferme (Éditions 41)
  • Guillaume Visseaux, forensic pathologist, IRCGN
  • Amel Larnane, Head of the Central Service for the Preservation of Biological Samples (SCPPB)
  • Eduardo Mariotti and Bertrand Le Corre, criminal lawyers
  • François-Xavier Laurent, forensic genetics expert at Interpol
  • Sylvie Miccolis, investigator, Paris Criminal Brigade (DPJ)
  • Noémi Chevassu, former investigator with the Minors’ Brigade, author of Pluie nocturne (Éditions Alba Capella)
  • Peggy Allimann, behavioural analyst, Forensic Division of the Gendarmerie Nationale (PJGN), author of Crimes (Éditions DarkSide)
  • General Christophe Husson and Colonel Pierre-Yves Caniotti, COMCYBER-MI
  • Chief Superintendent Sophie Malherbe-Mayeux, Head of the River Police Unit, Paris Police Prefecture

Our book is available in all bookstores and on online retail platforms.

Linear Sequential Unmasking–Expanded (LSU-E): A general approach for improving decision making as well as minimizing noise and bias

Copy of the article Linear Sequential Unmasking–Expanded (LSU-E): A general approach for improving decision making as well as minimizing noise and bias, Forensic Science International: Synergy, Volume 3, 2021, 100161, reproduced with author agreement (contact: [email protected])

All decision making, and particularly expert decision making, requires the examination, evaluation, and integration of information. Research has demonstrated that the order in which information is presented plays a critical role in decision making processes and outcomes. Different decisions can be reached when the same information is presented in a different order [1,2]. Because information must always be considered in some order, optimizing this sequence is important for optimizing decisions. Since adopting one sequence or another is inevitable —some sequence must be used— and since the sequence has important cognitive implications, it follows that considering how to best sequence information is paramount.

In the forensic sciences, existing approaches to optimize the order of information processing (sequential unmasking [3] and Linear Sequential Unmasking [4]) are limited in terms of their narrow applicability to only certain types of decisions, and they focus only on minimizing bias rather than optimizing forensic decision making in general. Here, we introduce Linear Sequential Unmasking–Expanded (LSU-E), an approach that is applicable to all forensic decisions rather than being limited to a particular type of decision, and it also reduces noise and improves forensic decision making in general rather than solely by minimizing bias.

Cognitive background

All decision making is dependent on the human brain and cognitive processes. Of particular importance is the sequence in which information is encountered. For example, it is well documented that people tend to remember the initial information in a sequence better —and be more strongly impacted by it— compared to subsequent information in the sequence (see the primacy effect [5,6]). For example, if asked to memorize a list of words, people are more likely to remember words from the beginning of the list compared to the middle of the list (see also the recency effect [7]).

Critically, the initial information in a sequence is not only remembered well, but it also influences the processing of subsequent information in a number of ways (see a simple illustration in Fig. 1). The initial information can create powerful first impressions that are difficult to override [8], it generates hypotheses that determine which further information will be heeded or ignored (e.g., selective attention [[9][10][11][12]]), and it can prompt a host of other decisional phenomena, such as confirmation bias, escalation of commitment, decision momentum, tunnel vision, belief perseverance, mindset and anchoring effects [[13][14][15][16][17][18][19]]. These phenomena are not limited to forensic decisions, but also apply to medical experts, police investigators, financial analysts, military intelligence, and indeed anyone who engages in decision making.

Fig. 1. A simple illustration of the order effect: reading from left to right, the leftmost stimulus (‘A’) leads people to interpret the ambiguous middle stimulus as a ‘B’ (A-B-14); reading the same stimuli from right to left, starting with ‘14’, often leads people to see the middle stimulus as a ‘13’ (A-13-14). The same stimulus is read as a ‘B’ or a ‘13’ depending on what is encountered first.

As a testament to the power of the sequencing of information, studies have repeatedly found that presenting the same information in a different sequence elicits different conclusions from decision-makers. Such effects have been shown in a whole range of domains, from food tasting [20] and jury decision-making [21,22], to countering conspiracy arguments (such as anti-vaccine conspiracy theories [23]), all demonstrating that the ordering of information is critical. Furthermore, such order effects have been specifically shown in forensic science; for example, Klales and Lesciotto [24] as well as Davidson, Rando, and Nakhaeizadeh [25] demonstrated that the order in which skeletal material is analyzed (e.g., skull versus hip) can bias sex estimates.

Bias background

Decisions are vulnerable to bias — systematic deviations in judgment [26]. This type of bias should not be confused with intentional discriminatory bias. Bias, as it is used here, refers to cognitive biases that impact all of us, typically without intention or even conscious awareness [26,27].

Although many experts incorrectly believe that they are immune from cognitive bias [28], in some ways experts are even more susceptible to bias than non-experts [[27][29][30]]. Indeed, the impact of cognitive bias on decision making has been documented in many domains of expertise, from criminal investigators and judges, to insurance underwriters, psychological assessments, safety inspectors and medical doctors [26,[31][32][33][34][35][36]], as well as specifically in forensic science [30].

No forensic domain, or any domain for that matter, is immune from bias.

Bias in forensic science

The existence and influence of cognitive bias in the forensic sciences is now widely recognized (‘the forensic confirmation bias’ [27,37,38]). In the United States, for example, the National Academy of Sciences [39], the President’s Council of Advisors on Science and Technology [40], and the National Commission on Forensic Science [41] have all recognized cognitive bias as a real and important issue in forensic decision making. Similar conclusions have been reached in other countries around the world—for example, in the United Kingdom, the Forensic Science Regulator has issued guidance about avoiding bias in forensic work [42], as has Australia [43].

Furthermore, the effects of bias have been observed and replicated across many forensic disciplines (e.g., fingerprinting, forensic pathology, DNA, firearms, digital forensics, handwriting, forensic psychology, forensic anthropology, and CSI, among others; see Ref. [44] for a review)—including among practicing forensic science experts specifically [30,45–47]. Simply put, no forensic domain, or any domain for that matter, is immune from bias.

Minimizing bias in forensic science

Although the need to combat bias in forensic science is now widely recognized, actually combating bias in practice is a different matter. Within the pragmatics, realities and constraints of crime scenes and forensic laboratories, minimizing bias is not always a straightforward issue [48]. Given that mere awareness and willpower are insufficient to combat bias [27], we must develop effective —but also practical— countermeasures.

Linear Sequential Unmasking (LSU [4]) minimizes bias by regulating the flow and order of information such that forensic decisions are based on the evidence and task-relevant information. To accomplish this, LSU requires that forensic comparative decisions begin with the examination and documentation of the actual evidence from the crime scene (the questioned or unknown material) on its own, before exposure to the ‘target’/suspect (known) reference material. The goal is to minimize the potential biasing effect of the reference/‘target’ on the evidence from the crime scene (see Level 2 in Fig. 2). LSU thus ensures that the evidence from the crime scene —not the ‘target’/suspect— drives the forensic decision.

This is especially important since the nature of the evidence from the crime scene makes it more susceptible to bias: in contrast to the reference materials, it often has low quality and quantity of information, which makes it more ambiguous and malleable. By examining the crime scene evidence first, LSU minimizes the risk of circular reasoning in the comparative decision making process by preventing one from working backward from the ‘target’/suspect to the evidence.
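The linear (rather than circular) ordering that LSU imposes can be pictured as an access constraint. Below is a minimal Python sketch of that constraint; the class and method names are illustrative assumptions, not part of the LSU protocol itself:

```python
class LSUCaseFile:
    """Toy enforcement of the LSU ordering constraint: the examiner must
    document an analysis of the crime-scene evidence on its own before
    the 'target'/suspect reference material is unmasked.
    (Names and API are illustrative, not from the paper.)"""

    def __init__(self, crime_scene_evidence, reference_material):
        self._evidence = crime_scene_evidence
        self._reference = reference_material
        self.evidence_notes = None  # first impressions, documented

    def examine_evidence(self):
        # The questioned/unknown material is always available first.
        return self._evidence

    def document_evidence(self, notes):
        self.evidence_notes = notes

    def unmask_reference(self):
        # Linear, not circular: no access to the reference material
        # until the evidence has been examined and documented alone.
        if self.evidence_notes is None:
            raise PermissionError("LSU: document the crime-scene evidence "
                                  "before unmasking the reference material")
        return self._reference
```

Calling `unmask_reference()` before `document_evidence()` raises an error, mirroring how LSU prevents working backward from the ‘target’/suspect to the evidence.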

Fig. 2. Sources of cognitive bias in sampling, observations, testing strategies, analysis, and/or conclusions, that impact even experts. These sources of bias are organized in a taxonomy of three categories: case-specific sources (Category A), individual-specific sources (Category B), and sources that relate to human nature (Category C).

LSU limitations

By its very nature, LSU is limited to comparative decisions where evidence from the crime scene (such as fingerprints or handwriting) is compared to a ‘target’/suspect. This approach was first developed to minimize bias specifically in forensic DNA interpretation (sequential unmasking [3]). Dror et al. [4] then expanded this approach to other comparative forensic domains (fingerprints, firearms, handwriting, etc.) and introduced a balanced approach for allowing revisions of the initial judgments, but within restrictions.

LSU is therefore limited in two ways: First, it applies only to the limited set of comparative decisions (such as comparing DNA profiles or fingerprints). Second, its function is limited to minimizing bias, not reducing noise or improving decision making more broadly.

In this article, we introduce Linear Sequential Unmasking—Expanded (LSU-E). LSU-E provides an approach that can be applied to all forensic decisions, not only comparative decisions. Furthermore, LSU-E goes beyond bias: it reduces noise and improves decisions more generally by cognitively optimizing the sequence of information in a way that maximizes information utility and thereby produces better and more reliable decisions.

Linear Sequential Unmasking—Expanded (LSU-E)

Beyond comparative forensic domains

LSU in its current form is only applicable to forensic domains that compare evidence against specific reference materials (such as a suspect’s known DNA profile or fingerprints—see Level 2 in Fig. 2). As noted above, the problem is that these reference materials can bias the perception and interpretation of the evidence, such that interpretations of the same data/evidence vary depending on the presence and nature of the reference material —and LSU aims to minimize this problem by requiring linear rather than circular reasoning.

However, many forensic judgments are not based on comparing two stimuli. For instance, digital forensics, forensic pathology, and CSI all require decisions that are not based on comparing evidence against a known suspect. Although such domains may not entail a comparison to a ‘target’ stimulus or suspect, they nevertheless entail biasing information and context that can create problematic expectations and top-down cognitive processes —and the expanded LSU-E provides a way to minimize those as well.

Take, for instance, CSI. Crime scene investigators customarily receive information about the scene even before they arrive at the crime scene itself, such as the presumed manner of death (homicide, suicide, or accident) or other investigative theories (such as an eyewitness account that the burglar entered through the back window, etc.). When the CSI receives such details before actually seeing the crime scene for themselves, they become prone to develop a priori expectations and hypotheses, which can bias their subsequent perception and interpretation of the actual crime scene, and impact whether and what evidence they collect. The same applies to other non-comparative forensic domains, such as forensic pathology, fire investigation and digital forensics. For example, telling a fire investigator —before they arrive and examine the fire scene itself— that the property was on the market for two years but did not sell, and/or that the owner had recently insured the property, can bias their work and conclusions.

Combating bias in these domains is especially challenging since these experts need at least some contextual information in order to do their work (unlike, for example, firearms, fingerprint, and DNA experts, who require minimal contextual information to perform comparisons of physical evidence).

The aim of LSU-E is not to deprive experts of the information they need, but rather to minimize bias by providing that information in the optimal sequence. The principle is simple: Always begin with the actual data/evidence —and only that data/evidence— before considering any other contextual information, be it explicit or implicit, reference materials, or any other contextual or meta-information.

In CSI, for example, no contextual information should be provided until after the CSI has initially seen the crime scene for themselves and formed (and documented) their initial impressions, derived solely from the crime scene and nothing else. This allows them to form an initial impression driven only by the actual data/evidence. Then, they can receive relevant contextual information before commencing evidence collection. The goal is clear: As much as practically possible, experts should —at least initially— form their opinion based on the raw data itself before being given any further information that could influence their opinion.

Of course, LSU-E is not limited to forensic work and can be readily applied to many domains of expert decision making. For example, in healthcare, a medical doctor should examine a patient before making a diagnosis (or even generating a hypothesis) based on contextual information: SBAR information (Situation, Background, Assessment and Recommendation [49,50]) should not be provided until after they have seen the actual patient. Similarly, workplace safety inspectors should not be made aware of a company’s past violations until after they have evaluated the worksite for themselves without such knowledge [32].

Beyond minimizing bias

Beyond the issue of bias, expert decisions are stronger when they are less noisy and based on the ‘right’ information —the most appropriate, reliable, relevant and diagnostic information. LSU-E provides criteria (described below) for identifying and prioritizing this information. Rather than exposing experts to information in a random or incidental order, LSU-E aims to optimize the sequence of information so as to utilize (or counteract) cognitive and psychological influences (such as primacy effects, selective attention and confirmation bias; see Section 1.1) and thus empower experts to make better decisions. It is also critical that as the expert progresses through the informational sequence, they document what information they see and any changes in their opinion. This ensures transparency about what information was used in their decision making and how [51,52].

Criteria for sequencing information in LSU-E

Optimizing the order of information not only minimizes bias but also reduces noise and improves the quality of decision making more generally. The question is: How should one determine what information experts should receive and how best to sequence it? LSU-E provides three criteria for determining the optimal sequence of exposure to task-relevant information: biasing power, objectivity, and relevance —each elaborated below.

1. Biasing power. 

The biasing power of relevant information varies drastically. Some information may be strongly biasing, whereas other information is not biasing at all. For example, the technique used to lift and develop a fingerprint is minimally biasing (if at all), but the medication found next to a body may bias the manner-of-death decision. It is therefore suggested that the non- (or less) biasing relevant information be put before the more strongly biasing relevant information in the order of exposure.

2. Objectivity. 

Task-relevant information also varies in its objectivity. For example, an eyewitness account of an event is typically less objective than a video recording of the same event —but video recordings can also vary in their objectivity, depending on their completeness, perspective, quality, etc. It is therefore suggested that the more objective information be put before the less objective information in the order of exposure.

3. Relevance. 

Some relevant information stands at the very core of the work and necessarily underpins the decision, whereas other relevant information is not as central or essential. For example, in determining manner-of-death, the medicine found next to a body would typically be more relevant (for instance, to determine which toxicological tests to run) than the decedent’s history of depression. It is therefore suggested that the more relevant information be put before the more peripheral information in the order of exposure, and —of course— any information that is totally irrelevant to the decision should be omitted altogether (such as the past criminal history of a suspect).

The above criteria are ‘guiding principles’ because:

A. The suggested criteria above each describe a continuum rather than a simple dichotomy [45,48,53]. One may even consider variability within the same category of information; for example, a higher quality video recording may be considered before a lower quality recording, or a statement from a sober eyewitness may be considered before a statement from an intoxicated witness.

B. The three criteria are not independent; they interact with one another. For example, objectivity and relevance may interact to determine the power of the information (e.g., even highly objective information should be less powerful if its relevance is low, or conversely, highly relevant information should be less powerful if its objectivity is low). Hence, the three criteria are not to be judged in isolation from each other. 

C. The order of information needs to be weighed against the potential benefit it can provide [52]. For example, at the trial of police officer Derek Chauvin in relation to the death of George Floyd, the forensic pathologist Andrew Baker testified that he “intentionally chose not” to watch video of Floyd’s death before conducting the autopsy because he “did not want to bias [his] exam by going in with preconceived notions that might lead [him] down one path or another” [54]. Hence, his decision was to examine the raw data first (an autopsy of the body) before exposure to other information (the video). Such a decision should also consider the potential benefit of watching the video before conducting the autopsy, in terms of whether the video might guide the autopsy more than bias it. In other words, LSU-E requires one to consider the potential benefit relative to the potential biasing effect [52].

With this approach, we urge experts to carefully consider how each piece of information satisfies each of these three criteria and whether and when it should, or should not, be included in the sequence —and whenever possible, to document their justification for including (or excluding) any given piece of information. Of course, this raises practical questions about how to best implement LSU-E, such as using case managers —and effective implementation strategies may well vary between disciplines and/or laboratories— but first we need to acknowledge these issues and the need to develop approaches to deal with them.
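The three criteria can be read as an ordering rule: exclude task-irrelevant information, then expose the less biasing, more objective, more relevant information first. A minimal Python sketch follows; the numeric scores, the relevance cut-off, and the lexicographic weighting of the criteria are simplifying assumptions (the paper treats the criteria as interacting continua, not an exact formula):

```python
from dataclasses import dataclass

@dataclass
class InfoItem:
    label: str
    biasing_power: float  # 0 (none) .. 1 (strongly biasing)
    objectivity: float    # 0 (subjective) .. 1 (objective)
    relevance: float      # 0 (irrelevant) .. 1 (core to the decision)

def lsu_e_sequence(items, relevance_threshold=0.1):
    """Illustrative LSU-E ordering: drop task-irrelevant items, then
    sort so that less biasing, more objective, more relevant
    information is exposed first. The lexicographic tie-breaking is an
    assumption; in practice the three criteria interact and must be
    weighed case by case."""
    retained = [i for i in items if i.relevance > relevance_threshold]
    return sorted(retained,
                  key=lambda i: (i.biasing_power, -i.objectivity, -i.relevance))

# Hypothetical manner-of-death case file with made-up scores:
case = [
    InfoItem("lab analysis of medication found near the body", 0.4, 0.8, 0.9),
    InfoItem("decedent's history of depression", 0.7, 0.4, 0.3),
    InfoItem("suspect's past criminal record", 0.9, 0.5, 0.0),  # irrelevant: excluded
    InfoItem("scene photographs", 0.2, 0.9, 0.8),
]

for item in lsu_e_sequence(case):
    print(item.label)
```

Here the scene photographs (objective, weakly biasing) come first, the criminal record is omitted entirely, and the most strongly biasing item is deferred to the end of the sequence.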

Conclusion

In this paper, we draw upon classic cognitive and psychological research on factors that influence and underpin expert decision making to propose a broad and versatile approach to strengthening expert decision making. Experts from all domains should first form an initial impression based solely on the raw data/evidence, devoid of any reference material or context, even if relevant. Only thereafter can they consider what other information they should receive and in what order, based on its objectivity, relevance, and biasing power. It is furthermore essential to transparently document the impact and role of the various pieces of information on the decision making process. As a result of using LSU-E, decisions will not only be more transparent and less noisy; the contributions of different pieces of information will also be justified by, and proportional to, their strength.

References

[1] S.E. Asch, Forming impressions of personality, J. Abnorm. Soc. Psychol., 41 (1946), pp. 258-290
[2] C.I. Hovland (Ed.), The Order of Presentation in Persuasion, Yale University Press (1957)
[3] D. Krane, S. Ford, J. Gilder, K. Inman, A. Jamieson, R. Koppl, et al. Sequential unmasking: a means of minimizing observer effects in forensic DNA interpretation, J. Forensic Sci., 53 (2008), pp. 1006-1007
[4] I.E. Dror, W.C. Thompson, C.A. Meissner, I. Kornfield, D. Krane, M. Saks, et al. Context management toolbox: a Linear Sequential Unmasking (LSU) approach for minimizing cognitive bias in forensic decision making, J. Forensic Sci., 60 (4) (2015), pp. 1111-1112
[5] F.H. Lund. The psychology of belief: IV. The law of primacy in persuasion, J. Abnorm. Soc. Psychol., 20 (1925), pp. 183-191
[6] B.B. Murdock Jr. The serial position effect of free recall, J. Exp. Psychol., 64 (5) (1962), p. 482
[7] J. Deese, R.A. Kaufman. Serial effects in recall of unorganized and sequentially organized verbal material, J. Exp. Psychol., 54 (3) (1957), p. 180
[8] J.M. Darley, P.H. Gross. A hypothesis-confirming bias in labeling effects, J. Pers. Soc. Psychol., 44 (1) (1983), pp. 20-33
[9] A. Treisman. Contextual cues in selective listening, Q. J. Exp. Psychol., 12 (1960), pp. 242-248
[10] J. Bargh, E. Morsella. The unconscious mind, Perspect. Psychol. Sci., 3 (1) (2008), pp. 73-79
[11] D.E. Broadbent. Perception and Communication, Pergamon Press, London, England (1958)
[12] J.A. Deutsch, D. Deutsch. Attention: some theoretical considerations, Psychol. Rev., 70 (1963), pp. 80-90
[13] A. Tversky, D. Kahneman. Judgment under uncertainty: heuristics and biases, Science, 185 (4157) (1974), pp. 1124-1131
[14] R.S. Nickerson. Confirmation bias: a ubiquitous phenomenon in many guises, Rev. Gen. Psychol., 2 (1998), pp. 175-220
[15] C. Barry, K. Halfmann. The effect of mindset on decision-making, J. Integrated Soc. Sci., 6 (2016), pp. 49-74
[16] P.C. Wason. On the failure to eliminate hypotheses in a conceptual task, Q. J. Exp. Psychol., 12 (3) (1960), pp. 129-140
[17] B.M. Staw. The escalation of commitment: an update and appraisal, Z. Shapira (Ed.), Organizational Decision Making, Cambridge University Press (1997), pp. 191-215
[18] M. Sherif, D. Taub, C.I. Hovland. Assimilation and contrast effects of anchoring stimuli on judgments, J. Exp. Psychol., 55 (2) (1958), pp. 150-155
[19] C.A. Anderson, M.R. Lepper, L. Ross. Perseverance of social theories: the role of explanation in the persistence of discredited information, J. Pers. Soc. Psychol., 39 (6) (1980), pp. 1037-1049
[20] M.L. Dean. Presentation order effects in product taste tests, J. Psychol., 105 (1) (1980), pp. 107-110
[21] K.A. Carlson, J.E. Russo. Biased interpretation of evidence by mock jurors, J. Exp. Psychol. Appl., 7 (2) (2001), p. 91
[22] R.G. Lawson. Order of presentation as a factor in jury persuasion, Ky. LJ, 56 (1967), p. 523
[23] D. Jolley, K.M. Douglas. Prevention is better than cure: addressing anti-vaccine conspiracy theories, J. Appl. Soc. Psychol., 47 (2017), pp. 459-469
[24] A.R. Klales, K.M. Lesciotto. The “science of science”: examining bias in forensic anthropology, Proceedings of the 68th Annual Scientific Meeting of the American Academy of Forensic Sciences (2016)
[25] M. Davidson, C. Rando, S. Nakhaeizadeh. Cognitive bias and the order of examination on skeletal remains, Proceedings of the 71st Annual Meeting of the American Academy of Forensic Sciences (2019)
[26] D. Kahneman, O. Sibony, C. Sunstein. Noise: A Flaw in Human Judgment, William Collins (2021)
[27] I.E. Dror. Cognitive and human factors in expert decision making: six fallacies and the eight sources of bias, Anal. Chem., 92 (12) (2020), pp. 7998-8004
[28] J. Kukucka, S.M. Kassin, P.A. Zapf, I.E. Dror. Cognitive bias and blindness: a global survey of forensic science examiners, Journal of Applied Research in Memory and Cognition, 6 (2017), pp. 452-459
[29] I.E. Dror. The paradox of human expertise: why experts get it wrong, N. Kapur (Ed.), The Paradoxical Brain, Cambridge University Press, Cambridge, UK (2011), pp. 177-188
[30] C. van den Eeden, C. de Poot, P. van Koppen. The forensic confirmation bias: a comparison between experts and novices, J. Forensic Sci., 64 (1) (2019), pp. 120-126
[31] C. Huang, R. Bull. Applying Hierarchy of Expert Performance (HEP) to investigative interview evaluation: strengths, challenges and future directions, Psychiatr. Psychol. Law, 28 (2021)
[32] C. MacLean, I.E. Dror. The effect of contextual information on professional judgment: reliability and biasability of expert workplace safety inspectors, J. Saf. Res., 77 (2021), pp. 13-22
[33] E. Rassin. ‘Anyone who commits such a cruel crime, must be criminally irresponsible’: context effects in forensic psychological assessment, Psychiatr. Psychol. Law (2021)
[34] V. Meterko, G. Cooper. Cognitive biases in criminal case evaluation: a review of the research, J. Police Crim. Psychol. (2021)
[35] C. FitzGerald, S. Hurst. Implicit bias in healthcare professionals: a systematic review, BMC Med. Ethics, 18 (2017), pp. 1-18
[36] M.K. Goyal, N. Kuppermann, S.D. Cleary, S.J. Teach, J.M. Chamberlain. Racial disparities in pain management of children with appendicitis in emergency departments, JAMA Pediatr, 169 (11) (2015), pp. 996-1002
[37] I.E. Dror. Biases in forensic experts, Science, 360 (6386) (2018), p. 243
[38] S.M. Kassin, I.E. Dror, J. Kukucka. The forensic confirmation bias: problems, perspectives, and proposed solutions, Journal of Applied Research in Memory and Cognition, 2 (1) (2013), pp. 42-52
[39] NAS. National Research Council, Strengthening Forensic Science in the United States: a Path Forward, National Academy of Sciences (2009)
[40] PCAST, President’s Council of Advisors on science and Technology (PCAST), Report to the President – Forensic Science in Criminal Courts: Ensuring Validity of Feature-Comparison Methods, Office of Science and Technology, Washington, DC (2016)
[41] NCFS, National Commission on Forensic Science. Ensuring that Forensic Analysis Is Based upon Task-Relevant Information, National Commission on Forensic Science, Washington, DC (2016)
[42] Forensic Science Regulator. Cognitive bias effects relevant to forensic science examinations, available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/914259/217_FSR-G-217_Cognitive_bias_appendix_Issue_2.pdf
[43] ANZPAA, A Review of Contextual Bias in Forensic Science and its Potential Legal Implication, Australia New Zealand Policing Advisory Agency (2010)
[44] J. Kukucka, I.E. Dror. Human factors in forensic science: psychological causes of bias and error, D. DeMatteo, K.C. Scherr (Eds.), The Oxford Handbook of Psychology and Law, Oxford University Press, New York (2021)
[45] I.E. Dror, J. Melinek, J.L. Arden, J. Kukucka, S. Hawkins, J. Carter, D.S. Atherton. Cognitive bias in forensic pathology decisions, J. Forensic Sci., 66 (4) (2021)
[46] N. Sunde, I.E. Dror. A hierarchy of expert performance (HEP) applied to digital forensics: reliability and biasability in digital forensics decision making, Forensic Sci. Int.: Digit. Invest., 37 (2021)
[47] D.C. Murrie, M.T. Boccaccini, L.A. Guarnera, K.A. Rufino. Are forensic experts biased by the side that retained them? Psychol. Sci., 24 (10) (2013), pp. 1889-1897
[48] G. Langenburg. Addressing potential observer effects in forensic science: a perspective from a forensic scientist who uses linear sequential unmasking techniques, Aust. J. Forensic Sci., 49 (2017), pp. 548-563
[49] C.M. Thomas, E. Bertram, D. Johnson. The SBAR communication technique, Nurse Educat., 34 (4) (2009), pp. 176-180
[50] I. Wacogne, V. Diwakar. Handover and note-keeping: the SBAR approach, Clin. Risk, 16 (5) (2010), pp. 173-175
[51] M.A. Almazrouei, I.E. Dror, R. Morgan. The forensic disclosure model: what should be disclosed to, and by, forensic experts?, International Journal of Law, Crime and Justice, 59 (2019)
[52] I.E. Dror. Combating bias: the next step in fighting cognitive and psychological contamination, J. Forensic Sci., 57 (1) (2012), pp. 276-277
[53] D. Simon, Minimizing Error and Bias in Death Investigations, vol. 49, Seton Hall Law Rev. (2018), pp. 255-305
[54] CNN. Medical examiner: I “intentionally chose not” to view videos of Floyd’s death before conducting autopsy, April 9, 2021, available at https://edition.cnn.com/us/live-news/derek-chauvin-trial-04-09-21/h_03cda59afac6532a0fb8ed48244e44a0

AI in Forensics: Between Technological Revolution and Human Challenges

By Yann CHOVORY, Engineer in AI Applied to Criminalistics (Institut Génétique Nantes Atlantique – IGNA). At a crime scene, every minute counts. Between identifying a fleeing suspect, preventing further offences, and managing the time constraints of an investigation, investigators are engaged in a genuine race against the clock. Fingerprints, gunshot residue, biological traces, video surveillance, digital data: all these clues must be collected and analyzed quickly, or the case risks collapsing for lack of usable evidence in time. Yet, overwhelmed by the ever-growing mass of data, forensic laboratories are struggling to keep pace.

Analyzing evidence with speed and accuracy

In this context, artificial intelligence (AI) is establishing itself as an indispensable accelerator. Capable of processing in a few hours what would take weeks to analyze manually, it optimizes the use of clues by speeding up their sorting and detecting links imperceptible to the human eye. More than a time-saver, it also improves the relevance of investigations: swiftly cross-referencing databases, spotting hidden patterns in phone call records, comparing DNA fragments with unmatched precision. AI thus acts as a tireless virtual analyst, reducing the risk of human error and offering new opportunities to forensic experts.

But this technological revolution does not come without friction. Between institutional scepticism and operational resistance, its integration into investigative practices remains a challenge. My professional journey, marked by a persistent quest to integrate AI into scientific policing, illustrates this transformation—and the obstacles it faces. From a marginalised bioinformatician to project lead for AI at IGNA, I have observed from within how this discipline, long grounded in traditional methods, is adapting—sometimes under pressure—to the era of big data.

The risk of human error is reduced and the reliability of identifications increased

Concrete examples: AI from the crime scene to the laboratory

AI is already making inroads in several areas of criminalistics, with promising results. For example, AFIS (Automated Fingerprint Identification System) fingerprint recognition systems now incorporate machine learning components to improve the matching of latent fingerprints. The risk of human error is reduced and the reliability of identifications increased [1]. Likewise, in ballistics, computer vision algorithms now automatically compare the striations on a projectile with the markings of known firearms, speeding the work of the firearms expert. Tools are also emerging to interpret bloodstains at a scene: machine learning models can help reconstruct the trajectory of blood droplets and thus the dynamics of an assault or violent event [2]. These examples illustrate how AI is integrating into the forensic expert's toolkit, from crime scene image analysis to the recognition of complex patterns.

But it is perhaps in forensic genetics that AI currently raises the greatest hopes. DNA analysis labs process thousands of genetic profiles and samples, with deadlines that can be critical. Here, AI offers considerable time savings and enhanced accuracy. As part of my research, I contributed to developing an in-house AI capable of interpreting 86 genetic profiles in just three minutes [3], a major advance when analyzing a single complex profile can take hours. Since 2024, it has autonomously handled simple profiles, while complex genetic profiles are automatically routed to a human expert, ensuring effective collaboration between automation and expertise. The results observed are very encouraging: not only is the turnaround time for DNA results drastically reduced, but the error rate also falls thanks to the standardization introduced by the algorithm.
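
The routing logic described above can be illustrated with a deliberately minimal sketch. The threshold, the field names, and the single-contributor rule are all invented for the example; they do not reflect the actual criteria used by the system mentioned in the text:

```python
def route_profile(profile):
    """Hypothetical triage rule: the software signs off only on cases that
    are both simple (a single contributor) and high-confidence; everything
    else is escalated to a human expert for review."""
    if profile["contributors"] == 1 and profile["model_confidence"] >= 0.99:
        return "automated report"
    return "human expert review"

print(route_profile({"contributors": 1, "model_confidence": 0.995}))  # automated report
print(route_profile({"contributors": 3, "model_confidence": 0.97}))   # human expert review
```

The design point is that the automated path is conservative: any ambiguity sends the case to a person rather than to the report.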

AI does not replace humans but complements them

Another promising advance lies in enhancing DNA-based facial composites. Currently, this technique allows certain physical features of an individual (such as eye color, hair color, or skin pigmentation) to be estimated from their genetic code, but it remains limited by the complexity of genetic interactions and by uncertainty in the predictions. AI could revolutionize this approach by using deep learning models trained on vast genetic and phenotypic databases, thereby refining these predictions and generating more accurate sketches. Unlike classical methods, which rely on statistical probabilities, an AI model could analyze millions of genetic variants in a few seconds and identify subtle correlations that traditional approaches do not detect. This prospect opens the way to a significant improvement in the relevance of DNA-based sketches, facilitating suspect identification when no other usable clues are available. The Forenseek platform has explored current advances in this area, but AI has not yet been fully exploited to surpass existing methods [5]. Its integration could therefore constitute a major breakthrough in criminal investigations.

It is important to emphasize that in all these examples, AI does not replace humans but complements them. At the IRCGN (French National Gendarmerie Criminal Research Institute), while the majority of routine, good-quality DNA profiles can be handled automatically, regular human quality control remains in place: every week, a technician randomly checks cases processed by the AI to ensure no drift has occurred [3]. This human-machine collaboration is key to successful deployment, as the expertise of forensic specialists remains indispensable for validating and finely interpreting the results, especially in complex cases.


Algorithms Trained on Data: How AI “Learns” in Forensics

The impressive performance of AI in forensics relies on one crucial resource: data. For a machine learning algorithm to identify a fingerprint or interpret a DNA profile, it first needs to be trained on numerous examples. In practical terms, we provide it with representative datasets, each containing inputs (images, signals, genetic profiles, etc.) associated with an expected outcome (the identity of the correct suspect, the exact composition of the DNA profile, etc.). By analyzing thousands—or even millions—of these examples, the machine adjusts its internal parameters to best replicate the decisions made by human experts. This is known as supervised learning, since the AI learns from cases where the correct outcome is already known. For example, to train a model to recognize DNA profiles, we use data from solved cases where the expected result is clearly established.
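
To make the idea of supervised learning concrete, here is a deliberately tiny sketch using a 1-nearest-neighbour rule, one of the simplest possible learning methods. The feature values and labels are toy data invented for the example, not real forensic measurements:

```python
import math

# Toy training set: each example pairs an input (two made-up trace features)
# with the known, verified outcome from a solved case.
train = [
    ([0.9, 0.1], "match"),
    ([0.8, 0.2], "match"),
    ([0.2, 0.9], "no-match"),
    ([0.1, 0.8], "no-match"),
]

def predict(x, examples):
    """1-nearest-neighbour: label a new input by its closest known example."""
    best_features, best_label = min(examples, key=lambda ex: math.dist(x, ex[0]))
    return best_label

print(predict([0.85, 0.15], train))  # "match": closest to the known matches
```

Real systems replace the nearest-neighbour rule with far richer models, but the principle is the same: the algorithm generalizes from inputs whose correct outcome is already established.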

An AI’s performance depends on the quality of the data that trains it.

The larger and more diverse the training dataset, the better the AI will be at detecting reliable and robust patterns. However, not all data is equal. It must be of high quality (e.g., properly labeled images, DNA profiles free from input errors) and cover a wide enough range of situations. If the system is biased by being exposed to only a narrow range of cases, it may fail when confronted with a slightly different scenario. In genetics, for instance, this means including profiles from various ethnic backgrounds, varying degrees of degradation, and complex mixture configurations so the algorithm can learn to handle all potential sources of variation.

Transparency in data composition is essential. Studies have shown that some forensic databases are demographically unbalanced—for example, the U.S. CODIS database contains an overrepresentation of profiles from African-American individuals compared to other groups [6]. A model naively trained on such data could inherit systemic biases and produce less reliable or less fair results for underrepresented populations. It is therefore crucial to monitor training data for bias and, if necessary, to correct it (e.g., through balanced sampling, augmentation of minority data) in order to achieve fair and equitable learning.

Data Collection: Gathering diverse and representative datasets
Data Preprocessing: Cleaning and preparing data for training
AI Training: Training algorithms on prepared datasets
Data Validation: Verifying the quality and diversity of the data
Bias Evaluation: Identifying and correcting biases in the datasets

Technically, training an AI involves rigorous steps of cross-validation and performance measurement. We generally split data into three sets: one for training, another for validation during development (to adjust the parameters), and a final test set to objectively evaluate the model. Quantitative metrics such as accuracy, recall (sensitivity), or error curves make it possible to quantify how reliable the algorithm is on data it has never seen [6]. For example, one can check that the AI correctly identifies a large majority of perpetrators from traces while maintaining a low rate of false positives. Increasingly, we also integrate fairness and ethical criteria into these evaluations: performance is examined across demographic groups or testing conditions (gender, age, etc.), to ensure that no unacceptable bias remains [6]. Finally, compliance with legal constraints (such as the GDPR in Europe, which regulates the use of personal data) must be built in from the design phase of the system [6]. That may involve anonymizing data, limiting certain sensitive information, or providing procedures in case an ethical bias is detected.
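
The split-and-evaluate workflow just described can be sketched in a few lines. The 60/20/20 ratios, the toy labels, and the fixed seed are arbitrary choices for the example:

```python
import random

def split(records, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle, then cut into train / validation / test partitions."""
    records = list(records)
    random.Random(seed).shuffle(records)
    n_train = int(ratios[0] * len(records))
    n_val = int(ratios[1] * len(records))
    return (records[:n_train],
            records[n_train:n_train + n_val],
            records[n_train + n_val:])

def recall(y_true, y_pred, positive="match"):
    """Share of actual positives that the model finds (sensitivity)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    actual = sum(1 for t in y_true if t == positive)
    return tp / actual if actual else 0.0

train_set, val_set, test_set = split(range(100))
print(len(train_set), len(val_set), len(test_set))  # 60 20 20
print(recall(["match", "match", "no"], ["match", "no", "no"]))  # 0.5
```

The essential discipline is that the test partition is never touched during development, so the final metric reflects data the model has genuinely never seen.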

Ultimately, an AI’s performance depends on the quality of the data that trains it. In the forensic field, that means algorithms “learn” from accumulated human expertise. Every algorithmic decision implies the experience of hundreds of experts who provided examples or tuned parameters. It is both a strength – capitalizing on a vast knowledge base – and a responsibility: to carefully select, prepare, and control the data that will feed the artificial intelligence.

Technical and operational challenges for integrating AI into forensic science


While AI promises substantial gains, its concrete integration in the forensic field faces many challenges. It is not enough to train a model in a laboratory: one must also be able to use it within the constrained framework of a judicial investigation, with all the reliability requirements that entails. Among the main technical and organisational challenges are:

  • Access to data and infrastructure: Paradoxically, although AI requires large datasets to learn, it can be difficult to gather sufficient data in the specific forensic domain. DNA profiles, for example, are highly sensitive personal data, protected by law and stored in secure, sequestered databases. Obtaining datasets large enough to train an algorithm may require complex cooperation between agencies or the generation of synthetic data to fill gaps. Additionally, computing tools must be capable of processing large volumes of data in reasonable time, which requires investment in hardware (servers, GPUs for deep learning) and specialized software. Some national initiatives are beginning to emerge to pool forensic data securely, but this remains a work in progress.
  • Quality of annotations and bias: The effectiveness of AI learning depends on the quality of the annotations in training datasets. In many forensic areas, establishing “ground truth” is not trivial. For example, to train an algorithm to recognize a face in surveillance video, each face must first be correctly identified by a human, which can be difficult if the image is blurry or partial. Similarly, labeling datasets of footprints, fibers, or fingerprints requires meticulous work by experts and sometimes involves subjectivity. If the training data include annotation errors or historical biases, the AI will reproduce them [6]. A common bias is the demographic representativeness issue noted above, but there are others: if a weapon-detection model is trained mainly on images of weapons indoors, it may perform poorly at detecting a weapon outdoors, in the rain, and so on. The quality and diversity of annotated data are therefore a major technical issue. This implies establishing rigorous data collection and annotation protocols (ideally standardized at the international level), as well as ongoing monitoring to detect model drift (overfitting to certain cases, performance degradation over time, etc.). Validation relies on experimental studies comparing AI performance to that of human experts. However, the complexity of approval and procurement procedures often slows adoption, delaying the deployment of new tools in forensic science by several years.
  • Understanding and Acceptance by Judicial Actors: Introducing artificial intelligence into the judicial process inevitably raises the question of trust. An investigator or a laboratory technician, trained in conventional methods, must learn to use and interpret the results provided by AI. This requires training and a gradual cultural shift so that the tool becomes an ally rather than an “incomprehensible black box.” More broadly, the judges, attorneys, and jurors who will have to discuss this evidence must also grasp its principles. Yet explaining the inner workings of a neural network or the statistical meaning of a similarity score is far from simple. We sometimes observe misunderstanding of, or suspicion toward, these algorithmic methods among certain judicial actors [6]. If a judge does not understand how a conclusion was reached, they may be inclined to reject it or assign it less weight, out of caution. Similarly, a defence lawyer will legitimately scrutinize the weaknesses of a tool they do not know, which may lead to judicial debates over the validity of the AI. A major challenge is thus to make AI explainable (the “XAI” concept: eXplainable Artificial Intelligence), or at least to present its results in a format that is comprehensible and pedagogically acceptable to a court. Without this, the integration of AI risks facing resistance or sparking controversy at trial, limiting its practical contribution.
  • Regulatory Framework and Data Protection: Finally, forensic sciences operate within a strict legal framework, notably regarding personal data (DNA profiles, biometric data, etc.) and criminal procedure. The use of AI must comply with these regulations. In France, the CNIL (Commission Nationale de l’Informatique et des Libertés) keeps watch and can impose restrictions if algorithmic processing harms privacy. For example, training an AI on nominal DNA profiles without a legal basis would be inconceivable. Innovation must therefore remain within legal boundaries, which imposes constraints from the design phase of projects. Another issue concerns trade secrecy surrounding certain algorithms in judicial contexts: if a vendor refuses to disclose the internal workings of its software for intellectual property reasons, how can the defence or the judge assess its reliability? Recent cases have seen defendants convicted on the basis of proprietary software (e.g., for DNA analysis) without the defence being able to examine the source code used [7]. These situations raise issues of transparency and the rights of the defence. In the United States, a proposed law titled the Justice in Forensic Algorithms Act aims precisely to ensure that trade secrecy cannot prevent experts from examining the algorithms used in forensics, in order to guarantee fairness in trials. This underlines the necessity of adapting regulatory frameworks to these new technologies.

A lack of cooperation slows the development of powerful tools and limits their adoption in the field.

  • Another more structural obstacle lies in the difficulty of integrating hybrid profiles within forensic institutions, at least in France. Today, competitive examinations and recruitment often remain compartmentalized between specialties, limiting the emergence of experts with dual expertise. For instance, in forensic police services, entrance exams for technicians or engineers are divided into distinct specialties such as biology or computer science, with no pathway to recognize combined expertise in both fields. This institutional rigidity slows the integration of professionals capable of bridging domains and fully exploiting the potential of AI in criminalistics. Yet current technological advances show that the analysis of biological traces increasingly relies on advanced digital tools. Faced with this evolution, greater flexibility in the recruitment and training of forensic experts will be necessary to meet tomorrow’s challenges.

AI in forensics must not become a matter of competition or prestige among laboratories, but a tool put at the service of justice and truth, for the benefit of investigators and victims.

  • A further major barrier to innovation in forensic science is the compartmentalization of efforts among different stakeholders, who often work in parallel on identical problems without pooling their advances. This lack of cooperation slows the development of effective tools and limits their adoption in the field. By sharing our resources, whether databases, methodologies, or algorithms, we could accelerate the deployment of AI solutions into production and guarantee continuous improvement based on collective expertise. My experience across different French laboratories (the Lyon laboratory of the Service National de Police Scientifique (SNPS), the Institut de Recherche Criminelle de la Gendarmerie Nationale (IRCGN), and now the Institut Génétique Nantes Atlantique (IGNA)) has shown me how much this fragmentation hampers progress, even though we pursue a common goal: improving the resolution of investigations. This is why it is essential to promote open-source development where possible and to create collaboration platforms among public and judicial entities. AI in forensics must not be a matter of competition or prestige among laboratories, but a tool in the service of justice and truth, for the benefit of investigators and victims alike.

The challenges discussed above all have technical dimensions, but they are closely intertwined with fundamental ethical and legal questions. From an ethical standpoint, the absolute priority is to avoid injustice through the use of AI. We must at all costs prevent a poorly designed algorithm from leading to someone’s wrongful indictment or, conversely, to the release of a guilty party. This requires mastering biases (to avoid discriminating against certain groups), ensuring transparency (so that every party in a trial can understand and challenge algorithmic evidence), and establishing accountability for decisions. Indeed, who is responsible if an AI makes an error? The expert who misused it, the software developer, or no one, because “the machine made a mistake”? This ambiguity is unacceptable in justice: it is essential to always keep human expertise in the loop, so that the final decision, whether to accuse or exonerate, rests on a human evaluation informed by AI, and not on the opaque verdict of an automated system.

On the legal side, the landscape is evolving to regulate the use of AI. The European Union, in particular, is finalizing an AI Regulation (the AI Act), which will be the world’s first legislation establishing a framework for the development, commercialization, and use of artificial intelligence systems [8]. Its goal is to minimize risks to safety and fundamental rights by imposing obligations that depend on the application’s level of risk (and forensic or criminal justice applications will undoubtedly be categorized among the most sensitive). In France, the CNIL has published recommendations emphasizing that innovation can be reconciled with respect for individual rights during the development of AI solutions [9]. This involves, for example, compliance with the GDPR, purpose limitation (training a model only for legitimate and clearly defined objectives), proportionality in data collection, and prior impact assessments for any system likely to significantly affect individuals. These safeguards aim to ensure that enthusiasm for AI does not come at the expense of the fundamental principles of justice and privacy.

Encouraging Innovation While Demanding Scientific Validation and Transparency

A delicate balance must therefore be struck between technological innovation and its regulatory framework. On one hand, overly restricting the experimentation with and adoption of AI in forensics could deprive investigators of tools potentially decisive for solving complex cases. On the other, leaving the field unregulated and unchecked would invite judicial errors and violations of rights. The solution likely lies in a measured approach: encouraging innovation while demanding solid scientific validation and transparency of methods. Ethics committees and independent experts can be involved to audit algorithms and to verify that they comply with standards and do not replicate problematic biases. Furthermore, legal professionals must be informed and trained in these new technologies so they can meaningfully debate their probative value in court. A judge trained in the basic concepts of AI will be better placed to understand the evidentiary weight (and limitations) of evidence derived from an algorithm.

Conclusion: The Future of Forensics in the AI Era

Artificial intelligence is set to deeply transform forensics, offering investigators analysis tools that are faster, more accurate, and capable of handling volumes of data once considered inaccessible. Whether it is sifting through gigabytes of digital information, comparing latent traces with improved reliability, or untangling complex DNA profiles in a matter of minutes, AI opens new horizons for solving investigations more efficiently.

But this technological leap comes with crucial challenges. Learning techniques, database quality, algorithmic bias, transparency of decisions, regulatory frameworks: these are the stakes that will determine whether AI can truly strengthen justice without undermining it. At a time when public trust in digital tools is more than ever under scrutiny, it is imperative to integrate these innovations with rigor and responsibility.

The future of AI in forensics will not be a confrontation between machine and human, but a collaboration in which human expertise remains central. Technology may help us see faster and farther, but interpretation, judgment, and decision-making will remain in the hands of forensic experts and the judicial authorities. Thus, the real question may not be how far AI can go in forensic science, but how we will frame it to ensure ethical and equitable justice. Will we be able to harness its power while preserving the very foundations of a fair trial and the right to a defence?

The revolution is underway. It is now up to us to make it progress, not drift.

Bibliography

[1] Océane Duboust. L’IA peut-elle aider la police scientifique à trouver des similitudes dans les empreintes digitales ? Euronews, 12/01/2024. Available at: https://fr.euronews.com/next/2024/01/12/lia-peut-elle-aider-la-police-scientifique-a-trouver-des-similitudes-dans-les-empreintes-d#:~:text=,il [accessed 15/03/2025]
[2] Muhammad Arjamand et al. The Role of Artificial Intelligence in Forensic Science: Transforming Investigations through Technology. International Journal of Multidisciplinary Research and Publications, vol. 7, issue 5, pp. 67-70, 2024. Available at: http://ijmrap.com/ [accessed 15/03/2025]
[3] Gendarmerie Nationale. Kit universel, puce RFID, IA : le PJGN à la pointe de la technologie sur l’ADN. Updated 22/01/2025. Available at: https://www.gendarmerie.interieur.gouv.fr/pjgn/recherche-et-innovation/kit-universel-puce-rfid-ia-le-pjgn-a-la-pointe-de-la-technologie-sur-l-adn [accessed 15/03/2025]
[4] Michelle Taylor. EXCLUSIVE: Brand New Deterministic Software Can Deconvolute a DNA Mixture in Seconds. Forensic Magazine, 29/03/2022. Available at: https://www.forensicmag.com [accessed 15/03/2025]
[5] Sébastien Aguilar. L’ADN à l’origine des portraits-robot ! Forenseek, 05/01/2023. Available at: https://www.forenseek.fr/adn-a-l-origine-des-portraits-robot/ [accessed 15/03/2025]
[6] Max M. Houck. CSI/AI: The Potential for Artificial Intelligence in Forensic Science. iShine News, 29/10/2024. Available at: https://www.ishinews.com/csi-ai-the-potential-for-artificial-intelligence-in-forensic-science/ [accessed 15/03/2025]
[7] Mark Takano. Black box algorithms’ use in criminal justice system tackled by bill reintroduced by Reps. Takano and Evans. Takano House, 15/02/2024. Available at: https://takano.house.gov/newsroom/press-releases/black-box-algorithms-use-in-criminal-justice-system-tackled-by-bill-reintroduced-by-reps-takano-and-evans [accessed 15/03/2025]
[8] Mon Expert RGPD. Artificial Intelligence Act : la CNIL répond aux premières questions. Available at: https://monexpertrgpd.com [accessed 15/03/2025]
[9] CNIL. Les fiches pratiques IA. Available at: https://www.cnil.fr [accessed 15/03/2025]

Definitions

  1. GPU (Graphics Processing Unit)
    A GPU is a specialized processor designed to perform massively parallel computations. Originally developed for rendering graphics, it is now widely used in artificial intelligence applications, particularly for training deep learning models. Unlike CPUs (central processing units), which are optimized for sequential, general-purpose tasks, GPUs contain thousands of cores designed to execute numerous operations simultaneously on large datasets.
  2. Machine Learning
    Machine learning is a branch of artificial intelligence that enables computers to learn from data without being explicitly programmed. It relies on algorithms capable of detecting patterns, making predictions, and improving performance through experience.
  3. Deep Learning
    Deep learning is a subfield of machine learning that uses artificial neural networks composed of multiple layers to model complex data representations. Inspired by the human brain, it allows AI systems to learn from large volumes of data and enhance their performance over time. Deep learning is especially effective for processing images, speech, text, and complex signals, with applications in computer vision, speech recognition, forensic science, and cybersecurity.
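To make the machine-learning definition above concrete, here is a minimal sketch of a program that learns a rule from labelled examples instead of being explicitly programmed with it. The data, the threshold rule, and all function names are invented for illustration:

```python
# Minimal perceptron: learns a linear rule from labelled examples
# instead of being explicitly programmed with that rule.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Return weights and bias learned from (features, label) pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when the prediction is already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy data: points labelled 1 when x + y exceeds 1 (the rule the
# model must discover from examples alone).
X = [(0.0, 0.0), (0.2, 0.3), (0.9, 0.8), (1.0, 0.5), (0.1, 0.2), (0.8, 0.9)]
y = [0, 0, 1, 1, 0, 1]

w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])
```

The perceptron is the simplest possible learner; the deep-learning models described above stack many layers of similar units, which is what makes them effective on images, speech, and other complex signals.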

Cats — reliable witnesses in criminal investigations?

It is well known that our pets love affection and display unwavering loyalty. What has recently been discovered, however, is that they can also carry valuable evidence in the context of a criminal investigation.

Researchers from Flinders University in Australia took a close interest in the cats that inhabit many Australian households — not to study their habits or charming quirks, but for a much more forensic reason. These experts in criminalistics, genetics and forensic medicine sought to determine to what extent these animals could act as receptacles for human DNA and potentially transfer it onto other surfaces.

A single contact is enough to cause transfer

To test this hypothesis, the scientists conducted a study on twenty cats from fifteen households, collecting samples from four different areas: the fur on the head, back and right side of the body, and the skin located on the left flank of the animals.

A significant amount of DNA was recovered, mostly from the cats’ fur and, to a lesser extent, from their skin. Unsurprisingly, most of this genetic material belonged to the cats’ owners. More surprisingly, however, DNA from individuals outside the household was also detected in 47% of the samples—particularly from cats that regularly roamed their neighborhood. This demonstrates that feline fur readily captures human DNA, not only through petting but also via brief, incidental contact. Moreover, the study showed that DNA transfer can also occur in the opposite direction—from the cat to another person or to an object—simply by tapping or scratching the animal’s fur with a bare or gloved hand. In both cases, the recovered traces proved sufficient to identify an individual.

The cat: a silent but relevant witness

For forensic investigators, the findings of this study open up new perspectives in the way a crime scene is approached. From now on, an animal present at the scene can be considered as potential evidence if it is suspected to have come into contact with the perpetrator of a crime. Since their fur acts as a true reservoir of DNA, collecting samples from the areas identified by the researchers could make it possible to identify offenders—or, conversely, to rule out certain suspects.

It should be noted that this ability to capture human DNA is not unique to cats. Dogs, which are also common household companions across the world, have proven to be excellent collectors of genetic material as well.

Read full article here.

The sexome: a potential source of evidence in sexual assault cases

Thanks to new sampling and analytical techniques, forensic science now plays an essential role in solving sexual crimes. In cases where the search for semen fails, the sexome—also referred to as the genital microbiome—could take over and become a complementary, or even decisive, investigative tool.

What is the sexome? At a time when the importance of the human microbiota is being recognized in numerous areas of health, researchers are no longer confined to studying the bacterial flora that colonizes the skin and the gut. They are also focusing on the microorganisms that inhabit the male and female genital areas—the genital microbiome. Their work primarily addresses health-related questions, such as the prevention of sexually transmitted infections, but its implications extend further.

A unique microbial signature

A study conducted by a team of researchers from Murdoch University in Perth, Australia, on about a dozen heterosexual couples, demonstrated that each individual possesses a distinctive genital microbial flora. This flora, more abundant in women than in men, is transferred from one partner to another during sexual intercourse. According to Brendan Chapman, forensic scientist and co-author of the study, the discovery of these microbial “traces” could offer an effective alternative for identifying perpetrators of sexual offences.

Identification possible even in condom-protected intercourse

According to the scientists behind this discovery, this new technique could play a decisive role when semen DNA analysis proves problematic. The collection of biological material from victims of sexual assault is now highly advanced and, thanks to genetic databases, enables numerous identifications. However, this method faces several challenges, particularly related to time constraints. Beyond 48 hours, the quantity of sperm cells decreases dramatically and may no longer be sufficient for conclusive DNA analysis. Furthermore, in the absence of ejaculation or when a condom has been used, these biological traces are nonexistent.

By contrast, with the help of advanced sequencing techniques, it is possible to detect the sexual microbial signature transferred from one partner to another in samples collected up to five days after sexual contact. Even more remarkably, these transfers can still be detected after washing the genital area, and—though in smaller quantities—even when a condom has been used. In such cases, explains Brendan Chapman, it is mainly components of the female sexual microbiome that are recovered from the male genital area. This approach could help identify more sexual offenders even in the absence of DNA evidence, without requiring additional samples to be taken from already deeply traumatized victims.
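The matching idea behind this research can be pictured, in a deliberately simplified form, as measuring the overlap between the sets of bacterial taxa detected in two samples. The taxa names and the Jaccard measure below are illustrative assumptions, not the method used by the Murdoch University team:

```python
# Toy sketch: comparing "sexome" samples by the overlap of detected
# bacterial taxa (Jaccard similarity). Taxa names are invented; real
# studies rely on sequencing reads and far richer statistics.

def jaccard(a, b):
    """Similarity between two sets of detected taxa, from 0 to 1."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

reference = {"Lactobacillus_iners", "Gardnerella", "Prevotella", "Atopobium"}
sample_1  = {"Lactobacillus_iners", "Gardnerella", "Prevotella", "Finegoldia"}
sample_2  = {"Staphylococcus", "Corynebacterium", "Cutibacterium"}

print(round(jaccard(reference, sample_1), 2))  # strong overlap with the reference profile
print(round(jaccard(reference, sample_2), 2))  # no overlap
```

In practice, the discriminating power comes from how distinctive each person's flora is and how stable the transferred signature remains over time, which is exactly what the factors mentioned above (such as the menstrual cycle) may affect.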

The next step for scientists is to refine the technique by determining which factors can influence the sexome—particularly the vaginal microbiome, which fluctuates with the menstrual cycle—since such variations may affect the accuracy of results. This promising line of research opens new perspectives for forensic science.

Read the full study here.

Artificial Intelligence (AI): A lever in the fight against crime

By Benoit Fayet, Defense & Security Consultant at Sopra Steria Next, member of the Strategic Committee of the CRSI, and Bruno Maillot, Data and Artificial Intelligence Expert at Sopra Steria Next, for the Center for Reflection on Internal Security.

Context

Most French citizens experience AI in their daily lives—through transportation, e-commerce, energy, healthcare, smart homes, agriculture, and more—often without even realizing it. However, AI remains less prevalent in the field of security and in the work carried out by France’s Internal Security Forces (FSI, police and gendarmerie). This is despite the fact that, for years, IT systems and new technologies have already transformed these professions, while the Armed Forces and local authorities have embraced them far more extensively, sometimes for closely related challenges. Today, police officers and gendarmes rely heavily on digital tools, particularly for:

Their daily activities, using information systems and applications to take complaints, draft reports, consult information on individuals, or through the development and employment of biometric technologies—widely used for identification and authentication, such as fingerprinting.

Field communications, through dedicated communication networks and mobile devices that assist them during patrols or interventions.

Monitoring delinquency, especially at the local level or in crisis management situations (video surveillance, command centers, etc.).

Victim support, with the recent development of online platforms and applications offering the same services as in physical units (filing complaints, reporting incidents, etc.).

Artificial Intelligence represents a decisive lever to reinforce each of these existing digital uses by police officers and gendarmes. The digital tools they already possess, the wealth of data they process daily, and their operational needs could allow this, offering the Ministry of the Interior a new digital revolution.

Indeed, AI is not just another tool; it is a disruptive innovation capable of profoundly transforming the professions and practices of police and gendarmerie personnel, particularly in areas under strain or in crisis, such as criminal investigations. AI could also alleviate many of the daily frustrations that French citizens face regarding security. For example, by reducing the time officers spend on technical or administrative tasks in their units, AI could free them up to spend more time in public spaces, or by enhancing investigative capabilities, it could improve clearance rates for certain offenses. AI’s analytical capabilities in processing complex datasets could also strengthen the fight against organized crime and drug trafficking.

Deploying AI systems, however, requires several prerequisites. First and foremost, mastering the national and European legal frameworks governing AI is essential. In addition, clear political guidelines for AI use must be established to ensure acceptance both by police officers and gendarmes themselves and by the public, so that AI is recognized as a tool—not an end in itself. Decision-making and oversight must always remain in human hands, to avoid slipping into the “civilization of machines,” as Georges Bernanos warned in France Against the Robots (1947).

Finally, in a context of growing cyber threats and challenges to our sovereignty, it is essential to ensure the maturity and resilience of the technologies employed, while identifying the most secure tools. A key concern is the lack of technological sovereignty within the EU and France regarding AI solutions, which currently come mostly from outside Europe. It is therefore crucial to identify AI tools that do not expose Europe and France to loss of sovereignty or increased vulnerability to intelligence and influence operations.

The objectives of this article are therefore to analyze the opportunities enabled by the current legal framework for integrating AI into internal security, and to identify concrete, realistic operational uses in the near future that remain technologically controlled and secure.

Early uses of AI underway—will the Paris 2024 Olympics mark a turning point?

Projects already exist in France, whether in public space crime management, administrative activities, or investigative work. Recent innovations in AI have been deployed in connection with the Paris 2024 Olympic Games.

AI used to support decision-making in crime prevention

AI has already been applied because it aligns closely with the core mission of France’s Internal Security Forces (FSI): anticipating and preventing crime. AI has not been developed to predict crime, but rather to better understand and analyze it, and ultimately to assist in decision-making. Crime is not a random phenomenon; it can be analyzed by gathering statistical data on a given territory and feeding it into models that help the FSI operate more effectively in that area (for example, patrol locations and schedules). Analytical methods have been used by the Gendarmerie nationale on non-personal data from the Ministry of the Interior’s Statistical Service for Internal Security (SSMSI), which were then exploited through data visualization to map and monitor the evolution of delinquency within a territory. These are not predictive policing tools—they forecast nothing—but instead provide decision-support analysis based on past events. They offer orientation to FSI units, who cannot realistically cross-check such volumes of data without AI’s analytical capacity. The method, for example, consists of identifying where burglaries or vehicle-related offenses occurred within a defined period and territory in order to infer where the next ones are likely to occur. The aim is to target specific areas and plan police deployments in locations where offenses are at risk of happening, thereby deterring crime.
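The decision-support method described above, counting past offenses per area and time slot to orient patrols rather than predicting anything, can be sketched as follows (sector names, time slots, and counts are invented):

```python
# Toy decision-support sketch: count past offences per (area, time slot)
# and rank the busiest combinations to orient patrols. This forecasts
# nothing; it only summarises past events, as described in the text.
from collections import Counter

incidents = [
    ("sector_A", "night"), ("sector_A", "night"), ("sector_A", "day"),
    ("sector_B", "night"), ("sector_C", "day"),  ("sector_A", "night"),
]

counts = Counter(incidents)
ranked = counts.most_common()  # most frequent (area, slot) pairs first
print(ranked[0])               # busiest pair with its incident count
```

A real system would work on far larger statistical datasets (such as the SSMSI data mentioned above) and feed the ranking into a map rather than a list, but the principle is the same: orientation based on past events, with the deployment decision left to the unit.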

Other experiments with more predictive tools—extending beyond decision support to actual risk or occurrence prediction—have also been conducted but have not demonstrated significant operational added value.

AI developed to support data processing in criminal investigations

Early AI-based data processing tools have also been developed by the Gendarmerie nationale to assist in investigative phases. Tools can, for example, support FSI in monitoring communications during an investigation by detecting spoken languages in court-authorized telephone interceptions, transcribing and translating conversations, and flagging relevant topics for the case through recurrent neural networks.
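As a deliberately crude stand-in for the topic-flagging step (the real tools rely on neural networks for transcription, translation, and detection), the sketch below simply marks transcribed calls that mention case keywords; the transcripts and keywords are invented:

```python
# Deliberately crude stand-in for "flagging relevant topics": mark
# transcribed, court-authorised intercepts that mention case keywords.
# Real systems use neural models; all data here is invented.

def flag_calls(transcripts, keywords):
    """Return (call_id, matched_keywords) for transcripts worth review."""
    flagged = []
    for call_id, text in transcripts.items():
        hits = [k for k in keywords if k in text.lower()]
        if hits:
            flagged.append((call_id, hits))
    return flagged

transcripts = {
    "call_001": "Meet at the warehouse tonight with the package.",
    "call_002": "Happy birthday! See you at dinner.",
}
keywords = ["warehouse", "package", "delivery"]

print(flag_calls(transcripts, keywords))
```

The value of the neural approach over this keyword toy is precisely that it can flag a relevant topic even when none of the expected words appear, which is why the text describes recurrent networks rather than simple matching.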

Another project has enabled the transcription of videotaped victim interviews and the annotation of procedural documents (persons, places, dates, objects, etc.).

Finally, the Digital Agency for Internal Security Forces (ANFSI), responsible for developing their digital equipment, is experimenting with a tool for producing intervention reports generated by “voice command” on NEO mobile devices.

A decisive shift with the Paris 2024 Olympics?

During the Paris 2024 Olympics, “augmented video” was authorized in Île-de-France under the supervision of the Paris Police Prefecture. For the first time, the law of May 19, 2023 authorized the deployment of AI in video surveillance, within a strict framework explicitly excluding facial recognition. The experimentation focused solely on detecting predefined events, such as abandoned objects, the presence or use of weapons, vehicles failing to respect traffic directions or restricted zones, crowd movements, and fire outbreaks. Article 10 specifically authorized AI processing on certain video streams from fixed cameras to detect these situations, with the goal of securing events particularly exposed to risks of terrorism or threats to public safety. An evaluation committee for these algorithmic cameras is expected to deliver a report by the end of 2024. Several use cases of intelligent video surveillance have already been deemed highly effective, notably those enabling the detection of individuals in restricted zones (facilitating the adjustment of police presence), the detection of crowd density or movements linked to fights, and interventions in urban transport systems.

In summary, while projects exist, they remain limited in scope and far from generalized deployment. Any large-scale adoption must occur within a constrained and evolving legal framework.

In France, a strict framework shaped by the CNIL and political efforts to move forward

The CNIL (French Data Protection Authority) has issued several specific recommendations to ensure that AI system deployments respect individuals’ privacy, in line with the provisions of the 1978 « Informatique et Libertés » law and the 2016 European “Police-Justice” directive, which defines data protection rules for information systems used by Internal Security Forces (FSI). Public authorities responsible for AI systems must comply with transparency obligations, making evaluations of such systems public, and follow the principle of “double proportionality.” This principle ensures that AI use is justified both in terms of the operational framework (patrols, criminal investigations, or counter-terrorism threats) and the type of data involved (personal data, statistical data, etc.). For the CNIL, the general rules of data protection (storage duration, independent oversight, etc.) apply equally to AI systems.

At the same time, the Ministry of the Interior and the legislature have advanced along the path outlined by the CNIL—through the 2020 White Paper on Internal Security and the 2023 Loi d’Orientation et de Programmation du ministère de l’Intérieur (LOPMI). These frameworks identified and legally codified specific use cases that may justify AI use in the security sector. They also introduced safeguards for experimentation, particularly in preparation for the Paris 2024 Olympic Games: data anonymization, secure storage, and ensuring that decisions and control remain in the hands of human agents.

A strengthened European framework with the AI Act

Complementing the French framework, the European Commission drafted the AI Act, aimed at regulating the use of AI in Europe, which was adopted by the European Parliament in December 2023 and scheduled to come into effect in August 2026. Its aim is to ensure that AI systems used in the EU are safe, transparent, and under human oversight. Generative AI systems capable of producing texts, code, or images are subject to particular scrutiny. The AI Act then establishes a detailed legal framework for public sector use of AI, including security applications:

Prohibited AI systems deemed dangerous: biometric identification in public spaces, facial recognition databases (including those based on open-source data), predictive policing systems, etc.

High-risk AI systems: allowed under strict conditions, requiring documentation, human oversight, compliance procedures, and continuous evaluation (e.g., biometric categorization systems, migration management tools).

Limited-risk AI systems: permitted but subject to transparency requirements (e.g., object detection systems). (By February 2025, prohibited AI systems must be withdrawn or brought into compliance. By August 2025, high-risk and limited-risk systems must be fully compliant).

It should be noted that the AI Act provides exceptions, particularly for law enforcement operations. Remote facial recognition (via camera or drone) may be permitted, but only under prior judicial authorization and within a strictly defined list of crimes—such as the search for a convicted or suspected serious offender.

Prospects for the Use and Application of AI in Internal Security

Building on the reflections already undertaken and the regulatory framework now in place, it is time to look ahead at the concrete contributions AI could bring to the professions of the national police and gendarmerie. This involves leveraging existing technologies, recent developments—particularly in generative AI—and identifying the conditions required for such use: communication and information-sharing, data access, simplification of technical tasks, data analysis in investigative phases, and more.

It is important to emphasize that the use cases identified in this note are part of a forward-looking perspective. They take into account the regulatory framework described earlier and are grounded in the idea that AI should provide operational added value to the FSI, while safeguarding ethical principles regarding data protection. This approach must remain far removed from the practices of certain non-European countries, which would undermine the French democratic model. AI must support the Internal Security Forces (FSI), without becoming “the agent.” Tasks that may be entrusted to AI must always remain under human primacy in terms of oversight and validation. Delegation to AI should therefore accelerate action and decision-making, without creating dependence. The key lies in identifying appropriate use cases, particularly those involving tasks with little or no added value, so that FSI personnel retain their decision-making capacity and agency.

AI to Optimize Communication and Information-Sharing Among FSI

In today’s deteriorated security environment, communication and data-sharing are critical—whether during routine patrols, interventions requiring situational awareness, or more serious operations such as counter-narcotics or counter-terrorism missions.

Concrete use cases include the ability to centralize and process data from FSI mobile equipment or from video surveillance systems (video, audio, radio, conversations, and calls between units). These capabilities are currently unattainable but could become feasible with AI-powered tools, especially given the ever-increasing volumes of data being collected. Such tools would enhance operational performance by improving situational awareness and could be integrated into the ongoing transformation of FSI communication systems through the deployment of a national high-speed mobile network. AI could thus be a decisive enabler for faster information and intelligence-sharing, ensuring that actionable insights reach police and gendarmes in the field quickly enough to address emerging threats—for example, by detecting weak signals linked to operational drug intelligence units (CROSS) or through partnerships involving local authorities, municipal police, and associations. AI could process and qualify shared information almost in real time.

AI to Generate Knowledge and Support FSI Action in Real Time

As Internal Security Forces (FSI) increasingly produce data through their mobile devices, they operate in an environment where third-party data is also multiplying. To address this dual evolution, a data-valorization strategy leveraging AI could be developed, combining retrospective data analysis (already available in existing decision-support tools) with the enrichment of operational information in real time (e.g., patrol geolocation, AI-generated analytical notes), algorithmic developments, and the integration of external datasets (in compliance with the AI Act). This could include, for instance, analyzing real mobility flows across urban transport networks during an arrest mission or monitoring road traffic to detect accidents and disruptions in real time, thereby enabling faster and better-informed responses.

One of AI’s distinctive features is its ability to automatically flag incidents. When predefined conditions or scenarios are met—such as fights or crowd movements—an AI-based system can automatically generate detailed incident reports and dispatch alerts to FSI units for immediate assessment. This not only accelerates the documentation process but also ensures that minor infractions or disturbances (e.g., acts of vandalism or incivilities) that might otherwise go unnoticed are reported and addressed.
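Such automatic flagging can be pictured as a simple rule engine: when a detected event matches a predefined scenario, a structured alert is emitted for human assessment. The event types, camera identifiers, and report format below are invented for illustration:

```python
# Rule-based sketch of automatic incident flagging: when a detected
# event matches a predefined scenario, emit a structured alert for a
# human operator to assess. Event names and formats are invented.

SCENARIOS = {"fight", "crowd_movement", "abandoned_object"}

def make_alerts(events):
    """events: list of dicts with 'type', 'camera', and 'time' keys."""
    alerts = []
    for e in events:
        if e["type"] in SCENARIOS:
            alerts.append(
                f"ALERT [{e['time']}] {e['type']} on camera {e['camera']} "
                "- pending human review"
            )
    return alerts

events = [
    {"type": "crowd_movement", "camera": "C12", "time": "18:04"},
    {"type": "pedestrian",     "camera": "C12", "time": "18:05"},
]
for alert in make_alerts(events):
    print(alert)
```

Note that the sketch only generates a report; consistent with the human-primacy principle stated earlier, the decision to intervene stays with the operator.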

Moreover, the growing volume of available data can provide real-time access to a wider range of information. Tactical awareness could thus be enhanced by combining operational data (patrol geolocation, including other “security producers” such as municipal police or private security, and the geolocation of individuals targeted in an investigation), contextual data (points of interest, population density, infrastructure status), and sensor data (body-worn cameras, etc.). AI could retrieve, structure, and deliver these diverse datasets in real time to FSI officers on the ground, enabling faster intervention times (e.g., automated data transmission).

The challenge in this case lies in clearly defining needs and use cases to ensure relevant, actionable data, and in developing appropriate methods of restitution—such as cartographic visualization or automated integration into the information systems and mobile devices used by FSI.

AI to Streamline, Accelerate, and Simplify the Administrative and Technical Tasks of FSI

Internal Security Forces (FSI) often lament that a growing share of their working time is consumed by repetitive, burdensome administrative and drafting tasks with little added value. The use of AI for such “back-office” technical tasks is already widespread in other industries, particularly with generative AI, which shifts from passive analysis to active content creation.

Applied to these technical tasks, AI-driven automation could help FSI save time in their daily activities, allowing them to refocus on their core mission: being visibly present in the field, patrolling the streets, reinforcing public trust, deterring crime, and preventing delinquency. One of the key lessons learned from the Paris 2024 Olympic Games is that the visible, large-scale presence of FSI in public spaces was not only effective but also welcomed by the population.

Supporting Procedure Drafting and Collecting Information

AI opens the door to numerous functionalities to facilitate—or even eliminate—time-consuming repetitive tasks that dominate FSI daily operations, including drafting official reports (procès-verbaux), arrest records, complaint filings, or investigation notes. In the drafting and transcription phases, AI could accelerate report writing, whether at the station or in the field, by generating automated text or providing suggested formulations (e.g., regulatory phrasing), extracting relevant information from documents, accelerating video review by filtering or selecting scenes via semantic queries, or masking specific segments of documents or video (e.g., identifying relevant portions within large volumes of video using transformers).

The use of AI in this context would rely on recurrent neural networks that process data streams while retaining “memory” of texts, word sequences, or sentence patterns, much like biological neural networks—but with exponentially greater computational power. This can add real value to drafting and transcription tasks.

To further enhance efficiency, AI could also amplify the capabilities of tools already deployed by the Ministry of the Interior—for instance, integrating natural language processing into everyday applications (e.g., generating reports or official records via voice commands directly in the field). In this sense, AI is a powerful enabler, giving FSI more time to focus on high-value tasks—for example, during periods of police custody, allowing officers to interrogate suspects or work on case files instead of devoting limited time to repetitive administrative and technical tasks. (By law, police custody lasts 24 hours but may be extended to 48 hours if the alleged offense carries a prison sentence of more than one year, and up to 96 hours for specific crimes such as drug trafficking, terrorism, or organized crime.)
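A minimal sketch of this kind of assisted drafting, assuming structured fields captured in the field (for instance by voice command) and a fixed template rather than a generative model; all field names and wording are invented:

```python
# Template-filling sketch of assisted report drafting: structured
# fields captured in the field are turned into a draft for the officer
# to review and sign. Field names are invented; real tools would add
# generative rephrasing and regulatory wording.

TEMPLATE = (
    "On {date} at {time}, officer {officer} proceeded to {location}. "
    "Observed facts: {facts}. Draft generated automatically - "
    "to be reviewed and validated by the reporting officer."
)

def draft_report(fields):
    required = ("date", "time", "officer", "location", "facts")
    missing = [k for k in required if k not in fields]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return TEMPLATE.format(**fields)

report = draft_report({
    "date": "2025-03-12", "time": "22:40", "officer": "B. Martin",
    "location": "12 rue Exemple, Paris", "facts": "vehicle break-in",
})
print(report)
```

Even in this trivial form, the draft carries an explicit review notice, reflecting the requirement that validation remain with the human agent.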

Fact-Checking and Assisting in Evidence Gathering

The collection of statements, testimonies, and various interviews forms the backbone of investigative work and often represents the first step in uncovering contradictions or verifying facts. The hundreds of documents that typically enrich a case file are still largely transcribed manually by investigators. Increasingly, however—and whenever required by law—these statements are filmed and recorded. In the future, they could be directly recorded and automatically transcribed by an AI-based system, thereby generating data that can be quickly processed and cross-checked by FSI. This would allow investigators to focus on analysis and fact-finding, ultimately improving case resolution rates.

Searching for Information Across Information Systems

AI could also simplify information retrieval on an individual—or a group of individuals—who have been arrested or are being sought. These searches, which involve biographical data and criminal records, are a daily routine for FSI and are performed using multiple police databases (such as the FPR – Fichier des Personnes Recherchées or the TAJ – Traitement des Antécédents Judiciaires). These systems function in silos and communicate very little with each other, partly to comply with their specific legal purposes as required by CNIL principles. As a result, information sharing between databases is limited to certain application interfaces, and investigators often need to consult several systems simultaneously. Given the proliferation of data and the sheer volume to be analyzed daily, AI could overcome this challenge by aggregating information and delivering it directly to FSI—whether for personal safety during an arrest (e.g., understanding how to approach a specific individual), improving the effectiveness of public safety checks (e.g., ensuring the correct identification of a person during a stop), or supporting police investigations. This aggregation capability is one of the major contributions of AI, which must be considered within the legal framework set by the CNIL; properly deployed, it could give FSI easier and faster access to the information they need.

For example, AI could support cross-referencing between different data sources, which is now authorized between Automatic License Plate Recognition (ALPR/LAPI) systems and other databases, such as stolen vehicle registries, vehicle insurance records, or the automated traffic enforcement system. Moreover, AI’s aggregation capabilities could streamline the process of freezing bank accounts through the Ministry of Economy and Finance’s information systems, thereby improving the recovery of fines—including amendes forfaitaires délictuelles (AFD, flat-rate criminal fines)—or directly targeting the financial assets of certain offenders, a political priority emphasized by the Minister of the Interior, Bruno Retailleau.

Securing Police Databases and Their Use

In addition to consultation, a recurring task also involves “feeding” police databases with information about individuals who have been arrested or are wanted—data that includes descriptions of facts, offenses, and most importantly, identity details (biographic or biometric). This stage is critical, particularly in the acquisition of biometric data, as it determines the quality of the databases and ensures that, in the case of an offense or crime, suspects or victims can later be accurately identified. The computing power of AI algorithms can identify and highlight minutiae (specific points on a fingerprint) with greater precision than the human eye, leading to more accurate comparisons.

The interrogation of police databases containing fingerprint or genetic data could also be automated with AI, enabling faster and more reliable comparisons. Moreover, the deployment of automated quality-control checks could secure data acquisition, for example through an application assisting in fingerprint capture and automatically detecting non-compliant fingerprint images. Similarly, AI could enhance the processing of latent fingerprints left unintentionally on surfaces, which are often partial, blurred, or of poor quality. By extrapolating from recognized patterns, AI could fill in missing segments, enabling stronger matches.
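To make the comparison step concrete, here is a deliberately simplified sketch, not an actual AFIS algorithm: the minutia model as (x, y, angle) triplets, the tolerances, and the sample prints are all invented for illustration. Two prints are scored by counting minutiae that fall within a spatial and angular tolerance of each other:

```python
from math import hypot

# A minutia is modelled here as (x, y, angle_in_degrees) — a deliberate
# simplification of the features a real fingerprint system extracts.
def match_score(print_a, print_b, dist_tol=5.0, angle_tol=15.0):
    """Fraction of minutiae in print_a that have a counterpart in
    print_b within the given spatial and angular tolerances."""
    matched = 0
    used = set()
    for (xa, ya, ta) in print_a:
        for i, (xb, yb, tb) in enumerate(print_b):
            if i in used:
                continue
            if hypot(xa - xb, ya - yb) <= dist_tol and abs(ta - tb) <= angle_tol:
                matched += 1
                used.add(i)
                break
    return matched / max(len(print_a), 1)

# Fabricated example: a partial latent print against a reference print.
latent = [(10, 12, 30), (40, 45, 90), (70, 20, 180)]
reference = [(11, 13, 33), (41, 44, 85), (200, 200, 0)]
score = match_score(latent, reference)  # 2 of 3 minutiae match
```

A production system would add alignment (rotation and translation of the latent print) and statistically calibrated thresholds; the sketch only shows why greater precision in minutia localization directly improves the comparison.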


AI to Strengthen Analytical Activities of FSI and Better Combat Delinquency

Handling Large Volumes of Data

AI offers opportunities to compute and automate certain tasks for FSI faced with vast amounts of data, whether in administrative screening activities or in criminal investigations. For example, Interior Ministry agents are tasked with vetting individuals applying for sensitive jobs, requiring them to check across all relevant police databases. In these mass data-analysis activities, AI could add value by accelerating and securing checks, allowing human analysts to focus on critical points, and ultimately enabling faster decision-making. AI could also optimize oversight activities by automatically detecting abnormal database consultations.

During investigative phases, AI could also be leveraged to search data and perform cross-comparisons with large databases to improve clearance rates—for example, through DNA comparison against the national DNA database (FNAEG). DNA analysis is one of the most widely used forensic methods for identifying perpetrators of crimes. Moreover, AI could support judicial investigations, where case files are becoming increasingly voluminous and complex. Faced with the multiplicity and heterogeneity of data, AI’s processing power allows for faster classification, linkage, and recall of all relevant information within very short timeframes. Investigators could therefore analyze and cross-check evidence more efficiently, with fewer errors, ultimately improving case resolution.
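At its core, comparing DNA profiles of the kind held in a national database reduces to comparing allele pairs locus by locus. The sketch below illustrates only that principle; the locus names follow standard STR nomenclature, but the allele values and profiles are invented, and real matching also weighs population statistics and partial or degraded profiles:

```python
# An STR profile maps each locus name to an unordered pair of alleles.
# Values below are illustrative, not real database records.
def shared_loci(profile_a, profile_b):
    """Return the loci at which both profiles carry the same allele pair."""
    return [
        locus
        for locus in profile_a
        if locus in profile_b
        and set(profile_a[locus]) == set(profile_b[locus])
    ]

trace = {"D3S1358": (15, 17), "vWA": (14, 16), "FGA": (21, 24)}
suspect = {"D3S1358": (17, 15), "vWA": (14, 18), "FGA": (21, 24)}
matches = shared_loci(trace, suspect)  # ['D3S1358', 'FGA']
```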

Additionally, most data from criminal investigations are stored on hard drives for archiving. Tools capable of cross-referencing and linking these datasets are needed to identify evidence within massive amounts of stored information. AI could perform this classification and connection work, linking facts and evidence contained in judicial files. A key challenge lies in managing large volumes of judicial data to accelerate investigative processes—whether on the street or during interventions—enabling quicker decision-making. Here again, the definition of clear use cases and the pre-training of AI systems are essential to ensure the relevance of analyses, for instance through the generation of synthetic data.

Better Analyzing and Interpreting Images, Sounds, and Large-Scale Data

Generative AI introduces new techniques for analyzing and interpreting images. These developments can greatly enhance investigative work by processing large volumes of heterogeneous images, extracting requested elements from them, and understanding complex queries. Such solutions could support investigative phases through the analysis of crime scene photos, traffic accident images, or footage from urban supervision centers (CSU), helping to identify items of interest (vehicles, persons, etc.). Likewise, computer vision could become a major asset through the use of neural networks capable of interpreting and analyzing complex visual information on a large scale. Inspired by the human brain, these networks could, for example, be applied to aerial or satellite imagery to detect specific surfaces—useful for national police and gendarmerie to identify targeted vehicles, drug-dealing spots, and more.

AI would also improve vigilance in monitoring video surveillance feeds, thereby increasing efficiency and accelerating responses to suspicious situations (drug-dealing locations, brawls, gatherings, etc.). Indeed, it is estimated that after just one hour of real-time video monitoring, an operator loses concentration and may miss up to 50% of events. In the future, operators could rely on AI systems to flag events automatically, allowing them to focus on verification, analysis, and decision-making rather than continuous live monitoring.

AI can accelerate the detection process currently carried out by the human eye, using image-analysis and behavioral-analysis tools based on convolutional neural networks to identify objects, actions, or individuals. Current machine learning techniques already allow the retrieval of a specific person’s photo, object, or weapon from thousands of stored photos on a computer or smartphone. AI’s object- and shape-recognition capabilities therefore represent a significant operational advantage.
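One common way such retrieval works is to encode every stored image as a feature vector (typically produced by a neural network) and rank the collection by cosine similarity to the query vector. A minimal sketch of that ranking step, using hand-made three-dimensional vectors in place of real embeddings:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank_images(query_vec, stored):
    """Rank stored (name, vector) pairs by similarity to the query vector."""
    return sorted(stored, key=lambda item: cosine(query_vec, item[1]), reverse=True)

# Toy 'embeddings'; a real system would use vectors from a pretrained network.
stored = [("car_red", (0.9, 0.1, 0.0)),
          ("person", (0.0, 0.2, 0.9)),
          ("car_blue", (0.8, 0.3, 0.1))]
best = rank_images((1.0, 0.0, 0.0), stored)[0][0]  # 'car_red'
```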

Finally, AI can make major contributions through voice recognition technologies, capable of deciphering unique vocal characteristics, converting speech into models that can be processed and compared to stored voiceprints (samples from telephone calls or recordings).


Better Analyzing and Faster Detection

Open-source intelligence gathering (OSINT, SOCMINT) has become a common practice due to the abundance of available data (social networks, etc.). AI support is fundamental in these phases, enabling rapid detection and characterization of urgent or dangerous situations at a faster speed than criminal networks or drug traffickers can erase their digital traces. For example, AI can be used to monitor information flows (“information noise”) through web scraping, while respecting the AI Act framework (which excludes facial recognition), to detect and counter propaganda or disinformation. Using advanced AI and machine learning algorithms, vast amounts of data can be analyzed at high speed to identify patterns, keywords, or visual content—decisive in large-scale investigations such as narcotics trafficking or organized crime. Moreover, AI-driven systems can be trained on known propaganda or disinformation material, proactively spotting and flagging new content with similar features, ensuring swift and effective removal before it spreads.
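In its simplest form, the flagging step described above amounts to scoring scraped messages against a watch-list of indicator patterns. The sketch below is rule-based for clarity (the indicator terms and threshold are placeholders); a deployed system would use trained classifiers rather than a static keyword list:

```python
import re

# Illustrative watch-list of indicator patterns; placeholders only.
INDICATORS = [r"\bmeet(ing)? point\b", r"\bdrop\b", r"\bcash only\b"]

def flag_messages(messages, patterns=INDICATORS, threshold=1):
    """Return the messages matching at least `threshold` indicator patterns."""
    compiled = [re.compile(p, re.IGNORECASE) for p in patterns]
    flagged = []
    for msg in messages:
        hits = sum(1 for pat in compiled if pat.search(msg))
        if hits >= threshold:
            flagged.append(msg)
    return flagged

posts = ["New drop tonight, cash only", "Lovely weather today"]
suspicious = flag_messages(posts)  # only the first message is flagged
```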

To ensure the efficiency of these tools, the challenge lies in the ability to process both structured data (words, signs, numbers, etc.) and unstructured data (images, sounds, videos, etc.) at a large scale, relying on platforms and self-learning AI models capable of reformatting and making them exploitable. Automated language-analysis technologies powered by AI can extract and analyze written content across data volumes impossible to process manually.

AI also offers faster response times via machine learning, by training systems with massive datasets so they progressively learn to handle them autonomously. For instance, tracking the escape vehicle of a criminal suspect through surveillance cameras could be processed far more quickly by AI than by human analysis, leaving human investigators free to focus on data interpretation and oversight.

AI could support operators at an urban supervision center by detecting waste, abandoned objects, weapons, or fire outbreaks; calculating vehicle dwell time; monitoring parking near sensitive sites; identifying red-light violations or line crossings; and analyzing crowd movements.
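Of the tasks listed above, the vehicle dwell-time calculation is easy to make concrete: once a detector reports (plate, timestamp) sightings, dwell time is just the span between first and last sighting per plate. A minimal sketch with fabricated detections (plate numbers, timestamps, and the time limit are invented for the example):

```python
def dwell_times(sightings):
    """Seconds between first and last sighting per plate,
    from (plate, unix_timestamp) detections."""
    first, last = {}, {}
    for plate, ts in sightings:
        first[plate] = min(first.get(plate, ts), ts)
        last[plate] = max(last.get(plate, ts), ts)
    return {plate: last[plate] - first[plate] for plate in first}

def over_limit(sightings, limit_seconds):
    """Plates whose dwell time exceeds the allowed limit."""
    return {p for p, d in dwell_times(sightings).items() if d > limit_seconds}

# Fabricated detections near a sensitive site (timestamps in seconds).
obs = [("AB-123-CD", 0), ("AB-123-CD", 900),
       ("EF-456-GH", 100), ("EF-456-GH", 160)]
flagged = over_limit(obs, limit_seconds=600)  # {'AB-123-CD'}
```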

AI to Enhance Investigative Capabilities and Case Solving

The sheer volume of data to be processed in investigations (video streams, images) has reached a level where exploitation is no longer possible without digital assistance. The massive increase in digital data places a heavy burden on the ability of Internal Security Forces (FSI) to handle it. The use of AI has therefore become not just an asset but a necessity—and, in the long term, an indispensable condition for effectively exploiting data or information that may contribute to establishing evidence. FSI must be equipped with tools to streamline video analysis, helping investigators avoid the need to review entire video or image sources manually. This would save time, increase efficiency, prevent concentration loss during long viewing sessions, and allow investigators to focus on higher-value analytical tasks. Without AI, it is likely that investigators would, in some cases, forego systematically analyzing all available video sources, thus missing out on particularly valuable digital evidence.

In addition to videos, investigators often rely on witness statements, bank records, phone logs, and testimonies—sources that could be processed by AI-powered software to flag inconsistencies, map data across time and space, or generate relational diagrams. Entering such data manually from paper or digital records is slow and labor-intensive. AI tools could detect key elements directly within texts, identify and classify them by meaning, and automatically generate relational diagrams. This would offer real added value by enabling real-time mapping of relationships and links from recordings or digital data, as well as suggesting follow-up questions to help investigators identify and apprehend suspects faster.
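The relational-diagram idea can be sketched as building a contact graph from call records and surfacing the most connected numbers. The records below are fabricated, and a real tool would layer on entity resolution, time windows, and visualization; this shows only the core structure:

```python
from collections import defaultdict

def build_graph(call_records):
    """Build an undirected contact graph from (caller, callee) pairs."""
    graph = defaultdict(set)
    for caller, callee in call_records:
        graph[caller].add(callee)
        graph[callee].add(caller)
    return graph

def most_connected(graph, n=1):
    """Return the n numbers with the most distinct contacts."""
    return sorted(graph, key=lambda k: len(graph[k]), reverse=True)[:n]

# Fabricated call log: number 'A' emerges as the hub of the network.
calls = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]
hub = most_connected(build_graph(calls))[0]  # 'A'
```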

AI could also enable real-time detection across massive data flows of forged documents and fraud—tasks beyond human capacity—thereby improving case resolution rates. In identity management, applications are numerous, as in road safety, whether for combating driver’s license fraud, insurance fraud, or repeated fraudulent practices (disputes over traffic fines, registration fraud, vehicle theft, etc.). Furthermore, with AI-driven analysis, investigators could process millions of financial transactions, detecting suspicious fund movements indicative of money laundering schemes otherwise difficult to uncover. This would concretely improve the ability to analyze and understand criminal networks, connections, and environments, detect hidden links, and strengthen the fight against organized crime (drug trafficking, etc.). Recent dismantling of encrypted communication systems such as EncroChat or Matrix has highlighted the crucial role of advanced data analysis.
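One classic pattern that automated financial analysis looks for is “structuring”: many deposits kept just under a reporting threshold. The sketch below is a deliberately simplified rule (threshold, margin, and accounts are invented); real systems learn such patterns statistically rather than from a fixed rule:

```python
from collections import Counter

def flag_structuring(transactions, threshold=10_000, margin=0.1, min_count=3):
    """Flag accounts with at least `min_count` deposits falling just
    under the reporting threshold (within `margin` of it)."""
    near_threshold = Counter()
    for account, amount in transactions:
        if threshold * (1 - margin) <= amount < threshold:
            near_threshold[account] += 1
    return {acct for acct, n in near_threshold.items() if n >= min_count}

# Fabricated transactions: account 'X' repeatedly deposits just under 10,000.
txns = [("X", 9_500), ("X", 9_800), ("X", 9_900),
        ("Y", 2_000), ("Y", 9_700)]
suspects = flag_structuring(txns)  # {'X'}
```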

Finally, AI could help secure the use of images, documents, and videos by FSI by automating the redaction of passages or sections of documents to strengthen personal data protection and privacy compliance in line with CNIL and AI Act recommendations. For example, convolutional networks could detect and filter pre-defined elements in images. AI could also be used with precision and speed to exploit tools such as body-worn cameras to establish evidence—for instance, in cases where FSI conduct during interventions is challenged.


AI to Strengthen Victim Support

In recent years, the Ministry of the Interior has deployed online platforms and applications enabling citizens to report and declare incidents, file complaints (Perceval—reporting credit card fraud; Pharos—reporting illegal online content; THESEE—reporting cyber scams; PNAV—reporting crimes against persons and victim support), or access online information (locating a police station, etc.). These sites provide citizens with practical tools and represent a concrete way to expand data-driven practices. They also generate a new “data source,” often structured (words, numbers, signs, etc.), which could be further developed and exploited by AI for mapping reports, locating crime data, or analyzing crime types by area. Such processed data could then be made available to FSI in the field, allowing for quicker interventions (e.g., automated statistics, automatic reporting).

These platforms also open opportunities to rethink new modes of intelligence and information collection. They could help strengthen the detection of even weak signals using AI in a security context where diversifying channels for incident reporting is critical.

AI to Transform Police-Public Relations

AI could also streamline procedures such as filing complaints or renewing administrative documents by introducing conversational assistants (chatbots or callbots) to provide information or services.

By integrating AI capabilities into complaint-management software, police and gendarmes handling victims in their units could respond faster and more effectively (e.g., automatic scheduling of appointments via categorization). Online complaint services could also be enhanced, automatically registering complaints, analyzing them, and directing victims toward the most appropriate solution (automated confirmation, video-complaint with an officer, in-person appointment, etc.).
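The routing logic behind such a complaint assistant can be sketched as categorization followed by a channel decision. The categories, keywords, and routing rules below are invented for illustration (they loosely echo the Perceval and THESEE services mentioned earlier, but do not describe those systems):

```python
# Invented categories and routing rules, for illustration only.
CATEGORIES = {
    "fraude_carte": ["credit card", "carte bancaire", "unauthorised payment"],
    "cambriolage": ["burglary", "break-in", "cambriolage"],
    "escroquerie_en_ligne": ["online scam", "phishing"],
}

ROUTING = {
    "fraude_carte": "automated registration (Perceval-style)",
    "escroquerie_en_ligne": "online report (THESEE-style)",
    "cambriolage": "in-person appointment",
}

def triage(complaint_text):
    """Return (category, suggested channel) for a complaint, or a fallback."""
    text = complaint_text.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category, ROUTING[category]
    return "autre", "video-complaint with an officer"

category, channel = triage(
    "Someone made an unauthorised payment with my credit card"
)  # ('fraude_carte', 'automated registration (Perceval-style)')
```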

Recommendations and Conclusion: Developing AI within a Clear and Secure Framework

The successful experiences from the Paris 2024 Olympic Games and recent technological advances highlight the urgent need to reflect on AI’s role in security. Given the deteriorating security context in France, increasingly organized crime, and the high demands of complex law enforcement professions where digital tools are already pervasive (investigation, forensics), AI is becoming an essential resource to help resolve cases, clarify facts, and provide robust solutions for FSI.

AI is not an end in itself but a powerful lever—just as IT systems and biometrics once were—and should be understood as such. To achieve this, it is vital to ensure:

• A clear political framework to reassure society and stabilize legislation.

• A ministerial strategy to secure FSI’s use of AI, supported by clear governance.

• Identifying the right use cases for AI, the right projects, and creating the conditions to scale up: ensuring data quality to guarantee tool efficiency, robustness, resilience and security-by-design of systems to limit vulnerabilities to cyber risks posed by criminals skilled in such threats, system auditability to understand and improve the results produced by these systems, etc.

• An integrated approach — ethical, legal, and technological — to systematically address the importance of the data being handled, requiring the establishment of balanced measures in terms of cybersecurity and transparency toward society.

• An identification of AI tools whose use does not come at the cost of France’s sovereignty or increase vulnerability to data leaks, cyberattacks, or intelligence and influence operations (information warfare, etc.)

Taking these technological steps is necessary if France is to meet the pressing challenges of public order and security it faces today.

DNA is also tracking down drug traffickers

From raw product to the small plastic bag of drugs sold on the street, every stage of handling increases the chances of leaving traces behind. Whether fingerprints or biological residues (DNA), such evidence is invaluable for tracing the network back to the traffickers.

DNA — a key tool in criminal investigations

Over the past few years, DNA analysis has become an essential tool in solving criminal cases, including long-unsolved cold cases. In a recent study conducted at Flinders University, the research team led by forensic science PhD candidate Madison Nolan and Professor Adrian Linacre proposes to push the boundaries of suspect identification in drug trafficking cases through advanced genetic profiling.

Packaging as a source of evidence

Before reaching the streets, drugs are transformed and packaged in various types of containers — which can become a goldmine of information for forensic investigators. However, repeated handling and exposure to environmental factors can degrade DNA, sometimes rendering it unusable. To support forensic work, the Flinders team focused on identifying the areas of drug packaging most likely to retain exploitable biological traces.

Stronger DNA transfer inside the packaging

According to the study’s findings, DNA presence was particularly significant on capsules containing powdered substances and on the inner surfaces of “Ziploc”-type bags used to store them, especially along the interior edges of the seal. Even brief contact—around 30 seconds—was enough to leave detectable amounts of DNA. Because these traces are located inside the packaging, the risk of external contamination is considerably reduced.

New perspectives for forensic investigations

For forensic police, this research offers new insights for optimizing sampling during drug seizures. By focusing primarily on the outer surfaces of capsules and the inner surfaces of plastic bags, investigators can obtain higher-quality genetic profiles—provided that collection procedures are followed meticulously to avoid any contamination. Nevertheless, as the researchers caution, DNA recovered from seized materials may already be degraded by previous handling or transport conditions, which can limit its reliability.

Sources :

https://www.sciencedirect.com/science/article/pii/S1872497324001789
https://news.flinders.edu.au/blog/2025/02/03/dna-study-targets-drug-making/

From DNA Searches to Digital Data Analysis: The Rapid Evolution of Forensic Science

Forensic science has never been as effective as it is today. Thanks to technological advances, criminal investigations now benefit from sophisticated investigative tools that make it possible to solve cases of murder, rape, armed robbery, and even terrorism. In this context, the Police Scientifique competitive examination is of paramount importance, as it recruits future Forensic Science Technicians (Techniciens de Police Technique et Scientifique, TPTS), who are tasked with responding rapidly to crime scenes.

Essential Support for the Judicial Police

Indeed, forensic expertise saves the Judicial Police precious time, whether in managing a crime scene (homicide, murder, etc.) or the scene of a lesser offense (theft, burglary, vandalism, drug trafficking). The analysis of evidence such as fingerprints, DNA, fibers, ballistic material, or digital traces helps establish solid proof before the courts and allows investigators to better profile suspects.

The Importance of the Human Factor

Despite this cutting-edge technology, forensic teams remain profoundly human. Every day, these professionals must cope with sometimes dramatic situations and face the distress of victims. On the YouTube program LEGEND, hosted by Guillaume Pley, forensic officer Sébastien Aguilar highlights the psychological impact of these investigations. He recalls extraordinary cases, some utterly unbelievable, as well as cases whose violence has marked him forever.

Forensic Science: A Profession Far from the Clichés

This reality on the ground is often very different from the clichés surrounding forensic science. In his book « Au cœur de l’enquête criminelle », published in the Darkside collection at Hachette, Sébastien Aguilar describes step by step the rigorous work of investigators, supported by forensic officers entirely dedicated to the pursuit of the truth. He also recounts the various stages leading up to the trial before the Cour d’Assises, offering a complete overview of how the judicial machine operates.

To learn more about the psychological impact of the profession, modern investigative techniques, or the importance of the Technicien de Police Technique et Scientifique examination, watch Sébastien Aguilar’s interview with Guillaume Pley on YouTube, and dive into « Au cœur de l’enquête criminelle » for total immersion in the fascinating world of forensic science.

Fourniret: My encounter with a serial killer couple

As Public Prosecutor in Charleville-Mézières from 2003 to 2008, I was in charge of the Fourniret/Monique Olivier case for four years and, at their assize court trial, secured their conviction to life imprisonment. It was an exceptional case—by the number of young girls and adolescents who fell victim, by the length of the criminal trajectory of this serial killer couple, by the abomination and inhumanity of the crimes committed, and by the unprecedented perversity of these two monsters.

It took me 15 years after the 2008 trial to write my book Ma rencontre avec le Mal (My Encounter with Evil), which is not a journalistic narrative of these heinous crimes but rather the writing of my lived experience—my share of truth. It is an attempt to contribute to the understanding of the particular handling that major criminal cases and serial crimes require, as well as the legal, criminological, and social challenges they present. Above all, my book was intended to help others understand, at least in part, the terrible and irreversible torment endured by the families of these unfortunate victims, so that they might be better heard by the justice system, granted genuine attention, and shown constant empathy throughout their arduous judicial ordeal.

The chilling discovery of a serial killer

Upon my appointment as Public Prosecutor in Charleville-Mézières in 2003, I was immediately made aware of two very serious criminal cases: the abduction and murder of two young victims. On May 16, 2000, Céline, aged 18, disappeared after leaving her high school in Charleville-Mézières. Any suggestion of a runaway was quickly dismissed, and although searches began immediately, they remained fruitless until July 22, 2000, when her skeletal remains were discovered in nearby woodland close to the Belgian border. On May 5, 2001, Mananya, aged 13 and a half, disappeared in Sedan, an Ardennes town, as she left the public library and was returning home. On March 1, 2002, her body was discovered about 30 kilometers away, near a Belgian village. The analogies between these two crimes were far too numerous to suggest coincidence: the geographical proximity of the events and of the sites where the bodies were found, the modus operandi of urban abductions carried out discreetly, the details of how the bodies were abandoned, the victims’ physical and psychological profiles… For the investigating judges, the investigators of the Reims police judiciaire, and myself, the frightening hypothesis of a serial killer operating in the Ardennes became increasingly credible. Despite multiple investigations conducted jointly by the judicial police of Reims and Dinant in Belgium, the searches remained unfruitful. That was until June 26, 2003, when a major event reignited both inquiries. On that day, a 13-year-old girl, Marie-Ascension, was abducted in Belgium. Thanks to her courage and composure, she managed to free herself from her restraints and escape from the van in which her abductor had confined her. A motorist picked her up and, with remarkable presence of mind, took note of the vehicle’s license plate. The gendarmes traced it to Monique Olivier, wife of Michel Fourniret. Fourniret was quickly arrested.
He admitted to the facts and stated that he had tied up the girl, touched her breasts, considered having sexual relations with her, and told her that “he was far worse than Dutroux.”

Searches carried out at his home revealed children’s clothing, ropes, adhesive tape, a pair of handcuffs, a child’s inhalation mask, vials of ether, and various weapons, including police revolvers stolen during a burglary.

Fourniret agreed to kill Monique Olivier’s two previous partners, and in exchange, she would provide him with young virgin girls, whom they referred to as “MSPs (membranes on legs)” or “young slits.”

Michel Fourniret - Forenseek

Michel Fourniret and Monique Olivier: the deadly alliance of a serial killer couple

Monique Olivier claimed to be completely unaware of her husband’s proclivity for children, denied any awareness of his prior convictions, and asserted—like Fourniret—that this abduction was an isolated case. It would take a full year, the relentless work of Belgian investigators who questioned her 120 times, and the covert recording of a prison visitation, before she finally gave her first confession—very partial—regarding the number of crimes committed and her own role. A few days later, confronted with the details provided by his wife, Fourniret confessed as well. He explained that he had met Monique Olivier while serving a sentence for previous sexual assaults. She had responded to his request for a pen pal, and over the course of eight months they exchanged more than 200 letters. The analysis of this voluminous correspondence is chilling: even before meeting in person, they had sealed a genuine criminal pact. Fourniret promised to kill Olivier’s two former partners, and in return, she would provide him with young virgin girls, whom they referred to as “MSPs (membranes on legs)” or “young slits.” Two months after Fourniret’s release, this pact—until then confined to paper—was put into practice, with devastating consequences for Isabelle, aged 17.

On December 11, 1987, Isabelle disappeared on her way home from school in Auxerre. The couple’s later confessions revealed the meticulous planning of this abduction: surveillance beforehand, the kidnapping of the girl by Monique Olivier, Fourniret pretending to have run out of fuel before climbing into the vehicle, slipping a cord around her neck, while Olivier administered Rohypnol tablets that rendered her semi-conscious. Brought to their home, Fourniret attempted to rape her but was unable to do so due to erectile failure. Acting on her own initiative, Olivier performed oral sex on him. Fourniret then strangled the girl, and together they disposed of her body by throwing it into a disused well. It would take more than two years and the exploration of some thirty old wells before her remains were finally recovered—a heart-wrenching moment for Isabelle’s father. The murders continued one after another, as the bloody trajectory of the couple stretched on for 16 years.

In January 1988, Fourniret, accompanied by Monique Olivier, shot a sales representative at point-blank range with a shotgun in order to steal his wallet. This act of violence was consistent with one of their written agreements. The victim miraculously survived. A few weeks later, they committed another murder by killing the partner of one of Fourniret’s former fellow inmates, in order to steal part of the loot from the Gang des Postiches, a notorious group of bank robbers active in the Paris region. This allowed them to purchase, for 1.2 million francs, the Château du Sautou, a 19th-century castle with a park of about fifteen hectares. A criminal episode as bizarre as it was deadly. In August of the same year, they abducted and murdered Fabienne, a 20-year-old student, in the Marne. Fourniret first tried to kill her by injecting air into her veins with a syringe, then shot her at point-blank range with a sawed-off shotgun. They abandoned the young woman’s body on the Mourmelon military base, echoing the crimes committed by another serial killer, Adjutant Chanal.

Michel Fourniret - Victime - Forenseek

In January 1989, the couple abducted Jeanne-Marie, a 21-year-old student, in Charleville-Mézières. After attempting to rape her, Fourniret strangled her while Monique Olivier sealed her nasal and oral passages with adhesive tape. They buried her body on their property at the Château du Sautou. Her remains would not be found until 15 years later, after searches complicated by Fourniret’s manipulations and provocations. In December 1989, near Namur in Belgium, Monique Olivier used the pretext of her sick infant—lying in a cradle at the back of her vehicle—to abduct 12-year-old Elisabeth together with Fourniret. After bringing her to their Ardennes home, intoxicating and tying her up, Fourniret attempted to rape her—again unsuccessfully, despite another act of oral sex performed by Monique Olivier. Elisabeth spent the night in chains, before being taken the next day by Fourniret to the Château du Sautou, where he suffocated her in a transparent plastic bag. During a later “conversation,” he tried to provoke me by odiously recounting in detail the physical transformations of a face undergoing asphyxiation.

A modus operandi strikingly similar to that of another serial killer, Francis Heaulme.

In November 1990, they abducted Natacha, 13, from a supermarket parking lot near Nantes. Fourniret brutally beat her, raped her, and stabbed her repeatedly with an awl. He abandoned her body on a beach in Vendée, 80 kilometers away—a modus operandi strikingly similar to that of another serial killer, Francis Heaulme. In 1995, Fourniret violently assaulted a dog groomer in Namur. Thanks to her presence of mind, the victim, Joëlle, survived, though she still suffers permanent psychological trauma to this day—a genuine psychological murder. In 2000 and 2001, he abducted, raped, and killed Céline and Mananya in the Ardennes, after subjecting them to prolonged psychological torture. An endless ordeal for these two young victims.

Between 1990 and 2000, far from being a “quiet period,” the couple’s criminal activities continued at a relentless pace. Crimes were committed, such as the murder of young Estelle in Guermantes, near Paris—solved 16 years later—as well as numerous attempted or aborted plans for abduction, rape, and murder.

Michel Fourniret: the most “accomplished” serial killer

A deep dive into the tortured psyche of a diabolical couple

How can such extreme and murderous deviations be explained? Nothing in the past lives of either Fourniret or Monique Olivier provides the faintest beginning of an answer. Only their numerous psychological and psychiatric evaluations have managed to lift part of the veil.


Fourniret has been described as “the most accomplished serial killer.” Of chilling coldness, highly organized and obsessive, sadistic, extremely violent and perverse, his criminal pathology was regarded as absolute. He derived genuine pleasure from the terror and humiliation of his victims, prolonging their agony. Monique Olivier, on the other hand, possessed an intelligence quotient far above average. Perverse and manipulative, she managed to make herself indispensable to Fourniret in order to fulfill her own most archaic fantasies. She was the one who gave him his license to kill; “without her, there would have been no murder.” Indifferent to the suffering of their young victims, she took a certain pride in being Fourniret’s accomplice and played a particularly active role in the commission of their crimes. The grip they exerted over each other was total and reciprocal, through “the alienation of each in the fantasy of the other.” Experts spoke of “a genuine co-optation of their unconscious drives, a mechanism so intimate that it gave rise to a new, third entity: the acting couple. It was as if a new subject had been created—two psyches meshed together, driving them toward criminal action.”

Life sentences: an unprecedented verdict

This monstrous couple was convicted in May 2008 after eight grueling weeks of trial. To their insufferable demeanor, the families responded with exceptional dignity—despite their immeasurable and permanent grief—commanding the respect and admiration of all.

Fourniret was sentenced to life imprisonment without parole, in other words, a true whole-life sentence with no possibility of adjustment, reduction, temporary leave, or early release—a sentence rarely imposed in France. Monique Olivier was sentenced to life imprisonment with a minimum security period of 28 years. She remains the only woman ever to have received such a sentence in France.

France had never before witnessed such a monstrous serial killer couple, over such an extended period, targeting such a high number of young victims—likely between thirty and thirty-five, many of whom remain unidentified—tortured under horrific conditions. May such atrocities never be repeated.

“No one will come out of the Fourniret case unscathed—not even you, Mr. Prosecutor.”

Fourniret uttered this sentence only minutes after his first encounter with Prosecutor Nachbar. It was on July 3, 2004, during the excavations in the park of the Château du Sautou, in the Ardennes, where the bodies of two of his victims were discovered. The monster was not mistaken. Eighteen years later, those words still resonate like a grim refrain. Indeed, Francis Nachbar did not emerge entirely unscathed from this extraordinary judicial case, from the hundreds of hours spent with Michel Fourniret and Monique Olivier, a diabolical couple and the ultimate embodiment of evil. Fifteen years after the 2008 Assize Court trial, Francis Nachbar decided to deliver his own share of the truth about one of the greatest serial killer cases France has ever known. It took him all these years to feel ready—freed from judicial office and from any constraints.

A vital book that sheds new light on this bloodstained couple. In total, Michel Fourniret confessed to 11 murders. He is also suspected in 21 other cases of missing girls and young women. Available to order here.


Two cold cases solved through familial DNA searching?

As in the Élodie Kulik case in 2011 and that of the “Prédateur des bois” in 2022, two case files reopened by the Cold Case Unit in Nanterre are now close to being solved thanks to familial DNA searching.

“They say you’re only betrayed by your own,” a proverb that takes on its full meaning here. Two murders committed twelve years apart and seemingly unrelated have now been traced back to one and the same suspect—thanks to a genetic link established between members of the same family.

In 1988, fifteen-year-old Valérie Boyer was found with her throat slit along the railway tracks in Saint-Quentin-Fallavier. In 2000, forty-year-old Laïla Afif was killed by a gunshot to the head in La Verpillière. The only common factor between the two crimes was geographical proximity, as both occurred in neighboring towns in the Isère department. Lacking solid leads or similarities in the modus operandi, the investigations soon came to a standstill—until March 2024. More than twenty years later, the Cold Case Unit, created in 2022, reopened the Laïla Afif case and ordered new DNA analyses on samples recovered from the crime scene.

Proof through familial DNA 

These forensic analyses, extended through the use of familial DNA searching, led investigators to identify, within the National Automated DNA Database (Fichier National Automatisé des Empreintes Génétiques – FNAEG), an individual previously implicated in another case whose DNA showed a 50% genetic match with the profile obtained in the Afif case. And as the immutable laws of genetics dictate—each person shares half of their genome with their biological parents and children—the investigators logically traced the lead back to the man’s father. Betrayed by his son’s DNA, Mohammed C. has now been indicted not only for the murder of Laïla Afif, but also for that of Valérie Boyer, as the reopened investigation revealed striking connections between the two crimes. The latter case is part of the notorious “Disparus de l’Isère” (the Isère disappearances) series, a string of disappearances that made national headlines in the 1980s and is already under review by the Cold Case Unit.
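The inheritance rule described above can be illustrated with a short sketch. The STR loci and allele values below are purely hypothetical toy data, not real FNAEG records, and the model is deliberately simplified: it only checks the Mendelian constraint that a true parent and child share at least one allele at every locus, which is the basic signal behind a “50% match.”

```python
# A simplified sketch of the parent-child logic behind familial DNA
# searching. Locus names and allele values are illustrative only.

def shares_allele(pair_a, pair_b):
    """True if the two genotypes have at least one allele in common."""
    return bool(set(pair_a) & set(pair_b))

def parent_child_consistent(profile_a, profile_b):
    """A true parent and child share at least one allele at every
    compared locus (barring rare mutations), because the child inherits
    exactly one allele per locus from each parent."""
    loci = profile_a.keys() & profile_b.keys()
    return all(shares_allele(profile_a[l], profile_b[l]) for l in loci)

# Toy profiles: each locus maps to the pair of alleles carried.
father   = {"D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24)}
child    = {"D3S1358": (15, 14), "vWA": (18, 19), "FGA": (24, 22)}
stranger = {"D3S1358": (12, 13), "vWA": (14, 15), "FGA": (19, 20)}

print(parent_child_consistent(father, child))     # True
print(parent_child_consistent(father, stranger))  # False
```

In real casework, forensic laboratories compare far more loci and compute kinship likelihood ratios rather than a simple yes/no test, since unrelated individuals can share alleles by chance; this sketch only conveys the underlying genetic principle.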

Further reading: COLD CASES UN MAGISTRAT ENQUÊTE (Cold Cases – A Magistrate Investigates), by Jacques Dallest

Criminal history is marked by sordid murders, brutal killings, mysterious disappearances, and puzzling suicides. Mysterious and puzzling because these cases have never been solved—the perpetrators never identified, the culprits never convicted. Ironically, even in French these cases are referred to by the English term “cold cases.” They number in the dozens and are often unknown to the general public. Only a few major unsolved cases have found their place in the annals of judicial history and continue to fuel debate and speculation: the Bruay-en-Artois case, the Fontanet case, the Grégory case, the Boulin case, and, more recently, the Chevaline shootings. But what exactly is a cold case? What does this English term signify within the context of the French judicial system? Should these cases be reopened? And after so many years, how can justice still be served? In this scholarly and meticulously documented essay, Jacques Dallest—former investigating judge, public prosecutor, and Advocate General—offers a comprehensive analysis of the issue as no book has ever done before.

Order online.