
Linear Sequential Unmasking–Expanded (LSU-E): A general approach for improving decision making as well as minimizing noise and bias

Copy of the article Linear Sequential Unmasking–Expanded (LSU-E): A general approach for improving decision making as well as minimizing noise and bias, Forensic Science International: Synergy, Volume 3, 2021, 100161, with author agreement (contact: [email protected])

All decision making, and particularly expert decision making, requires the examination, evaluation, and integration of information. Research has demonstrated that the order in which information is presented plays a critical role in decision making processes and outcomes. Different decisions can be reached when the same information is presented in a different order [1,2]. Because information must always be considered in some order, optimizing this sequence is important for optimizing decisions. Since adopting one sequence or another is inevitable —some sequence must be used— and since the sequence has important cognitive implications, it follows that considering how to best sequence information is paramount.

In the forensic sciences, existing approaches to optimize the order of information processing (sequential unmasking [3] and Linear Sequential Unmasking [4]) are limited in terms of their narrow applicability to only certain types of decisions, and they focus only on minimizing bias rather than optimizing forensic decision making in general. Here, we introduce Linear Sequential Unmasking–Expanded (LSU-E), an approach that is applicable to all forensic decisions rather than being limited to a particular type of decision, and it also reduces noise and improves forensic decision making in general rather than solely by minimizing bias.

Cognitive background

All decision making is dependent on the human brain and cognitive processes. Of particular importance is the sequence in which information is encountered. For example, it is well documented that people tend to remember the initial information in a sequence better —and be more strongly impacted by it— compared to subsequent information in the sequence (see the primacy effect [5,6]). For instance, if asked to memorize a list of words, people are more likely to remember words from the beginning of the list than from the middle of the list (see also the recency effect [7]).

Critically, the initial information in a sequence is not only remembered well, but it also influences the processing of subsequent information in a number of ways (see a simple illustration in Fig. 1). The initial information can create powerful first impressions that are difficult to override [8], it generates hypotheses that determine which further information will be heeded or ignored (e.g., selective attention [[9][10][11][12]]), and it can prompt a host of other decisional phenomena, such as confirmation bias, escalation of commitment, decision momentum, tunnel vision, belief perseverance, mindset and anchoring effects [[13][14][15][16][17][18][19]]. These phenomena are not limited to forensic decisions, but also apply to medical experts, police investigators, financial analysts, military intelligence, and indeed anyone who engages in decision making.

Fig. 1. A simple illustration of the order effect: Reading from left to right, the first/leftmost stimulus can affect the interpretation of the middle stimulus, such that the sequence reads as A-B-14; but reading the same stimuli from right to left, starting with 14 as the first stimulus, often makes people see the sequence as A-13-14 —i.e., the middle stimulus reads as a ‘13’ or a ‘B’ depending on which stimulus is encountered first.

As a testament to the power of the sequencing of information, studies have repeatedly found that presenting the same information in a different sequence elicits different conclusions from decision-makers. Such effects have been shown in a whole range of domains, from food tasting [20] and jury decision-making [21,22], to countering conspiracy arguments (such as anti-vaccine conspiracy theories [23]), all demonstrating that the ordering of information is critical. Furthermore, such order effects have been specifically shown in forensic science; for example, Klales and Lesciotto [24] as well as Davidson, Rando, and Nakhaeizadeh [25] demonstrated that the order in which skeletal material is analyzed (e.g., skull versus hip) can bias sex estimates.

Bias background

Decisions are vulnerable to bias — systematic deviations in judgment [26]. This type of bias should not be confused with intentional discriminatory bias. Bias, as it is used here, refers to cognitive biases that impact all of us, typically without intention or even conscious awareness [26,27].

Although many experts incorrectly believe that they are immune from cognitive bias [28], in some ways experts are even more susceptible to bias than non-experts [[27][29][30]]. Indeed, the impact of cognitive bias on decision making has been documented in many domains of expertise, from criminal investigators and judges, to insurance underwriters, psychological assessments, safety inspectors and medical doctors [26,[31][32][33][34][35][36]], as well as specifically in forensic science [30].

No forensic domain, or any domain for that matter, is immune from bias.

Bias in forensic science

The existence and influence of cognitive bias in the forensic sciences is now widely recognized (‘the forensic confirmation bias’ [27,37,38]). In the United States, for example, the National Academy of Sciences [39], the President’s Council of Advisors on Science and Technology [40], and the National Commission on Forensic Science [41] have all recognized cognitive bias as a real and important issue in forensic decision making. Similar conclusions have been reached in other countries around the world: in the United Kingdom, for example, the Forensic Science Regulator has issued guidance on avoiding bias in forensic work [42], and the issue has likewise been reviewed in Australia [43].

Furthermore, the effects of bias have been observed and replicated across many forensic disciplines (e.g., fingerprinting, forensic pathology, DNA, firearms, digital forensics, handwriting, forensic psychology, forensic anthropology, and CSI, among others; see Ref. [44] for a review)—including among practicing forensic science experts specifically [30,45–47]. Simply put, no forensic domain, or any domain for that matter, is immune from bias.

Minimizing bias in forensic science

Although the need to combat bias in forensic science is now widely recognized, actually combating bias in practice is a different matter. Within the pragmatics, realities and constraints of crime scenes and forensic laboratories, minimizing bias is not always a straightforward issue [48]. Given that mere awareness and willpower are insufficient to combat bias [27], we must develop effective —but also practical— countermeasures.

Linear Sequential Unmasking (LSU [4]) minimizes bias by regulating the flow and order of information such that forensic decisions are based on the evidence and task-relevant information. To accomplish this, LSU requires that forensic comparative decisions begin with the examination and documentation of the actual evidence from the crime scene (the questioned or unknown material) on its own, before the examiner is exposed to the ‘target’/suspect (known) reference material. The goal is to minimize the potential biasing effect of the reference/’target’ on the evidence from the crime scene (see Level 2 in Fig. 2). LSU thus ensures that the evidence from the crime scene —not the ‘target’/suspect— drives the forensic decision.

This is especially important since the nature of the evidence from the crime scene makes it more susceptible to bias, because —in contrast to the reference materials— it often has low quality and quantity of information, which makes it more ambiguous and malleable. By examining the crime scene evidence first, LSU minimizes the risk of circular reasoning in the comparative decision making process by preventing one from working backward from the ‘target’/suspect to the evidence.

Fig. 2. Sources of cognitive bias in sampling, observations, testing strategies, analysis, and/or conclusions, that impact even experts. These sources of bias are organized in a taxonomy of three categories: case-specific sources (Category A), individual-specific sources (Category B), and sources that relate to human nature (Category C).

LSU limitations

By its very nature, LSU is limited to comparative decisions where evidence from the crime scene (such as fingerprints or handwriting) is compared to a ‘target’/suspect. This approach was first developed to minimize bias specifically in forensic DNA interpretation (sequential unmasking [3]). Dror et al. [4] then expanded this approach to other comparative forensic domains (fingerprints, firearms, handwriting, etc.) and introduced a balanced approach for allowing revisions of the initial judgments, but within restrictions.

LSU is therefore limited in two ways: First, it applies only to the limited set of comparative decisions (such as comparing DNA profiles or fingerprints). Second, its function is limited to minimizing bias, not reducing noise or improving decision making more broadly.

In this article, we introduce Linear Sequential Unmasking—Expanded (LSU-E). LSU-E provides an approach that can be applied to all forensic decisions, not only comparative decisions. Furthermore, LSU-E goes beyond bias: it reduces noise and improves decisions more generally by cognitively optimizing the sequence of information in a way that maximizes information utility and thereby produces better and more reliable decisions.

Linear Sequential Unmasking—Expanded (LSU-E)

Beyond comparative forensic domains

LSU in its current form is only applicable to forensic domains that compare evidence against specific reference materials (such as a suspect’s known DNA profile or fingerprints—see Level 2 in Fig. 2). As noted above, the problem is that these reference materials can bias the perception and interpretation of the evidence, such that interpretations of the same data/evidence vary depending on the presence and nature of the reference material —and LSU aims to minimize this problem by requiring linear rather than circular reasoning.

However, many forensic judgments are not based on comparing two stimuli. For instance, digital forensics, forensic pathology, and CSI all require decisions that are not based on comparing evidence against a known suspect. Although such domains may not entail a comparison to a ‘target’ stimulus or suspect, they nevertheless entail biasing information and context that can create problematic expectations and top-down cognitive processes —and the expanded LSU-E provides a way to minimize those as well.

Take, for instance, CSI. Crime scene investigators customarily receive information about the scene even before they arrive at the crime scene itself, such as the presumed manner of death (homicide, suicide, or accident) or other investigative theories (such as an eyewitness account that the burglar entered through the back window). When the CSI receives such details before actually seeing the crime scene for themselves, they become prone to develop a priori expectations and hypotheses, which can bias their subsequent perception and interpretation of the actual crime scene, and impact whether and what evidence they collect. The same applies to other non-comparative forensic domains, such as forensic pathology, fire investigation and digital forensics. For example, telling a fire investigator —before they arrive and examine the fire scene itself— that the property was on the market for two years but did not sell, and/or that the owner had recently insured the property, can bias their work and conclusions.

Combating bias in these domains is especially challenging since these experts need at least some contextual information in order to do their work (unlike, for example, firearms, fingerprint, and DNA experts, who require minimal contextual information to perform comparisons of physical evidence).

The aim of LSU-E is not to deprive experts of the information they need, but rather to minimize bias by providing that information in the optimal sequence. The principle is simple: Always begin with the actual data/evidence —and only that data/evidence— before considering anything else, be it explicit or implicit contextual information, reference materials, or any other meta-information.

In CSI, for example, no contextual information should be provided until after the CSI has initially seen the crime scene for themselves and formed (and documented) their initial impressions, derived solely from the crime scene and nothing else. This allows them to form an initial impression driven only by the actual data/evidence. Then, they can receive relevant contextual information before commencing evidence collection. The goal is clear: As much as practically possible, experts should —at least initially— form their opinion based on the raw data itself before being given any further information that could influence their opinion.

Of course, LSU-E is not limited to forensic work and can be readily applied to many domains of expert decision making. For example, in healthcare, a medical doctor should examine a patient before making a diagnosis (or even generating a hypothesis) based on contextual information: SBAR information (Situation, Background, Assessment and Recommendation [49,50]) should not be provided until after they have seen the actual patient. Similarly, workplace safety inspectors should not be made aware of a company’s past violations until after they have evaluated the worksite for themselves without such knowledge [32].

Beyond minimizing bias

Beyond the issue of bias, expert decisions are stronger when they are less noisy and based on the ‘right’ information —the most appropriate, reliable, relevant and diagnostic information. LSU-E provides criteria (described below) for identifying and prioritizing this information. Rather than exposing experts to information in a random or incidental order, LSU-E aims to optimize the sequence of information so as to utilize (or counteract) cognitive and psychological influences (such as primacy effects, selective attention and confirmation bias; see Section 1.1) and thus empower experts to make better decisions. It is also critical that as the expert progresses through the informational sequence, they document what information they see and any changes in their opinion, so that it is transparent what information was used in their decision making and how [51,52].

Criteria for sequencing information in LSU-E

Optimizing the order of information not only minimizes bias but also reduces noise and improves the quality of decision making more generally. The question is: How should one determine what information experts should receive and how best to sequence it? LSU-E provides three criteria for determining the optimal sequence of exposure to task-relevant information: biasing power, objectivity, and relevance —which are elaborated below.

1. Biasing power. 

The biasing power of relevant information varies drastically. Some information may be strongly biasing, whereas other information is not biasing at all. For example, the technique used to lift and develop a fingerprint is minimally biasing (if at all), but the medication found next to a body may bias the manner-of-death decision. It is therefore suggested that non-biasing (or less biasing) relevant information be put before more strongly biasing relevant information in the order of exposure.

2. Objectivity. 

Task-relevant information also varies in its objectivity. For example, an eyewitness account of an event is typically less objective than a video recording of the same event —but video recordings can also vary in their objectivity, depending on their completeness, perspective, quality, etc. It is therefore suggested that the more objective information be put before the less objective information in the order of exposure.

3. Relevance. 

Some relevant information stands at the very core of the work and necessarily underpins the decision, whereas other relevant information is not as central or essential. For example, in determining manner-of-death, the medicine found next to a body would typically be more relevant (for instance, to determine which toxicological tests to run) than the decedent’s history of depression. It is therefore suggested that the more relevant information be put before the more peripheral information in the order of exposure, and —of course— any information that is totally irrelevant to the decision should be omitted altogether (such as the past criminal history of a suspect).

The above criteria are ‘guiding principles’ because:

A. Each of the suggested criteria is actually a continuum rather than a simple dichotomy [45,48,53]. One may even consider variability within the same category of information; for example, a higher quality video recording may be considered before a lower quality recording, or a statement from a sober eyewitness may be considered before a statement from an intoxicated witness.

B. The three criteria are not independent; they interact with one another. For example, objectivity and relevance may interact to determine the power of the information (e.g., even highly objective information should be less powerful if its relevance is low, or conversely, highly relevant information should be less powerful if its objectivity is low). Hence, the three criteria are not to be judged in isolation from each other. 

C. The order of information needs to be weighed against the potential benefit it can provide [52]. For example, at the trial of police officer Derek Chauvin in relation to the death of George Floyd, the forensic pathologist Andrew Baker testified that he “intentionally chose not” to watch video of Floyd’s death before conducting the autopsy because he “did not want to bias [his] exam by going in with preconceived notions that might lead [him] down one path or another” [54]. Hence, his decision was to examine the raw data first (an autopsy of the body) before exposure to other information (the video). Such a decision should also consider the potential benefit of watching the video before conducting the autopsy, in terms of whether the video might guide the autopsy more than bias it. In other words, LSU-E requires one to consider the potential benefit relative to the potential biasing effect [52].

With this approach, we urge experts to carefully consider how each piece of information satisfies each of these three criteria and whether and when it should, or should not, be included in the sequence —and whenever possible, to document their justification for including (or excluding) any given piece of information. Of course, this raises practical questions about how to best implement LSU-E, such as using case managers —and effective implementation strategies may well vary between disciplines and/or laboratories— but first we need to acknowledge these issues and the need to develop approaches to deal with them.
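As a purely illustrative sketch (the numeric scores, item names, and tie-breaking scheme below are our own assumptions, not part of LSU-E itself), the three criteria can be expressed as a simple sort: totally irrelevant items are omitted, and the remaining items are ordered so that less biasing, more objective, and more relevant information is encountered first.

```python
from dataclasses import dataclass

@dataclass
class InfoItem:
    name: str
    biasing_power: float  # 0 = not biasing, 1 = strongly biasing
    objectivity: float    # 0 = subjective, 1 = fully objective
    relevance: float      # 0 = irrelevant, 1 = core to the decision

def lsu_e_sequence(items):
    """Order task-relevant information per the three LSU-E criteria:
    drop irrelevant items, then expose low-bias, high-objectivity,
    high-relevance items first (illustrative lexicographic ordering)."""
    relevant = [i for i in items if i.relevance > 0]
    return sorted(relevant,
                  key=lambda i: (i.biasing_power, -i.objectivity, -i.relevance))

# Hypothetical scores for the examples discussed in the text.
items = [
    InfoItem("suspect's criminal history", 0.9, 0.5, 0.0),  # irrelevant: omitted
    InfoItem("medication found by the body", 0.7, 0.8, 0.9),
    InfoItem("eyewitness account", 0.6, 0.3, 0.5),
    InfoItem("fingerprint lifting technique", 0.1, 0.9, 0.4),
]

for item in lsu_e_sequence(items):
    print(item.name)
```

The single sort key is a simplification: as caveat B above notes, the criteria interact, so a real implementation would need a combined weighting rather than a strict lexicographic order.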

Conclusion

In this paper, we draw upon classic cognitive and psychological research on factors that influence and underpin expert decision making to propose a broad and versatile approach to strengthening expert decision making. Experts from all domains should first form an initial impression based solely on the raw data/evidence, devoid of any reference material or context, even if relevant. Only thereafter can they consider what other information they should receive and in what order based on its objectivity, relevance, and biasing power. It is furthermore essential to transparently document the impact and role of the various pieces of information on the decision making process. As a result of using LSU-E, decisions will not only be more transparent and less noisy, but the contributions of different pieces of information will also be justified by, and proportional to, their strength.

References

[1] S.E. Asch. Forming impressions of personality, J. Abnorm. Soc. Psychol., 41 (1946), pp. 258-290
[2] C.I. Hovland (Ed.), The Order of Presentation in Persuasion, Yale University Press (1957)
[3] D. Krane, S. Ford, J. Gilder, K. Inman, A. Jamieson, R. Koppl, et al. Sequential unmasking: a means of minimizing observer effects in forensic DNA interpretation, J. Forensic Sci., 53 (2008), pp. 1006-1007
[4] I.E. Dror, W.C. Thompson, C.A. Meissner, I. Kornfield, D. Krane, M. Saks, et al. Context management toolbox: a Linear Sequential Unmasking (LSU) approach for minimizing cognitive bias in forensic decision making, J. Forensic Sci., 60 (4) (2015), pp. 1111-1112
[5] F.H. Lund. The psychology of belief: IV. The law of primacy in persuasion, J. Abnorm. Soc. Psychol., 20 (1925), pp. 183-191
[6] B.B. Murdock Jr. The serial position effect of free recall, J. Exp. Psychol., 64 (5) (1962), p. 482
[7] J. Deese, R.A. Kaufman. Serial effects in recall of unorganized and sequentially organized verbal material, J. Exp. Psychol., 54 (3) (1957), p. 180
[8] J.M. Darley, P.H. Gross. A hypothesis-confirming bias in labeling effects,J. Pers. Soc. Psychol., 44 (1) (1983), pp. 20-33
[9] A. Treisman. Contextual cues in selective listening, Q. J. Exp. Psychol., 12 (1960), pp. 242-248
[10] J. Bargh, E. Morsella. The unconscious mind, Perspect. Psychol. Sci., 3 (1) (2008), pp. 73-79
[11] D.A. Broadbent. Perception and Communication, Pergamon Press, London, England (1958)
[12] J.A. Deutsch, D. Deutsch. Attention: some theoretical considerations, Psychol. Rev., 70 (1963), pp. 80-90
[13] A. Tversky, D. Kahneman. Judgment under uncertainty: heuristics and biases, Science, 185 (4157) (1974), pp. 1124-1131
[14] R.S. Nickerson. Confirmation bias: a ubiquitous phenomenon in many guises, Rev. Gen. Psychol., 2 (1998), pp. 175-220
[15] C. Barry, K. Halfmann. The effect of mindset on decision-making, J. Integrated Soc. Sci., 6 (2016), pp. 49-74
[16] P.C. Wason. On the failure to eliminate hypotheses in a conceptual task, Q. J. Exp. Psychol., 12 (3) (1960), pp. 129-140
[17] B.M. Staw. The escalation of commitment: an update and appraisal, Z. Shapira (Ed.), Organizational Decision Making, Cambridge University Press (1997), pp. 191-215
[18] M. Sherif, D. Taub, C.I. Hovland. Assimilation and contrast effects of anchoring stimuli on judgments, J. Exp. Psychol., 55 (2) (1958), pp. 150-155
[19] C.A. Anderson, M.R. Lepper, L. Ross. Perseverance of social theories: the role of explanation in the persistence of discredited information, J. Pers. Soc. Psychol., 39 (6) (1980), pp. 1037-1049
[20] M.L. Dean. Presentation order effects in product taste tests, J. Psychol., 105 (1) (1980), pp. 107-110
[21] K.A. Carlson, J.E. Russo. Biased interpretation of evidence by mock jurors, J. Exp. Psychol. Appl., 7 (2) (2001), p. 91
[22] R.G. Lawson. Order of presentation as a factor in jury persuasion. Ky, LJ, 56 (1967), p. 523
[23] D. Jolley, K.M. Douglas. Prevention is better than cure: addressing anti-vaccine conspiracy theories, J. Appl. Soc. Psychol., 47 (2017), pp. 459-469
[24] A.R. Klales, K.M. Lesciotto. The “science of science”: examining bias in forensic anthropology, Proceedings of the 68th Annual Scientific Meeting of the American Academy of Forensic Sciences (2016)
[25] M. Davidson, C. Rando, S. Nakhaeizadeh. Cognitive bias and the order of examination on skeletal remains, Proceedings of the 71st Annual Meeting of the American Academy of Forensic Sciences (2019)
[26] D. Kahneman, O. Sibony, C. Sunstein. Noise: A Flaw in Human Judgment, William Collins (2021)
[27] I.E. Dror. Cognitive and human factors in expert decision making: six fallacies and the eight sources of bias, Anal. Chem., 92 (12) (2020), pp. 7998-8004
[28] J. Kukucka, S.M. Kassin, P.A. Zapf, I.E. Dror. Cognitive bias and blindness: a global survey of forensic science examiners, Journal of Applied Research in Memory and Cognition, 6 (2017), pp. 452-459
[29] I.E. Dror. The paradox of human expertise: why experts get it wrong, N. Kapur (Ed.), The Paradoxical Brain, Cambridge University Press, Cambridge, UK (2011), pp. 177-188
[30] C. Eeden, C. De Poot, P. Koppen. The forensic confirmation bias: a comparison between experts and novices, J. Forensic Sci., 64 (1) (2019), pp. 120-126
[31] C. Huang, R. Bull. Applying Hierarchy of Expert Performance (HEP) to investigative interview evaluation: strengths, challenges and future directions, Psychiatr. Psychol. Law, 28 (2021)
[32] C. MacLean, I.E. Dror. The effect of contextual information on professional judgment: reliability and biasability of expert workplace safety inspectors,J. Saf. Res., 77 (2021), pp. 13-22
[33] E. Rassin. Anyone who commits such a cruel crime, must be criminally irresponsible’: context effects in forensic psychological assessment, Psychiatr. Psychol. Law (2021)
[34] V. Meterko, G. Cooper. Cognitive biases in criminal case evaluation: a review of the research, J. Police Crim. Psychol. (2021)
[35] C. FitzGerald, S. Hurst. Implicit bias in healthcare professionals: a systematic review, BMC Med. Ethics, 18 (2017), pp. 1-18
[36] M.K. Goyal, N. Kuppermann, S.D. Cleary, S.J. Teach, J.M. Chamberlain. Racial disparities in pain management of children with appendicitis in emergency departments, JAMA Pediatr, 169 (11) (2015), pp. 996-1002
[37] I.E. Dror. Biases in forensic experts, Science, 360 (6386) (2018), p. 243
[38] S.M. Kassin, I.E. Dror, J. Kukucka. The forensic confirmation bias: problems, perspectives, and proposed solutions, Journal of Applied Research in Memory and Cognition, 2 (1) (2013), pp. 42-52
[39] NAS. National Research Council, Strengthening Forensic Science in the United States: a Path Forward, National Academy of Sciences (2009)
[40] PCAST, President’s Council of Advisors on science and Technology (PCAST), Report to the President – Forensic Science in Criminal Courts: Ensuring Validity of Feature-Comparison Methods, Office of Science and Technology, Washington, DC (2016)
[41] NCFS, National Commission on Forensic Science. Ensuring that Forensic Analysis Is Based upon Task-Relevant Information, National Commission on Forensic Science, Washington, DC (2016)
[42] Forensic Science Regulator. Cognitive bias effects relevant to forensic science examinations, available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/914259/217_FSR-G-217_Cognitive_bias_appendix_Issue_2.pdf
[43] ANZPAA, A Review of Contextual Bias in Forensic Science and its Potential Legal Implication, Australia New Zealand Policing Advisory Agency (2010)
[44] J. Kukucka, I.E. Dror. Human factors in forensic science: psychological causes of bias and error, D. DeMatteo, K.C. Scherr (Eds.), The Oxford Handbook of Psychology and Law, Oxford University Press, New York (2021)
[45] I.E. Dror, J. Melinek, J.L. Arden, J. Kukucka, S. Hawkins, J. Carter, D.S. Atherton. Cognitive bias in forensic pathology decisions, J. Forensic Sci., 66 (4) (2021)
[46] N. Sunde, I.E. Dror. A hierarchy of expert performance (HEP) applied to digital forensics: reliability and biasability in digital forensics decision making, Forensic Sci. Int.: Digit. Invest., 37 (2021)
[47] D.C. Murrie, M.T. Boccaccini, L.A. Guarnera, K.A. Rufino. Are forensic experts biased by the side that retained them? Psychol. Sci., 24 (10) (2013), pp. 1889-1897
[48] G. Langenburg. Addressing potential observer effects in forensic science: a perspective from a forensic scientist who uses linear sequential unmasking techniques, Aust. J. Forensic Sci., 49 (2017), pp. 548-563
[49] C.M. Thomas, E. Bertram, D. Johnson. The SBAR communication technique, Nurse Educat., 34 (4) (2009), pp. 176-180
[50] I. Wacogne, V. Diwakar. Handover and note-keeping: the SBAR approach, Clin. Risk, 16 (5) (2010), pp. 173-175
[51] M.A. Almazrouei, I.E. Dror, R. Morgan. The forensic disclosure model: what should be disclosed to, and by, forensic experts?, International Journal of Law, Crime and Justice, 59 (2019)
[52] I.E. Dror. Combating bias: the next step in fighting cognitive and psychological contamination, J. Forensic Sci., 57 (1) (2012), pp. 276-277
[53] D. Simon, Minimizing Error and Bias in Death Investigations, vol. 49, Seton Hall Law Rev. (2018), pp. 255-305
[54] CNN. Medical examiner: I “intentionally chose not” to view videos of Floyd’s death before conducting autopsy, April 9, 2021, available at https://edition.cnn.com/us/live-news/derek-chauvin-trial-04-09-21/h_03cda59afac6532a0fb8ed48244e44a0 (2021)

AI in Forensics: Between Technological Revolution and Human Challenges

By Yann CHOVORY, Engineer in AI Applied to Criminalistics (Institut Génétique Nantes Atlantique – IGNA). On a crime scene, every minute counts. Between identifying a fleeing suspect, preventing further wrongdoing, and managing the time constraints of an investigation, case handlers are engaged in a genuine race against the clock. Fingerprints, gunshot residues, biological traces, video surveillance, digital data… all these clues must be collected and quickly analyzed, or there is a risk that the case will collapse for lack of usable evidence in time. Yet overwhelmed by the ever-growing mass of data, forensic laboratories are struggling to keep pace.

Analyzing evidence with speed and accuracy

In this context, artificial intelligence (AI) establishes itself as an indispensable accelerator. Capable of processing in a few hours what would take weeks to analyze manually, it optimises the use of clues by speeding up their sorting and detecting links imperceptible to the human eye. More than just a time-saver, it also improves the relevance of investigations: swiftly cross-referencing databases, spotting hidden patterns in phone call records, comparing DNA fragments with unmatched precision. AI thus acts as a tireless virtual analyst, reducing the risk of human error and offering new opportunities to forensic experts.

But this technological revolution does not come without friction. Between institutional scepticism and operational resistance, its integration into investigative practices remains a challenge. My professional journey, marked by a persistent quest to integrate AI into scientific policing, illustrates this transformation—and the obstacles it faces. From a marginalised bioinformatician to project lead for AI at IGNA, I have observed from within how this discipline, long grounded in traditional methods, is adapting—sometimes under pressure—to the era of big data.

The risk of human error is reduced and the reliability of identifications increased

Concrete examples: AI from the crime scene to the laboratory

AI is already making inroads in several areas of criminalistics, with promising results. For example, AFIS (Automated Fingerprint Identification System) fingerprint recognition systems now incorporate machine learning components to improve matching of latent fingerprints. The risk of human error is reduced and the reliability of identifications increased [1]. Likewise, in ballistics, computer vision algorithms now automatically compare the striations on a projectile with markings of known firearms, speeding the work of a firearms expert. Tools are also emerging to interpret bloodstains on a scene: machine learning models can help reconstruct the trajectory of blood droplets and thus the dynamics of an assault or violent event [2]. These examples illustrate how AI is integrating into the forensic expert’s toolkit, from crime scene image analysis to the recognition of complex patterns.

But it is perhaps in forensic genetics that AI currently raises the greatest hopes. DNA analysis labs process thousands of genetic profiles and samples, with deadlines that can be critical. AI offers a considerable time-gain and enhanced accuracy. As part of my research, I contributed to developing an in-house AI capable of interpreting 86 genetic profiles in just three minutes [3]—a major advance when analyzing a complex profile may take hours. Since 2024, it has autonomously handled simple profiles, while complex genetic profiles are automatically routed to a human expert, ensuring effective collaboration between automation and expertise. The results observed are very encouraging: not only is the turnaround time for DNA results drastically reduced, but the error rate also falls thanks to the standardization introduced by the algorithm.

AI does not replace humans but complements them

Another promising advance lies in enhancing DNA-based facial composites. Currently, this technique makes it possible to estimate certain physical features of an individual (such as eye color, hair color, or skin pigmentation) from their genetic code, but it remains limited by the complexity of genetic interactions and by uncertainties in the predictions. AI could revolutionise this approach by using deep learning models trained on vast genetic and phenotypic databases, thereby refining these predictions and generating more accurate sketches. Unlike classical methods, which rely on statistical probabilities, an AI model could analyse millions of genetic variants in a few seconds and identify subtle correlations that traditional approaches do not detect. This prospect opens the way to a significant improvement in the relevance of DNA sketches, facilitating suspect identification when no other usable clues are available. The Forenseek platform has explored current advances in this area, but AI has not yet been fully exploited to surpass existing methods [5]. Its integration could therefore constitute a major breakthrough in criminal investigations.

It is important to emphasize that in all these examples, AI does not replace humans but complements them. At the IRCGN (French National Gendarmerie Criminal Research Institute) cited above, while the majority of routine, good-quality DNA profiles can be handled automatically, regular human quality control remains in place: every week, a technician randomly checks cases processed by the AI to ensure no drift has occurred [3]. This human-machine collaboration is key to successful deployment, as the expertise of forensic specialists remains indispensable to validate and finely interpret the results, especially in complex cases.


Algorithms Trained on Data: How AI “Learns” in Forensics

The impressive performance of AI in forensics relies on one crucial resource: data. For a machine learning algorithm to identify a fingerprint or interpret a DNA profile, it first needs to be trained on numerous examples. In practical terms, we provide it with representative datasets, each containing inputs (images, signals, genetic profiles, etc.) associated with an expected outcome (the identity of the correct suspect, the exact composition of the DNA profile, etc.). By analyzing thousands—or even millions—of these examples, the machine adjusts its internal parameters to best replicate the decisions made by human experts. This is known as supervised learning, since the AI learns from cases where the correct outcome is already known. For example, to train a model to recognize DNA profiles, we use data from solved cases where the expected result is clearly established.
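As a minimal illustration of the supervised principle, the toy "model" below simply memorises labelled examples (fictitious similarity scores for trace comparisons) and classifies a new input from its nearest known case. Real forensic systems use far richer models, but the idea is the same: learn from cases where the correct outcome is already known.

```python
# Toy supervised learning: memorise labelled (score, outcome) pairs and
# classify new inputs by the nearest known example. All scores are invented.

training = [
    (0.95, "match"), (0.90, "match"), (0.88, "match"),
    (0.30, "no-match"), (0.22, "no-match"), (0.10, "no-match"),
]

def predict(score: float) -> str:
    # 1-nearest-neighbour: copy the label of the closest training example.
    nearest = min(training, key=lambda ex: abs(ex[0] - score))
    return nearest[1]

print(predict(0.92))  # close to the "match" examples -> match
print(predict(0.15))  # close to the "no-match" examples -> no-match
```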

An AI’s performance depends on the quality of the data that trains it.

The larger and more diverse the training dataset, the better the AI will be at detecting reliable and robust patterns. However, not all data is equal. It must be of high quality (e.g., properly labeled images, DNA profiles free from input errors) and cover a wide enough range of situations. If the system is biased by being exposed to only a narrow range of cases, it may fail when confronted with a slightly different scenario. In genetics, for instance, this means including profiles from various ethnic backgrounds, varying degrees of degradation, and complex mixture configurations so the algorithm can learn to handle all potential sources of variation.

Transparency in data composition is essential. Studies have shown that some forensic databases are demographically unbalanced—for example, the U.S. CODIS database contains an overrepresentation of profiles from African-American individuals compared to other groups [6]. A model naively trained on such data could inherit systemic biases and produce less reliable or less fair results for underrepresented populations. It is therefore crucial to monitor training data for bias and, if necessary, to correct it (e.g., through balanced sampling, augmentation of minority data) in order to achieve fair and equitable learning.
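One of the corrections mentioned above, balanced sampling, can be sketched as follows. The group labels and counts are invented, and real rebalancing strategies are considerably more careful than this oversampling toy:

```python
import random

# Sketch of correcting an unbalanced training set by oversampling the
# under-represented group (with replacement) until all groups are the same
# size. Groups and counts are illustrative only.

random.seed(0)
dataset = [{"group": "A"}] * 800 + [{"group": "B"}] * 200  # unbalanced

def balance(data, key="group"):
    groups = {}
    for item in data:
        groups.setdefault(item[key], []).append(item)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # resample with replacement to reach the target size
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

balanced = balance(dataset)
counts = {g: sum(1 for d in balanced if d["group"] == g) for g in ("A", "B")}
print(counts)  # both groups now have 800 examples
```

Oversampling is only one option; reweighting examples or collecting more minority-group data are often preferable because duplicated examples add no new information.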

Data Collection: Gathering diverse and representative datasets
Data Preprocessing: Cleaning and preparing data for training
AI Training: Training algorithms on prepared datasets
Data Validation: Verifying the quality and diversity of the data
Bias Evaluation: Identifying and correcting biases in the datasets
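The five steps above can be mocked up as a toy pipeline; every function and dataset below is a placeholder, not a real forensic workflow:

```python
# Placeholder pipeline mirroring the five steps listed above.

def collect_data():
    # Data Collection: gather (input, label) pairs from diverse sources
    return [("trace-1", "match"), ("trace-2", "no-match"), ("trace-2", "no-match")]

def preprocess(data):
    # Data Preprocessing: drop exact duplicates, keeping the original order
    seen, clean = set(), []
    for item in data:
        if item not in seen:
            seen.add(item)
            clean.append(item)
    return clean

def train(data):
    # AI Training: here, just memorise the labelled examples
    return dict(data)

def validate(data):
    # Data Validation: every example must carry a non-empty label
    return all(label for _, label in data)

def bias_check(data):
    # Bias Evaluation: are both outcome classes represented at all?
    return {label for _, label in data} == {"match", "no-match"}

data = preprocess(collect_data())
model = train(data)
assert validate(data) and bias_check(data)
print(model["trace-1"])  # -> match
```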

Technically, training an AI involves rigorous steps of cross-validation and performance measurement. We generally split data into three sets: one for training, another for validation during development (to adjust the parameters), and a final test set to objectively evaluate the model. Quantitative metrics such as accuracy, recall (sensitivity), or error curves make it possible to quantify how reliable the algorithm is on data it has never seen [6]. For example, one can check that the AI correctly identifies a large majority of perpetrators from traces while maintaining a low rate of false positives. Increasingly, we also integrate fairness and ethical criteria into these evaluations: performance is examined across demographic groups or testing conditions (gender, age, etc.), to ensure that no unacceptable bias remains [6]. Finally, compliance with legal constraints (such as the GDPR in Europe, which regulates the use of personal data) must be built in from the design phase of the system [6]. That may involve anonymizing data, limiting certain sensitive information, or providing procedures in case an ethical bias is detected.
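A minimal sketch of this evaluation logic, assuming an invented dataset and a stand-in model: the data is split three ways, recall is computed on the held-out test set, and the same metric is compared across two demographic groups as a crude fairness probe.

```python
# Sketch of a three-way split with a recall metric and a per-group fairness
# check. The dataset, the perfect stand-in "model", and the 60/20/20 split
# are all invented for illustration.

# each example: (input, true label, demographic group); constructed so that
# every split contains both labels and both groups
data = [(i, i % 2, "A" if i % 4 < 2 else "B") for i in range(100)]
train, val, test = data[:60], data[60:80], data[80:]

def predict(x):
    return x % 2  # stand-in model that happens to be perfect on this toy data

def recall(examples, group=None):
    # recall (sensitivity): fraction of true positives the model finds,
    # optionally restricted to one demographic group
    positives = [(x, y) for x, y, g in examples
                 if y == 1 and (group is None or g == group)]
    hits = sum(1 for x, y in positives if predict(x) == 1)
    return hits / len(positives) if positives else None

print(recall(test))                          # overall recall on unseen data
print(recall(test, "A"), recall(test, "B"))  # compare across groups
```

In a real evaluation, a gap between the per-group numbers would be a red flag calling for rebalancing or retraining before deployment.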

Ultimately, an AI’s performance depends on the quality of the data that trains it. In the forensic field, that means algorithms “learn” from accumulated human expertise. Every algorithmic decision implies the experience of hundreds of experts who provided examples or tuned parameters. It is both a strength – capitalizing on a vast knowledge base – and a responsibility: to carefully select, prepare, and control the data that will feed the artificial intelligence.

Technical and operational challenges for integrating AI into forensic science


While AI promises substantial gains, its concrete integration in the forensic field faces many challenges. It is not enough to train a model in a laboratory: one must also be able to use it within the constrained framework of a judicial investigation, with all the reliability requirements that entails. Among the main technical and organisational challenges are:

  • Access to data and infrastructure: Paradoxically, although AI requires large datasets to learn, it can be difficult to gather sufficient data in the specific forensic domain. DNA profiles, for example, are highly sensitive personal data, protected by law and stored in secure, sequestered databases. Obtaining datasets large enough to train an algorithm may require complex cooperation between agencies or the generation of synthetic data to fill gaps. Additionally, computing tools must be capable of processing large volumes of data in reasonable time — which requires investment in hardware (servers, GPU2s for deep learning3) and specialized software. Some national initiatives are beginning to emerge to pool forensic data securely, but this remains an ongoing project.
  • Quality of annotations and bias: The effectiveness of AI learning depends on the quality of the annotations in training datasets. In many forensic areas, establishing "ground truth" is not trivial. For example, to train an algorithm to recognize a face in surveillance video, each face must first be correctly identified by a human, which can be difficult if the image is blurry or partial. Similarly, labeling datasets of footprints, fibers, or fingerprints requires meticulous work by experts and sometimes involves subjectivity. If the training data include annotation errors or historical biases, the AI will reproduce them [6]. A common bias is the lack of demographic representativeness noted above, but there can be others: if a weapon detection model is trained mainly on images of weapons indoors, it may perform poorly at detecting a weapon outdoors, in the rain, and so on. The quality and diversity of annotated data are therefore a major technical issue. This entails establishing rigorous data collection and annotation protocols (ideally standardized at the international level), as well as ongoing monitoring to detect model drift (overfitting to certain cases, performance degradation over time, etc.). Validating these models relies on experimental studies comparing AI performance with that of human experts. However, the complexity of homologation and procurement procedures often slows adoption, delaying the deployment of new tools in forensic science by several years.
  • Understanding and Acceptance by Judicial Actors: Introducing artificial intelligence into the judicial process inevitably raises the question of trust. An investigator or a laboratory technician, trained in conventional methods, must learn to use and interpret the results provided by AI. This requires training and a gradual cultural shift so that the tool becomes an ally and not an “incomprehensible black box.” More broadly, judges, attorneys, and jurors who will have to discuss this evidence must also grasp its principles. Yet explaining the inner workings of a neural network or the statistical meaning of a similarity score is far from simple. We sometimes observe misunderstanding or suspicion from certain judicial actors toward these algorithmic methods [6]. If a judge does not understand how a conclusion was reached, they may be inclined to reject it or assign it less weight, out of caution. Similarly, a defence lawyer will legitimately scrutinize the weaknesses of a tool they do not know, which may lead to judicial debates over the validity of the AI. A major challenge is thus to make AI explainable (the “XAI” concept—eXplainable Artificial Intelligence), or at least to present its results in a comprehensible format and pedagogically acceptable to a court. Without this, integrating AI risks facing resistance or sparking controversy in trials, limiting its practical contribution.
  • Regulatory Framework and Data Protection: Finally, forensic sciences operate within a strict legal framework, notably regarding personal data (DNA profiles, biometric data, etc.) and criminal procedure. The use of AI must comply with these regulations. In France, the CNIL (Commission Nationale de l’Informatique et des Libertés) keeps watch and can impose restrictions if an algorithmic processing harms privacy. For example, training an AI on nominal DNA profiles without a legal basis would be inconceivable. Innovation must therefore remain within legal boundaries, imposing constraints from the design phase of projects. Another issue concerns trade secrecy surrounding certain algorithms in judicial contexts: if a vendor refuses to disclose the internal workings of its software for intellectual property reasons, how can the defence or the judge ensure its reliability? Recent cases have shown defendants convicted on the basis of proprietary software (e.g., DNA analysis) without the defence being able to examine the source code used [7]. These situations raise issues of transparency and rights of defence. In the United States, a proposed law titled Justice in Forensic Algorithms Act aims precisely to ensure that trade secrecy cannot prevent the examination by experts of the algorithms used in forensics, in order to guarantee fairness in trials. This underlines the necessity of adapting regulatory frameworks to these new technologies.

A lack of cooperation slows the development of effective tools and limits their adoption in the field.

  • Another more structural obstacle lies in the difficulty of integrating hybrid profiles within forensic institutions, at least in France. Today, competitive examinations and recruitment often remain compartmentalised between different specialties, limiting the emergence of experts with dual expertise. For instance, in forensic police services, entrance exams for technicians or engineers are divided into distinct specialties such as biology or computer science, without pathways to recognize combined expertise in both fields. This institutional rigidity slows the integration of professionals capable of bridging between domains and fully exploiting the potential of AI in criminalistics. Yet current technological advances show that the analysis of biological traces increasingly relies on advanced digital tools. Faced with this evolution, greater flexibility in recruitment and training of forensic experts will be necessary to meet tomorrow’s challenges.

AI in forensics must not become a matter of competition or prestige among laboratories, but a tool put at the service of justice and truth, for the benefit of investigators and victims.

  • A further major barrier to innovation in forensic science is the compartmentalization of efforts among different stakeholders, who often work in parallel on identical problems without pooling their advances. This lack of cooperation slows the development of effective tools and limits their adoption in the field. However, by sharing our resources—whether databases, methodologies, or algorithms—we could accelerate the production deployment of AI solutions and guarantee continuous improvement based on collective expertise. My experience across different French laboratories (the Lyon Scientific Police Laboratory (Service National de Police Scientifique – SNPS), the Institut de Recherche Criminelle de la Gendarmerie Nationale (IRCGN), and now the Nantes Atlantique Genetic Institute (IGNA)) allows me to perceive how much this fragmentation hampers progress, even though we pursue a common goal: improving the resolution of investigations. This is why it is essential to promote open-source development when possible and to create platforms of collaboration among public and judicial entities. AI in forensics must not be a matter of competition or prestige among laboratories, but a tool in the service of justice and truth, for the benefit of investigators and victims alike.

The challenges discussed above all have technical dimensions, but they are closely intertwined with fundamental ethical and legal questions. From an ethical standpoint, the absolute priority is to avoid injustice through the use of AI. We must prevent at all costs a poorly designed algorithm from leading to someone’s wrongful indictment or, conversely, to the release of a guilty party. This involves mastering biases (to avoid discrimination against certain groups), ensuring transparency (so that every party in a trial can understand and challenge algorithmic evidence), and establishing accountability for decisions. Indeed, who is responsible if an AI makes an error? The expert who misused it, the software developer, or no one, because "the machine made a mistake"? This ambiguity is unacceptable in justice: it is essential to always keep human expertise in the loop, so that a final decision, whether to accuse or exonerate, is based on human evaluation informed by AI, and not on the opaque verdict of an automated system.

On the legal side, the landscape is evolving to regulate the use of AI. The European Union, in particular, is finalizing an AI Regulation (AI Act) which will be the world’s first legislation establishing a framework for the development, commercialization, and use of artificial intelligence systems [8]. Its goal is to minimize risks to safety and fundamental rights by imposing obligations depending on the level of risk of the application (and forensic or criminal justice applications will undoubtedly be categorized among the most sensitive). In France, the CNIL has published recommendations emphasizing that innovation can be reconciled with respect for individual rights during the development of AI solutions [9]. This involves, for example, compliance with the GDPR, limitation of purposes (i.e. training a model only for legitimate and clearly defined objectives), proportionality in data collection, and prior impact assessments for any system likely to significantly affect individuals. These safeguards aim to ensure that enthusiasm for AI does not come at the expense of the fundamental principles of justice and privacy.

Encouraging Innovation While Demanding Scientific Validation and Transparency

A delicate balance must therefore be struck between technological innovation and regulatory framework. On one hand, overly restricting experimentation and adoption of AI in forensics could deprive investigators of tools potentially decisive for solving complex cases. On the other, leaving the field unregulated and unchecked would risk judicial errors or violations of rights. The solution likely lies in a measured approach: encouraging innovation while demanding solid scientific validation and transparency in methods. Ethics committees and independent experts can be involved to audit algorithms, verify that they comply with norms, and that they do not replicate problematic biases. Furthermore, legal professionals must be informed and trained on these new technologies so they can meaningfully debate their probative value in court. A judge trained in the basic concepts of AI will be better placed to understand the evidentiary weight (and limitations) of evidence derived from an algorithm.

Conclusion: The Future of Forensics in the AI Era

Artificial intelligence is set to deeply transform forensics, offering investigators analysis tools that are faster, more accurate, and capable of handling volumes of data once considered inaccessible. Whether it is sifting through gigabytes of digital information, comparing latent traces with improved reliability, or untangling complex DNA profiles in a matter of minutes, AI opens new horizons for solving investigations more efficiently.

But this technological leap comes with crucial challenges. Learning techniques, quality of databases, algorithmic bias, transparency of decisions, regulatory framework: these are all stakes that will determine whether AI can truly strengthen justice without undermining it. At a time when public trust in digital tools is more than ever under scrutiny, it is imperative to integrate these innovations with rigor and responsibility.

The future of AI in forensics will not be a confrontation between machine and human, but collaborative work in which human expertise remains central. Technology may help us see faster and farther, but interpretation, judgment, and decision-making will remain in the hands of forensic experts and the judicial authorities. Thus, the real question may not be how far AI can go in forensic science, but how we will frame it to ensure ethical and equitable justice. Will we be able to harness its power while preserving the very foundations of a fair trial and the right to a defence?

The revolution is underway. It is now up to us to make it progress, not drift.

Bibliography

[1]: Océane Duboust. "L’IA peut-elle aider la police scientifique à trouver des similitudes dans les empreintes digitales ?" Euronews, 12/01/2024. Available at: https://fr.euronews.com/next/2024/01/12/lia-peut-elle-aider-la-police-scientifique-a-trouver-des-similitudes-dans-les-empreintes-d [accessed 15/03/2025]
[2]: Muhammad Arjamand et al. "The Role of Artificial Intelligence in Forensic Science: Transforming Investigations through Technology." International Journal of Multidisciplinary Research and Publications, Volume 7, Issue 5, pp. 67-70, 2024. Available at: http://ijmrap.com/ [accessed 15/03/2025]
[3]: Gendarmerie Nationale. "Kit universel, puce RFID, IA : le PJGN à la pointe de la technologie sur l’ADN." Updated 22/01/2025. Available at: https://www.gendarmerie.interieur.gouv.fr/pjgn/recherche-et-innovation/kit-universel-puce-rfid-ia-le-pjgn-a-la-pointe-de-la-technologie-sur-l-adn [accessed 15/03/2025]
[4]: Michelle Taylor. "EXCLUSIVE: Brand New Deterministic Software Can Deconvolute a DNA Mixture in Seconds." Forensic Magazine, 29/03/2022. Available at: https://www.forensicmag.com [accessed 15/03/2025]
[5]: Sébastien Aguilar. "L’ADN à l’origine des portraits-robot !" Forenseek, 05/01/2023. Available at: https://www.forenseek.fr/adn-a-l-origine-des-portraits-robot/ [accessed 15/03/2025]
[6]: Max M. Houck. "CSI/AI: The Potential for Artificial Intelligence in Forensic Science." iShine News, 29/10/2024. Available at: https://www.ishinews.com/csi-ai-the-potential-for-artificial-intelligence-in-forensic-science/ [accessed 15/03/2025]
[7]: Mark Takano. "Black box algorithms’ use in criminal justice system tackled by bill reintroduced by Reps. Takano and Evans." Takano House, 15/02/2024. Available at: https://takano.house.gov/newsroom/press-releases/black-box-algorithms-use-in-criminal-justice-system-tackled-by-bill-reintroduced-by-reps-takano-and-evans [accessed 15/03/2025]
[8]: Mon Expert RGPD. "Artificial Intelligence Act : La CNIL répond aux premières questions." Available at: https://monexpertrgpd.com [accessed 15/03/2025]
[9]: CNIL. "Les fiches pratiques IA." Available at: https://www.cnil.fr [accessed 15/03/2025]

Definitions:

  1. Machine Learning
    Machine learning is a branch of artificial intelligence that enables computers to learn from data without being explicitly programmed. It relies on algorithms capable of detecting patterns, making predictions, and improving performance through experience.
  2. GPU (Graphics Processing Unit)
    A GPU is a specialized processor designed to perform massively parallel computations. Originally developed for rendering graphics, it is now widely used in artificial intelligence applications, particularly for training deep learning models. Unlike CPUs (central processing units), which are optimized for sequential, general-purpose tasks, GPUs contain thousands of cores optimized to execute numerous operations simultaneously on large datasets.
  3. Deep Learning
    Deep learning is a subfield of machine learning that uses artificial neural networks composed of multiple layers to model complex data representations. Inspired by the human brain, it allows AI systems to learn from large volumes of data and enhance their performance over time. Deep learning is especially effective for processing images, speech, text, and complex signals, with applications in computer vision, speech recognition, forensic science, and cybersecurity.

Cats — reliable witnesses in criminal investigations?

It is well known that our pets love affection and display unwavering loyalty. What has recently been discovered, however, is that they can also hold irrefutable evidence in the context of a criminal investigation.

Researchers from Flinders University in Australia took a close interest in the cats that inhabit many Australian households, not to study their habits or charming quirks, but for a much more forensic reason. These experts in criminalistics, genetics and forensic medicine sought to determine to what extent these animals could act as receptacles for human DNA and potentially transfer it onto other surfaces.

A single contact is enough to cause transfer

To test this hypothesis, the scientists conducted a study on twenty cats from fifteen households, collecting samples from four different areas: the fur on the head, back and right side of the body, and the skin located on the left flank of the animals.

A significant amount of DNA was recovered, mostly from the cats’ fur and, to a lesser extent, from their skin. Unsurprisingly, most of this genetic material belonged to the cats’ owners. More surprisingly, however, DNA from individuals outside the household was also detected in 47% of the samples—particularly from cats that regularly roamed their neighborhood. This demonstrates that feline fur readily captures human DNA, not only through petting but also via brief, incidental contact. Moreover, the study showed that DNA transfer can also occur in the opposite direction—from the cat to another person or to an object—simply by tapping or scratching the animal’s fur with a bare or gloved hand. In both cases, the recovered traces proved sufficient to identify an individual.

The cat: a silent but relevant witness

For forensic investigators, the findings of this study open up new perspectives in the way a crime scene is approached. From now on, an animal present at the scene can be considered as potential evidence if it is suspected to have come into contact with the perpetrator of a crime. Since their fur acts as a true reservoir of DNA, collecting samples from the areas identified by the researchers could make it possible to identify offenders—or, conversely, to rule out certain suspects.

It should be noted that this ability to capture human DNA is not unique to cats. Dogs, which are also common household companions across the world, have proven to be excellent collectors of genetic material as well.

Read full article here.