Archives

Uncovering the meaning of suspicious injuries in cases of child abuse

Objectively identifying a cigarette burn in a forensic context is difficult, particularly when the victim cannot testify. Such lesions are especially relevant in cases of suspected child abuse. Until now, diagnoses have relied mainly on the morphological appearance of the injuries, with no standardized tool to support a conclusion based on material evidence.

A striking clinical case of child abuse

A team from the Laboratory of Histological Pathology and Forensic Microbiology at the University of Milan investigated a suspected case of child abuse that resulted in the death of a child. Three circular lesions suggestive of cigarette burns were found on the body. A cigarette butt collected nearby further supported the suspicion of an intentional act. The challenge was to determine whether these marks were the result of deliberate harm. However, visual inspection and even conventional histology cannot always confirm the exact origin of such lesions. Hence the value of turning to a more refined and objective method.

The SEM–EDX method: a microscopic zoom on the lesion

Scanning electron microscopy (SEM) allows the morphology of the injured skin to be observed with extreme precision, while energy-dispersive X-ray spectroscopy (EDX) identifies the chemical elements present on the surface of the lesions. This analysis relied on internal calibration, applied both to samples of injured skin and to cigarette fragments collected at the scene.

Elemental signatures of an intentional act

The results revealed a circular lesion with a reddish base, consistent with intense thermal contact. The chemical composition detected by EDX contained elements typically associated with tobacco combustion, in particular sulfur trioxide and phosphorus oxides, confirming combustion rather than mere environmental residues. Combined with the histological findings, this analysis demonstrated that the injury had occurred prior to death, providing an objective element supporting the likelihood of abuse.

A tool to strengthen forensic expertise

The study demonstrates that SEM–EDX analysis, combined with histology, represents a significant advance in the characterization of suspicious lesions in the context of child abuse. By moving beyond visual assessment and its limitations, it delivers objective, reproducible physico-chemical data, thereby reinforcing the robustness of forensic conclusions in light of judicial requirements.

Conclusion

This study paves the way for broader integration of analytical microscopy into forensic practices. By combining scientific rigor with judicial investigation, it offers a robust method for clarifying the nature of lesions whose origin often remains uncertain. The approach could also be applied to other types of injuries, such as those caused by heat sources or chemical agents. This progress deserves to be extended and validated on a larger number of cases in order to refine its reliability.

References:

  • Tambuzzi S. et al. (2024). Pilot Application of SEM/EDX Analysis on Suspected Cigarette Burns in a Forensic Autopsy Case of Child Abuse. American Journal of Forensic Medicine & Pathology, 45(2), 135–143.
  • Faller-Marquardt M., Pollak S., Schmidt U. (2008). Cigarette Burns in Forensic Medicine. Forensic Science International, 176(2–3), 200–208.
  • Maghin F. et al. (2018). Characterization With SEM/EDX of Microtraces From Ligature in Hanging. American Journal of Forensic Medicine & Pathology, 39(1), 1–7.

How does nature indicate the presence of a corpse?

What if fungal spores and pollen grains could reveal the secrets of clandestine graves? That is the hypothesis explored by an international team of researchers in Colombia, who conducted a pioneering experiment combining mycology and palynology in a forensic context. 

A biological approach to detecting illegal graves

In an experimental project carried out in Bogotá, two graves simulating clandestine burials were dug — one empty, the other containing a pig cadaver (a standard substitute for the human body in forensic science). Soil samples were collected and analyzed at different depths to study the composition of the fungal and pollen communities. The aim of the study was to determine whether decomposing organic remains alter the soil’s microbial and plant-derived communities, and whether these biological signatures could serve as spatial and temporal indicators in criminal investigations.

Revealing fungal and pollen richness

The results showed that soil from the pit containing the carcass exhibited greater fungal richness (higher species diversity), notably species such as Fusarium oxysporum and Paecilomyces, whose frequency increased in the presence of decomposition. These organisms, capable of degrading nitrogen-rich compounds such as keratin, could serve as indicators of buried organic remains.

Fungal structures of Fusarium oxysporum observed under optical microscopy.
A and B: macroconidia; C: chlamydospores. © David Esteban Duarte-Alvarado

On the palynology side, pollen grains identified at 50 cm depth—including Borago officinalis, Poa sp., and Croton sonderianus—are typical of the dry season. In contrast, the pollen found at 30 cm corresponds to the rainy season. This stratified distribution could allow investigators to estimate the burial and exhumation periods with greater accuracy.

Integrating soil biology into criminal investigations

This study is the first to provide experimental data on mycology and palynology in an equatorial tropical context, a field largely unexplored in forensic science until now. It paves the way for a more systematic integration of these disciplines in crime scene investigations involving clandestine graves or the search for buried remains. While preliminary, the findings demonstrate the value of biological approaches as a complement to conventional forensic methods, especially in regions where climatic conditions influence decomposition dynamics.

Conclusion

This study is part of a broader research effort into the biological indicators left by buried bodies. After trees and roots, whose anomalies can signal a disturbance underground, fungi and pollen now emerge as silent witnesses of clandestine deaths. This microbiological approach expands the toolkit of forensic archaeology, as practiced by experts such as those of the French Gendarmerie. By combining invisible biological traces with conventional excavation and stratigraphic analysis techniques, it enables a more precise reading of the soil—and the criminal stories it may conceal.

Reference:
Tranchida, M. C., et al. (2025). Mycology and palynology: Preliminary results in a forensic experimental laboratory in Colombia, South America. Journal of Forensic Sciences.

When artificial intelligence reads the signs of death

Estimating the postmortem interval (PMI) largely relies on identifying (scoring) the stage of decomposition (SOD) of the body. Until now, this crucial step has been performed primarily by human experts using semi-objective visual methods. However, these approaches suffer from significant limitations: subjectivity, processing time, and difficulties in handling massive datasets.

A recent study conducted by the University of Tennessee investigates the contribution of artificial intelligence (AI) to automating this classification. Drawing on a dataset of over 1.5 million images of decomposing bodies documented under real conditions between 2011 and 2023, the researchers trained two convolutional neural network (CNN) models: Inception V3 and Xception.

A segmented anatomical approach based on deep learning

The study employed a strategy of decomposition stage scoring by anatomical region (head, trunk, and limbs), consistent with the methods of Megyesi (4 stages) and Gelderman (6 stages). Images were automatically sorted and then manually annotated by an expert according to these reference systems. The AI models were subsequently trained through transfer learning and tested on unseen images.
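To illustrate what such a transfer-learning setup can look like in practice, here is a minimal sketch (our illustration, not the study’s published code; input size, layer choices, and hyperparameters are assumptions) that adapts an ImageNet-pretrained Xception backbone to score a single anatomical region on the four-stage Megyesi scale:

```python
# Minimal transfer-learning sketch (illustrative, not the study's actual code):
# adapt an ImageNet-pretrained Xception backbone to classify decomposition
# stages for one anatomical region. NUM_STAGES = 4 follows the Megyesi scale
# (it would be 6 for Gelderman); all hyperparameters are assumptions.
import tensorflow as tf

NUM_STAGES = 4  # Megyesi: fresh, early, advanced, skeletonization

# Pretrained backbone, without its ImageNet classification head.
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3)
)
base.trainable = False  # freeze backbone weights; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_STAGES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_images, train_stage_labels, validation_split=0.2, epochs=10)
```

Freezing the backbone first and training only a small classification head is the usual way to exploit a large pretrained network when the labeled images, however numerous, cover a narrow visual domain.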

Performance results are highly promising, particularly with the Xception model, which achieved a high F1-score for both methods—an indicator of an AI model’s ability to generate predictions that are both accurate and comprehensive. Results were more modest for the limbs, owing to variability in photographic conditions.

A reliability equivalent to human experts?

To evaluate the performance of artificial intelligence against human experts, the researchers conducted an inter-rater test on 300 thoracic images. Three specialists classified the decomposition stages of these images using the two recognized methods, and their results were compared with those generated by the AI.

Agreement was assessed using Fleiss’ Kappa coefficient. For the Megyesi method, results revealed a “substantial” agreement between AI classifications and those of human experts (κ = 0.637), a score very close to that observed among the experts themselves (κ = 0.67). These findings highlight the significant alignment of AI with expert evaluations, thereby reinforcing the validity and relevance of this automated approach.
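For readers unfamiliar with the statistic, Fleiss’ kappa compares observed agreement with the agreement expected by chance; in its standard form,

$$\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}$$

where $\bar{P}$ is the mean proportion of rater pairs that agree on an image and $\bar{P}_e$ the agreement expected if ratings were assigned at random. A value of 1 indicates perfect agreement and 0 chance-level agreement; on the widely used Landis and Koch benchmarks, values between 0.61 and 0.80 are labeled “substantial”.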

Challenges to overcome for operational integration

Annotation carried out by a single expert introduces bias, while reliance on a single environmental context limits the generalizability of the results. Lower performance on limb regions highlights the need for greater data diversification, particularly through the inclusion of varied climatic conditions. A multicenter dataset annotated by multiple experts would provide a more robust reference base, ensuring improved generalization and increased reliability of the models.

Perspectives: toward AI-augmented forensics

This study represents a step forward in the automation of taphonomic analysis. Other work, such as that of Smith et al. (2024) on Bayesian models, together with the growing use of 3D imaging and the necrobiome, suggests a convergence of AI-based, biological, and environmental approaches toward a more accurate and less subjective estimation of the PMI.

Automating the assessment of decomposition stage allows for substantial time savings while reducing inter-observer variability. However, further efforts are needed to expand datasets and to develop standardized annotation protocols. The integration of algorithms such as those described here could transform forensic practice by facilitating the exploitation and analysis of large image databases, as well as their application in crisis situations (disasters, conflicts).

References:

  • Nau, A.-M. et al. (2024). Towards Automation of Human Stage of Decay Identification: An Artificial Intelligence Approach. arXiv:2408.10414.
  • Megyesi, M.S. et al. (2005). Using accumulated degree-days to estimate the postmortem interval from decomposed human remains. Journal of Forensic Sciences, 50(3), 618–626.
  • Gelderman, H. et al. (2018). The development of a post-mortem interval estimation for human remains found on land in the Netherlands. Int. J. Legal Med., 132(3), 863–873.
  • Smith, D.H. et al. (2024). Modeling human decomposition: a Bayesian approach. arXiv:2411.09802.
  • Infante, D. (2025). How AI and 3D Imaging are Transforming Body Farm Research. AZoLifeSciences.
  • Piraianu, A.-I. et al. (2023). Enhancing the evidence with algorithms: how artificial intelligence is transforming forensic medicine. Diagnostics, 13(18), 2992.

When teeth talk: How dental tartar serves toxicology

Initially exploited in archaeology, dental calculus is now revealing its potential in forensic science. It retains traces of ingested substances, opening the way to post-mortem analysis of drug intake and psychoactive compounds.

Dental calculus: A neglected but valuable matrix

Dental calculus forms through the gradual mineralization of dental plaque, a biofilm composed of saliva, microorganisms, and food residues. This process traps various compounds present in the oral cavity, including xenobiotics such as drugs or their metabolites. Its crystalline structure grants this matrix excellent preservation properties for the substances it contains, while making it resistant to external degradation, including in post-mortem or archaeological contexts.

A new path for tracking illicit substances

Recently, a research team demonstrated the feasibility of a toxicological approach based on the analysis of dental calculus using liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). In a study involving ten forensic cases, the researchers detected 131 substances in tartar, compared with 117 in blood—sometimes in higher concentrations within the tartar. The method enabled the identification of common drugs such as cocaine, heroin, and cannabinoids, even in cases where they were no longer detectable in conventional matrices such as blood (Sørensen et al., 2021).

A long-lasting and discreet witness

This approach offers several clear advantages. It allows the detection of substance use weeks or even months after ingestion. Tartar sampling is non-invasive and applicable to skeletal remains, making it particularly relevant in archaeological and forensic anthropology contexts. It can help reconstruct consumption habits, medical treatments, or causes of death in situations where blood, urine, or hair are unavailable.

A promising method to be further developed

One of the main strengths of this technique lies in its ability to exploit a matrix that is often overlooked but commonly available on teeth. Only a few milligrams are needed to conduct a reliable analysis—provided the trapped substances remain stable over time. This method also opens the possibility of broadening the range of detectable compounds, pending further validation.

While promising, this avenue still requires additional research to standardize protocols, assess the long-term stability of molecules, and fully integrate this approach into routine forensic toxicology practices. Although still in its exploratory phase, the method offers remarkable potential for the use of alternative matrices and opens new perspectives for forensic toxicology.

References:

  • Sørensen LK, Hasselstrøm JB, Larsen LS, et al. Entrapment of drugs in dental calculus: detection validation based on test results from post-mortem investigations. Forensic Sci Int 2021; 319: 110647.
  • Reymond C, Le Masle A, Colas C, et al. A rational strategy based on experimental designs to optimize parameters of a liquid chromatography-mass spectrometry analysis of complex matrices. Talanta 2019; 205: 120063.
  • Radini A, Nikita E, Buckley S, Copeland L, Hardy K. Beyond food: The multiple pathways for inclusion of materials into ancient dental calculus. Am J Phys Anthropol 2017; 162: 71–83.
  • Henry AG, Piperno DR. Using plant microfossils from dental calculus to recover human diet: a case study from Tell al-Raqā’i, Syria. J Archaeol Sci 2008; 35: 1943–1950.

Bedbugs: a new weapon for forensic science?

Malaysian researchers have explored the potential of tropical bedbugs, Cimex hemipterus, as a new source of human DNA in forensic investigations. Typically overlooked in crime scene analyses due to the absence of visible traces, these insects may nevertheless carry, within their digestive tract, the DNA of the last human host they fed on. The study aimed to determine whether—and for how long—a usable human DNA profile could be extracted from the blood meal content of bedbugs, focusing on two key forensic genetic markers: STRs (Short Tandem Repeats) and SNPs (Single Nucleotide Polymorphisms).

Methodology and results

Laboratory-reared bedbug colonies were fed on human volunteers and subsequently sacrificed at different intervals (0, 5, 14, 30, and 45 days after feeding). DNA was extracted and subjected to STR and SNP analyses following standard forensic protocols. The results were conclusive: complete STR and SNP profiles could only be obtained on the day of feeding (day 0), while partial, though still informative, profiles remained detectable up to 45 days post-feeding. The SNP data were interpreted using the HIrisPlex-S system, allowing phenotype predictions (eye, skin, and hair colour) even from partial genetic information. Moreover, field-collected bedbugs confirmed the feasibility of STR profiling, occasionally revealing mixed DNA profiles—potentially indicating feeding from multiple human hosts.

These results open up a new avenue for forensic science: when traditional biological traces have disappeared or been cleaned away, bedbugs could remain at the scene and serve as reliable micro-reservoirs of human DNA, enabling investigators to identify individuals who were present or to establish a timeline of movements. However, several limitations must be taken into account. First, the analyses are time-consuming and require a rigorous protocol. The DNA profile becomes partial after a few days, and some loci are no longer detectable. Moreover, when an insect has fed on multiple individuals, mixed genetic signals can occur, making interpretation more complex.

The authors emphasize the need to validate these findings on a broader range of samples, including more human donors and various commercial STR/SNP kits. Controlled in situ tests on simulated crime scenes would also be useful to confirm the robustness of the method—particularly in comparison with other insects or biological intermediaries considered in forensic entomology.

Conclusion

In summary, this study demonstrates that human DNA preserved in the stomach of tropical bedbugs can be exploited for up to 45 days after feeding through STR and SNP analysis. Although a complete genetic profile can only be obtained immediately after feeding, these insects represent an innovative and promising resource for forensic science, especially in situations where conventional methods fail. Nevertheless, the approach requires strict protocols, further validation studies, and realistic crime-scene modelling before it can be used in judicial proceedings. Additional research will determine how this strategy can be integrated into the growing toolkit of forensic investigators and scientists.

Sources:

  • Kamal, M. M. et al. (2023). Human profiling from STR and SNP analysis of tropical bed bug (Cimex hemipterus) for forensic science. Scientific Reports, 13(1), 1173.
  • Chaitanya, L. et al. (2018). HIrisPlex-S system for eye, hair and skin colour prediction from DNA. Forensic Science International: Genetics, 35, 123–134.
  • Asia News Network (2023). Malaysian scientists discover bed bugs can play role in forensic investigations.

Photogrammetry, Lasergrammetry, and Artificial Intelligence: A Technological Revolution

Forensics and emergency response are currently at a turning point with the growing integration of advanced technologies such as photogrammetry, lasergrammetry (LiDAR), and artificial intelligence (AI). These technologies not only provide unprecedented levels of accuracy and efficiency but also open up new avenues for investigation and intervention, profoundly reshaping traditional methodologies.

Photogrammetry and lasergrammetry: precision tools

As a surveying expert and officer specializing in the drone unit of the Haute-Savoie Fire and Rescue Department (SDIS74), I have directly observed how these tools enhance the accuracy of topographic surveys and facilitate the rapid analysis of complex scenes. Photogrammetry enables 3D reconstruction of various environments using aerial images captured by drones equipped with high-resolution cameras. This process quickly generates detailed digital terrain models, which are critical in urgent or forensic interventions where every detail matters.

Road survey using photogrammetric methods, in true color. Credit: Arnaud STEPHAN – LATITUDE DRONE


LiDAR scanning effectively complements photogrammetry by providing millimetric precision through the emission of laser beams that scan and model the environment in three dimensions. This technology is particularly effective in complex contexts such as dense wooded areas, steep cliffs, or rugged mountain terrain, where photogrammetry may sometimes struggle to capture all the necessary details.

To be more precise, LiDAR generally produces more noise on bare ground and hard surfaces compared to photogrammetry, which remains the preferred tool in such cases. However, in wooded areas, LiDAR can occasionally penetrate through to the ground and thus provide crucial information about the terrain, where photogrammetry may fail.


Depending on the chosen flight altitudes and the type of sensor used, it is possible to achieve extremely high levels of detail, allowing, for example, the identification of footprints by the depth left in the ground. These technologies are already being used to precisely capture crime scenes. Traditionally, static scanners were used for this purpose, but drones now make it possible to greatly expand the capture perimeter while ensuring faster processing. This speed is crucial, as it is often imperative to capture the scene quickly before any change in weather conditions.

However, it is important to note that photogrammetry only works during daylight, since it relies on photographic data in the visible spectrum.

Topographic survey using LiDAR method and colored according to altitude. Vegetation differentiated in green. Credit: Arnaud STEPHAN – LATITUDE DRONE

Artificial Intelligence: towards automated and efficient analysis

The true revolution lies in the integration of these geospatial surveys into intelligent systems capable of massively analyzing visual data with speed and precision. In this regard, the OPEN RESCUE project, developed by ODAS Solutions in partnership with SDIS74 and the Université Savoie Mont-Blanc, stands as an exemplary case. This AI is fueled by an exceptional dataset of nearly 1.35 million images collected using various types of drones (DJI Mavic 3, DJI Matrice 300, Phantom 4 PRO RTK, etc.) across a remarkable diversity of environments, covering all seasons.

Illustration of OPEN RESCUE’s capabilities: a person isolated in the mountains during winter. Credit: Arnaud STEPHAN – ODAS SOLUTIONS

The robustness of the OPEN RESCUE AI is demonstrated by a maximum F1-score of 93.6%, a remarkable result validated through real field operations. The F1-score is a statistical indicator used to measure the accuracy of an artificial intelligence system: it combines precision (the number of correctly identified elements among all detections) and recall (the number of correctly identified elements among all those actually present). A high score therefore means that the AI effectively detects a large number of relevant elements while avoiding false detections. This intelligent system is capable of accurately detecting individuals as well as indirect signs of human presence such as abandoned clothing, immobilized vehicles, or personal belongings, thereby providing valuable and immediate assistance to rescue teams.
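Stated as a formula (with P for precision and R for recall), the F1-score is the harmonic mean of the two:

$$F_1 = 2\cdot\frac{P \cdot R}{P + R}$$

Because a harmonic mean is dominated by its smaller term, a system that floods operators with false alarms (low precision) or one that misses most targets (low recall) both score poorly; an F1-score of 93.6% therefore implies that precision and recall are simultaneously high.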

Collection of OPEN RESCUE training data with SDIS74 firefighters – Credit: Arnaud STEPHAN – ODAS SOLUTIONS

The arrival of this technology is radically transforming the way teams conduct their searches: it is now possible to methodically and extensively sweep entire areas, while ensuring that no relevant element has been missed by the AI in these zones. Although this does not replace canine units or other traditional methods, artificial intelligence provides a new and complementary level of thoroughness in the search process.


Practical Applications and Operational Results

In the field, the effectiveness of these technologies has been widely demonstrated. The autonomous drones used by our unit can efficiently cover up to 100 hectares in about 25 minutes, with image processing carried out almost in real time by OPEN RESCUE. This enables an extremely rapid response, ensuring optimal management of critical time during emergency interventions and missing-person searches.

Furthermore, the ability to precisely document the areas covered during operations provides a significant advantage in judicial contexts. The possibility of using these accurate 3D models and automatically analyzed data as evidence before courts offers greater transparency in judicial procedures and greatly facilitates the work of judges, investigators, and lawyers.

DJI Matrice 300 drone flying in a mountainous area – Credit: Arnaud STEPHAN – LATITUDE DRONE

Operational constraints and regulatory framework

The operational use of drones and these advanced technologies is subject to several strict regulatory constraints, particularly in terms of flight authorizations, privacy protection, data management, and air safety. In France, drones are regulated by the Direction Générale de l’Aviation Civile (DGAC – French Civil Aviation Authority), which imposes specific flight scenarios and precise protocols to be followed during missions.

In addition, the technical constraints of operations include the need for trained and regularly certified pilots, capable of carrying out missions safely and efficiently. Finally, roughly every six months, new innovative equipment is released, constantly bringing significant improvements such as higher capture speeds, better optical and thermal sensors, and the miniaturization of onboard LiDAR systems.

Conclusion

Ultimately, the growing integration of advanced technologies represents a decisive breakthrough in forensic sciences and emergency interventions, despite the operational and regulatory constraints to be taken into account. Their practical application not only enhances the efficiency and speed of operations but also opens up new possibilities for judicial analysis, thereby confirming their essential role in public safety and modern justice.

Entomotoxicology: the role of insects in forensic investigation

When the body no longer speaks, insects tell the truth. At the crossroads of toxicology and entomology, entomotoxicology turns these small organisms into key witnesses capable of revealing what time seeks to erase.

Insects serving the truth

In certain deaths, especially those linked to drug abuse or suicide, the victim’s body may remain undiscovered for several days or even months. Decomposition begins immediately after death, releasing gases and fluids that produce putrefaction odors. These effluents quickly attract insects such as flies, which lay their eggs in the body’s natural openings. The emerging larvae feed on the decomposing flesh, thereby accelerating the breakdown process [Forensic Entomology – Damien Charabidze].

As insect colonization progresses, it further accelerates the gradual decomposition of body tissues. This degradation also alters organic fluids and tissues such as urine, blood, and the liver, rendering traditional matrices used in forensic toxicological analyses unavailable or unreliable. At this stage, insects—particularly blowfly larvae—become especially valuable for toxicological examination.

Discovery of larvae on a decomposing body

Because of their abundance, ease of collection, and resistance to environmental conditions, necrophagous insect larvae can be sampled from the various regions of the body where they are present. This choice is critical, as the ante- and postmortem redistribution of substances (medications, drugs, toxins) within the body may vary between tissues, leading to qualitative and quantitative differences in the insects. Collecting multiple samples from different areas therefore improves the accuracy and reliability of qualitative results. Numerous studies have demonstrated their potential to reveal toxic substances where traditional methods have failed [1–4]. To date, entomotoxicological analyses allow only for the qualitative detection of toxic substances present in the decedent’s body. This means that one can confirm the presence or absence of a drug, poison, or medication in the tissues consumed by the larvae, but not yet reliably determine its concentration.

Such results can therefore only support a hypothesis of intoxication. However, establishing whether the detected quantity was lethal is not yet possible, as this would require a more reliable quantitative approach.

In addition, the toxicologist must bear in mind that these small organisms are capable of metabolizing substances and producing metabolites similar to those generated by the human body, which can further complicate interpretation. Research into this phenomenon is still in its early stages.


Drugged insects, misleading dating!

In a criminal investigation, a key factor to consider—especially in cases involving decomposed bodies—is the minimum postmortem interval (minPMI). This refers to the time elapsed between the moment when the first insects colonized the body and its discovery (Fig. 1).

Fig. 1. Simplified diagram illustrating the difference between PMI and minPMI.

It is referred to as a minimum because this estimate does not begin at the exact time of death but at the moment of the first insect colonization, which occurs shortly after death, ranging from a few minutes to several hours depending on environmental factors. Once rigor mortis, livor mortis, and the cooling of the body to ambient temperature have passed, it becomes increasingly difficult to estimate the time elapsed since death. The body then enters the putrefaction phase, during which forensic entomological methods can assist in determining the minimum time elapsed between death and the discovery of the body.


Forensic entomologists measure the size and study the developmental stage of the larvae present on the body, and by taking into account factors such as ambient temperature, the insect species identified, and data on necrophagous species succession [5, 6], they compare these findings with the life cycle of the insects concerned (Fig. 2).

Fig. 2. Life cycle of necrophagous flies (Diptera).

Since certain species colonize a cadaver very soon after death, they make it possible to determine the day of initial colonization and thereby estimate a minimum postmortem interval (minPMI).
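To make the underlying calculation concrete, larval development is commonly modeled by thermal summation: each day contributes its mean temperature above a species-specific development threshold, and a given stage is reached once a roughly constant total of “degree-days” has accumulated. The sketch below illustrates this accumulated-degree-day logic with invented parameter values; it is a simplified teaching example, not a validated model for any real species.

```python
# Simplified thermal-summation (accumulated degree-day) sketch for bracketing
# a minimum postmortem interval. Illustrative only: t_base (development
# threshold) and k_stage (thermal sum needed to reach the observed larval
# stage) are invented values, not constants for a real species.

def min_days_to_stage(daily_mean_temps, t_base=10.0, k_stage=75.0):
    """Count how many days of temperature records are needed to accumulate
    k_stage degree-days above t_base; that count bounds the minPMI."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(0.0, temp - t_base)  # only heat above threshold counts
        if accumulated >= k_stage:
            return day
    return None  # stage not reachable within the available record

# Daily mean temperatures recorded at the scene:
temps = [18.5, 19.0, 17.2, 16.8, 20.1, 21.0, 19.4, 18.0, 17.5]
print(min_days_to_stage(temps))  # -> 9: colonization began at least ~9 days ago
```

In real casework the thermal constants come from published development datasets for the identified species, and the temperature series is reconstructed from nearby weather-station records corrected for conditions at the scene.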

However, if the victim had consumed drugs prior to death, this can significantly affect the development of necrophagous insects by accelerating or delaying their growth. Comparison with their life cycle is therefore biased, leading to overestimation or underestimation of the minPMI. This is why studies have been conducted to evaluate the impact of specific drugs on insect development, with the aim of eventually incorporating these variability ranges into more accurate minPMI estimations [7–12].

Limitations of entomotoxicology

Although insects can provide valuable information, entomotoxicology is not without limitations.

First, the environment plays a critical role: insect development is highly dependent on temperature, humidity, and climate. If the body is, for example, exposed to extreme heat or strong winds, insect colonization may differ. Ambient temperature directly influences insect survival and development. Harsh environmental conditions may kill the insects or slow their growth, thereby biasing or preventing analysis.

Moreover, beyond development, insect colonization itself can also be disrupted. For instance, when a body is submerged in water or covered (by clothing, tarpaulins, soil, debris, etc.), insect access is hindered. This may alter the dynamics of colonization and consequently the way insects feed on the body.

Another challenge in entomotoxicology is the uncertainty regarding how insects metabolize or fail to metabolize toxic substances they ingest. Unlike humans, their metabolism and the ways in which they store or eliminate toxic substances are poorly understood. As a result, it is difficult to establish a correlation between the amount of toxic substances detected in insects and the dose ingested by the deceased, making it problematic to confirm a lethal dose in the body and therefore to substantiate and accept a prior hypothesis of fatal intoxication.

Furthermore, methods for extracting, purifying, and analyzing toxic substances in insect larval tissues are not yet standardized. Each study must be adapted according to the substances being investigated.

Conclusion

Forensic entomotoxicology thus illustrates how insects could in the future become key witnesses in the resolution of criminal investigations. It opens up promising perspectives for forensic medicine by broadening the range of possibilities when no other biological sample is available. With advances in research, entomotoxicology may become an even more precise tool—not only qualitative but also quantitative—and essential in contributing to the establishment of truth in real forensic casework.

Bibliography:

  • [1] Campobasso, C., Gherardi, M., Caligara, M., Sironi, L., & Introna, F. (2004). Drug analysis in blowfly larvae and in human tissues: a comparative study. International Journal Of Legal Medicine, 118(4). https://doi.org/10.1007/s00414-004-0448-1
  • [2] Groth, O., Franz, S., Fels, H., Krueger, J., Roider, G., Dame, T., Musshoff, F., & Graw, M. (2021). Unexpected results found in larvae samples from two postmortem forensic cases. Forensic Toxicology, 40(1), 144–155. https://doi.org/10.1007/s11419-021-00601-x
  • [3] Levine, B., Golle, M., & Smialek, J. E. (2000). An unusual drug death involving maggots. American Journal Of Forensic Medicine & Pathology, 21(1), 59–61. https://doi.org/10.1097/00000433-200003000-00010
  • [4] Beyer, J., Enos, W., & Stajić, M. (1980). Drug Identification Through Analysis of Maggots. Journal Of Forensic Sciences, 25(2), 411–412. https://doi.org/10.1520/jfs12147j
  • [5] Lutz, L., Verhoff, M. A., & Amendt, J. (2018). Environmental factors influencing flight activity of forensically important female blow flies in Central Europe. International Journal Of Legal Medicine, 133(4), 1267–1278. https://doi.org/10.1007/s00414-018-1967-5
  • [6] Matuszewski, S. (2021). Post-Mortem Interval Estimation Based on Insect Evidence: Current Challenges. Insects, 12(4), 314. https://doi.org/10.3390/insects12040314
  • [7] Boulkenafet, F., Dob, Y., Karroui, R., Al-Khalifa, M., Boumrah, Y., Toumi, M., & Mashaly, A. (2020). Detection of benzodiazepines in decomposing rabbit tissues and certain necrophagic dipteran species of forensic importance. Saudi Journal Of Biological Sciences, 27(7), 1691–1698. https://doi.org/10.1016/j.sjbs.2020.04.044
  • [8] Gosselin, M., Di Fazio, V., Wille, S. M., Del Mar Ramírez Fernandez, M., Samyn, N., Bourel, B., & Rasmont, P. (2011). Methadone determination in puparia and its effect on the development of Lucilia sericata (Diptera, Calliphoridae). Forensic Science International, 209(1–3), 154–159. https://doi.org/10.1016/j.forsciint.2011.01.020
  • [9] El-Samad, L. M., El-Moaty, Z. A., & Makemer, H. M. (2011). Effects of Tramadol on the Development of Lucilia sericata (Diptera: Calliphoridae) and Detection of the Drug Concentration in Postmortem Rabbit Tissues and Larvae. Journal Of Entomology, 8(4), 353–364. https://doi.org/10.3923/je.2011.353.364
  • [10] Bourel, B., Hédouin, V., Martin-Bouyer, L., Bécart, A., Tournel, G., Deveaux, M., & Gosset, D. (1999). Effects of Morphine in Decomposing Bodies on the Development of Lucilia sericata (Diptera: Calliphoridae). Journal Of Forensic Sciences, 44(2), 354–358. https://doi.org/10.1520/jfs14463j
  • [11] Zou, Y., Huang, M., Huang, R., Wu, X., You, Z., Lin, J., Huang, X., Qiu, X., & Zhang, S. (2013). Effect of ketamine on the development of Lucilia sericata (Meigen) (Diptera: Calliphoridae) and preliminary pathological observation of larvae. Forensic Science International, 226(1–3), 273–281. https://doi.org/10.1016/j.forsciint.2013.01.042
  • [12] O’Brien, C., & Turner, B. (2004). Impact of paracetamol on Calliphora vicina larval development. International Journal Of Legal Medicine, 118(4), 188–189. https://doi.org/10.1007/s00414-004-0440-9

At the heart of the criminal investigation: from the crime scene to the criminal court trial

In 2024, our unique literary concept combining crime fiction and educational writing finally came to life. It is the result of many months of work, fascinating encounters with seasoned professionals, the sharing of expertise, and true immersion in the daily lives of numerous experts within the judicial sphere. Our ambition with this book was to explore every stage of a criminal investigation, revealing to the general public the many layers of the vast judicial system—from the discovery of a violent crime scene to the verdict delivered by the criminal court. We extend our heartfelt thanks to all the experts who took part in this project and whose testimonies lend the book its authenticity. It has been a remarkable journey!

1 – To begin, could you briefly describe your background and what motivated you to write this book?

Sébastien Aguilar: I have been working in the Forensic Police of the Paris Police Prefecture for thirteen years. In 2017, I had the opportunity to co-author a first book on forensic science and to found ForenSeek®, a platform dedicated to forensic disciplines, which also offers a training program for the competitive examination to become a Forensic Science Technician (Technicien de Police Technique et Scientifique). Since my first assignment, I have always enjoyed sharing insights about this extraordinary profession, which, in my view, remains largely unknown to the general public. The inner workings of a judicial investigation are often unsuspected, and I have witnessed firsthand how investigators sacrifice part of their personal lives to bring cases to completion—sometimes over several days or even weeks. With this new book, our goal was to shed light on the full complexity of a criminal investigation: the overwhelming quantity of evidence to collect, the necessity of organizing all this information, and the importance of interpreting it correctly to uncover the truth. For us, it was a way to pay tribute to all those who work behind the scenes, whose efforts are essential—especially for the victims.

Justine Picard: My career path is somewhat atypical. I spent nearly ten years working in marketing and communications. As I approached my thirties, driven by a strong desire to pursue the profession that had always fascinated me, I decided to take the entrance examination for the Forensic Police of the French National Police. In 2019, I joined the intervention unit of the SRPTS in Paris, marking a complete 180-degree career change! I discovered a fascinating, highly technical, and demanding field. Throughout the various cases I have worked on, I quickly began to feel a certain frustration. Within forensic science, we have our own protocols, our own methods, and our own way of working. At crime scenes, we collaborate closely with investigators, but soon after, we lose visibility on the subsequent progress of the case. It’s understandable—this is how the judicial process operates, and everyone must play their part to move things forward as quickly and efficiently as possible. Yet while I accept this professionally, on a personal level, it leaves me with a lingering sense of incompleteness. That’s what motivated me to embark on this literary project: Who? When? What? How? To know and understand every aspect of a criminal investigation, to delve into the daily work of those experts who operate in the shadows, and more broadly, to grasp the inner workings of our country’s judicial system.

2 – What makes your book stand out from other works on criminology and criminal investigations?

Justine Picard: Mainly the format we chose: finding the perfect balance between the technical narrative and the fictionalized storytelling. There are many books devoted to the National Police, the Gendarmerie, or other justice system professionals—some take the form of testimonies, others of detective novels or technical manuals—but none truly bridges these different worlds. For us, it was a way to engage the reader, to hold their attention, while guiding them through the entire judicial process with well-sourced information and key witness accounts. In this market, books tend to be one or the other—but rarely both!

Sébastien Aguilar: Our ambition was to create a book that is both educational and captivating, moving away from the somewhat austere format of traditional criminal law textbooks. We therefore chose to diversify our approach: by including sections dedicated to specific forensic specialities, interviews with various judicial actors (magistrates, experts, lawyers, psycho-criminologists, jurors of the cour d’assises, etc.), and concrete materials such as call detail records, official reports, autopsy findings, and forensic police reports. The idea was to immerse the reader in the heart of the investigation—to show, as vividly as possible, how a case is built step by step, and what tools investigators use along the way. I was particularly moved that Dominique Rizet, a seasoned judicial reporter, praised in his foreword the “educational, well-documented, and comprehensive” nature of this book, describing it as “truly one of a kind.”

3 – Why did you choose to tell this story in the form of a crime novel?

Justine Picard: Above all, we wanted to bring suspense to the narrative and move away from a purely technical approach. Another important point for us was to reach a wide audience—both “specialists” and “non-specialists”—by allowing them to immerse themselves more easily in a complex investigation involving multiple technical procedures. The plot twists, witness statements, and the reader’s desire to find out what happens next all serve as tools to gently introduce complex forensic and judicial concepts. Our aim was for the reader to finish the book with the satisfaction of a well-crafted story while also gaining a solid educational foundation through the insights of real experts and the many technical details presented.

Sébastien Aguilar: We chose a narrative format because it allows readers to experience the intensity and emotion inherent in this kind of investigation. This storytelling makes it possible to convey powerful messages—such as the confrontation with death, the crucial role of the forensic autopsy, or the chronic fatigue affecting every individual involved in the investigation. Behind the forensic police expert’s coverall, the magistrate’s or lawyer’s robe, the pathologist’s lab coat, or the investigator’s computer screen, there are men and women with their own strengths and weaknesses. Writing it as a crime novel enabled us to highlight this deeply human dimension, too often overshadowed by the purely technical side of criminal investigation.

4 – Is the case presented in your book entirely fictional, or does it include real investigative elements and techniques?

Sébastien Aguilar / Justine Picard: Around 30% of the story is inspired by a real criminal case, to which we added numerous original elements to illustrate the diversity and modernity of current investigative techniques. We’re sometimes asked whether we’re concerned about revealing too much information that might benefit criminals. In reality, everything we describe in this book is already publicly accessible—through the internet, films, or television series. Nowadays, everyone knows they can be betrayed by their fingerprints, DNA, scent, clothing fibers, digital data, or even shoeprints left at the scene. To put it simply: the best way not to get caught is still not to commit a crime…

5 – What are the key insights or most surprising discoveries readers will find in “At the Heart of the Criminal Investigation”?

Sébastien Aguilar / Justine Picard: In At the Heart of the Criminal Investigation, we reveal fascinating developments that are set to transform investigative methods in the years to come. For instance, we explore emerging forms of digital trace evidence—such as connected devices, next-generation vehicles, and intelligent video surveillance—that are poised to play a decisive role in future investigations. These new sources of evidence already make it possible to reconstruct crime scenes with remarkable precision. We also break down how DNA analyses are conducted: How are they performed? What criteria are used to compare genetic profiles? Through this book, readers will gain insight into the inner workings of forensic genetics laboratories and understand how a single biological sample can completely change the course of an investigation.

6 – Your book doesn’t stop at the criminal investigation—it also includes a section on the trial before the cour d’assises. Why did you make that choice?

Justine Picard: The trial represents a crucial stage of the judicial process. All the work carried out beforehand by the various forensic and investigative experts takes on its full meaning in court, when the accused are confronted with the body of evidence gathered against them. That’s where everything comes together! We also felt it was important to shed light on how the justice system functions—something often misunderstood by the general public—and to clearly explain the roles of its key players (lawyers, prosecutors, investigating judges, etc.).

Sébastien Aguilar: Having attended several trials before the cour d’assises, I’ve always been struck by their almost theatrical staging and by the ability of certain investigators and experts, when called to the stand, to testify for hours on end without interruption or notes. It was important for us to show how such a trial unfolds: How are jurors selected? Who appears before the court? Should one address the presiding judge as “Your Honour”? Do lawyers ever interrupt one another with an “Objection, Your Honour!”? How does the deliberation phase take place? And so on.

7 – If you had to describe your book in one word?

Justine Picard: Immersive!
Sébastien Aguilar: Thrilling!

8 – To conclude, could you share a short anecdote?

Sébastien Aguilar: In this fictional case, I actually went to the banks of the Seine—the location where the victim’s body is discovered—where I carried out a sample collection that was later analyzed by a captain from the Institut de Recherche Criminelle de la Gendarmerie Nationale (IRCGN). The results of that analysis proved decisive in our investigation. This book was also an opportunity to feature, through interviews and immersive accounts, contributions from real specialists in criminal investigation, including:

  • Jacques Dallest, honorary Attorney General, author of Cold Case and Sur les chemins du crime (Éditions Mareuil)
  • Christian Sainte, Director of the National Criminal Police (DNPJ)
  • Valérie-Odile Dervieux, Presiding Judge of the Investigative Chamber, Paris Court of Appeal
  • Delphine Blot, Judge of Liberties and Detention, Paris Judicial Court
  • Fatiha Touili, Investigating Judge, Bobigny Judicial Court
  • Thana Nanou, embalmer, author of Les yeux qu’on ferme (Éditions 41)
  • Guillaume Visseaux, forensic pathologist, IRCGN
  • Amel Larnane, Head of the Central Service for the Preservation of Biological Samples (SCPPB)
  • Eduardo Mariotti and Bertrand Le Corre, criminal lawyers
  • François-Xavier Laurent, forensic genetics expert at Interpol
  • Sylvie Miccolis, investigator, Paris Criminal Brigade (DPJ)
  • Noémi Chevassu, former investigator with the Minors’ Brigade, author of Pluie nocturne (Éditions Alba Capella)
  • Peggy Allimann, behavioural analyst, Forensic Division of the Gendarmerie Nationale (PJGN), author of Crimes (Éditions DarkSide)
  • General Christophe Husson and Colonel Pierre-Yves Caniotti, COMCYBER-MI
  • Chief Superintendent Sophie Malherbe-Mayeux, Head of the River Police Unit, Paris Police Prefecture

Our book is available in all bookstores and on online retail platforms.

Linear Sequential Unmasking–Expanded (LSU-E): A general approach for improving decision making as well as minimizing noise and bias

Copy of the article Linear Sequential Unmasking–Expanded (LSU-E): A general approach for improving decision making as well as minimizing noise and bias, Forensic Science International: Synergy, Volume 3, 2021, 100161, reproduced with author agreement (contact: [email protected])

All decision making, and particularly expert decision making, requires the examination, evaluation, and integration of information. Research has demonstrated that the order in which information is presented plays a critical role in decision making processes and outcomes. Different decisions can be reached when the same information is presented in a different order [1,2]. Because information must always be considered in some order, optimizing this sequence is important for optimizing decisions. Since adopting one sequence or another is inevitable —some sequence must be used— and since the sequence has important cognitive implications, it follows that considering how to best sequence information is paramount.

In the forensic sciences, existing approaches to optimize the order of information processing (sequential unmasking [3] and Linear Sequential Unmasking [4]) are limited in terms of their narrow applicability to only certain types of decisions, and they focus only on minimizing bias rather than optimizing forensic decision making in general. Here, we introduce Linear Sequential Unmasking–Expanded (LSU-E), an approach that is applicable to all forensic decisions rather than being limited to a particular type of decision, and it also reduces noise and improves forensic decision making in general rather than solely by minimizing bias.

Cognitive background

All decision making is dependent on the human brain and cognitive processes. Of particular importance is the sequence in which information is encountered. For example, it is well documented that people tend to remember the initial information in a sequence better —and be more strongly impacted by it— compared to subsequent information in the sequence (see the primacy effect [5,6]). For example, if asked to memorize a list of words, people are more likely to remember words from the beginning of the list compared to the middle of the list (see also the recency effect [7]).

Critically important, the initial information in a sequence is not only remembered well, but it also influences the processing of subsequent information in a number of ways (see a simple illustration in Fig. 1). The initial information can create powerful first impressions that are difficult to override [8], it generates hypotheses that determine which further information will be heeded or ignored (e.g., selective attention [[9][10][11][12]]), and it can prompt a host of other decisional phenomena, such as confirmation bias, escalation of commitment, decision momentum, tunnel vision, belief perseverance, mindset and anchoring effects [[13][14][15][16][17][18][19]]. These phenomena are not limited to forensic decisions, but also apply to medical experts, police investigators, financial analysts, military intelligence, and indeed anyone who engages in decision making.

Fig. 1. A simple illustration of the order effect: Reading from left to right, the first/leftmost stimulus can affect the interpretation of the middle stimulus, such that it reads as A-B-14; but reading the same stimuli, from right to left, starting with 14 as the first stimulus, often makes people see the stimuli as A-13-14, i.e., the middle stimulus as a ‘13’ (or a ‘B’) depending on what you start with first.

As a testament to the power of the sequencing of information, studies have repeatedly found that presenting the same information in a different sequence elicits different conclusions from decision-makers. Such effects have been shown in a whole range of domains, from food tasting [20] and jury decision-making [21,22], to countering conspiracy arguments (such as anti-vaccine conspiracy theories [23]), all demonstrating that the ordering of information is critical. Furthermore, such order effects have been specifically shown in forensic science; for example, Klales and Lesciotto [24] as well as Davidson, Rando, and Nakhaeizadeh [25] demonstrated that the order in which skeletal material is analyzed (e.g., skull versus hip) can bias sex estimates.

Bias background

Decisions are vulnerable to bias — systematic deviations in judgment [26]. This type of bias should not be confused with intentional discriminatory bias. Bias, as it is used here, refers to cognitive biases that impact all of us, typically without intention or even conscious awareness [26,27].

Although many experts incorrectly believe that they are immune from cognitive bias [28], in some ways experts are even more susceptible to bias than non-experts [[27][29][30]]. Indeed, the impact of cognitive bias on decision making has been documented in many domains of expertise, from criminal investigators and judges, to insurance underwriters, psychological assessments, safety inspectors and medical doctors [26,[31][32][33][34][35][36]], as well as specifically in forensic science [30].


Bias in forensic science

The existence and influence of cognitive bias in the forensic sciences is now widely recognized (‘the forensic confirmation bias’ [27,37,38]). In the United States, for example, the National Academy of Sciences [39], the President’s Council of Advisors on Science and Technology [40], and the National Commission on Forensic Science [41] have all recognized cognitive bias as a real and important issue in forensic decision making. Similar findings have been reached in other countries around the world: in the United Kingdom, for example, the Forensic Science Regulator has issued guidance about avoiding bias in forensic work [42], and similar guidance exists in Australia [43].

Furthermore, the effects of bias have been observed and replicated across many forensic disciplines (e.g., fingerprinting, forensic pathology, DNA, firearms, digital forensics, handwriting, forensic psychology, forensic anthropology, and CSI, among others; see Ref. [44] for a review)—including among practicing forensic science experts specifically [30,45–47]. Simply put, no forensic domain, or any domain for that matter, is immune from bias.

Minimizing bias in forensic science

Although the need to combat bias in forensic science is now widely recognized, actually combating bias in practice is a different matter. Within the pragmatics, realities and constraints of crime scenes and forensic laboratories, minimizing bias is not always a straightforward issue [48]. Given that mere awareness and willpower are insufficient to combat bias [27], we must develop effective —but also practical— countermeasures.

Linear Sequential Unmasking (LSU [4]) minimizes bias by regulating the flow and order of information such that forensic decisions are based on the evidence and task-relevant information. To accomplish this, LSU requires that forensic comparative decisions begin with the examination and documentation of the actual evidence from the crime scene (the questioned or unknown material) on its own, before exposure to the ‘target’/suspect (known) reference material. The goal is to minimize the potential biasing effect of the reference/‘target’ on the evidence from the crime scene (see Level 2 in Fig. 2). LSU thus ensures that the evidence from the crime scene —not the ‘target’/suspect— drives the forensic decision.

This is especially important since the nature of the evidence from the crime scene makes it more susceptible to bias: in contrast to the reference materials, it often contains a low quality and quantity of information, which makes it more ambiguous and malleable. By examining the crime scene evidence first, LSU minimizes the risk of circular reasoning in the comparative decision making process by preventing one from working backward from the ‘target’/suspect to the evidence.

Fig. 2. Sources of cognitive bias in sampling, observations, testing strategies, analysis, and/or conclusions, that impact even experts. These sources of bias are organized in a taxonomy of three categories: case-specific sources (Category A), individual-specific sources (Category B), and sources that relate to human nature (Category C).

LSU limitations

By its very nature, LSU is limited to comparative decisions where evidence from the crime scene (such as fingerprints or handwriting) is compared to a ‘target’/suspect. This approach was first developed to minimize bias specifically in forensic DNA interpretation (sequential unmasking [3]). Dror et al. [4] then expanded this approach to other comparative forensic domains (fingerprints, firearms, handwriting, etc.) and introduced a balanced approach for allowing revisions of the initial judgments, but within restrictions.

LSU is therefore limited in two ways: First, it applies only to the limited set of comparative decisions (such as comparing DNA profiles or fingerprints). Second, its function is limited to minimizing bias, not reducing noise or improving decision making more broadly.

In this article, we introduce Linear Sequential Unmasking—Expanded (LSU-E). LSU-E provides an approach that can be applied to all forensic decisions, not only comparative ones. Furthermore, LSU-E goes beyond bias: it reduces noise and improves decisions more generally by cognitively optimizing the sequence of information in a way that maximizes information utility, thereby producing better and more reliable decisions.

Linear Sequential Unmasking—Expanded (LSU-E)

Beyond comparative forensic domains

LSU in its current form is only applicable to forensic domains that compare evidence against specific reference materials (such as a suspect’s known DNA profile or fingerprints; see Level 2 in Fig. 2). As noted above, the problem is that these reference materials can bias the perception and interpretation of the evidence, such that interpretations of the same data/evidence vary depending on the presence and nature of the reference material. LSU aims to minimize this problem by requiring linear rather than circular reasoning.

However, many forensic judgments are not based on comparing two stimuli. For instance, digital forensics, forensic pathology, and CSI all require decisions that are not based on comparing evidence against a known suspect. Although such domains may not entail a comparison to a ‘target’ stimulus or suspect, they nevertheless entail biasing information and context that can create problematic expectations and top-down cognitive processes, and the expanded LSU-E provides a way to minimize those as well.

Take, for instance, CSI. Crime scene investigators customarily receive information even before they arrive at the crime scene itself, such as the presumed manner of death (homicide, suicide, or accident) or other investigative theories (such as an eyewitness account that the burglar entered through the back window). When CSIs receive such details before actually seeing the crime scene for themselves, they are prone to develop a priori expectations and hypotheses, which can bias their subsequent perception and interpretation of the actual crime scene and affect whether, and what, evidence they collect. The same applies to other non-comparative forensic domains, such as forensic pathology, fire investigation, and digital forensics. For example, telling a fire investigator, before they arrive and examine the fire scene itself, that the property was on the market for two years but did not sell, and/or that the owner had recently insured the property, can bias their work and conclusions.

Combating bias in these domains is especially challenging since these experts need at least some contextual information in order to do their work (unlike, for example, firearms, fingerprint, and DNA experts, who require minimal contextual information to perform comparisons of physical evidence).

The aim of LSU-E is not to deprive experts of the information they need, but to minimize bias by providing that information in the optimal sequence. The principle is simple: always begin with the actual data/evidence, and only that data/evidence, before considering any other information, be it explicit or implicit, reference materials, or other contextual or meta-information.

In CSI, for example, no contextual information should be provided until the CSI has initially seen the crime scene for themselves and formed (and documented) their initial impressions, derived solely from the crime scene and nothing else. This allows them to form an initial impression driven only by the actual data/evidence. Then they can receive relevant contextual information before commencing evidence collection. The goal is clear: as much as practically possible, experts should, at least initially, form their opinion based on the raw data itself before being given any further information that could influence their opinion.

Of course, LSU-E is not limited to forensic work and can readily be applied to many domains of expert decision making. In healthcare, for example, a medical doctor should examine a patient before making a diagnosis (or even generating a hypothesis) based on contextual information: an SBAR handover (Situation, Background, Assessment and Recommendation [49,50]) should not be provided until after they have seen the actual patient. Similarly, workplace safety inspectors should not be made aware of a company’s past violations until after they have evaluated the worksite for themselves without such knowledge [32].

Beyond minimizing bias

Beyond the issue of bias, expert decisions are stronger when they are less noisy and based on the ‘right’ information: the most appropriate, reliable, relevant and diagnostic information. LSU-E provides criteria (described below) for identifying and prioritizing this information. Rather than exposing experts to information in a random or incidental order, LSU-E aims to optimize the sequence of information so as to utilize (or counteract) cognitive and psychological influences (such as primacy effects, selective attention and confirmation bias; see Section 1.1) and thus empower experts to make better decisions. It is also critical that, as the expert progresses through the informational sequence, they document what information they see and any changes in their opinion. This ensures transparency about what information was used in their decision making, and how [51,52].

Criteria for sequencing information in LSU-E

Optimizing the order of information not only minimizes bias but also reduces noise and improves the quality of decision making more generally. The question is: how should one determine what information experts should receive, and how best to sequence it? LSU-E provides three criteria for determining the optimal sequence of exposure to task-relevant information: biasing power, objectivity, and relevance, which are elaborated below.

1. Biasing power. 

The biasing power of relevant information varies drastically. Some information may be strongly biasing, whereas other information is not biasing at all. For example, the technique used to lift and develop a fingerprint is minimally biasing (if at all), but the medication found next to a body may bias the manner-of-death decision. It is therefore suggested that the non-biasing (or less biasing) relevant information be placed before the more strongly biasing relevant information in the order of exposure.

2. Objectivity. 

Task-relevant information also varies in its objectivity. For example, an eyewitness account of an event is typically less objective than a video recording of the same event, but video recordings can also vary in their objectivity, depending on their completeness, perspective, quality, etc. It is therefore suggested that the more objective information be placed before the less objective information in the order of exposure.

3. Relevance. 

Some relevant information stands at the very core of the work and necessarily underpins the decision, whereas other relevant information is not as central or essential. For example, in determining manner of death, the medicine found next to a body would typically be more relevant (for instance, to determine which toxicological tests to run) than the decedent’s history of depression. It is therefore suggested that the more relevant information be placed before the more peripheral information in the order of exposure; of course, any information that is totally irrelevant to the decision should be omitted altogether (such as the past criminal history of a suspect).

The above criteria are ‘guiding principles’ because:

A. The suggested criteria are continua rather than simple dichotomies [45,48,53]. One may even consider variability within the same category of information; for example, a higher quality video recording may be considered before a lower quality recording, or a statement from a sober eyewitness before a statement from an intoxicated witness.

B. The three criteria are not independent; they interact with one another. For example, objectivity and relevance may interact to determine the power of the information (e.g., even highly objective information should be less powerful if its relevance is low, or conversely, highly relevant information should be less powerful if its objectivity is low). Hence, the three criteria are not to be judged in isolation from each other. 

C. The order of information needs to be weighed against the potential benefit it can provide [52]. For example, at the trial of police officer Derek Chauvin in relation to the death of George Floyd, the forensic pathologist Andrew Baker testified that he “intentionally chose not” to watch video of Floyd’s death before conducting the autopsy because he “did not want to bias [his] exam by going in with preconceived notions that might lead [him] down one path or another” [54]. Hence, his decision was to examine the raw data first (the autopsy of the body) before exposure to other information (the video). Such a decision should also weigh the potential benefit of watching the video before conducting the autopsy, in terms of whether the video might guide the autopsy more than bias it. In other words, LSU-E requires one to consider the potential benefit relative to the potential biasing effect [52].

With this approach, we urge experts to carefully consider how each piece of information satisfies each of these three criteria, and whether and when it should, or should not, be included in the sequence, and whenever possible, to document their justification for including (or excluding) any given piece of information. A minimal sketch of such a sequencing procedure is given below. Of course, this raises practical questions about how best to implement LSU-E, such as the use of case managers, and effective implementation strategies may well vary between disciplines and/or laboratories; but first we need to acknowledge these issues and the need to develop approaches to deal with them.
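To make these sequencing criteria concrete, here is a minimal sketch, in Python, of how a case manager might score and order items of case information. The items, the numeric scores, and the simple sort rule are illustrative assumptions, not part of LSU-E itself; in practice, weighing the three criteria is a documented human judgment, not an automated computation.

    from dataclasses import dataclass

    @dataclass
    class InfoItem:
        name: str
        biasing_power: float   # 0 = not biasing at all, 1 = strongly biasing
        objectivity: float     # 0 = fully subjective, 1 = fully objective
        relevance: float       # 0 = irrelevant, 1 = core to the decision

    def lsu_e_sequence(items):
        """Order task-relevant information for exposure: less biasing,
        more objective, more relevant items first; totally irrelevant
        items are omitted altogether."""
        relevant = [i for i in items if i.relevance > 0]
        return sorted(relevant,
                      key=lambda i: (i.biasing_power, -i.objectivity, -i.relevance))

    items = [
        InfoItem("medication found next to the body", 0.7, 0.8, 0.9),
        InfoItem("decedent's history of depression", 0.6, 0.5, 0.4),
        InfoItem("fingerprint development technique used", 0.1, 0.9, 0.6),
        InfoItem("suspect's past criminal history", 0.9, 0.6, 0.0),  # omitted
    ]

    for step, item in enumerate(lsu_e_sequence(items), start=1):
        print(step, item.name)   # logging the exposure order aids transparency

Printing (or otherwise logging) the order in which information is exposed mirrors the documentation requirement discussed above.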

Conclusion

In this paper, we draw upon classic cognitive and psychological research on the factors that influence and underpin expert decision making to propose a broad and versatile approach to strengthening expert decision making. Experts in all domains should first form an initial impression based solely on the raw data/evidence, devoid of any reference material or context, even if relevant. Only thereafter should they consider what other information they should receive, and in what order, based on its objectivity, relevance, and biasing power. It is furthermore essential to transparently document the impact and role of the various pieces of information on the decision making process. As a result of using LSU-E, decisions will not only be more transparent and less noisy; the contributions of different pieces of information will also be justified by, and proportional to, their strength.

References

[1] S.E. Asch, Forming impressions of personality, J. Abnorm. Soc. Psychol., 41 (1946), pp. 258-290
[2] C.I. Hovland (Ed.), The Order of Presentation in Persuasion, Yale University Press (1957)
[3] D. Krane, S. Ford, J. Gilder, K. Inman, A. Jamieson, R. Koppl, et al. Sequential unmasking: a means of minimizing observer effects in forensic DNA interpretation, J. Forensic Sci., 53 (2008), pp. 1006-1007
[4] I.E. Dror, W.C. Thompson, C.A. Meissner, I. Kornfield, D. Krane, M. Saks, et al. Context management toolbox: a Linear Sequential Unmasking (LSU) approach for minimizing cognitive bias in forensic decision making, J. Forensic Sci., 60 (4) (2015), pp. 1111-1112
[5] F.H. Lund. The psychology of belief: IV. The law of primacy in persuasion, J. Abnorm. Soc. Psychol., 20 (1925), pp. 183-191
[6] B.B. Murdock Jr. The serial position effect of free recall, J. Exp. Psychol., 64 (5) (1962), p. 482
[7] J. Deese, R.A. Kaufman. Serial effects in recall of unorganized and sequentially organized verbal material, J. Exp. Psychol., 54 (3) (1957), p. 180
[8] J.M. Darley, P.H. Gross. A hypothesis-confirming bias in labeling effects, J. Pers. Soc. Psychol., 44 (1) (1983), pp. 20-33
[9] A. Treisman. Contextual cues in selective listening, Q. J. Exp. Psychol., 12 (1960), pp. 242-248
[10] J. Bargh, E. Morsella. The unconscious mind, Perspect. Psychol. Sci., 3 (1) (2008), pp. 73-79
[11] D.A. Broadbent. Perception and Communication, Pergamon Press, London, England (1958)
[12] J.A. Deutsch, D. Deutsch. Attention: some theoretical considerations, Psychol. Rev., 70 (1963), pp. 80-90
[13] A. Tversky, D. Kahneman. Judgment under uncertainty: heuristics and biases, Science, 185 (4157) (1974), pp. 1124-1131
[14] R.S. Nickerson. Confirmation bias: a ubiquitous phenomenon in many guises, Rev. Gen. Psychol., 2 (1998), pp. 175-220
[15] C. Barry, K. Halfmann. The effect of mindset on decision-making, J. Integrated Soc. Sci., 6 (2016), pp. 49-74
[16] P.C. Wason. On the failure to eliminate hypotheses in a conceptual task, Q. J. Exp. Psychol., 12 (3) (1960), pp. 129-140
[17] B.M. Staw. The escalation of commitment: an update and appraisal, Z. Shapira (Ed.), Organizational Decision Making, Cambridge University Press (1997), pp. 191-215
[18] M. Sherif, D. Taub, C.I. Hovland. Assimilation and contrast effects of anchoring stimuli on judgments, J. Exp. Psychol., 55 (2) (1958), pp. 150-155
[19] C.A. Anderson, M.R. Lepper, L. Ross. Perseverance of social theories: the role of explanation in the persistence of discredited information, J. Pers. Soc. Psychol., 39 (6) (1980), pp. 1037-1049
[20] M.L. Dean. Presentation order effects in product taste tests, J. Psychol., 105 (1) (1980), pp. 107-110
[21] K.A. Carlson, J.E. Russo. Biased interpretation of evidence by mock jurors, J. Exp. Psychol. Appl., 7 (2) (2001), p. 91
[22] R.G. Lawson. Order of presentation as a factor in jury persuasion, Ky. L.J., 56 (1967), p. 523
[23] D. Jolley, K.M. Douglas. Prevention is better than cure: addressing anti-vaccine conspiracy theories, J. Appl. Soc. Psychol., 47 (2017), pp. 459-469
[24] A.R. Klales, K.M. Lesciotto. The “science of science”: examining bias in forensic anthropology, Proceedings of the 68th Annual Scientific Meeting of the American Academy of Forensic Sciences (2016)
[25] M. Davidson, C. Rando, S. Nakhaeizadeh. Cognitive bias and the order of examination on skeletal remains, Proceedings of the 71st Annual Meeting of the American Academy of Forensic Sciences (2019)
[26] D. Kahneman, O. Sibony, C. Sunstein. Noise: A Flaw in Human Judgment, William Collins (2021)
[27] I.E. Dror. Cognitive and human factors in expert decision making: six fallacies and the eight sources of bias, Anal. Chem., 92 (12) (2020), pp. 7998-8004
[28] J. Kukucka, S.M. Kassin, P.A. Zapf, I.E. Dror. Cognitive bias and blindness: a global survey of forensic science examiners, Journal of Applied Research in Memory and Cognition, 6 (2017), pp. 452-459
[29] I.E. Dror. The paradox of human expertise: why experts get it wrong, N. Kapur (Ed.), The Paradoxical Brain, Cambridge University Press, Cambridge, UK (2011), pp. 177-188
[30] C. van den Eeden, C. de Poot, P. van Koppen. The forensic confirmation bias: a comparison between experts and novices, J. Forensic Sci., 64 (1) (2019), pp. 120-126
[31] C. Huang, R. Bull. Applying Hierarchy of Expert Performance (HEP) to investigative interview evaluation: strengths, challenges and future directions, Psychiatr. Psychol. Law, 28 (2021)
[32] C. MacLean, I.E. Dror. The effect of contextual information on professional judgment: reliability and biasability of expert workplace safety inspectors, J. Saf. Res., 77 (2021), pp. 13-22
[33] E. Rassin. ‘Anyone who commits such a cruel crime, must be criminally irresponsible’: context effects in forensic psychological assessment, Psychiatr. Psychol. Law (2021)
[34] V. Meterko, G. Cooper. Cognitive biases in criminal case evaluation: a review of the research, J. Police Crim. Psychol. (2021)
[35] C. FitzGerald, S. Hurst. Implicit bias in healthcare professionals: a systematic review, BMC Med. Ethics, 18 (2017), pp. 1-18
[36] M.K. Goyal, N. Kuppermann, S.D. Cleary, S.J. Teach, J.M. Chamberlain. Racial disparities in pain management of children with appendicitis in emergency departments, JAMA Pediatr, 169 (11) (2015), pp. 996-1002
[37] I.E. Dror. Biases in forensic experts, Science, 360 (6386) (2018), p. 243
[38] S.M. Kassin, I.E. Dror, J. Kukucka. The forensic confirmation bias: problems, perspectives, and proposed solutions, Journal of Applied Research in Memory and Cognition, 2 (1) (2013), pp. 42-52
[39] NAS. National Research Council, Strengthening Forensic Science in the United States: a Path Forward, National Academy of Sciences (2009)
[40] PCAST, President’s Council of Advisors on science and Technology (PCAST), Report to the President – Forensic Science in Criminal Courts: Ensuring Validity of Feature-Comparison Methods, Office of Science and Technology, Washington, DC (2016)
[41] NCFS, National Commission on Forensic Science. Ensuring that Forensic Analysis Is Based upon Task-Relevant Information, National Commission on Forensic Science, Washington, DC (2016)
[42] Forensic Science Regulator. Cognitive bias effects relevant to forensic science examinations, available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/914259/217_FSR-G-217_Cognitive_bias_appendix_Issue_2.pdf
[43] ANZPAA, A Review of Contextual Bias in Forensic Science and its Potential Legal Implication, Australia New Zealand Policing Advisory Agency (2010)
[44] J. Kukucka, I.E. Dror. Human factors in forensic science: psychological causes of bias and error, D. DeMatteo, K.C. Scherr (Eds.), The Oxford Handbook of Psychology and Law, Oxford University Press, New York (2021)
[45] I.E. Dror, J. Melinek, J.L. Arden, J. Kukucka, S. Hawkins, J. Carter, D.S. Atherton. Cognitive bias in forensic pathology decisions, J. Forensic Sci., 66 (4) (2021)
[46] N. Sunde, I.E. Dror. A hierarchy of expert performance (HEP) applied to digital forensics: reliability and biasability in digital forensics decision making, Forensic Sci. Int.: Digit. Invest., 37 (2021)
[47] D.C. Murrie, M.T. Boccaccini, L.A. Guarnera, K.A. Rufino. Are forensic experts biased by the side that retained them? Psychol. Sci., 24 (10) (2013), pp. 1889-1897
[48] G. Langenburg. Addressing potential observer effects in forensic science: a perspective from a forensic scientist who uses linear sequential unmasking techniques, Aust. J. Forensic Sci., 49 (2017), pp. 548-563
[49] C.M. Thomas, E. Bertram, D. Johnson. The SBAR communication technique, Nurse Educat., 34 (4) (2009), pp. 176-180
[50] I. Wacogne, V. Diwakar. Handover and note-keeping: the SBAR approach, Clin. Risk, 16 (5) (2010), pp. 173-175
[51] M.A. Almazrouei, I.E. Dror, R. Morgan. The forensic disclosure model: what should be disclosed to, and by, forensic experts?, International Journal of Law, Crime and Justice, 59 (2019)
[52] I.E. Dror. Combating bias: the next step in fighting cognitive and psychological contamination, J. Forensic Sci., 57 (1) (2012), pp. 276-277
[53] D. Simon, Minimizing Error and Bias in Death Investigations, vol. 49, Seton Hall Law Rev. (2018), pp. 255-305
[54] CNN. Medical examiner: I “intentionally chose not” to view videos of Floyd’s death before conducting autopsy, April 9, 2021, available at https://edition.cnn.com/us/live-news/derek-chauvin-trial-04-09-21/h_03cda59afac6532a0fb8ed48244e44a0 (2021)

AI in Forensics: Between Technological Revolution and Human Challenges

By Yann CHOVORY, Engineer in AI Applied to Criminalistics (Institut Génétique Nantes Atlantique – IGNA). At a crime scene, every minute counts. Between identifying a fleeing suspect, preventing further offenses, and managing the time constraints of an investigation, case handlers are engaged in a genuine race against the clock. Fingerprints, gunshot residues, biological traces, video surveillance, digital data… all these clues must be collected and analyzed quickly, or the case risks collapsing for lack of usable evidence in time. Yet, overwhelmed by the ever-growing mass of data, forensic laboratories are struggling to keep pace.

Analyzing evidence with speed and accuracy

In this context, artificial intelligence (AI) is establishing itself as an indispensable accelerator. Capable of processing in a few hours what would take weeks to analyze manually, it optimizes the use of clues by speeding up their sorting and by detecting links imperceptible to the human eye. More than just a time-saver, it also improves the relevance of investigations: swiftly cross-referencing databases, spotting hidden patterns in phone call records, comparing DNA fragments with unmatched precision. AI thus acts as a tireless virtual analyst, reducing the risk of human error and offering new opportunities to forensic experts.

But this technological revolution does not come without friction. Between institutional scepticism and operational resistance, its integration into investigative practices remains a challenge. My professional journey, marked by a persistent effort to integrate AI into forensic science, illustrates this transformation and the obstacles it faces. From marginalized bioinformatician to AI project lead at IGNA, I have observed from within how this discipline, long grounded in traditional methods, is adapting, sometimes under pressure, to the era of big data.

The risk of human error is reduced and the reliability of identifications increased

Concrete examples: AI from the crime scene to the laboratory

AI is already making inroads in several areas of criminalistics, with promising results. For example, AFIS (Automated Fingerprint Identification System) fingerprint recognition systems now incorporate machine learning components to improve the matching of latent fingerprints: the risk of human error is reduced and the reliability of identifications increased [1]. Likewise, in ballistics, computer vision algorithms now automatically compare the striations on a projectile with the markings of known firearms, speeding up the work of the firearms expert. Tools are also emerging to interpret bloodstains at a scene: machine learning models can help reconstruct the trajectory of blood droplets and thus the dynamics of an assault or violent event [2]. These examples illustrate how AI is integrating into the forensic expert’s toolkit, from crime scene image analysis to the recognition of complex patterns.

But it is perhaps in forensic genetics that AI currently raises the greatest hopes. DNA analysis labs process thousands of genetic profiles and samples, with deadlines that can be critical, and AI offers considerable time savings and enhanced accuracy. As part of my research, I contributed to developing an in-house AI capable of interpreting 86 genetic profiles in just three minutes [3], a major advance when the analysis of a single complex profile may take hours. Since 2024, it has autonomously handled simple profiles, while complex genetic profiles are automatically routed to a human expert, ensuring effective collaboration between automation and expertise. The results observed are very encouraging: not only is the turnaround time for DNA results drastically reduced, but the error rate also falls thanks to the standardization introduced by the algorithm.

AI does not replace humans but complements them

Another promising advance lies in enhancing DNA-based facial composites. Currently, this technique allows certain physical features of an individual (such as eye color, hair color, or skin pigmentation) to be estimated from their genetic code, but it remains limited by the complexity of genetic interactions and by uncertainty in the predictions. AI could revolutionize this approach by using deep learning models trained on vast genetic and phenotypic databases, refining these predictions and generating more accurate sketches. Unlike classical methods, which rely on statistical probabilities, an AI model could analyze millions of genetic variants in a few seconds and identify subtle correlations that traditional approaches do not detect. This prospect opens the way to a significant improvement in the relevance of DNA sketches, facilitating suspect identification when no other usable clues are available. The Forenseek platform has explored current advances in this area, but AI has not yet been fully exploited to surpass existing methods [5]. Its integration could therefore constitute a major breakthrough in criminal investigations.

It is important to emphasize that in all these examples, AI does not replace the human but complements them. At the IRCGN (the French National Gendarmerie’s Criminal Research Institute), cited above, while the majority of routine, good-quality DNA profiles can be handled automatically, regular human quality control remains in place: every week, a technician randomly checks cases processed by the AI to ensure that no drift has occurred [3]. This human-machine collaboration is key to successful deployment, as the expertise of forensic specialists remains indispensable for validating and finely interpreting the results, especially in complex cases.


Algorithms Trained on Data: How AI “Learns” in Forensics

The impressive performance of AI in forensics relies on one crucial resource: data. For a machine learning algorithm to identify a fingerprint or interpret a DNA profile, it first needs to be trained on numerous examples. In practical terms, we provide it with representative datasets, each containing inputs (images, signals, genetic profiles, etc.) associated with an expected outcome (the identity of the correct suspect, the exact composition of the DNA profile, etc.). By analyzing thousands—or even millions—of these examples, the machine adjusts its internal parameters to best replicate the decisions made by human experts. This is known as supervised learning, since the AI learns from cases where the correct outcome is already known. For example, to train a model to recognize DNA profiles, we use data from solved cases where the expected result is clearly established.
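As a rough illustration of this supervised-learning loop, the following Python sketch trains a classifier on synthetic stand-in data using scikit-learn. The random features and labels are placeholders for real inputs (images, signals, genetic profiles) and expert-established outcomes; an actual forensic system would require domain-specific feature extraction and far more rigorous validation.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Each row stands in for one training example: features extracted from
    # a trace, with the outcome established by human experts as the label.
    X_train = rng.normal(size=(1000, 20))
    y_train = (X_train[:, :5].sum(axis=1) > 0).astype(int)

    # Training adjusts the model's internal parameters so that it
    # replicates the expert-provided outcomes as closely as possible.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # For a new, unseen case the model returns a decision and a probability.
    x_new = rng.normal(size=(1, 20))
    print(model.predict(x_new), model.predict_proba(x_new))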

An AI’s performance depends on the quality of the data that trains it.

The larger and more diverse the training dataset, the better the AI will be at detecting reliable and robust patterns. However, not all data is equal. It must be of high quality (e.g., properly labeled images, DNA profiles free from input errors) and cover a wide enough range of situations. If the system is biased by being exposed to only a narrow range of cases, it may fail when confronted with a slightly different scenario. In genetics, for instance, this means including profiles from various ethnic backgrounds, varying degrees of degradation, and complex mixture configurations so the algorithm can learn to handle all potential sources of variation.

Transparency in data composition is essential. Studies have shown that some forensic databases are demographically unbalanced—for example, the U.S. CODIS database contains an overrepresentation of profiles from African-American individuals compared to other groups [6]. A model naively trained on such data could inherit systemic biases and produce less reliable or less fair results for underrepresented populations. It is therefore crucial to monitor training data for bias and, if necessary, to correct it (e.g., through balanced sampling, augmentation of minority data) in order to achieve fair and equitable learning.
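Two standard corrections for such imbalance are sketched below, again in Python with scikit-learn and synthetic data: up-sampling the under-represented group, and re-weighting examples inversely to their group’s frequency. The group sizes and model choice are assumptions made for illustration only.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils import resample

    rng = np.random.default_rng(1)

    # Suppose group A is heavily over-represented relative to group B.
    X_a, y_a = rng.normal(size=(900, 10)), rng.integers(0, 2, size=900)
    X_b, y_b = rng.normal(size=(100, 10)), rng.integers(0, 2, size=100)

    # (a) Balanced sampling: up-sample the minority group with replacement
    # so that both groups contribute equally during training.
    X_b_up, y_b_up = resample(X_b, y_b, replace=True, n_samples=900, random_state=1)
    X_bal = np.vstack([X_a, X_b_up])
    y_bal = np.concatenate([y_a, y_b_up])
    model_resampled = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)

    # (b) Re-weighting: keep the data as-is, but give each example a weight
    # inversely proportional to its group's share of the dataset.
    X = np.vstack([X_a, X_b])
    y = np.concatenate([y_a, y_b])
    weights = np.concatenate([np.full(900, 1.0), np.full(100, 900 / 100)])
    model_weighted = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)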

  • Data Collection: gathering diverse and representative datasets
  • Data Preprocessing: cleaning and preparing the data for training
  • AI Training: training the algorithms on the prepared datasets
  • Data Validation: verifying the quality and diversity of the data
  • Bias Evaluation: identifying and correcting biases in the datasets

Technically, training an AI involves rigorous steps of cross-validation and performance measurement. We generally split data into three sets: one for training, another for validation during development (to adjust the parameters), and a final test set to objectively evaluate the model. Quantitative metrics such as accuracy, recall (sensitivity), or error curves make it possible to quantify how reliable the algorithm is on data it has never seen [6]. For example, one can check that the AI correctly identifies a large majority of perpetrators from traces while maintaining a low rate of false positives. Increasingly, we also integrate fairness and ethical criteria into these evaluations: performance is examined across demographic groups or testing conditions (gender, age, etc.), to ensure that no unacceptable bias remains [6]. Finally, compliance with legal constraints (such as the GDPR in Europe, which regulates the use of personal data) must be built in from the design phase of the system [6]. That may involve anonymizing data, limiting certain sensitive information, or providing procedures in case an ethical bias is detected.
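The evaluation routine described in this paragraph can be sketched as follows; the sixty/twenty/twenty split, the model, and the synthetic group labels are assumptions chosen purely for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, recall_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    X = rng.normal(size=(2000, 15))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    group = rng.choice(["group_1", "group_2"], size=2000)  # e.g., demographic group

    # 60% training / 20% validation (parameter tuning) / 20% held-out test.
    X_tmp, X_test, y_tmp, y_test, g_tmp, g_test = train_test_split(
        X, y, group, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X_tmp, y_tmp, test_size=0.25, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    y_pred = model.predict(X_test)

    print("accuracy:", accuracy_score(y_test, y_pred))
    print("recall:  ", recall_score(y_test, y_pred))

    # Fairness check: the same metric computed separately for each group,
    # to verify that no group is systematically less well served.
    for g in np.unique(g_test):
        mask = g_test == g
        print(g, "recall:", recall_score(y_test[mask], y_pred[mask]))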

Ultimately, an AI’s performance depends on the quality of the data that trains it. In the forensic field, that means algorithms “learn” from accumulated human expertise: every algorithmic decision draws on the experience of the hundreds of experts who provided examples or tuned parameters. That is both a strength – capitalizing on a vast knowledge base – and a responsibility: to carefully select, prepare, and control the data that will feed the artificial intelligence.

Technical and operational challenges for integrating AI into forensic science


While AI promises substantial gains, its concrete integration into the forensic field faces many challenges. It is not enough to train a model in a laboratory: one must also be able to use it within the constrained framework of a judicial investigation, with all the reliability requirements that entails. Among the main technical and organizational challenges are:

  • Access to data and infrastructure: Paradoxically, although AI requires large datasets to learn, it can be difficult to gather sufficient data in the specific forensic domain. DNA profiles, for example, are highly sensitive personal data, protected by law and stored in secure, sequestered databases. Obtaining datasets large enough to train an algorithm may require complex cooperation between agencies, or the generation of synthetic data to fill the gaps. Additionally, the computing tools must be capable of processing large volumes of data in reasonable time, which requires investment in hardware (servers, GPUs for deep learning) and specialized software. Some national initiatives are beginning to emerge to pool forensic data securely, but these remain works in progress.
  • Quality of annotations and bias: The effectiveness of AI learning depends on the quality of the annotations in the training datasets. In many forensic areas, establishing “ground truth” is not trivial. For example, to train an algorithm to recognize a face in surveillance video, each face must first be correctly identified by a human, which can be difficult if the image is blurry or partial. Similarly, labeling datasets of footprints, fibers, or fingerprints requires meticulous work by experts and sometimes involves subjectivity. If the training data include annotation errors or historical biases, the AI will reproduce them [6]. A common bias is the lack of demographic representativeness noted above, but there are others: if a weapon detection model is trained mainly on images of weapons indoors, it may perform poorly at detecting a weapon outdoors, in the rain, and so on. The quality and diversity of annotated data are therefore a major technical issue. This implies establishing rigorous data collection and annotation protocols (ideally standardized at the international level), as well as ongoing monitoring to detect model drift (overfitting to certain cases, performance degradation over time, etc.; a minimal monitoring sketch follows this list). Validation itself relies on experimental studies comparing AI performance to that of human experts. However, the complexity of certification and procurement procedures often slows adoption, delaying the deployment of new tools in forensic science by several years.
  • Understanding and acceptance by judicial actors: Introducing artificial intelligence into the judicial process inevitably raises the question of trust. An investigator or a laboratory technician, trained in conventional methods, must learn to use and interpret the results provided by AI. This requires training and a gradual cultural shift so that the tool becomes an ally rather than an incomprehensible “black box.” More broadly, the judges, attorneys, and jurors who will have to discuss this evidence must also grasp its principles. Yet explaining the inner workings of a neural network, or the statistical meaning of a similarity score, is far from simple. We sometimes observe misunderstanding or suspicion from certain judicial actors toward these algorithmic methods [6]. If a judge does not understand how a conclusion was reached, they may be inclined to reject it or assign it less weight, out of caution. Similarly, a defence lawyer will legitimately scrutinize the weaknesses of a tool they do not know, which may lead to judicial debates over the validity of the AI. A major challenge is thus to make AI explainable (the “XAI” concept: eXplainable Artificial Intelligence), or at least to present its results in a format that is comprehensible and pedagogically acceptable to a court. Without this, the integration of AI risks facing resistance or sparking controversy at trial, limiting its practical contribution.
  • Regulatory framework and data protection: Finally, the forensic sciences operate within a strict legal framework, notably regarding personal data (DNA profiles, biometric data, etc.) and criminal procedure. The use of AI must comply with these regulations. In France, the CNIL (Commission Nationale de l’Informatique et des Libertés) keeps watch and can impose restrictions if an algorithmic processing harms privacy: training an AI on nominal DNA profiles without a legal basis, for example, would be inconceivable. Innovation must therefore remain within legal boundaries, which imposes constraints from the design phase of projects. Another issue concerns the trade secrecy surrounding certain algorithms in judicial contexts: if a vendor refuses to disclose the internal workings of its software for intellectual property reasons, how can the defence or the judge ensure its reliability? Recent cases have shown defendants convicted on the basis of proprietary software (e.g., DNA analysis) without the defence being able to examine the source code used [7]. Such situations raise issues of transparency and the rights of the defence. In the United States, a proposed law, the Justice in Forensic Algorithms Act, aims precisely to ensure that trade secrecy cannot prevent experts from examining the algorithms used in forensics, in order to guarantee fairness at trial. This underlines the necessity of adapting regulatory frameworks to these new technologies.

Lack of cooperation slows the development of powerful tools and limits their adoption in the field.

  • Another, more structural obstacle lies in the difficulty of integrating hybrid profiles within forensic institutions, at least in France. Today, competitive examinations and recruitment often remain compartmentalized between specialties, limiting the emergence of experts with dual expertise. In forensic police services, for instance, entrance exams for technicians and engineers are divided into distinct specialties such as biology or computer science, with no pathway to recognize combined expertise in both fields. This institutional rigidity slows the integration of professionals capable of bridging domains and fully exploiting the potential of AI in criminalistics. Yet current technological advances show that the analysis of biological traces increasingly relies on advanced digital tools. Faced with this evolution, greater flexibility in the recruitment and training of forensic experts will be necessary to meet tomorrow’s challenges.

AI in forensics must not become a matter of competition or prestige among laboratories, but a tool put at the service of justice and truth, for the benefit of investigators and victims.

  • A further major barrier to innovation in forensic science is the compartmentalization of efforts among different stakeholders, who often work in parallel on identical problems without pooling their advances. This lack of cooperation slows the development of effective tools and limits their adoption in the field. By sharing our resources, whether databases, methodologies, or algorithms, we could accelerate the production deployment of AI solutions and guarantee continuous improvement based on collective expertise. My experience across different French laboratories (the Lyon Scientific Police Laboratory (Service National de Police Scientifique – SNPS), the Institut de Recherche Criminelle de la Gendarmerie Nationale (IRCGN), and now the Institut Génétique Nantes Atlantique (IGNA)) has shown me how much this fragmentation hampers progress, even though we pursue a common goal: improving the resolution of investigations. This is why it is essential to promote open-source development where possible and to create platforms for collaboration among public and judicial entities. AI in forensics must not be a matter of competition or prestige among laboratories, but a tool in the service of justice and truth, for the benefit of investigators and victims alike.
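As announced above, here is a minimal, hypothetical sketch of the kind of drift monitoring mentioned in the list: a rolling human/AI agreement rate computed from weekly quality-control batches, with an alert when it drops below a tolerance threshold. The window size, threshold, and batch figures are invented for illustration.

    from collections import deque

    WINDOW = 8          # number of weekly QC batches kept in the rolling window
    THRESHOLD = 0.95    # minimum acceptable human/AI agreement rate

    history = deque(maxlen=WINDOW)

    def record_qc_batch(n_checked: int, n_agreed: int) -> None:
        """Record one weekly spot-check batch and flag possible drift."""
        history.append(n_agreed / n_checked)
        rolling = sum(history) / len(history)
        if rolling < THRESHOLD:
            print(f"ALERT: rolling agreement {rolling:.2%} is below "
                  f"{THRESHOLD:.0%}; review and possible retraining needed")

    # Example: a technician spot-checks 40 AI-processed cases each week.
    for agreed in [40, 39, 40, 37, 35, 34]:
        record_qc_batch(40, agreed)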

The challenges discussed above all have technical dimensions, but they are closely intertwined with fundamental ethical and legal questions. From an ethical standpoint, the absolute priority is to avoid injustice arising from the use of AI. We must prevent at all costs a poorly designed algorithm from leading to someone’s wrongful indictment or, conversely, to the release of a guilty party. This involves mastering biases (to avoid discrimination against certain groups), ensuring transparency (so that every party in a trial can understand and challenge algorithmic evidence), and assigning accountability for decisions. Indeed, who is responsible if an AI makes an error? The expert who misused it, the software developer, or no one, because “the machine made a mistake”? Such ambiguity is unacceptable in justice: it is essential to always keep human expertise in the loop, so that any final decision, whether to accuse or exonerate, is based on a human evaluation informed by AI, and not on the opaque verdict of an automated system.

On the legal side, the landscape is evolving to regulate the use of AI. The European Union, in particular, is finalizing an AI Regulation (AI Act) which will be the world’s first legislation establishing a framework for the development, commercialization, and use of artificial intelligence systems [8]. Its goal is to minimize risks to safety and fundamental rights by imposing obligations depending on the level of risk of the application (and forensic or criminal justice applications will undoubtedly be categorized among the most sensitive). In France, the CNIL has published recommendations emphasizing that innovation can be reconciled with respect for individual rights during the development of AI solutions [9]. This involves, for example, compliance with the GDPR, purpose limitation (i.e., training a model only for legitimate and clearly defined objectives), proportionality in data collection, and prior impact assessments for any system likely to significantly affect individuals. These safeguards aim to ensure that enthusiasm for AI does not come at the expense of the fundamental principles of justice and privacy.

Encouraging Innovation While Demanding Scientific Validation and Transparency

A delicate balance must therefore be struck between technological innovation and its regulatory framework. On the one hand, overly restricting experimentation with and adoption of AI in forensics could deprive investigators of tools potentially decisive for solving complex cases. On the other, leaving the field unregulated and unchecked would risk judicial errors and violations of rights. The solution likely lies in a measured approach: encouraging innovation while demanding solid scientific validation and transparency of methods. Ethics committees and independent experts can be involved to audit algorithms and to verify that they comply with standards and do not replicate problematic biases. Furthermore, legal professionals must be informed and trained on these new technologies so that they can meaningfully debate their probative value in court. A judge trained in the basic concepts of AI will be better placed to weigh the value, and the limitations, of evidence derived from an algorithm.

Conclusion: The Future of Forensics in the AI Era

Artificial intelligence is set to deeply transform forensics, offering investigators analysis tools that are faster, more accurate, and capable of handling volumes of data once considered inaccessible. Whether it is sifting through gigabytes of digital information, comparing latent traces with improved reliability, or untangling complex DNA profiles in a matter of minutes, AI opens new horizons for solving investigations more efficiently.

But this technological leap comes with crucial challenges. Learning techniques, the quality of databases, algorithmic bias, the transparency of decisions, the regulatory framework: all of these will determine whether AI can truly strengthen justice without undermining it. At a time when public trust in digital tools is more than ever under scrutiny, it is imperative to integrate these innovations with rigor and responsibility.

The future of AI in forensics will not be a confrontation between machine and human, but a collaboration in which human expertise remains central. Technology may help us see faster and farther, but interpretation, judgment and decision-making will remain in the hands of forensic experts and the judicial authorities. Thus, the real question may not be how far AI can go in forensic science, but how we will frame it to ensure that it guarantees ethical and equitable justice. Will we be able to harness its power while preserving the very foundations of a fair trial and the right to a defence?

The revolution is underway. It is now up to us to make it progress, not drift.

Bibliography

[1]: Océane DUBOUST. L’IA peut-elle aider la police scientifique à trouver des similitudes dans les empreintes digitales ? Euronews, 12/01/2024. Available at: https://fr.euronews.com/next/2024/01/12/lia-peut-elle-aider-la-police-scientifique-a-trouver-des-similitudes-dans-les-empreintes-d#:~:text=,il [accessed 15/03/2025]
[2]: Muhammad Arjamand et al. The Role of Artificial Intelligence in Forensic Science: Transforming Investigations through Technology. International Journal of Multidisciplinary Research and Publications, 7(5), pp. 67-70, 2024. Available at: http://ijmrap.com/ [accessed 15/03/2025]
[3]: Gendarmerie Nationale. Kit universel, puce RFID, IA : le PJGN à la pointe de la technologie sur l’ADN. Updated 22/01/2025. Available at: https://www.gendarmerie.interieur.gouv.fr/pjgn/recherche-et-innovation/kit-universel-puce-rfid-ia-le-pjgn-a-la-pointe-de-la-technologie-sur-l-adn [accessed 15/03/2025]
[4]: Michelle TAYLOR. EXCLUSIVE: Brand New Deterministic Software Can Deconvolute a DNA Mixture in Seconds. Forensic Magazine, 29/03/2022. Available at: https://www.forensicmag.com [accessed 15/03/2025]
[5]: Sébastien AGUILAR. L’ADN à l’origine des portraits-robot ! Forenseek, 05/01/2023. Available at: https://www.forenseek.fr/adn-a-l-origine-des-portraits-robot/ [accessed 15/03/2025]
[6]: Max M. Houck, Ph.D. CSI/AI: The Potential for Artificial Intelligence in Forensic Science. iShine News, 29/10/2024. Available at: https://www.ishinews.com/csi-ai-the-potential-for-artificial-intelligence-in-forensic-science/ [accessed 15/03/2025]
[7]: Mark Takano. Black box algorithms’ use in criminal justice system tackled by bill reintroduced by Reps. Takano and Evans. Takano House, 15/02/2024. Available at: https://takano.house.gov/newsroom/press-releases/black-box-algorithms-use-in-criminal-justice-system-tackled-by-bill-reintroduced-by-reps-takano-and-evans [accessed 15/03/2025]
[8]: Mon Expert RGPD. Artificial Intelligence Act : La CNIL répond aux premières questions. Available at: https://monexpertrgpd.com [accessed 15/03/2025]
[9]: CNIL. Les fiches pratiques IA. Available at: https://www.cnil.fr [accessed 15/03/2025]

Definitions:

  1. GPU (Graphics Processing Unit)
A GPU is a specialized processor designed to perform massively parallel computations. Originally developed for rendering graphics, it is now widely used in artificial intelligence applications, particularly for training deep learning models. Unlike CPUs (central processing units), which are optimized for sequential, general-purpose tasks, GPUs contain thousands of cores optimized to execute numerous operations simultaneously on large datasets.
  2. Machine Learning
    Machine learning is a branch of artificial intelligence that enables computers to learn from data without being explicitly programmed. It relies on algorithms capable of detecting patterns, making predictions, and improving performance through experience.
  3. Deep Learning
    Deep learning is a subfield of machine learning that uses artificial neural networks composed of multiple layers to model complex data representations. Inspired by the human brain, it allows AI systems to learn from large volumes of data and enhance their performance over time. Deep learning is especially effective for processing images, speech, text, and complex signals, with applications in computer vision, speech recognition, forensic science, and cybersecurity.