Tag Archives: Scientific Advances


Identifying an individual without DNA: Hair shaft proteomic analysis

When a head or body hair is recovered from a crime scene without its root, no conventional genetic analysis can be performed. Lacking nuclear DNA, this biological material has long offered only limited evidentiary value: it could neither support the formal identification of an individual nor allow comparison with national DNA databases. In recent years, however, a major shift has occurred with hair proteomics, which exploits the proteins of the hair shaft to reveal individualizing markers. Thanks to advances in mass spectrometry, this approach now provides a new pathway for identification, particularly useful in cold cases or in situations where DNA is absent or unusable.

Biological evidence long underused

Hairs recovered from crime scenes are frequently rootless, preventing any STR (Short Tandem Repeat) analysis. Traditional alternatives (morphological examination or mitochondrial DNA analysis) offer only limited discriminating power [1][9]. In many cases, these items were classified as “weak traces” with insufficient probative value. Yet a hair is biologically rich: it is composed mainly of keratins and other structural proteins that exhibit remarkable stability and resistance to heat, aging, and environmental degradation [1]. This robustness has led several research teams to explore another avenue: instead of seeking nuclear DNA where it is absent or degraded, why not rely directly on proteins, some of which vary between individuals?

Figure 1: Structure of a hair shaft. Source: cosmeticsdesign.com

From DNA to proteomics

This technological shift relies on high-resolution mass spectrometry (HRMS) combined with bioinformatic analysis of protein polymorphisms. Recent work has confirmed that hundreds of proteins can be identified in a single hair shaft. Among them, certain markers, single amino acid polymorphisms (SAPs), directly reflect individual genetic variation [2]. A major study demonstrated that a single individual presents, on average, more than 600 detectable protein groups and more than 160 polymorphic markers, yielding random match probabilities (RMP) on the order of 10⁻¹⁴ [2]. This protein signature therefore offers strong discriminating power, in some cases comparable to the informational value of mitochondrial DNA, while avoiding several well-known limitations of the latter [10].
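
To give a sense of how such a figure is obtained, the short sketch below multiplies hypothetical population frequencies of detected protein markers into a random match probability, under the simplifying assumption that the markers are independent. The frequencies are invented for illustration; this is not the statistical model used in [2].

```python
from math import prod

# Hypothetical population frequencies of genetically variant peptides (SAPs)
# detected in a hair shaft. Real casework relies on validated frequency
# databases; these values are illustrative only.
marker_frequencies = [0.32, 0.18, 0.45, 0.27, 0.09, 0.51, 0.22, 0.38]

# Assuming independent markers, the random match probability (RMP) is the
# product of the individual frequencies: the chance that an unrelated person
# would carry the same combination by coincidence.
rmp = prod(marker_frequencies)

print(f"RMP for {len(marker_frequencies)} markers: {rmp:.2e}")
# The product shrinks rapidly as more informative markers are detected, which
# is why large marker panels can yield extremely small match probabilities.
```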

Technical obstacles related to protein extraction, made difficult by the highly cross-linked structure of keratin, have also been partially overcome. Protocols combining controlled heat and reducing agents now allow more efficient and reproducible extraction [3]. These advances make the approach more mature and more suitable for forensic practice.

Figure 2: Hair proteomic analysis workflow. Proteins extracted from the hair shaft are fragmented and then analyzed by mass spectrometry to identify individual peptide variations. Source: [2] Parker, G. et al., Deep Coverage Proteome Analysis of Human Hair Shafts, Journal of Proteome Research, 2022.

Concrete opportunities for investigations

Hair-shaft proteomics significantly enhances the usefulness of hair traces in investigations. In cold cases, hairs preserved for decades can now provide individualizing information, even when nuclear DNA was unusable at the time of the original analysis [5]. In extreme contexts (fire scenes, carbonized remains, or highly degraded traces), proteins often persist where DNA has degraded, making them particularly valuable [5][6].

In recent investigations (sexual assaults, abductions, violent incidents, close-contact events), head or body hairs without roots recovered from clothing, vehicles, or victims can now contribute to establishing associations or excluding individuals. Even when it does not yield a formal identification, the protein signature may narrow the suspect pool, confirm or refute an investigative hypothesis, and support evidential assessments presented to judicial authorities [4]. From a legal standpoint, this method must be understood as a probabilistic approach, similar in principle to mitochondrial DNA analysis but based on more stable markers [7]. When integrated carefully, it may become decisive in investigative orientations, the re-examination of older cases, or situations previously left unresolved due to lack of nuclear DNA or usable fingerprints.

Technical limits and challenges

Despite its potential, hair-shaft proteomics remains a technique still in maturation. The first limitation lies in the protocols themselves: protein extraction remains challenging due to the resistant structure of the hair shaft, and full standardization has not yet been achieved [3]. A second challenge is the creation of sufficiently large population databases to compute robust Random Match Probabilities [4]. Inter-laboratory validation, essential before any use in judicial contexts, requires testing on hairs from individuals of different populations, ages, environments, and storage conditions [4][6].

Legal integration also presents challenges. Judges and attorneys will need clear explanations of this emerging probabilistic evidence. Classical admissibility requirements (reliability, reproducibility, methodological transparency, statistical robustness) apply fully [7]. To date, no international standard formally regulates the procedure, although preliminary work is underway [8].

Towards standardization and operational integration?

The outlook for the coming years is particularly promising. Several centres, notably Murdoch University and ChemCentre near Perth, Australia, are working on protocol standardization and the development of diverse reference databases [5][6]. Advances in mass spectrometry and bioinformatic tools now make possible a partial automation of analyses and more seamless integration into routine forensic laboratory workflows. For investigators, police officers, magistrates and forensic experts, this evolution requires adapting collection and preservation practices. From now on, any rootless hair should be systematically collected and retained. Even very small or very old samples may contain an exploitable protein signature. This shift in perspective could transform the re-evaluation of cold cases, fire-scene examinations, and the most complex investigations.

Conclusion

Hair-shaft proteomics represents one of the most promising advances of the coming years in forensic identification. By restoring value to traces long considered underexploited, it offers a reliable and robust alternative when DNA is absent, degraded or otherwise unusable. Although judicial integration still requires validation, standardization and appropriate communication, early results clearly indicate that this approach could play a decisive role in complex investigations, degraded scenes and unresolved cases.

References:

[1] Adav, S.S., Human Hair Proteomics: An Overview, Science & Justice, 2021.
[2] Parker, G. et al., Deep Coverage Proteome Analysis of Human Hair Shafts, Journal of Proteome Research, 2022.
[3] Liu, Y. et al., Individual-specific proteomic markers from protein amino acid polymorphisms, Proteome Science, 2024.
[4] Smith, R.N. et al., Forensic Proteomics: Potential and Challenges, Proteomics, 2023.
[5] Murdoch University – Western Australia, Hair Protein Identification Project (2024–2025).
[6] ChemCentre (Western Australia Government), World-first Forensic Proteomics Research Program, 2024.
[7] Henry, R. & Stoyan, N., The Admissibility of Proteomic Evidence in Court, SSRN, 2020.
[8] ISO / ASTM – Guidelines on Forensic Biology & Novel Analytical Methods, 2022–2024.
[9] Anslinger, K., Hair Evidence in Forensic Science, Wiley, 2019.
[10] Budowle, B., Mitochondrial DNA in Forensic Identification, Elsevier, 2018.

Towards a revolution in post-mortem forensic imaging

How can an internal lesion go unnoticed during an autopsy and yet have caused death? In forensic medicine, understanding internal trauma is essential to reconstructing the sequence of a violent event. Among such injuries, those involving the vertebral artery present a major challenge. Subtle and often concealed by bone structures, they frequently escape traditional examination methods. A recent technological breakthrough in forensic imaging offers a promising approach: combining fluoroscopy and micro-computed tomography (micro-CT) to analyze post-mortem vascular injuries with unprecedented precision.

Key artery, difficult access

The vertebral artery supplies vital regions of the nervous system, including the brainstem, cerebellum, and posterior areas of the brain. Even a minor injury can trigger a stroke, a rapid neurological collapse, or sudden death. Its anatomical pathway, deeply embedded within the cervical spine, makes it particularly difficult to explore. In a forensic context, a lesion affecting this artery represents a critical clue when analyzing a penetrating neck wound, often revealing a potentially lethal intent.

Forensic imaging to observe real-time blood flow


Micro-CT: diving into the heart of the lesion

To overcome this limitation, researchers have turned to micro-computed tomography (micro-CT), a very high-resolution imaging technique. The sample is rotated during the acquisition of thousands of radiographic images, which are then reconstructed into a digital 3D model. This process reveals otherwise invisible details such as arterial wall tears, thrombi, dissections, or partial ruptures. These reconstructions allow for virtual dissections from multiple angles without altering the body, ensuring a high level of reproducibility, an invaluable feature in forensic investigations.
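
To illustrate the underlying principle only (many projections acquired while the sample rotates, then combined into a reconstruction), the following toy example uses scikit-image to simulate and reconstruct a single 2D slice by filtered back-projection. It is a minimal sketch of the general tomography workflow, not the authors' micro-CT pipeline.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# A synthetic 2D "slice" standing in for one cross-section of the specimen.
image = rescale(shepp_logan_phantom(), scale=0.4, mode="reflect")

# Simulate the scan: one projection (radiograph) per rotation angle.
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)

# Reconstruct the slice from its projections (filtered back-projection).
reconstruction = iradon(sinogram, theta=theta)

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"Reconstruction RMS error: {rms_error:.4f}")
```

Stacking thousands of such reconstructed slices at micrometre resolution is what yields the explorable 3D model described above.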

A standardized method serving both justice and medicine

The protocol developed by Secco and colleagues relies on ex situ imaging, meaning that the examination is performed on an artery extracted from the body. This approach overcomes several obstacles, such as advanced decomposition, previous surgery, complex trauma, or movement artifacts. With the injection of a contrast agent, the vascular network becomes clearly visualized, allowing for precise and stable documentation. These high-quality images serve as robust evidence admissible in court and represent a valuable resource for medical teams involved in planning neurosurgical or trauma-related procedures.

An educational and scientific tool

Beyond their diagnostic value, 3D reconstructions and fluoroscopic videos serve as outstanding educational tools. They allow for a strikingly realistic visualization of injury mechanisms and a deeper understanding of the biomechanics of penetrating trauma. This refined comprehension of the forces at play helps not only researchers characterize vascular lesions, but also engineers design more effective protective equipment and forensic experts accurately reconstruct the circumstances surrounding a violent act.

Towards a new standard in forensic medicine

Born from close collaboration between radiologists, pathologists, engineers, and chemists, this imaging protocol represents a major step forward in forensic practice. The growing accessibility of micro-CT equipment suggests its forthcoming integration into routine autopsies. With the continuous improvement of imaging technologies in terms of resolution, speed, and multi-contrast capacity, the prospect of non-invasive post-mortem vascular examinations is becoming increasingly realistic. In the long term, this method could be extended to other arterial regions (carotid, subclavian, intracranial), thereby deepening our overall understanding of vascular trauma.

Conclusion

At the crossroads of technology and forensic science, this approach combines precision, rigor, and innovation. By providing a three-dimensional and reproducible reading of internal injuries, it transforms the way stab wounds involving the vertebral artery are analyzed. This is a major advancement, serving both judicial truth and scientific knowledge, and it paves the way for a new generation of autopsies that are finer, more reliable, and better documented.

References:

Bioengineer.org. (2024). Detecting Vertebral Artery Stab Wounds with Imaging. Read here.

Secco, L., Franchetti, G., Viel, G. et al. Ex-situ identification of vertebral artery injuries from stab wounds through contrast-enhanced fluoroscopy and micro-CT. Int J Legal Med (2025). Read here.

Medscape. (2024). Vertebral Artery Anatomy. Read here.

Reconstruction of torn documents

When a document has been torn or shredded, the investigator is faced with a puzzle that has lost its box, its reference image, and sometimes even a portion of its pieces. Yet, the information contained within those fragments can alter the course of a case: a single figure in a contract, a name in a table, or a handwritten note in the margin. The question is therefore not merely “can it be reconstructed?”, but rather “can it be done reliably, traceably, and fast enough to be of use to the investigation?”

Why reconstruction is challenging

In forensic practice, fragments are rarely clean or uniform. They vary in shape, size, paper texture, ink density, and orientation. When several documents have been destroyed together, the fragments intermingle and create visual ambiguities: two edges may appear to fit when they do not, two different fonts may look similar, and uniform areas, blank backgrounds or low-detail photographs, provide almost no clues. So-called edge-matching approaches, which seek continuities along borders and patterns, work fairly well for small sets. But as the number of fragments grows, the number of possible combinations increases exponentially, and these methods struggle to discriminate between competing hypotheses.

The idea: harnessing randomness to explore better

Stochastic optimization offers an alternative way to approach the problem. Rather than attempting to reach the perfect configuration immediately, the algorithm generates plausible assemblies, evaluates them, and occasionally accepts “imperfect” choices in order to continue exploring the solution space. This probabilistic strategy continuously alternates between two complementary phases: exploration, which searches new pathways to avoid dead ends, and exploitation, which consolidates promising insights already discovered. In practice, each proposed assembly is assigned a score based on visual continuity (alignment of letters, extension of strokes, texture and color matching). If coherence improves, the hypothesis is adopted; if it deteriorates, it may still be tolerated for a while to test whether it leads to a better configuration later on. This flexible logic distinguishes the method from more rigid approaches such as simulated annealing or certain genetic algorithms. It adapts better to the real variability of documents and fragment mixtures, and it leaves room for light operator interaction when needed.
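
The accept-sometimes logic described here can be sketched in a few lines. The skeleton below is a generic stochastic search (closer in spirit to simulated annealing than to the authors' exact method) with a placeholder scoring function: it swaps fragments at random, keeps improvements, and occasionally tolerates a worse arrangement so the search can escape dead ends.

```python
import math
import random

def coherence_score(arrangement):
    """Placeholder score: a real system would measure edge continuity,
    stroke alignment and texture/colour matching between neighbours."""
    return -sum(abs(a - b) for a, b in zip(arrangement, arrangement[1:]))

def propose_move(arrangement):
    """Swap two fragments at random to produce a neighbouring hypothesis."""
    candidate = arrangement[:]
    i, j = random.sample(range(len(candidate)), 2)
    candidate[i], candidate[j] = candidate[j], candidate[i]
    return candidate

def stochastic_reconstruction(fragments, iterations=20_000, tolerance=1.0):
    current = best = fragments[:]
    for step in range(iterations):
        candidate = propose_move(current)
        delta = coherence_score(candidate) - coherence_score(current)
        # Exploration vs. exploitation: always accept improvements, and accept
        # degradations with a probability that shrinks as the search advances.
        temperature = tolerance * (1 - step / iterations) + 1e-9
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            current = candidate
            if coherence_score(current) > coherence_score(best):
                best = current[:]
    return best

# Toy demo: ten numbered "strips" whose correct order is simply 0..9.
print(stochastic_reconstruction(random.sample(range(10), 10)))
```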

What the experiments show

The authors report large-scale tests conducted on more than a thousand heterogeneous torn documents (office printouts, handwritten pages, images, and mixed-content sheets). The results converge toward an observation intuitive to any expert: the richer a document is in content (dense text, grids, or patterns), the faster and more accurate the reconstruction process becomes. Conversely, uniform areas require more iterations because they provide few visual anchor points. In the most challenging cases, occasional operator input, such as confirming a match or indicating the probable orientation of a fragment, is sufficient to guide the algorithm without compromising overall reproducibility.

Validation through a benchmark challenge

To evaluate the method under conditions close to real-world scenarios, the researchers tested it on fragment datasets inspired by the DARPA Shredder Challenge, a well-known benchmark in which participants attempt to reconstruct documents shredded into very narrow strips or confetti-like pieces. The method successfully reconstructed coherent and readable pages where other techniques either failed or stalled. This is more than an academic result: it demonstrates that the algorithm performs robustly when faced with investigative constraints, including numerous, intermingled, and occasionally damaged fragments resulting from handling or scanning.

Relevance to forensic practice

Beyond raw performance, the value of such a method lies in its integration into a demonstrable forensic workflow. The initial reconstruction phase, typically the most time-consuming, can be largely automated, freeing analysts to focus on content examination. More importantly, the approach lends itself to precise traceability: a log of tested hypotheses, retained parameters, acceptance thresholds, and intermediate captures. These records help document the chain of custody, justify technical choices before a magistrate, and, when necessary, reproduce the procedure in full transparency.

In laboratory settings, integration is facilitated by adopting rigorous acquisition practices such as high-resolution scanning, neutral backgrounds, color calibration, and systematic archiving of source files. A preliminary physical sorting of fragments, by paper weight, hue, or the presence of images, also enhances robustness by reducing ambiguities at the input stage.

Limitations and avenues for improvement

As with any optimization method, performance depends heavily on proper parameter tuning. Thresholds that are too strict will hinder exploration, while overly permissive criteria make it erratic. Highly mixed batches, comprising visually similar documents with identical layouts or fonts, remain difficult and may require occasional human intervention to prevent mismatches. Micro-fragments produced by high-grade shredders represent another major challenge: the smaller the visible surface, the fewer cues the algorithm can exploit. Future progress is expected in improving robustness against scanning artifacts, automating pre-sorting steps, and, more broadly, establishing standardized performance metrics (such as edge-matching accuracy, page completeness, and computation time) to facilitate fair comparison between methods.

Conclusion

Reconstructing torn documents is no longer solely a matter of expert patience and intuition. Stochastic optimization provides an exploration engine capable of handling large volumes, managing uncertainty, and producing usable assemblies. By combining automation, traceability, and expert supervision when needed, this approach transforms an “impossible puzzle” into a systematic procedure, serving the purposes of material evidence, intelligence gathering, and the preservation of damaged archives.


Touch DNA: a new approach to better understand the traces left behind

In criminal investigations, DNA analysis plays a central role in identifying the perpetrators of crimes and offenses. However, not all biological traces provide the same type of information. Touch DNA—deposited involuntarily on a surface after simple contact—remains challenging to interpret for forensic experts.

Why do some individuals leave more DNA than others? A recent study conducted by researchers at Flinders University in Australia proposes an innovative method to objectively assess this variability. By examining the individual propensity to shed skin cells, the team opens new perspectives in forensic genetics and the interpretation of biological traces at crime scenes.

Genuine interindividual variability

Some individuals, described as “good shedders,” naturally deposit large quantities of skin cells on objects they handle. Others, by contrast, leave only minimal traces. This difference, long observed by forensic biologists, complicates the interpretation of DNA results, particularly when assessing the likelihood of direct contact between a person and an object.

Until now, reliably and reproducibly quantifying this variability has been difficult. The Australian study specifically addresses this gap, providing a rigorous scientific protocol.

A simple and reproducible measurement protocol

The researchers developed a protocol based on a series of controlled contacts carried out by 100 participants, each asked to touch a standardized surface. The deposited cells were then:

  • Stained with a fluorescent marker,
  • Counted using microscopy,
  • Subjected to genetic analysis to confirm the presence of recoverable DNA.

The results showed that, for 98 out of 100 participants, the level of cell deposition was stable and reproducible over time. This protocol allows individuals to be classified into three categories: high, moderate, or low skin cell shedders.
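
As a minimal sketch of how such a classification could be operationalised, the snippet below averages repeated cell counts per donor and bins them into the three categories. The thresholds and counts are invented for illustration and are not those of the Flinders protocol.

```python
from statistics import mean

# Illustrative thresholds (cells per standardized touch); a validated protocol
# would derive its cut-offs from population data.
HIGH_THRESHOLD = 400
LOW_THRESHOLD = 100

def classify_shedder(cell_counts):
    """Average repeated controlled-touch counts and assign a shedder category."""
    average = mean(cell_counts)
    if average >= HIGH_THRESHOLD:
        return "high shedder"
    if average <= LOW_THRESHOLD:
        return "low shedder"
    return "moderate shedder"

donors = {
    "donor_A": [620, 580, 710],  # consistently dense deposits
    "donor_B": [150, 220, 180],  # intermediate deposits
    "donor_C": [40, 55, 30],     # sparse deposits
}

for donor, counts in donors.items():
    print(donor, "->", classify_shedder(counts))
```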

A tool to better contextualize touch DNA evidence

The value of this method extends beyond biology. It may serve as a tool for judicial contextualization. For instance, a suspect identified as a high shedder could account for the abundant presence of their DNA on an object without having taken part in the offense. Conversely, the absence of DNA from a low shedder does not exclude the possibility of contact.

This information could be incorporated into likelihood ratio calculations used in DNA interpretation, thereby strengthening the robustness of forensic assessments.
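
As a toy illustration of how shedder status might enter such a calculation (the probabilities and the deliberately simplistic model below are hypothetical, not a published framework), one can compare the probability of recovering an abundant touch-DNA deposit under two competing propositions, conditioned on the donor's shedder category:

```python
# Hypothetical probabilities of recovering an abundant touch-DNA deposit,
# conditioned on shedder status, under two propositions:
#   H1: the person handled the object directly.
#   H2: their DNA arrived through incidental or secondary transfer.
# All numbers are invented for illustration.
P_ABUNDANT_GIVEN_CONTACT = {"high": 0.90, "moderate": 0.60, "low": 0.20}
P_ABUNDANT_GIVEN_TRANSFER = {"high": 0.30, "moderate": 0.10, "low": 0.02}

def likelihood_ratio(shedder_status: str) -> float:
    """LR = P(abundant DNA | H1, status) / P(abundant DNA | H2, status)."""
    return (P_ABUNDANT_GIVEN_CONTACT[shedder_status]
            / P_ABUNDANT_GIVEN_TRANSFER[shedder_status])

for status in ("high", "moderate", "low"):
    print(f"{status} shedder: LR = {likelihood_ratio(status):.0f}")
```

In this made-up example, abundant DNA is less probative when it comes from a high shedder (LR 3) than from a low shedder (LR 10), which mirrors the contextualization argument above.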

Future perspectives for forensic science

The proposed method has several advantages: it is inexpensive, easy to implement in the laboratory, and could be adapted to various objects and realistic conditions (different surfaces, durations of contact, humidity). Further validation studies are still required before widespread adoption. Ultimately, however, this approach could be integrated into routine biological trace analysis, providing valuable support to magistrates and investigators in evaluating the probative value of DNA evidence.

References

  • Petcharoen P., Nolan M., Kirkbride K.P., Linacre A. (2024). Shedding more light on shedders. Forensic Science International: Genetics, 72, 103065, read here.
  • Flinders University. (2024, August 22). Heavy skin shedders revealed: New forensic DNA test could boost crime scene investigations. ScienceDaily, read here.

Uncovering the meaning of suspicious injuries in cases of child abuse

Objectively identifying a cigarette burn in a forensic context is difficult, particularly when the victim cannot testify. Such lesions are of particular relevance in cases of suspected child abuse. Until now, diagnoses have relied mainly on the morphological appearance of the injuries, with no standardized tool to support a conclusion based on material evidence.

A striking clinical case of child abuse

A team from the Laboratory of Histological Pathology and Forensic Microbiology at the University of Milan investigated a suspected case of child abuse that resulted in the death of a child. Three circular lesions suggestive of cigarette burns were found on the body. A cigarette butt collected nearby further supported the suspicion of an intentional act. The challenge was to determine whether these marks were the result of deliberate harm. However, visual inspection and even conventional histology cannot always confirm the exact origin of such lesions. Hence the value of turning to a more refined and objective method.

The SEM–EDX method: a microscopic zoom on the lesion

Scanning electron microscopy (SEM) allows the morphology of the injured skin to be observed with extreme precision, while energy-dispersive X-ray spectroscopy (EDX) identifies the chemical elements present on the surface of the lesions. This analysis relied on internal calibration, applied both to samples of injured skin and to cigarette fragments collected at the scene.

Elemental signatures of an intentional act

The results revealed a circular lesion with a reddish base, consistent with intense thermal contact. The chemical composition detected by EDX contained elements typically associated with tobacco combustion, in particular sulfur trioxide and phosphorus oxides, confirming combustion rather than mere environmental residues. Combined with the histological findings, this analysis demonstrated that the injury had occurred prior to death, providing an objective element supporting the likelihood of abuse.

A tool to strengthen forensic expertise

The study demonstrates that SEM–EDX analysis, combined with histology, represents a significant advancement in the characterization of suspicious lesions in the context of child abuse. It moves beyond visual assessment to provide objective and reproducible data, essential in judicial proceedings. By overcoming the limitations of visual inspection, this approach delivers results based on reproducible physico-chemical evidence, thereby reinforcing the robustness of forensic conclusions in light of judicial requirements.

Conclusion

This study paves the way for broader integration of analytical microscopy into forensic practices. By combining scientific rigor with judicial investigation, it offers a robust method for clarifying the nature of lesions whose origin often remains uncertain. The approach could also be applied to other types of injuries, such as those caused by heat sources or chemical agents. This progress deserves to be extended and validated on a larger number of cases in order to refine its reliability.

References:

  • Tambuzzi S. et al. (2024). Pilot Application of SEM/EDX Analysis on Suspected Cigarette Burns in a Forensic Autopsy Case of Child Abuse. American Journal of Forensic Medicine & Pathology, 45(2), 135‑143. Read here.
  • Faller-Marquardt M., Pollak S., Schmidt U. (2008). Cigarette Burns in Forensic Medicine. Forensic Sci. Int., 176(2–3), 200–208
  • Maghin F. et al. (2018). Characterization With SEM/EDX of Microtraces From Ligature in Hanging. Am. J. Forensic Med. Pathol., 39(1), 1–7, read here.

How does nature indicate the presence of a corpse?

What if fungal spores and pollen grains could reveal the secrets of clandestine graves? That is the hypothesis explored by an international team of researchers in Colombia, who conducted a pioneering experiment combining mycology and palynology in a forensic context. 

A biological approach to detecting illegal graves

In an experimental project carried out in Bogotá, two graves simulating clandestine burials were dug — one empty, the other containing a pig cadaver (a standard human body substitute in forensic science). Soil samples were collected and analyzed at different depths to study the composition of fungal and pollen communities. The aim of the study was to determine whether decomposed organic remains alter the soil’s microbial and plant-based communities, and whether these biological signatures could serve as spatial and temporal indicators in criminal investigations.

Revealing fungal and pollen richness

The results showed that soil from the pit containing the carcass exhibited greater fungal richness (higher species diversity), notably with species such as Fusarium oxysporum and Paecilomyces, whose frequency increased in the presence of decomposition. These organisms, capable of degrading nitrogen-rich compounds such as keratin, could serve as indicators of buried organic remains.

Fungal structures of Fusarium oxysporum observed under optical microscopy.
A and B: macroconidia; C: chlamydospores. © David Esteban Duarte-Alvarado

On the palynology side, pollen grains identified at 50 cm depth—including Borago officinalis, Poa sp., and Croton sonderianus—are typical of the dry season. In contrast, the pollens found at 30 cm correspond to the rainy season. This stratified distribution could allow investigators to estimate the burial and exhumation periods with greater accuracy.

Integrating soil biology into criminal investigations

This study is the first to provide experimental data on mycology and palynology in an equatorial tropical context, a field largely unexplored in forensic science until now. It paves the way for a more systematic integration of these disciplines in crime scene investigations involving clandestine graves or the search for buried remains. While preliminary, the findings demonstrate the value of biological approaches as a complement to conventional forensic methods, especially in regions where climatic conditions influence decomposition dynamics.

Conclusion

This study is part of a broader research effort into biological indicators left by buried bodies. After trees and roots that can signal underground anomalies, it is now fungi and pollen that emerge as silent witnesses of clandestine deaths. This microbiological approach expands the toolkit of forensic archaeology, as practiced by experts such as those from the French Gendarmerie. By combining invisible biological traces with conventional excavation and stratigraphic analysis techniques, it enables a more precise reading of the soil—and the criminal stories it may conceal.

Reference:
Tranchida, M. C., et al. (2025). Mycology and palynology: Preliminary results in a forensic experimental laboratory in Colombia, South America. Journal of Forensic Sciences.
Full article here.

When teeth talk: How dental tartar serves toxicology

Initially exploited in archaeology, dental calculus is now revealing its potential in forensic science. It retains traces of ingested substances, opening the way to post-mortem analysis of drug intake and psychoactive compounds.

Dental calculus: A neglected but valuable matrix

Dental calculus forms through the gradual mineralization of dental plaque, a biofilm composed of saliva, microorganisms, and food residues. This process traps various compounds present in the oral cavity, including xenobiotics such as drugs or their metabolites. Its crystalline structure grants this matrix excellent preservation properties for the substances it contains, while making it resistant to external degradation, including in post-mortem or archaeological contexts.

A new path for tracking illicit substances

Recently, a research team demonstrated the feasibility of a toxicological approach based on the analysis of dental calculus using liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). In a study involving ten forensic cases, the researchers detected 131 substances in calculus, compared with 117 in blood; several compounds that were absent from the blood were present in the calculus, sometimes at higher concentrations. The method enabled the identification of common drugs such as cocaine, heroin, and cannabinoids, even in cases where they were no longer detectable in conventional matrices (Sørensen et al., 2021).

A long-lasting and discreet witness

This approach offers several clear advantages. It allows the detection of substance use weeks or even months after ingestion. Tartar sampling is non-invasive and applicable to skeletal remains, making it particularly relevant in archaeological and forensic anthropology contexts. It can help reconstruct consumption habits, medical treatments, or causes of death in situations where blood, urine, or hair are unavailable.

A promising method to be further developed

One of the main strengths of this technique lies in its ability to exploit a matrix that is often overlooked but commonly available on teeth. Only a few milligrams are needed to conduct a reliable analysis—provided the trapped substances remain stable over time. This method also opens the possibility of broadening the range of detectable compounds, pending further validation.

While promising, this avenue still requires additional research to standardize protocols, assess the long-term stability of molecules, and fully integrate this approach into routine forensic toxicology practices. Although still in its exploratory phase, the method offers remarkable potential for the use of alternative matrices and opens new perspectives for forensic toxicology.

References:

  • Sørensen LK, Hasselstrøm JB, Larsen LS, et al. Entrapment of drugs in dental calculus: detection validation based on test results from post-mortem investigations. Forensic Sci Int 2021; 319: 110647.
  • Reymond C, Le Masle A, Colas C, et al. A rational strategy based on experimental designs to optimize parameters of a liquid chromatography-mass spectrometry analysis of complex matrices. Talanta 2019; 205: 120063.
  • Radini A, Nikita E, Buckley S, Copeland L, Hardy K. Beyond food: The multiple pathways for inclusion of materials into ancient dental calculus. Am J Phys Anthropol 2017; 162: 71–83.
  • Henry AG, Piperno DR. Using plant microfossils from dental calculus to recover human diet: a case study from Tell al-Raqā’i, Syria. J Archaeol Sci 2008; 35: 1943–1950.

Bedbugs: a new weapon for forensic science?

Malaysian researchers have explored the potential of tropical bedbugs, Cimex hemipterus, as a new source of human DNA in forensic investigations. Typically overlooked in crime scene analyses due to the absence of visible traces, these insects may nevertheless carry, within their digestive tract, the DNA of the last human host they fed on. The study aimed to determine whether—and for how long—a usable human DNA profile could be extracted from the blood meal content of bedbugs, focusing on two key forensic genetic markers: STRs (Short Tandem Repeats) and SNPs (Single Nucleotide Polymorphisms).

Methodology and results

Laboratory-reared bedbug colonies were fed on human volunteers and subsequently sacrificed at different intervals (0, 5, 14, 30, and 45 days after feeding). DNA was extracted and subjected to STR and SNP analyses following standard forensic protocols. The results were conclusive: complete STR and SNP profiles could only be obtained on the day of feeding (day 0), while partial, though still informative, profiles remained detectable up to 45 days post-feeding. The SNP data were interpreted using the HIrisPlex-S system, allowing phenotype predictions (eye, skin, and hair colour) even from partial genetic information. Moreover, field-collected bedbugs confirmed the feasibility of STR profiling, occasionally revealing mixed DNA profiles—potentially indicating feeding from multiple human hosts.

These results open up a new avenue for forensic science: when traditional biological traces have disappeared or been cleaned away, bedbugs could remain at the scene and serve as reliable micro-reservoirs of human DNA, enabling investigators to identify individuals who were present or to establish a timeline of movements. However, several limitations must be taken into account. First, the analyses are time-consuming and require a rigorous protocol. The DNA profile becomes partial after a few days, and some loci are no longer detectable. Moreover, when an insect has fed on multiple individuals, mixed genetic signals can occur, making interpretation more complex.

The authors emphasize the need to validate these findings on a broader range of samples, including more human donors and various commercial STR/SNP kits. Controlled in situ tests on simulated crime scenes would also be useful to confirm the robustness of the method—particularly in comparison with other insects or biological intermediaries considered in forensic entomology.

Conclusion

In summary, this study demonstrates that human DNA preserved in the stomach of tropical bedbugs can be exploited for up to 45 days after feeding through STR and SNP analysis. Although a complete genetic profile can only be obtained immediately after feeding, these insects represent an innovative and promising resource for forensic science, especially in situations where conventional methods fail. Nevertheless, the approach requires strict protocols, further validation studies, and realistic crime-scene modelling before it can be used in judicial proceedings. Additional research will determine how this strategy can be integrated into the growing toolkit of forensic investigators and scientists.

Sources:

  • Kamal, M. M. et al. (2023). Human profiling from STR and SNP analysis of tropical bed bug (Cimex hemipterus) for forensic science, Scientific Reports, 13(1), 1173.
  • Chaitanya, L. et al. (2018). HIrisPlex-S system for eye, hair and skin colour prediction from DNA, Forensic Science International: Genetics, 35, 123–134.
  • Asia News Network (2023). Malaysian scientists discover bed bugs can play role in forensic investigations, Read full article.
  • ResearchGate – original publication: Human profiling from STR and SNP analysis of tropical bed bug Cimex hemipterus for forensic science, Read full article.

Photogrammetry, Lasergrammetry, and Artificial Intelligence: A Technological Revolution

Forensics and emergency response are currently at a turning point with the growing integration of advanced technologies such as photogrammetry, lasergrammetry (LiDAR), and artificial intelligence (AI). These technologies not only provide unprecedented levels of accuracy and efficiency but also open up new avenues for investigation and intervention, profoundly reshaping traditional methodologies.

Photogrammetry and lasergrammetry: precision tools

As a surveying expert and officer specializing in the drone unit of the Haute-Savoie Fire and Rescue Department (SDIS74), I have directly observed how these tools enhance the accuracy of topographic surveys and facilitate the rapid analysis of complex scenes. Photogrammetry enables 3D reconstruction of various environments using aerial images captured by drones equipped with high-resolution cameras. This process quickly generates detailed digital terrain models, which are critical in urgent or forensic interventions where every detail matters.

Road survey using photogrammetric methods, in true color. Credit: Arnaud STEPHAN – LATITUDE DRONE

It is possible to achieve extremely high levels of detail, allowing, for example, the identification of footprints by the depth left in the ground.

LiDAR scanning effectively complements photogrammetry by providing millimetric precision through the emission of laser beams that scan and model the environment in three dimensions. This technology is particularly effective in complex contexts such as dense wooded areas, steep cliffs, or rugged mountain terrain, where photogrammetry may sometimes struggle to capture all the necessary details.

To be more precise, LiDAR generally produces more noise on bare ground and hard surfaces compared to photogrammetry, which remains the preferred tool in such cases. However, in wooded areas, LiDAR can occasionally penetrate through to the ground and thus provide crucial information about the terrain, where photogrammetry may fail.

Photogrammetry only works during daylight, since it relies on photographic data in the visible spectrum.

Depending on the chosen flight altitudes and the type of sensor used, it is possible to achieve extremely high levels of detail, allowing, for example, the identification of footprints by the depth left in the ground. These technologies are already being used to precisely capture crime scenes. Traditionally, static scanners were used for this purpose, but drones now make it possible to greatly expand the capture perimeter while ensuring faster processing. This speed is crucial, as it is often imperative to capture the scene quickly before any change in weather conditions.

However, it is important to note that photogrammetry only works during daylight, since it relies on photographic data in the visible spectrum.
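
The “level of detail” mentioned above is usually quantified as the ground sampling distance (GSD), the footprint of a single image pixel on the ground. The helper below applies the standard GSD relation; the sensor values in the example are typical of a small mapping drone and are assumptions, not specifications of the aircraft cited in this article.

```python
def ground_sampling_distance(altitude_m: float, sensor_width_mm: float,
                             focal_length_mm: float, image_width_px: int) -> float:
    """Ground footprint of one pixel, in centimetres per pixel.

    GSD = (flight altitude x sensor width) / (focal length x image width)
    """
    return (altitude_m * 100.0 * sensor_width_mm) / (focal_length_mm * image_width_px)

# Assumed sensor: 13.2 mm wide, 8.8 mm focal length, 5472 px across the image.
for altitude in (30, 60, 120):
    gsd = ground_sampling_distance(altitude, 13.2, 8.8, 5472)
    print(f"{altitude} m altitude -> {gsd:.2f} cm per pixel")
```

Lower flights (or longer focal lengths) shrink the pixel footprint, which is what makes details such as the depth of a footprint resolvable.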

Topographic survey using LiDAR method and colored according to altitude. Vegetation differentiated in green. Credit: Arnaud STEPHAN – LATITUDE DRONE

Artificial Intelligence: towards automated and efficient analysis

The true revolution lies in the integration of these geospatial surveys into intelligent systems capable of massively analyzing visual data with speed and precision. In this regard, the OPEN RESCUE project, developed by ODAS Solutions in partnership with SDIS74 and the Université Savoie Mont-Blanc, stands as an exemplary case. This AI is fueled by an exceptional dataset of nearly 1.35 million images collected using various types of drones (DJI Mavic 3, DJI Matrice 300, Phantom 4 PRO RTK, etc.) across a remarkable diversity of environments, covering all seasons.

Illustration of OPEN RESCUE’s capabilities: a person isolated in the mountains during winter. Credit: Arnaud STEPHAN – ODAS SOLUTIONS

The robustness of the OPEN RESCUE AI is demonstrated by a maximum F1-score of 93.6%, a remarkable result validated through real field operations. The F1-score is a statistical indicator used to measure the accuracy of an artificial intelligence system: it combines precision (the number of correctly identified elements among all detections) and recall (the number of correctly identified elements among all those actually present). A high score therefore means that the AI effectively detects a large number of relevant elements while avoiding false detections. This intelligent system is capable of accurately detecting individuals as well as indirect signs of human presence such as abandoned clothing, immobilized vehicles, or personal belongings, thereby providing valuable and immediate assistance to rescue teams.
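
For readers who want the definition in concrete form, the F1-score is simply the harmonic mean of precision and recall. The detection counts below are invented; the snippet illustrates the computation and does not reproduce the OPEN RESCUE evaluation.

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """Harmonic mean of precision and recall for a set of detections."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts for a batch of aerial images: 90 correct detections,
# 12 false alarms, 8 missed targets.
print(f"F1 = {f1_score(90, 12, 8):.3f}")  # -> 0.900
```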

Collection of OPEN RESCUE training data with SDIS74 firefighters – Credit: Arnaud STEPHAN – ODAS SOLUTIONS

The arrival of this technology is radically transforming the way teams conduct their searches: it is now possible to methodically and extensively sweep entire areas, while ensuring that no relevant element has been missed by the AI in these zones. Although this does not replace canine units or other traditional methods, artificial intelligence provides a new and complementary level of thoroughness in the search process.

The arrival of this technology is radically transforming the way teams conduct their searches.

Practical Applications and Operational Results

In the field, the effectiveness of these technologies has been widely demonstrated. The autonomous drones used by our unit can efficiently cover up to 100 hectares in about 25 minutes, with image processing carried out almost in real time by OPEN RESCUE. This enables an extremely rapid response, ensuring optimal management of critical time during emergency interventions and missing-person searches.

Furthermore, the ability to precisely document the areas covered during operations provides a significant advantage in judicial contexts. The possibility of using these accurate 3D models and automatically analyzed data as evidence before courts offers greater transparency in judicial procedures and greatly facilitates the work of judges, investigators, and lawyers.

DJI Matrice 300 drone flying in a mountainous area – Credit: Arnaud STEPHAN – LATITUDE DRONE

Operational constraints and regulatory framework

The operational use of drones and these advanced technologies is subject to several strict regulatory constraints, particularly in terms of flight authorizations, privacy protection, data management, and air safety. In France, drones are regulated by the Direction Générale de l’Aviation Civile (DGAC – French Civil Aviation Authority), which imposes specific flight scenarios and precise protocols to be followed during missions.

In addition, the technical constraints of operations include the need for trained and regularly certified pilots, capable of carrying out missions safely and efficiently. Finally, roughly every six months, new innovative equipment is released, constantly bringing significant improvements such as higher capture speeds, better optical and thermal sensors, and the miniaturization of onboard LiDAR systems.

Conclusion

Ultimately, the growing integration of advanced technologies represents a decisive breakthrough in forensic sciences and emergency interventions, despite the operational and regulatory constraints to be taken into account. Their practical application not only enhances the efficiency and speed of operations but also opens up new possibilities for judicial analysis, thereby confirming their essential role in public safety and modern justice.

AI in Forensics: Between Technological Revolution and Human Challenges

By Yann CHOVORY, Engineer in AI Applied to Criminalistics (Institut Génétique Nantes Atlantique – IGNA).

At a crime scene, every minute counts. Between identifying a fleeing suspect, preventing further wrongdoing, and managing the time constraints of an investigation, case handlers are engaged in a genuine race against the clock. Fingerprints, gunshot residues, biological traces, video surveillance, digital data… all these clues must be collected and quickly analyzed, or there is a risk that the case will collapse for lack of usable evidence in time. Yet, overwhelmed by the ever-growing mass of data, forensic laboratories are struggling to keep pace.

Analyzing evidence with speed and accuracy

In this context, artificial intelligence (AI) establishes itself as an indispensable accelerator. Capable of processing in a few hours what would take weeks to analyze manually, it optimises the use of clues by speeding up their sorting and detecting links imperceptible to the human eye. More than just a time-saver, it also improves the relevance of investigations: swiftly cross-referencing databases, spotting hidden patterns in phone call records, comparing DNA fragments with unmatched precision. AI thus acts as a tireless virtual analyst, reducing the risk of human error and offering new opportunities to forensic experts.

But this technological revolution does not come without friction. Between institutional scepticism and operational resistance, its integration into investigative practices remains a challenge. My professional journey, marked by a persistent quest to integrate AI into scientific policing, illustrates this transformation—and the obstacles it faces. From a marginalised bioinformatician to project lead for AI at IGNA, I have observed from within how this discipline, long grounded in traditional methods, is adapting—sometimes under pressure—to the era of big data.

The risk of human error is reduced and the reliability of identifications increased

Concrete examples: AI from the crime scene to the laboratory

AI is already making inroads in several areas of criminalistics, with promising results. For example, AFIS (Automated Fingerprint Identification System) fingerprint recognition systems now incorporate machine learning components to improve the matching of latent fingerprints. The risk of human error is reduced and the reliability of identifications increased [1]. Likewise, in ballistics, computer vision algorithms now automatically compare the striations on a projectile with the markings of known firearms, speeding up the work of the firearms expert. Tools are also emerging to interpret bloodstains at a scene: machine learning models can help reconstruct the trajectory of blood droplets and thus the dynamics of an assault or violent event [2]. These examples illustrate how AI is integrating into the forensic expert’s toolkit, from crime scene image analysis to the recognition of complex patterns.

But it is perhaps in forensic genetics that AI currently raises the greatest hopes. DNA analysis labs process thousands of genetic profiles and samples, with deadlines that can be critical. AI offers considerable time savings and enhanced accuracy. As part of my research, I contributed to developing an in-house AI capable of interpreting 86 genetic profiles in just three minutes [3], a major advance when analyzing a complex profile may take hours. Since 2024, it has autonomously handled simple profiles, while complex genetic profiles are automatically routed to a human expert, ensuring effective collaboration between automation and expertise. The results observed are very encouraging. Not only is the turnaround time for DNA results drastically reduced, but the error rate also falls thanks to the standardization introduced by the algorithm.

AI does not replace humans but complements them

Another promising advance lies in enhancing DNA-based facial composites. Currently, this technique allows certain physical features of an individual (such as eye color, hair color, or skin pigmentation) to be estimated from their genetic code, but it remains limited by the complexity of genetic interactions and by uncertainties in the predictions. AI could revolutionise this approach by using deep learning models trained on vast genetic and phenotypic databases, thereby refining these predictions and generating more accurate sketches. Unlike classical methods, which rely on statistical probabilities, an AI model could analyse millions of genetic variants in a few seconds and identify subtle correlations that traditional approaches do not detect. This prospect opens the way to a significant improvement in the relevance of DNA sketches, facilitating suspect identification when no other usable clues are available. The Forenseek platform has explored current advances in this area, but AI has not yet been fully exploited to surpass existing methods [5]. Its integration could therefore constitute a major breakthrough in criminal investigations.

It is important to emphasize that in all these examples, AI does not replace the human but complements them. At IRCGN (French National Gendarmerie Criminal Research Institute) cited above, while the majority of routine, good-quality DNA profiles can be handled automatically, regular human quality control remains: every week, a technician randomly checks cases processed by AI, to ensure no drift has occurred [3]. This human-machine collaboration is key to successful deployment, as the expertise of the forensic specialists remains indispensable to validate and finely interpret the results, especially in complex cases.


Algorithms Trained on Data: How AI “Learns” in Forensics

The impressive performance of AI in forensics relies on one crucial resource: data. For a machine learning algorithm to identify a fingerprint or interpret a DNA profile, it first needs to be trained on numerous examples. In practical terms, we provide it with representative datasets, each containing inputs (images, signals, genetic profiles, etc.) associated with an expected outcome (the identity of the correct suspect, the exact composition of the DNA profile, etc.). By analyzing thousands—or even millions—of these examples, the machine adjusts its internal parameters to best replicate the decisions made by human experts. This is known as supervised learning, since the AI learns from cases where the correct outcome is already known. For example, to train a model to recognize DNA profiles, we use data from solved cases where the expected result is clearly established.
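
A minimal supervised-learning sketch, using scikit-learn and a synthetic dataset that merely stands in for labelled forensic examples (nothing here reflects IGNA's actual models or data), shows the train-then-predict loop described above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "traces": each row is a feature vector (stand-in for an image,
# signal, or profile) and each label is the outcome known from solved cases.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Supervised learning: the model adjusts its internal parameters to reproduce
# the known outcomes on the training examples.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The model is then evaluated on examples it has never seen.
print("Accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```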

An AI’s performance depends on the quality of the data that trains it.

The larger and more diverse the training dataset, the better the AI will be at detecting reliable and robust patterns. However, not all data is equal. It must be of high quality (e.g., properly labeled images, DNA profiles free from input errors) and cover a wide enough range of situations. If the system is biased by being exposed to only a narrow range of cases, it may fail when confronted with a slightly different scenario. In genetics, for instance, this means including profiles from various ethnic backgrounds, varying degrees of degradation, and complex mixture configurations so the algorithm can learn to handle all potential sources of variation.

Transparency in data composition is essential. Studies have shown that some forensic databases are demographically unbalanced—for example, the U.S. CODIS database contains an overrepresentation of profiles from African-American individuals compared to other groups [6]. A model naively trained on such data could inherit systemic biases and produce less reliable or less fair results for underrepresented populations. It is therefore crucial to monitor training data for bias and, if necessary, to correct it (e.g., through balanced sampling, augmentation of minority data) in order to achieve fair and equitable learning.
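
As a sketch of the kind of audit this implies, the snippet below counts how a (hypothetical) group label is distributed in a training set and naively resamples toward balance. Real pipelines would pair such rebalancing with targeted data collection or augmentation.

```python
import random
from collections import Counter

# Hypothetical training records, each tagged with a demographic group label.
records = ["group_A"] * 700 + ["group_B"] * 250 + ["group_C"] * 50

counts = Counter(records)
print("Before balancing:", dict(counts))

# Naive balanced sampling: draw the same number of examples per group,
# sampling with replacement for under-represented groups.
target = max(counts.values())
balanced = []
for group in counts:
    pool = [record for record in records if record == group]
    balanced.extend(random.choices(pool, k=target))

print("After balancing:", dict(Counter(balanced)))
```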

  • Data collection: gathering diverse and representative datasets
  • Data preprocessing: cleaning and preparing data for training
  • AI training: training algorithms on the prepared datasets
  • Data validation: verifying the quality and diversity of the data
  • Bias evaluation: identifying and correcting biases in the datasets

Technically, training an AI involves rigorous steps of cross-validation and performance measurement. We generally split data into three sets: one for training, another for validation during development (to adjust the parameters), and a final test set to objectively evaluate the model. Quantitative metrics such as accuracy, recall (sensitivity), or error curves make it possible to quantify how reliable the algorithm is on data it has never seen [6]. For example, one can check that the AI correctly identifies a large majority of perpetrators from traces while maintaining a low rate of false positives. Increasingly, we also integrate fairness and ethical criteria into these evaluations: performance is examined across demographic groups or testing conditions (gender, age, etc.), to ensure that no unacceptable bias remains [6]. Finally, compliance with legal constraints (such as the GDPR in Europe, which regulates the use of personal data) must be built in from the design phase of the system [6]. That may involve anonymizing data, limiting certain sensitive information, or providing procedures in case an ethical bias is detected.
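
The evaluation routine described in this paragraph can be sketched as follows, with synthetic data, a three-way split, and the same metric reported overall and per (hypothetical) demographic group; it is an illustration of the procedure, not an operational forensic system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic dataset, with a made-up group attribute used for fairness checks.
X, y = make_classification(n_samples=3000, n_features=15, random_state=1)
groups = np.random.default_rng(1).choice(["group_A", "group_B"], size=len(y))

# Three-way split: 60% training, 20% validation (reserved for tuning, unused
# here), 20% held-out test for the final, objective evaluation.
X_tmp, X_test, y_tmp, y_test, g_tmp, g_test = train_test_split(
    X, y, groups, test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val, g_train, g_val = train_test_split(
    X_tmp, y_tmp, g_tmp, test_size=0.25, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Overall recall on unseen data, then the same metric per group, to check
# that no subgroup is served noticeably worse.
y_pred = model.predict(X_test)
print("Overall recall:", round(recall_score(y_test, y_pred), 3))
for group in ("group_A", "group_B"):
    mask = g_test == group
    print(group, "recall:", round(recall_score(y_test[mask], y_pred[mask]), 3))
```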

Ultimately, an AI’s performance depends on the quality of the data that trains it. In the forensic field, that means algorithms “learn” from accumulated human expertise. Every algorithmic decision implies the experience of hundreds of experts who provided examples or tuned parameters. It is both a strength – capitalizing on a vast knowledge base – and a responsibility: to carefully select, prepare, and control the data that will feed the artificial intelligence.

Technical and operational challenges for integrating AI into forensic science

While AI promises substantial gains, its concrete integration in the forensic field faces many challenges. It is not enough to train a model in a laboratory: one must also be able to use it within the constrained framework of a judicial investigation, with all the reliability requirements that entails. Among the main technical and organisational challenges are:

  • Access to data and infrastructure: Paradoxically, although AI requires large datasets to learn, it can be difficult to gather sufficient data in the specific forensic domain. DNA profiles, for example, are highly sensitive personal data, protected by law and stored in secure, sequestered databases. Obtaining datasets large enough to train an algorithm may require complex cooperation between agencies or the generation of synthetic data to fill gaps. Additionally, computing tools must be capable of processing large volumes of data in reasonable time — which requires investment in hardware (servers, GPUs for deep learning) and specialized software. Some national initiatives are beginning to emerge to pool forensic data securely, but this remains an ongoing project.
  • Quality of annotations and bias: The effectiveness of AI learning depends on the quality of the annotations in training datasets. In many forensic areas, establishing “ground truth” is not trivial. For example, to train an algorithm to recognize a face in surveillance video, each face must first be correctly identified by a human — which can be difficult if the image is blurry or partial. Similarly, labeling datasets of footprints, fibers, or fingerprints requires meticulous work by experts and sometimes involves subjectivity. If the training data include annotation errors or historical biases, the AI will reproduce them [6]. A common bias is the demographic imbalance noted above, but there may be others. For instance, if a weapon detection model is trained mainly on images of weapons indoors, it may perform poorly at detecting a weapon outdoors, in rain, and so on. The quality and diversity of annotated data are therefore a major technical issue. This means establishing rigorous data collection and annotation protocols (ideally standardized at the international level), as well as ongoing monitoring to detect model drift (overfitting to certain cases, performance degradation over time, etc.). Such validation relies on experimental studies comparing AI performance with that of human experts. However, the complexity of certification and procurement procedures often slows adoption, delaying the deployment of new tools in forensic science by several years.
  • Understanding and acceptance by judicial actors: Introducing artificial intelligence into the judicial process inevitably raises the question of trust. An investigator or a laboratory technician trained in conventional methods must learn to use and interpret the results provided by AI. This requires training and a gradual cultural shift so that the tool becomes an ally rather than an “incomprehensible black box.” More broadly, the judges, attorneys, and jurors who will have to discuss this evidence must also grasp its principles. Yet explaining the inner workings of a neural network or the statistical meaning of a similarity score is far from simple. Misunderstanding of, or suspicion toward, these algorithmic methods is sometimes observed among judicial actors [6]. If a judge does not understand how a conclusion was reached, they may be inclined to reject it or assign it less weight out of caution. Similarly, a defence lawyer will legitimately scrutinize the weaknesses of a tool they do not know, which may lead to judicial debates over the validity of the AI. A major challenge is thus to make AI explainable (the concept of XAI, eXplainable Artificial Intelligence), or at least to present its results in a format that is comprehensible and pedagogically acceptable to a court. Without this, the integration of AI risks meeting resistance or sparking controversy at trial, limiting its practical contribution.
  • Regulatory framework and data protection: Finally, forensic sciences operate within a strict legal framework, notably regarding personal data (DNA profiles, biometric data, etc.) and criminal procedure. The use of AI must comply with these regulations. In France, the CNIL (Commission Nationale de l’Informatique et des Libertés) oversees such processing and can impose restrictions if an algorithmic treatment harms privacy. Training an AI on nominal DNA profiles without a legal basis, for example, would be inconceivable. Innovation must therefore remain within legal boundaries, which imposes constraints from the design phase of projects. Another issue concerns trade secrecy surrounding certain algorithms used in judicial contexts: if a vendor refuses to disclose the internal workings of its software for intellectual-property reasons, how can the defence or the judge assess its reliability? Recent cases have shown defendants convicted on the basis of proprietary software (e.g., for DNA analysis) without the defence being able to examine the source code used [7]. Such situations raise issues of transparency and the rights of the defence. In the United States, a proposed law, the Justice in Forensic Algorithms Act, aims precisely to ensure that trade secrecy cannot prevent independent experts from examining the algorithms used in forensics, in order to guarantee fair trials. This underlines the need to adapt regulatory frameworks to these new technologies.
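As announced in the point on annotation quality above, here is a purely illustrative sketch of drift monitoring: score the model on successive batches of newly verified cases and raise an alert when performance falls below a baseline established on the validation set. The function name (detect_drift), the batches, and the tolerance threshold are hypothetical choices, assuming a scikit-learn-style classifier such as the one in the earlier sketch.

```python
# Hedged sketch of performance-drift monitoring: compare the model's accuracy
# on successive batches of newly verified cases against a validation baseline.
# Model, batches, and tolerance are illustrative assumptions.
from sklearn.metrics import accuracy_score

def detect_drift(model, batches, baseline_accuracy, tolerance=0.05):
    """batches: iterable of (X_batch, y_batch) pairs collected over time.
    Returns the indices and accuracies of batches whose performance drops
    more than `tolerance` below the baseline."""
    alerts = []
    for i, (X_batch, y_batch) in enumerate(batches):
        acc = accuracy_score(y_batch, model.predict(X_batch))
        if acc < baseline_accuracy - tolerance:
            alerts.append((i, acc))
    return alerts

# Example use (reusing the model and baseline from the earlier sketch):
# alerts = detect_drift(model, monthly_batches, baseline_accuracy=0.90)
# if alerts:
#     ...trigger a review of annotations and a possible retraining...
```

Such a check would typically be scheduled at regular intervals, so that degradation is detected before it silently affects casework.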

A lack of cooperation slows the development of powerful tools and limits their adoption in the field.

  • Another, more structural obstacle lies in the difficulty of integrating hybrid profiles within forensic institutions, at least in France. Today, competitive examinations and recruitment often remain compartmentalised between specialties, limiting the emergence of experts with dual competence. For instance, in forensic police services, entrance exams for technicians or engineers are divided into distinct specialties such as biology or computer science, with no pathway to recognize combined expertise in both fields. This institutional rigidity slows the integration of professionals capable of bridging these domains and fully exploiting the potential of AI in criminalistics. Yet current technological advances show that the analysis of biological traces increasingly relies on advanced digital tools. Faced with this evolution, greater flexibility in the recruitment and training of forensic experts will be necessary to meet tomorrow’s challenges.

AI in forensics must not become a matter of competition or prestige among laboratories, but a tool put at the service of justice and truth, for the benefit of investigators and victims.

  • A further major barrier to innovation in forensic science is the compartmentalization of efforts among stakeholders, who often work in parallel on identical problems without pooling their advances. This lack of cooperation slows the development of effective tools and limits their adoption in the field. By sharing our resources, whether databases, methodologies, or algorithms, we could accelerate the deployment of AI solutions into production and guarantee continuous improvement grounded in collective expertise. My experience across different French laboratories (the Lyon Scientific Police Laboratory (Service National de Police Scientifique – SNPS), the Institut de Recherche Criminelle de la Gendarmerie Nationale (IRCGN), and now the Nantes Atlantique Genetic Institute (IGNA)) has shown me how much this fragmentation hampers progress, even though we pursue a common goal: improving the resolution of investigations. This is why it is essential to promote open-source development whenever possible and to create collaborative platforms among public and judicial institutions. AI in forensics must not be a matter of competition or prestige among laboratories, but a tool in the service of justice and truth, for the benefit of investigators and victims alike.

The challenges discussed above all have technical dimensions, but they are closely intertwined with fundamental ethical and legal questions. From an ethical standpoint, the absolute priority is to avoid injustice through the use of AI. We must prevent, at all costs, a poorly designed algorithm from leading to someone’s wrongful indictment or, conversely, to the release of a guilty party. This requires controlling bias (to avoid discrimination against certain groups), transparency (so that every party in a trial can understand and challenge algorithmic evidence), and accountability for decisions. Indeed, who is responsible if an AI makes an error? The expert who misused it, the software developer, or no one, because “the machine made a mistake”? Such ambiguity is unacceptable in justice: human expertise must always remain in the loop, so that the final decision, whether to accuse or to exonerate, rests on a human evaluation informed by AI, not on the opaque verdict of an automated system.

On the legal side, the landscape is evolving to regulate the use of AI. The European Union, in particular, is finalizing an AI Regulation (AI Act), which will be the world’s first legislation establishing a framework for the development, commercialization, and use of artificial intelligence systems [8]. Its goal is to minimize risks to safety and fundamental rights by imposing obligations proportionate to the level of risk of the application (and forensic or criminal justice applications will undoubtedly be categorized among the most sensitive). In France, the CNIL has published recommendations emphasizing that innovation can be reconciled with respect for individual rights during the development of AI solutions [9]. This involves, for example, compliance with the GDPR, purpose limitation (i.e., training a model only for legitimate, clearly defined objectives), proportionality in data collection, and prior impact assessments for any system likely to significantly affect individuals. These safeguards aim to ensure that enthusiasm for AI does not come at the expense of the fundamental principles of justice and privacy.

Encouraging Innovation While Demanding Scientific Validation and Transparency

A delicate balance must therefore be struck between technological innovation and its regulatory framework. On one hand, overly restricting experimentation with and adoption of AI in forensics could deprive investigators of tools that may prove decisive in solving complex cases. On the other, leaving the field unregulated and unchecked would invite judicial errors or violations of rights. The solution likely lies in a measured approach: encouraging innovation while demanding solid scientific validation and transparency of methods. Ethics committees and independent experts can be involved to audit algorithms and to verify that they comply with standards and do not replicate problematic biases. Furthermore, legal professionals must be informed and trained on these new technologies so they can meaningfully debate their probative value in court. A judge trained in the basic concepts of AI will be better placed to weigh the evidentiary value (and limitations) of evidence derived from an algorithm.

Conclusion: The Future of Forensics in the AI Era

Artificial intelligence is set to deeply transform forensics, offering investigators analysis tools that are faster, more accurate, and capable of handling volumes of data once considered inaccessible. Whether it is sifting through gigabytes of digital information, comparing latent traces with improved reliability, or untangling complex DNA profiles in a matter of minutes, AI opens new horizons for solving investigations more efficiently.

But this technological leap comes with crucial challenges. Learning techniques, database quality, algorithmic bias, transparency of decisions, regulatory framework: these are the stakes that will determine whether AI can truly strengthen justice without undermining it. At a time when public trust in digital tools is more than ever under scrutiny, it is imperative to integrate these innovations with rigor and responsibility. The future of AI in forensics will not be a confrontation between machine and human, but a collaboration in which human expertise remains central. Technology may help us see faster and farther, but interpretation, judgment, and decision-making will remain in the hands of forensic experts and the judicial authorities. Thus, the real question may not be how far AI can go in forensic science, but how we will frame its use to ensure that it serves an ethical and equitable justice. Will we be able to harness its power while preserving the very foundations of a fair trial and the rights of the defence?

The revolution is underway. It is now up to us to make it progress, not drift.

Bibliography

[1]: Océane Duboust. L’IA peut-elle aider la police scientifique à trouver des similitudes dans les empreintes digitales ? Euronews, 12/01/2024. Available at: https://fr.euronews.com/next/2024/01/12/lia-peut-elle-aider-la-police-scientifique-a-trouver-des-similitudes-dans-les-empreintes-d#:~:text=,il [accessed 15/03/2025]
[2]: Muhammad Arjamand et al. The Role of Artificial Intelligence in Forensic Science: Transforming Investigations through Technology. International Journal of Multidisciplinary Research and Publications, Volume 7, Issue 5, pp. 67-70, 2024. Available at: http://ijmrap.com/ [accessed 15/03/2025]
[3]: Gendarmerie Nationale. Kit universel, puce RFID, IA : le PJGN à la pointe de la technologie sur l’ADN. Updated 22/01/2025. Available at: https://www.gendarmerie.interieur.gouv.fr/pjgn/recherche-et-innovation/kit-universel-puce-rfid-ia-le-pjgn-a-la-pointe-de-la-technologie-sur-l-adn [accessed 15/03/2025]
[4]: Michelle Taylor. EXCLUSIVE: Brand New Deterministic Software Can Deconvolute a DNA Mixture in Seconds. Forensic Magazine, 29/03/2022. Available at: https://www.forensicmag.com [accessed 15/03/2025]
[5]: Sébastien Aguilar. L’ADN à l’origine des portraits-robot ! Forenseek, 05/01/2023. Available at: https://www.forenseek.fr/adn-a-l-origine-des-portraits-robot/ [accessed 15/03/2025]
[6]: Max M. Houck, Ph.D. CSI/AI: The Potential for Artificial Intelligence in Forensic Science. iShine News, 29/10/2024. Available at: https://www.ishinews.com/csi-ai-the-potential-for-artificial-intelligence-in-forensic-science/ [accessed 15/03/2025]
[7]: Mark Takano. Black Box Algorithms’ Use in Criminal Justice System Tackled by Bill Reintroduced by Reps. Takano and Evans. Takano House, 15/02/2024. Available at: https://takano.house.gov/newsroom/press-releases/black-box-algorithms-use-in-criminal-justice-system-tackled-by-bill-reintroduced-by-reps-takano-and-evans [accessed 15/03/2025]
[8]: Mon Expert RGPD. Artificial Intelligence Act : la CNIL répond aux premières questions. Available at: https://monexpertrgpd.com [accessed 15/03/2025]
[9]: CNIL. Les fiches pratiques IA. Available at: https://www.cnil.fr [accessed 15/03/2025]

Definitions:

  1. GPU (Graphics Processing Unit)
    A GPU is a specialized processor designed to perform massively parallel computations. Originally developed for rendering graphics, it is now widely used in artificial intelligence applications, particularly for training deep learning models. Unlike CPUs (central processing units), which are optimized for sequential, general-purpose tasks, GPUs contain thousands of cores optimized to execute numerous operations simultaneously on large datasets.
  2. Machine Learning
    Machine learning is a branch of artificial intelligence that enables computers to learn from data without being explicitly programmed. It relies on algorithms capable of detecting patterns, making predictions, and improving performance through experience.
  3. Deep Learning
    Deep learning is a subfield of machine learning that uses artificial neural networks composed of multiple layers to model complex data representations. Inspired by the human brain, it allows AI systems to learn from large volumes of data and enhance their performance over time. Deep learning is especially effective for processing images, speech, text, and complex signals, with applications in computer vision, speech recognition, forensic science, and cybersecurity.