
Reconstruction of torn documents

When a document has been torn or shredded, the investigator is faced with a puzzle that has lost its box, its reference image, and sometimes even a portion of its pieces. Yet, the information contained within those fragments can alter the course of a case: a single figure in a contract, a name in a table, or a handwritten note in the margin. The question is therefore not merely “can it be reconstructed?”, but rather “can it be done reliably, traceably, and fast enough to be of use to the investigation?”

Why reconstruction is challenging

In forensic practice, fragments are rarely clean or uniform. They vary in shape, size, paper texture, ink density, and orientation. When several documents have been destroyed together, the fragments intermingle and create visual ambiguities: two edges may appear to fit when they do not, two different fonts may look similar, and uniform areas (blank backgrounds or low-detail photographs) provide almost no clues. So-called edge-matching approaches, which seek continuities along borders and patterns, work fairly well for small sets. But as the number of fragments grows, the number of possible combinations increases exponentially, and these methods struggle to discriminate between competing hypotheses.

The idea: harnessing randomness to explore better

Stochastic optimization offers an alternative way to approach the problem. Rather than attempting to reach the perfect configuration immediately, the algorithm generates plausible assemblies, evaluates them, and occasionally accepts “imperfect” choices in order to continue exploring the solution space. This probabilistic strategy continuously alternates between two complementary phases: exploration, which searches new pathways to avoid dead ends, and exploitation, which consolidates promising insights already discovered. In practice, each proposed assembly is assigned a score based on visual continuity (alignment of letters, extension of strokes, texture and color matching). If coherence improves, the hypothesis is adopted; if it deteriorates, it may still be tolerated for a while to test whether it leads to a better configuration later on. This flexible logic distinguishes the method from more rigid approaches such as simulated annealing or certain genetic algorithms. It adapts better to the real variability of documents and fragment mixtures, and it leaves room for light operator interaction when needed.
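The accept-worse-moves-for-a-while logic described above can be sketched in a few lines. This is a toy illustration, not the authors' algorithm: fragments are reduced to short strings, and "visual continuity" is stood in for by a trivial edge-compatibility test (two fragments fit when the characters meeting at the join are equal).

```python
import math
import random

def score(order, fragments):
    """Toy continuity score: count adjacent pairs whose edges match."""
    return sum(
        1 for a, b in zip(order, order[1:])
        if fragments[a][-1] == fragments[b][0]
    )

def reconstruct(fragments, steps=5000, temp=1.0, cooling=0.999, seed=0):
    """Stochastic search over fragment orderings."""
    rng = random.Random(seed)
    order = list(range(len(fragments)))
    current = score(order, fragments)
    best_order, best_score = list(order), current
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]        # propose: swap two fragments
        proposed = score(order, fragments)
        delta = proposed - current
        # Accept improvements always; tolerate a worse assembly with a
        # probability that shrinks as the "temperature" cools.
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            current = proposed
            if current > best_score:
                best_order, best_score = list(order), current
        else:
            order[i], order[j] = order[j], order[i]    # reject: undo the swap
        temp *= cooling
    return best_order, best_score
```

A real scorer would evaluate stroke continuity, texture, and colour along the torn edge, but the exploration/exploitation alternation is the same.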

What the experiments show

The authors report large-scale tests conducted on more than a thousand heterogeneous torn documents (office printouts, handwritten pages, images, and mixed-content sheets). The results converge toward an observation intuitive to any expert: the richer a document is in content (dense text, grids, or patterns), the faster and more accurate the reconstruction process becomes. Conversely, uniform areas require more iterations because they provide few visual anchor points. In the most challenging cases, occasional operator input, such as confirming a match or indicating the probable orientation of a fragment, is sufficient to guide the algorithm without compromising overall reproducibility.

Validation through a benchmark challenge

To evaluate the method under conditions close to real-world scenarios, the researchers tested it on fragment datasets inspired by the DARPA Shredder Challenge, a well-known benchmark in which participants attempt to reconstruct documents shredded into very narrow strips or confetti-like pieces. The method successfully reconstructed coherent and readable pages where other techniques either failed or stalled. This is more than an academic result: it demonstrates that the algorithm performs robustly when faced with investigative constraints, including numerous, intermingled, and occasionally damaged fragments resulting from handling or scanning.

Relevance to forensic practice

Beyond raw performance, the value of such a method lies in its integration into a demonstrable forensic workflow. The initial reconstruction phase, typically the most time-consuming, can be largely automated, freeing analysts to focus on content examination. More importantly, the approach lends itself to precise traceability: a log of tested hypotheses, retained parameters, acceptance thresholds, and intermediate captures. These records help document the chain of custody, justify technical choices before a magistrate, and, when necessary, reproduce the procedure in full transparency.

In laboratory settings, integration is facilitated by adopting rigorous acquisition practices such as high-resolution scanning, neutral backgrounds, color calibration, and systematic archiving of source files. A preliminary physical sorting of fragments, by paper weight, hue, or the presence of images, also enhances robustness by reducing ambiguities at the input stage.

Limitations and avenues for improvement

As with any optimization method, performance depends heavily on proper parameter tuning. Thresholds that are too strict will hinder exploration, while overly permissive criteria make it erratic. Highly mixed batches, comprising visually similar documents with identical layouts or fonts, remain difficult and may require occasional human intervention to prevent mismatches. Micro-fragments produced by high-grade shredders represent another major challenge: the smaller the visible surface, the fewer cues the algorithm can exploit. Future progress is expected in improving robustness against scanning artifacts, automating pre-sorting steps, and, more broadly, establishing standardized performance metrics (such as edge-matching accuracy, page completeness, and computation time) to facilitate fair comparison between methods.

Conclusion

Reconstructing torn documents is no longer solely a matter of expert patience and intuition. Stochastic optimization provides an exploration engine capable of handling large volumes, managing uncertainty, and producing usable assemblies. By combining automation, traceability, and expert supervision when needed, this approach transforms an “impossible puzzle” into a systematic procedure, serving the purposes of material evidence, intelligence gathering, and the preservation of damaged archives.


When the forest hides the truth: how airborne LiDAR can help investigators in disappearance cases

Difficult disappearances to solve

Every year in France, nearly 40,000 people are reported missing. In 2022, the association ARPD recorded 60,000 “worrying disappearances”, including 43,200 minors; around 1,000 cases remain unsolved in practice [1,2]. Over time, the likelihood of finding a missing person—alive or even just their remains—drops drastically. The dense vegetation of undergrowth and forests becomes a major obstacle, rendering both aerial observation and the scent-tracking abilities of search dogs ineffective [3]. In France’s overseas territories, such as Martinique, disappearances are also numerous, and the topography of key disappearance zones is a serious impediment to ground searches and the use of more conventional methods to locate missing persons [4,5]. Drone pilots from the French Gendarmerie are often called in, but the drones currently used are only equipped with optical sensors, which struggle to detect anything beneath the vegetation cover. Nevertheless, drones remain valuable for rescue missions: in the United States, they are widely used to locate accident victims in the wild, deliver communication devices, medication, or supplies [6–8]. When dense canopy and vegetation make traditional searches ineffective, an alternative becomes necessary: LiDAR (Light Detection and Ranging). Already proven in many fields, including archaeology, LiDAR could bring real added value to judicial investigations in forest environments [9–11].

The promise of LiDAR

A LiDAR sensor emits up to 240,000 laser pulses per second. It measures the time each beam takes to return to the emitter after hitting an obstacle, reconstructing a 3D point cloud [12]. Even though a large percentage of the beams bounce off leaves, the remainder reach the ground and map its relief. Investigators can then select a precise height range, for example between 15 and 50 cm above ground level, which effectively removes the canopy and exposes the volumes present just above the ground. A body or an object can thus stand out from the natural relief [13].
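The ranging principle itself is a one-line formula: with c the speed of light and t the measured round-trip time of a pulse, the distance to the obstacle is d = c·t/2. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_echo(round_trip_s):
    """Distance to the obstacle, from the pulse's round-trip time in seconds.

    The beam travels out and back, hence the division by two.
    """
    return C * round_trip_s / 2.0
```

Each emitted pulse can produce several such echoes (leaf, branch, ground), which is why the number of echoes per point matters for canopy penetration.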

A full-scale test in Isère

In April 2024, a team made up of a forensic anthropologist and a LiDAR drone specialist placed a volunteer lying down in a thicket in Montbonnot-Saint-Martin (Isère) to test whether a human body could produce a detectable signature despite dense vegetation. The test area, 0.8 hectares in size, contained 721 trees per hectare and showed a Normalized Difference Vegetation Index (NDVI) between +0.6 and +1, proof of an exceptionally thick canopy [13–15].

Two LiDAR sensors

Sensor (DJI) | Max. echoes/point | Flight speed | % of "ground" points | Verdict
Zenmuse L1   | 3                 | 1.9 m/s      | 0.11 %               | The body is barely detectable
Zenmuse L2   | 5                 | 2 m/s        | 0.26 %               | Silhouette detected in just a few clicks

Much like a GPS (Global Positioning System), the drone’s remote-control screen provides a zenith view of the search area. When a mission is programmed, a zone is defined, and the drone’s software plots the route it must follow. The drone flies in a straight line, then upon reaching the edge of the zone, it performs a 90° turn, advances, makes another 90° turn, and continues in the opposite direction. As the drone retraces its path, the LiDAR beam overlaps the previous pass, enabling greater data acquisition both above and beneath the canopy (Figure 1).


Figure 1: Schematic representation of a drone’s flight path. The grey bands represent the areas scanned by the LiDAR. The dark grey zones show the overlap of the laser beam occurring with each drone pass.
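The back-and-forth pattern of Figure 1 can be sketched as a simple waypoint generator. This is toy geometry, not the DJI mission-planning software: the zone is a flat rectangle and the swath width and overlap values are illustrative.

```python
def lawnmower_waypoints(width, height, swath, overlap=0.5):
    """Boustrophedon waypoints over a width x height zone (metres).

    Successive passes are spaced by swath * (1 - overlap), so each
    LiDAR strip overlaps the previous one, as in Figure 1.
    """
    step = swath * (1 - overlap)
    waypoints, y, leftward = [], 0.0, False
    while y <= height:
        xs = (width, 0.0) if leftward else (0.0, width)  # alternate direction
        waypoints += [(xs[0], y), (xs[1], y)]            # fly one straight line
        y += step                                        # 90° turn, advance, 90° turn
        leftward = not leftward
    return waypoints
```

The overlap parameter is the design choice that matters: more overlap means more pulses per ground cell, and thus more chances for a beam to slip through gaps in the canopy.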


After a 7-minute flight, the data are imported into DJI Terra Pro and then TerraSolid. Filtering at the 0.15–0.50 m height slice highlights a characteristic over-density at the volunteer’s location. Comparison with a control scan without a body makes it possible to distinguish natural anomalies (rocks, stumps) and to prepare a true/false positive matrix to assess the statistical robustness of the detection.
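The height-slice filtering and scan-versus-control comparison can be sketched with a small point-cloud helper. This is a toy stand-in for the DJI Terra / TerraSolid processing chain; the grid-cell size and the test geometry are made up for illustration.

```python
import numpy as np

def height_band(points, ground_z=0.0, lo=0.15, hi=0.50):
    """Keep points whose height above ground falls in [lo, hi] metres.

    `points` is an (N, 3) array of x, y, z coordinates; the canopy and
    the bare ground are both removed, leaving surface volumes.
    """
    h = points[:, 2] - ground_z
    return points[(h >= lo) & (h <= hi)]

def over_density(scan, control, cell=0.5):
    """Point counts per grid cell, scan minus control.

    Cells much denser in the scan than in the body-free control
    acquisition are candidate anomalies (a toy version of the
    true/false-positive comparison described above).
    """
    def counts(pts):
        keys = np.floor(pts[:, :2] / cell).astype(int)
        uniq, n = np.unique(keys, axis=0, return_counts=True)
        return {tuple(k): c for k, c in zip(uniq, n)}
    a, b = counts(scan), counts(control)
    return {k: a[k] - b.get(k, 0) for k in a if a[k] - b.get(k, 0) > 0}
```

Comparing against a control scan is what separates a body-shaped over-density from rocks or stumps, which appear in both acquisitions.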

Weather and regulations: field limitations

The test demonstrates that, even under dense canopy, a next-generation LiDAR can capture enough ground points to detect a body on the surface (Figure 2). Selecting an appropriate height band is crucial to reduce noise from rocks or tree trunks. However, weather conditions (rain, fog, wind > 30 km/h) remain limiting factors, as do drone autonomy and regulatory distance constraints.


Figure 2: A: acquisition without a body on the ground, B: acquisition with the volunteer placed on the ground.


What’s next?

Airborne LiDAR offers a non-destructive tool to locate human remains under vegetation and to document the three-dimensional topography of a scene prior to any excavation, ensuring safe access to the body. Its rapid deployment (lightweight equipment, one to two operators) provides a cheaper and safer alternative to human search parties or helicopter flights in difficult terrain.

Initially, research has focused on the detection of living volunteers, but for cases where individuals are presumed deceased, tests will need to be carried out on decomposing bodies. The aim is to evaluate to what degree of decomposition a body still leaves a detectable LiDAR signature. Such research cannot currently take place in France, so collaborations with foreign laboratories are being considered. Another possibility would be to complement LiDAR with other sensors, such as thermal imaging or multispectral sensors. Thermal imaging could detect heat sources linked to entomological activity on the body [16], while multispectral sensors could reveal chemical changes in soil or vegetation associated with decomposition over time [17,18].


Conclusion

This study demonstrates that even a tiny percentage of “ground points” can be enough to reveal the presence of a body in vegetation usually considered impenetrable. In just a few hours, a raw point cloud can be transformed into a priority search zone, reducing both the scope of the search and the anxious waiting of families. These results still need to be confirmed in other forest types and with actual donors, but LiDAR is already breaking through the opacity of disappearances.

The results confirm that airborne LiDAR sensors are capable of highlighting the presence of a body in heavily vegetated environments. In the densest conditions, the ground point density reached 0.26%. The study underlines the need to improve post-processing techniques, particularly the selection of cloud points and the development of true/false positive analyses, in order to optimize detection reliability. Finally, the integration of complementary sensors, such as thermal or multispectral devices, appears to be a promising avenue for identifying more precisely the thermal anomalies and chemical markers associated with decomposition.

References

[1] ARPD | ARPD, (n.d.). https://www.arpd.fr/fr (accessed February 28, 2024).

[2] Ministère de l’Intérieur, Disparitions inquiétantes, http://www.interieur.gouv.fr/Archives/Archives-des-dossiers/2015-Dossiers/L-OCRVP-au-caeur-des-tenebres/Disparitions-inquietantes (accessed April 17, 2024).

[3] U. Pietsch, G. Strapazzon, D. Ambühl, V. Lischke, S. Rauch, J. Knapp, Challenges of helicopter mountain rescue missions by human external cargo: Need for physicians onsite and comprehensive training, Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine 27 (2019). https://doi.org/10.1186/s13049-019-0598-2.

[4] C. Gratien, La mort de Benoit Lagrée officiellement reconnue, Martinique La 1ère (n.d.).

[5] Disparition de Marion à la Dominique : où en sont les recherches ?, guadeloupe.franceantilles.fr (2024). https://www.guadeloupe.franceantilles.fr/actualite/faits-divers/disparition-de-marion-a-la-dominique-ou-en-sont-les-recherches-976553.php (accessed October 25, 2024).

[6] C. Van Tilburg, First Report of Using Portable Unmanned Aircraft Systems (Drones) for Search and Rescue, Wilderness & Environmental Medicine 28 (2017) 116–118. https://doi.org/10.1016/j.wem.2016.12.010.

[7] Y. Karaca, M. Cicek, O. Tatli, A. Sahin, S. Pasli, M.F. Beser, S. Turedi, The potential use of unmanned aircraft systems (drones) in mountain search and rescue operations, The American Journal of Emergency Medicine 36 (2018) 583–588. https://doi.org/10.1016/j.ajem.2017.09.025.

[8] H.B. Abrahamsen, A remotely piloted aircraft system in major incident management: Concept and pilot, feasibility study, BMC Emergency Medicine 15 (2015). https://doi.org/10.1186/s12873-015-0036-3.

[9] J.C. Fernandez-Diaz, W.E. Carter, R.L. Shrestha, C.L. Glennie, Now You See It… Now You Don’t: Understanding Airborne Mapping LiDAR Collection and Data Product Generation for Archaeological Research in Mesoamerica, Remote Sensing 6 (2014) 9951–10001. https://doi.org/10.3390/rs6109951.

[10] T.S. Hare, M.A. Masson, B. Russell, High-Density LiDAR Mapping of the Ancient City of Mayapán, Remote. Sens. 6 (2014) 9064–9085.

[11] N.E. Mohd Sabri, M.K. Chainchel Singh, M.S. Mahmood, L.S. Khoo, M.Y.P. Mohd Yusof, C.C. Heo, M.D. Muhammad Nasir, H. Nawawi, A scoping review on drone technology applications in forensic science, SN Appl. Sci. 5 (2023) 233. https://doi.org/10.1007/s42452-023-05450-4.

[12] Zenmuse L2, DJI (n.d.). https://enterprise.dji.com.

[13] P. Nègre, K. Mahé, J. Cornacchini, Unmanned aerial vehicle (UAV) paired with LiDAR sensor to detect bodies on surface under vegetation cover: Preliminary test, Forensic Science International 369 (2025) 112411. https://doi.org/10.1016/j.forsciint.2025.112411.

[14] S. Li, L. Xu, Y. Jing, H. Yin, X. Li, X. Guan, High-quality vegetation index product generation: A review of NDVI time series reconstruction techniques, International Journal of Applied Earth Observation and Geoinformation 105 (2021) 102640. https://doi.org/10.1016/j.jag.2021.102640.

[15] Z. Davis, L. Nesbitt, M. Guhn, M. van den Bosch, Assessing changes in urban vegetation using Normalised Difference Vegetation Index (NDVI) for epidemiological studies, Urban Forestry & Urban Greening 88 (2023) 128080. https://doi.org/10.1016/j.ufug.2023.128080.

[16] J. Amendt, S. Rodner, C.-P. Schuch, H. Sprenger, L. Weidlich, F. Reckel, Helicopter thermal imaging for detecting insect infested cadavers, Science & Justice 57 (2017) 366–372. https://doi.org/10.1016/j.scijus.2017.04.008.

[17] J. Link, D. Senner, W. Claupein, Developing and evaluating an aerial sensor platform (ASP) to collect multispectral data for deriving management decisions in precision farming, Computers and Electronics in Agriculture 94 (2013) 20–28. https://doi.org/10.1016/j.compag.2013.03.003.

[18] R.M. Turner, M.M. MacLaughlin, S.R. Iverson, Identifying and mapping potentially adverse discontinuities in underground excavations using thermal and multispectral UAV imagery, Engineering Geology 266 (2020). https://doi.org/10.1016/j.enggeo.2019.105470.

AI-based facial reconstruction: a breakthrough in disaster victim identification

When conventional identification methods reach their limits…

In forensic medicine, identification traditionally relies on three so-called “primary” methods: genetic analysis (DNA), fingerprint comparison, and forensic odontology. Their reliability is well established, yet their effectiveness depends on the condition of the remains and the availability of comparative data. In large-scale disasters (earthquakes, plane crashes, terrorist attacks), bodies may be burned, mutilated, or decomposed, rendering DNA analysis uninterpretable and fingerprints unreadable. In other cases, the challenge lies in the absence of ante-mortem data: no dental records, no biometric registration, and sometimes no official administrative identification at all. These situations often leave forensic experts at a standstill. It is precisely in such contexts that innovative technologies, such as artificial intelligence-based facial reconstruction, open up new perspectives.

An innovation from Panjab University

In collaboration with Ankita Guleria and Vishal Sharma, Professor Kewal Krishan has developed a pioneering method of AI-assisted facial reconstruction. Their model focuses on three skeletal structures known for their resistance to post-mortem degradation: the mandible, the maxilla, and the dentition. These anatomical elements form a true morphological signature, as they directly influence chin width, cheekbone prominence, overall facial shape, and lip position.

By combining these anatomical data with an extensive database of anthropometric measurements collected from populations in northern India, the researchers successfully trained an algorithm capable of generating a digital face closely resembling the individual’s real appearance. The results are striking: an estimated accuracy rate of 95%, an exceptional figure for an indirect method of post-mortem identification. This innovation quickly drew attention—it has been officially registered and protected by the Indian Copyright Office, underscoring both its scientific value and its technological originality.

Remarkable accuracy, yet unavoidable limitations 

The reported 95% figure should not be interpreted as the artificial intelligence’s ability to produce a perfectly photographic portrait. Rather, it indicates that in the vast majority of cases, the features generated by the algorithm closely match those of the real individual. In practical terms, the model faithfully reproduces the general facial proportions, maintains consistency with key morphological characteristics, and achieves a sufficient degree of resemblance to effectively guide investigations toward a targeted identification.

However, it is important to emphasize that this technology retains a margin of uncertainty. Soft tissues—such as lip thickness, the precise shape of the nose, skin texture, and distinctive features like wrinkles or scars—cannot be inferred solely from bone structure. An additional methodological limitation lies in the fact that the algorithm was trained on a specific population from northern India; therefore, its accuracy may decrease when applied to other ethnic or geographic groups.

These factors demonstrate that AI-based facial reconstruction should be regarded primarily as a complementary tool—one that can orient and support the work of forensic experts, but without claiming to replace the primary methods of identification in forensic medicine.

The use of artificial intelligence in victim identification raises ethical, legal, and regulatory concerns that cannot be overlooked. From an ethical standpoint, the handling of post-mortem biometric data requires particular vigilance. Reconstructing a face from human remains must never come at the expense of the dignity of the deceased or the sensitivity of their families—especially since such reconstructions, even when scientifically sound, can be perceived as intrusive if shared without proper safeguards.

From a legal standpoint, another question arises: what evidential value could an AI-generated facial reconstruction have before a court? Until judicial procedures clearly define the role of this tool, its use will remain limited to an orientational function rather than serving as formal evidence. The issue of liability in the event of a misidentification also remains unresolved.

Europe imposes a strict regulatory environment. Such applications must comply with the General Data Protection Regulation (GDPR) and fall under the scope of the forthcoming European Artificial Intelligence Act, which specifically governs “high-risk” uses. In other words, the implementation of this technology in forensic contexts will depend not only on its scientific reliability but also on its ability to fit within a clear and protective legal framework.

Perspectives for victim identification

Despite these constraints, the prospects offered by AI-assisted facial reconstruction remain highly promising. In the context of mass disasters, this technology could complement DNA or odontological analyses, helping to accelerate identification processes and reduce the waiting time for families. It could also prove valuable in complex criminal investigations where a body is too damaged for primary identifiers to be usable. Moreover, it opens new avenues in archaeology and anthropology, where it could help restore the appearance of ancient individuals for whom no genetic material is available.

This advance reflects the growing convergence between artificial intelligence and forensic sciences. While it does not aim to replace traditional identification methods, it enriches the forensic toolkit by providing experts with an additional opportunity to restore an identity to victims who had long remained unknown.


References:

  • Guleria A., Krishan K., Sharma V. Methods of forensic facial reconstruction and human identification: historical background, significance and limitations. The Science of Nature, 110 (2023).
  • Guleria A. et al. Assessment of facial and nasal phenotypes: implications in forensic facial reconstruction. Archives of Biological Sciences, March 2025.
  • Panjab University develops AI-based facial reconstruction models with up to 95% accuracy using jaws and teeth dimensions. Indian Express, July 2025.
  • Panjab University secures copyright for AI tech that reconstructs faces from jaws. Hindustan Times, July 27, 2025.

Heartbeat Detection as an Anti-Deepfake Tool

Deepfake videos generated by artificial intelligence are becoming increasingly realistic, threatening the integrity of digital evidence. To address this challenge, Dutch researchers have developed an innovative method to detect deepfakes using a previously overlooked biological marker: the heartbeat. Still under scientific validation, this approach could become a valuable tool in digital forensic investigations.

A biological signal impossible to fake?

At the core of this innovation is a team from the Netherlands Forensic Institute (NFI), working with the University of Amsterdam. Their method relies on remote photoplethysmography (rPPG), a technique that detects subtle color variations in facial skin—on the forehead, around the eyes, or along the jawline—caused by blood flow at each heartbeat. Current deepfake algorithms are unable to simulate these micro-variations consistently, opening a promising path for detecting manipulated content.

An idea revived by technological progress

The concept dates back to 2012, when Professor Zeno Geradts explored video footage in criminal cases to assess whether the filmed individuals were alive. At the time, an MIT study had demonstrated that heart rate could be extracted from facial videos, but video compression destroyed the signal. Today, modern compression technologies preserve these micro-visual variations far better. The NFI team identified 79 facial points of interest to measure the signal and compared the results to biometric data from clinical sensors and smartwatches. Findings are encouraging, though some limitations remain, particularly with darker skin tones.

Figure 1. Principle of rPPG.
The absorption and reflection of light by the skin vary depending on hemodynamic activity under light sources (sunlight, lamps, etc.). These variations are recorded by imaging devices (cameras, webcams, smartphone lenses, etc.) as videos or images. Through algorithmic analysis, rPPG curves representing physiological information can be extracted from these videos.
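The last step of the figure, extracting a pulse rate from the rPPG curve, can be sketched as follows. This is the generic textbook approach (mean skin-colour intensity per frame, then a Fourier peak in the physiological band), not the NFI's 79-point method; the band limits are conventional values, not theirs.

```python
import numpy as np

def heart_rate_bpm(green_means, fps):
    """Estimate pulse rate from per-frame mean skin-colour values.

    `green_means` is the average green-channel intensity of a facial
    region in each video frame; the dominant frequency in the
    physiological band (0.7-3 Hz, i.e. 42-180 bpm) is taken as the pulse.
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                          # remove the constant (DC) component
    spectrum = np.abs(np.fft.rfft(x))         # magnitude spectrum of the signal
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)    # keep plausible heart rates only
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak
```

Restricting the search to the physiological band is what makes the estimate robust to lighting drift, which concentrates its energy at lower frequencies.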

A complementary tool for digital forensics

Heartbeat detection does not replace existing authentication methods but adds a valuable new dimension to forensic video examination. Other approaches remain crucial in the authentication process, such as analyzing electrical network frequency (ENF) traces embedded in images, identifying the recording sensor through its digital fingerprint (PRNU), or carrying out visual/automated checks of blinking patterns, abnormal movements, or generation artifacts (like a hand with six fingers). By combining these methods, experts can strengthen the reliability of their conclusions and stay ahead of forgers’ evolving tactics.


A technological cat-and-mouse game

As new detection methods emerge, deepfake creators will inevitably attempt to circumvent them. In the near future, algorithms may try to artificially embed biological signals such as heartbeats into fake videos. This makes ongoing technological monitoring essential to stay one step ahead. As Geradts emphasizes, robustness lies in combining traditional forensic techniques with AI-based approaches, rather than depending on one unique method.

Towards judicial integration?

This approach is not yet deployed in real-world investigations—it is still undergoing scientific validation, with an academic publication expected in the coming months. However, researchers hope that in specific cases, particularly with high-quality videos, this method could soon be implemented. It opens a promising new avenue in the fight against digital evidence manipulation, leveraging a hard-to-fake truth: human physiology.

References:

  • Geradts, Z., Pronk, P., & de Wit, S. (2025, May). Heartbeat detection as a forensic tool against deepfakes. Presentation at the European Academy of Forensic Science Conference (EAFS), Dublin.
  • Computer Weekly. (2025, July 24). Dutch researchers use heartbeat detection to unmask deepfakes.
  • ForensicMag. (2025, May 30). Scientist Develops Method to Use Heartbeat to Reveal Deepfakes.
  • Amsterdam AI. (2025, May 27). Hartslaganalyse helpt deepfakes te ontmaskeren.
  • DutchNews.nl. (2025, May 25). Dutch forensic experts develop deepfake video detector using heartbeat signals.
  • Poh, M.-Z., McDuff, D., & Picard, R. W. (2010). Advancements in non-contact, automated cardiac pulse measurements using video imaging. MIT Media Lab.

AI in Forensics: Between Technological Revolution and Human Challenges

By Yann CHOVORY, Engineer in AI Applied to Criminalistics (Institut Génétique Nantes Atlantique – IGNA).

On a crime scene, every minute counts. Between identifying a fleeing suspect, preventing further wrongdoing, and managing the time constraints of an investigation, case handlers are engaged in a genuine race against the clock. Fingerprints, gunshot residues, biological traces, video surveillance, digital data… all these clues must be collected and analyzed quickly, or the case risks collapsing for lack of usable evidence delivered in time. Yet, overwhelmed by the ever-growing mass of data, forensic laboratories are struggling to keep pace.

Analyzing evidence with speed and accuracy

In this context, artificial intelligence (AI) establishes itself as an indispensable accelerator. Capable of processing in a few hours what would take weeks to analyze manually, it optimises the use of clues by speeding up their sorting and detecting links imperceptible to the human eye. More than just a time-saver, it also improves the relevance of investigations: swiftly cross-referencing databases, spotting hidden patterns in phone call records, comparing DNA fragments with unmatched precision. AI thus acts as a tireless virtual analyst, reducing the risk of human error and offering new opportunities to forensic experts.

But this technological revolution does not come without friction. Between institutional scepticism and operational resistance, its integration into investigative practices remains a challenge. My professional journey, marked by a persistent quest to integrate AI into scientific policing, illustrates this transformation—and the obstacles it faces. From a marginalised bioinformatician to project lead for AI at IGNA, I have observed from within how this discipline, long grounded in traditional methods, is adapting—sometimes under pressure—to the era of big data.


Concrete examples: AI from the crime scene to the laboratory

AI is already making inroads in several areas of criminalistics, with promising results. For example, AFIS (Automated Fingerprint Identification System) fingerprint recognition systems now incorporate machine learning components to improve the matching of latent fingerprints. The risk of human error is reduced and the reliability of identifications increased [1]. Likewise, in ballistics, computer vision algorithms now automatically compare the striations on a projectile with the markings of known firearms, speeding up the work of the firearms expert. Tools are also emerging to interpret bloodstains at a scene: machine learning models can help reconstruct the trajectory of blood droplets and thus the dynamics of an assault or violent event [2]. These examples illustrate how AI is integrating into the forensic expert’s toolkit, from crime scene image analysis to the recognition of complex patterns.

But it is perhaps in forensic genetics that AI currently raises the greatest hopes. DNA analysis labs process thousands of genetic profiles and samples, with deadlines that can be critical. AI offers considerable time savings and enhanced accuracy. As part of my research, I contributed to developing an in-house AI capable of interpreting 86 genetic profiles in just three minutes [3], a major advance when analyzing a complex profile may take hours. Since 2024, it has autonomously handled simple profiles, while complex genetic profiles are automatically routed to a human expert, ensuring effective collaboration between automation and expertise. The results observed are very encouraging: not only is the turnaround time for DNA results drastically reduced, but the error rate also falls thanks to the standardization introduced by the algorithm.
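The simple-versus-complex routing described above can be sketched as a triage rule. The fields and thresholds here are entirely hypothetical, for illustration only; they are not IGNA's actual criteria.

```python
# Hypothetical triage rule mirroring the workflow described in the text:
# straightforward single-contributor profiles are processed automatically,
# anything ambiguous is routed to a human expert for interpretation.
def route_profile(profile):
    """Return which pipeline a DNA profile should follow.

    `profile` is a dict with illustrative fields: number of contributors,
    a 0-1 quality score, and a degradation flag.
    """
    simple = (
        profile["contributors"] == 1
        and profile["quality"] >= 0.9      # illustrative threshold
        and not profile["degraded"]
    )
    return "automated" if simple else "human_expert"
```

The design point is that the rule errs toward the human expert: only profiles meeting every "simple" criterion skip manual review.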

AI does not replace humans but complements them

Another promising advance lies in enhancing genetic DNA-based facial composites. Currently, this technique allows estimating certain physical features of an individual (such as eye color, hair color, or skin pigmentation) from their genetic code, but it remains limited by the complexity of genetic interactions and uncertainties in predictions. AI could revolutionise this approach by using deep learning models trained on vast genetic and phenotypic databases, thereby refining these predictions and generating more accurate sketches. Unlike classical methods, which rely on statistical probabilities, an AI model could analyse millions of genetic variants in a few seconds and identify subtle correlations that traditional approaches do not detect. This prospect opens the way to a significant improvement in the relevance of DNA sketches, facilitating suspect identification when no other usable clues are available. The Forenseek platform has explored current advances in this area, but AI has not yet been fully exploited to surpass existing methods [5]. Its integration could therefore constitute a major breakthrough in criminal investigations.

It is important to emphasize that in all these examples, AI does not replace humans but complements them. At the IRCGN (French National Gendarmerie Criminal Research Institute) cited above, while the majority of routine, good-quality DNA profiles can be handled automatically, regular human quality control remains: every week, a technician randomly checks cases processed by the AI to ensure no drift has occurred [3]. This human-machine collaboration is key to successful deployment, as the expertise of forensic specialists remains indispensable to validate and finely interpret the results, especially in complex cases.


Algorithms Trained on Data: How AI “Learns” in Forensics

The impressive performance of AI in forensics relies on one crucial resource: data. For a machine learning algorithm to identify a fingerprint or interpret a DNA profile, it first needs to be trained on numerous examples. In practical terms, we provide it with representative datasets, each containing inputs (images, signals, genetic profiles, etc.) associated with an expected outcome (the identity of the correct suspect, the exact composition of the DNA profile, etc.). By analyzing thousands—or even millions—of these examples, the machine adjusts its internal parameters to best replicate the decisions made by human experts. This is known as supervised learning, since the AI learns from cases where the correct outcome is already known. For example, to train a model to recognize DNA profiles, we use data from solved cases where the expected result is clearly established.
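The supervised-learning idea above can be shown with a deliberately tiny model. The sketch below uses a 1-nearest-neighbour classifier standing in for the far richer models used in practice; all feature values and labels are invented for illustration.

```python
# Minimal supervised learning illustration: the "model" is a
# 1-nearest-neighbour classifier built from labelled examples
# (feature vector -> known outcome). All data are invented.
import math

# Training set: pairs where the correct outcome is already known,
# e.g. summary measurements of a trace and the validated conclusion.
training = [
    ([0.9, 0.1], "match"),
    ([0.8, 0.2], "match"),
    ([0.2, 0.9], "no_match"),
    ([0.1, 0.8], "no_match"),
]

def predict(features):
    """Label a new case with the label of its closest training example."""
    _, label = min(training, key=lambda ex: math.dist(ex[0], features))
    return label

print(predict([0.85, 0.15]))  # lies close to the "match" examples
```

Even this toy version shows the core mechanism: the algorithm never receives a rule, only examples with known outcomes, and generalizes from them.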

an AI’s performance depends on the quality of the data that trains it.

The larger and more diverse the training dataset, the better the AI will be at detecting reliable and robust patterns. However, not all data is equal. It must be of high quality (e.g., properly labeled images, DNA profiles free from input errors) and cover a wide enough range of situations. If the system is biased by being exposed to only a narrow range of cases, it may fail when confronted with a slightly different scenario. In genetics, for instance, this means including profiles from various ethnic backgrounds, varying degrees of degradation, and complex mixture configurations so the algorithm can learn to handle all potential sources of variation.

Transparency in data composition is essential. Studies have shown that some forensic databases are demographically unbalanced—for example, the U.S. CODIS database contains an overrepresentation of profiles from African-American individuals compared to other groups [6]. A model naively trained on such data could inherit systemic biases and produce less reliable or less fair results for underrepresented populations. It is therefore crucial to monitor training data for bias and, if necessary, to correct it (e.g., through balanced sampling, augmentation of minority data) in order to achieve fair and equitable learning.
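One of the corrections mentioned above, balanced sampling, can be sketched in a few lines. The group names and counts below are invented; this is a simple random-oversampling illustration, not a complete bias-mitigation strategy.

```python
# Sketch of one simple bias-mitigation step: random oversampling of
# under-represented groups so each group contributes equally to training.
# Group names and sizes are invented for illustration.
import random
from collections import Counter

random.seed(0)

# An unbalanced dataset: 90 examples from one group, 10 from another.
samples = [("group_a", i) for i in range(90)] + [("group_b", i) for i in range(10)]

def oversample(data):
    """Duplicate examples from minority groups until all groups are equal in size."""
    by_group = {}
    for group, item in data:
        by_group.setdefault(group, []).append((group, item))
    target = max(len(v) for v in by_group.values())
    balanced = []
    for group, items in by_group.items():
        balanced.extend(items)
        balanced.extend(random.choices(items, k=target - len(items)))
    return balanced

balanced = oversample(samples)
print(Counter(g for g, _ in balanced))  # each group now contributes 90 examples
```

Oversampling is only one option; reweighting the loss function or collecting additional minority-group data are alternatives, each with its own trade-offs.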

  • Data Collection: Gathering diverse and representative datasets
  • Data Preprocessing: Cleaning and preparing data for training
  • AI Training: Training algorithms on prepared datasets
  • Data Validation: Verifying the quality and diversity of the data
  • Bias Evaluation: Identifying and correcting biases in the datasets

Technically, training an AI involves rigorous steps of cross-validation and performance measurement. We generally split data into three sets: one for training, another for validation during development (to adjust the parameters), and a final test set to objectively evaluate the model. Quantitative metrics such as accuracy, recall (sensitivity), or error curves make it possible to quantify how reliable the algorithm is on data it has never seen [6]. For example, one can check that the AI correctly identifies a large majority of perpetrators from traces while maintaining a low rate of false positives. Increasingly, we also integrate fairness and ethical criteria into these evaluations: performance is examined across demographic groups or testing conditions (gender, age, etc.), to ensure that no unacceptable bias remains [6]. Finally, compliance with legal constraints (such as the GDPR in Europe, which regulates the use of personal data) must be built in from the design phase of the system [6]. That may involve anonymizing data, limiting certain sensitive information, or providing procedures in case an ethical bias is detected.
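The three-way split and the metrics just described can be sketched as follows. The 70/15/15 proportions and the toy labels are assumptions for illustration; real evaluations would use validated case outcomes.

```python
# Sketch of a train/validation/test split and precision/recall, as
# described above. Proportions (70/15/15) and labels are invented.
import random

random.seed(42)
cases = list(range(100))
random.shuffle(cases)

# Assumed proportions: 70% training, 15% validation, 15% held-out test.
train, val, test = cases[:70], cases[70:85], cases[85:]

def precision_recall(predicted, actual):
    """Precision: share of flagged cases that are true positives.
    Recall: share of true cases that were flagged."""
    tp = len(predicted & actual)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Toy evaluation on the held-out test set: the "model" flags every true
# positive plus a few false positives, so recall is perfect but precision is not.
actual_positives = {c for c in test if c % 2 == 0}
predicted_positives = {c for c in test if c % 2 == 0 or c % 7 == 0}
p, r = precision_recall(predicted_positives, actual_positives)
```

Keeping the test set untouched until the very end is what makes the final figures an honest estimate of performance on data the model has never seen.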

Ultimately, an AI’s performance depends on the quality of the data that trains it. In the forensic field, that means algorithms “learn” from accumulated human expertise. Every algorithmic decision implies the experience of hundreds of experts who provided examples or tuned parameters. It is both a strength – capitalizing on a vast knowledge base – and a responsibility: to carefully select, prepare, and control the data that will feed the artificial intelligence.

Technical and operational challenges for integrating AI into forensic science

While AI promises substantial gains, its concrete integration in the forensic field faces many challenges. It is not enough to train a model in a laboratory: one must also be able to use it within the constrained framework of a judicial investigation, with all the reliability requirements that entails. Among the main technical and organisational challenges are:

  • Access to data and infrastructure: Paradoxically, although AI requires large datasets to learn, it can be difficult to gather sufficient data in the specific forensic domain. DNA profiles, for example, are highly sensitive personal data, protected by law and stored in secure, sequestered databases. Obtaining datasets large enough to train an algorithm may require complex cooperation between agencies or the generation of synthetic data to fill gaps. Additionally, computing tools must be capable of processing large volumes of data in reasonable time — which requires investment in hardware (servers, GPUs2 for deep learning3) and specialized software. Some national initiatives are beginning to emerge to pool forensic data securely, but these remain works in progress.
  • Quality of annotations and bias: The effectiveness of AI learning depends on the quality of the annotations in training datasets. In many forensic areas, establishing “ground truth” is not trivial. For example, to train an algorithm to recognize a face in surveillance video, each face must first be correctly identified by a human — which can be difficult if the image is blurry or partial. Similarly, labeling datasets of footprints, fibers, or fingerprints requires meticulous work by experts and sometimes involves subjectivity. If the training data include annotation errors or historical biases, the AI will reproduce them [6]. A common bias is the demographic representativeness noted above, but there are others: for instance, if a weapon-detection model is trained mainly on images of weapons indoors, it may perform poorly at detecting a weapon outdoors, in the rain, and so on. The quality and diversity of annotated data are therefore a major technical issue. This means establishing rigorous data collection and annotation protocols (ideally standardized at the international level), as well as ongoing monitoring to detect model drift (overfitting to certain cases, performance degradation over time, etc.). Validating such systems relies on experimental studies comparing AI performance to that of human experts. However, the complexity of homologation and procurement procedures often slows adoption, delaying the deployment of new tools in forensic science by several years.
  • Understanding and Acceptance by Judicial Actors: Introducing artificial intelligence into the judicial process inevitably raises the question of trust. An investigator or a laboratory technician trained in conventional methods must learn to use and interpret the results provided by AI. This requires training and a gradual cultural shift so that the tool becomes an ally and not an “incomprehensible black box.” More broadly, the judges, attorneys, and jurors who will have to discuss this evidence must also grasp its principles. Yet explaining the inner workings of a neural network or the statistical meaning of a similarity score is far from simple. We sometimes observe misunderstanding or suspicion toward these algorithmic methods from certain judicial actors [6]. If a judge does not understand how a conclusion was reached, they may be inclined to reject it or assign it less weight, out of caution. Similarly, a defence lawyer will legitimately scrutinize the weaknesses of a tool they do not know, which may lead to judicial debates over the validity of the AI. A major challenge is thus to make AI explainable (the “XAI” concept — eXplainable Artificial Intelligence), or at least to present its results in a format that is comprehensible and pedagogically acceptable to a court. Without this, integrating AI risks facing resistance or sparking controversy in trials, limiting its practical contribution.
  • Regulatory Framework and Data Protection: Finally, forensic sciences operate within a strict legal framework, notably regarding personal data (DNA profiles, biometric data, etc.) and criminal procedure. The use of AI must comply with these regulations. In France, the CNIL (Commission Nationale de l’Informatique et des Libertés) oversees compliance and can impose restrictions if algorithmic processing harms privacy. For example, training an AI on nominal DNA profiles without a legal basis would be inconceivable. Innovation must therefore remain within legal boundaries, imposing constraints from the design phase of projects. Another issue concerns trade secrecy surrounding certain algorithms in judicial contexts: if a vendor refuses to disclose the internal workings of its software for intellectual property reasons, how can the defence or the judge ensure its reliability? Recent cases have shown defendants convicted on the basis of proprietary software (e.g., DNA analysis) without the defence being able to examine the source code used [7]. These situations raise issues of transparency and the rights of the defence. In the United States, a proposed law titled the Justice in Forensic Algorithms Act aims precisely to ensure that trade secrecy cannot prevent experts from examining the algorithms used in forensics, in order to guarantee fairness in trials. This underlines the necessity of adapting regulatory frameworks to these new technologies.

Lack of cooperation slows the development of effective tools and limits their adoption in the field.

  • Another more structural obstacle lies in the difficulty of integrating hybrid profiles within forensic institutions, at least in France. Today, competitive examinations and recruitment often remain compartmentalised between different specialties, limiting the emergence of experts with dual expertise. For instance, in forensic police services, entrance exams for technicians or engineers are divided into distinct specialties such as biology or computer science, without pathways to recognize combined expertise in both fields. This institutional rigidity slows the integration of professionals capable of bridging between domains and fully exploiting the potential of AI in criminalistics. Yet current technological advances show that the analysis of biological traces increasingly relies on advanced digital tools. Faced with this evolution, greater flexibility in recruitment and training of forensic experts will be necessary to meet tomorrow’s challenges.

AI in forensics must not become a matter of competition or prestige among laboratories, but a tool put at the service of justice and truth, for the benefit of investigators and victims.

  • A further major barrier to innovation in forensic science is the compartmentalization of efforts among different stakeholders, who often work in parallel on identical problems without pooling their advances. This lack of cooperation slows the development of effective tools and limits their adoption in the field. However, by sharing our resources—whether databases, methodologies, or algorithms—we could accelerate the production deployment of AI solutions and guarantee continuous improvement based on collective expertise. My experience across different French laboratories (the Lyon Scientific Police Laboratory (Service National de Police Scientifique – SNPS), the Institut de Recherche Criminelle de la Gendarmerie Nationale (IRCGN), and now the Nantes Atlantique Genetic Institute (IGNA)) allows me to perceive how much this fragmentation hampers progress, even though we pursue a common goal: improving the resolution of investigations. This is why it is essential to promote open-source development when possible and to create platforms of collaboration among public and judicial entities. AI in forensics must not be a matter of competition or prestige among laboratories, but a tool in the service of justice and truth, for the benefit of investigators and victims alike.

The challenges discussed above all have technical dimensions, but they are closely intertwined with fundamental ethical and legal questions. From an ethical standpoint, the absolute priority is to avoid injustice through the use of AI. We must prevent, at all costs, a poorly designed algorithm from leading to someone’s wrongful indictment or, conversely, to the release of a guilty party. This involves mastering biases (to avoid discrimination against certain groups), transparency (so that every party in a trial can understand and challenge algorithmic evidence), and accountability for decisions. Indeed, who is responsible if an AI makes an error? The expert who misused it, the software developer, or no one because “the machine made a mistake”? This ambiguity is unacceptable in justice: it is essential to always keep human expertise in the loop, so that a final decision—whether to accuse or exonerate—is based on human evaluation informed by AI, and not on the opaque verdict of an automated system.

On the legal side, the landscape is evolving to regulate the use of AI. The European Union, in particular, is finalizing an AI Regulation (AI Act) which will be the world’s first legislation establishing a framework for the development, commercialization, and use of artificial intelligence systems [8]. Its goal is to minimize risks to safety and fundamental rights by imposing obligations depending on the level of risk of the application (and forensic or criminal justice applications will undoubtedly be categorized among the most sensitive). In France, the CNIL has published recommendations emphasizing that innovation can be reconciled with respect for individual rights during the development of AI solutions [9]. This involves, for example, compliance with the GDPR, limitation of purposes (i.e. training a model only for legitimate and clearly defined objectives), proportionality in data collection, and prior impact assessments for any system likely to significantly affect individuals. These safeguards aim to ensure that enthusiasm for AI does not come at the expense of the fundamental principles of justice and privacy.

Encouraging Innovation While Demanding Scientific Validation and Transparency

A delicate balance must therefore be struck between technological innovation and regulatory framework. On one hand, overly restricting experimentation and adoption of AI in forensics could deprive investigators of tools potentially decisive for solving complex cases. On the other, leaving the field unregulated and unchecked would risk judicial errors or violations of rights. The solution likely lies in a measured approach: encouraging innovation while demanding solid scientific validation and transparency in methods. Ethics committees and independent experts can be involved to audit algorithms, verify that they comply with norms, and that they do not replicate problematic biases. Furthermore, legal professionals must be informed and trained on these new technologies so they can meaningfully debate their probative value in court. A judge trained in the basic concepts of AI will be better placed to understand the evidentiary weight (and limitations) of evidence derived from an algorithm.

Conclusion: The Future of Forensics in the AI Era

Artificial intelligence is set to deeply transform forensics, offering investigators analysis tools that are faster, more accurate, and capable of handling volumes of data once considered inaccessible. Whether it is sifting through gigabytes of digital information, comparing latent traces with improved reliability, or untangling complex DNA profiles in a matter of minutes, AI opens new horizons for solving investigations more efficiently.

But this technological leap comes with crucial challenges. Learning techniques, quality of databases, algorithmic bias, transparency of decisions, regulatory framework: these are all stakes that will determine whether AI can truly strengthen justice without undermining it. At a time when public trust in digital tools is more than ever under scrutiny, it is imperative to integrate these innovations with rigor and responsibility.

The future of AI in forensics will not be a confrontation between machine and human, but a collaboration in which human expertise remains central. Technology may help us see faster and farther, but interpretation, judgment, and decision-making will remain in the hands of forensic experts and the judicial authorities. Thus, the real question may not be how far AI can go in forensic science, but how we will frame it to ensure ethical and equitable justice. Will we be able to harness its power while preserving the very foundations of a fair trial and the right to a defence?

The revolution is underway. It is now up to us to make it progress, not drift.

Bibliography

[1] : Océane DUBOUST. L’IA peut-elle aider la police scientifique à trouver des similitudes dans les empreintes digitales ? Euronews, 12/01/2024. Available at: https://fr.euronews.com/next/2024/01/12/lia-peut-elle-aider-la-police-scientifique-a-trouver-des-similitudes-dans-les-empreintes-d [accessed 15/03/2025]
[2] : Muhammad Arjamand et al. The Role of Artificial Intelligence in Forensic Science: Transforming Investigations through Technology. International Journal of Multidisciplinary Research and Publications, Volume 7, Issue 5, pp. 67-70, 2024. Available at: http://ijmrap.com/ [accessed 15/03/2025]
[3] : Gendarmerie Nationale. Kit universel, puce RFID, IA : le PJGN à la pointe de la technologie sur l’ADN. Updated 22/01/2025. Available at: https://www.gendarmerie.interieur.gouv.fr/pjgn/recherche-et-innovation/kit-universel-puce-rfid-ia-le-pjgn-a-la-pointe-de-la-technologie-sur-l-adn [accessed 15/03/2025]
[4] : Michelle TAYLOR. EXCLUSIVE: Brand New Deterministic Software Can Deconvolute a DNA Mixture in Seconds. Forensic Magazine, 29/03/2022. Available at: https://www.forensicmag.com [accessed 15/03/2025]
[5] : Sébastien AGUILAR. L’ADN à l’origine des portraits-robot ! Forenseek, 05/01/2023. Available at: https://www.forenseek.fr/adn-a-l-origine-des-portraits-robot/ [accessed 15/03/2025]
[6] : Max M. Houck, Ph.D. CSI/AI: The Potential for Artificial Intelligence in Forensic Science. iShine News, 29/10/2024. Available at: https://www.ishinews.com/csi-ai-the-potential-for-artificial-intelligence-in-forensic-science/ [accessed 15/03/2025]
[7] : Mark Takano. Black box algorithms’ use in criminal justice system tackled by bill reintroduced by Reps. Takano and Evans. Takano House, 15/02/2024. Available at: https://takano.house.gov/newsroom/press-releases/black-box-algorithms-use-in-criminal-justice-system-tackled-by-bill-reintroduced-by-reps-takano-and-evans [accessed 15/03/2025]
[8] : Mon Expert RGPD. Artificial Intelligence Act : La CNIL répond aux premières questions. Available at: https://monexpertrgpd.com [accessed 15/03/2025]
[9] : CNIL. Les fiches pratiques IA. Available at: https://www.cnil.fr [accessed 15/03/2025]

Definitions:

  1. Machine Learning
    Machine learning is a branch of artificial intelligence that enables computers to learn from data without being explicitly programmed. It relies on algorithms capable of detecting patterns, making predictions, and improving performance through experience.
  2. GPU (Graphics Processing Unit)
    A GPU is a specialized processor designed to perform massively parallel computations. Originally developed for rendering graphics, it is now widely used in artificial intelligence applications, particularly for training deep learning models. Unlike CPUs (central processing units), which are optimized for sequential, general-purpose tasks, GPUs contain thousands of cores optimized to execute numerous operations simultaneously on large datasets.
  3. Deep Learning

Artificial Intelligence (AI): A lever in the fight against crime

By Benoit Fayet, Defense & Security Consultant at Sopra Steria Next, member of the Strategic Committee of the CRSI, and Bruno Maillot, Data and Artificial Intelligence Expert at Sopra Steria Next, for the Center for Reflection on Internal Security.

Context

Most French citizens experience AI in their daily lives—through transportation, e-commerce, energy, healthcare, smart homes, agriculture, and more—often without even realizing it. However, AI remains less prevalent in the field of security and in the work carried out by France’s Internal Security Forces (FSI, police and gendarmerie). This is despite the fact that, for years, IT systems and new technologies have already transformed these professions, while the Armed Forces and local authorities have embraced them far more extensively, sometimes for closely related challenges. Today, police officers and gendarmes rely heavily on digital tools, particularly for:

  • Their daily activities, using information systems and applications to take complaints, draft reports, consult information on individuals, or through the development and employment of biometric technologies—widely used for identification and authentication, such as fingerprinting.
  • Field communications, through dedicated communication networks and mobile devices that assist them during patrols or interventions.
  • Monitoring delinquency, especially at the local level or in crisis management situations (video surveillance, command centers, etc.).
  • Victim support, with the recent development of online platforms and applications offering the same services as in physical units (filing complaints, reporting incidents, etc.).

Artificial Intelligence represents a decisive lever to reinforce each of these existing digital uses by police officers and gendarmes. The digital tools they already possess, the wealth of data they process daily, and their operational needs make this possible, offering the Ministry of the Interior a new digital revolution.

Indeed, AI is not just another tool; it is a disruptive innovation capable of profoundly transforming the professions and practices of police and gendarmerie personnel, particularly in areas under strain or in crisis, such as criminal investigations. AI could also alleviate many of the daily frustrations that French citizens face regarding security. For example, by reducing the time officers spend on technical or administrative tasks in their units, AI could free them up to spend more time in public spaces; by enhancing investigative capabilities, it could improve clearance rates for certain offenses. AI’s analytical capabilities in processing complex datasets could also strengthen the fight against organized crime and drug trafficking.

Deploying AI systems, however, requires several prerequisites. First and foremost, mastering the national and European legal frameworks governing AI is essential. In addition, clear political guidelines for AI use must be established to ensure acceptance both by police officers and gendarmes themselves and by the public, so that AI is recognized as a tool—not an end in itself. Decision-making and oversight must always remain in human hands, to avoid slipping into the “civilization of machines,” as Georges Bernanos warned in France Against the Robots (1947).

AI thus represents a decisive lever to reinforce each of the current digital uses by police and gendarmes.

Finally, in a context of growing cyber threats and challenges to our sovereignty, it is essential to ensure the maturity and resilience of the technologies employed, while identifying the most secure tools. A key concern is the lack of technological sovereignty within the EU and France regarding AI solutions, which currently come mostly from outside Europe. It is therefore crucial to identify AI tools that do not expose Europe and France to loss of sovereignty or increased vulnerability to intelligence and influence operations.

The objectives of this article are therefore to analyze the opportunities enabled by the current legal framework for integrating AI into internal security, and to identify concrete, realistic operational uses in the near future that remain technologically controlled and secure.

Early uses of AI underway—will the Paris 2024 Olympics mark a turning point?

Projects already exist in France, whether in public space crime management, administrative activities, or investigative work. Recent innovations in AI have been deployed in connection with the Paris 2024 Olympic Games.

AI used to support decision-making in crime prevention

AI has already been applied because it aligns closely with the core mission of France’s Internal Security Forces (FSI): anticipating and preventing crime. AI has not been developed to predict crime, but rather to better understand and analyze it, and ultimately to assist in decision-making. Crime is not a random phenomenon; it can be analyzed by gathering statistical data on a given territory and feeding it into models that help the FSI operate more effectively in that area (for example, patrol locations and schedules). Analytical methods have been used by the Gendarmerie nationale on non-personal data from the Ministry of the Interior’s Statistical Service for Internal Security (SSMSI), which were then exploited through data visualization to map and monitor the evolution of delinquency within a territory. These are not predictive policing tools—they forecast nothing—but instead provide decision-support analysis based on past events. They offer orientation to FSI units, who cannot realistically cross-check such volumes of data without AI’s analytical capacity. The method, for example, consists of identifying where burglaries or vehicle-related offenses occurred within a defined period and territory in order to infer where the next ones are likely to occur. The aim is to target specific areas and plan police deployments in locations where offenses are at risk of happening, thereby deterring crime.

Other experiments with more predictive tools—extending beyond decision support to actual risk or occurrence prediction—have also been conducted but have not demonstrated significant operational added value.

AI developed to support data processing in criminal investigations

Early AI-based data processing tools have also been developed by the Gendarmerie nationale to assist in investigative phases. Tools can, for example, support FSI in monitoring communications during an investigation by detecting spoken languages in court-authorized telephone interceptions, transcribing and translating conversations, and flagging relevant topics for the case through recurrent neural networks.

Another project has enabled the transcription of videotaped victim interviews and the annotation of procedural documents (persons, places, dates, objects, etc.).

Finally, the Digital Agency for Internal Security Forces (ANFSI), responsible for developing their digital equipment, is experimenting with a tool for producing intervention reports generated by “voice command” on NEO mobile devices.

A decisive shift with the Paris 2024 Olympics?

During the Paris 2024 Olympics, “augmented video” was authorized in Île-de-France under the supervision of the Paris Police Prefecture. For the first time, the law of May 19, 2023 authorized the deployment of AI in video surveillance, within a strict framework explicitly excluding facial recognition. The experimentation focused solely on detecting predefined events, such as abandoned objects, the presence or use of weapons, vehicles failing to respect traffic directions or restricted zones, crowd movements, and fire outbreaks. Article 10 specifically authorized AI processing on certain video streams from fixed cameras to detect these situations, with the goal of securing events particularly exposed to risks of terrorism or threats to public safety. An evaluation committee for these algorithmic cameras is expected to deliver a report by the end of 2024. Several use cases of intelligent video surveillance have already been deemed highly effective, notably those enabling the detection of individuals in restricted zones (facilitating the adjustment of police presence), the detection of crowd density or movements linked to fights, and interventions in urban transport systems.

In summary, while projects exist, they remain limited in scope and far from generalized deployment. Any large-scale adoption must occur within a constrained and evolving legal framework.

In France, a strict framework shaped by the CNIL and political efforts to move forward

The CNIL (French Data Protection Authority) has issued several specific recommendations to ensure that AI system deployments respect individuals’ privacy, in line with the provisions of the 1978 « Informatique et Libertés » law and the 2016 European “Police-Justice” directive, which defines data protection rules for information systems used by Internal Security Forces (FSI). Public authorities responsible for AI systems must comply with transparency obligations, making evaluations of such systems public, and follow the principle of “double proportionality.” This principle ensures that AI use is justified both in terms of the operational framework (patrols, criminal investigations, or counter-terrorism threats) and the type of data involved (personal data, statistical data, etc.). For the CNIL, the general rules of data protection (storage duration, independent oversight, etc.) apply equally to AI systems.

At the same time, the Ministry of the Interior and the legislature have advanced along the path outlined by the CNIL—through the 2020 White Paper on Internal Security and the 2023 Loi d’Orientation et de Programmation du ministère de l’Intérieur (LOPMI). These frameworks identified and legally codified specific use cases that may justify AI use in the security sector. They also introduced safeguards for experimentation, particularly in preparation for the Paris 2024 Olympic Games: data anonymization, secure storage, and ensuring that decisions and control remain in the hands of human agents.


A strengthened European framework with the AI Act

Complementing the French framework, the European Commission drafted the AI Act, aimed at regulating the use of AI in Europe, which was politically agreed in December 2023, formally adopted by the European Parliament in 2024, and is scheduled to apply fully from August 2026. Its aim is to ensure that AI systems used in the EU are safe, transparent, and under human oversight. Generative AI systems capable of producing texts, code, or images are subject to particular scrutiny. The AI Act establishes a detailed legal framework for public sector use of AI, including security applications:

• Prohibited AI systems deemed dangerous: biometric identification in public spaces, facial recognition databases (including those built from open-source data), predictive policing systems, etc.

• High-risk AI systems: allowed under strict conditions, requiring documentation, human oversight, compliance procedures, and continuous evaluation (e.g., biometric categorization systems, migration management tools).

• Limited-risk AI systems: permitted but subject to transparency requirements (e.g., object detection systems). (By February 2025, prohibited AI systems must be withdrawn or brought into compliance; by August 2025, high-risk and limited-risk systems must be fully compliant.)

It should be noted that the AI Act provides exceptions, particularly for law enforcement operations. Remote facial recognition (via camera or drone) may be permitted, but only under prior judicial authorization and within a strictly defined list of crimes—such as the search for a convicted or suspected serious offender.

Prospects for the Use and Application of AI in Internal Security

Building on the reflections already undertaken and the regulatory framework now in place, it is time to look ahead at the concrete contributions AI could bring to the professions of the national police and gendarmerie. This involves leveraging existing technologies, recent developments—particularly in generative AI—and identifying the conditions required for such use: communication and information-sharing, data access, simplification of technical tasks, data analysis in investigative phases, and more.


It is important to emphasize that the use cases identified in this note are part of a forward-looking perspective. They take into account the regulatory framework described earlier and are grounded in the idea that AI should provide operational added value to the FSI, while safeguarding ethical principles regarding data protection. This approach must remain far removed from the practices of certain non-European countries, which would undermine the French democratic model. AI must support the Internal Security Forces (FSI), without becoming “the agent.” Tasks that may be entrusted to AI must always remain under human primacy in terms of oversight and validation. Delegation to AI should therefore accelerate action and decision-making, without creating dependence. The key lies in identifying appropriate use cases, particularly those involving tasks with little or no added value, so that FSI personnel retain their decision-making capacity and agency.

AI to Optimize Communication and Information-Sharing Among FSI

In today’s deteriorated security environment, communication and data-sharing are critical—whether during routine patrols, interventions requiring situational awareness, or more serious operations such as counter-narcotics or counter-terrorism missions.

Concrete use cases include the ability to centralize and process data from FSI mobile equipment or from video surveillance systems (video, audio, radio, conversations, and calls between units). These capabilities are currently unattainable but could become feasible with AI-powered tools, especially given the ever-increasing volumes of data being collected. Such tools would enhance operational performance by improving situational awareness and could be integrated into the ongoing transformation of FSI communication systems through the deployment of a national high-speed mobile network. AI could thus be a decisive enabler for faster information and intelligence-sharing, ensuring that actionable insights reach police and gendarmes in the field quickly enough to address emerging threats—for example, by detecting weak signals linked to operational drug intelligence units (CROSS) or through partnerships involving local authorities, municipal police, and associations. AI could process and qualify shared information almost in real time.

AI to Generate Knowledge and Support FSI Action in Real Time

As Internal Security Forces (FSI) increasingly produce data through their mobile devices, they operate in an environment where third-party data is also multiplying. To address this dual evolution, a data-valorization strategy leveraging AI could be developed, combining retrospective data analysis (already available in existing decision-support tools) with the enrichment of operational information in real time (e.g., patrol geolocation, AI-generated analytical notes), algorithmic developments, and the integration of external datasets (in compliance with the AI Act). This could include, for instance, analyzing real mobility flows across urban transport networks during an arrest mission or monitoring road traffic to detect accidents and disruptions in real time, thereby enabling faster and better-informed responses.

One of AI’s distinctive features is its ability to automatically flag incidents. When predefined conditions or scenarios are met—such as fights or crowd movements—an AI-based system can automatically generate detailed incident reports and dispatch alerts to FSI units for immediate assessment. This not only accelerates the documentation process but also ensures that minor infractions or disturbances (e.g., acts of vandalism or incivilities) that might otherwise go unnoticed are reported and addressed.
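The rule-based core of such automatic flagging can be sketched in a few lines. This is a minimal illustration, not a deployed system; the event types, confidence thresholds, and `Detection` structure are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical detection event, as might be emitted by a video-analytics pipeline.
@dataclass
class Detection:
    kind: str        # e.g. "crowd_movement", "fight", "abandoned_object"
    confidence: float
    camera_id: str

# Predefined scenarios and the minimum confidence required to raise an alert.
ALERT_RULES = {
    "fight": 0.80,
    "crowd_movement": 0.85,
    "abandoned_object": 0.90,
}

def flag_incidents(detections):
    """Return auto-generated incident reports for detections matching a rule."""
    reports = []
    for d in detections:
        threshold = ALERT_RULES.get(d.kind)
        if threshold is not None and d.confidence >= threshold:
            reports.append({
                "camera": d.camera_id,
                "event": d.kind,
                "confidence": round(d.confidence, 2),
                "action": "dispatch_for_assessment",
            })
    return reports

incidents = flag_incidents([
    Detection("fight", 0.91, "cam-07"),
    Detection("crowd_movement", 0.60, "cam-02"),  # below threshold: ignored
])
```

In practice the thresholds and scenarios would come from the legal framework and from evaluation of the underlying detectors, with every alert still routed to a human for assessment.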

Moreover, the growing volume of available data can provide real-time access to a wider range of information. Tactical awareness could thus be enhanced by combining operational data (patrol geolocation, including other “security producers” such as municipal police or private security, and the geolocation of individuals targeted in an investigation), contextual data (points of interest, population density, infrastructure status), and sensor data (body-worn cameras, etc.). AI could retrieve, structure, and deliver these diverse datasets in real time to FSI officers on the ground, enabling faster intervention times (e.g., automated data transmission).

The challenge in this case lies in clearly defining needs and use cases to ensure relevant, actionable data, and in developing appropriate methods of restitution—such as cartographic visualization or automated integration into the information systems and mobile devices used by FSI.


AI to Streamline, Accelerate, and Simplify the Administrative and Technical Tasks of FSI

Internal Security Forces (FSI) often lament that a growing share of their working time is consumed by repetitive, burdensome administrative and drafting tasks with little added value. The use of AI for such “back-office” technical tasks is already widespread in other industries, particularly with generative AI, which shifts from passive analysis to active content creation.

Applied to these technical tasks, AI-driven automation could help FSI save time in their daily activities, allowing them to refocus on their core mission: being visibly present in the field, patrolling the streets, reinforcing public trust, deterring crime, and preventing delinquency. One of the key lessons learned from the Paris 2024 Olympic Games is that the visible, large-scale presence of FSI in public spaces was not only effective but also welcomed by the population.

Support procedure drafting, collect information 

AI opens the door to numerous functionalities to facilitate—or even eliminate—time-consuming repetitive tasks that dominate FSI daily operations, including drafting official reports (procès-verbaux), arrest records, complaint filings, or investigation notes. In the drafting and transcription phases, AI could accelerate report writing, whether at the station or in the field, by generating automated text or providing suggested formulations (e.g., regulatory phrasing), extracting relevant information from documents, accelerating video review by filtering or selecting scenes via semantic queries, or masking specific segments of documents or video (e.g., identifying relevant portions within large volumes of video using transformers).

The use of AI in this context could rely on recurrent neural networks, which process data streams while retaining a “memory” of texts, word sequences, and sentence patterns, loosely inspired by biological neural networks but running at far greater computational scale. This can add real value to drafting and transcription tasks.
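As a toy illustration of that “memory,” here is a minimal Elman-style RNN step in Python: the hidden state carries information from earlier tokens into each new step. Dimensions and weights are arbitrary, and no training is shown; this is a sketch of the mechanism, not a usable model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 8-dim input (e.g. a word embedding), 16-dim hidden "memory".
W_xh = rng.normal(scale=0.1, size=(16, 8))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(16, 16))  # hidden -> hidden (the "memory")
b_h = np.zeros(16)

def rnn_step(x, h):
    """One Elman-RNN step: the new hidden state mixes the current input with
    the previous state, which is how the network 'remembers' earlier context."""
    return np.tanh(W_xh @ x + W_hh @ h + b_h)

# Process a short sequence of token embeddings, carrying the state along.
sequence = rng.normal(size=(5, 8))  # 5 tokens
h = np.zeros(16)
for x in sequence:
    h = rnn_step(x, h)
```

After the loop, `h` summarizes the whole sequence, which is what makes such models useful for transcription and drafting aids.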

To further enhance efficiency, AI could also amplify the capabilities of tools already deployed by the Ministry of the Interior—for instance, integrating natural language processing into everyday applications (e.g., generating reports or official records via voice commands directly in the field). In this sense, AI is a powerful enabler, giving FSI more time to focus on high-value tasks—for example, during periods of police custody, allowing officers to interrogate suspects or work on case files instead of devoting limited time to repetitive administrative and technical tasks. (By law, police custody lasts 24 hours but may be extended to 48 hours if the alleged offense carries a prison sentence of more than one year, and up to 96 hours for specific crimes such as drug trafficking, terrorism, or organized crime.)

Fact-Checking and Assisting in Evidence Gathering

The collection of statements, testimonies, and various interviews forms the backbone of investigative work and often represents the first step in uncovering contradictions or verifying facts. The hundreds of documents that typically enrich a case file are still largely transcribed manually by investigators. Increasingly, however—and whenever required by law—these statements are filmed and recorded. In the future, they could be directly recorded and automatically transcribed by an AI-based system, thereby generating data that can be quickly processed and cross-checked by FSI. This would allow investigators to focus on analysis and fact-finding, ultimately improving case resolution rates.


Searching for Information Across Information Systems

AI could also simplify information retrieval on an individual, or a group of individuals, who have been arrested or are being sought. These searches, which involve biographical data and criminal records, are a daily routine for FSI and are performed across multiple police databases (such as the FPR – Fichier des Personnes Recherchées or the TAJ – Traitement des Antécédents Judiciaires). These systems function in silos and communicate very little with each other, partly to comply with their specific legal purposes as required by CNIL principles. As a result, information sharing between databases is limited to certain application interfaces, and investigators often need to consult several systems simultaneously. Given the proliferation of data and the sheer volume to be analyzed daily, AI could overcome this challenge by aggregating information and delivering it directly to FSI. This aggregation capability is one of the major contributions of AI, which must be considered within the legal framework set by the CNIL. Properly deployed, it could give FSI easier and faster access to the information they need, whether for personal safety during an arrest (e.g., understanding how to approach a specific individual), for improving the effectiveness of public safety checks (e.g., ensuring the correct identification of a person during a stop), or for supporting police investigations. For example, AI could support cross-referencing between different data sources, which is now authorized between Automatic License Plate Recognition (ALPR/LAPI) systems and other databases, such as stolen vehicle registries, vehicle insurance records, or the automated traffic enforcement system. Moreover, AI's aggregation capabilities could streamline the process of freezing bank accounts through the Ministry of Economy and Finance's information systems, thereby improving the recovery of fines, including amendes forfaitaires délictuelles (AFD, flat-rate criminal fines), or directly targeting the financial assets of certain offenders, a political priority emphasized by the Minister of the Interior, Bruno Retailleau.
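At its simplest, such cross-referencing amounts to joining plate reads against several registries in one pass. A minimal sketch, with made-up plates and in-memory sets standing in for the real LAPI and vehicle files:

```python
# Hypothetical registries; the real stolen-vehicle and insurance files differ.
stolen_vehicles = {"AB-123-CD", "EF-456-GH"}
uninsured_vehicles = {"EF-456-GH", "IJ-789-KL"}

def cross_reference(plate_reads):
    """Match ALPR plate reads against several registries in one pass."""
    hits = []
    for plate in plate_reads:
        flags = []
        if plate in stolen_vehicles:
            flags.append("stolen")
        if plate in uninsured_vehicles:
            flags.append("uninsured")
        if flags:
            hits.append((plate, flags))
    return hits

hits = cross_reference(["ZZ-000-ZZ", "EF-456-GH"])
```

A production system would query the authorized databases through their interfaces rather than local sets, and every hit would still be verified by an officer.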

Securing Police Databases and Their Use

In addition to consultation, a recurring task also involves “feeding” police databases with information about individuals who have been arrested or are wanted—data that includes descriptions of facts, offenses, and most importantly, identity details (biographic or biometric). This stage is critical, particularly in the acquisition of biometric data, as it determines the quality of the databases and ensures that, in the case of an offense or crime, suspects or victims can later be accurately identified. The computing power of AI algorithms can identify and highlight minutiae (specific points on a fingerprint) with greater precision than the human eye, leading to more accurate comparisons.

The interrogation of police databases containing fingerprint or genetic data could also be automated with AI, enabling faster and more reliable comparisons. Moreover, the deployment of automated quality-control checks could secure data acquisition, for example through an application assisting in fingerprint capture and automatically detecting non-compliant fingerprint images. Similarly, AI could enhance the processing of latent fingerprints left unintentionally on surfaces, which are often partial, blurred, or of poor quality. By extrapolating from recognized patterns, AI could fill in missing segments, enabling stronger matches.
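Minutiae comparison can be illustrated with a toy scoring function: count the minutiae of one print that have a counterpart in the other within a position and orientation tolerance. Real AFIS matchers are far more sophisticated; every coordinate and threshold here is an assumption for illustration.

```python
import math

# A minutia as (x, y, angle_degrees); real AFIS feature sets are far richer.
def match_score(print_a, print_b, dist_tol=10.0, angle_tol=15.0):
    """Fraction of minutiae in A with a close, similarly-oriented match in B."""
    matched = 0
    for (xa, ya, ta) in print_a:
        for (xb, yb, tb) in print_b:
            close = math.hypot(xa - xb, ya - yb) <= dist_tol
            d = abs(ta - tb) % 360
            aligned = min(d, 360 - d) <= angle_tol  # handle angle wraparound
            if close and aligned:
                matched += 1
                break
    return matched / max(len(print_a), 1)

latent = [(10, 12, 45), (40, 41, 90), (70, 20, 180)]      # partial latent print
reference = [(11, 13, 47), (39, 43, 92), (200, 200, 10)]  # database record
score = match_score(latent, reference)
```

Here two of the three latent minutiae find a counterpart, giving a score of 2/3; an operational matcher would also weigh ridge counts, quality maps, and global alignment before ranking candidates.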


AI to Strengthen Analytical Activities of FSI and Better Combat Delinquency

Handling Large Volumes of Data

AI offers opportunities to compute and automate certain tasks for FSI faced with vast amounts of data, whether in administrative screening activities or in criminal investigations. For example, Interior Ministry agents are tasked with vetting individuals applying for sensitive jobs, requiring them to check across all relevant police databases. In these mass data-analysis activities, AI could add value by accelerating and securing checks, allowing human analysts to focus on critical points, and ultimately enabling faster decision-making. AI could also optimize oversight activities by automatically detecting abnormal database consultations.

During investigative phases, AI could also be leveraged to search data and cross-compare it against large databases to improve clearance rates—for example, through DNA comparison against the national DNA database (FNAEG). DNA analysis is one of the most widely used forensic methods for identifying perpetrators of crimes. Moreover, AI could support judicial investigations, where case files are becoming increasingly voluminous and complex. Faced with the multiplicity and heterogeneity of data, AI's processing power allows for faster classification, linkage, and recall of all relevant information within very short timeframes. Investigators could therefore analyze and cross-check evidence more efficiently, with fewer errors, ultimately improving case resolution.
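At its core, DNA database comparison counts the STR markers on which a crime-scene profile and a stored record share both alleles. A toy sketch with invented marker values (real FNAEG profiles use a standardized marker set and rigorous statistical weighting):

```python
# Toy STR profiles: marker -> pair of repeat counts (alleles).
profile_scene = {"D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24)}

database = {
    "record-001": {"D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24)},
    "record-002": {"D3S1358": (14, 16), "vWA": (16, 18), "FGA": (20, 23)},
}

def matching_markers(a, b):
    """Number of STR markers on which two profiles share both alleles."""
    return sum(
        1 for marker, alleles in a.items()
        if sorted(b.get(marker, ())) == sorted(alleles)
    )

# Rank database records by how many markers match the crime-scene profile.
ranked = sorted(
    database.items(),
    key=lambda item: matching_markers(profile_scene, item[1]),
    reverse=True,
)
```

The value AI adds is not in this comparison itself, which is simple, but in scaling it across millions of records and flagging partial matches for expert review.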

Additionally, most data from criminal investigations are stored on hard drives for archiving. Tools capable of cross-referencing and linking these datasets are needed to identify evidence within massive amounts of stored information. AI could perform this classification and connection work, linking facts and evidence contained in judicial files. A key challenge lies in managing large volumes of judicial data to accelerate investigative processes—whether on the street or during interventions—enabling quicker decision-making. Here again, the definition of clear use cases and the pre-training of AI systems are essential to ensure the relevance of analyses, for instance through the generation of synthetic data.

Better Analyzing and Interpreting Images, Sounds, and Large-Scale Data

Generative AI introduces new techniques for analyzing and interpreting images. These developments can greatly enhance investigative work by processing large volumes of heterogeneous images, extracting requested elements from them, and understanding complex queries. Such solutions could support investigative phases through the analysis of crime scene photos, traffic accident images, or footage from urban supervision centers (CSU), helping to identify items of interest (vehicles, persons, etc.). Likewise, computer vision could become a major asset through the use of neural networks capable of interpreting and analyzing complex visual information on a large scale. Inspired by the human brain, these networks could, for example, be applied to aerial or satellite imagery to detect specific surfaces—useful for national police and gendarmerie to identify targeted vehicles, drug-dealing spots, and more.

AI would also improve vigilance in monitoring video surveillance feeds, thereby increasing efficiency and accelerating responses to suspicious situations (drug-dealing locations, brawls, gatherings, etc.). Indeed, it is estimated that after just one hour of real-time video monitoring, an operator loses concentration and may miss up to 50% of events. In the future, operators could rely on AI systems to flag events automatically, allowing them to focus on verification, analysis, and decision-making rather than continuous live monitoring.

AI can accelerate the detection process currently carried out by the human eye, using image-analysis and behavioral-analysis tools based on convolutional neural networks to identify objects, actions, or individuals. Current machine learning techniques already allow the retrieval of a specific person's photo, an object, or a weapon from thousands of photos stored on a computer or smartphone. AI's object- and shape-recognition capabilities therefore represent a significant operational advantage.
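Such retrieval typically works by embedding every image into a vector and ranking the gallery by similarity to a query embedding. A minimal sketch in which random vectors stand in for a vision model's output (the filenames and dimensions are invented):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pretend embeddings produced by a vision model; random vectors stand in,
# with the query deliberately constructed near the "weapon" photo.
rng = np.random.default_rng(1)
gallery = {
    "photo_beach.jpg": rng.normal(size=128),
    "photo_weapon.jpg": rng.normal(size=128),
    "photo_car.jpg": rng.normal(size=128),
}
query = gallery["photo_weapon.jpg"] + rng.normal(scale=0.05, size=128)

def top_match(query_vec, embeddings):
    """Return the gallery item whose embedding is most similar to the query."""
    return max(embeddings, key=lambda name: cosine(query_vec, embeddings[name]))

best = top_match(query, gallery)
```

Real deployments replace the linear scan with an approximate nearest-neighbor index so that searching thousands of seized photos stays fast.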

Finally, AI can make major contributions through voice recognition technologies, capable of deciphering unique vocal characteristics, converting speech into models that can be processed and compared to stored voiceprints (samples from telephone calls or recordings).


Analyzing Better and Detecting Faster

Open-source intelligence gathering (OSINT, SOCMINT) has become common practice given the abundance of available data (social networks, etc.). AI support is fundamental in these phases, enabling urgent or dangerous situations to be detected and characterized faster than criminal networks or drug traffickers can erase their digital traces. For example, AI can be used to monitor information flows (the "information noise") through web scraping, while respecting the AI Act framework (which excludes facial recognition), to detect and counter propaganda or disinformation. Using advanced AI and machine learning algorithms, vast amounts of data can be analyzed at high speed to identify patterns, keywords, or visual content—decisive in large-scale investigations such as narcotics trafficking or organized crime. Moreover, AI-driven systems can be trained on known propaganda or disinformation material, proactively spotting and flagging new content with similar features, ensuring swift and effective removal before it spreads.
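At its simplest, such flagging combines a keyword watchlist with pattern matching over scraped text. A deliberately naive sketch (a real system would use trained classifiers, as noted above; the watchlist terms and patterns are invented):

```python
import re

# Illustrative watchlist; a real system would use trained classifiers.
KEYWORDS = {"shipment", "drop point", "burner"}
PHONE_PATTERN = re.compile(r"\b\d{2}(?:[ .-]\d{2}){4}\b")  # French-style numbers

def flag_post(text):
    """Flag a scraped post if it contains watchlist terms or phone patterns."""
    lowered = text.lower()
    hits = sorted(k for k in KEYWORDS if k in lowered)
    if PHONE_PATTERN.search(text):
        hits.append("phone_number")
    return hits

flags = flag_post("Shipment arrives tonight, call 06 12 34 56 78")
```

Keyword rules like these mostly serve as a cheap first filter; the flagged posts are then scored by learned models and reviewed by analysts.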

To ensure the efficiency of these tools, the challenge lies in the ability to process both structured data (words, signs, numbers, etc.) and unstructured data (images, sounds, videos, etc.) at a large scale, relying on platforms and self-learning AI models capable of reformatting and making them exploitable. Automated language-analysis technologies powered by AI can extract and analyze written content across data volumes impossible to process manually.

AI also offers faster response times via machine learning, by training systems with massive datasets so they progressively learn to handle them autonomously. For instance, tracking the escape vehicle of a criminal suspect through surveillance cameras could be processed far more quickly by AI than by human analysis, leaving human investigators free to focus on data interpretation and oversight.

AI could support operators at an urban supervision center by detecting waste, abandoned objects, weapons, or fire outbreaks; calculating vehicle dwell time; monitoring parking near sensitive sites; identifying red-light violations or line crossings; and analyzing crowd movements.

AI to Enhance Investigative Capabilities and Case Solving

The sheer volume of data to be processed in investigations (video streams, images) has reached a level where exploitation is no longer possible without digital assistance. The massive increase in digital data places a heavy burden on the ability of Internal Security Forces (FSI) to handle it. The use of AI has therefore become not just an asset but a necessity—and, in the long term, an indispensable condition for effectively exploiting data or information that may contribute to establishing evidence. FSI must be equipped with tools to streamline video analysis, helping investigators avoid the need to review entire video or image sources manually. This would save time, increase efficiency, prevent concentration loss during long viewing sessions, and allow investigators to focus on higher-value analytical tasks. Without AI, it is likely that investigators would, in some cases, forego systematically analyzing all available video sources, thus missing out on particularly valuable digital evidence.

In addition to videos, investigators often rely on witness statements, bank records, phone logs, and testimonies—sources that could be processed by AI-powered software to flag inconsistencies, map data across time and space, or generate relational diagrams. Entering such data manually from paper or digital records is slow and labor-intensive. AI tools could detect key elements directly within texts, identify and classify them by meaning, and automatically generate relational diagrams. This would offer real added value by enabling real-time mapping of relationships and links from recordings or digital data, as well as suggesting follow-up questions to help investigators identify and apprehend suspects faster.
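Generating a relational diagram starts with aggregating records into a weighted graph. A minimal sketch over hypothetical call-detail records (names and structure invented for illustration):

```python
from collections import defaultdict

# Hypothetical call-detail records: (caller, callee).
calls = [
    ("alice", "bob"),
    ("bob", "carol"),
    ("alice", "bob"),
    ("dave", "carol"),
]

def build_relation_graph(records):
    """Aggregate records into a weighted graph: contact -> contact -> call count."""
    graph = defaultdict(lambda: defaultdict(int))
    for caller, callee in records:
        graph[caller][callee] += 1
    return graph

graph = build_relation_graph(calls)
```

Edge weights (here, call counts) are what let a visualization highlight the strongest links; the same aggregation applies to bank transfers or co-occurrence in testimonies.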

AI could also enable real-time detection, across massive data flows, of forged documents and fraud—tasks beyond human capacity—thereby improving case resolution rates. Applications are numerous in identity management and road safety alike, whether for combating driver's license fraud, insurance fraud, or repeated fraudulent practices (disputes over traffic fines, registration fraud, vehicle theft, etc.). Furthermore, with AI-driven analysis, investigators could process millions of financial transactions, detecting suspicious fund movements indicative of money laundering schemes that would otherwise be difficult to uncover. This would concretely improve the ability to analyze and understand criminal networks, connections, and environments, detect hidden links, and strengthen the fight against organized crime (drug trafficking, etc.). The recent dismantling of encrypted communication systems such as EncroChat or Matrix has highlighted the crucial role of advanced data analysis.
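One classic laundering pattern, repeated deposits just under a reporting threshold ("structuring" or "smurfing"), can be flagged in a few lines. The threshold, margin, and accounts here are all invented for illustration:

```python
from collections import Counter

# Hypothetical transactions: (account, amount_eur).
transactions = [
    ("acct-1", 9800), ("acct-1", 9900), ("acct-1", 9750),
    ("acct-2", 120), ("acct-2", 45),
]

def flag_structuring(records, threshold=10_000, margin=0.05, min_count=3):
    """Flag accounts with repeated deposits just under a reporting threshold,
    a classic 'smurfing' pattern in money-laundering detection."""
    near_threshold = Counter(
        acct for acct, amount in records
        if threshold * (1 - margin) <= amount < threshold
    )
    return {acct for acct, n in near_threshold.items() if n >= min_count}

suspects = flag_structuring(transactions)
```

Rules like this only surface candidates; establishing actual laundering requires network analysis across accounts and, ultimately, human financial investigators.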

Finally, AI could help secure the use of images, documents, and videos by FSI by automating the redaction of passages or sections of documents to strengthen personal data protection and privacy compliance in line with CNIL and AI Act recommendations. For example, convolutional networks could detect and filter pre-defined elements in images. AI could also be used with precision and speed to exploit tools such as body-worn cameras to establish evidence—for instance, in cases where FSI conduct during interventions is challenged.


AI to Strengthen Victim Support

In recent years, the Ministry of the Interior has deployed online platforms and applications enabling citizens to report and declare incidents, file complaints (Perceval—reporting credit card fraud; Pharos—reporting illegal online content; THESEE—reporting cyber scams; PNAV—reporting crimes against persons and victim support), or access online information (locating a police station, etc.). These sites provide citizens with practical tools and represent a concrete way to expand data-driven practices. They also generate a new “data source,” often structured (words, numbers, signs, etc.), which could be further developed and exploited by AI for mapping reports, locating crime data, or analyzing crime types by area. Such processed data could then be made available to FSI in the field, allowing for quicker interventions (e.g., automated statistics, automatic reporting).

These platforms also open opportunities to rethink new modes of intelligence and information collection. They could help strengthen the detection of even weak signals using AI in a security context where diversifying channels for incident reporting is critical.

AI to Transform Police-Public Relations

AI could also streamline procedures such as filing complaints or renewing administrative documents by introducing conversational assistants (chatbots or callbots) to provide information or services.

By integrating AI capabilities into complaint-management software, police and gendarmes handling victims in their units could respond faster and more effectively (e.g., automatic scheduling of appointments via categorization). Online complaint services could also be enhanced, automatically registering complaints, analyzing them, and directing victims toward the most appropriate solution (automated confirmation, video-complaint with an officer, in-person appointment, etc.).
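Routing a complaint toward the appropriate channel can be sketched as simple keyword triage; a production system would use a trained classifier, and every category and route name here is hypothetical:

```python
# Illustrative keyword routing; a real system would use a trained classifier.
ROUTES = {
    "card fraud": "online_resolution",
    "burglary": "in_person_appointment",
    "assault": "video_complaint_with_officer",
}

def triage(complaint_text):
    """Route a complaint to the most appropriate channel based on its content."""
    lowered = complaint_text.lower()
    for keyword, route in ROUTES.items():
        if keyword in lowered:
            return route
    return "manual_review"  # fall back to a human when nothing matches

route = triage("My card fraud happened yesterday on a shopping site")
```

The fallback to `manual_review` reflects the principle stressed throughout this note: automation accelerates handling, but a human remains in the loop for anything the system cannot confidently categorize.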

Recommendations and conclusion: Developing AI within a Clear and Secure Framework

The successful experiences from the Paris 2024 Olympic Games and recent technological advances highlight the urgent need to reflect on AI’s role in security. Given the deteriorating security context in France, increasingly organized crime, and the high demands of complex law enforcement professions where digital tools are already pervasive (investigation, forensics), AI is becoming an essential resource to help resolve cases, clarify facts, and provide robust solutions for FSI.

AI is not an end in itself but a powerful lever—just as IT systems and biometrics once were—and should be understood as such. To achieve this, it is vital to ensure:

• A clear political framework to reassure society and stabilize legislation.

• A ministerial strategy to secure FSI’s use of AI, supported by clear governance.

• Identifying the right use cases for AI and the right projects, and creating the conditions to scale up: data quality to guarantee tool efficiency; robustness, resilience, and security-by-design to limit vulnerabilities to cyber risks posed by criminals skilled in such threats; system auditability to understand and improve the results these systems produce; etc.

• An integrated approach — ethical, legal, and technological — to systematically address the importance of the data being handled, requiring the establishment of balanced measures in terms of cybersecurity and transparency toward society.

• An identification of AI tools whose use does not come at the cost of France's sovereignty or increase vulnerability to data leaks, cyberattacks, or intelligence and influence operations (information warfare, etc.).

Taking these technological steps is necessary if France is to meet the pressing challenges of public order and security it faces today.


Bientôt la machine qui lit dans nos pensées ?

Des scientifiques de l’université d’Austin au Texas ont mis au point un décodeur capable de retranscrire les pensées d’une personne avec une précision qui frise la perfection. Pour quelles applications dans le futur ?

Quand informatique et neurosciences s’associent, cela donne naissance à une innovation technologique majeure qui risque de révolutionner la vie humaine. Dans une précédente étude, les universitaires de Zurich mettaient en évidence la possibilité de capter en moins de deux minutes l’empreinte d’un cerveau permettant d’identifier un individu avec une précision proche des 100% (voir article).

Cette fois-ci, les chercheurs américains ont utilisé deux technologies de pointe, un appareil IRM (Imagerie par Résonance Magnétique) et un modèle d’IA (Intelligence Artificielle) de type transformateur afin de décoder l’activité cérébrale et de la retranscrire en langage texte, tout cela de manière non invasive contrairement aux précédentes interfaces cerveau-machine qui exigeaient l’implantation d’électrodes par la chirurgie.

L’activité cérébrale passée au crible

Pour obtenir ce résultat, les scientifiques ont placé trois volontaires dans un appareil d’imagerie médicale et leur ont fait écouter des podcasts racontant des histoires. Pendant 16 heures, ils ont enregistré leur activité cérébrale et observé comment les mots et les idées générées par l’écoute de ces récits activaient les différentes régions du cerveau. Toutes ces données ont ensuite été passées au crible d’un système neuronal artificiel afin de les convertir en langage texte. Les tests, menés cette fois en faisant écouter de nouvelles histoires, ont permis de constater que le réseau décodait sans problèmes ces nouvelles pensées.

Ultimately, this device, designed for medical use, is intended to allow severely disabled people who have lost the ability to speak and cannot use a keyboard to communicate with those around them by thought alone. Although the technology is still imperfect, it appears particularly promising: according to one of the researchers behind the method, the AI decoder can already capture the gist of a sometimes complex thought and transcribe it. In a word, it goes further than mere speech...

Inside a suspect's head...

While the study has drawn interest from neuroscientists everywhere, it also raises many ethical questions. Entering a person's head to rummage through their thoughts against their will could one day become a reality. One can imagine that in a criminal investigation where the suspect stays silent and the victim cannot be found, police might obtain answers and finally solve the mystery of certain disappearances. What if such a machine could have made the diseased brain of Michel Fourniret "confess" where the body of young Estelle Mouzin is buried? What if, in the case of the disappearance of Delphine Jubillar, it could definitively incriminate or, on the contrary, exonerate her husband, currently in pre-trial detention? So many "ifs" that argue in favor of the method. Conversely, it could also become a formidable weapon for silencing freedom of thought, a temptation that remains ever-present in certain countries where democracy is not a priority.

According to the researchers behind the study, that risk does not exist. The decoder only works on the brain of a trained and consenting subject; otherwise, it is very easy for the subject to adopt tactics that "sabotage" the results. In short, the human brain still beats the machine. For now...

Sources:

Des chercheurs parviennent à lire dans les pensées grâce à l'IA – Les Numériques (lesnumeriques.com)
L'IA peut maintenant lire vos pensées (iatranshumanisme.com)
Semantic reconstruction of continuous language from non-invasive brain recordings | Nature Neuroscience