
EU Regulatory Changes

25 changes tracked across 24 compliance frameworks including DORA, NIS2, GDPR, EU AI Act, Cyber Resilience Act, and more.

arXiv: Model Forensics in AI-Native Wireless Networks: Taxonomy, Applications, and Case Study
This publication introduces a taxonomy and framework for model forensics designed specifically for AI-native wireless networks, networks in which artificial intelligence is deeply integrated...
Read analysis →
arXiv: MetaBackdoor: Exploiting Positional Encoding as a Backdoor Attack Surface in LLMs
This publication from May 2026 identifies a novel vulnerability class in large language models, termed MetaBackdoor. The research demonstrates that an attacker can embed a hidden backdoor into an LLM by ...
Read analysis →
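The blurb is cut off before the mechanism, but the attack surface named in the title is well defined: positional encodings are the deterministic, position-dependent signal mixed into token embeddings before the transformer layers. As background only (this is not the paper's method), below is a minimal sketch of the classic sinusoidal encoding from Vaswani et al. (2017); many modern LLMs use rotary embeddings instead, but the architectural role is analogous.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Classic transformer positional encoding (Vaswani et al., 2017).

    Maps each position to a deterministic vector that is added elementwise
    to the token embeddings, the component the paper identifies as an
    attack surface. Assumes an even d_model.
    """
    positions = np.arange(seq_len)[:, np.newaxis]            # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]           # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)   # broadcast
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions get sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions get cosine
    return pe

# Because encodings are summed into every token's embedding, tampering with
# them perturbs the model's view of all tokens at the affected positions.
print(sinusoidal_positional_encoding(4, 8).round(3))
```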
arXiv: Talk is (Not) Cheap: A Taxonomy and Benchmark Coverage Audit for LLM Attacks
This publication, a pre-print from arXiv dated May 14, 2026, introduces a new taxonomy and benchmark coverage audit for attacks on large language models (LLMs). It systematically categorises the ty...
Read analysis →
arXiv: Veritas: A Semantically Grounded Agentic Framework for Memory Corruption Vulnerability Detection in Binaries
This publication introduces Veritas, a novel AI-driven framework designed to automatically detect memory corruption vulnerabilities in compiled binary software. Unlike traditional static analysis t...
Read analysis →
arXiv: PickleFuzzer: A Case Study in Fuzzing for Discrepancies Between Python Pickle Implementations
This publication presents a new automated testing tool designed to find security and reliabili...
Read analysis →
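The summary is truncated, but the title describes differential fuzzing: feed the same input to multiple implementations and flag any divergence in behavior. The harness below is not the paper's tool; it is a minimal sketch of the general idea applied to CPython's two stdlib unpicklers, the default C accelerator behind pickle.loads and the pure-Python fallback pickle._Unpickler (a private but real stdlib class). The mutation strategy and outcome signatures are illustrative choices, not anything taken from the paper.

```python
import io
import pickle
import random

# NOTE: unpickling attacker-controlled bytes can execute arbitrary code;
# run harnesses like this only in an isolated, disposable environment.

def c_loads(data: bytes):
    """Deserialize with the C accelerator (_pickle), CPython's default."""
    return pickle.loads(data)

def py_loads(data: bytes):
    """Deserialize with the pure-Python fallback implementation."""
    return pickle._Unpickler(io.BytesIO(data)).load()

def outcome(fn, data: bytes) -> str:
    """Reduce a run to a comparable signature: result repr or error type."""
    try:
        return f"ok:{fn(data)!r}"
    except Exception as exc:
        return f"err:{type(exc).__name__}"

random.seed(0)
for _ in range(10_000):
    # Start from a valid pickle of a small int, then flip a few bytes.
    blob = bytearray(pickle.dumps(random.randint(0, 2**32)))
    for _ in range(random.randint(1, 3)):
        blob[random.randrange(len(blob))] = random.randrange(256)
    a, b = outcome(c_loads, bytes(blob)), outcome(py_loads, bytes(blob))
    if a != b:
        print(f"discrepancy on {bytes(blob)!r}: C={a} vs pure-Python={b}")
```

Any mismatch, whether a different error type or one side accepting what the other rejects, is the kind of implementation discrepancy the title refers to.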
arXiv: Analyzing Codes of Conduct for Online Safety in Video Games at Scale
This publication, a research paper from arXiv, does not represent a regulatory change but rather a large-scale analytical study that may inform future regulatory frameworks. The paper presents a l...
Read analysis →
arXiv: WARD: Adversarially Robust Defense of Web Agents Against Prompt Injections
A new academic paper published on arXiv introduces a framework designed to protect autonomous web agents from adv...
Read analysis →
arXiv: Toward Securing AI Agents Like Operating Systems
This paper, published on arXiv, proposes a new framework for securing advanced AI agents by treating them like operating systems. It argues that current AI safety approaches are insufficient for au...
Read analysis →
arXiv: Do Coding Agents Understand Least-Privilege Authorization?
A new arXiv preprint examines the security behavior of AI coding agents when implementing authorization controls. The study...
Read analysis →
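The blurb cuts off before the findings, but the property under test is easy to state in code. The sketch below is a generic illustration of least-privilege authorization, not the paper's benchmark: each handler declares the minimal scope set it needs, and anything not explicitly granted is denied by default. All names here (require_scopes, the scope strings) are hypothetical.

```python
from functools import wraps

def require_scopes(*needed: str):
    """Deny-by-default check: the caller must hold every listed scope.

    Least privilege means each endpoint declares the minimal scopes it
    needs instead of testing for a broad role such as "admin".
    """
    def decorator(fn):
        @wraps(fn)
        def wrapper(caller_scopes: frozenset, *args, **kwargs):
            missing = set(needed) - caller_scopes
            if missing:
                raise PermissionError(f"missing scopes: {sorted(missing)}")
            return fn(caller_scopes, *args, **kwargs)
        return wrapper
    return decorator

@require_scopes("invoices:read")
def get_invoice(scopes, invoice_id: int):
    return {"id": invoice_id}

@require_scopes("invoices:read", "invoices:write")
def void_invoice(scopes, invoice_id: int):
    return {"id": invoice_id, "status": "void"}

reader = frozenset({"invoices:read"})
print(get_invoice(reader, 7))   # allowed: exactly the scope it needs
try:
    void_invoice(reader, 7)     # denied: write scope was never granted
except PermissionError as exc:
    print(exc)
```

An agent that instead requests or checks a blanket admin scope would pass functional tests while violating this property, which appears to be the sort of gap the study examines.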
arXiv: Can Visual Mamba Improve AI-Generated Image Detection? An In-Depth Investigation
On 14 May 2026, a research paper titled "Can Visual Mamba Improve AI-Generated Imag...
Read analysis →
arXiv: Known By Their Actions: Fingerprinting LLM Browser Agents via UI Traces
arXiv: EVA: Editing for Versatile Alignment against Jailbreaks
arXiv: Adapting AlphaEvolve to Optimize Fully Homomorphic Encryption on TPUs
arXiv: Capacitive Touchscreens at Risk: A Practical Side-Channel Attack on Smartphones via Electromagnetic Emanations
arXiv: One Step to the Side: Why Defenses Against Malicious Finetuning Fail Under Adaptive Adversaries
arXiv: Privacy Auditing with Zero (0) Training Run
arXiv: Angel or Demon: Investigating the Plasticity Interventions' Impact on Backdoor Threats in Deep Reinforcement L...
arXiv: Defenses at Odds: Measuring and Explaining Defense Conflicts in Large Language Models
arXiv: Exploiting LLM Agent Supply Chains via Payload-less Skills
arXiv: LiSA: Lifelong Safety Adaptation via Conservative Policy Induction