Protect your AI applications and agents from attacks, fakes, unauthorized access, and malicious data inputs.
Control your GenAI applications and agents and ensure their alignment with their business purpose.
Proactively test GenAI models, agents, and applications before attackers or users do.
The only real-time, multi-language, multimodal technology that ensures brand safety and alignment across your GenAI applications.
Ensure your app is compliant with changing regulations around the world across industries.
Proactively identify vulnerabilities through red teaming to produce safe, secure, and reliable models.
Detect and prevent malicious prompts, misuse, and data leaks to ensure your conversational AI remains safe, compliant, and trustworthy.
Protect critical AI-powered applications from adversarial attacks, unauthorized access, and model exploitation across environments.
Provide enterprise-wide AI security and governance, enabling teams to innovate safely while meeting internal risk standards.
Safeguard user-facing AI products by blocking harmful content, preserving brand reputation, and maintaining policy compliance.
Secure autonomous agents against malicious instructions, data exfiltration, and regulatory violations across industries.
Ensure hosted AI services are protected from emerging threats, maintaining secure, reliable, and trusted deployments.
Hackers with compromised government email addresses can access highly sensitive data on major platforms in as little as 30 minutes.
Threat actors are increasingly taking advantage of emergency data request (EDR) systems to extract highly sensitive user data from major online platforms. EDRs are intended to grant government and law enforcement the means to locate individuals in potentially life-threatening situations.
The data granted by a successful EDR can include users’ IP addresses, phone numbers, and even physical addresses. A criminal with access to such data can use it for identity fraud, phishing, and doxxing attacks, or even to track victims’ locations in real time. Bad actors who impersonate government or law enforcement officials can exploit this sensitive data to devastating effect.
Bad actors on hacking forums and darknet marketplaces are selling fake EDRs and fake government email addresses, which may be used to submit EDRs to major online platforms. As well as sharing tips on crafting convincing fake requests, these forums allow for the sale of access to compromised government and law enforcement email accounts. Vendors can be found offering fake EDR services for as little as $100, with some claiming a turnaround time of as little as 30 minutes.
A vendor on a darknet forum offering fake EDRs as a service
Our research uncovered threat actors offering EDRs as a service targeting a wide range of online platforms, including:
Threat actors also claim they can leverage compromised government and law enforcement email addresses from dozens of countries to obtain sensitive user information.
Technology companies serving large numbers of users are inundated with legitimate requests from law enforcement and government agencies, with an estimated hundreds of thousands of EDRs granted each year. Threat actors exploiting this system place all these platforms in a difficult position, with the chance that they may either leak sensitive information to criminal elements or prevent emergency services from reaching individuals at immediate risk.
To prevent either undesirable outcome, it is up to online platforms, in cooperation with law enforcement, to create a more robust EDR verification system. Among the best practices we recommend is implementing tracking mechanisms for EDRs, which log metadata such as IP addresses and can help security teams spot suspicious anomalies before complying with a request.
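A tracking mechanism of the kind described above could be sketched as follows. This is a minimal illustration, not an ActiveFence implementation; all names, fields, and thresholds (the known-domain allowlist, the one-hour window, the request-volume cutoff) are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical record of an incoming emergency data request.
@dataclass
class EDRRecord:
    requester_email: str   # e.g. an agency address the request arrived from
    source_ip: str         # IP address the request was submitted from
    received_at: datetime

@dataclass
class EDRLog:
    """Logs EDR metadata and flags simple anomalies before fulfilment."""
    known_domains: set                       # previously verified agency email domains
    records: list = field(default_factory=list)

    def check(self, req: EDRRecord) -> list:
        flags = []
        domain = req.requester_email.rsplit("@", 1)[-1]
        if domain not in self.known_domains:
            flags.append("unrecognized requester domain")
        # Flag bursts: several requests from one address in a short window.
        recent = [r for r in self.records
                  if r.requester_email == req.requester_email
                  and req.received_at - r.received_at < timedelta(hours=1)]
        if len(recent) >= 3:
            flags.append("unusual request volume from this address")
        self.records.append(req)
        return flags
```

In practice the anomaly checks would be richer (geolocation of the source IP, mismatch between claimed agency and sender infrastructure, and so on); the point is simply that logging metadata makes such checks possible before a request is granted.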
In addition, cooperation with threat intelligence providers can help keep tabs on threat actor chatter associated with EDRs. This means that technology companies can stay ahead of the curve of new trends, and identify the threat actors and compromised email addresses involved in fake EDR activity.
Companies can attempt to spot specific trends and threat actors through routine searches of hacker forums, public channels, groups, and messaging apps where EDR vendors tend to facilitate communication with their buyers.
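Such routine searches can start very simply, as in the sketch below: scan scraped forum posts for EDR-related terms and surface the matches for analyst review. The keyword list and function name are illustrative assumptions, not a real monitoring tool.

```python
# Hypothetical terms associated with fake-EDR chatter on hacker forums.
EDR_KEYWORDS = {"edr", "emergency data request", "gov email"}

def flag_posts(posts):
    """Return posts mentioning any EDR-related keyword (case-insensitive)."""
    hits = []
    for post in posts:
        text = post.lower()
        if any(kw in text for kw in EDR_KEYWORDS):
            hits.append(post)
    return hits
```

Real monitoring would add deduplication, source attribution, and analyst triage on top of this kind of keyword pass, but even a basic filter narrows thousands of posts down to a reviewable set.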
Mitigating the damage to user safety is a difficult balancing act for online platforms. Closer scrutiny of incoming emergency data requests could delay law enforcement or government officials from aiding victims in potentially life-threatening situations. On the other hand, granting requests to authorities without knowing who is really making the request can lead to severe breaches in data security.
Creating a more secure mechanism for validating EDRs will require cooperation with the authorities responsible for sending such requests, as well as a proactive approach that helps platforms stay aware of the means and methodologies allowing threat actors to target sensitive user data.
The EU AI Act is the world’s first comprehensive AI law. Enterprises deploying GenAI chatbots and agents must prepare now for compliance. Learn the key requirements, penalties, and how ActiveFence helps you meet them with red teaming, guardrails, and observability.
The 2025 ActiveFence AI Security Benchmark Report compares six models on prompt injection defense. ActiveFence delivers top F1, precision, and multilingual resilience.
ActiveFence partners with Databricks to integrate Guardrails into the Mosaic AI Agent Framework, helping enterprises deploy safer, policy-aligned AI agents at scale.