Blog

AI in Cybersecurity: What’s Effective and What’s Not – Insights from 200 Experts

Actualités
Curious about the buzz around AI in cybersecurity? Wonder if it's just a shiny new toy in the tech world or a serious game changer? Let's unpack this together in a not-to-be-missed webinar that goes beyond the hype to explore the real impact of AI on cybersecurity. Join Ravid Circus, a seasoned pro in cybersecurity and AI, as we peel back the layers of AI in cybersecurity through a revealing
Read More

Critical Cacti Security Flaw (CVE-2025-22604) Enables Remote Code Execution

Actualités
A critical security flaw has been disclosed in the Cacti open-source network monitoring and fault management framework that could allow an authenticated attacker to achieve remote code execution on susceptible instances. The flaw, tracked as CVE-2025-22604, carries a CVSS score of 9.1 out of a maximum of 10.0. "Due to a flaw in the multi-line SNMP result parser, authenticated users can inject
Read More

How Interlock Ransomware Infects Healthcare Organizations

Actualités
Ransomware attacks have reached an unprecedented scale in the healthcare sector, exposing vulnerabilities that put millions at risk. Recently, UnitedHealth revealed that 190 million Americans had their personal and healthcare data stolen during the Change Healthcare ransomware attack, a figure that nearly doubles the previously disclosed total.  This breach shows just how deeply ransomware
Read More

How we estimate the risk from prompt injection attacks on AI systems (Google Online Security Blog)

Actualités
Posted by the Agentic AI Security Team Modern AI systems, like Gemini, are more capable than ever, helping retrieve data and perform actions on behalf of users. However, data from external sources present new security challenges if untrusted sources are available to execute instructions on AI systems. Attackers can take advantage of this by hiding malicious instructions in data that are likely to be retrieved by the AI system, to manipulate its behavior. This type of attack is commonly referred to as an "indirect prompt injection," a term first coined by Kai Greshake and the NVIDIA team. To mitigate the risk posed by this class of attacks, we are actively deploying defenses within our AI systems along with measurement and monitoring tools. One of these tools is a robust evaluation…
Read More

How we estimate the risk from prompt injection attacks on AI systems (Google Online Security Blog)

Sécurité
Posted by the Agentic AI Security Team Modern AI systems, like Gemini, are more capable than ever, helping retrieve data and perform actions on behalf of users. However, data from external sources present new security challenges if untrusted sources are available to execute instructions on AI systems. Attackers can take advantage of this by hiding malicious instructions in data that are likely to be retrieved by the AI system, to manipulate its behavior. This type of attack is commonly referred to as an "indirect prompt injection," a term first coined by Kai Greshake and the NVIDIA team. To mitigate the risk posed by this class of attacks, we are actively deploying defenses within our AI systems along with measurement and monitoring tools. One of these tools is a robust evaluation…
Read More

How we estimate the risk from prompt injection attacks on AI systems (Google Online Security Blog)

Sécurité
Posted by the Agentic AI Security Team Modern AI systems, like Gemini, are more capable than ever, helping retrieve data and perform actions on behalf of users. However, data from external sources present new security challenges if untrusted sources are available to execute instructions on AI systems. Attackers can take advantage of this by hiding malicious instructions in data that are likely to be retrieved by the AI system, to manipulate its behavior. This type of attack is commonly referred to as an "indirect prompt injection," a term first coined by Kai Greshake and the NVIDIA team. To mitigate the risk posed by this class of attacks, we are actively deploying defenses within our AI systems along with measurement and monitoring tools. One of these tools is a robust evaluation…
Read More

How we estimate the risk from prompt injection attacks on AI systems

Actualités
Posted by the Agentic AI Security Team Modern AI systems, like Gemini, are more capable than ever, helping retrieve data and perform actions on behalf of users. However, data from external sources present new security challenges if untrusted sources are available to execute instructions on AI systems. Attackers can take advantage of this by hiding malicious instructions in data that are likely to be retrieved by the AI system, to manipulate its behavior. This type of attack is commonly referred to as an "indirect prompt injection," a term first coined by Kai Greshake and the NVIDIA team. To mitigate the risk posed by this class of attacks, we are actively deploying defenses within our AI systems along with measurement and monitoring tools. One of these tools is a robust evaluation…
Read More

Zyxel CPE Devices Face Active Exploitation Due to Unpatched CVE-2024-40891 Vulnerability

Actualités
Cybersecurity researchers are warning that a critical zero-day vulnerability impacting Zyxel CPE Series devices is seeing active exploitation attempts in the wild. "Attackers can leverage this vulnerability to execute arbitrary commands on affected devices, leading to complete system compromise, data exfiltration, or network infiltration," GreyNoise researcher Glenn Thorpe said in an alert
Read More

Broadcom Warns of High-Severity SQL Injection Flaw in VMware Avi Load Balancer

Actualités
Broadcom has alerted of a high-severity security flaw in VMware Avi Load Balancer that could be weaponized by malicious actors to gain entrenched database access. The vulnerability, tracked as CVE-2025-22217 (CVSS score: 8.6), has been described as an unauthenticated blind SQL injection. "A malicious user with network access may be able to use specially crafted SQL queries to gain database
Read More

UAC-0063 Expands Cyber Attacks to European Embassies Using Stolen Documents

Actualités
The advanced persistent threat (APT) group known as UAC-0063 has been observed leveraging legitimate documents obtained by infiltrating one victim to attack another target with the goal of delivering a known malware dubbed HATVIBE. "This research focuses on completing the picture of UAC-0063's operations, particularly documenting their expansion beyond their initial focus on Central Asia,
Read More