If you're a CISO, you've heard the promise before: a new tool that will finally eliminate the flood of false positives from your application security testing. This time, the promise comes from Endor Labs, which claims its new AI-powered Static Application Security Testing (SAST) tool can reduce false positives by up to 95%.
The claim, made in a recent blog post, is certainly attention-grabbing. The company attributes the improvement to a new multi-agent AI system that "thinks like a security engineer," a significant evolution from its earlier "DroidGPT" tool, which focused on open-source software research.
But is the 95% number real? The company's own materials have cited varying figures, ranging from 92% to 95%. And while any reduction in false positives is welcome, security leaders have been burned by exaggerated marketing claims before. The claim will have to be demonstrated in practice: CISOs will need to conduct their own rigorous evaluations to determine whether Endor Labs' AI SAST can truly deliver on its promise.
If it can, the implications are significant. A 95% reduction in false positives would free security teams to focus on real vulnerabilities, dramatically improving the efficiency and effectiveness of application security programs. It would also mark a major step forward in applying AI to complex security challenges. The industry will be watching this one closely.