Subscribe to PointGuard AI
See our latest blogs, videos, and expert commentary on security issues and trends.
Editor's Note: Since this blog was published, PointGuard has been quoted in several other publications on the McDonald's breach:
Forbes: McDonald’s AI Breach Reveals The Dark Side Of Automated Recruitment
Security Boulevard: McDonald’s Hiring Bot: Would You Like A Side of PII With That?
Information Security Buzz: McBreach with Fries? Default Logins, Sloppy Code Expose McDonald’s Job Applicants
McDonald's was recently in the news for an unfortunate security gap at Paradox.ai, the provider of its AI-powered recruiting app. Researchers reported that the password “123456” unlocked access to the entire database of 64 million McDonald's job applications. Arguably, this is not a direct AI-related incident, but rushing AI applications to market without thorough security vetting is a systemic problem. See our comments published in Enterprise Security Tech, along with the full article and links below.
“This isn’t unique to AI—it’s a recurring pattern with every so-called ‘game-changing’ technology,” said Willy Leichter, CMO at PointGuard AI. “The hype cycle drives organizations to deploy fast, chasing immediate gains while sidelining seasoned security professionals. We saw this with Amazon S3 buckets a decade ago, and now it’s AI’s turn. Maybe incidents like this one will finally serve as the wake-up call we need.”
Read the article in Enterprise Security Tech
“123456” Unlocks 64 Million Job Applications: Inside McDonald’s AI Hiring Chatbot Data Leak
It’s the kind of password you’d use as a joke. But when security researchers Ian Carroll and Sam Curry typed “123456” into the admin login for a widely used AI chatbot, they didn’t just gain access—they unlocked a time capsule of nearly 64 million job application logs tied to McDonald’s and other corporate giants.
The culprit: Olivia, an AI-powered hiring assistant developed by Paradox.ai and marketed as the future of recruitment automation. But as Carroll and Curry discovered, Olivia’s backend wasn’t just inefficient—it was wide open.
“So I started applying for a job,” Carroll said, “and then after 30 minutes, we had full access to virtually every application that’s ever been made to McDonald’s going back years.”
Fast Food, Faster Breach
Olivia was designed to streamline high-volume hiring by automating the most repetitive parts of the application process. Text your resume, schedule an interview via chatbot, and potentially get hired—all without human contact. But that convenience came at a hidden cost: security.
Researchers discovered that a misconfigured admin portal protected by the notorious password “123456” gave them access to a treasure trove of personal data: names, emails, phone numbers, resumes, job histories, and even sensitive documents from applicants across years of submissions.
This wasn’t a sophisticated attack. It was the digital equivalent of jiggling the front door—and finding it wide open.
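The fix for this class of failure is not exotic: screen admin credentials against a blocklist of well-known defaults at account creation and at login. The sketch below is illustrative only, assuming a hypothetical `is_acceptable_admin_password` helper and a small sample blocklist; it is not Paradox.ai's actual code.

```python
# Minimal sketch of the default-credential check that was missing here.
# The blocklist and policy are illustrative assumptions, not a vendor's real code.

# A real deployment would load a much larger list (e.g., a breached-password corpus).
COMMON_DEFAULTS = {"123456", "12345678", "password", "admin", "letmein", "qwerty"}

def is_acceptable_admin_password(password: str, min_length: int = 12) -> bool:
    """Reject well-known default/weak passwords and anything too short."""
    if password.lower() in COMMON_DEFAULTS:
        return False
    return len(password) >= min_length

# "123456" -- the password that opened the portal -- fails immediately.
print(is_acceptable_admin_password("123456"))                        # False
print(is_acceptable_admin_password("correct-horse-battery-staple"))  # True
```

A blocklist check like this, combined with mandatory rotation of vendor-shipped defaults and multi-factor authentication on admin portals, would have stopped this particular door from swinging open.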
“We Own This”
In response to the exposure, Paradox.ai confirmed the researchers were the only ones to access the data and acted quickly to shut the door. It launched a bug bounty program and issued the cybersecurity industry’s equivalent of a corporate apology tour.
“We do not take this matter lightly, even though it was resolved swiftly and effectively,” said Paradox.ai’s Chief Legal Officer. “We own this.”
McDonald’s, for its part, was quick to distance itself, emphasizing that it relies on third-party vendors for hiring infrastructure and had no direct oversight of Olivia’s technical backend.
Vendor Risk on the Menu
That explanation didn’t sit well with many in the security community. Critics argue that companies like McDonald’s can’t shrug off responsibility when outsourcing sensitive functions to AI-driven platforms. Vendor sprawl doesn’t absolve brands of due diligence—especially when millions of job seekers’ identities are at stake.
“While we all love a good burger, nobody wants their personal data served up with a side of cybersecurity negligence,” said Evan Dornbush, CEO of Desired Effect and former NSA cybersecurity expert. “This incident is a prime example of what happens when organizations deploy technology without having an understanding about how it works or how it can be operated by untrusted users.”
“Brands need to be thinking about vulnerabilities from the ground up, not just as an afterthought.”
Echoes of S3
For seasoned experts, the incident is less surprising than it is frustrating. Olivia’s breach is just the latest entry in a long list of security slip-ups where “move fast” has repeatedly come at the expense of “lock down.”
“This isn’t unique to AI—it’s a recurring pattern with every so-called ‘game-changing’ technology,” said Willy Leichter, CMO at PointGuard AI. “The hype cycle drives organizations to deploy fast, chasing immediate gains while sidelining seasoned security professionals. We saw this with Amazon S3 buckets a decade ago, and now it’s AI’s turn.”
“Maybe incidents like this one will finally serve as the wake-up call we need.”
A Symptom of a Bigger Problem
Job seekers have already reported frustrations with AI-based application processes—bots getting stuck, failing to understand inputs, or “looping” them in endless chat cycles. Add a privacy breach to that and the trust deficit only deepens.
While Paradox.ai’s new bug bounty program is a welcome gesture, it raises a larger question: if “123456” was all it took to breach a system trusted by multinational employers, what else are we missing in the rush to automate?
The Takeaway
This wasn’t a state-sponsored cyberattack. It wasn’t a zero-day exploit. It was a failure of the basics. And that’s what makes it so dangerous.
The lesson is simple: before giving AI a seat at the hiring table—or any enterprise function—organizations must understand exactly what they’re adopting, and whether it’s secured beyond default credentials.
Because if your future employees’ data is just one weak password away from being exposed, your HR tech stack might be hiring more problems than people.