The promise of “vibe coding” is simple: describe the app you want, let AI generate the code, and ship. But a recent security probe shows the reality can be a lot messier.
Lovable, an AI app-building platform that generates full applications from prompts, is facing criticism after a security researcher discovered major vulnerabilities in one of the apps hosted on its platform. The app, featured on Lovable’s Discover page and viewed more than 100,000 times, exposed the data of more than 18,000 users.

The researcher, tech entrepreneur Taimur Khan, said he found 16 vulnerabilities in the project, including six critical flaws. The app itself wasn’t publicly named during disclosure, but it was reportedly an education platform used by teachers and students to generate exams and review grades. Some users appeared to come from major universities and K-12 institutions, raising the stakes given the potential exposure of student information.
The core problem traces back to how apps are generated on Lovable. Projects built on the platform typically rely on a backend powered by Supabase, which manages authentication, storage, and real-time updates through a PostgreSQL database. But unless developers explicitly configure security features such as row-level security or role-based permissions, the generated code can look perfectly functional while leaving major gaps underneath.
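To make the gap concrete, here is a minimal, dependency-free sketch of the pattern described above. It is not code from the affected app; the names and data are illustrative. The point is that a generated data-access function can "work" in every demo while never scoping rows to the caller, which is exactly the check that row-level security would enforce at the database layer.

```typescript
// Hypothetical illustration only: not the actual app's code or schema.
type Row = { id: number; ownerId: string; grade: string };

const gradesTable: Row[] = [
  { id: 1, ownerId: "alice", grade: "A" },
  { id: 2, ownerId: "bob", grade: "C" },
];

// What insecure generated code often does: return whatever the query
// matches, trusting the client to ask for the "right" rows.
function fetchGradesUnscoped(): Row[] {
  return gradesTable;
}

// What a row-level security policy enforces server-side: each caller
// sees only rows they own, no matter what the client requests.
function fetchGradesScoped(callerId: string): Row[] {
  return gradesTable.filter((r) => r.ownerId === callerId);
}
```

In a real Supabase project the scoping lives in a Postgres policy rather than application code, which is precisely why a functional-looking frontend gives no hint that the policy was never written.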
In this case, Khan found an authentication check that effectively flipped the intended logic. Instead of blocking unauthorized users, the function blocked logged-in users while allowing anonymous visitors through. The mistake was subtle but devastating: an unauthenticated attacker could access user records, delete accounts, send bulk emails, or even manipulate student grades through the platform.
The exposed dataset reportedly included nearly 19,000 user records, with thousands of student accounts and email addresses among them. Some entries even contained full personally identifiable information.
To Khan, the incident points to a bigger issue with the current wave of AI-generated software. Tools like Lovable dramatically lower the barrier to building apps, but they can also produce code that prioritizes functionality over security. A human reviewer might catch a logic bug like this in seconds. An AI model, optimized to output working code quickly, might not.
Lovable says it takes vulnerability reports seriously and has contacted the app’s creator to fix the issue. The company also says projects receive a free security scan before publication, though developers ultimately decide whether to implement the recommended fixes.
That distinction, platform responsibility versus developer responsibility, is quickly becoming one of the defining debates in AI-generated software. If a tool promises to generate “production-ready” apps, critics argue, it can’t completely step back when those apps ship with security flaws.
For now, the episode is a reminder of something the AI boom occasionally glosses over: generating code is easy. Making sure it’s safe to run in production is still very much a human problem.