5,000 AI Apps Expose Data, Highlighting 'Vibe Coding' Risks


Summary: A study reveals that 5,000 applications created with generative AI are publicly exposed without authentication, a risk associated with 'vibe coding'.

The Problem of “Vibe Coding”: Thousands of AI-Generated Apps Are Already Leaking Sensitive Data

The scene looks like an old Silicon Valley promise come true. A person with little technical knowledge opens an artificial intelligence platform, types something like “I want an app to manage clients, with a login and an admin panel,” and within minutes has a functional product online. Database included. Backend included. Design included.

For years, the tech industry dreamt of this moment. The definitive democratization of software. Programming without programming.

Now that it has finally arrived, the first consequences are beginning to appear.

A recent investigation revealed that over 5,000 applications developed using “vibe coding” tools were left publicly exposed on the internet, many of them leaking extremely sensitive information. Medical records, financial data, private conversations, and administrative panels appeared accessible without proper protection, simply because someone published an AI-generated application without truly understanding how it worked.

The situation worries cybersecurity specialists because it is not an isolated failure or a specific vulnerability. It is something deeper: the mass emergence of software built by people who never learned the basics of computer security.

“Vibe coding,” a term popularized by former OpenAI researcher Andrej Karpathy, describes precisely this new way of developing applications using conversations with an AI. The user does not write code manually; they simply describe what they want and the system automatically generates the software. (xataka.com)

The experience is almost magical. Platforms like Replit, Lovable, or Base44 allow users to deploy complete products in minutes. For small startups or entrepreneurs with limited resources, the idea is irresistible: building software without relying on expensive technical teams or long development cycles.

The problem is that AI can write functional code much faster than it can guarantee that this code is secure.

For years, the prevailing idea was that programming consisted mainly of writing syntax. But the real work of software development has always been elsewhere: authentication, permissions, secrets handling, architecture, data validation, access control, secure deployments, monitoring. All the things that never show up in a spectacular demo.
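Two of the fundamentals listed above, secrets handling and input validation, are also among the most commonly skipped in generated code. As a minimal, hypothetical sketch (the variable name `APP_API_KEY` and the username rule are illustrative assumptions, not from the study), the difference between hardcoding a secret and reading it from the environment, and between trusting input and validating it, can look like this:

```python
import os
import re

def get_api_key() -> str:
    """Read the API key from the environment instead of hardcoding it.

    Generated samples often embed secrets directly in source files;
    keeping them in an environment variable keeps them out of the code
    and out of the repository.
    """
    key = os.environ.get("APP_API_KEY")
    if not key:
        raise RuntimeError("APP_API_KEY is not set")
    return key

def validate_username(name: str) -> bool:
    """Reject unexpected input instead of passing it straight to a database.

    Allow only 3-32 characters of letters, digits, and underscores.
    """
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,32}", name))
```

Neither function is exotic; the point is that an app can “work” in a demo without either of these checks existing at all.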

Artificial intelligence solved the visible part quite well. The invisible part remains a human problem.

That explains why many of the applications detected by researchers had surprisingly basic flaws: publicly accessible databases, unauthenticated APIs, or administrative panels exposed directly to the internet. In some cases, the creators themselves did not even seem aware that the information was available to anyone. (es.wired.com)
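To make the “unauthenticated admin panel” failure mode concrete: the fix is often a single missing check. The sketch below is a hypothetical, simplified request handler (the token value and header convention are illustrative assumptions); many of the exposed apps reportedly had no equivalent of this check and returned data to any caller.

```python
import hmac

# Illustrative shared secret; in practice it would be loaded from
# configuration or an environment variable, never from source code.
ADMIN_TOKEN = "change-me"

def handle_admin_request(headers: dict) -> tuple[int, str]:
    """Refuse admin-panel requests that lack a valid bearer token."""
    supplied = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    # Constant-time comparison avoids leaking the token via timing differences.
    if not hmac.compare_digest(supplied, ADMIN_TOKEN):
        return 401, "Unauthorized"
    return 200, "admin dashboard data"
```

A request with no `Authorization` header gets a 401 instead of the dashboard. Trivial as it looks, this is exactly the kind of invisible guard that a demo never exercises.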

The phenomenon inevitably recalls the crisis of Amazon S3 public buckets during the last decade, when entire companies leaked millions of records due to incorrect cloud configurations. Only now the potential scale is much greater.

Before, deploying software required a certain level of technical knowledge. Today, it is enough to know how to describe an idea in natural language.

The speed has also completely changed the rules. What once took weeks of development, testing, and deployment can now be resolved in an afternoon. And this acceleration eliminates many of the pauses where security reviews or technical audits traditionally appeared.

Specialists are even talking about a new stage of “Shadow AI”: employees creating internal tools using AI without passing through technology teams or corporate controls. (ecosistemastartup.com)

The paradox is interesting. Vibe coding is indeed democratizing software, probably faster than any analyst imagined just two years ago. But democratizing a powerful tool also means democratizing the errors associated with it.

AI drastically reduced the barrier to creating applications. It did not reduce the inherent complexity of the internet.

And there lies perhaps the most important change of all: for the first time in modern software history, millions of people can build complete systems without truly understanding what happens behind the screen.

As long as the application “seems to work,” many will automatically assume that it is also secure.

The tech industry has already learned that functional does not mean secure. It now seems poised to learn it again, but on a massive scale.

Key facts

  • Over 5,000 AI-created applications were found publicly exposed without authentication.
  • Vibe coding often means trusting LLM-generated code without validating its security.
  • Exposed applications include sensitive user information and credentials.

Why it matters

This pattern of development without security validation puts startups at immediate risk of exposing critical information. Founders must understand that the speed of AI-powered development cannot substitute for proper security practices.