
Vibe Coding Is Fun Until Someone Gets Breached

91.5% of vibe-coded apps contain at least one security vulnerability. The tools are incredible — the defaults are dangerous. Here's what's actually going wrong, the eight failure modes that show up every time, and what to do before you ship something you'll regret.

A platform valued at $6.6 billion left a critical security hole open for 48 days in 2026 — after someone reported it and the bug bounty ticket was closed without anyone reading it properly. That platform is Lovable, one of the most popular vibe coding tools on earth. And it is not the outlier. It is the case study.

Let me start with something that sounds unfair but isn't. Vibe coding is genuinely incredible. Describing what you want in plain English and watching working software appear is not a gimmick — it's a real productivity shift that I use and you probably should too. Anyone telling you to just stop is selling you something more expensive than the problem.

But there's a difference between using a tool well and shipping whatever it generates. Right now a significant slice of the industry is doing the latter, and the data on what that produces has stopped being ambiguous.

A first-quarter 2026 scan of more than 200 vibe-coded applications found that 91.5% contained at least one vulnerability traceable to AI hallucination (vibe-eval.com, Q1 2026). Not apps built by careless people — apps built the way most people build with these tools right now. That number alone should settle the "is this actually a problem" debate.

Vibe-coded apps with at least one vulnerability: 91.5% (Q1 2026 assessment of 200+ apps — vibe-eval.com)
AI-generated code with security flaws: 45% (Veracode GenAI Code Security Report 2025)

The more precise framing: AI generates code that works. Security is not the same thing as works. Vibe coding optimises hard for works and treats everything else as someone else's problem. That someone else is usually you, six months later, explaining to users why their data was exposed.

This is not the same as using Copilot — the distinction matters

Context first

Traditional AI coding assistants — Copilot, Codeium, tab completion — are suggestion tools. You're still the author. You have context, you accept or reject completions, you understand what you're shipping. That's a fundamentally different relationship than what Cursor in agent mode, Lovable, Bolt, v0, or Replit Agent are doing.

With vibe coding you describe the outcome and the AI makes every implementation decision — which packages, how to structure auth, how to query the database, what to validate, what to skip. You review whether it runs. You often can't easily review whether it's secure, because you didn't write it line by line and you can't trace why any particular decision was made without reading through code that is not written the way you would have written it.

Gartner projects 60% of all new code written in 2026 will be AI-generated. The platforms enabling that shift have seen adoption surge well beyond early developer audiences into product teams, founders, and non-technical builders who are shipping to production. A lot of that code is going to real users.

The thing nobody says out loud
A significant portion of those non-technical users are shipping software that handles real user data, real payments, and real personal information — built by a tool that has no idea what your threat model is, has never met your users, and is optimised for generating functional-looking output as fast as possible. Trend Micro's 2026 research put it plainly: "The real risk of vibe coding isn't AI writing insecure code. It's humans shipping code they never had a chance to secure."

What happened at Lovable — and why it's everyone's problem, not just theirs

The case study

Lovable is by several measures the fastest-growing software startup in recent history. $6.6 billion valuation. 8 million users. A product that genuinely does what it says. I'm not here to tell you Lovable is bad software. I'm here to document what happened when shipping velocity met security at scale.

Over 48 days in early 2026, a BOLA vulnerability — Broken Object Level Authorisation, meaning users could access other users' data — sat open across projects built on the platform. Someone filed a bug bounty report. The ticket was closed without proper escalation. The fix went out for new projects but not the thousands of existing ones already exposed. What got exposed: source code, database credentials, AI chat histories, and personal data of thousands of users.
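For readers meeting the term for the first time, BOLA is usually a one-line omission: the endpoint checks that you are logged in, but never checks that the object you asked for is yours. A minimal sketch of the pattern in a hypothetical Express endpoint, shown before and after (the route, table, and helper names are illustrative, not Lovable's actual code):

import express from 'express';
const app = express();
// `db` (a configured query client) and `requireAuth` (session middleware)
// are assumed to exist for brevity

// VULNERABLE: authenticated, but any user can read any project by id
app.get('/api/projects/:id', requireAuth, async (req, res) => {
  const result = await db.query('SELECT * FROM projects WHERE id = $1', [req.params.id]);
  res.json(result.rows[0]);
});

// FIXED: the query itself enforces ownership, so other users' ids return nothing
app.get('/api/projects/:id', requireAuth, async (req, res) => {
  const result = await db.query(
    'SELECT * FROM projects WHERE id = $1 AND owner_id = $2',
    [req.params.id, req.user.id]
  );
  if (!result.rows[0]) return res.status(404).end();
  res.json(result.rows[0]);
});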

The public response cycled through denial, deflection, blaming the documentation, blaming the bug bounty partner, and a partial apology — within a single day. The cycle is familiar because the pattern is not unique to Lovable. It's structural to a market that rewards growth and measures speed.

This is not a Lovable problem
A scan of 1,430+ AI-built apps across multiple platforms found 5,711 vulnerabilities. Missing Row Level Security was the single most common issue across every platform tested. The Lovable incident is the most documented example of something happening everywhere. The incentive structure of the vibe coding market rewards shipping and penalises the friction that security adds. Until that changes structurally, these incidents will keep happening.

The CVE count from AI-generated code went from six in January 2026 to thirty-five in March, according to Georgia Tech's SSLab Vibe Security Radar project — and their researchers note that what gets formally disclosed is likely a fraction of what actually exists in production systems.

[Chart: CVEs from AI-generated code, monthly trend 2026. Source: Georgia Tech SSLab]

Eight things AI does wrong — consistently, without being told to stop

From 1,430+ app scans

Not hypothetical. From scanning 1,430+ real AI-built applications and finding 5,711 real vulnerabilities. This is what the model does when you don't specifically tell it not to.

🔑
Your API keys end up visible to anyone with DevTools
Supabase keys, Stripe secrets, Firebase configs, database URLs — AI drops them into frontend JavaScript because the training data was full of examples that did this. One scan found live payment processing keys committed to public GitHub repos because the AI put them in config files and nobody checked before pushing.
Critical
🔒
Authentication that only exists in the browser
The AI builds a React component that checks whether you're logged in and redirects to /login if not. Looks exactly like a protected route. But the API endpoint has no server-side check at all. Anyone who knows the URL — which takes 30 seconds to find in the bundled JavaScript — can call it directly and get the data.
Critical
🗄️
Row Level Security — still the number one miss
If you built on Supabase or Firebase with a vibe coding tool, there's a reasonable chance any authenticated user can read or modify any other user's data. The AI connects to the database, generates working queries, and never configures RLS because nobody asked it to. Most common single vulnerability across every platform scanned.
Critical
💉
SQL queries built from user input
The model learned from thousands of examples that showed query construction with string interpolation. It generates that pattern because it was common, not because it's safe. Alessandro Pignati at NeuralTrust told Infosecurity Magazine he had seen multiple examples of SQL injection shipped to production because nobody sanitised the generated code before deploying it.
High
📦
Packages that don't exist — or exist with known CVEs
AI invents npm package names that sound completely plausible. Attackers publish malicious packages under those exact names — this kind of package-name squatting has powered real supply chain attacks. Separately, the model recommends specific versions from its training data that have since been patched for known vulnerabilities. You inherit those vulnerabilities without knowing it.
High
🚦
Every endpoint is wide open to brute force
Rate limiting is rarely added by default — it requires explicit prompting in most current tools. Without it, login endpoints accept unlimited guesses, search APIs can be scraped entirely, and if you're paying per downstream API call, an attacker can run your cloud bill to the ceiling without breaking a sweat. Worth checking every endpoint you didn't specifically ask to be rate-limited; a minimal sketch of the fix follows this list.
High
🐍
Insecure serialisation — the pickle problem
Databricks' red team built a multiplayer snake game with Claude. It worked great. It also used Python's pickle module to serialise network data — a module whose deserialisation of untrusted input is a well-documented path to arbitrary remote code execution. The code was functional. It was also an open door for anyone on the network to run arbitrary commands on the server. The AI used pickle because it had seen pickle used before.
Medium
🔁
The AI copies its own mistakes across your whole codebase
If the model generates an insecure pattern early in a session, it references that output when generating subsequent code — and repeats the pattern. One bad authentication check at the start of a session can propagate into every protected route in the application before you've noticed anything is wrong. This is subtle, underappreciated, and very consistent.
Medium
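To make the rate-limiting item above concrete, here is a minimal sketch using the express-rate-limit middleware; the window and limit values are illustrative starting points, not numbers from the scan data:

import express from 'express';
import rateLimit from 'express-rate-limit';
const app = express();

// Tight limit on credential endpoints: brute force slows to a crawl
const loginLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 10 });

// Looser default for everything else, still enough to stop bulk scraping
const apiLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 300 });

app.post('/api/login', loginLimiter, handleLogin); // handleLogin assumed
app.use('/api/', apiLimiter);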

Why it keeps happening — and why it's not really about careless developers

Root cause

I want to push back on the implied framing that vibe coding is dangerous because people are being stupid. Most of them aren't. The problem is structural.

AI models learn from what existed. What existed includes decades of Stack Overflow answers that prioritised working code over secure code, tutorials that skipped input validation because it complicated the example, and open-source projects where security was going to be addressed in the next version. The model reproduces those patterns because they were common and because they produce functional output — which is the objective it was trained to optimise for.

The second issue is that AI optimises for the objective you give it. "Build me a login system" is a functionality objective. The model builds a login system that logs users in. It does not build a system that resists credential stuffing, handles session expiry correctly, uses constant-time password comparison, or logs failed attempts — unless you explicitly require those things. Security requirements that aren't stated simply don't exist in the output.
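To see what an unstated requirement looks like in code, here is a minimal sketch of a login handler with two of them added: failed-attempt logging and a hash comparison that resists timing attacks. It assumes Express and the bcryptjs package; the table and column names are illustrative.

import express from 'express';
import bcrypt from 'bcryptjs';
const app = express();
app.use(express.json());
// `db` is an already-configured query client, assumed for brevity

app.post('/api/login', async (req, res) => {
  const { email, password } = req.body;
  const result = await db.query(
    'SELECT id, password_hash FROM users WHERE email = $1', [email]
  );
  const user = result.rows[0];

  // bcrypt.compare recomputes the hash rather than comparing strings,
  // which is what makes it resistant to timing attacks
  const valid = user && await bcrypt.compare(password, user.password_hash);

  if (!valid) {
    // unstated requirement: failed attempts are logged for monitoring
    console.warn(`failed login for ${email} from ${req.ip}`);
    return res.status(401).json({ error: 'invalid credentials' });
  }

  // session issuance and expiry handling would follow here
  res.json({ ok: true });
});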

The ownership problem nobody talks about
When something goes wrong in human-written code there's an audit trail. You read git blame, look at the PR, ask whoever wrote it why they made that decision. With vibe-coded software, ownership fragments. The committer is identifiable, but the reasoning behind each implementation decision, the dependency choices, the threat model assumptions — those aren't recorded anywhere. Fixing a "small issue" becomes a scavenger hunt through code nobody fully understands, generated by a process that left no documentation of its logic.
[Chart: Security vulnerability rate — AI-generated vs human-written code by category]

What the vulnerable code actually looks like — so you recognise it when you see it

Real patterns

Abstract warnings are easy to skip. Here's what these vulnerabilities look like in code you might actually have shipped.

SQL injection — the AI's default move

❌ What the model generates without security instructions
VULNERABLE
// Looks fine. Works fine. Terrible in production.
const query = `SELECT * FROM users WHERE email = '${userEmail}'`;
const result = await db.query(query);

// Attacker input: ' OR '1'='1' --
// Your database returns every user record.
// This is not a theoretical attack.
✓ What you get when you ask for it explicitly
SAFE
// Prompt: "always use parameterised queries, never interpolate input"
const query = 'SELECT * FROM users WHERE email = $1';
const result = await db.query(query, [userEmail]);

// User input is data. Not SQL. The injection attempt does nothing.

The auth guard that protects nothing

❌ Security that lives only in the browser
BYPASSABLE
// AI builds this. Looks right. Isn't.
const ProtectedRoute = ({ children }) => {
  const { user } = useAuth();
  if (!user) return <Navigate to="/login" />;
  return children;
};

// Meanwhile, the API endpoint it calls:
// app.get('/api/admin/data', async (req, res) => {
//   res.json(await db.query('SELECT * FROM everything'));
// })
// No auth check. None. Call it directly from curl.
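For contrast, here is a minimal sketch of the server-side check that actually protects that endpoint, assuming Express and the jsonwebtoken package; the secret name and query are illustrative:

import express from 'express';
import jwt from 'jsonwebtoken';
const app = express();
// `db` is an already-configured query client, assumed for brevity

// Middleware runs on the server, where an attacker can't strip it out
// the way they can bypass a client-side <Navigate> redirect
const requireAuth = (req, res, next) => {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch {
    res.status(401).json({ error: 'unauthorized' });
  }
};

// The same endpoint, now checking identity before returning anything
app.get('/api/admin/data', requireAuth, async (req, res) => {
  res.json(await db.query('SELECT * FROM reports WHERE owner_id = $1', [req.user.sub]));
});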

Secrets in the bundle — visible to everyone

❌ Dropped into your frontend by default
EXPOSED
// AI puts this in your React component
const supabase = createClient(
  'https://yourproject.supabase.co',
  'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...'
);

// This is in your bundled JavaScript.
// Open DevTools → Sources → search 'eyJ'
// Your database key is right there.
// This is happening constantly, right now.
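The fix pattern is the same every time: privileged keys live in server-side environment variables, and the browser talks to your API instead of holding the key. A sketch assuming a Node backend and the official supabase-js client. (One Supabase-specific nuance: the anon key is designed to be public, but only when Row Level Security is actually configured; the service-role key used below must never reach the client.)

import express from 'express';
import { createClient } from '@supabase/supabase-js';
const app = express();
// `requireAuth` (session middleware that sets req.user) assumed for brevity

// Server side only: the key comes from the environment,
// never from source code or the client bundle
const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_ROLE_KEY
);

// The browser calls this endpoint and never sees the key itself
app.get('/api/profile', requireAuth, async (req, res) => {
  const { data, error } = await supabase
    .from('profiles').select('*').eq('id', req.user.sub);
  if (error) return res.status(500).json({ error: 'lookup failed' });
  res.json(data);
});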

What to actually do — and no, "stop using vibe coding" is not the answer

The practical part

The single most useful mental shift: treat AI-generated code like third-party code from an unknown author. You wouldn't deploy an npm package to production without checking what it does. Apply the same standard to output that came from a model optimised for a different objective than yours.

Start sessions with security requirements — not at the end

Databricks' red team found that security-specific prompts at the start of a session significantly reduced vulnerability rates with minimal impact on code quality. The word start matters. If you add security requirements after generating a bunch of insecure patterns, the model references its own earlier output and propagates the same problems through the "fixes."

Copy this into your system prompt right now
"Use parameterised queries for all database operations. Never interpolate user input into queries, shell commands, or HTML output. All secrets go in environment variables only — never in source code, config files, or client bundles. Implement server-side validation for all user input — client-side validation is UX, not security. Add rate limiting to all API endpoints. Configure Row Level Security on all database tables. Apply the principle of least privilege to all database access."

Five things to check before anything ships

Manual checks — do these every time
1. No secrets in source code or client bundles.
2. Server-side validation on every user input.
3. Auth checks on the server, not just the client.
4. RLS configured on every database table.
5. Dependencies exist and have no known CVEs — run npm audit or pip audit before deploying.
Tools that catch what eyes miss
Semgrep or another SAST tool in your CI pipeline. GitGuardian or Trufflehog for secrets in git history. Supabase's RLS advisor if you're on Supabase. npm audit on every dependency install. These take 20 minutes to set up and catch the majority of what would otherwise make it to production.

Not all code deserves the same scrutiny

A colour scheme change and a payment processing flow are not the same risk level. The most effective teams apply tiered scrutiny — lighter review for isolated, low-stakes UI work, explicit security requirements and careful review for anything touching authentication, payments, user data, or external APIs. The question isn't "should we use AI here?" It's "how much scrutiny does this specific piece of code require before it ships?"

The EU AI Act dimension — August 2026 is 3 months away
For anyone shipping AI-generated code that touches user data, employment, credit, healthcare, or essential services, "the AI wrote it" is not a compliance defence when the majority of the Act's rules take effect in August. Human oversight, logging, and explainability are legal obligations. If this applies to you, building that evidence now is easier than building it after an audit request arrives.

The head of the UK's NCSC said at RSA 2026 that the industry needs to build vibe coding safeguards before making the same mistakes it made at the start of the cloud era — shipping fast, worrying about security in the next sprint, and spending years cleaning up preventable breaches. That window is still open. Use it.

Vibe coding is here. It works. It's fast. It also produces vulnerable software by default if you let it. That last part is a choice, not an inevitability — and it's yours to make before a breach makes it for you.

Decode JWT tokens without sending them anywhere → Our JWT Decoder runs entirely in your browser. No server, no logging, no data leaving your device. Because that's how security tools should work.

Open JWT Decoder
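If you're curious why that's possible: a JWT's payload is just base64url-encoded JSON, so decoding it (as opposed to verifying its signature, which requires the secret) needs nothing but the browser:

// Decode a JWT payload without verifying it: no network required
const decodeJwtPayload = (token) => {
  const base64 = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(atob(base64));
};

// e.g. decodeJwtPayload(token).exp is the expiry timestamp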