January 8, 2026 | The Security Table Podcast

Why Web3 Audits Aren’t Enough: Audited, Yet Exploited — Real Examples

In 2024, the Web3 ecosystem lost approximately $5 billion to exploits. The shocking part? 90% of these hacked protocols had undergone external security audits.

At Olympix, we're on a mission to change this narrative. In our latest Security Table podcast episode, we examined six major hacks from the past year, all on audited codebases, to understand what went wrong and how proactive security tools could have prevented them.

The Core Problem: Audits Are Necessary but Insufficient

When Olympix CEO Channi Greenwall started the company, she discovered a troubling pattern: despite the massive attack surface and catastrophic consequences of smart contract vulnerabilities, the industry relied almost exclusively on manual third-party audits for security.

"In 2022 when we started, it was about $14 billion exploited with 90% of that undergoing manual audit," Greenwall explains. "The whole genesis of Olympix was: can we automate parts of what auditors are doing, and can we find things they miss?"

The answer, as these six cases demonstrate, is yes.

1. 1inch: $7.5M Lost to Buffer Overflow

What happened: A calldata corruption vulnerability (essentially a Web2-style buffer overflow replicated in Web3) resulted from an arithmetic underflow in a caller-controlled value.

Why audits missed it: The vulnerability wasn't especially sophisticated, but it wasn't immediately apparent that the variable was caller-controlled. Human auditors developed assumptions about the code's safety that led them to overlook the issue.

How Olympix could have prevented it: Our static analysis tool doesn't make the same assumptions humans do. It systematically checks for potential underflows without bias, flagging this vulnerability before deployment.
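To make the pattern concrete, here's a hedged, hypothetical sketch (not 1inch's actual code) of how an unchecked subtraction on a caller-controlled value can wrap silently instead of reverting, handing the caller control over a length that is later used to carve up calldata:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical illustration only -- not the 1inch codebase.
contract CalldataLengthExample {
    // `suffixLength` is caller-controlled. Inside `unchecked`, a value larger
    // than payload.length wraps to a huge number instead of reverting; if that
    // result is later used as an offset or length when slicing calldata, the
    // caller effectively decides where the read ends.
    function bodyLength(bytes calldata payload, uint256 suffixLength)
        external
        pure
        returns (uint256)
    {
        unchecked {
            return payload.length - suffixLength; // potential underflow
        }
    }
}
```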

2. Penpie: $27M Drained Through Re-entrancy

What happened: A missing non-reentrant modifier on a critical function allowed an attacker to exploit a re-entrancy vulnerability, but only through one specific branch that developers never anticipated.
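As a hedged illustration of the general pattern (hypothetical code, not Penpie's contracts), consider a claim function that sends funds before updating state and carries no nonReentrant guard:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical reentrancy-prone reward claim -- not Penpie's code.
contract RewardPool {
    mapping(address => uint256) public rewards;

    // Vulnerable: the external call happens before the balance is zeroed and
    // there is no nonReentrant modifier, so a malicious receiver can re-enter
    // claim() and drain the pool through this path.
    function claim() external {
        uint256 amount = rewards[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        rewards[msg.sender] = 0; // state update arrives too late
    }

    receive() external payable {}
}
```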

Why audits missed it: Finding that exact branch is nearly impossible for a human. When auditing code, you can't reason through every permutation and combination of branches, and doing so repeatedly for every function simply isn't feasible.

How Olympix could have prevented it: Our mutation testing tool would have directly flagged the lack of proper validation. By introducing small changes to the codebase (like removing the onlyOwner modifier from a critical function) and re-running the test suite, mutation testing verifies that your tests actually scream when a protection goes missing. This happens during development, not after deployment.

Mutation testing has become a favorite among security engineering teams. As Greenwall notes: "When security engineers hear what it does, it clicks. They're like, 'Wow, I can't believe we didn't enforce this metric.' They're able to actually quantify their risk and the security of their test suite, which is something they've never historically been able to do."
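As a concrete example of what that metric measures, here's a hypothetical Foundry-style test (the framework is our assumption; the podcast doesn't name one). A mutation run that deletes the access-control check should make at least one test like this fail; if none does, the mutant survives and the gap shows up in your score:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "forge-std/Test.sol";

// Hypothetical target: the access-control check is what a mutation run deletes
// to see whether any test notices.
contract Treasury {
    address public owner;

    constructor() {
        owner = msg.sender;
    }

    function adminWithdraw(uint256) external {
        require(msg.sender == owner, "not owner");
        // ...withdraw logic elided for the sketch...
    }
}

// If this test is absent, the "remove the access-control check" mutant survives,
// and the mutation score makes that gap visible before the code ever ships.
contract TreasuryAccessTest is Test {
    Treasury treasury;

    function setUp() public {
        treasury = new Treasury();
    }

    function test_RevertWhen_NonOwnerWithdraws() public {
        vm.prank(address(0xBEEF)); // impersonate a random, non-owner caller
        vm.expectRevert("not owner");
        treasury.adminWithdraw(1 ether);
    }
}
```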

3. Hedgey Finance: $44M Lost to Faulty Code Path

What happened: A specific code path failed to clear user approvals after campaign cancellation, allowing the attacker to repeatedly withdraw ERC20 tokens.
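Here's a hedged sketch of that pattern (hypothetical code, not Hedgey's contracts): a cancellation path that refunds the creator but never revokes the approval granted when the campaign was created:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

interface IERC20 {
    function approve(address spender, uint256 amount) external returns (bool);
    function transfer(address to, uint256 amount) external returns (bool);
}

// Hypothetical illustration -- not Hedgey's actual contracts.
contract Campaigns {
    struct Campaign { address creator; IERC20 token; uint256 amount; bool active; }
    mapping(bytes32 => Campaign) public campaigns;

    function cancelCampaign(bytes32 id) external {
        Campaign storage c = campaigns[id];
        require(msg.sender == c.creator && c.active, "not allowed");
        c.active = false;
        c.token.transfer(c.creator, c.amount);
        // BUG (on this path): the approval granted to the claim contract when the
        // campaign was created is never reset, so it can still be spent after
        // cancellation. The fix is an explicit approve(claimContract, 0) here.
    }
}
```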

Why audits missed it: Among all possible code paths, this single faulty one was buried deep enough that manual review didn't catch it.

How Olympix could have prevented it: Our fuzzer/formal verifier would have detected this by finding those exact problematic paths and attempting to break the invariants.

Formal verification treats code as math: a series of logical statements and state transitions. It enforces rules (like "no user should have unlimited approval after cancellation") across all possible code paths and proves them mathematically.
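As a hedged sketch (hypothetical names, not Olympix's actual spec format), that rule can be captured as a small property which a fuzzer or formal verifier tries to falsify after every reachable sequence of calls; any sequence that breaks it is a counterexample, i.e. the exact problematic path:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

interface IERC20Allowance {
    function allowance(address owner, address spender) external view returns (uint256);
}

// Hypothetical invariant: "for a cancelled campaign, the claim contract's
// allowance must be zero." The checker exercises the protocol's functions in
// arbitrary order and evaluates this property after every step.
contract ApprovalInvariant {
    IERC20Allowance public immutable token;
    address public immutable vault;          // contract holding campaign funds
    address public immutable claimContract;  // spender that should be revoked

    constructor(IERC20Allowance _token, address _vault, address _claimContract) {
        token = _token;
        vault = _vault;
        claimContract = _claimContract;
    }

    // True while the post-cancellation state is safe.
    function holds() external view returns (bool) {
        return token.allowance(vault, claimContract) == 0;
    }
}
```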

The challenge is that traditionally, formal verification requires engineers to learn a completely new specification language and enumerate every protocol invariant themselves, a process that is often incomplete and error-prone. Olympix solved this by automatically deducing code paths and generating the necessary specs, delivering one-click formal verification that previously required specialized expertise.

4. Cork Protocol: Complex Hook Vulnerability

What happened: A Uniswap V4 hook function lacked proper access control modifiers, creating an exploitable vulnerability when combined with a complex economic attack vector.

Why audits missed it: Uniswap V4 hooks were brand new. Auditors didn't have established heuristics for this vulnerability class yet. Without existing checklists or training data, both human auditors and AI-based security tools struggled to identify the issue.

How Olympix could have prevented it: Olympix has built extensive infrastructure around understanding code through our own intermediate representation (IR) of codebases. This allows us to understand data flow, function types, and token movement throughout contracts, giving us a comprehensive picture of what it means to miss a modifier in any context, even in novel code patterns.
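As a hedged, stripped-down illustration of that missing-modifier class (hypothetical code, not Cork's contracts and not the full Uniswap V4 hook interface):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical hook -- the real Uniswap V4 interface is richer, but the access
// control mistake looks the same: the callback trusts its caller implicitly.
contract ExampleHook {
    address public immutable poolManager;
    mapping(address => uint256) public credited;

    constructor(address _poolManager) {
        poolManager = _poolManager;
    }

    // Vulnerable: anyone can call this "callback" directly and credit arbitrary
    // amounts, because nothing checks that msg.sender is the pool manager.
    function afterSwap(address account, uint256 amountOut) external {
        credited[account] += amountOut;
    }

    // Guarded variant: restrict the callback to the pool manager.
    function afterSwapGuarded(address account, uint256 amountOut) external {
        require(msg.sender == poolManager, "only pool manager");
        credited[account] += amountOut;
    }
}
```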

5. Paxos/PayPal: $300 Trillion Minting Error

What happened: Insufficient validation on minting amounts turned a fat-finger mistake into a catastrophe: someone entered the wrong number in a script or wallet, and nothing in the contract stopped it.

Why audits missed it: This wasn't a sophisticated vulnerability. It was a basic validation gap that should have been caught but wasn't, possibly because it seemed too simple to be a real risk.

How Olympix could have prevented it: Both our static analysis and Bug Poker tools would have flagged insufficient validation on minting amounts, recommending multiple layers of validation on all external function parameters.
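A hedged sketch of the kind of validation meant here (hypothetical code and limits, not Paxos's contracts): bound every externally supplied mint amount so a fat-fingered figure can't clear the check:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical stablecoin minter -- illustrative only.
contract GuardedMinter {
    address public immutable supplyController;
    uint256 public constant MAX_SINGLE_MINT = 1_000_000_000e18; // sanity ceiling (assumed figure)

    uint256 public totalSupply;
    mapping(address => uint256) public balanceOf;

    constructor() {
        supplyController = msg.sender;
    }

    function mint(address to, uint256 amount) external {
        require(msg.sender == supplyController, "not supply controller");
        require(to != address(0), "zero address");
        require(amount > 0 && amount <= MAX_SINGLE_MINT, "amount out of range");
        // A fat-fingered 300 trillion fails the range check above; a production
        // setup might also require a second signer for amounts near the ceiling.
        totalSupply += amount;
        balanceOf[to] += amount;
    }
}
```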

This case highlights the challenges institutions face moving from Web2 to Web3. They understand they need traditional tools and third-party validation (that's mandated in Web2). But there's an education gap around mutation testing, fuzzing, and automated formal verification. The good news is that institutions are already culturally attuned to tools-first security, making it easier to demonstrate value once they understand what's available.

6. Balancer: $121M Lost in Recent Exploit

What happened: A rounding direction mismatch in the withdrawal process (already flagged with a comment acknowledging the issue) became exploitable when leverage tokens were introduced years later. The protocol decided the one-wei approximation difference was acceptable, not anticipating it could be compounded through leverage tokens and repeated transactions.
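A hedged, simplified sketch of the rounding-direction issue (hypothetical code and math, not Balancer's): when one path rounds in the user's favor, the one-wei dust becomes exploitable once another contract lets you repeat and leverage the operation cheaply:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical pool share math -- illustrative only.
contract RoundingExample {
    // Converting shares to underlying. Solidity division truncates (rounds down),
    // which here favors the protocol...
    function sharesToAssetsDown(uint256 shares, uint256 assets, uint256 totalShares)
        public pure returns (uint256)
    {
        return (shares * assets) / totalShares;
    }

    // ...but if the withdrawal path instead rounds up in the user's favor, each
    // round trip leaks up to 1 wei. Harmless alone; exploitable when a new
    // contract (e.g. a leverage token) lets an attacker batch thousands of round
    // trips per transaction and compound the discrepancy.
    function sharesToAssetsUp(uint256 shares, uint256 assets, uint256 totalShares)
        public pure returns (uint256)
    {
        return (shares * assets + totalShares - 1) / totalShares;
    }
}
```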

Why audits missed it: The original vulnerability wasn't novel and was documented. But years later, when new contracts were added that interacted with the old code, it created an exploitable condition. No one would economically justify re-auditing every line of old, deployed code every time something new is added. You'd bankrupt the company.

How Olympix could have prevented it: Our Bug Poker tool, running on decompiled code without even access to the original comments, identified the leverage token issue and the potential for compounding through batch execution.

This case illustrates why point-in-time audits cannot future-proof your protocol. Novel attack surfaces and vectors emerge constantly. Change and evolution happen continuously in Web3. If you're relying on a single fixed-point-in-time audit of a scoped codebase, you cannot protect against all eventualities.

The Path Forward: Proactive Security Layering

The takeaway isn't that audits are useless. They're critical. Human auditors excel at finding certain vulnerability classes. But tools are better at finding others.

Think of security as Swiss cheese: you need multiple layers because each layer has holes. Relying solely on audits leaves you vulnerable. The solution is a proactive, multi-layered approach:

  1. Static analysis during development to catch common vulnerabilities
  2. Mutation testing to quantify test suite coverage and find validation gaps
  3. Fuzzing and formal verification for mathematical assurance of critical invariants
  4. Automated internal audits powered by these formal methods
  5. External audits as a final validation layer

Olympix tools leverage formal methods, the same battle-tested approaches used in mission-critical industries like aerospace and medical devices, where failure is catastrophic. For a fraction of the cost of a single external audit, you can implement tooling that scales across the entire development lifecycle.

The result is that you catch vulnerabilities before going to external audit, making that audit more effective and exhaustive. When you publish an audit report with just a few low-severity findings, it signals to stakeholders that you're writing genuinely secure code. More importantly, you catch the things audits miss.

Why This Matters Now

The correlation between market growth and exploit size is undeniable. As more institutional money flows on-chain, the incentives for attackers grow proportionally. State-level actors like North Korea actively target DeFi protocols to fund their nuclear programs. The attack surface isn't shrinking; it's expanding.

Each of these six protocols followed existing best practices. They got external audits. Some got multiple audits. They still lost millions (in some cases, hundreds of millions) because a single point-in-time security check cannot account for:

  • Novel vulnerability classes that emerge as the ecosystem evolves
  • Complex interactions between old and new code
  • Human error in validation logic
  • Specific code paths that manual review simply cannot cover exhaustively

The protocols that survive and thrive in this environment will be those that embrace proactive security tooling as table stakes, not nice-to-have. Static analysis, mutation testing, and automated formal verification need to be as standard as unit tests.

The Economics of Prevention

Consider the math. A single audit from a top firm costs $50,000 to $200,000+ depending on codebase complexity. That audit covers a fixed scope at a fixed point in time. The findings are valuable, but they're also final. Once deployed, you're on your own until the next audit.

Olympix tooling costs a fraction of that and runs continuously. Every code change. Every new integration. Every deployment. The tools don't get tired. They don't make assumptions. They don't skip checks because something "looks safe."

Teams like Uniswap Labs, Circle, LiFi, and Agora have integrated these tools into their CI/CD pipelines because they understand that security cannot be an afterthought or a checkpoint. It needs to be built into the development process itself.

When Cork Protocol was exploited despite three external audits, the issue wasn't that the auditors were incompetent. It was that the vulnerability class was too new to be in their checklists. When Balancer lost $121M, it wasn't because they were careless. It was because no one could have predicted how new leverage tokens would interact with old code.

These aren't failures of diligence. They're failures of methodology. The "audit and pray" approach is structurally insufficient for the threat model we face.

What Changes With Proactive Tooling

Security teams that implement comprehensive proactive tooling report several shifts:

Faster development cycles. When developers get immediate feedback on potential vulnerabilities during development, they don't wait weeks for audit results to learn they need to refactor.

More effective audits. External auditors spend less time finding the low-hanging fruit that tools catch automatically. They can focus on novel attack vectors and complex business logic vulnerabilities.

Quantifiable security metrics. Mutation testing in particular gives teams a concrete number: what percentage of your test suite actually validates your security assumptions. Branch coverage tells you if you ran the code. Mutation testing tells you if you actually tested it.
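As a hedged, hypothetical example of that distinction (Foundry-style test, our assumption): the first test below executes the validation code, so coverage is green, but it asserts nothing, so a mutant that weakens the check still passes. The second test is the one a mutation run pushes you to write.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "forge-std/Test.sol";

// Hypothetical code under test.
contract Limits {
    uint256 public constant MAX_WITHDRAW = 100 ether;

    function withdrawAllowed(uint256 amount) public pure returns (bool) {
        return amount <= MAX_WITHDRAW;
    }
}

contract LimitsTest is Test {
    Limits limits;

    function setUp() public {
        limits = new Limits();
    }

    // Coverage: green -- the line runs. Mutation score: poor -- flip `<=` to
    // `>=` (or delete the check) and this still passes, because it never
    // asserts on the result.
    function test_runsButAssertsNothing() public view {
        limits.withdrawAllowed(50 ether);
    }

    // This one actually kills the mutant.
    function test_rejectsOversizedWithdraw() public view {
        assertFalse(limits.withdrawAllowed(200 ether));
    }
}
```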

Confidence in deployed code. When you've run static analysis, mutation testing, fuzzing, and formal verification before your external audit, and then the audit comes back with minimal findings, you can deploy with genuine confidence. Not hope. Confidence.

Moving Beyond "Audit and Pray"

The six exploits we examined represent over $400M in losses. All on audited code. All preventable with existing tooling.

The common thread isn't that audits are bad. It's that audits alone are insufficient. They're a necessary layer in a multi-layered security strategy, but they cannot be the only layer.

Formal methods have proven themselves in aerospace, medical devices, and other industries where software failure means catastrophic loss. Web3 is no different. When your smart contract controls hundreds of millions in TVL, failure is catastrophic. The code is mission-critical.

The tools exist. The methodologies are proven. The cost is minimal compared to a single exploit.

The question isn't whether proactive security tooling works. These six cases prove it does. The question is whether teams will adopt it before they become case study number seven.

Secure Your Protocol Before It Becomes a Case Study

Every protocol that's been exploited believed their audits were enough. They followed best practices. They hired top auditing firms. They still lost millions.

Olympix gives you the proactive security layer that catches what audits miss. Static analysis, mutation testing, automated formal verification, and fuzzing built into your development workflow. The same tooling that could have prevented each of these six exploits.

Start with a free security assessment. We'll run our tools on your codebase and show you exactly what vulnerabilities exist today. No sales pitch. Just findings.

Book your first scan FREE so the next exploit isn't yours.

