On November 3, 2025, Balancer V2 lost $121M to a rounding error.
Not a reentrancy bug. Not an access control failure. Not a flash loan exploit. A rounding error. The kind of vulnerability that doesn't look dangerous in code review, passes comprehensive test suites, and survives multiple top-tier audits.
This isn't a story about Balancer being incompetent. Balancer represents the best case scenario in DeFi security: established protocol, experienced team, resources to hire the best auditors, and a security-conscious culture. They did everything the industry tells you to do.
They still lost $121M.
That's not a Balancer problem. That's an industry problem.
The Vulnerability No One Was Looking For
The exploit weaponized asymmetric rounding in Composable Stable Pools. Specifically, the protocol used mulDown for upscaling token amounts but inconsistent rounding for descaling. When combined with rate providers (yield-bearing tokens with dynamic exchange rates), this created a systematic bias that could be compounded through repeated micro-swaps.
The attacker didn't brute force anything. They performed approximately 100 billion iterations across 25 refinement loops to identify the exact swap amounts that would land tokens on wei-level "rounding cliffs"—the precise edges where truncation errors compound.
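A toy version of that calibration search can be sketched in a few lines of Python. Everything here is illustrative: `RATE` is a made-up rate provider value, and the real attack searched full multi-hop swap paths, not a single scaling operation.

```python
ONE = 10**18  # 18-decimal fixed point, as in Balancer's FixedPoint library
RATE = 1_037_123_456_789_012_345  # hypothetical rate provider value (~1.037)

def truncation_loss(amount: int) -> int:
    # Value discarded by floor(amount * RATE / 1e18): the division remainder.
    return (amount * RATE) % ONE

# Scan candidate amounts and rank them by how much value the floor
# truncates. Amounts near the top sit on "rounding cliffs". The actual
# search ran ~100 billion iterations of a far more sophisticated version.
cliff_amount = max(range(1, 10_000), key=truncation_loss)
```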
Then they executed atomic batchSwap transactions that:
- Positioned pools at exploitable balance states (as low as 9 wei)
- Executed crafted EXACT_OUT swaps that understated required input amounts
- Extracted value by swapping back at deflated prices
- Repeated inside single transactions to compound errors without triggering sanity checks
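The compounding mechanic in the last step can be modeled with a deliberately simplified round trip. This is not the actual swap math; the rate and the cliff-adjacent amount are made-up values chosen so each iteration truncates exactly one wei.

```python
ONE = 10**18
RATE = 15 * 10**17  # hypothetical 1.5x rate provider value

def mul_down(a: int, b: int) -> int:
    return a * b // ONE  # floor, as in Balancer's FixedPoint.mulDown

def div_down(a: int, b: int) -> int:
    return a * ONE // b  # floor division in the other direction

# Each upscale/descale round trip on a cliff-adjacent amount truncates a
# wei in one direction; repeating inside a single transaction compounds
# the loss without any individual step looking anomalous.
amount = 10**6 + 1  # hypothetical cliff-adjacent amount
leaked = 0
for _ in range(100):
    leaked += amount - div_down(mul_down(amount, RATE), RATE)
```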
The attack played out across Ethereum, Arbitrum, Base, Optimism, and Polygon simultaneously. Despite rapid response that protected or recovered $45.7M, the damage was catastrophic.
Why Traditional Security Failed
Here's what makes this brutal: the vulnerable code had been audited. Multiple times. By firms you've heard of.
The auditors weren't incompetent. The vulnerability is structurally invisible to traditional audit methodology.
Rounding looks safe in isolation. Using mulDown for upscaling? That's textbook correct. Downward rounding typically favors the protocol when calculating amounts users owe. Reviewers saw exactly what they expected to see.
The danger emerges from interaction. The vulnerability required understanding how _upscale(), _scalingFactors(), rate providers, and swap paths interact across multiple operations in specific pool states. Manual code review can't systematically trace these interaction paths.
Rate providers amplify negligible into exploitable. When scaling factors only handle decimal normalization (10^(18-decimals)), rounding truncation is economically meaningless. But Balancer embedded dynamic token exchange rates inside scaling factors. Suddenly, those "safe" rounding operations were amplifying material percentage differences, not dust.
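The difference is easy to quantify in a sketch (the token decimals, amount, and rate value are all invented for illustration): with a purely decimal scaling factor, an upscale/descale round trip is lossless because powers of ten divide evenly, but folding a dynamic rate into the factor makes the same round trip truncate value.

```python
ONE = 10**18

def mul_down(a: int, b: int) -> int:
    return a * b // ONE

def div_down(a: int, b: int) -> int:
    return a * ONE // b

def roundtrip(amount: int, sf: int) -> int:
    # Upscale with floor, then descale with floor.
    return div_down(mul_down(amount, sf), sf)

amt = 123_457
dec_sf = 10**12 * ONE             # decimal-only factor for a 6-decimal token
rate = 1_037_123_456_789_012_345  # hypothetical rate provider value (~1.037)
rate_sf = mul_down(dec_sf, rate)  # dynamic rate folded into the scaling factor

assert roundtrip(amt, dec_sf) == amt       # powers of ten: lossless
assert roundtrip(amt, rate_sf) == amt - 1  # dynamic rate: a wei truncated away
```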
Testing can't catch adversarial edges. The attacker found exploitable states through massive simulation. Standard test suites don't include 100 billion iteration adversarial searches. They test happy paths, not the mathematical cliffs at pool edges.
This is precision engineering exploitation. It requires cross-component analysis, stateful reasoning, and adversarial calibration. Those aren't audit capabilities. Those are analysis capabilities.
The Industry's Blind Spot
DeFi security still operates on 2020 assumptions: reentrancy bad, access control important, flash loans scary. We've built an entire security apparatus around preventing the exploits we've already seen.
Meanwhile, the actual frontier of DeFi attacks has moved to mathematical precision vulnerabilities that:
- Appear safe in isolation
- Require specific conditions to exploit
- Compound through repeated operations
- Break protocols at low-liquidity edges
- Involve seemingly negligible amounts (wei, not ether)
The Balancer exploit required three preconditions:
- Underlying rounding error in exact-out swaps
- Rate providers to amplify precision loss
- Low-liquidity states to magnify impact
Only Composable Stable Pools with BPT and rate providers satisfied all three. That's why this passed audits. It wasn't exploitable in the general case. It was exploitable in a specific configuration that emerged from the combination of features.
Traditional audits don't catch this because they're not designed to. They're designed to find known vulnerability patterns in static code. They're not designed to simulate how components interact under adversarial conditions across state space.
What Olympix Found (And When)
After the exploit, we ran Balancer's Composable Stable Pool contracts through BugPoCer, our automated vulnerability detection system.
High Severity: Biased Rounding in Upscale with Rate-Augmented Scaling Factor
The finding identified the exact mechanism:
```
_upscale(amount, scalingFactor) = floor(amount * scalingFactor / 1e18)
```
When scalingFactor includes a non-unitary rate (yield-accruing or rebasable token), small amounts or low-liquidity conditions cause the floor operation to produce material percentage differences.
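Concretely, at the wei scale the floor is no longer dust. A quick check with a hypothetical 1.5x rate and a 9-wei amount (the exploit drove pool balances as low as 9 wei):

```python
ONE = 10**18

def mul_down(a: int, b: int) -> int:
    return a * b // ONE  # floor, as in FixedPoint.mulDown

rate = 15 * 10**17           # hypothetical 1.5x rate
scaled = mul_down(9, rate)   # true value 13.5, floored to 13
loss_pct = (13.5 - scaled) / 13.5  # ~3.7% of the amount lost in one operation
```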
This becomes exploitable when combined with:
- Multi-step swap paths (GIVEN_IN + GIVEN_OUT)
- Repeated scaling and descaling
- Attacker-controlled pool positioning
The system flagged this because it analyzes:
- Rounding direction consistency across operations
- Amplification effects of dynamic rates in scaling factors
- Cumulative precision loss across multi-step operations
- Exploitability conditions under various pool states
This is exactly the vulnerability pattern that enabled the $121M drain.
The Fix That Would Have Worked
The core issue: embedding dynamic, non-unitary rates inside scaling factors while maintaining downward rounding bias.
The fix is straightforward:
```solidity
// Instead of: _upscale(amount, scalingFactor_with_rate)
// Which always rounds down via mulDown
// Use: Separate rate application with compensating bias
scaled = _upscaleDecimals(amount, decimalScalingFactor); // normalize decimals
rateAdjusted = _applyRate(scaled, rate, ROUND_UP);       // apply rate, round up to favor pool
```
Separate decimal normalization from rate application. Ensure rate operations don't inherit the downward bias appropriate only for decimal scaling.
With this architectural change, the exploit becomes impossible. No amount of iteration finds a rounding cliff because the bias consistently favors the pool.
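The property that change enforces can be checked exhaustively over small amounts (Python sketch; `RATE` is a made-up rate provider value): flooring the rate application can understate what a user owes on an EXACT_OUT swap, while ceiling it never can, so there is no downward cliff left to farm.

```python
from fractions import Fraction

ONE = 10**18
RATE = 1_037_123_456_789_012_345  # hypothetical rate provider value

def mul_down(a: int, b: int) -> int:
    return a * b // ONE  # floor

def mul_up(a: int, b: int) -> int:
    p = a * b
    return 0 if p == 0 else (p - 1) // ONE + 1  # ceiling of p / 1e18

# The pool computes how much the user must pay in. Flooring the rate
# application can understate it (the bias the attacker farmed); ceiling
# it always errs in the pool's favor.
for amount in range(1, 2_000):
    exact = Fraction(amount * RATE, ONE)
    assert mul_down(amount, RATE) <= exact  # can understate
    assert mul_up(amount, RATE) >= exact    # never understates
```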
Why V3 Survived (And What That Tells Us)
Balancer V3 was completely unaffected. Not because developers knew about this specific attack—they didn't. V3 survived because its architecture made entire classes of precision vulnerabilities impossible:
- Consistent 18-decimal precision: Uniform precision across all calculations
- Explicit rounding controls: Every operation has enforced rounding direction
- Separated rate application: Rates aren't embedded in scaling factors
- Formal verification: Development included properties for roundtrip swap invariance
V3 demonstrates something crucial: architecture matters more than audits.
You can't audit your way to security when vulnerabilities emerge from component interaction. You need architectural principles that prevent vulnerability classes, not just catch individual bugs.
V3 wasn't more audited than V2. It was better designed.
The Uncomfortable Truth
If Balancer—with their resources, expertise, and security investment—can lose $121M to a precision bug, what does that mean for everyone else?
The uncomfortable answer: most protocols have exploitable precision vulnerabilities right now. They just haven't been found yet.
Rate providers are everywhere in DeFi. Yield-bearing tokens, LSTs, and rebasing assets are core primitives. Every protocol integrating these is making mathematical assumptions about how rounding behaves when you embed dynamic rates in core calculations.
Most of those assumptions are wrong.
The attack pattern that hit Balancer—find the mathematical edge case, calibrate through simulation, execute atomically—works anywhere precision loss can compound. Which is most of DeFi.
What the Industry Needs to Accept
Point-in-time audits don't work for mathematical vulnerabilities.
Audits are snapshots. They catch what reviewers know to look for. But precision bugs require:
- Adversarial simulation to find exploitable states
- Cross-component interaction analysis
- Understanding how features combine to create vulnerabilities
- Continuous analysis as code evolves
That's not a manual process. That's an automated analysis process.
"Audited" doesn't mean "secure." It means "reviewed at one point in time by humans with specific expertise and limited hours." For obvious bugs and known patterns, that works. For mathematical precision vulnerabilities that emerge from component interaction under specific conditions, it fails.
Testing can't replace analysis. You can't test your way to finding rounding cliffs. The attacker used 100 billion iterations to find exploitable states. Test suites run thousands of cases, not billions. And they test expected behavior, not adversarial edges.
Security needs to be continuous. Code changes. Dependencies update. New features combine with old features in unexpected ways. Security analysis that happens once before deployment and never again is security theater.
What Protocols Should Actually Do
1. Enforce biased rounding everywhere. Every rounding operation should favor the protocol, never the user. Be especially careful when embedding dynamic values in scaling operations.
2. Separate concerns in mathematical operations. Don't mix decimal normalization with rate application. Don't embed dynamic values in supposedly static scaling factors.
3. Implement invariant guards. Validate that core invariants (like pool invariant D) cannot decrease through normal operations. Catch cumulative precision loss before it's exploitable.
4. Test adversarial edges, not just happy paths. Simulate attackers searching for rounding cliffs at unusual states. Low liquidity, extreme ratios, repeated operations.
5. Use continuous automated analysis. Not just pre-deployment audits. Ongoing security that runs on every code change and analyzes how components interact.
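A minimal sketch of the invariant guard from item 3, with everything hypothetical: `ToyPool` uses a constant-product invariant as a stand-in for the stable-pool invariant D, and `guarded` is just the snapshot-check-revert pattern.

```python
class ToyPool:
    # Minimal stand-in for a pool; constant product instead of invariant D.
    def __init__(self, x: int, y: int):
        self.x, self.y = x, y

    def invariant(self) -> int:
        return self.x * self.y

def guarded(pool: ToyPool, op) -> None:
    # Snapshot the invariant, run the operation, revert if value leaked
    # out. Cumulative rounding loss surfaces here before it compounds
    # into something exploitable.
    before = pool.invariant()
    op(pool)
    if pool.invariant() < before:
        raise RuntimeError("invariant decreased: possible precision leak")

pool = ToyPool(1_000, 1_000)
# A legitimate swap keeps the invariant non-decreasing and passes:
guarded(pool, lambda p: (setattr(p, "x", 1_100), setattr(p, "y", 910)))
```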
The Balancer Team Did Everything Right (Except One Thing)
Let's be clear: Balancer's response was excellent. Rapid coordination with security partners, effective use of pause mechanisms, transparent communication, and successful recovery efforts protected $45.7M in user funds.
The V3 architecture demonstrates sophisticated security thinking. The team clearly understands DeFi security at a deep level.
But they operated under the same broken security model everyone else uses: audit before deployment, monitor after, hope nothing breaks in between.
That model doesn't work for precision vulnerabilities. It can't. The vulnerability class requires capabilities that traditional audits don't provide.
This isn't about Balancer being bad at security. It's about the industry not yet understanding what DeFi security actually requires.
What Comes Next
The next major exploit won't look like Balancer. It won't look like Nomad. It won't look like anything we've seen before.
It will be a novel combination of seemingly safe operations that interact in unexpected ways under specific conditions that emerge from feature composition.
Traditional security approaches can't find these. They're looking for patterns they recognize, not patterns that haven't been exploited yet.
The industry needs to move from reactive security (audit and pray) to proactive security (continuous verification). That means:
- Automated analysis that systematically explores interaction space
- Architectural principles that prevent vulnerability classes
- Continuous monitoring as code and conditions change
- Understanding that "audited" isn't the same as "secure"
Mathematical vulnerabilities exist in production code right now. The only question is whether protocols will find them before attackers do.
Balancer lost $121M because the industry still treats security as a checklist item. Audit: ✓. Deploy: ✓. Hope: ✓.
That model is broken. The Balancer exploit proved it. The next exploit will prove it again.
Don't let mathematical vulnerabilities drain your TVL. Olympix's automated security analysis catches precision bugs, biased rounding, and rate amplification vulnerabilities before deployment. Get your first scan FREE!