The Difference Between Finding Patterns and Finding Vulnerabilities
Author: Lucas Amorim
Full technical article link: https://medium.com/@lucasaamorim/schnorr-nonce-generation-when-random-is-and-isnt-a-vulnerability-b79d33676c0b
In 2010, security researchers revealed that Sony had been signing PlayStation 3 code with the same ECDSA nonce on every signature. Not similar nonces, but the *same* value. Because a reused nonce reduces private key recovery to simple algebra, the researchers extracted Sony's signing key, and suddenly anyone could sign code that the PS3 would trust as legitimate. The problem? A nonce that should have been random was hardcoded to a constant value.
This incident, along with the 2013 Android `SecureRandom` vulnerability that exposed Bitcoin wallet private keys, fundamentally shaped how security tools approach signature implementations. The lesson seemed clear: random nonce generation is dangerous, deterministic derivation is safe.
But security lessons have a way of calcifying into rigid rules. And rigid rules, applied without context, generate noise.
The Noise Problem in Security Analysis
Part of my work as a Security Researcher at Webacy is building static analysis tools. Recently, I've been developing security analysis tooling for crypto projects written in Rust. Before writing the detectors, I audited several open source Rust cryptography projects to calibrate detection heuristics against real-world code patterns.
During an audit, I encountered this pattern:
```rust
pub fn sign<R: Rng + CryptoRng, T: Tagged>(
    &self,
    rng: &mut R,
    msg: &T
) -> Signature {
    let k: Fr = rng.gen(); // Nonce generation
    // ...
}
```
A pattern-based analyzer sees `rng.gen()` producing a nonce and matches it against known vulnerability signatures. The association is immediate: external randomness for nonce generation, the same pattern behind the PS3 and Android failures. Flag it, mark it critical, recommend RFC 6979 deterministic derivation.
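To make that concrete, here is roughly the rule such a tool encodes, reduced to a caricature (a hypothetical sketch, not any real detector's implementation):

```rust
// Hypothetical sketch of a pattern-based rule: flag any line where an
// RNG call flows into something that looks like a nonce.
fn flag_line(line: &str) -> Option<&'static str> {
    let rng_call = line.contains("rng.gen()") || line.contains("random(");
    let nonce_sink = line.contains("let k") || line.to_lowercase().contains("nonce");
    (rng_call && nonce_sink)
        .then_some("CRITICAL: random nonce generation (recommend RFC 6979)")
}

fn main() {
    // Fires on the audited code above with zero knowledge of the CryptoRng
    // bound, the deployment environment, or the protocol specification.
    let line = "let k: Fr = rng.gen(); // Nonce generation";
    assert!(flag_line(line).is_some());
}
```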
The pattern match is correct. But pattern matching isn't risk assessment.
The Threat Model Is the Answer
Risk assessment starts with a question static analysis can't ask: *what are we defending against?*
The PS3 failure wasn't caused by random nonce generation; it was caused by *constant* nonce generation. The Android failure wasn't caused by external RNGs; it was caused by a PRNG that wasn't properly seeded. Both are implementation defects, not inherent flaws in random nonce generation as a design choice.
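The algebra behind that distinction is worth one concrete look. In a Schnorr-style scheme the response is s = k + e·x (mod q); if the same nonce k appears in two signatures, it cancels under subtraction and the private key x falls out. A toy demonstration over a small prime field, with all values invented for illustration:

```rust
// Toy demonstration (nothing here is real crypto): recover a Schnorr
// private key from two signatures that reuse the same nonce.
const Q: i64 = 7919; // small prime standing in for the group order

// Modular exponentiation by squaring.
fn modpow(mut base: i64, mut exp: i64, m: i64) -> i64 {
    let mut result = 1;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % m;
        }
        base = base * base % m;
        exp >>= 1;
    }
    result
}

// Modular inverse via Fermat's little theorem (valid because Q is prime).
fn modinv(a: i64) -> i64 {
    modpow(a.rem_euclid(Q), Q - 2, Q)
}

fn main() {
    let x = 1234; // the private key we are about to "steal"
    let k = 42;   // the nonce, mistakenly reused across two signatures

    // Schnorr responses: s = k + e * x (mod Q), e is the per-message challenge.
    let (e1, e2) = (17, 99);
    let s1 = (k + e1 * x) % Q;
    let s2 = (k + e2 * x) % Q;

    // The shared k cancels: s1 - s2 = (e1 - e2) * x,
    // so x = (s1 - s2) * (e1 - e2)^-1 (mod Q).
    let recovered = (s1 - s2).rem_euclid(Q) * modinv(e1 - e2) % Q;

    assert_eq!(recovered, x);
    println!("recovered private key: {recovered}");
}
```

This is why the PS3's constant nonce was fatal: every pair of firmware signatures formed exactly this system of equations.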
When we evaluate the Rust code above, the threat model determines everything:
Nonce reuse through RNG failure. The type signature requires `CryptoRng`—Rust's marker trait for cryptographically secure PRNGs. This isn't the Android scenario where `SecureRandom` silently fell back to predictable output.
Nonce reuse through state duplication. VM snapshots, process forks, container checkpointing—these can replicate RNG state across instances. A signing service on bare metal has a different exposure than one running in ephemeral cloud functions.
Specification compliance. Some protocols explicitly require random nonces, such as multi-party schemes like MuSig2, where deterministic derivation opens the door to key-extraction attacks across signing sessions. Others mandate deterministic derivation, such as EdDSA or ECDSA per RFC 6979. The "correct" implementation depends on what you're implementing, and the specification is the authority.
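The first of those considerations is partly enforceable at compile time. A minimal sketch, assuming the `rand` crate with its `small_rng` feature enabled (`draw_nonce` is an invented name):

```rust
use rand::rngs::{OsRng, SmallRng};
use rand::{CryptoRng, Rng, SeedableRng};

// Only types marked as cryptographically secure can supply the nonce.
fn draw_nonce<R: Rng + CryptoRng>(rng: &mut R) -> u64 {
    rng.gen()
}

fn main() {
    let mut os_rng = OsRng; // OS-backed CSPRNG: implements CryptoRng
    let _nonce = draw_nonce(&mut os_rng); // compiles

    let _fast_rng = SmallRng::from_entropy(); // fast PRNG, not cryptographically secure
    // draw_nonce(&mut _fast_rng); // compile error: SmallRng does not implement CryptoRng
}
```

The type system can rule out generators that were never designed to be secure; what it cannot verify is the health of the entropy behind `OsRng`, which is exactly where the snapshot scenario above takes over.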
A cryptographic system is secure when it meets its security requirements within its threat model. Pattern recognition from historical incidents should inform that assessment, not replace it.
Why Static Analysis Alone Falls Short
Static analysis excels at what it does: fast, deterministic pattern matching across large codebases. It can find every instance of a potentially dangerous pattern in seconds. But it operates on syntax, not semantics. It sees code, not context.
This creates two failure modes:
False positives flood the results. When every external RNG nonce is flagged as critical, teams learn to ignore the warnings. The signal drowns in noise. Real vulnerabilities hide among hundreds of "issues" that aren't actually issues.
False negatives slip through. A hardcoded nonce that happens to be set through an unusual code path might not match the expected pattern. The tool passes the code as safe because it doesn't *look* like the vulnerability it's searching for.
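As a contrived illustration (hypothetical code, invented names), here is a constant nonce that defeats a detector grepping for literal assignments:

```rust
// Hypothetical false negative: the nonce is effectively constant, but no
// line matches a `let k = <literal>` style pattern.
struct SignerConfig {
    debug_nonce: Option<u64>, // meant for test reproducibility...
}

fn nonce_for(config: &SignerConfig) -> u64 {
    // ...but if a build ships with Some(value) set, every signature
    // shares one nonce, and the private key becomes recoverable.
    config.debug_nonce.unwrap_or_else(|| rand::random())
}

fn main() {
    let prod = SignerConfig { debug_nonce: None };
    let leaky = SignerConfig { debug_nonce: Some(0xDEAD_BEEF) };
    println!("{} {}", nonce_for(&prod), nonce_for(&leaky)); // second value never changes
}
```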
Security analysis requires understanding *why* something is dangerous, not just *what* dangerous patterns look like.
Reasoning as a Force Multiplier
This is why we believe the future of security analysis lies in combining static analysis with reasoning tools: AI/ML systems that can evaluate context the way a human researcher would.
The approach works in layers:
- Static analysis identifies patterns. Fast and deterministic. Every potential issue gets flagged for deeper evaluation.
- Contextual analysis evaluates environment. What's the deployment target? What protocols are being implemented? What does the specification require? What's the threat model?
- Semantic reasoning determines actual risk. Given the pattern, the context, and the threat model, is this a vulnerability, a design decision, or a hardening opportunity?
For the Schnorr nonce example, this means the difference between:
- "Critical: Random nonce generation detected" (static analysis alone)
- "Design Decision: Random nonce generation with CSPRNG. Verify entropy source and check for VM snapshot scenarios in deployment." (reasoning-augmented analysis)
For a deeper technical dive into Schnorr nonce generation and when random truly is (or isn't) a vulnerability, check out [this analysis](https://medium.com/@lucasaamorim/schnorr-nonce-generation-when-random-is-and-isnt-a-vulnerability-b79d33676c0b).
Research Culture as Competitive Advantage
The insight that made this possible didn't come from reading CVE databases. It came from sitting with real code, understanding how cryptographic libraries are actually built, and recognizing that security research and security tooling require different modes of thinking.
Our research process looks like this:
- Audit real implementations. Before building any detector, we examine how the pattern appears in production code. What variations exist? What makes the difference between vulnerable and safe?
- Understand the specification. Cryptographic security isn't about avoiding patterns—it's about meeting security requirements. The spec is the authority.
- Model the threat. Every vulnerability exists within a threat model. An attack that requires physical access to hardware is different from one that works over the network.
- Calibrate against reality. Our detectors are tuned against what we find in the wild, not against synthetic test cases that maximize detection rates.
This research-driven approach lets us build tools that understand security the way researchers do—contextually, holistically, and with appropriate nuance.
The Path to Full Automation
The security industry has chased full automation for decades. The promise is obvious: full coverage without the cost and scalability limits of human auditors. The reality has been tools that generate thousands of findings, most of which are noise, all of which require human triage.
We believe the missing piece was always reasoning: the ability to evaluate context, apply judgment, and distinguish between patterns that look similar but have fundamentally different risk profiles.
By combining fast static analysis with deep contextual reasoning, the next generation of security tools can move toward genuine automation: not just finding issues, but understanding them. Not just flagging patterns, but evaluating risk. Not just generating reports, but providing actionable intelligence.
The goal isn't to replace human expertise. It's to encode that expertise into systems that can apply it consistently, at scale, across every deployment.
This is the research philosophy that drives our work at Webacy. Security isn't about pattern matching; it's about understanding.
---
At Webacy, we're building the future of digital asset security. Our research-driven approach combines deep technical expertise with cutting-edge analysis tools to protect the crypto ecosystem. Learn more at webacy.com.