False Positives

Identify and manage false positive findings.

Last updated: January 14, 2026

What Are False Positives?

False positives are findings that look like vulnerabilities but aren't actually exploitable in your specific context.

Why They Happen

  • Scanner doesn't understand full context
  • Protective patterns not recognized
  • Unusual but safe code patterns
  • Dead code being analyzed

ML-Assisted Detection

BlockSecOps uses machine learning to help identify potential false positives.

How It Works

The ML model analyzes each finding based on:

  • Scanner confidence signals
  • Code context (test files, access modifiers)
  • Pattern history (known FP patterns)
  • Multi-scanner consensus

False Positive Score

Each finding shows an FP probability (0-100%):

  • 0-30%: Likely real vulnerability
  • 30-60%: Review recommended
  • 60-100%: Likely false positive

Training the Model

The model improves as you label findings:

  1. Mark findings as "False Positive" or "Confirmed"
  2. Model learns from your labels
  3. Accuracy improves over time

Note: FP detection is included in the Professional tier and above. Custom model training is available on the Enterprise tier.


Identifying False Positives

Questions to Ask

  1. Is the attack path actually possible?

    • Can an attacker reach this code?
    • Do required conditions exist?
  2. Are there protective measures?

    • Access controls?
    • Rate limiting?
    • Guards or checks?
  3. Is the code reachable?

    • Is it dead code?
    • Is it disabled by configuration?
  4. Does context change the risk?

    • Trusted callers only?
    • Limited scope?
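As a concrete case of "is the code reachable?", a finding in code that is disabled by configuration may not be exploitable at all. A minimal sketch (contract and variable names are illustrative, not from the scanner output):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract FeatureFlagged {
    // Defaults to false and has no setter anywhere in the contract.
    bool public legacyModeEnabled;

    // A scanner may flag logic inside this function, but the require
    // makes it unreachable: nothing can ever set legacyModeEnabled.
    function legacyWithdraw() external {
        require(legacyModeEnabled, "legacy mode disabled");
        // ... flagged logic here ...
    }
}
```

The finding may still be worth documenting: if a later upgrade adds a setter for the flag, the dead path becomes live.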

Common False Positive Patterns

Access-Controlled Functions

Finding: "Reentrancy in withdraw()"

Why it may be a false positive:

  • Only admin can call
  • Admin is trusted

Check: Who can actually call this function?
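A sketch of this situation (names are illustrative): the external call happens before any state update, which is the classic reentrancy shape, but the only address that can reach it is the trusted owner.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Treasury {
    address public owner = msg.sender;

    modifier onlyOwner() {
        require(msg.sender == owner, "not owner");
        _;
    }

    // A scanner may flag reentrancy here (external call with no
    // guard), but only the trusted owner can enter this path.
    function withdraw(uint256 amount) external onlyOwner {
        (bool ok, ) = owner.call{value: amount}("");
        require(ok, "transfer failed");
    }

    receive() external payable {}
}
```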

Trusted External Calls

Finding: "Unchecked external call"

Why it may be a false positive:

  • Called contract is verified
  • Called contract is owned by you

Check: What contract is being called?
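For example (hypothetical names), an external call to a contract your own team deployed and verified carries very different risk than a call into arbitrary user-supplied code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IVault {
    function sweep() external;
}

contract Controller {
    // Vault address fixed at deployment; assumed here to be a
    // contract deployed and verified by your own team.
    IVault public immutable vault;

    constructor(IVault _vault) {
        vault = _vault;
    }

    // A scanner may flag this external call. Whether that matters
    // depends entirely on what `vault` points to.
    function sweepVault() external {
        vault.sweep();
    }
}
```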

Safe Math in New Solidity

Finding: "Integer overflow possible"

Why it may be a false positive:

  • Solidity 0.8+ has built-in overflow checks
  • Compiler handles this

Check: What Solidity version is used?
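A minimal illustration: under `pragma solidity ^0.8.0`, the compiler emits checked arithmetic that reverts on overflow, so a detector tuned for pre-0.8 code is flagging behavior the compiler already prevents.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0; // 0.8+ reverts on overflow by default

contract Counter {
    uint256 public total;

    // A pre-0.8-oriented scanner may flag this addition as an
    // overflow risk, but in 0.8+ it reverts instead of wrapping.
    function add(uint256 amount) external {
        total += amount; // checked arithmetic, no SafeMath needed
    }
}
```

The exception: code inside an `unchecked { ... }` block opts back out of these checks, so verify the flagged line is not in one.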

Intentional Patterns

Finding: "Centralized admin control"

Why it may be a false positive:

  • Intentional design decision
  • Admin key is a multisig
  • Documented centralization

Check: Is this by design?
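A sketch of documented, intentional centralization (the multisig setup is illustrative, not a requirement):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PausableByAdmin {
    /// @dev Centralization is intentional: `admin` is set to a 3/5
    ///      multisig at deployment and cannot be changed.
    address public immutable admin;
    bool public paused;

    constructor(address multisig) {
        admin = multisig;
    }

    // Flagged as "centralized admin control" -- accurate, but by
    // design, documented, and mitigated by the multisig admin.
    function setPaused(bool value) external {
        require(msg.sender == admin, "not admin");
        paused = value;
    }
}
```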


How to Verify

Step 1: Read the Finding Carefully

Understand exactly what the scanner is claiming.

Step 2: Examine the Code

Look at the actual code in context:

  • What's the full function?
  • What are the modifiers?
  • What calls this function?

Step 3: Trace the Attack Path

Can an attacker actually:

  1. Reach the vulnerable code?
  2. Control the inputs?
  3. Exploit the condition?

Step 4: Check Protections

Look for:

  • Access modifiers (onlyOwner, onlyAdmin)
  • Guards (require, assert)
  • Reentrancy guards
  • Input validation
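The protections above often appear together. A sketch showing what to look for when verifying a finding (names and the hand-rolled lock are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Guarded {
    address public owner = msg.sender;
    bool private locked; // simple reentrancy lock

    modifier onlyOwner() {
        require(msg.sender == owner, "not owner"); // access modifier
        _;
    }

    modifier nonReentrant() {
        require(!locked, "reentrant call"); // reentrancy guard
        locked = true;
        _;
        locked = false;
    }

    // When verifying a finding on this function, note:
    //  - onlyOwner restricts who can call (access modifier)
    //  - nonReentrant blocks nested calls (reentrancy guard)
    //  - require() bounds the amount (input validation)
    function adminWithdraw(uint256 amount) external onlyOwner nonReentrant {
        require(amount <= address(this).balance, "insufficient balance");
        (bool ok, ) = owner.call{value: amount}("");
        require(ok, "transfer failed");
    }

    receive() external payable {}
}
```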

Marking False Positives

Change Status

  1. Open the finding
  2. Click status dropdown
  3. Select False Positive

Add Explanation

Always add a comment explaining:

  • Why it's a false positive
  • What protection exists
  • Your analysis

Example Comment

"False positive: This function has the onlyOwner modifier and the owner is a 3/5 multisig. The reentrancy pattern exists but cannot be exploited by untrusted parties."


Documentation Best Practices

Be Specific

Bad: "Not exploitable"
Good: "Not exploitable because the function has onlyOwner modifier and owner address is immutable multisig at 0x123..."

Reference the Protection

Point to the specific code that makes it safe:

  • "See line 15: modifier onlyOwner"
  • "Protected by ReentrancyGuard on line 23"

Note Any Conditions

If it would become exploitable under different conditions:

  • "Safe as long as admin key remains in multisig"
  • "Would be risky if upgrade changes access control"

When It's NOT a False Positive

Wishful Thinking

"We won't call it that way" is not a protection.

Future Plans

"We'll add access control later" means it's vulnerable now.

Trusted Admin

If the admin is a single EOA (externally owned account), the centralization risk is real.

Low Probability

"Unlikely to be exploited" doesn't mean it's false.


Reducing False Positives

Use Clear Patterns

Standard patterns are better recognized:

  • OpenZeppelin's ReentrancyGuard
  • Standard access control patterns
  • Well-known security libraries
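For instance, the widely used OpenZeppelin `nonReentrant` modifier is recognized by most scanners and tends to produce fewer reentrancy false positives than a hand-rolled lock. A sketch (the import path shown is for OpenZeppelin Contracts 4.x; 5.x moved the file to `utils/ReentrancyGuard.sol`):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/security/ReentrancyGuard.sol";

contract Claims is ReentrancyGuard {
    mapping(address => uint256) public owed;

    // State is zeroed before the external call, and nonReentrant
    // blocks nested entry -- a pattern scanners recognize well.
    function claim() external nonReentrant {
        uint256 amount = owed[msg.sender];
        owed[msg.sender] = 0;
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
    }

    receive() external payable {}
}
```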

Add NatSpec

Document your intentions:

/// @notice Only callable by owner
/// @dev Owner is a multisig, reentrancy is acceptable
function withdraw() external onlyOwner {
    ...
}

Use Recognized Modifiers

Name modifiers clearly:

  • onlyOwner
  • nonReentrant
  • whenNotPaused

Learning from False Positives

Track Patterns

Note which scanner/detector gives false positives:

  • Is it always wrong in this context?
  • Can you adjust scanner selection?

Provide Feedback

Report consistent false positives:

  • Helps improve scanners
  • Benefits all users

Update Code

Consider:

  • Making the safety more obvious
  • Adding comments for clarity
  • Using standard patterns

FAQ

Q: Should I mark all false positives?
A: Yes. It cleans up your findings and documents decisions.

Q: Can I mark something as false positive if I'm unsure?
A: No. If unsure, investigate more or get another opinion.

Q: Do false positives count toward my vulnerability count?
A: They appear in findings but don't affect summary counts once marked.

Q: What if I marked something as false positive by mistake?
A: Change the status back. All changes are logged.


Next Steps