There is a tension at the center of every AI-powered document processing workflow.
On one side: human review catches errors and ensures quality. On the other: human review is the bottleneck that prevents you from processing at scale.
If you review every single document, quality stays high but throughput stays low. If you skip review entirely, throughput goes up but so do errors. Neither extreme works.
Auto-approve with confidence thresholds is the middle path. It lets you scale processing without abandoning quality — by routing easy documents around the review queue and sending only the difficult ones to humans.
How confidence scoring works
When PaperAI converts a document, the AI generates a confidence score between 0 and 100. This score reflects the AI's assessment of its own output quality — how well it believes it has captured the document's content.
A high score (90+) typically means:
- The document was clearly legible
- The layout was standard and well-structured
- The AI encountered no ambiguous content
- Extraction fields (if configured) were populated with high certainty
A low score (below 80) often means:
- Parts of the document were hard to read (poor scan quality, handwriting)
- The layout was unusual and the AI had to make judgment calls
- Some content was ambiguous or the AI was unsure about certain fields
- The document type did not match what the Flow was configured for
What auto-approve does
When auto-approve is enabled on a Flow, it adds a simple rule:
- Above the threshold: The document is automatically approved. No human review needed.
- Below the threshold: The document goes to the review queue as usual.
The threshold is configurable from 50% to 100%, in 5% increments. The default starting point is 85%.
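The rule above amounts to a single comparison. Here is a minimal sketch of it in Python (the function name and status strings are illustrative, not PaperAI's actual API; how a score exactly at the threshold is handled is an assumption):

```python
def route_document(confidence: float, threshold: float = 85.0) -> str:
    """Route a converted document based on its confidence score.

    Sketch of the auto-approve rule: scores at or above the threshold
    skip human review; everything else goes to the review queue.
    (Treating an exactly-at-threshold score as approved is an assumption.)
    """
    if confidence >= threshold:
        return "auto-approved"
    return "review-queue"
```

With the default 85% threshold, a document scoring 92 would be auto-approved and one scoring 70 would land in the review queue.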
Setting the right threshold
There is no universal correct threshold. It depends on the document type, the consequences of errors, and your team's tolerance for imperfection.
Start at 85-90%
For most document types, 85-90% is a practical starting point. This auto-approves documents where the AI is quite confident and routes anything uncertain to humans.
Monitor for a week
After enabling auto-approve, monitor the auto-approved documents:
- Randomly sample 10-20% and check them manually
- Look for patterns in errors
- Check if the confidence score correlates with actual output quality
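One way to run this spot check is to randomly sample your auto-approved documents and compute the share where a manual reviewer found problems. The sketch below assumes each record carries a `confidence` score and a reviewer-supplied `manual_ok` flag; these field names are hypothetical, not a real PaperAI export format:

```python
import random

def sample_for_audit(auto_approved: list, rate: float = 0.15, seed: int = 0) -> list:
    """Randomly pick roughly `rate` of the auto-approved documents
    for manual checking (10-20% is the suggested range)."""
    rng = random.Random(seed)  # fixed seed so the audit sample is reproducible
    k = max(1, round(len(auto_approved) * rate))
    return rng.sample(auto_approved, k)

def audit_error_rate(audited: list) -> float:
    """Fraction of audited documents where the manual check found errors.

    Each record is assumed to have a boolean 'manual_ok' filled in
    by the human reviewer after checking the output.
    """
    if not audited:
        return 0.0
    errors = sum(1 for d in audited if not d["manual_ok"])
    return errors / len(audited)
```

A persistently nonzero error rate in this sample is the signal that the confidence score is not well calibrated for that document type.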
Adjust based on results
If auto-approved documents have consistent quality: Lower the threshold toward 80%. You are being conservative and can let more documents through.
If you find errors in auto-approved documents: Raise the threshold to 90-95%. The confidence score is not calibrated well enough for this document type at the current level.
If almost nothing gets auto-approved: Lower the threshold or check if the document type is inherently difficult. Some document types (handwritten, mixed-format, heavily annotated) will rarely score above 85%.
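The weekly adjustment can be summarized as a small heuristic: errors slipping through mean the gate should tighten (raise the threshold), while consistently clean results mean it can relax (lower it). This is purely illustrative logic, with the step size following the 5% increments mentioned earlier:

```python
def suggest_threshold(current: float, audit_error_rate: float) -> float:
    """Suggest the next auto-approve threshold from a week of monitoring.

    audit_error_rate: share of sampled auto-approved docs where the
    manual check found errors. Illustrative heuristic only.
    """
    step = 5.0  # thresholds move in 5% increments
    if audit_error_rate > 0:
        # Errors got auto-approved: tighten by raising the threshold.
        return min(current + step, 100.0)
    # Quality held up: relax by lowering it, letting more docs through.
    return max(current - step, 50.0)
```

If almost nothing gets auto-approved even after lowering the threshold, the document type itself may be the problem rather than the setting.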
Which document types work well with auto-approve
Auto-approve works best when the document type is standardized and the AI produces consistent results:
Good candidates:
- Standard invoices from regular vendors
- Typed government forms with fixed layouts
- Digital PDFs of reports and statements
- Standard contracts from templates
- Clean receipts and purchase orders
Poor candidates:
- Handwritten documents (scores are inherently lower)
- Scanned documents with variable quality
- Documents in unfamiliar languages
- Highly specialized documents the AI has not seen before
- Legal or medical documents where errors have serious consequences
The 80/20 approach
In practice, most organizations find that about 80% of their documents are straightforward enough for auto-approve, while 20% genuinely need human attention.
This means auto-approve is not about eliminating review — it is about focusing review on the documents that need it. Your team stops spending time confirming that clean, well-structured documents converted correctly, and starts spending time on the ones where human judgment actually adds value.
The math is compelling. If your team processes 500 documents per month and reviews each one for 3 minutes, that is 25 hours of review time. With auto-approve handling 80% of documents, review time drops to 5 hours — and those 5 hours are spent on the documents that genuinely need attention rather than rubber-stamping obvious conversions.
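A quick back-of-the-envelope check of those figures (the numbers are the example's assumptions, not benchmarks):

```python
docs_per_month = 500
minutes_per_doc = 3
manual_share = 0.2  # the ~20% of documents still routed to human review

# All documents reviewed manually: 500 * 3 = 1500 minutes = 25 hours
review_hours_without = docs_per_month * minutes_per_doc / 60

# Auto-approve handles 80%: 100 docs * 3 min = 300 minutes = 5 hours
review_hours_with = docs_per_month * manual_share * minutes_per_doc / 60
```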
Plan requirements
Auto-approve is available on Business plans and above. Starter and Pro plans require manual review for all documents.
This is intentional. Auto-approve is a scaling feature. At low volumes (Starter and Pro), manual review is feasible and provides a learning period where teams understand the AI's output quality before trusting it to auto-approve.
Setting up auto-approve
Auto-approve is configured in two places:
Per-Flow: When creating or editing a Flow, toggle auto-approve and set the minimum confidence threshold. This is the recommended approach — different document types should have different thresholds.
Organization defaults: Under Organization Settings → Processing, set a default auto-approve threshold that applies to documents processed without a Flow. Flow settings always override organization defaults.
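The precedence rule is simple: a Flow-level setting always wins, and the organization default applies only when no Flow setting exists. A sketch (the function and parameter names are hypothetical, not PaperAI's configuration objects):

```python
from typing import Optional

def effective_threshold(flow_threshold: Optional[float],
                        org_default: Optional[float]) -> Optional[float]:
    """Resolve which auto-approve threshold applies to a document.

    A Flow's own setting always overrides the organization default;
    the default only covers documents processed without a Flow setting.
    Returning None means auto-approve is disabled entirely.
    """
    if flow_threshold is not None:
        return flow_threshold
    return org_default
```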
A cautionary note
Auto-approve is a trust mechanism. Before enabling it:
- Process at least 50 documents of the same type with manual review
- Verify that the AI's confidence scores correlate with actual output quality
- Start with a high threshold (90%+) and lower it gradually
- Never enable auto-approve for document types where errors have legal, financial, or medical consequences — at least not without a secondary verification step
The point of auto-approve is not to remove humans from the loop entirely. It is to remove them from the loop for the easy cases so they can focus on the hard ones.
For more on designing human-in-the-loop workflows, see building a human-in-the-loop document pipeline.
Related resources
- Features overview — Flows, extraction fields, and auto-approve configuration
- Pricing plans — auto-approve is available on Business plans and above