Human-in-the-Loop Testing Explained
Human-in-the-loop means you can pause AI exploration at any time, take manual control, guide the AI, or override its decisions. It's like pair programming with an AI - the AI does most of the work, but you're always there to help when needed.
What Does "In the Loop" Mean?
Traditional automated testing is hands-off:
- You write a script
- You run it
- You get results
- No interaction during execution
Human-in-the-loop means you're part of the process:
- You watch the AI work
- You can pause and intervene
- You can guide or override
- You work together
How It Works in Rihario
1. Live View
You see exactly what the AI is doing in real time:
- What page it's on
- What it's clicking
- What it's typing
- What it's noticing
- What decisions it's making
This transparency builds trust. You're not waiting for results - you're watching the process.
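To make the idea concrete, you can think of the live view as a stream of step events. Here's a minimal TypeScript sketch of that mental model - the event shape and field names are assumptions for illustration, not Rihario's actual format:

```typescript
// Hypothetical shape of a live exploration event (illustrative, not Rihario's real format).
type ExplorationEvent = {
  timestamp: string;
  page: string;                  // URL the AI is currently on
  action: "navigate" | "click" | "type" | "observe" | "decide";
  target?: string;               // element or field the action applies to
  detail?: string;               // what was typed, noticed, or decided
};

// Render each event as it arrives so you can follow the AI's work live.
function renderLiveEvent(event: ExplorationEvent): void {
  const target = event.target ? ` -> ${event.target}` : "";
  const detail = event.detail ? `: ${event.detail}` : "";
  console.log(`[${event.timestamp}] ${event.action}${target}${detail} (on ${event.page})`);
}

// Replaying a few sample events the way a live view might present them.
const sampleEvents: ExplorationEvent[] = [
  { timestamp: "10:02:01", page: "/products", action: "click", target: "Add to cart" },
  { timestamp: "10:02:03", page: "/cart", action: "observe", detail: "Cart shows 1 item" },
  { timestamp: "10:02:04", page: "/cart", action: "decide", detail: "Try the checkout flow next" },
];
sampleEvents.forEach(renderLiveEvent);
```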
2. Pause Anytime
During exploration, you can pause:
- Click "Pause" button
- Exploration stops immediately
- Browser state is preserved
- You can inspect what's happening
Useful when:
- The AI is going down the wrong path
- You want to check something manually
- You need to provide additional context
- Something unexpected happens
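Under the hood, you can picture pause as a flag the exploration loop checks before every step, so nothing about the browser is torn down. A minimal sketch of that idea with hypothetical names - a mental model, not Rihario's implementation:

```typescript
type ExplorationState = "running" | "paused" | "stopped";

// Toy controller showing how pause/resume can work without losing browser state.
class ExplorationController {
  private state: ExplorationState = "running";

  pause(): void {
    if (this.state === "running") this.state = "paused";
  }

  resume(): void {
    if (this.state === "paused") this.state = "running";
  }

  stop(): void {
    this.state = "stopped";
  }

  // The loop checks the flag before each step, so pausing halts new actions
  // while leaving the current page exactly as it is.
  async run(steps: Array<() => Promise<void>>): Promise<void> {
    for (const step of steps) {
      while (this.state === "paused") {
        await new Promise<void>((resolve) => setTimeout(() => resolve(), 200));
      }
      if (this.state === "stopped") return;
      await step();
    }
  }
}
```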
3. Take Control
When paused, you can take manual control:
- Click "Take Control"
- Interact with the page yourself
- Navigate, fill forms, click buttons
- Do whatever you need to do
Common use cases:
- Authenticate manually - Log in if the AI can't handle your auth flow
- Navigate to specific state - Go to a page the AI might not find
- Handle edge cases - Deal with situations the AI struggles with
- Set up test data - Create accounts, configure settings, etc.
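A rough picture of the handoff: exploration pauses, you drive the same browser session, and the AI later resumes from whatever state you leave it in. The sketch below assumes a simplified session interface and helper names - they're illustrative, not Rihario's API:

```typescript
// Simplified stand-in for the shared browser session (illustrative only).
interface BrowserSession {
  currentUrl(): Promise<string>;
}

type ManualTask =
  | "authenticate"
  | "navigate to a specific state"
  | "handle an edge case"
  | "set up test data";

// Pause, let the human finish the task in the live browser, then resume
// the AI from whatever page and state the human left behind.
async function handOffToHuman(
  session: BrowserSession,
  task: ManualTask,
  waitForResumeClick: () => Promise<void>,
): Promise<string> {
  console.log(`Paused for a manual step: ${task}. Click Resume when you're done.`);
  await waitForResumeClick();              // resolves once you hand control back
  const resumeFrom = await session.currentUrl();
  console.log(`Resuming exploration from ${resumeFrom}`);
  return resumeFrom;                       // the AI picks up from here
}
```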
4. Resume or Guide
After taking control, you can:
- Resume exploration - Let the AI continue from where you left off
- Provide guidance - Give the AI new instructions for what to check
- Continue manually - Keep exploring yourself if needed
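These three options can be modeled as a single choice that the paused exploration acts on. A small sketch, with hypothetical type names:

```typescript
// Hypothetical model of what can happen after you take control.
type NextStep =
  | { kind: "resume" }                          // AI continues from the current state
  | { kind: "guide"; instruction: string }      // AI continues with new instructions
  | { kind: "manual" };                         // you keep driving the browser yourself

function describeNextStep(choice: NextStep): string {
  switch (choice.kind) {
    case "resume":
      return "AI resumes exploring from where you left off.";
    case "guide":
      return `AI resumes with a new focus: "${choice.instruction}".`;
    case "manual":
      return "Exploration stays paused while you continue manually.";
  }
}

console.log(describeNextStep({ kind: "guide", instruction: "focus on the checkout flow" }));
```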
5. Override Decisions
You can override AI decisions:
- Force it to click a specific element
- Provide different input
- Skip certain steps
- Change exploration focus
The AI adapts to your overrides and continues exploring based on your guidance.
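One way to picture overrides is a queue of human instructions that the AI consults before each of its own decisions - if an override is waiting, it wins. A minimal sketch under that assumption (types and names are illustrative):

```typescript
// Hypothetical override shapes matching the list above.
type Override =
  | { kind: "click"; selector: string }                 // force a specific element
  | { kind: "input"; selector: string; value: string }  // provide different input
  | { kind: "skip" }                                    // skip the current step
  | { kind: "refocus"; instruction: string };           // change exploration focus

class OverrideQueue {
  private pending: Override[] = [];

  push(override: Override): void {
    this.pending.push(override);
  }

  // Before each AI decision, a pending human override takes priority.
  next(aiDecision: Override): Override {
    return this.pending.shift() ?? aiDecision;
  }
}

// Usage: the human forces a click, which wins over the AI's planned action.
const overrides = new OverrideQueue();
overrides.push({ kind: "click", selector: "#checkout-button" });
const decision = overrides.next({ kind: "click", selector: "#promo-banner" });
console.log(decision); // { kind: "click", selector: "#checkout-button" }
```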
Why This Matters
Trust Through Transparency
Watching the AI work builds trust. You see:
- What it's actually doing
- Why it made decisions
- When it's struggling
- When it's working well
This transparency makes you confident in the results.
Handling Edge Cases
AI can't handle everything:
- Complex authentication flows
- CAPTCHAs and MFA
- Multi-step processes requiring human judgment
- Situations requiring domain knowledge
Human-in-the-loop lets you handle these cases manually, then let the AI continue.
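A simple way to think about this: before attempting a step, check whether the current page looks like a blocker that needs a human (login wall, CAPTCHA, MFA prompt) and hand off instead of guessing. The detection below is a toy heuristic for illustration only:

```typescript
// Illustrative blocker detection; real signals would come from the page itself.
type Blocker = "login" | "captcha" | "mfa" | null;

function detectBlocker(pageText: string): Blocker {
  const text = pageText.toLowerCase();
  if (text.includes("captcha")) return "captcha";
  if (text.includes("verification code") || text.includes("two-factor")) return "mfa";
  if (text.includes("sign in") || text.includes("log in")) return "login";
  return null;
}

// If a blocker is found, the exploration asks for human help instead of guessing.
function planNextStep(pageText: string): string {
  const blocker = detectBlocker(pageText);
  return blocker
    ? `Pause and hand off to human: ${blocker} requires manual handling.`
    : "Continue automated exploration.";
}

console.log(planNextStep("Please enter the verification code we sent you"));
// -> "Pause and hand off to human: mfa requires manual handling."
```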
Learning and Improvement
When you guide the AI, it learns:
- Your preferences
- Your app's patterns
- What you consider important
- How to handle your specific edge cases
Over time, the AI gets better at exploring your specific app.
Real-World Examples
Example 1: Authentication Required
- You start an exploration on a protected page
- AI hits the login screen
- You pause and take control
- You log in manually
- You resume exploration
- AI continues exploring authenticated areas
Example 2: Wrong Path
- AI starts exploring
- You notice it's going down the wrong path
- You pause and provide new instructions: "focus on the checkout flow"
- AI resumes with new focus
- Exploration continues on the right path
Example 3: Complex Form
- AI reaches a complex form
- It struggles to fill it correctly
- You pause and take control
- You fill the form manually with correct data
- You resume exploration
- AI continues from the submitted form
When to Intervene
You should intervene when:
- AI is stuck - Can't proceed past a blocker
- AI is wrong - Making incorrect decisions
- Edge case encountered - Situation requires human judgment or domain knowledge
- Authentication required - Can't proceed without login
- You want to guide - Know a better path to explore
You don't need to intervene when:
- AI is working fine
- Exploration is going smoothly
- No blockers or issues
- You're just observing
Comparison to Traditional Testing
| Aspect | Traditional Testing | Rihario |
|---|---|---|
| Visibility | See results after execution | Watch execution live |
| Intervention | Can't intervene during execution | Can pause and take control anytime |
| Guidance | Must write new script for changes | Can guide mid-exploration |
| Edge Cases | Must code handling in script | Handle manually, then resume |
Best Practices
- Watch first, intervene when needed - Let the AI work, but step in when necessary
- Provide clear guidance - When you intervene, be specific about what you want
- Use for edge cases - Handle complex situations manually, let AI handle routine exploration
- Learn from interventions - Notice patterns in when you need to step in