Draft Documentation
This guide is currently in development. Content may be incomplete or subject to change.
Campaign Evaluations
Understand evaluation patterns, criteria items, and scoring methodology. Learn how scores are calculated and how to interpret evaluation results.
Evaluation Pattern Grids
Below the summary grids on the Dashboard tab, you'll find the Evaluation Pattern grids. These show detailed breakdowns of scores for each evaluation criterion.
Dynamic Content: The pattern grids update based on your checkbox selections in the Campaigns, Call Centers, and Executives grids above.
Evaluation pattern grid showing item scores by campaign
Screenshot coming soon
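The exact filter logic is internal to the dashboard, but as a rough mental model, the pattern grids behave as if every evaluation record were filtered by the checked campaigns, call centers, and executives before item scores are aggregated. The sketch below is illustrative only; the record fields and function names are assumptions, not the product's API.

```python
# A rough mental model only; field and function names are illustrative,
# not the dashboard's actual data model or API.
from dataclasses import dataclass

@dataclass
class EvaluationRecord:
    campaign: str
    call_center: str
    executive: str
    item: str
    score: float

def filter_records(records, selected_campaigns, selected_centers, selected_executives):
    """Keep only records that match every active checkbox selection."""
    return [
        r for r in records
        if r.campaign in selected_campaigns
        and r.call_center in selected_centers
        and r.executive in selected_executives
    ]
```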
Campaign Tabs
The evaluation pattern section is organized into tabs, with one tab for each campaign (engagement) that has evaluation data.
Tab Behavior:
- Each tab shows evaluation items specific to that campaign's evaluation script
- Tabs are dynamically generated based on available data (see the sketch after this note)
- Clicking a tab loads the corresponding evaluation pattern grid
- Filter selections from above affect which data appears in each tab
Note: Different campaigns may have different evaluation scripts with different items. The tabs ensure you see the correct items for each campaign.
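As a rough illustration of the tab behavior described above, the set of tabs can be thought of as the distinct campaigns present in the filtered evaluation data, and clicking a tab restricts the grid to that campaign's items. The names below are assumptions carried over from the earlier sketch, not the dashboard's actual code.

```python
# Illustrative only: one tab per campaign that has evaluation data.
# Reuses the hypothetical EvaluationRecord shape from the earlier sketch.
def campaign_tabs(records):
    """Return the distinct campaigns present in the data, one tab each."""
    return sorted({r.campaign for r in records})

def items_for_tab(records, campaign):
    """Rows shown when a tab is clicked: that campaign's evaluation items only."""
    return [r for r in records if r.campaign == campaign]
```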
Evaluation Items
Each row in the pattern grid represents one evaluation item (criterion). Items are defined in the campaign's evaluation script.
Grid Columns:
| Column | Description |
|---|---|
| Item Name | The main evaluation criterion name |
| Sub-item | Detailed sub-criterion (if applicable) |
| Average Score | Weighted average score for this item (clickable) |
| Critical Errors | Count of critical errors on this item |
| Average % | The average score expressed as a percentage |
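If you work with exported data, one plausible way to picture a single grid row is as a small record with one field per column above. The field names are illustrative only and do not reflect the product's internal data model.

```python
# Hypothetical shape of one pattern-grid row, mirroring the columns above;
# field names are assumptions, not the product's data model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatternGridRow:
    item_name: str           # main evaluation criterion
    sub_item: Optional[str]  # detailed sub-criterion, if applicable
    average_score: float     # weighted average for the item (clickable in the UI)
    critical_errors: int     # count of critical errors on this item
    average_pct: float       # the average score expressed as a percentage
```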
Common Evaluation Items:
- Greeting: Did the agent properly greet the customer?
- Identification: Did the agent verify customer identity?
- Active Listening: Did the agent demonstrate understanding?
- Resolution: Was the customer's issue addressed?
- Closing: Did the agent properly close the conversation?
- Compliance: Did the agent follow required scripts/disclosures?
Scoring Methodology
Scores are calculated using weighted averages based on the number of evaluated calls.
How Scores Work:
- Scale: Scores typically range from 1.0 to 5.0
- Weighted: Each call contributes proportionally to the average
- Color Coding: Green = Meets Target (≥4.0), Yellow = Needs Improvement (3.0-3.9), Red = Below Target (<3.0)
- Target: The default target score is 4.0
Weighted Average Formula: sum of (score × call count) across evaluations ÷ total call count. This gives agents with more evaluated calls proportionally greater influence on the average.
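To make the methodology concrete: an item averaging 4.5 over 10 calls and 3.0 over 30 calls works out to (4.5 × 10 + 3.0 × 30) / 40 = 3.375, which falls in the yellow band. The sketch below applies the same formula and the color thresholds from this guide; the function names and data shapes are illustrative, not part of the product.

```python
def weighted_average(scores_and_counts):
    """Weighted average: sum(score * call_count) / sum(call_count)."""
    total_calls = sum(count for _, count in scores_and_counts)
    if total_calls == 0:
        return None
    return sum(score * count for score, count in scores_and_counts) / total_calls

def color_band(score, target=4.0):
    """Map a score to the dashboard's color coding (thresholds per this guide)."""
    if score >= target:
        return "green"   # Meets Target
    if score >= 3.0:
        return "yellow"  # Needs Improvement
    return "red"         # Below Target

# Illustrative numbers only: 4.5 avg over 10 calls, 3.0 avg over 30 calls
avg = weighted_average([(4.5, 10), (3.0, 30)])  # (45 + 90) / 40 = 3.375
print(avg, color_band(avg))                     # 3.375 yellow
```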
Clickable Scores
Every score value in the evaluation pattern grid is clickable. Clicking a score opens the Reverse Analysis modal for that specific evaluation item.
What Happens When You Click:
- The Reverse Analysis modal opens
- Shows statistics for that specific evaluation item
- Lists top performers (scoring ≥ target)
- Lists executives needing improvement (scoring < target), as illustrated in the sketch after this list
- Displays recommended actions based on the data
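The split between top performers and executives needing improvement is simply a comparison of each executive's score on that item against the target. A minimal sketch of that partition, with assumed names and a plain dict of per-executive scores, follows.

```python
# Illustrative sketch of the split the modal describes: executives at or above
# the target versus those below it. Names and the dict shape are assumptions.
def split_by_target(executive_scores, target=4.0):
    """Partition per-executive scores for one evaluation item."""
    top_performers = {name: s for name, s in executive_scores.items() if s >= target}
    needs_improvement = {name: s for name, s in executive_scores.items() if s < target}
    return top_performers, needs_improvement
```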
Continue to Reverse Analysis: See the next guide section for detailed information about the Reverse Analysis modal and its features.
Evaluation pattern grid with highlighted clickable score
Screenshot coming soon