Draft Documentation

This guide is currently in development. Content may be incomplete or subject to change.

Estimated reading time: ~10 minutes

Campaign Evaluations

Understand evaluation patterns, criteria items, and scoring methodology. Learn how scores are calculated and how to interpret evaluation results.

Evaluation Pattern Grids

Below the summary grids on the Dashboard tab, you'll find the Evaluation Pattern grids. These show detailed breakdowns of scores for each evaluation criterion.

Dynamic Content: The pattern grids update based on your checkbox selections in the Campaigns, Call Centers, and Executives grids above.
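
Conceptually, the dashboard filters the underlying evaluation records before it aggregates them into the pattern grids. The sketch below is illustrative only; the field names ("campaign", "call_center", "executive") are assumptions, not the product's actual schema.

```python
def filter_evaluations(evaluations, campaigns, call_centers, executives):
    """Keep only evaluation records matching the checked campaigns,
    call centers, and executives (hypothetical field names)."""
    return [
        e for e in evaluations
        if e["campaign"] in campaigns
        and e["call_center"] in call_centers
        and e["executive"] in executives
    ]
```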

Evaluation pattern grid showing item scores by campaign

Screenshot coming soon

Campaign Tabs

The evaluation pattern section is organized into tabs, with one tab for each campaign (engagement) that has evaluation data.

Tab Behavior:

  • Each tab shows evaluation items specific to that campaign's evaluation script
  • Tabs are dynamically generated based on available data
  • Clicking a tab loads the corresponding evaluation pattern grid
  • Filter selections from above affect which data appears in each tab

Note: Different campaigns may have different evaluation scripts with different items. The tabs ensure you see the correct items for each campaign.
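
As a rough sketch of how the tabs relate to the data (assuming a flat list of evaluation records, each carrying a campaign field; this is not the product's actual data model), one tab corresponds to one group of records:

```python
from collections import defaultdict

def build_campaign_tabs(evaluations):
    """Group evaluation records by campaign so that one tab (and one
    pattern grid) can be generated per campaign that has data."""
    tabs = defaultdict(list)
    for record in evaluations:
        tabs[record["campaign"]].append(record)
    return dict(tabs)  # {campaign_name: [records for that campaign]}
```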

Evaluation Items

Each row in the pattern grid represents one evaluation item (criterion). Items are defined in the campaign's evaluation script.

Grid Columns:

  • Item Name: The main evaluation criterion name
  • Sub-item: Detailed sub-criterion (if applicable)
  • Average Score: Weighted average score for this item (clickable)
  • Critical Errors: Count of critical errors on this item
  • Average %: The score expressed as a percentage
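
For illustration only, a single grid row could be represented by a small structure like the one below; the field names are assumptions based on the columns listed above, not the application's internal model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvaluationPatternRow:
    """One row of the evaluation pattern grid (illustrative field names)."""
    item_name: str            # main evaluation criterion
    sub_item: Optional[str]   # detailed sub-criterion, if applicable
    average_score: float      # weighted average score (clickable in the UI)
    critical_errors: int      # count of critical errors on this item
    average_percent: float    # score expressed as a percentage
```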

Common Evaluation Items:

  • Greeting: Did the agent properly greet the customer?
  • Identification: Did the agent verify customer identity?
  • Active Listening: Did the agent demonstrate understanding?
  • Resolution: Was the customer's issue addressed?
  • Closing: Did the agent properly close the conversation?
  • Compliance: Did the agent follow required scripts/disclosures?

Scoring Methodology

Scores are calculated using weighted averages based on the number of evaluated calls.

How Scores Work:

  • Scale: Scores typically range from 1.0 to 5.0
  • Weighted: Each call contributes proportionally to the average
  • Color Coding: Green (≥4.0), Yellow (3.0-3.9), Red (<3.0)
  • Target: The default target score is 4.0

Score Ranges:

  • ≥ 4.0: Meets Target (green)
  • 3.0 - 3.9: Needs Improvement (yellow)
  • < 3.0: Below Target (red)
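
A minimal sketch of this color coding, assuming the default 4.0 target (the thresholds below simply restate the ranges above):

```python
def score_band(score, target=4.0):
    """Map a weighted average score to its color band."""
    if score >= target:
        return "green"   # Meets Target
    if score >= 3.0:
        return "yellow"  # Needs Improvement
    return "red"         # Below Target
```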

Weighted Average Formula: (Sum of score × call count) / Total calls. This ensures that agents with more evaluated calls have proportionally greater influence on the average.
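
A worked example of the weighted average, using made-up scores and call counts purely for illustration:

```python
def weighted_average(rows):
    """rows: iterable of (score, call_count) pairs for one evaluation item."""
    total_calls = sum(count for _, count in rows)
    if total_calls == 0:
        return 0.0
    return sum(score * count for score, count in rows) / total_calls

# Example: a 4.5 average over 10 calls and a 3.0 average over 30 calls
# -> (4.5 * 10 + 3.0 * 30) / 40 = 135 / 40 = 3.375
print(weighted_average([(4.5, 10), (3.0, 30)]))  # 3.375
```

Note that the larger call volume pulls the combined average toward 3.0, which is exactly the intended weighting behavior.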

Clickable Scores

Every score value in the evaluation pattern grid is clickable. Clicking a score opens the Reverse Analysis modal for that specific evaluation item.

What Happens When You Click:

  1. The Reverse Analysis modal opens
  2. Shows statistics for that specific evaluation item
  3. Lists top performers (scoring ≥ target)
  4. Lists executives needing improvement (scoring < target)
  5. Displays recommended actions based on the data
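
Conceptually, the grouping in steps 3 and 4 amounts to splitting executives by the target score. The sketch below uses assumed names and data shapes, not the actual implementation:

```python
def split_by_target(executive_scores, target=4.0):
    """executive_scores: {executive_name: weighted_average_score}.
    Returns (top_performers, needs_improvement) as sorted lists."""
    top = [(name, s) for name, s in executive_scores.items() if s >= target]
    low = [(name, s) for name, s in executive_scores.items() if s < target]
    top.sort(key=lambda pair: pair[1], reverse=True)  # best scores first
    low.sort(key=lambda pair: pair[1])                # lowest scores first
    return top, low
```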

Continue to Reverse Analysis: See the next guide section for detailed information about the Reverse Analysis modal and its features.

Evaluation pattern grid with highlighted clickable score

Screenshot coming soon