Voting Methods
Configure who votes and how on AwardKit. Pick judge-only, audience, or hybrid voting, then choose between Score Criteria and Top Picks for evaluation.
Configuring voting on the Voting tab is a two-step decision. First pick the voting mode (who scores entries), then pick the voting method (how they score). Both controls live in the Setup card.
Voting mode
The voting mode controls who can score entries. Switch between modes at the top of the setup card.
There are three options:
- Judge only (default): A panel of judges scores entries. The audience voting link is hidden. Best for formal evaluation where you want a controlled set of evaluators with named credibility.
- Judges + audience: A panel of judges scores entries and a public link is available for a People's Choice sidecar. Both Audience Voting and Judge Voting cards appear on the Voting tab. Audience and judge results are tallied separately and can be compared on the Results tab.
- Audience only: Open voting via a shared link, no judging panel. The Judge Voting card is hidden. Best for community-driven recognition like member voting or social-driven awards.
The mode you pick determines which sections appear on the Voting tab. Switching modes is non-destructive: judge scores and audience votes are preserved in case you switch back.
The voting method (below) determines how votes are cast and how results are calculated. The same method applies to whichever groups your voting mode enables.
Score Criteria
Judges score each entry against weighted criteria on a numeric scale. This is the default method and the right choice for most professional award programs where you want detailed, defensible scores.
How it works
You define criteria (for example "Impact", "Innovation", "Execution") with weights that total 100%. Judges rate each entry on every criterion using a slider. The final score is the weighted average across all criteria.
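The weighted-average calculation above can be sketched in a few lines. This is an illustrative example, not AwardKit's implementation; the criterion names, weights, and 0-10 scale are assumptions for demonstration.

```python
def weighted_score(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-criterion ratings using weights that total 100%."""
    assert abs(sum(weights.values()) - 100) < 1e-9, "weights must total 100%"
    return sum(ratings[name] * weights[name] / 100 for name in weights)

# A judge rates one entry on three criteria (0-10 scale):
ratings = {"Impact": 8, "Innovation": 6, "Execution": 9}
weights = {"Impact": 50, "Innovation": 30, "Execution": 20}

print(weighted_score(ratings, weights))  # 8*0.5 + 6*0.3 + 9*0.2 = 7.6
```

An entry's overall result is this weighted average, so a heavily weighted criterion moves the final score far more than a lightly weighted one.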
Score scale
Choose the range judges use when scoring. The Setup card offers three options:
- 0 - 5: Simple, fast scoring. Good for shortlist rounds or non-specialist panels.
- 0 - 10 (default): The right balance of granularity and speed for most professional award programs.
- 0 - 100: Maximum differentiation. Useful when you have a large field and need to separate close entries.
Adding criteria
On the Voting tab, click Setup to open the configuration panel, then click Add Criterion.
Each criterion has:
- Name: What's being evaluated (for example "Impact", "Innovation", "Storytelling")
- Description (optional): Guidelines for judges on what to look for. The more specific, the more consistently judges score.
- Scope: Whether the criterion applies to all entries or only to a specific category (see Category-scoped criteria below)
Weights
When you have two or more criteria, each one gets a weight. Weights must total 100% across all criteria.
You can edit weights directly from the setup card by typing into the percentage fields next to each criterion. Click Equal to distribute weights evenly, or adjust them manually to prioritize what matters most for your program.
Criteria and weights cannot be modified after voting has started. Finalize your setup before judges begin scoring.
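The Equal button's even split can be sketched as follows. The rounding behavior (giving any leftover percentage point to the first criteria) is an assumption for illustration, not AwardKit's documented logic.

```python
def equal_weights(criteria: list[str]) -> dict[str, int]:
    """Split 100% evenly across criteria; remainder goes to the earliest ones."""
    base, remainder = divmod(100, len(criteria))
    return {name: base + (1 if i < remainder else 0)
            for i, name in enumerate(criteria)}

weights = equal_weights(["Impact", "Innovation", "Execution"])
print(weights)  # {'Impact': 34, 'Innovation': 33, 'Execution': 33}
assert sum(weights.values()) == 100
```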
Category-scoped criteria
If your program has multiple categories, you can scope a criterion to a specific category. This lets you evaluate category-specific qualities alongside your general criteria.
When adding a criterion, the Scope dropdown lets you choose:
- General (default): Applies to all entries and is scored by all judges.
- A specific category: Only applies to entries submitted to that category.
For example, a "Best Sustainability Initiative" category might have its own "Measurable Environmental Impact" criterion that only applies to entries in that category. Judges still score the general criteria for every entry, but the category-specific criterion only appears when reviewing entries in "Best Sustainability Initiative."
How scoped criteria work
- Each category gets its own set of criteria with independent weights
- Scoped criteria appear as a separate section below the general criteria in the setup card
- Judges see both the general criteria and any relevant category criteria when scoring an entry
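The resolution rule above (every entry gets the general criteria, plus any criteria scoped to its category) can be sketched like this. The data shapes and names are illustrative assumptions, not AwardKit's data model.

```python
GENERAL = "general"

criteria = [
    {"name": "Impact", "scope": GENERAL},
    {"name": "Innovation", "scope": GENERAL},
    {"name": "Measurable Environmental Impact",
     "scope": "Best Sustainability Initiative"},
]

def criteria_for(entry_category: str) -> list[str]:
    """Return the criteria a judge scores for an entry in this category."""
    return [c["name"] for c in criteria
            if c["scope"] in (GENERAL, entry_category)]

print(criteria_for("Best Sustainability Initiative"))
# ['Impact', 'Innovation', 'Measurable Environmental Impact']
print(criteria_for("Best Newcomer"))
# ['Impact', 'Innovation']
```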
Assigning judges to category criteria
Each category's criteria section shows a Judges button. Click it to control which judges see that category's criteria:
- No judges assigned (default): All judges see the category's criteria when scoring entries in that category
- Specific judges assigned: Only those judges see the category's criteria
This is useful when you have domain experts. For example, assign your sustainability specialists to the "Best Sustainability Initiative" criteria, while all judges still score the general criteria.
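The visibility rule reduces to a single check: an empty assignment means everyone sees the criteria, otherwise only the assigned judges do. A minimal sketch, with hypothetical judge names:

```python
def sees_category_criteria(judge: str, assigned_judges: set[str]) -> bool:
    """No judges assigned means all judges see the category's criteria."""
    return not assigned_judges or judge in assigned_judges

assert sees_category_criteria("Ana", set())           # no assignment: everyone
assert sees_category_criteria("Ana", {"Ana", "Ben"})  # assigned specialist
assert not sees_category_criteria("Ravi", {"Ana"})    # not assigned
```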
Tips
- Keep the number of criteria between 3 and 5. More than that and judges either skim or burn out.
- Write clear descriptions so judges evaluate consistently. "Impact" alone is ambiguous; "Measurable results affecting more than 1,000 people" is specific.
- Match weights to your program's priorities. If you announce that "Impact" is what matters most, weight it accordingly.
- Use category criteria sparingly. Two general criteria + one category criterion is usually plenty.
Top Picks
Judges pick and rank their favorite entries instead of scoring individual criteria. This is a simpler, faster method best for shortlist rounds, smaller programs, or "best of" awards.
How it works
Each judge selects a set number of their favorite entries and ranks them. Final ranking is calculated based on how many judges selected each entry and where they ranked it. No criteria, no weights, just picks.
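AwardKit's exact tallying formula isn't documented here, but one plausible scheme matching the description (rewarding both how often an entry is picked and how highly it is ranked) is a Borda-style count, where a judge's top pick earns the most points and their last pick the fewest. The following is an illustrative sketch under that assumption:

```python
from collections import Counter

def tally(picks_per_judge: int, ballots: list[list[str]]) -> list[tuple[str, int]]:
    """Each ballot is a ranked list of entry IDs, best first."""
    points: Counter[str] = Counter()
    for ballot in ballots:
        for rank, entry in enumerate(ballot):
            points[entry] += picks_per_judge - rank  # rank 0 earns full points
    return points.most_common()

ballots = [
    ["A", "B", "C"],  # judge 1: A first, then B, then C
    ["B", "A", "C"],  # judge 2
    ["A", "C", "B"],  # judge 3
]
print(tally(3, ballots))  # [('A', 8), ('B', 6), ('C', 4)]
```

Note how entry A wins both on selection count (picked by all three judges) and on rank position (first on two ballots).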
Configuring Top Picks
The only setting is Picks per judge: how many entries each judge can select (default is 3). Adjust based on your program size. For 30 entries, 5 picks gives judges enough room to differentiate. For a "40 Under 40" with 200 nominations, 10 picks per judge gives enough coverage to separate the long tail.
When to use Top Picks
- Shortlist rounds where you're narrowing 200 entries down to 40 finalists
- Programs with non-specialist judges who can't reliably score on multiple dimensions
- "People's Choice" or community voting alongside formal Score Criteria judging
- Annual "best of" awards where you want gut-feel evaluation
The voting method applies to whichever group your voting mode enables. When both judges and audience vote, the same method applies to both, but their results are tallied separately. See Judging Interface for what judges and audience members see for each method.