Score Calculation

How AwardKit calculates final scores for award programs: weighted averages for Score Criteria, Averaged Borda Count for Top Picks, and step-by-step worked examples.

AwardKit uses a transparent scoring system so anyone can verify how results are calculated. The formula depends on which voting method you're using. This page is the ground-truth reference: bookmark it for sponsors and judges who want to understand the math.

Score Criteria

With Score Criteria, the final score is a weighted average across all judges.

How it works

  1. Each judge scores every criterion on the scale you configured (0-5, 0-10, or 0-100)
  2. Criterion weights are applied to calculate each judge's weighted score for the entry
  3. The final score is the average of all judges' weighted scores
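
The same three steps as a short TypeScript sketch. The types and function names here are illustrative, not AwardKit's actual API:

```ts
// Illustrative types and names; not AwardKit's actual API.
interface CriterionScore {
  weight: number; // fraction of 1, e.g. 0.25 for a 25% weight
  score: number;  // on the configured scale, e.g. 0-10
}

// Step 2: one judge's weighted score for an entry.
function weightedScore(criteria: CriterionScore[]): number {
  return criteria.reduce((sum, c) => sum + c.score * c.weight, 0);
}

// Step 3: the final score is the plain average across judges.
function finalScore(judgeScores: number[]): number {
  return judgeScores.reduce((a, b) => a + b, 0) / judgeScores.length;
}
```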

Example

For an entry scored by 3 judges with 4 criteria weighted at 25% each (on a 0-10 scale):

Judge     Impact (25%)   Innovation (25%)   Execution (25%)   Storytelling (25%)   Weighted Total
Judge A   9              8                  7                 8                    8.00
Judge B   8              9                  8                 9                    8.50
Judge C   10             9                  8                 8                    8.75

Judge A's weighted score: (9 x 0.25) + (8 x 0.25) + (7 x 0.25) + (8 x 0.25) = 8.00

Final score: (8.00 + 8.50 + 8.75) / 3 = 8.42 (out of 10)
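
Reproducing the three judges' rows with the hypothetical helpers from the sketch above:

```ts
// Each judge scores four criteria, all weighted 25%.
const w = 0.25;
const asScores = (scores: number[]) => scores.map((score) => ({ weight: w, score }));

const judgeA = weightedScore(asScores([9, 8, 7, 8]));  // 8.00
const judgeB = weightedScore(asScores([8, 9, 8, 9]));  // 8.50
const judgeC = weightedScore(asScores([10, 9, 8, 8])); // 8.75

console.log(finalScore([judgeA, judgeB, judgeC]).toFixed(2)); // "8.42"
```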

Unequal weights

With unequal weights, criteria contribute proportionally:

Criterion      Weight   Judge's Score   Contribution
Impact         40%      9               3.60
Innovation     30%      8               2.40
Execution      20%      7               1.40
Storytelling   10%      6               0.60
Total          100%     -               8.00
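
The same hypothetical weightedScore helper reproduces this table:

```ts
// The unequal-weights example: each criterion contributes score x weight.
const entryScore = weightedScore([
  { weight: 0.4, score: 9 }, // Impact:       3.60
  { weight: 0.3, score: 8 }, // Innovation:   2.40
  { weight: 0.2, score: 7 }, // Execution:    1.40
  { weight: 0.1, score: 6 }, // Storytelling: 0.60
]);
console.log(entryScore.toFixed(2)); // "8.00"
```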

Missing scores

If a judge hasn't finished scoring an entry, their evaluation is excluded entirely. Only completed reviews count toward the final score. For example, if 5 judges are assigned but only 3 have finished, the final score is the average of those 3.
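
A sketch of that exclusion rule, again with illustrative names and types:

```ts
// Illustrative shape for one judge's evaluation of one entry.
interface Evaluation {
  complete: boolean;
  weightedScore: number;
}

// Only completed evaluations enter the average.
function finalScoreFromEvaluations(evals: Evaluation[]): number | null {
  const done = evals.filter((e) => e.complete);
  if (done.length === 0) return null; // no completed reviews yet
  return done.reduce((sum, e) => sum + e.weightedScore, 0) / done.length;
}
```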

Category-scoped criteria

If you have category-scoped criteria, they're applied alongside the general criteria for entries in that category. The weighted score for such an entry is calculated over the merged criteria set (the general criteria plus that category's scoped criteria), with weights summing to 100% across the merged set.

For entries in categories without scoped criteria, only the general criteria apply.
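
A sketch of assembling the merged set. The type is illustrative, and the weight check here is only to make the 100% constraint explicit:

```ts
// Illustrative criterion definition; not AwardKit's schema.
interface CriterionDef {
  name: string;
  weight: number; // fraction of 1; the merged set must total 1.0 (100%)
}

function criteriaForEntry(
  general: CriterionDef[],
  categoryScoped: CriterionDef[],
): CriterionDef[] {
  const merged = [...general, ...categoryScoped];
  const total = merged.reduce((sum, c) => sum + c.weight, 0);
  if (Math.abs(total - 1) > 1e-9) {
    throw new Error("merged criterion weights must sum to 100%");
  }
  return merged;
}
```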

Top Picks

With Top Picks, scores are calculated using an Averaged Borda Count: a point-based ranking system that accounts for both how often an entry was picked and how highly it was ranked.

How it works

  1. Each judge selects and ranks their top picks (for example 3 picks)
  2. Points are awarded by rank: 1st pick gets the most points, last pick gets 1 point. With 3 picks: 1st = 3 points, 2nd = 2 points, 3rd = 1 point
  3. The final score is the total points divided by the number of eligible voters (not just voters who picked that entry)

Dividing by all eligible voters (rather than just those who selected the entry) is what makes this an "averaged" Borda Count. It prevents an entry picked by 1 out of 10 judges from scoring the same as one picked by 1 out of 2 judges.
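
The whole calculation fits in a few lines. This TypeScript sketch uses illustrative names; each ballot is one judge's ranked picks, with index 0 as the 1st pick:

```ts
// Illustrative sketch of an Averaged Borda Count; not AwardKit's actual API.
function averagedBorda(ballots: string[][], picksPerJudge: number): Map<string, number> {
  const points = new Map<string, number>();
  for (const ranked of ballots) {
    ranked.forEach((entryId, rank) => {
      // 1st pick earns `picksPerJudge` points, the last pick earns 1.
      points.set(entryId, (points.get(entryId) ?? 0) + (picksPerJudge - rank));
    });
  }
  // Divide by every eligible voter, not just those who picked the entry.
  const scores = new Map<string, number>();
  for (const [entryId, pts] of points) {
    scores.set(entryId, pts / ballots.length);
  }
  return scores; // entries never picked are absent; treat a missing key as 0
}
```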

Example

With 5 judges and 3 picks each (1st = 3pts, 2nd = 2pts, 3rd = 1pt):

Entry         1st picks   2nd picks   3rd picks   Total points   Score (/ 5 voters)
Entry Alpha   3           1           0           11             2.20
Entry Beta    1           2           1           8              1.60
Entry Gamma   1           1           2           7              1.40
Entry Delta   0           1           2           4              0.80
Entry Echo    0           0           0           0              0.00

Entry Alpha: (3 x 3) + (1 x 2) + (0 x 1) = 11 points; 11 / 5 voters = 2.20

Notice that Entry Echo received no picks, so its score is 0. The maximum possible score equals the number of picks (3 in this example), achieved only if every single voter placed the entry 1st.
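
One set of five ballots consistent with the table above, fed through the hypothetical averagedBorda sketch:

```ts
// Each inner array is one judge's ranked picks (1st, 2nd, 3rd).
const scores = averagedBorda(
  [
    ["Entry Alpha", "Entry Beta", "Entry Gamma"],
    ["Entry Alpha", "Entry Beta", "Entry Delta"],
    ["Entry Alpha", "Entry Gamma", "Entry Delta"],
    ["Entry Beta", "Entry Alpha", "Entry Gamma"],
    ["Entry Gamma", "Entry Delta", "Entry Beta"],
  ],
  3,
);
console.log(scores.get("Entry Alpha"));     // 2.2
console.log(scores.get("Entry Echo") ?? 0); // 0, never picked
```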

Assignment modes and the voter denominator

The voter denominator counts judges who actually submitted votes (not every judge invited). It adjusts based on your judge assignment mode:

  • All judges: every judge who submitted any vote counts toward every entry's denominator
  • Per judge: only the judges assigned to an entry (and who voted) count for that entry
  • Rooms: only the judges in the same room as an entry (and who voted) count for that entry

This keeps scoring fair when different entries are reviewed by different numbers of judges.
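
A sketch of how that denominator could be computed, with illustrative types for the three modes:

```ts
// Illustrative types; not AwardKit's actual data model.
type AssignmentMode = "all" | "perJudge" | "rooms";

interface Judge {
  id: string;
  voted: boolean;
  assignedEntryIds: Set<string>; // used in "perJudge" mode
  roomId?: string;               // used in "rooms" mode
}

function voterDenominator(
  judges: Judge[],
  mode: AssignmentMode,
  entry: { id: string; roomId?: string },
): number {
  return judges.filter((j) => {
    if (!j.voted) return false; // only judges who actually submitted votes
    switch (mode) {
      case "all":      return true;
      case "perJudge": return j.assignedEntryIds.has(entry.id);
      case "rooms":    return j.roomId === entry.roomId;
    }
  }).length;
}
```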

Ranking and ties

Entries are ranked by their final score, highest first. When two or more entries have the same score, they share the same rank using Olympic-style numbering (1, 1, 3). Within a shared rank, entries are sorted by vote count so the one with more reviews appears first on the leaderboard.
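
A sketch of that ranking rule with illustrative types. Tied scores share a rank and the next distinct score skips ahead; vote count only decides display order within a tie:

```ts
// Illustrative shape for a leaderboard row.
interface Ranked {
  entryId: string;
  score: number;
  voteCount: number;
  rank?: number;
}

function rankEntries(entries: Ranked[]): Ranked[] {
  // Sort by score, highest first; break display-order ties by vote count.
  const sorted = [...entries].sort(
    (a, b) => b.score - a.score || b.voteCount - a.voteCount,
  );
  sorted.forEach((e, i) => {
    // Olympic-style numbering: equal scores inherit the previous rank,
    // so three entries might rank 1, 1, 3.
    e.rank = i > 0 && sorted[i - 1].score === e.score
      ? sorted[i - 1].rank
      : i + 1;
  });
  return sorted;
}
```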

For ties at a critical position (Winner vs. Finalist, for example), you can override the ranking by manually assigning the award. The leaderboard remains visible for transparency. Best practice: document any manual override decision in your program records so you can explain it later if asked.

Judge vs. audience scores

If your program uses Judges + audience voting, AwardKit calculates judge scores and audience scores independently using the same formula. On the Results tab you can toggle between Judges, Audience, and All views to see how each group ranked entries.

This is the basis for "People's Choice" sidecar awards alongside the juried winners: the highest audience-only score wins People's Choice, regardless of where it lands in the juried ranking.

When you publish results, AwardKit publishes the formula transparently alongside the rankings. Entrants and the public can see exactly how scores were calculated. This is one of the most important pieces of trust in an award program: if the math is opaque, the program is too.
