Horizon Europe Evaluation Criteria: How Proposals Are Scored on Excellence, Impact, and Implementation
Every Horizon Europe proposal is judged against exactly three award criteria: Excellence, Impact, and Quality and Efficiency of the Implementation. These are not vague headings — each one has defined sub-criteria, and each evaluator must assign an integer score from 0 to 5 for every criterion independently.
Understanding precisely what evaluators are looking for, and how they distinguish a score of 3 from a score of 5, is one of the most actionable things you can do to improve your submission. This guide provides a detailed breakdown of all three criteria, the official scoring scale, threshold rules, and practical guidance on what separates a competitive proposal from a merely adequate one.
All content in this guide is based on the official Horizon Europe Programme Guide and the General Annexes of the Work Programme, published by the European Commission.
The Official 0–5 Scoring Scale
Every criterion is scored on the same integer scale. Half-point scores are not used. The official definitions, as published in the Horizon Europe Programme Guide, are:
| Score | Label | Meaning |
|---|---|---|
| 0 | N/A | The proposal fails to address the criterion, or cannot be assessed due to missing or incomplete information. |
| 1 | Poor | The criterion is inadequately addressed, or there are serious inherent weaknesses. |
| 2 | Fair | The criterion is broadly addressed, but there are significant weaknesses. |
| 3 | Good | The criterion is addressed well overall, although there are a number of shortcomings. |
| 4 | Very Good | The criterion is addressed very well, although a small number of shortcomings are present. |
| 5 | Excellent | The criterion is addressed with no or only very minor shortcomings. |
Each criterion is scored independently. A strong score on one criterion cannot compensate for a failing score on another — threshold rules apply to every criterion individually.
Threshold Requirements
Before ranked lists are formed, proposals must pass minimum thresholds on each criterion and in aggregate:
- Per-criterion threshold: 3 out of 5 (applies to each of the three criteria)
- Overall threshold: typically 10 out of 15 (the sum of all three criterion scores)
- Action-type variation: some specific calls set different overall thresholds — always check the call conditions in the Work Programme
A proposal scoring 5 / 5 / 2 — excellent on two criteria but below threshold on Implementation — is rejected outright, regardless of its total score of 12. This is one of the most common causes of unexpected rejection for technically strong proposals.
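The threshold logic described above can be sketched as a simple check. This is a minimal illustration only (the function name and defaults are our own; the per-criterion threshold of 3 and overall threshold of 10 reflect the standard rules, but specific calls may override them):

```python
def passes_thresholds(excellence: int, impact: int, implementation: int,
                      per_criterion: int = 3, overall: int = 10) -> bool:
    """Return True if a proposal clears both the per-criterion and the
    overall threshold. Defaults mirror the standard Horizon Europe rules;
    always check the call conditions for call-specific thresholds."""
    scores = (excellence, impact, implementation)
    # Each criterion is scored as an integer from 0 to 5; no half points.
    if any(s not in range(6) for s in scores):
        raise ValueError("each criterion score must be an integer from 0 to 5")
    # Every criterion must individually reach the threshold, AND the sum must
    # reach the overall threshold -- a high total cannot rescue a weak criterion.
    return all(s >= per_criterion for s in scores) and sum(scores) >= overall

print(passes_thresholds(5, 5, 2))  # False: total is 12, but Implementation < 3
print(passes_thresholds(4, 3, 3))  # True: total is 10 and every criterion >= 3
```

Note that both conditions are conjunctive: a 3 / 3 / 3 proposal passes every per-criterion threshold but fails on the overall total of 9.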
Two-Stage Calls
For calls with a two-stage submission process, Stage 1 typically evaluates only Excellence and Impact, each with a threshold of 4 out of 5. Proposals that pass Stage 1 are invited to submit a full proposal, which is then evaluated against all three criteria. Check the specific call conditions for exact Stage 1 thresholds.
Criterion 1: Excellence
What Evaluators Score
The Excellence criterion captures the scientific and/or technical quality of what you are proposing to do. Evaluators assess:
- Clarity and pertinence of the project objectives — Are the objectives specific, measurable, and aligned with the call topic? Do they directly respond to the challenge described in the Work Programme?
- Soundness of the proposed methodology — Is the approach appropriate for achieving the stated objectives? Are the methods justified and well-suited to the research questions?
- Credibility of interdisciplinary approaches (where relevant) — If the work spans disciplines, is the integration genuine and necessary, or decorative?
- Extent to which the proposed work goes beyond the state of the art — Is novelty convincingly demonstrated with reference to existing literature and funded work? Are the innovations clearly articulated?
What Separates a 5 from a 3
A score of 5 means the proposal's objectives are clearly defined, specific, measurable, realistic, and ambitious. The methodology is fully appropriate and thoroughly justified. Novelty is argued convincingly with references to the current state of the art — evaluators can see precisely what gap is being addressed and why the approach is the right one. There are no or only very minor shortcomings in how these elements are presented.
A score of 3 means the objectives are reasonable but vague in places — perhaps ambitious without being specific, or specific without being clearly connected to the call challenge. The methodology is adequate but not fully justified; the reader is left with questions about whether the approach will actually work. Novelty is asserted rather than demonstrated — the proposal claims to be innovative without convincingly situating itself against the existing literature or funded projects.
Common Shortcomings on Excellence
- Objectives written as activities ("we will develop...", "we will analyse...") rather than outcomes
- Methodology described at a high level without justifying key technical choices
- State-of-the-art section that reads as a literature review rather than a gap analysis
- Novelty claims that could apply to any proposal in the field
Criterion 2: Impact
What Evaluators Score
Impact is about what happens as a result of your project — during it, at its end, and beyond. Evaluators assess:
- Credibility of the pathways to achieve expected outcomes and wider impacts — Is there a realistic, logical chain from research activities to outcomes and impacts? Are the steps plausible given your consortium and timeline?
- Magnitude and importance of expected impacts — How significant are the scientific, economic, and societal impacts? Are they clearly linked to EU policy priorities and the specific objectives of the Work Programme topic?
- Suitability and quality of dissemination, exploitation, and communication measures — Does the plan go beyond publishing papers? Are there concrete routes to market, policy uptake, or technology transfer? Is communication targeted at identified audiences?
- Quality of open science practices — Is there a Data Management Plan (DMP)? Are open access commitments clear? Are FAIR principles addressed?
What Separates a 5 from a 3
A score of 5 means impact pathways are concrete and quantified where possible. Key performance indicators (KPIs) are defined, specific stakeholders are identified by name or sector, and exploitation routes are realistic given the nature of the research. The communication plan targets specific audiences with tailored messages and channels. Open science commitments are specific and actionable.
A score of 3 means the impact claims are plausible but generic — the proposal describes societal benefits in broad terms without a credible pathway. The dissemination plan lists standard channels (project website, scientific publications, conference presentations) without tailoring them to the audience or the specific outputs being disseminated. Open science is mentioned but not planned.
Common Shortcomings on Impact
- Impact described in terms of the project's outputs ("we will publish 20 papers") rather than outcomes or societal change
- No clear link between the proposed activities and the specific expected outcomes named in the call topic
- Exploitation section that assumes results will automatically find users
- Communication plan copied from a previous proposal without adapting to the call audience
- Data Management Plan deferred to month 6 with no substantive commitments
Criterion 3: Quality and Efficiency of the Implementation
What Evaluators Score
Implementation assesses whether you can actually deliver what you are promising, on time and on budget. Evaluators assess:
- Quality and effectiveness of the work plan — Are the work packages logically structured? Are tasks, deliverables, and milestones clearly defined and realistic? Are dependencies between work packages identified?
- Appropriateness of management structures and governance — Is there a clear project management work package? Are decision-making procedures defined? Is there an internal review process for quality assurance?
- Complementarity of consortium partners and adequacy of resources — Does each partner bring something the others do not? Are person-months and budget allocations justified and proportionate to the tasks assigned?
- Integration of SMEs and non-academic partners (where appropriate) — If the call encourages industry involvement, are non-academic partners genuinely integrated into core activities rather than advisory roles?
What Separates a 5 from a 3
A score of 5 means the work plan has clear dependencies between work packages, a realistic timeline with buffer built in for integration and review phases, and person-months that are explicitly justified by the tasks described. Risk mitigation is specific — risks are named, their likelihood and impact assessed, and concrete mitigation measures identified. Every consortium partner's role is clearly justified and non-redundant; there are no "passenger" partners whose removal would not affect the project.
A score of 3 means the work plan is logically structured but dependencies are unclear or the timeline is over-optimistic. A risk table is present but the risks and mitigations are generic (e.g. "staff turnover — mitigated by hiring additional staff"). Some partners' roles are not fully justified, or the budget breakdown does not clearly map to the tasks described.
Common Shortcomings on Implementation
- Work packages that run in parallel with no integration mechanism
- Gantt chart that does not reflect the task descriptions in the text
- Risk register with only technical risks — no organisational, financial, or external risks
- Consortium where two partners have identical profiles and overlapping roles
- Budget justification that lists costs without explaining why they are necessary
Weight Differences by Action Type
For the standard Horizon Europe action types — Research and Innovation Actions (RIA), Innovation Actions (IA), and Coordination and Support Actions (CSA) — all three criteria carry equal weight in the evaluation:
| Criterion | RIA | IA | CSA |
|---|---|---|---|
| Excellence | Equal | Equal | Equal |
| Impact | Equal | Equal | Equal |
| Quality of Implementation | Equal | Equal | Equal |
However, two important caveats apply:
- Call-specific weightings: Some specific call topics or cross-cutting priorities specify different weightings. Always read the full call conditions in the Work Programme — the General Annexes define the default, but the call text can override it.
- EIC Pathfinder: The EIC Pathfinder scheme adds a fourth criterion — Quality of the team and consortium — which is assessed in addition to Excellence, Impact, and Implementation. This criterion evaluates the track record, complementarity, and capacity of the team to execute the proposed high-risk, high-gain research.
Tips for Maximising Your Score on Each Criterion
Address every sub-criterion explicitly
Evaluators use the sub-criteria as a checklist. If your proposal does not address interdisciplinarity because it is not relevant to your project, say so explicitly — do not leave evaluators to infer it. Structure your proposal sections to mirror the evaluation criteria headings so evaluators can find what they need without effort.
Use the exact language from the call text
The Work Programme topic describes specific expected outcomes and impacts. Use that language in your proposal. If the call says "contribute to the European Green Deal objectives," name those objectives. If it specifies target TRL ranges, state your starting and ending TRL. Evaluators are assessing fit to the call, not just the quality of the science in isolation.
Provide evidence, not assertions
Every claim in your proposal should be supported by evidence: references to literature, preliminary results, letters of intent from end users, adoption rates from comparable projects, or data from CORDIS on similar funded work. "Our consortium has extensive expertise" scores lower than "our consortium has coordinated 14 Horizon 2020 projects in this field, delivering X outcomes."
Quantify where possible
Scores of 4 and 5 typically involve specific, quantified claims. Instead of "significant impact on the transport sector," write "reducing transport sector CO2 emissions by an estimated 2.3 Mt/year across the three target regions by 2030, based on adoption modelling validated with industry partners." Specificity signals credibility.
Cross-check against proposals from funded projects
Reviewing the objectives and descriptions of funded projects in the same topic area gives you a benchmark for the level of detail and ambition that received funding. The EU Funding & Tenders Portal and the CORDIS database provide public access to funded project summaries. Tools like CriteriaI benchmark your proposal against 54,884+ funded EU projects and surface the most relevant funded work, helping you situate your novelty claims and avoid reinventing the wheel.
Conclusion
The three Horizon Europe evaluation criteria — Excellence, Impact, and Quality of Implementation — are not equally demanding for all proposals, but they are equally weighted and equally unforgiving. A single criterion falling below 3 out of 5 terminates the proposal's chances regardless of its total score.
The gap between a 3 and a 5 on any criterion almost always comes down to specificity, evidence, and alignment with the call text. Proposals that score 5 do not necessarily describe more impressive science — they describe their science more convincingly, in the evaluator's frame of reference, with no sub-criterion left unaddressed.
For a broader overview of how the evaluation process works from submission to consensus — including how individual scores become a consensus score and how the ranking list is formed — see our companion guide: Understanding the Horizon Europe Evaluation Process.
CriteriaI's AI evaluation tool uses data from 54,884+ funded Horizon Europe and H2020 projects to benchmark your proposal's novelty, call fit, and consortium strength against real funded work — using the same framework evaluators apply. Try a free evaluation before your next submission.
Ready to evaluate your proposal?
Get AI-powered feedback against 55,000+ funded EU projects in under two minutes.