Common Mistakes That Sink EU Grant Proposals — and How to Avoid Them
Why Good Proposals Get Rejected
Writing a strong Horizon Europe proposal is hard. Even experienced researchers and grant writers make mistakes that cost crucial points in evaluation. Since proposals must score at least 3 out of 5 on each of the three criteria (Excellence, Impact, and Implementation) to pass the threshold, a single weak area can eliminate an otherwise excellent submission: a proposal scoring 4.5 on Excellence and Impact but only 2.5 on Implementation fails, no matter how strong the rest is.
This guide maps the most common mistakes to the specific evaluation criteria they affect, based on the evaluation framework described in the official Horizon Europe Programme Guide.
Mistakes That Hurt Excellence
1. Vague or Unmeasurable Objectives
The mistake: Objectives like "improve sustainability" or "advance knowledge in the field" that give evaluators nothing concrete to assess.
Why it hurts: Evaluators score objectives on their clarity, credibility, and measurability. An objective they cannot assess is an objective that scores low.
The fix: Use the SMART framework — make each objective Specific, Measurable, Achievable, Relevant, and Time-bound. For example: "Develop a prototype membrane achieving >95% CO2 selectivity at temperatures above 200°C, validated in laboratory conditions (TRL 4) by month 30."
2. No Explicit State-of-the-Art Analysis
The mistake: Jumping straight into the proposed approach without establishing what currently exists and where the gaps are.
Why it hurts: Without a clear baseline, evaluators cannot judge novelty. If you don't tell them what is new about your approach, they have to guess — and they won't guess in your favour.
The fix: Dedicate a subsection to the state of the art. Structure it as: "Current approaches do X. Their limitation is Y. Our approach differs because Z." Be specific and cite relevant literature — including competitors' work, not just your own publications.
3. Methodology as a Shopping List
The mistake: Listing methods and technologies without explaining why they were chosen or how they connect.
Why it hurts: Evaluators need to see a logical flow from research question to methodology to expected results. A list of techniques without rationale suggests the approach hasn't been thought through.
The fix: For each methodological choice, briefly explain why it is the most appropriate option. If you chose machine learning over simulation, explain why. If you're using a specific experimental protocol, explain what advantage it offers.
Mistakes That Hurt Impact
4. Confusing Dissemination with Exploitation
The mistake: Listing conferences and publications as the main exploitation strategy.
Why it hurts: Dissemination (raising awareness) and exploitation (creating value from results) are assessed separately. A proposal with no clear exploitation plan — even if the dissemination plan is solid — scores poorly on Impact.
The fix: For each key result, define who will use it and how. Will it be commercialised? Licensed? Used to inform policy? Integrated into standards? Each exploitation pathway should name the responsible partner, the target user, and the timeline.
5. No Quantification of Benefits
The mistake: Claiming the project will "significantly contribute to the green transition" without any numbers or evidence.
Why it hurts: Impact evaluators are looking for credible, evidence-based estimates. Unsubstantiated claims read as wishful thinking.
The fix: Where possible, estimate the magnitude of the expected impact. Reference industry reports, market data, or policy targets. For example: "The global market for [technology X] is projected to reach EUR Y billion by [year] (Source: [specific report]). Our solution addresses [specific segment] representing Z% of this market."
6. Ignoring the Work Programme Context
The mistake: Writing a generic impact section that could apply to any call topic.
Why it hurts: Each call topic is nested within a specific Horizon Europe destination and cluster, with explicit expected impacts listed in the work programme. If your impact section doesn't address these specific expected impacts, evaluators will note the misalignment.
The fix: Copy the expected impacts from the call topic text and address each one explicitly. Show evaluators you understand why the Commission is funding this topic and how your project contributes to those specific goals.
Mistakes That Hurt Implementation
7. Work Packages Organised by Partner, Not by Objective
The mistake: WP1 = what University A does, WP2 = what Company B does, WP3 = what SME C does.
Why it hurts: This structure suggests the consortium is a collection of independent projects rather than an integrated effort. Evaluators look for work packages that map to project objectives, with multiple partners contributing to each WP.
The fix: Structure work packages around technical or thematic objectives. Each WP should have a clear goal, defined inputs and outputs, and contributions from multiple partners. Use a PERT chart or dependency diagram to show how WPs interconnect.
8. Budget Without Justification
The mistake: Allocating budget figures without explaining why each cost is necessary.
Why it hurts: Evaluators check whether the resources requested are proportionate to the described activities. Unjustified costs — especially large equipment purchases or excessive travel budgets — raise red flags.
The fix: For major cost items, include a brief justification in the work plan. For personnel, explain the role and effort (person-months) of each key team member. For equipment, explain why existing infrastructure is insufficient.
9. Missing or Generic Risk Management
The mistake: Either no risk assessment, or a table listing generic risks ("Partner may leave the consortium") with generic mitigations ("We will find a replacement").
Why it hurts: Risk management is an explicit part of the Implementation assessment. Generic risks suggest the proposers haven't thought seriously about what could go wrong.
The fix: Identify risks specific to your project — technical risks (this experiment might not work), commercial risks (the market may shift), and organisational risks (key personnel leaving). For each, describe a concrete mitigation: what you will do differently, what alternative approach you will take, what contingency budget is available.
10. The Invisible Partner
The mistake: Including a partner in the consortium who has no clear deliverables, leads no work package, and receives a small budget allocation.
Why it hurts: Evaluators will question why this partner is in the consortium. If their role is unclear, it weakens the perception of the entire consortium as a well-designed team.
The fix: Every partner should lead or significantly contribute to at least one work package. If a partner is included primarily for an advisory role, consider making them an associated partner or advisory board member instead.
Structural Mistakes
11. Exceeding Page Limits
If your proposal exceeds the page limit specified in the call, the excess pages are automatically made invisible to evaluators and are not taken into account. This often means your impact or implementation section is simply cut off.
The fix: Check page limits early and plan your content accordingly. Leave a buffer for last-minute additions.
12. Not Reading the Call Topic Carefully
Each call topic specifies the scope, expected outputs, and type of action. Submitting a proposal that doesn't fit the scope — for instance, submitting a basic research proposal to an Innovation Action call — results in rejection.
The fix: Before writing, highlight the key requirements and constraints in the call topic text. Check: Is this a RIA or IA? What TRL range is expected? Are specific outputs required (demonstrator, pilot, standard)?
A Self-Assessment Framework
Before submitting, score your own proposal on each criterion (1–5) and ask:
| Question | Criterion |
|---|---|
| Are objectives specific and measurable? | Excellence |
| Is novelty explicitly stated with evidence? | Excellence |
| Is the methodology explained and justified? | Excellence |
| Are pathways to impact concrete and quantified? | Impact |
| Is exploitation clearly distinguished from dissemination? | Impact |
| Does the impact section address the call's expected impacts? | Impact |
| Are WPs organised by objective, not by partner? | Implementation |
| Is every partner's role justified? | Implementation |
| Are risks specific with concrete mitigations? | Implementation |
| Is the budget proportionate to described effort? | Implementation |
If any answer is "not sure" or "no," that section needs work before submission.
Learn from What Has Been Funded
One of the best ways to calibrate your proposal quality is to study what evaluators have funded before. Browse previously funded projects by topic, cluster, and funding scheme in our Project Explorer.
Want pre-submission feedback? Try CriteriaI for free — get an AI-generated evaluation with novelty scoring and gap analysis in under 30 seconds.