
You Spent 6 Months Writing Your EU Grant Proposal. Here Is What You Are Missing.

CriteriaI Team · 16 March 2026 · 14 min read

The Problem Nobody Talks About

You have spent months assembling a consortium across five countries. Your research is original. Your objectives are clear. You submit your Horizon Europe proposal, wait four months, and receive an Evaluation Summary Report with a score of 11 out of 15.

One point below the funding line.

The ESR says: "The proposal does not sufficiently demonstrate how the approach goes beyond the current state of the art." One sentence. Months of work. No second chance until the next call.

This is the reality for the vast majority of EU grant applicants. And the frustrating part is that most of these failures are predictable and preventable — if you had the right information before submitting. (Not sure how the evaluation process works? We wrote a full breakdown.)

The Scale of What You Are Competing Against

CriteriaI's database contains 54,884 funded projects from Horizon Europe and Horizon 2020, representing €121.2 billion in EU contributions across €139.1 billion in total project costs. These projects involved 56,671 unique organisations from across Europe and beyond.

This is the landscape your proposal is evaluated against. Every evaluator has seen hundreds of proposals in your topic area. They know what has been funded, what has failed, and what "beyond the state of the art" actually looks like in practice.

The question is: do you?

Pain Point 1: You Cannot See What Has Already Been Funded

The problem

Before writing a proposal, you need to know what exists. What projects have been funded in your topic? What approaches have already been tried? What gaps remain?

Today, the options are limited:

  • CORDIS has the data but searching it is painful — no semantic search, no filtering by funding scheme and topic simultaneously, no way to explore consortium compositions
  • Google Scholar shows publications but not the funded projects behind them
  • Word of mouth from colleagues covers a tiny fraction of the landscape
  • Funding consultants charge thousands of euros for landscape analysis that is often based on the same incomplete searches you could do yourself

The result: you write your proposal in partial blindness, hoping your novelty claim holds up against projects you never found.

How CriteriaI solves this

CriteriaI's Project Explorer lets you search all 54,884 funded projects using natural language. Type a research question or technology description and get semantically matched results — not keyword matches, but projects that are actually about the same thing.

Filter by funding scheme (RIA, IA, ERC, MSCA, EIC), date range, country, and topic. For each project, see the full consortium: which organisations participated, from which countries, and with how much funding.

This is the landscape analysis that used to take weeks of manual work, available in seconds.
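To make the "filter, then rank semantically" idea concrete, here is a minimal sketch of that kind of search. It is an illustration under stated assumptions, not CriteriaI's actual implementation: the field names are hypothetical, and a real system would use a neural embedding model rather than hand-written vectors.

```python
from dataclasses import dataclass
from math import sqrt

@dataclass
class Project:
    title: str
    scheme: str          # e.g. "RIA", "IA", "ERC"
    country: str
    vector: list[float]  # precomputed embedding of the project summary

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query_vec, projects, scheme=None, country=None, top_k=5):
    """Apply metadata filters first, then rank the remainder by
    semantic similarity to the query embedding."""
    pool = [p for p in projects
            if (scheme is None or p.scheme == scheme)
            and (country is None or p.country == country)]
    return sorted(pool, key=lambda p: cosine(query_vec, p.vector),
                  reverse=True)[:top_k]
```

The key design point is the order of operations: hard filters (scheme, country, date) narrow the pool cheaply, and the expensive similarity ranking only runs on what survives.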

Pain Point 2: You Do Not Know If Your Idea Is Actually Novel

The problem

Novelty is scored under the Excellence criterion — the tiebreaker when proposals have equal overall scores. Yet most applicants assess novelty the same way: they search Google Scholar, read a few recent papers, and write a state-of-the-art section based on what they find.

This misses the bigger picture. A project funded two years ago in a neighbouring cluster may have already demonstrated exactly what you are proposing. An ongoing project with similar objectives may make your proposal redundant in the eyes of evaluators. A consortium in another country may have published results that invalidate your baseline assumptions.

You cannot claim novelty against a landscape you have not fully explored. (We wrote a dedicated guide on how to demonstrate novelty effectively.)

How CriteriaI solves this

When you submit a proposal for evaluation, CriteriaI generates a Novelty Score by embedding your objectives and comparing them against all 54,884 funded projects using vector similarity search.

The result is not a generic "your idea is novel" statement. It is a specific list of the most similar funded projects — ranked by how closely they match your proposal — with an analysis of where your approach overlaps and where it genuinely advances the state of the art.

If three funded projects already cover 80% of what you are proposing, you will know before the evaluator tells you. And you will know exactly what to change to strengthen your novelty argument.
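One simple way to turn "similarity to funded projects" into a single number is to take the complement of the mean similarity to the closest matches. The sketch below assumes unit-normalised embeddings (so a dot product equals cosine similarity); the formula and `top_k` cut-off are illustrative assumptions, not CriteriaI's published method.

```python
def novelty_score(proposal_vec, funded_vecs, top_k=3):
    """Novelty as 1 minus the mean similarity to the top_k most
    similar funded projects. Assumes unit-normalised embeddings,
    so the dot product equals cosine similarity. The formula is
    illustrative; a real system would calibrate it empirically."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    top = sorted((dot(proposal_vec, v) for v in funded_vecs),
                 reverse=True)[:top_k]
    return round(1.0 - sum(top) / len(top), 3)
```

A proposal whose objectives nearly duplicate a funded project scores close to 0; one with no close neighbours in the corpus scores close to 1.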

Pain Point 3: You Picked the Wrong Funding Scheme

The problem

Horizon Europe offers multiple funding instruments — RIA, IA, CSA, EIC Pathfinder, EIC Transition, EIC Accelerator — each designed for a different stage of the research-to-market pipeline. The funding rates differ (100% for RIA, 70% for IA), the consortium requirements differ, and the evaluation emphasis differs. (Our complete comparison of funding schemes breaks down every instrument.)
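The funding-rate difference is easy to quantify. Using the rates stated above (and a €4M project as an arbitrary example), the same work plan receives €1.2M less under an IA than a RIA:

```python
# Funding rates per action type, as stated above. Simplified:
# the real rules include exceptions, e.g. for non-profit entities.
RATES = {"RIA": 1.00, "IA": 0.70}

def eu_contribution(total_cost_eur: float, action_type: str) -> float:
    """EU contribution = total eligible cost x funding rate."""
    return total_cost_eur * RATES[action_type]

ria = eu_contribution(4_000_000, "RIA")  # ~ €4.0M
ia = eu_contribution(4_000_000, "IA")    # ~ €2.8M
```

Picking the wrong instrument therefore has budget consequences before a single evaluator reads a word.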

Submitting a basic research proposal to an Innovation Action call is a waste of everyone's time. Yet it happens regularly because applicants focus on the topic description and skip the fine print about action type and expected TRL range.

How CriteriaI solves this

CriteriaI's evaluation report includes a Call Fit Analysis that checks whether your proposal aligns with the specific call you are targeting. If you provide a call code, the system fetches the call text directly from the EU Funding & Tenders API and assesses alignment.

The analysis flags mismatches: wrong TRL range, missing exploitation focus for an IA, insufficient fundamental research for an ERC, or scope that does not match the call's expected impacts. These are the exact mismatches that evaluators catch — except you catch them before submitting.
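The mismatch checks listed above amount to simple rules once the call metadata is in hand. Here is a minimal sketch of two of them; every field name is hypothetical, since CriteriaI's actual checks are not public.

```python
def call_fit_flags(proposal: dict, call: dict) -> list[str]:
    """Return human-readable mismatch warnings between a proposal
    and a call. Field names ('trl', 'trl_range', 'action_type',
    'has_exploitation_plan') are illustrative assumptions."""
    flags = []
    lo, hi = call["trl_range"]
    if not lo <= proposal["trl"] <= hi:
        flags.append(
            f"TRL {proposal['trl']} is outside the call's expected range {lo}-{hi}")
    if call["action_type"] == "IA" and not proposal["has_exploitation_plan"]:
        flags.append("IA call, but the proposal describes no exploitation plan")
    return flags
```

Each flag corresponds to a rejection reason evaluators apply routinely; running the rules before submission is the whole point.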

Pain Point 4: Your Consortium Has Blind Spots

The problem

The average consortium in funded EU projects has 5.3 partners. But size is not what matters — complementarity is. Evaluators check whether each partner brings a unique capability, whether the geographic coverage makes sense, and whether the consortium can credibly deliver the work plan. (See our analysis of what makes a strong consortium.)

Common consortium problems:

  • Two universities with overlapping expertise (redundancy)
  • No industrial partner in an IA that requires market validation
  • A coordinator with no prior EU project experience
  • Missing representation from the target end-user group mentioned in the call

These issues are invisible to the consortium itself. Everyone thinks their contribution is essential. But evaluators see the gaps.

How CriteriaI solves this

CriteriaI's Consortium Recommendations analyse your described consortium against the patterns found in successfully funded projects in your topic area.

The system identifies: what types of organisations typically participate in similar funded projects, which countries are most active, and where your consortium may have gaps. If similar funded projects consistently include an SME for market validation and yours does not, the report flags it.

In the Deep Evaluation mode, you also get Reference Consortiums — real examples of funded project teams in your topic area, so you can see exactly what evaluators have approved before.
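Gap detection of this kind can be sketched as a set comparison: which organisation types appear in most similar funded projects but not in yours? The 60% threshold and the category labels below are illustrative assumptions, not CriteriaI's actual parameters.

```python
from collections import Counter

def consortium_gaps(your_org_types, similar_projects, threshold=0.6):
    """Flag organisation types (e.g. 'SME', 'university') present in
    at least `threshold` of similar funded projects but absent from
    your consortium. Threshold and categories are illustrative."""
    counts = Counter()
    for project_org_types in similar_projects:
        for org_type in set(project_org_types):  # count each type once per project
            counts[org_type] += 1
    n = len(similar_projects)
    yours = set(your_org_types)
    return sorted(t for t, c in counts.items()
                  if c / n >= threshold and t not in yours)
```

If most funded consortia in your topic include an SME and yours is all universities, the function returns `"SME"` as a gap, mirroring the example in the text.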

Pain Point 5: You Have No Feedback Loop Before Submission

The problem

In most universities and research organisations, the internal review process for EU proposals looks like this:

  1. The PI writes the proposal
  2. A colleague reads it and says "looks good"
  3. The grants office checks the budget
  4. Submit and hope

There is no structured pre-submission evaluation against the actual criteria. No benchmark against funded projects. No systematic check for the weaknesses that evaluators consistently flag.

The first real feedback comes four months later, in the ESR. By then, the call may have closed permanently.

How CriteriaI solves this

CriteriaI generates an AI-powered evaluation report in under 30 seconds. The report addresses all three Horizon Europe evaluation criteria (Excellence, Impact, and Quality and Efficiency of Implementation) through three core sections:

Scorecard — scores across novelty, methodology, impact potential, and consortium strength, calibrated against the patterns in 54,884 funded projects.

Gap Analysis — specific weaknesses in your proposal mapped to evaluation criteria, with actionable recommendations for improvement.

Similar Projects — the funded projects most similar to yours, so you can benchmark your scope, approach, and ambition against what has been approved.

Deep Evaluation adds: an improvement checklist, risk register, call alignment matrix, mock reviewer critique, and budget allocation guidance — generated by specialist AI agents that each focus on one evaluation dimension.

Want to see what a report looks like? Browse our sample report to explore the format, scoring, and analysis sections.

This is the feedback loop that did not exist before. Not a replacement for expert review, but a structured first pass that catches the predictable mistakes before a human reviewer needs to.

Pain Point 6: Landscape Research Takes Weeks

The problem

A thorough landscape analysis for a Horizon Europe proposal typically involves:

  • Searching CORDIS for funded projects (limited search, slow interface)
  • Searching the Funding & Tenders Portal for related calls
  • Reading through dozens of project summaries and deliverables
  • Mapping consortium compositions manually
  • Cross-referencing with Google Scholar for academic overlap
  • Compiling everything into a state-of-the-art section

For a well-prepared proposal, this takes 2–4 weeks of a researcher's time. That is time not spent on actual research.

How CriteriaI solves this

The combination of the Project Explorer and proposal evaluation compresses weeks of landscape research into minutes.

Search semantically across 54,884 projects. Filter by funding scheme, date, country, and topic. View consortium details for any project. Then submit your proposal objectives and get a scored evaluation with the most relevant funded projects identified automatically.

The state-of-the-art section still needs to be written by a human who understands the science. But the data gathering — finding what exists, identifying gaps, benchmarking novelty — no longer needs to take weeks.

What This Looks Like in Practice

Here is a typical workflow:

  1. Explore — Search the Project Explorer for your topic area. See what has been funded, by whom, and with how much. Identify the gap your proposal fills.

  2. Evaluate — Submit your proposal title and objectives. Get a scored evaluation with novelty assessment, similar projects, and gap analysis in under 30 seconds.

  3. Refine — Use the feedback to strengthen your novelty framing, adjust your consortium, and align with the call requirements.

  4. Re-evaluate — Submit the revised version and compare scores. Iterate until the predictable weaknesses are addressed.

  5. Submit — Send your proposal to the Funding & Tenders Portal with confidence that the avoidable mistakes have been caught.

Want to see the report format before signing up? View the sample report →

The Numbers Behind CriteriaI

All data in CriteriaI comes directly from the European Commission's official CORDIS database:

  • 54,884 funded projects from Horizon Europe and Horizon 2020
  • €121.2 billion in EU contributions tracked
  • 56,671 unique organisations mapped
  • Average consortium: 5.3 partners (ranging from solo ERC grants to 210-partner coordination actions)
  • Top participating countries: Germany (16,090 projects), Spain (13,965), United Kingdom (13,160), France (12,716), Italy (12,533)

The database is updated monthly. Every number in a CriteriaI evaluation report traces back to real funded projects, not estimates or projections.

Try It Free

CriteriaI offers 3 free evaluation credits on sign-up — no credit card required.

Submit your proposal objectives and see how they compare against 54,884 funded projects. Get a scored evaluation with novelty assessment, similar projects, consortium recommendations, and gap analysis.

The proposal you spent six months writing deserves more than "looks good" as a review.

Start your free evaluation →

Ready to evaluate your proposal?

Get AI-powered feedback against 54,884 funded EU projects in under 30 seconds.