The Role of Personal Interviews in Ensuring Fair Evaluation Amid Increasing AI‑Generated Proposal Distortion – GAIA
IMPORTANT NOTICE: This article was generated using artificial intelligence, guided by prompts from GAIA's Director Jokin Garatea. We share it intentionally as an experiment, highlighting the complexities and ambiguities of using AI to create content and project proposals. In doing so, we invite readers to reflect on how AI engages with and attempts to address the very challenges it creates.
“As the European Union’s flagship programme for Research and Innovation, Horizon Europe (HE) has introduced substantial improvements over previous Framework Programmes. Despite its strategic ambition, many businesses—especially SMEs—as well as academic and industrial researchers continue to face major obstacles in accessing and fully benefiting from the programme. Administrative burden, perceived inconsistencies in the evaluation process, and limited financial appeal remain among the most cited barriers to participation.
In recent years, a new challenge has emerged: the widespread use of generative artificial intelligence in proposal development. While AI can enhance efficiency, it has increasingly led to homogenised, generic, and artificially polished applications that obscure the true capabilities of applicants. This makes it more difficult for evaluators to accurately assess merit, feasibility, and the applicant’s actual capacity to execute the proposed work.
In this context, the personal interview—whether conducted remotely or in person—has become a critical instrument for restoring fairness, authenticity, and depth to the evaluation process.
Why AI Introduces Distortion in Proposal Evaluation
Homogenisation of Proposal Language
Generative AI tools can produce highly coherent, structured, and compelling narratives. While this may support applicants with fewer resources or weaker writing skills, it also blurs distinctions between highly capable teams and those with limited expertise.
Risk of Over‑engineered or Misleading Content
AI can artificially strengthen arguments, forecast unrealistic impacts, or incorporate advanced terminology without the applicant fully understanding it. This creates the risk that evaluators might overestimate the applicant's expertise, misinterpret AI-generated content as evidence of actual competence, or fail to identify gaps, inconsistencies, or feasibility issues hidden by polished writing.
Reduced Visibility of the Human Team Behind the Proposal
AI drafting tools can mask human dimensions, making it harder to distinguish genuinely strong teams from those whose written proposals are “enhanced” beyond their true capability.
The Added Value of Personal Interviews in Ensuring Fairness
Authentic Validation of Competence and Understanding
A personal interview enables evaluators to directly assess whether the applicant understands the technical details of the proposal, can justify methodological choices, and can realistically deliver the work described.
This “human verification layer” counterbalances AI-generated distortion and ensures that competence—not writing proficiency or AI fluency—drives evaluation outcomes.
Clarification of Ambiguous or AI-Inflated Claims
Interviews allow evaluators to explore areas where AI might have exaggerated expected impacts, generated overly generic methodologies, or introduced technical assertions without evidence.
Through open discussion, evaluators can differentiate between genuine innovation and AI-produced embellishment.
Assessment of Team Dynamics, Motivation, and Commitment
A proposal alone cannot reveal how well the team collaborates, their strategic alignment, their capacity to respond to challenges, or their passion and motivation.
The interview provides a richer, more reliable picture—essential in programmes like Horizon Europe, where execution capability is as important as innovative potential.
Recommendations
- Integrate structured interviews in all high-impact or highly competitive calls.
- Adopt standardised evaluation guidelines for interviews to ensure fairness and consistency across evaluators and thematic areas.
- Train evaluators to identify AI-generated distortions and assess real competence.
- Use interviews to complement—not replace—written materials, ensuring a balanced, holistic assessment.
- Provide feedback to applicants to enhance transparency and improve proposal quality over time.
Conclusion
As AI tools reshape the landscape of proposal writing, Horizon Europe must adapt to ensure fairness, quality, and authenticity in its evaluation procedures. Personal interviews provide a reliable and equitable mechanism to counteract AI-driven distortions, enabling evaluators to focus on what truly matters: the people, the expertise, the feasibility, and the real innovation potential behind each proposal.
By placing greater emphasis on direct human interaction, HE can safeguard its integrity while fostering a more inclusive and effective research and innovation ecosystem.”
Comment by the author:
Among other benefits, personal interviews help ensure fairness by verifying that applicants can realistically deliver the work described in the proposal and by allowing evaluators to explore areas where AI might have exaggerated expected impacts. Through open discussion, evaluators can differentiate genuine innovation from AI-produced embellishment.
Written by Jokin Garatea, GAIA
