
Beyond the hype: Centering humans in CGIAR’s genAI research

Open Access | CC-BY-4.0

Illustration: five human figures and a robot gesturing at surrounding charts and graphs (Photo credit: ProStockStudio/Shutterstock.com)

A related webinar series will begin with a June 5 genAI skills workshop for researchers.

The rise of artificial intelligence (AI)—and more recently, the expanding availability and growing public use of generative AI (genAI)—has sparked interest across scientific and policy communities. GenAI refers to a class of models that generate text, images, code, and other content by identifying patterns in vast datasets. These tools allow users to interact with AI systems through everyday language, making advanced technologies more accessible and easier to experiment with. Since the public release of ChatGPT in 2022, genAI has evolved rapidly, opening new possibilities for research, communication, and innovation.

A growing number of genAI applications now promise to streamline economic and social analysis, improve program impacts, and play useful roles in decision-making across sectors—all key to research and policy on food systems, sustainable development, climate adaptation, and other core elements of CGIAR and IFPRI missions. But genAI is more than a technical innovation—it is altering how we create, interpret, and share knowledge. This shift brings not only enthusiasm and optimism, but also uncertainty and concern over potential risks and unintended impacts.

Much of the uncertainty surrounding genAI stems not only from the technology itself—which is prone to hallucinations (fabricated, incorrect information), systemic biases, legal risks, and significant environmental costs—but also from how it is described and promoted. Public narratives often overstate its objectivity and reliability, masking the human choices and assumptions built into its design. GenAI’s value is inseparable from human judgment; it depends on the competencies, ethics, and oversight of those who use it. While often presented as a shortcut to neutrality or efficiency, it remains a tool—one that requires critical interpretation and ethical reflection.

This post is the first in a series from IFPRI’s AI For Food Systems Research initiative. The series aims not to teach readers how to use AI tools but to invite reflection. By surfacing key questions and challenges, we hope to start a longer conversation about how AI is changing the way we work—and what capabilities, cautions, and further discussions are needed to shape its use.

Opportunities for using genAI in CGIAR research

GenAI is already beginning to reshape how research is conducted across CGIAR. It can help researchers stay current by scanning and summarizing large volumes of open-access literature, policy documents, and historical datasets. This can save time, support more informed analysis, and reduce the burden of manual review. Given CGIAR’s commitment to open access, CGIAR publications have likely helped train large language models like ChatGPT (which draw on vast corpora of open data harvested from the web)—enabling them to better serve agricultural researchers around the world.
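As a concrete illustration, summarizing large volumes of literature with an LLM usually starts by splitting each document into chunks small enough for the model's context window. The sketch below is illustrative only: the `chunk_text` helper is a hypothetical utility, and the actual summarization call is left as a placeholder for whichever genAI API a team uses.

```python
def chunk_text(text: str, max_words: int = 300, overlap: int = 30) -> list[str]:
    """Split a long document into overlapping word-window chunks,
    each small enough to fit within a model's context limit.
    Overlap preserves continuity across chunk boundaries."""
    words = text.split()
    if len(words) <= max_words:
        return [" ".join(words)]
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

# Each chunk would then be sent to a genAI API for summarization, e.g.:
#   summaries = [summarize(chunk) for chunk in chunk_text(report)]
# where summarize() is a placeholder for the API call of your choice.
```

The overlap parameter is a common design choice: without it, a sentence cut at a chunk boundary can be misread by the model in both halves.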

GenAI can also support more creative aspects of research. It can assist in writing code, drafting reports, structuring communications, and generally broadening the reach of research insights. In some cases, CGIAR researchers have already used AI to streamline content development and translation for policy audiences.

Looking further ahead, genAI tools may be used to support project delivery by improving how researchers and farmers communicate. For example, chatbots could help answer questions in local languages or provide real-time, localized insights on-demand about weather, pests, or soil health—offering farmers something akin to a digital extension agent.

GenAI is also likely to play a growing role in policy development and decision-making. Global development organizations are already exploring its potential to synthesize complex datasets and generate evidence-based recommendations for nutrition, health, and agricultural investments. For example, the AI firm DevelopMetrics, which uses models tuned to global development issues, has partnered with IFPRI, MercyCorps, and other organizations.

Potential genAI risks

But these potential benefits come with risks. The large datasets used to train genAI models often include copyrighted or proprietary content, raising concerns about plagiarism, data misuse, and privacy. Because these models are “black boxes,” they may also generate so-called hallucinations—outputs that appear authoritative but are factually wrong or entirely made up. Without strong domain knowledge, these errors can be hard to catch, running the risk of propagating misinformation.

Overreliance on genAI can reduce independent thinking and may lead researchers to accept outputs uncritically. Because these models inherit and amplify patterns from their training data, they can reinforce existing biases—related to gender, race or ethnicity, age, geography, or ideology—without users realizing it. Even well-intentioned efforts to correct for bias may create unintended effects, for example, oversimplifying complex dynamics or unintentionally excluding less visible perspectives.

There are also concerns that increasing dependence on genAI could limit how researchers collaborate and communicate. When outputs are generated quickly and easily, there may be fewer opportunities for discussion, peer feedback, or collective sense-making—all critical to rigorous and inclusive research.

A related problem for research collaboration is inequity. While it may seem that less experienced users stand to gain the most from genAI, the opposite is often true. Those with more expertise are typically better equipped to prompt effectively, interpret results, and correct errors. This dynamic can deepen existing inequalities in research capacity, rather than closing them.

For genAI to serve as an inclusive and responsible tool, then, it is clear that ethical model design alone is not enough. We also need a better understanding of how researchers engage with AI in practice—what they expect, how they evaluate outputs, and where misunderstandings or risks arise. Developing that understanding is essential for building institutional literacy and shaping AI’s role in ways that strengthen research, rather than compromise it.

(Gen)AI literacy

Given this range of concerns, what are the best approaches for responsible use of genAI in research? Organizations can draft a set of responsible AI principles, but these guidelines are often abstract and difficult to apply in practice. Their effectiveness depends not only on technical implementation, but also on whether researchers can understand, interpret, and adapt them in diverse real-world contexts.

One promising trend in genAI development is toward models adapted to specific tasks or subjects, in contrast to general-purpose applications such as ChatGPT. For example, the Elicit chatbot performs AI-enabled systematic reviews of research papers. Such tailored models are meant to better reflect the needs of particular sectors or users, and thus may help researchers seeking to employ AI both effectively and responsibly.

But even well-tailored models come with trade-offs. Elicit, for example, promises 90% accuracy, and its website advises users to check its outputs closely. GenAI models require significant technical capacity and computing power to develop, and they still face the “garbage in, garbage out” problem: poor-quality training data or prompting can produce misleading insights, reinforcing unintended biases and obscuring critical knowledge.
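One simple, mechanical check in the spirit of that advice is to verify that any verbatim quotes in a model's output actually occur in the source document. The sketch below is illustrative, not Elicit's method; the function name and approach are assumptions.

```python
import re

def ungrounded_quotes(summary: str, source: str) -> list[str]:
    """Return direct quotes in an AI-generated summary that do not
    appear verbatim in the source document -- candidate hallucinations
    a human reviewer should inspect first."""
    quotes = re.findall(r'"([^"]+)"', summary)
    return [q for q in quotes if q not in source]
```

A check like this catches only fabricated quotations, not paraphrased errors, so it complements rather than replaces expert review.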

Addressing the challenges of responsible AI use in research requires more than just better models. It demands a shared, critical understanding of what genAI can and cannot do. AI literacy, “a set of competencies that enables individuals to critically evaluate AI technologies,” is essential for mitigating risks and ensuring responsible application.

This includes understanding the strengths and limits of different models, knowing when to use (or not use) genAI tools, and situating those decisions within broader research and ethical frameworks.

Several frameworks now outline progressive steps to help researchers build these competencies. For CGIAR, five priority areas stand out:

  1. Understanding capabilities and limitations: Recognizing that genAI tools are statistical models with specific strengths, weaknesses, and use cases. Knowing when and how to use them is as important as knowing what they can do.
  2. Effective interaction: Developing practical skills for structuring inputs and refining outputs. This includes giving clear instructions and checking responses for relevance, accuracy, and fit.
  3. Critical evaluation: Assessing the credibility and reliability of AI-generated content—while maintaining human oversight and understanding the trade-offs between creativity and specificity that shape outputs and risk hallucinations.
  4. Responsible use: Bringing ethical, legal, and policy awareness to AI use. This includes identifying when content is AI-generated, protecting privacy, and aligning use with organizational values.
  5. Adaptability: Staying open to change. As genAI tools evolve, so must our capacity to experiment, learn, and adapt—individually and institutionally.
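The “effective interaction” competency above often comes down to structuring prompts consistently: stating the model's role, the task, the relevant context, and explicit constraints. A minimal sketch of that habit (the template fields and the example task are illustrative assumptions, not a CGIAR standard):

```python
def build_prompt(role: str, task: str, context: str, constraints: list[str]) -> str:
    """Assemble a structured prompt: who the model should act as,
    what to do, what background to rely on, and which rules to follow."""
    lines = [
        f"Role: {role}",
        f"Task: {task}",
        f"Context: {context}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="agricultural policy analyst",
    task="Summarize the key findings of the attached report in 150 words.",
    context="Open-access CGIAR report on climate adaptation.",
    constraints=["Cite sections for every claim", "Flag any uncertainty explicitly"],
)
```

Writing constraints down, rather than leaving them implicit, also makes it easier to review and refine prompts as a team.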

Ultimately, the goal is not to replace human expertise, but to enhance it. GenAI works best when it is guided by human creativity, experience, and ethical reasoning. These tools can assist, but they should not decide. Researchers still need to set the agenda, define the questions, and make the final judgments.

As AI becomes more embedded in research practice, soft skills—such as critical thinking, collaboration, and adaptability—will become just as important as technical proficiency. Building AI literacy is therefore an investment in how CGIAR and its partners will engage with emerging technologies to support fair, informed, and effective research.

Towards AI literacy

IFPRI is launching AI For Food Systems Research as a training program to build organizational AI literacy and to strengthen staff competencies and capacities to assess AI’s role in project delivery. This will be an iterative process, involving needs assessment, targeted professional development, and participatory engagement to align AI adoption with institutional values and norms for responsible and effective implementation.

We encourage other CGIAR centers and partners to collaborate in this effort. Those interested in participating or learning from our approach are invited to contact the project participants.

By taking an active role in shaping AI’s integration, we aim to build a pragmatic, forward-looking research community and an effective institutional policy, equipping us not only to use AI responsibly but to actively shape its role in advancing sustainable food, land, and water systems.

Eliot Jones-Garcia is a Senior Research Analyst with IFPRI’s Natural Resources and Resilience Unit. Opinions are the author’s.

ChatGPT was used iteratively during the planning, drafting, and revision of this blog post. The author provided the content and the text was carefully reviewed and edited for publication.
