
Asking the right questions: A stakeholder dialogue on generative AI in digital extension

Open Access | CC-BY-4.0

A farmer in Ghana pauses to look at her phone. AI extension services offer a new way to reach farmers, but there are many trade-offs. Photo credit: Kwame Amo/Shutterstock

By Eliot Jones-Garcia, Kristin Davis, and Niyati Singaraju

Generative artificial intelligence (gen AI) is gaining traction in agriculture. Conversational agents such as chatbots and voice assistants are being explored as digital extension solutions, promising scalable, personalized advice for farmers that can supplement and support the work of human extension agents. Yet designing and deploying such AI tools presents many challenges.

The needs of smallholder farmers have often been among the last considered in rollouts of new agricultural technologies, resulting in uneven rates of adoption, where some farmers benefit while others are left behind. These disparities are amplified in the digital landscape. Some farmers may not have the necessary access to mobile phone technologies, internet connectivity, or electricity; among those who do, some may be comfortable with new AI tools and others confounded.

At the same time, the potential rapid deployment of gen AI in agriculture risks entrenching a common deficit-based assumption about new technologies: That farmers lack knowledge and that this gap can be “fixed” from the outside with the right content or nudges engineered by AI developers. This approach downplays or even ignores key farmer needs, overlooking social, economic, and technical context, as well as the community-based knowledge systems that farmers already rely on.

On April 7, 2025, 20 members of the GenAI for Advisory (GAIA) consortium, including experts from CGIAR centers, CABI, Digital Green, and the Gates Foundation, among other organizations, convened to explore more grounded alternatives. The GAIA project aims to enhance the efficacy, reliability, and contextual relevance of AI-generated agricultural advisories for small-scale producers in the Global South.

The discussions were guided by the C-H-A-T framework (Figure 1), a tool developed by IFPRI researchers, based on existing digital extension literature and early evaluations of gen AI tools, to surface dilemmas in achieving these goals. Participants examined the values embedded in the design of gen AI tools and explored how they can best be adapted for agricultural advisory.

Figure 1: The C-H-A-T framework. Source: IFPRI

Below are the key insights from our discussions:

C – Collective knowledge, not just custom content

Gen AI conversational agents deliver personalized, one-on-one advice. Yet such an approach to extension cuts against the way that farmers have long engaged with agricultural knowledge: Collectively, through observation, informal debate, and peer exchange. Cultivating such collective approaches is widely recommended in extension practice, and discussion participants looked at how best to integrate AI tools with them.

Key takeaways

  • WhatsApp groups and self-help networks offer useful models. These spaces, both online and in-person, enable farmers to collaborate, share information, and make sense of the routine problems they face, and offer lessons for creating effective AI tools.
  • Local hierarchies matter. The history of extension shows that social networks around agriculture can exclude certain groups (most prominently women) and amplify misinformation. Efforts to develop AI extension tools must build in safeguards to address these problems, including transparent selection and rotating roles for those overseeing the projects, offline alternatives, and clear grievance and feedback channels.
  • Leverage collective knowledge. AI-generated advice and recommendations should be classified by scope (personal, household, or community) and should not be taken in isolation; AI tools should actively support deliberation with prompts to consult peers or extension agents before acting (a minimal sketch follows this list).
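
To make the scope idea concrete, here is a minimal Python sketch of how an advisory backend might tag each recommendation and append a deliberation nudge. The Scope labels, Advice structure, and prompt wordings are our own illustrative assumptions, not features of any existing GAIA tool.

```python
from dataclasses import dataclass
from enum import Enum


class Scope(Enum):
    """Who a recommendation affects, following the scopes named above."""
    PERSONAL = "personal"
    HOUSEHOLD = "household"
    COMMUNITY = "community"


# Illustrative deliberation prompts, one per scope, appended so that
# advice is never delivered in isolation.
DELIBERATION_PROMPTS = {
    Scope.PERSONAL: "Consider checking this with your local extension agent.",
    Scope.HOUSEHOLD: "You may want to discuss this with your household before acting.",
    Scope.COMMUNITY: "This affects shared resources; please raise it with your farmer group.",
}


@dataclass
class Advice:
    text: str
    scope: Scope


def render(advice: Advice) -> str:
    """Attach the scope label and a peer-consultation nudge to raw advice."""
    return f"[{advice.scope.value}] {advice.text}\n{DELIBERATION_PROMPTS[advice.scope]}"


print(render(Advice("Stagger irrigation to every third day.", Scope.COMMUNITY)))
```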

H – Human insight into context

Human extension workers do more than answer questions—they read between the lines, probe intent, and respond to nuance. Gen AI systems, in contrast, generally lack such capacities, addressing only the surface meaning and intent of farmers’ questions. This creates another set of challenges for designing AI agricultural agents that must be addressed.

Key takeaways

  • Designing for the “average” user flattens diversity, limiting an AI agent’s capacity to help. AI tools typically do not examine the broader context of a given question, including the user’s personal economic circumstances, gender, or other demographics. Thus, they often overlook meaningful differences shaped by access to resources—for example, women are less likely to hold land titles, affecting how they engage with advice or act on recommendations.
  • Farmers express needs differently. Differences in literacy, language, and personal experience mean some users may be unaware of the underlying issues that prompted their question, or unable to articulate the problem clearly. To overcome this obstacle and respond consistently and meaningfully to queries, AI agents must be trained to employ careful interpretation and probing questions (see the sketch after this list).
  • Language is messy. AI systems must be updated regularly to keep up with idioms and evolving terms, and trained to understand the typical ways that users in particular areas frame questions.
  • Emotion and uncertainty shape interactions. Trust is an essential dimension of extension work, typically built through personal encounters. Gen AI tools face many limitations on this front, but they should be able to recognize common emotional cues in users’ language and respond to them. AI agents that can’t account for tone, urgency, or hesitation risk losing trust.
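
As one illustration of probing before answering, the following sketch has the agent check whether the context it needs is present and ask a clarifying question when it is not. The required fields and probe wordings are hypothetical simplifications; a real agent would infer missing context from the conversation rather than from a fixed checklist.

```python
# Context an advisory agent needs before answering; the fields and probe
# wordings are hypothetical simplifications for illustration.
REQUIRED_CONTEXT = ("crop", "location")

PROBES = {
    "crop": "Which crop are you asking about?",
    "location": "Where is your farm located?",
}


def next_turn(query: str, known_context: dict) -> str:
    """Return a probing question if context is missing, else answer.

    Rather than answering the surface question immediately, the agent
    checks what it still needs to know and asks for that first.
    """
    for field in REQUIRED_CONTEXT:
        if field not in known_context:
            return PROBES[field]
    return f"(answer '{query}' using context {known_context})"


# Simulated exchange: the agent probes twice before answering.
context = {}
query = "My leaves are yellowing, what should I do?"
print(next_turn(query, context))   # asks about the crop
context["crop"] = "maize"
print(next_turn(query, context))   # asks about the location
context["location"] = "Northern Ghana"
print(next_turn(query, context))   # now answers
```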

A – Augmentation with accountability and adaptability

Rather than replacing human advisors, gen AI tools should be used to extend their reach—especially where extension resources are limited. Even so, safeguards are needed to ensure accountability, including clear points where the AI signals that a query goes beyond its scope and directs the farmer to a human advisor. Without such measures, automation may lead to bad advice, damaged trust, and other problems.

Key takeaways

  • Different farmers bring different expectations. Some farmers have never engaged with extension services; some that have may be unfamiliar with gen AI tools and view them with uncertainty; others will be more comfortable with them and even liken them to trusted human agents. To ensure a dependable first contact for all users, balance establishing accountability with broad reach; offer human help when needed.
  • Design for equity gaps, not around them. Projects to build gen AI tools should mirror the ways many development programs address inequalities, seeking to actively reduce inequities by prioritizing underserved users and responding to their needs, so AI-generated recommendations close gaps rather than perpetuate them.
  • Advice must lead to action. Where possible, humans should follow up on AI-generated advice: Check its feasibility against the farmer’s constraints (budget, labor, water, input access, timing), then translate that guidance into concrete steps, with local suppliers or services and cost and time estimates.
  • Making humans available is essential. When a farmer’s risk or uncertainty surrounding the use of an AI agent is high, ensure there are procedures to hand off to a qualified advisor (see the sketch after this list); examples include pest or disease outbreaks, pesticide advice and dosages, credit or legal commitments, and complex diagnoses.
  • Systems must adapt over time. As users grow more confident or conditions change, a virtual agent’s advice and interaction style must evolve accordingly.
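
The handoff point above can be made concrete with a short sketch, assuming a crude keyword screen in place of a proper risk classifier; the keyword list and routing messages are illustrative assumptions only.

```python
from enum import Enum


class Risk(Enum):
    LOW = "low"
    HIGH = "high"


# Illustrative high-stakes topics drawn from the examples above; a real
# system would classify risk with a trained model, not keywords.
HIGH_RISK_KEYWORDS = ("pesticide", "dosage", "outbreak", "credit", "loan")


def assess_risk(query: str) -> Risk:
    """Crude keyword screen standing in for a proper risk classifier."""
    q = query.lower()
    return Risk.HIGH if any(word in q for word in HIGH_RISK_KEYWORDS) else Risk.LOW


def route(query: str) -> str:
    """Answer low-risk queries; hand high-risk ones to a human advisor."""
    if assess_risk(query) is Risk.HIGH:
        return ("This question is beyond what I can safely answer on my own. "
                "Connecting you with a qualified extension agent.")
    return f"(generate advisory response for: {query})"


print(route("What pesticide dosage should I use on my tomatoes?"))
print(route("When should I plant millet this season?"))
```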

T – Trust through transparency and dialogue

Trust in extension services is not just about delivering correct answers—it’s about explaining, listening, and building confidence over time. This presents an obstacle for AI agents; many users associate digital tools with scams or misinformation, especially when advice seems opaque or tone-deaf.

Key takeaways

  • Farmers want to know who’s behind the tool. Institutional transparency is important: Making clear who is offering an AI tool (a specific project, established extension service, etc.), and explaining what it can and cannot do.
  • Explanations build credibility. Users need to know why a recommendation was made—and what alternatives exist, so virtual agents should provide ample context.
  • Feedback should lead to change. AI extension systems should value user input. They should provide an accessible, easy way to offer feedback, and should log and respond to comments and criticisms rather than ignore or bury them (see the sketch after this list).
  • Gen AI tools need social anchors. Champion farmers, local facilitators, and other trusted intermediaries can provide legitimacy to AI in extension; they help interpret advice, make referrals, and collect feedback.
  • Transitions must be planned. New AI-in-extension projects must sooner or later be handed over to community groups, government extension programs, or others offering extension services. This requires a funded, staged transition—including training for group members, clear governance and maintenance roles, data and service continuity, and established sources of support once a project closes—so tools do not disappear or break down when short funding cycles end.
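
As a rough illustration of several of these points together, the sketch below pairs a transparent response payload (answer, explanation, alternatives, and provider) with a simple feedback log. The field names and the record_feedback helper are hypothetical, meant only to show this information traveling together.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Response:
    """A payload carrying provenance and reasoning, not just an answer."""
    answer: str
    explanation: str   # why the recommendation was made
    alternatives: list # what else the farmer could consider
    provider: str      # who is behind the tool


FEEDBACK_LOG = []


def record_feedback(user_id: str, comment: str) -> None:
    """Log feedback with a timestamp so it can be reviewed and acted on."""
    FEEDBACK_LOG.append({
        "user": user_id,
        "comment": comment,
        "at": datetime.now(timezone.utc).isoformat(),
    })


response = Response(
    answer="Apply compost before the first rains.",
    explanation="Soils in your area are typically low in organic matter.",
    alternatives=["Green manure from a cover crop", "Purchased fertilizer, if affordable"],
    provider="Example district extension service (hypothetical)",
)
record_feedback("farmer-042", "The compost advice worked, but prices were higher than stated.")
print(f"{response.answer} Why: {response.explanation} (From: {response.provider})")
```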

Designing for trade-offs

No advisory system can deliver perfect answers to every farmer, every time. But understanding design trade-offs helps balance usability, trust, and equity (Table 1).

Table 1: Design trade-offs in gen AI advisory tools

These tensions can’t be resolved all at once, but require ongoing negotiation—guided by user feedback, grounded evaluation, and continual adaptation.

Conclusion: Responsible gen AI is context-sensitive, not just technically capable

Building better digital advisory systems means asking better questions—not just in the way AI applications interpret data, but in their overall design. Responsible AI begins by recognizing that design is never neutral. It requires balancing the immediate needs of farmers with the longer-term goal of fostering more equitable food systems; navigating trade-offs through iterative design, evaluation, and adaptation; anticipating and mitigating unforeseen or harmful consequences; and ensuring that tools are human-centered and problem-oriented—enabling farmers to easily access useful and relevant information and advice.

Eliot Jones-Garcia is a Senior Research Analyst with IFPRI’s Natural Resources and Resilience (NRR) Unit; Kristin Davis is an NRR Senior Research Fellow; Niyati Singaraju is a Postdoctoral Fellow in Gender Research at the International Rice Research Institute and Gender and Inclusion Focal Point for the CGIAR Gender + AI Accelerator & Digital Transformation Initiative. Opinions are the authors’.

