HE-NEXUS Group

“Connecting People and Planet for a Resilient Future”

Flooding the zone: how inauthentic networks weaponise information in Sudan – and what humanitarians can do (March 2026) [EN/AR]
Country: Sudan
Source: CDAC Network

Sudan’s conflict is as much an information war as a physical one. Systematic disinformation campaigns amplify hate speech, erode social cohesion and target humanitarians – making information both a weapon of war and a barrier to effective humanitarian response.

CDAC’s work in Sudan’s information environment underscores a critical point: the information space must be treated as a strategic and critical part of humanitarian response, requiring urgent investment in communication infrastructure that builds trust and enables rapid clarification.

In August 2025, CDAC launched a new initiative in collaboration with Valent to address this directly. As the technical partner, Valent provided an AI model trained to identify inauthentic networks and monitor manipulation of online narratives – i.e. when disinformation campaigns use AI to artificially increase their reach and impact – using machine learning to detect patterns humans might miss. CDAC Network then translates these findings into trusted, clear and timely information in Arabic and English, shared with Sudanese and international actors to support safer decision-making amid displacement and crisis.
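The pattern-detection idea described above can be illustrated with a deliberately simple heuristic: flag a message as potentially coordinated when several distinct accounts push the same text within a short time window. This is a hypothetical sketch, not Valent's actual model, and all account names and posts below are invented.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account, text, timestamp). All data is invented.
posts = [
    ("acct_a", "Aid convoys are being looted daily", datetime(2025, 8, 1, 9, 0)),
    ("acct_b", "Aid convoys are being looted daily", datetime(2025, 8, 1, 9, 2)),
    ("acct_c", "Aid convoys are being looted daily", datetime(2025, 8, 1, 9, 3)),
    ("acct_d", "Water points reopened in the north", datetime(2025, 8, 1, 12, 0)),
]

def flag_coordinated(posts, window=timedelta(minutes=10), min_accounts=3):
    """Group identical texts; flag any pushed by several accounts within a short window."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))
    flagged = {}
    for text, entries in by_text.items():
        entries.sort()  # chronological order
        accounts = {account for _, account in entries}
        span = entries[-1][0] - entries[0][0]
        if len(accounts) >= min_accounts and span <= window:
            flagged[text] = sorted(accounts)
    return flagged

print(flag_coordinated(posts))
```

Real detection systems weigh many more signals (account age, posting cadence, network structure), but the core logic of surfacing synchronised, near-identical behaviour is the same.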

The value of this work does not sit in identifying signals alone. CDAC’s role is to act as the bridge between technical detection and humanitarian action – translating signals into trusted, contextualised and operationally relevant insight. Through its networks, its information integrity focus and its background in accountability to affected people, CDAC ensures analysis is grounded in local realities, aligned with humanitarian principles, and used safely by operational actors. This is what enables AI-assisted detection to inform decisions without introducing additional risk in already fragile contexts.

This report examines the project’s co-design ambitions, reviews the integration of AI into humanitarian response, measures the project’s operational impact on Sudanese and international partners, and draws together key lessons and recommendations from the pilot.

Key lessons and recommendations

Automation takes time – at first

Training AI models to detect and categorise behaviour in complex information environments requires significant time investment. Teaching AI models to identify protection risks, categorise sentiments and bridge the gap between common parlance and humanitarian terminology is not quick. Automation is expected to save time, but an intensive learning phase must be planned for from the start.

An AI model is a product – adaptable, but within limits

AI tools can be refined and expanded, but only within the constraints of their original design. When working with AI experts and labs, it is essential to differentiate between the product (what the model fundamentally is and does) and the service (how it is applied). This distinction will define the boundaries of co-design and determine whether a model can be designed from scratch or comes with assumptions and features already built in.

AI creates opacity – human ground-truthing remains irreplaceable

AI systems are often opaque, especially to non-technical users. In this project, the tool mapped networks and assessed narrative authenticity, but the ‘how’ behind those outputs was not transparent. Validating findings with local Emergency Response Rooms (ERRs) and international partners was therefore essential, and the AI analysis generally aligned with on-the-ground realities, building confidence in the outputs. But this does not negate the opacity problem: humanitarians struggled to grasp the methodology, while AI’s veneer of bias-free authority demands greater scrutiny. Human judgement remains irreplaceable.

AI integration requires deliberate effort, time and staffing

Introducing AI-based analysis requires significant time for all partners to understand what the tools are, how they work and what they can and cannot deliver. Without this foundational understanding, stakeholders cannot meaningfully steer the process, offer informed feedback or integrate the outputs into their workflows. Shared intentions are not enough: humanitarian responders and tech developers may start aligned on intended impact, but bridging the gaps – in technical understanding, translation between fields, expectation management and communication – requires time, skills and effort that must be explicitly resourced, not assumed within existing workflows. Learning curves must also be built into project timelines, on the understanding that capacity-building is not optional.

Understanding a model’s limits is crucial to interpreting its results

Large language models can compare large volumes of text and group similar content. In this project, that capacity was used to map inauthentic networks and the narratives they were pushing. However, this is very different from assessing how far a narrative travels beyond those mapped networks, or what its humanitarian impact might be. The model was good at seeing patterns inside a predefined set of networks but unable to assess authenticity outside them.
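The grouping capability described here can be sketched with a basic lexical stand-in for the model's approach: represent each text as a word-count vector, then greedily group texts whose cosine similarity clears a threshold. This is a simplified illustration, not the project's actual method; the texts and the threshold value are invented for the example.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def group_similar(texts, threshold=0.6):
    """Greedy grouping: each text joins the first existing group it resembles closely enough."""
    vectors = [Counter(t.lower().split()) for t in texts]
    groups = []  # each group is a list of indices into texts
    for i, vec in enumerate(vectors):
        for group in groups:
            if cosine(vec, vectors[group[0]]) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

texts = [
    "armed group blocks the main road to the camp",
    "the main road to the camp blocked by armed group",
    "clinic in the east resumed vaccinations today",
]
print(group_similar(texts))  # the two road-blockage posts group together
```

Note the limit the lesson describes: this sketch only clusters what it is given. Nothing in it can say whether a grouped narrative is authentic, how far it spreads beyond the supplied texts, or what it means for affected people.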

Over time, the team recognised the need for a clearer shared understanding of what the model was – and was not – doing, communicated in ways that enabled humanitarian colleagues to judge when outputs were relevant and when they were not. Being explicit throughout the process about which parts of the system are flexible, which are fixed and why is essential for productive collaboration across technical and humanitarian roles.

AI tends to obscure human contribution, which undermines participation

Co-design relies on contributors being able to see how their insights shape final outputs. However, AI-generated outputs can feel sterile and impersonal, making it harder for participants to recognise their role or feel that shared outputs are created with them in mind. This underscores the need for deliberate mechanisms (feedback loops, validation steps, human-in-the-loop methodologies) to keep people’s contributions visible and valued throughout the workflow.

AI shows potential for positive humanitarian impact, but realising it will need a structured, intentional approach and a longer timeframe

Despite the cautions above, this pilot offers promising signs that AI can meaningfully enhance humanitarian decision-making, with distinct impacts on humanitarian operations ranging from volunteer safety to resource planning. The challenge ahead is to responsibly harness this potential, embedding safety and humanitarian principles as non-negotiables. Frameworks like SAFE AI, a sector-wide architecture for responsible AI in humanitarian action, offer a systematic approach to embedding governance requirements, including community-in-the-loop, from design through deployment.

Connectivity is a humanitarian necessity, not a technical afterthought

Local responders, particularly ERRs and national NGOs, are operating with extremely limited connectivity. This project reinforced that connectivity gaps have severe impacts on coordination capacity and on critical information-sharing between Sudanese and international actors. Addressing this requires deliberate investment in: secure satellite internet access, shared connectivity hubs for humanitarian actors, equipment for local response networks, and operational costs for local actors.

Strengthening the information environment requires investing in trusted channels, not just detecting manipulation

Detecting and countering inauthentic networks addresses the supply side of the problem, but communities still need improved access to trusted, reliable, locally relevant information to fill the space that mis- and disinformation exploit. Strengthening information exchange between humanitarian actors and affected communities is essential to supporting informed decision-making. Investments should be made in strengthening trusted local media networks, radio, community information systems, and community feedback mechanisms. These channels can help ensure that communities have access to verified information during crises, reducing the reach and impact of inauthentic networks.
