The Council on Geostrategy’s online magazine


AI and international development: Explainability is key

At the United Nations (UN) General Assembly last month, James Cleverly, the Foreign Secretary, set out a vision for using artificial intelligence (AI) to make international development more impactful. The ‘AI for Development’ initiative aims to improve local AI skills and innovation in developing countries, initially focusing on Africa. Alongside this, the UK announced a £1 million fund to harness AI-enabled tools to help predict and respond to conflict and humanitarian crises.

The initiative sounds promising. Capacity building in AI skills is undoubtedly a productive endeavour, and AI’s novel predictive and classification capabilities are already proving helpful in development – such as for predicting crop yield, monitoring deforestation, and improving early warning systems for crises.

But there are reasons to be cautious about using AI for development. International development and AI can – at times – appear deceptively simple, and can offer seemingly definitive answers to complex challenges. The problem is that international development and AI sometimes suffer from the ‘black box’ phenomenon; the inputs and outputs are visible, but what happens in between (or the pathway that led to those outputs) is less understood. To put it another way, both sectors have an explainability problem.

Given this uncertainty across both fields, caution is key in efforts to bring the two together. It is imperative that practitioners acknowledge the strengths and limitations of both international development and AI before embarking on grand projects attempting to transform the world.

Attempts to ‘do development’ have varied enormously in their approach and outcome. Whilst projects can be hugely beneficial for communities, there is also a long history of projects that have had no effect or even a detrimental impact. This includes large infrastructure projects – cities, highways, and pipelines – which are never completed, prove too expensive to maintain, or are unsuited to the needs of local communities.

Failed development projects also include attempts to reskill populations in areas for which they have no cultural affinity, such as trying to teach a largely pastoral community to exploit the fish in their lakes. Then there are initiatives such as the ‘award-winning’ merry-go-round that pumps water whilst children play on it, later criticised for encouraging child labour and for actually reducing access to water: it replaced existing, functioning handpumps, which could be repaired with locally available parts, with roundabouts that could not easily be fixed once they had broken down.

The point is this: development projects often suffer from a lack of contextual understanding, a lack of flexibility in approach, perverse incentives, and a lack of involvement of the people and communities at the heart of development efforts. The field of international development has a tremendous amount to offer in informing progress towards the eradication of poverty, hunger, disease and illiteracy. But the answers are not apparent at this stage, and it is hubristic to overstate what international development is capable of currently.

The economist Albert Hirschman perhaps put this best in a 1981 essay. He wrote that development economics ‘had achieved its considerable lustre and excitement through the implicit idea that it could slay the dragon of backwardness virtually by itself or, at least, that its contribution to this task was central. We now know that this is not so’.

It is important to be clear about the limitations of development and AI. Otherwise, in bringing them together, problems of the past may be repeated, and may even unfold faster.

AI in its current form will not solve these issues facing development, and its application will prove counterproductive in some cases. AI systems are known to embed biases learnt from their training data. Given that international development seeks to redress injustices, training data that reflects historical or social inequities may not be the best place to find solutions. Indeed, rather than merely perpetuating existing inequalities, there is a risk that AI will exacerbate them.

The opaqueness of AI decision-making can also make it difficult to scrutinise how it produces outputs in a sector where accountability and explainability are vital. Development projects deal with vulnerable populations, and the use of AI in development raises ethical and privacy concerns, as well as the issue of consent – about whether people know data is being collected on them, and about the potential for sensitive data to be mishandled or misused.

To ensure AI is used effectively in these contexts, the development sector needs to come together to create international standards for the responsible use of AI in development. These standards should focus on explainability, as well as inclusivity, accountability, transparency, and contextual factors such as geography. Local talent should also be included in creating AI systems and standards for development. 


Dr Mann Virdee is a Senior Research Fellow in Science, Technology, and Economics at the Council on Geostrategy.
