DATA SCIENCE FOR SOCIAL GOOD
The challenges facing our world today have grown in complexity and increasingly require large, coordinated efforts between countries and across a broad spectrum of governmental and non-governmental organisations (NGOs) and the communities they serve. These coordinated efforts work towards supporting the Sustainable Development Goals (SDGs), and technology continues to play an important role in helping the development organisations and efforts active in this field deliver the highest impact.

Artificial intelligence (AI) and machine learning (ML) have attracted widespread interest in recent years due to a series of high-profile successes. AI has shown success in games and simulations, and is increasingly being applied to a wide range of practical problems, including speech recognition and self-driving cars. These commercial applications often have indirect positive social impact by increasing the availability of information through better search and language-translation tools, providing improved communication services, enabling more efficient transportation, or supporting more personalised healthcare. This interest brings with it pressing questions about the social impact, potential for malicious use, risks and governance of these innovations.
Targeted applications of AI to the domain of social good have recently come into focus. This field has attracted many actors, including charities like DataKind (established in 2012), academic programmes such as the Data Science for Social Good (DSSG) programme at the University of Chicago (established in 2013), the UN Global Pulse Labs, AI for Social Good workshops at conferences such as NeurIPS (2018 and 2019), ICML (2019) and ICLR (2019), and corporate funding programmes such as Google AI for Good Grants, Microsoft AI for Humanity, the Mastercard Center for Inclusive Growth and the Rockefeller Foundation’s Data Science for Social Impact, amongst others.
Results from several recent studies hint at the potential benefits of using AI for social good. Amnesty International and ElementAI demonstrated how AI can help trained human moderators identify and quantify online abuse against women on Twitter. The Makerere University AI research group, supported by the UN Pulse Lab Kampala, developed automated monitoring of viral cassava disease, and the same group collaborated with Microsoft Research and other academic institutions to set up an electronic agricultural marketplace in Uganda. Satellite imagery was used to help predict poverty and to identify burned-down villages in conflict zones in Darfur, and collaborative efforts between climate and machine learning scientists initiated the field of climate informatics, which continues to advance predictive and interpretive tools for climate action. Future improvements in both data infrastructure and AI technology can be expected to lead to an even more diverse set of potential AI for social good (AI4SG) applications.
This wealth of projects, sometimes carried out in isolation, has led to several meta-initiatives. For example, the Oxford Initiative on AIxSDGs, launched in September 2019, is a curated database of AI projects addressing the SDGs that indexes close to 100 projects. Once publicly accessible, it should support a formal study of such projects’ characteristics, success factors, geographical distribution, gaps and collaborations. Attempts at similar repositories include the ITU AI Repository. Another growing initiative, focused on connecting AI4SG projects and making their blueprints easily accessible and reproducible by anyone, is the AI Commons knowledge hub, backed by 21 supporting organisations and 71 members. These meta-initiatives can help aggregate experience and transfer knowledge between AI4SG projects, as well as establish connections between teams and organisations with complementary aims.
Despite the optimism, technical and organisational challenges remain that make successful applications of AI/ML hard to deliver in this field and that make lasting impact difficult to achieve. Some of the issues are deeply ingrained in a tech culture of moving fast and breaking things while iterating towards solutions, combined with a lack of familiarity with the non-technical aspects of the problems being addressed. There is also a long history of tech for good, including 30 years of Information and Communication Technology for Development (ICT4D). Not all applications of technology aimed at delivering positive social impact manage to achieve their goals, leaving us with important experiences from which we must learn. Importantly, technology should not be imagined as a solution on its own, outside of the context of its application: it merely aligns with human intent and magnifies human capacity. It is therefore critical to put technology in the service of application-domain experts early on, through deep partnerships with technical experts.
To achieve positive impact, AI solutions need to adhere to ethical principles, and both the European Commission and the OECD have put together guidelines for developing innovative and trustworthy AI. Related principles are encoded in the Montreal Declaration for Responsible AI and the Toronto Declaration. The European Commission states that AI needs to be lawful, ethical and robust in order to avoid causing unintended harm. The OECD Principles on AI state that AI should drive inclusive growth and sustainable development; be designed to respect the rule of law, human rights, democratic values and diversity; be transparent, so that people can understand AI outcomes; be robust, safe and secure; and be deployed with accountability, so that organisations can be held responsible for the AI systems they develop and use. Proper ethical design and governance of AI systems is a broad research topic of fundamental importance, and has been the focus of institutions and initiatives such as the AI Now Institute and the ACM Conference on Fairness, Accountability and Transparency.
It is also important to recognise the interconnectedness of the Sustainable Development Goals and of efforts to achieve them. The UN stresses that every goal needs to be achieved so that no one is left behind. Yet an intervention with a positive impact on one SDG could be detrimental to another SDG and its targets. Awareness of this interconnectedness should also be a driving principle for fair and inclusive AI for social good: AI applications should aim to maximise a net positive effect on as many SDGs as possible, without causing avoidable harm to other SDGs. Therefore, while being careful to avoid the pitfalls of analysis paralysis, both application-domain experts and AI researchers should aspire to measure the effects, both positive and negative, of their AI for social good applications across the five areas of people, planet, prosperity, peace and partnerships that frame the sustainable development agenda.
A recent UN report details how over 30 of its agencies and bodies are working towards integrating AI within their initiatives. According to the report, AI4SG projects need to be approached as a collaborative effort, bringing communities together to carefully assess the complexities of designing AI systems for the SDGs. These initiatives should aim to involve NGOs, local authorities, businesses and the academic community, as well as the communities these efforts support. The report highlights the vast potential of the technology across a wide spectrum of applications, while recognising the need for improved data literacy and a responsible approach to AI research and deployment. Our own efforts to put these considerations into practice have led us to develop a set of guidelines for approaching AI4SG, which we put forward in the next section and exemplify with a set of case studies, before concluding with a call to action that highlights the important role of technical communities in supporting the success of our social and global goals.