This transdisciplinary project aims to address a gap in research on the ethical development of innovative data-driven technologies for Smart Cities. These technologies, such as sensor networks, cameras, robotics and augmented reality, increasingly rely on AI for automated decision-making.
This kind of decision-making raises many ethical questions for AI developer communities. One well-known issue concerns facial recognition systems that are intended to reduce crime rates but have been shown to unfairly target certain groups. After more than a decade of discussion, there is broad consensus on the guidelines necessary for ethical AI deployment, and these are widely published and disseminated. Many toolkits now exist that aim to translate these guidelines into practice.
However, there is little research examining whether these toolkits have actually increased awareness or produced positive change in the ethical know-how of developers working with AI. This project will therefore take a practice-based approach, combining methodologies from the humanities and social sciences with expertise in computing science and engineering. The aim is to identify tangible steps for improving ethics-in-practice in the context of Smart Cities and AI development.