Over the past decade, we have seen several major smart city projects announced with much acclaim that ultimately fell short. Masdar City (UAE), Alphabet's Sidewalk Toronto (Canada), Lavasa (India), and Santander (Spain, which once boasted the largest number of sensors of any city in the world) are just a few examples. There are enough of them that it is time to take stock of what we can learn from these failures and seek alternative ways of framing and implementing smart cities.
Over the past several years, a number of books and articles have voiced important criticisms of the first wave of smart cities and proposed ways of rethinking the entire field. The main objections focus on the ethico-political framings of the projects, their tendency toward “solutionism” and reductionism, and the lack of an empirical understanding of how cities work. These critiques are not unlike those that technology companies have faced in domains such as AI.
In this post, I will summarize these critiques and offer alternative ways of thinking that could contribute to building more citizen-centric and ethically sound cities. We will cover this topic at the Intelligent Infrastructure event in Austin, Texas, on April 28th and 29th, 2022.
In this vein, we hope to lay the groundwork for a dialogue around ethics and technology that is constructive and informed by the rigorous thinking of scholars and practitioners.
This blog post is a first attempt; let me know if you disagree with our approach and how we can make it better. So without further ado, here are our Top 10 Lessons Learned:
- Top-down, bird’s-eye-view framing of smart cities amounts to extensive technological surveillance. Consent management in smart city data collection efforts is relatively weak, so standards and ethical principles are sorely needed. The growing role of data brokers raises transparency issues, and the public is left wondering who is using which data, for what purposes, and for whose benefit.
- Lack of attention to bias, not just in algorithms but also in unfair outcomes for marginalized communities, and a lack of awareness of equity and fairness. In this respect, AI-based policing methods and facial recognition have a well-earned bad reputation. They can contribute to the stigmatization of some communities and obscure the need for local police to build better race relations. Some academics even view these solutions as contributing to poor race relations in society.
- Contested data: Data are observations produced in specific contexts and are therefore contestable. Assuming that more data in and of itself will solve the political challenges of cities is naïve at best. Opening up to the different meanings of data can also offer genuine opportunities for civic engagement. A telling example is the air sensors and public displays of real-time air quality measures installed in the UK in the 1990s. Posting the data provoked a civic debate over which pollutants mattered for whom and which thresholds were dangerous. This type of debate around the meaning of data can incentivize citizen science and new partnerships that engage the public with these challenges. Technology becomes an enabler of the discussion rather than the debate itself.
- Privacy and identity challenges. Lack of attention to the inferential and predictive potential of data collected in public (images, mobility, transportation, etc.) raises fears about what smart city projects can see and assume about people’s private lives and identities. For instance, the places people visit can be used to infer their sexual identity. The pervasive use of cameras and facial recognition can undermine the right to be anonymous, a foundational element of democracy. Yet other applications may require facial recognition to authenticate users or ensure security without the surveillance component.
- Epistemologies and the politics of knowledge. Cities have complex histories and memories, and too frequently the use of technologies such as sensors can sideline other ways of knowing. Sensors and technologies have their place, but designers of smart cities need to be attentive to “small data” and use ethnographic tools to engage with community organizations that may build on other foundational forms of knowledge. Libraries can play a significant role and do far more than lend books (see “The City Is Not a Computer” for a fascinating discussion of this issue).
- Limitations of dashboard cultures. Cities have built user interfaces for decision support that resemble the traditional “war room” or “command center” for displaying metrics and sensor data. Those dashboards need to evolve into the basis for civic engagement and for alternative framings of problems and data.
- Lack of engagement with civic apps. The notion that governments are flawed because they lack the information and technology to govern more efficiently appears to be erroneous. Cities are fundamentally about power, politics, and capabilities (see “Ethics of Smart Cities” in “The Oxford Handbook of Ethics of AI”). As the author of that chapter asserts, 311-like programs for reporting problems do not explicitly make cities more equal or inclusive, nor do they cultivate citizenship. These projects can have the unintended consequence of making citizens less engaged, and many of the problems addressed through 311 programs are essentially questions of taxation and of allocating resources for infrastructure.
- Smart cities can learn a great deal from designers, artists, and projects that engage citizens with data, science, and sustainability to encourage creative solutions. Several projects worldwide have sought to engage the public with technology and data in more open-ended ways. Mozilla and Tactical Tech created The Glass Room, a pop-up store that turns a mirror on the technology industry to explore privacy and security experiences, encourage debate, and explore solutions. CitizenLab is a civic engagement platform that works across numerous areas, from sustainability to participatory budgeting. Artists such as Natalie Jeremijenko, an environmental artist whose work spans chemistry, physics, and engineering, engage the public in new ways of thinking about the environment and their capacity to change it. These examples provoke mutual learning and open new paths of socio-technical change toward better futures.
- More cities are developing ethical frameworks to guide smart cities. We are beginning to see more cities and trans-city networks form to create frameworks and guidelines that drive more equitable, sustainable, and appropriate socio-technical change in cities. The Basque Declaration and the Sharing Cities Declaration are two such examples. The Basque Declaration emphasizes criteria that promote technologies serving the common good, addressing the digital divide, and supporting open standards. The Sharing Cities Declaration seeks to address the downsides of the sharing economy and platform-mediated services that may affect fair working standards, sustainability, health and safety, digital rights, and data sovereignty. Barcelona has also created a framework for protecting technological sovereignty and enabling more democratic control of smart city projects. These efforts should be viewed as a counter-movement to the perceived shortcomings of the first wave of smart cities and are likely to grow in the coming years into important drivers of the smart cities of the future. For more on this, see Goodman in “The Oxford Handbook of Ethics of AI.”
- Transparency in “the city as platform” values. The current skepticism about big tech has forced a more open discussion of platform values. The section above on ethical frameworks is a testament to how the tech backlash is affecting cities. How are smart cities explicitly protecting values such as privacy and security? Which services do platforms steer citizens toward, and why? What data are missing from smart cities, and who is affected by what smart city platforms do not collect? Are cities getting locked into specific software and companies that may not be sustainable or good for the long term? Expect these questions to be asked more frequently.
The early approaches to smart cities were too often exercises in “solutionism,” framing the complex political realities of the city as amenable to a quick technological fix. We see examples where AI becomes the solution before the complexities of the problem are understood. In a way, smart cities fell into the same pitfalls that traditional technology companies did while building the first generation of the Internet.
As the saying goes, insanity is doing the same thing repeatedly and expecting different results. The lessons of smart cities 1.0 point to a few observations that could drive different outcomes:
- In the same way that socio-technical interventions in a city require external study, changes or adaptations to the city's digital and social infrastructure need the same level of care;
- Education and dialogue will be vital to helping citizens understand and decide what types of privacy and ethical trade-offs they are willing to accept for which outcomes. The days of fine-print consent for smart cities are over;
- Citizen engagement with the digital infrastructure will be critical to its success;
- Collaboration and standardization need to occur to enable cities to learn from each other and leverage each other’s software and data.
“Slow tech” that first builds relationships with experienced stakeholders and then co-designs an approach with technology, where appropriate, should achieve better outcomes for smart cities in the long run.
In this blog post, I’ve tried to capture the most salient critiques of past experiences with smart cities. As we develop our Responsible Tech Program at Topio Networks, we will dive into many of these issues, looking for creative approaches to solving problems, creating constructive dialogues, and ultimately playing a role in creating the next generation of responsible technologies.
Edited by
Maurice Nagle