Putting the lid on Pandora’s Box: how community power shapes AI

Communities power resilience, resistance, activism and change

AI regulation is not just a technical issue; it’s a social one. Any future Labour government must put the public interest first and move on from the corporate capture that is baked into the Conservatives’ approach. This starts by empowering communities.

This blog post discusses the role of small-scale and discrete communities in monitoring, understanding and altering the social impacts of AI and offers clear policy recommendations for improving democratic engagement. Taking a people-first, emergent approach is essential to avoiding the “Pandora’s Box” problem of technology regulation while also strengthening social infrastructure and enabling more equitable outcomes.

The social impacts of AI are a social problem

When it comes to technology, money makes momentum, but people experience the outcomes.

The accelerated roll-out of AI over the last few years has been driven by huge financial incentives that benefit a few corporate actors. In the UK, this is further expedited by the fact that our economy is sluggish, productivity is low, and everyone is looking for a magic bullet. If a piece of technology claims to bring down costs and increase efficiencies while also appearing to be innovative and cutting-edge, then the business case for adoption appears to write itself. This is a trickle-down view of progress that prioritises the profit margins of a small number of businesses over the maintenance of the existing social contract.

With the establishment of the AI Safety Institute, UK AI governance has taken a “technically grounded” approach that attempts to understand societal harms through “usage data and incident reporting”. This is an in-app, technocentric view of the world that prioritises an algorithmically coded view of reality and assumes labour-market outcomes and democratic engagement can be monitored through technical quality assurance. 

People, not code, must be at the heart of democratic technology governance, and community power is an incredibly important corrective to this narrow perspective. The Labour Party has so far been quiet on its AI strategy, but any future progressive government must engage beyond business and technologists in meaningful ways to ensure AI does not only lead to corporate benefit.

Community power corrects corporate capture

Everyone in the UK has a democratic right and a responsibility to ensure our technologies will help build and shape more equitable societies. Community power is more than activism, and it maps onto the OECD’s depiction of AI Systems in multiple ways. 

Community power is responsive, diverse and distributed: it can act in many ways, to many ends, all at once, without having to wait for the enactment of complex legislation or the agreement of technical processes.

Community power makes us more than consumers and data points on risk registers. It enables us to organise and act together to safeguard our rights, to build resilience and resistance; to gather evidence and create more effective advocacy networks. 

Community power enables us to deliver collective action and to develop more equitable models that benefit everyone, not just a few billionaires.

Communities and micro-publics as sensors of everyday harms

AI governance is often understood to be tripartite, drawing on (1) the Market, (2) the State, and (3) Academia. It also tends towards the technological and technocratic, as if AI systems might be self-monitoring.

In the UK, wider engagement is often focussed on understanding the preferences and sentiments of the general public through surveys and public deliberation exercises with nationally representative sample groups. This highly generalised approach to governance draws on pre-digital expectations that the same or similar consequences will be experienced by all or most members of society.

In reality, the impacts of AI are highly contextual and unevenly distributed. This difference of experience is exacerbated by other inequalities and disparities, such as income, health, race, age and gender, which leads to the creation of micro-publics - smaller groups that experience specific harms that may not be visible or legible to those operating in other contexts, and which do not emerge at significant enough scale to warrant legislative or regulatory intervention.

Communities and civil society organisations are often the first port of call for those who have experienced day-to-day harms, but there are few - if any - mechanisms to monitor or understand the impacts of these events until they have become either so severe as to be individually newsworthy or so widespread that they are difficult to reverse. This means that regulatory and legislative measures tend to reflect the concerns of those with high social status and access to traditional methods of redress; meanwhile, communities are working together at smaller scales to access justice or create alternative pathways.

The Pandora’s Box effect

This gap between the intended impact of a product or service and its real-world implications can be difficult to imagine because we rarely get to see the consequences of digital tools. This is because they tend to emerge “slowly, then all at once” in isolated contexts, first impacting discrete people and communities at different times and then appearing — suddenly — to be everywhere.

This generalised impact can best be understood by zooming in and examining how even everyday AI applications, the kind that might save some people a few minutes, can have devastating consequences for others.

For instance, image-generation tools that turn text prompts into visual outputs can seem like a very convenient way of creating a first pass at an illustration or bringing something to life without commissioning any original graphics. However, this convenience can also have multiple negative consequences for others.

Let’s start by understanding the particular harms that will be experienced by specific groups of people. In the case of image generators, the work of many photographers and artists may have been stolen to provide training data; this is, firstly, copyright theft and, secondly, likely to undermine people’s future income security as their distinctive style becomes “learnt” by an application. Another specific group is content moderators, who may be exposed to large amounts of traumatising content, day in, day out.

Zooming out slightly to more general groups, the physical likenesses of real people may be used to create AI-generated porn or other deepfakes without their knowledge or consent. The people most likely to be at risk of this are women and children. Meanwhile, racial and gender stereotyping is likely to become standardised in image outputs, producing demeaning and inaccurate images and driving what Abeba Birhane has called “hate scaling”. White men, who are over-represented in the technology industry, are the least likely group to experience these harms, which can lead to the importance and relevance of such outcomes being deeply under-rated.

And zooming out again, the longer-term impacts of relying on image generators are likely to include the normalisation of a vaguely samey, unremarkable aesthetic that makes life less interesting for everyone.

This path from the specific to the generalised is quite common in the diffusion of a new technology, and it’s clear that intervening at the first indicator of a specific harm would make it more likely that an outcome might be limited or redirected. This doesn’t always happen, and indeed the first people affected by an innovation that might be attracting plaudits from VCs and tech journalists are often ignored or dismissed.

These emergent harms also tend to be invisible to top-down, risk-based regulatory approaches, which are driven by broad public impacts and attitudes, of the kinds that are reflected in surveys. A general-public focus means that harm to content moderators, for instance, can be overlooked by regulators and legislators, or treated as a specific issue, if a technology has broad public approval and is otherwise subject to significant levels of hype.

However, overlooking these emergent harms is how the Pandora’s Box effects of technologies become entrenched: as we have seen with social media, once a set of harms is normalised it can quickly appear to be inevitable — but paying attention to early indicators, and giving credibility to minoritised groups and communities, will lead to better outcomes.

Communities as social infrastructure

Illustration by Elly Jahnz from communitytech.network: a group scene with people speaking and working together.

While occasional announcements of extraordinary new discoveries or sparkly new chatbots may momentarily lift our spirits, the reality of automation in modern Britain is significantly less exciting. The Pandora’s Box of automation is already having deep and widespread impacts on modern social infrastructure and quality of life in the UK.

Public services are under-resourced, increasing numbers of local authorities are facing bankruptcy, and the cost-of-living crisis continues to deepen poverty. In this context, AI is being sold by consultancy firms and policymakers as a miracle that will supercharge innovation and lower the cost of service delivery. But in day-to-day terms, this isn't quite as exciting as the Prime Minister and Elon Musk might have us believe.

Rather than delivering space-age experiences to the masses, some aspects of everyday automation end up making our lives a little more difficult and full of friction. This is bargain-basement automation: the kind that is not designed as a service but put in place instead of a service.

For instance, supermarket loyalty cardholders get personalised prices and cheaper deals for self-checkout or self-scan; a visit to the GP might involve repeated, unsuccessful attempts to use a self-service blood pressure monitor; popping into a high street retailer can mean we're expected to remove our own security tags while staring into a camera; ordering a meal in a fast-casual restaurant involves scanning a QR code if you’re lucky and waiting for a touchscreen ordering station to become available if you’re not. And let’s not forget car parking, which requires a smartphone, a suite of apps and a debit card. Students are monitored by webcams, service-economy jobs are turning into robot-supervision gigs, and bots are rewriting the news and producing deepfakes of political speeches.

Photo by Jo Walsh, submitted to the Everyday Automation Observatory

Human-centred automation should offer assistance and smooth over the edges of life, but too often it actually increases our administrative and mental load and excludes those of us who might want or need to talk to a person or benefit from some physical assistance. It also makes many more jobs more boring and introduces conflict to interactions that only occur because we’ve pressed the wrong button or the machine has gone wrong – all the time weaving tools created by a handful of big tech companies more deeply into the fabric of our daily lives. 

Individually, each of these changes is small, but taken in aggregate they make life a little harsher and meaner, snipping the bonds of social infrastructure and replacing them with robotic error messages. These are not the kinds of changes that are likely to be picked up by the AI Safety Institute’s “technically grounded” approach, and they’ll be mostly unseen by more affluent people who can afford to pay a premium for human service. But this turn towards everyday automation is shaping and blunting the day-to-day experience of many of us in the UK.

The benefits of a plural, systemic approach 

It is, to put it mildly, annoying that a relatively small number of technology companies have such a pervasive impact on daily life — and even more annoying that there is not a silver bullet for ensuring better outcomes. The point of this post is that, in addition to classic regulatory and legislative measures, many community interventions are both possible and required.

As consumers, there are some things we can reshape with our spending patterns; as service users, we can bend some communications technologies to our will; as voters, we can engage with democratic and regulatory processes, but none of these routes are particularly fast or efficient on their own. We can, however, intervene in multiple, systemic ways and shape outcomes through a range of different activities and behaviours.

For instance, participatory decision making is useful for understanding public sentiment about big issues, but it is not a sustainable technique for ongoing decision making and it does not surface emergent harms. Collective intelligence is useful for sensing and shaping systems but its increasing dependence on AI risks reinforcing existing power structures and inequalities rather than shifting them. Meanwhile, algorithmic auditing and model assessments are only able to predict outcomes that are knowable to the system; they cannot observe unanticipated or unanticipatable emerging outcomes. 

A systemic, plural and community-powered approach combines sensing with action; horizon-scanning with organising; data gathering with redress. It allows us to act as whole people and collectives rather than boxing off our agency as consumers or citizens, and it enables campaigning and resistance to take place in a constantly changing real-world environment. And vitally, a community-driven approach also enables us to create alternatives. Good governance is not just about making big tech or government services better; it requires the conditions for other models and approaches to thrive.   

Policy recommendations

For policymakers and any future Labour government, this means a shift to more inclusive evidence gathering and policymaking. The following recommendations set out a path to achieving that.

  1. Restore AI policy consultation processes with civil society to pre-2019 levels

    Since the 2019 election, government consultation on AI policy with UK civil society organisations has been much reduced. Among other policy failures, this has led to a prioritisation of risk over rights in the AI White Paper and minimal levels of UK civil society engagement at and during the AI Safety Summit. This also means the AI Safety Institute has been established to prioritise technical auditing over social reality. On its own, increased consultation is insufficient, but it is essential for the restoration of democratic process.

  2. Fund and support community-led innovation

    Innovation doesn’t just happen at start-ups and in labs. Community organisations are contributing to solving some of the most pressing problems of our time, but they do so largely without infrastructure support or appropriate levels of funding. We Can Make in Bristol and Civic Square in Birmingham are leading community-led housing and retrofit; Carbon Coop is helping households reduce their carbon emissions; Equal Care Coop is putting the people who matter most at the heart of social care. Resilient community innovators are essential for maintaining a space between the market and the state and powering people-led technologies.

  3. Establish a Civil Society AI Observatory to monitor and mitigate emerging harms

    The role of communities in observing and responding to emerging harms is a vital part of any democratic society’s response to rapid technological change. A progressive government should actively engage with organisations including unions, advocacy groups and campaigners in a structured, responsive way, and support horizon scanning and qualitative research that shows, in real time, how advanced technologies are affecting people’s lives and changing society.

The seeds of everything here already exist in the UK; what is needed from a government is a commitment to real democratic engagement and financial support.

In my previous post, I wrote about the importance of robust funding; in the next instalment, we’ll discuss the particular role of community data in powering resistance, activism and change.

Background research for this blog post was conducted by the Promising Trouble research team. Thumbnail illustration by Elly Jahnz. With thanks to Deb Chachra for the “Slowly, then all at once” framing.
