I used the fact that I was on a panel on AI at the Stronger Things conference with some brilliant people to get my thinking organised around AI and community power. The session was called: Responsible robots – how to embed community power in your digital, data and AI strategy. The whole conference was fantastic, with great energy and purpose – I highly recommend it!
I’ve tweaked this since the session, reflecting on some of the things that Rachel and Tom said about the centrality of transparency, as well as the need to be proportionate in all of this and to use AI when it’s needed rather than throwing it at everything. Here are the things I’ve been reflecting on.
One of the biggest opportunities – and risks – with AI is that we are on a knife edge between it being used to make things more human or deeply inhuman. The decisions we make over the next few years will set a path for this, and now is the time to think deeply about it. This feels like the absolute essence of the philosophical dilemma here, even before we get to the ethics of it. How do we make sure that we are actually pursuing the right goal – and then we can evaluate the ethics of it.
Are we trying to make people more effective workers, or are we trying to replace them – and if so, to what purpose? What are we trying to achieve beyond what the technocratic complex wants us to consume? Do we trust society to look after the people who are displaced by AI? Do we value the things that are uniquely human? There are lots of links to the session at the conference with Hilary Cottam and her new book ‘The work we need’ in all this.
To keep things simpler, let’s assume that we want to keep things human, but that we need to properly rethink how we get stuff done in order to do that.
I am increasingly drawing a distinction between ‘ambient AI’, such as Gemini, Copilot and Siri, which can, if implemented well, be used to quiet the noise created by all the other technology we have loaded ourselves with, and targeted AI, which is there to solve a particular problem. For the latter I think about the kind of paradigm shifts being made with respect to diagnosis in the health space, or the more worrying biases of automated decision making based on the data it is trained on. This is where the value of transparency is so huge – do we know what data is being used to train LLMs and where it comes from?
As an aside, there is something slightly horrifying in the fact that the only reason we need AI to summarise our inboxes is because we are overwhelmed by the amount of stuff in there – much of it driven by an over-reliance on emails over actual conversations. Beyond our inboxes, we are talking about a world of digitally generated content where robots burn through massive amounts of the planet’s resources creating, and then sorting through, AI-generated content in order to find you something you actually want to see for microseconds. What a waste.
More practically… I spend more time thinking about the ambient AI space as it feels the most immediate priority to get organisations ready for, but the bridge between the two is perhaps the potential for AI in research and analysis, which we are increasingly using to speed up policy development – or rather to feel as if we are – and that’s a point to reflect on.
At Adur and Worthing we are trying to make sure that we are giving people the chance to explore and experiment and to find the ways that this stuff actually helps. I don’t want to dive into use cases which risk finding ways to make old ways of working go faster without unlocking the curiosity and creativity of the people doing those jobs. I have a lot of faith that, with the right tools, people will reinvent their own jobs – but we have to trust them to do that. I think this is how you create a culture of hope rather than fear, and it’s how we will create more positive futures.
The same goes for our community. We are increasingly using AI to help us rapidly summarise and make sense of our participation work, but we are then taking that back to the group who generated the insights to sense-check and improve it. This is the really vital bit, and it stops the technology spinning out and creating a false reality – you have to keep checking, grounding and improving it. The WAVES project which Miriam spoke about on the panel is a really great contribution in this space, with a similar approach of interactive loops in deliberation, and I’m watching it with interest.
With more of a CFGS hat on, when I think about AI in governance and decision making there are huge opportunities. The first is the ability to get insights from the public and integrate them at scale. We can imagine interactive feedback loops of public engagement which make the ability to test and learn in partnership a realistic possibility – what a prospect. I also think there is a role for AI in helping to make expert content more accessible. The aphorism ‘I didn’t have time to make it shorter’ is never truer than when applied to committee reports, where we lack the capacity (but not the capability) to refine content into elegant and accessible text – AI can shift the dial on that and open up what we are doing to our stakeholders and, more importantly, to our decision makers.
All of this needs constant vigilance and attention to what we are doing – again, something we have little capacity for in the public sector. I wonder, therefore, what the potential is for community power in this space to properly help us shape and develop our use of AI and to hold us to account on an ambition to use it to become more human as a sector.
I’ve been thinking about whether there is a need for an AI commission that would allow us to think about these things in partnership with our communities, to experiment and explore together, and to start to develop the principles under which we put these new tools in place. I have a summer research project to read around the work already done in this space to see if there is a gap, or a case for having the conversation in place even if it’s not the first time it’s been done.
Finally, and because I can’t write or think about anything without the lens of LGR, we are at a moment when we are doing a massive redesign of the state. If we don’t have some visible principles about the role we want AI to have in that, we risk that inhuman future even more. While we are in a frantic period right now, as those of us in the eye of this storm scramble to put proposals together for the autumn, there will be time over the next couple of years to embrace the possibilities and give our staff and communities the knowledge and confidence they need to navigate all of this with us and build those responsible robots.