Ethical technology?


Ethical compass

Forget about values – it's about ethics these days

Ethical thinking must be a significant shaper of technology in the 2020s. I am not in the habit of making predictions, so let's call this a hope, based on a growing sense that people are starting to really look at the consequences – intended and otherwise – of our technological choices.

So much of our lives are subject to the unconscious biases and technological evangelism of the people who create the virtual worlds and services we spend so much of our time in, and our current fascination with ethics is a desire to create a controlling framework around the tools and systems which now control our lives. As technologists we will really need to get our ethical compasses working and develop a framework for navigating a set of increasingly complex choices for ourselves, our teams and our organisations. Many people are already working on these questions, and in putting this together I have drawn as much from my undergrad Philosophy degree as from my more recent reading in a field which I think is critical to our wellbeing as a society going forward.

Much of ethical theory hinges on whether you believe in an absolute right or wrong (in which case you will like Kant and his categorical imperative, or Plato's allegory of the cave and its idea of perfect forms) or whether you see morals as relative (in which case check out Spinoza, and be warned that you are to some extent agreeing with Nietzsche, which is morally tricky in other ways). Without wishing to trivialise: if you want to avoid what can be a rather frustrating tour through undergraduate philosophy and debates about the nature of reality, you may just want to watch The Good Place and enjoy its exploration of what being good actually means. More recently (The Good Place seems to end its philosophical thinking at the Age of Enlightenment) the role of cultural norms has been much more widely acknowledged, with the understanding that some moral values are culturally relative even while they may be absolute within that cultural context. This is where the anthropologists and sociologists live, and while they (we!) will reference back to classical and Enlightenment philosophy, we will do so with a strong cultural and behavioural framing on that thinking. This is relevant as we start to think about the impact of organisational cultures on ethical decision making.

But going back to basics, and just so we all start in the same place: ethics is our framework for making moral judgements. 'Morality' and 'ethics' are terms which tend to be used fairly interchangeably, with 'values' being the more corporate word – but that's in the loosest sense, as values in that context tend to be behaviourally normative rather than tools or mindsets intended to support and guide decision making.

I wrote recently about the triumph of purpose over culture and the fact that organisational values are best pursued obliquely. One aspect of that is the futility of companies trying to overlay their own values on top of the beliefs of their employees. I'm not saying that organisations can't have a set of aligned values, but rather that culturally speaking they emerge rather than being something you can impose. They are also massively contextual and will only emerge in sympathy with the moral climate of the system around the organisation. Ethics may be easier to engineer into decision making within organisations, as it operates on explicit and stated rather than intrinsic frameworks. For charities – which tend to wear their feelings of right and wrong on their sleeves even when they don't articulate them – this is even more important, as the people we attract internally and externally will tend to care about this stuff.

This is reflected in a recent article by the excellent folks at Data & Society, who spend a lot of time thinking about and researching technology ethics. They have been looking at what we could call the corporate takeover of ethics that we are seeing as the large internet giants wake up to the impact of their work on wider society. The paper is well worth a read, but two things particularly struck me, summed up in these two quotes – the first on technological complicity:

Yet, even when members of the tech industry recognize their complicity in contributing to social problems, such as rising income inequality, they often respond by proposing technical solutions. It is therefore little wonder that ethical problems within the industry are often framed as challenges amenable to technological solutions. All too often, ethics is framed as a problem that can eventually be “solved” once and for all.

And then this on performative ethics:

Performing ethics, then, becomes a crucial component of doing ethics in tech. While an ethics management consultant admits that this can look “like ethics-washing companies” (akin to “greenwashing,” the act of superficially making environmentally unsustainable practices appear “green”), another engineer admits that in some cases “the appearance of effort matters more than results.” These performances come in many varieties, from the release of white papers and blog posts that proclaim companies’ searches for best practices, to corporate reorganization that promotes or creates a new ethics initiative.

These two points provide an important reminder to technologists looking to explore ethics: beware of using ethics as simply another way of proving you are right, and always avoid assuming that problems – especially complex ethical questions – can be engineered away.

We can't engineer away problems, but we can be better aware of them and therefore mitigate their impact if needed. Doteveryone have done some great work over the last few years highlighting the ethical consequences and challenges of technological change (their outgoing CEO Rachel Coldicutt received a well-deserved OBE for her leadership in this area), and the one I would most want to see widely adopted is their design for a 'consequence scanning' agile ceremony – a simple but effective way of making sure that teams think through the potential consequences of the work they are doing.
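To make that ceremony concrete, here is a minimal, purely illustrative sketch in Python of how a team might record the outputs of a consequence-scanning session. The structure – a feature, its intended and unintended consequences, and an act/influence/monitor decision for each – follows my reading of the ceremony; all of the names and the example feature are my own assumptions, not Doteveryone's published toolkit.

    from dataclasses import dataclass, field
    from enum import Enum

    class Response(Enum):
        """How the team decided to handle a consequence (a simplified
        take on the act / influence / monitor framing)."""
        ACT = "act"              # within our control – change the backlog
        INFLUENCE = "influence"  # partly in our control – engage others
        MONITOR = "monitor"      # outside our control – watch for impact

    @dataclass
    class Consequence:
        description: str
        intended: bool     # was this an intended outcome of the feature?
        positive: bool     # one to build on (True) or to mitigate (False)?
        response: Response

    @dataclass
    class ConsequenceScan:
        """Record of one consequence-scanning session for a feature."""
        feature: str
        consequences: list[Consequence] = field(default_factory=list)

        def actions(self) -> list[Consequence]:
            """Items the team committed to act on – these feed the backlog."""
            return [c for c in self.consequences if c.response is Response.ACT]

    # Example: scanning a hypothetical 'share my location' feature.
    scan = ConsequenceScan(feature="share my location with friends")
    scan.consequences.append(Consequence(
        description="location history could expose users to stalking",
        intended=False, positive=False, response=Response.ACT))
    scan.consequences.append(Consequence(
        description="friends can coordinate meet-ups more easily",
        intended=True, positive=True, response=Response.MONITOR))

    for item in scan.actions():
        print(f"Backlog: mitigate '{item.description}'")

The point is not the code itself but the discipline it encodes: every consequence gets named, classified, and assigned a response before the work ships.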

There are of course whole technology domains where ethical thinking is already established, if not mature; we need to understand the differences between technology ethics and data ethics, as well as understanding the boundaries where data ethics becomes AI ethics.

This section is a quick canter through some of the things in this intersecting space which I have enjoyed over the last year:

And finally – I have not really explored the area of ethics and sustainability, something which is clearly going to be central to our choices in the twenties. How do we make good, sustainable choices about how we build and buy technology?

But what choices can we make? We have reached a point where technology is inherently complex and interdependent, so that your ethical choices as a builder of code and as a consumer of services are now inextricably linked. We can't take our code apart and put it back together with a clear view of how it was constructed, because we are coding on top of black-box platforms, using tools which give us building blocks rather than raw materials.

With this complexity comes the need for collective decision making, because once your decisions have implications for others, shared progress becomes a question of whether to collaborate or compete. This collective decision making can happen via the law, the market, or the co-production of value – and we can see examples of all of these being tried.

Sometimes – in times of moral stress – societies and organisations are stripped down to their underlying myths and beliefs. It will be fascinating to see what moral panic does to the underlying culture of the internet; something which was founded on a combination of techno-anarchism (leaning towards free-market competition) and techno-utopianism (leaning towards collectivism) – born from its antecedents in ARPANET and the academic-hippie collectives that fed early communities like The WELL.

I would argue – and this leans heavily into democratic theory – that transparency, shared frameworks and standards, and openness in all its senses become our best tools for avoiding bad decisions. Yes, I am still one of those techno-utopians, and unashamedly so, as this is where my hope for this stuff comes from. This is the kind of territory the Tech for Good community are navigating, along with places like the Open Data Institute and of course the open source movement (I don't have a single reference for this – it's a blog post in its own right).

One of the reasons I personally tend towards collectivism is that I think it balances individual agency and responsibility with collective decision making. Relying on regulation and lawmakers runs the risk of abdicating personal responsibility, and relying on the market allows you to make decisions through a narrow prism that defines value as monetary gain.

This is different for charities, where financial wellbeing is a means to an end rather than the end itself, and presents a set of more complex ethical choices that force trade-offs between the 'good' which is your charitable mission and the 'good' which may be derived from specific technology choices. The simplest example here: when is it right to incur additional costs (i.e. an opportunity cost for the charity's mission) in order to make a better technology choice? This could be something meaty like whether or not to go open source, but I think it also applies to what could be called the persistent ethical failure across organisations to prioritise building accessible code.

In all of this – and in any ethical thinking – you see a set of close dependencies between personal, organisational and social choices, where you need to balance the goods and ills to each of those aspects of the decision. The organisation is caught in the middle, having to balance the wants and needs of individuals against the wider social implications of its actions. If we see technology as being ethically charged, then we need to go beyond organisational values and think, as the Data & Society article suggests, about organisational ethics, in order to create frameworks that look specifically at the trade-offs between actions that help a specific organisational mission and the wider social harm they might cause.

In all of this we need to think about how we are going to test and experiment. Complexity means we can't assume simple action and reaction – instead we need to think about how we ethically test and explore in order to evolve our ethical frameworks. We are also going to need to explore our own ethics and values and make sure that we are as clear as possible on our own inherent biases and beliefs, because this stuff is personal – not just professional.

All enquiry starts with good questions, so here are the ones I think it's good to start with:

  • Do you know your own ethical boundaries? What feels ok and not ok to you?
  • At what point do they conflict with your organisation's ethical boundaries? What decisions do you feel should be part of an ethical debate?
  • What ethical domains do you feel are relevant to your organisation – what are your ethical dilemmas?
  • Which of your organisation's strategic decisions fit within these boundaries? What do you want to draw as the scope of your ethical technology discussion?
  • What is needed to bring organisational ethics more in line with your own? Remember, this could be you changing – not just your organisation
  • Where do you think you can make a difference?

I’m going to play around with these questions a bit – feedback is very welcome.
