Our previous blog discussed "turning AI talk into action" to move beyond AI hype. This post continues that theme, but in terms of AI ethics, or sustainable AI, responsible AI, or trustworthy AI, depending on how your organization frames it.
I have been working on this side of AI as a researcher for eight years now. In that time, I have seen organizations discuss the same issues under many different labels. Whether you call it sustainable AI or AI ethics, the goals are often very similar in practice.
It is about going beyond the bare minimum set by laws and regulations to do something extra, to do "good". For some companies this is an active branding point; for others, these topics receive less emphasis.
A recurring challenge for companies is turning this talk into concrete actions.
Responsible AI is not just an issue for AI developers
Up until three years ago, discussion of responsible AI and ethical issues in AI focused mostly on companies developing AI.
Now it is an issue for a much wider audience, with growing emphasis on using AI responsibly. Most knowledge workers and companies now use AI, and there are many value choices to be made regardless of whether you are using, fine-tuning, or developing it.
For example, in light of recent global events, the organizational aspect of sustainability is also on the table for many: is relying on international cloud services sustainable in the current political climate? This is a user choice, not a developer one.
From talk to action
Many organizations have AI guidelines and principles. These may emphasize topics like fairness, which, simplifying somewhat, typically covers non-discrimination, avoiding biased AI outputs, and equality. Yet turning these principles into practice, so that they have real impact, requires action. Most importantly, it requires processes, practices, and methods.
Ask yourself:
- What are the values your company cares about? These can be communicated through principles common in the AI discussion, such as fairness, transparency, or explainability, or through something more uniquely relevant to you.
- What do these values or principles really mean for each project, system, or use case? For example, how does fairness manifest in the current project, or does it?
- How do you make sure potential issues are detected, and that something is done about them? Do you have processes and tools for this? (A minimal example of one such check follows this list.)
- Who is responsible for this in your organization? If it's everyone's responsibility to do "something" to turn these values or principles into reality, it is unlikely that much gets done. Clearly assigned tasks and responsibilities are a good starting point.
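To make the third question above concrete, here is a minimal sketch of what a "tool" for detecting one kind of issue could look like: a simple check of whether a model's positive prediction rate differs across groups, one common reading of fairness. The column names, the threshold, and the choice of metric are illustrative assumptions only; the right checks depend entirely on your system and context.

```python
# Minimal sketch: checking a model's predictions for group-level disparity.
# "group", "prediction", and the 10% threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "group",
                           pred_col: str = "prediction") -> float:
    """Largest difference in positive-prediction rates between groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    # Toy data: model decisions (1 = positive outcome) for two groups.
    data = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "prediction": [1,   1,   0,   1,   0,   0],
    })
    gap = demographic_parity_gap(data)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.10:  # illustrative threshold; acceptable gaps are context-dependent
        print("Gap exceeds threshold - flag for human review.")
```

A check like this only becomes a process once someone is assigned to run it, decide what counts as an acceptable gap, and act on the result, which is exactly where the last question about responsibilities comes in.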
Why would this matter to you?
Depending on your own values and your organization's values, what responsible, ethical, sustainable, or trustworthy AI means to you can vary greatly. How these values manifest in practice also depends on what kind of AI systems you are developing or using. This means analyzing your company's context, and the context in which you develop or use AI systems, from this viewpoint.
The bottom line is that it is always useful to take an informed stance. Otherwise, someone in your company will implement their own values anyway, with or without your input.
We all have values. Sometimes we just don’t speak about them.
