
Navigating Your Organisation’s AI Ethics Journey


Why is AI Ethics important?

The impact of AI on business is growing rapidly, especially in light of the swift advancement of generative AI technologies and their applications. According to the 2023 AI Readiness Report [1], which surveyed 1,600 executives and ML practitioners, 65% of respondents reported accelerating their AI strategies or being driven to develop one for the first time. As the impact of AI grows, so do the ethical considerations surrounding its development and deployment. While data-driven technologies have the potential to revolutionise industries and transform the lives of individuals, they also pose significant ethical challenges.

It’s easy to get caught up in the excitement and wonder of these powerful new tools. However, there seems to be little awareness that our window to steer these technologies in a direction where their benefits outweigh the risks is rapidly closing. AI ethics plays a crucial role not only in promoting the responsible development of AI solutions but also in building public trust and confidence in AI. Industry organisations often view AI ethics through the lens of compliance, but failure to meaningfully engage and act on AI ethics can result in low trust, which can impede the adoption and acceptance of data-driven technologies and ultimately limit business goals and their potential benefits to society.

Engaging in AI ethics can be tricky, and several challenges prevent us from implementing it effectively and meaningfully. This blog explores some of these challenges and discusses potential solutions for addressing them, so AI can be leveraged to its full potential for both business and society.

Clearing the Path to Ethical AI

Complexity of AI

By far the greatest barrier that stagnates wider discourse and engagement in AI ethics is the sheer complexity of AI itself, which makes it difficult to understand its ethical implications. This dilemma can be illustrated using AI ethics activist Deborah Raji’s analogy of engineering responsibility in the automobile industry. If we think about accountability in that industry, there is a clear allocation of responsibility and there are defined processes for fixing problems. For example, when a car is found to have faulty brakes, there is no question as to who is responsible. The manufacturer has a duty of care to ensure the safety of their vehicles, and in the interim between identifying and fixing the problem, cars with faulty brakes are recalled. At the same time, we can easily distinguish discussions about responsibility at the micro level, such as ensuring the overall safety of cars before they are allowed on the road, from responsibility at the macro level, such as the need to completely rethink how we make cars to make them more environmentally friendly. Working on both these areas simultaneously is essential to optimising the short- and long-term benefits of cars while limiting their negative impacts on our future environmental security.


Having similar simultaneous ethical discussions about AI is, by comparison, very difficult, for a number of reasons. For one, we can get stuck muddling through micro-level responsibility issues because in some systems we can’t even measure the “brakes”. For example, it may be difficult to identify subpopulations in a dataset, yet doing so is necessary to verify that a model is working appropriately across all groups. Additionally, the ambiguous accountability created by the lack of transparency when black-box models are used in AI-assisted decision making makes it difficult to assign responsibility.
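To make the “measuring the brakes” problem concrete, here is a minimal sketch of a disaggregated evaluation: checking a model’s performance per subgroup rather than in aggregate. The column names (“group”, “label”, “prediction”) are hypothetical placeholders, and the check is only possible when subgroup labels exist in the data at all, which is exactly the difficulty described above.

```python
# A minimal sketch of disaggregated evaluation: per-subgroup performance
# instead of a single aggregate score. Column names are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score

def per_group_accuracy(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    """Accuracy for each subgroup, so gaps between groups become visible."""
    return df.groupby(group_col).apply(
        lambda g: accuracy_score(g["label"], g["prediction"])
    )

# results = per_group_accuracy(predictions_df)
# A large spread between groups signals the "brakes" need attention.
```

A wide spread between the best- and worst-served groups is the kind of signal that aggregate accuracy hides, and that micro-level responsibility discussions depend on.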


If we take facial recognition technology as an example, there can be severe consequences when the system does not perform as intended or when the outputs of black-box models are taken as infallible truth. In fact, it’s already happening: people have been implicated in crimes they did not commit, people of colour have been wrongfully arrested, and with the increasing integration of facial recognition technology in the military, wrongful death is not out of the realm of possibility. It’s important to call out here that there are ethical consequences when facial recognition systems don’t work as intended, but the ethics are questionable even when they do. The amount of biometric data required to train these powerful models is astronomical, and even with privacy-preserving technology we may never be able to reduce the risks to privacy to an ideal or safe level. Asking people to risk their privacy requires a high level of justification and transparency, which may vary depending on the context and intended use but should be especially high in societies that prioritise democratic values, such as Australia and New Zealand.

AI at all costs

After overcoming the challenge of complexity, there are still obstacles preventing us from effectively engaging with AI ethics. It has been argued that in the neoliberal economic paradigm, AI technologies reinforce and are constituted by an economic logic based on profit maximisation [2], one that ignores and undervalues human rights. In recent decades, many seem to have narrowed the definition of innovation to mean advancements in science and technology alone, and to advance the narrative that all progress in these areas is inherently Good For Society. It is not. We make things good by designing them carefully for that purpose. When things go wrong, we often hear the excuse of “unintended consequences”. “Unintended” suggests consequences we simply couldn’t imagine. The term is often used by entrepreneurs and investors to distance themselves from responsibility for harmful consequences they, granted, did not intend but also (and most importantly) did not even attempt to consider. Careless deployment of technologies without any attempt to understand their impacts cannot simply be brushed away with the excuse of “unintended consequences”.

“You can’t imagine impact at scale” is a common pushback from entrepreneurs and investors reluctant to talk about the impact of their technologies, but as Aza Raskin, inventor of the “infinite scroll”, points out: “an inability to envision the impact at scale is actually a really good argument as to why one shouldn’t be able to deploy a technology at scale” [3].

Value of AI ethics is not understood

Despite the growing recognition of the importance of AI ethics, at least on a surface level, its value is still not well understood by stakeholders in organisations that are considering or have just started using AI. The recently published 2022 Responsible AI Index Report [4] found that a significant proportion of organisations in the Planning and Initiating phases of their responsible AI journey felt that the cost of responsible AI was only roughly equal to its benefits (Figure 1). These perceptions appear to be changing for the better: the majority of organisations with mature AI capabilities already understand the value of responsible AI. But with generative AI pushing adoption among organisations that have never used AI before, there is a risk that AI ethics will be overlooked. AI systems are not developed in a vacuum; they reflect the ethical values, biases and perspectives of the organisations and people that develop and deploy them. Understanding the ethical implications of AI can unlock confidence in its implementation and maximise its potential benefits. At the same time, integrating ethical considerations plays a crucial role in de-risking systems so they are safe for both customers and businesses.



Figure 1: Snapshot from the 2022 Responsible AI Index Report, Fifth Quadrant

Solutions

Increase the uptake of AI Ethics guidelines and tools

AI Ethics Frameworks, AI Ethics Guidelines and Ethical Impact Assessments are integral tools that can help organisations scrutinise AI systems before they go into production. These instruments help ensure AI systems are responsibly deployed and that their intended use does not put the dignity, safety and opportunity of individuals and groups at risk. In a previous blog post, we unpacked the essential ethical considerations required in a robust responsible AI framework. Eliiza has developed its own AI Ethics Framework, which integrates these elements along with other best-practice indicators and is designed to incorporate AI ethics measures throughout the entire model lifecycle through a risk-based approach.

Invest in AI governance and oversight

Whilst we wait for AI regulation and standards to eventually mandate the implementation of AI ethics, organisations can proactively take measures today by developing their own internal AI Ethics Frameworks, either on their own or with the help of a partner like Eliiza, to minimise the risk of developing and deploying harmful AI systems. Such frameworks and ethics risk management tools are powerful instruments that can help classify the risks of AI systems, reduce ambiguous accountability and suggest suitable mitigation measures to limit or prevent harm to customers and brand reputation, as the sketch below illustrates. In fact, there are a number of ways AI ethics can align with business goals, including improving efficiency and increasing trust.
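As a purely illustrative example of what risk classification inside such a framework might look like, here is a minimal sketch that tiers AI use cases based on a few intake questions. The criteria, tiers and scoring here are hypothetical, not a prescribed standard and not Eliiza’s actual framework.

```python
# An illustrative sketch of risk-tiering AI use cases via a simple
# questionnaire-style intake. Criteria and tiers are hypothetical.
from dataclasses import dataclass

@dataclass
class UseCase:
    affects_individuals: bool  # decisions directly impact people
    uses_sensitive_data: bool  # e.g. biometric, health or financial data
    automated_decision: bool   # no human review before action is taken

def risk_tier(uc: UseCase) -> str:
    """Map a use case to a review tier; higher tiers warrant more oversight."""
    score = sum([uc.affects_individuals, uc.uses_sensitive_data, uc.automated_decision])
    return {0: "low", 1: "medium", 2: "high", 3: "critical"}[score]

# A facial recognition deployment would score on all three criteria:
print(risk_tier(UseCase(True, True, True)))  # -> critical
```

In practice, each tier would map to proportionate oversight requirements; a “critical” tier might, for example, require a full ethical impact assessment and human review before deployment.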

Contribute to collaboration between industry, academia and government

Crafting and effectively implementing best-practice AI ethics tools is a complex task that will require seamless collaboration between different stakeholders. Industry can provide a high level of expertise and insight into the real-world application of AI, and can therefore play an important role in strengthening the feasibility and practicality of AI ethics tools developed by the research community in academia and policy tools developed by government. In doing so, industry can help ensure that AI regulation, which will ultimately mandate the implementation of AI ethics tools, is fit for purpose and not overly restrictive or inhibitive to innovation and progress.

We at Eliiza aim to advocate for and facilitate greater uptake of AI ethics tools across industry, and look forward to supporting efforts led by the National Artificial Intelligence Centre to establish best-practice industry guidance here in Australia through its newly established Responsible AI Network. The Network is a cross-ecosystem collaboration aimed at uplifting the practice of responsible AI across the commercial sector. Several initiatives at the global and national level are already underway to establish ethical and governance standards for AI, and we look forward to contributing to them.

Develop a clear understanding of the ethical issues related to AI for your work context

Having open conversations about AI ethics is critical to developing a clear understanding of how it relates to your work context. The ethical considerations related to AI will differ depending on the industry or field. By engaging in dialogue and exchanging ideas about these concerns, individuals, and especially decision makers, can gain a better understanding of how to navigate the ethical challenges posed by AI within their own organisations.

Here are a few questions to consider when evaluating the ethical considerations related to AI use within your industry or field:

  • How might the use of AI impact the privacy rights of our customers/clients/users?
  • Are there any potential ethical concerns related to the use of AI in our industry that we should be aware of and actively working to address?
  • How can we involve diverse voices and perspectives in our discussions around AI ethics to ensure that we are taking into account a wide range of viewpoints and concerns?
  • What measures can we implement to mitigate the potential risks associated with AI deployment in our specific industry or field?
  • How can we continuously evaluate and improve our AI systems to ensure that they are meeting ethical standards and aligning with our organisational values? (A simple monitoring sketch follows this list.)
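On that last question, continuous evaluation can start small. Below is a minimal sketch of a recurring post-deployment check that reuses the per-group evaluation idea from earlier; the threshold and the escalation hook are hypothetical and would need to be set for your own context.

```python
# A minimal sketch of an ongoing post-deployment fairness check.
# The tolerance and escalation hook below are hypothetical examples.
import pandas as pd

MAX_GROUP_GAP = 0.05  # hypothetical tolerance between best and worst group

def fairness_gap_exceeded(per_group_metrics: pd.Series) -> bool:
    """Flag when the gap between best- and worst-served groups grows too wide."""
    return (per_group_metrics.max() - per_group_metrics.min()) > MAX_GROUP_GAP

# Run on a schedule (e.g. weekly) over fresh production data:
# if fairness_gap_exceeded(per_group_accuracy(latest_predictions)):
#     escalate_to_review_board()  # hypothetical escalation hook
```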

AI is set to have a transformative impact on businesses and how they operate. Don’t be afraid to embed AI ethics into your development processes – it can help safeguard positive outcomes for both your organisation and your customers.

References

  1. Scale AI. (2023). AI Readiness Report 2023. Retrieved April 21, 2023, from https://scale.com/ai-readiness-report
  2. Gurumurthy, A., & Chami, N. (2021, July 9). The wicked problem of AI governance. SSRN. Retrieved April 17, 2023, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3872588
  3. Botsman, R. (2022, May 24). Tech leaders can do more to avoid unintended consequences. Wired. Retrieved April 17, 2023, from https://www.wired.com/story/technology-unintended-consequences/
  4. Fifth Quadrant. (2022). Responsible AI Index 2022. Retrieved April 21, 2023, from https://www.fifthquadrant.com.au/2022-responsible-ai-index
