By Celia Vandestadt, Sadaf Afrashteh and Edwin Kurniawan
An ethical lens is needed for the responsible use of AI
“AI will be the best or worst thing ever for humanity.”
This quote from Elon Musk alludes to a tipping point for AI and what it holds for our collective future. It's no secret the Tesla founder holds grave concerns about where rapidly evolving AI advancements are headed. His tech nemesis, Mark Zuckerberg, on the other hand, sees us racing toward an increasingly digitised and automated world, with a vision to accelerate our adoption of AI through an all-encompassing collision of digital and physical worlds, coined the metaverse.
Like two sides of a coin, both these viewpoints must be equally weighed: opening the door to new opportunities that AI brings, while equally considering the ethical implications and potential risks associated with its deployment.
The good, the bad and the ugly in AI advancements
Advancements in AI to date have produced a mixed bag of the good, the bad and the ugly. DeepMind, Google's AI research laboratory, has both inspired human creativity (hello, move 37 from AlphaGo) and is poised to expedite drug discovery, with AlphaFold cracking the elusive problem of accurately determining protein structures.
The story of a Facebook moderator experiencing panic attacks after watching a video of a man die draws public attention to the dark side of AI, even as the benefits of content moderation in shielding people from severe harm remain in view.
Even with the best intentions, neglecting to interrogate system inputs and outputs up front can lead to unintended consequences. Overlooking the link between race and healthcare expenditure led to racial bias in the quality of care and hospital referrals offered to African American patients, compared with their equally sick Caucasian counterparts. These injustices may have existed before AI came into the picture; however, AI can amplify them at a far greater scale if implemented without ethical consideration.
Likewise, in an attempt to expedite its hiring process, Amazon inadvertently penalised female applicants for technical roles due to gender bias in historical data. Businesses beware: ignoring ethical and data governance considerations early on can leave amplified injustices and reputational risk bubbling under the surface.
Consumer trust in AI and reputational risk
Both reputation and upfront investment are at stake when businesses fail to get AI ethics right. A push to adopt AI across businesses and industries is increasingly the norm: by 2022, over 60% of companies will have adopted machine learning algorithms, big data analytics and related AI tools into their operations to help lift revenue and reduce costs. Where businesses have already adopted AI, up to 90% report having encountered ethical issues, while less than half have independently audited their AI systems for ethical implications. It is no wonder consumers remain sceptical of AI, with only a quarter willing to trust AI systems. Encouragingly, this sentiment jumps to 62% when the AI system is understood to be ethical. For businesses to adopt and leverage the full potential of AI, it is clear that long-term success is founded on algorithms that pass the pub test.
Consumer trust is not the only motivating force for ensuring AI is ethical; businesses also need to consider the financial and reputational implications of getting it wrong. Imagine pouring hours into gathering data, processing it, designing the algorithm, then testing and deploying it, only to realise there were biases in the data collection (intended or otherwise). Millions of dollars could go down the drain, not to mention the erosion of trust in any future AI initiatives. And if the failure becomes public, the reputational damage can be immense; take Cambridge Analytica or Robodebt as cases in point.
The evolution of AI systems has increased the scope of ethical considerations
At this point, the case for ethically designed and implemented AI is mounting, but what exactly is it? The scope of AI ethics has been changing over time due to the ever-evolving nature of AI technologies. A prominent issue that often comes to mind is the vulnerability of such systems to bias: concerns about the disproportionately negative consequences AI systems have on gender, racial and cultural minorities, as well as other groups marginalised by society, are long standing. However, AI ethics is no longer limited to bias and a lack of fairness. Today its scope is expanding to cover transparency and explainability, privacy and security, and accountability and contestability. The increasing use of monitoring technologies during COVID-19, for example, has raised serious questions about the consequences of AI surveillance for privacy and data subjects' rights.
Six key ethical considerations for developing an AI framework
Let's walk through the main ethical considerations that must be taken into account when using AI, with examples of each:

1. Fairness and bias
AI systems can discriminate against minority groups. Existing bias in the input dataset, non-representative sampling during data collection, and reliance on discriminatory variables can all be sources of unfair outcomes. Consider the volume of videos and photos circulating on social media today: manually reviewing them would be an impossible job for humans, while AI can process them with ease. At the same time, the variety of videos and photos being collected introduces biases that have already demonstrated the potential for harm: for example, facial recognition systems whose accuracy varies across gender and race, or the recommendation systems of social platforms that may discriminate when promoting content. The sketch below shows one simple way such a disparity can be measured.
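As a minimal, illustrative sketch (not a prescribed standard), one common starting point is to compare a model's positive-outcome rates across demographic groups, a check known as demographic parity. The column names, toy data and 0.8 rule of thumb below are assumptions for illustration only:

```python
# A minimal sketch of one common fairness check: comparing a model's
# positive-outcome rates across demographic groups ("demographic parity").
# Column names, toy data and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_report(df: pd.DataFrame,
                              group_col: str = "gender",
                              outcome_col: str = "approved") -> pd.Series:
    """Return the positive-outcome rate for each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest group rate to the highest; 1.0 means parity.
    A common (but context-dependent) rule of thumb flags ratios below 0.8."""
    return rates.min() / rates.max()

# Example with toy data:
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [0,    1,   0,   1,   1,   0,   1,   1],
})
rates = demographic_parity_report(df)
print(rates)
print(f"Disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
```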
2. Transparency and explainability
For consumers and the targets of its decisions, AI is often a black box. The reason an AI system is used should therefore be explained to the people it affects, and the information its algorithms use for decision-making should be made transparent. Why a decision is reached by an AI algorithm is increasingly important, both for the targets of those decisions and for those who act on the outputs. Examples such as a judge predicting the likely recidivism of a defendant, or a lender deciding not to offer credit, highlight how real people are affected at the pointy end of AI algorithms. A sketch of one explainability technique follows below.
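As an illustrative sketch, one widely used, model-agnostic explainability technique is permutation importance: shuffle each input feature and measure how much the model's performance degrades. The credit-scoring feature names and synthetic data here are hypothetical:

```python
# A minimal sketch of model explainability via permutation importance
# (scikit-learn's model-agnostic approach). The credit-scoring feature
# names and synthetic data are hypothetical, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]
X = rng.normal(size=(500, 4))
# Toy target: repayment outcome driven mostly by income and late_payments.
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade accuracy on held-out data?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>15}: {score:.3f}")
```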
3. Privacy and security
Personal information and each individual's private data must be protected to avoid breaches of privacy. Potential vulnerabilities to surveillance, cyberattacks and hacking also need to be addressed. Examples like the leakage of sensitive information at ProctorU, or the ransomware attack that compromised Melbourne Heart Group patients' personal information, demonstrate how critical data protection is. As companies grow and obtain larger datasets, it is imperative that systems are put in place to actively protect customer privacy and avoid data leakage. One simple privacy check is sketched below.
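As a minimal sketch, one classic pre-release privacy check is k-anonymity: verify that every combination of quasi-identifiers (attributes that could be linked to re-identify someone) appears at least k times in the dataset. The column names and value of k below are illustrative assumptions:

```python
# A minimal sketch of a k-anonymity check: before releasing or training on
# a dataset, verify that every combination of quasi-identifiers appears at
# least k times, so no individual can be singled out. Column names and the
# value of k are illustrative assumptions.
import pandas as pd

def violates_k_anonymity(df: pd.DataFrame,
                         quasi_identifiers: list[str],
                         k: int = 5) -> pd.DataFrame:
    """Return the quasi-identifier combinations with fewer than k records."""
    counts = df.groupby(quasi_identifiers).size().reset_index(name="count")
    return counts[counts["count"] < k]

df = pd.DataFrame({
    "postcode":  ["3000", "3000", "3000", "3121", "3121"],
    "age_band":  ["30-39", "30-39", "30-39", "60-69", "60-69"],
    "diagnosis": ["A", "B", "A", "C", "C"],
})
risky = violates_k_anonymity(df, ["postcode", "age_band"], k=3)
print(risky)  # the 3121 / 60-69 group has only 2 records: re-identification risk
```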
4. Autonomy and informed consent
Autonomy and the right to genuine informed consent should be respected throughout the development of an AI system. Obtaining the informed consent of the individuals involved in or affected by the system is crucial, particularly when personal and sensitive data is collected. Facebook targeting users with ads based on phone numbers they had provided for security purposes, without their consent, underscores how crucial data subjects' informed consent is.
5. Accountability and contestability
Organisations and individuals responsible for the design and implementation of algorithms should be identifiable and held to account for potentially unethical consequences. Having a clear process that lets negatively affected individuals challenge an unfavourable outcome is also of significant importance. Flow-on effects of content moderation on social media platforms illustrate the issues people encounter when they try to contest algorithmic decisions: a survey of social media users' experiences highlights the lack of contestability, with no instructions typically provided on how to initiate an appeal and, even where an appeal process exists, decision-makers often failing to respond to users' requests for review. A sketch of a decision record that supports accountability and appeals follows below.
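As an illustrative sketch (assuming a simple in-house setup; every field name here is hypothetical), accountability and contestability can be supported by logging each automated decision with enough context for a named owner to review it on appeal:

```python
# A minimal sketch of an auditable decision record: every automated decision
# is logged with enough context (model version, inputs, accountable team)
# for a human to review it on appeal. All field names are illustrative.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str   # which model produced this outcome
    owner_team: str      # accountable team, identifiable on request
    input_hash: str      # fingerprint of the inputs, for later review
    outcome: str
    timestamp: str
    appeal_url: str      # where the affected person can contest the decision

def log_decision(model_version, owner_team, inputs, outcome, appeal_url):
    record = DecisionRecord(
        model_version=model_version,
        owner_team=owner_team,
        input_hash=hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        outcome=outcome,
        timestamp=datetime.now(timezone.utc).isoformat(),
        appeal_url=appeal_url,
    )
    print(json.dumps(asdict(record)))  # in practice: write to an audit store
    return record

log_decision("credit-model-1.3.0", "lending-ml", {"income": 52000},
             "declined", "https://example.com/appeals")
```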
6. Non-maleficence
Organisations are required to avoid any potential harm caused by the AI systems they use, including violations of individuals' rights, physical or psychological harm to data subjects, or even environmental harm. Repeated examples of data breaches highlight the need for organisations to remain vigilant about the risks their AI systems pose.
Existing AI Regulation
With ethical AI currently a matter of voluntary adoption, it is no surprise that uptake of best practice is low. This raises the question of how companies can put ethical principles into practice. Spoiler alert: there is no one-size-fits-all solution to ethical AI implementation. The right approach is highly context-dependent and needs to be tailored across different industries. And although some industries, like banking and insurance, are already obliged to comply with regulations on data privacy and information security, there is currently no federally imposed, AI-specific regulation in Australia. To date, Australia has taken a self-regulation approach, leaving it to organisations to develop an ethics framework aligned with their specific requirements. The proposed voluntary framework encourages organisations to apply ethical principles by highlighting the potential benefits, including lifting customer trust, driving customer loyalty in AI-enabled services and ensuring AI outcomes have positive impacts.
It is important to note that meeting legal requirements doesn't necessarily make a regulated industry ethical; higher standards are needed than simple regulatory compliance. For companies committed to progressing beyond baseline regulation, one approach is to develop their own ethics framework, either by building internal teams specialised in AI ethics or by calling in external specialists.
Curious how far enterprises have actually come with ethics? Only roughly a third of enterprises surveyed have detailed knowledge of how and why the outputs of their AI systems are produced, and even in the best case, just over half have a dedicated leader responsible for AI ethics.
The challenge in AI ethics is going deeper than principles alone and committing to the development and use of ethical AI systems. This requires investment, time, and delving into your organisation's values and operational framework. The effort is worth the rewards: increased customer trust, solution longevity, and the peace of mind of knowing you've contributed to AI being the best thing for humanity.
At Eliiza…
We firmly believe that creating trust in AI solutions takes more than using ethical principles as window dressing. Continuous review of legal frameworks, institutional arrangements and rules is required to ensure adequate protections remain in place as technology evolves. Our AI ethics offerings include ethical reviews of the AI solutions we develop at Eliiza, and we can also act as a standalone, objective review and auditing board for an AI solution that is already implemented. Our experts usually start with an ethical impact assessment to carefully review the potential ethical risks of a specific use case and prioritise the considerations needed to avoid and minimise negative consequences. The ethical review is aligned with the model development plan and consists of audit checks on the input dataset, preprocessing and model performance.
The ethical report we offer provides a reasonable level of transparency on the AI solution, both for the company and its customers. We also believe that ethical considerations must be addressed continuously, not just during the development process. Hence, as part of our ML Ops process, we can help identify, track and report on key metrics to monitor performance against ethical benchmarks over time, along the lines of the sketch below.
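As an illustrative sketch only (not our actual tooling; the metric, threshold and column names are assumptions), ongoing monitoring can recompute a fairness metric on each new batch of decisions and raise an alert when it drifts past a benchmark:

```python
# A minimal sketch of ongoing ethical monitoring in an ML Ops loop:
# recompute a fairness metric on each batch of scored decisions and alert
# when it drifts past a benchmark. The metric, threshold and column names
# are illustrative assumptions.
import pandas as pd

FAIRNESS_BENCHMARK = 0.8  # assumed minimum acceptable disparate impact ratio

def disparate_impact(batch: pd.DataFrame,
                     group_col: str = "gender",
                     outcome_col: str = "approved") -> float:
    rates = batch.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

def monitor_batch(batch: pd.DataFrame, batch_id: str) -> None:
    ratio = disparate_impact(batch)
    status = "OK" if ratio >= FAIRNESS_BENCHMARK else "ALERT"
    # In practice this would feed a dashboard or paging system.
    print(f"[{status}] batch={batch_id} disparate_impact={ratio:.2f}")

# Example: weekly batches of decisions flowing through the monitor.
week1 = pd.DataFrame({"gender": ["F", "F", "M", "M"], "approved": [1, 1, 1, 1]})
week2 = pd.DataFrame({"gender": ["F", "F", "M", "M"], "approved": [0, 1, 1, 1]})
monitor_batch(week1, "2022-W01")
monitor_batch(week2, "2022-W02")  # triggers an ALERT at ratio 0.50
```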
We take a holistic approach to our reviews, bringing together quantitative factors, checking for potential bias in both the input data and the model output, and qualitative factors, developing a framework according to the defined ethical principles. This approach ensures a comprehensive ethics evaluation and forward-looking recommendations for more responsible AI.
Stay up to date in the community!
We love talking with the community. Subscribe to our community emails to hear about the latest brown bag webinars, events we are hosting, guides and explainers.