The Human Technology Institute
Today’s panel discussion is an extra special one. We welcome three guests from the Human Technology Institute, or HTI, to talk about how AI can be used and directed for building the kind of future we actually want. Sally Cripps, Edward Santow, and Nicholas Davis offer a host of insightful opinions and reflections from their work in the field, commenting on the most recent developments we are seeing around OpenAI and ChatGPT, prioritising ethics and governance, and how Australia might approach playing an important part in the emerging global landscape. We also spend some time on the responsibilities of third parties and how data science models can most effectively be used for real-world improvements. So if you would like some great perspectives on where innovation is most needed, how the country might proceed in the AI space, and how it all starts with policy, be sure to listen in!
Key points from this episode
- An introduction to our guests, their backgrounds, and current roles in the cohort.
- A few reflections on the recent developments in and statements about the AI field.
- Sally talks about potential solutions and approaches to current objections about AI.
- The role of governance in AI progress and safety in Australia.
- Approaches to making ethics and governance more tangible across the industry.
- Advice for organisations wanting more information about implementing ethics and governance policies.
- AI and regulation; exploring the contrasting opinions on using AI in the actual process.
- Thoughtful AI in Australia; an argument for the rigour that the country should prioritise.
- The kinds of questions we need to be asking in regard to consumer rights and policy.
- Data science models and the profound influence that these can now have on real-life scenarios.
- Big businesses’ obligations when it comes to duty of care and negotiating contracts with partners.
- Closing thoughts on the responsibility of third parties.
[00:00:06] ANNOUNCER: Welcome to the AI Australia Podcast, hosted by Natalie Rouse and Dr Kobi Leins, and brought to you by Eliiza.
We report what’s going on in artificial intelligence in Australia today, with a particular focus on practicing responsible AI, diversity and inclusion, and AI at scale. What should we be aware of? And how could this affect society in Australia? Natalie is a champion of all things data, committed to spreading better understanding of the impacts of AI within our communities. Dr Kobi Leins is a reformed lawyer who has always run alongside and at times, with science and technology. We bring in leaders from across industry to get a sneak peek into what they’re working on, and how they are handling any ethical considerations.
[00:01:00] NR: Hello, and welcome to the AI Australia Podcast. I’m your co-host, Natalie Rouse from Eliiza.
[00:01:08] KL: Oh, and I’m the other co-host. Sorry. We’re still just getting used to working together. So welcome to the podcast. We have three fabulous guests with us today, from an absolutely amazing initiative that has just been started in Australia. We’re starting here because we want to feature their work as part of AI in Australia. I’d say it’s pretty ground-breaking and cutting-edge work. We have with us Sally Cripps, Ed Santow and Nick Davis. Can you give us a little introduction about yourselves? I haven’t used titles, quite deliberately, because I’d like you to tell us about yourself from the angle that you want to share. So maybe we’ll start with Sally, the newest to the cohort, and we’d just love to hear a little bit about you.
[00:01:41] SC: Okay. So, yeah. My name is Sally Cripps and I am the newest to the cohort. I’ve just taken the position of Director of Technology at the Human Technology Institute, or HTI. My background is in mathematics and statistics, so I’m a professor of mathematics and statistics as well at UTS. I do a lot of work in AI, particularly around the fundamentals of AI and how we’re using mathematics to make it more rigorous. We’ll talk later on about the sorts of models that I get into, but that’s me in a nutshell.
[00:02:11] KL: Fantastic. Just one question before we go on. How do you describe AI to the layperson, when someone says to you, “What do you do?” at a dinner party? How do you kind of talk about your work?
[00:02:22] SC: Well, I start out by saying that AI is just an algorithm, and go on to define what algorithms are, how we use them, and how I think AI is a bit of a misnomer. I don’t think it is artificial intelligence at all. It’s more augmented intelligence, if you like: how we’re actually using algorithms and data to make better decisions. Once we figure out how to do something, we call it an algorithm; before we do, we call it AI.
[00:02:51] KL: Yeah, fantastic. I’m sort of making assumptions that you have time to go to dinner parties, given your new role. But I thought that was a fairly safe assumption, given your colleagues. I’ll go in reverse order of who joined.
[00:03:00] SC: They haven’t invited me yet to any dinner parties.
[00:03:03] KL: Oh, this sounds very remiss. I’m glad we’ve highlighted that on this podcast. Nick, can you tell us a little bit about yourself? You’ve come back to Australia from a long stint overseas doing some amazing work. We’d love to hear it. Just a little bit about your background, and don’t be shy. Please, share.
[00:03:17] ND: Thanks, Kobi. It’s great to be here with you and Nat and my colleagues. Yeah, so like you, Kobi, I’m a reformed lawyer. I started my life working in employment law and international law, but I quickly moved over into the international scene. Again, like you, I spent time in Geneva. I spent a long time with the World Economic Forum, which is a really interesting place for convening both experts and decision-making on lots of different topics. My entry into the AI space actually came through a focus on how we conceptualise and think about the role that technology plays in our lives. Not just AI, but AI as a kind of general-purpose technology driving so many of the underlying systems that we see, from drug discovery through to the optimisation that surrounds us in our daily lives, in traffic lights, energy management, et cetera.
Then, more and more, directly through recent arrivals like ChatGPT. So my role at the Human Technology Institute, after that long stint working on innovation and the role that technology plays with the World Economic Forum and a bunch of large tech organisations, is now really focused on how we develop the skills, tools and policy here in Australia to make sure that everything we do in this area is fair, fit for purpose, accurate and accountable. That’s really the tagline of the Human Technology Institute at UTS. The three of us all bring different skills. I guess my specialty here is that organisational view of what happens inside large organisations as they try and come to [inaudible 00:04:48] – as they try to grapple with the advent and use of these technologies.
[00:04:54] KL: A question for you, just out of personal curiosity. I’ve heard that the speed and scale framing that Klaus Schwab uses, which has now become daily lingo, came from you. Is that correct or is that a misplaced rumour?
[00:05:03] ND: Look, there were three of us at the World Economic Forum who worked very closely with Professor Schwab on his 2016 book, The Fourth Industrial Revolution. And yeah, there were a lot of ideas in there that I think were pulled from lots of great thinkers. As ever, as everyone who has written a book, or is thinking of writing one, knows, it’s actually getting that key wording that penetrates the public consciousness that is important. So I’m very happy that Professor Schwab has spread that message far and wide. Indeed, the term “The Fourth Industrial Revolution” is, I think, an interesting metaphor for the time that we’re in, and a useful one in many circumstances.
[00:05:38] KL: Yeah, humble as always. Nick, I’m just going to attribute that to you going forward, absent an overt denial. Ed, you hardly need an introduction in the Australian landscape, even in AI. You’ve become a face and a voice that is so well known. I remember starting out with you a few years ago, where we both sort of went, “What are we doing?” Can you tell us a little bit about what you’ve been doing, why you’ve been doing it, and why this centre?
[00:05:58] ES: Yeah. It’s so great to join you all. I’m yet another lawyer on this podcast. So I think we’re only two lawyers short of being able to change a light bulb. But my background is as a human rights lawyer. Going back more than 10 years now, I could see that a whole bunch of my clients, some of the most vulnerable, disadvantaged people in Australia, were increasingly having technology used on them (I’m choosing my language quite carefully here), and some of that was algorithmic decision-making. Some of it wasn’t. But there were certainly a lot of automated processes being used more and more on really vulnerable people. I guess a light was shone on that through the robodebt catastrophe, but this was happening in areas like policing and a range of other areas as well. It really got me thinking, “Gosh, there are some real risks with this,” because our clients were getting negative outcomes a lot of the time.
But then, simultaneously, we were seeing some really positive stuff as well. I’ve done a lot of work with people with disability throughout my career. I was seeing how people who are blind or have low vision were starting to use AI-enabled technology on their smartphones to almost literally see the world around them. So there were these two currents flowing in different directions: some showing how AI could really improve people’s lives, some showing the opposite. So I became convinced that this was something I really needed to go deeper on. How do we make sure that the technology that increasingly mediates the way we experience all aspects of life gives people, again, to choose Nick’s words here, the kind of future that we actually want, and not the kind of dystopia that we’re all hearing about?
[00:07:48] KL: Yeah. I think that’s a really good launching point. I mean, Nick, you touched on ChatGPT too. It’s been a really busy summer. Our initial podcast was before Christmas, and we’re now a couple of months in. We’ve seen DALL-E going crazy, and Nick Cave coming out and saying you can’t create art using automated systems. We’ve got a report that’s now come out about that automated Tesla test that was actually not entirely automated, including that the driver of that test is the one who passed away in the Tesla accident. There’s been a lot going on, raising queries and raising questions. Gary Marcus has been increasingly vocal globally about our expectations and understanding of these technologies. What are your thoughts about some of the more recent developments, before we jump into more thematic questions? Has anything been on your mind that you’ve thought is changing what you’re doing, or affecting how you’re engaging, or is it all happening in support of your work for better governance and better oversight?
[00:08:43] ND: Yeah, I think it’s a really interesting time right now. I’ve said a few times that I think our generation, anyone alive between the ages of about 10 and 80 right now who is interested in these topics, is incredibly privileged, because this next period, the next 10 to 20 years, is the time in which we will set down the guardrails and build the infrastructure, the computing, the social, the political infrastructure, around automated decision-making, algorithmic systems, and the current and future forms of what we call AI in that broad sense. We know from infrastructure in the past that when you build those systems, they’re durable. They’re durable because a lot of it is actually physical as well: compute designs, fabs, et cetera. In doing this, it’s really a duty and a privilege to be able to start thinking about what those systems should look like during this age, this decade of AI governance.
I think it’s really interesting that ChatGPT and a bunch of other generative AI systems are the ones driving much more public discussion around this than was the case a couple of years ago. The fact that you can just log into chat.openai.com and start a conversation with a large language model has meant that if you ask people in a room, “Who’s played with it?”, half the people will put their hands up in most rooms. That allows you to say, “Well, how does that make you feel? What kind of errors did you see? Would you want your child writing an essay with this? Do you use it to write essays?” Et cetera. So those kinds of conversations are really interesting.
The problem that I have with a lot of this is what’s called the Collingridge dilemma: the idea that by the time policymakers, and those of us who want to gather careful data on social phenomena like this in order to put rules and guidelines around them, can actually get the data and the public consciousness in hand, these technologies have become so ubiquitous that they’re incredibly difficult to change. So we really need people in the policy world, but also people like Sally who build the actual tools and use them. We need to be really thoughtful right now, rather than waiting for someone else to solve these problems of misinformation, and risk, and human harm and psychological harm that we know are out there in these systems already.
[00:10:56] KL: Yeah, there’s a wonderful article out this week about TikTok being a weapon of misinformation, which I thought was fantastic and which we’ll link to in this podcast as well. To your point, Nick, it’s about how to approach these tools that, as you might or might not have coined but haven’t denied, move at speed and scale. I think that’s a lovely segue to Sally, as the technologist. It’s already been highlighted that we have far too many lawyers, or reformed lawyers, on this call. What do you see as the solutions from a technological perspective, before we jump into the broader governance questions or where Australia can differentiate itself?
[00:11:24] SC: Yeah, sure. So just to pick up the ChatGPT argument, I want to say upfront that I think ChatGPT is overall a positive thing. I totally understand Nick Cave’s position. He’s somebody who creates things, and he gets sent songs which are sort of compilations of his other songs. In fact, when I played around with ChatGPT, I thought, “Oh, I’ll give it something really hard.”
So I asked it to construct what’s called a Markov chain Monte Carlo algorithm, an algorithm we use in machine learning to discover things; we call it MCMC. It came back, and it actually came back with the algorithm. Then I said, “Oh, well, let’s try something even harder. Let’s take an MCMC scheme that changes dimensions on the way through.” Lo and behold, it came back, and there was a reversible jump MCMC scheme in front of me. I was thinking, “Wow.” Then I thought about it, and I thought, “But it’s just telling me what has already been done.” I think that’s the spirit in which Nick Cave should take it.
Nick Cave actually came up with the songs. People like me come up with the algorithms. What ChatGPT is doing is taking knowledge that already exists and repackaging it. It is so far from creative thought; it is just a reframing and a repackaging of knowledge. It didn’t come up with the reversible jump MCMC algorithm. It didn’t say, “How on earth am I going to think about exploring multiple different dimensional spaces?” That was actually a really smart guy in the UK. Actually, there are a lot of smart people thinking about those sorts of things.
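[For listeners curious about what an MCMC algorithm actually looks like, here is a minimal sketch of random-walk Metropolis-Hastings, the simplest MCMC scheme, not the reversible jump variant Sally mentions; the target density, step size and burn-in below are illustrative choices, not anything from the conversation.]

```python
import math
import random

def metropolis_hastings(log_density, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: draw samples from a density known
    only up to a normalising constant, supplied via its log."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)              # symmetric proposal
        log_accept = log_density(proposal) - log_density(x)
        if log_accept >= 0 or rng.random() < math.exp(log_accept):
            x = proposal                                 # accept the move
        samples.append(x)                                # else repeat current state
    return samples

# Target: a standard normal, via its unnormalised log density -x^2 / 2.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=5.0, n_samples=50000)
burned = samples[10000:]                                 # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
```

[The sampler only ever sees log-density differences, which is why MCMC works when the normalising constant is intractable. Reversible jump MCMC generalises the accept/reject step to proposals that change the dimension of the parameter space.]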
I think there’s a bit of hype about how terrible this thing may be, whereas we perhaps overlook the positives. I can see that from a teacher’s point of view it’d be terrible; you’d never know what the student did and what the algorithm did. But if the student is learning at the end of the day, maybe that’s what’s important. But to answer your question about where the maths comes in, the sort of technical stuff I do, and how we can play a role in making the AI space a safer, better place: I’ve had lots of conversations with Ed and Nick about this. Basically, a lot of what we see now around AI regulation comes, as Nick alluded to, after it’s happened. It’s like the icing on the cake, but the baking of the cake has to be done well if you’re going to have a good cake.
Actually, the foundation of any AI system is the algorithms behind it. That means there’s a computer in the background going through a series of operations, and behind that is mathematics. What is a little alarming to me is that computer science is the only area of engineering in which we’re graduating students without requiring them to take mathematics. Imagine you were asking people to build a rocket to go to the moon, and they did a lot of data-driven work but had never learned any physics. The danger, from my point of view, is that lack of really fundamental rigour. Not everybody has to have it, but you need that rigour in the process of developing the AI, and it should be part of every AI system, so that the AI is explainable, you can attribute causation, and you understand what it can do and what it can’t do.
I always tell people when they come up with new algorithms and say, “Oh, it’s wonderful.” I will always say, “Show me when it fails, because it must fail, and I want to understand the conditions under which it fails. Then we can start to think about whether or not it’s going to be any good. Because if you can’t show me that it fails, and explain why, then it’s rubbish to begin with.” So there we go.
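[Sally’s “show me when it fails” test can be made concrete. As a hypothetical illustration, not from the conversation, here is a model that looks excellent in-sample yet fails predictably, and explicably, the moment it has to extrapolate.]

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# sin(x) is nearly linear close to zero, so a line fits superbly in-sample...
train_x = [i / 100 for i in range(-20, 21)]            # x in [-0.2, 0.2]
train_y = [math.sin(x) for x in train_x]
a, b = fit_line(train_x, train_y)

in_sample_err = max(abs(a + b * x - math.sin(x)) for x in train_x)
# ...but we can state exactly when it breaks: outside the training range.
out_sample_err = abs(a + b * 3.0 - math.sin(3.0))      # extrapolate to x = 3
```

[The point is not that the line is a bad model, but that we know the precise condition under which it fails: sin(x) is only approximately linear near zero, so any extrapolation beyond the training range is untrustworthy.]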
[00:15:16] NR: I really like that. That’s a really good way of looking at things, because everything fails at some point, so that seems like a really healthy approach. So what do you think, then, Sally, about the current approach to AI? We’ve invested a lot down one particular road. Do you think we may be coming to the end of that road, or are there other approaches that we should be exploring in addition to it?
[00:15:42] SC: Oh, AI is huge, so I don’t think we’re necessarily going down one road. I do think, perhaps – well, this shows my biases – perhaps we’re not paying enough attention to the initial steps down that road, which is the rigour behind it. In a way, that’s because people want to be innovative, and I get that. They get all excited: look at what I can do with this new algorithm, and whatnot. Then suddenly, before you know it, it’s gone viral and somebody’s taken it up, and that’s fine. But then there’s a role for governments, and for people like HTI and other institutions, to actually apply a bit of a handbrake, because of the impact on people’s lives. We have regulations around building bridges. We don’t just allow any old sod to go out and build a bridge and then allow people to walk over it. We need to start to think in the same way about AI.
I think the wonderful thing about working with Ed and Nick is that they think about the things that I don’t think about. They think about how it gets played out, because there’s certainly a lot more to regulating, or to having good AI, than just mathematics. As I would put it in a mathematical expression, it’s a necessary but not sufficient condition.
[00:16:58] KL: Thanks. That’s absolutely a great opening. I’d love to ask you many more questions, but for the sake of time, I’m just going to open this up: what is the institute doing, and where do you see Australia being able to lead? We’re trying to feel out what Australia is doing differently, or what we could be doing differently. I know of some amazing work that I’m seeing the repercussions of on my side. Can you talk to us, and the audience, a little bit about where you see that governance coming in and what we need to be doing from a non-mathematical, non-computer-science perspective? Ed, Nick, or both.
[00:17:26] ES: I’m happy to make a start. Look, in the broader scheme of the global race on AI, the superpowers, countries like China and the United States, are out-investing us. They’re always going to out-invest us. That’s okay, right? They’ve got their own pathway. I think what really makes sense for a country like Australia is to work out what our niche is. We’ve seen what some other countries have done. What Germany did with its national AI strategy, and I’m paraphrasing, was basically to say: when you’re buying a German car, a Mercedes or a Volkswagen or whatever, it’s not the cheapest car in the lot, but you’re going to benefit from the fact that it’s really well designed and incredibly high-performance, all of that sort of thing. That’s why you’re going to pay the extra money for it.
So the question for us as a country is, what’s our niche?
Now, I would say this, and I acknowledge that. But I think our niche could be that we really care about fairness and equality. We are the country of the fair go. We take human rights seriously. We’re not perfect, no country is, but we do take those things seriously. Our commitment to liberal democracy is very, very strong. One of the things we’re seeing over the last few years, as the broader community has become more conscious of how AI is being developed, how it’s being used on them, and how they can use it themselves, is that people are saying more and more, “We want those protections of our basic rights to be baked into the way in which the new technology is designed, developed, used and regulated.”
If we can show that that is how we’re going to put our stamp on AI, then I think we get two benefits. The first, and perhaps the only one we should care about, is that it will be better for people, right? It’ll be safer. It’ll be more reliable. It will better respect people’s basic human rights. But if that’s not enough for you, and I sound like a dental advertiser from the 1980s. But if that’s –
[00:19:37] KL: Do I get knives with this, Ed? I feel like –
[00:19:39] ES: Yeah, exactly. Here are three sets of steak knives. Also, I think there’s enlightened self-interest in taking that approach, because it’s a point of difference, right? That is actually something that consumers the world over are saying they want more of. So I think we get an additional benefit in that regard.
[00:20:00] KL: So how are you doing that? What are you doing from your side? I’m framing this quite carefully, because I recently presented after a cybersecurity expert who said, “Data ethics and governance, how boring.” It was a great intro for me, because I got to prove him completely wrong. But for many in the audience, governance is a yawn, particularly for data scientists. You talk to a data scientist about ethics and governance, which I have for years, and the eyes sort of glaze over until you talk about concrete examples. What are you doing? What is it that you’re thinking and doing in this space? Just to make it a little bit more tangible.
[00:20:31] ES: Yeah. I’ll give a quick example. So we’ve created these three labs: skills, tools, and policy. What we’re saying is, in a sense, those are three levers that you can pull to advance fairness and human rights in the way in which new technologies are developed and used. Policy is a really good example. You can call it governance if you want, or law; in other words, the rules that we all have to comply with. We know that the best innovation actually tends to be constrained. You don’t necessarily want to give innovators a completely blank page, because they can go in all kinds of different directions. But if you constrain it by what Australians actually want and need, and in this case I’m particularly focused on human rights, you’re going to end up with innovations, with new technology that is better, that better responds to people’s needs.
Late last year, we developed a model law for facial recognition technology, because Nick, myself and Lauren Perry, our third co-author, all recognise that there are all kinds of really positive use cases for facial recognition, and those things should be encouraged. That’s fine. But there’s also a whole bunch of terrible stuff happening that is not being properly regulated. What we want to do is move the market. We want innovators to better understand what their obligations are, so that they can innovate within those parameters.
[00:21:48] NR: Yeah, that’s awesome. I think a lot of organisations want to do the right thing, and policy takes a long time to get enshrined in law through our government and legal system. So it’s valuable having a place where organisations can come and ask, “How do I make sure I’m doing the right things, and under what conditions should we be thinking about these things?” For organisations wanting to find information on how to do the right thing and what steps to take, how can they get in touch with you guys and get that kind of advice from you?
[00:22:25] ND: Yeah. The great thing about being at UTS, as the Human Technology Institute, is that we’re part of that bigger structure of experts who are able to think through lots of different parts of the challenges that organisations are facing. We welcome people reaching out directly to us via our website or on LinkedIn. The way we work is on really specific challenges with organisations, to solve what are really some of the trickiest challenges at the frontier of the design, deployment, use and impact of AI systems. But our goal as an institute is then to really democratise, and spread, and shift the whole market. So we’re always looking for really great examples of those challenges, but we’re also very happy to chat about the different types of resources that exist out there in terms of guidance.
I will say, a lot of the public discussion over the last few years on AI at the organisational level has focused on principles and ethics at a high level. The empirical evidence is that those kinds of activities and investments at the organisational level actually don’t change behaviour, or have a positive impact on how AI systems are actually used. At the very other end, Kobi and I are engaged in Australian and international standards-setting exercises through the ISO SC 42 committee. That’s quite specific –
[00:23:49] KL: What’s ISO, Nick? Nick, what’s ISO for our listeners?
[00:23:52] ND: The International Organization for Standardization. It’s the international body that brings together and harmonises standards, with national bodies as its members. In that standards work, being developed right now are some very specific management system standards around AI, and impact assessment standards, that will help organisations very tangibly audit, assess and ensure that the way they do AI, not just the individual algorithms, but the way they set those up inside their organisations as a whole, will be safe and will avoid those commercial risks. As a business, you just don’t want your algorithms misfiring, because that’s bad for business, let alone the community impact, the regulatory impact, and the reputational impact that those failures might have. So between those two poles, I think it’s best for organisations today to look really practically at what they can do. Organisations like yours, Nat, are part of that ecosystem as well.
[00:24:50] KL: Yeah. If I can just add to that, Nick, for those who are not following this, there’s a bit of a standards race on at the moment. We’ve got a series of different standards from different bodies coming out, and everybody’s racing to get the best and quickest first standard out, because the EU has indicated that it will potentially adopt some of these standards, if they’re functional, as regulations and requirements. To Ed’s point, this is the “we don’t want to be doing harm, we want to be doing good, we want to be managing the risk that boards have a responsibility for” side. But also, it’s coming, and it’s a really good thing to have.
Companies should be building this out, and I think this ties in with your comment, Sally: if you don’t think about it from the outset, if you haven’t put thought into your data sets, your algorithms and your hardware, you’re going to have major problems down the other end. So these are considerations that you need to have early on.
[00:25:40] ND: And maybe just to add to that, Kobi: when I talk about the decade of AI regulation, a lot of people take that as meaning, perhaps rightly, things like the EU’s AI Act and evolving AI-specific legislation in China, in the US, and in lots of other jurisdictions. But the decade of AI regulation for Australia is also a decade of regulators applying existing law around discrimination, privacy, consumer rights, directors’ duties, and torts and harms that are either statutory or in common law. Regulators are looking really closely, whether you’re in finance or you’re selling a product, at how these systems can and do create individual and societal-level harms.
So yeah, if you’re listening to this podcast, and you’re in a large organisation, or even an organisation that is small, but does some really meaningful things using AI systems, even just recruitment AI, you need to be asking some pretty critical questions about bias, potential for discrimination and harm right now, because regulators are. Ed and I talk to them all the time, and they are really looking closely at this.
[00:26:46] KL: Yeah. I remember Ed being one of the first people I heard talk about existing law. We were talking to boards, and people were saying, “There’s no law governing AI,” and I was facepalming regularly, in my mind. Yeah, exactly. There’s body language on the podcast right now. But Ed started saying this as well, where some of us were saying, “But there’s all this law, there’s a whole landscape of law. It’s really boring, just look.”
But it’s boring, so we don’t want to look. We just want to pretend there’s nothing and that we’re operating in some empty space. I think this is a really good opportunity to circle back to what Sally was talking about on the technological side. We’ve talked a lot about the governance side, as you would with three reformed lawyers. But from the tech side, a lot of tools coming out now promise to assess bias using technology. I mean, I’ve had tools presented that applied tolerance levels for racism or sexism, which I think can be fundamentally problematic. Can you talk a little bit about where you see the tech coming in on the governance? Because it is also part of the governance.
[00:27:42] SC: Yeah, absolutely. Let me say up front that I’m actually not a huge believer in having AI check the regulation of AI. There is a large AI community that says that the solution to bias and fairness is more AI; in fact, that we should have deep learning for that too. I think they’ve missed the point.
[00:28:02] KL: I thought we were going to be able to disagree on this, Sally. We actually agree; this is very boring.
[00:28:08] SC: Okay. I will say, then, something that you might disagree with: where it can actually add value. I think it is by being rigorous in the mathematics, so you can build systems that enable you to quantify uncertainty. That requires a lot of probability theory. That has to be key to any decision-maker, because it's the amount of risk you take, so you have those sorts of checks and balances. I go back to the original purpose of having mathematics behind things: you can prove things, and you can prove things out of sample. You've got all these AI techniques at the moment that talk about in sample and out of sample; they train them on this data, and they hope they work out of sample.
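Sally's point about quantifying uncertainty for decision-makers can be made concrete with a minimal sketch. This is illustrative only, not from HTI's work: the data is synthetic, and the bootstrap interval is just one standard way of putting a quantified uncertainty range around a point estimate so that the risk of a decision is visible.

```python
import numpy as np

# A minimal, hypothetical sketch: rather than reporting a single point
# estimate, attach a quantified uncertainty interval so a decision-maker
# can see how much risk they are taking on.
rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=200)   # synthetic observations

point_estimate = data.mean()

# Bootstrap resampling: re-estimate the mean on resampled datasets to
# approximate the sampling distribution of the estimator.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.quantile(boot_means, [0.025, 0.975])   # 95% interval

print(f"estimate {point_estimate:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```

The point of the interval is exactly the "checks and balances" Sally describes: two models can produce the same point estimate but very different uncertainties, and only the latter tells you the risk.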
Now, what's great about mathematics is that you get generalisable proofs. You can actually show where it's not going to work. But that does require that you have at least one person who understands this, who can actually do these proofs and show that under these conditions it works. For example, deep learning will not work when you've got incredibly noisy data. People say we can't actually prove properties of the outputs from deep learning, but actually, we can, if I can be technical. I'm presuming everybody here knows what deep learning is: a neural network with lots and lots of layers.
If there were infinitely many layers, they would approximate what's called a stochastic differential equation, and there's lots of work going on looking at the similarities. We've been studying stochastic differential equations for a couple of hundred years, so we can start to get a sense of how these networks behave by looking at what they approximate in a mathematical sense. I think that's a really under-tapped and under-appreciated component of how we can use mathematics for the better. One of the reasons I'm so excited about what HTI is doing is, quite frankly, because that's sort of what I do. But when I do what I do, I don't have the impact that Ed and Nick have. It's actually wonderful listening to, and working with, people who are totally different from me. It's actually the three –
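The connection Sally describes between very deep networks and stochastic differential equations can be sketched numerically. In this hypothetical toy example the "layer" is a fixed function f(x) = -x rather than learned weights (an assumption for illustration): each residual update is then exactly one Euler–Maruyama step of the SDE dX = f(X) dt + σ dW, so the classical theory of that SDE tells you how the very deep network behaves.

```python
import numpy as np

def residual_net_forward(x0, n_layers, sigma=0.0, rng=None):
    """Forward pass of a toy residual network. The per-layer update
    x <- x + f(x)*dt + sigma*sqrt(dt)*eps, with dt = 1/n_layers, is an
    Euler-Maruyama step for the SDE dX = f(X) dt + sigma dW over [0, 1].
    Here f(x) = -x stands in for a learned layer (illustration only)."""
    rng = rng or np.random.default_rng(0)
    dt = 1.0 / n_layers
    x = x0
    for _ in range(n_layers):
        drift = -x                              # f(x): the layer's transform
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        x = x + drift * dt + noise              # one residual block == one EM step
    return x

# With sigma = 0 the network reduces to Euler integration of dx/dt = -x,
# so a very deep network's output approaches x0 * exp(-1), a property we
# know from the ODE/SDE theory rather than from training experiments.
deep_out = residual_net_forward(1.0, n_layers=10_000)
print(deep_out)
```

This is the spirit of the argument: once you know what the network approximates, two-hundred-year-old mathematics lets you prove how it behaves, including where it fails.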
[00:30:18] KL: I think I disagree with that, Sally. I think you are going to have the same impact. In fact, I think that it’s exactly this kind of collaboration, which sort of opens up the question, do we need more teams like yours? Yeah.
[00:30:30] SC: It's me listening to the sort of issues that Ed faces and the issues that Nick faces, and then thinking about the limitations of the maths: how much it can actually help, how much it can't help, when it can help, and what kind of help. Actually, I think Nick and Ed have both alluded to this: the way to learn is to focus on specific issues. We have specific challenges, and it's amazing when you've got a specific challenge. One of the projects I'm working on at the moment is how we can improve education standards to reduce social disadvantage. That's a societal sort of challenge.
To answer those questions, you get a whole bunch of people together who all come with very different backgrounds. But focusing on a specific question is absolutely key; it's by focusing on the specific. Until you do that, you can't generalise. I think that that's –
[00:31:23] KL: Does the project include data literacy as well, looking at another opportunity for Australia to lead in terms of educating people to be able to ask the right questions, to understand systems, to be more data literate?
[00:31:34] SC: The current program is more about trying to figure out what interventions – are we talking about the program with the social disadvantage?
[00:31:41] KL: I’m asking. I have no idea. Yeah.
[00:31:44] SC: All right, so I'll answer a slightly different question, if that's okay. Which is: yes, I do think that one of the areas Australia can lead on is this sort of thoughtful or rigorous AI, which I think comes about with better data literacy, better numeracy. You don't have to have sophisticated maths to realise that what Robodebt was doing was shocking. Just a basic understanding would have delivered a better outcome. But people are just not thinking like that. I think raising that awareness is really important.
[00:32:22] ND: One thing that I feel has been a little bit overlooked in the public discussion of AI is what Sally just mentioned, which Nat knows very well from her work: this multiplicity of different techniques, algorithms, and approaches that all get bundled up into AI.
As this space quickly becomes more commercial, AI techniques become productised and packaged behind layers of interfaces, cloud services, and whole ML Ops platforms offering different aspects. It becomes harder for an organisation, particularly an SME, engaging with that product to really know and understand, and to ask the hard questions Sally's mentioning there. What data is used? Who has labelled that data? In your last podcast, you mentioned Kate Crawford's work. Is that labelling being done, not even to mention ethically, but accurately, and with what biases?
This question of whether the algorithm is fit for purpose is a bit hard to answer when you've just got three companies trying to sell you a recruitment tool that uses AI. So being able to ask those questions is really a matter of consumer rights and governance, in terms of having a little bit more, or in some cases a lot more, transparency about what lies behind these systems on the market.
[00:33:50] KL: Question for all three of you. [Inaudible 00:33:51], who's a professor of data science, was talking about the fact that data science doesn't have a professional kind of obligation, to your point, Sally. Doctors and lawyers have codes of conduct, and they can be disbarred, or not permitted to practice, under those licenses. His position, which I thought was a really interesting one, was that data scientists have never been required to have a similar license, because what they've been perceived as doing is just maths. It's just maths; maths can't really change anything.
To your points, it's calculations of risk; it's true or false, it's binary, it's not really problematic. As we move forward, I know there's an increasing groundswell talking about potentially licensing data scientists to have a certain skill set and to understand the impacts of their tools, because we know these are not just mathematical tools. We know the impacts across many, many other areas; we've talked about them, and we'll continue to talk about them. What's your view of that as a team? Do you think that's something that would be helpful or useful going forward? Is it even possible?
[00:34:49] SC: Yeah, I think it's quite a good idea, in the same way that we have engineering. I mean, it's just another field, a combination of maths and computer science. We have engineering bodies which control what you can actually do, and we're seeing it, not only in Australia.
There is a wonderful woman called [inaudible 00:35:07] at QUT who's advocating very strongly for a data science body that would look at this, and there are similar organisations in the UK doing the same. So yes, I think it would help enormously. Of course, if you speak to a statistician, they'll just tell you it's all statistics, but it's much broader than that.
[00:35:28] ES: It's a really interesting question. The Carnegie Foundation, about 15 years ago, did a really interesting study looking at half a dozen different professions, including medicine and law. They tried to pick the eyes out of what was so important about those professional areas of practice that meant they should be subject to additional constraints. Some of the reasons are a bit boring: they have a monopoly on being able to provide a particular service. That's one side.
But I think what's really interesting about data science, and it does align with some of those other professions, is that data scientists are not just providing some general insights. We have moved into a phase where they are essentially providing products or services. Not always, right? But often, that's what they're doing: here is a tool that you can use to perform a function that you were previously performing in a completely different way. That means the risks, I think, are higher, because it takes data science out of the abstract world of better understanding the world around you, or whatever it happens to be, to actually making decisions that can affect people in quite profound ways. For me, that's quite a strong reason to think more about the professionalisation of that area.
[00:36:53] SC: Yeah, if I could just add a specific example to that. Take when COVID first hit: there were all these data science models, and governments were making big decisions based on them. They were deciding to move people who happened to have COVID out of hospitals into old people's homes; this was in New York. I wrote, with a team in the US, quite a few papers about what was going on. We didn't release vaccines until they were tested. Where is that sort of rigour that we place on vaccines when it comes to data science models that are governing huge amounts of policy? That is a classic example of what I think is needed.
[00:37:36] KL: Yeah. I'd love to get my hands on those papers, if you care to share; we're very happy to put links in. I'm also going to refer to a book that, if you haven't read it, is absolutely incredible. Sally, to your point about risk: people think risk is mathematical and binary, but there's a whole history to the politics of risk and how it even came to be a field, which shapes how we look forward. For those who are historically minded and have a lot of spare time on their hands, or are just super nerdy, there are a couple of really good books on this that I can link to in the podcast.
But I think it's worth just having this conversation around what we sort of assume are norms and what's possible. This is one of the things I really love to do on this podcast: explore not only what we're doing now, but look forward to what we could be doing. [Inaudible 00:38:14] having future imaginaries of things that we could practically be doing in Australia that differentiate us and make us better positioned. I think there are a lot of opportunities.
Is there anything else that you want to add? I’m conscious of the hard 12:30 finish. If you wanted to add anything that we haven’t covered, or even just to say where we can contact you. I know you’ve mentioned a couple of options. Nick?
[00:38:33] ND: Maybe one more thing to add here: if you're a big enough business to be able to negotiate the terms of contracting with partners, IT partners or vendors, there are some good examples starting to emerge of standard contractual terms, clauses, and agreements for when you acquire AI services. The City of Amsterdam, pre-COVID (it seems a long time ago now), back in 2018, published along with the City of Helsinki standard contracting clauses for AI services. We can put the link in for the podcast. If you run through those clauses, they give you some really critical questions around what rights you might want to have as a client in this space, in order to ensure that you're discharging your duties to your stakeholders, your customers, and your board as well. Things like this are starting to appear, but they're far from standard in discussions and engagements right now. So attacking this problem of responsible AI from multiple levels, from the maths angle, from the training aspect, from government policy, but also from these commercial relationships, is pretty critical.
[00:39:41] NR: Yeah, I like seeing that as well. This is everybody's responsibility. Organisations that are large enough, like you say, can put these agreements in place, and vendors and partners like us can too. It's our responsibility as well to come in and say, actually, we're not going to build an algorithm with data that affects people if we're not prepared to put this rigour around it and put these considerations in place. I think the onus is on all of us to make sure that we're, as you say, Nick, discharging our duty of care around this.
I think as well, when it comes to the education side of things, the National AI Centre is doing a really great job. They have a real mission at the moment to highlight to people the everyday uses of AI: where it is, like you say, being bundled up in products, where people might not even realise decisions are being made with the use of AI behind the scenes, in a lot of really prolific everyday uses. I think their new podcast, Everyday AI, is going to do a really good job of helping people understand where they might be being impacted, and hopefully really shine a light on that as well.
[00:41:06] KL: Absolutely. We're looking forward to interviewing the director of that centre, Stela Solar, in our next podcast. One final point, which we haven't touched on at all, Nick, but might be able to be stitched back in: third parties. When you talk about those sorts of contracts that exist as models, to what extent are you or others thinking about third-party suppliers and this chain of data that we've seen, again, through the Medibank and Optus breaches? If anyone wants to talk to that, we can perhaps stitch it back in.
[00:41:31] ND: Our ongoing work on the corporate governance of AI is looking very closely at third-party responsibilities. When you have a recruitment system that is contracted out, but it is acting on the applicants to your business, you are responsible. You can't just turn around and say it was biased and therefore it's the fault of the company that provided those services; it comes back up to you. There are even things slightly further afield than that: maybe in your supply chain there's an organisation using AI in a really terrible way, which can interact with lots of different legislation and statutory duties, but also just creates reputational risk around who you're working with. These are things we're seeing more and more: senior executives and boards are doing different forms of audit, asking, "Which tools are we using? Where's the data coming from, and where is it being used in our supply chain, in ways we either are responsible for or should be responsible for?"
[00:42:23] KL: Awesome. Thank you so much. Nat, did you want to ask anything or wrap up?
[00:42:28] NR: No. Just to say, it’s been an absolute pleasure to chat with you all today. Could easily stay here and talk with you all day, but respectful of your time as well. Thank you so much for joining us today and would love to keep in touch. All the best with HTI. You’re doing such important work and we just wish you all the best with it.
[00:42:48] ND: Thanks, Nat.
[00:42:48] KL: Please send us all your links, to advertise all the things and anything you've touched on or mentioned today. Happy to make this a richer resource for those listening.
[00:42:56] ES: Thanks, Kobi.
[00:42:57] SC: Thank you so much.
[00:42:58] ES: Thanks, Nat.
[00:42:58] KL: Thank you.
[00:42:59] NR: Thank you all so much.
[00:43:01] ANNOUNCER: Thank you for tuning in to this episode of the AI Australia Podcast. Subscribe to the show wherever you listen to podcasts, so you’ll never miss an episode or sign up on the Eliiza website for meetups, podcasts, guides and events. You can connect with us via LinkedIn by searching Natalie Rouse or Kobi Leins or through eliiza.com.au/aiaustraliapodcast.