Managing AI and data – how can you do it well? - Podcast


Telstra’s Stuart Powell sits down with KWM Tech partner Bryony Evans to discuss the human side of the data age, explaining how he helps Telstra’s business leaders make good decisions about when and how to use AI and manage data.

This transcript of their conversation is edited for length and readability.

Bryony Evans: I’m here today to speak to Stuart Powell from Telstra about navigating the ethical and governance lenses around the use of AI, and how that’s being dealt with at Telstra. Stuart leads the data and AI governance program at Telstra, which is aimed at empowering the organisation with trusted, high-quality data and AI by driving accountability across the business. Stuart has a background in technology and the design of data systems.

My practice focus is on issues around data and technology of all shapes and sizes. One of my key focuses recently has been on untangling the spaghetti of data ownership in a range of different types of M&A transactions including some recent complex financial services divestments. This has led me into thinking about data and AI and how that fits into different business models and different structures. 

Stuart, while people might have a general understanding of what data and AI governance involves, it’s also likely to mean different things to different people.  Can you break down what you do and how you’ve seen your role evolve over recent years?

Stuart Powell: Let me start with data governance, because that’s where we started as an organisation - knowing that we needed to leverage our data much better than we had in the past. We see our data as a strategic asset, but we weren’t leveraging it properly. That is a big problem for AI, because if your data is not right it’s very hard to use AI. The main thing we’re trying to do is to make good decisions about the data that we have. We have a lot of silos in the organisation, and a lot of data issues cross those silos - the tech people implementing things and the business needing those things to be done. AI brings its own challenges around how we do things ethically and responsibly. So we started down that journey and made sure that we were focussing on outcomes that would deliver value to the business, both in data and AI.

BE: Picking up your point around silos, I do see a number of clients grapple with that challenge too - organisations often approach data and AI governance through that lens: ‘what’s the business unit doing and what does that mean for that particular business?’ Can you have a one-size-fits-all approach to data and AI governance, or do you really need to look at it through the idea of one size fits many, customising it for different business units?

SP: You can’t ignore the silos that exist, since everybody is arranged by business function and the accountability at the top is by business function. What has been an interesting lesson for us is that the primary cut of data must be by business function, otherwise you just won’t drive the right accountabilities. So we have a primary view with the focus on the business function, and then a secondary view which looks at processes that cross business functions and drive outcomes across the business. That was really helpful for implementing a practical governance strategy that would drive accountability in the right way.

BE: I’m particularly curious about how you describe your work at Telstra as leading a shift in culture and practice.  What have you worked to change?  How do you put in place frameworks that really shift the culture around the use of data and AI?

SP: The bigger shift really was that data and AI were seen as an IT issue. Everybody thought that if you’re talking about data or about AI then it’s an IT problem, but that doesn’t work, because the funding is often driven by the business, not by the IT people. So one of the things we had to do was make people aware of the issues that are data related or AI related - focus on how you deliver business value out of governance, not just talk about it as compliance. The big challenge for us was educating people about how to understand data and AI and how to make business decisions about that, not technical decisions.

From the examples that I have seen at Telstra, the trick is really to empower the business to start solving the problems around the management and use of data - to understand what their accountabilities are, what decisions we’re expecting them to make, and how we’re expecting them to drive the business in the right direction, without having to be data experts or AI experts. That was a journey. We got our senior management talking about it. The leads from each of our business functions now meet every month on our data and AI council and talk about the real issues, and our group execs all meet quarterly to talk about data and AI - that’s been a big change and a big education piece for us.

BE: I can see that that’s a really significant cultural shift. Part of what we see in AI and how people think about governing AI in particular is around developing some high level frameworks or ethical principles about how to use AI and then thinking about tools to implement those ethical principles. What do you think is important for organisations looking to adopt AI and conscious of wanting to do that well and ethically - how have you approached that at Telstra?

SP: In order to use AI to improve the business, we have to do it ethically and responsibly. The government’s position in driving the principles has been the same, and almost everywhere people are talking about AI governance it’s the same message. The first question we really had to ask was: what are we trying to do? Having the senior managers driving from the top, and having the people doing the work on the ground understand their responsibilities, is important. You don’t want tech people making ethical decisions on their own; you want to make sure that’s done across the business - people being aware of those issues and knowing when they need to ask for help. So those are some of the things we would start with. As with data, it was understanding that doing AI responsibly is not an IT function; it’s something we all had to buy into.

SP: Working as a lawyer in this space, you see organisations wanting to develop frameworks for governing and using AI. What does that mean from a legal perspective?

BE: We often see clients treat the use of AI as a compliance issue, a risk issue or a tech issue, but when you’re thinking about AI and its challenges, it’s even broader - you need a multidisciplinary lens. For me as a lawyer, that means recognising that I will look at AI and immediately think about, for example, privacy law risks in terms of the use of personal information. When we speak to organisations about how they are developing these frameworks, we talk about how to bring that legal lens into how AI fits in with all of those other factors. We’re having conversations with clients about how you develop those frameworks, and also how you put in place a process where you’re not escalating everything.

SP: In terms of legislation, I’ve heard it said that it’s almost impossible to effectively legislate the use of AI. Do you agree with that? Do you think we may end up in a place where legislation is thrust upon us in a way that is very difficult to comply with?

BE: I do think that legislating this is very challenging. The EU legislation released last year shows that a one-size-fits-all approach is probably going to be very broad. I personally think that if that type of approach is taken, or if there’s legislation that’s not specific, it could be quite onerous for businesses to comply with, because it doesn’t take into account specific scenarios and the complexities of AI and how it can be applied in different ways. I think where it could be effective is where there are assumptions under existing laws that need to be changed because they don’t work for AI. For example, how do you make clear who is responsible for the decisions that AI makes? The person who programmed the software? The person who came up with the algorithm? The end user? Where does that responsibility sit? I also think that legislation would be effective for specific applications of AI. A group from the University of Technology Sydney, including the previous human rights commissioner, Edward Santow, is developing a facial recognition model law to propose to the Australian government to regulate the use of facial recognition technology. Those specific uses are probably more suited to legislation or specific rules than a broad-brush approach, which I can see could be tricky for businesses to comply with.

Back to you, Stuart…  In terms of companies starting their AI journey, what are some of the key governance processes that you think are important to start out with and what can really be developed on the go, on an ad hoc basis?

SP: For us the challenge was knowing what was going on in AI and putting some sort of governance across the top of it. The way that we did that was to form the Risk Council on AI & Data - RCAID, pronounced “arcade”. It has become reasonably well-known in the company because we talk about the ‘RCAID process’. If you’ve got an idea about AI, you’ll come to RCAID, where we have people who are experts in the risks across legal, cyber security, privacy, human impact and fairness, communications, and reputation. We assess the impact of the new AI use case from a risk point of view in those different dimensions. We’ll approve it or make recommendations to mitigate the risks that we find.

Our definition of AI is very broad, capturing everything from robotic process automation to deep learning. One of the things that we do is to expect new AI projects to do an initial risk assessment that rates the impact of the project as high, medium or low. The high and medium ones come to RCAID. If there’s anything that is particularly high risk, we escalate it, so that the final decision is made by our Data and AI Council, which has cross-company representation.
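As an illustration of that triage flow, here is a minimal sketch in Python of how an initial risk assessment and routing rule like the one Stuart describes might look. The risk dimensions, ratings and escalation paths come straight from the conversation; the function names and data structures are hypothetical, not Telstra’s actual implementation.

from dataclasses import dataclass, field

# Risk dimensions RCAID assesses, per the conversation.
DIMENSIONS = ["legal", "cyber_security", "privacy",
              "human_impact_fairness", "communications", "reputation"]

@dataclass
class UseCase:
    name: str
    # Hypothetical per-dimension ratings: "high", "medium" or "low".
    ratings: dict = field(default_factory=dict)

def triage(use_case: UseCase) -> str:
    """Route a new AI use case based on its initial risk assessment."""
    levels = {use_case.ratings.get(d, "low") for d in DIMENSIONS}
    if "high" in levels or "medium" in levels:
        # High and medium rated use cases come to RCAID for review;
        # particularly high-risk ones are escalated from there, with the
        # final decision made by the Data and AI Council.
        return "review at RCAID"
    return "proceed with standard controls"

print(triage(UseCase("retail chatbot", {"privacy": "medium"})))  # review at RCAID

In practice a rule like this would more likely live in a workflow or intake tool than in code, but it captures the routing logic Stuart outlines.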

BE: It sounds like it’s a mix of having a process nimble enough that it really attaches to the highest-impact, highest-exposure projects, and balancing wanting to be innovative as well - not having so many rules that people don’t feel like they can actually do things. Then, once you identify the riskiest types of AI, I imagine there’s a question of how to get into the black box to understand the algorithm driving the AI tool.

SP: Yes, it is a fascinating area. Especially if you buy an AI system and it’s making the high or medium risk decisions, you have to be confident as the operator of that system that it’s working effectively. We will probably need our suppliers to give us some access to the models to do our fairness testing. Otherwise, my recommendation would be not to proceed with them and to find some other solution. At the moment that’s the only way I can see to drive reliable compliance with those ethical AI principles. I’m hoping that over time we might get to the point where there are some more standards in place. But at the moment I can’t see anything that meets our requirements to do this.
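To make “fairness testing” concrete, here is a minimal sketch in Python of one common check, demographic parity: comparing a model’s positive-decision rate across groups. It assumes black-box access to a supplier model through a predict() callable; the record fields, group labels and tolerance are illustrative assumptions, not Telstra’s actual methodology.

def demographic_parity_gap(predict, records, group_key):
    """Return the largest difference in positive-outcome rates between groups."""
    counts = {}
    for rec in records:
        group = rec[group_key]
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + (1 if predict(rec) else 0), total + 1)
    rates = {g: p / t for g, (p, t) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: flag the supplier's system if the gap exceeds a chosen tolerance.
records = [
    {"age_band": "18-25", "income": 40},
    {"age_band": "65+", "income": 25},
]
model = lambda rec: rec["income"] > 30   # stand-in for a black-box supplier model
gap = demographic_parity_gap(model, records, "age_band")
print("demographic parity gap:", gap)    # 1.0 here - would fail a 0.1 tolerance

Demographic parity is only one of several fairness measures, and which measure is appropriate depends on the use case; the point of the sketch is simply that this kind of test needs query access to the model, which is why supplier access matters.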

BE: So it’s also a matter of being able to explain - not just internally but potentially at some point to regulators or to consumers - what’s actually happening in terms of how an output is being generated. I can see that being really important.

SP: Yes, one of the principles is explainability. You can never turn around and say, ‘I’m sorry, the machine made the decision, not our fault.’ So how do you back that up when you’ve bought the system off the shelf? We’ve got significant resources and we can usually marshal them to work on some of our biggest problems, but I imagine that’s not true for everybody and, no matter who you are, there’s going to be a limit on the resources that can be applied to governance.

How are your clients dealing with a question of taking those limited governance resources that they have and prioritising them?

BE: I think this really comes down to one of the points that you made earlier, Stuart, around the focus on impact and risk. We are seeing a number of clients also take that approach - focussing their governance and analysis resources on the higher-risk scenarios. So asking: if this output were disclosed, what’s the possible impact on individuals? Not doing that for every single case, but where it is higher risk or higher impact, thinking about explainability, and also thinking about that simple test: when I take all of that into account, is it creepy? The creepiness test is a simple question on its face, but it usually requires quite a lot of background thinking - almost stepping outside of the initial commercial need that we see this being justified for, and looking at it from all the different perspectives that we talked about earlier: the fairness perspective, the discrimination perspective, the privacy perspective, bringing all those things together. That’s how we’ve seen clients focus on what’s important or highest risk for them from an AI and data governance perspective.

SP: Yes, that’s actually one of our rules - does it pass the “creepy factor” test?

BE: Don’t be creepy!

SP: Yes! Don’t be creepy. It’s one of the things we think about all the time. AI governance is often less about the sophistication of the AI and more about the impact it has on people, particularly when it goes wrong. Take Robodebt, for example. It wasn’t particularly sophisticated, but the impact on people was very big. So I think there is a developing consensus amongst AI governance people that the riskier impact situations need to be identified. I loved your point about legislation for particular use cases - like ‘do not use face recognition in law enforcement’. It’s just a very sensible rule given the quality of face recognition and the potential for abuse. Legislation in those particular cases makes perfect sense. But we should avoid expanding from there to very broad legislation that isn’t particularly helpful.

BE: We opened with what you’ve changed in your role. This is such a fast-moving field that I imagine there’s little that really stands still. As a closing question, how are you enabling your team and Telstra to continue to adapt?

SP: Good question. The good thing about being in telco is that it’s always changing, so the idea of change is something that’s been built into what we’ve done all my career. In recent years we’ve been moving to agile approaches to developing software, systems and solutions. For agile development and managing change, you need to have a very clear idea of what you’re trying to achieve. If you have a clear idea of the end goal, even with all the little changes that are happening, you can ensure that you’re still progressing towards where you need to be. You might need to tweak things - you might even tweak your end goals as you learn more - but you’ve always got the objective in mind, and every little increment should lead you towards that objective. That’s the way I think about it: clarity of thought about what you’re trying to achieve, and then a bit of flexibility in the way you implement as you go along. If the organisation changes, or systems or processes change, you can be a little bit flexible about how you implement. But you should have a very clear idea of your goals for the long run.

BE: Thank you. It reflects wonderfully on Telstra that you’re able to discuss this journey, and I know there’s so much work that goes on in that space. I’ve learnt a lot, and I think our audience will have too.

SP: Thanks to you and KWM as well Bryony. Good to talk about it as lawyers and data nerds together!
