Generative AI: what accountants need to know in 2023
Podcast episode
Garreth Hanley:
This is INTHEBLACK, a leadership, strategy and business podcast, brought to you by CPA Australia. Welcome to INTHEBLACK. Today we'll be discussing what accountants need to know about generative AI. This year we've seen what feels like an endless stream of new AI tools being released. Led by ChatGPT and other large language models, these new tools bring with them the promise of improving processes and increasing efficiency. But there is also a lot of discussion around risks, reliability and whether these tools can be trusted by businesses. Today we're talking to Matt Dunn, head of automation at The Missing Link, where Matt helps businesses implement new technologies and automate their systems. Welcome to INTHEBLACK, Matt.
Matt Dunn:
Thanks, Garreth.
Garreth Hanley:
Matt, for our listeners who don't know you, are you able to give us a quick overview of what you do at The Missing Link and your background in automation, machine learning and AI?
Matt Dunn:
Yeah, sure. My background is actually in data analytics and management consulting, and I moved into automation about eight years ago. I first saw intelligent automation at a dinner at KPMG, where I was working. The dinner was on this topic, and from that moment I became obsessed with what it was and what its potential could be. I joined a former colleague to form a startup to learn about the technology and deliver it to clients, and since then I've joined The Missing Link to lead their automation consulting practice. Now, The Missing Link is a technology company. They're probably most famous for their cybersecurity and IT infrastructure services, but after automating their own processes, they started offering it to clients, and that's when I joined.
Garreth Hanley:
For those of us who don't know, can you explain the difference between AI and what we would call business automation? Is there a crossover between the two?
Matt Dunn:
Yeah, they are different, but they are complementary technologies, so we often use them together. Automation is using technology to do repetitive, predictable tasks. It follows rules and it can't learn new things. I'll give you an example of some automation that I use in my role. When we have a new client, I enter the client name in an online form, and when I hit OK, the automation creates a record for them in our CRM. It sets up a new project folder for them in SharePoint, creates a new agreement based on the project type, which is populated with their name, adds a task to our consultants' list and sets up a page for them in our note-taking system. So it does the same set of tasks every time a new client is entered. Whereas AI can handle more complex tasks than automation, and it can learn from feedback. In short, automation does the repetitive tasks, while AI essentially makes machines smarter, able to deal with ambiguity and learn new things.

In the accounting industry, some examples of automation would be, say, monthly financial reports. Automation software can be programmed to gather data from various sources, such as sales transactions, expenses and payroll, and then automatically generate those financial statements, like the profit and loss and balance sheets. If it's not all in one system, we can use automation software like UiPath or Power Automate to bridge the gaps between the systems, and the software will do the same set of steps for you every time. AI, on the other hand, can deal with ambiguity and learn over time. A couple of things that we've done for clients using AI include getting data out of Australian PDF bank statements. I'm not sure how many bank statements you've had to look at, but they come in so many formats. It's like someone working at the bank has a KPI to come up with a new format every month, and it's very difficult to get the data out of them.
So we've built an intelligent system that extracts this data, and it can deal with 95 per cent of new formats. Where it's unsure, with some human guidance, it then learns from that guidance so that it knows what to do with similar statements in future. Another example is categorising data based on text. If you have the text of an invoice and you want to allocate a GL code to it, for example, you can use AI to understand the text and allocate it to the right GL. So automation handles the more mundane tasks, whereas AI, and in particular generative AI, is speeding up the higher-level tasks that require more thought or judgement.
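As an illustration of the GL-allocation idea Matt describes, the simplest possible version is a keyword scorer. This is a hypothetical sketch only: the GL codes and keywords below are invented, and a real system like the one described would use a trained text-classification model that improves with accountant feedback, not hard-coded keywords.

```python
# Minimal sketch: allocating invoice text to a general-ledger (GL) code
# by keyword scoring. The GL codes and keywords are invented for
# illustration; a production system would learn these from labelled data.

GL_KEYWORDS = {
    "6100-Travel": ["flight", "hotel", "taxi", "mileage"],
    "6200-Office": ["stationery", "paper", "printer", "toner"],
    "6300-Software": ["licence", "subscription", "saas", "cloud"],
}

def allocate_gl(invoice_text: str) -> str:
    """Return the GL code whose keywords best match the invoice text."""
    text = invoice_text.lower()
    scores = {
        code: sum(word in text for word in keywords)
        for code, keywords in GL_KEYWORDS.items()
    }
    best_code, best_score = max(scores.items(), key=lambda kv: kv[1])
    # Route ambiguous invoices to a human, mirroring the
    # human-in-the-loop guidance described above.
    return best_code if best_score > 0 else "REVIEW"

print(allocate_gl("Annual SaaS subscription licence renewal"))  # 6300-Software
print(allocate_gl("Unrecognised vendor, no description"))       # REVIEW
```

The "REVIEW" fallback is the key design point: like the bank-statement extractor, anything the system is unsure about goes to a person, and in a learning system that feedback would retrain the model.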
Garreth Hanley:
So it sounds like AI is a bit more adaptive in its ability to automate processes.
Matt Dunn:
Yeah, exactly that. So think of it as a more flexible worker as opposed to one that will only follow instructions.
Garreth Hanley:
You did mention generative AI just then. So, how is generative AI different to machine learning and other types of AI? Are they the same thing or are they different?
Matt Dunn:
They're all part of the same family, but generative AI is a relatively new field and it's created a whole world of possibilities. It can create new information based on what it's seen before, ChatGPT being the most prevalent example of generative AI. You can get it to combine what it knows to create new content, given that it's read about a third of the internet: all of Wikipedia, a lot of scientific papers and general content. There are two parts to generative AI: knowledge and understanding. Knowledge is what's in its brain, essentially everything that it's read. Understanding is what it uses to interpret information given to it. Those two things combined are extremely powerful. A ridiculous example would be to give it your privacy policy and get it to rewrite it in the style of Shakespeare. Something that hasn't been done before, but it would do very effectively, given the body of knowledge it has to draw on.
Garreth Hanley:
Some of those policies are hard enough to read already.
Matt Dunn:
So it might be an improvement. But the other types of AI and machine learning that we've used in the past, they'll do different things like making predictions from existing data or understanding human language, but they don't create new information like generative AI does.
Garreth Hanley:
I'm keen to hear your thoughts on the benefits and risks of generative AI, but before we get into that nitty-gritty, are there any broad ethical concerns that accountants need to consider before they look at using these tools?
Matt Dunn:
Well, that's a big question, so I'll talk specifically about generative AI in this case. The first point is privacy and data protection. Safeguarding individual privacy is key, so we run training on ChatGPT to ensure that users, A, don't put their sensitive data in there and, B, change the settings needed to prevent their data going into the model. More broadly, there's bias and fairness. Models are going to be biased based on the data they're trained on. The likes of OpenAI, the creators of ChatGPT, try to remove the bias from the outputs to some extent, but you can't remove the bias of the person training the model, and there's so much information in there that trying to remove all the bias is an endless task. So it's going to have biases built into it. Think of it as a mirror of society: because it's read everything we've put on the internet, it's going to have our biases built in. Which is not a problem if you're asking about a privacy policy, but if you're asking about something more polarising, say climate change or Donald Trump, you're going to get whatever the majority is saying. Just based on statistics, it's going to assume that's right.

Then there's responsibility: who's accountable for AI decisions? I think people using AI have to take responsibility for quality-controlling the output and treating it as a draft to which they then apply their expertise. The good news is, humans still have a place in that we are still the experts, and it's our place to quality control what comes out, especially if it's financial advice that you're basing on the outputs of an AI model. The other area, I guess, is the impact on employment. The rise of AI means that we need to address potential job displacement and invest in re-skilling programmes so that society evolves to adapt.
Garreth Hanley:
So there you covered privacy, bias, quality assurance and some job and role changes. I suppose businesses are probably needing to set up some governance around those things.
Matt Dunn:
Yeah. And, I guess, also bear in mind that it's not only the good guys that get to use this technology. As far as cybersecurity and malicious use go, the bad actors are getting a free boost from the technology as well in terms of their effectiveness. If you think about a typical phishing email, the first clue is that the grammar is terrible, and they get a free pass in terms of improving their grammar and making a more convincing and coherent message. And lastly, who has control of the AI? OpenAI has an incredible responsibility for what comes out of the model. Let's say they could sway it towards particular political candidates. As people become more reliant on the information that comes out of these models, it's important that who controls them is monitored and regulated so they don't have negative impacts on society.
Garreth Hanley:
Like rules of the road.
Matt Dunn:
Yeah, exactly. And there has been a lot of talk in Congress in the US about exactly that. Well, how do we regulate this industry?
Garreth Hanley:
Let's talk about some of the good. You've said that AI isn't going to take our jobs away, but a recent report by Microsoft and the Tech Council of Australia says 36 per cent of accountants' tasks may be automated and 26 per cent could be assisted by GAI, generative artificial intelligence. So, what does this mean for accountants?
Matt Dunn:
It was an interesting report, actually. It talks about how generative AI could make a big difference to Australia's economy, potentially adding billions of dollars over the next decade, and what would deliver this kind of value is its ability to help businesses be more productive and help create brand new products and services. But thinking specifically about accounting, it will lift the profession towards the roles that require more thinking. A few examples: taking away the need to compile and clean data, so accountants can focus on the analysis and the insights; and spotting anomalies in accounting data, such as an unexpected expense spike or irregular revenue patterns, that might indicate errors or potential issues needing further investigation. It's the human's role then to do the investigating, as opposed to doing the grunt analysis.

One of the examples we go through in our ChatGPT training for clients is where we enter balance sheets and income statements and get ChatGPT to analyse them, provide commentary on the year-on-year performance of the company and make suggestions for improvements based on that data. It's incredibly effective at doing a first draft of that, which the accountant then sense checks and refines. So they can spend more time refining and adding strategic insights, as opposed to putting the information together the same as last year. And in communication, a lot of what accountants need to do is communicate with clients, be they internal or external. It can simplify technical accounting speak into layman's terms, so you can use it to put together, say, annual reports that are more readily understood.
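The expense-spike anomaly check Matt mentions can be illustrated with basic statistics: flag any month whose spend sits well above what the other months suggest. A minimal sketch with invented figures, not a substitute for real anomaly-detection tooling.

```python
import statistics

# Minimal sketch: flag an unexpected expense spike. Each month is
# compared against the mean plus two standard deviations of the
# *other* months (leave-one-out), so a large spike can't hide itself
# by inflating the overall statistics. Figures are invented.

monthly_expenses = {
    "Jan": 42_000, "Feb": 41_500, "Mar": 43_200,
    "Apr": 42_800, "May": 44_100, "Jun": 97_500,  # Jun looks suspicious
}

def flag_spikes(expenses):
    """Return the months whose spend exceeds the leave-one-out threshold."""
    flagged = []
    for month, value in expenses.items():
        others = [v for m, v in expenses.items() if m != month]
        threshold = statistics.mean(others) + 2 * statistics.stdev(others)
        if value > threshold:
            flagged.append(month)
    return flagged

print(flag_spikes(monthly_expenses))  # ['Jun']
```

The tool's job ends at flagging June; investigating why June spiked is exactly the judgement work that stays with the accountant.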
Garreth Hanley:
So, moving from number crunching to more of an advisory and communications role.
Matt Dunn:
Precisely, yeah.
Jaqueline Blondell:
We hope you're enjoying INTHEBLACK. If you are interested in the latest news, analysis, policy updates and business insights, you should check out CPA Australia's With Interest podcast. Join us as we dive into the news and delve into the business issues of the day. Each week we talk to thought leaders from across the accounting, finance, strategy, economic and business spectrum, and you get their expert opinions. Now, back to INTHEBLACK.
Garreth Hanley:
Right, new technologies always have benefits, but it would be remiss not to talk about the risks. In your opinion, and we've already covered some of the risks, what are the potential dangers that accountants might encounter with generative AI, and what do businesses really need to be on the lookout for? There are different types of generative AI out there, different tools. Are some better than others? How do people know what they should or shouldn't be using? Do you have any tips for people?
Matt Dunn:
Yeah, I'll talk specifically about ChatGPT, given it's the most prevalent of this type of AI. And the biggest data risk, well, I don't know if you've ever heard the term "picnic"?
Garreth Hanley:
No, I haven't.
Matt Dunn:
Okay. So, when I was working in a consulting company, this is more than a decade ago, I used to sit next to the IT department. And for about a month I used to hear the head of IT put down the phone after a call with a user and say to his colleague, "picnic". And then they would both laugh about it. After about a month, curiosity got the better of me and I said, "Why do you say picnic?" And he said, "Oh, problem in chair, not in computer." And this is very relevant to the ChatGPT scenario. Now, you remember we're a cybersecurity company as well, so I got some advice from our head of cyber. Let's say, for example, I have signed up to ChatGPT with my work email address, and let's say I'm in mobile technology and I'm doing a patent application. As a ridiculous example, I put in as my prompt: the following is my draft patent application for a neck tattoo by which you can communicate with your phone. Rewrite it to be clearer, suggest improvements, and identify any gaps in the application. And then I paste in the patent application after that prompt. It's a pretty well-structured prompt, but the big problem is that you shouldn't enter company IP into the chats. The first thing is that that prompt is now stored in the prompt history down the left-hand side of ChatGPT. So let's say I had used my work account to sign up, with a weak password like asdf1234. Now a hacker can very easily get into my instance of ChatGPT, look in the history, see what I've been searching for, get the content out of that and associate it with my company, because I've signed up with my work email address. So that's risk number one. We would suggest not signing up with your work email address, but rather creating a burner account. And if you want to distance yourself further from your data, create an account that doesn't have your name in it.
So let's just say [email protected]. That way, anything that's found in there isn't associated with your company. The other risk, and this is one Samsung has famously been in the news for lately, is dumping company IP, such as code, into the prompts. That can then be included in ChatGPT's model. You remember I talked earlier about its knowledge and its understanding. OpenAI are very clear in their terms and conditions that they can include your prompt data in the model. So if they chose to include that, and someone else came back and asked, "What are the latest applications for patents by technology companies?", the information could come right back out, and there goes my company information. And by the way, that is a real patent that Motorola has. So, there is a setting within ChatGPT, which I would urge everyone to switch off right away, within the data controls, that enables you to stop your data from going into the model. It's called Chat History and Training, and switching it off will stop your history being stored down the left-hand side and stop your data going into the model. This was actually in response to Italy's country-wide ban on ChatGPT, where they were concerned about the data of their citizens going into the models. OpenAI were a little worried that the EU might follow suit, so they made this change in response to Italy's demands.
Garreth Hanley:
Garreth Hanley:
And I think the EU is still working on regulation.
Matt Dunn:
Oh yeah, the entire world is working on regulating this technology. It's a new problem to solve, and the EU has tended to lead the way in this area, an example being GDPR. They will likely lead the way here as well. The other thing is, as I mentioned before, humans are still the experts and still have responsibility for the accuracy of what we produce. Think of it as having a grad or an intern that you give a task to: they might do a reasonable first pass, but you still need to check the output. I don't know if you saw on the news recently in the US, this was in New York, where a judge imposed sanctions on lawyers who submitted a legal brief that included fictitious case citations generated by ChatGPT. They had seen that the beginning of its work was fine and didn't go on to check the rest of it, because they made an assumption, and as a result they're in a lot of trouble because they didn't check their work. And that relates to any industry where expertise is being relied upon, and particularly to the accounting profession.
Garreth Hanley:
I guess that's not the way you want to find out you've made an error.
Matt Dunn:
No.
Garreth Hanley:
That's all we've got time for today, Matt. So thanks for your time.
Matt Dunn:
I really appreciate yours. Thank you for having me on.
Garreth Hanley:
We will leave links in the show notes for listeners who want to find out a little bit more information about what we've been talking about today, including a link to that report that we were discussing earlier. If you've enjoyed this episode, help others discover INTHEBLACK by leaving us a review and sharing this episode with colleagues, clients, or anyone else interested in leadership, strategy and business. To find out more about our other podcasts, check out the show notes for this episode. And we hope you can join us again next time for another episode of INTHEBLACK.
About the episode
If you’re an accountant wanting to understand the risks and benefits of AI tools, this episode will help you.
AI tools can improve processes and increase efficiency. But what are the risks? Can these AI tools be trusted with sensitive business and client data?
Tune in now for expert insights.
Host: Garreth Hanley, Podcast Producer, CPA Australia
Guest: Matt Dunn, Head of Automation, The Missing Link
For more insights on today’s topic, INTHEBLACK has several useful articles.
These include a review of six generative AI content tools as well as expert tips for using ChatGPT in Microsoft Excel.
Additionally, there is a look at AI-generated video-making tools for business. And for public practitioners, CPA Australia’s public practice resource has an article on the use of ChatGPT in an accounting practice.
For more information, you can read the report mentioned in this episode on Generative AI contributing $5 billion to Australia’s manufacturing sector by 2030.
CPA Australia publishes three podcasts, providing commentary and thought leadership across business, finance, and accounting.
Search for them in your podcast platform.
You can email the podcast team at [email protected]
Subscribe to INTHEBLACK
Follow INTHEBLACK on your favourite player and listen to the latest podcast episodes