AI and Privacy Regulations – The Accounting Technology Lab Podcast – Oct. 2024
Hosts Brian Tankersley, CPA, and Randy Johnston discuss the privacy implications of artificial intelligence and the new regulations attempting to address these issues.
Oct. 11, 2024
Transcript (Note: There may be typos due to automated transcription errors.)
Brian F. Tankersley, CPA.CITP, CGMA 00:00
Randy, welcome to the Accounting Technology Lab, sponsored by CPA Practice Advisor, with your hosts, Randy Johnston and Brian Tankersley.
Randy Johnston 00:10
Welcome to today's Accounting Technology Lab on AI and privacy regulations. We recorded a prior podcast in the Accounting Technology Lab on software licensing and privacy policies. It's a lengthy recording, but if you take a listen, we think you'll gain some additional knowledge on privacy and why we were talking about it. But Brian, I think this might be a good time to pick up on the AI laws and regulations that are out there, because you and I have been following these for a good number of months. What do our listeners need to know?
Brian F. Tankersley, CPA.CITP, CGMA 00:49
So AI is, again, a hot topic that people are talking about, but you need to know that there are some very specific regulations out there, and they are evolving very rapidly. The EU has its Artificial Intelligence Act. The big one we really looked at is President Biden's Executive Order 14110, which came out at the end of October of '23. It is already driving rulemaking in many, many of the government agencies. We have the AI Safety Institute, which is now part of NIST, the National Institute of Standards and Technology; NIST, in turn, is part of the Department of Commerce, and it comes out with the standards the federal government has to comply with related to computer security, IT and many other things. Now, you may be subject to those rules if you do work for the federal government. So if you're a federal contractor or a healthcare provider, or you receive significant federal funds, you may actually be required to comply. NIST also has an AI risk management framework that is optional now, but I think for certain people it's going to be mandatory in the intermediate term. There is also a GAO document, as well as a National AI Initiative Act. So there's a lot of stuff in here; let's just go through those in order. All right, the first one is the EU's Artificial Intelligence Act. This was issued in response to things like ChatGPT. The idea behind it is that you classify applications into four categories: unacceptable, high, limited and minimal levels of risk. Apps with unacceptable risks are banned by statute. High-risk apps have to comply with more rigorous requirements for security, transparency and quality; payroll would be a high-risk application, for example, because of all that data, and investment management would also be high risk. Limited-risk apps only have transparency obligations, to let people know what you're doing, and minimal-risk apps are unregulated. General-purpose AI has transparency requirements as well, and it has to be evaluated when the risks are above the limited level. Go ahead, Randy.
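To make those four tiers concrete, here is a minimal, hypothetical Python sketch of the kind of internal registry a firm might keep for its AI applications. The tier names come from the EU AI Act itself; the example apps, the obligation summaries, and the helper function are illustrative assumptions, not legal guidance. A directory like this is also the sort of thing Randy calls for later in the episode.

```python
# Illustrative only: a toy registry mapping the EU AI Act's four risk tiers
# to simplified obligations. Tier names come from the Act; the app names,
# classifications, and summaries below are hypothetical examples.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright by statute
    HIGH = "high"                  # rigorous security/transparency/quality duties
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # effectively unregulated

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited; may not be placed on the EU market.",
    RiskTier.HIGH: "Conformity assessment, risk management, logging, human oversight.",
    RiskTier.LIMITED: "Disclose to users that they are interacting with an AI system.",
    RiskTier.MINIMAL: "No specific obligations under the Act.",
}

# Hypothetical classifications along the lines discussed above.
APP_REGISTRY = {
    "payroll_processing": RiskTier.HIGH,      # sensitive employee data
    "investment_management": RiskTier.HIGH,
    "marketing_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_for(app_name: str) -> str:
    """Look up an app's tier; treat unknown apps as high risk until reviewed."""
    tier = APP_REGISTRY.get(app_name, RiskTier.HIGH)
    return f"{app_name}: {tier.value} risk. {OBLIGATIONS[tier]}"

if __name__ == "__main__":
    for app in APP_REGISTRY:
        print(obligations_for(app))
```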
Randy Johnston 03:32
Well, I was just going to mention that you've covered the EU Artificial Intelligence Act, but I also want to mention the Bletchley Declaration from November 1 of '23, which was an agreement among the G7 nations as well. The broad umbrella I'm trying to call out, Brian, is that there are going to be global regulations on AI, and it's very much a moving target. Anything we say in today's podcast could vary by tomorrow, and we understand that, but we think these big frameworks will be around for a while.
Brian F. Tankersley, CPA.CITP, CGMA 04:08
Yeah. Now, the tip of the spear right now on AI and regulation is what's going on with the President's Executive Order 14110, and the reason this one is so important is that it is 47 pages of excruciatingly boring memorandum, every bit as exciting as the Federal Register, but it actually comes out with some new requirements. It requires DoD, Energy and HHS to all create regulations about AI models that might pose a serious risk to national security, infrastructure, or public health and safety. So think about things like big botnets running on cloud servers, or AI models that can be used to create biological weapons, which is a major concern for both DoD and HHS. There's actually a lot of reporting on this. These AI models have to be evaluated by the government and have to pass red-team testing as well.
Randy Johnston 05:20
So, Brian, are you trying to tell me that I, Robot and Skynet, those movies, are coming to life here? What's going on?
Brian F. Tankersley, CPA.CITP, CGMA 05:28
I will tell you, speaking of iRobot, one of the things that's a real concern for some people is the Roomba, put out by iRobot. I know you were talking about the movie I, Robot, or the Asimov story, but I mention the Roomba here because one of the things they actually do with their AI is build a map of where all the furniture is in your house, so that the Roomba can steer around it, and they retain that data on their servers. If you think about it, if a bunch of crooks got hold of that information, they could figure out how quickly to run through your house. I mention it because I want you to know that sometimes devices, like Internet of Things devices, can do things with your data that you might not think about.
Randy Johnston 06:24
And others of us worry about things like military equipment that is given full autonomy, which the EU is against and the US government is absolutely for. I can give you all sorts of examples of military equipment, though it's not the point of today's podcast, but military AI that has no human intervention just doesn't sit well with me, just like data pimping doesn't sit well with either of us. So there are certain things I worry about, whether rational or irrational, because I'm really looking out for the good of all.
Brian F. Tankersley, CPA.CITP, CGMA 07:05
You shouldn't have a death machine running around; I agree. Now, I will tell you that since I live next to the Oak Ridge Reservation, where the US stores all of its surplus highly enriched uranium, and there's Kirtland Air Force Base, where they store the nukes, maybe in those contexts I can soften up a little bit, but that's only because I don't want to glow in the dark, Randy. But I agree that we've got to evaluate these things, because we don't want folks using AI to create some of these problems. Now, there are some privacy provisions in the order, though it falls short of GDPR and the Canadian Privacy Act. It does review how data brokers and commercially available data are used, and the agencies are going to recommend privacy guidance. I suspect that guidance will not be optional in the future; it will come through rulemaking. The President has also directed the federal agencies to evaluate the effectiveness of their privacy-preserving techniques and to identify some of these issues.
Randy Johnston 08:25
Where we've taught GDPR in the past, the data brokers, in my mind, through their license agreements extend privacy obligations out to the sub-processors we've talked about in other Technology Labs, and to my naughty-and-nice list of who's a good data broker and who's a bad one. And if you go back to the terminology being used, unacceptable, high, limited and minimal, there is no directory we're aware of today that lists applications as unacceptable, high, limited or minimal risk. We think it's going to have to get to that over time, just like in the banking community, where we have vendor and supplier approvals on a regular basis to make sure they're not breaking into our banking systems.
Brian F. Tankersley, CPA.CITP, CGMA 09:19
Yeah, and you're absolutely right to focus on those sub-processors, Randy, because I think it's critical to see, when somebody is using an outside processor, whether they're actually processing the data or just pimping it out. That's the key thing to look at. Now, the Commerce Department is also supposed to come out with best practices for detecting deepfakes: text, images and sounds that are not distinguishable from real text, images or recordings. There are some pretty interesting examples on YouTube, if you want to watch them. They have the text of speeches President Obama gave being delivered by President Trump, and vice versa. Since there's enough video out there, they can create an AI model that looks like former President Obama or former President Trump, feed whatever language into it, and have it delivered the way they would say it. It's actually pretty amazing. In fact, Randy, I think at CES, don't you want to talk about the model that was created of you?
Randy Johnston 10:29
Yeah, the Holo AI chatbot of me is pretty stunning. They just updated that model in the last couple of weeks, and it took 45 seconds. Our rule right now is that if there are seven or eight seconds of audio of you, they can reproduce your voice. So Brian and I know we're targets with this podcast, because there's plenty of voice and plenty of video of us out there. We're also following the evolution of the big AI providers (Microsoft with Copilot, ChatGPT, Claude 3, Gemini) agreeing to start doing identification, if you will, basically marking these images as genuine or fake.
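The marking schemes the providers are converging on (for example, C2PA-style content credentials) are far more elaborate, but the core idea, attach a verifiable tag when media is generated and check it later, can be sketched with the Python standard library alone. Everything below, the key, the tag format, and the function names, is a simplified assumption rather than any vendor's actual API.

```python
# Simplified sketch of provenance marking: sign media bytes when they are
# generated, then verify the signature later. Real schemes (e.g., C2PA
# content credentials) embed signed manifests with certificate chains;
# this toy version uses a shared-secret HMAC purely to illustrate the idea.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-provider-signing-key"  # assumption, not a real key scheme

def mark_as_genuine(media_bytes: bytes) -> bytes:
    """Return a provenance tag the generator would attach to its output."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).digest()

def verify_mark(media_bytes: bytes, tag: bytes) -> bool:
    """Check whether the media still matches the tag issued at creation."""
    expected = hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    image = b"...rendered image bytes..."
    tag = mark_as_genuine(image)
    print(verify_mark(image, tag))            # True: untampered
    print(verify_mark(image + b"edit", tag))  # False: altered after marking
```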
Brian F. Tankersley, CPA.CITP, CGMA 11:20
Yeah, yeah. I just wish we could do that with news, but I guess, you know, what are you going to do?
Randy Johnston 11:26
What are you going to do? Yeah.
Brian F. Tankersley, CPA.CITP, CGMA 11:27
So as we're looking at this, though, the order also creates a cybersecurity program to develop AI tools to find and fix vulnerabilities in critical infrastructure, because they think terrorists could use AI to target critical infrastructure. It increases government investment, and it takes other actions: providing guidance to landlords, federal benefit programs and federal contractors to avoid algorithmic discrimination, and providing best practices for the criminal justice system. Again, this use of AI is going to make it easier for people to find out things that have historically been public but difficult to access; it will now be much easier to access that information, and so they're targeting those things. There's a safety program to reduce harms and unsafe healthcare practices using AI, an effort to transform education by creating resources to support educators with AI-enabled education tools, and work to maximize the benefits of AI for workers. Those are some of the major sections in this.
Now, the important thing for you to know is that the regulations are going to start rolling out of HHS, DoD and the Energy Department in the fairly near future, so you should expect new regulations coming out of this effort to be proposed in the Federal Register soon, yes, even before the election. We'll see how this all works out. Now, NIST also has a trustworthy AI risk management framework that you can use, which is a very good document. There is a playbook, a glossary and a roadmap, and, importantly, there are crosswalks to different standards and frameworks.
So if you want to map their AI risk management framework against other regulations, they actually have a crosswalk that does this. If you have regulatory compliance obligations, that's something to consider. There's a video called "Introduction to the NIST AI Risk Management Framework" on YouTube; it's six minutes, and it's a great summary. Now, the framework was actually issued in January of 2023. It is voluntary, not mandatory, but we expect that may change in the near future.
It incorporates four major functions: governing, which covers governance structures and policies; mapping AI-related risks; measuring, which means quantifying and evaluating risks; and managing them. The idea is that we're trying to evaluate all of those. Here is their roadmap of what's involved; it's just a nice diagram, and I kind of like it. There is also a risk management framework playbook. These are free tools that will help you implement the framework should you want to, with checklists, suggestions and other items like that. It is similar to other risk management frameworks, like the ones from COSO, COBIT (which is from ISACA) and many others out there. Now, as we consider the potential harms from AI systems, this particular document focuses on harm to individuals, to groups and communities, and to society, depending on the circumstances, as well as harms to organizations and harms to ecosystems. They've actually listed out quite a few different things there. And Randy, what would you say generally about these laws and regulations at this point?
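For readers who want to see how the four functions might be operationalized, here is a small, hypothetical Python sketch of a risk register organized around them. The function names (Govern, Map, Measure, Manage) come from the NIST framework itself; every field, scale, and example entry is an assumption made for illustration.

```python
# Illustrative only: the four NIST AI RMF functions expressed as a tiny
# risk register. The function names come from the framework; the fields,
# scales, and the example entry are assumptions made for illustration.
from dataclasses import dataclass, field

RMF_FUNCTIONS = {
    "Govern":  "Establish governance structures and policies for AI risk.",
    "Map":     "Identify the context and the AI-related risks within it.",
    "Measure": "Quantify and evaluate the risks that were mapped.",
    "Manage":  "Prioritize and act on risks based on the measurements.",
}

@dataclass
class AIRiskEntry:
    system: str
    harm: str         # e.g., harm to individuals, organizations, ecosystems
    likelihood: int   # 1 (rare) .. 5 (almost certain); illustrative scale
    impact: int       # 1 (negligible) .. 5 (severe)
    mitigations: list = field(default_factory=list)

    def score(self) -> int:
        """Simple likelihood x impact score; real programs use richer models."""
        return self.likelihood * self.impact

register = [
    AIRiskEntry("payroll AI assistant", "harm to individuals (PII exposure)",
                likelihood=3, impact=5,
                mitigations=["restrict training data", "access logging"]),
]

# "Manage" step: review the highest-scoring risks first.
for entry in sorted(register, key=AIRiskEntry.score, reverse=True):
    print(f"{entry.system}: {entry.harm} (score {entry.score()})")
```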
Randy Johnston 15:16
Well, Brian, I think you've got the basic frameworks fairly tight. What I'm concerned about, for our listeners, is that the AI regulations are very nascent: they're exceptionally new and exceptionally variable, and a lot of the regulators trying to set the frameworks don't even know what they're regulating. In many cases, the interests of the AI developers are being advanced, maybe for additional sources of data, as opposed to the protection of data. We've only seen two providers who have tried to create environments where they protect the data more. If you go back to the OpenAI founders and what happened with the split, if you want to think of it that way, I believe it was Andrej Karpathy who went off and said, we're going to create a new, safer AI. So there are a few people trying those safer AI approaches, but it's a big deal, as it turns out. So I'm just going to return to a theme we have talked with you about in AI for a year plus, and that is protecting your client data: not putting any personal information inside these AI models. The vendors are trying to get data to train their models, they're running short on it, and the amount of risk to you and your clients if you make that type of decision is high. We're not anti-AI; we think AI should be private and built into a lot of the systems. So that's kind of a long rant in answer to your question there, Brian, but
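As one concrete, and deliberately incomplete, illustration of that advice, the sketch below strips a few obvious identifier formats from client text before it could ever be sent to an external model. The regex patterns and placeholder format are assumptions; no short pattern list substitutes for a vetted data-loss-prevention tool and human review.

```python
# Illustrative sketch of the advice above: scrub obvious personal identifiers
# from client text before it reaches any third-party AI model. These four
# regex patterns are simplistic assumptions; a real deployment would use a
# vetted PII-detection tool plus human review, not a short pattern list.
import re

PII_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EIN":   re.compile(r"\b\d{2}-\d{7}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    note = "Client John Q. Public, SSN 123-45-6789, email jqp@example.com."
    print(redact(note))
    # Note the name passes through untouched: names need a smarter detector.
```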
Brian F. Tankersley, CPA.CITP, CGMA 17:18
We'll do another episode on AI risk management to try to give you some overview of that, and that'll be coming in the fairly near future.
Randy Johnston 17:28
Yeah. Well, your expertise is appreciated, and we appreciate all of you who have listened in today. We will talk to you again soon in another Accounting Technology Lab. Good day,
17:41
Good day.
Brian F. Tankersley, CPA.CITP, CGMA 17:44
Thank you for sharing your time with us. We'll be back next Saturday with a new episode of the Technology Lab from CPA Practice Advisor. Have a great week.
= END =