Should AI Be Regulated? Can It Be Regulated?
Q&A With Muthu Krishnan: Chief Technology Officer at WhiteSpace Health
by Muthu Krishnan
As artificial intelligence (AI) gains momentum in mainstream society and in healthcare, excitement about what it can do has been matched by a nagging concern about the problems it might cause. The possibilities, for good and for bad, seem endless.
Recently, the U.S. federal government signaled that it is leaning toward AI regulation. On 30 October 2023, President Biden issued an executive order on “The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks,” the executive order states. But is that possible?
Muthu Krishnan, chief technology officer at WhiteSpace Health, gives his perspective on the executive order, on regulating AI, and on what he thinks the future holds for its use, especially in the revenue cycle management (RCM) sector.
President Biden's executive order calls for the Department of Health and Human Services to establish an AI task force to sketch out a strategic plan for regulating AI. Do you think AI needs regulation?
AI is here to stay. There is no avoiding it. People will be using it in some form or fashion, so it needs to be regulated. And it needs to be regulated with two things in mind: the results of AI should be reproducible and repeatable. Just because the system learns new things, it cannot do away with what it did in the past. So I think it is important to have that regulation in place. People should be held accountable for what they recommend using AI. Or, if you do not want to stand behind that recommendation, don't use AI. It is that simple. If you don't like to drive a car, you can ride a bicycle.
How do you see AI regulation affecting RCM?
It is going to have huge ramifications if you do not have very clear, well-defined problems that you can solve with AI. Here is the example I like to give. People are excited about self-driving cars, but you can't buy a fully self-driving one yet. The reason is the complex set of rules needed to teach the car what to do. For example, at a pedestrian crosswalk, think of all the variables. If somebody is standing in the median waiting to cross, what should the car do? If they have pressed the button to cross but are still standing there, should the car proceed? How do you reproduce every possible scenario and create a set of rules?
Can you give some examples of what you’re talking about in revenue cycle management?
There are certain payer behaviors from which AI can determine whether a claim is fine. We can figure out the conditions under which a payer is likely to deny a claim, and what the claim needs, whether that is artifacts, clarifications, or additional medical documentation, so that the payer will pay it once it is refiled. Better still, based on historical evidence, we can fix the problem before the claim even goes out. Payers also change their behavior often, because they are learning from how providers bill for procedures.
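To make this concrete, here is a minimal sketch of what such a pre-submission denial check might look like: a classifier trained on historical adjudication outcomes that flags a claim likely to be denied so missing artifacts can be attached before filing. The feature names, payer identifiers, and data are hypothetical, not WhiteSpace Health's actual system.

```python
# Minimal sketch: predict whether a payer will deny a claim before it goes out,
# using historical adjudication outcomes. All features and data are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historical claims: one row per adjudicated claim.
# In practice, payer_id and cpt_code would be encoded as categoricals.
history = pd.DataFrame({
    "payer_id":      [101, 101, 202, 202, 101, 202],
    "cpt_code":      [99213, 99214, 99213, 99215, 99214, 99213],
    "has_auth":      [1, 0, 1, 0, 1, 1],   # prior authorization attached?
    "docs_attached": [1, 0, 0, 1, 1, 0],   # supporting medical documentation?
    "denied":        [0, 1, 1, 0, 0, 1],   # the payer's historical decision
})

X = history.drop(columns="denied")
y = history["denied"]
model = GradientBoostingClassifier().fit(X, y)

# Score a new claim before submission. A high denial probability prompts
# attaching the missing artifacts (auth, documentation) and re-scoring,
# so the problem is fixed before the claim goes out.
new_claim = pd.DataFrame([{"payer_id": 202, "cpt_code": 99213,
                           "has_auth": 0, "docs_attached": 0}])
print("Denial probability:", model.predict_proba(new_claim)[0][1])
```

Because payers change their behavior over time, a model like this would be retrained regularly on the latest adjudication results, which is the constant learning cycle Krishnan describes next.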
As claims come in, there is going to be a constant learning cycle. The human judgment that is exercised to determine how to pay a claim can be handled by a machine learning (ML) engine, because it is basically pattern recognition, the perfect problem for ML to solve. Another problem is deciding which code sets to use to bill for a procedure. You should not overbill; you know the boundary conditions for billing a procedure. Likewise, it is important to ensure that the procedure is billed in its entirety and with the correct codes, because otherwise you can get into claim fraud issues. Having a good ML engine in place can help you avoid them, as in the sketch below.
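As a companion to the learned model above, here is a minimal sketch of the boundary-condition check Krishnan alludes to: a rule pass over a claim's code sets before submission. The unit limits and exclusive code pairs are hypothetical placeholders, not any real payer's rules.

```python
# Minimal sketch of a pre-submission code-set check. The limits and exclusive
# pairs below are hypothetical, not a real payer's or CMS's actual rules.
MAX_UNITS = {"99213": 1, "97110": 4}        # max billable units per code
MUTUALLY_EXCLUSIVE = {("99213", "99214")}   # codes that cannot appear together

def validate_claim(lines: dict[str, int]) -> list[str]:
    """Return the problems found in {cpt_code: units} before the claim goes out."""
    problems = []
    for code, units in lines.items():
        limit = MAX_UNITS.get(code)
        if limit is not None and units > limit:
            problems.append(f"{code}: {units} units exceeds limit of {limit} (overbilling risk)")
    for a, b in MUTUALLY_EXCLUSIVE:
        if a in lines and b in lines:
            problems.append(f"{a} and {b} should not be billed together")
    return problems

print(validate_claim({"99213": 1, "99214": 1, "97110": 6}))
# ['97110: 6 units exceeds limit of 4 (overbilling risk)',
#  '99213 and 99214 should not be billed together']
```

In a production system the rule tables themselves could be mined from historical denials by the ML engine, rather than maintained by hand.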
One thing the executive order said is that some AI regulation will probably have to come from lawsuits because of the problems that will arise. The courts will have to decide. What do you think about that?
It is inevitable that there will be lawsuits, but it's also a cop-out to say that's the mechanism to fix this problem. You really want to avoid all the frivolous lawsuits that are going to come up. If you don't want to use AI, do not use AI. Very simple. Make it so the end user can say, "I don't want to use it," and turn it off. You can't just keep putting regulation on top of regulation and make life difficult for everybody. If I am willing to take that risk and use AI, let me take that risk, as long as I don't hurt somebody else.
About Muthu Krishnan