I get the logic, but I'm not certain human judgement is always better. Especially when the relevant doctors are hired by the insurance company and incentivized to deny claims that are medically necessary.
Humans can also be more biased than an algorithm that is specifically designed to avoid racial and socioeconomic biases. I guess we'll see whether any major differences emerge between health insurance outcomes in California and those in other states.
Humans can be more biased, but you can hold a human being accountable for their decisions in a way you can't with AI. You can put a human being on the witness stand and force them to testify under oath about their reasons for denying a claim, if it comes to that, whereas AI systems are black boxes.
"A computer can never be held accountable, therefore a computer must never make a management decision." -- IBM presentation material, 1979
Not to mention, human biases often end up built into AI systems, since, to my limited knowledge, the training data isn't checked for bias, though I hope to be proven wrong.
Even when somebody has tried to correct for explicit bias, the implicit bias has been shown to persist consistently.
There are attempts to debias training data in some domains, but it's quite difficult to accomplish. It's also not nearly as straightforward as you might think -- ML models are pattern-finding machines, and the patterns that lead to bias are deeply ingrained in our world and thus in any real-world training data. For instance, the Amazon recruiting AI that famously exhibited extreme gender bias wasn't actually provided with applicants' names or genders. It instead disadvantaged candidates who went to certain women's colleges and penalized resumes that mentioned the word "women's" (as in "Women's Rugby Team" or "Women's Chess Club"). It apparently also privileged resumes containing certain verbs that were overwhelmingly used by male candidates, like "executed" and "captured." Hopefully this illustrates how difficult it is to truly debias training data for AI -- even if you get rid of explicit indicators of gender, you probably can't get rid of every subtle indicator that could lead an ML model to infer a gendered pattern.
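Purely as an illustration of that last point, here is a toy sketch (mine, not from the thread or from any real hiring system): drop the explicit gender column, and a simple model can still recover gender from correlated proxy features, which is exactly the signal a model trained on biased outcomes can then latch onto. The feature names and numbers below are invented, and it assumes numpy and scikit-learn are installed.

```python
# Toy sketch: a protected attribute leaks through proxy features even when
# the attribute itself is never given to the model. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, size=n)             # 0 or 1; hidden from the model

# Proxy features that merely *correlate* with gender (think mentions of
# gendered organizations, verb choice, which college was attended).
proxy_a = gender * 0.8 + rng.normal(0, 0.5, n)
proxy_b = gender * 0.6 + rng.normal(0, 0.7, n)
neutral = rng.normal(0, 1, n)                    # genuinely uninformative

X = np.column_stack([proxy_a, proxy_b, neutral])  # note: no gender column
X_train, X_test, g_train, g_test = train_test_split(X, gender, random_state=0)

clf = LogisticRegression().fit(X_train, g_train)
print(f"gender recovered from proxies alone: {clf.score(X_test, g_test):.0%} accuracy")
# Typically well above the 50% you'd get by chance -- so scrubbing the explicit
# attribute does not stop a downstream model from "seeing" it via proxies.
```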
How in the world are they going to check and enforce such a law, though, especially considering that a given insurance company's internal decision-making rules would not be made public?
Governmental auditing.
The California Department of Managed Health Care will oversee enforcement, auditing denial rates and ensuring transparency. The law also imposes strict deadlines for authorizations: standard cases require decisions within five business days, urgent cases within 72 hours, and retrospective reviews within 30 days.
Under SB 1120, state regulators have the discretion to fine insurance companies and determine the amounts owed for violations, such as missed deadlines or improper use of AI.
They don't need to; the entire benefit of AI here is as a catspaw, which is inherently public. If (when) someone sues them, they'll have to blame either a human or their AI, and if they blame their AI then they're admitting to breaking the law.
I know someone who was a labor law investigator. They visited businesses and inspected their records looking for violations.
If someone sues an insurance company over denials, business records will be subpoenaed and turned over during discovery.