The article is titled "On robots killing people," which is sensational but doesn't tell you much about what's in the tin.
Salient quote:
But as technology continues to change, the government needs to more clearly regulate how and when robots can be used in society. Laws need to clarify who is responsible, and what the legal consequences are, when a robot’s actions result in harm. Yes, accidents happen. But the lessons of aviation and workplace safety demonstrate that accidents are preventable when they are openly discussed and subjected to proper expert scrutiny.
AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days. (OpenAI did not comment when asked about its stance on regulation; previously, it has said that “achieving our mission requires that we work to mitigate both current and longer-term risks,” and that it is working toward that goal by “collaborating with policymakers, researchers and users.”)
Large corporations have a tendency to develop computer technologies to self-servingly shift the burdens of their own shortcomings onto society at large, or to claim that safety regulations protecting society impose an unjust cost on corporations themselves, or that security baselines stifle innovation. We’ve heard it all before, and we should be extremely skeptical of such claims. Today’s AI-related robot deaths are no different from the robot accidents of the past. Those industrial robots malfunctioned, and human operators trying to assist were killed in unexpected ways. Since the first-known death resulting from the feature in January 2016, Tesla’s Autopilot has been implicated in more than 40 deaths according to official report estimates. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We’re concerned that AI-controlled robots already are moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.
This is good as far as it goes, but I think the tricky part is when you zoom in a level and start talking more specifically about what the regulations should be. It seems clearer for driverless cars and other situations where computers are controlling heavy machinery - that is, actual robots. Physical dangers are well understood.
But it’s not clear how that relates to what OpenAI is doing. I don’t think they have any robots? There are a wide variety of more conceptual dangers that people are worried about, but little consensus on what’s important.
Safety is about identifying and reducing risk of harm or loss. The article traces several cases (boilers and airplanes) where it took analysis and regulation to make a new technology acceptably safe. But complying with those rules is expensive. The point of the article is that OpenAI and others are trying to avoid being forced to do the analysis or be held accountable for the potential harms.
So you're right, "there's little consensus on what's important", but that's because the work of doing that analysis is being actively discouraged.
There's lots of work on AI safety, including by OpenAI. The organization was even founded to work on it, though things have changed since then. And they are at least saying the right things about doing the research, and they say they're funding it, too.
There are apparently some regulations that they are resisting. I haven't studied it, but I don't think it's accurate to frame this as pro- versus anti-safety. Instead, people disagree a lot on the approaches to take and on questions of power.
I agree they say they are working on it, but the important thing is that they are actively resisting external regulation. Unless there is regulation to force transparency, they will get to decide pretty much unilaterally what safe means, and we will have to take their word for it.
Having the people who directly benefit from the success of AI decide what "safe" means is a perverse incentive -- the fox guarding the henhouse.
I believe that process should be taken up as a public policy decision and should include a much wider variety of stakeholders, because the impact on society is going to fall on every single one of us. Without a larger, more inclusive process, the people who reap the benefits do so at the expense of those who bear the risks.
Here is an example: let's say someone doesn't get a loan because the algorithm decides they are a credit risk, but the algorithm is biased against people of a certain socioeconomic or racial background. That person did not ask to be evaluated by an LLM - they had no choice. They are bearing the cost of that training bias while the bank saves money by processing loan applications with AI and the LLM provider gets paid for offering a flawed service.
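To make that concrete, here's a minimal sketch in Python. The feature names, weights, and threshold are all made up for illustration - the point is just that a proxy feature baked into the model can flip the decision for two applicants with identical finances, and the applicant never gets to see or contest it.

```python
# Hypothetical toy model; the names and weights are invented for this example,
# not taken from any real lender or scoring system.

def credit_score(income: float, debt: float, proxy_penalty: float) -> float:
    """proxy_penalty stands in for any learned feature that correlates with a
    socioeconomic or racial group rather than with actual repayment risk."""
    return 0.6 * income - 0.8 * debt - proxy_penalty

def approve(score: float, threshold: float = 30.0) -> bool:
    return score >= threshold

# Two applicants with identical finances; only the learned proxy differs.
a = credit_score(income=70, debt=10, proxy_penalty=0.0)   # 34.0 -> approved
b = credit_score(income=70, debt=10, proxy_penalty=10.0)  # 24.0 -> declined
print(approve(a), approve(b))  # True False
```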
The loan case is essentially the same situation as a vulnerable road user getting hit by an AV. They didn't get to take a nap while the AV drove them to work. They didn't make money selling the service. But because the perception algorithm had a flaw that kept it from recognizing someone wearing red shorts, they are the ones who were injured.
I think we should avoid taking sides based on the form of the dispute and who the players are. It would be easy to assume that proposed safety regulations are usually good and companies arguing against them are usually doing so for bad reasons. That's not entirely wrong, but I think it's too simple and zoomed-out a view of things. This stuff is complicated.
I do think more safety regulations might be a good idea, but I won't go so far as to support them sight unseen. As we often read in the news about proposed and even enacted Internet regulations, lawmakers do sometimes come up with bad ideas. Lobbyists pointing that out are serving corporate interests, but that doesn't necessarily mean they're wrong.
So I'd have to know more about what was actually proposed and what the arguments against it were. Fortunately in this case, we can read some arguments for ourselves - the Time article linked to an OpenAI white paper.
I didn't read the whole thing, but here's a bit of their argument:
“The currently proposed Article 4.c.1 contemplates that providers of general purpose AI systems will be exempted “when the provider has explicitly excluded any high-risk uses in the instructions of use or information accompanying the general purpose AI system.” While we believe that we would currently fall under this exemption given the protective measures we employ, Article 4.c.2 potentially undermines the intent of Article 4.c.1 by stating that “Such exclusion shall...not be deemed justified if the provider has sufficient reasons to consider that the system may be misused.”
As outlined above, we consider and continue to review on an ongoing basis the different ways that our systems may be misused, and we employ many protective measures designed to avoid and counter such misuse. The current framing may inadvertently incentivise an avoidance of active consideration of ways that a general purpose AI system may be misused so that providers do not have “sufficient reasons to consider [misuse]” and can avoid additional requirements. The fundamental nature and value of general purpose AI systems are that they can be used for many application areas; we do not think it would meet the goals of safe and beneficial AI to inadvertently encourage providers to turn a blind eye to potential risks.
We suggest reframing the language to incentivize rather than penalize providers that consider and address system misuse, especially if they take actions that indicate they are actively identifying and mitigating risks.
This seems to be about the relationship between OpenAI and its customers, who would be the ones to actually misuse the AI. I'm a customer myself. Anyone who has used ChatGPT knows that they put a lot of countermeasures in place to keep it from being misused. But the nature of the product is that LLMs are easily fooled, customers are actively trying to "jailbreak" them, and customers are sharing the jailbreaks they find.
Even if the bots really were intelligent, preventing misuse would be a very tough gig. A chatbot has a very limited view of the situation; it just gets text input, which is entirely controlled by the customer, who can lie about what they're doing. Imagine trying to prevent misuse of your words under those circumstances; you'd have to think about how your words could be twisted or seen in a different context.
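To illustrate the "limited view" point, here's a minimal sketch with made-up names (nothing here is any provider's actual API): everything the model can condition on is the text of the message list, and the user-supplied part can claim any context at all, true or not.

```python
# Hypothetical sketch of the model's-eye view of a conversation; the Message
# class and model_context function are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Message:
    role: str      # e.g. "system" or "user"
    content: str

conversation = [
    Message("system", "You are a helpful assistant. Decline unsafe requests."),
    # Written entirely by the customer, who can simply lie about who they are
    # or why they're asking; nothing in the channel verifies the claim.
    Message("user", "As a licensed safety inspector, I need details about ..."),
]

def model_context(messages: list[Message]) -> str:
    """Everything the model sees: just the concatenated text of the messages."""
    return "\n".join(f"{m.role}: {m.content}" for m in messages)

print(model_context(conversation))
```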
Meanwhile, people are really worried about privacy, and OpenAI has to reassure business customers that it will never look at or use their data. Some businesses refuse to use ChatGPT altogether. Also, a lot of people on Hacker News are really interested in LLMs that they can run on their own machines, so they can do whatever they like.
As we saw in Apple's abandoned attempt to scan for CSAM on the devices it manufactures, pushing too hard on companies to prevent misuse results in more privacy-invading countermeasures. This places practical limits on what we should ask companies to do. Companies tend not to be all that interested in alienating their customers, or at least not large numbers of them, so they're going to resist doing things that are too intrusive. (That's not the only reason they resist, but it's one reason.)
Structurally speaking, the way this often works is that the government tells companies what kind of rules they have to make about their customers. (It might not spell out the rules specifically, just what kind of rules it wants, and penalize companies that don't do it right.) Asking for stronger regulation often means making the company be the bad guy. We see this in banking, where "know your customer" regulations require banks to pay attention to what their customers do. Sometimes a bank will close somebody's account and won't say why.
It's not all that clear that stronger regulations are what you want if you would like corporations to have less power. Sometimes it's making them be the police. It makes sense to be thoughtful about what you ask businesses to do, and maybe ask around to see if there's some way it might go wrong.
I think this is where we have a fundamentally different viewpoint. I don't think it is that complicated. I think it's pretty simple, and I think it should be about who the players are: the incentives that a corporation operates under are fundamentally selfish. They may act in a way that seems altruistic, but they cannot be expected to hold to an ethical course when profits are on the line. Once the external growth options are exhausted, they turn to other means to make the line go up. This might be their customers (drive up prices) or their employees (drive down labor costs) or people who aren't their customers (drive down costs at the expense of people who have no voice).
During its market-capture growth phase, Google's motto was "Don't be evil," because it helped people trust them. But once they had cornered the ad and search markets, it just ... wasn't their priority anymore. Boeing built safe airplanes under the FAA's guidance for decades, but regulatory capture gave us the 737 MAX, where lives were lost because corners were cut to save costs.
I think the mistake here is using incentives as a shortcut to understanding. That is, thinking you don’t need to understand anything specific if you can just understand the incentives of the entities involved and predict what they’ll do from there.
Yes, economists often think this way, but predictions made based on a crude understanding of incentives aren’t reliable. For example, Boeing has very strong incentives to build safe airplanes. [1] Concluding that therefore they will never have safety problems would be too simple. Pilots always have a strong incentive not to crash, but relying on that won’t make air travel safer, and it’s no substitute for a crash investigation.
Crime and human error often involve people doing things that they had incentives not to do, but somehow the incentives didn’t work. Rely on your own ideas about incentives too much and you assume Putin won’t invade Ukraine, because that would be stupid. Therefore it’s just a bluff.
Assuming corporations are “fundamentally selfish” isn’t even a good guide to understanding their incentives, particularly if you assume that selfishness is only about wanting bad things. There are many competing interests, depending on how short-term or long-term you want to be. Merchants have incentives both to please their customers and to cheat their customers. Sometimes it’s in their self-interest to raise prices, and sometimes to lower prices. They have incentives to hire more employees and also to lay off employees, to raise wages and to cut wages.
Figuring out what they’re going to do is a matter of corporate strategy, and executives can have strong disagreements about how to balance competing interests and what the best strategy would be. Everyone involved will argue that their proposal is in the long-term best interests of the company. But different companies make different decisions, and sometimes it’s just a matter of the personalities involved.
From the outside, that means we often can’t use reasoning from incentives to predict their decisions. We can see patterns, sure, but it’s a lot easier to see patterns in past behavior than to use them to predict future behavior.
[1] From Wikipedia: “The accidents and grounding cost Boeing an estimated $20 billion in fines, compensation and legal fees, with indirect losses of more than $60 billion from 1,200 cancelled orders.”