Not much detail beyond the headline. Mostly, I expect this to be hand-waved.
The simple fact is that things like OpenAI represent too much potential production/profit not to be allowed, so the laws will bend around it rather than the other way around. The big one is going to be copyright/patents/etc., which need reworking anyway, but are absolutely an issue in the face of mass scraping of data for neural networks.
With that in mind, Disney will be the company to watch. There's zero chance they're not going to try to save every penny they can with this, while also making sure they have grounds to sue into dust any network that generates anything looking remotely like Mickey Mouse. This tech is going to force some rethinking of laws worldwide, but it obviously doesn't have to be for the better.
I'm with you there on Disney being the most interesting company to watch. Mainly because, if they invest heavily in AI, they risk breaking the copyright law they themselves helped create: specifically, the law nicknamed the Mickey Mouse Protection Act.
Personally, I think the current law is way too stringent, and the advent of AI is going to bring to the forefront just how stringent these copyright laws are. They were lobbied for and created, in my opinion, to protect the interests of corporations and the already extremely wealthy, not to benefit society. Laws should be set up to benefit society. Part of that is preserving the original creator's incentive to create and iterate on a work of art, but a larger part is allowing older works to be used in the creation of new things. A work is currently copyright protected for the life of the author plus 70 years, or for corporate works 120 years after creation or 95 years after publication, whichever ends earlier. 200 years ago, copyright lasted closer to 30 years, and that was in a world where a new publication could take decades to reach society as a whole. Now it takes just hours. I would argue that copyright should be getting shorter as communication and distribution become faster, not the other way around!
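To put concrete numbers on that, here's a quick Python sketch of the current US rule as described above (simplified, and the function name is just for illustration; real terms vary with publication date, renewal history, and jurisdiction):

```python
# Sketch of the current US copyright terms described above (simplified).

def copyright_expiry_year(author_death_year=None, creation_year=None,
                          publication_year=None, corporate=False):
    """Last calendar year of protection; the work enters the public
    domain on January 1 of the following year."""
    if corporate:
        # Corporate works: 120 years after creation or 95 years after
        # publication, whichever ends earlier.
        return min(creation_year + 120, publication_year + 95)
    # Individual authors: life of the author plus 70 years.
    return author_death_year + 70

# Steamboat Willie, created and published in 1928 as a corporate work:
# min(1928 + 120, 1928 + 95) = 2023, so public domain on Jan 1, 2024.
print(copyright_expiry_year(creation_year=1928, publication_year=1928,
                            corporate=True))  # -> 2023
```

Under a ~30-year term like the one from 200 years ago, that same work would have been free to build on since the late 1950s.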
It's wildly interesting to see a landscape slowly developing in which the wealthy are at odds with the societal changes they have spent 100 years creating. I don't know how it's going to play out, but I can hope it will change things to benefit society.
[Altman] invited A.I. legislation to oversee the fast-growing industry
As someone who was involved in the early drone business, I understand this stance. Of course, it benefits the bigger players because they can comply, get licensed, and dominate the space once they help write the regulations. But also: Congress doesn't even understand Facebook, so why bother attempting to regulate something they probably can't even comprehend? Not in a bootlicking way, mind you, more in a cynical "late stage capitalism" way.
This is another revolution at this point: we're either gonna end up Flintstones or head towards Jetsons.
This right here is exactly why OpenAI is announcing alignment research. It’s to head off investigations like this, so they can show they are “self-regulating”. Wikipedia has it right:
https://en.wikipedia.org/wiki/Industry_self-regulation
According to a 1994 report for the Australian Criminology Research Council, the large majority of documented cases of self-regulation are better viewed as an attempt to placate the public and to keep government regulators at bay than a genuine strategy to achieve broader public interest goals.
Self-regulation worked for the movie, music, and video game industries in the US. We came close to passing some pretty draconian laws here because of moral-panic, protect-the-kids bullshit.
But that worked because the whole industry got together to do it, and because the solution was basically to put a label on products.
I don’t see how something like that would work in the AI space, and I hope this is not used for regulatory capture that leaves only giant corporations able to enter the field.
I absolutely think this is about regulatory capture. My suspicion is allocating 20% of their compute to alignment is intended to set a threshold cost for doing serious AI research -- don't have a billion dollars of compute? Then you're not capable of being an "ethical AI developer".
What does that have to do with regulatory capture?
OpenAI is seeking to control the regulatory process to keep out competitors.
But that's not regulatory capture.
https://en.wikipedia.org/wiki/Regulatory_capture
In politics, regulatory capture (also agency capture and client politics) is a form of corruption of authority that occurs when a political entity, policymaker, or regulator is co-opted to serve the commercial, ideological, or political interests of a minor constituency, such as a particular geographic area, industry, profession, or ideological group.
That's what I'm asserting OpenAI is seeking to do. They want to control the regulatory process to lock out competitors.
Yes, I know what regulatory capture is. Read the descriptions further down in the article. It's when the regulators are doing something out of self-interest or when the regulators are basically just industry members, whether because they are actual former industry employees or because industry practices become normalized and the regulators come to accept them.
Note that regulators are, for example, agents who work for the SEC, FAA, or other bureaucracies, not lawmakers. OpenAI is trying to convince the lawmakers that what they are doing should be the industry standard. That's a variant of self-regulation.
or when the regulators are basically just industry members, whether because they are actual former industry employees or because industry practices become normalized and the regulators come to accept them.
Yes, this is precisely what OpenAI seeks. This is not a novel view:
https://www.washingtonpost.com/business/2023/06/02/openai-s-sam-altman-regenerates-the-gilded-age-playbook/51cc15de-0148-11ee-9eb0-6c94dcb16fcf_story.html
https://www.theverge.com/2023/5/19/23728174/ai-regulation-senate-hearings-regulatory-capture-laws
In fact, OpenAI is literally already writing its own legislation:
https://www.theregister.com/2023/06/21/openai_government_regulation/
A lobbying document sent to EU lawmakers titled "OpenAI White Paper on the European Union’s Artificial Intelligence Act" makes the case that OpenAI's large foundational models should not be considered high-risk.
The white paper, dating back to September 2022 and obtained by Time, suggests several amendments that reportedly have been incorporated into the draft text of the EU AI Act (emphasis mine), which was approved a week ago. The regulatory language will be the subject of further negotiations and possible changes prior to final approval, which could happen within six months.
I hope Congress and friends regulate OpenAI specifically into the ground, while letting actually open players, who didn't go crying to Congress for protection-through-regulation, thrive.