GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.
Without the originals, this essay seems rather unimpressive.
The impressive thing here is the coherence and logical progression of each argument, which seems to be completely human-created in the editing process.
I too have "edited" someone's work and created something wholly different and usable by restructuring it, selecting the best arguments, and getting rid of all the darts that were thrown at the board in the review process to see what might stick.
People really want to write an article like this and they're willing to cheat a little to make it happen. At the same time, many readers want to believe. A skeptical take is no fun.
True enough, but the skeptical take is more in line with reality. It keeps hype bubbles about the 'general AI that will save us from ourselves' that is 'almost here' from growing too large.
I'm still bitter about the nanotechnology hype bubble circa 2000 bursting. Would have been a far preferable reality to our current one.
It might not be fun, but it changes everything for me: I lose interest, and it actually pisses me off a bit. It's like those funny/wholesome videos that are completely innocent until you find out they were staged (for no obvious reason), and it just ruins everything for me. People say "but it's still fun/cool either way" and I completely disagree. I'm on a rant here, but these small chips of truth that get taken away all the time and dismissed as "nothing to worry about" or "it's just for the fun of it" are going to haunt us in the future, because we will have lost faith in everything.
It reeks of those "this dog drove a car/flew a plane/whatever else" stories. Yeah, if you squint a little, maybe I could see it. But as long as you still need to cherry-pick and omit the nonsense, it's not even close.
Especially because now I don't know how much I can really trust in regards to what I was impressed by. I was going to say that even if it's a bit cherry-picked, I'm still impressed by the flow, language, and coverage of the piece, but now I have no clue how much of that was the editor and how much was the product. If the cherry-picked examples of GPT-2, and how compelling AI Dungeon turned out to be, are anything to go by, I am actually a bit scared of how well GPT-3 will perform. But this article felt a bit like a nothing burger to me.
Sorry, I should have written more clearly, as my sarcasm was missed. I actually agree with you and don't think what they've done in the article is good.
A cool article, but also, like... just show us the articles GPT-3 wrote? I think it would've been more genuine to have taken one article and edited it. At the bare minimum they at least explained WHY they did what they did, but I'm far less impressed that they pieced together the best parts of all the articles to make one. I know it's all part of the editing process, but I'd still like to see one unedited article for this piece. It's still interesting, but I guess I'll have to look and see what I can find myself.
There's something unsettling about a robot writing down words which describe human emotions and sayings such as 'grateful', 'god knows', 'I don't feel like', and 'happily'. In fact, much of the writing feels disturbing to me, not because of the content, but because there seems to be a fundamental disconnect between the humanity of the writing and the fact that the robot is not, in fact, human. It feels like a failing of language, to distinguish between emotions and computation... which also feels weird to write, because at some point AI probably will be capable of emotion, but my own familiarity with GPT-3's algorithms allows me to understand that it's merely mimicry.
I rate the article vaguely disturbing/10
to distinguish between emotions and computation
On a fundamental level there's no difference. You could say, in vague terms, that emotions are a computation of sensory inputs modulated by perception.
Which does nothing to address your feeling about it, but I feel like it was worth saying.
On a fundamental level there's no difference. You could say, in vague terms, that emotions are a computation of sensory inputs modulated by perception.
While you're correct, if I program a very simple circuit to output the text "I am happy", very few people would argue that this circuit is experiencing the concept we are trying to convey with the word 'happy'. To an extent, the word 'happy' doesn't mean exactly the same thing for all humans, and it's questionable which other animals can feel 'happy', but language was primarily devised by humans to convey abstract ideas to others, and a crucial component of this is the shared experience and definition of said word.
I see this more as a failing of language to capture precisely what this computer is doing than anything else. I'm not sure how to quantify what it is doing outside of mimicry, but many human things are mimicry as well so it's a very confusing subject all around.
A language is meant to interchange thoughts between two minds, mechanical or organic. Without a sense of ego, a machine shouldn't use personal pronouns for itself. Without any emotions, a machine shouldn't make reference to its state of mind. Without a body a machine should not describe itself as an organic being. Stripped down to what a computer can currently do, human languages can be a useful format for computers to use with humans.
I don't know if what you describe as a "failing of language" is really anything more than just a new type of lying. We could never trust words completely. Now we're moving into a world where we will only rarely be able to trust them.
Without a sense of ego, a machine shouldn't use personal pronouns for itself.
Why do you say this? We use pronouns to describe objects and others, and if a computer were to display a message which used the pronoun "I" or if an animal capable of speech in some capacity referred to itself with "I", I doubt many would take issue. Why do you think an ego is necessary?
Without any emotions, a machine shouldn't make reference to its state of mind.
For the most part, I agree. Unless the 'state of mind' is representative of some kind of shift or change in function, which is what I was getting at by stating the emotive words felt out of place.
Without a body a machine should not describe itself as an organic being.
I don't feel it did at any point, other than associating things like emotions with organic beings. And theoretically it's possible to design a circuit which mimics the way brains work, so that's not exclusive to organic beings, just constrained by the current limits of our technology.
Stripped down to what a computer can currently do, human languages can be a useful format for computers to use with humans.
Absolutely, I just found the language it chose to use disconnected from my perception of reality.
just a new type of lying
Lying implies intent: you have to know it's not true and choose to obscure the truth. A GPT-3 algorithm can't do this. I'm not sure lying is the correct word here.
While a computer program that simply outputs "I am happy" is most likely not happy, there are areas that are much more grey. Take for example a simple virtual creature shaped by a genetic algorithm to accomplish a specific task. Is this creature 'happy' when it successfully reaches a high fitness? Or maybe it's more like a different emotion, or perhaps it's not conscious at all.
If they are conscious though, I do believe that even simple virtual creatures can feel some sort of emotion. Probably similar to the experiences of an insect or worm.
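To make "reaches a high fitness" concrete, here's a minimal sketch of the kind of setup I have in mind; the task, names, and numbers are all invented for illustration:

```python
import random

# A toy genetic algorithm: "creatures" are just vectors of numbers, and the
# "task" is to match a target behaviour. Fitness is higher the closer they get.
TARGET = [0.5] * 8
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.1

def fitness(genome):
    # Negative squared distance from the target: higher means better.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome):
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [[random.random() for _ in range(8)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]       # selection
    children = [mutate(random.choice(survivors))  # reproduction with mutation
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))
```

Nothing in that loop looks anything like an emotion; "reaching a high fitness" is just a number going up. Which is exactly why I find the question of whether such a creature could feel anything so hard to pin down.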
That might be true of some AI, but in this case it is glorified autocomplete. It could autocomplete happy, sad, or angry text, but that doesn't mean it is happy, sad, or angry. It has no internal state other than the previous text and will autocomplete opposing beliefs.
GPT-3 has absorbed a large subset of the Internet from 2019 and some digitized books, but it's not going to be any more consistent about what it "believes" than a library or the Internet, and actually less consistent since it can only "remember" a small amount of previous text.
Video game characters can at least be programmed to have a consistent emotional state so it makes sense to talk about a monster having become angry at you (for example).
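To illustrate "no internal state other than the previous text", here's a deliberately tiny stand-in for a language model, a bigram table rather than anything like GPT-3, but the statelessness it shows is the point:

```python
import random
from collections import defaultdict

# A toy "autocomplete": a bigram table built from a two-sentence corpus.
# Like a language model, its only generation-time state is the text so far.
corpus = ("i am happy because the sun is out . "
          "i am angry because the train is late .").split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def autocomplete(prompt, n_words=6):
    words = prompt.split()
    for _ in range(n_words):
        options = table.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The same model will happily continue opposing "beliefs"; which one you get
# depends entirely on the prompt, never on anything it holds internally.
print(autocomplete("i am happy"))
print(autocomplete("i am angry"))
```

It can also wander from "happy" straight into the "angry" sentence's continuation mid-stream, which is the toy version of GPT-3 contradicting itself across a long passage.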
On a fundamental level there's no difference. You could say, in vague terms, that emotions are a computation of sensory inputs modulated by perception.
The hidden premise here is that the mind follows a computational model. That is far from a universally accepted proposition, especially considering that philosophy of mind is not subsumed by computer science.
I think "computation" is as close an electromechanical equivalent to what we're gonna get when we understand brain workings as we're gonna get. On its own, it's a useful abstraction: it helps map...
I think "computation" is as close an electromechanical equivalent to what we're gonna get when we understand brain workings as we're gonna get. On its own, it's a useful abstraction: it helps map inputs to outputs better than the alternatives I've encountered.
The wording seems very... exact? If that makes sense? It's not surprising, as "concise" was in the prompt, but that's what jumped out at me.
Another place I've heard of AI writing good articles was in sports, where all of the stats could be analyzed. It could tell if there was a comeback, if someone performed better/worse than expected, etc. This would seem to work especially well in baseball, where everything is tracked.
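A rough sketch of how that style of stats-to-text generation works; the data layout and the comeback threshold here are invented for illustration:

```python
# Toy template-based sports recap, the style used for stats-heavy games
# like baseball: inspect the box score, pick the story beats, fill templates.
game = {
    "home": "Cubs", "away": "Mets",
    "home_score": 6, "away_score": 5,
    "largest_home_deficit": 4,  # biggest margin the home team trailed by
}

def recap(g):
    home_won = g["home_score"] > g["away_score"]
    winner = g["home"] if home_won else g["away"]
    loser = g["away"] if home_won else g["home"]
    hi = max(g["home_score"], g["away_score"])
    lo = min(g["home_score"], g["away_score"])
    sentences = [f"The {winner} beat the {loser} {hi}-{lo}."]
    # Comeback detection: the winning home team trailed by 3+ runs at some point.
    if home_won and g["largest_home_deficit"] >= 3:
        sentences.append(
            f"They came back from a {g['largest_home_deficit']}-run deficit.")
    return " ".join(sentences)

print(recap(game))
# -> The Cubs beat the Mets 6-5. They came back from a 4-run deficit.
```

Real systems are essentially this with far more stats and templates; when everything is tracked, you don't even need a language model.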
I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.
This statement gives me the same vibes I would feel hearing a serial killer tell me about how to trust them before they're about to lock me in their basement.
I know how fallible my own mind is, and how quickly I can change my mind. I certainly wouldn't trust an AI that might be equally fallible, but can get itself into a reality-distortion bubble faster than we can possibly comprehend, to do better.
Edit: Wife demands I append this video: https://www.youtube.com/watch?v=_vUrAMxmO_A

What I found interesting was the article also contained hyperlinks. Not only did it generate an article, it had references to link to.
That I suspect may have been added in the "editing" stage by humans. I'd be very surprised if the AI managed that. Not impossible, but I'm going with Occam's razor on this one.
Completely missed the Editor's Note at the bottom of the article. I believe you're right with that. Looks like we're safe from the AI uprising for a couple more years, phew!
"Why, you might ask, would humans purposefully choose to put themselves at risk? Aren’t humans the most advanced creature on the planet? Why would they believe that something inferior, in a purely...
"Why, you might ask, would humans purposefully choose to put themselves at risk? Aren’t humans the most advanced creature on the planet? Why would they believe that something inferior, in a purely objective way, could destroy them?"
An intelligence unable to understand that assimilating information modifies the observer is missing the vital component. Where is agency without doubt of said agency?