This was incredibly interesting, thanks for sharing!
I work in marketing (not as a marketeer, but a supporting role in data governance / visualisation) and I can't tell you how often I see these signs when one of our clients or their competitors output a blog post or an email newsletter or anything brand related, really.
Here's a hint: It's ALL of them. ALL the time.
There are entire ridiculously expensive 'AI solutions' based purely on inputting as much information about your brand as possible, so that you can create AI writers that set certain tones when writing social media posts etc.
I know this because, unfortunately, I am the person whose job it is, amongst other things, to curate the client's data that's then input into these kinds of solutions. 🥲
Note: ... I insist on being part of projects where our clients are charities or otherwise businesses that, overall, are net positives to society... Lets me sleep a little better at night.
It seems fitting that many of the signs of AI writing are also just signs of bad writing: poor communication, non-NPOV, speculation, misattributed statements and so on. Almost all of the examples are already frowned upon on Wikipedia for other reasons.
Having used large language models, I also doubt that we're in a "well, that's just the LLM-generated text you notice" type of situation, where there's a significant amount of it without glaring red flags. ChatGPT, for example, seems utterly incapable of some of the qualities that Wikipedia demands, like not attributing its own speculation to sources that may or may not exist. People are bad at this too, but not as consistently bad as ChatGPT.
Huh, a lot of the style examples in the first three sections help explain the uncanny-valley tells I've noticed in AI writing but haven't been able to fully articulate on my own. A few, though, are common writing devices I use pretty extensively (in cases where they make sense, I'd argue).
One concern (fear?) I have around reading ai-generated writing is that it's going to impact, and degrade, how I write. I've learned my craft through picking up conventions and ideas from colleagues. It can be tough to encounter a pattern repeatedly and not absorb it.
Very nice overview of things that have bugged me about AI writing, that always made it feel "off" even though I couldn't put into concrete terms why that is. It seems like its style is very heavily influenced by a style of writing I can best describe as SEO blogspam or BuzzFeed. It is possible to steer the models in certain directions, which helps some, but it still creeps in. I just recently wrote about some experiments I did with trying to use ChatGPT as a sparring partner. The style is obviously worst when you take the LLM output as-is, and we are probably all encountering writing that has at the very least been AI-assisted to some degree without realizing it. It still seems to take a good human editor to reduce the worst of the models' signatures.
Agreed, the style and tone of AI-written content is always the first thing that stands out to me. No human talks like that! Companies do, and it's always been really obvious.
I've found that when scrolling YouTube Shorts or TikTok, there's a very specific way of talking that makes me scroll past immediately, and whenever I've cared to double-check my intuition, it's been corporate content like adverts or sponsorships every single time.
If I accept the premise that AI chatbots are useful, the choices behind the style and tone they use still seem really weird to me. I don't want to have a friendly, casual conversation with a computer that sounds like an advert! Star Trek got it right with "Tea, Earl Grey, hot".