This was incredibly interesting, thanks for sharing!
I work in marketing (not as a marketeer, but in a supporting role in data governance / visualisation), and I can't tell you how often I see these signs when one of our clients or their competitors puts out a blog post, an email newsletter, or anything brand-related, really.
Here's a hint: It's ALL of them. ALL the time.
There are entire ridiculously expensive 'AI solutions' based purely on inputting as much information about your brand as possible, so that you can create AI writers that set certain tones when writing social media posts etc.
I know this because, unfortunately, I am the person whose job it is, amongst other things, to curate the client's data that's then input into these kinds of solutions. 🥲
Note: ... I insist on being part of projects where our clients are charities or otherwise businesses that, overall, are net positives to society... Lets me sleep a little better at night.
It seems convenient that many of the signs of AI writing are also just signs of bad writing: poor communication, non-NPOV, speculation, misattributed statements, and so on. Almost all of the examples are already frowned upon on Wikipedia for other reasons.
Having used large language models, I also doubt that we're in a "well, that's just the LLM-generated text you notice" type of situation, where a significant amount of it gets through without glaring red flags. ChatGPT, for example, seems utterly incapable of some of the qualities that Wikipedia demands, like not attributing its own speculation to sources that may or may not exist. People are bad at this too, but not as consistently bad as ChatGPT.