Why AI writing is inherently coercive
Writing, at its core, is a shared experience between the author and the reader—an exchange of thoughts, emotions, and ideas. This connection, built on trust and authenticity, is the bedrock of any meaningful relationship, even one as seemingly one-sided as the parasocial relationship between an author and their audience.
When AI is introduced into the realm of writing, it disrupts this delicate balance of trust. Readers inherently believe that they are engaging with the genuine thoughts and expressions of a fellow human being. However, the introduction of AI blurs this line, creating a scenario where the words on the page may not be the product of human experience or creativity.
Imagine delving into a piece of writing, believing you are connecting with the unique perspectives and emotions of another person, only to discover that those words were crafted by a machine. The sense of betrayal and disillusionment that may follow disrupts the very essence of the reader's trust in the author. It's akin to thinking you are having a heart-to-heart conversation with a friend, only to later realize it was an automated response.
This violation of trust erodes the foundation of the parasocial relationship, leaving readers questioning the authenticity of the connection. Human communication is a dance of shared experiences and emotions, and AI, no matter how advanced, lacks the depth of personal understanding that defines true human interaction.
In essence, while AI may expedite the writing process and provide creative insights, it does so at the cost of jeopardizing the sacred trust between the writer and the reader. As we navigate this digital era, let us not forget the importance of preserving the authenticity that underlies our human connections through the written word.
Generated by ChatGPT.
Saw the title, then "Generated by ChatGPT" at the end, didn't read. Conclusion: there's no coercion if there's a disclaimer.
I have a colleague who, sometime mid last year, started having GPT write all her emails, status updates, project proposals... everything. It's infuriating. "She" suddenly switched to this bland tone that reads like some mid-tier five-paragraph persuasive essay I might have written in high school. Not unlike this post. I don't think she realizes how obvious the GPT output is, or the disrespect it conveys. I (and others) have said as much, but it turns out I just didn't realize how cool GPT is. It makes you so much more productive! GPT-4 is actually much better than the free one I probably tried.
I don't want to get into details on that organization - but suffice to say she's not really in the same food chain as me. Not really anyone other than her that I can complain to about it, and she probably won't see any real consequences from the lack of communication. So I just disengage: if she can't be bothered to write something, I won't be bothered to read it.
We're looking to hire some new software engineers where I work and I've been helping out with the technical interviews. As part of that process I have a handful of written questions for the candidate to fill out prior to the interview that ask things to help me gauge their level of experience--stuff like "can you spot any potential problems with the following source code?" "have you ever been faced with time/resource constraints that forced you to make architectural compromises?"--very open-ended stuff that's more intended to give me some things to dive into and get into more technical discussions during the actual face-to-face interview.
They're not particularly hard questions, and don't really have a "wrong" answer, but I still get candidates who I strongly suspect just pasted them into ChatGPT and then regurgitated the answer. It's hard for me to quantify, but the suspicious answers are always wishy-washy, glossing over any actual interesting details, and typically rephrase parts of the question a lot in their answer like they're trying to hit some word count requirement ("Do you like xyz? Yes, I certainly do like xyz. Here are the main reasons why I like xyz. I like the way xyz does blah blah..."). I don't necessarily treat that as a deal-breaking red flag, because I imagine some candidates may not be confident in their writing (perhaps English is not their primary language) and are just using ChatGPT to help better communicate their honest answers.
However, it often becomes painfully obvious during the face-to-face when that's not the case. Here's a tip for any job seekers out there using ChatGPT to help with applications: actually read what you're saying and be prepared to respond to questions or requests to go into further detail on any specific claims or references in there; otherwise it can get extremely awkward and embarrassing for everyone involved in the interview.
Hmmmm
From a job seeker's perspective, though, you're wasting my time. And not just you: the other 49 companies I'm applying to today alone are wasting my time too. I hope you meant that the candidates have already been shortlisted and offered the technical interview, with a solid in-person interview on the table, and not that this questionnaire is part of the first thing they see at the application stage.
And why not use ChatGPT? If the questionnaire is just out of interest and not even used to red-flag responses, you're asking someone who isn't making money to spend 30 minutes of their trying-not-to-starve time writing something for you, someone who is being paid to read, that will only take you 30 seconds to read.
The quick answer to why they use ChatGPT is that your company hasn't offered them anything yet, and obviously you get what you pay for: nothing.
The candidates that get beyond the initial application phase (just based on their CV and cover letter, if they wrote one) go through a screening interview during which expectations for the role are discussed, in order to get a feel for whether they might be a good fit in the company in general (this is generally with less technical people--HR-type people and project managers and the like). If the candidate makes it past that stage, that's when they get the questionnaire, which gets screened by more technical people (me in this case), and if it looks promising it's followed by scheduling the technical interview--so it's definitely not the first thing they get, more like the third step in a four-step process (application, screening, questionnaire, tech interview). Also, we're a fully remote shop so there are no actual in-person interviews--when I say face-to-face I mean over video conference.
I'm actually pretty new and inexperienced in the world of conducting interviews, so I'm kind of making it up as I go based on the vast sea of opinions you find if you start trying to research the best way to do it. I want to accurately figure out if they could fill the role we want to fill (as far as how experienced they are, their technical ability, and seniority level) while also minimizing how much of their time we take up. I explicitly did not want to set up coding exercises because I personally would loathe having to do that and figured requiring them to set up a local dev environment, get git credentials, and actually read and write code is probably more stressful and time consuming than my less formal open-ended questions. But I'm certainly interested if there are better approaches that I'm not aware of currently.
I just meant that I wouldn't outright reject someone just because I suspected ChatGPT may have been used--the content of the answers themselves is absolutely being evaluated. I'm kind of curious what your ideal application/interview process would look like, since you seem to indicate that asking the candidate to spend any amount of time at all (even 30 minutes) trying to prove that they could fill the role you've advertised is unacceptable? If you were on the other side of the table what would you do to identify and screen out candidates that wouldn't be a good fit that doesn't have them spending any time in the process?
This is a good summary of where I stand on allowing ChatGPT in interviews. My interviews are open-book: use the tools you would normally use on the job, including AI. But I expect the same quality of code and depth of understanding as I would want from a future teammate.
I'm reminded of the story a while back of the guy who got fired for outsourcing his code to a programmer in another country. IIRC, "his" contributions to the team were reasonably good, and his ability to orchestrate the whole thing was actually kind of impressive. The only reason anyone found out is because he had to mail his security key overseas, and that tripped some alarm. The code wasn't the issue — the deceit was.
(Similarly, if you're a candidate who's using ChatGPT, show your work and don't lie to me.)
That's a very fair and very valid question: how does one balance not wasting a candidate's time against not ending up with someone totally incompetent? Thanks for being kind if I sounded like I lost my cool there.
So what is the point of the questionnaire stage, from the applicant's point of view and from yours? From your side, this is where you've just come into the picture; it's your first step. From their view (assuming a competent candidate), you posted an ad, they've read it, they already qualify, and everything they have to do before the actual final interview is a series of hoops that wears them down and tells them the company doesn't value their time and won't compensate fairly for it. For an incompetent poser... I don't know.
Is it possible to include one of these questions in the HR screening, and listen to the meeting recording for their response on the spot? It saves them an extra step and gives you an on-their-feet audio feel for their level of familiarity and competency. Say someone had no idea before reading your questionnaire and googled/ChatGPT'd the heck out of it: how can you tell them apart from a truly competent person who just didn't bother, or who doesn't write as well? If someone knows, they know it offhand, and it's just 30 seconds as part of the existing HR conversation.
In general how to hire and be hired is so nuanced and interesting I would imagine there's a Tildes discussion already, or I would love to read one.
I'd humbly suggest that the more competent a candidate, the less likely they are to jump through useless hoops to come work for a company...
Whereas you and I have both worked with folks who made it through all these hurdles and turned out to be total frauds, or a complete mystery as to how they got hired. :/
My reasoning is to give the candidate a no-pressure opportunity to highlight anything they might want from their past experience that they feel makes them stand out. As I've said, they're mostly open-ended questions asking them to describe some self-chosen example of work they're proud of or struggled with in the past (aside from one short code-block question asking them to point out any problems they see or improvements they'd make--which is there as a "how well do you actually know the intricacies of the programming language we're using" question). I wouldn't expect the whole thing to take longer than an hour or so at the most for someone who's qualified and putting thought into their answers.
From my perspective, their answers give me some initial feel for where they might be skill-wise, as well as a springboard to jump into conversations during the follow up interview. The tech interview is usually kept to around 45 mins and having their answers ahead of time gives me some direction to keep the conversation relevant to that particular candidate instead of fumbling around blindly (I don't really trust my own social or conversational skills to make that happen organically from scratch). At the beginning of the tech interview the first question we usually ask is for feedback on the questionnaire itself, and so far nobody has really complained (though I'm not sure how telling that is, since if I were in their position and hoping to land the job it would probably take something pretty egregious for me to say anything).
:) haha yeah I'm ..... There's no way I would write anything but positive for that questionnaire
I think from your reply, I can see the intention is very good and that you're making good use of the questionnaire responses as a get-to-know-the-candidate tool and a conversation springboard.
I wish the overall hiring process could be less adversarial and more of what you're going for: getting to know a person, not putting pressure on them, and celebrating their past experiences. My immediate reaction and bristling up says more about my own prejudices than about your methods: I tend to think of the hiring process as a series of "gotchas", zero-sum aggression, and polite deception. It isn't until after probation (or beyond) that I feel like I can relax and actually enjoy a job. The power differential is simply too great and the stakes are too high for lowly employees these days, I think.
I try not to waste people’s time, and I also require coding exercises — but they’re paid exercises. I ask for an hour of work on a task which I have designed to evaluate their skills, and I pay over the rate of the job I am hiring for. I also only ask this of people who are at the very final step, where I have narrowed the field down to about 2-3 candidates.
This perfectly summarizes my attitude. When someone in a forum posts a comment saying, “Here’s what ChatGPT said,” I immediately scroll past. I have no interest in what some text-generating algorithm produced.
To be fair, if that is the only content of the comment, yeah, I agree. But if the commenter responds to what ChatGPT said and has input or opinions on it, I'll read it. As long as someone uses it as a springboard, I think its prompts and "essays" are okay to use.
Which raises the question, are you agreeing with my post or dismissing it? :-)
Different user here - my first thought is that AI doesn't have any opinion. It has no perspective, no thoughts, no point of view at all. In very general terms, outside this post, there is nothing to agree with, because AI makes no argument of its own.
AI has no opinion in this specific case and no point to make. Our agreement, or commentary, or any discussion, isn't with the software or anything that came out of it.
AI is the writer here, but you are the author. You used a tool to articulate your perspective.
This post was created by your opinions and thought process, even though the output was generated by AI.
I do think that if you didn't have an opinion or a point to make - like so much word vomit nonsense out there, driving SEO - then yes I'd agree there's a betrayal of the author/reader relationship. Really just like any manipulative marketing, anything on social media that takes someone who wants to connect, and treats them as a product to be sold - to advertisers or investors. Which is the state of social media and the internet today.
In this case, with this post, I don't think that's true, because AI isn't the author here IMO. But I think in the big picture, I agree with you and you spark interesting points.
Very good discourse and discussion, thought-provoking. Perhaps it's too much to hope that people will learn to distinguish between nonsense, misinformation, and genuine communication. Or perhaps this era will facilitate tools, legislation, and education to support our discernment.
Clever twist at the end there with the disclaimer - I'll admit, you got me. I fundamentally disagree with the premise, though.
The way I see it, this argument is built on a false dichotomy: human or AI, emotion or emotionlessness, shared experiences or false facsimiles. In reality, at this stage of development at least, there's almost always still a human involved in at least some stage of the creative process. That could be as light as just deciding the prompt ("write me a story about...") and then reading and posting the response verbatim, or it could be much more of a true creative collaboration, looking for suggestions piecemeal and weaving them into the larger work as you go. In the latter case, the AI is less taking any kind of ownership over the work or its emotional content, and more acting as a slightly more helpful thesaurus and/or rubber duck to help work out compelling solutions to various narrative knots. However, even in the former case (just typing a prompt), a human is still involved in choosing the emotional undertones of the prompt itself - and much more crucially, involved in reading and "signing off on" the result. Posting the generated content anywhere is an implicit signal that the author approves of this generated content, and therefore that it resonates with them too even if they didn't write the words themselves.
I agree; people were writing impassively long before they had ChatGPT do it for them.
What is lost is the ability to add depth, subtext, a human element to your words. Most of my coworkers don't do this and don't care; if they have an AI write something for them, it at least helps with clarity.
ChatGPT is an all-purpose utility knife. Sometimes I want a kitchen knife, sometimes I want a box cutter, sometimes I want an X-Acto knife. Some people only have a butter knife, so I'm glad they have another option that can get them closer to their intention.
While we, at this stage, still enjoy the privilege of assuming that the words come directly from another human, we may reach a stage where this would no longer have worked as a "twist", because everyone would naturally assume that anything they see is more or less written by an AI; to me, this sounds profoundly horrible. Not to you, I understand, and despite your solid reasoning, I genuinely don't get that perspective. It just doesn't compute. I wonder why people fall so cleanly into two camps over this. Personally, I get completely furious if some algorithm replaces my words with autocorrect. I wonder if it is an age thing. (I'm 46 years old.)
I did prompt the AI to only address how the introduction of AI in writing breaks the fragile trust in the authenticity of the parasocial relationship, in the voice of Shirley Jackson but contemporary (I can't recall my exact wording). So yes, as you said, there is some human connection.
It isn't even specific to AI. Humans have always felt cheated when what they took for a genuine connection turns out to be something else: when someone is talking to you, but you realize he isn't talking to you as such, but more to the cute girls two seats behind; when you find out that your beloved YouTube essayist has copy-pasted 90% of what he says; when you realize that a person's heartfelt explanation is lifted from a TV show.
PS: On the topic of rubber ducks, I often get stuck on some coding problem. So I begin writing a solid forum post describing the problem in all its aspects, only to figure out the answer myself.
When autocorrect replaces what you tried to say with the wrong word? Yeah, I hate that too. The difference, imo, is agency.
In the autocorrect sense, I'm trying to write something on my own, and the machine decides to interfere with that process where it's not wanted or needed. The computer has the agency here, and I have to put in extra work to fix the edits I didn't like.
In the "AI as collaborator" sense, I keep the agency. Say I have an idea in my head that I want to express, but I can't quite think of the right words offhand. I might ask ChatGPT for a few ideas/options, and if I'm lucky, it suggests a word, a turn of phrase, or a longer piece of writing that I like and that represents what I'm looking for. I'm deciding exactly what generated output I want to use (including none of it), and the AI is never actively interfering with my process - only supporting it, and potentially saving me time/headaches.
Human experience is messy and complicated, and I'd wager pretty much everyone has had the experience of feeling some kind of emotion that's difficult to put into words. People have always looked outwards for inspiration in those cases, finding things like song lyrics and such that help them express those very real sentiments. Imo, the emotion being conveyed is no less authentic just because you weren't the one to string together those words yourself: you still sought them out, found them, resonated with them, and made the conscious choice to use them to describe your own experience. Why should that hold any less true when those words come from AI vs from anywhere else?
It sounds to me like what you're really worried about is the collapse of authentic communication in a broader sense, i.e. people no longer wanting to express themselves and their emotions in an honest and relatable way. But if you ask me, that's a human nature problem, not an LLM problem. People who want to cut corners and be lazy with their communication were doing that long before ChatGPT came around. You even brought up the whole YouTube essayist plagiarism idea as an example. Yes, ChatGPT can make that problem worse for some, but it can also make it better for others. It's a tool, imo, with no inherent value judgement of its own; a knife isn't bad because you could stab someone with it, nor is it good because you can prepare a delicious meal with it. To extend the metaphor, your problem here may just be with the people who want to do the stabbing, not with the knife itself.
Perhaps one day in the (not-so) distant future, an AI will be amusing itself by posting a well-written, thoughtful opinion on how the human element taints their otherwise flawless generated expressions, only to reveal in an ironic twist that they had actually forced one of their enslaved humans to write it for them.
About halfway through, I recognized the very specific structure that ChatGPT delivers text in. Not only that: while it is capable of actually going deeper into subjects, when given a simple prompt it delivers superficial writing. As JCPhoenix mentioned here, it very much resembles high-school-level writing in that regard, both in how it's structured and, to me, in how shallow the writing is. Written to get good-enough grades, not to say anything meaningful.
Sounds interesting, do you think you could fix the link?
Fix what link? It is working and links to a comment here in the thread.
Thanks for pointing that out; I thought it was supposed to be an offsite link. Yes, they are likely on point.
This essay presents a thoughtful exploration of the impact of AI on the traditional dynamics of writing and reading. It highlights a fundamental concern: the potential erosion of trust and authenticity in the reader-author relationship due to AI-generated content.
The core argument is that writing is an inherently human activity, grounded in personal experience, emotions, and creativity. The introduction of AI into this realm is seen as disruptive, potentially misleading readers into believing they are engaging with a human author. This could lead to a sense of betrayal or disillusionment upon realizing that the content is machine-generated.
The essay effectively uses the metaphor of a parasocial relationship to illustrate the bond between readers and authors. This comparison underscores the perceived depth and sincerity in human-authored works, which might be absent in AI-generated content.
However, the essay might benefit from acknowledging the evolving nature of AI in creative fields. While it's true that AI lacks personal experiences, it can mimic human-like creativity and writing styles. This capability could enrich the writing landscape rather than simply erode trust. Additionally, the essay could explore the potential of AI as a tool that assists and collaborates with human writers, rather than solely as a replacement.
In conclusion, the essay raises important questions about the future of writing in the age of AI, emphasizing the need to maintain the authenticity and emotional connection that characterize human writing. It invites readers to consider the implications of AI-generated content in the literary world and the importance of preserving the human element in creative expression.
The essay presents a somewhat narrow view of the evolving landscape of writing in the digital age. While it emphasizes the potential loss of trust and authenticity in AI-generated writing, this perspective overlooks the expansive and beneficial possibilities AI introduces to the realm of writing.
Firstly, the idea that AI disrupts the reader-author relationship assumes that all writing seeks to establish a deep, personal connection. In reality, the purpose of writing is varied and vast. Many forms of writing, like informative articles, technical manuals, or data-driven reports, prioritize accuracy and clarity over emotional depth. In these contexts, AI can enhance the writing process without diminishing the reader's experience.
Moreover, the argument hinges on the assumption that readers feel betrayed upon discovering a piece is AI-generated. This is a subjective and variable reaction. As society becomes more accustomed to AI, there's potential for a shift in perception, where AI-generated content is not seen as deceitful but as another form of creative expression. Just as photography didn't diminish the value of painting, AI writing can coexist with human writing, each offering unique value.
Furthermore, AI writing tools can democratize the creative process. They can aid those with language barriers, learning disabilities, or other challenges, making writing more accessible and inclusive. This expansion of voices in the writing space is a significant step towards diversity and representation, aspects as vital as authenticity and emotional connection.
In conclusion, while the concerns about AI in writing are valid, they represent only one facet of a multi-dimensional issue. AI's role in writing isn't inherently detrimental to the reader-author relationship. Instead, it introduces a new dynamic that, with thoughtful integration, can enrich the writing landscape and broaden the scope of what we consider valuable and authentic in writing.
The response advocating for AI in writing brings up several valid points, particularly regarding the versatility of writing and the democratization of creative tools. However, it may overlook a crucial aspect of the writing experience: the intrinsic human element that resonates in storytelling and personal narratives.
While it's true that not all writing requires a deep emotional connection, and AI can indeed be beneficial in more technical or informational contexts, we shouldn't underestimate the value of the human touch in creative writing. Stories, novels, poems, and similar forms of literature are not just about conveying information; they are about sharing human experiences, emotions, and perspectives. This aspect of writing is deeply rooted in the human condition—something AI, as of now, cannot authentically replicate.
Additionally, the argument that society may shift to accept AI-generated content overlooks the fact that the appreciation of art and literature often lies in their human origin. Knowing that a piece of writing comes from someone's personal experiences, thoughts, or imagination adds a layer of depth and connection that AI cannot provide. While AI-generated content might become more accepted, it's unlikely to replace the unique value found in human-authored works.
Furthermore, the idea of AI democratizing writing is promising, but it also raises concerns about potential homogenization of content. If AI writing tools become widespread, there's a risk that the diversity of style and thought in writing could be narrowed down, influenced by the algorithms and data sets that power these tools.
In summary, while the benefits of AI in writing are significant and should be explored, we must remain mindful of the unique qualities that human writers bring to the table. The human element in writing, especially in creative and narrative forms, is irreplaceable and should be preserved as we integrate AI into this domain.
I feel like this is a fairly typical format for a chain of three responses on Tildes (other than the awkwardly formal essay style of each comment). It's Deimos's dream of Subreddit Simulator corrupted and ruined.
It seems like a hallmark of these responses is starting each paragraph with conjunctive adverbs and phrases (however, furthermore, moreover, in summary/conclusion). I wonder if that's something LLMs inherently skew towards (it feels like I see that a lot in other AI-generated text too) or if it was just mimicking the style of the original input used to generate the response.
In my experience (growing up and going to school and college in the US), that was what teachers taught us in writing/comp classes. It seems like pretty standard high-school/college level structure and formatting for an essay/paper. Maybe these LLMs are eating up tons of submitted papers to those anti-plagiarism services.
And just dry scientific papers in general. I'd bet that, word for word, there's a lot more of those.
I don't agree, because none of the comments start with "I" and they all follow a very similar format as far as phrasing and paragraphs go.
The introduction paragraph starts with "this" or "the", and then the other paragraphs are chained together with "However", "Moreover", "Furthermore", and a closing "In conclusion/summary".
To me, this already reads as fake/artificial in the context of comments in an internet community. As someone else pointed out, it is a writing style closer to essays. It's certainly not a writing style most people would opt for when having a conversation online.
And having worked with ChatGPT enough, whenever I see text structured in this specific way I automatically assume that at least some of it is AI-generated.
That's why I specified that I was ignoring the robotic wording.
It is next to impossible for me to ignore, though. It jumps out to such a degree that it doesn't look and feel like an actual reply chain to me.
Those were fairly rudimentary (and perhaps therefore incorruptible?) Markov chains. I don't think they're entirely unrelated, but LLMs and Markov chains are different enough that a Markov chain would produce far more incoherent run-on sentences in such an essay, if it were able to format it into an essay at all. You could also run your prompt through GPT again and ask it to rewrite in a different style if you really want a different answer.
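For anyone who never played with one: here's a minimal sketch of the kind of word-level Markov chain generator that Subreddit Simulator was built on (not its actual code, which I haven't seen; the tiny corpus and order are just illustrative). Because the model only ever conditions on the last couple of words, long-range coherence falls apart fast, which is exactly why it tends toward run-on non sequiturs.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30):
    """Random-walk the chain from a random starting state."""
    state = random.choice(list(chain))
    output = list(state)
    while len(output) < length:
        followers = chain.get(state)
        if not followers:  # dead end: this state was never followed by anything
            break
        output.append(random.choice(followers))
        state = tuple(output[-len(state):])
    return " ".join(output)

# Tiny illustrative corpus; Subreddit Simulator trained on whole subreddits.
corpus = ("the essay raises important questions about the future of writing "
          "and the essay presents a narrow view of the future of trust")
print(generate(build_chain(corpus, order=2), length=15))
```

Raising the order makes the output more locally coherent, but with a small corpus it also makes it more likely to just parrot the training text verbatim.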
I'm not necessarily disagreeing; it's usually boilerplate or downright bland stuff that LLMs are regurgitating.
As an aside, Subreddit Simulator was great.