Have you altered the way you write to avoid being perceived as AI?
I recently had an unpleasant experience. Something I wrote entirely on my own, without AI generation of any kind, was perceived as, and accused of, having been produced by AI. Because I wanted to get everything right in that circumstance, I wrote in my "cold and precise" mode, which admittedly can sound robotic. However, my writing was pointed, perhaps even a little hostile, with a clear point of view -- not the kind of text AI generally produces. After the experience, I started to think of ways to write less like an AI -- which, paradoxically, means forcing my very organic self to adopt "human-like" language I don't necessarily care for. That made me think that AI is probably changing the way a lot of people write, perhaps in subtle ways. Have you noticed this happening with you or those around you?
No, and I am not going to.
Based on the fact that you wanted to write in "cold and precise" mode, I assume you were not writing a poem but some argument or something. In that context, accusing you of being AI is a form of ad hominem argument. People should argue against what was written, not against whoever (or whatever) wrote it.
Sure, "ad hominem" means "to the person" - I guess whoever named that fallacy did not see the age of AI coming.
Pretty much seconding this. It is what it is, it will be what it will be, and I think jumping through hoops now especially, when it's still so early in the rollout of AI, would be pointless.
I suppose we can start using ad machinam.
Edit: ugh, it's an ad automation service.
I've seen enough people make nonsensical allegations of GenAI based on incorrect syntax or grammar (the two things LLMs generally don't mess up) that I think it's a fool's errand trying to avoid such accusations. If someone's lobbing that claim, they already either consider the substance of your posts to be indistinguishable from slop or they're incapable of discerning substance from slop, so trying to appease their broken radar seems pointless.
I work on staff at a university, and some of my co-workers who teach adjunct, as well as my wife, who is on faculty, frequently show me student work they think is AI generated. Some things do seem sort of obvious, but we have plenty of international students and I also don't trust the general state of US education, so it's always hard for me to be like - yeah, absolutely. That's also how I feel about a set of comments the other day about Reddit bots - how are you actually sure?
One of the ways they are trying to "combat" AI in writing assignments is with Google Docs shared with the professor so they can check the edit history. If it all appeared at once, AI. If it appeared line by line over time, you wrote it -- or, at least, you typed what the AI was saying and maybe gained some insight along the way.
Personally, I haven't changed how I write, but I also don't find AI all that useful for the (rare) things I am writing.
At least for product reviews/recommendations there are definitely already bots on Reddit spamming in relevant (and sometimes irrelevant) threads. They are relatively easy to spot, though; you just need to look at their posting history. No real humans (other than paid shills) are so enthused about a particular product that it's all they ever post about. ;)
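To make the "look at their posting history" heuristic concrete, here is a minimal sketch of how one might score it: flag an account when too large a share of its posts mention the same product. The account data, function names, and the 80% threshold are all hypothetical, chosen just for illustration.

```python
def shill_score(posts, product):
    """Fraction of posts that mention `product` (case-insensitive)."""
    if not posts:
        return 0.0
    hits = sum(1 for p in posts if product.lower() in p.lower())
    return hits / len(posts)

def looks_like_shill(posts, product, threshold=0.8):
    # Flag the account if most of its history is about one product.
    return shill_score(posts, product) >= threshold

# A made-up posting history of the kind described above.
history = [
    "The AcmeBlender 3000 changed my life!",
    "You need the AcmeBlender 3000, trust me.",
    "AcmeBlender 3000 is the best purchase I ever made.",
    "Anyone else love their AcmeBlender 3000?",
    "Just bought another AcmeBlender 3000 as a gift.",
]

print(looks_like_shill(history, "AcmeBlender 3000"))  # prints True
```

A real detector would of course need more signals (account age, posting cadence, subreddit spread), but the concentration check alone catches the single-product enthusiast pattern the comment describes.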
That makes sense. I don't frequent Reddit enough anymore to bother with it, but I'll keep that in my pocket for next time I'm over there.
No. AI tends to be good at the technical parts of writing: spelling, grammar, syntax, and organizing ideas. In that sense, accusing me of using AI is just complimenting my competency at writing. AI's weakness is keeping its facts straight about e.g. how many rocks and how much glue to keep in my pantry. I would hope I typically make better points than that.
Yeah, I avoided the word "delve" in my college applications
Delve, tapestry, and interplay are the three big ones in my opinion. Are these common in AI generated writing? I haven't been following this scene very much.
If anything, I've noticed a proclivity in auto-gen'd texts toward those particular words. As such, I've begun avoiding them. Subsequently, my writing has shifted a bit to become "harder to parse". Whether this is a devolution or an evolution on my end is totally in the eye of the beholder. Take this for example:
Yes
Not really. If I write like an AI, then we're past the point where it matters anymore, because they'll be that advanced at mimicking me. At the same time, with larger bodies of writing such as manuals, documentation, etc., it'll be difficult for LLMs (at this point) to take in that much context.
Of course, LLMs are a useful tool for writing such things so who knows, there's a chance the writing of myself and many others will be 'guided' that way. Whether that's a good thing or not, I think it's far too early to tell. Social media seemed beneficial at first but it turned into... well, we all know. LLMs could be the other way around.
New inventions have often had both positive and negative effects. If you ask me, the only really new challenge here is the pace of change, which is very high given the large human population and the exponential curve of technological advancement.
I've definitely been writing in a more unhinged way.
But I realize it's pretty easy for AI to mimic this style. I guess,
I can keep making it worse
Nope. I ignore people who whinge about it because they can't tell the difference - if they aren't that bright, ignoring them saves me time since I don't have to bother replying. I've been accused of being an AI many times, and I take it as a compliment instead. Reading a book a day for decades does make you better at written communication.
Consider this. AI has finally made it viable to create the internet's longest debated, most desired, and yet still conspicuously missing technology: the stupid-filter -- the ability to detect rampant stupidity in written language. Attempts have been made in the past with Bayesian tools, but they don't work very well. Large language models, on the other hand, are capable of doing this.
If you think the people whinging about everyone more articulate than themselves are bad now, just wait until any comment that isn't at a minimum 10th-grade level of English is instantly removed, or automatically fact-checked, cited, and sourced the moment it gets posted. They'll never get a word in edgewise again. The average commenter can forget about being taken seriously by anyone outside of 4chan.
I don't have a problem with that, since most people providing 'expert' commentary aren't experts; they are just stroking their egos. Good news for brain work, bad news for bullshit, terrible news for trolls, spammers, and shills. It will happen simply because nobody likes moderating comments and nobody has the time for it. Everyone will outsource it to LLMs as the cost of running them comes down. ;)
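A minimal sketch of what "outsourcing moderation to an LLM" might look like in practice. The `classify` function stands in for a real model call (e.g. a prompt like "does this comment contain substance, or is it spam/trolling?" sent to a hosted LLM); here it is stubbed with a trivial keyword heuristic so the plumbing is runnable. All names, the policy, and the sample queue are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

def classify(comment: str) -> Verdict:
    # Stub for an LLM call. A real version would send the comment
    # plus a moderation policy to a model and parse its answer.
    spammy = ("buy now", "click here", "limited offer")
    if any(s in comment.lower() for s in spammy):
        return Verdict(False, "reads like spam")
    return Verdict(True, "no policy violation found")

def moderate(comments):
    # Keep only comments the (stubbed) model allows.
    return [c for c in comments if classify(c).allowed]

queue = [
    "Interesting point about Bayesian filters.",
    "BUY NOW!!! Limited offer on miracle pills!",
]
print(moderate(queue))  # prints a list containing only the first comment
```

The interesting design question is entirely inside `classify`: swapping the keyword stub for an actual model call is what would turn this from a 1990s-style filter into the "stupid-filter" the comment above imagines.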
Not at all. I just do what I do and let my mind churn out what it does, if someone wants to question it I'll probably just say "in conclusion I'm actually chatgpt" and see what happens. If anything the proliferation of generated text has emboldened me to just put out the stream of consciousness more directly, be less of an editor about it, because if a model can match that the next step of my life will fall into place. I will have to set forth on a journey, to locate the source of the model and destroy it. There can be only one!
Hehe lou, this makes me smile a little, but please don’t take it the wrong way! You also posted the question about using chatGPT for proofreading, which makes this new question funny to me.
I don't know you well enough to give any meaningful advice, but my first thought is to tell you not to be too self-conscious. You're totally fine in my opinion, so don't read too much into what people say online, definitely not in a discussion.
Yeah, after posting I wondered if anyone would find the two posts weird. But no, I did not use GPT for proofreading this time; I don't do that often.
It is not that I am self-conscious about being judged or anything; this is not that personal. It is more of a practical issue that scares me about the future, since I am a writer.
Thanks ;)
One thing you could do to be absolutely certain to never be accused of using AI is to slam a bunch of the most egregious swear words into your writing (tastefully, if possible) because I think a lot of LLMs have guardrails in place to at least make that more difficult.
As an Australian for whom swear words are just a standard part of the vocabulary, I’m disappointed at how difficult it can be to ask a chatbot to casually swear
I used to aggressively correct any and all typos. Now I often leave them in if they're not too egregious.
Other than that, no, I'm not too worried about sounding like AI, at least in its current state. I've never been one to write in the dreaded "formal essay" style (introduction paragraph, at least three main points, summary paragraph) or use an excessively friendly and encouraging tone, and I think that's 80% of what makes something sound like ChatGPT spit it out.
I have an additional question.
For the past few weeks, I have been writing fiction directly in English. For a variety of reasons, English is more conducive to my creative process. However, I must submit this work in my native Portuguese.
Translating entirely by hand would be very laborious with little benefit. I have noticed that GPT's translation is better than Google Translate's. I, of course, fully verify, correct, and improve the translations, and I recreate any tone or subtleties that were lost in the process. Many of the choices GPT makes in the translations are the same I would make; some are better than mine, and some are inferior and need improving.
These are my questions:
I would never translate it all by hand, but I may use Google Translate if I think GPT translation could get me in trouble.