I find it hard to comment without it sounding like a critique. "Comparison is the thief of joy" seems very apt here.
Why would you undervalue your own important thoughts just because someone else, or in this case something else, could have done the thinking better? Would you undervalue your sports skills just because an athlete or a pro would have been better than you? Would you undervalue your cooking skills just because a chef would have done it better? We're all human; you can't compare yourself to experts all the time.
Also, why would you offload a part of what makes you you, and then be surprised when that part of you is gone? It's like paying someone to finish your games and then being sad that you've lost your gaming skills. I'm hopeful the author has reached the same conclusions.
AI should be a tool: it's there to help you, to do the things you don't want to do yourself. I would never use an AI to think about common problems if thinking is what I like to do, or use AI to write when writing is what I like to do.
I'm a teacher, and AI has been an immense help in saving time when I get new ideas. And I'm happy every time I use it, because it saves time. My ideas are mine and I love thinking about them; I would never hand my sense of self-worth over to an AI, especially knowing it can be better than me. I found the ideas, and that's all that matters to me.
Thinking is what makes us human; I would never completely trade that skill away. I want AI to help me reduce the time, perhaps, but never to do the thinking entirely. If it ever gets to that point, then why even live?
Agreed. The linked post is just "I can't be the best, so why bother?" which is such a defeatist and harmful attitude.
There are 8 billion people in the world. Most of us aren't the best at anything. Should we all give up?
And as with all tools, there are people who will use them to improve their work, and there are people who will eschew them on principle, to keep their work more "human", or just because they can. There are woodworkers who don't use power tools. There are bakers who bake in wood-fired stone ovens, where it's more difficult to control temperature and humidity. There are people who still send letters when an email would work just as well.
AI and LLMs don't need to replace thinking (though that seems to be what OP has used them for). They can be used to augment or accelerate your thinking, or you can just ignore them entirely. OP needs to think about why they want to think for fun (that's a weird sentence) and then decide if AI makes it more fun or less fun.
For me, the end product is usually my goal. If an AI can get me there faster and with fewer mistakes, great. As with all tools, I have to be careful that I don't rely too much on it or use it where it's not appropriate (like using a big chef's knife when I need a paring knife). But OP comes across as though the journey is what makes it fun for him. In that case, he should probably avoid AI.
Great reply!
I love the way you describe the use of AI with the words "augment" and "accelerate", it's such a concise way of explaining and it works well, I'll use that in the future, thank you.
I needed this. AI has been messing me up, and I can't tell how much of this is real and how much is me overreacting. Your post grounded me a little.
I find it difficult to comment on the article without more context on how the author has been using LLMs. I do recognize some of it, but only in relation to the times when I experimented with heavily leaning into the generation side of LLMs. I've commented on things like vibe coding before, and alongside all the issues mentioned there, I find that the lack of ownership over the code doesn't help either.
When I use LLMs purely as tools for specific purposes, I don't experience the same thing, as I'm the one at the steering wheel, firmly in control of what I end up using (if anything) from the LLM output.
It is also entirely possible that the troubles the author is facing have nothing to do with the use of LLMs. People struggle with motivation and various creative blocks for a wide variety of reasons, and they did so long before the digital age. While it might be easy to point to LLMs, I don't think that's the real cause.
Finally,
The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will.
This simply is not true. To be frank, if someone feels like this, they have drunk too much of the AI Kool-Aid. As I said in the earlier linked comment about vibe coding, LLMs still struggle with larger contexts (yes, even newer models with a million-token context; yes, even when combined with RAG systems and all that) and with keeping an overview. While their initial results might seem impressive, the longer-term results are less favorable. This isn't just my opinion; you'll find a steady stream of blog posts submitted to websites like Hacker News where people come to the same conclusion.
In fact, this quoted line only strengthens my suspicion that "AI" isn't the problem here. It is the idea of AI, not their actual capabilities.
I have similar feelings about using AI when coding. It can get answers pretty quickly, and in my experience it seems to cause my problem solving skills to atrophy. I quickly become dependent on the AI to solve a problem, so I don't try to figure it out myself. After a short time I start to lose some of the ability to judge what the AI has done at all.
Pretty soon, if not tomorrow, AI will start replacing a lot of people. AI is currently overhyped and can't really replace most of them adequately, but their bosses are buying the hype and don't understand that the product will suffer. They won't understand until the people are gone.
In the past, automation increased productivity and people eventually went on to do something else. But there are a few issues with these transitions:
1. The benefits of automation are not equally distributed (most goes to the owners).
2. The time and cost of retraining and changing careers is a burden carried by the workers.
3. It seems like we're at an endgame where we're running out of things that people can do better than machines.

Luddites had a point.

It doesn't look like governments are going to do much to help people with these issues. In fact, as is typical, US republicans are doing the opposite.
Luddites completely failed, so I'm not sure how much of a point they really had. Your points #1 and #2 seem far better addressed by Marxists than by Luddites. And I just don't really agree with your third point.
So I had an interaction with ChatGPT (paid version) at work yesterday that addresses this commentary directly. I was bug-hunting in one of the work apps for a .NET server-side timeout issue on a big web query. The lead developer wasn't in the office, so I couldn't ask about it directly. Ordinarily, I'd go on an hours-long odyssey of digging through Microsoft documentation, learning many tangentially related (but ultimately useful!) things along the way.
ChatGPT solved my problem within two prompts, spitting out a web.config tweak that will probably fix the same issue in multiple in-house apps. In one more prompt, it offered a fix for the client-side timeout issue that the customer's ServiceNow developer I was working with reported after I got my side of the API working.
Maybe full-stack software engineers already just know about these things, but I'm in an odd middle position where I try to learn everything so I can be a good integrator. ChatGPT made me more productive - problem solved in minutes, hurray! But long term, my ability to discover new solutions to the weird puzzles that cross my desk will atrophy. I'll lose the fuzzy insight that comes from picking up threads and unraveling them on my own - the partial understandings that permitted the effectively targeted prompts.
I don't think the author's point is a trivial concern.
Part of that is true, but it also really depends on how you go about getting the answer from ChatGPT, and on what you do after you have gotten one. One thing I almost always do is ask follow-up questions about the answer to get more context: let it explain why it works the way it does. For good measure, I often also double-check the documentation, because I now know better what to look for.
Basically, I am not satisfied if something merely seems to work now but I don't know why. I never accepted that from people explaining things to me either, on the principle that if they can only tell me what to do but not why, they don't really understand it themselves. I see no reason to take answers from LLMs at face value either, even if the answer they provide seems like the magic fix I need.
I do agree that I might not discover as many tangentially related things, but for every useful thing I would discover along the way, there are always five more I'll never need and that will therefore never be useful. Besides, LLMs will often take me on a detour anyway when they lose the plot ;)
But, that is a bit of a tangent. My main point is that it is very easy to just lean into the "make my task easier for now" aspect of LLMs. But if you are intentional about how, when and why you use LLMs it can make a huge difference.
Another example of this would be the prompt itself. You can easily create a prompt stating "I want to do X with technology Y, do your thing" and get a workable answer. You can also ask it "I want to do X with technology Y. Explain how I can get started".
I appreciate this response. It's just that in my work context, I need the excuse of an unsolved problem to justify the time spent deeply understanding the answer. In the case above, it was a very concrete, lock-and-key kind of fix, where the answer was the explanation. Got this particular timeout error message? Here's how to increase the allowed processing time. [I did a general Web search and a search of Microsoft documentation afterward. The answer was very hard to find and would have taken a lot of meandering to reach, yet it did corroborate the AI result...] Test once, and it's ready to go.
[In terms of understanding deeper causes, there's a long fight to be had with the dev team about why the web service doesn't include some sensible parameters to reduce the amount of data a query returns. It's old code that hasn't received attention in a while, and we're now encountering scaling problems. Maybe I'll use my copious free time (/s) and take a stab at .NET programming with ChatGPT.]
I'm baffled by the sheer audacity of facing this dilemma for the first time now...
This to me reads like run of the mill existential dread coupled with performance anxiety: what's the point?!? Somebody else is already better, more apt or whatever, why even try?!?!?
I envy anyone who hasn't faced this most of their life.
I have no wisdom to share, just depressing experiences about not even trying to become mediocre (at best) at anything.
There is probably some relief in practising self-compassion and mindfulness but I wouldn't know. I haven't been able to stick with it, for obvious reasons ;)
I'm baffled by the sheer audacity of facing this dilemma for the first time now...
This to me reads like run of the mill existential dread coupled with performance anxiety: what's the point?!? Somebody else is already better, more apt or whatever, why even try?!?!?
I would imagine that many tech people have encountered that feeling before, but have found a coping mechanism to overcome it. Often it’s not terribly healthy or socially responsible (eg. “I’m smarter/harder working than everyone else, and therefore exceptional” has been the prevailing reasoning I’ve seen in tech), but that — plus being socially validated by being “geniuses” — was enough. Less so these days, hence the spate of blog posts.
Overall, though, I hope you’re doing OK, and that you have someone to speak to about all this? It sounds like you have a lot on your shoulders.

Thanks <3
My bias is clear and my melodrama is overflowing, so this is emotional commentary rather than invitation for debate.
But I empathize with the author in a way, in that I will forever find it absolutely disgusting and soul-destroying that when I think about writing something, or doing anything creative, I could just ask a computer to do it all for me. There is a sedating, deeply acquiescing or resigned feeling that accompanies the very thought sometimes. Like, at some point, we're rendering humanity obsolete. Like "why are my thoughts even needed anymore, someone can just read what the machine has to say instead". I abhor and oppose, with extreme (and nigh-militant) prejudice, the technological path we're treading. I don't care if some people find it useful or it makes them more productive; all that means is that they'll get more dependent on it as our society starts requiring even higher productivity as a result of that usefulness. Eventually skills will rot as employers push for less time for understanding, and only the time necessary for asking the bullshit machine to fix the problem.
That said, I find myself landing in a different place than the author- in that, while it can feel like this overwhelming, helpless, tidal wave of cold, heartless, dogshit smothering the very heart of human skill and creativity- at least on a personal level, I find value, hope, and personal fulfillment in creating and learning and doing things without using the tech, both as an opposition in and of itself and as a point of respect for the imbuing of all that is created with humanity rather than the absence of it. There is a pride in such work. Feels good to be a small signal in the noise, a small spear in the side of our new god.
So instead of feeling defeatist and obsoleted, transmute that into valuing your own work even more. Don't let them take that from you, because they're trying.
Maybe I'm just cocky, but I express my views in places because I think I have something to say that is valuable. Or at least, it amuses me to do so. Sometimes I really just want to make the joke I make.
I suppose, in actuality, that I might benefit from running my responses through AI from time to time - to prompt it "Where am I not explaining this fully yet succinctly enough?"
But I'm generally not worried about AI. It is coming, like it or not, and it is not quite here yet. People complain about the writing quality, but that will be solved, and not that far away, I'm sure.
Accuracy of information is a concern - but read any news article about some topic that you have professional knowledge about, and you will already realize that that is a huge problem. If anything, AI might well help to improve that.
Alright, out of curiosity, I'm pasting the above into ChatGPT to see if it has any suggestions.
Maybe I’m just cocky, but I speak up because I feel like I have something to add — or sometimes, I just want to make the joke I thought of.
I’ve wondered if I’d benefit from running my responses through AI now and then, just to ask: “Am I being both clear and concise?”
As for AI itself, I’m not worried. It’s coming, whether we like it or not — but we’re not quite there yet. People complain about the writing quality, but that’s going to improve — and probably soon.
Sure, accuracy is a concern. But if you’ve ever read a news article on a topic you know well, you already know how bad things are. If anything, AI might help fix that.
Interesting. I think that it did a decent job overall. I lost a little meaning in a couple of places, but nothing critical. I didn't paste the rest of the feedback it gave - along with telling me that, considering my ADHD, I did well. lol (I mentioned it as part of my prompt).
I think AI is like CGI: good CGI we don't notice, bad CGI annoys us. Perhaps things like the above are not egregious misuses of the technology. I feel like it's still conveying most of what I wanted, although it is a bit shorter and more concise. I think I'm fine with that.
We can tell the absolute dregs of writing that are horrible - but as pointed out by many: real humans write crap. It's not the tool, it's the crappiness of the writing that offends. (Or when you are seeking information or answers and instead get generalized fluff and platitudes from corporate writers)