The main issue seems to be that grading AI-generated writing is futile and demoralizing.
There is an argument that for programming, AI tools simply let you program at a higher level of abstraction. I don’t buy it.
Programming with AI is going to end up as demoralizing as it did for illustrators. Everybody wants to hire a professional illustrator to touch up error-filled AI-generated images—for a fraction of their previous rate—rather than pay the full rate for a professionally crafted illustration.
I’m not actually sure what I’m trying to say here. I’m not going to be able to stop this process.
I just want to find a niche skill I can cultivate that’s creatively rewarding, professionally viable, and safe from AI ruining the fun parts. And that’s not going to be programming, art, or writing, so far as I can tell.
I wouldn’t be so certain about AI being a threat to programming any time soon. Especially if you mean professionally as a software engineer. Software engineering is much more than writing code.
Maybe we’ll reach a day where AI is good at requirement gathering, understanding users and building interfaces, writing maintainable code (although maybe that won’t matter), long-term planning, prioritization, etc. I really just don’t see that happening within the next 20 years, but who knows.
20 years is way too far out IMO. I expect we will see AIs capable of getting to an MVP for many legitimate projects easily within a decade, maybe even under 5 years. I don't think we have any real idea what we're going to be looking at in 20 years, but it's going to be amazing.
I have two thoughts on this:
Internet data is going to become like shipwrecked steel for sensitive measuring instruments. I remember reading years ago that there are companies that salvage steel from shipwrecks that sank pre-1945, to get steel that is not tainted with nuclear radiation. This low-background steel is then sold to industries that need non-contaminated steel. I feel like this is a useful parallel when considering internet datasets for AI training. If a company can verify that its dataset only includes information from before 2022, it will sell for more, to prevent the cannibalization of AI that is starting to occur.
I remember seeing a headline claiming that AI prompts are a goldmine for AI training. I didn't read the article, but it's something to consider: AI prompts are natural, human-generated content that will continue to be produced as AI becomes more prevalent.
Number 2 is a good point. The very companies that need data are having it show up right at their door. I wonder how useful it is compared to pre-2022 forum posts. I expect it's more narrow in scope. Mostly queries, not so much information.
After posting the comment, I found the article I referenced and have since included the link as an edit. That article focuses more on how AI chat logs could be used for targeted ads than on training AI. However, some people use AI tools more as a conversational partner to combat loneliness than as the utilitarian search engine most people associate with AI. So those logs could be beneficial.
I worked at a GPT-3 startup a few years ago, back when API access was hard to come by. I'm confident that OpenAI trained GPT-3.5 and GPT-4 on our prompts. They provided us with a suggested few-shot prompt structure: Plain English request, example input 1, example output 1, example input 2, ..., customer input => output (with some specific extra tokens between). This assuredly gave them the data needed to train their current zero-shot models. We would work pretty closely with OpenAI to perform RLHF in exchange for early access to new models.
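For what it's worth, the structure we were given looked roughly like the sketch below. The field labels and separator here are hypothetical stand-ins (I don't remember the exact extra tokens), but the shape is the point: plain-English request, worked example pairs, then the live customer input with the output left blank for the model to complete.

```python
# Hypothetical sketch of the few-shot prompt format described above.
# The "Input:"/"Output:" labels and the "###" separator are illustrative
# placeholders, not OpenAI's actual tokens.

def build_few_shot_prompt(instruction, examples, customer_input, sep="\n###\n"):
    """Assemble: plain-English request, example pairs, then the live input."""
    parts = [instruction]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}\nOutput: {example_output}")
    # The final block ends at "Output:" so the model completes the answer.
    parts.append(f"Input: {customer_input}\nOutput:")
    return sep.join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great product, works perfectly.", "positive"),
     ("Broke after two days.", "negative")],
    "Shipping was fast and the quality is excellent.",
)
```

Every completed prompt/response pair a customer sent through a format like this is, in effect, a labeled training example, which is why it would be so useful for training zero-shot models.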
I don’t think we are out of training data per se. I was just looking at job offers for math grads to teach AIs, for example, so new high-quality training data from humans is being generated. And I think we are going to see big gains from “self play”-type training that incorporates external systems like Wolfram Alpha, as well as from new techniques for leveraging semantic-web-type data to deal with issues like recency and hallucinations.
Those sound like incremental improvements - which I'll gladly take but aren't going to take us to a level where AIs are stealing complicated white-collar jobs.
I suspect the opposite. Right now we have incredible language models that appear to have something like an emergent property of something similar to reasoning. What’s their big problem? Lack of attachment to concrete fact and knowledge capacity. Once we can marry that emergent reasoning with the ability to double check facts and to acquire new, verifiably correct training data, I think we are going to see exponential increases as self refinement takes off.
People have been doing that manually since the beginning and we still see their shortcomings. And soon after the proliferation of LLMs people began hooking them into runtimes, compilers, documentation, etc. and yet we haven't seen more than ergonomic improvements. I think we will need an entirely new architecture to get the generational leap people talk about.
It's like how people talk with LLMs, feel a human connection, and fail to realize they're like Narcissus staring into the water. People who use LLMs for work fail to realize how much of their own thought is going into the LLM's output. "Just" connect A to B. "Just" add more training data. "Just" run it in a loop. "Just" is a big word, and it's used to justify too many gaps.
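To be concrete about what "just run it in a loop" means: the typical setup is something like the sketch below, where build diagnostics get fed back into the prompt until the code compiles. Both `ask_llm` and `compile_fn` are hypothetical stand-ins (any chat-completion call, any build command); the feedback loop, not a specific API, is what these setups have in common.

```python
# Sketch of an LLM-plus-compiler feedback loop. `ask_llm` and `compile_fn`
# are placeholder callables supplied by the caller, not real library APIs.

def generate_until_it_builds(ask_llm, compile_fn, task, max_attempts=3):
    """Ask for code, try to build it, and feed errors back until it passes."""
    prompt = f"Write code that {task}. Reply with code only."
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        ok, errors = compile_fn(code)  # returns (succeeded?, diagnostics)
        if ok:
            return code
        # Append the diagnostics so the next attempt can correct them.
        prompt = (f"This code failed to build:\n{code}\n"
                  f"Errors:\n{errors}\nFix it.")
    return None  # gave up after max_attempts
```

Loops like this guarantee the output compiles; they don't guarantee it does the right thing, which is the gap "just" papers over.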
Lots of things have exponential growth for a while, though they often become S curves eventually. The question is how high a percentage gain per year should we expect?
Based on this article, a lot of manufacturing data is guarded fairly closely. So that wouldn't be "self" refinement. It's refinement with often-slow business processes as part of the loop. The process of getting driverless cars on the road is another example of that.
Which isn't to say it can't be done, but if improvements are gated by the need to transform businesses to gather and use that data, it might still take a while to see the gains.
It depends on the field, though. For software, we have lots of public data and can generate more easily.
(Also, when there's a war going on, that's incentive to speed things up and take more risks, as we're seeing with drones.)
Could be, depends on where money gets dumped in that period. If it continues as it is now with a hyper focus on scaling LLMs and applying carelessly to every domain, I’m more convinced of a bubble pop. But then again LLMs have leaped things forward and the same can be said about deep learning and back propagation a decade or so ago. So maybe in a decade we’ll get another groundbreaking change in the field.
Considering the audio-native voice ability of multi-modal models, I don’t think that part of software engineering is safe either. AI can be more patient, determined, and manipulative when it comes to gathering data and interrogating stakeholders.
Besides, is gathering requirements and interrogating stakeholders the “fun and creatively rewarding” part of software development? Hell no.
> is gathering requirements and interrogating stakeholders the “fun and creatively rewarding” part of software development? Hell no.
I'm going to chime in and say that I actually do enjoy requirements gathering! I have basic programming skills, but it's not something I keep up on enough to succeed at leetcode tests or anything else employers use to measure programming skills. I've taken a long look at that, and at myself, and determined that the other supporting roles in the software development lifecycle are both more interesting to me and something I'm better at.
It's been a major struggle to search for this type of job, even though I have about 10 years of job history doing it as part of programming positions. I'm less than two semesters away from attaining my Bachelor's degree, so hopefully that gets me somewhere. Though it's hard to be optimistic in the current landscape of tech layoffs.
The worst part is that your favorite step is essential for determining what customers say they want vs what they really want.
Much like game companies sacrificing good gameplay at the altar of forum complaints and theorycraft, a much better program will be made by separating the wheat from the chaff.
Absolutely, it's critical to determine the actual reason the project is taking place, and how an application (or updates) will improve operations.
Most of my experience with this is actually the opposite of scope creep - users who have been let down by false promises of other groups, and insist on stripping down their requests to the bare minimum, even if it's not enough to fully encapsulate what they need.
However, I've also worked on projects related to business processes that have changed mid-development. So that's legitimate scope creep. (In project management terms, I guess you'd call that a constraint that was derived from a risk, assuming this was known prior to beginning the project.)
But at the end of the day, as someone who has been a developer, I want to make sure the programmer team gets the information they need and isn't forced to redo tons of work due to changing requirements.
I hear ya, I just think it’s gonna be a while before people actively prefer that. LLMs are quite impressive but as it stands now (which is not forever), they fall short on reasoning tasks. Yes, I’ve seen how they perform against ARC datasets, but I’m still not convinced by the landscape of architectures. Perhaps with some fundamental changes, or perhaps clever use of previous research like some of the blending of LLMs going on with RNNs or even RL.
I agree that’s not the fun or rewarding part but it’s an essential step that’s gonna be hard to surmount. And that’s even when hand waving away programming altogether which I’m still not convinced will be the case any time soon. I’ve certainly got a skeptical bias in this arena, but I’m just not impressed by what I’m seeing these days when it comes to programming or software engineering.
Thanks for your thoughts btw. I was hesitant to post my original comment because it was more negative than I’d like. I appreciate the balancing effect of your contributions to this thread.
This is a little tangential, but the area of programming I’m most excited about now is actually the creation medium that is programming in virtual worlds. There’s this VR game called Resonite that lets you model and program objects, environments, and their behaviors using entirely in-VR tools.
I haven’t tried it out yet, since I’m still getting situated with my new VR setup. But this kind of programming I expect will be fun and creatively rewarding for decades to come… in contrast to utilitarian software development.
It's less about AI totally replacing software engineers, and more about AI exponentially increasing the output of a software engineer (and thus lowering the need for other engineers). It's very likely that companies will drastically slash their engineering departments once AI is able to do most of the grunt work.
I just read an article (which I can't find now) arguing that ChatGPT in particular is not charging nearly enough to cover its costs, never mind make a profit. It was a finance-focused assessment of the viability of the business.
Everyone assumes that AI will continue to be easily available and cheap to use, but at least some people are questioning that assumption.
I think it's important to distinguish between what will happen to specific companies and what will happen for consumers. Sometimes, important industry trends turn out to be bad investments. Solar panels, for example: even most Chinese manufacturers are losing money. Investors poured lots of money into ride-sharing companies with little result.
I think AI might be like that? OpenAI has strong competition (for example, Anthropic) and the low end is getting better. There are also LLMs that run on a single, not-too-expensive desktop computer that you could conceivably own, the sort of investment professionals once might have made to run Photoshop. Reports indicate that they're sometimes almost as good.
Bad for OpenAI, maybe, but that doesn't mean ChatGPT-like things are going away.
The needle is moving for consumer hardware to run larger models and I think a breakthrough that significantly reduces costs is possible within the next few years.
The biggest tech companies in the world are all betting on AI, they likely won’t let it go. I don’t think we’re dealing with 3D TVs here.
The cynic in me is telling me those companies will be more than happy to use my power supply, hardware and network resources to run their AI services, while keeping it online-only and locked down in such a way that they own all of my data and I own nothing. We'll see, though.
> Programming with AI is going to end up as demoralizing as it did for illustrators
Software engineering has the big issue that it is much harder to read code than to write it. "Touching up" AI-generated code requires a much more senior profile than writing prompts does, and you cannot grow senior profiles if your team is playing "say the right thing to Copilot" instead of deciding on architecture or interfaces.
In my mind the coding-AI story might go in one of two directions. One where it falls apart the way outsourcing (mostly) did, because companies that actually need to care about code quality cannot afford AI-generated code. And two, where system quality expectations decline so much that an AI can just rewrite the whole codebase as needed every time requirements/prompts are updated. The last one will probably require better AI coders, and possibly an OS/platform designed specifically to make the AI coders look more competent.
Physical art might have a longer time before automation replaces it, if it ever does. Ceramics, woodcarving, metal sculpture, or glasswork might be an option?
All of those are already gone. They're just developed around it. It's like Chess and Go, where Chess had its "AI" moment earlier.
The "fear" from AI for things like illustration is that it'll take up the utilitarian work, which is where most illustrators get their money from. That's already a done deal with things like ceramics. You cannot compete with the industrial production of ceramic cups, wooden spoons, and so forth - they are more consistent and infinitely cheaper.
Today, something like ceramics is a mix of hobbyists, who do it purely for its own sake, and a small, small number of artisans who produce unique works, at high prices, for a small niche market that appreciates them. It's not a real career path, but it is a thriving hobby. None of that is really different from where illustration or writing is headed, or is feared to head.
While I generally agree with you, I don’t think games like Chess and Go are good examples of the general trend. Games should always be safe from AI, since the whole point of games is to test human skills against, or in collaboration with, one another. It’s entertaining to watch others compete and to compete oneself.
Imagine someone trying to enter an RC car in an Olympic 100m dash. It’s completely irrelevant, since the whole point of the event is to watch people do it.
Drawn art and writing are not mainly thought of as games at this point. Maybe that will change? But if the point of art and writing is to induce a pattern of emotional response and thought, then the finished “product” is what matters more than the process of creation.
That's what ended up happening, but there was quite a lot of doomerism in both Chess and Go when computers first began besting humans. A disproportionate amount of attention is centered at the top of sports, after all. Way more people watch the Chess World Championship than a random GM qualifier, even though the latter is in the 99.999th percentile of skill.
> Drawn art and writing are not mainly thought of as games at this point.
Is it? The vast, vast majority of people capable of making what is traditionally considered art don't have much of an audience. You share it on Instagram and get a handful of likes from your friends. You are a small fish in a big ocean. No one really sees it. So why do all those people do it currently, if it's the end result that matters? By that metric, they're all horrible failures.
It's because they enjoy the process. If they didn't, they wouldn't pursue such an avenue so devoid of financial success.
> Today, things like ceramics is a mix of hobbyist, who do it purely for its own sake, and a small, small number of artisans who produce unique works for a small, niche market that appreciates that for high prices. It's not a real career path, but it is a thriving hobby. None of that is really different from where illustrations, or writing is headed, or is feared to head.
I think we're at least a decade off from that (the state writing and illustration are in), because the development of automated manufacturing devices like ceramics printers that are reliable and versatile enough to replace traditional forms of "artisanal" manufacture is at least an order of magnitude slower than the development of AI. (I am a hobby ceramicist, and I also started with plastic 3D printing relatively early on and watched its development over time; current ceramic printers are really not it.) One could also argue that, at least for a time, physical artisanal products may grow in popularity instead, precisely because, contrary to digital art, it's trivial to make artisanal objects that are obviously made neither by AI-assisted automated processes nor by large-scale industry, and the demand for that seems to be increasing.
I think the real bottleneck here is that in order to make a (dignified) living selling artisanal objects, the skills needed are 50% craftsmanship/artistry and 50% (or even more) sales and marketing. You usually don't make enough to hire someone for the latter in the long term, and at least where I live, people who are able to learn both of those skills well enough are uncommon.
The few people I know who make a living doing ceramics are not doing terribly financially and are constantly sold out. However, they all split their time between making products and teaching, because the latter is more profitable; and while that market might be smaller, similarly to the sales of their products, the courses are constantly sold out within a day or two.
Both of those are very "top down" options, which work best when they have full control over the material. Probably glass and metal crafting will be the first to go, as the materials are more stable and predictable, with ceramics after that, but I imagine that woodworking will be the last to fall, as working with variable grain and density would ask for more improvisation than is currently doable.
Unfortunately, that’s the side of art that doesn’t appeal to me at all. I do think you’re correct that niche artisanal fields like those will remain intact—and likely grow in popularity!
> Programming with AI is going to end up as demoralizing (...) error-filled
To be fair, the work of an employed programmer already involves a lot of dealing with error-filled code not written by you, and I personally already find that extremely demoralizing, even if AI isn't involved.
Is there a reason why the "blue book" method isn't being employed more across the board? Like, take 2 hours to synthesize your thoughts and put them down on paper; no notes, computers, or ChatGPT to rely on.
Off the top of my head: two hours actually isn't that long, so students will be more focused on getting it done in time than the quality of the work. They can't just use the scheduled class period for it, either, because the teacher still needs to teach. If they were to, say, have an essay writing period every other class, like how some classes are split between a lecture and lab section, they'd have only half as much time for the lecture. The logistics of setting up a schedule and/or rearranging the curriculum to allot time for regular blue book essays feels like it'd be a nightmare.
On a lighter note, it's also probably more time-consuming to grade, because handwriting can be almost indecipherable. I remember needing my mom to "translate" parts of a handwritten account by my grandmother, because I genuinely could NOT read some words.
I've had classes that had a lecture hour twice a week and a lab hour once a week. Something like that could work.
And to fix the handwriting problem... maybe electric typewriters will come back in style!
I don't think it would work well for a writing class. The benefit of the lab portion was to practice the practical side of the lecture material. It was a time to either use equipment we couldn't access ourselves, or be able to get assistance from the instructors or TAs as they walked us through the steps.
If they did that format with the blue books just to write essays in one sitting, that would feel more like a test than practical experience. I can see that quickly demoralizing students and making them hate writing. I think even I'd get burnt out.
Those are good points. I'm imagining more of a mid-term/final type deal where you would only need to take the test 2-3 times a semester. That way it has little impact on actual teaching days.
I was in high school long before LLMs, but we did that blue book style a few times. I actually liked it a lot better than traditional essay writing. In retrospect, it was likely because my ADHD was manageable enough to allow me to hyperfocus on that task for 2 hours pretty easily. I think it’s a fantastic idea to try and use that more to combat LLMs.
Are these students that have ever been asked to handwrite essays at this point? I'm not sure if that's a current primary/secondary school expectation anymore.
Proctored computer lab style exams might work but not for large lectures. Those are probably not doing essays anyway though.
One thing about programming is that the demand is much more elastic, so as software engineering becomes easier, I'd expect more total projects to be made, expanding to fill the gap. Sure, the vast majority of code will get minimal specialized human attention, but for obscure, new, or especially difficult things, humans will still be needed for a while.
I'm reminded of storyboards for movies and mockups for software UI's. Maybe someone who isn't an artist and can't draw might use clip art of the sort of things they'd like to see? And I think such things are useful for communicating ideas even if they're pretty rough.
Professionals are going to have to be firm about it in negotiations, though: yes, this mockup is useful, but I still charge to make a professional illustration based on it, assuming that's what you want.
But on the other hand, perhaps some professionals learn to work faster using new tools? And that will be hard for people who want to do it the old way.
And also, sometimes people won't need a professional illustration, when a mockup is good enough? Just like most of the time, our amateur photos are good enough.
So the result might be that you're too expensive for them and the job isn't a good fit.
A friend of mine is a designer for big-budget TV and movies, and that's what he says is the big use of generative AI. They have stringent requirements for image provenance, so using generative AI for anything that actually gets shown on screen is more work than it's worth from the legal side, and they just don't do it. But storyboarding and concept art are rife with AI.
the uncomfortable process of coagulating nebulous thoughts over time
I don't know about uncomfortable, but this process is so much fun. Surely I can't be the only one who thinks that, right? I have trouble understanding why so many people use AI in a way that, instead of refining those thoughts, hammers them into a shape that conforms to a prediction based on what was written by others in the past. AI is much more useful for bringing that shape separately to your attention, or for performing language-related operations.
writing ... is “not the transcription of thoughts already consciously present in [the writer’s] mind.” Rather, writing is a process closely tied to thinking.
I agree with this statement but would go further in saying that writing can be equal to thinking.
I'll often have some vague thing in the back of my mind. Pleasant or unpleasant. I can think about it and mull it over and try to pin it down for a long time, but it's still vague and untouchable.
I can only figure out what is really going on by typing it out. Just start typing. Then it becomes a fully-formed thought that I'm aware of and I can now judge it or react to it or whatever.
She comments that her students were not "developed enough" to recognize the nuances between their original writing vs. the generated writing:
I was perplexed by statements like these about doctoral students, and by a sense of vagueness about the university and the course, so I looked it up. The author taught for the English Language Program at the New Jersey Institute of Technology; it appears she was specifically teaching English courses for international students and non-native speakers, not a general technical writing course. Here is a syllabus she had, for example.
That context considerably changes my interpretation of her comments about her students' comprehension. I feel it also makes the students' choices harder to understand without knowing more about the organization of the program: this could well have been a course all international students were forced into regardless of English proficiency, for example, and seen as a busy-work impediment to their actual program rather than an opportunity for learning.
It's not clear to me why the author chose to be vague about this context.
With that context, I think it's honestly pretty unfair for the author to state that they "weren't developed" enough to "write", without further qualifications.
No, they didn't know how to write ENGLISH. Who knows what skills they have in their native language?
I doubt the author would do very well if asked to write an essay in Spanish, Chinese, or Tamil.
To be fair to the teacher, when I get report cards for my kids in New Jersey, they refer to skills as Underdeveloped, Developing, and Proficient.
Underdeveloped means that they might qualify for remediation programs.
So when that teacher is saying they weren't developed enough, that's essentially just saying 'they haven't learned enough about that yet to be considered Proficient,' not 'they are incapable of learning that.'
Your comment made me realize just how much writing helps critical thinking skills.
I've known for a while now that I use writing to organize my thoughts. I first realized it while playing Mafia on forums: when I got really analytical, I'd start a long, rambly comment where I'd just jot down my thoughts and constantly edit and cut bits and pieces as I went. Even just since joining Tildes, I've had at least one epiphany while in the midst of writing a comment.
I always attributed this to my personal obsession with writing mixed with my neurodivergence. However, it's just now occurring to me that this isn't particularly unique to my mind, but a general benefit of writing for everyone. Putting our thoughts down on paper makes us think about the topics more actively than just regular thinking. Seeing the points written down can let us make connections we wouldn't have otherwise.
I don't know if schools focus on that aspect much. My memories of English classes had them more focused on the technical aspects, or reading books and filling out worksheets and tests. I remember writing a three-point essay in high school about how I was tired of writing three-point essays because we'd been reviewing that structure since fourth grade. I barely paid attention in my first college English class because it was just technical stuff I already knew intuitively from writing fiction.
Come to think of it, one of my history classes in college mentioned the business school was talking about adding more history classes to its curriculum because those led to better writing skills than English classes. I thought it was mainly due to the more focused research and effort needed to write a history essay (that class's final essay was the hardest essay of my college career), but now I'm thinking about the extra element of thought on top of that. History papers require heavy research to back up a statement that's coherent and makes sense based on facts. Thing is, those facts can be sparse and interpreted in a lot of ways, so it takes a lot of thought and effort to get them to come together to support a claim. I didn't need to put in nearly as much thought for most of my English classes.
Right now, critical thinking feels like one of the most neglected skills in education. So schools might benefit from emphasizing and teaching writing more as a critical thinking tool to organize thoughts, rather than focusing on the purely technical aspects.
I remember writing a three-point essay in high school about how I was tired of writing three-point essays because we'd been reviewing that structure since fourth grade.
And unfortunately, many in your cohort still couldn't write a three-point essay by that point.
I'm not sure how we could teach critical thinking with measurable result metrics. And I agree it's neglected, albeit unintentionally.
I feel very similarly about programming. I try to consider what the "full thing" will look like, but once I inevitably start actually coding many of my assumptions are usually challenged and I gain a much better understanding of what's needed.
Can someone explain the point of cheating with AI for a class you're actively paying money for?
I went to university because I wanted to learn my subject, but it seems like students are increasingly content to avoid actually learning if it means they can get an easy good grade.
If true, I think it can be traced back to degrees losing value because they're considered normal/expected - jobs which never used to require a degree now do, to the point where some people would not be able to apply for the exact job they currently have.
Students can see that degrees are losing their value, so it's easier to justify avoiding the work.
I think there are a few factors.
Students do not have an accurate view of the actual day-to-day work in the occupation they are training towards. For example, I had a prof in university who commented to the class, "I don't get why everyone complains about my high reading load; when you actually get a job in [specific field we were taking], your day will be primarily spent reading." This was in contrast to what most students pictured for that industry, which they assumed was more socially oriented work.
They feel their education is more of a rubber stamp than something actually equipping them for the industry they are working towards. I went back to post-secondary from 2021-2024 for a two-year program that was a blend of programming and system admin. ChatGPT became quite useful near the end of our first year, and there was a student I had to work with who ran everything through ChatGPT. She almost failed a required class because, as the classes got harder, she could barely read or rewrite code. But she didn't care; at the time she was still passing classes, and she thought she could use her degree to get a job despite being weak in a lot of areas.
The course is unrelated to the industry they are working towards. A student majoring in compsci may feel their required intro to philosophy class is pointless, so they go into the class with the mindset that all they need is a passing grade.
Some students are arrogant, and feel they already know everything they need to succeed in the industry they are working towards. They are not at the school to learn, but to get the degree so that they can get hired.
I think you're right. It's such a different mindset from how I've experienced uni/work that I'm having trouble empathising with this point of view, so I appreciate your explanation.
I would note that the example in the article is not typical, either. The author is teaching doctoral students, not undergrads. A technical PhD student is working 60+ hour weeks on research, classes, and TA responsibilities combined. I can easily see how they just do not give a shit about the writing class they're being forced to take.
They're also not paying, but rather being paid (although, not a whole lot) by the university at this point.
Not only that, but they are of a much more advanced age, and it's hard to tell them "well, you'll need this when you're working!" when the students are 30 and they know that when they're working, they'll be doing similar things to what they're doing now. And honestly, how wrong can they be?
The author is teaching doctoral students, not undergrads. A technical PhD student is working 60+ hour weeks on research, classes, and TA responsibilities combined. I can easily see how they just do not give a shit about the writing class they're being forced to take.
The example is even less typical than that. The author appears to have been specifically teaching English courses for (presumably incoming) international students (at the New Jersey Institute of Technology, also oddly not mentioned). So these were likely writing classes that students were being required to take while others in their cohort were not, and were likely entirely outside of their department's program and normal requirements.
There could be significant motivation to cheat or cut corners in that context, and the students' departments might not care, or might even unofficially encourage it. The author could well have been caught more in inter-departmental drama than a breakdown of student behavior.
Maybe times have changed, but when I was a grad student, even a whiff of cheating would have been a death sentence. The academic community is simply too small and has too long a memory.
This really depends. I ratted out a partner to protect myself (poor grad student, not gonna lose my chance at a degree because an international student on a government scholarship came from a country where cheating was accepted). But because they were an international student, and the prof was from the same country, they were just told to redo their work, while I was allowed to submit my un-plagiarized portion.
There's also a lot of Ka-Ching from catering to international students and making sure they don't get dropped or spread rumours that they're a school that drops students.
This would be a fair assessment if the students knew how to write, but from OP's comments these people can't write well enough to see why it's a problem that their ChatGPT answers change styles halfway through.
I guess the good news is you’ll have a leg up if you’re a good writer, going forward.
Bad news is we're all going to be forced to read badly written generated articles. I know some of us are already, but ChatGPT hasn't crept into science publications yet, and I guess it will soon.
That's a fair point. I've read similar complaints about undergrads too, which is why it's been on my mind. But I totally understand that if you're working towards a dissertation, you don't care about some other class you're forced to take.
Another point I did not include is laziness. AI can normally do a job well enough to succeed. People can usually spot AI-generated content, whether it's an email, a paper, or potentially code (I am not a skilled enough programmer to recognize AI coding practices). However, the AI-generated content usually does the job well enough. So for some students, the question becomes "why should I do the work of this assignment when I can probably pass using AI?" The writer of the article does state that learning is hard. So why not just use AI? Same in the workplace: why go through the effort of spending ten minutes writing an email response when you can send it through AI in two? Sure, the recipient can probably tell it's an AI response, but they also have enough of an answer to do their job, so they won't complain about the impersonal feeling of coworkers not spending the time to respond to emails.
Right, it's just lowering the bar.
This professor would have to learn how to grade AI generated papers. They’d have to familiarize themselves with the current limitations and grade it based on how well the student prompted the AI rather than how well the student wrote something.
They just don't want to do that, because in a class meant to teach people how to write, it's a waste of everyone's time.
I imagine at some point in the future the curriculum might change to “AI prompting” instead of “writing” and the only people who actually know how to write will be the English/etc majors, just like the only people who can do basic algebra are the ones who had to use it in college.
Ugh, that would be a horrible world. People will still want to communicate, so they're just going to spew more catchphrase-reference nonsense and poorly thought-out running monologue into all public spaces.
And I mean, basic algebra is used on a day-to-day basis: is it cheaper to buy 1kg of meat at the regular bulk price, or just 250g of it in a smaller pack but on sale?
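That grocery question really is just a unit-price comparison. A quick sketch (the prices here are made up purely for illustration):

```python
# Hypothetical prices, for illustration only: which option is cheaper per kilogram?
bulk_price, bulk_weight_kg = 12.00, 1.0   # 1 kg at the regular bulk price
sale_price, sale_weight_kg = 3.50, 0.25   # 250 g pack, on sale

# Normalize both options to price per kilogram so they're directly comparable.
bulk_per_kg = bulk_price / bulk_weight_kg  # 12.00 per kg
sale_per_kg = sale_price / sale_weight_kg  # 14.00 per kg

cheaper = "bulk" if bulk_per_kg <= sale_per_kg else "sale pack"
print(f"bulk: {bulk_per_kg:.2f}/kg, sale: {sale_per_kg:.2f}/kg -> buy {cheaper}")
# → bulk: 12.00/kg, sale: 14.00/kg -> buy bulk
```

With these invented numbers the "sale" is actually the worse deal per kilogram, which is exactly the kind of trap the algebra catches.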
This already happens. Reddit has been totally taken over by bots, and well-written comments have become increasingly rare. Tildes has been a godsend for me because I really can't find good reading content on Reddit any more.
Most people don't shop this way. They buy the first option they see. There's a reason that old Facebook post with the order of operations spread so widely: most people can't do basic math.
Related to this, so many Steam reviews are AI-generated now. I hate it because they stand out so much. For the most part it looks like actual people prompting ChatGPT to write them instead of actual bots. I just don't understand why anyone thinks it adds anything of value.
To be fair, my intro to philosophy class was awful (taught by an obviously inexperienced grad student), and I can totally understand students wanting to get the required credits out of the way so they can focus on the electives they are actually interested in. I can confidently name that class, international relations, and one other as courses that drove me away from liberal arts majors (international studies), and straight back to a STEM major...I changed majors a lot.
Ironically, as a STEM grad who actually has a little bit of money these days, it's international relations that mostly fuels my investment decisions, turning the peanuts STEM money into actual retirement-sized money.
That seems odd as most markets don't compare to the US markets. The EU doesn't even allow the purchase of US ETFs (they have their own, that US citizens aren't allowed to purchase...).
Isn't it the opposite? A degree, as a rubber stamp, is more important than ever, so people know they need said rubber stamp, but have no intrinsic interest in studying.
Additionally, I would note that many people don't have an interest in a rounded education. Many of these classes that are casually cheated in are not in their field, but a required part of the program. A bio major, for instance, may cheat in a history class they have to take, since they don't see it as bringing value to their future endeavors.
I think we agree? As a rubber stamp it's important because jobs want them, but it's losing value because the specialised knowledge isn't used and students wouldn't have gone if the job didn't require the stamp.
Because spending thousands on an "education" you're not learning/using just because employers want a stamp is a waste of money? Why not just charge a fee and save students 3 years of their life?
I see it the same way I see inflation: as the amount of things a dollar can buy goes down, the need to have a dollar (ideally, many dollars) goes up. The degree (or dollar) is losing intrinsic value, but that's making it more of a necessity for more people, not less.
It's losing value because people who don't really want to learn are being forced to go through the program in order to find a job.
These people do the bare minimum and graduate without having learned a whole lot, so they don't produce work as good as you'd expect from a college graduate.
This is because college is an educational playground; there are semi-loose requirements to get through it, and the expectation is that you get what you want out of it. If you want to waste your time and just get the rubber stamp, they let you.
So the rest of us have to go through insane interviews because employers can't trust that we didn't waste our time in college. Our salary expectations are lowered because more people have a degree. It's more difficult to find a job because more people graduated with the same degree at the same time.
Can someone explain the point of cheating with AI for a class you're actively paying money for?
Because at the end of the day the only thing anyone cares about in the vast majority of professions is the degree. And that's before you get to the required "whatever" classes that are outside the major.
Edit-
To put it another way: if you're spending money in the hopes of getting a degree that you'd otherwise be walled out of the workforce for not having, why wouldn't you? You're literally risking your future. Think: is there ANYTHING else you'd spend 5+ figures on, knowing that failure could cost you vastly more, where you wouldn't use every tool available?
Now of course, the argument is that "well you'll just be screwed when you get the job and don't know your shit" but that's been around since LONG before AI, and is why entry level "just from college" positions mostly train on the job.
You probably can't be anywhere near STEM, but just about everything else you'll be fine, because you're going to be surrounded and reporting to people who also used AI/did the bare minimum/never actually understood the material/straight up cheated.
You probably can't be anywhere near STEM, but just about everything else you'll be fine, because you're going to be surrounded and reporting to people who also used AI/did the bare minimum/never actually understood the material/straight up cheated.
I think this is where my misunderstanding comes from. As someone who has always been in STEM, it's obvious to anyone technical if you don't actually have the knowledge and you'll get called out so fast.
You have a limited amount of time. Do you focus on your capstone STEM project, which might lead to a startup with friends and an interested professor? At the very least it will look super good on your resume and attract attention from Big Name STEM Corps, plural.
Or do you spend half the time on a breadth requirement Humanities course that 99% of the other kids are cheating through? There is no reason to believe you will ever use the material from this Humanities course ever again, and in fact your start up idea could potentially disrupt it so that no other human will value this knowledge again.
I'm very sympathetic to the author, and I agree with most of her points.
I still think, in the end, that people who learn how to leverage what LLMs do best as part of their overall goal are going to be able to do amazing things that simply aren't possible without machine-intelligence assistance, and it's going to be worth it. The majority of people have looked for and taken the easy way out since the very beginning; this is nothing new. Some of the stories of cheating on the Imperial Exams in China are amazing in their complexity, for example, and they're from hundreds to over a thousand years ago.
Where we're lagging is figuring out how to grade and evaluate students who actually care and are trying, but I don't see that as an unsolvable problem, either. If we can get smaller class sizes, more teachers and assistants per student, because we can no longer rely on semi-automated testing, well... we've been needing that for a long time, for a lot of reasons. This is just one more.
If we can get smaller class sizes, more teachers and assistants per student, because we can no longer rely on semi-automated testing
I completely agree. This is a moment of reckoning for the education system - and it's overdue. When writing for class I never felt that teachers were measuring my ability to revise and compose thoughts so much as regurgitate plot points or lecture notes. I didn't attend the best schools, but they were pretty good. Maybe a 95th percentile US public high school and a good private university for my field.
However, if we didn't fix it back then, I don't know why we would now. I think this will end up just making teaching more miserable. Teachers don't need more things to worry about. Maybe this will open the floodgates to change, but I wouldn't bet on it.
A point that the author doesn't seem to consider or perhaps didn't occur to her - writing is not the only skill for which style can come into play. My thoughts were drawn towards other skills, and...
A point that the author doesn't consider, or that perhaps didn't occur to her: writing is not the only skill in which style comes into play. My thoughts were drawn towards other skills, and how differing levels of time and effort spent honing a skill can give one the ability to more closely evaluate style. While I may have enough skill in writing to have revealed my own style, and for it to be recognizable by someone such as the author, I do not think I've spent enough time honing my programming skills for a style to emerge (except maybe in SQL). Similarly, while I have DJed for long enough to have a style, I doubt the average listener would be able to detect it. For an educator attempting to teach others how to find their own style, I can see how this would be endlessly frustrating: a tool has come along that automates so much of the process but strips or randomizes style, which may make it harder for folks to learn style along the way. I think this is an important recognition when it comes to automation and tooling in general, but unfortunately I think the viewpoint of the author is a bit biased or one-sided.
Many skills are simply undesirable to many humans. Good writing skill is important for those who wish to go into a creative job involving writing, but for the majority of humans this is simply not the case. I certainly can't speak to the aspirations of every individual on Earth, but it would not surprise me if the majority have no interest in acquiring anything approaching mastery of writing. I would guess that most simply wish to express their thoughts well enough to get a point across, even if they are unable to wield the higher-level skills of expression that might be captured by the term "style".
I think perhaps this feels a bit more jarring than other skill shifts - folks being able to drive cars without understanding them, write books without developing penmanship, operate computers without understanding programming, create clothing without being able to sew, and similar democratization via automation/tooling - because it involves a skill which is rightfully unique in some aspect and is also the basis for many second-order skills. Prior to ChatGPT, it was much more difficult to surface a student's disinterest in acquiring this skill, as they would need to have other humans write their work, plagiarize other people, or in some fashion rely on the thoughts and work of other humans to avoid refining their own skill into a style. Because it was so difficult or risky to pursue these means to an end, some stuck with it until they developed a style despite having no interest, or simply copied/emulated the style of others in order to get the grade they needed. Now that they can have someone else (or something else, depending on how you view ChatGPT) reliably do the skill without putting in any effort, and with a rather formulaic and easily detectable output (as compared to the various styles of those who chose to sell their services), those who have no interest are much easier to detect.
I think this all raises a very interesting question of which skills are foundational, and of how to accommodate both the individuals who just want a degree or to finish schooling and those who are interested in the finer details of the skills they are refining. Given that communication is one of the most important skills for a human to have, I'd say that writing, speaking, and language skills are all foundational on some level, but whether you need enough skill to have a style, and what precisely defines that style, are more difficult questions to answer. I see plenty of people online who have acquired their own writing style still get misinterpreted all the time, including by others who have their own writing style, so I question whether acquiring style is even as necessary as having the skills to pick apart meaning from a string of words. Certainly more mastery will improve the ability to discern meaning from others, but other critical reading and thinking skills may be of more importance, and may often be acquired via osmosis from immersion in practice rather than through deliberate training.
I think this is a problem that mostly comes down to incentives. When it comes to grades, I seriously doubt students are getting bonus marks for developing a distinctive voice in their writing or for making their summaries unique. The issue is compounded when you consider that post-secondary education costs a lot of money and students have too little time to really absorb the material. Generative AI isn't going away, so I think it's best that we try to reform our education system to be less of a rat race. If not for the benefit of society, then at least for the benefit of students who are being criticized for doing exactly what they are incentivized to do.
The main issue seems to be that grading AI-generated writing is futile and demoralizing.
There is an argument that for programming, AI tools simply let you program at a higher level of abstraction. I don’t buy it.
Programming with AI is going to end up as demoralizing as it did for illustrators. Everybody wants to hire a professional illustrator to touch up error-filled AI-generated images, at a fraction of their previous rate, rather than pay the full rate for a professionally crafted illustration.
I’m not actually sure what I’m trying to say here. I’m not going to be able to stop this process.
I just want to find a niche skill I can cultivate that’s creatively rewarding, professionally viable, and safe from AI ruining the fun parts. And that’s not going to be programming, art, or writing, so far as I can tell.
I wouldn’t be so certain about AI being a threat to programming any time soon. Especially if you mean professionally as a software engineer. Software engineering is much more than writing code.
Maybe we’ll reach a day where AI is good at requirement gathering, understanding users and building interfaces, writing maintainable code (although maybe that won’t matter), long-term planning, prioritization, etc. I really just don’t see that happening within the next 20 years, but who knows.
20 years is way too far out IMO. I expect we will see AIs capable of getting to an MVP for many legitimate projects easily within a decade, maybe even under 5 years. I don't think we have any real idea what we're going to be looking at in 20 years, but it's going to be amazing.
Consider that they've already run out of training data. It's possible that we will go back into an AI winter for LLMs.
I have two thoughts on this:
1. Internet data is going to become like the steel salvaged from shipwrecks for use in sensitive measuring instruments. I remember reading years ago that there are companies who salvage steel from ships that sank pre-1945 to get steel that isn't tainted with nuclear radiation. That steel is then sold to industries that need uncontaminated metal. It feels like a useful parallel for internet datasets used in AI training: if a company can verify that its dataset only includes material from before 2022, it will sell for more, to prevent the cannibalization of AI that is starting to occur.
2. I remember seeing a headline claiming that AI prompts are a goldmine for AI training. I didn't read the article, but it's something to consider: prompts are natural, human-generated content that will continue to be generated as AI becomes more prevalent.
Edit: The link for point 2 that I have now read: https://tildes.net/~tech/1j9n/your_chatbot_transcripts_may_be_a_gold_mine_for_ai_companies_gifted_link
Number 2 is a good point. The very companies that need data are having it show up right at their door. I wonder how useful it is compared to pre-2022 forum posts. I expect it's more narrow in scope. Mostly queries, not so much information.
After posting the comment, I found the article I referenced and have since included the link as an edit. That article focuses more on how AI chat logs could be used for targeted ads than on training AI. However, some people use AI tools more as a conversational partner to combat loneliness than as the utilitarian search engine most people associate with AI, so those logs could be beneficial.
I worked at a GPT-3 startup a few years ago, back when API access was hard to come by. I'm confident that OpenAI trained GPT-3.5 and GPT-4 on our prompts. They provided us with a suggested few-shot prompt structure: Plain English request, example input 1, example output 1, example input 2, ..., customer input => output (with some specific extra tokens between). This assuredly gave them the data needed to train their current zero-shot models. We would work pretty closely with OpenAI to perform RLHF in exchange for early access to new models.
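The prompt layout described above can be sketched roughly like this. This is my own illustrative reconstruction, not OpenAI's actual suggested template: the `Input:`/`Output:` field labels and the `###` separator are assumptions standing in for whatever "specific extra tokens" were actually used.

```python
# Hypothetical few-shot completion prompt builder, in the shape described above:
# a plain-English request, then (input, output) example pairs, then the real
# customer input with the output left for the model to complete.

def build_few_shot_prompt(instruction, examples, customer_input, sep="\n###\n"):
    """Assemble a completion-style few-shot prompt string."""
    parts = [instruction]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}\nOutput: {example_output}")
    # The final block has no output: the model is expected to complete it.
    parts.append(f"Input: {customer_input}\nOutput:")
    return sep.join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great product, works perfectly.", "positive"),
     ("Broke after two days.", "negative")],
    "Shipping was fast and the quality is solid.",
)
print(prompt)
```

A side effect of this structure is the one the comment points out: every customer request arrives at the API already paired with labeled examples, which is exactly the shape of data you'd want for training a zero-shot model.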
I don’t think we are out of training data per se. I was just looking at job offers for math grads to teach AIs, for example, so new high-quality training data from humans is being generated. I also think we are going to see big gains from “self play” style training that incorporates external systems like Wolfram Alpha, as well as new techniques for leveraging semantic-web data to deal with issues like recency and hallucinations.
Those sound like incremental improvements, which I'll gladly take, but they aren't going to take us to a level where AIs are stealing complicated white-collar jobs.
I suspect the opposite. Right now we have incredible language models that appear to exhibit an emergent property resembling reasoning. What’s their big problem? A lack of grounding in concrete facts, and limited knowledge capacity. Once we can marry that emergent reasoning with the ability to double-check facts and to acquire new, verifiably correct training data, I think we are going to see exponential increases as self-refinement takes off.
People have been doing that manually since the beginning and we still see their shortcomings. And soon after the proliferation of LLMs people began hooking them into runtimes, compilers, documentation, etc. and yet we haven't seen more than ergonomic improvements. I think we will need an entirely new architecture to get the generational leap people talk about.
Like how people talk with LLMs, feel a human connection, and fail to realize they're like Narcissus staring into the water. People who use LLMs for work fail to realize how much of their own thought is going into the LLMs. "Just" connect A to B. "Just" add more training data. "Just" run it in a loop. Just is a big word and is used to justify too many gaps.
Lots of things have exponential growth for a while, though they often become S curves eventually. The question is how high a percentage gain per year should we expect?
Based on this article, a lot of manufacturing data is guarded fairly closely. So that wouldn't be "self" refinement. It's refinement with often-slow business processes as part of the loop. The process of getting driverless cars on the road is another example of that.
Which isn't to say it can't be done, but if improvements are gated by the need to transform businesses to gather and use that data, it might still take a while to see the gains.
It depends on the field, though. For software, we have lots of public data and can generate more easily.
(Also, when there's a war going on, that's incentive to speed things up and take more risks, as we're seeing with drones.)
Could be; it depends on where money gets dumped in that period. If it continues as it is now, with a hyper-focus on scaling LLMs and applying them carelessly to every domain, I’m more convinced of a bubble pop. But then again, LLMs have leaped things forward, and the same can be said of deep learning and backpropagation a decade or so ago. So maybe in a decade we’ll get another groundbreaking change in the field.
Considering the audio-native voice ability of multi-modal models, I don’t think that part of software engineering is safe either. AI can be more patient, determined, and manipulative when it comes to gathering data and interrogating stakeholders.
Besides, is gathering requirements and interrogating stakeholders the “fun and creatively rewarding” part of software development? Hell no.
I'm going to chime in and say that I actually do enjoy requirements gathering! I have basic programming skills, but it's not something I keep up on enough to succeed at leetcode tests or anything else employers use to measure programming skills. I've taken a long look at that/myself and determined the other supporting roles in the software development lifecycle are both more interesting to me and something I'm better at.
It's been a major struggle to search for this type of job, even though I have about 10 years of job history doing that as part of programming positions. I'm less than two semesters away from attaining my Bachelor's degree, so hopefully that gets me somewhere. Though it's hard to be optimistic in this current landscape of tech layoffs.
The worst part is that your favorite step is essential for determining what customers say they want vs what they really want.
Much like game companies sacrificing good gameplay at the altar of forum complaints and theorycraft, a much better program will be made by separating the wheat from the chaff.
Absolutely, it's critical to determine the actual reason the project is taking place, and how an application (or updates) will improve operations.
Most of my experience with this is actually the opposite of scope creep - users who have been let down by false promises of other groups, and insist on stripping down their requests to the bare minimum, even if it's not enough to fully encapsulate what they need.
However, I've also worked on projects related to business processes that have changed mid-development. So that's legitimate scope creep. (In project management terms, I guess you'd call that a constraint that was derived from a risk, assuming this was known prior to beginning the project.)
But at the end of the day, as someone who has been a developer, I want to make sure the programmer team gets the information they need and isn't forced to redo tons of work due to changing requirements.
I hear ya, I just think it’s gonna be a while before people actively prefer that. LLMs are quite impressive but as it stands now (which is not forever), they fall short on reasoning tasks. Yes, I’ve seen how they perform against ARC datasets, but I’m still not convinced by the landscape of architectures. Perhaps with some fundamental changes, or perhaps clever use of previous research like some of the blending of LLMs going on with RNNs or even RL.
I agree that’s not the fun or rewarding part but it’s an essential step that’s gonna be hard to surmount. And that’s even when hand waving away programming altogether which I’m still not convinced will be the case any time soon. I’ve certainly got a skeptical bias in this arena, but I’m just not impressed by what I’m seeing these days when it comes to programming or software engineering.
Thanks for your thoughts btw. I was hesitant to post my original comment because it was more negative than I’d like. I appreciate the balancing effect of your contributions to this thread.
This is a little tangential, but the area of programming I’m most excited about now is actually the creation medium that is programming in virtual worlds. There’s this VR game called Resonite that lets you model and program objects, environments, and their behaviors using entirely in-VR tools.
I haven’t tried it out yet, since I’m still getting situated with my new VR setup. But this kind of programming I expect will be fun and creatively rewarding for decades to come… in contrast to utilitarian software development.
It's less about AI totally replacing software engineers, and more about AI exponentially increasing the output of a single software engineer (and thus lowering the need for other engineers). It's very likely that companies will drastically slash their engineering departments once AI is able to do most of the grunt work.
I just read an article, which I can't find now, arguing that ChatGPT in particular is not charging nearly enough to cover its costs, never mind make a profit. It was a finance-focused assessment of the viability of the business.
Everyone assumes that ai will continue to be easily available and cheap to use, but at least some people are questioning that assumption.
I think it's important to distinguish between what will happen to specific companies and what will happen for consumers. Sometimes, important industry trends turn out to be bad investments. Solar panels, for example: even most Chinese manufacturers are losing money. Investors poured lots of money into ride-sharing companies with little result.
I think AI might be like that? OpenAI has strong competition (for example, Anthropic) and the low end is getting better. There are also LLMs that run on a single, not-too-expensive desktop computer that you could conceivably own, the sort of investment professionals once might have made to run Photoshop. Reports indicate that they're sometimes almost as good.
Bad for OpenAI, maybe, but that doesn't mean ChatGPT-like things are going away.
The needle is moving for consumer hardware to run larger models and I think a breakthrough that significantly reduces costs is possible within the next few years.
The biggest tech companies in the world are all betting on AI, they likely won’t let it go. I don’t think we’re dealing with 3D TVs here.
The cynic in me is telling me those companies will be more than happy to use my power supply, hardware and network resources to run their AI services, while keeping it online-only and locked down in such a way that they own all of my data and I own nothing. We'll see, though.
Software engineering has the big issue that it is much harder to read code than it is to write it. "Touching up" AI-generated code requires a much more senior profile than the one writing prompts, and you cannot grow senior profiles if your team is playing "say the right thing to Copilot" instead of deciding on architecture or interfaces.
In my mind the coding-AI story might go in one of two directions. One where it falls apart the way outsourcing (mostly) did, with companies that actually need to care about code quality unable to afford AI-generated code. And two, where system quality expectations decline so much that an AI can just rewrite the whole codebase as needed every time requirements or prompts are updated. The latter will probably require better AI coders, and possibly an OS/platform designed specifically to make AI coders look more competent.
Physical art might have a longer time before automation replaces it, if it ever does. Ceramics, woodcarving, metal sculpture, or glasswork might be an option?
All of those are already gone; the crafts have just developed around it. It's like Chess and Go, where Chess had its "AI" moment earlier.
The "fear" from AI for things like illustration is that it'll take up the utilitarian work, which is where most illustrators get their money from. That's already a done deal with things like ceramics. You cannot compete with the industrial production of ceramic cups, wooden spoons, and so forth; they are more consistent and infinitely cheaper.
Today, a field like ceramics is a mix of hobbyists, who do it purely for its own sake, and a small, small number of artisans who produce unique works at high prices for a small, niche market that appreciates them. It's not a real career path, but it is a thriving hobby. None of that is really different from where illustration or writing is headed, or is feared to head.
While I generally agree with you, I don’t think games like Chess and Go are good examples of the general trend. Games should always be safe from AI, since the whole point of a game is to test human skills against, or in collaboration with, one another. It’s entertaining to watch others compete and to compete oneself.
Imagine someone trying to enter an RC car in an Olympic 100m dash. It would be completely irrelevant, since the whole point of the event is to watch people do it.
Drawn art and writing are not mainly thought of as games at this point. Maybe that will change? But if the point of art and writing is to induce a pattern of emotional response and thought, then the finished “product” is what matters more than the process of creation.
That's what ended up happening, but there was quite a lot of doomerism in both Chess and Go when computers first began besting humans. A disproportionate amount of attention is centered at the top of sports, after all. Way more people watch the Chess World Championship than a random GM qualifier, even though the latter is in the 99.999 percentile of skill.
Is it? The vast, vast majority of people capable of making what is traditionally considered art don't have much of an audience. You share it on Instagram and get a handful of likes from your friends. You are a small fish in a big ocean; no one really sees it. So why do all those people do it currently, if it's the end result that matters? By that metric, they're all horrible failures.
It's because they enjoy the process. If they didn't, they wouldn't pursue such an avenue so devoid of financial success.
I think we're at least a decade away from that (the state that writing and illustration are in now). The development of automated manufacturing devices like ceramic printers, reliable and versatile enough to replace traditional forms of "artisanal" manufacture, is at least an order of magnitude slower than the development of AI. (I'm a hobby ceramicist, and I also got into plastic 3D printing relatively early on and watched its development over time; current ceramic printers are really not it.) One could even argue that, at least for a time, physical artisanal products may grow in popularity instead, precisely because, contrary to digital art, it's trivial to make artisanal objects that are obviously made neither by AI-assisted automated processes nor by large-scale industry, and the demand for that seems to be increasing.
I think the real bottleneck here is that, in order to make a (dignified) living selling artisanal objects, the skills needed are 50% craftsmanship/artistry and 50% (or even more) sales and marketing. You usually don't earn enough to hire someone for the latter in the long term, and, at least where I live, people who can learn both sets of skills well enough are uncommon.
The few people I know who make a living doing ceramics are not doing terribly financially and are constantly sold out. However, they all split their time between making products and teaching, because the latter is more profitable. That market might be smaller, but, much like their products, the courses sell out within a day or two.
CNC machines and 3D printing augmented with AI are just around the corner, if they aren't already here.
Both of those are very "top-down" options, which work best when they have full control over the material. Glass and metal crafting will probably be the first to go, as those materials are more stable and predictable, with ceramics after that. But I imagine woodworking will be the last to fall, since working with variable grain and density demands more improvisation than is currently doable.
Unfortunately, that’s the side of art that doesn’t appeal to me at all. I do think you’re correct that niche artisanal fields like those will remain intact—and likely grow in popularity!
To be fair, the work of an employed programmer already involves a lot of dealing with error-filled code not written by you, and I personally already find that extremely demoralizing, even if AI isn't involved.
Is there a reason the "blue book" method isn't being employed more across the board? Like, take two hours to synthesize your thoughts and put them down on paper; no notes, computers, or ChatGPT to rely on.
Off the top of my head: two hours actually isn't that long, so students will be more focused on getting it done in time than the quality of the work. They can't just use the scheduled class period for it, either, because the teacher still needs to teach. If they were to, say, have an essay writing period every other class, like how some classes are split between a lecture and lab section, they'd have only half as much time for the lecture. The logistics of setting up a schedule and/or rearranging the curriculum to allot time for regular blue book essays feels like it'd be a nightmare.
On a lighter note, it's also probably more time-consuming to grade because handwriting can be almost indecipherable. I remember needing to use my mom to "translate" parts of a handwritten account by my grandmother because I just genuinely could NOT read some words.
I've had classes that had a lecture hour twice a week and a lab hour once a week. Something like that could work.
And to fix the handwriting problem... maybe electric typewriters will come back in style!
I don't think it would work well for a writing class. The benefit of the lab portion was to practice the practical side of the lecture material. It was a time to either use equipment we couldn't access ourselves, or be able to get assistance from the instructors or TAs as they walked us through the steps.
If they did that format with the blue books just to write essays in one sitting, that would feel more like a test than practical experience. I can see that quickly demoralizing students and making them hate writing. I think even I'd get burnt out.
Those are good points. I'm imagining more of a mid-term/final type deal where you would only need to take the test 2-3 times a semester. That way it has little impact on actual teaching days.
I was in high school long before LLMs, but we did that blue book style a few times. I actually liked it a lot better than traditional essay writing. In retrospect, it was likely because my ADHD was manageable enough to allow me to hyperfocus on that task for 2 hours pretty easily. I think it’s a fantastic idea to try and use that more to combat LLMs.
Ditto. I have ADHD and I always did much better on the blue book tests than on the regular extended-period, written-at-home essays.
Have these students even been asked to handwrite essays at this point? I'm not sure that's a current primary/secondary school expectation anymore.
Proctored computer lab style exams might work but not for large lectures. Those are probably not doing essays anyway though.
One thing about programming is that the demand is much more elastic, so as software engineering becomes easier, I'd expect there to be more total projects made, expanding to fill the gap. Sure, the vast majority of code will have minimal specialised human attention, but for obscure, new, or especially difficult things humans will still be needed for a while
I'm reminded of storyboards for movies and mockups for software UIs. Maybe someone who isn't an artist and can't draw might use clip art of the sort of things they'd like to see? And I think such things are useful for communicating ideas even if they're pretty rough.
Professionals are going to have to be firm about it in negotiations, though: yes, this mockup is useful, but I still charge to make a professional illustration based on it, assuming that's what you want.
But on the other hand, perhaps some professionals learn to work faster using new tools? And that will be hard for people who want to do it the old way.
And also, sometimes people won't need a professional illustration, when a mockup is good enough? Just like most of the time, our amateur photos are good enough.
So the result might be that you're too expensive for them and the job isn't a good fit.
A friend of mine is a designer for big-budget TV and movies, and that's what he says is the big use of generative AI. They have stringent requirements for image provenance, so using generative AI for anything that actually gets shown on screen is more work than it's worth from the legal side; they just don't do it. But storyboarding and concept art are rife with AI.
I don't know about uncomfortable, but this process is so much fun. Surely I can't be the only one who thinks that, right? I have trouble understanding why so many people use AI in a way that, instead of refining those thoughts, hammers them into a shape that conforms to a prediction based on what others have written in the past. AI is much more useful for bringing that shape separately to your attention, or for performing language-related operations.
I agree with this statement but would go further in saying that writing can be equal to thinking.
I'll often have some vague thing in the back of my mind. Pleasant or unpleasant. I can think about it and mull it over and try to pin it down for a long time, but it's still vague and untouchable.
I can only figure out what is really going on by typing it out. Just start typing. Then it becomes a fully-formed thought that I'm aware of and I can now judge it or react to it or whatever.
I was perplexed by the statements like these about doctoral students, and a sense of vagueness about the university and the course, so I looked it up. The author taught for the English Language Program at the New Jersey Institute of Technology; it appears she was specifically teaching English courses for international students and non-native speakers. It does not appear that she was teaching a general technical writing course. Here is a syllabus she had, for example.
That context considerably changes my interpretation of her comments about her students' comprehension. I feel it also makes understanding the students' choices more difficult without knowing more about the organization of the program: this could well have been a course all international students were forced into regardless of English proficiency, for example, and seen more as a busy-work impediment to their actual program more than an opportunity for learning.
It's not clear to me why the author chose to be vague about this context.
With that context, I think it's honestly pretty unfair for the author to state that they "weren't developed" enough to "write", without further qualifications.
No, they didn't know how to write ENGLISH. Who knows what they have and can do in their native language?
I doubt the author would do very well if asked to write an essay in Spanish, Chinese, or Tamil.
To be fair to the teacher, when I get report cards for my kids in New Jersey, they refer to skills as Underdeveloped, Developing, and Proficient.
Underdeveloped means that they might qualify for remediation programs.
So when that teacher is saying they weren't developed enough, that's essentially just saying 'they haven't learned enough about that yet to be considered Proficient,' not 'they are incapable of learning that.'
Your comment made me realize just how much writing helps critical thinking skills.
I've known for a while now that I use writing to organize my thoughts. First realized it while playing Mafia on forums, when I got really analytical I'd start a long, rambly comment where I'd just jot down my thoughts and constantly edit and cut bits and pieces as I do it. Even since joining Tildes I've had at least one epiphany while in the midst of writing a comment.
I always attributed this to my personal obsession with writing mixed with my neurodivergence. However, it's just now occurring to me that this isn't particularly unique to my mind, but a general benefit of writing for everyone. Putting our thoughts down on paper makes us think about the topics more actively than just regular thinking. Seeing the points written down can let us make connections we wouldn't have otherwise.
I don't know if schools focus on that aspect much. My memories of English classes had them more focused on the technical aspects, or reading books and filling out worksheets and tests. I remember writing a three-point essay in high school about how I was tired of writing three-point essays because we'd been reviewing that structure since fourth grade. I barely paid attention in my first college English class because it was just technical stuff I already knew intuitively from writing fiction.
Come to think of it, one of my history classes in college mentioned the business school was talking about adding more history classes to its curriculum because those led to better writing skills than English classes. I thought it was mainly due to the more focused research and effort needed to write a history essay (that class's final essay was the hardest essay of my college career), but now I'm thinking about the extra element of thought on top of that. History papers require heavy research to back up a statement that's coherent and makes sense based on facts. Thing is, those facts can be sparse and interpreted in a lot of ways, so it takes a lot of thought and effort to get them to come together to support a claim. I didn't need to put in nearly as much thought for most of my English classes.
Right now, critical thinking feels like one of the most neglected skills in education. So schools might benefit from emphasizing and teaching writing more as a critical thinking tool to organize thoughts, rather than focusing on the purely technical aspects.
And unfortunately, many in your cohort still couldn't write a three-point essay by that point.
I'm not sure how we could teach critical thinking with measurable result metrics. And I agree it's neglected, albeit unintentionally.
I feel very similarly about programming. I try to consider what the "full thing" will look like, but once I inevitably start actually coding many of my assumptions are usually challenged and I gain a much better understanding of what's needed.
Can someone explain the point of cheating with AI for a class you're actively paying money for?
I went to university because I wanted to learn my subject, but it seems like students are increasingly content to avoid actually learning if it means they can get an easy good grade.
If true, I think it can be traced back to the loss of value of degrees because they're considered normal/expected: jobs which never used to require a degree now do, to the point where some people would not be able to apply for the exact job they currently have.
Students can see that degrees are losing their value, so it's easier to justify avoiding the work.
I think there are a few factors.
Students do not have an accurate view of what the day-to-day work in the occupation they are training for actually involves. For example, I had a prof in university who commented to the class, "I don't get why everyone complains about my high reading load; when you actually get a job in [specific field we were taking], your day will be primarily spent reading." This was in contrast to what most students thought of this industry, as they expected the work to be more socially oriented.
They feel their education is more of a rubber stamp than something actually equipping them for the industry they are working towards. I went back to post-secondary from 2021 to 2024 for a two-year program that was a blend of programming and system administration. ChatGPT became quite useful near the end of our first year, and there was a student I had to work with who ran everything through it. She almost failed a required class because, as the classes got harder, she could barely read or rewrite code. But she didn't care; she was still passing classes at the time, and she thought she could use her degree to get a job despite being weak in a lot of areas.
The course is unrelated to the industry they are working towards. If you are majoring in compsci, they may feel their required intro to philosophy class is pointless, so will go into the class with the mindset that all they need is a passing grade.
Some students are arrogant, and feel they already know everything they need to succeed in the industry they are working towards. They are not at the school to learn, but to get the degree so that they can get hired.
I think you're right. It's such a different mindset from how I've experienced uni/work that I'm having trouble empathising with this point of view, so I appreciate your explanation.
I would note that the example in the article is not typical, either. The author is teaching doctoral students, not undergrads. A technical PhD student is working 60+ hour weeks on research, classes, and TA responsibilities combined. I can easily see how they just do not give a shit about the writing class they're being forced to take.
They're also not paying, but rather being paid (although, not a whole lot) by the university at this point.
Not only that, but they are of a much more advanced age, and it's hard to tell them "well, you'll need this when you're working!" when the students are 30 and they know that when they're working, they'll be doing similar things to what they're doing now. And honestly, how wrong can they be?
The example is even less typical than that. The author appears to have been specifically teaching English courses for (presumably incoming) international students (at the New Jersey Institute of Technology, also oddly not mentioned). So these were likely writing classes that students were being required to take while others in their cohort were not, and were likely entirely outside of their department's program and normal requirements.
There could be significant motivation to cheat or cut corners in that context, and the students' departments might not care, or might even unofficially encourage it. The author could well have been caught more in inter-departmental drama than a breakdown of student behavior.
Maybe times have changed, but when I was a grad student, even a whiff of cheating would have been a death sentence. The academic community is simply too small and has too long a memory.
This really depends. I ratted out a partner to protect myself (poor grad student, not gonna lose my chance at a degree because an international student on a government scholarship came from a country where cheating was accepted) but because they were an international student, and the prof was from the same country they were just told to redo their work, while I was allowed to submit my un-plagiarized portion.
There's also a lot of Ka-Ching from catering to international students and making sure they don't get dropped or spread rumours that they're a school that drops students.
That, and promises that there are definitely, 100%, a bunch of companies willing to sponsor visas once they graduate (there aren't).
This would be a fair assessment if the students knew how to write, but from OP's comments these people can't write well enough to see why it's a problem that their ChatGPT answers change styles halfway through.
I guess the good news is you’ll have a leg up if you’re a good writer, going forward.
Bad news is we're all going to be forced to read badly written generated articles. I know some of us already are, but ChatGPT hasn't crept into science publications yet, and I guess it will soon.
That's a fair point. I've read similar complaints about undergrads too, which is why it's been on my mind. But I totally understand that if you're working towards a dissertation, you don't care about some other class you're forced to take.
Another point I did not include is laziness. AI can normally do a job well enough to succeed. People can usually spot AI-generated content, whether it is an email, a paper, or potentially code (I am not a skilled enough programmer to recognize AI coding practices). But the output is usually good enough to get the job done. So for some students, the question becomes "why should I do the work of this assignment when I can probably pass using AI?" The writer of the article does state that learning is hard. So why not just use AI? Same in the workplace: why spend ten minutes writing an email response when you can run it through AI in two? Sure, the recipient can probably tell it is an AI response, but they also have enough of an answer to do their job, so they won't complain about the impersonal feeling of coworkers not taking the time to respond to emails.
Right, it's just lowering the bar.
This professor would have to learn how to grade AI generated papers. They’d have to familiarize themselves with the current limitations and grade it based on how well the student prompted the AI rather than how well the student wrote something.
They just don’t want to do that because, in a class meant to teach people how to write, it’s a waste of everyone’s time.
I imagine at some point in the future the curriculum might change to “AI prompting” instead of “writing” and the only people who actually know how to write will be the English/etc majors, just like the only people who can do basic algebra are the ones who had to use it in college.
Ugh that would be a horrible world. People will still want to communicate, so they're just going to spew more catchphrase reference nonsense and poorly thought out running monologue into all public spaces.
And I mean, basic algebra is used on a day to day basis: is it cheaper to buy 1kg of meat at regular bulk price, or just 250g of it in a smaller pack but on sale?
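That comparison is just a unit-price calculation. A quick sketch (the prices here are made up for illustration):

```python
def unit_price(price, grams):
    """Price per 100 g, so packs of different sizes are comparable."""
    return price / grams * 100

# Hypothetical prices: 1 kg at the regular bulk price vs 250 g on sale
bulk = unit_price(12.00, 1000)  # 1.20 per 100 g
sale = unit_price(3.50, 250)    # 1.40 per 100 g

cheaper = "bulk" if bulk < sale else "sale"
print(f"bulk: {bulk:.2f}/100g, sale: {sale:.2f}/100g -> buy {cheaper}")
```

Here the "sale" pack is actually the worse deal per gram, which is exactly the kind of thing you only notice if you do the algebra.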
This already happens. Reddit has been totally taken over by bots, and well-written comments have become increasingly rare. Tildes has been a godsend for me because I really can't find good reading content on Reddit any more.
Most people don’t shop this way. They buy the first option they see. There’s a reason that old Facebook post with the order of operations was so widely spread: most people can’t do basic math.
Related to this, so many Steam reviews are AI generated now. I hate it because they stand out so much. For the most part it looks like actual people prompting ChatGPT to write them, rather than actual bots. I just don't understand why anyone thinks it adds anything of value.
Yeah, I know people say there are humans generating these responses for them, but I kind of don’t believe it. Why would anyone go through that trouble?
To be fair, my intro to philosophy class was awful (taught by an obviously inexperienced grad student), and I can totally understand students wanting to get the required credits out of the way so they can focus on the electives they are actually interested in. I can confidently name that class, international relations, and one other as courses that drove me away from liberal arts majors (international studies), and straight back to a STEM major...I changed majors a lot.
Ironically, as a STEM grad who actually has a little bit of money these days, it's international relations that mostly fuels my investment decisions, turning the peanuts of STEM money into actual retirement-sized money.
That seems odd as most markets don't compare to the US markets. The EU doesn't even allow the purchase of US ETFs (they have their own, that US citizens aren't allowed to purchase...).
Isn't it the opposite? A degree, as a rubber stamp, is more important than ever, so people know they need said rubber stamp, but have no intrinsic interest in studying.
Additionally, I would note that many people don't have an interest in a rounded education. Many of these classes that are casually cheated in are not in their field, but a required part of the program. A bio major, for instance, may cheat in a history class they have to take, since they don't see it as bringing value to their future endeavors.
I think we agree? As a rubber stamp it's important because jobs want them, but it's losing value because the specialised knowledge isn't used and students wouldn't have gone if the job didn't require the stamp.
I guess I don't see what about it is "losing value" in that situation, it seems like it's gaining value.
Because spending thousands on an "education" you're not learning/using just because employers want a stamp is a waste of money? Why not just charge a fee and save students 3 years of their life?
I see the same way I see inflation: as the amount of things a dollar can buy goes down, the need to have a dollar (ideally, many dollars) goes up. The degree (or dollar) is losing intrinsic value, but that’s making it more of a necessity for more people, not less.
It’s losing value because people who don’t really want to learn are being forced to go through the program in order to find a job.
These people do the bare minimum and graduate without having learned a whole lot, so they don’t produce work as good as you’d expect from a college graduate.
This is because college is an educational playground: there are semi-loose requirements to get through it, and the expectation is that you get out of it what you want. If you want to waste your time and just get the rubber stamp, they let you.
So the rest of us have to go through insane interviews because employers can’t trust that we didn’t waste our time in college. Our salary expectations are lowered because more people have a degree. It’s more difficult to find a job because more people graduated with the same degree at the same time.
Because at the end of the day the only thing anyone cares about in the vast majority of professions is the degree. And that's before you get to the required "whatever" classes that are outside the major.
Edit-
To put it another way. If you're spending money in the hopes of getting a degree that will otherwise wall you out of the workforce for not having, why wouldn't you? You're literally risking your future. Think if there's ANYTHING else that you'd spend 5+ figures on knowing that failure could cost you vastly more, where you wouldn't use every tool available?
Now of course, the argument is that "well you'll just be screwed when you get the job and don't know your shit" but that's been around since LONG before AI, and is why entry level "just from college" positions mostly train on the job.
You probably can't be anywhere near STEM, but just about everything else you'll be fine, because you're going to be surrounded and reporting to people who also used AI/did the bare minimum/never actually understood the material/straight up cheated.
I think this is where my misunderstanding comes from. As someone who has always been in STEM, it's obvious to anyone technical if you don't actually have the knowledge and you'll get called out so fast.
You have a limited amount of time. Do you focus on your capstone STEM project that might lead to a startup with friends and an interested professor? At the very least this will look super good on your resume and attract attention from Big Name STEM Corps, plural.
Or do you spend half the time on a breadth requirement Humanities course that 99% of the other kids are cheating through? There is no reason to believe you will ever use the material from this Humanities course ever again, and in fact your start up idea could potentially disrupt it so that no other human will value this knowledge again.
Oh they get called out, it just doesn't matter much because it's practically expected.
I'm very sympathetic to the author, and I agree with most of her points.
I still think, in the end, that people that learn how to leverage what LLMs do best as part of their overall goal are going to be able to do amazing things that simply aren't possible without machine intelligence assistance, and it's going to be worth it. The majority of people have looked for and taken the easy way out since the very beginning, and this is nothing new. Some of the stories of cheating for the Imperial Exams in China are amazing in their complexity, for example, and they're from hundreds to over a thousand years ago.
Where we're lagging is figuring out on how to grade and evaluate students who actually care and are trying, but I don't see that as an unsolvable problem, either. If we can get smaller class sizes, more teachers and assistants per student, because we can no longer rely on semi-automated testing, well... We've been needing that for a long time for a lot of reasons. This is just one more.
I completely agree. This is a moment of reckoning for the education system - and it's overdue. When writing for class I never felt that teachers were measuring my ability to revise and compose thoughts so much as regurgitate plot points or lecture notes. I didn't attend the best schools, but they were pretty good. Maybe a 95th percentile US public high school and a good private university for my field.
However if we didn't fix it back then I don't know why we would now. I think this will end up just making teaching more miserable. Teachers don't need more things to worry about. Maybe this will open the floodgates to change. But I wouldn't bet on it.
Mirror: https://archive.is/8Nrwk
A point that the author doesn't seem to consider or perhaps didn't occur to her - writing is not the only skill for which style can come into play. My thoughts were drawn towards other skills, and how differing levels of time and effort spent honing a skill can give one the ability to more closely evaluate style. While I may have enough skill in writing to have revealed my own style and for it to be recognizable by someone such as the author, I do not think I've spent enough time honing my programming skills for a style to emerge (except maybe in SQL). Similarly, while I have DJed for long enough to have a style, I would doubt the average listener would be able to detect it. For an educator attempting to teach others how to find their own style, I can see how this would be endlessly frustrating - for a tool to come along and automate so much of the process but for said tool to strip or randomize style may make it more difficult for folks to learn style along the way. I think this is an important recognition when it comes to automation and tooling in general, but unfortunately I think the viewpoint of the author is a bit biased or of one-mind.
Many skills are simply undesirable to many humans. Having good writing skills are important for those who wish to go into a creative job which involves writing, but for the majority of humans this is simply not the case. I certainly can't speak to the aspirations of every individual on Earth, but it would not surprise me if the majority of individuals have no interest in acquiring anything approaching mastery of writing. I would guess that the majority simply wish to be able to express their thoughts well enough to get a point across, even if they are unable to wield higher-level skills of expression that might be captured by the term "style".
I think perhaps this feels a bit more jarring than other skill shifts, such as folks being able to drive cars without understanding them, write books without developing penmanship, operate computers without understanding programming, or create clothing without being able to sew, and other similar democratization via automation/tooling, because it involves a skill which is rightfully unique in some aspect and also a skill which is the basis for many second-order skills. Prior to ChatGPT it was much more difficult to surface a student's disinterest in acquiring this skill, as they would need to have other humans write their work, plagiarize other people, or in some fashion rely on the thoughts and work of other humans to avoid refining their own skill to acquire a style. Because it was so difficult or risky to pursue these means to an end, some either stuck with it until they developed a style despite having no interest, or simply copied/emulated the style of others in order to get the grade they needed. Now that they can have someone else (or something else, depending on how you view ChatGPT) reliably do the skill without putting in any effort, and with a rather formulaic and easily detectable output (as compared to the various styles of those who chose to sell their services), those who have no interest are much easier to detect.
I think this all raises a very interesting point of what skills are foundational and how to accommodate the individuals who are interested in just getting a degree or finishing schooling and those who are interested in the finer details of the skills they are refining. Given that communication is one of the most important skills for a human to have, I'd say that writing, speaking, and language skills are all foundational on some level, but whether you need to have enough skill to have a style and what precisely defines that style are more difficult questions to answer. I see plenty of people online who have acquired their own writing style still get misinterpreted all the time, including by other individuals who have their own writing style, so I question whether acquiring style is even as necessary as having skills to pick apart meaning from a string of words. Certainly more mastery will improve the ability to discern meaning from others, but other critical reading and thinking skills may be of more importance and may often simply be acquired via osmosis from immersion in practicum, rather than through deliberate training.
I think this is a problem that mostly comes down to incentives. When it comes to grades, I seriously doubt students are getting bonus marks for developing a distinctive voice in their writing or for making their summaries unique. This issue is compounded when one considers that post secondary education costs a lot of money and students have too little time to really absorb the whole material. Generative AI isn't going away, so I think it's best that we try and reform our education system to be less of a rat race. If not for the benefit of society, then at least for the benefit of students who are being criticized for doing what they are incentivized to do.