27 votes

Students and their ChatGPT

9 comments

  1.
    ShroudedScribe

    @dlay I have some suggestions for improving this article.

    • I don't know what a core competency tree looks like or what it contains. From googling I think this is an example of one? Explaining the expectation of your instructor clearly will help readers understand why ChatGPT did not perform the task appropriately.

    • Tied to the first one, unless this is an insanely simple task, 15 minutes to create an output that requires research seems very difficult. While using ChatGPT blindly isn't justifiable, I at least understand the motivation when students are under stress.

    • One of your paragraphs asks what LLMs are even truly good for. I actually asked ChatGPT this very question, and it included two of the use cases where I believe they do have genuine value: translation between languages, and programming assistance. Like everything else done with an LLM, you shouldn't use these capabilities to blindly generate anything important. But they can be useful in a pinch.

    I hope that was helpful. I do think it's valuable for people to share their experiences of how their peers use LLMs, how reliable the output is, and what the consequences of their usage may be.

    21 votes
    1. dlay

      Thank you for your feedback! It means a lot, really.

      1. This was the thing we needed to make.

      2. The idea was for us to make a simple first draft of this tree for the given company, doable in 15 minutes. In short, students had to look up, for this company specifically: what they sell, what they’re exceptional at, and how those two are connected. It’s very understandable that students’ first reaction is to open ChatGPT; I should’ve clarified that. I wanted the blog post to be about the apparent inability to distinguish what LLMs are and aren’t useful for.

      3. LLMs do have valid use cases. I frequently use them myself.

      I’m still practicing writing in this format, let alone writing in English. Thanks again for the feedback. I jotted down the answers above both to reply to you and to serve as the basis for a future iteration of the blog post.

      15 votes
  2.
    Abdoanmes

    I totally get where you're coming from. It is frustrating to see classmates outsourcing their thinking to AI without questioning it. And you’re right: it’s not just about them turning in low-effort assignments, it's about missing out on the actual learning process.

    But zooming out for a second, what we’re seeing here is part of a much bigger shift in how we learn, work, and think. Every major technological leap, whether the printing press, calculators, the internet, or now AI, has changed how we interact with knowledge. For centuries, education has been about memorization and analytical thinking because humans had to be the processors of information. But now, when an AI can generate answers in seconds, we have to ask: what does learning even look like in this new reality?

    That doesn’t mean students should just blindly accept whatever AI spits out, far from it. If anything, it means we need more critical thinking, not less. The real skill of the future isn’t just knowing things, it’s knowing how to question, how to verify, and how to apply information in meaningful ways. Unfortunately, most students haven’t been taught how to engage with AI properly, which is why so many assume that “ChatGPT said it, so it must be right.”

    So what do we do about it right now? That's a hard question to answer, but some are trying. As someone who has worked in universities for over 20 years, I feel it. Here are some ideas we have been discussing in our Center for Teaching and Learning as well as the Academic Technology teams in IT.

    Instead of banning it or ignoring it, professors could require students to explain why an AI-generated response is wrong or incomplete. That way, students engage with the material instead of just copying and pasting. If students understood when AI is useful and when it fails, they’d be much less likely to trust it blindly. A few well-placed lessons showing ChatGPT confidently making things up would go a long way.

    If an AI can do the entire task instantly, maybe the assignment needs to evolve. Maybe it’s less about “fill in the core competency tree” and more about “explain why this tree is structured this way and what it tells us about the company.”

    You’re right that exams force students to actually learn, since they can’t rely on AI there, but I think there’s an opportunity to use AI in a way that enhances learning instead of replacing it. Right now, we’re in the messy middle of figuring out that balance.

    I really appreciate you bringing this up because these are exactly the kinds of conversations we need to be having. Thanks for sharing your experience and using critical thinking.

    15 votes
    1.
      Weldawadyathink

      If an AI can do the entire task instantly, maybe the assignment needs to evolve.

      You sound like some of my math teachers in high school with regard to calculators. We were allowed calculators on every test, but if you were using one very much, you probably didn’t understand the material very well and wouldn’t get a good grade on that test. They were testing whether you knew the correct process to solve their problems, not whether you could get the correct answer. If you made a silly arithmetic mistake, like flipping a sign or adding when you should have multiplied, the teachers would not mark the whole problem incorrect. You would lose some points, but only a small fraction. Writing tests this way is different from writing tests before calculators existed. But because teachers adapted their teaching materials, I received a better education. Before them, I had teachers who parroted the line “you won’t always have a calculator with you.” Well, actually, I do.

      I think LLMs might be the calculator moment for subjects that require more language. They are not going away. You can try all you want to ban them from schools, but it won’t work. Teachers instead need to adapt their teaching strategies to this new tool.

      13 votes
      1.
        sparksbet

        You can try all you want to ban them from schools, but it won’t work. Teachers instead need to adapt their teaching strategies to this new tool.

        And I think the AI course posted to Tildes relatively recently (I don't have a link rn but will try to find it later) would be super useful to English teachers -- knowing the limits of LLMs and having the skills to avoid their pitfalls already overlaps heavily with the kinds of critical thinking humanities courses are already supposed to be teaching, even without considering how important it'll be in the future. LLMs aren't magic and they can't solve every problem, but they're not going to stop existing. Students would be better served by learning how to avoid relying on them in ways they're not suited for than by simply being ordered never to use them.

        2 votes
        1.
          WeAreWaves

          I think you mean this one? https://thebullshitmachines.com

          4 votes
          1. sparksbet

            Yes, thank you! I couldn't remember the name of it and didn't want to sort through every GenAI related thread to find it.

    2. Habituallytired

      I've long said that education needs to evolve from memorizing information to learning where information comes from, how to get to that information, and parsing what is correct and what isn't.

      100% agree on teaching more critical thinking, so students can distinguish which information is correct and which is actually necessary.

      2 votes
  3. nic

    @dlay you should praise your teacher for finding assignments that ChatGPT can't answer. I tried feeding in a picture of what the teacher was looking for, and ChatGPT still struggled to create anything coherent.

    3 votes