I really like the idea of kids asking questions to ChatGPT. Part of it makes me sad, because one of my favorite things as a teacher is answering kids’ questions, but that’s just my own ego talking.
I’m at a considerably lower level of education than the people in the article, but I definitely think kids could still get a lot of use out of it in the same manner. I especially think more shy students, or those with social anxiety, would benefit from it. Asking a question in-class is a giant hill to climb for some of them, and mustering up the wherewithal to ask the question likely requires so much of them that their actual learning of the content suffers at that point. This could be an easy, low-stakes way of asking their question in the flow of the class without having to surmount social obstacles along the way.
Also, apart from education and just relative to development, I also think something like this could be good for those sorts of adolescent/teenage questions that are embarrassing to ask others about. I don’t know if ChatGPT specifically is up to that kind of task currently, but I could see a variation of it being set up as a private, judgment-free support that answered kids’ uncomfortable questions in age-appropriate ways.
Of course we also saw ChatGPT’s limitations this week. Hiromitsu Higashi, a thoughtful student with an exceptionally broad range, pointed out that the system is good at some things but not as good at others: it created fake names in literature reviews, and has no concept of accuracy. “... Don’t trust book recommendations. It will hallucinate. It will make mistakes...
I think it is important to remember that this positive experience was in the context of computer literacy and coding. As quoted above, ChatGPT has significant weaknesses in other contexts. When I was researching a topic and asked some general questions about it and the people who had written about it, ChatGPT seemed fairly knowledgeable. But when I asked it to pull quotes from those works (written in the 1600s, and in the public domain), it "hallucinated" and made up the quotes out of thin air, complete with the correct authors and book titles and a reasonable approximation of the writing style. It was only when I went to verify the quotes and pull page numbers that I found the fabrication.
ChatGPT's confidence even when it is wrong, combined with human laziness and busyness, is a bad combination. I think these limitations in ChatGPT could be corrected by forcing it to use an internal fact checker, but as it stands right now it is a dangerously flawed tool, because it can unwittingly trick us into trusting its output too much.
I am fascinated by where this will take us; I just want to make sure it is somewhere that we want to go.
I've been a skeptic of ChatGPT for the reasons you say. It's often wrong and sometimes hallucinates. I've been thinking that it might be good for fiction or in situations where you check its results, but certainly not as a search engine.
I thought this article was interesting because it shows how I might be wrong about how bad it is to be wrong. Maybe there are situations where being wrong sometimes isn't so bad? Maybe getting an answer quickly is more important?
Do you think that when a student asks a classmate about something, the answer is always right? Maybe being wrong and dealing with mistakes is part of the learning process. Maybe school is a relatively safe space where it's okay to be wrong a lot, on the way to learning how to be less wrong?
If kids use ChatGPT and it sometimes gives the wrong answer, maybe they'll learn not to trust it too much, in a situation where that's relatively safe. This is similar to what happens when they get wrong answers from a classmate.
Also, have you seen the Internet? Lots of wrong information out there. Learning to navigate a world with lots of wrong information is valuable.
I'm not entirely convinced by this contrarian argument, but I'm less convinced than I was about the "dangerously" part of "dangerously wrong."
Last week brought two related features of artificial intelligence in education into sharp relief: the first is that all that talk about plagiarism and cheating and abuse is uninspiring and counterproductive. Yes, some unambitious students will use this new tool to cover subpar performance, and yes, we could talk about how to detect or disincentivize such behavior. The far more inspiring conversation is a different one: how can the most creative, the most ambitious, and the most brilliant students achieve even better results faster? How can educators help them along the way? And how can we both use machines that learn, and help learn, to push out the edge of human knowledge through cutting-edge research faster and in new ways?
From the article:
The second benefit follows: “you no longer disrupt the flow of the class,” as several students pointed out when we wrapped up — for example with a question like, “What’s an ‘offset’ in a binary file?” Or: “What is an embedded resource in malware?” You don’t want to interrupt the class — ask ChatGPT. Back in the day you had to Google for a few minutes at a minimum, jump hectically from result to result, wade through some forum, until you finally found a useful response; by then the class conversation had moved on. ChatGPT will give you the response in 5 to 15 seconds, literally. That response speed was game-changing last week, because we could keep up with the instructor in real time, reading ChatGPT’s explanation of embedded resources while listening to Juan Andres talking about the same thing.
I’ve heard of people using it instead of a search engine, but this is the first explanation I’ve seen that makes it sound compelling. For basic (but specialized) definitions, I suppose the risk of ChatGPT getting it wrong isn’t too great, since you’ll likely figure it out from other things you’re doing in class?
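For what it’s worth, the “offset” question quoted above has exactly the kind of small, verifiable answer this use case is good for: an offset is just a byte position measured from the start of a file. A quick self-contained sketch in Python (the bytes here are made up for illustration):

```python
import io

# A tiny in-memory "binary file" standing in for a real one.
data = io.BytesIO(b"MZ\x90\x00...payload...")

# An offset is simply a byte position counted from the start of the file.
data.seek(2)          # jump to offset 2
chunk = data.read(4)  # read 4 bytes starting at that offset
print(chunk)          # b'\x90\x00..'
```

If ChatGPT’s definition were wrong here, a two-line experiment like this would expose it almost immediately.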
Most of the time I’ve used it, I start with ChatGPT and then use Google to verify its answer. It’s easier for me to find an answer if I already start with what ChatGPT told me, because I have all the terms ready to search.
I imagine future tech will just do this step for me and link to the proofs.
The future is already here, it is just not very evenly distributed. ;)
Deepmind's Sparrow works this way. Instead of answering immediately using its internalized knowledge, it can decide to do a Google search using a query it formulates on the fly. Then it will summarize the top result(s) and show you a snippet and a link to the source.
Though one still has to be careful with it. IIUC, the evidence it cites fails to actually support its summary about 20% of the time, so I imagine the overall error rate is at least that high.
Edit: The way I phrased that may imply that access to Sparrow is publicly available, which is not the case yet.
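A rough sketch of that search-then-cite pattern, for anyone curious. This is my own toy illustration over a hard-coded “index” (the entries and example.com URLs are invented), not Sparrow’s actual pipeline or a real search API:

```python
# Toy "search index": query keywords mapped to (snippet, source URL).
# All entries are made up for illustration.
INDEX = {
    "embedded resource": (
        "An embedded resource is data bundled inside an executable.",
        "https://example.com/malware-glossary",
    ),
    "binary offset": (
        "An offset is a byte position measured from the start of a file.",
        "https://example.com/binary-basics",
    ),
}

def answer_with_citation(query: str) -> str:
    """Retrieve evidence for the query, then answer alongside its source."""
    for key, (snippet, url) in INDEX.items():
        if key in query.lower():
            # A real system would have the model summarize the retrieved page;
            # here we just echo the snippet together with where it came from.
            return f"{snippet}\nSource: {url}"
    return "No supporting evidence found."

print(answer_with_citation("What is an embedded resource in malware?"))
```

The point of the pattern is that the link travels with the answer, so the 20%-ish mismatch between evidence and summary is at least checkable by the reader.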
I want to say this is a great use for ChatGPT. I was messing with it a couple weeks ago to get it to spit out some SuperCollider code, to try to get it to make music, and it oddly never worked.
I'm doing a project for fun that requires me to use stuff I don't know, and it's a great way to get the general idea of how things work, like how people typically format HTML and general practices around tagging, so I can get short, easy examples. It's really streamlining the "easy" questions.
Good article.
Check out Perplexity AI. It does something like what you're talking about.
A version of this that limits its sources to scholarly materials would be amazing for self learning.