I feel like you just need to use ChatGPT for a while to realize that it's really good at pretending to know what it's talking about, but that doesn't mean what it says is correct or makes sense.
I'm currently studying Australian migration law and somebody got busted in one of our last big assessments using ChatGPT. About half of the things it came back with were extremely out of date and the other half it had invented from thin air. Overall, it was both hilariously and dangerously incorrect. The prof posted the submitted assignment in full with the student's details redacted as a cautionary tale.
Yikes... harsh, but that's a lesson better learned in school than out in court or the real world.
I mean, it is a Chinese Room thing. ChatGPT knows how it is supposed to construct an answer to a question, but it cannot understand either the question or the answer it is providing.
Your answer is exactly how AI is right now. It's at a dopamine stage. It likes to be praised when it gets something right, but it spends a lot of time getting things wrong and inventing things.
I use it for code checks and for quickly creating scripting functions in bash and PowerShell. However, the amount of lies it tells about switches that commands don't actually have is just crazy; it invents things it expects should exist, and when you call it out it apologises and tries again.
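One cheap habit that catches most of these invented switches: before running a suggested flag, grep the command's own --help output for it. A minimal sketch, where `tar` is just an example command and `--made-up-option` is a hypothetical flag standing in for whatever the model invented:

```shell
# Sanity-check a suggested switch against the command's own --help text
# before trusting it. "--made-up-option" is a placeholder for a flag
# ChatGPT claimed exists.
flag="--made-up-option"
if tar --help 2>&1 | grep -q -- "$flag"; then
    echo "tar's --help mentions $flag"
else
    echo "tar's --help does not mention $flag; treat it as invented"
fi
```

It's only a first-pass filter, since --help summaries aren't always exhaustive, but it catches the flags that were conjured out of nothing.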
yeah, just repeatedly ask it "are you sure about your answer?" and you can get it to change its mind several times back and forth. very reliable!
I think people are surprised by a few things:
that AI will happily just lie to you
it doesn't understand what it's saying, it's just confidently repeating what other people have said.
I love the examples of giving it a puzzle that has been slightly reworded (the surgeon's child; crossing a river with a bag of grain, a chicken, and a fox) to see it insist that the answer it gives is correct.
Anyway, I really like chatting to the Bing search robot and I wonder if Microsoft are gathering that data to see how many people log in to say "Hey Bing robot, hope you had a good day" because I bet it's a disturbing number of people. See also how many people spend ages talking to Eliza, and that was terrible at chatting.
I found out the other day that so many people online insist so vehemently on certain forms of alternative medicine that it'll cite a study (the only study on the topic at all) which showed the specific alternative treatment did not change anything in the measured values, and present it as proof that the treatment supposedly works. As in, it patiently insists on the exact opposite of what is there.
They really trained it on Facebook, Instagram and Twitter posts, I feel.
ChatGPT is increasingly being overestimated and used for things it obviously shouldn't be. For me, there's a lot of Schadenfreude in this, but I'm also concerned about the widespread misunderstanding of how LLMs work... To be honest, the prospect of anything like this happening in a high-profile case makes me nervous.
Hopefully, this situation is a good deterrent.
I'm hoping these are growing pains of a new technology, and that once it's more prevalent people will understand its shortcomings, or more specifically its intended use cases. People treat it like a sentient oracle of truth, not a Google search that can package its results nicely.
A news site I subscribe to posted an article saying they'd used an AI (with a made-up name) to predict upcoming sports results. To its credit, the article did say down the bottom that they'd actually used ChatGPT, but didn't mention they were using an AI that is absolutely not designed for that purpose. I made a comment about that and they deleted it!
I tested some multiple-choice questions from my bar prep to see what answer choice ChatGPT would select and how it would justify it, and it was wrong the majority of the time while providing very convincing explanations. I can definitely see how some boomer lawyers are using it without fully understanding how it works; I'm sure some of them see it as a free law clerk or paralegal.
In the past some of Legal Eagle's vids have been interesting and entertaining, but this one really seems to miss on both those points. Yes, ChatGPT lies, yes those lawyers screwed up, but LE's shtick here is too drawn out. He's right to point to ChatGPT's shortcomings and ability to mislead, but there's a lot of bloviating in this vid.
I've never used ChatGPT for my workflow, for the following reasons aside from those mentioned above:
I could see it being useful for generating a quick summary of something where the topic has already been well rehearsed in publicly available materials (e.g. an online encyclopedia), but I could just as easily consult human-authored texts for this.
A lot of legal resources in my jurisdiction (primary and secondary texts) are gated behind hefty subscription passes or pay-per-document access. I've noticed the trend this century of publicly significant decisions being open access (which is great), but I'd be very skeptical for older cases, especially historical materials, which is a red flag if ChatGPT is trying to make statements on the basis of precedent.
Even in an imaginary world where ChatGPT had access to the majority of case law, I'd still be suspicious. Could it adequately compare judicial opinions where a judge has not expressly applied a law? Can I trust that it has made a correct statement about the weighting of an argument? Can it convey legal topics which are highly contested in less than simplistic ways? Can it even distinguish cases where other people have not done so? I don't know, but I have my doubts.
The only thing I have been impressed by so far is when I asked it to write me a rap song based on the text of the Treaty I was studying. It was cliché, but it was good fun.
My favorite part is in Schwartz's affidavit, doc #32, where he attaches screenshots of his conversation with ChatGPT, which involves such excellent fact-checking as asking ChatGPT "Is varghese a real case" and "Are the other cases you provided fake".
I have no clue what to even use it for
It's great for when you want to reword something you've already written. I can bullet-point what I want to say and ask for it as a short summary in full sentences, or as an extended version heavy with advertising jargon. I need to write all sorts of spiels ahead of a trade show; now I write it once and ChatGPT generates each format I need.
I've used it to draft complaints to the school board and city council. Stuff that I kind of cared about, but didn't want to use up too much of my time. Worked well.
It was also fun using it with my 6-year-old to make up stories.
I attempted to solve some basic arithmetic using text rather than a formula and it failed spectacularly, by giving me a superficially convincing answer that didn't withstand scrutiny. When I told it of the mistake it apologized and tried again, but then made a different mistake.
I also asked it for recipes, with some success. But its logic is again limited. When I asked for zero-calorie drinks it offered "juice" among other correct options like plain water. When I corrected it, it said that sure, "juice" has calories, but it's still healthy for me, so that's why it was included. Lol. So yeah, can't even trust it for that.
Yeah, practically speaking the only real use it has is when you already know what the output is supposed to look like but are too lazy to get there yourself. Whether that's because you already know quite a lot about the topic, or because you're the one writing the complaint to the school board, so you already know what you want in it. Otherwise it's just going to make crap up, and you won't be able to check it to know that it made crap up.