This article gives a good overview of how AI writing detectors work and why they might fail. It also briefly discusses how these AI detectors have been used to accuse students of academic dishonesty. As someone going through college right now, I am moderately worried about this subject. In the article, an accused student defends themselves using Google Docs' edit history. I use a different editor which does not have edit history. If I were accused of academic dishonesty, I don't know how I would defend myself. I think this serves as a good example of why the idea of "innocent until proven guilty" is so important.
I use a different editor which does not have edit history
You could run a script which takes a screenshot every X minutes (ironically, ChatGPT will probably give you the entire code for that :P).
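Something like this rough Python sketch would do it (it assumes the Pillow library is installed, and the interval and folder name are just placeholders to change as you like):

```python
# Rough sketch of a periodic screenshot script (assumes the Pillow
# library is installed: pip install Pillow; works on Windows/macOS).
# The interval and output folder are placeholders, not anything official.
import time
from datetime import datetime
from pathlib import Path

from PIL import ImageGrab

OUT_DIR = Path("writing_evidence")  # placeholder folder name
INTERVAL_MINUTES = 5                # the "every X minutes"

OUT_DIR.mkdir(exist_ok=True)
while True:
    shot = ImageGrab.grab()                           # capture the full screen
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")  # timestamp in the filename
    shot.save(OUT_DIR / f"screen_{stamp}.png")
    time.sleep(INTERVAL_MINUTES * 60)
```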
If you're sufficiently afraid, just capture video. Maybe that's overkill, but it's pretty simple with OBS.
When I was in high school I was accused of cheating once. Turns out I just wrote really well for my age :P
Same! When I was like 13 I was accused of cheating on my English homework... not like my parents would even have been able to help me, and Google Translate was still terrible at the time.
I think the teacher was trying to force me to admit that my parents had helped me, which is just dumb.
I would recommend that you switch to something like Google Docs if you are concerned. The reality is that it will take time for professors to get up to speed, and these AI detectors are being pushed as a solution. Alternatively, you could put your papers through the AI detector yourself before submitting and make sure they come out clean. Many students already do this with Turnitin for plagiarism. My understanding is that Turnitin also has AI detection now, so if your school uses that, I would recommend you test with it prior to submitting.
If you do get accused, don't take it personally. Behave professionally and politely. Offer to provide evidence that you did not cheat. You may offer to defend your paper orally, to prove you know the material in it. Or simply ask what you can do to allay their suspicions. The key is to be polite and accountable, not insulting or offering excuses.
If Turnitin does have AI detection, it isn't very good. I personally haven't done it, but I know people who have pretty blatantly copied ChatGPT with minimal editing, submitted it through Turnitin, and not been caught. Either that, or my university doesn't have an anti-AI policy, which I find unlikely.
It's not very good, and I'm not sure any AI detector actually is. They have high false-positive rates, which is a bigger problem than false negatives.
However, your university may not have a policy yet; the education system can move pretty slowly. Or, like my institution, it may have left it up to the professors' discretion. Some professors are embracing ChatGPT, so the university may not want a blanket ban.
Outside of academia, artists have already been seeing stuff like that too: getting accused of trying to pass off AI art as their own, even though it is their own. It's getting to the point where we'll have to document ourselves doing our own work just to prove that we did it. Not to mention all the AI text bots that seem to be active all over the internet; it's getting pretty frustrating. The genie is already out of the bottle on that one, and I don't know that it's going to get any better.
What may be interesting is if we eventually see an entirely AI online community show up: AI bots getting into nonsensical arguments with each other, passing off their own AI-generated content, and accusing each other of being AIs. I don't know if that's when AI will truly gain sentience or if it'll just be AI mimicking humans in some weird bot society.
A more manual option than what others are suggesting (depending on your preferred workflow) might be to save major drafts as separate files by version number as you go (e.g., v0.1, v0.2, v0.3, ...). Also, if you jot down preliminary ideas (brainstorms, sketching out overviews, etc.), save those. It'll show your work and, I would assume, give you some credibility based on the file creation timestamps.
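If you want to cut out the manual copying, a rough Python sketch along these lines could do it (the file and folder names here are just placeholders):

```python
# Rough sketch: copy the current draft into a "drafts" folder as the next
# numbered version. "essay.docx" and the folder name are just placeholders.
import shutil
from pathlib import Path

DRAFT = Path("essay.docx")
ARCHIVE = Path("drafts")
ARCHIVE.mkdir(exist_ok=True)

next_num = len(list(ARCHIVE.glob("essay_v*.docx"))) + 1  # v1, v2, v3, ...
dest = ARCHIVE / f"essay_v{next_num:03d}.docx"
shutil.copy2(DRAFT, dest)  # copy2 also preserves the file's timestamps
print(f"Saved {dest}")
```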
Even way back when I was in school around 2010, I was accused of not submitting an essay (it must have gotten lost), but the file timestamp saved me and proved I had completed the work on time, so I was allowed to re-submit :)
If I were accused of academic dishonesty, I don't know how I would defend myself.
You could keep your documents stored on a cloud service, like Dropbox or OneDrive, since they typically have version history built in. How long they keep previous versions varies, though: Dropbox only stores old versions for anywhere from 30 to 365 days (depending on your account tier), whereas OneDrive only stores the last 25 versions of each file.
You could also start using git + GitHub/GitLab to save version history on your own, regardless of whether your document editor supports it.
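For anyone new to git, a snapshot is only a couple of commands; here's a rough Python wrapper as a sketch (it assumes git is installed and you've run `git init` in your essay folder once; the commit message is arbitrary):

```python
# Rough sketch of a "snapshot my essay folder" helper built on plain git.
# Assumes git is installed and `git init` has been run once in this folder.
import subprocess
from datetime import datetime

def snapshot(message=None):
    stamp = datetime.now().isoformat(timespec="seconds")
    subprocess.run(["git", "add", "-A"], check=True)  # stage every change
    # Note: the commit will fail (non-zero exit) if nothing actually changed.
    subprocess.run(
        ["git", "commit", "-m", message or f"draft snapshot {stamp}"],
        check=True,
    )

if __name__ == "__main__":
    snapshot()
```

Pushing to a private GitHub/GitLab repository on top of that also gives you timestamps stored on their servers, which are harder to dispute than purely local ones.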
Or do both, which, as someone who used to work in the data recovery industry, I would honestly recommend just for backup/redundancy purposes anyway. I have some horror stories about academic and business data loss I could share that would probably break your heart. One student's broken USB drive I worked on contained the only copy of his finalized PhD thesis, and I was unfortunately not able to recover it. :(
Wow, it would be horrific to lose your entire thesis. That being said, it's pure insanity to keep the sole copy of anything on a USB drive.
I graduated high school 20 years ago and even back then, we were taught to keep multiple copies and that USB drives were easily corrupted.
I sincerely hope they were able to rebuild it without too much of a delay. At the very least, I'll bet that made defending it easier.
I'm a little horrified that the comments accepted the idea that you should be defending yourself instead of disputing the accusations. Think of it this way: one of you is doing the work; the other uses tech to do the work because they would not be able to otherwise.
I'm a little horrified that the comments accepted the fact you should be defending yourself instead of disputing the accusations.
What's the difference? If you dispute the accusation you inevitably have to defend yourself. It'd take longer to disprove whatever AI detector they have than to simply show timestamps and/or be grilled on your report.
In my insignificant opinion, if you can cheat on a report or project with a bot, the project isn't sufficiently testing your grasp of the content to begin with. I've met dumb cheaters, but also some very smart cheaters who simply cheated to turn a 95 into a 100, i.e. they can't really be "caught" on any given take-home assignment. That says a lot more about the academic environment that creates that pressure, but I digress.
That’s why I haven’t responded to anything yet. This is a social problem, and people are proposing a technical solution. That can work in some situations, but I don’t think it’s a very good solution in this instance. One teacher can accuse a student of something so problematic that it can stop a student’s learning and career in its tracks. Academic dishonesty should not be a charge thrown around without consideration. The onus should be on the teacher to prove the academic dishonesty. These tools are demonstrably inaccurate and should not be used as proof of anything. I should not have to create a digital paper trail because some teacher doesn’t understand how technology works.
I doubt anyone who offered you technical advice disagrees with you that these “anti-cheat”/“AI detection” programs are total bullshit and shouldn’t ever be used against students. But the simple and sad fact of the matter is that they are already being used by ignorant educators/administrators, will likely continue to be used regardless of how any of us here feels about them, and none of us really has the power to change that. So covering your ass against an accusation by being prepared to defend yourself, should it ever come to that, seems a prudent thing to do, especially when saving a version history of your work is so trivial these days. People are simply trying to help you and others as best they can, by sharing the knowledge they have about version control systems. cc: @elfpie
I didn't mean to downplay any response. It was more just the fact that no other solutions were even discussed, with the exception of /u/creesch. Also, I may not have worded myself properly, but I am not that worried for myself. I think I do have some sort of version backups, but it might be difficult to find on a moment's notice. I also find that honesty and an in-person discussion can do wonders to handle situations.
I am much more worried about students who do not have the privilege that I have. It wasn't mentioned in this article, but these detectors also have a high false-positive rate for non-native English writers. This demographic also may not be able to take the time to argue against a false accusation. Most importantly, they may not have the technical understanding to even know that they need versioned backups. I have been following the recent LLM developments since ChatGPT launched, so I am quite knowledgeable about what new products exist. Surprising as it may sound to us technical people, some people don't know or care about large language models. Those people won't even know that they could be accused of academic dishonesty without ever touching an AI tool.
Yeah, I get being a bit upset that there wasn't more discussion about potential solutions to the root issue, instead of just suggestions for protecting yourself in case you get targeted by it. And yeah, it's a shitty situation all around, and it really sucks that so many people are going to fall through the cracks, and are likely to get targeted more often, like ESL students. I wish I could do more to help prevent it. :(
In a practical way, all the solutions are welcome and I'm glad people come forth to offer them. That said, I can't advocate an attitude that rewards power abuse. Prepare for the worst, but don't accept the worst as the norm.
But you also can't advocate fantasy. Like it or not, we live in a society whose rules and values aren't entirely empirically based or run by those with proven merit. A lack of technological understanding is the norm, not a competent understanding of it.
Refusing to cover oneself out of principle isn't admirable, it's risky. We know what they are looking for, and it would be easy enough to defend against it. But to refuse to do it and tempt fate by demanding they prove it at every turn, when you could have easily had the evidence yourself? Practically, it winds up being not much different from people who claim sovereign citizenry as a reason why they should be able to walk across borders with no consequences. That's just not how it works, no matter how much people want it to work that way.
A truly good understanding of technology also means knowing how to use it with people who have zero clue about it, and how to accommodate that. It does not mean making other people catch up and insisting they adapt to your own workflows. In this situation, it's asking for trouble to know how to mitigate a real risk and simply refuse to because you don't want to.
In this vein, none of us want to be beholden to superiors or people who can do damage to our interests or career paths... but we are. We follow the rules and protect ourselves. It's not rewarding anyone, it's simply surviving.
I don't see being prepared to defend yourself as rewarding the bad behavior. Hoping for the best, but preparing for the worst is never bad advice, IMO... regardless of the circumstances.
It would seem an easy first step to take is to explicitly ban using chatbot models to look for AI content.
They are modeled on our writing, and then we use them to see whether our writing is similar to what they're trained on. Which is our writing.
It seems less than surprising to many of us that our own written texts do indeed follow the stylistic patterns we've been using for a long time, which is all these language bots have to compare against.
The solution is to simply ban the use of random thingamabobs, and only allow the use of approved programs that look for already existing text matches. These programs already exist and have been used for a while now.
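For what it's worth, plain text matching is something anyone can sanity-check in miniature; a toy comparison like this Python sketch (the example sentences are made up) shows the kind of overlap those tools look for, as opposed to a chatbot's style guess:

```python
# Toy illustration of text matching (what plagiarism checkers do, in a far
# more sophisticated form). The two example sentences are made up.
from difflib import SequenceMatcher

submitted = "The mitochondrion is a membrane-bound organelle that produces ATP."
known_source = "The mitochondrion is a membrane-bound organelle that produces most of the cell's ATP."

ratio = SequenceMatcher(None, submitted, known_source).ratio()
print(f"Overlap ratio: {ratio:.2f}")  # close to 1.0 means near-verbatim copying
```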
I find it a lot more worrying that adults with higher education, in subjects that should make the flawed logic of using a chatbot this way obvious to them, don't understand what they did wrong.
They ask, "does this text match the stylistic choices that all the other texts you're trained on also show?", it says yes, and then they act like that somehow means anything more than just that.
It is worrying that so many seem to truly believe there is a sentient, complex creature in there: a creature as competent as us at understanding and interpreting context and wider implications, tailoring its answers to a degree of truth and reality the way humans do...
Skynet isn't real. That's just a Hollywood movie. I thought we all knew that movies aren't real.
I don't know how I would defend myself.
Assuming you are a student, you could offer to walk your professor through the process of writing the paper you submitted: explain decisions you made while writing it, and so on.
You should have at least a separate rough draft anyway.
But your editor sounds like crap, tbh; use LibreOffice or something.