33 votes

A peer reviewed journal with nonsense AI images was just published

38 comments

  1. [21]
    creesch
    Link

    This feels like clickbait to me, as the fundamental problem clearly isn't AI. At least to me it isn't; the title would have worked just as well without AI in it. The fact that the images are AI generated isn't even that relevant. What is worrying is that the peer review process, at least at this journal, is clearly faulty, as no actual review of the material took place.

    If we do want to talk about AI: I am impressed by how well the model managed to create text made up of actual letters resembling words. From what I have seen so far, that is often just as difficult for these models as hands are.

    37 votes
    1. [12]
      Gaywallet
      Link Parent

      The fact that the images are AI generated isn't even that relevant.

      I disagree. A journal is supposed to be factual. Generating an image (of particular note, figure 2, which is supposed to be a flow chart of how JAK/STAT signaling works) to explain what's going on in an abstract way can theoretically be fine, but this one is certainly not accurate, nor was it checked for accuracy.

      What is worrying is that the peer review process, at least at this journal, is clearly faulty, as no actual review of the material took place.

      Yes, the deeper issue is 'pay to publish' journals, such as this one. Of note, this is technically against the journal's TOS, but it's transparently clear that either the article was not reviewed or the review process needs fixing, because the images clearly were not checked. But to say that there isn't an issue with the images being AI generated feels dismissive, as it ignores the intended purpose of journal articles: to be scientific. While we can generate images to help explain abstract ideas (like flowcharts), and sometimes we draw things, there's generally a process to ensure accuracy and maintain scientific credibility, and AI-generated images are not created in a way that ensures said accuracy.

      20 votes
      1. [5]
        vektor
        Link Parent

        While we can generate images to help explain abstract ideas (like flowcharts), and sometimes we draw things, there's generally a process to ensure accuracy and maintain scientific credibility, and AI-generated images are not created in a way that ensures said accuracy.

        I disagree, generally speaking. You can use generative AI to aid in the accurate communication of ideas, and there's no reason this should be different in science. The crucial element is that you verify carefully that what you received from an AI is exactly what you want to communicate. Most researchers need to create complicated diagrams and illustrations, or communicate abstract ideas in complex writing, and often there's no specialized professional to do that for them. They can't all be experts in so many different fields; illustrations like this are just one tiny part of the extremely broad skill set required of an academic. It is, in my opinion, entirely acceptable (though it should be disclosed) that AI was used to generate an illustration like the one in the OP. However, researchers should exercise close control over what is illustrated, and double- and triple-check the output. The same should happen at the peer review stage.

        Clearly, neither stage properly checked this figure, or the peer reviewers never saw it (added after peer review, only seen by the editor). This is entirely a story about "peer review is broken", maybe "shitty journal is shitty" (can't quite tell; some orange flags but no red ones I could see), and only tangentially about "AI-generated thing was published in a journal".

        I'd eat my hat if there wasn't already AI-generated art and text published, and no one except the authors knows. Probably plenty of such cases. What makes those cases unremarkable is that both authors and reviewers exercised proper control, and anything the AI generated either could plausibly have been created by humans or was rejected.

        23 votes
        1. Gaywallet
          (edited )
          Link Parent

          I disagree, generally speaking. You can use generative AI to aid in the accurate communication of ideas, and there's no reason this should be different in science.

          Completely agreed. To be clear, I'm not saying we should never use AI. I'm saying that simplifying this down to an issue of just the review process flattens out the problem that generative AI does not 'think' the way a human generating content does. There are additional considerations when using generative AI, namely that it does not have a sum of knowledge to pull from in order to keep certain ideas in check, such as how large an object should appear, and it doesn't have the ability to fact-check an object's relevancy against the other objects within the image.

          We need to think about these issues in depth, because we are introducing a non-human, specific kind of bias into the literature. If we don't think about it systematically, we can't create a process that limits or reduces the amount of bias introduced by allowing this kind of content. Yes, the review process can and should already catch a lot of this, but I'm not convinced that waving our hands and saying review is enough adequately addresses the biases we may be introducing.

          I think there's a much higher chance of introducing bias or false information in highly specialized fields, where the knowledge necessary to determine whether something generated is, in fact, correct is rare (since generative AI does not draw upon facts or fact-check). Reviewers are not perfect and may miss things. If we then draw upon this knowledge in the future to direct additional studies, we might create a house of cards that becomes very difficult to undo. We already have countless examples of this in science, where a study with falsified data or poor methodology breeds a whole field of research that struggles to validate the original study, which eventually needs to be retracted. We could potentially have situations in which the study is validated, but an image influences how we think a process should work (or what we can acquire funding for). Strong protections, such as requiring that AI images be clearly annotated as created via AI, can help to mitigate these kinds of issues.

          I'd eat my hat if there wasn't already AI-generated art and text published, and no one except the authors knows.

          Absolutely, but it takes blatantly incorrect images like these to draw attention to why and how it's problematic in ways specific to the medium being used.

          9 votes
        2. [3]
          raze2012
          Link Parent

          Most researchers need to create complicated diagrams and illustrations, or communicate abstract ideas in complex writing, and often there's no specialized professional to do that for them

          I agree with the core of your argument, but I really don't like the counter-point of "we didn't look hard enough for a human expert to do this". The modern job market isn't even AI-reliant yet, but this justification has been used for decades to say "well, we couldn't find domestic employees" or "well, we couldn't find external employees, let's hire the one on the inside".

          I'd eat my hat if there wasn't already AI-generated art and text published, and no one except the authors knows.

          Ideally, that's how we should use AI tooling: as an assistive tool to automate various aspects of our work. Generating visualizations based on manually gathered data is a great use case for AI.

          And I imagine that's where it'll end up in the long term. But I unfortunately know we're in for the next 5-10 years of "AI can replace labor!" instead of "AI can make existing labor more efficient!"

          3 votes
          1. [2]
            vektor
            Link Parent

            The problem with finding professionals to do those odd jobs around research papers is that they're odd. There are some well-organized universities (I hear) that have a university-level team of professionals for writing and illustration assistance that can be utilized by researchers. Most universities do not have this. So if that's not a thing, what do you do? Hire someone to do this thing at your chair? Do you have the money? Do you have enough work for them to justify the expense? Can you even legally shell out research grants or public money for illustration work? Or do you hire a self-employed illustrator on a per-figure basis? If so, again, where's the money and can you just decide to spend it on that?

            It's in a really inconvenient place: the demand for that service is so low, maybe 1-2 illustrations per researcher per year, that it doesn't make sense to stand up infrastructure for it, yet you sometimes need a figure at a real quick turnaround time because your paper got accepted with 2 glowing reviews but really desperately needs an illustration to appease reviewer 3; camera-ready papers are due in 2 weeks. This is not an environment any self-employed illustrator wants to work in, I imagine, and even as a permanent position it sounds like a stressful gig.

            Absent more support from the university level, or similarly substantial changes, this is always going to end up in an awkward place of researchers just winging it, even if it ends up ugly or (as in the OP) wrong. AI is just another tool in the toolbox here that a pressed-for-time author can use. Hopefully responsibly.

            3 votes
            1. raze2012
              Link Parent

              Most universities do not have this. So if that's not a thing, what do you do? Hire someone to do this thing at your chair?

              Yes? It's a job, it needs to be done right, the expertise has an impact on the larger society, and ultimately they want the research to get funding from the government. If they need to pay to attract talent, then they need to pay. If they can get some talent to volunteer help as a favor, that's also fine, but that's less likely if this is any more than a simple commission for visualizing existing data.

              You don't necessarily need someone in a full-time chair, but it's not like universities aren't familiar with unconventional jobs. They can contract someone out short-term, or even find an agency if the job is truly that specialized and the hiring process is too annoying. With more remote work happening, they also don't need to limit themselves to their location.

              Can you even legally shell out research grants or public money for illustration work?

              If it's important for the research work, I don't see why not. But if that's truly an issue, you just contract them in. If you can have students assist in research for free, I don't see an issue with a university hiring contractors.

              it doesn't make sense to stand up infrastructure for it, yet you sometimes need a figure at a real quick turnaround time because your paper got accepted with 2 glowing reviews but really desperately needs an illustration to appease reviewer 3; camera-ready papers are due in 2 weeks

              Well, other businesses have urgent issues and hire consultants; universities shouldn't be much different. And if it's that important, I'm sure a professor can make some sort of diagram themselves in 2 weeks. Not perfect, but clearly the point there isn't to make a perfect illustration, just one good enough for reviewer 3. We technically failed that benchmark here, but I guess the university was lucky that reviewer 3 was slacking on the job.

              AI is just another tool in the toolbox here that a pressed-for-time author can use. Hopefully responsibly.

              We're off to a great start, in that case. You can see why trust in AI is so high as more of these stories circulate.

              2 votes
      2. [6]
        creesch
        Link Parent

        I know journals are supposed to be factual, but clearly, in this case, if other bogus images that were not AI-created had been used, the result and the issue would still be the same.

        But to say that there isn't an issue with the images being AI generated feels dismissive, as it ignores the intended purpose of journal articles

        The problem is bogus images, period. It doesn't matter whether the image is AI-generated or human-made, say by some random artist they found on Fiverr to create a fictional but realistic-looking image.

        I am not being dismissive of that; I am being dismissive of the tweet making it purely about AI, which completely ignores the core issue we both do seem to agree on.

        18 votes
        1. [5]
          dpkonofa
          Link Parent

          I agree. The only reason this is being mentioned is because they used AI in the title and that scares people. Whether the image was AI-generated or human-generated is irrelevant to the issue at hand; it's still nonsense.

          6 votes
          1. [3]
            raze2012
            Link Parent

            I mean, it is relevant. Sure, car crashes happen all the time, but when a "self-driving" car crashes, it's headline news. Should it be? Maybe not in 20 years, when actual self-driving is (maybe) achieved, but with the current rushed-to-market, dubiously advertised variants, I think it's good to stay on top of that.

            It's the same with AI, IMO. This stuff clearly isn't ready to just be fired and forgotten, despite the marketing. So bad examples should be called out before more serious documents or cases are compromised.

            6 votes
            1. [2]
              dpkonofa
              Link Parent

              But it's not the AI submitting these papers in the way that a self-driving car is the one in control. It's still people submitting these.

              4 votes
              1. raze2012
                (edited )
                Link Parent
                "Legally" the human is in control of a self driving car in the fine print. I don't see much of the difference. That's the other issue ongoing, a lot of stuff is trying to dismiss the tool to blame...

                "Legally" the human is in control of a self driving car in the fine print. I don't see much of the difference. That's the other issue ongoing, a lot of stuff is trying to dismiss the tool to blame instead of the operator, but the marketing wants such tools to drive everything (metaphorically in this case. Though literally in others)

                1 vote
          2. Diff
            Link Parent

            Like with other arenas, fakery has always existed, and (in almost all cases) done better than any AI could manage. The difference AI makes is in the barrier to entry and the mass-produce-ability. You don't need a PhD to make a convincing photoshop, but you do need substantial skill, experience, and time investment. And that time investment is constant regardless of the form the fakery takes. But this? This can be automated, and can be set in motion by any one of billions of human beings with access to the open internet.

            The topic of peer review failure in this journal is here regardless of whether it's human or computer generated nonsense. The novel part is the growing question of how we discern and moderate content on the increasingly massive scales it can be generated on.

            4 votes
    2. [8]
      terr
      Link Parent

      AI image generation has come a long way in a very short time. Hands are rarely a problem any more, and at least a small amount of text that you specifically request comes out well some of the time. Just yesterday I had DALL-E create an image of a dog dressed up as Cupid with a bunch of Valentine's Day decorations behind a sign that read "Merry McLove Day", and that's exactly what I got.

      Granted, there's a banner in the background that reads "Medy McL ovev" and a box that says "Meerpy Love", but at least the main sign was legible and I got that image in a single prompt attempt.

      4 votes
      1. [7]
        DefinitelyNotAFae
        Link Parent

        Hey there's also a box that says... Doye...? Maybe

        2 votes
        1. [6]
          dpkonofa
          Link Parent

          And a part of the heart banner that says "Chop something something?" and ends with "Notable T something"?

          This is hilarious.

          1. [3]
            DefinitelyNotAFae
            Link Parent

            Yeah I couldn't get the details of the words on the banner.

            I maintain my dislike for AI art though so I am biased.

            2 votes
            1. [2]
              dpkonofa
              Link Parent

              I mostly dislike it too but also find it wildly fascinating. It's like watching something burn down - it's objectively a bad thing but you can't help but be fascinated by the change.

              2 votes
  2. [2]
    Minty
    (edited )
    Link

    This is exactly why I absolutely loathe it when people say peer review is not a good idea because it's not effective, etc.

    No, peer review is an amazing and necessary idea. Without it, you get exactly this. You just have to actually do it. There need to be incentives for doing it right, and consequences for failing. Right now, we have journals expecting academics to work for them for free, while making you pay to publish and to access even a disaster like this one. It's a circus, and everyone involved in this story is a clown. The editor, especially, will have a lot of explaining to do.

    17 votes
    1. vektor
      Link Parent

      I feel the reason that peer review used to work better (at least I'm starting from that idea, and I'm not perfectly confident it's accurate) is one of scale. Fields have diversified to such a degree that judging work at the far end of your own field is often a stretch. Yet if we diversify the set of venues accordingly, you end up with a mess of journals so vast that no one has any idea what's what. There are too many kitchens, too many cooks, and just too much variety on the menu. The old system of at least partially enforcing quality through social connections just does not scale. Shitty researchers can't build up a reputation for being shitty, so they get to be shitty without editors or PCs desk-rejecting their shit.

      As with so many things to do with information distribution in the modern digital world: fucking scale.

      6 votes
  3. [10]
    TallUntidyGothGF
    (edited )
    Link

    Nonsense papers and the like have been an issue in science for quite some time. See, for example, the Sokal Affair. Like with many threats from generative AI, it's not so much that it enables a harm, but that it vastly scales up the throughput and lowers the bar of access to it. In this way it is like some of the other issues surrounding potential harms discussed here recently, e.g. generation of election-related fraudulent content.

    Particular to this setting, though, is peer review, and I think in this lies the particular harm potential of this use of generative AI: the bullshit asymmetry ratio is much greater. To illustrate: I once had a paper to review - back in 2015 or so, when I was a PhD student, so before the current round of generative models - that I (as well as my supervisor) was pretty sure was generated. It was a good fake, though, with the usual kind of veneer of making sense without really saying anything. It took quite a long time to 'review', because I wanted to be absolutely sure it was generated, and that it wasn't just that I didn't understand it. Part of this was down to my inexperience at the time, but there are very many really obliquely written 'honest' papers out there. Imagine mistakenly suggesting to an editor that a paper you're reviewing was generated.

    I think this is also exacerbated by research being increasingly specific and/or intersectional. A single peer reviewer is usually only able to critically evaluate one aspect of a paper (and even then perhaps not being an exact match), and the rest comes down to evaluating internal consistency etc. Finding a spread of reviewers that can cover every aspect of expertise in a paper can be very difficult. In discussions of these issues, people often seem to want to jump to blaming the editors and peer reviewers for dereliction of duty, but ultimately these people are acting with a large degree of expectation of good faith on the part of the submitters. We aren't really expected or educated to go into it forensically, suspecting fraud at every turn. These are issues I am becoming far more empathetic towards, having picked up multiple associate editor positions in the last year - it is difficult.

    9 votes
    1. [6]
      Gaywallet
      Link Parent

      It took quite a long time to 'review', because I wanted to be absolutely sure it was generated, and that it wasn't just that I didn't understand it. Part of this was down to my inexperience at the time, but there are very many really obliquely written 'honest' papers out there. Imagine mistakenly suggesting to an editor that a paper you're reviewing was generated.

      I think this highlights an issue with the process itself. Why isn't there a simple requirement that all images be attributed? If software was used to generate the image, it should be attributed (I'd argue the prompt should be included too, if you use AI). If you paid someone to draw your figure, they should be attributed. If attribution is a requirement to publish, you save reviewers time and reduce the headache of these kinds of things. Of course, people can lie about the attribution, and this won't solve that, but it will help with people who are trying to use the system honestly.

      Having a system like this also shifts blame clearly back onto the person submitting an article, where it should be. Yes, the review process failed here, and would likely fail regardless, as even to a layperson these are clearly AI-generated nonsense; but the review process isn't the only process that failed, and I think we ought to highlight how the current system makes it difficult to review at the level of depth we sometimes want. We need to enable our reviewers and make their lives easier by putting strict restrictions on how data must be presented and on what is required of a submission to be published.

      5 votes
      1. [5]
        sparksbet
        Link Parent

        If software was used to generate the image, it should be attributed

        My understanding is that attributing figures and tables that weren't made by the authors themselves is already common practice. It certainly was in my field. The only circumstance in which there wouldn't be attribution (though it generally takes the form of a straight-up citation) would be if you created the figure/table yourself.

        I'm not sure I agree that this requires you to attribute the figure or table to the software used to generate it -- should I really have to label every figure in my paper with the python library I used to generate it? That's generally completely unnecessary information, and it distracts from actually useful attribution (such as where the data in the figure comes from).

        The issue here is that this wasn't just a failure of the review process; it's a continuation of a long-preexisting problem with garbage journals that prey on inexperienced, marginalized, or unscrupulous academics who desperately need to get papers published to survive in a publish-or-perish academic culture. As someone else has noted, this journal is on a list of such predatory journals.

        1 vote
        1. [4]
          raze2012
          Link Parent

          should I really have to label every figure in my paper with the python library I used to generate it?

          If this keeps up, and people really don't want to put money or more time into the review process: yes. I'd rather do the latter, or even correct the culture in which the number of published papers correlates with prestige. But I don't know which of those is more or less feasible.

          2 votes
          1. [3]
            sparksbet
            Link Parent

            Honestly, looking deeper into this, the authors of the paper did label the figures as generated by Midjourney in the text, which I guess is the kind of attribution you mention. So I'm not convinced it would really address these problems. After all, the issue is less that it's an AI-generated figure and more that the figure was so obviously nonsensical and meaningless that even a layperson could tell, much less an actual peer reviewer, yet it clearly got published without anyone giving a shit about that. I think changing academic publish-or-perish culture and stamping out these pay-to-publish shitty journals is what's ultimately necessary to avoid stuff like this, but unfortunately that's not an easy task by any means.

            4 votes
            1. [2]
              vektor
              Link Parent

              It's a culture shift that (imo) cannot start in academia, or at least not solely there. Funding agencies need to be closely involved. They want published research for their money, and that's what they're getting. They need to find better metrics; before that, meaningfully better outcomes are impossible.

              2 votes
              1. sparksbet
                Link Parent

                I honestly don't even necessarily think funding agencies are the primary issue -- the ability to get a tenure-track position is just as big an issue, if not bigger, ime. Without a tenure-track position there's almost zero stability working in academia, and the fierce competition means you need to get your name on whatever you can. And once you've got a tenure-track position, the pressure to publish or perish still looms over you until you actually get tenure itself. While I think funding agencies may contribute in some cases, I think it's an issue even independent of funding, due to how competitive and unstable the environment is for anyone who's not already well-established. I honestly don't think funding agencies would be able to fix that without the culture shift happening in academia first.

                I miss parts of academia so much. The passion I feel for my particular niche is unparalleled, but it's like an ex that I can't talk to anymore because being part of the academic environment is necessary to engage with ongoing research and discuss it with other knowledgeable people. But the work environment of academia is so toxic and attempting to re-enter it carries no guarantee of future success or even stability. No matter how much I love theoretical linguistics, I need to eat and pay rent, and it's not the type of field where I could do research in industry. So if I want normal shit like work-life balance and not having to move around the world constantly seeking short-term postdoc positions for an indefinite amount of time... unless I become independently wealthy enough that I can do research and buy journal subscriptions without affiliation with a university, I'm cut off from something I genuinely love. It sucks.

                3 votes
    2. [2]
      Minty
      Link Parent

      I agreed until the bit about benefit of the doubt before accusations of dereliction of duty.

      It's one thing to be a puzzle, like your 2015 find, and it's another to have alien script on nonsense diagrams.

      4 votes
      1. TallUntidyGothGF
        Link Parent

        yes, for sure, I was kind of going off on one - we should definitely be able to filter out alien script nonsense šŸ˜‚

        1 vote
    3. creesch
      Link Parent

      In discussions of these issues, people often seem to want to jump to blame the editors and peer reviewers for dereliction of duty, but ultimately these people are behaving with a large degree of expectation of good faith on behalf of the submitters.

      I can't speak for others, but it's not so much blame as recognition: recognition of the fact that, even with an editor and reviewers who are supposed to be experts in the field of the paper, a lot of papers are slipping through the cracks.

      Somewhere along the line the system couldn't keep up anymore, but it still very much works the way it has for decades.

      We aren't really expected or educated to go into it forensically, suspecting fraud at every turn.

      Which is fair to some degree. Except that, increasingly, the reality is that there is a reasonable chance of fraud. I don't think you should be educated to go into it forensically either. At the same time, it seems to me that somewhere in the process someone should. I am sure there are people actively thinking about this issue and working on solutions for it, but from an outside perspective it very much looks like a system that is well overdue for a review and an upgrade of its process.

      1 vote
  4. majromax
    Link

    Note that the journal, Frontiers in Cell and Developmental Biology, is listed among predatory journals, and the publisher (Frontiers) does not have a good reputation.

    This is not so much a failure of peer review as its absence, and it should not be surprising that a predatory journal publishes nonsense.

    6 votes
  5. [2]
    oniony
    Link
    Is that "sterm" or "stemm" cells?

    Is that "sterm" or "stemm" cells?

    1 vote
    1. updawg
      Link Parent

      I was thinking stenm. I'm curious how they relate to the dck and the testtomcels.

      1 vote
  6. krellor
    Link

    I've seen a lot of discussion of what should be allowed in terms of AI-generated images or content in a paper, so I'll chime in from the perspective of someone who has written such policies for a research institution and who oversees a broad portfolio of research compliance.

    In general, the primary concerns are around reproducibility, bias in the training data, intellectual property, and disclosures. Most images in research papers are fairly technical and simply outputs of common research tools or pictures of wet lab tests such as biomarkers, assays, etc. These aren't pretty, photorealistic pictures like the one linked here. These are technical pictures necessary to support the paper's claims and to aid reproducibility. It wouldn't be appropriate to use AI-generated content that presented itself as supporting evidence of a claim.

    Outside of that, explanatory visuals or content could be AI-generated with the following caveats:

    1. The researcher is prepared to address questions related to any potential biases in the training data.
    2. The use of AI-generated content will not hinder the reproduction of findings.
    3. AI-generated content is accompanied by a disclaimer commensurate in prominence with the extent of its use and its importance to the section it appears in.
    4. The use of AI-generated content does not conflict with the terms of a publication.
    5. The PI is prepared to address IP-related concerns or modifications as case law is established.

    The use of AI tools to re-word sections of researcher-written text, such as Grammarly, would not be considered AI-generated content or require disclosure.

    The sort of image linked here is really only ever created as marketing materials for prominent publications. At this point, you can hire a graphic designer. If your paper is taken up by a major publication like Nature that requests marketing materials, I assure you the institution will assign folks from the marketing or communications department to gin up some graphics. Generally speaking, the use of AI-generated content in research papers really isn't worth the effort at this stage. There are too many unknowns and relatively little to gain. Researchers are used to publishing pretty ugly graphics and plots so long as they convey meaning. In fact, publishing standard outputs from domain standard tools helps convey understanding. "Prettier" graphics are mostly useful in public outreach or maybe some areas of humanities or behavioral studies that include perception of how people/things look. At which point, why not include actual pictures of the things or people?

    More institutions are starting to use scanning services to detect doctored or AI-generated images and text pre-publication as a way to protect researchers from inadvertent error, and to protect institutional reputation from fraud. I suspect we will continue to see bad behavior in these predatory journals while reputable journals will introduce new steps to prevent this content.

    1 vote