63 votes

Curl will end its bug bounty program by the end of January due to excessive AI generated reports

56 comments

  1. [44]
    polle
    Link

    I posted here earlier about the issues faced by the ubiquitous software package curl with respect to AI generated bug reports and their bug bounty program.

    Today they have announced they will end their bug bounty program by the end of January. Daniel gives more context in his blog post, which I linked.

    We started out the week receiving seven Hackerone issues within a sixteen hour
    period. Some of them were true and proper bugs, and taking care of this lot
    took a good while. Eventually we concluded that none of them identified a
    vulnerability and we now count twenty submissions done already in 2026.

    We made some noise as I mentioned my PR in progress [1] that is about to
    remove all mention of a bug-bounty from the curl documentation. It is still
    in the planning phase and there will be more communication done about it, but
    we aim at shutting this down by the end of January 2026.

    The main goal with shutting down the bounty is to remove the incentive for
    people to submit crap and non-well researched reports to us. AI generated or
    not. The current torrent of submissions puts a high load on the curl security
    team and this is an attempt to reduce the noise.

    We believe, hope really, that we still will get actual security
    vulnerabilities reported to us even if we do not pay for them. The future will
    tell.

    Here is a (clearly AI generated) report on HackerOne. I can really feel the frustration and totally get why Daniel decided this just isn't worth it anymore.

    40 votes
    1. [11]
      Fiachra
      Link Parent

      Makes me wonder if the long term effect of LLMs will be to drive monetary incentives out of online spaces altogether.

      20 votes
      1. [4]
        Roobxyz
        Link Parent

        I reckon LLMs are going to usher in a new era of ignorance and laziness.

        When they are as proliferated as social media sites we can expect that the internet will just become mostly slop noise. We will crystallise on dead internet theory, because humans simply won’t be able to produce or decode the amount of crap that the LLMs produce. Some of them may produce “good” stuff but it’s easier to produce lazy, derivative or propagandistic slop, so that will dominate.

        We collectively seem to be less composed and convicted, partially, I think, due to the information warfare that has been waged on us since corporate interests realised that the internet was a vehicle for their advertising. So when that’s ratcheted to the next level, with state-of-the-art machines, what hope do we have to preserve our own values based on curiosity, community and creativity?

        I wonder if the future of the internet will be a series of private subnets, and a bunch of community sites and services that are genuinely helpful and hidden from the “public” internet, now a veritable wasteland of LLM bile flowing back and forth between counterparties who continue to harvest and distort every morsel of information they can find.

        Through VPNs and small human led sites, maybe we can get back to some semblance of how it used to be. Where the internet was a place where dorks and nerds rejoiced at the fact other people had similar hobbies that they thought were too exotic for whichever tiny town they grew up in.

        Curl is a great tool; it’s pretty sad to see the devs get overwhelmed and have to remove an initiative that was designed to reward the care skilled developers put into their tool.

        26 votes
        1. [3]
          DynamoSunshirt
          Link Parent

          One potential silver lining: could slop kill the undead corporate social networks? Could it create an opportunity for nonprofit community projects like Mastodon, which have no profit motive and thus no reason to enable slop? Maybe I'm being too optimistic -- I suspect a lot of social media users will continue to browse Facebook, Instagram, and TikTok looong after the human content evaporates. I don't really consider most influencers "human content" (more "marketing lite"), and that utterly dominates the socials today, but it hasn't driven people away yet.

          8 votes
          1. stu2b50
            Link Parent

            If anything it'll kill nonprofit community projects who don't have the resources to deal with "slop" first.

            14 votes
          2. ThrowdoBaggins
            Link Parent

            Alas, the network itself having no profit motive does not mean it will be free from profit motives moving in to extract what they can.

            Anywhere that there are people, there are opportunities to sell people stuff, or manipulate their political beliefs. Therefore anywhere that people go, in order to seek other people and/or information, will be a place that marketing and propaganda also wants to be. If anything, as @stu2b50 said, online spaces will need to adapt and filter out the effortless slop, or else be drowned in it.

            6 votes
      2. [6]
        glesica
        Link Parent

        Another possibility (I don't know how feasible this is, but I'm speaking conceptually) would be to require a deposit to submit a report. If the report is verified as a genuine vulnerability, you get your deposit back, plus the bounty. If it isn't, then you lose your deposit (perhaps unless the maintainers decide it was a near miss or something like that). The deposit might not need to be that high. A big part of the problem, I suspect, is that AI makes generating reports almost costless, whereas in the past it took significant time to generate one, even if it was of fairly low quality.
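
        To make the flow concrete, here is a minimal sketch in Python; the names, amounts, and statuses are all invented for illustration, not any real platform's API:

          # Hypothetical deposit-escrow flow for a bug bounty program.
          from dataclasses import dataclass

          DEPOSIT = 15    # small enough not to deter genuine researchers
          BOUNTY = 2500   # paid only for a confirmed vulnerability

          @dataclass
          class Report:
              reporter: str
              deposit_held: int = 0
              status: str = "open"  # open -> confirmed | near_miss | rejected

          def submit(report: Report) -> None:
              # The deposit is held the moment a report is filed, so mass
              # low-effort submissions cost the submitter real money.
              report.deposit_held = DEPOSIT

          def payout(report: Report, verdict: str) -> int:
              """What the reporter gets back once maintainers rule on it."""
              report.status = verdict
              if verdict == "confirmed":   # genuine vulnerability
                  return report.deposit_held + BOUNTY
              if verdict == "near_miss":   # maintainers' discretion
                  return report.deposit_held
              return 0                     # rejected: deposit forfeited

        The exact numbers matter less than the asymmetry: a real finding earns far more than the deposit, while spamming a hundred low-effort reports costs real money.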

        18 votes
        1. ShroudedScribe
          Link Parent

          This is a rare scenario where I agree with requiring a payment method. At the end of the day, if you're awarded a bounty, you will need to have a valid payout method anyway.

          It's very important that the deposit stays minimal, but since it adds the potential complexity of payment processing, and the bug itself still requires human labor to review for validity, I could see this being around $10-20.

          While there could be scammy activity on the other side (project maintainers never processing bug bounty reports, resulting in effective theft of the deposit), I would hope someone submitting a bug report does due diligence in validating the project is legitimate and well established.

          5 votes
        2. [3]
          F13
          Link Parent

          This might work in some cases, but there are also tons of bad actors in this space who take genuine reports and ignore, mislabel, or outright steal them and never credit the author.

          4 votes
          1. [2]
            raze2012
            Link Parent

            Isn't that the case today? A verification mechanism would not change that.

            4 votes
            1. F13
              Link Parent

              My point being: now they not only have your work but also your deposit. Still, I don't think this would be a major issue, as all it would take to avoid it is for bounty hunters (hah) to vet the organizations they're trying to get bounties from, which I don't think is really a big barrier.

              1 vote
        3. Fiachra
          Link Parent

          Proof of stake, but for bug bounties. I like it. Could be a pretty tiny deposit required and it would still probably work, since AI schemes like this rely on high volume.

          4 votes
    2. [23]
      slade
      Link Parent

      I've had strikingly similar conversations with coworkers. I've asked my employer for a code of conduct/ethical use policy around AI, and have received no traction. I think AI is fine for many tasks, and the biggest reasons why it's destroying so many things fall under a familiar umbrella: lazy or irresponsible use of technology by individuals.

      8 votes
      1. [22]
        DynamoSunshirt
        Link Parent

        A moral LLM code of conduct increasingly boils down to "just don't," unfortunately. If you factor in the idea that it's all fed on stolen IP, it always was! Costs are beginning to increase, rate limiting is becoming a big issue, and advertising is coming to most chatbots and APIs this year. Trying to find a reasonable way to incorporate these shifting sands into any real workflows is becoming increasingly problematic.

        11 votes
        1. [14]
          okiyama
          Link Parent

          Not Gemini. Say what you will of Google, but they own the data they train on. They've been scanning books for 20 years. They obviously have millions and millions and millions of lines of code to train on, they own YouTube data, etc etc.

          Anthropic as well to my knowledge has never stolen any data.

          OpenAI are criminals, and they will collapse, taking a good chunk of Microsoft down with them. But the governmental ties with Microsoft are so deep that it won't affect the core business in a particularly meaningful way. That's another reason Windows is getting worse and worse: there's just no revenue in it like there was.

          I use transformer based large language models on a daily basis. It's a hell of a new hammer, and I've found a lot of good nails for it. Most people are hammering in screws.

          I do concede your point for image generation. The stealing of everything that ain't nailed down was downright disgusting, a true pox on humanity.

          That said, I simply cannot see eye to eye with computer scientists and software engineers who are not intellectually engaging with the computation going on. In many ways they are quite simple systems that are quite easy to use. Yes, they hallucinate, but so do I.

          Again, new hammer, everyone's nailing in screws because of general ignorance, but saying "LLM AI use is immoral" is a shallow view of things.

          It seems your perspective is more on the side of personal use? Costs and whatnot tend not to enter my day-to-day use because I'm paid to solve problems quickly and effectively, my company has an exclusivity deal with Google that they won't touch or train on our data, and again, Google aren't criminals in all this. They made the darn thing, and they fired the guy that said LaMDA (the 2019 internal Google chat bot, basically ChatGPT before ChatGPT was illegally and dangerously released) was conscious.

          Obligatory note that the whole data center nuclear reactor explosion thing is definitely bad. I am investigating organoid intelligence as an avenue to improve energy efficiency by a couple orders of magnitude.

          9 votes
          1. [11]
            DefinitelyNotAFae
            Link Parent

            You're certain that Google's Gemini team purchased a copy of every book they used to train on? Because Anthropic didn't. They used LibGen.

            Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors | WIRED

            Publishers seek to join lawsuit against Google over AI training | Reuters

            Google is being sued by authors, music labels, visual artists, and other copyright holders for the same reason.

            The publishers on Thursday cited 10 examples of their textbooks and other books that Google allegedly misused from authors, including Scott Turow and N.K. Jemisin to train its Gemini large language model. They asked the court for an unspecified amount of monetary damages on behalf of themselves and a larger class of authors and publishers.

            Idk how Google isn't just as liable as everyone else for blatant copyright violations.

            21 votes
            1. [10]
              okiyama
              Link Parent

              Touché on Anthropic, though textbook piracy is moral, albeit illegal, in my view.

              To my knowledge, yes. Google fought the books fight 20 years ago when they made Google Books, and since then they have scanned more or less everything published by humanity.

              https://en.wikipedia.org/wiki/Google_Books

              Looks like 40 million books as of 2019. If you buy a book, scan it, and OCR it, you have it as part of your personal library. Like I said, this was a huge legal battle when Google Books first started, and the decision, IIRC, is a spin on "yes, you can purchase and read books".

              The recent lawsuits will shake out; I'm not putting too much stock in them because I'm not a lawyer, so I'll wait for the courts to figure it out.

              Don't get it twisted though: xAI, Meta, OpenAI, basically every single player are criminals. Stealing as much digital knowledge as fast as possible was a necessary step for those businesses to exist. Google is fundamentally different. For one, they literally invented the transformer; for another, Google Brain is not a new project, and they've had their eyes on AGI for a very long time. They've just been doing it internally and safely while, y'know, taking over the entire public mainstream internet so they could collect all the data both for ads and for the brain project.

              As an aside, it's nice to see a familiar name. I fell off the earth for a good while, but it's good to reconnect with the more interesting and fun sides of the internet!

              9 votes
              1. [9]
                DefinitelyNotAFae
                Link Parent

                It seems to me the two situations are different or Anthropic wouldn't have been looking at 1 trillion in damages. And they did settle with the publishers; the way the authors lost was due to the preview feature, something irrelevant if you use the entire book as training data.

                I don't agree with your interpretation of the facts either. They weren't buying the books for that project; they worked with publishers and libraries to scan them. That's why they settled with publishers. The complaint against Google in this lawsuit alleges they downloaded pirated copies for training, by scraping sites like Z-Library and circumventing subscription services like Scribd. That's quite different.

                I find it very hard to trust someone who insists that all the other guys are evil but this guy is definitely one of the good ones even though he's doing the same exact thing.

                4 votes
                1. [4]
                  sparksbet
                  Link Parent

                  It seems to me the two situations are different or Anthropic wouldn't have been looking at 1 trillion in damages.

                  iirc damages in copyright cases are very often determined based on a set amount per copy. Setting aside whether using something to train such a model is infringement to begin with, it very much isn't settled what part of the process counts as making one copy for these purposes, and such a question is arguably highly technical in nature. A definition that, for instance, counts every work as copied once per epoch of training would artificially inflate the number of copies, and thus raise the amount of statutory damages compared to counting each work once, or even once per model.

                  Since you mentioned they settled, I assume the trillion in damages is what the plaintiff demanded in their filings, which is almost certainly erring on the side of higher damages, but I don't think it's necessarily right to take the plaintiff's claims of how high the damages were in a case that settled out of court at face value, at least not without other information and context. It's possible that, even had they gone to trial and the judge ruled against Anthropic, damages would have been substantially lower, because arguing over how to determine damages would be a huge part of a case like this.

                  4 votes
                  1. [3]
                    DefinitelyNotAFae
                    Link Parent

                    I don't think it was a "per copy" issue. As you note, there are statutory damages because of the anti-piracy laws, and it was a class action suit. I believe the math was up to $150k per work for 7 million works (7,000,000 × $150,000 comes to just over $1 trillion), and that was just the author lawsuit. That $150k is the maximum per work and involves willful and intentional infringement. I think it fair to describe downloading essentially all of the books as willful and intentional. (This also led many authors to find out their publishers failed to file for copyright, excluding them from statutory damages. That was a moment.)

                    My point though was that Anthropic settled because they had pirated, which Google seems to have done for their AI, and how this is different from the Google Books project in several ways that seem unfavorable despite them also settling with publishers in that case as well.

                    Which all adds up to me having a hard time believing that Google is somehow different. Especially as Anthropic was explicitly mentioned as not having "stolen" any data while settling for having pirated books. The fact that the damages may have ended up much lower in court isn't really relevant and tangential to what I was saying. Replace it with "Big Number of Money" as needed. I'm focused on their actions.

                    1 vote
                    1. [2]
                      sparksbet
                      Link Parent

                      Yeah, I think if it were found to be infringement it would probably be found to be willful and intentional, but I do recall reading about at least some AI copyright cases where the plaintiffs did try the "per copy" thing. But that would be for suing someone over training on the data, rather than for the piracy (which is legally a lot more cut and dried), so it was probably a different lawsuit entirely (shows why I shouldn't be commenting in the wee hours lol).

                      2 votes
                      1. DefinitelyNotAFae
                        Link Parent

                        Yeah, not sure. I haven't followed all the suits, just the ones with authors when I come across them, as I follow many authors via Bluesky.

                        1 vote
                2. [4]
                  okiyama
                  Link Parent

                  1 trillion dollars in damages just makes me not take them seriously at all, but again, I'm not a lawyer and I'll let the courts decide the legality.

                  Agreed, we'll see how the lawsuits work out. Sam Altman forced Google's hand by releasing a dangerously underdeveloped product which has provably taken lives. As I said, Google already had ChatGPT internally. Corporations can and do develop enormously damaging technology without destroying the world, and there's a really big difference between that and Elon Musk opening a methane-burning data center in Tennessee, when the law already explicitly says you can't do that, and causing a huge spike in cancer in the region. Google didn't want it to go this way; a criminal made things go this way.

                  I really can't say this enough times: Google has the most data in the world, by at least a couple orders of magnitude. They are different. They are positioned differently, they were treating the technologies differently, and they provably were deciding not to shove AI into every nook and cranny until ChatGPT made their employees shit their pants. I'm really deep in the industry; I have spoken with about a half dozen current Google people and worked alongside stripes from every MAANGA company (or whatever it is) (as an aside, Netflix is actually built different, but that's a story for another day).

                  I'm not denying that actors within Google broke the law after ChatGPT released. I'm saying that ChatGPT's release was one of the worst things that's happened in recent history, and that none of this would have happened without that dangerous, illegal move by criminals (who, by the way, are directly associated with the likes of Peter Thiel and Y Combinator, which is why Google made Alphabet, so they didn't have to touch those assholes with a 10 foot ethernet cable). Yes, they are evil, but they are just factually different in the way they structure their business and means of working.

                  To put a fine point on it: for years and years, Google was infamously terrible at making new products. https://killedbygoogle.com/ is a largely bygone by-product of this era. Sam Altman forced their hand; they would not have released all this BS on anywhere near this fast a timeline without that bombshell.

                  The way I described it to my dad: I know a guy that's working for Anthropic right now. He and I worked at a startup together and became fast friends. He was a machine learning engineer at Google for about 10 years (during the Obama campaign, when the misinformation machine was really ramping up and making his job really, really difficult). I asked him, "so, you saw Attention Is All You Need when it came out, right?" He said, "yeah, it was metaphorically being passed down the halls". I asked, "did you know it was going to change everything?" He said, "everyone that knew what they were reading knew it was going to change everything".

                  The entire field of neural network based prediction engines was chasing after a scaling input-to-output mapper, and, out of the clear blue sky, the Attention mechanism solved it. Overnight, entire subdisciplines of Natural Language Processing research were rendered obsolete. Academics that spent decades optimizing little corners of NLP and NN ML research woke up to, "oh, well, that solves that, I guess I should have been innovating instead of optimizing. Rats."

                  So there was a bombshell. Not a bomb, but a shell. Anyone that read it knew they were looking at a bombshell, and Sam Altman is the villain that refined the uranium 235 (sorry, metaphor, stole all of the well structured data the internet had to offer), and armed the warhead.

                  Does any of that make sense? These are very distinct entities that, prior to ChatGPT's release, were working under extremely different conditions with extremely different release cadences and care put into their work.

                  That ex-Google Anthropic guy and my new boss's boss's boss, the guy that ran Google Drive for 8 years (like, he was the technical guy in charge of the whole thing; his level of technical expertise is astounding), both agree with me that Google is well aware they have the power of a nation state and that, crucially, they actually act like it.

                  4 votes
                  1. [3]
                    DefinitelyNotAFae
                    Link Parent

                    The trillion dollars was the max calculated statutory damages, not a ridiculous lawsuit claim. Anthropic settled rather than risking that. The courts have essentially figured that out because Anthropic settled. Google is still being sued, but I think the evidence that they copied the Z-Library and Scribd data is not contested. Maybe this judge rules it's legal; I'll still think it was shitty.

                    Who is the "them" you're not taking seriously in that? I genuinely am not arguing about Google other than that they literally pirated data: they downloaded and scraped data without paying for it.

                    Idk what else to say, because all of this impassioned defense isn't relevant to anything I said. Google doing something because Altman "forced them" still means they did the thing. I wouldn't personally absolve them of responsibility if it were up to me, and since it isn't, I don't really have interest in finger pointing, even at really awful people.

                    2 votes
                    1. [2]
                      sparksbet
                      Link Parent

                      The courts have essentially figured that out because Anthropic settled.

                      Just to be clear, this is not true. Both parties settle, not just one side or the other, and it is perfectly possible and extremely common to settle a case even if you otherwise would've definitely won it based on the facts and the law, for a variety of reasons (chief among them often being the sheer expense involved in filing a lawsuit and then taking the case all the way to trial). Settling does not entail admitting guilt or the court finding any particular claims either party made to be true. You could just as readily frame this settlement as the plaintiffs settling for doubtlessly much less than a trillion dollars because they didn't want to risk losing and getting nothing but the costs of a very expensive lawsuit.

                      A settlement is both parties agreeing not to sue each other over the same issue anymore in exchange for one or both parties doing specific things (money being one of them, but generally other things like non-disclosure are in the agreement too). The fact that this case ended in a settlement means we cannot know how the court would have ruled on the relevant legal or factual questions, because it did not do so. People can opine on what they think the court would have found had the case proceeded to trial, but the court did not actually find anything afaik, so it is all inevitably speculation.

                      I know this is tangential to your main point but I think it's something that's important to get right.

                      3 votes
                      1. DefinitelyNotAFae
                        Link Parent

                        Yeah, I am aware of all this, but also too over it to bother anymore. I'm just going to get another "sounds fake, here's why Google is noble actually." The court did make rulings that led to the settlement, essentially that fair use was a reasonable argument but the issue of how they acquired the works was still relevant. And so it was going to trial, not being dismissed. If anyone wants further precision they can look up the case.

                        I'm not even a strong copyright advocate, I just don't think the answer to that is being fine with these people, Google included, exploiting the work of others to (try to) make billions while making my life more annoying. And I think authors, many of whom can't even afford to just write, should get paid for their work.

                        I appreciate you; I just don't care to be precise when they're not actually replying to me. So yeah, it's tangential and frankly super unimportant IMO.

          2. [2]
            ShroudedScribe
            Link Parent

            I use transformer based large language models on a daily basis. It's a hell of a new hammer, and I've found a lot of good nails for it. Most people are hammering in screws.

            Are you able to elaborate? I'm interested in new perspectives even though I'm currently an AI skeptic. But perhaps I'm just working with too many screws. :)

            1. okiyama
              Link Parent

              Happily! In what context? I use Gemini and Claude at work for programming tasks. Gemini has a real knack for debugging; the one case that jumps to mind was a particularly thorny dependency issue in a meta Maven project type thing. Basically, there's a wrapper parent library defined by a Maven file that has sub Maven files for the individual parts of the libraries. I was contributing a new sub library and battling with some very difficult constraints. The parent library couldn't upgrade to a modern Java version because AWS KDA didn't have proper support for Flink 2 or something or other (this was months ago).

              I created a Flink-debugger Gemini Gem with a bunch of context on things like generating dependency trees, plus internal context on work-specific constraints.

              The true superpower is that these systems give me a pair programmer that has a disparate skill set from myself. I've heard it well described as "a junior programmer that just graduated from MIT". Very very sharp, but with this distinct lack of intuition and experience that I fill in.

              Then implementation is Windsurf with Claude Sonnet. It writes good code, and is the opposite of lazy, especially with testing. It's quite important that this uses Claude; they have the best coding model and the only one I feel is roughly at par with my own ability. The point about pair programming stands as well: Windsurf makes for a delightfully fluid experience where I can cut it off and jump in when things go away from expectations.

              In my personal life, Deep Research from Gemini is indispensable for my ongoing scientific interests. ChatGPT Extended Thinking is the best bang for your buck in raw compute, but not worth the cost. Pro tip there: if you buy the ChatGPT ultra mega expensive edition, use it a few days, then cancel, they'll refund and give you the month for free. Extended Thinking is the only one I can convince to work on a problem for 15 or 30 minutes, which again is the most raw compute I've been able to find. Great for broad research and interests, and you'll have to get it while you can, because me oh my is OpenAI going to implode very, very soon.

              I don't do any image or video generation, that still strikes me as a parlor trick and more in line with replacing good product with cheap bad product whereas the code I'm producing is of similar quality and higher velocity than I could deliver before all this.

              Any other questions? I clearly love chatting about this stuff!

              2 votes
        2. [7]
          sparksbet
          Link Parent

          I think it's a bad idea to equate "violating IP law" (something that may arguably be happening but absolutely isn't settled on a legal front even when it comes to the companies training these models, and which simply isn't happening on your part when you're generating) with immorality, and I say this as someone who is pretty critical of how these companies source their data. Choosing not to use generative AI because of your moral qualms with various aspects of its training and use is absolutely valid. But equating IP law with morality is not -- people absolutely extrapolate that type of thinking about intellectual property to stifle real human creativity.

          9 votes
          1. [2]
            raze2012
            Link Parent

          I mostly wield IP law in a schadenfreude sort of way. They spent centuries gaming it to make it impossible for stuff to properly hit public domain. Now that it's profitable to ignore it, they are caught in a web of their own making. A proper IP law would have a good 90% of all recorded media be up for grabs, all the way to 1997.

            Instead, 99% of the 20th century is stuck in the hands of people who sit on it. It's bad, but especially bad for companies. Deservedly so.

            4 votes
            1. sparksbet
              Link Parent

              Stringent IP law has always disproportionately benefitted large companies and the already wealthy, and if the interpretation of IP law is made stricter in response to generative AI, it will disproportionately harm the exact small independent creatives that people are mad at those companies for stealing from. I think honestly it'll harm them far more than it'll harm generative AI companies, and it'll harm those companies more than it'll harm the big copyright holders who championed stricter IP laws in the past.

              4 votes
          2. [4]
            DynamoSunshirt
            Link Parent

            I don't actually care about IP law, I have violated it plenty in my life. I just mean it's a dick move to train a model on writing and art from independent creators without compensation.

            2 votes
            1. [2]
              sparksbet
              Link Parent

              Oh yeah, I'm not necessarily saying your opinion is wrong on that front. But I see way too much backlash against AI online that turns into this weird veneration of IP law and even calls to strengthen existing IP law in ways that would hurt artists far more. So I wanted to really emphasize that whether something is a violation of IP law and whether something is immoral are two completely orthogonal questions -- something can be a violation of IP law and be totally moral, or something can be totally legal but still immoral.

              6 votes
              1. DynamoSunshirt
                Link Parent

                Excellent point, IP law is toxic as hell and one of the worst possible outcomes of LLMs is strengthened IP law!

                3 votes
            2. redwall_hp
              (edited )
              Link Parent

              And open source software. The only reason these things can program is because they've ingested tons of source code that people gave away for free, under licenses intended to keep it that way, as a generous gift to society. Now companies generate code that they intend to sit on and profit from, laundering copyright and laying people off.

              Before ChatGPT, when GitHub Copilot was first released, people very quickly noticed that it frequently generated code with comments identical to open source projects.

              1 vote
    3. [9]
      Bullmaestro
      Link Parent

      I'm not a programmer and don't have anywhere near the technical understanding to review code, but even I can tell that report is AI slop.

      6 votes
      1. [8]
        ali
        Link Parent

        You’re absolutely right —

        As someone who uses AI tools himself (and has worked with and studied them for a long time), it's absolutely insane to me to see the amount of verbose bullshit.

        Like, do people not realize that the shit they post is so obviously AI generated? Or do people just not see it?

        7 votes
        1. [7]
          snake_case
          Link Parent

          You’re absolutely right —

          Wow I didn’t know I could be so triggered

          And yeah, I think people just don’t care. Millions of people bought those stupid live laugh love pictures and put them all over their houses. People don’t give a shit.

          10 votes
          1. [6]
            okiyama
            Link Parent

            I'm having trouble following what people taking simple joys in life has to do with this? Live Laugh Love is a wonderful mantra.

            4 votes
            1. [5]
              snake_case
              Link Parent

              It's a really basic style of art, mass produced and available at any Walmart; basically AI art before there was AI art.

              2 votes
              1. [4]
                okiyama
                Link Parent

                Ahh I see what you mean, but I do just fundamentally disagree. There's wisdom in the crowd, and elitism over that crowd betrays the heart's yearning for openness, connection, and belonging.

                1 vote
                1. [3]
                  snake_case
                  (edited )
                  Link Parent

                  It's not about the message. It's the fact that it's mass produced, the same everywhere, in everyone's houses, in everyone's cafe bathroom. It's not original. I wouldn't even really call it art; it's just... a regurgitated shadow of what once was a cute idea that's been digested by the corporate machine and thrown back at us.

                  But there's millions of people, like you, who don't see it like that.

                  And there's millions of people, like OP, who don't see AI-produced content like that.

                  This is my bias, and this is most everyone on Tildes' bias. I'm not telling you that you should only use real art made by real human artists. I do, but I understand that it's unreasonable for everyone to do that in this day and age. We think that just because we're all good at writing, everyone else has that same skill level and is just being lazy, but that's not true. It's come time to accept that it's unreasonable to expect a business owner to either pay someone for real product descriptions or write them themselves, just like it's unreasonable to expect everyone to pay artists to decorate their houses instead of just buying live laugh love type pictures from Walmart. People don't see what I see; people don't care about the things I care about.

                  4 votes
                  1. [2]
                    okiyama
                    Link Parent

                    Ahh, I do see a lot more clearly where you're coming from after all that, thank you for taking the time. There is a certain sadness in the inherent desperation inflicted by capitalism, that the "average" (whatever that means) American is reduced to these minimalistic experiences. I do stand by the idea that live laugh love is a beautiful mantra which struck a deep chord with millions, and that that alone has value and merit that's quite far flung from "I enjoy AI slop, actually," but I do see a lot more clearly how you're connecting those two, to me, disparate ideas.

                    Thanks again, love the username.

                    1 vote
                    1. snake_case
                      Link Parent

                      Any time. AI produced content isn’t inherently bad just cause its AI, a lot of the time it comes up with pretty good stuff, and pretty much all the time it comes up with “good enough” stuff.

                      We went from “wow” to “slop” so quickly because of HOW people are using it; we kinda lost the wow factor even when people do use it in a constructive way. Even if that constructive way is putting real humans out of work.

  2. [6]
    skybrian
    Link

    I wonder if it would make sense to change it to an invite-only program rather than ending it? For Tildes, getting an invite isn’t so hard, but it seems effective in keeping things under control.

    12 votes
    1. [2]
      LukeZaz
      Link Parent

      Getting an invite for Tildes has no financial incentive behind it. This would. That alone would mean all the LLM use would still be present, except instead of reviewing a huge pile of bogus security reports, Curl would instead have to review a huge pile of bogus invite requests.

      10 votes
      1. skybrian
        Link Parent

        It seems like this depends on the criteria for getting an invite. For Tildes, there's the advantage that Deimos doesn't have to do the vetting. It's outsourced to users, who can do the vetting however they like. You can give invites just to people you know, or who have a social media presence that looks reasonable, or whatever.
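
        A side benefit of invites is accountability: if each account records who vouched for it, a bad account can be traced back up the chain. A toy sketch in Python (an invented data model, not how HackerOne or Tildes actually implements invites):

          # Toy invite tree: each account points at its inviter, so bad
          # actors can be traced back through everyone who vouched for them.
          inviter: dict[str, str | None] = {"root": None}

          def invite(new_user: str, sponsor: str) -> None:
              inviter[new_user] = sponsor

          def vouch_chain(user: str | None) -> list[str]:
              """The accounts between this one and the root of trust."""
              chain: list[str] = []
              while user is not None:
                  chain.append(user)
                  user = inviter[user]
              return chain

          invite("alice", "root")
          invite("slop_bot", "alice")
          print(vouch_chain("slop_bot"))  # ['slop_bot', 'alice', 'root']

        Inviters who repeatedly vouch for bad reporters could then lose their own invite privileges, which pushes the vetting cost out to the edges of the tree.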

        1 vote
    2. [3]
      TurtleCracker
      Link Parent

      Either that, or some sort of reputation system that projects can sort by would be nice. If someone who has previously found several legitimate security issues submits a defect, I’d want to prioritize looking at it.
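
      Even something crude would help with triage. A toy sketch in Python (the scoring formula and numbers are made up, purely to illustrate sorting the review queue):

        # Laplace-smoothed hit rate: brand-new accounts start in the
        # middle, a track record of confirmed findings floats reports to
        # the top, and a pile of rejected slop sinks them to the bottom.
        def reputation(confirmed: int, rejected: int) -> float:
            return (confirmed + 1) / (confirmed + rejected + 2)

        queue = [
            {"reporter": "veteran", "confirmed": 12, "rejected": 3},
            {"reporter": "new_acct", "confirmed": 0, "rejected": 0},
            {"reporter": "slop_farm", "confirmed": 0, "rejected": 40},
        ]
        queue.sort(key=lambda r: reputation(r["confirmed"], r["rejected"]),
                   reverse=True)
        print([r["reporter"] for r in queue])
        # ['veteran', 'new_acct', 'slop_farm']

      The smoothing matters: without it, a brand-new account and a slop farm with forty rejections would both score zero, and you couldn't tell them apart.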

      9 votes
      1. [2]
        bme
        Link Parent

        The problem you then have is the Sybil problem. It has to cost enough to create an account that you can't just spin up a sub community to all self validate.

        It's genuinely very difficult to avoid having to really slam the doors and punish people who just want to participate in good faith.

        7 votes
        1. raze2012
          Link Parent

          I guess at that point the only thing to really do is require a government ID in order to be a valid account.

  3. [5]
    entitled-entilde
    Link

    I’m sure at some point their system could get overwhelmed, but this seems pretty premature. Seven issues in a sixteen-hour period becomes twenty over sixteen days. It sounds like people were bored during the holidays. Plenty of other open source projects get hit with similar “yes it’s a bug but not a real one” requests. Considering how critical curl is, and its history of real security vulnerabilities, getting rid of the bounty system is bad… I’d prefer they got more money for security people.

    6 votes
    1. [2]
      Zorind
      Link Parent

      I think it’s a time commitment issue on the developer’s side of things - curl is still led basically by one guy (though there are lots of people who are “maintainers” and contribute, but not as much as the lead).

      It's about sustainability too. #curl is a small project. We cannot spend multiple hours every day arguing with people who want money for having found what is perhaps a bug - but often is not even that.
      It drains us. It drowns us.
      Onward and upward!

      https://mastodon.social/@bagder/115893088600630096

      14 votes
      1. okiyama
        Link Parent

        Yeah, it's that XKCD comic about the stick holding up the internet. It's frankly astounding that so, so many tools we take for granted are so extremely underfunded and under-maintained. I don't have a solution, but man, it's just crazy that curl, like, curl, the thing that's in every shitty one-off script you write that turns into a 30-year core part of your production product that "we'll fix later", is basically one dude keeping the lights on.

        9 votes
    2. [2]
      DynamoSunshirt
      Link Parent

      LLM garbage PRs take a surprisingly long time to review. You have to start in good faith, because you never know if they used LLMs for all of it. It could just be an ESL person using LLMs to help write English, after all. And validating a security claim could take hours: setting up, trying to replicate, and declaring failure.

      11 votes
      1. gary
        Link Parent

        Yes, this was my experience with H1 reports as well. It's a lot more work than most people expect. There's also the back-and-forth with reporters because they have a financial incentive to push for their report to be accepted and paid out. That exchange takes a while because you're also trying to not piss them off. If they are a legitimate reporter that happened to just have a bad report, you want to be polite enough that they don't withhold a real vulnerability in the future. One report/day would have made me very miserable and I was paid to work on that stuff (not as my main work).

        8 votes
  4. zwro
    Link

    It seems to me that the author sees personal ridicule as a necessary evil. Not only is it not, it targets exactly those who least deserve to be a target. Purely profit-driven bounty hunters, who use AI because it's cheap and delegate verification to maintainers, are the ones who care least about reputation. Personal shaming only affects those who care and could learn from the mistake. It's not an effective or positive pedagogical tool.

    Personally, I see the end of most bounties as a silver lining, being generally against the practice, but I don't think AI makes proper incentive systems impossible. We just have to be smart about how we deal with it.