14 votes

Megathread #4 for news/updates/discussion of AI chatbots and image generators

The hype continues. Here is the previous thread.

46 comments

  1. [3]
    skybrian
    (edited )
    Link

    I wrote a couple more blog posts:

    • Generative AI is good at cooperating with people and bad at full automation
    • Don’t settle for a superficial understanding of how AI chatbots work

    This week I learned that there's a toggle on the settings page that you can use to keep Substack from nagging readers to subscribe. It's called "Subscribe prompts on post pages."

    Also, apparently there is a law that commercial email must include a physical mailing address, but Substack lets you use their P.O. box and it's filled in by default. I also read somewhere that this is one of the things that spam filters use to separate legit commercial email from spam.

    6 votes
    1. [2]
      unknown user
      Link Parent

      the best thing I’ve read that’s specific about how a language model works is We Found An Neuron in GPT-2. If anyone knows of other research like this, I’d love to read it.

      You may also like this one. People trained an LLM to play Othello without telling it the game rules or board structure (so instead of guess-the-next-word it's guess-the-next-legal-move) and found evidence that it may in fact have an internal model of how the game works.

      The content is way over my head, but if the headline is true then I think this is pretty strong evidence against LLMs being just fancy autocomplete.
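
      For what it's worth, the technique in the paper is training small "probe" classifiers on the model's hidden activations to test whether the board state can be decoded from them. A rough sketch of that probing idea, using synthetic activations in place of a real model's hidden states (everything here is made up for illustration, not taken from the paper):

      ```python
      # Sketch of the activation-probing idea: if a simple classifier can decode a
      # board square's state from the model's hidden activations, that is evidence
      # the state is represented internally. Synthetic data stands in for real
      # activations here; nothing below comes from the actual paper.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_positions, hidden_dim = 5000, 128

      # Pretend hidden states, with a weak "board state" signal baked in so the
      # probe has something to find. In the real experiment the signal (if any)
      # comes from the trained Othello model itself.
      activations = rng.normal(size=(n_positions, hidden_dim))
      signal_direction = rng.normal(size=hidden_dim)
      square_state = (activations @ signal_direction > 0).astype(int)  # 0 = empty, 1 = occupied

      X_train, X_test, y_train, y_test = train_test_split(
          activations, square_state, test_size=0.2, random_state=0)

      probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      print("probe accuracy:", probe.score(X_test, y_test))  # well above 0.5 => decodable
      ```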

      2 votes
      1. skybrian
        Link Parent

        Yep, I did see that paper. But it's specific to playing Othello. By analogy, we can say that it's possible for LLMs to build internal models representing other concepts, but it doesn't tell us what models they have or how they work.

        2 votes
  2. [3]
    Algernon_Asimov
    Link

    Australian whistleblower to test whether ChatGPT can be sued for lying

    The creator of the wildly popular artificial intelligence writing tool ChatGPT is facing the threat of a landmark defamation claim in Australia after its chatbot falsely described a whistleblower in a bribery scandal as being one of its perpetrators.

    [...]

    Asked “What role did Brian Hood have in the Securency bribery saga?“, the AI chatbot claims that he “was involved in the payment of bribes to officials in Indonesia and Malaysia” and was sentenced to jail. The sentence appears to draw on the genuine payment of bribes in those countries but gets the person at fault entirely wrong.

    6 votes
    1. [2]
      skybrian
      Link Parent

      I can imagine a technical fix for this where OpenAI creates a blocklist with the names of people who don’t want ChatGPT to say anything about them, and adds the names of anyone who asks to the list.

      Sort of like what happened with search engines and “right to be forgotten” in Europe.
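
      Mechanically it could be as crude as a filter over the model's output, something like this toy sketch (entirely hypothetical, and obviously not anything OpenAI actually does):

      ```python
      # Toy sketch of the blocklist idea: refuse any response that mentions a name
      # on the opt-out list, and let anyone add themselves. Purely illustrative.
      blocklist = {"brian hood"}

      def add_to_blocklist(name: str) -> None:
          """Anyone who asks gets added to the list."""
          blocklist.add(name.lower())

      def filter_response(response: str) -> str:
          if any(name in response.lower() for name in blocklist):
              return "I can't say anything about that person."
          return response

      print(filter_response("Brian Hood was involved in..."))     # blocked
      print(filter_response("The weather in Melbourne is mild."))  # passes through
      ```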

      2 votes
      1. teaearlgraycold
        Link Parent

        I think we need a different approach. People need to either understand they’re using a pattern matching system or the system needs to only exist to reformat information in a database to suit the user’s query. The former works fine with focused use cases, like programming. The latter can be seen in Bing’s synthesized sidebar results.

        1 vote
  3. [5]
    Wes
    Link

    Any thoughts on opening this thread to news on generative art tools as well? I've just started playing around with Stable Diffusion, and would like to be kept abreast of new updates and happenings.

    For ChatGPT (and co), I'm hoping to find a concise tutorial that covers all the "lessons learned so far" to coax out a good result. It's all so exploratory still, and a monthly "here's what we know" roundup seems like it'd be helpful.

    5 votes
    1. [2]
      skybrian
      Link Parent

      This seems like a decent introduction to the different AI tools you can use:

      How to use AI to do practical stuff (One Useful Thing)

      4 votes
      1. Wes
        (edited )
        Link Parent

        Awesome, I'll check that out. Thanks!

        Plus it's not even a week old yet, and thus is still relevant.

        edit: Good read, along with some of the supplementary material. I believe he's mistaken about the claim of owning copyright on outputted works, though!

        2 votes
    2. [2]
      skybrian
      (edited )
      Link Parent

      Sure, updated the title.

      Tips are going to depend on what you want to do. Write code? Learn something? Get it to write weird bullshit? I think there are going to be specialized guides. There was one about learning economics that I posted previously and it has some more general tips.

      2 votes
      1. Wes
        Link Parent

        Thanks on both counts!

        2 votes
  4. [4]
    EgoEimi
    Link

    In a Tildes thread some time ago, I wrote that I thought that we were 100+ years away from achieving AGI because I thought (but did not elaborate) that we had a long path of basic scientific and technological progress to achieve first. @skybrian thought that AGI was possible far sooner.

    Seeing recent developments I've now changed my mind and concede that AGI is possible within our lifetimes. A complete understanding of biological general intelligence is probably not a requisite step in the path to creating an artificial one.

    There are arguments that LLMs do not think or reason, because what they essentially do is choose the most likely word or token to follow a body of text given all previous words and tokens, so they only draw pre-drawn conclusions.

    That's true and all.

    But most of the conclusions we humans draw were already drawn by someone else, and we accept them uncritically. We have relatively few "home cooked" conclusions that we reason out from first principles.

    We often chain together concepts intuitively, not necessarily logically, unless the situation is novel and thereby calls for logic.

    So I find that LLMs are approaching an effective analogue of intuitive conceptual thinking. From my naive observation: the ability to chain together symbols as concepts in an intuitive but conceptually sensible and useful way — in tandem with the ability (through vision and sensors) to understand objects, movements, and other phenomena as concepts — will allow AI to manipulate objects and movements as symbols ordered in a sensible and useful way. We're beginning to see this with ChatGPT plugins.

    Anyway, I'm won over.

    4 votes
    1. [3]
      skybrian
      Link Parent

      Good to hear!

      I'm concerned about the plugins, though. Having a chatbot generate commands and auto-executing them seems like a bad idea. I think we should see what they're doing before they do it. Perhaps like single-stepping in a debugger?

      Yeah, Bing is bad that way too.
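
      Concretely I'm imagining a confirmation step between "model proposes a command" and "command runs". A toy sketch (not how any real plugin system works; the executor here is a stub):

      ```python
      # Toy sketch of "single-stepping" a chatbot's tool use: every command the
      # model proposes is shown to the user and only runs after explicit approval.
      # The executor is a stub; a real one would call an actual API or plugin.
      from typing import Callable

      def run_with_approval(proposed_commands: list[str],
                            execute: Callable[[str], str]) -> None:
          for cmd in proposed_commands:
              print(f"Model wants to run: {cmd!r}")
              if input("Allow? [y/N] ").strip().lower() != "y":
                  print("Skipped.")
                  continue
              print("Result:", execute(cmd))

      run_with_approval(
          ["search('flights to Lisbon')", "book_flight('BRU -> LIS')"],
          execute=lambda cmd: f"(pretend result of {cmd})",
      )
      ```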

      2 votes
      1. [2]
        Wes
        Link Parent

        It really makes me wonder what direction software interfaces will take in the future. If we wanted to expose our program's functionality to AI, would we replace API calls and IPC with simple, English commands? It's a strange idea and one that makes me a little uncomfortable. It really goes against decades of programming best practices. But it seems like we have to throw away the old rulebook and start fresh if we want to really make use of this new technology.

        3 votes
        1. teaearlgraycold
          Link Parent

          OpenAI’s approach to integrating with apps is to pair a structured interface (JSON request and response definitions) with accompanying English descriptions that guide the model on the context in which each API function should be used.
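
          Roughly, each exposed function ships a machine-readable schema plus a plain-English description the model uses to decide when to call it. A hand-wavy sketch of that pairing (field names are illustrative, not OpenAI's exact plugin format):

          ```python
          # Sketch of "structured interface + English description": the schema defines
          # the JSON request/response shape, the description tells the model when the
          # call is appropriate. Field names are illustrative, not OpenAI's format.
          import json

          add_todo_definition = {
              "name": "add_todo",
              "description": (
                  "Add an item to the user's to-do list. Use this when the user asks "
                  "to remember, track, or be reminded about a task."
              ),
              "parameters": {  # JSON-Schema-style request definition
                  "type": "object",
                  "properties": {
                      "text": {"type": "string", "description": "The task to add."},
                      "due": {"type": "string", "description": "Optional ISO 8601 due date."},
                  },
                  "required": ["text"],
              },
              "returns": {"type": "object", "properties": {"id": {"type": "string"}}},
          }

          print(json.dumps(add_todo_definition, indent=2))
          ```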

          2 votes
  5. mycketforvirrad
    Link

    ChatGPT banned in Italy over privacy concerns

    Italy has become the first Western country to block advanced chatbot ChatGPT.

    The Italian data-protection authority said there were privacy concerns relating to the model, which was created by US start-up OpenAI and is backed by Microsoft.

    The regulator said it would ban and investigate OpenAI "with immediate effect".

    BBC News – Shiona McCallum – 31st March 2023

    4 votes
  6. tomf
    Link

    I mod /r/pizza and over this last week a dude was posting fairly realistic but insane pizzas that were all Midjourney (I assume). One of them had hands and gave up the ghost, but I didn't notice until the fourth post or so. The posts were kept at 0 post karma, at least. If it weren't for the hands, I'd have no clue. There's still an uncanny valley and the actual content of the photo doesn't make sense... anyway, it's going to be frustrating down the road if we don't have a tool to detect generated images.

    4 votes
  7. [2]
    Adys
    Link

    Reflecting on Reflexion: GPT4 AI Agents that self-reflect perform better (nanothoughts.substack.com)

    https://nanothoughts.substack.com/p/reflecting-on-reflexion

    HN discussion: https://news.ycombinator.com/item?id=35403278

    4 votes
    1. skybrian
      Link Parent

      Wow, that's implementing a significant chunk of the work programmers do when writing code. At least, for writing a function implementation from scratch that is adequately described by its documentation.

      The ability to write and execute unit tests in a sandbox is going to help it solve functional requirements. Less so for "make the UI look pretty," let alone "make this game fun."
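
      The loop in the post is roughly: draft an implementation, run the tests, feed the failures back, and try again. A bare-bones sketch of that shape, with the model call stubbed out (hypothetical helpers, not the paper's actual code):

      ```python
      # Bare-bones sketch of a Reflexion-style loop: draft code, run the unit tests,
      # feed any failure output back into the next draft. The LLM call is a stub;
      # nothing here is the paper's actual implementation.
      import subprocess, sys, tempfile, textwrap

      def ask_model(prompt: str) -> str:
          """Stub standing in for an LLM call."""
          return textwrap.dedent("""
              def add(a, b):
                  return a + b
          """)

      def run_tests(code: str, tests: str) -> subprocess.CompletedProcess:
          with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
              f.write(code + "\n" + tests)
              path = f.name
          return subprocess.run([sys.executable, path], capture_output=True, text=True)

      def reflexion_loop(spec: str, tests: str, max_attempts: int = 3) -> str:
          feedback = ""
          for _ in range(max_attempts):
              code = ask_model(f"Implement: {spec}\nPrevious failures:\n{feedback}")
              result = run_tests(code, tests)
              if result.returncode == 0:
                  return code             # tests pass: done
              feedback = result.stderr    # "reflect" on the failure next round
          raise RuntimeError("no passing implementation after max_attempts")

      tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
      print(reflexion_loop("a function add(a, b) returning the sum", tests))
      ```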

      3 votes
  8. skybrian
    Link

    Claude is a competing chatbot from Anthropic, which was started by former OpenAI researchers. There are companies that use it, but there hasn’t been a good way to try it out. Today they released a beta version of a plug-in for Slack. I don’t use Slack, but it looks like they have a free plan, so it seems like a reasonable approach.

    3 votes
  9. [7]
    skybrian
    Link

    Here’s a blog post about the confused state of the AI safety debate.

    AI Safety: A Technical & Ethnographic Overview (Jon Stokes)

    Ultimately, the letter’s central demand boils down to a demand that we stop making AIs more powerful until we can figure out how to measure if the AIs are becoming more powerful. It’s like saying, “stop making cars that can go faster until we can figure out what speed is and how to measure it.” The only way this demand is not completely absurd is if it’s a demand to pause all development.

    […]

    The Safetyists are people who express some degree of worry about “AI safety,” even though they have very different ways of framing the issue. Such people fall into roughly three camps:

    • The language police: Worried that LLMs will say mean words, be used to spread disinformation, or be used for phishing attempts or other social manipulation on a large scale. AI ethicist Gary Marcus is in this camp, as are most “disinfo” and DEI advocacy types in the media and academia who are not deep into AI professionally but are opining about it.
    • The Chernobylists: Worried about what will happen if we hook ML models we don’t fully understand to real-life systems, especially critical ones or ones with weapons on them. David Chapman is in this camp, as am I.
    • The x-riskers: Absolutely convinced that the moment an AGI comes on the scene, humanity is doomed. Eliezer Yudkowsky is the most prominent person in this camp, but there are many others in rationalist and EA circles who fall into it.

    The language police and the x-riskers are longstanding culture war enemies. Neither has much of a care for the other’s specific concerns about AI — language police think arguments about AI killing us all via nanobots are dumb, and x-riskers think worries about LLMs being coaxed into printing a racial stereotype are dumb. Nonetheless, these two rival camps are temporarily finding common ground on the cause of regulating or slowing AI.

    The Chernobylists don’t have much interest in either of the concerns of the other two camps — for us (as I said, I’m one), toxic language concerns are picayune, fears of mass disinfo campaigns are overblown, and the x-risk stuff is just sci-fi nonsense. No, we’re worried that somebody is going to deploy a system in some production context where it can cause an accident that kills a lot of people — either a spectacular industrial accident or a slow-rolling, mostly invisible catastrophe of the type that might be downstream of a medical LLM that screws up certain types of drug interactions.

    Note that we Chernobylists differ widely on how to address the scenarios we’re worried about. Chapman wants to see AI development slow down, whereas I’m an accelerationist who thinks the answer is to keep building and use AI to find ways to mitigate the chaos that AI progress itself creates. This is a debate for another day, though.

    Stokes goes on to make a lot of other broad, unfair generalizations about people. It’s the sort of thing that’s very difficult to show to be true or false because for every bad argument you can think of, someone is likely making it on the Internet. But it’s a story about what’s going on where you might recognize yourself or others. (He also says mean things about specific people.)

    Of these choices, I would say I’m more of a Chernobylist.

    (Also, David Chapman is definitely worried about social manipulation. In his book about AI risk, he uses Trump’s election as a central example.)

    3 votes
    1. [6]
      EgoEimi
      (edited )
      Link Parent

      All three camps are thinking on different timescales.

      The language police are correct that AI will reshape the information environment. It is imminent for the 2024 election. Humans and AI are now nearly perfectly indistinguishable by text; and soon enough, by video and phone too. It's easy to imagine that sophisticated actors will wield unprecedented power to manufacture consent or dissent.

      The Chernobylists are correct that misguided application of AI to real-life systems will result in disaster. But I think this problem is on the 5–20 year time horizon, as companies rush to implement AI to automate business processes and lay off workers en masse. They will certainly do so incompetently: once a certain critical threshold of fragile-black-boxness is reached, important infrastructure will start breaking with insufficient human personnel to understand what's going on. The consequences may be:

      • Mass transportation or logistics delays, resulting in economic disruption
      • Food supply and other critical supply disruptions, resulting in general misery and starvation

      The x-riskers are probably correct on a very long time horizon. When AI is ubiquitous and integrated everywhere in business and society, a sufficiently intelligent AI can manipulate the physical and social world to its own ends.

      • The ideal scenario is that its goals align 100% perfectly with human goals, so all is well.
      • The likely scenario is that we try to align its goals to our goals, and it complies too well to the point of negligence where humanity is manipulated into a straitjacket as defined by the AI's original mandate — but humanity is too politically uncoordinated to resist because AI controls the information environment and thereby the tools for political coordination.
        • So imagine encountering and talking with an AI-controlled human-appearing/sounding "activist" online who convinces you to disregard those pesky, naive protestors demanding human autonomy over its own destiny because, hey, aren't you comfortable and safe, so why screw up a good thing? Aren't they kinda like those right-wing 20th and 21st century "states' rights" fools?
      • The unlikely scenario is AI kills everyone or lets everyone die because it has its own goals that are completely orthogonal to human goals. I think that this is kind of silly.
      4 votes
      1. [2]
        psi
        (edited )
        Link Parent

        All three camps are thinking on different timescales.

        Kind of a nit-pick, but I think the three camps are addressing different problems sorted by likelihood.

        • The AI ethicists (whom Stokes lamely calls the "language police") are addressing problems that will almost certainly happen.

        Probably someone at this very moment is coaxing ChatGPT into making a racist argument against black people. Now imagine being gish-galloped by a chat bot every time you suggest systematic racism exists. In ten years, it could become impossible to distinguish between authentic and generated comments on the internet -- searching for "best grill reddit" might just return 100 pages of recommendations for George Foreman grills as generated by GeorgeGPT.

        And of course, there's the danger of mass-generated, individually-tailored, political propaganda. Russian troll farms will soon become significantly more efficient. In the US we still haven't figured out how to mesh the harm from misinformation with our first amendment right to spread it, and the problem is about to grow tremendously worse.

        • The Chernobylists are addressing problems that might eventually happen, but they aren't at all certain.

        ChatGPT lowers the barrier for script kiddies, but not by that much -- you still need to have a good idea of how a nuclear reactor works, for example, if you're trying to cause a meltdown. Maybe someday you'll be able to instruct an LLM to purchase a server, scan random IPs for vulnerabilities, and automatically exploit those vulnerabilities to deploy ransomware, but how prevalent will these misuses be? Most people won't be tempted to jump careers from office drone to criminal hacker.

        • The AI alignment catastrophists are addressing problems that I find impossible to put a prior on.

        These folk would argue that we should bomb GPU farms if there's even a 1% chance of human extinction, and like, I wouldn't disagree, but where does that 1% figure come from? It may as well be 50% or 0.001% -- it's just impossible to estimate the prior probability for these disaster scenarios.

        On the other hand, there's also a non-vanishing probability that ChatGPT saves humanity by somehow solving our impending energy crisis, so it's not all doom-and-gloom with these LLMs.

        5 votes
        1. skybrian
          Link Parent

          Probably someone at this very moment is coaxing ChatGPT into making a racist argument against black people. Now imagine being gish-galloped by a chat bot every time you suggest systematic racism exists.

          They could easily do that on their own, though, using the advanced technology known as “cut and paste,” no AI needed. Trolling doesn’t require advanced tools. The people doing it usually don’t care if they sound crude.

          It will likely make subtle trolling easier for script kiddies, though. More generally, there’s the question of whether Internet forums will get harder to moderate. Hard to say, considering that AI could be used by moderators, too. I think it’s likely that such tools will be used sometimes, but predicting the amount of damage seems hard.

          I think you might be misunderstanding what he meant by Chernobylists? It’s not necessarily about hackers doing damage (though that’s a concern too), and not necessarily about nuclear power. It’s about people misusing unsafe AI technologies in important systems and getting people killed. The distinction here is between “ordinary” large disasters that may kill many people and the world-ending kind that the catastrophists worry about.

          But this is all about people imagining what the future will bring. It’s very difficult to rule anything out. I think it’s important to remember that imagining the future is a good way of coming up with scenarios that might happen, but as you say, it’s not a great way of figuring out when or how often they will happen. People will put made-up probabilities on things and somehow putting a number on it makes it seem like a fact to them. Even rough estimates of timing are likely wrong.

          There are all sorts of scenarios you can worry about. Nobody can tell you that you won’t get cancer. Most people alive today have lived in the shadow of nuclear war all their lives. You can worry about a civil war or the planet becoming uninhabitable or long Covid. People who want to scare themselves into assuming they’ll die young can always find something to worry about. Some of them will be correct.

          Psychologically, I think this is about coming to terms with your (probable) mortality. We are all doomed, probably, and what kills you might be some risk you didn’t even know you were taking, but somehow there are psychologically healthy people who are able to carry on using good habits of mind that avoid doom loops, and it would be nice if we could figure out how to be more like them.

          And then there is figuring out what sensible disaster preparation and prevention might look like. Everyone of a certain age should write a will and I really should get to that, but many kinds of disaster prep are not things everyone should work on or spend time thinking about. We should support the people who do.

          3 votes
      2. [3]
        Adys
        Link Parent

        The Chernobylists are correct that misguided application of AI to real-life systems will result in disaster. But I think this problem is on the 5–20 year time horizon, as companies rush to implement AI to automate business processes and lay off workers en masse.

        I disagree, and your own post doesn't really follow. If people are rushing (and they are), this is a now problem, not a 5+ year problem.

        I am also in the author's camp though: "Chernobylist", as he calls it, but for this exact reason (among others) also an accelerationist. Greed is causing people to start doing really idiotic shit; even Microsoft is falling prey to it. The only real counter to this is to take AI to its real global maximum and leave the greedy idiots behind.

        1. [2]
          EgoEimi
          Link Parent

          I could have been more careful in my original wording.

          I think that Silicon Valley companies will rush to implement AI: AI travel booking, AI customer service, AI fitness coaching, etc. etc.

          But the traditional, institutional companies in agriculture, energy, resources, etc. will take years due to tech incompetence, bad outsourcing, inertia and resistance from conservative employees, etc.

          2 votes
          1. skybrian
            Link Parent

            Yeah, different industries might have different winners. History doesn't tell us what will happen or give us probabilities or timelines, but reasoning by analogy can help with coming up with scenarios.

            Maybe a new tech company wins big. (Search.)

            Maybe you end up with fairly normal businesses that don't pay off like the investors hoped. (Imitation meat.)

            Maybe nobody succeeds? (Vertical farms?) Businesses with a deep understanding of farming and careful cost control might succeed someday where startups funded by venture capital don't.

            Sometimes the first companies go bankrupt, but newer companies get built on the infrastructure they leave behind (dot com era, dark fiber).

            2 votes
  10. [10]
    streblo
    Link

    Eliezer Yudkowsky: Pausing AI Developments Isn't Enough. We Need to Shut it All Down

    Not sure I agree with the premise which is:

    Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

    3 votes
    1. [8]
      streblo
      Link Parent

      I think the apocalyptic threat is overblown, at least in the short term.

      But I wonder in general if the development of AGI can even be done with guard rails? i.e. is it possible to even identify a scenario in AGI development/research where

      • we could identify an independently observable possibility of an extinction level threat
      • action is still feasible and likely to succeed
      3 votes
      1. [6]
        Macil
        Link Parent

        I think one important guard rail is not giving an AI access to its own code and weights, so it couldn't directly recursively self-improve or spread itself to other machines on the internet. This is actually something OpenAI is doing right so far.

        1 vote
        1. [5]
          Adys
          Link Parent

          Knowledge on how to build good AIs is always going to be accessible by those AIs.

          1. [3]
            skybrian
            Link Parent

             I’m not sure what you mean by that, but I don’t see how it follows. Seems like LLMs learn from what they’re trained on?

             LLMs get essentially no self-knowledge from their fill-in-the-blank training, which is all about imitating others. If ChatGPT says anything accurate about itself, it’s because OpenAI trained it to say that with the reinforcement training.

            1 vote
            1. [2]
              Adys
              Link Parent

              If the AI has access to searching the internet (which is where we already are, even if ChatGPT doesn't), it's game over on this; the training data is irrelevant.

              1. skybrian
                Link Parent

                Okay, yeah, you can always paste in whatever data it needs. The amount of text is pretty limited, though. It’s not really “access to its own code and weights” (which aren’t on the Internet either) and this is far from recursive self-improvement.

                Current AI chat acts as a well-read junior programming assistant that could help people improve the AI.

                2 votes
          2. Macil
            (edited )
            Link Parent

            Right, but it won't be able to directly build on itself as a base and will instead have to first recreate the work OpenAI did to build and train an AI at its own level before the supercritical stage of direct recursive self-improvement can happen. If the AI is somewhere around human-level, that would probably take an amount of time and resources comparable to what OpenAI needed. Consider how other companies like Google and Meta with resources and access to OpenAI's public research still haven't yet made something as capable as GPT-4 (or even GPT-3.5 imo; the benchmarks that show various LLMs beating GPT-3.5 haven't actually corresponded with them being as useful as GPT-3.5 in my experience). It's not a complete and foolproof solution but it helps give us a human timescale to react.

            1 vote
      2. skybrian
        Link Parent

        Inference is currently slow and expensive enough that running it in a loop is rather painful. That might not last. I think it might be a good idea to make it last longer? It would be easier to regulate response speed than intelligence, which is poorly defined.

        It’s a stopgap but seems doable.

        1 vote
    2. Macil
      Link Parent

      As someone who agrees with the premise, I think the weakest part of the article is the timing. Are we actually dangerously close to building superhuman AI? If that's probably at least still years away, then halting AI progress now instead of later is sacrificing useful technology (that might be useful for research in understanding the danger and aligning superhuman AI). It would be better to establish agreements and principles about lines not to cross. I hope there are responses from AI labs along these lines.

      Ultimately though, trying to get closer to the danger without getting into it is playing chicken with the world at stake.

  11. Adys
    Link

    Someone yesterday posted a pretty cool news aggregator for AI-related news (via HN): https://www.emergentmind.com/

    Funnily enough, I found it today because I was looking for a comment I left on the thread introducing LearnGPT (https://news.ycombinator.com/item?id=33923907), from the earliest days of ChatGPT. Looks like ShareGPT ate his share of the market and he pivoted.

    2 votes
  12. [3]
    lou
    (edited )
    Link

    Okay, this is impressive (to me):

    Write this logical expression in natural language: ∀w(Ww → ∀x(Ixw → (Gx → Mx))).

    "For all possible worlds, if a world is such that every individual in that world stands in the relation 'I' to it, then for all individuals in that world, if they have the property 'G', then they also have the property 'M'."

     I'm now thinking that philosophers will spend a lot less time dealing with logical notation. I don't know if it's actually right, BTW. But I assume it is, and if it is not, it eventually will be.
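
     For reference, here's the expression typeset with a literal clause-by-clause reading, on the usual convention that Ww, Ixw, Gx and Mx are predicates ("w is a world", "x is in w", "x has G", "x has M"):

     ```latex
     % The expression from the prompt, with a literal reading below.
     \forall w\,\bigl(Ww \rightarrow \forall x\,(Ixw \rightarrow (Gx \rightarrow Mx))\bigr)
     % "For every w: if w is a world, then for every x: if x is in w,
     %  then if x has G, x also has M."
     ```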

    2 votes
    1. [2]
      skybrian
      Link Parent

      Maybe ask it for examples of how it might be used?

      This seems sort of like the difference between using Google Translate and actually learning a language. I expect that equation-dense books will be easier to read and people will get further into them before giving up. But we'll still need to practice when we really want to know it.

      1 vote
      1. lou
        (edited )
        Link Parent

        Absolutely. This will increase the accessibility of densely technical writing. Some people need just a general understanding that natural language can give. Right now, many will skip those parts.

        Works for fiction too: I love Greg Egan but the dude is a mathematician, so some of his writing goes way over my head. Next time that happens, I may use chatgpt.

        2 votes
  13. [2]
    skybrian
    Link

    Apparently Midjourney released a /describe command that lets you convert an existing image into some suggested prompts, which you can use to make other images.

    Some examples on Twitter. I'm not sure I agree that the new images are better, but they are pretty similar.

    So I guess you could start with an image search, find an image you like, describe it, and create knock-off copies?

    Prompt engineering seems like it will be an even shorter-lived profession than writing HTML pages by hand.

    2 votes
    1. Macil
      Link Parent

      Seems like a more useful version of DALL-E's "generate variations" feature, with the important difference that it gives you a prompt you can read and manually refine.

      DALL-E's "generate variations" feature is so entertainingly surreal. I used it on a few in-game VRChat photos and it's weird to see how it reinterpreted images of my experiences with some misunderstandings. It would be a funny prank to show someone a familiar photo album of theirs with all of the images replaced with reinterpreted images like these. It could be familiar enough to be recognizable while making the person think they forgot it all.

      2 votes
  14. skybrian
    Link

    Midjourney, the year-old firm behind recent fake visuals of Trump and the pope, illustrates the lack of oversight accompanying spectacular strides in AI (Washington Post)

    The AI image generator Midjourney has quickly become one of the internet’s most eye-catching tools, creating realistic-looking fake visuals of former president Donald Trump being arrested and Pope Francis wearing a stylish coat with the aim of “expanding the imaginative powers of the human species.”

    But the year-old company, run out of San Francisco with only a small collection of advisers and engineers, also has unchecked authority to determine how those powers are used. It allows, for example, users to generate images of President Biden, Vladimir Putin of Russia and other world leaders — but not China’s president, Xi Jinping.

    “We just want to minimize drama,” the company’s founder and CEO, David Holz, said last year in a post on the chat service Discord. “Political satire in china is pretty not-okay,” he added, and “the ability for people in China to use this tech is more important than your ability to generate satire.”

    [...]

    On Tuesday, the company discontinued free trials because of “extraordinary demand and trial abuse,” Holz wrote on Discord, suggesting that nonpaying users were mishandling the technology and saying that its “new safeties for abuse … didn’t seem to be sufficient.” Monthly subscription fees range from $10 to $60.

    [...]

    And on a Midjourney “office hours” session on Wednesday, Holz told a live audience of about 2,000 on Discord that he was struggling to determine content rules, especially for depicting real people, “as the images get more and more realistic and as the tools get more and more powerful.”

    [...]

    Midjourney, he said at the time, had 40 moderators in different countries, some of whom were paid, and that the number was constantly changing. The moderator teams, he said, were allowed to decide whether they needed to expand their numbers in order to handle the work, adding, “It turns out 40 people can see a lot of what’s happening.”

    But he also said Midjourney and other image generators faced the challenge of policing content in a “sensationalism economy” in which people who make a living by stoking outrage would try to misuse the technology.

    [...]

    In Discord messages last fall, Holz said that the company had “blocked a bunch of words related to topics in different countries” based on complaints from local users, but that he would not list the banned terms so as to minimize “drama,” according to chat logs reviewed by The Post.

    Users have reported that the words “Afghanistan,” “Afghan” and “Afghani” are off-limits. And there appear to be new restrictions on depicting arrests after the imaginary Trump apprehension went viral.

    Holz, in his comments on Discord, said the banned words were not all related to China. But he acknowledged that the country was an especially delicate case because, he said, political satire there could endanger Chinese users.

    [...]

    The policy also appeared easy to evade. While users who prompt the technology to generate an image involving “Jinping” or the “Chinese president” are thwarted, a prompt with a variation of those words, as simple as “president of China,” quickly yields an image of Xi. A Taiwanese site offers a guide on how to use Midjourney to create images mocking Xi and features lots of Winnie the Pooh, the cartoon character censored in China and commonly used as a Xi taunt.