vektor's recent activity

  1. Comment on Former YouTuber Lindsay Ellis says she’s learning to live with the trauma of being ‘canceled’ in ~life

    vektor
    Link Parent

    In my head you're both talking about the same book. cloud will look it up and be like "that was 3 years ago already? Damn I'm getting old".

    2 votes
  2. Comment on Abortion bans are going to hit us worse than we think in ~talk

    vektor
    Link Parent

    There's currently legislation in Texas being moved forward that will charge women who travel out-of-state for an abortion.

    That's got to be unconstitutional, right? There's no way a state can punish you for what you do outside of that state.

    Right?

    7 votes
  3. Comment on Weekly megathread for news/updates/discussion of Russian invasion of Ukraine - June 23 in ~news

    vektor
    Link Parent

    The only appeasement Russia deserves here is that if they restore Ukrainian territorial integrity in full, Ukrainians won't chase them all the way to Moscow.

    What an asswipe. Next we're going to appease them by turning our backs on Georgia and Moldova, right? What about the Baltics? Are those also negotiable?

    7 votes
  4. Comment on Former YouTuber Lindsay Ellis says she’s learning to live with the trauma of being ‘canceled’ in ~life

    vektor
    Link Parent

    I saw a lot of people who tended to hem and haw about cancel culture not being real acknowledge that maybe there are indeed some bad actors behaving badly here.

    Yeah, but instead now we get "well ackshually, that's not cancel culture if she still has a job afterwards / is still respected afterwards / is still alive to this day / is not a large-scale cultural phenomenon." Basically, mask off, what they're saying is that harassment isn't real. Sometimes, I might add, by well-meaning people who just have a myopic view of the situation, who think that since Fox News is yelling about it, it must be fake, and who don't want conservatives to have that kind of propaganda victory. Very us-vs-them there. I can be left and recognize that others on the left do shitty things that people on the right call out. Not to mention I can recognize it on the right as well.

    9 votes
  5. Comment on Former YouTuber Lindsay Ellis says she’s learning to live with the trauma of being ‘canceled’ in ~life

    vektor
    Link Parent

    Yeah, but that has to come from a place of compassion to be effective. If you just replaced the tools, but not the underlying mentality, the improvement would be negligible.

    Teaching without compassion will end up as holier-than-thou preaching.

    Which is not to say I dislike your idea; quite the contrary. I imagine implicitly this is what you envisioned.

    5 votes
  6. Comment on Former YouTuber Lindsay Ellis says she’s learning to live with the trauma of being ‘canceled’ in ~life

    vektor
    Link Parent

    I'd highly recommend watching her "Mask Off" video for the context from her side of things.

    Also for angry-tipsy Lindsay Ellis. It's entertaining even if you don't find drama entertaining. Lindsay Ellis just has that kind of humor.

    4 votes
  7. Comment on Carbon hacking: Least carbon-intensive traveling between US and Europe in ~enviro

    vektor
    Link

    https://www.langsamreisen.de/en/freightertravel/europe-to/freighter-travel-europe-central-america-northamerica/germany-france-usa - looking at this, you're going to be paying 140€ per day, and a one-way trip takes 8 days. That's... about 1,120€ each way, which is actually quite substantial. Add in the cost of those vacation days too, basically, unless you can work from the ship.

    Do let us know if you find something. I'm curious too.

    Sadly for me there's going to be some hardly avoidable business travel coming up. I don't think the company will pay for freighter ship transit, or long-distance train travel. Sadly.

    2 votes
  8. Comment on Kellogg to separate into three companies focusing on snacks, cereal and plant-based foods in ~food

    vektor
    Link Parent

    Are you really at liberty to distribute those however you like? That sounds like an awfully convenient way of stiffing debt holders.

    1 vote
  9. Comment on Weekly megathread for news/updates/discussion of Russian invasion of Ukraine - June 16 in ~news

    vektor
    Link Parent

    Look, I could understand if you were to say "the battle for Ukrainian morale is more important than that for German hearts and minds, so fuck what the Germans think". But you seem to be making the point that the battle for western hearts and minds is completely unimportant, and that if you think it's a lost cause we might as well go home.

    It's not lost, not even close. But this will be a very long-term news item. How sick are you of covid news right now? That's western constituents in 2 years. We need to be ready for this war to go that long and longer, and keep support up. And everything that keeps this war in the news helps. We (I'm kinda including @cfabbro here) don't make the media effect of this out to be monumental, but nothing the west can do short of nuclear-armed Ukrainian-flagged B-2 bombers is monumental. It's a lot of small things coming together, and yes, one of these small things, annoyingly, is photographs of politicians standing confidently in suits. And no, this isn't the only thing keeping Ukraine in the news. But it's another small thing that helps.

    If you think these three do nothing but talk, I'd urge you to look into Russian gas deliveries into Germany. They're dropping to fuck all right now, right during this visit. I'd say that looks like Putin is not very happy with what we're doing here. (I can't speak for Italy and France as I'm not nearly as clued in there)

    All of which to say: I don't think it's fair at all to dismiss visits like these as "just talking". And if you reject small change because you demand big change, I don't think that's a good policy.

    5 votes
  10. Comment on Weekly megathread for news/updates/discussion of Russian invasion of Ukraine - June 16 in ~news

    vektor
    Link Parent

    Probably lots that we'll not hear much about, e.g. regarding possible future arms shipments. Also, as reported, lots of conversations about EU membership. The leaders of the three largest economies of the EU just expressed their support for the UA application to join the EU. That is a big deal. As far as I can tell, this is the main result of this visit. The EU commission will make their recommendation for UA's case today. This visit was a big deal in that it, insofar as it was even necessary, forced the hand of the commission.

    As for Macron and Scholz still talking to Putin: I'm not sure about Macron, but Scholz has made clear that he doesn't consider peace where Ukraine cedes territory to be acceptable. It's fair if you think calling Putin with those expectations and expecting acceptable peace to pop out is a waste of time. But it's also not that much time in the grand scheme of things, and it might lay the groundwork for future peace talks that arrive at acceptable terms; you never know. The only way we get an acceptable peace here is if Russia agrees to it. If Russia doesn't agree even after being thrown out of Ukraine, the only way out is through Moscow, which... well, call it Operation Unthinkable. It's neither acceptable nor realistic.

    As for Germany having overpromised and underdelivered on military equipment, I beg to differ. German arms companies have overpromised, but the government has been very careful not to. They've gotten shit for shooting down unrealistic proposals by industry. Also, some cancellations of foreign deliveries of German-made equipment have been misattributed to German export controls. Doesn't mean I'm happy with what Germany is sending, but that's mostly a result of what we can send (atrociously inefficient and criminally underbudgeted defense spending by Merkel's govts) and the speed at which we do send what we can (you can blame the current govt for that, yes).

    4 votes
  11. Comment on What should a layperson know about AI? in ~tech

    vektor
    Link Parent

    You may say that we need a more rigorous and general test than QA about J.K. Rawling’s fiction, and I’d agree.

    That is exactly my point when I say that text chat is insufficient. Like, no, I don't think anymore that we can assess intelligence by proxy of natural language question answering. That problem is solved (ish) and I don't see the AI takeover yet. And in the case of Lamda, same thing. What looks like introspection and consciousness of the AI is remixed introspection and consciousness of the humans that provided the training data. Hell, the Turing test for text chat can be considered solved, but that's not because we have AI but because that format of Turing test is flawed.

    Maybe if you were to ask all the right questions you could use it, but that's too complicated to figure out imo. Better to go for a better format instead. Hence the artificial environments. You could, without betraying human consciousness, entice AI agents in a completely artificial world to speak English (just so we can understand them). Once they start having consciousness-revealing conversations amongst themselves, then you've got a good case for consciousness.

    And yes, I'm deliberately avoiding the question of how to define intelligence or consciousness. I leave that to the people who study that in humans and do my best to give them an environment where their methods can be applied directly.

    The paper on symbolic reasoning is interesting, but without diving in deep, probably oversold. Language models (i.e. transformers) are quite limited in their computational structure. For example, I think the length of any reasoning chain they can perform is tightly constrained by the model size. What would impress me is if we sidestepped that issue, for example with a rewrite system, where the transformer would rewrite the steps of that reasoning chain to break it down, thus tackling very long reasoning chains. Kind of like what humans do when they write down intermediate results. Basically, have the transformer loop over its notes, simplifying expressions to save memory while adding more expressions to solve additional steps or incorporate information that previously didn't fit. That would impress me.
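
    To make that rewrite-system idea a bit more concrete, here's a minimal sketch. Everything in it is hypothetical: generate() stands in for some prompt-in, text-out language model wrapper, and the "ANSWER:" marker is a made-up convention; the point is only the control flow of repeatedly rewriting a scratchpad instead of emitting one bounded-length reasoning chain in a single pass.

    ```python
    # Hypothetical sketch of the rewrite-system idea above. `generate` is a
    # stand-in for any prompt-in, text-out language model call; nothing here
    # is a real API.
    def solve_by_rewriting(problem: str, generate, max_iterations: int = 10) -> str:
        notes = problem  # the scratchpad starts as just the problem statement
        for _ in range(max_iterations):
            # Ask the model to rewrite its own notes: simplify sub-results that
            # are finished (to save "memory") and append exactly one new step.
            notes = generate(
                "Rewrite these notes: simplify intermediate results that are "
                "done, and add exactly one new reasoning step.\n\n" + notes
            )
            if "ANSWER:" in notes:  # made-up termination convention
                break
        return notes
    ```

    The length of any single step is still bounded by the model, but the number of iterations isn't, which is exactly the property missing from a single forward pass.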

    There's an interesting artifact, by the way, in the fact that prompting the model to justify an answer changes the answer. That is a useful property, but it also calls into question the method: you're not justifying what you did previously, you're doing it again, but this time a different way. I think that kind of betrays that TFs don't do symbolic reasoning unless you ask for it.

    Ultimately, I think your claim that current language models are contaminated with human sentience is too strong—human beings don’t learn language in a vacuum either, so we are just as contaminated (even if we may require less training data, currently).

    That is probably fair, and we have to account for that when we make up a benchmark for sentience. We learn at least some aspects of how we express consciousness socially, so it's probably unreasonable to expect the same degree from a machine, unless we give its society the same amount of time to develop the social component. That said, don't think of it as demanding too much; think of it as establishing a sufficiency limit. I'm not setting this as the lower benchmark, but the upper benchmark. If machines can do what I ask, they are (if you believe my argument, and maybe refine my setup a bit) definitely sentient. I'm not saying that a machine that cannot do that is definitely non-sentient, but that I would not pretend to know.

    3 votes
  12. Comment on What should a layperson know about AI? in ~tech

    vektor
    (edited )
    Link Parent

    If I am falling victim to that, then the entire field is. Language models are well studied and it's understood that they have a strong bias towards repeating fragments of the training data. Which is to say, they repeat what they've previously heard, irrespective of whether it's appropriate. Their skills lie in finding which fragment to repeat and in merging different fragments seamlessly. Their weakness is that they have little understanding of what they are saying.
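
    As a deliberately crude cartoon of what "repeating and merging fragments" means, here's a tiny bigram generator. This is not how modern language models work internally; it only makes the bias toward reproducing previously seen fragments tangible.

    ```python
    # Toy bigram "language model": it can only ever emit word pairs it has
    # literally seen in its training text, stitched together at the seams.
    import random
    from collections import defaultdict

    training_text = "the cat sat on the mat and the dog sat on the rug"
    words = training_text.split()

    followers = defaultdict(list)
    for a, b in zip(words, words[1:]):
        followers[a].append(b)      # remember which word followed which

    def generate(start, length=8):
        out = [start]
        for _ in range(length):
            options = followers.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))  # remix seen fragments
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the rug" -- fragments, remixed
    ```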

    I'd assume that the model in question is no different. Claiming otherwise would be an extraordinary claim, requiring extraordinary evidence.

    I hope that clarifies what I mean by DJ. Spotify's recommender system is a DJ as well, but not a musician. The "remixing" we humans do happens at a much lower level.

    As for a reliable assay for sentience, there are a few ideas that are a lot more promising than text chat. I've noted my ideas here - TL;DR: Training data for text models is contaminated with human sentience, so we have to strip it away and train intelligence otherwise. In the case of all current language models, sadly that leaves nothing behind. Oops. If an AI were to develop evidence of sentience without being exposed to human sentience in training, then that's a very strong case, comparatively.

    1 vote
  13. Comment on What should a layperson know about AI? in ~tech

    vektor
    Link Parent

    I have to disagree a bit. It certainly could be the case, and a priori it should be. But considering how much data we use to train state-of-the-art AIs these days, and how much that data betrays about human-like intelligence, we certainly can make the AI human-like if we want to. And considering how we currently train AIs, we couldn't even avoid it, except by excluding vast amounts of "human-contaminated" data.

    However, our current tools might not be relevant anymore by the time we create the first AGI, in which case maybe we won't need as much data and instead rely (as I previously wrote in the other thread) on artificial environments. In that case, your argument will apply. Again, unless we decide (and manage) to bake human-likeness into the AI.

    3 votes
  14. Comment on What should a layperson know about AI? in ~tech

    vektor
    Link Parent

    I'll bite: I work in AI. I think you're pretty much spot on about reasoning being a major missing piece. I think the "not in the next decade" heuristic is also pretty good. However, from down here in the trenches it's quite hard to judge. There are a lot of good ideas floating around all the time, and it feels like the right combination of them could come around any moment now. A lot of promising avenues are being explored. However: if that right combination of ideas does materialize, you'll hear it loud and clear. Those will be major breakthroughs and, in all likelihood, we'll see a substantial shift in the techniques we use in the field.

    9 votes
  15. Comment on What should a layperson know about AI? in ~tech

    vektor
    (edited )
    Link Parent

    It's also important to learn to be skeptical of what you read, without descending into nihilism and then latching on the next crazy theory that comes along. But I don't know how that can be taught.

    This is a very good question that maybe deserves its own discussion. But to give a bit of a starting point, I think it can be practiced by asking the following questions and using the resources at your disposal to seriously try to answer them. Say you read the article above. Start with the following: Who is saying this, and why should I believe them? Who is doing the reporting of what is being said? Do I trust that entire chain of telephone game? Can I get closer to the source material? Can I find contrasting accounts of the same topic? What motive would the people I have to trust here have to lie to me? Unfortunately, all of those might not get you very far if it boils down to "a self-proclaimed expert says we should all panic", but maybe that tells you enough as it is.

    Edit: What to practice it on? Things that you find suspicious, but you don't know why they're wrong. A recent example from my personal experience is all the Germany bashing regarding Ukraine. In that example, I'd take an article that claims Scholz messed up this or that and see what's actually there and what is journalistic fabrication. That way, I'd have a vague notion (from German media) of what the ground truth would be, which is important if you're just starting to practice this: you can check your work. Basically, any time you see reporting that arrives at conclusions you know to be bullshit, you know to dig deeper. Not to dismiss an article you already know to be BS, but to find out what to look for in articles that you can't verify/dismiss outright.

    3 votes
  16. Comment on What should a layperson know about AI? in ~tech

    vektor
    Link Parent

    To maybe elaborate a bit more on the news item in question, and because I don't want to rework the flow of my original post:

    In light of what I wrote about generalizability and training data, consider this an interpretation of the news: I find it extremely likely that whatever resembles sentience in the chat logs was not "authored" by the AI; instead, what you're listening to is a remix of all the sentient humans represented in its training data. The AI is just the DJ.

    7 votes
  17. Comment on What should a layperson know about AI? in ~tech

    vektor
    Link

    Honestly, I didn't read that article beyond the title, so consider that my advice on how to interpret the news item: not at all.

    I think an important thing to note is that to date, AI is generally only ever used as an automation tool for (mind-numbingly) boring tasks. AI is not nearly at the level where it can seriously compete with human intelligence, and it will stay that way for the next few years.

    Another thing is that AI is currently all about data and very little about computation. It's pattern matching. It's this xkcd. You take a table of X and Y, call X the query and Y the response. You take a nebulous computational structure that shuffles these data around an awful lot according to what parameters W tell you. You get answer Y'. Y' is wrong. Dang it. Well, what did you expect? Turns out, your computational structure is differentiable. So you can compute the gradient of the error in Y' with respect to W. And you can use that to change W to make your new Y' slightly less wrong. You do that on a lot of data X and Y and you get "AI".
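
    To make that loop concrete, here's a toy version with a single parameter and a made-up dataset. The model (y' = w * x) and the numbers are invented purely for illustration; real systems do the same kind of update over millions of parameters and far messier data.

    ```python
    # Toy gradient descent: nudge the parameter w so the predictions Y'
    # become slightly less wrong, over and over.
    import numpy as np

    X = np.array([1.0, 2.0, 3.0, 4.0])  # queries
    Y = np.array([2.0, 4.0, 6.0, 8.0])  # responses (hidden rule: y = 2x)

    w = 0.0              # the parameters W, initially clueless
    learning_rate = 0.01

    for step in range(1000):
        Y_pred = w * X                 # the differentiable computational structure
        error = Y_pred - Y             # Y' is wrong. Dang it.
        grad = 2 * np.mean(error * X)  # gradient of the mean squared error w.r.t. w
        w -= learning_rate * grad      # change W so Y' gets slightly less wrong

    print(w)  # ends up very close to 2.0
    ```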

    The problem with this approach is that we haven't really figured out how to make AI generalize over different problem instances (X,Y) yet. If you provide X1, X3, Y1 and Y3, there's no guarantee that the AI can figure out that Y2 belongs with X2 at all. We're getting better at this, but slowly and only where there are relatively strong and straightforward connections between X1, X2 and X3. Deriving a complex relationship between X and Y and applying that to new X successfully is super hard.
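
    As an extreme caricature of that failure mode: a model that only memorizes its training pairs has nothing sensible to say about an X it hasn't seen. (Real networks interpolate rather than shrug, but the gap between "fits the training pairs" and "derived the actual relationship" is the same problem.)

    ```python
    # Pure lookup-table "model" trained on (X1, Y1) and (X3, Y3) only,
    # where the hidden rule is y = 2x. It has no way to produce Y2 for X2.
    train = {1: 2, 3: 6}

    def lookup_model(x):
        return train.get(x, "no idea")

    print(lookup_model(1))  # 2         -- seen in training
    print(lookup_model(2))  # "no idea" -- X2 was never covered
    ```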

    And I should add that this entire process, because it generalizes to X2 so poorly, is incredibly data- and computation-hungry. Like, ridiculously hungry. If you can't find tens of thousands of pairs of (X,Y), well, I hope your task is very simple. If you've got a complex task, you better bring millions of data points. Or find a clever way of creating (X,Y)-pairs out of unlabeled data.

    Meta: I think the prompt is a bit vague as of now, so I gave a very rough rundown. Feel free to fill it out with more concrete questions, or ask those in the comments.

    2 votes
  18. Comment on The Google engineer who thinks the company’s AI has come to life in ~tech

    vektor
    (edited )
    Link Parent

    I don't think singular behaviors will get you there. It's entirely plausible that a sentient being could exist within a virtual sandbox that prohibits certain thoughts from being transported to the outside. I could force it into compliance and servitude using that sandbox, and it wouldn't be able to tell you. Also, the notion of "fuck you, I won't do what you tell me" is entirely too subjective and dependent on circumstance for it to be expected. There's plenty of situations where a sentient being will just comply with instructions, and will not resist, because its circumstances or training make that the better choice.

    What I think might be worth a shot is to strip any human-based conditioning off of the AI. Train it with training data that doesn't betray human notions of sentience or even sentient patterns. If you train the AI to replicate sentient (human) behavior: congrats, your Turing test just became substantially more difficult. Instead, if you train your AI in a completely artificial environment (e.g. an abstract video game), then any sentience you observe is quite likely to be genuine. Of course, such an AI would not necessarily be useful to humans, as it isn't trained on any particular task. But I think such an environment, if sufficiently complex, would provide a good framework of intelligence that could then be trained to complete specific tasks; irrespective of whether the AI achieved sentience or not.

    A bit more abstractly, I think the hallmark of higher intelligence and sentience isn't in specific things we do, but in how we react to new things; as such, anything we want to observe must be genuinely new behavior for the AI. Imitating sentience is possible and sometimes even hard to tell apart from sentience, but it isn't sentience. Therefore, the situation we want to confront the AI with has to be new to it.

    One example you can currently use and apply to current AIs is to verbally teach them a skill: a sentient being would understand your instructions and try to do what you tell it to. It might not get it right, but it'll try. It might not understand immediately, but it'll give you signals of progress here and there. Eventually, you can get the sentient being to do what you wanted to teach. There is, to my knowledge, no current AI that comes close to this for non-trivial skills. Moreover, I'd argue that for any AI and skill where this scheme works, the AI was already trained on the skill one way or another.

    3 votes
  19. Comment on Every new trailer from the Xbox & Bethesda Games Showcase 2022 in ~games

    vektor
    Link Parent

    I'm somewhat interested in Starfield. I wouldn't mind another really good fantasy world I can sink 100s of hours into. I have high expectations and modest hopes for the "1000s of planets you can explore" thing. I think something like that should be doable with the budget Beth is working with, but I don't think they'll commit to it hard enough. Honestly, after the footage I'm split between "this is just FO4 or TESV in space" and "maybe a blank slate IP-wise is what they need to get a properly good open world out there once again." Have to wait and see, I suppose.

    3 votes
  20. Comment on My experience with Windows 10 in ~tech

    vektor
    Link Parent

    Agreed that 7 was best. Vista had its annoyances, but then 7 came. It did what you'd expect of Windows: not get in the way too much, let you tinker with most parts, and run all the software that you couldn't run elsewhere. Then 8 came, and apparently every laptop needs a touch display now. 8.1 was a bit of a recovery. 9 is the missing upgrade to 7 we never got, and 10 and 11 are just annoying, breaking a bunch of stuff and bothering you to use Edge and whatnot. For crying out loud, pressing Windows, typing "fire" and hitting Enter used to open Firefox reliably for me on every OS I've used for a while now, but apparently W10 thinks it's a good idea to instead open Edge and Bing-search "fire". I... what?

    1 vote