skybrian's recent activity

  1. Comment on Google partners with Back Market to distribute ChromeOS Flex USB sticks in ~tech

    skybrian
    Link Parent

    A Chromebook is a machine with a web browser that you can give to a nontechnical user, knowing that they can't screw up too badly. There are a wide variety of Linux distros and maybe there are some that are just as good for that kind of user, but I don't know what to recommend offhand. It's not true of just any Linux distro since many of them cater to power users.

    4 votes
  2. Comment on Introducing EmDash — the spiritual successor to WordPress that solves plugin security in ~tech

    skybrian
    Link Parent

    I'm suggesting that what used to be proprietary could become an open standard, widely available.

    Unix didn't start out open source.

    1 vote
  3. Comment on Introducing EmDash — the spiritual successor to WordPress that solves plugin security in ~tech

    skybrian
    Link Parent
    Sandboxing is pretty hot these days so perhaps other hosting providers will implement something that works with EmDash’s plugin system?

  4. Comment on Quantum computing bombshells that are not April Fools in ~science

    skybrian
    Link

    From the article:

    For those of you who haven’t seen, there were actually two “bombshell” QC announcements this week. One, from Caltech, including friend-of-the-blog John Preskill, showed how to do quantum fault-tolerance with lower overhead than was previously known, by using high-rate codes, which could work for example in neutral-atom architectures (or possibly other architectures that allow nonlocal operations, like trapped ions). The second bombshell, from Google, gave a lower-overhead implementation of Shor’s algorithm to break 256-bit elliptic curve cryptography.

    Notably, out of an abundance of caution, the Google team chose to “publish” its result via a cryptographic zero-knowledge proof that their circuit exists (so, without revealing the details to attackers). This is the first time I’ve ever seen a new mathematical result actually announced that way, although I understand that there’s precedent in the 1500’s, when mathematicians would (for example) prove their ability to solve quartic equations by challenging their rivals to duels. I’m not sure how much it will actually help, as once other groups know that a smaller circuit exists, it might be only a short time until they’re able to find it as well.

    [...]

    When you put both of them together, Bitcoin signatures for example certainly look vulnerable to quantum attack earlier than was previously known! In particular, the Caltech group estimates that a mere 25,000 physical qubits might suffice for this, where a year ago the best estimates were in the millions. How much time will this save — maybe a year? Subtracting, of course, off a number of years that no one knows.

    6 votes
  5. Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp

    skybrian
    (edited )
    Link Parent

    I think "shared delusion" is going a bit far. A successful company like Amazon or Google or Walmart is certainly worth a lot. In nearly all possible futures, people will keep buying their stuff. If there is inflation, their customers will be paying more.

    We don't know how much they'll be worth because we don't know what sort of world they will be making money in or how well they'll adapt, but as things go in an uncertain world, they seem pretty solid.

    If you want a better guarantee, you can buy bonds, but the returns from tying your future income to a currency rather than to companies' incomes will likely be less.

    Of course the offered price could still be too high.

    3 votes
  6. Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp

    skybrian
    Link Parent

    I haven’t played with them much, but small models are for traditional language-understanding tasks. That is, things like text classification, sentiment analysis, summarization, or autocomplete, not general knowledge questions. Basically it’s for looking for an answer that’s already there in some text that you give it.

    Out of the box, it’s just a curiosity, but it might be useful for certain specialized apps?

    Maybe this company’s models will be more impressive when they scale up to what will fit on one server?
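    The kind of extractive task described above — pulling an answer out of text you supply, rather than generating it from general knowledge — can be illustrated with a toy, model-free sketch (plain Python stdlib; no actual small language model involved, just keyword overlap as a stand-in):

    ```python
    import re

    # Toy illustration of an "extractive" task: the answer must already be
    # present in the passage; nothing is produced from world knowledge.
    STOP_WORDS = {"the", "a", "an", "is", "are", "of", "in", "to", "and", "it"}

    def extract_answer(question: str, passage: str) -> str:
        """Return the passage sentence sharing the most keywords with the question."""
        q_words = set(re.findall(r"[a-z']+", question.lower())) - STOP_WORDS
        sentences = re.split(r"(?<=[.!?])\s+", passage.strip())
        return max(sentences,
                   key=lambda s: len(q_words & set(re.findall(r"[a-z']+", s.lower()))))

    passage = ("The device ships with a web browser. "
               "It was assembled in Gothenburg. "
               "Battery life is about ten hours.")
    print(extract_answer("Where was the device assembled?", passage))
    # prints "It was assembled in Gothenburg."
    ```

    A real small model does something far more sophisticated than keyword overlap, but the shape of the task is the same: the output is constrained to information already present in the input text.
    
    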

    10 votes
  7. Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp

    skybrian
    Link Parent

    Yes, that's right. Stock market prices are largely based on expectations about companies' future earnings (particularly growth stocks). Often people learn that they're not as rich as they thought they were.

    But owning companies is valuable because the revenue often does increase and exceeds costs. Even though we don't know the future, we generally assume that the world won't come to an end and these companies will still be making money.

    Sometimes a data center gets hit by a missile, but life goes on.

    3 votes
  8. Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp

    skybrian
    Link Parent
    You need some awareness, but I think there's a large amount of low-effort speculation. Individual articles or comments are unlikely to be accurate or remembered.

    2 votes
  9. Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp

    skybrian
    Link Parent

    I doubt both OpenAI and Anthropic will implode. If they do, there is still Google. Beyond Google, there is a long list of less well-known and entirely obscure competitors. Here's a promising one I just saw on Hacker News today:

    Announcing 1-bit Bonsai: The First Commercially Viable 1-bit LLMs.

    That's a pretty deep bench. If the bubble bursts I would expect at least some of these competitors to survive and thrive.

    The dot-com bust didn't stop the Internet from becoming ubiquitous and an economic downturn isn't going to stop AI.

    6 votes
  10. Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp

    skybrian
    Link Parent
    I don't see much point of repeating nonsense like that. There are bad takes about everything.

    9 votes
  11. Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp

    skybrian
    Link Parent

    I think it's important to distinguish between investigating what AI can and can't do now (or how well it worked recently) and speculating about the future.

    Studying how things work now is valuable in itself. Speculation is much less useful.

    6 votes
  12. Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp

    skybrian
    Link Parent

    It's difficult to call the peak of a growth curve though. It's like the quip that “the stock market has predicted nine out of the last five recessions.” Similarly, people kept calling the end of Moore's law.

    Eventually they will be right, but they might be wrong for many years.

    16 votes
  13. Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp

    skybrian
    Link

    From the article:

    "Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality," he said. "It was kind of funny. It didn't really worry us." Of course, there are many Linux kernel maintainers, so for them, AI slop isn't as burdensome as it is for, say, Daniel Stenberg, founder and lead developer of cURL, where AI slop reports caused the cURL team to stop paying bug bounties.

    [...]

    Things have changed, Kroah-Hartman said. "Something happened a month ago, and the world switched. Now we have real reports." It's not just Linux, he continued. "All open source projects have real reports that are made with AI, but they're good, and they're real." Security teams across major open source projects talk informally and frequently, he noted, and everyone is seeing the same shift. "All open source security teams are hitting this right now."

    No one is quite sure what's behind it. Asked what changed, Kroah-Hartman was blunt: "We don't know. Nobody seems to know why. Either a lot more tools got a lot better, or people started going, 'Hey, let's start looking at this.' It seems like lots of different groups, different companies." What is clear is the scale. "For the kernel, we can handle it," he said.

    "We're a much larger team, very distributed, and our increase is real – and it's not slowing down. These are tiny things, they're not major things, but we need help on this for all the open source projects." Smaller projects, he implied, have far less capacity to absorb a sudden flood of plausible AI-generated bug reports and security findings – at least now they're real bugs and not garbage ones.

    [...]

    For now, AI is showing up more as a reviewer and assistant than as a full author of Linux kernel code, but that line is starting to blur. Kroah-Hartman has already done his own experiments with AI-generated patches.

    "I did a really stupid prompt," he recounted. "I said, 'Give me this,' and it spit out 60: 'Here's 60 problems I found, and here's the fixes for them.' About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right." Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. "The tools are good," he said. "We can't ignore this stuff. It's coming up, and it's getting better."

    [...]

    The sudden increase in AI-generated reports and AI-assisted work has also spurred a parallel push to build AI into the kernel's own review infrastructure. A key piece of that is Sashiko, a tool originally developed at Google and now donated to the Linux Foundation.

    [...]

    That work builds on earlier efforts inside specific subsystems. "The networking and the BPF people have been doing LLM-generated reviews for a while," said Kroah-Hartman. "The Direct Rendering Manager (DRM) people and now Google's tool are pulling all those into one common interface," he explained. "Different subsystems are adding better skills or prompts – for storage, here are the things you need to look for; for graphics, here are the things you need to look for. People are contributing in a public place for that, which is how it should be. This is very good."

    [...]

    AI reviewers, he stressed, are additive rather than authoritative. "On the review side, it's generating some good reviews. It doesn't get you everything. Some things are still wrong. But it does point out a lot of the obvious things," he said.

    One of the biggest immediate wins is turnaround time. When an AI reviewer flags obvious problems, submitters get feedback long before a human maintainer would realistically read the patch. "If I see it respond to something, it gives feedback to the submitter faster than the maintainer had a chance to, which is nice," Kroah-Hartman said. "We have a number of bots that run on patches as it is. If I see those fail, I just know I don't even need to look at that as a maintainer. And it gives the developer, 'Oh, I can go do another version tomorrow,' which helps increase the feedback a little better."

    Still, as AI-generated reports and patches grow, so does the review burden. "It's more reviews; it's more stuff we have to review for the kernel," he said. That's why efforts with the OpenSSF and its Alpha-Omega program matter. "We're working to try and create tools to help make it easier for maintainers to handle this incoming feed and deal with it."

    15 votes
  14. Comment on Professors are designing AI apps meant to help students think through problems in ~tech

    skybrian
    Link

    From the article:

    Now, students still pull out their phones to prepare for his class — but they talk to an artificial intelligence app Wang designed. Before they are faced with tough questions from the professor and classmates, they argue at home with Caisey, as Wang nicknamed it.

    “A lot of AI tools in education are designed to make things more efficient,” he said. “Caisey capitalizes on precisely the opposite: the capacity to slow students down, to actually make them focus and to also make them consider very different ways of thinking about questions.”

    [...]

    Wang said the idea behind the app that helps students debate case studies is not actually new — it’s millennia old, rooted in the Oxbridge tutorship model that linked an instructor with one or two students for deep, thoughtful exchanges about what they were learning.

    Before AI, Wang said, that focused, stimulating experience was tough to scale. Now he’s seeing faculty rethinking the way they teach.

    Professors are using AI tools in many fields. At the Georgia Institute of Technology, a professor designed an app that helps electrical engineering students work through thorny problems. At Arizona State University, faculty-infused AI helps students in health sciences practice working with simulated patient experiences, chats with students mastering foreign languages, and guides biology students to help master the basics or extend themselves far beyond the course material.

    [...]

    Wang piloted the app last spring. Now thousands of students at Columbia and 15 other institutions, including the business schools at the University of California at Berkeley, the University of Pennsylvania and the University of Virginia, use Caisey. Wang and a team of people adapt the tool for other instructors and classes, with faculty telling them what they want to teach, what they want their students to read and what they want them to discuss.

    “It’s not a substitute for the really rich interaction that we have in class in the discussion,” said Rahul Bhandari, distinguished senior lecturer in AI and strategy at U-Va.’s Darden School of Business. But it’s helpful in preparing them to have more confidence in class, Bhandari said, and to present a more articulate, well-structured argument.

    Jill Cohen, one of Caisey’s co-founders and a former Columbia Business School student, spoke on a panel in one of Wang’s classes last year. Multiple students came up afterward to tell her they love the app and have had so much fun with it, she said; that was mind-blowing.

    [...]

    While some AI apps offer a “guided learning” or “educational” mode, faculty-designed AI taps directly into their expertise in the course curriculum to shape the guidance. Zhang designed the Smart Tutor at Georgia Tech using course materials for a notoriously difficult class. It provides feedback, allowing faculty to further pinpoint where students are having trouble, and adapt their teaching to those pain points.

    In a pilot study last spring, students said they appreciated getting guidance and feedback in real time, found it helpful and hoped it would be added to more classes. Nidhi Krishna, a sophomore from Atlanta, said that it gave her the insight that she kept making the same mistakes, and then helped her understand why and how to avoid that.

    4 votes
  15. Comment on Anticipating a world where LLM use is widespread in ~tech

    skybrian
    Link

    I think it’s quite hard to “imagine the traffic jam” accurately and in enough detail to do much in the way of planning, even if you’re pretty sure there will be traffic jams in some vague sense.

    But it does seem likely that our AI minions will largely try to do what they’ve been told to do. They don’t have enough context to know when to act against their owners. At best you can build in some ethical rules about things the AI should never do, for anyone. But even these could probably be overridden by giving the AI some misleading context.

    So maybe we will end up with a system where everyone has their own AI lawyer advocating for their interests? And we’ll take for granted that this is an adversarial process.

    Perhaps in such a world, the reliable reporting of verifiable facts becomes more important? We take it for granted now, but the overwhelming success of Wikipedia was surprising at the time. I wonder what the equivalent will be in this new era?

    5 votes
  16. Comment on Inside the ‘self-driving’ lab revolution in ~science

    skybrian
    Link

    From the article:

    The robotic platform at the Chalmers University of Technology in Gothenburg, Sweden, is the brainchild of autonomous-lab pioneer Ross King. It is powered by artificial intelligence, self-driving and “fairly quiet”, King says. But it’s also fast. Working at full speed, Eve’s robotic arm can move a few metres per second, with a positional accuracy of a fraction of a millimetre. The team usually runs Eve slower than that — otherwise, King says, “it’s too scary”.

    Eve automates the process of early-stage drug design. One of Eve’s early achievements came in 2018, around three years after it was created, when it identified that the common antimicrobial compound triclosan can target an enzyme that is crucial to the survival of Plasmodium malaria parasites during their dormant phase in the liver. To do this, Eve independently screened some 1,600 chemicals and modelled how their structure related to their activity to predict which ones were worth testing. King and his group armed the robot with background knowledge and a machine-learning framework for developing hypotheses. Eve then used those elements to design experiments to test these hypotheses and, crucially, performed them itself. The finding gave researchers a potential route to fighting treatment-resistant malaria. “It’s trying to make the scientific method in a machine,” says King.

    [...]

    Hiring a student for the job would probably have been cheaper, King admits. But his newest robot, Genesis, will be able to do enough experiments to make the process economically feasible. King estimates that Genesis will cost £1 million (US$1.3 million) to build — the same price as Adam or Eve individually — but he estimates that it will eventually be at least an order of magnitude cheaper than human labour. King plans to use the system — which occupies one-fifth of the floor space that Eve does — to model how genes, proteins and small molecules interact in cells. Part of that will involve taking around 10,000 mass-spectrometry measurements each day.

    Chemist and computer scientist Alán Aspuru-Guzik at the University of Toronto in Canada supervises a fleet of 50 self-driving autonomous robots across several labs and universities. Known as the Acceleration Consortium, it is funded by a grant worth Can$200 million (US$146 million).

    [...]

    With around 22,000 square metres of automated lab space at its AI Science Factory (AISF), the company plans to provide research and development services to pharmaceutical companies, materials-science firms and other research-intensive organizations. This year, it received about £500,000 from the UK government’s Advanced Research and Innovation Agency to test whether its self-driving robot — AI NanoScientist — can synthesize and improve the stability of colloidal nanoparticles, tiny particles suspended in a liquid medium.

    [...]

    That’s not to say robots can do everything humans can. “You can’t put a robot arm into a cage and catch a mouse in a corner, for instance. Human dexterity is amazing compared to current robots,” says King. Gregoire echoes the point, noting that some processes are simply too expensive to automate for now.

    I guess they're taking some of the labor out of laboratories?

    7 votes