skybrian's recent activity
-
Comment on Google releases Gemma 4 in ~comp
-
Comment on What if AI just makes us work harder? in ~tech
skybrian (edited) Link Parent
Only someone who hasn't read any history could believe that is true in the long run. Increased productivity makes countries richer and the people in richer countries are obviously better off than those in poor ones.
But it is true that in gold rush conditions, people are going to be awfully busy.
-
Comment on He helped stop Iran from getting the bomb in ~society
skybrian Link
From the article:
Chalker’s strategy for clearing his reputation—which had been the foundation of his lucrative business—was unexpected, to say the least. It is nearly unheard of for ex-spies to divulge their past activities. But Chalker spoke in detail, aware that I would vet his narrative. As we talked, I sensed a certain resentment. The C.I.A., despite all the crucial and dangerous work he claimed to have done, had offered him no help as the lawsuit ruined his life. I wondered how much of his story I could trust.
[...]
The C.I.A. program that Chalker described to me became publicly known in 2007, when the Los Angeles Times reported on the existence of an agency project called Brain Drain. But the details of the “invitations” to Iranian scientists have not previously been reported. (The C.I.A., which jealously guards its sources and methods, declined to comment on Chalker’s account.)
[...]
Chalker said that, at least for him, the curious-scientist ruse never worked. He told me that every actual scientist he approached immediately guessed that he was a spy, from either the U.S. or Israel. “Every time I walk up and say, ‘Salaam habibi, how are you?,’ they just think, Oh, this is it, and they assume I am there to kill them.” Most of the time, he said, the terrified scientist was “compliant” enough to at least sit down in a café. Chalker typically had about ten minutes to explain, as gently as possible, that he was from the C.I.A., that he had the power to secure the scientist and his family a comfortable new life in the U.S.—and that, if the offer was rejected, the scientist, regrettably, would be assassinated. (Chalker tried to emphasize the happier potential outcome.)
Killing a civilian scientist would violate international law. The American government has denied ever doing it, and I found no evidence that the U.S. has carried out any such murders. A former senior agency official familiar with the Brain Drain project told me all that mattered was that Iranian scientists had believed they would be killed, regardless of whether the U.S. actually made good on the threat. And Israel had been conducting a campaign to assassinate Iranian scientists, which made the prospect of lethal reprisal highly plausible. Other former officials with knowledge of the project told me that the C.I.A. sometimes shared intelligence with Mossad which enabled its operatives to locate and kill a scientist. Such information exchanges were kept vague enough to preserve deniability if a more legalistic U.S. Administration later took office.
[...]
The Iranian news media has blamed the deaths of at least eighteen scientists in the past two decades on Israeli and American spies; Israeli officials have done little to hide Mossad’s role in the assassinations, many of which were carried out with the assistance of internal Iranian opposition groups. In 2007, Ardeshir Hosseinpour, a physicist in his mid-forties, was killed in Isfahan, either by radiation or by poisonous gas. In 2010, a bomb planted on a parked motorcycle in Tehran killed Massoud Ali Mohammadi, who was fifty. Later that year, a bomb affixed to the car of Majid Shahriari, another scientist in his mid-forties, killed him and injured his wife. In 2011, gunmen on motorcycles shot and killed Darioush Rezaeinejad, aged thirty-five, as he and his wife were picking up their daughter from school; his wife was also wounded. In 2012, yet another bomb affixed to a car killed the thirty-two-year-old Mostafa Ahmadi Roshan, along with his driver. And so on.
[...]
The most salient reason for his success, though, was surely his existential offer: defect or die. One of Chalker’s colleagues told me that, against the backdrop of so many Israeli assassinations, Chalker’s interactions with Iranian scientists could almost be considered humanitarian—he had been “throwing them a lifeline.” Of the many scientists he approached, three-quarters ultimately agreed to coöperate.
[...]
Cumulatively, Chalker’s defectors contributed to what several former senior officials told me had been a dramatic leap forward in the U.S. government’s understanding of Iran’s nuclear ambitions in those years. The consequences were manifold. Around 2010, U.S. and Israeli spies used that intelligence to help carry out the Stuxnet cyberattack, which reportedly destroyed a thousand centrifuges used to enrich uranium. In 2015, the Obama Administration also relied on the intelligence as it negotiated a diplomatic agreement to constrain Iran’s nuclear-weapons program. Gary Samore, a former senior official in the Obama Administration who worked on the deal, told me that negotiators had felt confident the agreement would restrict all of Iran’s uranium enrichment because, in the previous decade, the C.I.A. had achieved such a comprehensive understanding of the program, with “tremendous penetration” into its facilities, sometimes including details “down to the blueprints.” Although Samore personally never knew which information had come from any “specific defector,” he told me that the “picture was very complete.”
[...]
On December 27, 2017, Robin Rosenzweig, the wife of Elliott Broidy and a legal adviser to his investment firm, received what appeared to be a security alert from Google, asking for her Gmail password. It was a phishing attempt, and when she fell for it hackers took over her account and gained access to Broidy’s. Within months, they had leaked tranches of his private messages to multiple journalists, including me. In addition to revealing his efforts to profit by turning the White House against Qatar, the disclosures forced him to plead guilty to conspiring to act as an unregistered foreign agent for the Chinese government and a Malaysian financier, for which he agreed to forfeit $6.6 million to the U.S. He might have faced a further penalty, or even jail time, but before he was sentenced—or made to forfeit the money—he received a pardon from President Trump. (Years earlier, in 2009, Broidy had also pleaded guilty to bribing New York State pension-fund managers, and agreed to pay the state eighteen million dollars.)
-
He helped stop Iran from getting the bomb
6 votes -
Comment on Google partners with Back Market to distribute ChromeOS Flex USB sticks in ~tech
skybrian Link Parent
A Chromebook is a machine with a web browser that you can give to a nontechnical user, knowing that they can't screw up too badly. There are a wide variety of Linux distros and maybe there are some that are just as good for that kind of user, but I don't know what to recommend offhand. It's not true of just any Linux distro, since many of them cater to power users.
-
Comment on Introducing EmDash — the spiritual successor to WordPress that solves plugin security in ~tech
skybrian Link Parent
I'm suggesting that what used to be proprietary could become an open standard, widely available.
Unix didn't start out open source.
-
Comment on Introducing EmDash — the spiritual successor to WordPress that solves plugin security in ~tech
skybrian Link Parent
Sandboxing is pretty hot these days so perhaps other hosting providers will implement something that works with EmDash’s plugin system?
-
Comment on Quantum computing bombshells that are not April Fools in ~science
skybrian Link
From the article:
For those of you who haven’t seen, there were actually two “bombshell” QC announcements this week. One, from Caltech, including friend-of-the-blog John Preskill, showed how to do quantum fault-tolerance with lower overhead than was previously known, by using high-rate codes, which could work for example in neutral-atom architectures (or possibly other architectures that allow nonlocal operations, like trapped ions). The second bombshell, from Google, gave a lower-overhead implementation of Shor’s algorithm to break 256-bit elliptic curve cryptography.
Notably, out of an abundance of caution, the Google team chose to “publish” its result via a cryptographic zero-knowledge proof that their circuit exists (so, without revealing the details to attackers). This is the first time I’ve ever seen a new mathematical result actually announced that way, although I understand that there’s precedent in the 1500’s, when mathematicians would (for example) prove their ability to solve quartic equations by challenging their rivals to duels. I’m not sure how much it will actually help, as once other groups know that a smaller circuit exists, it might be only a short time until they’re able to find it as well.
[...]
When you put both of them together, Bitcoin signatures for example certainly look vulnerable to quantum attack earlier than was previously known! In particular, the Caltech group estimates that a mere 25,000 physical qubits might suffice for this, where a year ago the best estimates were in the millions. How much time will this save — maybe a year? Subtracting, of course, off a number of years that no one knows.
-
Quantum computing bombshells that are not April Fools
16 votes -
Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp
skybrian (edited) Link Parent
I think "shared delusion" is going a bit far. A successful company like Amazon or Google or Walmart is certainly worth a lot. In nearly all possible futures, people will keep buying their stuff. If there is inflation, their customers will be paying more.
We don't know how much they'll be worth because we don't know what sort of world they will be making money in or how well they'll adapt, but as things go in an uncertain world, they seem pretty solid.
If you want a better guarantee, you can buy bonds, but the returns from tying your future income to a currency rather than to companies' incomes will likely be less.
Of course the offered price could still be too high.
-
Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp
skybrian Link Parent
I haven’t played with them much, but small models are for traditional language-understanding tasks. That is, things like text classification, sentiment analysis, summarization, or autocomplete, not general knowledge questions. Basically it’s for looking for an answer that’s already there in some text that you give it.
Out of the box, it’s just a curiosity, but it might be useful for certain specialized apps?
Maybe this company’s models will be more impressive when they scale up to what will fit on one server?
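As a toy, model-free sketch of that extractive idea (no actual LLM here, just naive word overlap, so purely illustrative): the "answer" must already be present in the text you supply, and the task is only to locate it.

```python
# Toy illustration of extractive question answering: pick the sentence
# from the supplied text that best overlaps the question's content words.
# A small language model does this far better; the point is only that
# the answer is already in the text you give it.

def extract_answer(question: str, text: str) -> str:
    """Return the sentence from `text` sharing the most words with `question`."""
    stopwords = {"the", "a", "an", "is", "are", "of", "to", "in", "what", "do", "does"}
    q_words = {w.strip("?.,").lower() for w in question.split()} - stopwords
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # Score each candidate sentence by word overlap with the question.
    return max(sentences, key=lambda s: len(q_words & {w.lower() for w in s.split()}))

doc = ("ChromeOS Flex is distributed on USB sticks. "
       "Small models handle classification and summarization. "
       "The kernel team reviews patches daily.")
print(extract_answer("What do small models handle?", doc))
# → Small models handle classification and summarization
```

A real small model replaces the word-overlap score with learned semantics, but the shape of the task is the same: search the given text, don't recall world knowledge.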
-
Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp
skybrian Link Parent
Yes, that's right. Stock market prices are largely based on expectations about companies' future earnings (particularly growth stocks). Often people learn that they're not as rich as they thought they were.
But owning companies is valuable because the revenue often does increase and exceed costs. Even though we don't know the future, we generally assume that the world won't come to an end and these companies will still be making money.
Sometimes a data center gets hit by a missile, but life goes on.
-
Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp
skybrian Link Parent
You need some awareness, but I think there's a large amount of low-effort speculation. Individual articles or comments are unlikely to be accurate or remembered.
-
Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp
skybrian Link Parent
I doubt both OpenAI and Anthropic will implode. If they do, there is still Google. Beyond Google, there is a long list of less well-known and entirely obscure competitors. Here's a promising one I just saw on Hacker News today:
Announcing 1-bit Bonsai: The First Commercially Viable 1-bit LLMs.
That's a pretty deep bench. If the bubble bursts I would expect at least some of these competitors to survive and thrive.
The dot-com bust didn't stop the Internet from becoming ubiquitous and an economic downturn isn't going to stop AI.
-
Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp
skybrian Link Parent
I don't see much point in repeating nonsense like that. There are bad takes about everything.
-
Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp
skybrian Link Parent
I think it's important to distinguish between investigating what AI can and can't do now (or how well it worked recently) and speculating about the future.
Studying how things work now is valuable in itself. Speculation is much less useful.
-
Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp
skybrian Link Parent
It's difficult to call the peak of a growth curve, though. It's like the quip that “the stock market has predicted nine out of the last five recessions.” Similarly, people kept calling the end of Moore's law.
Eventually they will be right, but they might be wrong for many years.
-
Comment on Linux kernel czar says AI bug reports aren't slop anymore in ~comp
skybrian Link
From the article:
"Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality," he said. "It was kind of funny. It didn't really worry us." Of course, there are many Linux kernel maintainers, so for them, AI slop isn't as burdensome as it is for, say, Daniel Stenberg, founder and lead developer of cURL, where AI slop reports caused the cURL team to stop paying bug bounties.
[...]
Things have changed, Kroah-Hartman said. "Something happened a month ago, and the world switched. Now we have real reports." It's not just Linux, he continued. "All open source projects have real reports that are made with AI, but they're good, and they're real." Security teams across major open source projects talk informally and frequently, he noted, and everyone is seeing the same shift. "All open source security teams are hitting this right now."
No one is quite sure what's behind it. Asked what changed, Kroah-Hartman was blunt: "We don't know. Nobody seems to know why. Either a lot more tools got a lot better, or people started going, 'Hey, let's start looking at this.' It seems like lots of different groups, different companies." What is clear is the scale. "For the kernel, we can handle it," he said.
"We're a much larger team, very distributed, and our increase is real – and it's not slowing down. These are tiny things, they're not major things, but we need help on this for all the open source projects." Smaller projects, he implied, have far less capacity to absorb a sudden flood of plausible AI-generated bug reports and security findings – at least now they're real bugs and not garbage ones.
[...]
For now, AI is showing up more as a reviewer and assistant than as a full author of Linux kernel code, but that line is starting to blur. Kroah-Hartman has already done his own experiments with AI-generated patches.
"I did a really stupid prompt," he recounted. "I said, 'Give me this,' and it spit out 60: 'Here's 60 problems I found, and here's the fixes for them.' About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right." Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. "The tools are good," he said. "We can't ignore this stuff. It's coming up, and it's getting better."
[...]
The sudden increase in AI-generated reports and AI-assisted work has also spurred a parallel push to build AI into the kernel's own review infrastructure. A key piece of that is Sashiko, a tool originally developed at Google and now donated to the Linux Foundation.
[...]
That work builds on earlier efforts inside specific subsystems. "The networking and the BPF people have been doing LLM-generated reviews for a while," said Kroah-Hartman. "The Direct Rendering Manager (DRM) people and now Google's tool are pulling all those into one common interface," he explained. "Different subsystems are adding better skills or prompts – for storage, here are the things you need to look for; for graphics, here are the things you need to look for. People are contributing in a public place for that, which is how it should be. This is very good."
[...]
AI reviewers, he stressed, are additive rather than authoritative. "On the review side, it's generating some good reviews. It doesn't get you everything. Some things are still wrong. But it does point out a lot of the obvious things," he said.
One of the biggest immediate wins is turnaround time. When an AI reviewer flags obvious problems, submitters get feedback long before a human maintainer would realistically read the patch. "If I see it respond to something, it gives feedback to the submitter faster than the maintainer had a chance to, which is nice," Kroah-Hartman said. "We have a number of bots that run on patches as it is. If I see those fail, I just know I don't even need to look at that as a maintainer. And it gives the developer, 'Oh, I can go do another version tomorrow,' which helps increase the feedback a little better."
Still, as AI-generated reports and patches grow, so does the review burden. "It's more reviews; it's more stuff we have to review for the kernel," he said. That's why efforts with the OpenSSF and its Alpha-Omega program matter. "We're working to try and create tools to help make it easier for maintainers to handle this incoming feed and deal with it."
-
Linux kernel czar says AI bug reports aren't slop anymore
30 votes -
Comment on Professors are designing AI apps meant to help students think through problems in ~tech
skybrian Link
From the article:
Now, students still pull out their phones to prepare for his class — but they talk to an artificial intelligence app Wang designed. Before they are faced with tough questions from the professor and classmates, they argue at home with Caisey, as Wang nicknamed it.
“A lot of AI tools in education are designed to make things more efficient,” he said. “Caisey capitalizes on precisely the opposite: the capacity to slow students down, to actually make them focus and to also make them consider very different ways of thinking about questions.”
[...]
Wang said the idea behind the app that helps students debate case studies is not actually new — it’s millennia old, rooted in the Oxbridge tutorship model that linked an instructor with one or two students for deep, thoughtful exchanges about what they were learning.
Before AI, Wang said, that focused, stimulating experience was tough to scale. Now he’s seeing faculty rethinking the way they teach.
Professors are using AI tools in many fields. At the Georgia Institute of Technology, a professor designed an app that helps electrical engineering students work through thorny problems. At Arizona State University, faculty-infused AI helps students in health sciences practice working with simulated patient experiences, chats with students mastering foreign languages, and guides biology students to help master the basics or extend themselves far beyond the course material.
[...]
Wang piloted the app last spring. Now thousands of students at Columbia and 15 other institutions, including the business schools at the University of California at Berkeley, the University of Pennsylvania and the University of Virginia, use Caisey. Wang and a team of people adapt the tool for other instructors and classes, with faculty telling them what they want to teach, what they want their students to read and what they want them to discuss.
“It’s not a substitute for the really rich interaction that we have in class in the discussion,” said Rahul Bhandari, distinguished senior lecturer in AI and strategy at U-Va.’s Darden School of Business. But it’s helpful in preparing them to have more confidence in class, Bhandari said, and to present a more articulate, well-structured argument.
Jill Cohen, one of Caisey’s co-founders and a former Columbia Business School student, spoke on a panel in one of Wang’s classes last year. Multiple students came up afterward to tell her they love the app and have had so much fun with it, she said; that was mind-blowing.
[...]
While some AI apps offer a “guided learning” or “educational” mode, faculty-designed AI taps directly into their expertise in the course curriculum to shape the guidance. Zhang designed the Smart Tutor at Georgia Tech using course materials for a notoriously difficult class. It provides feedback, allowing faculty to further pinpoint where students are having trouble, and adapt their teaching to those pain points.
In a pilot study last spring, students said they appreciated getting guidance and feedback in real time, found it helpful and hoped it would be added to more classes. Nidhi Krishna, a sophomore from Atlanta, said that it gave her the insight that she kept making the same mistakes, and then helped her understand why and how to avoid that.
IQ isn't being used as a benchmark by the AI companies. They publish results for a variety of more specific benchmarks and the results are improving for most of them. This is summarized as "more capable" or "more intelligent" but that's just the summary.
Benchmarks can be rigged and researchers keep inventing new ones. There's clearly not a consensus yet for measuring AI capabilities, just a general consensus that some models are stronger than others.