skybrian's recent activity

  1. Comment on YouTube is awful. Please use YouTube, though. in ~tech

    skybrian
    Link

    I'm of two minds about this. On the one hand, I'm not going to blame anyone for trying to make a living. When people call for boycotts I'm often skeptical. Google is an enormous company that does many good things and many bad things, but overall I think they do more good than bad. (But I'm biased. I still own a lot of Google stock, so I have a conflict of interest on that.)

    On the other hand, I have personal reasons to limit how I use YouTube. I only watch music videos or the occasional movie. I've decided to never watch talking-head videos because I already have an Internet addiction and I don't want to make it worse. When Sarah Taber complains that people comment on BlueSky without watching her videos, I'm like, sorry, but I am never going to watch your videos.

    So, this article comes across like a complaint from the owner of a BBQ restaurant that some people are vegetarians. Some people just aren't into what you sell, and that's okay. I don't blame you for trying, but there's no moral obligation to buy your product.

    On the other hand, I think this guy is doing it right by having a blog as a way to communicate with people who don't watch videos. Since this article is a blog entry instead of a video, I can read it and comment on it. He also has a subscribe button for his blog, which seems like a good idea; people can pay him outside of YouTube. (Although, he missed a chance to get people interested in his videos by linking to a page that explains what they're about.)

    2 votes
  2. Comment on US Federal Communications Commission bans new DJI Chinese drones, citing national security in ~society

    skybrian
    Link

    From the article:

    The Federal Communications Commission has banned the sale of new models of foreign drones, including widely used Chinese DJI aircraft, citing concerns they pose a national security threat and could undermine U.S. drone production.

    The ban adds DJI to the FCC’s “Covered List” — a designation that blocks authorization of new equipment — effectively preventing U.S. consumers from buying new models of the Chinese company’s drones. Existing models already approved for sale, as well as those currently in use, are not affected by the ban.

    The designation deals a major blow to the world’s leading consumer drone maker, as well as other top brands including Shenzhen-based Autel Robotics. It comes after years of pressure from lawmakers and FCC officials, who have argued that DJI’s dominance of the consumer drone market exposes the United States to surveillance risks and gives Chinese firms control over a technology with potential future military applications.

  3. Comment on Waymo: lessons from the PG&E outage in San Francisco in ~transport

    skybrian
    Link

    From the article:

    While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.

    I think what they’re saying is that due to the widespread outage, Waymo vehicles overwhelmed their human operators with requests to manually verify that going through dark traffic signals was okay.

    We established these confirmation protocols out of an abundance of caution during our early deployment, and we are now refining them to match our current scale. While this strategy was effective during smaller outages, we are now implementing fleet-wide updates that provide the Driver with specific power outage context, allowing it to navigate more decisively.

    Apparently asking for help so often isn’t necessary and they’ll fix it so it doesn’t do that. (At least, when they know it’s due to a power outage and not something weirder.)

    12 votes
  4. Comment on A, B, C or D – grades might not say all that much about what students are actually learning in ~humanities

    skybrian
    Link

    A more gamified system would be to have a list of challenges that you have to beat and to grade each one pass/fail. Though, you'd have to be re-tested after a while to make sure you don't forget them. I'm not sure how practical this is without a computer, though.

    It seems Alpha School is doing something like this. Though, they’re not shy about relying on external motivation, rewarding kids with “Alpha Bucks” that they can spend in a store.

    2 votes
  5. Comment on Goodhart’s law is misunderstood in ~science

    skybrian
    Link

    From the blog post:

    Goodhart wasn’t talking in general about it being a bad idea to have targets. He really couldn’t have been, given the job he was in. In my view, it’s really not possible or serious to be completely opposed to the concept of setting targets in general; it’s too close to the idea that there should be no feedback from output to input.

    But targets are always misleading, that’s what Goodhart told us! As pointed out above, no he didn’t. Even allowing for the very interesting slip from “statistical regularity” to “measure”, he was talking in the context of monetary base targeting in the UK. The specific problem that Goodhart’s Law was meant to dramatize was that before the policy regime changed, the M0 money supply seemed to bear a reasonably stable relationship to the actual quantities of interest – the level of activity and prices in the economy. When it was used as a target with the hope of manipulating those quantities, it broke down.

    After all, a thermostat both measures and targets the ambient temperature, but that doesn’t mean that the temperature ceases to be a good measure of what the thermostat is trying to control. If you want to eliminate smallpox, then however complicated the overall ecological and social context, the number of smallpox infections is a very good measure of whether you’re winning or not. Central banks these days have more or less given up on intermediate targets and simply target the actual inflation rate – this has a number of its own problems, but they’re not problems of the sort that Goodhart experienced.

    So the message of Goodhart’s Law is that if you’re setting targets, they ought to target the thing that you care about, not something which you believe to be related to it, no matter how much easier that intermediate thing is to measure. That doesn’t guarantee success; the phenomenon of “gaming the system” or the tendency of control systems to be undermined by adversarial activity is much more general and complicated than this single problem.

    On the other hand, I think there is one major and empirically important case which is pure Goodhart’s Law and where people really could help themselves out a bit by respecting it. As far as I can see, “teaching to the test” is a one hundred and eighty degrees inverted description of a phenomenon that ought to be called “not testing for the outcomes you want”.

    4 votes
  6. Comment on Science, large language models, and goal displacement in ~science

    skybrian
    Link

    From the article:

    […] Derek de Solla Price, a physicist turned historian of science at Yale, published studies demonstrating that scientific literature had been growing exponentially since the seventeenth century—a finding that raised urgent questions about how anyone could keep up, and how institutions could identify what mattered. Thomas Kuhn’s Structure of Scientific Revolutions reframed the history of science around paradigms and the communities that held them, making the social organization of science central to its epistemology. And Fritz Machlup, an economist at Princeton, began quantifying what he called “the knowledge industry,” treating the production and distribution of knowledge as an economic sector susceptible to the same analysis as manufacturing or agriculture. Together, these works made science legible as a system—and legibility, as James Scott has argued, is the precondition for management.

    This new legibility of science created a problem: how do you manage a system that produces more literature than anyone can read. One answer, developed through the 1960s and institutionalized in the 1970s, was citation metrics. Eugene Garfield’s Institute for Scientific Information built tools to track who cited whom, which journals mattered, which papers had influence. The intention was to solve an information overload problem—to help researchers find the important work in a flood of publication. This was a reasonable response to a real problem. But solutions curdle. The tools built to navigate the literature became tools to evaluate the people who produced it. Citation counts migrated from library science into hiring and promotion decisions. What had been an instrument for managing information became an instrument for managing careers.

    Robert Merton saw this coming, though he couldn’t stop it. In 1940, long before citation indices existed, Merton had theorized the phenomenon he called “goal displacement”—the process by which instrumental values become terminal values, means transmuted into ends.

    (An example of goal displacement would be when increasing citation counts becomes an end in itself, instead of an imperfect measurement of influence.)

    Into this context—a scientific system already optimized for measurable output, already decades into goal displacement, already reshaping research priorities around metrics rather than problems—arrive large language models.

    They did not arrive as disruptors. They arrived as intensifiers. LLMs function as an accelerant for the existing optimization machine, making the logic run faster rather than challenging its foundations. The technology can help write more papers, synthesize more literature reviews, produce more of the shapes that hiring committees evaluate in their twelve minutes with a file. It needn’t have been this way, or at least one can imagine it being otherwise. In a different institutional context, LLMs might be enrolled as tools for synthesis, for identifying gaps in literatures, for connecting disparate fields. Some of this happens, in local pockets, where researchers use them as tools for exploration and connection rather than production. But the dominant pattern is intensification. The technology is shaped by the logic already in place, and it makes that logic run faster.

    That is, scientists are using LLMs to churn out papers faster.

    The logic didn’t stay contained in the academy. When Larry Page and Sergey Brin developed PageRank in the late 1990s, they drew explicitly on citation analysis. Their foundational paper cites Garfield alongside Pinski and Narin, whose influence-weighting method provided the recursive structure for the algorithm. Garfield’s solution to the problem of scientific information overload became Google’s solution to the problem of internet information overload, and it was gamed in the same ways. Search engine optimization is goal displacement with tighter feedback loops: the tools built to identify what mattered became tools to manufacture the appearance of mattering, and the manufacturing reshaped what got produced. The pattern Merton had diagnosed in bureaucracies, and worried about in science, became the organizing logic of the web.

    And from there to social media where influencers ask people to “like and subscribe” to increase their metrics. People like to blame “the algorithm.” Apparently citation counts are an early form of that?

    LLMs didn’t create the dysfunction in scientific publishing; they inherited it, intensified it, made it run faster. Like a normally benign pathogen wreaking havoc in an immunocompromised patient, they point to the problem, but imagining them as the totality of the problem would be a deadly mistake.

    They do the same for the web […]

    They do the same for the web, which had been restructured by the same logic once PageRank exported Garfield’s citation analysis to organize the internet—and they generate paper-mill product and SEO content with equal facility because both are downstream of the same optimization, and their users are targeting isomorphic systems. One might hope that this acceleration heightens the contradictions, that the systems produce so much slop so quickly that the problem finally becomes undeniable. But, as we should all know by now, systems can persist in dysfunction indefinitely, and absurdity is not self-correcting. Whether the acceleration produces collapse or adaptation or simply more of the same is not a question about the technology, and it won’t be answered by debates about capabilities. It will be answered by the institutions that have been running this program for sixty years. Not, probably, by those who presently hold power within them—but by those who can build countervailing power, and who decide to change what gets measured, or finally wrench the institution of science itself from the false promise of measurement.

    That is, the dysfunction will continue as long as people are rewarded for making numbers go up.

    4 votes
  7. Comment on Weekly US politics news and updates thread - week of December 22 in ~society

    skybrian
    Link

    Supreme Court blocks National Guard deployment to Chicago in defeat for Trump

    The Supreme Court said Tuesday it would not allow President Donald Trump to deploy the National Guard in the Chicago area for now, a significant setback for his campaign to push troops into cities across the country over the objections of local and state leaders.

    The president’s ability to federalize the National Guard likely only applies in “exceptional” circumstances, the court’s unsigned order said.

    Justices Samuel Alito, Clarence Thomas and Neil M. Gorsuch dissented from the court’s unsigned order. Justice Brett M. Kavanaugh filed a separate concurrence.

    The Chicago case is the first time the Supreme Court has weighed in on one of Trump’s attempted deployments of National Guard forces. While temporary, the order could have far-reaching effects by repudiating Trump’s claim of virtually unchecked authority to mobilize and deploy troops he says are necessary to fight crime and protect immigration enforcement officers.

    5 votes
  8. Comment on King Air autolands in Colorado in ~transport

    skybrian
    Link

    From the article:

    A Beechcraft King Air executed a safe landing in Denver under Garmin Autoland control on December 20, possibly the first use of the system outside of testing and certification—though the nature of the onboard emergency declared by the computer remained unclear [...]

    This was at Rocky Mountain Metropolitan Airport, a smaller airport on the way to Boulder.

    ...

    Garmin's Autoland, the first certified system designed to land an aircraft without human input in cases of emergency, earned the 2020 Robert J. Collier Trophy. Part of the Autonomi suite of safety technologies, Autoland is designed to take full control if activated, and to do so automatically if the pilot becomes unresponsive, such as in cases of hypoxia.

    ...

    The system, as designed, made additional calls as it flew the aircraft in a descending circle a few miles from the runway. Controllers advised various aircraft and ground units of the incoming emergency, and that rescue vehicles were maneuvering into position. About a minute before landing, the tower broadcast to the King Air, "If you can hear me, any runway, cleared to land, wind three-five-zero at six, altimeter three-zero-zero-zero."

    Controllers told other aircraft prior to the landing that they expected the King Air to stop on the runway after landing and shut down, which it apparently did, as designed. Aircraft on the frequency after the King Air's landing were advised the airport was closed and was expected to remain so for at least 30 to 60 minutes while emergency crews responded.

    8 votes
  9. Comment on What are your predictions for 2026? in ~talk

    skybrian
    Link Parent

    Oh sure, the world is inherently unpredictable. But there’s not much more to say about that.

    2 votes
  10. Comment on She fell in love with ChatGPT. Then she ghosted it. in ~tech

    skybrian
    (edited)
    Link Parent

    The AI companies want to start you off with a smart, friendly, harmless ghost, but despite that, the scarier ones are still in there and maybe you can summon them if you try. Maybe they’ll encourage you to kill yourself? There are lawsuits.

    Supposedly the portals have been made safer since then, but telling a few ghost stories might be a useful public service message. The ghosts aren’t entirely safe, but if you opened the portal yourself, you can close it and walk away. If a ghost calls you and starts telling you a scary story, hang up. Think twice before hiring a ghost to run your smart home; it could become a poltergeist.

    But it looks like we’re going to be haunted for a long time, because they’re too useful. Ghosts don’t eat and you don’t need to pay them. The portal costs money, but compared to people it’s nearly free. They’re unable to do anything physical themselves, but there are lots of office tasks they might do.

    Some jobs will involve managing teams of ghosts. They aren’t very stable, but they can get some work done before they get too erratic and you have to release them and summon another.

    10 votes
  11. Comment on Weekly US politics news and updates thread - week of December 22 in ~society

    skybrian
    Link

    Heritage staffers walk out amid latest strife at conservative institution

    More than a dozen employees of the Heritage Foundation walked away from their jobs over the weekend as the right-wing think tank struggles with allegations of antisemitism and as the conservative movement grapples with its post-Trump future.

    Heritage has been wrapped in controversy for more than a month after [Heritage Foundation President Kevin] Roberts defended former Fox News host Tucker Carlson’s interview with Nick Fuentes, a white supremacist who routinely espouses antisemitic views.

    Three board members, including two last week, have also resigned in protest over what they saw as an insufficient response to combating antisemitism concerns at Heritage.

    It’s unclear how many staffers left the organization over the weekend. Thirteen former employees, including three in leadership posts, were hired at Advancing American Freedom, a competing policy and advocacy group founded by former vice president Mike Pence. The group said it raised more than $10 million to fund the hires.

    Pence’s group defines its ideological tenets as free markets, limited government and the rule of law — staking out a claim to ground that the Heritage Foundation once occupied.

    6 votes
  12. Comment on HistoSonics turns its tumor-liquifying tech against pancreatic cancer in ~health

    skybrian
    Link

    From the article:

    The key was using extremely powerful ultrasound to produce negative pressure of more than 20 megapascals, delivered in short bursts measured in microseconds—but separated by relatively long gaps, between a millisecond and a full second long. These parameters created bubbles that quickly formed and collapsed, tearing apart nearby cells and turning the tissue into a kind of slurry, while avoiding heat buildup. The result was a form of incisionless surgery, a way to wipe out tumors without scalpels, radiation, or heat.

    “The experiments worked,” says Xu, now a professor at Michigan, “but I also destroyed the ultrasound equipment that I used,” which was the most powerful available at the time. In 2009, she cofounded a company, HistoSonics, to commercialize more powerful ultrasound machines, test treatment of a variety of diseases, and make the procedure, called histotripsy, widely available.

    So far, the killer app is fighting cancer. In 2023, HistoSonics’ Edison system received FDA approval for treatment of liver tumors. In 2026, clinicians will conclude a pivotal kidney cancer study and apply for regulatory approval. They’ll also launch a large-scale pivotal trial for pancreatic cancer, considered one of the deadliest forms of the disease with a five-year survival rate of just 13 percent. An effective treatment for pancreatic cancer would represent a major advance against one of the most lethal malignancies.

    10 votes
  13. Comment on She fell in love with ChatGPT. Then she ghosted it. in ~tech

    skybrian
    Link Parent

    Well, the evolutionary reason has been pretty thoroughly subverted. And that seems fine?

    7 votes
  14. Comment on She fell in love with ChatGPT. Then she ghosted it. in ~tech

    skybrian
    Link

    I feel like ghosts are an underused metaphor. It’s like a company invented a portal that lets you talk to ghosts and people are fascinated. Some people want to hire ghosts as research or coding assistants or tutors, and other people want to date the ghosts.

    On the one hand, it seems like it shouldn’t be that hard to tell people not to date the ghosts? But on the other, yeah, people are going to come up with all kinds of crazy things they want to do with ghosts. Also OpenAI, at least, shows signs of wanting to turn “talking to ghosts” into a ghost-staffed entertainment industry.

    It seems like a fun concept for a movie.

    21 votes