skybrian's recent activity
-
Comment on What are some bands you regret not seeing live (or, just never had the chance to see in the first place)? in ~music
-
Comment on New AirSnitch attack breaks Wi-Fi encryption in homes, offices, and enterprises in ~tech
From the article:
New research shows that behaviors that occur at the very lowest levels of the network stack make encryption—in any form, not just those that have been broken in the past—incapable of providing client isolation, an encryption-enabled protection promised by all router makers, that is intended to block direct communication between two or more connected clients.
[...]
The isolation can effectively be nullified through AirSnitch, the name the researchers gave to a series of attacks that capitalize on the newly discovered weaknesses. Various forms of AirSnitch work across a broad range of routers, including those from Netgear, D-Link, Ubiquiti, Cisco, and those running DD-WRT and OpenWrt.
AirSnitch “breaks worldwide Wi-Fi encryption, and it might have the potential to enable advanced cyberattacks,” Xin’an Zhou, the lead author of the research paper, said in an interview. “Advanced attacks can build on our primitives to [perform] cookie stealing, DNS and cache poisoning. Our research physically wiretaps the wire altogether so these sophisticated attacks will work. It’s really a threat to worldwide network security.” Zhou presented his research on Wednesday at the 2026 Network and Distributed System Security Symposium.
[...]
The most powerful such attack is a full, bidirectional machine-in-the-middle (MitM) attack, meaning the attacker can view and modify data before it makes its way to the intended recipient. The attacker can be on the same SSID, a separate one, or even a separate network segment tied to the same AP. It works against small Wi-Fi networks in both homes and offices and large networks in enterprises.
[...]
Given the range of possibilities it affords, AirSnitch gives attackers capabilities that haven’t been possible with other Wi-Fi attacks, including KRACK from 2017 and 2019 and more recent Wi-Fi attacks that, like AirSnitch, inject data (known as frames) into remote GRE tunnels and bypass network access control lists.
[...]
The MitM targets Layers 1 and 2 and the interaction between them. It starts with port stealing, one of the earliest attack classes of Ethernet. An attacker carries it out by modifying the Layer-1 mapping that associates a network port with a victim’s MAC—a unique address that identifies each connected device. By connecting to the BSSID that bridges the AP to a radio frequency the target isn’t using (usually a 2.4GHz or 5GHz) and completing a Wi-Fi four-way handshake, the attacker replaces the target’s MAC with one of their own.
[...]
“In a normal Layer-2 switch, the switch learns the MAC of the client by seeing it respond with its source address,” Moore explained. “This attack confuses the AP into thinking that the client reconnected elsewhere, allowing an attacker to redirect Layer-2 traffic. Unlike Ethernet switches, wireless APs can’t tie a physical port on the device to a single client; clients are mobile by design.”
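Moore's description of how the AP "learns" a client's location can be illustrated with a toy model of a learning bridge. This is my own simplification for intuition only, not the researchers' code: real APs track client associations per radio/BSSID rather than numbered ports, and the real attack additionally requires completing a valid four-way handshake so the attacker's frames decrypt at the AP.

```python
# Toy model of a learning bridge's MAC table, illustrating port stealing.
# A bridge maps each source MAC it sees to the port/radio the frame
# arrived on, and forwards future traffic for that MAC there.

class LearningBridge:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port (or radio/BSSID)

    def receive(self, src_mac, port):
        # The bridge learns (or re-learns) where a MAC lives from the
        # source address of any frame it sees -- with no authentication
        # of who actually sent that frame.
        self.mac_table[src_mac] = port

    def forward_port(self, dst_mac):
        return self.mac_table.get(dst_mac)

bridge = LearningBridge()

# Victim associates on the 5 GHz radio.
bridge.receive("aa:bb:cc:dd:ee:ff", "5GHz")
assert bridge.forward_port("aa:bb:cc:dd:ee:ff") == "5GHz"

# Attacker associates on the unused 2.4 GHz radio and sends one frame
# spoofing the victim's MAC. The bridge now believes the victim moved.
bridge.receive("aa:bb:cc:dd:ee:ff", "2.4GHz")

# Traffic destined for the victim is now delivered to the attacker.
assert bridge.forward_port("aa:bb:cc:dd:ee:ff") == "2.4GHz"
```

The core weakness the toy model captures is that "where a MAC lives" is inferred from unauthenticated observations, which is exactly what the quoted explanation says mobile clients force APs to do.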
[...]
“Even when the guest SSID has a different name and password, it may still share parts of the same internal network infrastructure as your main Wi-Fi,” the researcher explained. “In some setups, that shared infrastructure can allow unexpected connectivity between guest devices and trusted devices.”
[...]
As noted earlier, every tested router was vulnerable to at least one attack. Zhou said that some router makers have already released updates that mitigate some of the attacks, and more updates are expected in the future. But he also said some manufacturers have told him that some of the systemic weaknesses can only be addressed through changes in the underlying chips they buy from silicon makers.
The hardware manufacturers face yet another challenge: The client isolation mechanisms vary from maker to maker. With no industry-wide standard, these one-off solutions are splintered and may not receive the concerted security attention that formal protocols are given.
[...]
If the network is properly secured—meaning it’s protected by a strong password that’s known only to authorized users—AirSnitch may not be of much value to an attacker. The nuance here is that even if an attacker doesn’t have access to a specific SSID, they may still use AirSnitch if they have access to other SSIDs or BSSIDs that use the same AP or other connecting infrastructure.
[...]
The most effective remedy may be to adopt a security stance known as zero trust, which treats each node inside a network as a potential adversary until it provides proof it can be trusted. This model is challenging for even well-funded enterprise organizations to adopt, although it’s becoming easier. It’s not clear if it will ever be feasible for more casual Wi-Fi users in homes and smaller businesses.
The attack is somewhat mitigated by widespread use of https and ssh. Something like Tailscale would probably help for connections between machines on the same network? Since it only works if the attacker has a connection to the wifi router (on any network) either not having a guest network or having one with a decent password probably helps too.
It seems bad if you're connected to someone else's WiFi network, but it's always been the case that you should be more careful in that situation.
-
New AirSnitch attack breaks Wi-Fi encryption in homes, offices, and enterprises
12 votes -
Comment on How The New York Times uses a custom AI tool to track the “manosphere” in ~life.men
A simple tool can still be quite useful.
Also there's a lot of activity around figuring out how best to build tools that use LLM calls to do interesting things. These tools are cheap to build so there's a zillion of them. People are building their own.
It reminds me a bit of when seemingly everyone was building their own websites. A lot of them will be slop. Most will be ignored. But I expect there will be hits, too, like that crazy OpenClaw thing. Well, hopefully better than that next time.
-
Comment on Wolbachia-infected mosquitoes can lower dengue risk by 70%, citywide experiment finds in ~health
From the article:
In a two-year-long citywide experiment in Singapore, researchers divided urban neighborhoods into clusters, releasing sterile, Wolbachia-infected Aedes aegypti male mosquitoes in some areas while leaving others untreated to test whether this biological approach could reduce disease transmission in a densely populated city.
The mosquito releases proved to be quite effective. In areas where the intervention was used, mosquito numbers fell sharply, and the people living in treated neighborhoods were about 70% less likely to develop symptomatic dengue after a few months of exposure. The findings are published in The New England Journal of Medicine.
[...]
Over the past few years, scientists have discovered that infecting Aedes aegypti mosquitoes with Wolbachia bacteria can be a powerful alternative to traditional dengue control methods. Wolbachia prevents the dengue virus from replicating inside these mosquitoes, making them far less capable of spreading the disease.
Project Wolbachia works by releasing male Aedes mosquitoes that carry Wolbachia. Although male mosquitoes do not bite humans, they play an important role in reducing the population of biting mosquitoes that transmit dengue.
When these infected males are released to mate with wild female mosquitoes that do not carry Wolbachia, the eggs they produce do not hatch. Over time, repeated releases result in fewer mosquitoes surviving in the city. This specific strategy is known as the Wolbachia-mediated incompatible insect technique–sterile insect technique (IIT-SIT).
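The suppression dynamic behind IIT-SIT can be sketched as a toy discrete-generation model. This is my own illustrative simplification with made-up parameters (the 5x release ratio and growth rate are hypothetical); real mosquito population ecology is far more complicated.

```python
# Toy discrete-generation model of the incompatible insect technique:
# eggs from wild females that mate with released Wolbachia-carrying
# males do not hatch, so flooding an area with such males suppresses
# the wild population over repeated releases.

def next_generation(wild_females, released_males, wild_males,
                    growth_rate=1.5):
    # Probability a wild female mates with a compatible (wild) male,
    # assuming random mating proportional to male abundance.
    total_males = wild_males + released_males
    compatible_fraction = wild_males / total_males
    # Only compatible matings produce viable offspring.
    return wild_females * compatible_fraction * growth_rate

females = 1000.0
for generation in range(8):
    # Keep releases at 5x the wild male count (wild males ~ females).
    females = next_generation(females, released_males=5 * females,
                              wild_males=females)
print(round(females, 2))  # 0.02 -- population collapses within 8 generations
```

With a 5x release ratio, only 1 in 6 matings is viable, so each generation is a quarter the size of the last even with a 1.5x intrinsic growth rate, which is why sustained releases drive numbers down sharply.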
-
Wolbachia-infected mosquitoes can lower dengue risk by 70%, citywide experiment finds
5 votes -
Comment on The first fully general computer action model in ~tech
From the article:
We designed FDM-1, a foundation model for computer use. FDM-1 is trained on videos from a portion of our 11-million-hour screen recording dataset, which we labeled using an inverse dynamics model that we trained. Our video encoder can compress almost 2 hours of 30 FPS video in only 1M tokens. FDM-1 is the first model with the long-context training needed to become a coworker for CAD, finance, engineering, and eventually ML research, and it consistently improves with scale. It trains and infers directly on video instead of screenshots and can learn unsupervised from the entirety of the internet.
Before today, the recipe for building a computer use agent was to finetune a vision-language model (VLM) on contractor-annotated screenshots of computer use, then build reinforcement learning environments to learn each specific downstream task. Agents trained this way are unable to act on more than a few seconds of context, process high-framerate video, do long-horizon tasks, or scale to competent agents.
[...]
To train on all this video, you need to label it with actions like key presses and mouse movements. Prior literature has explored automatically labeling data: in Behavior Cloning from Observation, the researchers taught an “inverse dynamics model” (IDM) to label what action was taken between before states and after states in various simulated environments. IDM-labeling is possible for computer use datasets because mouse movement and typing actions are often easily inferable from the screen: if a “K” shows up, you can be reasonably confident the “K” key was pressed. [1] OpenAI’s Video PreTraining (VPT) paper was the first to apply this method at scale, bootstrapping a Minecraft-specific IDM on a small amount of contractor data to create a competent Minecraft agent with six seconds of context. [2] VideoAgentTrek also trained a computer action IDM to label data. The key problem here is they don’t have video context (cannot do Blender or any continuous tasks) and instead rely on screenshot-action-CoT triplets.
1. There are harder examples (e.g. a Cmd+V from an earlier Cmd+C) but looking at minutes of history lets us accurately label long-range inverse dynamics, so we can have high confidence in the sequence of actions that produced a given computer state for almost any video.
2. https://arxiv.org/pdf/2510.19
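The “K” example suggests a toy version of IDM labeling: diff the screen state before and after a transition and infer the action. This is my own sketch over text buffers, not the paper's learned model over video, and `infer_action` is a hypothetical helper.

```python
# Toy "inverse dynamics" labeler: infer which key was pressed by
# diffing the on-screen text before and after a frame transition.
# The real IDM is a learned model over video; this only illustrates
# the idea that many actions are recoverable from state diffs.

def infer_action(before: str, after: str):
    if after == before:
        return None  # no observable change
    if after == before[:-1]:
        return "press:Backspace"
    if after.startswith(before) and len(after) == len(before) + 1:
        return f"press:{after[-1]}"
    return "unknown"  # e.g. a paste, which needs longer-range context

frames = ["", "h", "he", "hel", "hell", "hello", "hell"]
actions = [infer_action(a, b) for a, b in zip(frames, frames[1:])]
print(actions)
# ['press:h', 'press:e', 'press:l', 'press:l', 'press:o', 'press:Backspace']
```

The "unknown" branch is the case footnote 1 addresses: ambiguous transitions (like a paste) need minutes of history rather than a single before/after pair.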
[...]
The missing piece is a video encoder. VLMs burn a million tokens to understand just one minute of 30 FPS computer data. Our video encoder encodes nearly 2 hours of video in the same number of tokens—that’s 50x more token-efficient than the previous state-of-the-art and 100x more token-efficient than OpenAI’s encoder. These improvements in context length and dataset size mean we can finally pretrain on enough video to scale computer action models.
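Those figures can be sanity-checked with quick arithmetic. The 1M-tokens-per-minute VLM figure and the 2-hours-in-1M-tokens claim are from the post; the ratio below is derived, and it lands near the post's 100x claim for OpenAI's encoder (their 50x baseline is presumably a more efficient prior system).

```python
# Sanity check of the token-efficiency claims using the quoted figures:
# a VLM spends ~1M tokens per minute of 30 FPS video, while the new
# encoder fits nearly 2 hours into the same ~1M tokens.

vlm_tokens_per_min = 1_000_000
encoder_tokens_per_min = 1_000_000 / (2 * 60)  # ~2 hours = 120 minutes

print(round(encoder_tokens_per_min))  # 8333 tokens per minute of video
print(round(vlm_tokens_per_min / encoder_tokens_per_min))  # 120x ratio
```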
[...]
The FDM predicts the next action given the prior frames and actions (Figure 9). [8] Unlike VLM-based approaches, our FDM operates directly on video and action tokens—no chain-of-thought reasoning, byte-pair encoding, or tool use. [9] This keeps inference low-latency and allows modeling a multitude of tasks that current designs cannot capture—e.g. scrolling, 3D modelling, gameplay. We trained FDM-1 with no language model transfer.
8. Labeled data isn’t strictly necessary for prediction because of the near-determinism of computer environments. We exploit this for small-scale experiments, masking action events to slow overfitting.
9. We still have transcription tokens during training, mainly for instruction tuning downstream and general language grounding. This is still extremely different from chain-of-thought data because most actions do not have a transcript preceding them. Overall we have ~1.25T transcript tokens.
-
The first fully general computer action model
9 votes -
Comment on Peter Girnus (@gothburz) on X about Anthropic's Responsible Scaling Policy in ~society
-
Comment on Updating Eagleson's Law in the age of agentic AI in ~comp
Diving into other people's code is totally normal when working at large companies. Most of the code you work on was written by someone else. Sometimes it's fairly decent and sometimes... less so.
Now you can have the same experience on your own projects. So, get used to it I guess?
So the question is, how do you set policy? What automatic safeguards should you put in place? How do you write guidelines and make sure that the coding agent actually reads them? What sort of cleanup processes do you have for finding and fixing problems when it cuts a few corners?
One way is to read every commit. Since it's just a personal project, I tend to let things go a bit, but then I get into a cleanup mood and spend some time getting the coding agent to clean up existing functionality and organize things properly.
-
Comment on Peter Girnus (@gothburz) on X about Anthropic's Responsible Scaling Policy in ~society
From the Twitter post:
We left OpenAI because of safety.
Seven of us. 2021. Dario said it was about "disagreements over AI vision and safety priorities." That was the diplomatic version. The real version was that we sat in a room and watched the company decide that speed mattered more than caution and we said we would build something different.
...
I was employee number nineteen. My title was Head of Responsible AI. I had a desk near the founders. I had a document. The document was called the Responsible Scaling Policy.
...
I wrote version 1.0.
RSP 1.0 shipped September 2023. It was clean. AI Safety Levels — ASL-1 through ASL-4. If the model reached a threshold, we paused. If safeguards weren't ready, we didn't ship. The policy was not a suggestion. It was a gate. The gate had a lock. The lock was the whole idea.
...
You cannot pause a $380 billion company. You can revise the document that says you will pause. These are different actions. One of them is responsible. One of them is what we did.
I wrote version 3.0.
RSP 3.0 shipped February 24, 2026. One day before the ultimatum. Nobody outside the company noticed the timing. Everyone inside the company understood it.
...
An if-then commitment says: if this happens, we do that. A positive milestone says: we aspire to reach this point. An if-then commitment is a contract. A positive milestone is a wish. I replaced a contract with a wish and I called it "maturation of our framework."
...
Version 3.0 also separated what Anthropic would do alone from what required "industry-wide coordination." This sounds reasonable. It means: the hard parts are someone else's problem now. The parts that require pausing, restricting, or refusing — those require the whole industry. And the whole industry will never agree. So the hard parts are deferred permanently. This is not a loophole. This is a load-bearing wall removed and replaced with a suggestion that someone should probably install a new one.
...
The LessWrong community noticed. They always notice. They wrote that we had "weakened our pausing promises." I forwarded the post to the policy team. The policy team said the criticism was "philosophically valid but operationally impractical." We did not respond publicly. Philosophically valid but operationally impractical is the most Anthropic sentence ever written. It means: you're right, and we're not going to do anything about it.
...
We had restrictions. No autonomous weapons. No mass surveillance of Americans. These were our terms. These were the lines we drew. The lines were real. I wrote them into the contract myself.
...
xAI agreed to classified use without restrictions. They said yes immediately. OpenAI accepted similar contracts. Google accepted. We were the last ones holding. We are still holding. As of this morning.
...
The Responsible Scaling Policy is on version 3.0. Version 1.0 said we would pause. Version 2.0 said we would commit. Version 3.0 says the hard parts are someone else's problem. There will be a version 4.0. Version 4.0 will say whatever Friday requires it to say.
-
Comment on The Pentagon threatens Anthropic in ~society
That doesn't seem to be an adequate summary. This dispute isn't over yet and doesn't seem to have much to do with the safety pledge.
-
Comment on The Pentagon threatens Anthropic in ~society
From the article:
Here’s my understanding of the situation:
Anthropic signed a contract with the Pentagon last summer. It originally said the Pentagon had to follow Anthropic’s Usage Policy like everyone else. In January, the Pentagon attempted to renegotiate, asking to ditch the Usage Policy and instead have Anthropic’s AIs available for “all lawful purposes”1. Anthropic demurred, asking for a guarantee that their AIs would not be used for mass surveillance of American citizens or no-human-in-the-loop killbots. The Pentagon refused the guarantees, demanding that Anthropic accept the renegotiation unconditionally and threatening “consequences” if they refused. These consequences are generally understood to be some mix of:
-
canceling the contract
-
using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree.
-
the nuclear option, designating Anthropic a “supply chain risk”. This would ban US companies that use Anthropic products from doing business with the military2. Since many companies do some business with the government, this would lock Anthropic out of large parts of the corporate world and be potentially fatal to their business3. The “supply chain risk” designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented.
Needless to say, I support Anthropic here. I’m a sensible moderate on the killbot issue (we’ll probably get them eventually, and I doubt they’ll make things much worse compared to AI “only” having unfettered access to every Internet-enabled computer in the world). But AI-enabled mass surveillance of US citizens seems like the sort of thing we should at least have a chance to think over, rather than demanding it from the get-go.
More important, I don’t want the Pentagon to destroy Anthropic. Partly this is a generic belief: the “supply chain risk” designation was intended as a defense against foreign spies, and it’s pathetic Third World bullshit to reconceive it as an instrument that lets the US government destroy any domestic company it wants, with no legal review, because they don’t like how contract negotiations are going. But partly it’s because I like Anthropic in particular - they’re the most safety-conscious AI company, and likely to do a lot of the alignment research that happens between now and superintelligence. This isn’t the hill I would have chosen to die on, but I’m encouraged that they even have a hill. AI companies haven’t been great at choosing principles over profits lately. If Dario is capable of having a spine at all, in any situation, then that makes me more confident in his decision-making in other cases4, and makes him a precious resource that must be defended.
-
-
The Pentagon threatens Anthropic
14 votes -
Comment on The United States needs fewer bus stops in ~transport
From the article:
American bus stops are often significantly closer together than European ones. The mean stop spacing in the United States is around 313 meters, which is about five stops per mile. However, in older, larger American cities, stops are placed even closer. In Chicago, Philadelphia, and San Francisco, the mean spacing drops down to 223 meters, 214 meters, and 248 meters respectively, meaning as many as eight stops per mile. By contrast, in Europe it’s more common to see spacings of 300 to 450 meters, roughly four stops per mile. An additional 500 feet takes between 1.5 and 2.5 minutes to walk at the average pace of 2.5 to 4 miles per hour.
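The conversions in this passage are straightforward to verify: stops per mile is a mile (~1,609 meters) divided by the spacing, and the walking-time range follows from the quoted 2.5-4 mph pace. A quick check:

```python
# Converting mean stop spacing to stops per mile, and the walking-time
# cost of an extra 500 feet, using the figures from the article.

METERS_PER_MILE = 1609.34

def stops_per_mile(spacing_m):
    return METERS_PER_MILE / spacing_m

for city, spacing in [("US mean", 313), ("Chicago", 223),
                      ("Philadelphia", 214), ("San Francisco", 248)]:
    print(f"{city}: {stops_per_mile(spacing):.1f} stops/mile")
# US mean: 5.1, Chicago: 7.2, Philadelphia: 7.5, San Francisco: 6.5

# Extra walk time for 500 feet at a 2.5-4 mph walking pace.
extra_miles = 500 / 5280
print(f"{extra_miles / 4 * 60:.1f} to {extra_miles / 2.5 * 60:.1f} minutes")
```

The 214-meter Philadelphia spacing works out to about 7.5 stops per mile, consistent with the article's "as many as eight," and the walk-time range lands close to the quoted 1.5 to 2.5 minutes.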
[...]
Close stop spacing slows buses down. When a bus stops, it loses time as passengers get on and off the bus (dwell time). The bus also needs to decelerate and accelerate; it may need to kneel (hydraulically lower itself to the floor and back up again to let strollers, wheelchairs, and mobility vehicles on); it may need to leave traffic and return into traffic; and it may miss a light cycle (non-dwell time). Buses spend about 20 percent of their time stopping then starting again.
[...]
Close stop spacing also creates lower quality bus stops. In the US, the sheer number of bus stops means that agencies can’t invest meaningfully in each one. This results in many stops being ‘little more than a pole with a sign’, lacking basic amenities like shelters, benches, or real-time arrival information. Uneven and cracked sidewalks and a lack of shelter or seating present a particular challenge for elderly and disabled riders.
By contrast, a bus stop in a French city like Marseille will have shelters and seating by default. Higher quality stops in the city also include real time arrival information, better lighting for safety, level boarding platforms, curb extensions that prevent illegal parking at bus stops, and improved pedestrian infrastructure leading to the stops. Marseille is not a particularly wealthy French city, but because it has wider stop spacing and fewer stops, it can invest more money into each one.
[...]
A McGill study found that even substantial stop consolidation only reduced system coverage by one percent. A different study modeled a stop balancing proposal for San Luis Obispo, and found that even a 44 percent reduction in stops would have only a 13 percent reduction in coverage area. New York’s transit authority increased the distance between stops on a local route from ten to seven stops per mile (a 42 percent increase in distance between stops) but estimated that the average walking distance went up by only 12 percent.
Buses that move more quickly can traverse their routes more times per day. That means that achieving the same frequency requires fewer drivers as the speed of the journey goes up. Because labor is the largest expense of running a service, faster buses are cheaper to run.
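The driver-savings logic reduces to simple fleet arithmetic: to hold a given headway, a route needs enough buses that one finishes the full cycle every headway, so a faster cycle can drop a whole bus. A sketch with hypothetical numbers (the 96/89-minute cycle times are made up for illustration, not from the article):

```python
import math

# Fleet size needed to maintain a headway on a route:
#   buses_required = ceil(cycle_time / headway)
# Cutting cycle time (e.g. via stop consolidation) can "save a bus".

def buses_required(cycle_minutes, headway_minutes):
    return math.ceil(cycle_minutes / headway_minutes)

headway = 10  # one bus every 10 minutes
print(buses_required(96, headway))  # 96-minute round trip -> 10 buses
# Stop consolidation trims ~7% off the cycle time:
print(buses_required(89, headway))  # 89-minute round trip -> 9 buses
```

Because the requirement is a ceiling, even a modest speed-up can cross a threshold and remove a vehicle (and its driver shifts) from the schedule, which is what the Vancouver and Montreal figures describe.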
[...]
In Vancouver, stop balancing on one route saved the transit operator $700,000 CAD (about $500,000) in annual operating costs owing to peak vehicle savings. They estimate they will save a further $3.5 million each year by cutting stops across their 25 most frequent routes. In the study from McGill on Montreal’s operator, stop balancing had the potential to ‘save a bus’ (reduce the total buses needed each day by one) on 44 routes.
[...]
Vancouver found that stop balancing improved the reliability of Line 2, especially on the slowest trips. This helps passengers plan their journeys and agencies maintain more accurate schedules, reducing the need for excess recovery time at the end of routes. If agencies want to maximize the benefit of stop balancing on reliability, they can incorporate passenger boarding variability into their stop consolidation program, as McGill University did in their proposal for Montreal’s Bus Network.
-
The United States needs fewer bus stops
6 votes -
Comment on How we rebuilt Next.js with AI in one week in ~comp
Coding agents are definitely capable of fixing issues like the stuff you found, so it's odd that they didn't spend another $500 on code health.
-
Comment on How we rebuilt Next.js with AI in one week in ~comp
From the article:
Last week, one engineer and an AI model rebuilt the most popular front-end framework from scratch. The result, vinext (pronounced "vee-next"), is a drop-in replacement for Next.js, built on Vite, that deploys to Cloudflare Workers with a single command. In early benchmarks, it builds production apps up to 4x faster and produces client bundles up to 57% smaller. And we already have customers running it in production.
The whole thing cost about $1,100 in tokens.
[...]
The current deployment target is Cloudflare Workers, but that's a small part of the picture. Something like 95% of vinext is pure Vite. The routing, the module shims, the SSR pipeline, the RSC integration: none of it is Cloudflare-specific.
Cloudflare is looking to work with other hosting providers about adopting this toolchain for their customers (the lift is minimal — we got a proof-of-concept working on Vercel in less than 30 minutes!). This is an open-source project, and for its long term success, we believe it’s important we work with partners across the ecosystem to ensure ongoing investment. PRs from other platforms are welcome. If you're interested in adding a deployment target, open an issue or reach out.
[...]
We want to be clear: vinext is experimental. It's not even one week old, and it has not yet been battle-tested with any meaningful traffic at scale. If you're evaluating it for a production application, proceed with appropriate caution.
[...]
A project like this would normally take a team of engineers months, if not years. Several teams at various companies have attempted it, and the scope is just enormous. We tried once at Cloudflare! Two routers, 33+ module shims, server rendering pipelines, RSC streaming, file-system routing, middleware, caching, static export. There's a reason nobody has pulled it off.
This time we did it in under a week. One engineer (technically engineering manager) directing AI.
[...]
What changed from those earlier attempts? AI got better. Way better.
[...]
Why do we have so many layers in the stack? This project forced me to think deeply about this question. And to consider how AI impacts the answer.
[...]
It's not clear yet which abstractions are truly foundational and which ones were just crutches for human cognition. That line is going to shift a lot over the next few years. But vinext is a data point. We took an API contract, a build tool, and an AI model, and the AI wrote everything in between. No intermediate framework needed. We think this pattern will repeat across a lot of software. The layers we've built up over the years aren't all going to make it.
Also, a large company can probably clone your open source project rather easily. And so could a hobbyist.
-
How we rebuilt Next.js with AI in one week
16 votes -
Comment on The evolution of eyes began with one in ~science
From the article:
In 1994, scientists didn’t know enough about those microscopic details to develop a hypothesis for how they evolved as well. Three decades later, that’s no longer the case. “There’s lots of molecular data now that we can use that is extremely powerful,” Dr. Nilsson said.
He and other vision experts have now joined forces to develop a hypothesis for how vertebrate eyes evolved.
“You look at all the evidence in your head, and it suddenly clicks,” said Tom Baden, a neurobiologist at the University of Sussex who collaborated with Dr. Nilsson. They and their colleagues unveiled their detailed scenario for the evolution of vertebrate eyes on Monday in the journal Current Biology.
[...]
The scenario starts about 560 million years ago, when our invertebrate ancestors lived mostly buried in the ocean floor. They stuck out their brainless heads to filter bits of food floating by.
On the top of their heads, Dr. Nilsson and his colleagues propose, these forerunners of vertebrates possessed a single patch of light-sensitive cells. Those cells tracked the cycle of day and night, setting the animals’ body clocks, and also provided simple clues about their position, so that the animals could keep their heads just high enough to eat without being eaten.
Some of the descendants of this cyclopean ancestor left their burrows and started to swim. They were still simple creatures with tiny brains, and they still filtered food from the water they swam through. But now they needed more information about their environment.
Their single eye grew more complex. Cup-shaped depressions evolved on either side, sensitive to the direction of incoming light. (Different light-sensitive cells became active depending on where they sat along the curve of the cups.) Dr. Nilsson and his colleagues argue that these were the forerunners of the retinas in our own eyes.
An awareness of the direction of light helped the animals travel through the water, enabling them to stay upright and stable.
[...]
Over millions of years, our filter-feeding ancestors evolved into tiny fish, complete with brains and mouths they could use to catch live animals. Dr. Nilsson and his colleagues contend that this transformation could not have happened without an additional change to the eyes.
“There is a better place for them, on either side of the head,” Dr. Nilsson said.
[...]
But as the new eyes migrated to their new positions, the animals still retained the ancestral eye on top of their heads. While it could not provide details about their surroundings, it continued to provide vital information, such as the overall level of light. Modern fish still have a light-sensitive patch of cells on top of their heads, known as the pineal gland.
“It’s a compelling new idea, but the jury is still out,” said Karthik Shekhar, a computational biologist at the University of California, Berkeley, who was not involved in the study. One way to test the idea, he said, would be to compare the activity of cells in the pineal gland and the retina in many vertebrate species. If Dr. Nilsson and his colleagues are right, the cells in the two organs should have deep molecular similarities — a sign of the deep evolutionary link.
[...]
But new fossil discoveries suggest that the evolutionary course of eyes may have taken some surprising turns that Dr. Baden and his colleagues hadn’t envisioned in their hypothesis.
In recent years, paleontologists in China and England have been studying some of the earliest vertebrate fossils, which date back 518 million years. These show traces of eyes on the sides of the head, complete with lenses and retinas. But at the top of the head, there is a second pair of eyes, complete with lenses and retinas.
Jakob Vinther, a paleontologist at the University of Bristol, speculated that early vertebrates — small animals hunted by big invertebrates — may have benefited from the extra-wide field of view that four eyes provided.
[...]
Dr. Nilsson speculated that the extra eyes might have evolved thanks to a drastic change to vertebrate DNA. Some studies hint that, early in the evolution of vertebrates, the entire genome was duplicated. An extra set of genes may have given rise to an extra pair of eyes.
Not a lot of evidence in the article. The paper apparently goes into it more, but I didn't find it all that readable.
Although I never had the chance to, since I hadn't heard of her yet, I regret not seeing Hiromi when she was playing with her previous bands, Sonicbloom and Trio Project. I've seen her twice with her latest band, which is okay, but I don't like the songs as much.
Fortunately there are lots of great concert videos on YouTube.