Together, Krapivin (now a graduate student at the University of Cambridge), Farach-Colton (now at New York University) and Kuszmaul demonstrated in a January 2025 paper that this new hash table can indeed find elements faster than was considered possible. In so doing, they had disproved a conjecture long held to be true.
This is probably nitpicky, but why is this called a "data science" conjecture rather than one in computer science? I get that "data science" is a buzzword now, but it wasn't back in the 80s when Yao's conjecture was... conjected. I already have trouble, as someone whose last job title was "data scientist", with "data science" being used to describe too many dissimilar things. It goes from having a very nebulous meaning to being nigh-meaningless if you use it for any computer science concept that remotely involves data.
It's just the layman reporter subbing in words that they know. You don't have to get further than the second paragraph.
yeeaaaahhhhh
I like how the reporter tried to explain pointers. I thought they did a decent job for someone with no background.
I imagine the arrows came from someone drawing it out for them.
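For anyone who never met them: a pointer is just a variable whose value is the address of some other piece of data, and the article's arrows are a drawing of "go to this address to find the next thing." A minimal C sketch, with names of my own invention:

    #include <stdio.h>

    int main(void) {
        int value = 42;    /* an ordinary int sitting somewhere in memory */
        int *ptr = &value; /* a pointer: stores the ADDRESS of value, not 42 */

        printf("value lives at address %p\n", (void *)ptr);
        printf("following the arrow: %d\n", *ptr); /* dereference = follow the arrow */

        *ptr = 7;          /* writing through the pointer changes the original */
        printf("value is now %d\n", value);
        return 0;
    }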
Oof, I remember pointers.
...Vaguely.
Here is the paper. From the abstract:

We show that, even without reordering elements over time, it is possible to construct a hash table that achieves far better expected search complexities (both amortized and worst-case) than were previously thought possible. Along the way, we disprove the central conjecture left by Yao in his seminal paper "Uniform Hashing is Optimal". All of our results come with matching lower bounds.
From discussion on Hacker News, the reason this isn't a practical result is that normally, when a hash table gets full enough, it's automatically resized.
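To put that in code: none of this is from the paper, just the conventional resize policy (a 0.75 load-factor threshold with doubling is a typical default; Java's HashMap uses exactly those numbers), sketched as a toy linear-probing table in C:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy open-addressing table; only the resize policy matters here.
       Keys must be nonzero, since 0 marks an empty slot. */
    typedef struct {
        long *slots;
        size_t capacity;
        size_t count;
    } Table;

    static void insert(Table *t, long key);

    /* Double the array and rehash everything once the table is 3/4 full. */
    static void grow(Table *t) {
        long *old = t->slots;
        size_t old_cap = t->capacity;
        t->capacity *= 2;
        t->slots = calloc(t->capacity, sizeof(long));
        t->count = 0;
        for (size_t i = 0; i < old_cap; i++)
            if (old[i] != 0)
                insert(t, old[i]);
        free(old);
    }

    static void insert(Table *t, long key) {
        if (4 * (t->count + 1) > 3 * t->capacity) /* load factor would pass 0.75 */
            grow(t);                              /* ...so resize BEFORE it gets full */
        size_t i = (size_t)key % t->capacity;     /* trivial hash, good enough here */
        while (t->slots[i] != 0)                  /* linear probing for a free slot */
            i = (i + 1) % t->capacity;
        t->slots[i] = key;
        t->count++;
    }

    int main(void) {
        Table t = { calloc(8, sizeof(long)), 8, 0 };
        for (long k = 1; k <= 100; k++)
            insert(&t, k);                        /* triggers several doublings */
        printf("%zu keys, capacity grew to %zu\n", t.count, t.capacity);
        free(t.slots);
        return 0;
    }

Which is the Hacker News point, as I understand it: the interesting regime in the paper is a table very close to completely full, and real implementations resize well before they ever get there.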