V17's recent activity
-
Comment on Ukrainian support for war effort collapses in ~society
-
Comment on I wanna raise crickets for the sound in ~life.pets
V17 As someone with tinnitus that slightly resembles crickets and is made significantly worse by having to listen to them, you seem like a crazy person, but I wish you good luck in your quest!
Surely there must be books that include some information on how to raise crickets. Try asking an LLM that is able to search the internet about books or other sources (ChatGPT 5 is fine); LLMs are often great tools for looking up exactly the things that you don't know how to query on Google.
-
Comment on Ricoh announces new specifications and details of their upcoming compact camera Ricoh GRIV in ~tech
V17 I think stu omits some of the specific consequences. Basically at least 50% of the resulting photo is about how the raw digital data is processed, and a real camera gives you flexibility that a smartphone is incapable of. Phone cameras are heavily dependent on very sophisticated automated image processing to get a great result out of hardware severely limited by its tiny size (both the chip and the lenses).
This means that as you take a picture, the camera app guesses how to extract the most image quality out of the specific scene using various types of HDR processing, color mapping, denoising, sharpening (often in multiple steps for all of those) and other tools that are invisible to the user. It leads to a specific result that usually looks great. This process, by the way, needs to be tailored to the specific hardware used, which is why cheap phones, even with pretty good camera hardware, don't do as well as flagship phones - the manufacturer doesn't spend as much time finetuning it.
With a real camera you do all of those steps as well, but because the input data is so much better, there isn't just one way to get the best quality out of the picture, there are numerous ways to do it, you can do the whole process yourself, and you get a lot of freedom in how you want to develop the photo. This shows the most in photos with difficult lighting conditions, high dynamic range etc., or simply if you want to be more creative with your photography. You don't need to go for the photo that looks like the mainstream "best" photo that your iPhone steers you towards, you can do whatever you want with it.
In situations where the conditions are favorable, where the iPhone app correctly guesses how to make the photo look its best and you don't need low depth of field - picturesque landscapes with good lighting, say - the difference between it and a real camera can be so small as to be meaningless, especially to a layperson. In situations on the other end of the spectrum, the difference is huge.
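To make the invisible steps concrete, here is a toy sketch of such a development pipeline (hypothetical and vastly simplified - a real imaging pipeline also does demosaicing, multi-frame HDR merging, local tone mapping and much more; every function and parameter name below is made up for illustration):

```python
import numpy as np

def box_blur(img):
    """Cheap 5-point spatial blur, used by the unsharp-mask step below."""
    out = img.copy()
    for axis in (0, 1):
        for shift in (-1, 1):
            out = out + np.roll(img, shift, axis=axis)
    return out / 5.0

def develop(raw, wb_gains=(1.8, 1.0, 1.5), gamma=2.2, sharpen_amount=0.5):
    """Toy raw 'development': white balance -> tone curve -> sharpen -> gamma.

    `raw` is a linear RGB array in [0, 1]. Every parameter here is a
    creative choice - that choice is exactly the flexibility a real
    camera's raw file gives you, and exactly what a phone decides for you.
    """
    img = raw * np.asarray(wb_gains)                    # per-channel white balance
    img = img / (1.0 + img)                             # simple global tone-mapping curve
    img = img + sharpen_amount * (img - box_blur(img))  # unsharp-mask sharpening
    img = np.clip(img, 0.0, 1.0)
    return img ** (1.0 / gamma)                         # display gamma
```

Swapping any single step (a different tone curve, more or less sharpening) yields a visibly different but equally "correct" photo, which is the freedom being described.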
-
Comment on Famous cognitive psychology experiments that failed to replicate in ~science
V17
> unconscious bias measurement being the hot thing more recently
I remember doing an online test on specifically the mechanism mentioned in the article, published by some of the researchers, probably over a decade ago when it was trending during Reddit's "social justice" era - in retrospect it's quite ridiculous how many pages of frustrating online arguments and shouting matches were written based on bad research. Anyway, I did the test as honestly as I could and the results told me that I do not have any bias against obese people. That was when I decided it's probably bullshit.
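For context, tests of this kind typically score the difference in reaction times between "congruent" and "incongruent" pairing blocks, scaled by variability. A simplified sketch of that idea (not the exact published IAT scoring algorithm, which has more preprocessing steps):

```python
from statistics import mean, stdev

def d_score(congruent_ms, incongruent_ms):
    """Simplified IAT-style D score: how much slower the incongruent
    block's reaction times were, in units of the pooled standard deviation."""
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# A clearly positive score is read as implicit bias; near zero is read as none.
```

Part of the replication critique is that a score built from noisy millisecond differences like this has low test-retest reliability, which fits the experience described above.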
-
Comment on Famous cognitive psychology experiments that failed to replicate in ~science
V17
> Ironically, due to the replication crisis, I don't actually know what's true without doing much more of a deep dive
This really is a problem, isn't it? I recently listened to a long interview with a scientist who's been doing research on something that on the surface sounds very similar to the "ego depletion effect" and "willpower battery", except it's about the ability to concentrate on certain tasks, iirc also working memory and maybe some other metrics, and ways to recharge that ability. Which to a layperson sounds very close to "having or lacking the willpower to do something". But the research looks solid - the dude's lab has been working on isolating very specific aspects of the effect to understand how it works, and so far it all holds up.
I think that articles like these definitely make the overall situation better, but I know that many people incapable or unwilling to look deeper will throw out the baby with the bathwater with regards to research that seems closely related but holds up. Hopefully most of the "damage" is only going to be annoying arguments on reddit.
-
Comment on ‘Being short is a curse’: the men paying thousands to get their legs broken – and lengthened in ~life.men
V17 I don't disagree with your tips on how to meet women, and it's true that normal people meet and date normal people and the internet distorts that reality to some degree, but the whole comment feels like you are not entirely aware of the social bubbles you live in.
It reminds me of an observation/interpretation of the conflict between contemporary feminists and certain groups of redpill men. The gist is that their disagreements stem, among other things, from them living in quite different realities because the feminists are mostly middle class women with higher education whereas the redpill guys are often working class men. Saying that men only care about looks in women or that men cheat on their wives with their babysitter is a social norm among working class men (whether the person saying it actually does those things or not). The men are saying how they think the world works around them right now. The middle class educated women however see that obviously this is not how their world works, so they call the men bigoted or brainwashed.
I think that a similar thing must be happening here, because while in my social circles the world does mostly work this way, outside of them it's often a very different image. The world is full of shallow and stupid people, it always has been and it always will be. And they don't even have to be those things, average preferences imo obviously change in different groups of people to some degree. I'm a skinny nerdy young looking guy, I'm not effeminate in my looks or demeanor, but I'm obviously non-threatening. In my experience this seems to be much more of an attractiveness handicap in certain social groups than in others.
I'm just short of 180 cm, so I never had problems with height specifically, though I still remember the first look of a Tinder date who was 5 - 10 cm taller than me and neither of us realized in advance, and how she wasn't entirely successful in hiding her disappointment - fortunately I thought it was funny rather than disappointing. But dating certainly seemed visibly tougher for my short friends, even though most of them were eventually successful.
-
Comment on AI is a mass-delusion event (gifted link) in ~tech
V17
> I think it's much simpler. It's never going to happen because, as mentioned, they will just try to ram it until it sticks again. It's incredibly slim, but I can see a society where it's untenable for any business to use AI. That would basically be an "AI-less" world, except for some small toy GPTs for personal use.
It's already affordable for businesses to use somewhat useful self-hosted AI models; it's just more practical and reasonably cheap not to do that and always have the best thing. And again, while moronic managers push AI in applications where it's useless, which inflates usage, it is already immensely useful in some areas, so saying anything like "it's only going to stick if it's pushed on us by force" has no basis in reality.
> The only solace here is that this isn't going to happen in 5 years like corporate wants to hype it up to be. Maybe over the course of 20 years, with a shift in policy makers, we could either address this issue or prevent it entirely.
This is what I'm saying basically. All of the things you mention above this are going to happen with a fast continuing evolution of AI as well, and to a higher degree. Plateauing development would at least give us some time to adapt.
> Not really. I was just reading an article about how 95% of Gen AI projects ended in failure. The technology is there but it sure isn't being wielded competently. It's wielded by a businessman introduced to a hammer and claiming they are a carpenter.
Eh, on one hand I agree with the last sentence, but on the other hand I only believe statistics that I falsify myself. Plus this says absolutely nothing about the most important part, which is just using existing AI models in some workflows within existing fields. One example: I'm using LLMs to do R&D in electroacoustics/psychoacoustics as a self-taught person, and it's been an absolute game changer, comparable with the introduction of widely available advanced loudspeaker simulation software.
-
Comment on AI is a mass-delusion event (gifted link) in ~tech
V17
> AI will still be good for generating bespoke stock images, translating, autocomplete, search, etc.
DeepSeek is good enough for basically what we use ChatGPT for - likely slightly worse, but not by an order of magnitude - and afaik it can be run locally without needing the most expensive, low-availability GPUs, because reasoning models can be run partially on several GPUs (so you do need several top-of-the-line gaming GPUs, but those are accessible). Not 100% sure on that specifically - maybe it was only with VRAM mods, which few people dare to do - but my point is that the usage that we have for LLMs now is not going away.
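The multi-GPU point can be sanity-checked with back-of-the-envelope arithmetic: weights memory is roughly parameter count times bytes per weight. All numbers below are illustrative assumptions (KV cache and activations are ignored), not specs of any particular model:

```python
import math

def gpus_needed(params_billions, bits_per_weight, vram_per_gpu_gb, overhead=1.2):
    """Rough number of GPUs whose combined VRAM fits a model's weights.

    `overhead` is a guessed fudge factor for runtime buffers; KV cache and
    activations are ignored, so treat the result as a lower bound.
    """
    weights_gb = params_billions * bits_per_weight / 8  # 1B params ~ 1 GB at 8-bit
    return math.ceil(weights_gb * overhead / vram_per_gpu_gb)

# e.g. a 70B-parameter model quantized to 4 bits on 24 GB gaming cards:
# 35 GB of weights, ~42 GB with overhead -> 2 GPUs
```

This is why quantization (fewer bits per weight) is what makes "several gaming GPUs" plausible at all for large models.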
-
Comment on AI is a mass-delusion event (gifted link) in ~tech
V17
> The AI bubble bursts, everyone wakes up from this delusion and we return to a world without AI
Nonsense, never going to happen because for many things it is unquestionably already good enough. For example learning new complicated things - a situation where you need to verify what you're learning anyway, but it's difficult to get over the beginner hump because you don't even know where to start, what kind of theory you need, you don't know the terminology... LLMs are already immensely useful here in giving you the basic intuition and information on how and where to start, especially the new "reasoning" models that hallucinate less and can also verify things in scientific publications etc.
> AI plateaus and performs just good enough as the article describes. It might contribute some good solutions to certain issues but nothing life-changing, while making the world a whole lot worse.
I fail to see how "the world is not going to be radically transformed in ways that may end up okay overall, but will no doubt create huge challenges and at least some significantly negative changes" is a bad outcome. It would at least give us time to eventually adapt. I wish this were true, because I think the result wouldn't be as bad as suggested, but I'm worried the development is not going to stop there.
> Right now, the models don’t think—they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap.
Imo this is not too far from the nonsensical "stochastic parrot" term. We are in the middle of one of the fastest technological evolutions in history, one that exploded into the mainstream because we found out that if we just make a neural net big enough, it starts to have unexpected emergent properties. There is no rule that says the development is going to continue just as fast, but so far it has (o1 was released less than a year ago and it was a big step up in capabilities); saying that surely it's going to stop any moment now has no basis.
Do note that I'm not saying that AI is great and that the changes are going to be positive, I hope that's obvious from how I see the second outcome.
Also, I cannot predict the economic consequences of a potential bubble bursting, but I do predict that if it bursts, it's not going to be a burst that deletes the whole field; it's just a question of how big the setback is going to be. Worst case scenario, we're going to be using DeepSeek R1 or some equivalent on our own future GPUs, or more likely on AI acceleration chips (already on the market, though not nearly good enough yet).
-
Comment on My favorite mouse costs less than USD 10 in ~tech
V17 Anyone have a tip for a similarly cheap (semi)vertical mouse?
I used to use a Connect IT mouse for around that price, but they use shitty scroll wheels that broke just after the warranty was over, 2 or 3 times in a row. Now I'm using an Anker mouse which costs about twice that, it's almost exactly the same and the scroll wheel hasn't broken yet, but it looks exactly the same as well from the outside, so I think it's planned obsolescence time again.
-
Comment on How embryo selection exploits our flawed intuitions about risk in ~science
V17 (not skybrian replying)
I have found that nearly every time I encounter articles criticizing Scott Alexander, after going either to the articles of his that the criticism is referencing, or to other articles touching the same topic (he regularly explicitly or implicitly references his past writing) that give more context to his opinions, it's never quite what the critic claims. So to form your own opinion, I recommend doing that.
Perhaps the most controversial example is his private leaked emails about "human biodiversity", in other words the claim that there is a correlation between ethnicity and IQ (and possibly other heritable traits). This is obviously shocking. However, the content of the emails is roughly "I did my best to go through all the available evidence and I'm quite worried, because the conclusion is that there may be some truth in it. I'm only saying this privately because the implications of that would be very bad".
This may still be shocking, but much less so than the way it's usually presented. He may be wrong about the data (I don't know; as a psychiatrist he's closer to the field than I am), but he obviously doesn't see the possibility as a good thing.
Other criticisms I saw and double-checked against his articles were considerably less shocking than this; I think this is clearly the worst of it.
-
Comment on How embryo selection exploits our flawed intuitions about risk in ~science
V17 I think it's worth saying that the person you're accusing of sanewashing nazi shit recommends voting for democrats in iirc every US election.
-
Comment on How embryo selection exploits our flawed intuitions about risk in ~science
V17
> It's the same issue I have with where "effective altruism" goes. It's not just like "checking to see if where you donate funds is a responsible steward of your money" but it manages to disregard practical solutions about saving lives today in favor of "solving" big picture problems, sometimes by investing in AI to inevitably let it solve those future big picture problems. (ETA: and a lot to basically prevent an evil AI that will kill us all or something)
> Meanwhile mosquito nets don't get funded and people die of malaria. Maybe there were good intentions behind it, but it seems like ultimately the same people and same money behind all of it.
I can't agree with this when arguably the best-known charity evaluator connected to effective altruism, GiveWell, has mosquito nets as the second-best option, the first being malaria treatment; the other two in their top charities list are preventing/treating vitamin A deficiency and vaccines, both in children. Giving What We Can, which motivates people to give 10% of their income to effective charities and was also founded at least partially by EA people, is the same. I listened to a couple of interviews with EA people done by Sam Harris and I think Lex Fridman, and mosquito nets were the number one example of efficiency mentioned in both.
-
Comment on Why are there so many rationalist cults? in ~life
V17 I don't really want to play devil's advocate for these people, but I don't think that's what they believe, at least the ones writing about it online. Generally, as is surely not surprising, there seems to be an overlap between racists and certain ultraconservatives or various alt-right subgroups, who both commonly have an insistence on working hard, and on working for themselves or their immediate surroundings as opposed to being forced to work for society in general.
What I've seen is more along the lines of "they are genetically not smart enough and too lazy to work and be as good as us hard-working men", or what could be shortened to "as opposed to them, we are capable". The people in these discussions seem to commonly point out achievements of other people instead of their own, but they still ultimately focus on actual achievements, crime (very commonly) etc. instead of whiteness as the ultimate achievement.
-
Comment on Why are there so many rationalist cults? in ~life
V17
> As an aside I don't really get why people would want to be a member of the 'master race' in the first place, doesn't that devalue all your own achievements?
I don't think that's the motivation because the conclusions tend to be that east asians and ashkenazi jews are more intelligent on average than caucasians. Any specific goal is probably closer to proving that african and aboriginal ethnicities are various levels below others.
Though I don't see how being more intelligent would devalue any achievements to be honest.
-
Comment on Forums are still alive, active, and a treasure trove of information in ~tech
V17 I've never used it, but Tapatalk is iirc meant to do exactly this.
-
Comment on Why are there so many rationalist cults? in ~life
V17 You're making up a fictional scenario in "a group could flawlessly screen for intelligence somehow, and everyone admitted was genuinely very smart" and then applying the limitations that apply to the real-world Mensa ("only weird people who didn't achieve anything yet would join"), which cannot do the idealized screening because it's impossible.
I'm telling you right now that if such an idealized group of people existed, I would definitely join, and I think many others would too. I met some brilliant people in academia, but few of them were well-rounded enough to be experts in more than one field (similar to what I studied), so the things we could talk about were limited. It's similar in many expert professional fields. I really value my doctor friends, for example, because we can share interesting and advanced things that the other knows very little about. Having friends in wildly different fields is also very useful for connections when doing whatever. When I was attempting to start a tiny company that built and sold designer hifi loudspeakers, having good connections with mutual respect to people who did marketing and graphic design was invaluable. But they're also just interesting people to meet.
I was writing some more anecdotes but honestly getting to know very smart people is just a topic in my life and I don't think I'm alone in this at all. And it's both genuine interest and practical usefulness. Many successful smart people do seek out precisely this as well through various private clubs - those are often gatekept using money, but not exclusively, and financial interests are not nearly the only reasons for their existence.
-
Comment on Why are there so many rationalist cults? in ~life
V17 I honestly don't think so at all, and I'm saying that in the context of living almost entirely in a university-educated social bubble. But I admit that like skybrian I only know the ones that post online, especially on blogs but also forums. But those tend to be the most visible and therefore also most criticized ones anyway.
-
Comment on Why are there so many rationalist cults? in ~life
V17 The thing is that despite all of those valid criticisms of rationalists, my experience with them is that they tend to be smarter than at least 95% of the population, including most of their critics. So if rationalists cannot escape this, what does that tell us about humanity in general?
The lesson, of course, is to be more humble, but that doesn't negate that the problem still exists.
-
Comment on Why are there so many rationalist cults? in ~life
V17 Perhaps I worded it wrong, but I meant to say that RationalWiki, despite being unrelated to rationalists, is another community that claims to focus on rationality despite being quite biased and not exactly rational. Perhaps the word itself is a potential red flag when used in this way.
And iirc an interesting detail on this is that there was relative political unity in these matters, opinions were roughly the same among both Zelensky supporters and supporters of the opposition. So it's unlikely to change even if an election happens and Zelensky loses.