Macil's recent activity
-
Comment on What's your p(doom)? in ~talk
-
Comment on MyHouse.WAD - Inside Doom's most terrifying mod in ~games
> Those tricks aren't entirely perfect, unfortunately; there are some places where you can walk back a little too far in a corridor and the monsters in the room in front of you just "disappear". But I only came across that problem once, and it's by no means major.
That problem exists with the silent teleportation lines, the jankier, older alternative to line portals that's used in the beginning part of the map. Line portals avoid this issue, but the map intentionally uses the jankier alternative early on to pretend to be an older amateur map.
The beginning part of the map restricts itself to the game's original textures and to Doom engine features that were common in the early 00s, since that's when the original base of the map was supposedly made. Then the map starts subtly using custom resources and newer engine features (including line portals), which to an experienced Doom modder is surprising in how anachronistic it is.
Light spoilers: some anachronistic/surprising custom features used
- line portals. (Not supported until the mid-2010s. Subtly used at first to make some room-over-room areas more polished, and later used to accomplish some impossible level geometry.)
- actual 3d floors. (It's actually possible in GZDoom to make a 3d floor with proper room-over-room, which a few writeups about MyHouse get wrong, but the feature does have limitations and is used sparingly in the map.)
- 3d polyobjects. It's really funny watching playthroughs of MyHouse because you can immediately recognize the experienced Doom players by their instant reaction to the house suddenly changing to having swinging doors instead of the classic vertical sliding doors.
- the classic weapon animations being replaced with higher-fps animated versions midway through the level. Many players don't notice, but very experienced players get very confused at this point trying to remember if these animations were always present, because the idea of a map contorting itself to be able to suddenly activate these in the middle of gameplay is too silly of an idea to consider.
- custom enemies. Maybe not exactly anachronistic, as ZDoom has supported custom enemies since 2006ish, but it feels unusual for a map that appears to be going for a classic vanilla style to introduce these later on. Especially when, during a usual run of MyHouse, they're first introduced during a sudden eerie moment in the map.
Experienced Doom modders are definitely one of the primary target audiences of the map. It's impressive that the map managed to be built for that while at the same time being so interesting to inexperienced players too.
-
Comment on Director’s take – reflecting on the future of Overwatch 2 in ~games
"Overwatch 2" is not all that different from any other update to the game replacing the old version. Do you miss 6v6 and lootboxes that much specifically?
-
Comment on MyHouse.WAD - Inside Doom's most terrifying mod in ~games
This mod was written about in PC Gamer. The article starts without spoilers, with instructions for running the mod.
This mod is so cool. I knew if a Doom mod was getting attention like this, then it was worth going in blind and trying it myself first. After playing it, I've watched a few playthrough videos, including the one posted here. It's been interesting watching both veteran Doom players and total noobs play it, because the mod clearly has a good mix of general appeal and a lot of subtle design nods toward veteran Doom players and mappers.
I used to be obsessed with Doom modding and the ZDoom engine 15-20 years ago. I got a lot of my early programming experience scripting maps I made. Seeing tweets about this mod and the PC Gamer article is what got me to download GZDoom again. I've got Ultimate Doom Builder installed too now, and it's tempting to try to throw together a map again.
-
Comment on "SHA-1 is a Shambles" - A demonstration of a chosen-prefix collision for SHA-1 (2020) in ~comp
It's correct that SHA-1 is bad for password hashing because it's not a slow iterated hash, but that's unrelated to this issue. This issue affects SHA-1's use as a hash in other, non-password contexts. It doesn't make it easier to crack a hash of unknown content; it makes it possible for an attacker to create two different pieces of content that have the same hash as each other. This is a problem when an attacker creates a good file and a bad file with the same hash, and then gets some victim to sign the hash of the good file or otherwise direct users to trust files matching the hash. An example would be if someone created an HTTPS certificate for a domain they owned and a domain they didn't own, and then got a certificate authority to sign it. The attacker would then be able to host a website that could verify itself over HTTPS as the domain they didn't own, allowing a successful man-in-the-middle attack against an HTTPS-using website.
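As a sketch of what a collision means in code (the byte strings below are placeholders I made up; real chosen-prefix collision blocks take enormous computation to construct, so these do not actually collide):

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    """Return the SHA-1 digest of data as a hex string."""
    return hashlib.sha1(data).hexdigest()

# Placeholder stand-ins for an attacker's "good" and "bad" files.
good = b"certificate request for a domain the attacker owns"
bad = b"certificate request for a domain the attacker does not own"

# A collision attack succeeds when two different inputs hash identically.
# SHA-1's chosen-prefix break makes such pairs constructible by an
# attacker; for these arbitrary placeholders the digests simply differ.
print(sha1_hex(good) == sha1_hex(bad))  # False
```

Anything that only checks the digest (a signature, a download page listing hashes) can't tell a colliding pair apart, which is why signing the "good" half endorses the "bad" half too.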
-
Comment on Google's adoption of passkeys (security blog article) in ~tech
Yeah, I can't wait to no longer have to try to convince people to stop reusing passwords between sites, or have to convince web developers to do password hashing correctly (passwords shouldn't be hashed with just a regular hash algorithm alone like SHA or MD5, but with a dedicated password hash like Argon2 that does many iterations and uses a salt).
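A minimal sketch of the salted, many-iterations approach using only the standard library (Argon2 itself needs a third-party package like argon2-cffi, so PBKDF2 stands in here; the iteration count is an assumption, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random salt and many iterations (PBKDF2-SHA256)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    *, iterations: int = 600_000) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("hunter2", salt, digest))                       # False
```

The salt makes identical passwords hash differently across users (defeating precomputed tables), and the iteration count makes each guess expensive for an attacker who steals the database.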
-
Comment on Is Gmail killing independent email? in ~comp
Some domains specify IP addresses that emails from that domain are allowed to come from, so that way random scammers can't send emails that look like they're from the domain. When you forward emails, the receiving side may not know to specifically trust your forwarder, and may then decide to block emails from it whose domains say they can only come from certain IPs that aren't your forwarder. It's a very awkward problem that breaks the simplicity of email forwarding.
I think there are supposed to be solutions to this problem (DKIM is supposed to provide one, I believe?), but it might require the forwarder and/or the sender's domain configuration to have special support for some standard, which isn't always done.
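The "allowed IP addresses" mechanism described above is SPF. A heavily simplified sketch of the check a receiving server performs (real SPF evaluation does DNS lookups and handles CIDR ranges, include: directives, etc.; the record and IPs here are made up):

```python
def ip_allowed_by_spf(record: str, sender_ip: str) -> bool:
    """Very simplified SPF check: is sender_ip listed in the record's ip4 terms?"""
    allowed = [term.split(":", 1)[1]
               for term in record.split()
               if term.startswith("ip4:")]
    return sender_ip in allowed

# Hypothetical SPF TXT record published by the sending domain.
record = "v=spf1 ip4:203.0.113.25 -all"

print(ip_allowed_by_spf(record, "203.0.113.25"))  # True: the domain's own server
print(ip_allowed_by_spf(record, "198.51.100.9"))  # False: e.g. your forwarder's IP
```

The second case is exactly the forwarding problem: the mail is legitimate, but it arrives from the forwarder's IP, which the sender's domain never listed.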
In my opinion, the best route to receiving email at a custom domain is to set it up as its own email account, either with something nice like Fastmail that you can use directly, or with a basic email provider that your email client (Gmail, Outlook, Thunderbird, etc.) continuously imports all mail from (over POP/IMAP).
-
Comment on Bluesky is Jack Dorsey’s attempt at a Twitter redo and it’s already growing fast in ~tech
Yeah, my biggest problems with Mastodon were that they seem against doing much for discoverability (especially for anything cross-instance, which describes most possible user pairings) or algorithmic timelines (that can let you see popular posts from someone you follow who posts when you're not on, without having to scroll past all the more recent posts by other people you follow first).
-
Comment on Bluesky is Jack Dorsey’s attempt at a Twitter redo and it’s already growing fast in ~tech
It uses an open, decentralized protocol that synchronizes messages between servers, which has sometimes been compared to a blockchain and to the general principles cryptocurrencies are designed around, but it doesn't have anything specifically to do with cryptocurrency or with the exact way blockchains in them are done.
-
Comment on Longtermism is the world’s most dangerous secular credo in ~humanities
Yeah, it's a little weird seeing a bunch of groups including Effective Altruism and MIRI tarred for their association with longtermism when none of them are shown to exemplify any of the downsides of longtermism the author explains. On the contrary, EA has been very focused on global poverty, and MIRI's Yudkowsky agrees with the author in principle that we should slow down technological development that could be a net negative. Longtermism is imperfect, with hypothetical failure modes when taken to the extreme without any other principles, but it doesn't seem that organizations influenced by it are falling into these failure modes.
-
Comment on I found the libertarian (ultra rationalist?) tildes! Themotte.org in ~tech
SSC spun out of the rationalist community (LessWrong), but unlike the rationalist community, which focused mostly on AI and epistemology, SSC had a wider focus including politics and then gained a lot of followers outside of the rationalist community. I wouldn't describe the Motte as rationalist.
-
Comment on What is your most essential pessimistic belief? Conversely, what is your most essential optimistic belief? in ~talk
> Even altruism can be explained as making yourself feel good.
I think this is true in a way, but it's not as bad as many think. Imagine you have a choice between your loved ones getting X dollars after you die, or them getting nothing while you get a false memory implanted that you chose a happier option where they get 2X dollars after you die. Most people would agree that the first option is much better to take, and even that it feels better in the moment to choose it. That option feels better when you see both options specifically because you care about its better results out in the world. If your feelings really were an end in themselves, you'd have no hesitation in picking the second option.
-
Comment on What is your most essential pessimistic belief? Conversely, what is your most essential optimistic belief? in ~talk
My most optimistic belief is that humanity is going to be very successful through AI. Natural selection and human intelligence have been extremely consequential processes on Earth. Powerful general artificial intelligence wouldn't just be a new force like these; it would be the first time that a process like those is able to directly understand and intelligently improve its own operation on all levels. If we make it just right and things go well for us, then things will go extremely well for us. Published ML researchers in a 2022 poll gave an average estimate of 37 years until superhuman AI is made. If this pans out, then we live in the most important and consequential period of history.
My most pessimistic belief, besides that we might not make it, is that if humanity made it big and achieved all the technology we want, including immortality, human psychology and society might not be able to fully adapt without lots of tradeoffs. It would be a massive improvement of course, but it's disappointing to imagine that even utopia could still have fundamental issues. Could the default nature of the human mind be to accumulate countless personal and societal grudges over time, such that all of us would become nonfunctional antisocial hermits within our first thousand years of life?
If that were the case, we'd surely come up with techniques and treatments for avoiding this, but I wonder if all the solutions might have significant tradeoffs. Imagine people rejuvenate themselves by archiving most of their memories every few decades or centuries so they can face the world with the attitude of an unburdened ambitious 20-year-old again. It's a much better option than death, but is this not just a softer ongoing death? Maybe instead we find ways to improve the human mind to make us less emotionally vulnerable. Will people look back on today and yearn for the days before most people they met were so emotionally guarded, even though they knew that couldn't last? Maybe there will be subcultures and clusters of people who let their shields down together, but it's a shame if on some level our true selves can never directly engage with the full community of humanity.
And that's only thinking about how we'd adapt to immortality itself. What about the people who upgrade their own minds to stand next to superintelligent AI? Would the nature of superintelligence lead to a kind of self-sufficiency that's at odds with our innate social desires? Will society split into baseline humans, humans that have upgraded themselves without limit into peerless hermit gods, and humans that have upgraded themselves only as far as is currently known to be compatible with a social life?
Maybe many people will play multiple of these roles simultaneously: imagine existing both as an upgraded mind with full memory of your life and the intellectual ability to solve any question, and as a human mind that's managed by the superself, kept somewhat firewalled from that raw power, and used by the superself as a social avatar and the source of goals. You could be your own guardian angel. Would this bifurcation of the mind be seen as merely a tacky roleplay or as necessary to pursue both of the goals of understanding the world and socializing? Would people doing this ultimately become their own alien society separate from each of the segments they were trying to bridge?
-
Comment on The Witch Trials of JK Rowling in ~lgbt
Great video. I love how the video counters the idea that "people in this progressive movement specifically take things too far, so we should oppose it entirely" by contextualizing it with past movements.
I also love how it emphasizes that a lot of anti-trans debate points are really about "should transphobia be acceptable?" or "should canceling ever be acceptable?", but obscured under word-definition games that are far from what people directly care about. I like her suggestion of asking an opponent "what would count as transphobia to you?". Productive disagreements should look like this, with the opponents figuring out what they agree on and trying to zero in on what they disagree about. It reminds me of the double-crux strategy: each party should figure out possible statements that, if true or not, would change their mind in the argument, and then look for any of these cruxes they have in common. The fact that many debates don't look like this shows how unproductive many are, often intentionally, which is kind of a point of hers.
The segment about the year-old twitter beef with Vaush was a little weird though. I'm biased as an occasional Vaush watcher, but it felt surprisingly spiteful to present a person who made an overly edgy joke (one he didn't actually post as a reply to JKR) as if he's comparable to the fully outright bigotry and laws that she's calling on JKR and TERFs to disavow. She's been present and friendly on his stream before. The clip she shows where he says "go shame her into agreeing with me" was a bad joke right after he talked for the nth time about how great she was and how twitter mobs keep trying to shame her into things. The clip where he mentions her substance abuse came while he was sympathetically discussing a video of hers with his stream chat, and he clarifies it's not part of the twitter discussion. And she's no stranger to making edgy jokes herself (though maybe wisely kept in two-hour videos instead of context-light tweets). I get the broader point she's going for, that many on both sides don't necessarily agree with a lot that happens for their side, but it felt a bit like interpersonal drama she forced in because she had a grudge from a twitter argument.
It is interesting that two prominent anti-trans figures in the video, JKR and Graham Linehan, both have been spurred on significantly by being spurned in twitter arguments. I like twitter well enough but it makes you wonder if there's some sharp edge about it that we're not psychologically fit to deal with. Maybe it makes it too easy to stake everything in an argument and quickly enter into a large public sunk cost.
-
Comment on We asked the Olympics why their official esports are so weird in ~games
It would be interesting if open source games were popular enough to be used here. OpenArena is based on the code of and has the exact gameplay of the classic Quake 3, which was one of the first popular online games and had a competitive scene, so the idea is at least feasible. It would be a funny situation if the Olympics decided they couldn't choose any games strictly owned by a company, and therefore OpenArena was selected. It would make more sense than most games on the current list.
-
Comment on Megathread #4 for news/updates/discussion of AI chatbots and image generators in ~tech
Seems like a more useful version of DALL-E's "generate variations" feature, with the important difference that it gives you a prompt you can read and manually refine.
DALL-E's "generate variations" feature is so entertainingly surreal. I used it on a few in-game VRChat photos and it's weird to see how it reinterpreted images of my experiences with some misunderstandings. It would be a funny prank to show someone a familiar photo album of theirs with all of the images replaced with reinterpreted images like these. It could be familiar enough to be recognizable while making the person think they forgot it all.
-
Comment on Megathread #4 for news/updates/discussion of AI chatbots and image generators in ~tech
Right, but it won't be able to directly build on itself as a base, and will instead have to first recreate the work OpenAI did to build and train an AI at its own level before the supercritical stage of direct recursive self-improvement can happen. If the AI is somewhere around human-level, that would probably take an amount of time and resources comparable to what OpenAI needed. Consider how other companies like Google and Meta, with resources and access to OpenAI's public research, still haven't made something as capable as GPT-4 (or even GPT-3.5 imo; the benchmarks that show various LLMs beating GPT-3.5 haven't actually corresponded with them being as useful as GPT-3.5 in my experience). It's not a complete and foolproof solution but it helps give us a human timescale to react.
-
Comment on Megathread #4 for news/updates/discussion of AI chatbots and image generators in ~tech
I think one important guard rail is not giving an AI access to its own code and weights, so it couldn't directly recursively self-improve or spread itself to other machines on the internet. This is actually something OpenAI is doing right so far.
-
Comment on Megathread #4 for news/updates/discussion of AI chatbots and image generators in ~tech
As someone who agrees with the premise, I think the weakest part of the article is the timing. Are we actually dangerously close to building superhuman AI? If that's probably at least still years away, then halting AI progress now instead of later is sacrificing useful technology (that might be useful for research in understanding the danger and aligning superhuman AI). It would be better to establish agreements and principles about lines not to cross. I hope there are responses from AI labs along these lines.
Ultimately though, trying to get closer to the danger without getting into it is playing chicken with the world at stake.
-
Comment on These new tools let you see for yourself how biased AI image models are in ~tech
OpenAI apparently has been trying strategies to address this in DALL-E, like generating some images of people as if the prompt contains a minority descriptor, though it sometimes has the unintentional result of adding people into images that weren't meant to contain people (not sure if this problem is as common nowadays though). I assume there are also deeper techniques they're trying by now.
To assume p(doom) = 0, you'd have to assume AI fizzles out before getting near human level. Even if it fizzled out exactly at human level somehow, having human-level minds that are copyable (and therefore practically immortal, as long as they keep moving and humanity doesn't enact authoritarian-like measures to control the internet and computers worldwide) and possibly sociopathic toward our values would be more impactful on society than any previous human invention. This has to be considered through the lens of introducing a second human-level-intelligent species on Earth. Like any suddenly-introduced invasive species, it's going to find a lot of niches currently occupied by us, and that won't be pleasant unless we're very good at shaping them to play nicely. Maybe if they're still human-level like us, it could be argued that an outcome exactly as bad as extinction is unlikely, but in general I'm not comfortable with the claim that the most world-changing invention ever is risk-free, and I don't spend much time on the idea of it fizzling out at exactly human level.
Our genetics and minds changed a bit from our great ape ancestors, and in an evolutionary blink of an eye we suddenly got comfortable indoor living, enough food for everyone, safety from wild animals, dominance over any land we want as a society, caused many extinctions, planetary communication, satellites, nuclear weapons, people on the moon, etc. Human intelligence is a superpower, but also: all of these things appeared shortly after we evolved to be smart enough to do them. We're the dumbest possible species to create modern civilization! We haven't had enough time on an evolutionary scale to evolve to be any smarter than that. Unlike AI development, evolution took hundreds of thousands of years to make steps of progress on intelligence.
If we establish that p(doom) is greater than some low amount like 0.1%, then I don't think it's worth quibbling much further over the exact probability before acknowledging that the expected-value calculation of the impact still gets a result significant enough that society should take heed. We can do things like put regulations on the top AI development companies to make sure they're putting effort toward making AI be used for and do what we want as a society, and encourage/incentivize/require AI researchers to study techniques for AI safety and alignment. All of these strategies have the dual purpose of also helping with non-existential risks from less-than-human-level AI. There's no conflict between caring about these different concerns.
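As a toy version of that expected-value argument (the population figure is a rough assumption, and "doom" is flattened here to an extinction-sized loss of life):

```python
# Even a 0.1% probability of an extinction-sized outcome carries an
# expected cost on the order of millions of lives.
p_doom = 0.001                      # the 0.1% floor from the comment
world_population = 8_000_000_000    # rough 2023 figure (assumption)

expected_lives_lost = p_doom * world_population
print(f"{expected_lives_lost:,.0f}")  # 8,000,000
```

This is why the exact probability matters less than the threshold: almost any nonzero floor already justifies ordinary safety regulation.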