What common misunderstanding do you want to clear up?
A debunked myth, a frequently misused word, a lie that seemingly everyone believes…
What’s a common misunderstanding, and what should people really know instead?
That was quite interesting about the exercises! I knew the difference between the two things but not how much actually goes into the exercises. Logistics are a crazy thing.
And yip, don't believe everything you read! Because as soon as you read articles and hear news about subjects you actually are an expert on, you realize how much most journalists get wrong. Most news is very surface level, and it's sometimes to the point of being either so simplified that it's wrong, or straight-up misinformation/misunderstanding on the journalist's part.
I think we all have a lens through which we read articles that are about subjects we know a lot about. At least for me, my reaction is often along the lines of "kind of yes, but not really" or like "well that's misleading" or similar. So I try to remember to apply that lens to all other news I read - that there's a lot of accidental misinformation or details that are misleading and the particular media's slant and biases etc. etc. That I'm at best getting surface level knowledge of whatever thing I'm reading.
So that feeling we have when reading about our own areas should always apply to everything else we read.
So.... they're just inviting friends over to paint their Warhammer minis together? Maybe play a round of MTG or two to try out the new decks?
Question. If Canada suddenly finds itself maybe not so friendly with ~~our downstairs meth lab neighbour~~ another country: since these things are planned years and years into the future, would we still go but kinda be careful and take data/hold back some, or (hopefully) are the military forces beyond petty changes of office and still very friendly together?
Ah yes, the classic quick family-friendly game of Campaign for North Africa.
Huh. I've always understood popular references to wargames to mean planned military exercises, but I can't recall seeing any alarm attached to the term in the media. Maybe I'm out of the loop.
Is this an example? Apparently, due to an actual(?) wargame (a simulation, not exercises) modelling China vs. the US in January, this was harder to find than I thought. Maybe journalists updated their best practices /optimism
What to know about China's war games around Taiwan over the years
Thanks! That last link is what was confounding a lot of my searching.
But I know what War Games really are ;)
Your knowledge is much appreciated, thanks for sharing!
I suppose, in hindsight, I should have realized military exercises don't happen quickly. It's crazy to think they're started 1 or 2 presidents ago and the current one gets blamed for them.
Thanks for the info! Glad to have another piece of info to see through the media's smoke and mirrors. Wish we had reporters we could trust to point this stuff out. :(
The big one for me is what I studied (and continue to study). Ever since deciding on the major I've been occasionally confronted with folks who think Philosophy is about examining one's navel in an armchair, when the experience of actually studying it has been a sort of constant reminder it was good to do.
Fair warning I'm gonna ramble a bit - it's because a lot of what you do with this material can't really be boiled down into a simple, generally-accepted-as-useful outcome. I can't give you a bullet point list of exactly why Philosophy is worth the time, because in the abstract it can do many things that serve as transitional steps to more things.
Philosophy isn't about asking stoner questions, delivering self help, or providing complete/"correct" worldviews. It has also moved on from Socrates, Descartes, 19th century Germany, and postmodernism. The contemporary, rigorous stuff is more about bending concepts, processing information, and considering different modes of thinking. You're building and applying lenses of analysis for a variety of purposes, some of which might just come to you along the way. You operate at the intellectual frontiers of things and the work helps you be the explorer out in that wilderness. "Wilderness" is descriptive of much more than you might think.
I'm oversimplifying a bit because I'd like to spare folks a laborious read - it ain't the most exciting shit let me tell you. But, it's been a consistent experience of mine that folks just do not have a single idea what you do when you study that kind of material, usually because they have not actually seen what contemporary material does. They seem to believe a freshman level course is all that philosophy is, and plenty of very accomplished people have this sort of misconception fueling their derisive commentary on it. You're not in it to figure out the mysteries of existence, tear apart belief systems, uncover personal truths and win arguments. Those things can happen, they do happen, but that's not why the material was written, I will bet you the $2 in my wallet on it. The material has a purpose, something is being figured out and you are reading the figuring.
Folks producing earnest philosophical writing are conducting an inquiry - they're trying to understand something and (usually) spell out clearly what their aims are/why they're doing it. It's part and parcel of being taken seriously when you have other people read what you wrote. Applying logic and rigor to one's own ideas helps to make them communicable, iron out what makes less sense and throw out what's not worth anybody's time. You pick things and then pick them apart, to see what they are (or aren't), just like you might take apart electronics to figure out how they work. When you do that a bunch, you come out with a process/a method, and if you're really good, other folks will take your method and elaborate upon it. If you get to be really good you might provide what's needed to start a new field of inquiry, it happens from time to time.
To be a little more practical: Some stuff has to be thought about before you can devise worthwhile experimentation and organize resources. There are sometimes boundaries and problems that make other forms of inquiry unable to go very far (or, not as far as folks think they can go). Depending what you're engaging with there may not be settled language yet for talking about things - philosophy is in part the process of putting some words to stuff, of making good words that get much across. Building concepts and frameworks of analysis that can enable other folks to communicate what they can see matters, because being able to talk about things is foundational to doing much else with those things. We can help make wheels so folks don't have to reinvent them every time they want to do things.
On a more personal level, you're taking, making, and adjusting intellectual tools, with your own sort of experimentation when you try to deploy those tools and find where they come up short. You have to be prepared to interrogate your own notions, revise and change your own understanding, alter and abandon core assumptions (temporarily or permanently). Philosophy is critical thinking, it's taking critical thinking and developing it like you would your muscles when you're the student. To develop your muscles effectively you need to have things like a good exercise routine and diet - so too with thinking critically. You have to engage with stuff that's hard to think about, lift an ever increasing weight over and over to get the muscle bigger, and when you're ready/if you're lucky you can do something powerful with it. As an example, "Phenomenological Psychopathology" is one of those $10 phrases that ends up being bedrock to all kinds of analysis, experimentation, therapy, etc. You get to a phrase like that and all it entails by doing the philosophical inquiry into the experiences of others and putting together what folks in the past understood. With the words in hand it becomes much easier to collect and process what's relevant, useful, etc. The tools of the trade (logic, debate, analysis, so on) are what help you to make those kinds of determinations.
Philosophy is also a thing people do outside the confines of the western ivory tower. You can, for example, read a book like Atuolu Omalu and with time understand in a more precise way just how differently other peoples of the world see things, how they got there, and so on. Maybe you can do something with that nobody in your culture really thought of before. Maybe you can enhance your understanding and have an easier time talking to people. The use value isn't exactly the point but it often becomes a happy bonus after hours of feeling like you're on the verge of a migraine. The further you go the more uncertain you will likely become, but when life throws you curveballs you might find you've got an exceptional ability to catch them precisely because you know how vague and uncertain a whole bunch of shit really is. More uncertainty isn't scary when you swim in it continuously. Being wrong doesn't matter a whole lot when you know how impossible being right sometimes is.
I'll stop here but I hope that all makes some sense. It's hard not to talk in terms of use value, especially because criticism often hinges on being unable to immediately tell what the use value of such material is.
Maybe the best way to sum it all up is by calling philosophy a form of "brain exercise" - a way to stretch and build strength in your intellect, so that you can apply your intellect and figure shit out well. A strong mind paired with a healthy body will get you to (and through) all kinds of stuff. I may not be able to say exactly what, but just like physical exercise, you do it to be stronger and as time passes that strength finds its useful moments. What stops some won't stop you. What confuses and terrifies some, you'll be able to handle and understand. Uncertainty and a lack of clarity won't be the obstacles they once were.
Are there any resources you'd recommend for someone interested in learning more about modern philosophy?
The SEP (Stanford Encyclopedia of Philosophy) has been a go-to for me for a very long time. It may not help with figuring out what is most current on its own, but as a reference it's a huge help for finding who thought about what/the lineage of things. You could take a broad term, like "postmodernism" or "metamodernism", check out what is in the reference material, and from there find authors/researchers who are toward the frontiers of those schools of thought. Same with areas of specialization, like epistemology, phenomenology, ethics of various kinds, etc.
To be more practical: Say you went to the SEP and figured out Epistemology is an area of interest. You can almost always find an anthology of something like that. Grab one with a recent publication date and see what you find, follow the references/footnotes. Just about any -ology or -ism will have an anthology/journal, it isn't that different from a scientific discipline in that respect. That route can take you through some real dense reading, just as a heads up. But if you're already using a reference like the SEP, you can quickly get up to speed on the lineage of stuff and have a workable idea of what the author is doing. You can quickly find similar material in different cultural contexts by slipping in some cultural words - I have a few books on "Islamic epistemology", "Africana Critical Theory", "The Philosophy of Jainism", so on and so forth.
On the note of anthologies, you can find a "philosophy of" for practically any broad discipline. Pick something you like and see if you can find a "philosophy of" of it. Routledge, for example, has a shitload of books like that which you can use to get to what's most current. Avoid any that focus on pop culture topics - no shade at folks making a living but if you want to really go deep those just aren't as connected to where serious work happens.
This is totally tangential, and feel free not to answer, but I'm working on a(n amateur) piece on the terminology of "modernity", and was wondering if you could expand precisely what you mean by saying that philosophy has "moved on from [...] postmodernism".
I'm happy to expound on my slight perplexity if you'd like, but also don't want to bias your response off the bat.
In the context of the post (in a perhaps clunky and colloquial way) I was meaning that philosophy has more to it than just those different topics, that it is a thing which constantly develops. People took "postmodernism" and elaborated upon it just like any other thing. "Metamodernism" is one consequent example, with its own sets of ideas/frameworks, and with time that too will go in various directions and generate more -isms. Probably, that's already happening and it's a matter of finding the niche.
Phrasing it that way implies a linear progression, which is another oversimplification. Ideas evolve in many directions, and get mediated by their cultural/political context. There are winners and losers, in the sense some get picked up for a while and others are left with little attention, sometimes without regard for things like use value or clarity. My point was to draw attention to the idea that philosophy writ large operates this way, to challenge notions of the field being contained to some dead authors or a single school of thought. Hope that answers your question!
It does, thanks! I hadn't read your statement as a linear progression, but also didn't want to quite assume any particular logic otherwise. Appreciate the time.
China's infamous social credit system doesn't really exist (except it does(except not in the way people in the West generally think it does(except in a few areas(kinda(for a few years))))).
Thanks for clearing that up.
A lot of folks who grew up in countries where the government has to follow rules and be transparent don't understand this. It's closer to "because I said so / Don't you dare talk back / You should have known" parenting, where laws and facts can only be used against you and never against the ruler, nor any temporary favourites the ruler deems okay to tolerate.
Tell us more please!
Tor, is that you? Lol
https://m.youtube.com/watch?v=wYaKoDyIvWA&pp=0gcJCfsJAYcqIYzv
This reminds me of a video of someone going around China on the anniversary of Tiananmen Square asking people "do you know what day today is". The people immediately seem to become confused or just walk away. The fear is palpable
Nah, they most likely actually don’t know. When I talk to the (young) people I know in mainland China, in terms of history they generally don’t know about anything Deng Xiaoping did wrong. Mao gets more of a critical eye, in contrast.
You don't feel nauseous, you feel nauseated. If you're "nauseous," you're making others feel nauseated with your disgusting grammar 🤢
Just kidding. Like pretty much everyone else, I've been making that mistake all my life. It used to drive my ex crazy though, so I made the switch and now I can show off how sophisticated I am when I'm about to puke.
Worth noting that it's not a mistake once it becomes how the word is generally used, which has long since happened for "nauseous" -- the definition that equates to "nauseated" is present in major English-language dictionaries, with the other definition being considered less common and more formal. Language change happens this way all the time, and it cannot be stopped by pedants. People are not wrong or less intelligent for using language naturally rather than following made-up rules that arbitrarily declare certain word usages and grammatical structures that are naturally part of the language to be "wrong" and a sign of unintelligence.
"Nauseous" is actually attested as being used for the now-obsolete meaning of "inclined to nausea, easily made queasy" earlier than it's attested as being used to mean "causing nausea,". Were people wrong for beginning to use "nauseous" to mean "causing nausea" back then? When did they become correct? Who decides when the way normal people regularly use a word is "right" and when it's "wrong", and how might their decisions be influenced by arbitrary bias towards the things they're familiar with rather than anything remotely approaching "objective truth"?
The people who insist that the way common words in everyday language are used should be policed like this rarely, if ever, think about these things. And that's before you even get to them literally just making up grammar rules that are not a thing in the English language (not splitting infinitives and not ending sentences with prepositions being two prominent examples that spring to mind) and enforcing them for no reason other than as a purity test to filter out the "uneducated." It's all fun and games with little trivialities like "nauseous" vs "nauseated", but the same mindset and behaviors regularly result in actual discrimination and horrific mistreatment around the world of speakers of languages and dialects that are just as rich and complex as those spoken by the people who make these "rules," but that happen to have less societal prestige.
People who insist that you're using language "wrong" for "mistakes" like this are overwhelmingly assholes (and I say this as a reformed "grammar nazi" myself), and they're also just straight-up wrong and silly from the perspective of those who actually study language from a scientific perspective.
You don't need to think a certain usage of a word is objectively correct in order to merely suggest people use it a certain way. I'm no hardcore prescriptivist myself, or anything, but I think a lot of dyed-in-the-wool descriptivists lose sight of the social, aesthetic, and most importantly practical implications of language use. Moreover, people tend to view prescriptive and descriptive attitudes as polar opposites, but really they are orthogonal - they have somewhat unrelated sets of goals, and in fact can often be used to enrich each other.
Of course, there are plenty of prescriptivists who are in it for the joy of being pedantic, or worse, to try to enforce linguistic biases which disadvantage minorities. But there are plenty of morally neutral or even morally positive uses for prescriptivism. I think most linguists are in favor of revitalizing endangered languages, but if you think about it, this is technically a prescriptivist project - a normative stance on the way language should be used. In a way, a lot of more conventionally prescriptivist ideas can be thought of as an attempt to revitalize a slightly outdated and fussy form of, say, English.
I'm sympathetic to the 'nauseous'/'nauseated' distinction (and in fact I myself have posted about it on Tildes before) for a combination of aesthetic and practical reasons. After all, if we use the two words to mean essentially the same thing, with more or less identical etymologies, why bother to have two words at all? Wouldn't it be more elegant to observe a subtle distinction in the two words' meanings? Of course, I would never use this opinion as an excuse to browbeat or discriminate against someone. But I don't think it's merely a matter of "fun and games" (as you put it) either.
I think the more fundamental question here is whether certain usages of words can truly be seen as practically superior. The whole descriptivist ethos fails to really grapple with this question. Claiming 'language change is natural, therefore a new usage of a particular word is okay' is essentially a naturalistic fallacy. In a way, linguistic descriptivists taking a normative stance against linguistic prescriptivism is kind of like cellular microbiologists taking a normative stance against antibiotics. The desire to catalog and understand different types of bacteria should not be mistaken as justification for letting all those bacteria exist wherever they are naturally wont to.
And there are in fact many places where snooty, frequently-ignored grammatical rules actually serve a clear function that seems to improve the practical quality of language. For example, rules about ambiguous placement of adverbs - consider the sentence 'people who eat this mushroom often get sick'. The placement of 'often' here could modify either 'eat' (i.e. 'you will get sick if you eat this mushroom often, but you can eat it a few times without issue') or it could modify 'get sick' (i.e. 'it is often the case that people get sick when eating this mushroom').
Now, obviously a conversation about potentially poisonous mushrooms will probably entail more clarifying sentences, but I think there's an obvious utility to certain grammar rules that descriptivism frequently fails to acknowledge. In the descriptivist mindset, words and grammar that emerge 'in the wild' are practical by definition, because people empirically find it effective to use them. Often a parallel is drawn with biological evolution. But we often forget that biological evolution is not optimal - it's often 'good enough' (like the way the recurrent laryngeal nerve takes a big, pointless detour around the aorta, or more importantly how malaria resistance genes can cause sickle cell anemia). And likewise, linguistic evolution is often shaped by compromises, aimless drifting, and (gasp) human laziness. To think that this couldn't be improved upon is a bit shortsighted.
I'm not leaving this comment simply for the sake of being contrary. In fact, I think prescriptivist ideas are wrong like 75% of the time, and there's lots of new, slangy developments in language that fill very useful niches. But the radical embrace of linguistic descriptivism in certain academic or social justice circles is kind of an overcorrection, and often falls prey to the same inflexible puritanism that prescriptivism also historically has.
I think most of your comment is well-written even though I disagree strongly with your points, but claiming that "radical descriptivism" is even capable of being remotely as "puritanical" as prescriptivism is utterly absurd, given the amount of real-world harm -- including actual genocide -- that has come alongside language policy based on prescriptive ideas about language. The end result of believing that there should be a push towards "optimal" language, or that the way certain native speakers use language is wrong, isn't just stupid arguments online; it's an incredible amount of real-world harm. I wish you'd addressed this in your comment.
And the idea that a descriptivist approach to language necessitates an utter lack of care towards elements of style pervades the rest of your arguments, when that is not and has never been what's being argued. The idea that it can make your writing clearer to choose certain words or pay attention to the placement of certain modifiers is completely orthogonal to whether doing things differently than that is "wrong", and people who insist that these things are universal "rules" of the English language rather than simply stylistic choices that a writer should pay attention to are not only failing to comprehend that context is deeply important when it comes to how one uses and interprets language, but they are also perpetuating ideas that are used to commit significant actual real-world discrimination. Their sense of intellectual superiority for speaking and writing "correctly" isn't neutral when it's used to punish and demean people for speaking AAVE, insisting on the unintelligence of those who can effortlessly code switch between two grammatically rich dialects of English because one of them is "wrong."
The idea that some languages and dialects are better than others, and that those who fail to meet an arbitrary, unnaturalistic standard (one that more often than not does not improve clarity or decrease ambiguity) are inferior or unintelligent, is disgusting, harmful, and has no foundation in any remotely scientific study of human language; rejecting it is not "radical." There's certainly no scientific basis to believe that language evolution, even if it is accounted for by "laziness" (it generally isn't), is producing language that is worse in any objective way, and there's absolutely plenty of evidence that attempts to "improve" language to make it more "optimal" through resistance to natural language change are utterly ineffective at best and genocidal at worst. If neither these ideas' foundation in the scientific study of language nor empathy for your fellow humans can convince you that rejecting them isn't too "radical" or "puritanical", I can't help you, because if you come to this topic refusing to change your mind based on either of those perspectives, I fundamentally cannot relate to you.
I tried to make it clear that I condemn discriminatory language policies, but maybe I wasn't explicit enough about the scope of the harm they cause. So to start with, I want to emphasize my agreement on that point. Perhaps it is naive of me to think that we could discuss the philosophy of prescriptivism separately from language policies that use prescriptivism as a cover for racism. But I think it is worth the effort to distinguish those two things - it's similar to the difference between eugenics (a downright evil ideological system) and gene therapy (an incredible, life-saving scientific project), which both hinge on the same (essentially prescriptive) idea that it is possible to improve people's genes.
To wit, you claim that the idea that descriptivism "necessitates an utter lack of care towards elements of style" is "not and has never been what's being argued" - but this is exactly what I am arguing, and doubtlessly what other (non-racist, philosophical) prescriptivists would argue. The mindset of there being 'a correct way to use language' is essential to actually using language effectively. Let's not forget that language is really goddamn hard. I've revised these very paragraphs numerous times and they are still not as clear as I want them to be. It is only through the mechanical adherence to what I consider the norms of 'correct' language use that I stand even a remote chance of communicating well.
I want to emphasize my use of the term 'norms' (and not 'rules'). There is nothing morally inviolable about norms, they are just a convention that gets people on the same page. The descriptivist stance here is that such linguistic norms are emergent features among language users. Now, certainly some norms are emergent, but if you want to take that generalization to its logical conclusion, you'd have to be opposed to native language classes (e.g. English classes for native English speakers). After all, what is a native language class if not a prescriptive handing-down of norms? But the unfortunate reality is that many people need these classes. Not everyone is gifted enough to intuitively figure out the most effective way to use language. And for those who are gifted, in many cases it is better to be instructed on norms so that they can be broken well.
And if I am drifting dangerously close to elitism here, let me acknowledge that for someone whose native language is AAVE (for instance), taking a standard English class is sort of like taking a second language, and we absolutely ought to be mindful of how challenging this is, and how the potential for discrimination exists in this dynamic. A socially conscious solution might be to offer native AAVE instruction, and to allow curricula to be tailored to each student's intellectual and cultural needs. But let me also point out that AAVE language classes would also be fundamentally prescriptive. Ultimately, social justice concerns do not really impact the philosophical argument at play here.
Separately, I take issue with you characterizing prescriptivism as the "scientifically invalid idea that some languages and dialects are better than others" and claiming that it mandates "arbitrary, unnaturalistic standards". Certainly some prescriptivists support these ideas, but you can also prescribe norms which are respectful to their corresponding speech community, and which do not make value judgments about other languages and dialects. In fact, much of our current discussion is not so much about the validity of prescriptivism, but the definition of it, which is itself a prescriptivist argument. And this actually touches on an important social function of the prescriptivist attitude. You claim that "attempts to improve language through resistance to natural language change are utterly ineffective at best", but again, this presumes that prescriptivist ideas must come from a formal authority. In fact, prescriptivism is equally common as a grassroots, person-to-person phenomenon, with norms that are adopted by many voluntarily and propagated by the social pressure of individuals.
I can think of no better example of this than the usage of the word 'gay'. When I was in middle school (from the 90's to early 2000's), 'gay' was commonly used as a general-purpose insult. No doubt this usage originated in homophobic hate speech, but due to evolutionary language shifts, in many cases the meaning came to be completely divorced from the topic of sexuality (e.g. 'my math teacher gave me extra homework for being late to class' - response: 'that's gay'). But in the late 2000's, there was a general push towards correcting the use of this word. It was common to tell someone "hey, you shouldn't use 'gay' like that" - which is about as prescriptive a statement as you can get. I remember distinctly being on the receiving end (and later on the giving end) of this statement. It did not come from a central authority. And importantly, it was not necessarily a comment on the content or sentiment that the word 'gay' was being used to express - most people who used 'gay' as an insult were not actually homophobic, they were just using language wrong.
In our current year, where social cohesion seems to be reaching a breaking point, the impulse to find common language norms is all the more urgent. One norm I am personally trying to prescribe surrounds the use of the word 'fascism'. Too often, arguments about whether Trump is a fascist are waylaid by a failure to find a common-ground definition of the word. Many people (especially Republicans) hold a concept more akin to "Hollywood fascism" - jackbooted Nazis marching through streets, relentless state-sponsored murder of Jews, and a complete conversion of all social order to the fascist regime. I spend a lot of time and effort trying to explain to these people what fascism is, or really, why it's important we hold ourselves to a standard of using the word 'fascism' correctly.
Honestly I think the principal issue that leads to our different perspectives here is this line. I don't believe this is true, and I don't think you've really substantiated it sufficiently here.
I am perfectly capable of using language effectively without believing that there is an objectively correct way to use language, and my belief that there isn't an objectively correct way to use language (aside from "how we observe people to be using it," ofc) does not hinder me from making suggestions on how something could be expressed more clearly in a certain context. I am perfectly capable of criticizing someone's writing and so is every other linguist I've worked with. Descriptivism does not render the concept of adhering to certain norms in certain contexts moot, nor does it prevent criticism on elements of style. It is very possible to criticize someone's use of language in a certain context based on any number of factors without the foundation for that criticism being "there is an objectively correct way to use language and you're doing it wrong". Descriptivists are capable of distinguishing between different grounds on which to criticize language use. What I was trying to say in the quoted portion of my previous comment was that descriptivists have never been arguing to throw out the concept of good writing and editing for style and that the accusations to this effect from prescriptivists are a seriously misguided straw man that does not accurately reflect what descriptivism is and betrays a serious refusal to actually engage with the science of language.
Moreover, I simply don't think your example is a good one. I grew up during the "gay being used as an insult" era, and setting aside the fact that using the word this way was definitely actually homophobic, not just etymologically related to homophobia in some way, it wasn't "using language wrong" in the sense that it was violating the rules of the English language. Comprehensive dictionaries include this pejorative use of "gay," in fact, because by being used that way it became part of the word's meaning just as naturally as its more neutral-to-positive use to describe homosexuality and queer attraction more generally. This meaning of "gay" is no less "correct" than its others. The reason people tell others not to use "gay" as a pejorative or not to use slurs has absolutely nothing to do with whether doing so is "correct" according to the grammar of the English language -- linguists study how slurs are used in various languages precisely because the way people use them is part of a language's grammar! People tell others not to use words like this because it is socially harmful, not because it's incorrect use of English. It is perfectly possible to criticize language use for being morally wrong, as is happening here, without insisting that it is violating the rules of English (which it is not).
I am queer, and I have friends who use neopronouns, fwiw, so I'm absolutely not of the opinion that one cannot consciously use language differently to express certain ideas. Those things are part of the most interesting parts of how humans use language, and sometimes they lead to long-lasting change while other times they don't. But the prescriptivist bent of pushing back against language change is consistently reactionary and conservative. In most cases they simply serve as pointless nuisances, tilting at windmills about language changes that have long since taken root and which they have no hope of reverting (as in the case of "nauseous"), but in others they are actively harmful, as with opposition to use of singular they -- both a great example of prescriptivists arbitrarily insisting a construction that has been in use in English for centuries is "wrong" and coming up with strictly inferior replacements like "he or she", and an example of people using "grammatical incorrectness" as a thin veneer over opposition to the social change represented by a change in language use (because use of they/them for specific people rather than just for when an individual's gender is unknown or unspecified has actually represented a change in use -- and arguably an intentional one!).
The "bits of prescriptivism" you seem to want to hang onto don't actually require you to hang onto the prescriptivist foundation that there is an objectively correct way to use language. You can criticize how people use the word "fascism" without insisting that the way others are using it is an objectively incorrect use of the word. In fact, I think it would be a far stronger argument to rely not on some abstract notion of "correct use of language" but rather to focus on the more concrete reasons why you think it should be used or defined in certain ways. It is far better to argue why using the word in a watered-down way is harmful, not that it's violating some objective "definition" -- especially for a word that's been long-recognized as hard to strictly define (after all, "trying to define 'fascism' is like trying to nail jelly to the wall.")
You might be familiar already, but I learned a lot from the book The Anatomy of Fascism.
Good recommendation, I actually just finished reading it for a political book club I'm part of.
I appreciate what you're saying and all of the layers of bigotry that can come out of grammar judgment. I do want to give one example where "language marches on" leads to the loss of a beautiful bit of linguistics:
The plural of "clitoris".
As a dorky university student, my roommate and I looked up the plural in the dictionary and were thrilled to learn that the "correct" plural of clitoris is not "clitorises", but actually "clitorides" (which my autocorrect just "fixed" for me), I assume because the base word is Greek in origin. This is to me one of those glorious bits of linguistics that shed light on word origins and make you stop and appreciate where words come from. 20 years later, seeing that "clitorises" is now the preferred plural (probably because there's much more public discussion of them), I am genuinely sad to see that language is actively marching on, to the point where my autocorrect doesn't even recognize the "real" plural. Probably won't be appreciated by anyone but a nerd, but I do feel a little pang every time I hear someone use the colloquial plural.
yeah, analogical leveling does unfortunately often get rid of weird little quirky things like that over time... we must comfort ourselves with the fact that language change will make new weird little quirky things over time to replace them.
Thank you for the info! I know a little bit about Latin, but virtually nothing about Greek, and so I appreciate the explanation.
You may also be happy to know that the "correct" plural of "penis" is "penes"
You're right! That's fantastic!
I think it's interesting that in Spanish we have something called the RAE (Real Academia Española), which is an organization that decides what is "proper Spanish" and what is not.
It sounds terrible, as in why would we police language, but in my opinion it's actually pretty great.
I find it particularly useful when I have to write something academic. You have an easily accessible place where we can all agree about how things work.
Spanish is also very complex because of all the different cultures that speak it. I speak a variation that is basically only used in two cities and RAE, even though it's from Spain, includes everything we say as a correct variation.
They are also willing to evolve and as people start using new words or using words differently, they just add them, even if they are borrowed from other languages (looking at you, French).
They even have a doubts section which I find extremely helpful. It's not just about grammar or spelling, it's about punctuation too for example.
I teach English and I wish I could just check an English RAE to make sure I'm not making mistakes. I know I can check the big dictionaries, but they sometimes differ.
Not every Spanish speaker shares my opinion, but I think it's great and they do a very difficult job in a great way.
I'm not super familiar with the RAE's approach to Spanish so I can't directly comment on it. My principal exposure has been to language academies that are incredibly elitist and out-of-touch, like the Académie Française -- and a look at France's treatment of other local languages is a phenomenal demonstration of how harmful language policy can be. I'm very glad if it's the case that the RAE approaches things differently and takes into account dialectal variation and includes things that are in everyday usage by Spanish speakers despite being traditionally "incorrect." A linguistically-grounded approach such as this would be the only remotely acceptable way for a language academy to exist imo.
That said, even without a formal language academy English has an absolute bevy of absurd arbitrary rules that were literally made up in relatively recent history to serve as a sign of status and education... so I'm very glad there's no English language academy, as I'm confident it would be extremely regressive in this regard if it existed. We can see evidence of this in style guides, which attempt to be more or less the type of authoritative reference for what's "correct" that you describe -- I don't know any linguist who doesn't have some degree of distaste for Strunk & White. That said, there are style guides out there that set out their limitations in terms of what they can and should address about language use -- they're just a relatively new phenomenon.
If you can get access to a copy through a library, university, or other means (it's definitely more of an academic reference text than anything else and both physical and digital copies are accordingly expensive), I highly recommend checking out Huddleston and Pullum's 2002 The Cambridge Grammar of the English Language. I doubt it would really serve as a practical way to look up whether something is allowed in English while writing or teaching, but it is a very thorough explanation of the actual rules of English grammar explored from a linguistic perspective, and it includes examples and explanations even of relatively niche structures in English (I cited it extensively in my bachelor's thesis on right dislocation, and right dislocation is not a particularly common or thoroughly-discussed sentence structure in English!) As far as I remember the text is pretty readable despite it being targeted at a more academic audience, but there's also a Student's Introduction to English Grammar that's based on the same text afaik.
r/grammar's list of resources actually has some good recommendations -- I haven't used the style-guides they cite, but since they recommended the same book as I did above and include a link that explains the issues with Strunk & White, I feel confident that their other recommendations are solid, and style guides seem like what you might be after based on how I understood your comment.
Thank you so much for your reply, and for all the resources. I'll definitely check them out. I understand what you mean by saying that you're glad a language academy doesn't exist for English, but I think RAE is proof that it can be done well.
I think they were saying that given America and Britain's respective histories and presents, neither would be done well even if it's possible.
We have the same thing in French! The Académie Française, and it's basically a bunch of old elitist conservative white men that call themselves "les immortels" (I'm not even kidding) and none of them is a linguist. Their goal is to "contribute to the improvement and promotion of literature" so they spend most of their time banning words borrowed from other languages (mainly English), and substitute them with ridiculous alternatives. They're mocked by everyone, except other conservatives.
You are literally on fire with this comment!
I say that as a mostly reformed prescriptivist.
Similarly, "poisonous" vs "venomous".
as the song says, ♪♪ your lips are venomous poison ♪♪
With the context of words, does “nauseated” have a common root with “nautical”? Like, is “seasick” just saying the same word twice?
If I follow a bunch of etymology explanations, the word nausea (nautia/naucia in Greek) originally meant specifically seasickness, so yes, but also no, it's not being redundant; it's more that we decided to use "nausea" for every type of land/vehicle sickness too (and also the feeling of disgust) as late as when it hopped to Latin.
Others have already given you the answer here, but for future reference, when you have questions like this about English words, this online etymology dictionary is free, searchable, and based on high-quality sources.
also wiktionary is maybe slightly less trustworthy but a bit more expansive (also covers modern slang)
Yeah, wiktionary is a good enough resource (especially for very well-documented languages like English) when you can't find something more reliable. Its results should definitely still be taken with a bit of skepticism though, especially for languages with fewer speakers and less accessible documentation.
The Byzantine empire isn't the continuation of the Roman empire, or a successor state of the Roman empire. It was the Roman empire, centred on the capital it had had since Constantine (Constantinople), speaking a language it had spoken since the Late Republic (Greek, which they called Romaic), with the same religion that had been growing in the state from the time of Augustus all the way to Diocletian, before Constantine (Christianity).
It got caught up in European renaissance propaganda (you can't revive Roman civilisation when the Roman state is still right there), and is today caught up in post Ottoman national myths.
You have piqued my interest; what role does it play in post-Ottoman myths?
Both Turkey and Greece define their national awakening in opposition to each other. Turkey defined Anatolia as the homeland of the Turks and became hostile to any other ethnicities that call Anatolia their home. This became a problem for the Greeks, Armenians, and Kurds that had been living there for centuries. For Turkey, Roman history is inconvenient because it gets in the way of Turkish history.
Greece defined its national awakening as free Greeks casting off the shackles of being enslaved Romans under the Turkish yoke, and therefore reinterpreted the Roman (Byzantine) empire as a continuation of a single unbroken Greek civilisation. The fact that Greek speakers six centuries ago were talking about Augustus and Aeneas, and proudly calling themselves Romans, was inconvenient. Added to that, it also tried as best it could to reject the undeniable Ottoman influences that five centuries of being part of the empire gave it. Classical Greece had been dead for centuries, and it became important to revive it, both as part of the ethnogenesis of the modern Greek people, and to get European help.
Roman civilisation doesn't have a place in the two halves of its former heartland. There is no modern nation state that defends it, so therefore, it's forgotten.
Going down that route, there are still people today that call themselves Romans, Ρωμιοί, in Istanbul. The identity isn't quite dead. But I can't speak to how much they consciously tied themselves to the Roman Republic (politeia/πολιτεία) the way medieval eastern Romans did.
And they'll die within our generation. Turkey isn't kind to them, and when they move to Greece, their children are Greeks.
My History of the Byzantine Empire class, which I took randomly in college, was one of my favorite classes and my absolute favorite history class ever, even if I had no right to be in a history class cross-listed for grad students as a bio-turned-psych major.
It was basically all things I'd never learned before because "Western Civ" jumps from Rome to William the Conqueror.
...help me reconcile 'isn't the continuation of the roman empire' with 'was the roman empire'; i can't quite parse what you're saying...
Constantine moved the capital of the Roman Empire to Byzantium; there's a direct line of continuity. For a modern example, Brazil moved its capital from Rio de Janeiro to Brasilia in 1960 and it clearly stayed the same country.
For an example, I think generally people would accept that “the Russian Federation is a continuation of the USSR, or it followed after the USSR, but you can’t really say that it is still the USSR because too much had changed”
I believe OP’s point is that you can’t really say the Byzantine Empire had changed enough from the Roman Empire to call it a different thing, it’s still the same thing so why are you giving it a different name and pretending there’s some substantially relevant difference between them?
I suspect that there are important differences due to it being many centuries of history, but there is no clear break. One history book I was reading just said "okay, from this point we will start calling it the Byzantine Empire," but that's not what they called themselves.
People continued thinking of themselves as Romans for a while in the West, too, after central control and trade broke down. Since the Roman Empire had had civil wars before, it wasn't obvious that it wouldn't come back again this time.
I believe even some "barbarians" considered themselves Roman. In some cases they became Roman emperors.
I'd have phrased it as "Isn't the descendant of the Roman Empire", because it still is the Roman Empire (albeit "Soon with Less Rome!")
...yes, i'd always understood the byzantine empire to be a direct continuation of the eastern roman empire and not a successor state; the grandparent statement kind of threw me for a loop because it seemed to be simultaneously denying and validating my understanding...
I think it was just a phrasing thing
Saying that the Byzantine Empire is the continuation of the Roman Empire implies that they're two different things that continue from each other. It's like saying... I don't know, that the 21st Century American Federal Republic is the continuation of the 19th Century United States of America. It's a nonsensical statement that implies a difference that does not exist. And you had to invent a new term that no American uses for their country to boot.
I think it's wild how many distinct groups of people tried to cosplay as "Romans". There's Charlemagne, and the Holy Roman Emperors explicitly had popes crown them as Roman Emperors. Both Kaiser, from the German/Prussian Empire, and Czar, from the Russian Empire, are based on the term Caesar. Even modern posh British people still learn Latin, or more specifically memorize particular Latin phrases, to play on this trope. Hell, I took 4 years of Latin in high school, but that's just because my parents are hyper-conservative Catholics. Anyway, I think that Carthage must be destroyed.
There is one crucial difference between the HRE, Russia, the Church, etc. Only one empire ruled over a group of people whose only identity was "Roman", and that was the Roman empire in Constantinople.
Other people took the title of Roman emperor, because of the prestige. But no one in Moscow or Vienna thought of themselves as a Roman citizen of Roman ethnic origin living in Romanía. People living in Constantinople identified exactly as that.
Oh yeah, the "Byzantines" were as Roman as you could be, but I disagree that Roman was ever an ethnic group. I also agree with all the stuff after roughly 1000CE. Roman was always a multi-ethnic concept, especially by late antiquity. Most Gauls had been Roman for centuries. There were ethnically Frankish Roman Emperors; a lot of Roman Emperors weren't even from Italy let alone Rome itself. I just get grumpy when people act like all of Roman culture and peoples was deleted from Europe after the "Fall" of the Western Roman Empire.
The Romans of the east absolutely were an ethnic group, and talked of themselves as such, constantly. The Roman empire of the mediaeval era was not multi ethnic in any real sense. During the period of Macedonian expansion, the conquered Armenian, Georgian and Arabs were integrated into the system (to an extent), but they were never Roman, unless they romanised. Indeed, there were critics of the expansion precisely because some people didn't want to make the empire less ethnically Roman.
The sources are chock full of people saying so and so was half Bulgarian and half Roman, or how ethnic Romans were being born in captivity in the Persian lands. They would've been extremely confused if you had told them that a Pecheneg with Roman citizenship was actually Roman.
Being a Roman wasn't an ethnic label in 10 AD, but it certainly was in 1000 AD.
Yes, Roman became an ethnic group in the Eastern half eventually. My main counterpoints are, firstly, that the shift from Roman-as-citizenship/cultural-symbol to Roman-as-ethnicity came quite far into the split. Second, Roman identity continued to be fairly inclusive of new groups relative to other contemporary cultures. There were Armenians and Georgians who did become ethnic Romans, in a way that didn't happen in many other contemporary cultures. Also, the Roman ethnicity was created from a multi-ethnic Roman citizenship body.
Oh, absolutely, there was a transition. The word Roman shifted in meaning through the long centuries, and ancient Romans would've been surprised by the medieval Romans, though probably not for the reasons we would assume!
Who says old jokes aren't funny? People have been cracking this non sequitur off for more than 2,000 years, and I still chuckle at it.
It's also ironic because Roman Carthage became one of the most important cities of the empire. Carthage falling to the Caliphate was a catastrophe for the Romans.
The Phoenicians didn't establish a colony there for no reason, I suppose.
I should do some learning about Carthage. It occurs to me that I don't know much about the region or the culture, aside from that it must be destroyed.
Edit: I remembered that I have watched/listened to the Carthage episode of the fantastic Fall of Civilizations podcast. It's good stuff, but keep a pot of coffee handy, because his narration can induce narcolepsy (in a good way).
I've actually tried really hard to read about Roman Carthage. It was a major Roman metropolitan centre for over seven centuries, there's a fuckton of history there. But even with access to University papers, there's not a lot of research. Carthaginian empire, conquered by the Romans, yada yada yada, rise of Islam.
You yada yada'd over the best part! What was that city like???
It kinda makes sense in the context of Roman culture though. Like any expansionist culture, they were jingoistic enough to neglect the things they didn't value too highly, and I'm sure by the time of the Empire they were convinced Carthage was a solved problem, especially since Cato and his war hawk buddies made such a stink about the city. They weren't generally the sort of culture to celebrate the losing side, except in the very specific and complicated subject of Greece.
Then again, they did have Troy to look to as a template. I'm sure lots of Roman patricians thought of Carthage as their own Troy, even before Virgil made the connection explicit. The Iliad we have today is pretty celebratory of the Trojans despite, or maybe because of, their loss and extinction. Maybe there was some cultural bias against acknowledging the continuance of Carthaginian culture after the conquest because the really good tragic losers have the decency to stop existing after they lose.
Roman Carthage was not Carthaginian. After the Third Punic War the old city of Carthage was well and truly destroyed and its populace entirely shipped off as slaves or killed or fled as refugees. Much later, Julius Caesar made a new Roman city kind of adjacent to where the old city was, and as a Roman city it was an important city in the region for commerce and government.
Post-rebuild, it was entirely of Roman design and settled by Romans. I don't think any tensions between Roman and Carthage mattered by then, it was many hundreds of years later, and had little to do with the ancient city.
But that's the thing, how Roman was Roman Carthage? Surely the surrounding hinterland was Punic. Two centuries later, Emperor Severus shows up from neighbouring Libya as a native Punic speaker, so the language wasn't suppressed in any way.
I have to imagine that if Roman Carthage was a Roman island in a Punic sea, the demographics of the city would've been changing as the seven long centuries rolled by.
But I can't find a lot of research about Roman Carthage. Demographics, material culture, etc.
...we frequently used to play the original avalon-hill civilisation board game, which includes a 'speech' phase in each turn: we house-ruled that all speeches must conclude 'and carthage must be destroyed'...
"Carthago Delenda Est" would actually be a pretty good name for a Mediterranean-flavored metal band, come to think of it.
In case anyone is curious and would like to learn more, I'd strongly recommend The History of Byzantium podcast by Robin Pierson. When I first got in to listening to podcasts, one of the initial podcasts I checked out was The History of Rome podcast by Mike Duncan (I was going to link to the website for this, but it looks like the hosting provider for that website went out of business last week), and once I finished it I was interested in continuing the narrative of Roman history. My only exposure to Byzantium was that it was a faction in Medieval 2 Total War that was fun to play occasionally, and I knew that the Roman Empire wasn't finished after Rome fell and continued to exist, but didn't know much about it.
The podcast has recently reached 1453 and the fall of Constantinople. Robin has indicated he will continue doing a brief overview of the Ottomans and covering some other Byzantine things for the foreseeable future.
The whole saga, from Duncan starting in 753 BCE to Pierson ending in 1453 CE, has been amazing. Over two thousand years of human history.
I completely agree. I would love to go back and listen to both podcasts again one day, and probably will. The only reason I haven't now that he's reached 1453 is that I have way too many podcasts I'd like to listen to right now as it is.
Disclaimer beforehand: I fully agree. The Byzantine Empire is the Roman Empire for the majority of its existence. What I am about to say is more out of general historical interest and because I like talking about it.
Things get a bit murkier around 1204, when Constantinople was lost in the Fourth Crusade and the areas that remained under Byzantine control split up into three successor states, each claiming to be the legitimate successor of the Byzantine Empire. Of those, the Empire of Nicaea managed to reconquer Constantinople over half a century later.
To be clear, I think their claim was pretty legitimate. But as far as empires go, that was pretty much the definitive beginning of the end, as after that the empire only continued to shrink and lose territory on both sides of the Bosporus.
1204 gets weird. I could be pushed to agree that what came after was a Roman empire, if not the Roman empire. However, I think the nature of Romanía as a pre-modern nation state obviates it a little.
If the French Republic fell tomorrow, I don't think anyone would argue that what came after was France. 1204 happened, but the empire that came to rule after the recovery of Constantinople was still led by ethnic Romans governing other ethnic Romans in Roman lands. It can't have been anything but Romanía (land of the Romans).
We're up to the fifth French republic, so the most likely successor would be the sixth :) German occupation during World War II did not end France, but it resulted in a new republic.
Nationalism changes things a bit. A people can think of themselves as the same people with a common history after a change of government. Sometimes this is partially mythical, but it still works.
Yes, France is a good example. You probably think of the French Kingdoms and the Fifth French Republic as France without much controversy.
The Roman Empire is even simpler. From the founding of the republic in 509 BCE to the sack of Constantinople in 1204 CE, there is one uninterrupted Roman government in place. There are interruptions on either end of that, but a Roman state ruling over Romans has existed from 753 BCE (we don't really know the exact date though) to 1453 CE.
“Uninterrupted” is doing a lot of work considering things like civil wars or even just a new emperor overthrowing a previous one. But perhaps fairly brief interruptions “don’t count” when there’s a continuation? (Similarly for World War II and France.)
Those were a normal part of the system. I think one thing that needs to be stated is that the title of Roman emperor is not a European noble title like Archduke of Austria or King of France. It was always, to the very end, a political office. The title is not Emperor of Rome, it is Emperor of the Romans. That office changing hands is no different from Hollande succeeding Sarkozy. It doesn't change anything about the French state. Overthrowing the emperor was a perfectly legitimate way of succeeding the throne, provided you were accepted by the army, the senate, and the people of Constantinople.
The Romans had civil wars for the throne. That was part of their system. Past the Crisis of the Third Century, this method worked reasonably well, but it completely broke down in the 14th century: the empire was by then so weak that having the army on your side no longer immediately ended the war, so civil wars became grinding, horrible wars of attrition where no one could win, which in turn invited foreign powers to step in.
The primary purpose of copyright was intended to be for the good of the public as a whole, not just the good of the copyright holder.
Per the US Constitution: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."
The goal is encouraging the creation of more things by making it worthwhile enough to invest in them - it was absolutely not intended to enforce some notion of natural ownership over the work. The idea of copyright as a collective good, with the interests of public balanced against a reasonable incentive to create new works, has almost entirely been lost.
The modern conversation almost always starts from an assumption that ideas are private property, something to be exclusively owned, bought, and sold, rather than from the assumption that they are public by default but that it's sometimes worthwhile to grant an artificial, time limited monopoly. Even when talking about copyright reform, it's almost always framed from the point of view of the copyright holder having an inherent right that could perhaps be limited, rather than as society itself being the main stakeholder.
If I may challenge this a little: does it really?
Copyright itself does not cover ideas, only their expression. At least in the jurisdictions that I'm familiar with.
A boardgame is a good example. The idea or the rules of a game cannot be copyrighted. But the presentation of those rules is under copyright. So, in theory, you can make a game that plays exactly the same as some other game, but you cannot write the rules in the same way, you cannot use the same art, or the same set of unique terms, and so on.
One might be able to patent some aspects of game rules, or rather their specific implementation. And one could have a trademark for the title, characters, setting, or other aspects of the game. But I don't think any of these mechanisms cover pure ideas. Ideas are free.
I know this is nitpicking, and I'm responding to a hand-picked, out-of-context part of your post rather than its spirit, but in the spirit of the wider topic: I feel the mechanics of copyright, patent and trademark law are often a source of misunderstanding.
I really appreciate this reply, actually! I wasn't 100% happy with my own phrasing there as I was writing it - I was trying to simultaneously say that the intangible part is the subject, but not use the word "intangible" because it's the conceptualisation rather than the intangibility that's important here - and "expressions of ideas" is a much better way of communicating what I was trying to say.
I am going to push back a little here, though, for two reasons...
Firstly, I'd agree that none of these mechanisms were intended to cover pure ideas, but copyright gets used in increasingly tenuous ways to enforce increasingly abstract "rights": DMCA non-circumvention means it de facto covers almost anything if you wrap some DRM around it, and blank media levies compel users to pay for recording works just in case they might be infringing, to take a couple of examples among many. In practice, patent law as applied in [current year] is more than happy to cover very broad concepts a lot of the time anyway, too.
Secondly, I actually was talking about the general person-on-the-street assumption in that sentence, rather than the legally precise definition. I'm not going to pretend that's why I worded it that way - I didn't, it was clumsy and you've improved it - but I genuinely did mean "the wider conversation starts from this broad impression" rather than "the legislation starts from this premise", and I think that's often the conceptual starting point when these things are discussed because as you rightly say, the whole area absolutely is a source of popular misunderstanding.
Strong, strong agree, and the way I wrote it could easily be interpreted as more patent-y than copyright-y, not to mention the number of discussions I see that totally conflate copyright and trademark (which aren't helped by cases where the IP holders themselves use copyright as a hammer to force an issue that really should be trademark-related). Precision is super important here!
I don't think it invalidates your broader point, but are you speaking of the perspectives of laymen or, like, lawyers here? The first few lines of the wikipedia page on copyright specify that it expressly doesn't protect ideas, for instance, only their expression (which is consistent with how the courts talk about copyright).
I was going for laymen's perspective - I replied in more detail just above before I refreshed the page and saw your comment, but as it happens it covers this nicely too! Only thing I'd add specifically here is that the way the various copyright offices say copyright functions, vs what they actually enforce, vs what the court opinions and precedents say do not necessarily match up well at all.
But yeah, you can absolutely substitute in "expressions of ideas" and it wouldn't meaningfully change what I had in mind, even though I was also talking about the broader public conversation rather than the legally precise one.
Economists aren't evil and don't want the worst for you. Economists in academia understand the limitations of their research. The problem is that economics is difficult to follow without the basics, and research outcomes are often counterintuitive which makes them sound disagreeable. Outrageous findings get more headlines, while nuanced research gets ignored. Moreover, economic facts don't equal policy recommendations. It doesn't help that outdated views are parroted in public discourse, and politicians cherry-pick whatever supports their positions while ignoring modern research.
Like any field involving status and money, economics attracts a minority of questionable people.
It’s frustrating having to explain this all the time, especially to left-wing people, even though I am one myself. Worst are the comments about the Nobel prize in Economics: “Nobel despised people who cared more about profits than society's well-being.” They shouldn’t have named the prize after him, but the underlying opinion - that economics only cares about profits - is completely incorrect. Economics is all about welfare and includes concerns like poverty, inequality, fairness, happiness, etc.
Seconded.
The extremist left being its own thing, it's still extremely frustrating how many SERIOUS voices on the left don't even have a bare-bones understanding of economics.
Believe it or not, sometimes you sit down and spend a lot of time, effort, and money working with very very smart people to seriously solve problems like "how do we feed everyone" or "how do we get this thing to the most people" or "how do we not solve this problem for 10 years before devolving into anarchy or dictatorship" and you come up with answers that are not "what if we all just did everything perfectly and got along".
Hell, to be very clear, a lot of answers to "what would a good capitalist economy look like" do NOT include things like "free markets for impossible-to-shop-for items like health care" or "centralized horizontal or vertical monopolies". Things most can agree on, and yet somehow even mentioning that there's a lot of good evidence for why you might not want a 100% planned economy means you're somehow for the status quo.
I'd love to hear more on this, as a layperson it does appear that economics as a field of study has some quite big problems when it comes to the models they use to study the economy, such as not accurately modelling wealth inequality, and a subset of people who aren't incentivised to solve these problems.
I'm referring to this Gary's Economics video, so I'm not going to type out his whole argument, but I would be surprised if the problems he describes aren't being addressed. Being charitable, it seems like progress is slow?
https://www.youtube.com/watch?v=CivlU8hJVwc
The man is a grifter. His claim of being the "best trader ever" at citibank or whatever is an outright lie, and his presentation of economics is basically no better than the people who tell you that we're hiding the secret to free energy.
Someone already did a pretty good dive into how his claims don't remotely track:
https://birchlermuesli.substack.com/p/copy-garys-badeconomics/comment/104505656
The frustration (as you can see in the comment section below in the link) is the never-ending one that most grifters abuse.
Yes, wealth inequality is a thing. Yes housing prices are nuts. Yes inflation is a problem. Yes, yes, yes, yes, yes.
Gary's "model" however does not actually explain ANY of it. He knows what we all know. Some people are better off than others (and ourselves). He then just happens to have the answer you were looking for, ITS THEIR FAULT!
And you know, it probably is on some level, but his "argument" doesn't actually add up or solve anything. This is like racing to someone saying "the ocean temperature is rising" which yes yes it is, and following up with "So obviously we should put more ice in it" and...well just no. That's not how this works.
So to his key point, it's not ignored. It just isn't. It's a massive topic of discussion and central to current models. Economics is a constantly evolving field and never going to be perfect, but to imply they're pretending it's not there is outright lying to the public.
This FT Alphaville article covers how lots of the stories about his life are pretty heavily exaggerated as well.
I'm typing all of this before watching your linked video, because I saw that my initial opinion in response to your comment is essentially spelled out in the pinned comment on YouTube.
I studied just enough economics to decide against making it my career path after all. I don't know how much the general public understands this (given how often I was assumed to have business expertise whenever I mentioned anything economics-related) but economics is a social science, not a natural science - there are commonly accepted paradigms but there is no hard, objective truth per se. All economic theory boils down to imperfect models that can never be proven because you can't establish true control experiments. So, in essence, anyone can pretty much construct any argument they like if they cherry-pick enough evidence to support their claim. When huge commercial interests can potentially be affected, there's bound to be a lot of incentive for research in particular areas, and/or with a particular bent, while others will go largely underfunded/ignored. A certain orthodoxy develops over time and calcifies.
I'm reminded of a blog I came across years ago when Google had started innovating in some questionable directions and their ties with the US military were first coming to light. The author asked what kind of projects Google would be incubating if Silicon Valley had sprung up somewhere in sub-Saharan Africa or South Asia instead of San Francisco? How much more likely is it that they'd be developing advanced projects on soil analysis, rainforest health, biodiversity, etc. with the potential for massive social benefit, rather than (or at least in addition to) military drones, invasive surveillance, toxic digital platforms like YouTube, and so on? Culture has great influence on how and which decisions are made.
I'll watch now and return with any new thoughts.
Edit: He does allude to orthodoxy bias but he also seems quite insistent about class issues and individual egos being at the root of economists' poor predictions, which... I'm skeptical about. If only it were that simple.
There are no true control experiments in astronomy, either, but it's certainly a science and a lot can be done using natural experiments.
The problem with economics is that it's about people and economic assumptions only work imperfectly in aggregate. Economists (often working for governments) do collect statistical data, though, and that's certainly a form of scientific observation.
Moving from an enormous number of price observations to an inflation rate (for example) is already making some theoretical assumptions, but it's also very data-driven.
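To make that concrete, here's a minimal toy sketch of a fixed-basket (Laspeyres-style) price index in Python. The goods, quantities, and prices are all invented for illustration, and real statistical agencies use far larger baskets, weighting schemes, and quality adjustments:

```python
# Toy fixed-basket (Laspeyres-style) price index -- illustrative numbers only.
# The "theoretical assumption" lives in the basket itself: which goods, and in
# what base-period quantities, are taken to represent typical consumption.
basket = {           # good: assumed base-period quantity
    "bread": 50,
    "rent": 1,
    "fuel": 30,
}
prices_2023 = {"bread": 2.00, "rent": 900.0, "fuel": 1.60}
prices_2024 = {"bread": 2.20, "rent": 950.0, "fuel": 1.55}

# Cost of buying the same basket in each period.
cost_2023 = sum(basket[g] * prices_2023[g] for g in basket)
cost_2024 = sum(basket[g] * prices_2024[g] for g in basket)

inflation = (cost_2024 / cost_2023 - 1) * 100
print(f"basket cost: {cost_2023:.2f} -> {cost_2024:.2f}, inflation ~ {inflation:.1f}%")
```

The data-driven part is the huge number of observed prices; the assumption-laden part is choosing and weighting the basket.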
There is also a lot of theoretical speculation about economics by both amateurs and professionals. The professionals sometimes use toy mathematical models. Such speculation counts as economics, but there's more to it than that.
You might compare string theory (which is mostly mathematics) to experimental physics.
I've only seen this one video and checked out his Wikipedia page afterwards, but this guy sounds like a populist guru type who appeals to emotions but doesn't cite any facts. He gets the cred from having made a few good bets in the past and having the Oxford education, then he says that academics are stupid, which he knows because he was there. He says that his predictions about the economy were right and academics were constantly wrong, which to me is a warning sign, because anyone who can do this consistently would be more of a household name, or at least a famous name among people following finance. He says that he made a lot of money as a trader, and immediately turns around and says you shouldn't trust rich people because they don't understand the rest of the populace. Not sure how that's congruent, but yeah, you should listen to this Oxford-educated rich person and not the other rich academics.
Long story short, it kind of sounds like he's just trying to convince you to distrust those doing research and trust him instead, so it sounds like FUD to me. It's not that he's against experts, he's against experts who are not him.
The point he's making is that the socio-economic background of economists tends to be on the wealthier side, and he comes from a working-class background.
I don't know how true that generalisation is, but it's a similar yet opposite claim that Thomas Sowell makes to defend conservative policies. I guess people just like a rags-to-riches story.
Yeah, I'm also uncertain about that generalization. Maybe I overgeneralized his statement, because I can think of plenty of counterexamples on both sides, so I feel like there needs to be some supporting data for this assertion. Otherwise it's just playing to your emotions. I'm not sure how you'd measure this, though. I'm generally not a fan of blaming certain groups of people for our problems; it's basically always overly reductive and drags the discussion towards finger-pointing rather than anything productive. Also, this feels a little like anti-intellectualism. I'm totally fine with legitimate criticisms, but saying that "intellectuals are biased and believe [something obviously stupid]" just rubs me the wrong way. Ultimately this feels like blaming without offering a good solution, other than perhaps watching this guy's videos and buying his book.
What's the Sowell claim you're referencing?
If you watch from 1:25 to 3:00 (the end of the quote), I think he makes a very similar claim to Gary: that his view of economics is very much shaped by his working-class upbringing, which is framed as a reason to trust him more than traditional academic economists.
https://www.youtube.com/watch?v=_yC0dsTtRVo&t=85
Thanks!
Yeah, "street cred" has never really been convincing at all for me. Anyone who pushes that angle just makes me think they're making an argument from authority and showing me that they don't have evidence for their claims.
I read Sowell's Basic Economics and found it a pretty good book, although with a bit too much classic over-extrapolating Chicago School conjecture. The fact that the field is disproving some of that stuff experimentally shows me that economics is not just elites in an ivory tower making assumptions on top of assumptions. I mean, there's some of that, but at least there are people trying to generate good data to support better assertions and models over time.
Yeah, I'd agree with that. I think where the mainstream frustration with economics and wealth inequality comes from is that, if economists have been studying this for a long time, why does it feel like we're still seeing increasing inequality and our governments seem unable or unwilling to commit to redistribution?
Taking Gary charitably, I think this is one area where he has been effective - pushing mainstream awareness of wealth inequality and encouraging people to contact their representatives.
I think the main answer to that is governments frequently go against what mainstream economics would recommend. A secondary answer is that the system is too complex for us to make good predictions, and even good predictions will sometimes be wrong, so let's not forget which predictions were right.
I'm on board with the idea of encouraging people to contact their representatives about fairer income distribution policies. I'm just against the idea of using the blame game to do so. I feel like this creates more polarization and blind anger in the world. It's a weapon that could easily cause collateral damage and maybe cause more harm than good.
There's a third answer in that "sometimes what you think will work, will do the opposite" which is mixed with "while yes there's money to be gotten from higher/better taxes on the rich, that's not going to solve everything".
Things like wealth taxes are floated frequently, but they're conceptually and practically problematic. They likely will not work, and the countries that have tried them have had many of the predicted problems.
Many posts have already talked about him. I'll add that he seems to conflate his university studies with the whole field of economics. It's true that you need a lot of math when studying economics, but that's still not enough to cover everything you need. There's a huge number of models, some more complex than others. You can't teach all of them, but the more math you know, the easier it is to learn them. I can only talk about my experience here in Germany, and I had a lot of freedom to specialize in certain areas. I could have had the same experience as him, math-heavy and theoretical, but fortunately didn't have to. Not every university is the same, but as others already said, the topics he talks about are not ignored at all, even though he might have felt that way in university, and I get his feelings about that. Piketty is the most obvious example of inequality research. It's similar to people who only take a few economics courses and end up thinking those simplified models are all there is.
The "profits over people" reductionism frustrates me. I meet lots of leftists who think there is a Clear, Easy Economic Answer To Solve All Suffering but evil capitalists, economists, and politicians stand in the way.
I remember in the early 2000s there was a leftist position that globalization exploits the global poor and makes poor countries poorer while making rich countries richer. Fast forward 20-some years, the anti-globalization activists have been proven wrong: the global middle class numbers in the billions, extreme poverty is nearly eradicated, and the average non-westerner lives way, waaay longer and happier now. There have been costs, of course, namely environmental damage and the hollowing out of middle classes in developed countries (ironically). But overall — a net positive for humanity.
The 2008 Bailout in the US came at a great cost to the taxpayers. With the power of hindsight, it cost US taxpayers -$109b. That is, the US government made a net profit of 109 billion.
Presumably that’s from the Propublica bailout tracker, which includes TARP, Fannie Mae, and Freddie Mac. Looks like most of the profit is from Fannie and Freddie, which have sorta been nationalized.
Matt Levine wrote about the strange situation here:
The US also lost about 11 billion on the GM bailout.
When they say that observing a quantum object collapses its state, that's caused by the physical act of measuring it, not by the particle somehow knowing that a person has perceived it.
To give a bit more detail: because quantum systems are incredibly delicate, they need to be kept very well isolated from the outside world. When you take a measurement, what you're really doing is breaking that isolation and coupling the system to the outside world we exist in (the technical term is entanglement), which causes the system to collapse.
There are multiple schools of thought on why this happens. One you may have heard of is the "many worlds" idea, which essentially says that whenever a measurement happens, all possibilities occur and the result we observe is the "world" we happened to end up in. Crucially, it is never possible to access or measure those other worlds.
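If it helps to see the statistics side of this, here's a minimal toy sketch of the Born rule for a single, idealized qubit in Python (using numpy). It only illustrates that a measurement picks one outcome with probability given by the squared amplitude; it makes no attempt to model the entanglement/decoherence part, and the amplitudes are just made-up example values:

```python
import numpy as np

# Toy single-qubit measurement. The Born rule: the probability of each outcome
# is the squared magnitude of its amplitude.
state = np.array([3/5, 4/5j])          # |psi> = 0.6|0> + 0.8i|1>, already normalised
probs = np.abs(state) ** 2             # -> [0.36, 0.64]

rng = np.random.default_rng()
outcome = rng.choice([0, 1], p=probs)  # the "measurement" picks one result

# After measurement, the state is replaced by the observed basis state
# (the "collapse"): the superposition is gone.
collapsed = np.zeros(2, dtype=complex)
collapsed[outcome] = 1.0
print(outcome, collapsed)
```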
Fun fact: some important quantum physicists did actually believe consciousness causes collapse (primarily Wigner). It's not, on its face, immediately absurd philosophically that a human has to be somewhere in the causal chain.
Myth: If you touch a baby bird, the parent will smell you and abandon it.
Birds don't have a heightened sense of smell, they rely much more on sight and sound. But even if you've touched a baby bird, the parent isn't just going to abandon it, unless they feel threatened or the baby is a lost cause. Also a PSA about finding baby birds on the ground in general:
I know a lot of people mean well when they see a baby bird and try to help, but more often than not it will be doing more harm than good.
It's "chaise longue" (lit. "long chair" in French), not "chaise lounge". Drives me nuts for some reason, even though I get how it happened since, y'know, you lounge on it.
So I'm just a guy who finds etymology interesting, so take this with a pinch of salt, but "lounge" and "longue" both appear to come from the Latin "longus", so it's less a misunderstanding and more just anglicization of the term. Although I suppose quite a bit of anglicization comes from misunderstanding :D
I got it drilled into my head by the band Wet Leg, because it's part of the hook in their song Chaise Longue (note that this is a very NSFW song, very funny though.)
That song is played over broadcast radio and in fact has been played at my place of work over the speakers, so it isn't that NSFW.
Was going to say the same, lots of innuendo but not explicitly NSFW.
Also it's a very catchy song!
I’ve been taken to task online for things that I consider mild as hell, so I’d rather over than under warn.
Heh, I live in Seattle and they play "Chaise Longue" pretty regularly on KEXP. Although it's been a bugbear for me for a long time, that's probably why I was thinking about it recently!
Chaise lounge is the English translation of chaise longue. It's kind of a loanword and kinda not.
It's a calque!
Oh, that answers my question in my other comment. TIL!
Makes me wonder if there's a term for "portmanteau but not exactly"
Portmanfaux? I made that up.
All words are made up, I like yours.
What the media reports about experts is not the same as what experts think about and consider.
Now that I've written this down, I'm thinking this is mostly just the result of Crichton's Gell-Mann Amnesia.
It seems like the overwhelming opinion from internet comments is that this is the case on pretty much any technical topic. Yet despite that being such a popular sentiment, there are also an overwhelming number of comments like "they never think about [thing that they definitely do think about]" or "they should just try [thing they have definitely tried and might already be in common practice]" - confident statements about experts in a field, made from a headline-level understanding - and those often end up as the most popular comments. A lot of internet discourse on topics I know a lot about is just a bunch of baloney based on a bunch of baloney headlines or baloney tiktoks.
Heart attacks and cardiac arrest are two distinctly different things.
A heart attack (myocardial infarction) is an episode where the heart is not receiving enough oxygen, most often due to a blockage in the coronary arteries (arteries that feed oxygen to the heart), most often due to years of poor health decisions. The most commonly associated symptom is a substernal chest pain, often described as "pressure" (it feels like there's an elephant on my chest). This does not require CPR.
Cardiac arrest is the loss of a pulse due to no heart rhythm, or an unstable heart rhythm (which then leads to no rhythm). This does require CPR.
A heart attack can lead to cardiac arrest. But they are two distinctly different things.
I know there are a number of developers on here, so maybe I'm preaching to the crowd. And I say this as a web developer and not a game developer, but the idea of blaming "the devs" when a game or piece of software incorporates some half-baked crappy feature doesn't really make sense. Admittedly this may just be because "the devs" has come to mean the entire team behind a piece of software, but still.
The developers (as in the fine folks writing the code) probably hate the feature as much as you do. And that bug you can't believe QA missed? Oh buddy, they found it months ago and a dev almost certainly wanted to fix it, but couldn't. You see, kids, we have these things called timelines, project managers, and a hard cap on how many hours there are in a day.
We would all love to get everything perfect before it gets to your computer, but the product manager just promised the client a new feature before the previous one was finished, so that's somehow higher priority. And we don't always have any say in what's priority. So the bug just sits there forever, especially if it's an edge case (uncommon, hard to reproduce, low-impact, etc.). Devs and QA alike were probably raising alarms the whole time, but don't always have the pull within their team to redirect focus to address the concerns.
"the devs" don't mean programmers. It means game developer, ie; the company that is actually responsible for making the game, versus the publishers that finance and promote it and the distributers who sell it.
So like, Bungie, Blizzard, DICE, etc. All of them, including artists, accountants, managers and executives. Not Joe from the rendering engine team specifically.
No, absolutely, but I don't think everyone realizes that. They think these issues lie at the feet of the programmers, QA folk, etc. QA in particular usually gets the "how didn't they catch this during QA? lolwhatQA?", which is unfair. I suspect most tech-savvy folk have enough of an idea of how software development works that they know it's more complicated than that. But for everyone else, I do think there's a misunderstanding at work.
On the contrary, I feel most folks I know (and myself personally) ask questions like "how did they miss this in QA?!?!?!" because there wasn't enough outreach for testing or other feedback options, which will then bring back in programmers, but ultimately devs are the ones who make the decisions, not the code.
Yeah, but Joe’s an asshole (/s).
I do think that, as a programmer, when I see someone refer to “the devs” I often do initially think of the programmers themselves first, even if the comment is much more likely to refer to the game development company as a whole.
I’m sure there are people who mean it both ways, and sometimes, for smaller companies at least, there isn’t necessarily a huge difference between the devs (read: programmers/software developers) and the devs (read: everything else that has to get done). And even then, the fault of why the bug is still in the game could be on either side.
Programming is hard, but prioritizing is harder.
That depends on the context, of course. I frequently hear "the devs" used to very specifically mean the programmers, but usually within a company, not like the example above.
Also not a game dev, we develop customizations on top of Microsoft's Dynamics 365 ERP platform.
There are so many minor bugs and QoL issues I'd love to fix if they'd allocate me a bit of time for it and give me permission to touch that part of the code.
These things annoy me every time I have to test my changes, but I'm not the one who pays my salary, so I just have to live with it (and maybe sneakily make a quick fix every now and then if I'm working in those methods anyway).
I work in SaaS, though not games, and the way people talk about software, especially games, is infuriating. I’m a long time Destiny 2 player and the way the average user talks about “the devs” on /r/DestinyTheGame shows you they think everyone that works there is in on some nefarious plot to addict gamers and then ship suboptimal or buggy products on purpose for reasons. The average person simply has no clue how hard it is to make software on any level and how many moving pieces there are at larger companies. There are a million reasons software works poorly (whether that’s from bugs or design flaws), but people are so quick to assume that “the devs” can just tell the computer what you want in plain English and it spits out perfect code that will never break.
It literally kills me when pedants point out that you should use "figuratively" instead of "literally".
Speaking of China, it does not actively censor videogames that contain skeletons (or blood, or the other things that sound kinda ridiculous in 2025). Censoring rules are broad and left open to interpretation, and are applied on a case-by-case basis. What happens is that publishers self-censor as much as possible to stay on the "safe" side, avoid rework, and get approved asap.
I've always felt the whole discourse around 'literally' misses a major point: it's silly to conceptualize its 'figurative intensifier' usage as a shift in definition because it's pragmatic. Namely, it is a common way to indicate sarcasm/irony, and there are hundreds of other words which have a similar usage whose definition we would never argue about.
I'll try to come up with an example. Say I'm driving you somewhere, and complaining to you about how bad my day has been, when suddenly I get a flat tire. I throw up my hands and say "that's just what I needed". Of course, I am being ironic - I very much do not want a flat tire, but I'm pretending to embrace the situation as a sort of emotional defense mechanism. And in fact the word 'just' is doing some heavy lifting as an irony-marker - the phrase "that's what I needed" (without 'just') could be deployed sarcastically, but it probably wouldn't be quite as clear what I meant by that. You could argue that the whole phrase has become idiomatic, but you can find 'just' used elsewhere as a marker of sarcasm or irony as well (internet-popular phrases like 'just fuck my shit up', for instance).
In any case, it would be insane to argue that the above usage of 'just' is a shift in its definition to mean 'figuratively', even though the situation is basically an exact parallel of 'literally'. The definition of a word does not change simply because it can be used ironically. You might as well say this about every pragmatic usage: do metaphors change a word's definition? What about lies or confabulation?
I think what happened with 'literally', is that it was a common linguistic trope within a certain valley girl-esque social stratum, and people found this usage (and the social stratum as a whole) annoying. The prescriptive notion that the word is being used incorrectly was invented as a way to legitimize this sense of annoyance, and it was further confounded by the fact that the word is itself being used figuratively (which is an antonym of 'literally').
I suspect there’s some truth to this, as it’s very similar to what I’ve heard about the hate towards vocal fry. Vocal fry has been a thing for a very long time (it certainly wasn’t invented in the last few decades), but it’s recently become very popular in some places to hate on it, with or without the explicit valley-girl association in the same breath.
My problem with it isn't from a prescriptivist standpoint, but from a practical standpoint. Most of the time you can figure out from context if someone literally "literally" died, but when you can't, what do you even say to clarify?
"That joke was so funny a guy in the crowd literally died." "Wait, like he had a heart attack?" "No, he didn't LITERALLY die. The joke was just very funny"
I wish this was a hypothetical, but shit like this happens to me all the time with the word "literally". There's no other commonly used replacement.
I think it's just not a sentence construction that I find confusing. In the same way that if someone told me " that joke was so funny. I literally died laughing" I wouldn't think that I was talking to a ghost all of a sudden. I am just used to the hyperbolic usage of "literally" I guess?
Also, if somebody was telling me that someone died at a comedy show I don't think they would phrase it that way... They'd probably say "holy shit, somebody had a heart attack at the comedy show" I guess.
Sure, in that specific situation where the person telling you said they died, it's pretty obvious what they're talking about. That's not because the wording is unambiguous though. It's because that particular situation has all the context you need to understand that they weren't being literal.
If someone says "My mom literally weighs 400 lbs" or "my friend is literally a millionaire" or "we literally have no money in our bank account", you'd have no way of knowing if they are being literally literal, or figuratively literal.
Except when spoken, which is where I mostly see the hyperbolic "literally" used, I don't find that confusing, because there is a big difference in the tone of someone (weirdly) talking about their mom's weight or their friend's money literally versus hyperbolically. It's not really replicable in online writing.
In written text, I know I use the hyperbolic less. But for example - "harvest is kicking up so much stuff in the air I'm literally dying" I would never type if I (or anyone) was, in fact, "literally" dying.
I'm not saying no one else can find it confusing, I'm saying I don't as a rule. The only exception would be if I can't read the tone to the extent that I can't tell if, say, an entire post is sarcastic or serious.
You literally (hah) gave an example of how someone could clarify a potential misunderstanding like that in conversation. The vast majority of the time for the vast majority of English speakers, it will be very clear from context. When it's potentially ambiguous, it can be clarified exactly like that. There are plenty of ambiguous sentences and structures in English and every other human language that has ever existed. It's probably not even possible to engineer ambiguity out of human language. So I don't think your "practical" grounds hold water unless you can demonstrate that it's somehow uniquely more problematic than the plenty of other potentially ambiguous words in English -- a language with several words that serve as their own opposites!
The word "literally" has also been used as an intensifier for (literally) hundreds of years.
That said it's still ok on my book to poke light fun at the valley girls who overuse it 🤠
And don't tell anyone where the other intensifiers came from! You've gotta stop using "really" now. And don't look up the etymology of "very"!
Changing from a word that means "really, actually, truly, no really this is the real exact thing" to an intensifier is possibly one of the most well-worn paths in semantic change.
I just find it annoying considering that this makes "literally" mean its exact opposite. It doesn't bother me that much, though, because I don't think its use this way will literally kill its use based on its literal definition. But to be an annoying old person, I'll share a conversation that I imagined while in the shower this morning as I was thinking about your comment.
Me to young person I know (I don't think I actually know anyone who talks like this, though my 8 year-old did use the word "sus" yesterday without knowing what it meant): Hey, I can't watch it right now, how's the Red Wings game going?
My bad caricature of a young person: OMG it's such a great game, I'm literally there right now
Me: Wow! I thought you lived like 5 hours away from Detroit, did you get a good seat?
YP: lol, what? I'm at home
Me: But you just said...
YP: OK Boomer. 💀
Me: https://i.imgur.com/QzwNS9G.png
That example doesn't make any sense. No one would use it like that, young or old. The definition of "literally" shifted, but that doesn't mean that it literally has no meaning anymore, and has become a random filler word.
"Literally", when used figuratively, is an intensifier. It's used to highlight the intensity of something.
So some modified examples in that hypothetical:
In this case, literally is an intensifier indicating the high quality of the game.
The intensification is about the closeness of the seats in this case.
"...from being a poor poverty-stricken boy in the morning, Tom was literally rolling in wealth." -- Mark Twain, Tom Sawyer
In this case, the young zoomer Mark Twain uses literally as an intensifier to signify how much wealth Tom Sawyer had. If only the youth of today knew how to use words, smh.
The idea that insects don't feel pain. We frankly don't know. There is some good evidence that they might. For example, if you give certain anesthetics to bugs they take longer to move off a hot plate. So, we should probably err on the side of caution.
That's not to say we have to be super compassionate to every insect, but you should at least consider how you treat them if you are concerned about minimizing harm.
I didn’t know about the anesthetics experiment, but I always assume plants and animals at the very least prefer to not be fucked with, so I try to treat all living things kindly. I’m still extremely bothered with how people boil crustaceans alive, for example, I could just never do that.
As someone living through an expanding population boom of invasive spotted lantern flies, there are times when I pause and wonder if I'm hurting the bugs I'm squashing left and right. But then I keep on squashing.
I think with invasive or pest insects it's not necessarily a bad thing to end their life, as long as it's done quickly!
Silicone vs Silicon
Silicone is soft and squishy. Like those reusable snack bags, baking mats, spatulas, etc. Lots of kitchen uses.
Silicon is the hard, brittle element used to make computer chips.
They are not interchangeable!
They are closely related, however - silicon is an element, of course, and silicone is a polymer made of silicon and oxygen plus varying other elements. There are several kinds of silicone depending on the exact chemical composition, but it always includes silicon. So there are several "silicones" and several uses for them other than kitchenware - thermal paste and silicone grease for bike chains come to mind.
I knew silicon and silicone weren't the same, but always thought they must be related in some way since the names are so close, so if you were like me now you know too!
I don't feel up to listing them all but there are a ton of misconceptions about living with colorblindness such as how much color perception I have (in my case, lots, but green is and has always been a weak spot for me, and I confuse it often with gray and red), if I've ever tried those special glasses (I have not, and I have no interest in them), or if it's really that big a deal (depends on what I'm doing; I've usually found a workaround but sometimes I have no choice but to straight-up pull out my phone and use a live color-picker, or ask someone to tell me.)
If anyone has any particular questions, ask them here and I can bet you there's about a 10% chance someone else will pipe up with an answer. Because up to 8% of all males are colorblind in some way.
Can confirm, mild-moderate deuteranomaly. Tried the glasses once and I was able to pass a test with them, but it's not like they made me suddenly able to see more colors or anything. And since they were tinted quite red (they were sunglasses), it just made everything look more red.
I don't think I appreciate fall colors as much as others though. The yellow leaves pop but the red and green are hard to differentiate from afar. I have also found that I tend to deprioritize color when describing things, but other people often put color first.
holds up thing
'what color is this?'
Sorry, had to do it. But for real though, how common is that reaction from people? I'd imagine it's probably the first thing they ask.
It's just about the first thing everyone ever asks, yeah. The only person who ever didn't ask me that same question off the bat was a gal whose father had a type of colorblindness (I never caught which one.) She asked me if I'd been formally diagnosed or just thought I had it or was lying for attention. I wasn't aware people would lie about colorblindness. What a strange disability to fake.
If anybody else is interested in a small youtube channel about colorblindness, chromaphobe makes really good videos
I'll second that recommendation. I've been following his channel for years.
For whatever reason, I get irrationally annoyed when people mix up cable cars with funiculars.
A funicular has ride cars that are permanently attached to a cable. To move the car, the cable is driven by a motor. When the car is stopped, the cable does not move. Typically funiculars have two identical vehicles that trade places and counterweight each other, but that is not necessary. Despite being called a cable car by my tour guide, the railway to the castle in Ljubljana, Slovenia is a funicular, not a cable car.
With a cable car, the cable is constantly moving, even when the vehicle(s) are stopped. The vehicles have independent brakes, and can grip to the constantly moving cable to accelerate. There is no counterweight system in a cable car.
Fun fact, the cable cars in San Francisco use cedar blocks to grip the cable. You can smell them when they accelerate. They have to be replaced every few weeks.
Wait, so on a cable car, the “brakes” grip the cable and then cause the car to move?
So basically the opposite of how a car's brakes work, where the brake applies pressure to stop?
That’s neat!
Pretty much! They do have two separate sets of brakes. In order to stop, they release the grips from the cable and use traditional friction brakes to slow down.
https://en.m.wikipedia.org/wiki/San_Francisco_cable_car_system
Looks like my memory had a few details wrong. That article is a fun read. Another fun fact, the cable has to go through complex pulley systems to go around corners. Because of this, the cars can’t grip around corners. So they have to maintain enough speed to get around the corner and regrip to the straight cable section.
It's probably more accurate to say they have a clutch and a set of brakes rather than two sets of brakes.
That sounds both lovely and incredibly wasteful. Is there no better material they can use?
A lot of other materials that we make brake pads out of turn into carcinogenic dust. They hopefully fit it into regular checks and maintenance, so it might not be that inefficient, since they'd have to check on the brakes no matter what they're made from.
Uh, where's SF getting fresh cedar that regularly that CA is just fine with felling that many trees?
California produces ~56 million board-feet of cedar per year, and that's less than 5% of its timber industry. A few blocks for cable car brakes in one city is nothing.
How much forest area is required to produce that much? I had no idea the timber industry was still so strong, but makes sense (previously known due to the State of Jefferson attempts).
Millions of acres, hard to tell exactly from a cursory search but there's like 33 million acres of forest in California and a decent chunk of it is used for logging.
When I talk a lot about a subject and ask lots of questions, I am not angry. I am curious.
Planes do not generate lift exclusively through negative pressure from air moving faster over the top of the wings than the bottom and pulling it up.
That is a factor, but the wings also push air down, and according to Newton's third law, if an object exerts a force on another, then a force of equal magnitude is exerted back on the first object in the opposite direction. So if the wing pushes air down, then the air pushes the wing up.
That effect is particularly important in helicopters. They generate more lift by increasing the pitch of their blades, causing more air to be deflected downwards and to push harder against the air below it, pushing the helicopter in the opposite direction (up, hopefully). It becomes especially relevant when flying close to the ground, as lift gets magnified by ground effect: the air molecules, instead of pushing against other floaty air molecules, start pushing against the hard ground, which increases the purchase they have and generates more lift in general. It's also very relevant for a very dangerous phenomenon known as vortex ring state, where a helicopter descends into air that its own rotors have already set moving quickly downward in unstable vortexes; this suddenly and drastically decreases the amount of lift available and can very easily result in catastrophic crashes.
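For a rough, back-of-the-envelope feel for the Newton's-third-law picture, here's a tiny sketch that estimates lift as the rate of downward momentum given to the air. Every number in it is invented for illustration, and real aerodynamics involves much more (pressure distribution, induced drag, and so on):

```python
# Back-of-the-envelope "push air down, get pushed up" lift estimate.
# All values are hypothetical, for illustration only.
rho = 1.225        # air density at sea level, kg/m^3
area = 30.0        # assumed cross-section of air affected by the wing, m^2
airspeed = 70.0    # forward speed of the aircraft, m/s
downwash = 6.0     # assumed average downward velocity imparted to that air, m/s

mass_flow = rho * area * airspeed   # kg of air processed per second
lift = mass_flow * downwash         # N: rate of downward momentum given to the air

print(f"lift ~ {lift/1000:.1f} kN, enough to hold up ~ {lift/9.81:.0f} kg")
```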
When I fly helicopters in DCS, I think I die more to the cursed vortex than to enemies shooting me down…
Kinematics was one of my favorite subjects to study in physics; it made things like this "click" for me. We owe much to Mr. Newton!
An engineer at an airplane manufacturer once told me they don't really know why planes fly.
Not sure if he was messing with me or not.
I heard this as well, and I spent a number of years working on simulators at an airplane manufacturer. I kind of took it to mean that there are a variety of factors, and it's easier to just say that phrase rather than go into all the details about the accepted flight mechanics.
I guess you could say this because we don't have exact solutions to Navier-Stokes, only approximations? But we have a very good idea for almost all conditions. The weirder stuff is more like not knowing why ibuprofen works, which we have much less knowledge about.
I dunno, he did something involving wind tunnels. I suck at physics, so I'm not going to pretend I understood, anyway.
As far as I know, most medicine is kind of a mystery. I have no idea how they come up with new compounds to test, or which conditions to test them on. I mean, I take diabetes medication to manage my blood sugar, which all comes with warnings that it can cause hypoglycemia.
Letters don't make sounds. Letters represent sounds. Writing is a mechanism with which language is recorded. It is subordinate to speech, not the other way around.
In reality, it of course gets more complicated when you study the interaction between spoken (or signed) language and writing. And not every writing system works the same. And there are all sorts of exceptions. And so on. But as a starting point, "letters represent sounds" is a less confused position than "letters make sounds".
Hm, I don't think I agree with this one, in particular the part that "[written language] is subordinate to speech, not the other way around.".
I would say a more accurate description is that written language is adjacent to spoken language - neither subordinate nor suzerain, just different. English is somewhere this is very evident - after all, for a variety of reasons, English spelling and English pronunciation can be wildly different. A famous example is the "-ough" words: though, through, tough, cough, bough, thought.
All of which are pronounced in completely different ways, even though by their spelling they should be pronounced alike. So how do we, English speakers, read those words?
We read the word, that we link in our minds to the concept, and then from the concept we remember the pronunciation. We don't go letters -> sounds -> meaning, we go letters -> meaning -> sound. Similarly, when we listen to English and write it down, we go sound -> meaning -> letters.
Indicating that the written word and the spoken word are different, and equal, manifestations of core concepts in the language.
An even more radical example is Chinese, where you can take any modern person who can read traditional characters, and have them read classical Tang era poetry. This would be the equivalent of someone who can read English comprehending Beowulf with no training.
A modern Chinese speaker would be completely unintelligible to a 600 C.E Tang poet, but they still share a written language - separate, distinct from the spoken form of the language.
Thanks for your response! I agree with you on many points, but I shall push back a little.
Written language is subordinate to speech in the sense that spoken language is an acquired ability, whereas writing is a learnt technology that derives from it. With some exceptions, all humans growing up in a typical social environment will begin to speak the language(s) of the social group, but writing is something that needs to be explicitly learnt.
There seems to be something biologically innate in us that makes language possible. What it is, and to what extent, is much debated. We don't know when human language originally developed, but estimates typically put it somewhere between 100,000 and 200,000 years ago. As for writing systems, we have had them in some form for a little under 10,000 years. It's a relatively recent technology.
To repeat what I wrote earlier, in reality the relationship between spoken and written language does get more complicated when you look at the interaction between the two. This also extends to the relationship between letters and sounds, and indeed meaning. English, like you point out, is a great example of a rather fascinating setup.
Chinese, like other logographic writing systems, doesn't strictly speaking use letters but it is certainly a very interesting system as well. Is it really true that a modern Chinese speaker would, without any additional training, be able to read 7th century poetry? My understanding is that even if many of the characters have remained the same, much of the language itself has changed, and in the specific case of Tang poetry, the brevity of the texts makes it a little easier for modern speakers than other writing from the era? But I have no background in Chinese at all, so I really don't know. I just imagined that it's a bit like me "reading" Chinese texts after learning Japanese. I could figure out much of the general meaning of simple passages, but it didn't feel like a language skill, more a puzzle solving exercise.
I don't think this is inherently true. It is usually true that we learn to speak sooner than we learn to write, but that is because speaking is a more convenient method of communication than writing for babies.
There are many circumstances where people can learn written language without learning spoken language. Obviously you have certain types of disabilities, but in addition, I'd say that on the fringes of language this is very common. It's not uncommon that someone knows how to read and write words in technical or formal speech that they don't know how to pronounce - that classic embarrassing moment when you say a word wrong because you've always read it and never spoken it yet.
Personally, due to immigrant things, I ended up in the strange case of knowing how to read and write Chinese more than I could speak it for many years until I circled back as an adult and took some time to seriously study the spoken Chinese language. That's an example where I knew the written language, without knowing the spoken language. That would be impossible if the written language were subordinate to spoken languages.
The idea that spoken or signed language is acquired while writing is learnt is actually a fairly basic concept in linguistics that also has quite a bit of research behind it. Or at least this used to be the case some two decades ago when I completed my degree and briefly taught undergraduate intro courses at a university. Life has since taken me to other places and fields, but I at least haven't seen anything that would have fundamentally challenged this idea.
That said, the method and extent to which language is a natural part of us humans has been much debated. On the one side, we have major theories like universal grammar, which argues that our language abilities are the result of an innate biological component. But then there are also those who see language more as a result of multiple individual components in our brain that aren't in any way language specific.
In any case, we have observed cases of both language deprivation and the emergence of new languages, which have generally been taken to indicate that there is something innate in us that lets natural language develop, and that there is a period in our childhood during which one needs to pick up a language. If an individual does not acquire a language during that time, they will never be able to fully do so.
I am not aware of anything similar when it comes to writing. It seems to be more of a learnt skill which is not dependent on our biology or developmental phases.
Foreign language learning of course complicates things and it is a good question how much it follows the processes of native language acquisition, and how much it is instead a learnt skill like writing. Probably a little from column A and a little from column B, especially at the beginning of the learning process. There are also many different levels of language proficiency, for both active and passive language skills, such as when one is able to understand a language but not actively produce it. Perhaps, like you pointed out, in this context it indeed isn't quite as black and white as saying that writing is always subordinate to speaking.
We are also living at a very unique period in the history of human language use, as writing is suddenly so ubiquitous and such a major part of many current cultures and social interactions. Still a little over a century ago, being able to read and write was an exception, and still a couple of decades ago, writing was a skill you used much less frequently. Today, many people spend more time exposed to written than spoken language. As we study the results of this change, it will be interesting to see how it adjusts our concepts and understanding of human language ability and the mechanics and processes behind it.
I feel this is a Ceci n'est pas une pipe statement. Letters do not make sounds; they convey which sounds should be made by the reader, which then greatly varies based on which language(s) the reader understands.
They don't necessarily do that, though. We have plenty of empirical examples from languages that use letters where the letters convey meaning, which can be related to sound but isn't necessarily so. You can have written language with no sound whatsoever.
Which is a fair statement, but I believe still adheres to my argument. It's all in the eye (and understanding) of the reader/speaker.
Part of my job is teaching reading and writing, to both monolingual and bilingual students. It's a fascinating process and has really highlighted to me how odd English especially is as a language - both written and spoken.
What makes English exciting is that with an alphabet of 26 letters, we need to represent 44 different phonemes, and the wonderful bit is that this is done through roughly 450 graphemes (estimates vary a bit). Many of these are niche or borrowed from other languages.
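To make the mismatch concrete, here's a toy sketch in Python. The handful of example spellings are a hand-picked illustration, not an authoritative inventory, but they show why the grapheme count balloons so far past the letter count: many spellings can stand for the same sound.

```python
# A few illustrative phoneme -> grapheme mappings in English spelling.
# These examples are a tiny, hand-picked subset, not a complete inventory.
phoneme_to_graphemes = {
    "/f/":  ["f", "ff", "ph", "gh"],          # fish, off, phone, laugh
    "/i:/": ["ee", "ea", "ie", "e_e", "ey"],  # see, sea, field, these, key
    "/sh/": ["sh", "ti", "ci", "ch", "ss"],   # ship, nation, special, machine, mission
}

# 26 letters, 44-ish phonemes, but already 14 spellings for just 3 sounds here.
total = sum(len(spellings) for spellings in phoneme_to_graphemes.values())
print(total)  # 14
```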
CPU does not mean computer. It’s Central Processing Unit. The CPU is a complex digital circuit that is microscopically etched into a silicon crystal. (Shoutout @FlareHeart for clearing that one up) The computer is a box with a bunch of parts inside. Or a person/job title if you go back 50ish years.
Every* computer has at least one CPU, nowadays more like 4-12. A CPU's job is to read instructions and do simple operations like addition, subtraction, and moving data.
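To show what "read instructions and do simple operations" looks like, here's a toy fetch-decode-execute loop in Python. It's a deliberately simplified sketch of the idea, nothing like real silicon, and the instruction names and registers are invented for the example.

```python
# A toy CPU: fetch an instruction, decode it, execute it, repeat.
# The instruction set and registers are made up purely for illustration.
program = [
    ("MOV", "a", 5),    # a = 5
    ("MOV", "b", 3),    # b = 3
    ("ADD", "a", "b"),  # a = a + b
    ("SUB", "a", 2),    # a = a - 2
    ("HALT",),
]

registers = {"a": 0, "b": 0}
pc = 0  # program counter: which instruction to fetch next

while True:
    instr = program[pc]  # fetch
    op = instr[0]        # decode
    pc += 1
    if op == "MOV":
        _, reg, value = instr
        registers[reg] = value
    elif op == "ADD":
        _, dst, src = instr
        registers[dst] += registers[src]
    elif op == "SUB":
        _, dst, value = instr
        registers[dst] -= value
    elif op == "HALT":
        break

print(registers)  # {'a': 6, 'b': 3}
```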
Sure sure but counterpoint: ComPUter
Tapping side of head like a genius.gif
Technically, only computers made with the Von Neumann architecture have CPUs, which comprise the control and arithmetic units of the machine. There are other architectures where control and arithmetic aren't centralized functions (or at least aren't centralized in the same way), so it wouldn't make sense to designate a unit the "CPU."
You could make the case that computers haven't really followed the Von Neumann scheme for decades, since Intel split off some control functions to the Southbridge. You can also make a case that current-day, multi-core, hyper-threaded CPUs aren't actually "CPUs" at all, since they often integrate graphics processing and split control and arithmetic between cores. Those are semantic, not practical distinctions though, and I don't think anyone bothers to argue that, aside from AMD's marketing department, I guess.
Relatedly, another common misconception regards the term "supercomputer." It isn't technically a jazzy term for a really fast, really powerful computer; it refers to computers composed of multiple less powerful computers configured to function together. Think along the lines of an ant colony being a "superorganism": the individual organisms function together in such a way that the collective's behavior resembles that of a single entity. A pop culture analogy would be Voltron or the Power Rangers' Megazord: giant robots composed of smaller, individual robots connected together to form a super-robot.
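If it helps, here's a toy sketch of that "many machines acting as one" idea, with ordinary Python processes standing in for cluster nodes. Real supercomputers use dedicated interconnects and schedulers (MPI, Slurm, and so on), so treat this purely as an analogy; the function and numbers are mine.

```python
# Toy illustration: several workers split one big job, and the collective
# produces a single answer, like nodes in a cluster. Not how real clusters
# are programmed, just the shape of the idea.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    nodes = 4  # pretend each process is a "node"
    step = n // nodes
    chunks = [(i * step, (i + 1) * step) for i in range(nodes)]
    with Pool(nodes) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(range(n)))  # True: the collective gave one answer
```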
In the spirit of clearing up misconceptions, I’m gonna hit you with a friendly ACTUALLY.
It sounds like you’re referring to the whole package that companies like Intel distribute and the general tech-savvy population calls a CPU. Those are better considered SoCs (System on a Chip), since they do all those functions you listed. Maybe I misunderstood you though.
It’s still correct to call each individual CPU core a CPU, even if it’s not so central anymore. Though the only person that will sell you just a CPU these days is an IP vendor, and that’d just be the design for one.
Also, AMD is still selling SoCs in the same sense as Intel. Their products generally have CPUs, Memory Controllers, I/O Controllers, etc in a single package (okay, okay, it’s a System on Chiplets). They just leave off the GPU more often than Intel does.
Yeah, that's a good point. I've always been a hardware guy, so my own views about what each component is are strongly influenced by the role the physical object serves, not the conceptual function of it. Since I was the jackass who brought up von Neumann, that's a fair gotcha.
I think what I was really trying to get at is what is and isn't a CPU or APU or SoC or whatever is more complicated now than it was when the vocabulary was established, which is understandable. We've come a fair way from the first white paper for EDVAC, and as we've hit various bottlenecks and physical limits, we've refined the roles that individual components (logical and physical) play.
This is part of why I usually try to avoid this sort of discussion topic. There are enough perfectly valid perspectives on any given subject that you could play the "well akshually" game until heat death if you're stubborn enough, and I lack that sort of commitment. I'm only stubborn enough in that regard to be annoying.
Popular 1990s sitcoms did not, at any point, use pre-recorded laughs. They had live audiences.
I have another one to throw into this thread. I didn’t think it was very common, but I started reading an autism book that has this issue.
Neurodiverse: a group of people with a variety of neurologies.
Neurodivergent: a single person who is neurologically different than the standard.
A single person cannot be neurodiverse. To use a metaphor, a single apple cannot be a “variety”. It might be a unique or unusual apple, but it, alone, cannot be a variety. We tilderinos are a neurodiverse group, but we are not neurodivergent. The people commenting in the mental health support group are neurodivergent, but those individuals are not neurodiverse.
A sandwich that has been grilled and contains something in addition to bread, cheese, and the spread put on the outside of the bread to cook it is not a grilled cheese; it is a melt.
It always bugged me for some reason growing up that I'd see people say "I made a grilled cheese" and they basically had a hamburger going on with all the extra stuff they put in there. Then 10 years ago this post was made on /r/grilledcheese, and it has stayed in my memory ever since: https://www.reddit.com/r/grilledcheese/comments/2or1p3/you_people_make_me_sick/
There are a ton of misconceptions that non-car people have about cars, but probably the most ridiculous ones are around how many miles a car has and its model year. So many people in my life, when in need of a new car, will completely dismiss anything older than 5 years or with more than 100,000 miles. People willingly take on $700-a-month auto loans when you can find something like a 2012 Outback in pristine condition with 150k miles for under $10,000, and it will run like a top.
Yes, older and high-mileage cars will need service, but so do newer cars, and newer cars are more expensive to service. A well-taken-care-of Outback or CR-V is going to be in the shop no more frequently than a 2018 car, and it will be cheaper to fix. On top of that, you are saving hundreds of dollars a month on a car payment. Most people are wasting tons of money per year on newer cars out of fear of how unreliable a cheaper car might be; people just don't do the long-term math and let themselves get robbed because they "need" low miles or Apple CarPlay (which can easily be put in any car…).
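For what it's worth, here's the long-term math as a quick back-of-the-envelope sketch. The figures are the ones thrown around above plus a deliberately generous repair budget I made up, so plug in your own numbers.

```python
# Rough five-year comparison: financing a newer car vs buying an older one
# outright. All numbers are illustrative assumptions, not quotes.
months = 60
new_car_payment = 700              # $/month financing a newer car
used_car_price = 10_000            # older Outback/CR-V bought outright
used_car_repairs_per_year = 1_500  # generous annual repair budget

new_total = new_car_payment * months
used_total = used_car_price + used_car_repairs_per_year * (months // 12)

print(f"newer car over 5 years: ${new_total:,}")   # $42,000
print(f"older car over 5 years: ${used_total:,}")  # $17,500
```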
My advice when buying a car is specifically to look for older models with moderate miles that seem well taken care of and just have a car friend help you look it over. I’d trust a 2005 Honda CR-V with 200k miles more than I’d trust most modern cars.
Idk, I actually kinda disagree with this one. Post-covid, the used car market got really shaken up. If I check Carvana right now, a 2010 Honda CRV with 120k miles is $15k. That's half the MSRP of a new one.
If that's your budget, it is what it is, but finances allowing, I'd honestly rather just get a new one for MSRP? Not only do you get a new warranty in case of lemons, but reliability has taken massive strides.
You can also just, like, drive your econo-box until it hits 200k miles. It'll be even more reliable until then, and just as reliable after.
Currently, for economy/reliable cars, I'd rather get them new for MSRP. I don't think the used price is giving you more than a linear discount. The old line about "it drops 50% of its value the moment you drive off the lot" just isn't true anymore; hell, sometimes the car gains like 5% of its value the moment you drive it off the lot.
The used car market was absolutely fucked for a few years post covid (I went from upside down on my GTI to being able to make $2k after selling it and paying my loan, lol). It’s been recovering in my area for about a year now, so it could be slightly area dependent? I bought my dad a 2012 outback in pristine condition for $9k this year. Sometimes you also just need to be hawking the market for a few weeks until “the deal” pops up, since good ones like that get scooped up quickly.
I do somewhat agree on new vs slightly used, where some cars are appreciating or at least not dropping value as fast, but I’m still a firm believer of grabbing under $10k cars in good condition. Most people are vastly overpaying on their car payments for what they actually need based on vibes. Most people I know that claim they “need” a 2023 car will have the same experience in a 2012 if they just get an aftermarket touch screen or something and they’ll halve their car payments.
I’m also an absolute shitbox merchant, though. Most of my cars I buy for under $4000 lol, though I don’t recommend the average person go that low unless out of necessity.
Your point about driving your car until it dies is the real play. Too many people upgrade their cars too often when they really don’t “need” to and it’s, IMO, just a waste, but it’s their money and not mine, so whatever.
Mostly, my first comment started about how I hate that people pretend a 2005 car is basically a worthless pile of junk and it kind of morphed into people wasting money.
It's still kinda fucked... I wrecked my '18 Civic EXT last January (2024) and replaced it with a '19 Si shortly thereafter. I had purchased the '18 in mid 2019 with around 8k miles for $22k, and the '19 Si with ~65k miles at 5 years old was $23k.
Now, do note that I demand my cars have a manual transmission, and I won't pretend that choice has no effect on the pricing. I found some other Civics around $18k, but they were older than my totaled car and/or had some other issues.
Granted, I'm not in the market now (and I hope to keep this one and not wreck it for a long while to come, especially as manual transmissions are so sadly being phased out here in the US... just need a personalized plate reading "GASHOLE"), but it seems like, as with everything else after covid, the prices spiked and never really settled back down.
See also: my mortgage. :(
In my city we have a local family owned repair shop that works exclusively on Subarus. They also maintain a fleet of loaner cars, and buy used cars to repair and sell. Their cars sell for a good bit over blue book, but they do all the service and repair the car needs before selling it, so it is usually a good value. In the past, they always had 10-20 cars listed. When I went in a few months ago, they had zero listed, and I asked them why. They said that totaled cars are being bought at auction sight unseen for more than msrp routinely. So, at least in my area, the used market is still fucked.
Without someone who does know a lot about cars, how can a non-car person know when the 2012 Outback is indeed OK or not? The seller will always say "it's great!" What are some easy things to look for?
So when looking at older cars, it's kind of just the usual stuff, still. Listen for any rattles, bumps, or weird noises; feel whether the brakes, acceleration, and shifts are smooth; test all the electronics; check for rust; check underneath to make sure nothing is out of place; and make sure the wheels and tires are in good condition (both easy to replace, but you might not wanna replace tires the day you buy a car). I don't particularly like dealers, but buying from a reputable one at least ensures there's a minimum standard of quality vs a private sale. Always get a Carfax either way (unless you really know what you're looking at), and don't just check it for accident history - check reported services and repairs, too. Do research on common issues for the car you're looking at, and bring a car friend with you; if you don't have a car friend, any second set of eyes helps to catch things you might miss.
I know people have anxiety over buying an older car and it immediately blowing up on them, but you can usually tell if a car has been treated well as long as you give it a good look over with a friend, check the carfax, and make sure it drives smoothly. Yeah, stuff is going to break over time, but that’s true of newer cars too. Sure, if you buy an older car, it’s possible to be facing a $2k repair a few months after buying, but if you saved $8k on the purchase instead of getting something newer, you’re still way ahead.
I just think it’s wise when car shopping to at least consider options 10-20 years old as they can save you a TON of money while being just as reliable. Something like a 2006 Honda CR-V is completely bulletproof and I’d trust an old one over a new one, personally.
Thank you. This is really useful and I wish I knew this a few months ago. I'll know for my next car tho, and I look forward to saving a bunch of cash at that point. I am one of the general public who thought I'd better avoid anything more than 5 years old and over 100k km.
And to be clear, there’s nothing wrong with getting a newer and lower miles car if you do need it and can afford it. It just pains me to see people making $700 a month car payments when a cheaper and older car can be just as reliable (and sometimes more reliable). The auto industry has put a ton of work into making people think they need a brand new (or near new) car every 3-5 years.
When you start learning to fix your own cars, that's where the next level of savings comes in. Not including consumables (like oil changes and tires) and optional performance upgrades, I usually spend less than $1000 a year on repairs, I haven't had a car payment in four years, and I broke even on the three cars I bought and then sold since 2020. So across the five cars I've owned since 2020 (two of which I still own), I'm paying less per year than some people pay in two months of car payments. It's pretty rad. Shops are charging damn near $1000 these days for brake pads and rotors, and I can do it myself on any of my cars for $100-$300 depending on the car.
Loans are crazy. The way interest works on those is such a waste of money. And I can't even wrap my head around leasing. I guess it's like long-term renting. I prefer to buy something I can afford 100% rather than buying something fancy I can only afford by borrowing money. I don't like the loan chain around my neck. But a close relative of mine, for example, had no issue splurging on a luxury European car, leasing it. Guess what happened a few years later when life threw them a curveball? They could no longer afford it. Hundreds in leasing fees, insurance, and premium gas, just to satisfy some materialistic craving for a luxury vehicle.
For my next car I'm taking your advice, tho it won't be for a while. I have a Jetta at the moment, and I chose it hoping it lasts a long time. My brother-in-law moved to my city recently, and he knows a lot more about cars than I do, so he'll be the car friend I take with me. I can do small things on my own (swap winter/summer wheels, minor repairs). I'd like to eventually learn how to change brake pads and do oil changes.
I took way too long to get to this, but: in education, you've likely been told that you are a visual learner, an auditory learner, a kinesthetic learner, and so on.
Learning styles are a myth. Claims about visual, auditory, reading/writing, and kinesthetic "styles" (VARK) have been thoroughly tested, and despite being very prevalent in the education literature (a survey of recent papers indexed in ERIC and PubMed found that 89% implicitly or directly endorsed learning styles), large reviews and experiments find no reliable "meshing" benefit. Hell, you can ask most teachers at my current institution and they still believe it's a thing. Glad all those PD hours and meetings are useful.
What does matter is aptitude-treatment interaction grounded in cognitive load. Novices benefit from worked examples and explicit guidance, while more knowledgeable learners benefit from reduced guidance and problem-solving (the expertise-reversal effect). Working memory limits are the real constraint, and working memory is strongly correlated with fluid intelligence. The amount and type of guidance should adapt to the learner based on prior knowledge and task complexity.
This actually makes education kind of boring, because how to effectively teach someone is relatively simple (rather than layers of personality attributes, blah blah blah). Clear visuals aligned with text, spaced retrieval, and structured practice reign supreme. As learners become more advanced, fade the guidance and vary the problem types.
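If "spaced retrieval" sounds abstract, here's a minimal sketch of the scheduling idea in Python: review intervals that expand after each successful recall and reset after a miss. The specific intervals and cap are arbitrary choices for illustration, not a validated schedule.

```python
# A minimal spaced-retrieval scheduler: expand the gap after a successful
# recall, reset it after a miss. Numbers are illustrative assumptions.
def next_interval(days: int, recalled: bool) -> int:
    if not recalled:
        return 1               # missed it: review again tomorrow
    return min(days * 2, 60)   # recalled it: roughly double the gap, capped

# Example: a card recalled successfully four times, then forgotten once.
interval = 1
for outcome in [True, True, True, True, False]:
    interval = next_interval(interval, outcome)
    print(interval)  # 2, 4, 8, 16, 1
```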
A broken clock is NOT right twice a day. The value it displays happens to line up with the real time for one second, twice a day, but the clock is never correct. The only way you would be able to verify that it's right is if you had another timepiece, and even then, the clock isn't working - it ceased operating, and the value it last produced just happens to line up with reality. Trying to use a broken clock for anything would be a futile effort; what are you going to do, stare at a clock all day where the hands don't move?
I don’t think I understand this one.
If you want to be pedantic (and it seems like you do) the length of time during which the stopped clock is correct depends on that clock’s precision. So if it’s a clock with a second hand, it’s true that it’ll show the correct time for a second. But if it’s a digital instrument accurate to milliseconds or microseconds or nanoseconds, that interval will be shorter.
What is the smallest possible unit of time? Is there a fundamental minimum “tick” size, or is every duration infinitely divisible? Ultimately that smallest unit, the highest-resolution discrete step from one instant to the next, if there is such a thing, is the duration of time that a stopped clock will be correct for. But it’s still “correct,” for that infinitesimal duration, twice a day. That’s what’s confusing me about what you wrote. The idiom holds up.
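Here's a toy way to see the precision point (the "seconds since midnight" representation and the numbers are just mine for the example): the stopped clock "agrees" with a reference only while both read the same value at the clock's own resolution, so the finer the resolution, the briefer the window of agreement.

```python
# A stopped clock "matches" the real time only while both readings are
# identical at the clock's own resolution (1 s, 1 ms, ...).
def agrees(stopped: float, now: float, resolution: float) -> bool:
    # Truncate both times-of-day (in seconds) to the clock's resolution.
    return int(stopped / resolution) == int(now / resolution)

stopped = 10 * 3600 + 37 * 60  # the clock died at 10:37:00
print(agrees(stopped, stopped + 0.4, resolution=1.0))    # True: same second
print(agrees(stopped, stopped + 0.4, resolution=0.001))  # False: off by 400 ms
```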
Now we can really split hairs and debate what “correct” even means here, because that assumes a standard to measure against. I think this is what you were getting at by bringing in a second clock? Of course, timekeeping itself is a construct, it’s not like there is some Platonic “true time” that we’re all just approximating.
Reality doesn’t have a say here. The whole notion of a correct time and date boils down to consensus and tradition. It’s been refined a lot over the centuries… we have somewhat standardized time zones and atomic clocks and the Network Time Protocol, these things all nudge us into tighter alignment but they’re still just conventions. The cracks are evident in leap years and leap seconds, hacks to correct the discrepancies between our human systems and the physical phenomena they attempt to model. Our systems work well enough for most things, but degrade noticeably once relativity enters the picture.
What's my point? There's no such thing as "correct" time, but within the framework of timekeeping conventions, the idiom about a broken clock being right twice a day is… still valid. And if you want to disregard that framework, then the whole meaning of "timekeeping" goes poof and the stopped clock is just a meaningless object to which words like "right" and "wrong" don't apply.
There kinda is a limit, yeah. It's called the "Planck time" and you can look up the number associated with it, but it's so impossibly tiny as to be meaningless unless you're doing physics.
You may already know that the universe has a maximum speed limit (the speed of light).
And you might be able to imagine that there's a minimum size: if you have an apple and cut it in half, then cut one piece in half again, and again and again, eventually you'll have a single molecule/atom/subatomic particle depending on how far you want to keep going, but eventually you'll hit a limit to how short a distance even makes sense in our current physics. This is called the Planck length.
Then if you combine these concepts, taking the shortest possible distance at the fastest possible speed, you end up with the briefest possible moment. Any briefer moment would require something faster than the fastest speed, or a distance shorter than the shortest length. These two limits put a cap on their shared relationship via time, and so you get the briefest possible time: the Planck time.
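And the arithmetic itself fits in a couple of lines, using the published CODATA value for the Planck length:

```python
# The briefest possible moment in this picture: Planck length divided by
# the speed of light gives the Planck time.
planck_length = 1.616255e-35  # metres (CODATA value)
speed_of_light = 299_792_458  # metres per second (exact by definition)

planck_time = planck_length / speed_of_light
print(f"{planck_time:.3e} s")  # ~5.391e-44 seconds
```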