TangibleLight's recent activity

  1. Comment on What programming/technical projects have you been working on? in ~comp

    TangibleLight
    (edited )
    Link

    Been really tough to motivate myself on personal projects outside work. Work is good, and interesting, but all that mental effort makes it difficult to focus after hours.

I don't really talk about work much here, maybe I could. I mainly work in image processing; the last couple months have been dominated by three projects. Two of them deal with 2D/3D registration on X-ray images: that is, given a 2D X-ray image, try to identify the 3D position and orientation of certain bones of interest. On one of these projects, the last couple weeks have been full of very uninteresting work dealing with our client's inconsistently structured datasets. I'm updating our system to ingest data from their new acquisition pipeline. For the other project we've been going through some rounds of user feedback and making adjustments to the workflow. No real algorithm work (for now), although we have some ideas for improvements in the future once we get this release out. Iteration times waiting for feedback can be slow, and I find UX work a bit tedious.

    The third project is, to me, the most interesting. The modality - multispectral images - is new to me since joining that team. I've been learning a ton from colleagues and reading literature on the subject - it's fascinating! But a lot of mental effort.

    One of the after-hours projects I had been working on was a Vulkan renderer written in Zig (last update on that here), both to learn more about Vulkan and to learn more about Zig. But, with all the graphics code in the registration projects from work, I really don't have the patience to deal with Vulkan at the end of the day.

The other after-hours project I was working on, a bit easier to find motivation for, is a tiny Python static site generator for a personal blog (last update on that here). I did make progress on this since then - rather than using tikz for figures, I created an extension that allowed me to tag ``` code blocks as executable python scripts that generated figures with matplotlib. It would automatically dump those to SVG and embed them in the page, correctly interacting with the stylesheet. It works pretty well!
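The core of that extension is small. A minimal sketch of the idea - the function name here is made up, not my actual generator, which hooks into the SSG's code-block handling:

    import io

    import matplotlib
    matplotlib.use("SVG")  # headless vector backend; no display needed
    import matplotlib.pyplot as plt

    def render_figure_block(source: str) -> str:
        """Exec a tagged code block and return the resulting figure as SVG text."""
        exec(source, {"plt": plt})  # the blocks are my own posts, so exec is fine here
        buf = io.BytesIO()
        plt.gcf().savefig(buf, format="svg")
        plt.close("all")  # reset pyplot state between blocks
        return buf.getvalue().decode("utf-8")

Inlining the returned SVG directly into the HTML (rather than linking it as an image) is what lets it inherit colors from the page stylesheet.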

However, I stopped working on that project for the fundamental reason that justified monospace text really doesn't work on mobile - and most people reading would be on mobile. It is not too hard to write monospace text that flows for multiple character widths - I did some integer arithmetic to work out ideal line widths. The problem is that in order to support mobile reading, you end up with these little sections of 2-3 words that need to be some exact number of characters long. There are only so many ways to rephrase 2-3 words to hit exactly 13 characters; for any moderately sized piece of text you end up with impossible sections.
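To make the constraint concrete, a hypothetical sketch of that integer arithmetic (all numbers made up for illustration; my actual widths were different):

    # Pick a base width whose divisors are also usable column widths, so the
    # same text reflows cleanly at full, half, and third width: 60 -> 30 -> 20.
    BASE = 60
    CELL = BASE // 3  # the narrowest width every line must still justify at

    def breaks_cleanly(line: str, cell: int = CELL) -> bool:
        """True if every reflow boundary in `line` lands between words."""
        for i in range(cell, len(line), cell):
            if line[i - 1] != " " and line[i] != " ":
                return False  # a word straddles the boundary; rephrase this cell
        return True

Every failing cell is one of those 2-3 word sections that has to be rewritten to an exact character count.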

So I had reverted back to proportional text, but that character-counting poetry was a lot of what motivated the project for me. As an alternate motivator - I really want an outlet for writing Zig. So I'm taking a stab at reimplementing my tiny static site generator in Zig. From what I gather*, there are no compliant markdown rendering libraries already available for Zig, so I've started writing my own as a first step. Parsers have nothing to do with my day job, so it's been easier to get invested in it. In isolation, it's a relatively small project. Just a straightforward input-output transformer. I think it's a good hobby-level outlet to write some Zig code without thinking too hard about managing long-lived resources.

* EDIT: After writing all this I was curious - surely this can't be true?? And I think it may not be. After a bit more thorough research, it looks like there is an implementation in the zig repo used for documentation, although I don't see whether it claims compliance with any spec. I also found koino, which is a Zig port of a Rust port of GitHub's fork of the CommonMark reference implementation.

    I'm just implementing a recursive descent parser in two phases. First, parse the block structure of the document. This is straightforward on a line-by-line basis. Each block then has runs of inline text which are each processed independently in the second phase. I wrote a little script to scrape the examples from the CommonMark spec and generate Zig test cases from them; so I'm testing against that to try to be compliant. My intent, ultimately, is to support Obsidian Flavored Markdown so I could render documents directly from my Obsidian vault, but that's a long way off.
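The scraping script is the least interesting part, but for what it's worth, a rough sketch of the approach in Python (assuming the fencing format used in spec.txt: 32 backticks plus " example", with the markdown input and expected HTML output separated by a lone "."):

    import re
    from pathlib import Path

    # Assumes a local copy of spec.txt from the commonmark-spec repository.
    spec = Path("spec.txt").read_text(encoding="utf-8")

    fence = "`" * 32
    example = re.compile(
        re.escape(fence) + r" example\n(.*?)\n\.\n(.*?)" + re.escape(fence),
        re.DOTALL,
    )

    cases = []
    for md, html in example.findall(spec):
        # The spec uses "→" to make literal tab characters visible.
        cases.append((md.replace("→", "\t"), html.replace("→", "\t").rstrip("\n")))

    print(f"scraped {len(cases)} examples")
    # ...then format each (markdown, html) pair into a Zig `test` block...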

    2 votes
  2. Comment on US Election Distractions Thread in ~talk

    TangibleLight
    Link Parent

    Missed a great opportunity with that double dash at the end.

    1 vote
  3. Comment on US Election Distractions Thread in ~talk

    TangibleLight
    (edited )
    Link Parent
    • Exemplary

    There is a rich variety of dashes! I don't know the discussion you and @updawg mentioned, so apologies if this is old news.

Clearly you're familiar with the Hyphen-Minus - and the Em Dash —.

There's also the Hyphen ‐ and the Minus − which are reserved for typography and mathematics respectively.

    And the Soft Hyphen which is hidden, by default, unless the word falls at the end of a line of text. The word is split across the two lines and the hyphen appears only then. Similar to the Zero Width Space. There's a good example text to show this on Wikipedia. Try resizing the window to see how the text reflows.

    Then there's the Non-Breaking Hyphen ‑, which is sort of the opposite of Soft Hyphen; it is never broken across lines, and keeps the words to either side together. Similar to the Non-Breaking Space.

    If a Hyphen is too narrow but an Em Dash is too wide, there is also the En Dash –.

    If Em Dash is too narrow, there's Two-Em Dash ⸺ and Three-Em Dash ⸻.

    There's the Figure Dash ‒ which is specifically aligned to digits rather than letters; so for tabular data this is better as it preserves numerical alignments.

And of course there are many symbols used in typography for other languages! For example fullwidth dashes properly align with characters and direction of text for East Asian languages. Fullwidth Hyphen-Minus －, Wave Dash 〜, and Wavy Dash 〰 behave this way.

    Note that Tilde ~ is NOT a dash, but Swung Dash ⁓ is.
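All of these are real, named code points; a quick way to inspect them in Python with unicodedata (code points taken from the list above):

    import unicodedata

    points = [0x002D, 0x2010, 0x2212, 0x00AD, 0x2011, 0x2013, 0x2014,
              0x2E3A, 0x2E3B, 0x2012, 0xFF0D, 0x301C, 0x3030, 0x2053, 0x007E]
    for cp in points:
        print(f"U+{cp:04X}  {unicodedata.name(chr(cp))}")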

Wikipedia also has a more-or-less complete list: https://en.wikipedia.org/wiki/Dash#Unicode

    5 votes
  4. Comment on Cmake strategies or alternatives for building (different) code for different platforms in ~comp

    TangibleLight
    Link Parent

It is wildly out of line, but in a good way I think. I tend to prefer an untracked CMakeUserPresets.json instead of various shell scripts for maintaining things - I don't personally have experience tracking a CMakePresets.json in git. From what I understand it is especially good for managing automated CI/CD: the runner just has to execute the appropriate --preset for its platform. In that context, the JSON feels much more coherent alongside all the other inevitable JSON and YAML configuration.

    1 vote
  5. Comment on Cmake strategies or alternatives for building (different) code for different platforms in ~comp

    TangibleLight
    Link

    I can think of two tricks that might help.

If, for some reason, the different toolchains depend on each other's build artifacts, you can have multiple toolchains in play via ExternalProject. Note, if you go this route, you're essentially using a given CMake build as a sort of driver script to initiate multiple other CMake builds as independent sub-processes. Those "inner" (external) builds can use different toolchains, and your "super" (outer) build can grab those artifacts and package them up however needed.

However I'm not quite convinced that's really necessary. If all you need is a few "primary" configurations:

    • Build embedded artifacts
    • Build PC artifacts
    • Run unit tests

Then I suggest instead writing a sensible CMake presets file: https://cmake.org/cmake/help/latest/manual/cmake-presets.7.html. Look for the toolchainFile option.
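A minimal sketch of what that file might look like - the preset names match the commands below, the toolchain file path is a placeholder, and note that toolchainFile requires presets schema version 3 (CMake 3.21+):

    {
      "version": 3,
      "configurePresets": [
        {
          "name": "embedded",
          "generator": "Ninja",
          "binaryDir": "${sourceDir}/build/${presetName}",
          "toolchainFile": "${sourceDir}/cmake/embedded-toolchain.cmake"
        },
        {
          "name": "desktop",
          "generator": "Ninja",
          "binaryDir": "${sourceDir}/build/${presetName}"
        },
        {
          "name": "CI",
          "inherits": "desktop",
          "cacheVariables": { "BUILD_TESTING": "ON" }
        }
      ],
      "buildPresets": [
        { "name": "embedded", "configurePreset": "embedded" },
        { "name": "desktop", "configurePreset": "desktop" },
        { "name": "CI", "configurePreset": "CI" }
      ],
      "testPresets": [
        { "name": "CI", "configurePreset": "CI" }
      ]
    }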

    Your build script outside cmake might be something like:

    cmake --preset embedded
    cmake --build --preset embedded
    
    cmake --preset desktop
    cmake --build --preset desktop
    
    cmake --preset CI
    cmake --build --preset CI
    ctest --preset CI
    
    1 vote
  6. Comment on Weekly US politics news and updates thread - week of October 21 in ~society

    TangibleLight
    (edited )
    Link Parent

    I used to believe that, then I saw the poll showing 47% of Americans are pro rounding up “illegal immigrants” and putting them in “militarized camps”

    That is wild. This can't be true? Is that true? What exactly were the questions they asked?

    From what I can tell this is that poll. And here's the relevant quote from the questionnaire.

    Q30. Turning to other issues, how much do you strongly favor, favor, oppose, or strongly oppose the following?

    Q30e. Rounding up and deporting immigrants who are in the country illegally, even if it takes setting up encampments guarded by the U.S. military. [3]

                     Strongly favor   Favor   Oppose   Strongly oppose   Skipped/Refused
    Sept. 2024            22            25       28           22                 3          = 100
    Mar. 2024 [3]         13            22       26           36                            = 97

    [3] The March 2024 wave used the response options Completely agree / Mostly agree / Mostly disagree / Completely disagree.

The survey was conducted among a representative sample of 5,027 adults (age 18 and up) living in all 50 states in the United States, who are part of Ipsos’s Knowledge Panel and an additional 325 who were recruited by Ipsos using opt-in survey panels to increase the sample sizes in smaller states. Interviews were conducted online between August 16 and September 4, 2024.

    Respondents are recruited to the KnowledgePanel using an addressed-based sampling methodology from the Delivery Sequence File of the USPS – a database with full coverage of all delivery addresses in the U.S. As such, it covers all households regardless of their phone status, providing a representative online sample. Unlike opt-in panels, households are not permitted to "self-select" into the panel; and are generally limited to how many surveys they can take within a given time period.

    Well, that's horrifying. So is a lot of that document.

    7 votes
  7. Comment on Everything I built with Claude Artifacts this week in ~comp

    TangibleLight
    Link Parent

I'm not thinking of things as being ephemeral enough, I suppose. On the yaml-to-json example: it's not that you're trying to write a tool to convert yaml to json; you're trying to convert a particular yaml document to json. You can spend time finding a tool or writing a short script, but that's not really the point.

    So maybe I'd personally prefer a terminal interface for the thing that lets me pipe data through it - but if someone lives in the browser I think I'm starting to see the point.

    2 votes
  8. Comment on Everything I built with Claude Artifacts this week in ~comp

    TangibleLight
    Link Parent

    Yeah, I just got through the post that @Wes linked in their comment

    https://tildes.net/~comp/1jlv/everything_i_built_with_claude_artifacts_this_week#comment-dwmq

    The closing line is pretty compelling, and I think that's "the point" I'm missing.

    This new tooling is stopping me from stopping starting.

    2 votes
  9. Comment on Everything I built with Claude Artifacts this week in ~comp

    TangibleLight
    (edited )
    Link Parent

    So I go on this long rant in another thread, but then see things like this and it makes me wonder if I'm just out of touch.

    I don't think I am.

I've looked through maybe 10 of these so far, and they all seem to wrap just a few lines of code, or some api call, in a small web app. And some of these, I have to wonder if a small web app is even the best place to put it, although that just might be my living-in-the-terminal bias speaking. For example, the YAML to JSON converter: I'd personally much prefer a cli tool that runs on standard in and out, and at that point all the boilerplate HTML/CSS wrapping it that makes it a web app is not needed. yq already does it.
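To put a number on "a few lines of code": a stdin-to-stdout version of that converter, assuming PyYAML is installed, is roughly

    import json
    import sys

    import yaml  # PyYAML, third-party

    # Usage: cat doc.yaml | python yaml2json.py
    print(json.dumps(yaml.safe_load(sys.stdin), indent=2))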

Most of the things on that page fit into this category of things that I personally think would both be easier to build and also be more functional if they were not a web app. That undercuts a lot of the value proposition of the LLM.

    There are some notable exceptions, though.

    The SQLite and Pyodide demos are fundamentally different beasts than CLI tools. A micro web app seems a fine way to put together a little demo to show things working together.

The camera settings demo makes more sense with a gui, although I have to think it would be easier with CSS filters (not dissimilar from the text-wrap-balance-nav demo), or more extensible in a notebook.

    I guess I say it makes me wonder if I'm out of touch because I don't really get the point. It feels like there is a point that I'm missing. I don't really see utility in these kinds of micro apps as finished products - they would be more useful (and easier to create) in a different medium. I usually see utility in the act of creating such apps, usually as some kind of learning exercise or exploration for the author. But if you're having the AI do it, you lose that, so what's the point?

    Do general people really find such micro-apps useful? If so then I guess I have to concede that LLMs really do enable productivity tools that would be otherwise unavailable to them. But at the same time I truly believe they're limiting themselves by the requirement of putting everything in a webapp gui. Is it good in the long run to use the LLM and stay in that local maximum? Maybe, if they wouldn't have searched for any maximum at all otherwise.

    1 vote
  10. Comment on Using AI generated code will make you a bad programmer in ~tech

    TangibleLight
    (edited )
    Link Parent

    Something about your comment reminded me of this talk:

    Up to Code - David Sankel - CppCon 2021

    I think that generative AI as-currently-exists would not have gotten so much attention if we, as a society, were not so conditioned to tolerate software failures.

If someone has only ever used self-driving vehicles, should they be allowed to drive a regular car? Arguably they wouldn't be allowed to drive the self-driving car if they don't have a license, but I hope it illustrates the point.

As I recall, Sankel doesn't suggest that all programmers should have a license. He makes a distinction between programmers and software engineers, although I don't think the exact terminology here matters too much. His equivalent example is "fixer/handyman" vs. "electrician".

    The main point is on thinking about reliability, safety, and modes of failure. In other domains, when liability and reliability are a concern, you bring in an engineer and worry about regulation. There is a distinction between new work and old work. The software industry has no real equivalents.

    That talk is pre-ChatGPT. I'm curious how it would be different if he'd given it a couple years later.

    3 votes
  11. Comment on Using AI generated code will make you a bad programmer in ~tech

    TangibleLight
    (edited )
    Link Parent

    Again the it's just programming at a higher level of abstraction compared to before. I look upon the guy who programmed roller coaster tycoon in assembly with great respect. But that doesn't also mean that I don't respect people making games in scratch or roblox.

    First: personally, the reason I respect people who create things in assembly or scratch is because of the constraints inherent in the medium; I respect the technical feat of creating great things despite those constraints. Such things don't really apply to AI-generated output.

    I suppose I would have respect for someone who creates something great out of AI-generated slop, given the poor quality of their building blocks. It's hard to build a strong castle when your bricks are made of sand. If someone manages to do so despite this, I'd be impressed. I'd be especially impressed if they manage to keep it from crumbling over time.

    To be clear, I will be impressed with the feat of defying constraints. I will not be impressed with the castle itself. It would have been stronger if it were made of better materials.

    Second: In software engineering, abstraction is not a virtue in its own right. At the end of the day we are all constrained by the same physics on the same machines. Each layer of abstraction between yourself and the metal is a step away from reality. An abstraction is only good if it brings some value that outweighs its cost. That cost-benefit analysis is what engineering fundamentally is.

    For example, I'd never advocate writing assembly in practice because structured control flow and function calls have exceedingly high benefit to reasoning and communication, and their computational cost is low.

    AI does not offer a benefit proportional to its costs. Assembly and Scratch are red herrings. If someone demonstrates that AI does bring some benefit that outweighs its costs, I'd be on board.

    So I'll interpret the remaining points in this way, as benefits of AI that outweigh costs and might make the abstraction valuable.

    You Rob Yourself of Learning Opportunities: eventually, after writing enough ai code and accepting and rejecting suggestions, I believe eventually people will be able to differentiate between bad and good code.

    Right. There's this period of time where novice programmers need to look at examples, build toy projects, and stretch their muscles a bit before they're really good at identifying good vs. bad code.

    What if we curated some list of topics they could use to stretch those muscles more efficiently? We could weed out AI hallucinations at the same time so the person isn't led astray early on.

    Follow this kind of reasoning about efficiency and you re-invent textbooks and school. AI doesn't offer any "value" here except misdirection and disengagement. Frankly, two issues I already take with current computer science education. AI will make this problem worse, not better.

    Skills You Already Have May Atrophy: going back to the car analogy

    Your argument is predicated on the idea that the off-the-shelf components are of higher quality than what you could produce on your own. This is a good argument for using well-written libraries, and the popularity of open-source demonstrates that it's a compelling argument.

    Drawing conclusions here about AI is a non-sequitur. In what way do AI tools improve the fundamental building blocks that one uses to build software? How is it beneficial to use an AI-generated gasket rather than manufacture one myself or design around an off-the-shelf component? Does the availability of an AI-generated part positively impact the design process for the rest of the engine? How will the quality of the typical engine change over time if such parts become widespread?

    You May Become Dependent on Your Own Eventual Replacement

    I tend to agree, although I suspect we have different ideas on what "different job" is likely to mean. Just as we have entire industries dedicated to maintaining or rewriting "legacy" code, I expect a similar attitude and industry around "AI" code.

    Do You Even Own AI Generated Code

Large (private) organizations tend to be very risk-averse and very protective of IP. I see litigation around this as an inevitability, not something that can be ignored because "politicians" will favor national security over code licensing. If anything I would expect the situation where the private sector litigates and litigates over this, while conglomerates and state actors break their own rules. This already happens. For the rest of the private sector, any such liability is still going to be a big problem.

    9 votes
  12. Comment on Using AI generated code will make you a bad programmer in ~tech

    TangibleLight
    Link Parent

    There's also something to be said about the level of control and presence when using traditional-style code generators or static analysis that I think is missing in AI prompts. I admit I'm not well-practiced in using AI for these things so maybe there are ways to be similarly present while code is produced, but from what I've seen proponents of AI tools don't use it that way.

    More algorithmic code generators force you to engage with the structure a bit, find patterns, and synthesize some way to generate those patterns. It also forces you to contend with the mere fact that the repetition is tedious, and likely indicates some greater architectural problem. You might do a rough cost-benefit analysis and determine it's better to leave the architectural problem in order to ship faster - but at least you're engaged with that decision.

    AI feels more like handing the rote task off to an intern and waiting for them to return after they do all the tedium. You aren't even there while they do the work, so how can you engage with the problem? If you have this AI-shaped hammer, everything looks like a tedious nail, and I worry about the larger architectural issues that will fester if people don't continue to engage with the systems they maintain. All the studies I've seen about increased code churn and bugs seem to line up.

    I do have to concede that AI does seem to do a fine job at the tedious tasks, so if you do engage with the problem and decide that reviewing the output of a hyper-flexible code generator is the best approach, then sure, let the AI do it faster.

    8 votes
  13. Comment on Switching to Linux, looking for distro recommendations in ~tech

    TangibleLight
    Link

    So I'm not quite sure I recommend this, but I'll share my setup and let you decide.

    I use Debian. It is the debian-based distro, after all.

I cannot recommend this if you want a streamlined, beautiful, cohesive experience out of the box. I actually prefer it that way, since it means each component is relatively unchanged, correctly aligns with online documentation, and is easy to swap in/out with other components I prefer.

    Steam, PlayOnLinux, and Wine all work fine.

    Installing arbitrary .deb packages and adding apt repositories works fine.

If you enable non-free firmware, Nvidia works about as well as on any other distro. Although, admittedly, I haven't tested Debian/Nvidia in some years. I'm sure Pop!OS is easier out-of-the-box.

    Cinnamon, KDE, XFCE, MATE, etc. all work fine. I currently use i3. It's all fine and easy to switch without reinstalling. I have not tested Gnome 3 as I dislike it. You can probably get some DE to look like Win 10, but I don't have concrete advice here.

    2 votes
  14. Comment on I now think a heretical form of Christianity might be true in ~humanities

    TangibleLight
    (edited )
    Link Parent

    Thanks for the link! This clarifies a lot.

    So the objection I had with "empirically approximate those other terms" was to do with Barnes's Premise 7. Specifically Barnes says physics provides p(α|LB) - I'm not sure that's true but I am no physicist so I'm out of my depth with regard to what's used in practice. I'll take the background information he provides at face value.


    On pages 15 and 16 he gives a description on how physics does this in practice, and does list the kinds of assumptions on distributions that I expected.

    For dimensional parameters, there is an upper limit on their value within the standard models. [...] [For example] The Planck mass represents an upper boundary to any single-particle mass scale in our current theories. [...] Within these finite ranges, the obvious prior probability distribution is flat between the limits, as other distributions need to introduce additional dimensionful parameters to be normalised.

    These additional parameters would not be parameters of the model, though; they are strictly parameters of the probability distribution and thus the likelihoods. I don't think you can simply discard these when the argument hinges on the likelihoods.

For dimensionless numbers, we have a few cases. Some are phase angles, and so a flat prior over [0, 2π) is reasonable. Some, such as the Yukawa couplings, are connected to masses of particles and thus subject to the Planck scale upper limit. Others vary over an infinite range. But even in the case of a finite range, physicists do not usually postulate a flat prior. Rather, dimensionless parameters are expected a priori to be of order unity. [...] A number of heuristic (read: hand-waving) justifications of this expectation are referenced in Barnes (2018). [...] As we will see below, this is sufficient for the upper-limit estimates we need.

More assumptions on the distribution. For physics, reasonable ones, but for this argument I don't think you can disregard them entirely.

My issue with both these points is that they hold L fixed. Barnes always takes L as a given in his probabilities. For physics, this is fine, since we're only interested in refining our approximation of our universe's L.

    The bounds on α are derived from L, so the arguments on assuming distributions fall apart if you let L vary. Then the bounds vary, and you have no "natural" values or distributions for the free parameters. You can't compute p(α|LB) and the concrete argument falls apart. It's worse if you also consider L with radically different parameterizations (ie if α has high or infinite dimension).

    So implicitly, one of the premises of Barnes's argument is that no other L could give rise to life. I disagree.


Now, with that said, I don't think this changes the specific conclusion of Barnes's argument that p(U|LNB) << p(U|LGB). Even if you let L vary - especially if you let L vary - p(U|NB) <<< p(U|GB). This seems true by construction; there's an assumption that G has some intent to create life, so of course p(U|GB) is relatively high.


[1] For two theories T1 and T2, in the context of background information B, if it is true of evidence E that p(E|T1 B) >> p(E|T2 B), then E strongly favours T1 over T2.

[2] The likelihood that a life-permitting universe exists on naturalism is vanishingly small.

[3] The likelihood that a life-permitting universe exists on theism is not vanishingly small.

[4] Thus, the existence of a life-permitting universe strongly favours theism over naturalism.

    (Page 6)

    The last leap in the argument seems to be that p(U|LNB) << p(U|LGB) implies p(N|ULB) << p(G|ULB). This doesn't follow.
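To see why it doesn't follow, write the flip in odds form (a standard Bayes identity, in Barnes's notation):

    \frac{p(N \mid ULB)}{p(G \mid ULB)} = \frac{p(U \mid LNB)}{p(U \mid LGB)} \cdot \frac{p(N \mid LB)}{p(G \mid LB)}

The first ratio being vanishingly small only settles the posterior if the prior-odds ratio p(N|LB)/p(G|LB) doesn't blow up in the opposite direction, which is exactly what's at issue below.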

    If Barnes is allowed to claim:

    I contend that there are not, in fact, ∼ 10^136 possible reasons for God to create that have comparable plausibility to that of a life-permitting universe.

    (Page 22)

    Then I think I can also claim: N supports many many more universes - life-permitting or not - than G does. Especially so if you allow L to vary or allow α to take higher and higher dimension. p(G|LB) << p(N|LB).

    If God is infinite then of course this falls apart; but then I can claim various infinities under N and we're at the same point. (Also worth pointing out a truly infinite God could easily have 10^136 better reasons to create, so then p(U|GLB) isn't necessarily high.)

    So to apply the Bayesian flip and make that final leap in the argument, there's an undecidable question: How does p(U|LNB)p(N|LB) relate to p(U|LGB)p(G|LB)? Remember we have p(U|LNB) << p(U|LGB) and also p(N|LB) >> p(G|LB).

All in all, I don't find the statistical approach particularly compelling either way.


    I think Barnes missed the point with his response to the multiverse objection (Page 24). Yes, the measure problem means you can't calculate certain ratios, but it's a non-issue if you assume that the multiverse exists. It's a set. Set membership is not contingent on the likelihood of set membership. Probability distributions on that set don't even need to exist.

    5 votes
  15. Comment on I now think a heretical form of Christianity might be true in ~humanities

    TangibleLight
    Link Parent

    The strongest claim one could make is, all things being equal, the constants falling in the life-permitting range is better predicted by theism than naturalism.

It seems clear you have some reference in mind that I'm not aware of. Could you point me toward the argument you're referring to here? It would be helpful to me to see how exactly Bayes' theorem is applied in the argument and what assumptions the argument may or may not make.

    I don't see how this could possibly be true without making unfounded assumptions about how assuming naturalism or theology change the distribution.

    Since we are talking about epistemic probability and Bayesian statistics, we don't need to know the actual intrinsic probability distribution.

    You don't need to assume the full distribution, but you do need some information about it via the other terms in Bayes' formula. In practice you'd take measurements to empirically approximate those other terms, and that's the real power of Bayesian statistics. But we can't do that in this domain, which is why I say any statistical argument here must be weak (or take unfounded/unfalsifiable assumptions, or prove a weak result).
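Concretely, the terms in question (Bayes' theorem for a hypothesis H and evidence E, given background B):

    p(H \mid EB) = \frac{p(E \mid HB) \, p(H \mid B)}{p(E \mid B)}

Knowing the left side requires some handle on the prior p(H|B) and the normalizer p(E|B), and in ordinary applications those are pinned down by repeated measurement.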

    5 votes
  16. Comment on I now think a heretical form of Christianity might be true in ~humanities

    TangibleLight
    (edited )
    Link Parent

    To apply Bayes you still need a model; you can approximate it, but that requires multiple samples. So to apply it in this domain you have to make an assumption. In doing so, you have the freedom to obtain any result you want.

    If you have multiple samples then you can use Bayes to make some assertions about which models could possibly be valid, and you can actually prove results independent of assumptions. But we don't, so we can't.

AFAIK the strongest claim we can make, even with Bayes, is: the universe which we observe could exist (edit: i.e. the observation supports the model). We don't really need Bayes to help us figure that out, though.

There may certainly be other statements you can prove that I'm not aware of, but I expect them to be similarly vague. Anything stronger must presuppose a model of the distribution of universes in some multiverse, which doesn't necessarily have bearing on our reality.

    5 votes
  17. Comment on I now think a heretical form of Christianity might be true in ~humanities

    TangibleLight
    Link Parent

    We have evidence for the existence of exactly one universe.

    This seems like the real point. The statistics you can do with only one sample are very limited, and heavily depend on the model you choose. By choosing various models you can come up with just about any result you want. Therefore any statistical argument in this domain has to be very weak. (or strongly prove a weak result, I suppose)

    I'd also suggest that, by considering likelihood of different values, you're implicitly presupposing that multiple different values could be valid. You can interpret this as some space of hypothetical universes in your statistical model, and you're asking about the likelihood of sampling our universe from that "multiverse".

    If you reject that, and say only the one value we see is valid, then we know its likelihood - 1 - but I don't think that's a useful model for this discussion. The creationist might like it, though.

    8 votes
  18. Comment on I now think a heretical form of Christianity might be true in ~humanities

    TangibleLight
    (edited )
    Link Parent

    psychophysical harmony

    Honestly, basically just gibberish as far as I can tell.

Following the link in that quote, and then the link in the linked page, I don't think it's quite gibberish but I'd never heard it expressed in those terms. I don't personally find much value in that phrase; I'd rather the author just called it what it seems to be: the hard problem of consciousness.

    The hard problem is a real problem, and it is hard; but it is not proof of the supernatural and it is not disproof of the physical. How the author uses it, I'd call it an appeal to ignorance rather than an appeal to incredulity. That's probably splitting hairs. The point is the same, the argument as a whole seems weak to me.


    E: Reading more, it doesn't seem to be directly about the hard problem of consciousness.

The harmony in question is the general alignment of subjective experience (psycho) with physical stimulus/response. The example both articles discuss is pain and pleasure responses: the qualia of a pain response are unpleasant and lead to avoidance behavior; the qualia of a pleasure response are pleasant and lead to seeking behavior.

    I really don't understand this argument. It seems totally and completely answered by natural selection (notwithstanding the hard problem, anyway).

    4 votes
  19. Comment on Parent on deathbed? Go or not? in ~health.mental

    TangibleLight
    Link Parent

    Given all the context about the strained relationship, I would personally regret missing the memorial service more than I would regret missing a last visit. I tend to agree with friend #2.

If it is a person you care for, or if you care about how they think of you, I reverse my opinion completely. But given a relationship as strained as OP describes, and the cold pragmatics of interacting with a person with dementia, I don't think there's any value in that trip.

    16 votes
  20. Comment on AI and the American smile in ~humanities

    TangibleLight
    (edited )
    Link Parent

    While I was writing my comment I thought about adding a section at the end addressing situations where LLMs are useful. You brought up both of the examples I considered, grammar checking and new languages, so here's my take on those from this lens of basic statistics.

    Both are contexts where uniformity, middling quality, and lack of soul are acceptable or beneficial. For perfect grammar, you want to eliminate variance. To learn a new language, it's an improvement to regress to the mean.

    In general the people I see who are proud of using generative AI appear to believe it has a place everywhere. Your comment I replied to seemed to suggest that AI can compose meaningful text, if only you manage to prompt it just the right way. Until someone gives a nuanced description of their opinions as you've just done, they are indistinguishable from grifters trying to push AI into contexts where it doesn't make sense. So I apologize for incorrectly placing you in that group.

    The real argument in my first comment is that I categorically reject the idea that generative AI should have any place in the humanities, aside from some limited applications in language and visual arts. I think it's more important to be critical and intolerant of AI grift than to be polite to benign enthusiasts, so the "derisive sneer" is justified.

    I didn't address it in that comment, but I also categorically reject that they have any place in the sciences. The argument from basic statistics is that higher-accuracy events have lower probability, so generative AI can't be applied to contexts where accuracy or correctness is important.

The new o1 demos certainly look accurate, so they challenge this accuracy argument, but I'm still skeptical. I haven't had a chance yet to interact with o1 myself, but what I've heard from others is that the improvement from 4o isn't as substantial as the demos make it seem. My experience with other "multi-step workflow" AI products is not good, and OpenAI hasn't given me much confidence that o1 has any real secret sauce over the others apart from sheer volume of compute resources. The rake kickflip meme comes to mind.

    Basically, knowing that LLMs work in a certain way does not translate to knowing how well they perform in specific tasks and situations. In fact, it can stop you from exploring these possibilities, which in turn limits your practical understanding of them.

    Fair enough. The models are black boxes, so it is impossible to make accurate predictions about how well any one performs in a given context.

But I counter that basic statistics is a good lens for predicting which contexts generative AI as a technology could possibly do well in. And as long as they remain black boxes, it will be impossible to reliably engineer their output to do well in non-obvious contexts.

    1 vote