Showing only topics in ~tech with the tag "development.software"
    1. How do you manage separate development environments on your computer?

      Hello Tildes!

      There's an open-source app I would like to work on and contribute code to, but it uses a toolchain that I'm not terribly familiar with (Deno), and I'm not a huge fan of letting tools like this have full access to my system and files.

      Do any of you use a system to containerize different development environments for software development? I could definitely use a standard Docker/Podman container to run the app, but I'm not aware of a good system where you can edit a program's source in an IDE, make changes, build the app, open a local port, and save your new code, all within a sandboxed environment.
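
      For reference, the closest I've gotten is bind-mounting the source into a throwaway container and running the toolchain there, while my IDE edits the same files from the host. A rough sketch of what I mean (the port and the "dev" task are assumptions on my part, not anything specific to the app):

      ```sh
      # Only the bind-mounted project directory is visible inside the container;
      # my IDE edits the same files from the host, so changes show up immediately.
      # :Z relabels the mount for SELinux (Podman on Fedora); drop it for Docker.
      podman run --rm -it \
        -v "$PWD":/app:Z \
        -w /app \
        -p 8000:8000 \
        --entrypoint /bin/bash \
        denoland/deno:latest

      # Then, inside the container:
      deno task dev   # assuming the project defines a "dev" task that serves on port 8000
      ```

      That covers running, building, and the local port, but my editor and its language server still run unsandboxed on the host, which is the part I haven't solved.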

      If anyone uses a system like this or something related, I would love to hear about it and share ideas.

      14 votes
    2. If you're a programmer, are you ever going to believe an AGI is actually 'I'?

      First, I am emphatically not talking about LLMs.

      Just a shower thought kinda question. For most people, the primary issue is anthropomorphizing too much. But I think programmers see it differently.

      Let's say someone comes up with something that seems to walk and talk like a self-aware, sentient AGI duck. It has a "memories" db, it learns and adapts, it seems to understand cause and effect, actions and consequences, truth vs. falsehood, it passes Turing tests like they're tic-tac-toe, it recognizes itself in the mirror, yada yada.

      But as a developer, you can "look behind the curtain" and see exactly how it works. (For argument's sake, let's say it's a FOSS duck, so you can actually look at the source code.)

      Does it ever "feel" like a real, sentient being? Does it ever pass your litmus test?

      For me, I think the answer is, "yes, eventually" ... but only looong after other people are having relationships with them, getting married, voting for them, etc.

      31 votes
    3. Is AI actually useful for anyone here?

      Sometimes I feel like there's something wrong with how I use technology, or I'm just incredibly biased and predisposed to cynicism or something, so I wanted to get a pulse on how everyone else feels about AI, specifically LLMs, and how you use them in your professional and personal lives.

      I've been messing with LLMs since GPT-3. I was initially very impressed by the technology, but that view has since evolved into a more nuanced one: I think they're very good at a specific thing and not great at anything else.

      I feel like, increasingly, I'm becoming a rarity among tech people, especially executives. I run cybersecurity for a medium-sized agency, and my boss is the CIO. Any time I, or any of her direct reports, write a proposal, a policy, a report, or basically anything meant to be distributed to a wide audience, she insists on us "running it through Copilot", which to her just means pasting the whole document into Copilot chat and taking the output.

      It inevitably takes a document I worked hard on, balancing tone, information, brevity, professional voice, and technical detail, and turns it into a bland, wordy mess. It's unusable crap that I then have to spend more time on to make it sound normal. My boss's "suggestions" and "ideas" are almost always very obviously copy-pasted answers from Copilot chat too.

      I see people online talk about how LLMs have made them so much faster at development, but every time I've used them in that field, they can toss together a quick prototype for something I likely could have googled, yet there are frequently little hidden bugs in the code. If I try to use the LLM to fix those bugs, it inevitably just makes them worse. Every time I've tried to use AI in a coding workflow, I spend less time thinking about the control flow of the software and more time chasing down weird, esoteric bugs. Overall it's never saved me any time at all.

      I've used them as a quick web search, and while they do save me from trawling through a lot of the hellhole that is the modern internet, with its blogspam, ads, and nonsense, a lot of the time they just hallucinate answers. I've noticed they're decent at providing results when results exist, but if results don't exist, or I'm asking something that doesn't make sense, they fall flat on their face, because they'll just make things up in order to sound convincing and helpful.

      I do see some niches where the stuff has been useful. Summarizing large swathes of documents, where the accuracy of the summary doesn't matter much, is a little useful. If I were tasked with looking through 300 documents and deciding which ones were most relevant to a project, and I only had an hour to do it, I think that's a task it would do well with. I can't review or even skim 300 documents in an hour, and even though an LLM would very likely be wrong about a lot of it, at least that's something.

      The thing is, I don't frequently run into tasks where accuracy doesn't matter. I doubt most people do. Usually when someone asks for an answer to something, or you want to actually do something useful, the hidden assumption is that the output will be correct, and LLMs are just really bad at being correct.

      Meanwhile, the internet is full of AI evangelists who talk about their AI stack made up of SaaS products I've never even heard of, chained together. They talk about how insanely productive it's made them, how it's like being superhuman, and how without it they'd be left behind.

      I'm 99% sure that most of this is influencer clickbait capitalizing on FOMO to keep the shared delusion of LLMs' usefulness going, usually because they have a stake in the game. They either run an AI startup, are involved in a company that profits off of AI being popular, are influencers who make AI content, or they just have Nvidia in their stock portfolios like so many of us do.

      Is there anyone out there who feels this technology is actually super useful and doesn't fall into one of those categories?

      If so, let me know. Also, let me know what I'm doing wrong. Am I just a Luddite? A crotchety old man? Out of touch? I'm fine if I am; I just want to know once and for all.

      80 votes