Showing only topics with the tag "development".
    1. I had an idea for a Crusader Kings-like game, but about rich families in the Victoria-to-modern era. What could go wrong?

      I had an idea for a game some weeks ago, just as the title says. It would be something like Crusader Kings, which is all about dynasties and roleplay, but set in more modern eras, from the beginning of the industrial revolution until today, or maybe the future; we'll see. And instead of kingdoms, it's all about businesses: owning global company empires, being a kind-hearted local chain owner or a sociopathic cutthroat in the 1% that owns the world. Up to you.

      This idea is still stuck with me, and I've wanted to get back into game development anyway, so I might as well try it for fun and see what happens.

      I picked Godot because 1) it's open source, 2) it's going to be fun to see how much it has developed in the last decade, 3) it's free, and especially 4) I don't want to use a commercial engine and risk being affected by something similar to Unity's runtime fee fiasco.

      I still have to finish some tutorials and make some simple games to get a grip on the engine and see how everything works, but as a data analyst I already have programming foundations. This project is monumental for someone like me, but I think it's doable.

      I have a very rough idea of how the code for the AI will work. It will use "ticks" like CK, each tick being a day, with some events firing weekly or monthly. Based on their stats and traits, each individual character will calculate how likely they are to accept or reject a given event. Events can be about buying shares, accepting marriage proposals, going on a trip, etc.
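
      To make that concrete, here's a rough sketch of the tick loop (TypeScript just for illustration; the real thing would be GDScript, and every name, trait, and number here is made up):

      type Trait = "greedy" | "cautious" | "sociable";

      interface Character {
        name: string;
        traits: Trait[];
      }

      interface GameEvent {
        kind: "buy_shares" | "marriage_proposal" | "trip";
        cadence: "weekly" | "monthly";
        baseChance: number;                              // 0..1 before trait modifiers
        traitModifiers: Partial<Record<Trait, number>>;  // additive nudges per trait
      }

      // Chance that a character accepts an event, clamped to [0, 1].
      function acceptanceChance(c: Character, e: GameEvent): number {
        let p = e.baseChance;
        for (const t of c.traits) p += e.traitModifiers[t] ?? 0;
        return Math.min(1, Math.max(0, p));
      }

      // One tick = one day; weekly events fire every 7th day, monthly every 30th.
      function processTick(day: number, chars: Character[], events: GameEvent[]): void {
        const due = events.filter((e) =>
          e.cadence === "weekly" ? day % 7 === 0 : day % 30 === 0
        );
        for (const c of chars) {
          for (const e of due) {
            if (Math.random() < acceptanceChance(c, e)) {
              console.log(`day ${day}: ${c.name} accepts ${e.kind}`);
            }
          }
        }
      }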

      My main worry here is whether GDScript is good enough to handle "intensive" algorithms. If not, I can always use C#, or C++ if I really have to, and port the problematic algorithms.

      Another worry is which database would suit this best, but I'll cross that bridge when I get there.

      As for the world, initially I was thinking of using the real one, but I realized that I may not want to deal with... well, accuracy. For example, I don't want these businesses to exist in a vacuum; I'm going to try to make a system where the world's political events and these businesses interact. I want to create events like: a country invades another, which creates demand for weapons, and if you own a weapons factory, good news for you! If you own businesses in the invaded country, well, sucks to be you.
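
      Roughly, I picture each world event applying modifiers that a business's revenue tick multiplies in. A made-up sketch, in the same illustrative TypeScript as above:

      // A world event nudges demand per sector and disrupts whole countries.
      interface WorldEvent {
        demandBySector: Record<string, number>;       // e.g. weapons: 0.5 = +50%
        disruptionByCountry: Record<string, number>;  // e.g. invaded country: -0.6
      }

      interface Business {
        sector: string;
        country: string;
        baseRevenue: number;
      }

      // Revenue for one tick: base revenue times every active event's modifiers.
      function revenueThisTick(b: Business, active: WorldEvent[]): number {
        let mult = 1;
        for (const ev of active) {
          mult *= 1 + (ev.demandBySector[b.sector] ?? 0);
          mult *= 1 + (ev.disruptionByCountry[b.country] ?? 0);
        }
        return b.baseRevenue * mult;
      }

      // A war: weapons demand up 50%, everything in the invaded country down 60%.
      const war: WorldEvent = {
        demandBySector: { weapons: 0.5 },
        disruptionByCountry: { eastland: -0.6 },
      };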

      So I want to do those kinds of events, but without needing to worry about things like "Portugal would never invade Japan. What is your AI thinking!?" or "Why is Greece an industrial powerhouse?". If a big studio like Paradox has trouble fine-tuning the hundreds of nations in their games, I by my lonesome certainly won't be able to.

      So I'm thinking of just making a fictional world, populated by several countries and empires, though not as many as the real world has. This way I can fine-tune it to my liking without worrying about real-world accuracy. That's another challenge in itself, with its own cliffs, but it's more doable.

      And so far, that's it. After I'm done with the learning phase, I'm going to start a proper planning phase, lay down some key mechanics and develop a prototype.

      I wrote this post as a way to put my thoughts down and double-check with myself whether the idea is good.

      But also to check with the Tildes community: do you have any input? It can be anything: ideas, suggestions, warnings, problems you know I'll face, etc. I'll appreciate anything you can give me.

      22 votes
    2. How do you manage separate development environments on your computer?

      Hello Tildes!

      There's an open-source app I would like to work on and contribute code to, but it uses a toolchain that I'm not terribly familiar with (Deno), and I'm not a huge fan of letting tools like this have full access to my system and files.

      Do any of you use a system to containerize different development environments? I could definitely use a standard Docker/Podman container to run the app, but I'm not aware of a good system where you can edit a program's source in an IDE, make changes, build the app, open a local port, and save your new code, all within a sandboxed environment.

      If anyone uses a system like this or something related, I would love to hear about it and share ideas.

      13 votes
    3. If you're a programmer, are you ever going to believe an AGI is actually 'I'?

      First, I am emphatically not talking about LLMs.

      Just a shower thought kinda question. For most people, the primary issue is anthropomorphizing too much. But I think programmers see it differently.

      Let's say someone comes up with something that seems to walk and talk like a self-aware, sentient, AGI duck. It has a "memories" db, it learns and adapts, it seems to understand cause and effect, actions and consequences, truth v falsehood, it passes Turing tests like they're tic-tac-toe, it recognizes itself in the mirror, yada.

      But as a developer, you can "look behind the curtain" and see exactly how it works. (For argument's sake, let's say it's a FOSS duck, so you can actually look at the source code.)

      Does it ever "feel" like a real, sentient being? Does it ever pass your litmus test?

      For me, I think the answer is, "yes, eventually" ... but only looong after other people are having relationships with them, getting married, voting for them, etc.

      31 votes
    4. Is AI actually useful for anyone here?

      Sometimes I feel like there's something wrong with how I use technology, or I'm just incredibly biased and predisposed to cynicism or something, so I wanted to get a pulse on how everyone else feels about AI, specifically LLMs, and how you use them in your professional and personal lives.

      I've been messing with LLMs since GPT-3. I was initially very impressed by the technology, but that view has since evolved into a more nuanced one: I think they're very good at a specific thing and not great at anything else.

      I feel like, increasingly, I'm becoming a rarity among tech people, especially executives. I run cybersecurity for a medium-sized agency, and my boss is the CIO. Any time I, or any of her direct reports, write a proposal, a policy, a report, or basically anything meant to be distributed to a wide audience, they insist on us "running it through Copilot", which to them just means pasting the whole document into Copilot chat and taking the output.

      It inevitably takes a document I worked hard on, balancing tone, information, brevity, professional voice, and technical detail, and turns it into a bland, wordy mess. It's unusable crap that I then have to spend more time on to make it sound normal. My boss's "suggestions" and "ideas" are almost always very obviously copy-pasted answers from Copilot chat, too.

      I see people online talking about how LLMs have made them so much faster at development, but every time I've used them in that field, they can toss together a quick prototype of something I likely could have googled, yet there are frequently little hidden bugs in the code. If I try to use the LLM to fix those bugs, it inevitably just makes them worse. Every time I've tried to use AI in a coding workflow, I've spent less time thinking about the control flow of the software and more time chasing down weird, esoteric bugs. Overall it has never saved me any time at all.

      I've used them as a quick web search, and while they do save me from trawling through a lot of the hellhole that is the modern internet (blogspam, ads, and the nonsense people write online), a lot of the time they just hallucinate answers. I've noticed they're decent at providing results when results exist, but if results don't exist, or I'm asking something that doesn't make sense, they fall flat on their face, because they will just make things up in order to sound convincing and helpful.

      I do see some niches where the stuff has been useful. Summarizing large swathes of documents, where the accuracy of the summary doesn't matter much, is a little useful. Like if I were tasked to look through 300 documents and decide which ones were most relevant to a project, and I only had an hour to do it, I think that's a task it would do well with. I can't review or even skim 300 documents in an hour, and even though an LLM would very likely be wrong about a lot of it, at least that's something.

      The thing is, I don't frequently run into tasks where accuracy doesn't matter. I doubt most people do. Usually when someone asks for an answer to something, or you want to actually do something useful, the hidden assumption is that the output will be correct, and LLMs are just really bad at being correct.

      And yet the internet is full of AI evangelists talking about their AI stacks, made up of SaaS products I've never even heard of chained together. They talk about how insanely productive it's made them, how it's like being superhuman, and how without it they'd be left behind.

      I'm 99% sure that most of this is influencer clickbait capitalizing on FOMO to keep the shared delusion of LLMs' usefulness going, usually because they have a stake in the game. They either run an AI startup, are involved in a company that profits off AI being popular, are influencers who make AI content, or they just have Nvidia in their stock portfolio like so many of us do.

      Is there anyone out there that feels this technology is actually super useful that doesn't fall into one of those categories?

      If so, let me know. Also, let me know what I'm doing wrong. Am I just a Luddite? A crotchety old man? Out of touch? I'm fine if I am, I just want to know once and for all.

      80 votes
    5. The web could be so much more beautiful

      Back in high school when I was writing essays, my teacher always demanded justified text, because simple left-aligned or right-aligned text looked ugly. Even back then, as a totally rebellious teenager, I agreed with her. Print has used it for hundreds of years; why shouldn't we?

      The web has always resisted this development because it was difficult. Yes, the CSS property text-align: justify exists, but browsers were always missing the crucial functionality of hyphenating words. That led to very ugly justified text with so-called "rivers" of whitespace, because the spaces got so large. Begrudgingly, I got used to it.

      I was surprised to learn that all major browsers have supported the new hyphens CSS property since late 2023. It adds exactly that crucial functionality. I was stunned, immediately tried it out, and oh look, the web is so much more beautiful now.

      You can try it out yourself here on Tildes! Just right-click a comment, click "Inspect", and when the dev console pops up, add

      text-align: justify;
      hyphens: auto;
      

      to p, which is the selector for the paragraph HTML tag in which all text posts on Tildes are rendered.
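
      Or do the same thing from the console directly (one caveat: hyphens: auto only kicks in when the page declares its language, e.g. <html lang="en">, because hyphenation dictionaries are per-language):

      // Apply the same two properties to every paragraph on the page.
      document.querySelectorAll("p").forEach((p) => {
        p.style.textAlign = "justify";
        p.style.setProperty("hyphens", "auto");
      });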

      It looks so much better! But I do wonder why it hasn't spread around the web more. Am I the only one? Am I being nitpicky? I feel like the improvement is stark, for functionally no extra work. I even installed a browser extension that augments websites' CSS so I can apply this automatically on most sites.

      31 votes
    6. If you enjoy very difficult puzzle games, try Epigraph

      Epigraph has been a joy, especially when you consider that it's only $3.

      I love puzzle games like Portal, The Outer Wilds, etc., but when I try to explore further in the genre, I often struggle to find many that provide a sufficient challenge.

      I found that Epigraph, while short overall, provided a solid 4-6 hours of playtime.

      The goal of the game is to decipher a series of stones and tablets containing a totally unknown language.

      The Zachtronics games are also phenomenal and probably even more difficult overall if you're like me and looking for a challenge.

      37 votes
    7. Non-engineers' AI coding & corporate compliance?

      Part of my role at work is security policy & implementation. I can't figure this out, so maybe someone here will have some advice.

      With the advent of AI coding, people who don't know how to code are starting to use AI to automate their work. This isn't new - previously they might have used other low-code tools like Excel, UIPath, n8n, etc., but those still required learning the tool. Now anyone can "vibe code" and get an output, which is fine for engineers who understand how the output should work and can design how it should be tested (edge cases, etc.).

      A team came up to me saying they'd managed to automate their work, which is good, but they did it with ChatGPT. The code works as they expected, but they don't fully understand how it works, and of course they're deploying it "to production" - meaning they're setting up an environment that's supposed to be for internal tools, but it uses real customer data fed in from the production systems.

      If you're an engineer, this usually violates a lot of policies: the code should be peer reviewed by people who understand what it does (incl. the business context), QA should test it, think about edge cases and the best ways to exercise them, and sign it off, and the code should be developed & tested in a non-production environment with fake data.

      I can't think of a way non-engineers can do this - they cannot read code (and it gets worse if you need two people on the same team to review each other), and if you're outsourcing it to AI, the AI company doesn't accept liability, nor can you retrain the AI from postmortems. The only option is to include lessons learned in the prompt, and I guess at some point that becomes one long holy bible everyone has to paste into the limited context window. Non-engineers also aren't trained to work on non-production data (if you ever try, they'll usually claim the data doesn't match production - which I think is because they aren't trained to design and test for edge cases). The only direct solution is asking engineers to review the code, but engineers aren't cheap and their time is best spent on something more important.

      So far I think the best way to approach this problem is to think of it like Excel - formulas are always safe to use: they don't send data to the internet, they don't create malware, etc. The worst they can do is probably corrupt that file or hang your PC. And people didn't know how to write VBA, so they never did; now you have people copy-pasting VBA code they don't understand. The new AI workspace has to be built with technical guardrails that the AI is limited to. I think it has to be done in some low-code tool that people using AI have to go through (say, n8n): blocks that do computation can be used freely, while blocks that send data to the intranet/internet or run arbitrary code require approval before use. And engineers can build safe blocks, such as a Slack block that can only send messages to the corporate workspace.
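
      In rough TypeScript, the kind of guardrail I have in mind looks something like this (every name here is invented for illustration):

      type Risk = "safe" | "needs_approval";

      interface Block {
        name: string;
        risk: Risk;
        run: (input: unknown) => unknown;
      }

      // The catalogue of blocks non-engineers are allowed to wire together.
      const registry: Block[] = [
        { name: "sum_column", risk: "safe", run: (rows) => rows },          // pure computation
        { name: "http_post", risk: "needs_approval", run: (body) => body }, // would call out
      ];

      // Stand-in for a real approval workflow (ticket, security sign-off, ...).
      function isApproved(blockName: string): boolean {
        return false; // deny by default
      }

      function execute(name: string, input: unknown): unknown {
        const block = registry.find((b) => b.name === name);
        if (!block) throw new Error(`unknown block: ${name}`);
        if (block.risk === "needs_approval" && !isApproved(block.name)) {
          throw new Error(`block ${name} requires approval before use`);
        }
        return block.run(input);
      }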

      Has your workplace adjusted its policies for this AI epidemic? Or do you have other ideas you'd like to share?

      23 votes