  Showing only topics in ~comp with the tag "discussion".
    1. Should C be mandatory learning for career developers?

      The year is 2025. The C programming language is something like 50 years old now - a dinosaur within the fast-moving environment of software development. Dozens of new languages have cropped up through the years, with languages like Rust and Go as prime contenders for systems-level programming. Bootstrapping a project in C these days will often raise eyebrows or encourage people to dismiss you out of hand. Personally, I've barely touched the language since I graduated.

      Now, with all that said: I still consider learning and understanding C to be key for having an integrated, in-depth understanding of how computers and programming really work. When I am getting a project up and running, I frequently end up running commands like "sudo apt install libopenssl-dev" without giving much thought to what's going on there. I know that it pulls some libraries onto my computer so that another program can use them, but without the requisite experience of building and compiling a library, it's kind of difficult to understand what it's all about. I know that other languages will introduce this concept, but realistically everything is built to bind to C libraries.
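
      To make that binding concrete, here is a minimal sketch in Python (since that's the kind of high-level language I actually use) of calling straight into a C shared library. It assumes a Linux system where the OpenSSL crypto library is installed; the specific library and function are just illustrative examples of the general idea, not a recommendation.

      ```python
      # Minimal sketch: how high-level languages end up calling into C libraries.
      # Assumes a Linux system where the OpenSSL crypto library is installed
      # (the kind of thing "libssl-dev"-style packages provide).
      import ctypes
      import ctypes.util

      # Locate the shared library that the package manager put on the system.
      path = ctypes.util.find_library("crypto")  # e.g. "libcrypto.so.3"
      libcrypto = ctypes.CDLL(path)

      # Declare the C signature: const char *OpenSSL_version(int t);
      libcrypto.OpenSSL_version.argtypes = [ctypes.c_int]
      libcrypto.OpenSSL_version.restype = ctypes.c_char_p

      # 0 is OPENSSL_VERSION, the full version string.
      print(libcrypto.OpenSSL_version(0))
      ```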

      System libraries are only one instance of my argument though. To take a more general view, I would say that learning C helps you better understand computers and programming. It might be a pain to consider stuff like memory allocation and pointers on a regular basis, but I also think that not understanding these subjects leaves room for a poorly formed understanding of how computers work. Adding new layers of abstraction does not make the foundation less relevant, and I think that learning C is the best avenue toward an in-depth understanding of how computers actually work. This sort of baseline understanding, even if the language isn't used on a regular basis, goes a long way toward improving one's skills as a developer. It also equips people to apply their skills in a wide variety of contexts.

      I'm no expert, though: most of the programming I do is very high-level and abstracted from the machine (Python, Haskell, BASH). I'm sure there are plenty of folks here who are better qualified to chime in, so what do you think?

      39 votes
    2. How is Linux these days?

      How is Linux these days for everyday desktop use? I'm looking to reformat soon and I'm kind of sick of all the junk that comes along with Windows.

      I've used Linux briefly, back in the early 2000s, but not at all since, really. I'm also learning web dev, so I thought it could be fun to use Linux and get used to it.

      Do you use it for everyday use?

      If you're unfamiliar with Linux, how difficult is it to get things "done" on it?

      Do most modern apps work these days?

      For someone who's been using Windows for most of their life, do you think it's difficult to pick up and get running?

      Do games work?

      Edit: I'm going to test out Mint tonight on a thumb drive. Thanks everyone!

      52 votes
    3. User-friendly and privacy-friendly LLM experience?

      I've been thinking I may need to get one of the desktop LLM UIs. I've been out of touch with the state of the art in end-user LLMs, as I've been using them exclusively via API, but tech-y people (who are not developers) mostly talk about the end-user products, which I know little about.

      Ethical problems aside, the problem with non-API usage is that, even if you pay, I can't find one with a better privacy policy than the API. And the problem with the API route is that it isn't as good as the finished apps unless you want to reinvent the wheel. The apps may also include ads in the future, while the API technically cannot, as that would break some downstream use cases.

      | Provider | Data Retention (API) | Data Retention (Consumer) | UI-only features |
      |---|---|---|---|
      | ChatGPT Plus | 30 days, no training | Training opt-out, 30 days for temp. chat, unknown retention otherwise | Voice, Canvas, Image generation in chat, screensharing, Mobile app |
      | Google AI Pro | 0 | 72 hours if you disable history, or up to 3 years and trained upon otherwise | Android assistant, Canvas, AI in Google Drive/Docs, RAG (NotebookLM), Podcast generation, Browser use (Mariner), Coding (Gemini CLI), Screensharing |
      | Gemini in Google Workspace | See above | 0-18 months, but no human review/training | See above |
      | Claude Pro | 30 days | Up to 2 years (no training without opt-in) | Coding, Artifact, Desktop app, RAG, MCP |

      Since this is a dual-use technology, the table doesn't include the extra retention period that applies if the provider detects abuse. Additionally, if you click thumbs up/down, that conversation may also be recorded for the provider's employees to review.

      I don't think OpenWebUI, self-hosted models, etc. would suffice if they are not built to the same quality as the first-party products. I know I'm probably asking for something that doesn't exist here, but at least I hope it will bring to people's attention that even if you're paying for the product, you might not get the same privacy protection as API users.

      15 votes
    4. Non-engineers AI coding & corporate compliance?

      Part of my role at work is in security policy & implementation. I can't figure this out so maybe someone will have some advice.

      With the advent of AI coding, people who don't know how to code are starting to use AI to automate their work. This isn't new - previously they might have used other low-code tools like Excel, UiPath, n8n, etc., but those still required learning the tool. Now anyone can "vibe code" and get an output, which is fine for engineers who understand how the output should work and can design how it should be tested (edge cases, etc.).

      I had a team come to me saying they had managed to automate their work, which is good, but they did it with ChatGPT. The code works as they expected, but they don't fully understand how it works, and of course they're deploying this "to production" - meaning they're setting up an environment that's supposed to be for internal tools, but feeding it real customer data from the production systems.

      If you're an engineer, usually this violates a lot of policies - you should get the code peer reviewed by people who know what it does (incl. business context), the QA should test the code and think about edge cases and the best ways to test it and sign it off, the code should be developed & tested in non-production environment with fake data.

      I can't think of a way non-engineers can do this - they cannot read code (and it gets worse if you need two people on the same team to review each other), and if you're outsourcing it to AI, the AI company doesn't accept liability, nor can you retrain the AI from postmortems. The only option is to include lessons learned in the prompt, and I guess at some point that becomes one long holy bible everyone has to paste into the limited context window. These users also aren't trained to work on non-production data (if you ever try, they'll usually claim the data doesn't match production - which I think is because they aren't trained to design and test for edge cases). The only way to solve this directly is to ask engineers to review the code, but engineers aren't cheap and their time is better spent elsewhere.

      So far I think the best way to approach this problem is to think of it like Excel - formulas are always safe to use - they don't send data to the internet, they don't create malware, etc. The worst thing they can do is probably corrupt that file or hang your PC. And people didn't know how to write VBA, so they never did. Now you have people copy-pasting VBA code they don't understand. The new AI workspace has to be built with technical guardrails that the AI is limited to. I think it has to be done in some low-code tool that people using AI have to go through (say, n8n). For example, blocks that do computation can be used freely, while blocks that send data to the intranet/internet or run arbitrary code require approval before use. And engineers can build safe blocks for common needs, such as a Slack block that can only send messages to the corporate workspace.
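
      To illustrate the guardrail idea, here's a rough Python sketch: computation-only blocks run freely, while anything that sends data out needs a recorded sign-off first. The block names and structure are hypothetical, not any real low-code tool's API.

      ```python
      # Rough sketch of the guardrail idea: "safe" computation blocks run freely,
      # while blocks that send data out or run arbitrary code need a recorded
      # approval first. Block names and structure are made up for illustration.
      from dataclasses import dataclass
      from typing import Callable, Optional

      @dataclass
      class Block:
          name: str
          action: Callable[..., object]
          needs_approval: bool = False
          approved_by: Optional[str] = None

          def run(self, *args, **kwargs):
              if self.needs_approval and not self.approved_by:
                  raise PermissionError(f"block '{self.name}' requires approval before use")
              return self.action(*args, **kwargs)

      # Safe, computation-only block: anyone can use it.
      sum_block = Block("sum_column", action=lambda rows: sum(rows))

      # Block that sends data outside the team: locked until someone signs off.
      def send_slack_message(channel: str, text: str) -> None:
          # A real implementation would be restricted to the corporate workspace.
          print(f"[slack:{channel}] {text}")

      slack_block = Block("send_slack_message", action=send_slack_message,
                          needs_approval=True)

      print(sum_block.run([1, 2, 3]))              # 6 - runs without approval
      slack_block.approved_by = "security-review"  # recorded sign-off
      slack_block.run("#internal-tools", "report ready")
      ```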

      Has your workplace adjusted its policies for this AI epidemic? Or do you have other ideas you'd like to share?

      23 votes
    5. The morality of using AI-generated art in my web app

      Hey, good people of Tildes!

      I'm building a self-help web app, a small part of which I'd like to involve some pixel pets. I like pixel art and it'd be great if I could create some. Though, the truth is, I can't draw for shit, I have little to no imagination, and I'm afraid even if I put the time and effort into it, I still may not produce something I'd call good enough to put on the website. I also lack the motivation to spend a lot of time learning how to create good pixel art, as I only need it for this project.

      I thought about paying some professional(s) to do it but that would probably break the bank for me, as I want to offer the users a lot of pixel pet options, which brings us to what I guess is the only remaining option.

      I found some services that offer AI-generated pixel art. This one in particular looks like what I'm looking for and also offers animations. While watching a demo of it on YouTube, I noticed a few comments voicing concern about the ethics of selling art that's generated using models trained off of unpaid artists' work. While this is not a new topic, I admittedly hadn't given it much thought before, as I've never used, or planned to use AI-generated art in a meaningful capacity.

      While I'm not sure whether it changes much, for what it's worth, I should note that my web app is going to be free, open-source, and ad-free forever.

      What are your thoughts? Also, I'd love to know if there are options that I missed!

      26 votes
    6. Any BBS sysops here from back in the day?

      As a not quite "old man" I was wondering how many former BBS sysops are here?

      In the early-mid 90s, I used to run a single-line PCBoard-powered hobby BBS and dabble in making PPEs (PCBoard's programming language for plug-ins/mods). I also helped friends set up SpitFire, Searchlight, Synchronet, WWIV and Aftershock (total PITAs from what I remember)--basically anything free, crackable, or with an unlimited trial.

      Since I was the only sysop that ran PCBoard I was invited to become an officer in our local User Group to run their whopping 3 line BBS and give classes (that was quite the technological achievement back then). That is something I truly do miss. I was, by far, the youngest member of the User Group (your average member probably had grandkids my age) but it was a meritocracy.

      People seemed eager to learn and share information as they found it. Before computers, 3D cards and the internet were ubiquitous, you were automatically accepted into a knowing crowd if you put in the effort/time to join a BBS, forum or (early) MMO. Exclusion brought inclusion, if that makes sense. If you torched your reputation by acting like a jackass, it was difficult to move on like a locust to another area.

      So many stories. So many high jinks that would be deemed illegal today, lol.

      Everything that is old, is new again.

      I'm getting a lot of BBS vibes in the aftermath of Rexxit. The slower pace of Tildes reminds me so much of the BBS forums (while I know Tildes isn't new, it is growing in popularity in the aftermath.) Even the Fediverse harkens back to the days of early BBS synch nets.

      Now if only I could find a modern remake of TradeWars 2002.

      51 votes
    7. Do you have a personal website/blog?

      I've been thinking for a while about making my own little personal website/blog, and I was wondering what other people here on Tildes might have set up. I feel like having one could be a cool little way to get myself to write more often and hopefully improve my writing, especially when it comes to technical subjects.

      34 votes
    8. Game Frameworks: What are people using for game jams nowadays?

      Hi,

      I've been mulling over ideas for a game for a while now, and I'd like to hack out a prototype; my default would be Love2D. (As an aside: one of the things I like about Love2D is that you can make a basic 'game' in a couple of lines of code, and it's 'efficient enough' for what you get.) Perhaps the only gripe I had with it was that it didn't output compiled binaries (I mean, you could make it do that, but it seemed like a hack). I think Polycode seemed to be a semi-serious contender, but last I checked (a year or two ago) it was pretty much dead as a doornail. Some of the other alternatives I remember seeing (Godot? Unity?) felt too much like Blender.

      It's been a while since I've kept tabs on the 'gamedev community', so I don't know if there have been any more recent developments in that space.

      So I guess my question is: What are people using for game jams nowadays? Preach to me (and everyone else) about your favorite framework and language :)

      15 votes
    9. Why are so many websites (and CDNs) IPv4 only?

      One of the people in an IRC channel I frequent pointed out a site I've been building uses CDNs that are IPv4 only. I never realized this, I just assumed every major provider had deployed IPv6. Oh, how very wrong I was. A quick check of some major (to me) sites shows a shocking lack of IPv6, including:

      • Bootstrap (stackpath.bootstrapcdn.com)
      • Discord
      • FontAwesome (use.fontawesome.com)
      • GitHub/GitHub pages
      • GitLab/GitLab pages (self-hosted supports IPv6, but officially hosted GitLab only supports IPv4 due to Azure limitations)
      • jQuery, IF you use code.jquery.com (some tutorials use ajax.googleapis.com, which does have IPv6, but an unfortunate number use code.jquery.com, including the getting started page for Bootstrap)
      • Parts of Amazon/AWS (Amazon is IPv4 only, some of AWS is IPv4 only, including S3)
      • Reddit
      • Stack Overflow/Exchange/etc
      • Twitter

      An honorable mention goes to Angular's websites because the websites themselves are IPv4 only but the libraries are hosted on ajax.googleapis.com, which is IPv6 accessible. I checked npm, PyPI, RubyGems, and Tildes, and they all support IPv6.

      I can understand why companies like Amazon have partial support (upgrading can be a PITA if you're a cloud service provider with uptime requirements), but then you have services like Discord (launched in 2015 with no obligation to maintain service) that only support IPv4. At the very least, I'd expect CDNs referenced by thousands (if not millions) of webpages to be on IPv6 by now.

      Am I missing something? CDNs are pretty static; it's just a matter of choosing one that supports IPv6, and you don't even need to update your application if you just change the DNS entries.
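
      For anyone curious, here's a quick-and-dirty Python sketch for checking this yourself (standard library only; the hostnames are just examples from the list above and the results will change over time):

      ```python
      # Quick check: does a host publish any AAAA (IPv6) records?
      # Standard library only; results depend on DNS at the time you run it.
      import socket

      def has_ipv6(host: str) -> bool:
          try:
              socket.getaddrinfo(host, None, socket.AF_INET6)
              return True
          except socket.gaierror:
              return False

      for host in ("code.jquery.com", "ajax.googleapis.com", "tildes.net"):
          print(f"{host}: {'has IPv6' if has_ipv6(host) else 'IPv4 only'}")
      ```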

      13 votes
    10. Let's talk Puppy Linux

      For the uninitiated, you can visit puppylinux.org to get to know more about it.

      My first experience with Puppy wasn't good, since, for the life of me, I couldn't get the saves working. I still haven't figured that out, but I found that xenialpup does work for some reason, so I stuck with it.

      After that, it's been great, and although I don't like the UI and some of the default apps, it worked on every computer I've tried it on, and it's light enough to run well on ancient computers.

      As far as the tools go, it has everything I need to do my work, even if I'd prefer different tools (like vim and ranger).

      That is, of course, only a problem with the default configuration, and Puppy has a very convenient tool to remaster itself, which I'll be using these holidays. It's great to be able to build a more welcoming version for yourself without needing any knowledge or spending a lot of time.

      So, I just wanted to see what your experience with Puppy has been, or, if you haven't tried it, what you think about it.

      9 votes
    11. Let's talk browsers

      I've tried a lot of browsers. Starting from Chrome, to Chromium, to Firefox, to Links, to w3m, to, eventually, Qutebrowser, which I use for most of my browsing now.

      At least for me, I had four things in mind while choosing a browser:

      • I want it to be light
      • I want it to be minimal
      • I want it to be keyboard-oriented
      • I want it to be able to use modern websites

      I won't be going through all the browsers I've tried, but those I mentioned are the big ones, so I'll just do a quick check-list of these things.

      Chrome/Chromium:

      • Weighs like a sumo wrestler 1/5
      • Cluttered 1/5
      • Just some shortcuts and extensions 3/5
      • The model, the idol to strive for 5/5

      Firefox:

      • Apparently lighter than Chromium, though not by much 1/5
      • Cluttered 1/5
      • Some shortcuts, famous extensions 3/5
      • On point 5/5

      Links:

      • Very light and fast 5/5
      • Minimal, though can go smaller 4/5
      • Yes 5/5
      • Doesn't support javascript 1/5

      w3m:

      • As light as it gets 6/5
      • Pretty damn minimal 5/5
      • Even works for the blind 5/5
      • Does javascript, but hard to use with cluttered websites like Reddit or any news site 3/5

      Qutebrowser:

      • It is quite small and feels fast 4/5
      • Can be easily modified to not have anything on screen, and command line-like controls 5/5
      • Great, but hint system fails with javascript 4/5
      • Doesn't work with Reddit, for some reason 4/5

      With the things that I look for, Qutebrowser is the answer, with w3m being a close second. Of course, there are different things to look for in a piece of software, and you may want the extra stability and extensions Firefox provides, or the privacy of the Tor browser, or the suckless nature of surf, so I'd like to hear what your browser of choice is!

      17 votes
    12. Meta Discussion: Is there interest in topics concerning code quality?

      I've posted a few lengthy topics here outside of programming challenges, and I've noticed that the ones that seem to have spurred the most interest and generated some discussion were ones that were directly related to code quality. To avoid falling for confirmation bias, though, I thought I would ask directly.

      Is there generally a greater interest in code quality discussions? If so, then what kind of things are you interested in seeing in those discussions? What do you prefer not to see? If not, then what kinds of programming-related discussions would you prefer to see more of? What about non-programming discussions?

      Also, is there any interest in an informal series of topics much like the programming challenges or the "a layperson's introduction to..." series (i.e. decentralized and available for anyone to participate whenever)? Personally, I'd be interested in seeing more on the subject from others!

      17 votes
    13. An informal look at the concept of reduction (alternatively: problem-solving for beginners).

      Preface

      One of the most common questions I see from prospective programmers and computer scientists is "where should I start?". My answer to that is a pretty consistent one: learn how to solve problems effectively. But that's vague and not really all that helpful, so I figured that I should actually tackle this in a little more depth by touching on something more specific.

      Specifically, I want to touch on the subject of how to think about complex problems.


      The Rationale Behind Learning

      Before we can better understand how to effectively solve problems, it's important to consider how it is that we learn. With any subject, the standard approach is to begin with the bare basics. For programming, that's writing a Hello, World! program in the new language you're working with. For foreign languages, you learn basic common words and sentence structure. For math, you learn your basic arithmetic operations like addition and multiplication.

      From there, we add on more additional complexity and string together everything we've learned. For a foreign language, this looks like learning about new words, stringing them together in your own sentences, then learning about verb tenses and throwing them into the mix as well. With math, you take your normal number crunching and suddenly throw the concept of order of operations into the mix, then variables and how to solve for them.

      As a general rule, we first get comfortable with solving a simple problem and gradually build up toward solving increasingly more difficult ones.


      The Missing Piece

      Odds are that we've all sat in a math class at one point, and when the teacher asked a student how to solve a problem, they received an immediate "I don't know". You may or may not have been that kid yourself. I have no intention of shaming the kids who struggled (or those who still struggle) with math. Rather, I want to point to what I believe is the fundamental cause of that mental barrier that has frustrated students for generations.

      Learning is not simply a matter of adding more complexity to problems. A key part of learning, and one that I don't recall ever having emphasized during my grade school studies, is your ability to break problems down into the steps that you know how to complete and combine the different, simpler skills you've already learned to arrive at a solution. Instead, you were expected to solve many of those complex problems and learn through practice, or through pure rote memorization.

      What determined whether or not you could solve those problems was then a question of whether or not you could intuit or memorize how to solve those specific problems, and brand new problems that still made use of the same skill sets but had completely different forms would throw a wrench in that. Those who could solve any of those problems--those who, I would argue, were often mistakenly referred to as "geniuses" or "talented"--were really just those who knew how to break a problem down into simpler pieces.

      This isn't a failing on the students, but on the way they've been taught to think about problems.


      Reducing Problems

      What does it mean to "break down" a problem, though? The few times I recall a teacher ever touching on the subject, "break down the problem" and "use the skills you've already learned" were the kinds of pieces of advice passed around, completely vague and devoid of meaning for anyone who didn't already understand. How can we better grasp this important step?

      There's a term in complexity theory known as "reduction". The general idea is that if you have problems A and B, where you already know how to solve B, then if you can transform problem A so that it looks like problem B, then you can use your solution for B to solve at least part of A.

      In other words, finding the solution to a more complex problem is just a matter of finding a way to make it look like a problem you already know how to solve.

      The advice to "break down" a problem really means to perform this process of "reduction", of transforming your more complicated problem A into your simpler, known problem B.


      In Practice

      We're still discussing a vague concept, but now that we have more specific language to work with, we can more easily see how it works in practice (a reduction of its own!).

      Let's consider a conceptually simple problem: grabbing the kth largest (or smallest) item from a list. How do we solve this problem? Probably the most obvious and straightforward answer is to sort the list then grab the kth item, right?

      Notice that we gave two high-level descriptions of the steps we need to solve this problem: sorting, then grabbing the appropriate item. We can therefore state that the problem of "grab the kth largest/smallest item from a list" can be reduced to the two problems "sort a list" and "grab the kth item from a list".

      Now, let's say we're given the problem "take this list of competitor times from the race and tell me what the top 10 race times were". What do we know about this problem? We know that we're being given a list, and we know that we need the 10 smallest items from that list. We also know that "10 smallest items" is just shorthand for "the 1st smallest item, the 2nd smallest item, ..., and the 10th smallest item". We can therefore reduce this problem to the previous one we solved by transforming it into "grab the kth smallest item from a list" and "repeat for values 1-10 for k".
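
      As a small sketch of that chain of reductions in ordinary Python (favoring clarity over efficiency, since the point is the reduction, not the runtime):

      ```python
      # "Top n race times" reduces to "kth smallest item",
      # which reduces to "sort the list, then index into it".

      def kth_smallest(items, k):
          """Grab the kth smallest item (1-indexed) by reducing to sorting."""
          return sorted(items)[k - 1]

      def top_n_times(race_times, n=10):
          """Reduce 'top n race times' to repeated uses of kth_smallest."""
          return [kth_smallest(race_times, k) for k in range(1, n + 1)]

      times = [12.4, 11.9, 13.2, 12.0, 11.5, 12.8, 13.9, 11.7, 12.1, 12.6, 13.0]
      print(top_n_times(times, 3))  # [11.5, 11.7, 11.9]
      ```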


      Practical Advice

      In the end, my explanation may not have helped much at all in actually grasping the concept of reduction. My intent isn't necessarily to help you understand it immediately, but to provide you with a framework for a way of thinking. Even if you do grasp the general concept, you may even wonder how you're supposed to recognize these kinds of reductions out in the wild in non-academic environments. The answer, perhaps annoyingly, is practice. Much like an appraiser can only become good at discerning details through experience, a programmer or computer scientist can only recognize these patterns through repeated exposure.

      In general, if I had to narrow it down to a small list of tips for improving your problem solving skills, this would be it:

      • Work on grasping the concept of reduction itself.
      • Expose yourself to lots of new problems.
      • Don't shy away from difficult problems. Reduce them as much as you can and solve the pieces you're able to. Try to research the pieces you're struggling with. Return to the problem later when you have more experience if you have to, but take a crack at it first.
      • Don't accept "I don't know" as an answer in itself. Ask yourself why you don't know how to solve a problem. Narrow down which pieces you're able to solve and which pieces you're not.
      • Just solve problems. Any problems. Easy ones, hard ones, and anything in between. Solving problems is a skill, and practicing it will make you better at solving problems in general, and better at recognizing the simpler problems inside of more complicated ones.
      • Don't just come up with a solution to a problem. Ensure that you understand how each piece of it works and why it works. Copy-pasting from StackOverflow can be a valid tool at your disposal, but doing so mindlessly isn't nearly as valuable as reviewing the solution, being able to determine whether or not it works before ever executing the code, and being able to discard anything unnecessary from it.

      Final Thoughts

      I'm not an authoritative voice on this subject. I'm not an educator. More than anything, I'm a life-long student and an enthusiast. There's seldom a day when I don't have to research something new in order to solve a problem I'm not familiar with, or remind myself of the syntax for a function I've used several times in the past. I don't know anything about teaching others, but I do know plenty about learning, and if there's anything that has stood out to me over the years, it's the fact that I find it easier to learn about something or to solve a problem if I can transform the concept into something that's easier for me to grasp.

      Moreover, I'm human and thus prone to mistakes. Call me out on them if you notice them. I'll take any of my mistakes as learning opportunities :)

      11 votes
    14. Reflections on past lessons regarding code quality.

      Preface

      Over the last couple of years, I've had the opportunity to learn from the mistakes of my predecessors and put those lessons into practice. Among those lessons, three have stood out to me in particular:

      1. Consistency is king.
      2. Try not to be too clever for your own good.
      3. Good code takes time.

      I know that there are a lot of new and aspiring programmers here (and I'm admittedly far from being a guru myself), so I thought it would be good to touch on these three lessons, what they mean, and why they're so important.


      Consistency is King

      This is something that I had drilled into my head over nearly two years working on the code base at my previous job. Not by my fellow programmers (who did not exist), nor by my boss, but by the code itself.

      Consistency can mean a number of things, but there are two primary points that matter:

      1. Syntactic consistency.
      2. Architectural consistency.

      Syntactic consistency concerns standards in what your code looks like. For example, the choice between snake_case or camelCase or PascalCase for naming; function parameter order; or even something as benign as what kind of indentation and how much of it you use.

      Architectural consistency concerns standards in how you structure your code. Making sure that you either use public class properties or getter and setter methods; using multiple booleans or using bitmasks; using or not using objects for encapsulating data to be passed around; validating data within the primary object or relegating that responsibility to a validator class; and other seemingly minor decisions about how you handle certain behavior make a big difference.

      The code base I maintained had no such consistency. You could never remember whether the method you needed to call was named using snake_case or camelCase and had to perform several searches just to find it. Worse still, some methods defined to handle Ajax calls were prefixed with ajax while many weren't. Argument ordering seemed to be determined by a coin flip, and indentation seemed to vary between 2-space, 3-space, 4-space, and even 5-space indentation depending on what mood my predecessor was in at the time. You often could not tell where a function's body began and where it ended. Writing code was an exercise both in problem solving and in deciphering ancient religious texts.

      Architecturally it was no better. There was no standardization in how data was validated or sanitized, how class members were accessed or modified, how functionality was inherited, whether the functionality was encapsulated in an object method or in a function, or which objects were responsible for which behavior.

      That lack of consistency makes introducing or modifying a small feature, a task which should ordinarily be a breeze, an engineering feat of its own. Often you end up implementing that feature, after dancing around the tangled mess of spaghetti, only to find that the functionality that you implemented already existed somewhere else in the code base but was hiding out in a deep, dark corner that you never even knew was there until you had to fix some other broken feature months later and happened to stumble across it.

      Consistency means predictability, and predictability means discoverability and, more importantly, easier changes and higher confidence in those changes.


      Cleverness is a Fallacy

      In any given project, it can be tempting to do something that saves you extra lines of code, or saves on CPU cycles, or just looks awesome and does something nobody would have thought of before. As human beings and especially as craftsmen, we like to leave our mark and take pride in breaking the status quo by taking a novel and interesting approach to a problem. It can make us feel fulfilled in our work, that we've done something unique, a trademark of sorts.

      The problem with that is that it directly conflicts with the aforementioned consistency and predictability. What ends up being an engineering wonder to you ends up being an engineering nightmare to someone else. While you're enjoying the houses you build with wall studs arranged in the shape of a spider's web, the home remodelers who come along later aren't even sure if they can change part of the structure without causing the entire wall to collapse, and they're not even sure which walls are load-bearing and which aren't, so they're basically playing Jenga while blindfolded.

      The code base I maintained had a few such gems, with what looked like load-bearing walls but were actually made of papier-mâché and were only decorative in nature, and the occasional spider's web wall studs. One spider's web comes to mind in particular. It's been a while since I've worked on that piece of code, so I can't recall what exactly it did, but two query-constructing pieces of logic had overlapping query structure with the difference being the operators and data. Rather than being smart and allowing those two constructs to be different, however, my predecessor decided to be clever and the query construction was abstracted into a separate method so that the same general query structure could be used in other places (note: it never was, and was only ever used in those two instances). It was abstracted so that all original context was lost and no comments existed to explain any of it. On top of that, the method was being called from the most critical piece of the system which, unfortunately, was already a convoluted mess and desperately required a rewrite and thus required me to understand what the hell that method was even doing (incidentally, I fell in love with whiteboards as a result).

      When you feel like you're being clever, you should always stop what you're doing and make sure that what you're doing isn't actually a really terrible idea. Cleverness doesn't exist. Knowledge and intelligence do. Write intelligent code, not clever code.


      Good Code Takes Time

      Bad code more often than not is the result of impatience. We don't like to plan out the solution before we get to writing code. We like to use variables like x and temp in order to quickly achieve functional correctness of our code because stopping to think about how to name them is just additional overhead getting in the way. We don't like to scrap our bad work if we can salvage it in some way instead, because then we have to start from scratch and that's daunting. We continually work against ourselves and gradually increase our mental overhead because we try to decrease our mental overhead. As a result we find ourselves too exhausted by the end of our initial implementations to concern ourselves with fixing obvious problems. Obviously bad but functional code is preferable because we just want the task to be done and over with.

      The more you get exposed to bad code and the more you try to avoid pushing that hell onto yourself and your successors, the more you realize that you need to spend less time coding and more time researching and planning. Whereas you may have been spending upwards of 50% of your time coding previously, suddenly you find yourself spending as little as 10% of your time writing any code at all.

      Professionals from just about any field can tell you that you can either do something right or you can do it twice. You might recognize this most easily in the age-old piece of woodworking wisdom, "measure twice, cut once". The same is true of code, and doing something right means planning how to do it right in the first place before you've even started on the job.


      Putting into Practice

      I've been fortunate over the last couple of months to be able to start on a brand new project and architect it in a way that I see fit. Changes which would ordinarily take days or weeks in the old code base now take me half a day at most, and a matter of minutes at best. I remember where to find a piece of code that I need because I'm consistent and predictable about where I place things; I don't struggle to tell where something begins and where it ends because I'm consistent about structure; I don't continually hate myself when I need to make changes to my code because I don't do anything wildly out of the ordinary; and most importantly, I take my time to figure out what it is that I need to do and how I want to do it before I've written a single line of code.

      When I needed to add a web portal interface for uploading a media asset to associate with a database object, the initial implementation took me a week, due to the need for planning, adding the interface, and supporting and debugging the asset management. When I needed to extend that interface to allow for uploading the same kinds of assets for a completely different object type, it took me only half an hour, with most of that time being dedicated toward updating a Vue.js component to accept configuration via props rather than working for only the single hard-coded object type. If I need to add a case for any additional object type, it will take me only five minutes.

      That initial week of work for the web interface provided me with cost savings that would not have been feasible otherwise, and that initial week of work would have taken as many as three weeks had I not structured the API to be as consistent as it is now. Every initial lag in implementation is offset heavily by the long-term cost savings of writing good code.


      Technical Debt

      Technical debt is the cost of your code over time. The messier and worse your code gets, the more it costs you to try to change, and those costs only build up. Even good code can accumulate technical debt if the needs for your software have changed and its current architecture isn't compatible with those changes.

      No project is without technical debt. Even my own code, that I've been painstakingly working on for the last couple of months, has technical debt. Odds are a programmer far more experienced than I am will come along and want to scrap everything I've done, and will do a far better job rewriting it.

      That's okay, though. In fact, a certain amount of technical debt is good. If we try to never write any bad code whatsoever, then we could never possibly get to writing any code at all, because there are far too many unknowns for a new project.

      What's important is knowing when to pay down on that technical debt, which could mean anything from paying it up front (i.e. through planning ahead of time) to paying it down when it starts to get too expensive (e.g. refactoring a complicated section of code when changes become sufficiently difficult). That's not something you can learn through a StackOverflow post or a college lecture, and certainly not from some unknown stranger on some relatively unknown website in a long, informal blog-like post.


      Final Thoughts

      I'm far from being a great programmer. There's a lot that I don't know and I still have quite a bit to learn. I love programming, though, and more than that I enjoy sharing the lessons I've learned with others. Especially the ones that I wish I'd learned back in college.

      Please feel free to share your own experiences, learned lessons, and (if you have it) feedback here. I'd love to read up on some other thoughts on this subject!

      21 votes
    15. Discussion: The pros and cons to different approaches to solving a problem.

      It's often the case that in academic and self-teaching environments, you don't really have the opportunity to grasp and fully understand situations in which a problem has multiple valid solutions and what the implications are in choosing among them. Among those considerations are two in particular: runtime efficiency and maintainability. When these subjects are discussed, the example solutions are often comical at best, or the problems themselves too complex to fully grasp the situation at hand. Sometimes the problems are also so simple as to be completely worthless, e.g. comparing bubble sort to bogo sort.

      As such, I would like to take this opportunity to discuss practical but conceptually simple problems and the implications of the different solutions that are available. Conceptual simplicity is an absolute requirement because we want these problems to be accessible to a wider variety of readers. Problems don't necessarily need to be code-related (you could e.g. discuss something related to server administration). Bonus points for problems that include solutions with an efficiency/maintainability trade-off!

      9 votes
    16. Public access Unix systems, another alternative social environment

      I have been writing a paper on the history of a type of online social space called public access Unix systems, and I'm posting a Tildes-tailored summary here in case anyone is interested. If you enjoy this and want to read more (like 10+ pages more) look at the bottom of this post for a link to the main paper-- it has citations, quotes, and everything, just like a real pseudo-academic paper!

      I wrote this because a summary didn't exist and writing it was a way for me to learn about the history. It was not written with the intent of commercial publication, but I'd still love to share it around and get more feedback, especially if that would help me further develop the description of this history and these ideas. If you have any thoughts about this, please let me know.

      What are Public Access Unix Systems?

      When the general public thinks of the Unix operating system (if it does at all), it probably isn't thinking about a social club. But at its core, Unix has a social architecture, and there is a surprisingly large subculture of people who have been using Unix and Unix-like operating systems this way for a long time.

      Public access Unix systems are multi-user systems that provide shell accounts to the general public for free or low cost. The shell account typically provides users with an email account, text-based web browsers, file storage space, a directory for hosting website files, software compilers and interpreters, and a number of tools for socializing with others on the system. The social tools include the well-known IRC (Internet Relay Chat), various flavors of bulletin-board systems, often a number of homegrown communication tools, and a set of classic Unix commands for finding information about or communicating with other system users.

      But more than just mere shell providers, public access Unix systems have always had a focus on the social community of users that develops within them. Some current systems have been online for several decades and many users have developed long-standing friendships and even business partnerships through them. i.e. they're a lot of fun and useful too.

      Of interest to Tildes members is that public access Unix systems have for the most part been non-commercial. Some take donations or charge membership fees for certain tiers of access (some in the U.S. are registered 501(c)(3) or 501(c)(7) non profits). They almost invariably do not take advertising revenue, do not sell user profile data, and the user bases within them maintain a fairly strong culture of concern about the state of the modern commercial Internet.

      This concept of a non-commercial, socially aware, creative space is what really got me interested in the history of these systems. Further, the fact that you have this socially aware, technically competent group of people using and maintaining a medium of electronic communication seems particularly important in the midst of the current corporate takeover of Internet media.

      History

      Public access Unix systems have been around since the early 1980's, back when most of the general public did not have home computers, before there was a commercial Internet, and long before the World Wide Web. Users of the early systems dialed in directly to a Unix server using a modem, and simultaneous user connections were limited by the number of modems a system had. If a system had just one modem, you might have to dial in repeatedly until the previous user logged off and the line opened up.

      These early systems were mostly used for bulletin-board functionality, in which users interacted with each other by leaving and reading text messages on the system. During this same time in the early 80's, other dial-in systems existed that were more definitively labeled "BBSes". Their history has been thoroughly documented in film (The BBS Documentary by Jason Scott) and in a great Wikipedia article. These other systems (pure BBSes) did not run the Unix OS and many advanced computer hobbyists turned up their noses at what they saw as toyish alternatives to the Unix OS.

      Access to early dial-in public access Unix systems was mostly constrained by prohibitively expensive long-distance phone charges, so the user bases drew from local calling areas. The consequence was that people might meet each other online, but there was a chance they could end up meeting in person too because they might literally be living just down the street from each other.

      The first two public access Unix systems were M-Net (in Ann Arbor, MI) and Chinet (in Chicago, IL), both started in 1982. By the late 1980's, there were more than 70 such systems online. And at their peak in the early 1990's, a list of public access Unix systems shared on Usenet contained well over 100 entries.

      Throughout the 1980's, modem speeds and computer power increased rapidly, and so did the functionality and number of users on these systems. But the 1990's were a time of major change for public access Unix systems. In 1991, the Linux operating system was first released, ushering in a new era of hobbyist system admins and programmers. And new commercial services like AOL, Prodigy and CompuServe brought hordes of new people online.

      The massive influx of new people online had two big impacts on public access Unix systems. For one, as access became easier, online time became less precious and people were less careful and thoughtful about their behavior online. Many still describe their disappointment with this period and their memory of the time when thoughtful and interesting interactions on public access Unix systems degraded to LOLCAT memes. In Usenet (newsgroups) history, the analogous impact is what is referred to as "The Eternal September".

      The second impact of this period was from the massive increase of computer hobbyists online. Within this group were a small but high-impact number of "script kiddies" and blackhat hackers that abused the openness of public access Unix systems for their own purposes (e.g. sending spam, hacking other systems, sharing illegal files). Because of this type of behavior, many public access Unix systems had to lock down previously open services, including outbound network connections and even email in some cases.

      For the next decade or so, public access Unix systems continued to evolve with the times, but usership leveled off or even decreased. The few systems that remained seemed to gain a particular sense of self-awareness in response to the growing cacophony and questionable ethics of the commercial World Wide Web. This awareness and sense of identity continues to this day, and I'll describe it more below because I think it is really important, and I expect Tildes members agree.

      2014 and Beyond

      In 2014, Paul Ford casually initiated a new phase in the history of public access Unix systems. He registered a URL for tilde.club (http://tilde.club) and pointed it at a relatively unmodified Linux server. (Note: if there is any relation between tilde.club and Tildes.net, I don't know about it.) After announcing via Twitter that anyone could sign up for a free shell account, Ford rapidly saw hundreds of new users sign up. Somehow this idea had caught the interest of a new generation. The system became really active and the model of offering a relatively unmodified *NIX server for public use (a public access Unix system under a different name) became a "thing".

      Tilde.club inspired many others to open similar systems, including tilde.town, tilde.team* and others which are still active and growing today. The ecosystem of these systems is sometimes called the tilde.verse. These systems maintain the same wariness of the commercial WWW that other public access Unix systems do, but they also have a much more active focus on building a "radically inclusive" and highly interactive community revolving around learning and teaching Unix and programming. These communities are much, much smaller than even small commercial social networks, but that is probably part of their charm. (* full disclosure, I wield sudo on tilde.team.)

      These tilde.boxes aren't the only public access Unix systems online today though. Many others have started up in the past several years, and others have carried on from older roots. One of the most well known systems alive today is the Super Dimension Fortress (SDF.org) that has been going strong for over three decades. Grex.org and Nyx.net have been online for nearly as long too. And Devio.us is another great system, with a community focused around the Unix OS, particularly OpenBSD. Not all these systems label themselves as "public access Unix systems", but they all share the same fundamental spirit.

      One system that I find particularly interesting is Hashbang (aka #!, https://hashbang.sh). Hashbang is a Debian server run and used by a number of IT professionals who are dedicated to the concept of an online hackerspace and training ground for sysadmins. The system itself is undergoing continual development, managed in a git repository, and users can interact to learn everything from basic shell scripting to devops automation tooling.

      Why is Hashbang so cool? Because it is a community-oriented system in which users can learn proficiency in the infrastructural skills that can keep electronic communications in the hands of the people. When you use Facebook, you don't learn how to run a Facebook. But when you use Hashbang (and by "use", I mean pour blood, sweat and tears into learning through doing), you can learn the skills to run your own system.

      Societal role

      If you've read other things I've written, or if you've interacted with me online, then you know that I feel corporate control of media is a huge, huge concern (like Herman and Chomsky type concern). It's one of the reasons I think Tildes.net is so special. Public access Unix systems are valuable here too because they are focused on person-to-person connections that are not mediated by a corporate-owned infrastructure, and they are typically non-profit organizations that do not track and sell user data.

      You're no doubt aware of the recent repeal of Net Neutrality laws in the U.S., and you're probably aware of what The Economist magazine calls "BAADD" tech companies (big, anti-competitive, addictive and destructive to democracy). One of the most important concerns underlying all of this is that corporations are increasingly in control of our news media and other means of communication. They have little incentive to provide us with important and unbiased information. Instead, they have incentive to dazzle us with vapid clickbait so that we can be corralled past advertisements.

      Public access Unix systems are not the solution to this problem, but they can be part of a broader solution. These systems are populated by independently minded users who are skeptical of the corporate mainstream media, and importantly, they teach about and control the medium of communication and social interaction itself.

      Unix as a social medium

      So what is it that makes public access Unix systems different? This seems like a particularly interesting question relative to Tildes (so interesting that I even wrote another Tildes post about it). My argument is partly that Unix itself is a social and communication medium and that the structure of this medium filters out low-effort participation. In addition to this, public access Unix systems tend to have user bases with a common sense of purpose (Unix and programming), so users can expect to find others with shared interests.

      In contrast to modern social media sites like Facebook or Twitter, you have to put in some effort to use Unix. You have to learn to connect, typically over ssh; you have to learn to navigate a command line shell; and you have to learn the commands and options to run various utilities. And to really use Unix, you have to learn a bit of programming. It's not incredibly hard in the end, but it takes significantly more effort than registering for a Facebook or Twitter account and permitting them to scan your email address book. Once you get over the learning curve, it is powerful and fun.

      This effortful medium does two things. For one, it weeds out people who aren't willing to put in effort. And for two, it provides learned users with a diverse palette of tools and utilities for building and sharing creative output.

      Public access Unix systems are all about active creation of content to be enjoyed and shared with others, and not about passive media consumption. They are about the community that develops around this purpose and not around the profit that can be squeezed out of users' attention.

      Future of public access Unix systems

      Public access Unix systems have been around for nearly four decades now. They have seen ups and downs in popularity, and they have been humming along in the background as computing has gone from the ARPANET to the spectacle of the commercial World Wide Web. Early public access Unix systems were largely about the novelty of socializing with other hobbyists through a computer, and that interest has evolved into the learning, doing and teaching model of an online hackerspace today.

      These systems are not huge, they are not coasting on advertising revenue, and they get by purely on the contributions, volunteer effort, and enthusiastic participation of their users. But as a contrast to commercial social network sites, they are an example of what online socializing can be when individuals put effort, thought, and compassion into their interactions with others. And just as importantly, they pass on the very skills that can independently maintain this social and communication medium for future generations of users.

      --

      As promised in the intro, if you're interested in reading a much more in-depth version of this article, here's the longer copy:
      https://cmccabe.sdf.org/files/pubax_unix_v01.pdf

      73 votes
    17. How do you model complicated or tricky problems to solve them? What benefit do you get from using that model?

      Everyone has their own way of visualizing a problem they're working on, and every strategy has some reason for being used. Some people prefer text (e.g. pseudocode) while others prefer diagrams, for example. What do you use to make problems easier to approach, conceptualize, and solve? Why that particular strategy rather than some other one? What kind of practical implementations of your strategy exemplifies the benefits of your strategy for modeling the problem?

      6 votes