7 votes

The coming software apocalypse

9 comments

  1. onyxleopard

    With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.

    When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”

    While I liked this overall, this bunk about interfaces and how "text" is somehow an inferior medium for writing code rubbed me the wrong way. Of course making domain-specific tools and visualizations is worthwhile if you are going to spend a lot of time exploring that domain. But, how do you write the domain-specific tools and visualizers? Unless someone's already written a neat tool to give you the "god's eye view" of your Braid clone, you are going to have to build those tools yourself. And somewhere on that toolchain, as you progress toward the bare-metal silicon, you're going to hit a general purpose programming language that is written as text in a text editor (or an IDE, which is ultimately a fancy text editor).

    The takeaway isn't that programmers should throw away their text editors and start using Swift Playgrounds or Jupyter Notebooks exclusively. The point is that programmers shouldn't arbitrarily restrict themselves in their tools. They should use whatever tools are optimal, and if they need to, they should build their own tools! I personally find interactive programming with a REPL far more intuitive and productive for prototyping and playing around. I start most of the Python programs I write in an IPython session. But eventually, I usually end up writing them in a full-fledged text editor.

    Anyone who thinks it would be worthwhile to try to build a photo editor with Photoshop or a word processor with MS Word (or vice versa) is delusional. You can't build arbitrary tools with purpose-specific tools as easily as you can with general purpose tools. Depending on the tools and what you intend to build, it may not even be possible at all. The most general purpose tool is logic itself. Any tool that does not interpret arbitrary logical expressions and execute them faithfully is going to be a hindrance to those who know what they're doing.

    If you don't know what you're doing, sure, more restricted tools may suffice for exploratory work. And building models is absolutely a good idea. But you can easily build models with general purpose programming languages if you already fully understand the model before you start coding. If you were taught about Turing completeness and the expressivity of languages, you'd know precisely why programmers are loath to give up programming languages as primary tools.

    And when you are writing a critical program, obviously, the best thing to do would be to write a proof of its correctness and safety. But most projects aren't given the resources to do that, so here we are, where programmers have to model and write logic simultaneously, and usually can't comprehensively test for all states that their programs and systems might get into.

    Personally, I wouldn't allow myself to write a truly critical program because I know I don't have the skills to prove any nontrivial program I write is correct. I learned lambda calculus, and I had to learn how to write logical proofs at one point. If I were ever in a position where I had to write a critical program, I would absolutely look into proof assistants and TLA+ and any other tools that can ensure safety. But for the kind of programs I write, that's not only overkill, it might prevent me from writing anything at all.

    For instance, creating models of text for natural language processing is incredibly difficult, and there are arcane things like Unicode where the models have been standardized and you don't have any control over them. If I had to formally verify a model of Unicode myself just to start writing a text analysis tool, the barrier to entry would be prohibitive. (And Unicode bugs still plague us.) Most software I write depends on either the standard library of the programming language or on 3rd-party libraries, and most if not all of that is not formally verified, so my dependent code can't possibly be formally verified either.

    10 votes
  2. [4]
    tesseractcat

    In previous discussions I've read, creating formally verifiable software has been depicted as much more difficult and time-intensive than creating regular software. I don't think this article really touches much on that, but it seems like a really important part of the discussion.

    Personally, I don't think we're going to see a major shift in the way people write software, or software tooling, because any sort of verification or modelling will take up time that could otherwise be spent making a product (albeit a buggier one). There are instances where this doesn't apply, however: I think NASA, aerospace companies, and other industries that are very bug-averse do use formal verification.

    7 votes
    1. vektor

      Where it gets interesting is the synthesis of the two: what happens if you rule out certain kinds of errors, e.g. the way Rust does with type checking and borrow checking, and tada, no type errors or memory allocation fuckery. It's a step in the right direction, and I feel it's a trend we'll continue to see become more prominent. Add explicit error handling, and your code will not fail in a way you do not know of at compile time (ignoring the case of the compiler's proof failing because of memory corruption, or your computer failing entirely); at the time of writing your code, you have to acknowledge the ways it could fail.

      Also interesting is invariant-based testing. A proper test for a sorting algorithm could be as simple as "forall X elem [Ord], sorted(sort(X)) & countAll(sort(X)) == countAll(X)" - i.e. for all valid inputs (a list of orderable elements), sort(X) returns a value that is in sorted order and that is a permutation of the input. Then your unit test just comes up with any and all possible X it can think of and yells at you if the criterion doesn't hold.
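
      In practice, this is what property-based testing libraries like hypothesis automate. A minimal sketch in Python (the sort() here is a hypothetical stand-in for whatever implementation is under test):

      from collections import Counter
      from hypothesis import given, strategies as st

      def sort(xs):
          # Hypothetical stand-in for the implementation under test.
          return sorted(xs)

      @given(st.lists(st.integers()))  # hypothesis invents "any and all possible X"
      def test_sort_invariants(xs):
          result = sort(xs)
          # The output is in sorted order...
          assert all(a <= b for a, b in zip(result, result[1:]))
          # ...and is a permutation of the input.
          assert Counter(result) == Counter(xs)

      Run under pytest, hypothesis generates a battery of inputs (including edge cases like the empty list) and shrinks any failing input down to a minimal counterexample.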

      I'd say we already have a lot of tools that are very good at preventing developer stupidity and that allow one to focus more on translating requirements. Adoption of those tools is lacking somewhat, and we still have room for improvement. But being less stupid than JavaScript dependency hell is certainly possible, and not as exhausting as formally verified software.

      8 votes
    2. Micycle_the_Bichael

      That's all very fair! I think they did a fair job of representing the time trade-off, but I also have a non-zero amount of experience with the tools, so maybe I didn't notice that it was too little because I'm already primed on the subject.

      As to your second point, I agree! I am 1000% certain we won't see this type of software writing go mainstream. As long as capitalism exists, the cost/benefit analysis will almost always favor profits over quality. While I don't think it is how software will be written, I do think it is how it should be written, and as an extension of that, I don't think the fact that it is harder or slower is a good argument against doing things in a better way.

      2 votes
    3. trobertson

      There are instances where this doesn't apply, however: I think NASA, aerospace companies, and other industries that are very bug-averse do use formal verification.

      One such instance is cloud software (which is a significant portion of software engineering). Microsoft's CosmosDB has a full TLA+ model (which I think Lamport himself worked on) and that seems to have been a positive experience for them. The article mentions that Amazon has done similar work for a couple of their services.

      I wouldn't be surprised to see more cloud services using TLA+ models to ensure consistent behavior.

      2 votes
  3. [4]
    Micycle_the_Bichael

    This was a link within another article posted here on Tildes, but one that I enjoyed reading a lot more than I expected to, given the title. The majority of the article doesn't actually talk about the coming software apocalypse, spending most of its time discussing potential ways to completely shift how we think about software. I spend most of my day-to-day writing code, and I am really, really intrigued by TLA+. It seems like the natural progression from OpenApi (though I think TLA+ and OpenApi are around the same age, TLA+ being established in 2014 and Swagger becoming OpenApi in 2015, but Swagger existed for a while before becoming OpenApi, so I don't really know). Honestly, TLA+ very readily fits a lot of my frustrations with software engineering and programming, and highlights a lot of the parts I like.

    3 votes
    1. [3]
      trobertson

      (some clarifications)

      TLA+ (1999) and OpenApi (2016) seem to be solving very different problems. TLA+ is about logical correctness, whereas OpenApi seems to be about process communication. They can both be used on the same project but they are not replacements for each other in any real sense.

      2 votes
      1. [2]
        Micycle_the_Bichael

        Yeah, sorry, I should have been clearer about how I think of them as similar. At least in the way I have used OpenApi, it is a very useful way of defining what you want your code to look like: what your endpoints are, what actions they support, what parameters you expect and whether they're required, what your response looks like, and how various data structures are related. Then we have a Python script that takes that YAML and builds a bunch of boilerplate code and tests based on that specification. The more detailed the OpenApi spec you feed in, the better the API definition you get out. I thought TLA+ sounded similar in that you are defining what you want your solution to look like, and you get a state machine out that you can then use to prove logical correctness.
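
        For illustration, a minimal sketch of that kind of generator in Python (the spec fragment, operation names, and generated stubs are all hypothetical; the real script presumably emits much richer boilerplate):

        import yaml  # PyYAML

        # A hypothetical, heavily trimmed OpenApi fragment.
        SPEC = """
        paths:
          /pets:
            get:
              operationId: list_pets
            post:
              operationId: create_pet
        """

        def generate_stubs(spec_text):
            spec = yaml.safe_load(spec_text)
            stubs = []
            for path, methods in spec.get("paths", {}).items():
                for method, op in methods.items():
                    stubs.append(
                        f"def {op['operationId']}():\n"
                        f"    # TODO: handle {method.upper()} {path}\n"
                        f"    raise NotImplementedError\n"
                    )
            return "\n".join(stubs)

        print(generate_stubs(SPEC))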

        2 votes
        1. skybrian

          I haven't used TLA+ but from the introductions I've read, it doesn't seem to be about code generation. To make modeling easier, the state machine described in TLA+ is often a simplified version of the full state machine used in production code. It's up to the programmers to make sure they match.
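
          For a flavor of the division of labor: a TLA+ spec describes states and a transition relation, and the TLC model checker exhaustively enumerates the reachable states of that simplified machine, checking invariants in each one. A toy sketch of that idea in plain Python, not TLA+ (the bounded-counter model is hypothetical):

          from collections import deque

          CAPACITY = 3  # bound chosen for the simplified model

          def next_states(n):
              # The model's transition relation: increment or decrement.
              if n < CAPACITY:
                  yield n + 1
              if n > 0:
                  yield n - 1

          def check(invariant, initial=0):
              # Breadth-first enumeration of every reachable state.
              seen, frontier = {initial}, deque([initial])
              while frontier:
                  state = frontier.popleft()
                  assert invariant(state), f"invariant violated in state {state}"
                  for s in next_states(state):
                      if s not in seen:
                          seen.add(s)
                          frontier.append(s)

          check(lambda n: 0 <= n <= CAPACITY)

          The production system would have far more states than the model; keeping the two in sync is exactly the manual part described above.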

          3 votes