8 votes

Why type systems matter for UX: an example

9 comments

  1. [2]
    skybrian
    Link
    In some cases, having a search box in the UI is the best way to get you where you want to go. (For example the OS X System Preferences and Chrome's settings both have search boxes.)

    5 votes
    1. onyxleopard
      Link Parent
      macOS goes even further in native applications that use Apple’s core APIs. GUI apps collect their functions in the menu bar, organized into more or less standardized menus: File, Edit, View, and so on (which are extensible and flexible, should developers want to break the mold). These menu listings can be hierarchical, and they are searchable from the special Help menu. So if you don’t know where the ‘Delete’ menu item is, just search for it, and the GUI will automatically show you its location in the menu hierarchy. In this way, the API that the GUI exposes to the user is fully discoverable and searchable (and you can bind keyboard shortcuts to menu items as well).
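
      The mechanism is simple enough to sketch. This toy (not Apple’s API; the menu contents are invented) shows how a Help-menu-style search can walk a nested menu hierarchy and report the path to each match:

      ```python
      # Toy sketch of Help-menu search: walk a nested menu hierarchy
      # and report the full path to every item matching the query.
      MENUS = {
          "File": {"New": None, "Open": None, "Close": None},
          "Edit": {"Undo": None, "Cut": None, "Delete": None},
          "View": {"Zoom": {"Zoom In": None, "Zoom Out": None}},
      }

      def find_menu_item(menus, query, path=()):
          """Yield the path to every menu item whose name matches the query."""
          for name, submenu in menus.items():
              here = path + (name,)
              if query.lower() in name.lower():
                  yield " > ".join(here)
              if isinstance(submenu, dict):
                  yield from find_menu_item(submenu, query, here)

      print(list(find_menu_item(MENUS, "delete")))  # ['Edit > Delete']
      ```

      The real Help menu also highlights the item in place, but the searchable part reduces to exactly this kind of tree walk.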
      Unfortunately, when lazy or uninitiated developers buck the norm of macOS and don’t do things the normal way, the user who expects a discoverable, useful set of menu items will be very frustrated. But when it works, it is lovely.

      5 votes
  2. [5]
    wirelyre
    Link
    I'm realizing I probably actually wanted to post his previous article because it's more accurate and creative, though long.

    4 votes
    1. skybrian
      Link Parent
      I think the closest I've seen to this vision is Naked Objects, where the UI is automatically generated from an object-oriented API, so you'd define some classes with methods on them and that's what the user would see. But that was for a single user and a single program.
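
      A hypothetical sketch of the Naked Objects idea (the class and method names are invented for illustration): derive the user-visible "UI" from a class via reflection, so the methods you define are exactly the actions the user sees.

      ```python
      # Sketch: auto-generate a crude "UI" (a catalog of invokable actions)
      # from an ordinary class, the way Naked Objects derives UI from methods.
      import inspect

      class Customer:
          def __init__(self, name):
              self.name = name

          def place_order(self, item: str):
              return f"{self.name} ordered {item}"

          def change_name(self, new_name: str):
              self.name = new_name

      def generated_actions(obj):
          """Return the user-visible actions: public methods and their parameters."""
          actions = {}
          for name, method in inspect.getmembers(obj, inspect.ismethod):
              if name.startswith("_"):
                  continue
              actions[name] = list(inspect.signature(method).parameters)
          return actions

      print(generated_actions(Customer("Ada")))
      # {'change_name': ['new_name'], 'place_order': ['item']}
      ```

      A real Naked Objects framework renders these actions as forms and menus, but the core move is the same: the object model is the interface.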

      2 votes
    2. [3]
      onyxleopard
      Link Parent
      I think this is already the case, though. For those interested in a higher plane of UX, they either eventually make their way to a command line interface, or give up.

      A REPL (such as an interactive shell session) is exactly the sort of programming environment described. As the Friendly Interactive Shell (fish), which is not concerned with maintaining POSIX compliance, jokingly claims:

      Finally, a command line shell for the 90s

      We’ve had this figured out for a good while now.

      Many users are too accustomed to graphical interfaces with spatial metaphors to explore text-based interfaces, though. UX experts have studied this and found that, even with fancy tab completion and the like, text interfaces are not transparently discoverable enough for average users to learn before getting frustrated. Most users want an appliance, not a tool they have to learn before they can get anything out of it.

      There are also neat hybrid approaches like Alfred. I really don’t know why these haven’t caught on more. I’ve seen half-hearted attempts at these sorts of ideas in things like Jira’s modal action interface (invoked in the site interface with the period key), more recent iterations of macOS’s Spotlight, and iOS’s Shortcuts. But overall, nobody seems interested in revolutionary UX design. GUIs seem good enough for those who aren’t highly invested, and textual command line interfaces suffice for those who are.

      2 votes
      1. [2]
        wirelyre
        Link Parent
        I don't think "appliances" and "the end of apps" conflict with each other.

        When I'm reading this earlier article I think more about the Acme text editor or a Smalltalk system, where it looks like you have an appliance, but the parts work by themselves too.

        I think the reason Alfred-alikes haven't caught on is exactly as Chiusano describes: individual resources and actions are so tightly bound within applications that a meta-program can't make a user significantly more productive, even by using multiple interfaces at once. The resources exposed are simply too coarse.

        And regarding REPLs, I think there's still a lot to improve. IDEs have panes and menus, and "quick action" boxes or whatever they're called are an extra modal interface. So pure text input per se isn't even the dominant interface model for sophisticated work.

        1 vote
        1. onyxleopard
          Link Parent
          Ah, but if applications choose text as their API, Alfred can make them more useful! I can wrap any web search I wish with Alfred. If apps provide URI schemes, you can wrap them, too. If web apps provide APIs, they can be wrapped as well, though you might need to write a shim or some glue code as an Alfred workflow. That is for the very advanced user, but non-advanced users can still install workflows developed by other users, so they can still treat Alfred like an appliance (except that workflows don’t have a great central repository to find them).
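
          The heart of such a web-search wrapper is tiny. This sketch (the keywords and URL templates are illustrative, not Alfred’s actual configuration format) just substitutes a URL-encoded query into a template, which is essentially what an Alfred custom search does:

          ```python
          # Sketch of an Alfred-style web search wrapper: substitute the
          # user's query into a per-keyword URL template.
          from urllib.parse import quote_plus

          SEARCHES = {
              "ddg": "https://duckduckgo.com/?q={query}",
              "py": "https://docs.python.org/3/search.html?q={query}",
          }

          def build_search_url(keyword, query):
              template = SEARCHES[keyword]
              return template.format(query=quote_plus(query))

          print(build_search_url("ddg", "naked objects"))
          # https://duckduckgo.com/?q=naked+objects
          ```

          Wrapping an app’s URI scheme works the same way, just with a scheme like `app://` in place of `https://`.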

          Obviously, the way Apple envisioned this with AppleScript (and now, I guess, JavaScript), where native apps would expose their APIs via scripting “dictionaries,” hasn’t caught on.

          And regarding REPLs, I think there's still a lot to improve. IDEs have panes and menus, and "quick action" boxes or whatever they're called are an extra modal interface. So pure text input per se isn't even the dominant interface model for sophisticated work.

          I think multilingual REPLs can work without tacking on graphical interfaces. The textual interface of IPython is so productive for me because it interprets my shell commands and Python expressions, and lets me extend it with its “magic” % syntax.
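
          The routing behind that kind of multilingual REPL can be sketched in a few lines. This toy dispatcher illustrates (it does not reproduce IPython, and the `%upper` magic is invented) how one input line gets sent to different interpreters: a leading `!` goes to the shell, `%` to a magic command, and anything else to the Python evaluator.

          ```python
          # Toy multilingual-REPL dispatcher: route a line to the shell,
          # a registered "magic" command, or the Python evaluator.
          import subprocess

          def dispatch(line, namespace):
              if line.startswith("!"):
                  # Shell escape, as in IPython's !command
                  return subprocess.run(line[1:], shell=True,
                                        capture_output=True, text=True).stdout
              if line.startswith("%"):
                  # Magic command: %name arg
                  name, _, arg = line[1:].partition(" ")
                  return MAGICS[name](arg)
              # Plain Python expression
              return eval(line, namespace)

          MAGICS = {"upper": str.upper}  # a stand-in "magic" command

          ns = {"x": 21}
          print(dispatch("x * 2", ns))         # 42
          print(dispatch("%upper hello", ns))  # HELLO
          ```

          The real thing is far more elaborate (statement execution, history, completion), but the productivity onyxleopard describes comes from exactly this kind of unified front end over several languages.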

  3. [2]
    TeMPOraL
    Link
    Reminds me of the kind of user interfaces that existed on the Lisp Machines, and of the Common Lisp Interface Manager in particular (which, in more recent times, lives on as an open-source project, McCLIM).

    One of the key features there was an easy way to create "presentations" of objects. You had an API for "outputting objects to streams" that let you put text, an image, or a widget representing an object in your program's output, and then perform contextual UI interactions against it. On top of that, if you wanted to execute a command that accepted objects of a particular type, you could just select or point at any presentation of that type on screen to run the command on it.

    Example interaction, riffing off TFA's example. You're in your mail program and click the "Reply to" button. All currently visible e-mails in your inbox get highlighted, and whichever one you click becomes the e-mail you're replying to. Alternatively, you can select an e-mail first and then click "Reply to" to get the standard behavior (since the argument to the "Reply to" command is already selected). Say you want to add recipients: you press an "Add" button in your reply's "To:" field. All visible e-mail addresses get highlighted (including, e.g., in previews of other messages in your inbox), and you can pick them.
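
    A rough Python sketch of the mechanism behind that interaction (all names here are invented; CLIM is a Lisp system, not a Python library): output is registered together with the object it presents, so a command whose argument type is, say, an e-mail can find every matching presentation on screen to offer as a click target.

    ```python
    # Sketch of CLIM-style "presentations": remember what each piece of
    # output presents, so commands can find targets by argument type.
    from dataclasses import dataclass

    @dataclass
    class Email:
        sender: str
        subject: str

    class Screen:
        def __init__(self):
            self.presentations = []  # presented objects, in display order

        def present(self, obj):
            """'Output' an object: record what it presents, not just its text."""
            self.presentations.append(obj)

        def targets_for(self, accepted_type):
            """Everything on screen that a command of this argument type accepts."""
            return [p for p in self.presentations if isinstance(p, accepted_type)]

    screen = Screen()
    screen.present(Email("alice@example.com", "Hi"))
    screen.present("Inbox (2 unread)")  # plain text presents itself
    screen.present(Email("bob@example.com", "Re: Hi"))

    # "Reply to" accepts an Email, so both e-mails light up as targets:
    print([e.subject for e in screen.targets_for(Email)])  # ['Hi', 'Re: Hi']
    ```

    The real system also handles nested presentations and rendering, but the type-directed lookup above is what makes "click any e-mail anywhere" possible.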

    I poked around the codebase of one of the old CLIM implementations in the past, but I didn't do much GUI development with it. My impression of this paradigm is that it requires you to reify and explicitly express more UI concepts than you'd otherwise be prepared to. And from the user's point of view, I think nobody is used to the idea of things on the screen being "presentations" of actual objects; we're more used to specific screens offering specific actions.

    The post seems to be from 2013, and interestingly enough, Microsoft Office sort of embraced pieces of this style as it introduced and evolved the ribbon. Today, when you click on stuff, the contents of the ribbon change to reveal actions you can perform on your selections / in your state. And this turns out to be confusing, at least for me. There's no place I can browse to discover what actions I could take if I selected the right thing. I need to select and hope the action will show up on the ribbon.

    (The interface is buggy, too. Not a week goes by where Outlook confuses itself about the message I'm writing, and the "Attach" button disappears. I have to close my draft and reopen it in a separate window to get the button back.)

    3 votes
    1. wirelyre
      Link Parent
      Wow, thanks! I'm not familiar with the CLIM but I'll definitely play with it and look through the code.

      I noticed the same thing about the ribbon. :-)

      2 votes