I agree with the feeling he has, but this article is almost completely wrong about the points it makes. His follow-up article talking about specific UI problems with specific apps is much better.
To elaborate on what I mean, in this article he says:
our portable computers are thousands of times more powerful than the ones that brought man to the moon. Yet every other webpage struggles to maintain a smooth 60fps scroll on the latest top-of-the-line MacBook Pro. I can comfortably play games, watch 4K videos, but not scroll web pages? How is that ok?
This is a problem mainly with the web and specifically with poorly-written apps, not with our OSes in general and not even that often with native apps (though it's certainly still possible to write slow native apps).
I specifically take exception to this:
Modern text editors have higher latency than 42-year-old Emacs. Text editors! What can be simpler?
Well for one thing, Emacs has an incredibly shitty user interface, so much of what text editors are doing is providing a usable and discoverable user interface that Emacs just gives up on. But beyond that, they're also supporting scalable vector fonts that display full Unicode with support for right-to-left languages, and often inline insertion of graphics and even video. Do you need that in a Unix terminal? Probably not, but you certainly do in a word processor or presentation software.
As an example, here's an article on how the Windows console application works and why typing in it is so much faster than typing in Word or other applications:
We have one bare, super un-special window with no additional controls attached to it. We get our keys fed into us from just barely above the kernel given that we're processing them from window messages and not from some sort of eventing framework common to pretty much any other more complicated UI framework than ours (WPF, WinForms, UWP, Electron). And we dump our text straight onto the window surface using GDI's PolyTextOut with no frills.
In other words, they have a bare-bones window with no modern UI (like tool tips, contextual menus, etc.), and they draw directly to the screen in a way that probably wouldn't be appropriate for other types of tools.
And the very next line in the original piece is also laughably wrong:
As a general trend, we’re not getting faster software with more features. We’re getting faster hardware that runs slower software with the same features.
Bullshit! Looking back just 18 years or so ago, Apple released iPhoto for capturing, organizing, and editing photos. The initial version was basically those 3 features and not much else. Today, the Photos application which replaced iPhoto does those 3 things along with automatically finding faces in all your photos, automatically creating video montages of your photos, uploading your photos to your cloud account and keeping them in sync on all your devices, and more. In fact, even the editing features now work on the GPU allowing you to handle significantly larger files significantly faster. It also includes much more sophisticated editing features. So we are getting faster software with more features.
There was a time when the author's sentiment was spot-on. In the early 1990s software started getting incredibly bloated for the machines it ran on. When Microsoft released Word 6 for the Mac, it was dog slow. You could type and then stop and watch the words fill in. It was an embarrassment. But that's not what things are like today. Furthermore, in those days, we had mostly single-threaded applications with no protected memory. While you could run more than 1 app at a time, it was often a slow and painful experience. When one app was poorly written, it could bring down the whole machine. In the 90s, I had to restart my computer a few times a day regardless of what I was working on because it crashed that often.
Today, that's no longer the case. While there are still crashes, I can go days to weeks without seeing one. When an app does crash, it doesn't bring down my machine. I just restart that app and I'm good to go. Usually whatever I was working on was auto-saved, so I don't lose any work, either. Software is significantly better than it was before. Even in the last 10 years, I can remember Xcode 4 going from crashing a couple times a day to Xcode 8 basically never crashing. Things aren't perfect today, but they're a hell of a lot better than they were. Our apps are generally way faster and have way more features. Some are bloated and slow because they're poorly written or use terrible technologies like Electron. But in general, most are significantly faster and have way more features than they used to and are significantly more robust.
Well for one thing, Emacs has an incredibly shitty user interface, so much of what text editors are doing is providing a usable and discoverable user interface that Emacs just gives up on.
I think there is actually a problem with design philosophy here. We've largely given up on making truly powerful tools with a learning curve in favor of stuff that's accessible to the lowest-common-denominator user. That's fine for general-use software, but it's telling that most of the tools power users rely on these days feel like throwbacks to the 90s. Those tools expected you to learn the esoteric art of using and interacting with the software, and the reward was that once you got good with one you could move super quickly and elegantly to achieve things. Those options are gone now. Everything feels clumsy.
One of the things I have to deal with as a designer is making sure the stuff I make is easy to interact with. Treating it as a "common denominator" problem is reductive: unless you have a specific idea of the capabilities your intended userbase has, it only ever makes sense to aid access to software capabilities as best you can.
As far as I understand, good UX design – that is, the layer of product design that's about usability – is about making easy things easy and difficult things possible.
I don't think a good tool must necessarily be esoteric. It may – such is often a trade-off for power – but it needn't be.
One problem I see is that there is such a focus on making things easy to use (which is a good thing) that we tend to ignore the possibility of being extremely efficient with something. One thing in particular that bugs me is just how often I have to use a mouse. Keyboards have so many keys, yet most of them are simply forgotten in today's software, especially the function keys. Mice are excellent for a lot of the complex software we use, but they are slow; you can only get so fast with a mouse. Primarily keyboard-based interfaces can be blazing fast and can be used without even thinking. Even hyper-specific software today fails to take advantage of this.
Ideally it's best to use both, but modern software is almost entirely centered around the mouse, even when that isn't necessary, and it translates poorly into a good keyboard interface. Granted, touchscreens only make this more common, since developers want to keep interfaces consistent across platforms. I guess I just miss feeling like the Flash every time I'm forced to use software with a slow interface. Some productivity software in particular bugs me in this regard, as it seems trivial to improve upon.
Edit: @Akir below mentioned that the beautification of software affects performance and efficiency. I couldn't agree more.
unless you have a specific idea of the capabilities your intended userbase has
Shouldn't you have that though? Like, shouldn't that be a major priority in specifying new features?
is about making easy things easy and difficult things possible.
Power user functionality, though, would like medium or difficult things made easier. But lots of bad design out there makes easy things easy, medium things hard, and doesn't bother with difficult things because they're "edge cases." And, too often, their idea of "easy things being easy" involves throwing up a "HERE'S WHAT'S NEW IN THIS VERSION" tutorial splash before you can use the application.
And in still more cases, it's not even about making easy things easy so much as making them pretty. I see tons of applications out there where every action triggers a long animation. They look cool the first four times, but after that I feel like I'm just waiting for animations to finish all the time. These days, I move faster than many of the apps I use at work can keep up with, and a big part of it is extraneous animations.
Shouldn't you have that though?
Having what I described would be a fucking godsend.
Most of the time, feedback is severely-diluted. Most users don't even bother reporting bugs and suggesting changes.
For those who do, the feedback is often filtered through multiple lenses of assumptions, false knowledge, disappointment, and/or any number of very human things that make arriving at a specific, satisfactory conclusion very difficult, if not impossible.
Designers have to improvise, often via subtle deceit, to get the data.
A/B testing is one such feat of social engineering: giving one user the new option, leaving the other one with the old design, and seeing which gives the better target metrics (clickthroughs, orders, views, likes, other sorts of interactions...).
Building personas – fictional characters who are meant to be targeted during development – is another, used internally during design process.
Good old angry emails work, too, but these were uncommon to begin with.
Most of the information a designer gets, they have to try to make sense of before it even reaches the part of their brain that works on the solution.
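To make the A/B testing technique described above concrete, here's a minimal sketch in Python. The 50/50 bucketing rule, the simulated event log, and the clickthrough metric are all illustrative assumptions, not any real product's pipeline:

```python
def assign_variant(user_id: int) -> str:
    """Deterministically bucket a user into variant A or B (hypothetical 50/50 split)."""
    return "B" if user_id % 2 else "A"

def clickthrough_rate(events):
    """events: iterable of (variant, clicked) pairs; returns CTR per variant."""
    totals, clicks = {}, {}
    for variant, clicked in events:
        totals[variant] = totals.get(variant, 0) + 1
        clicks[variant] = clicks.get(variant, 0) + (1 if clicked else 0)
    return {v: clicks[v] / totals[v] for v in totals}

# Simulated log: the new design (B) nudges more users to click.
events = [("A", True), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", False)]
rates = clickthrough_rate(events)
```

The "subtle deceit" part is exactly that the user never knows which bucket they landed in; the designer just watches which bucket produces the better target metric.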
But lots of bad design out there makes easy things easy, medium things hard, and doesn't bother with difficult things because they're "edge cases." And, too often, their idea of "easy things being easy" involves throwing up a "HERE'S WHAT'S NEW IN THIS VERSION" tutorial splash before you can use the application.
That's not bad design, exactly: it's design that's overridden by things that are higher on the list of priorities.
Shipping new versions often, even if they suck.
Making more money from the users, even though your app is slow and tracks the user's every move.
Reducing "user churn", even when your app doesn't exactly attract people as is.
(Also, if you ever use the phrase "user churn" in your app's design discussion, it's taken a bad, bad turn somewhere up the road.)
Having what I described would be a fucking godsend.
Most of the time, feedback is severely-diluted. Most users don't even bother reporting bugs and suggesting changes.
For those who do, the feedback is often filtered through multiple lenses of assumptions, false knowledge, disappointment, and/or any number of very human things that make arriving at a specific, satisfactory conclusion very difficult, if not impossible.
Designers have to improvise, often via subtle deceit, to get the data.
These all sound like problems that a decent "engagement manager" or "product owner" ought to manage around. I realize that's basically asking for a unicorn person who is really well organized, technically knowledgeable, a great communicator, and sensitive to design thinking/considerations. But... yeah... now that I say it like that, I guess I get why it's not common.
These all sound like problems that a decent "engagement manager" or "product owner" ought to manage around.
Hell, I'd settle for a single link with a form I can fill out saying, "When I do X, the app crashes." Good luck finding anything like that on a typical website or product. With most companies offering no way to contact them, and instead sending you into a labyrinth of circular document references when you have a question, I just don't bother trying to give feedback anymore. They don't want it and won't listen in most cases.
Nikita's article reads like a painful plea from an idealistic engineer who can only operate on a certain level of quality. Anything else is deeply uncomfortable to him. I understand that, because I'm the same way.
When Microsoft released Word 6 for the Mac, it was dog slow. You could type and then stop and watch the words fill in. It was an embarrassment. But that's not what things are like today.
Not on a top-of-the-line MacBook, no.
I've been using a $300 laptop for several years now. (Apparently, it cost ~$390 at launch.) I had to upgrade the RAM and swap the HDD for an SSD to get decent performance. I can't play most games on it because of the integrated graphics chip, a terrible foundation for graphics processing.
The only reason I use VS Code is that there isn't an equivalent IDE option that performs better.
The main reason I love Indigrid so much is because it is fast, on top of being uniquely-useful; Nikita would've loved to see such a tool.
The reason I can't hold a table of personal contents in Excel for long – regardless of its purpose – is because Excel is painfully slow to open.
I think Nikita's being dramatic and exaggerating, but his sentiment rings true with me. Software is not oriented toward performance quality: as long as it's fine on the developer's tricked-out PC, it's fine to put into production – or so I suspect. There isn't an adequate reason for consumer software not to support low-end hardware that isn't obsolete.
as long as it's fine on the developer's tricked-out PC, it's fine to put into production – or so I suspect.
In the past, many game releases were plagued by performance issues. It makes total sense when you see what machines the developers were using (both the latest AND most expensive hardware). Nowadays it seems quite a bit better.
I think part of this might just be evolving standards, too. IIRC, the vast VAST majority of my issues in the past involved drivers, either for the video or sound cards, and versioning issues between OpenGL and DirectX.
We also can't overstate how much it helps to have consolidated platforms, like GOG or Steam, maintaining that version control for us.
I get where you are coming from and I partially agree with you. But things were different on an app-by-app basis.
The biggest change in the past decade or so in software design is that modern commercial software is all required to be beautiful. Using fast, responsive, and predictable native UI widgets became strictly unfashionable. So programs ballooned in size as more and more media was required to make them pretty. Photoshop CS2 only took up about 300MB of hard drive space, but the current version takes up well over 1GB. Sure, the modern version does have more functionality than CS2, but not four times as much.
A big part of the problem is that developers often target devices that are better than what many consumers have, and they build their applications to take advantage of the better hardware. The reason I bring up the trend of graphically intensive applications is that they performed the worst on consumer hardware, where they were constantly bottlenecked by storage speed. Commercial software was already being written on the assumption that you had an SSD even while SSD prices were still sky-high. Apple is probably the worst offender here, offering HDD-only Macs when their OS is constantly pulling data out of storage, but they seem to be getting better since only their iMacs have that option today.
That would also explain why modern applications seem to run much better today; modern PCs are much more homogeneous as far as performance goes. The speed of a single CPU core has stagnated over the past few years, and AMD's resurgence has ensured that basically every CPU is fast and usually at least quad-core now. Even low-end CPU-integrated graphics are good enough for most non-gaming tasks. Memory has gotten cheaper and faster, and so have SSDs. PC manufacturers are still offering HDD-only models on the lowest end, but SSDs are saturating the midpoint and lower-middle markets, and the lowest end is being largely supplanted by flash-backed Chromebooks.
Here’s a Twitter polemic from a few days ago that shares a similar disenchantment but approaches the issue from a more humanistic, design-oriented perspective.
She rails against poor design decisions, offers better alternatives, and digs into the underlying reasons why the poor decisions were made in the first place and why so much software seems to keep getting worse. And she provides lots of examples and resources for further exploration.
It’s incisive, unsparing, righteously indignant, and hilarious. Also very long ;-) (100+ tweets) — worth it though if you’re interested in these things.
Goddamn. I'm halfway(?) through the thread, and it's fascinating. Every second tweet makes me want to register on Twitter and ask Amy questions.
(She even talks about hierarchical lists. I'm making Intergrid, which is the epitome of hierarchical lists. I have so many questions I want to ask...)
Number 71 really caught my eye.
extremely rigid, fill out a form, there’s your object
extremely fluid, place anything anywhere, whatever you want
what we need is a combo: work free form then create varied structured “views” into/of/with your freeform data
Recently I was sweating creativity so I fired up my music score editor. A few minutes later I was at a dead standstill. It's agonizing trying to compose on a computer. Like you can't just write notes and play with them, you have to conceive a whole musical fragment before you can even type anything.
I think there are some primitive attempts at solving this in the "plain text" domain. It just boggles me that apparently no current software treats music entry as "forming structure from interlinked musical ideas".
Do you have an idea – even a vague one – for how a better software could be implemented in this field? What would you want to have that's achingly-absent?
Do you have an idea – even a vague one – for how a better software could be implemented in this field? What would you want to have that's achingly-absent?
Yes, thanks for asking.
I have dozens of pages of notes on my ideal music notation software. My designs started from the visual interface and input methods, but they turned into more fundamental questions about representing music in computers.
Basically I think the problem is that modern notation programs are essentially MIDI editors, plus graphics editors. There is a single line of music that is strictly divided into measures, then decorated with markings.
Instead, we should treat fragments of music by themselves. I have an idea, so I type it and it just floats there. It's not attached to anything. I can edit it in place. Then, if I want, I can splice it into my score, setting a time signature, or whatever.
There's a lot more to deal with when you take those ideas as far as possible. I'm still working on the basic representation. But I really want to make this into software as soon as I can. And my other personal projects seem to be wrapping up...?
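The "floating fragment" idea described above could start as something very small. Here's a hedged sketch in Python; the class names, the note-name representation, and the splice operation are purely hypothetical illustrations, not anyone's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    """A free-floating musical idea: just an ordered list of note names."""
    notes: list

@dataclass
class Score:
    """A score assembled by splicing fragments in at chosen positions."""
    time_signature: str = "4/4"
    notes: list = field(default_factory=list)

    def splice(self, fragment: Fragment, at: int) -> None:
        """Insert a fragment's notes into the score at the given index."""
        self.notes[at:at] = fragment.notes

# An idea "just floats there" until we decide to attach it.
idea = Fragment(notes=["C4", "E4", "G4"])
idea.notes[1] = "Eb4"          # edit it in place, unattached to any score

score = Score(time_signature="3/4", notes=["A3", "A3"])
score.splice(idea, at=1)       # now commit it into the score
```

The point of the model is that a `Fragment` has no measure boundaries or time signature of its own; those only get imposed when it's spliced into a `Score`.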
Sounds like a fun project.
If you're ever ready to release it – or at least showcase – please let us know. I'm eager to see the solution you come up with.
I’ve been programming for 15 years now. Recently, our industry’s lack of care for efficiency, simplicity, and excellence started really getting to me, to the point of me getting depressed by my own career and IT in general.
...
Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design. Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. All planes converged to the optimal size/form/load and basically look the same.
Only in software, it’s fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be ok with it. People are often even proud about how inefficient it is, as in “why should we worry, computers are fast enough”
...
So I want to call it out: where we are today is bullshit. As engineers, we can, and should, and will do better. We can have better tools, we can build better apps, faster, more predictable, more reliable, using fewer resources (orders of magnitude fewer!). We need to understand deeply what we are doing and why. We need to deliver: reliably, predictably, with topmost quality. We can—and should–take pride in our work. Not just “given what we had…”—no buts!
This comment is right as far as it goes, but it misses the real underlying problem, which is that things that should be driven by engineering concerns - especially concerns around functional correctness and reliability - are overridden by desire for short-term growth in much of the industry, and are overlooked because of monopoly or lock-in advantages in the rest.
A lot of this is client-driven though, largely because people have no idea what reasonable expectations are, and purchase decisions favor speed and cost over under-the-hood quality. That's largely because speed and cost are measurable things you can hold people to in a contract, while things like how elegant the solution is are not. So where do you go with it?
If people were willing to let things unfold slowly, it would work a lot better, but in almost every "digital transformation" I've seen, people aren't even making a big technology change until several years after it's overdue, so it's always urgent. Nobody has been very forward-looking about developing stuff, and the development time horizons are so long that you can't keep the lights on without shipping something.
Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design. Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. All planes converged to the optimal size/form/load and basically look the same.
One funny thing is, as software has insinuated itself more and more into the design and use of cars, planes, and architecture those have stopped "just working" too. The mechanical/physical stuff works better than it ever has before, but the software (and sometimes firmware) are just trash. And of course it is! If the software dev world can't manage to put out parsimonious and elegant software, what hope do companies for whom software isn't a core competency have?
I go back and forth on how much I agree with this guy's points, but there is definitely at least some truth to what he is saying. My company is going through this right now. The engineering teams that build our products and features are so far abstracted from everything, for the sake of "making it easy to build features," that shit is now terrible.

We have thousands of CPUs just running Node.js because teams think resources and VMs are free, since they come out of the ops budget rather than the engineering budget. We have systems that work perfectly until they hit a slow network connection, and then they can cascade failures because engineers haven't thought about latency. New releases can take literal hours because of legacy Java code that is insanely inefficient but that no one will take the time to rewrite. People don't think about hardware specs at all anymore because the most common programming languages let us abstract memory and hardware away. We do it with Docker and VMs. You can always throw more resources at a problem if you have the budget.

We're currently revamping our infrastructure to bill engineering teams based on resource utilization (easy with servers and VMs, harder in Kubernetes) to get people to feel the pain of resource management, and we'll be there with documents upon documents of suggestions on how to be more efficient.
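A chargeback scheme like the one described can start out embarrassingly simple. This is a hedged sketch in Python, where the record format, team names, and the flat per-CPU-hour rate are all made up for illustration; in practice the records would come from a VM inventory or metrics pipeline:

```python
from collections import defaultdict

def chargeback(usage_records, rate_per_cpu_hour: float):
    """Roll per-VM usage records up into a per-team bill.

    usage_records: iterable of (team, cpu_hours) tuples.
    Returns a dict mapping each team to its total charge.
    """
    totals = defaultdict(float)
    for team, cpu_hours in usage_records:
        totals[team] += cpu_hours
    return {team: hours * rate_per_cpu_hour for team, hours in totals.items()}

# Hypothetical month of usage exported from the infrastructure.
records = [("payments", 120.0), ("search", 300.0), ("payments", 80.0)]
bill = chargeback(records, rate_per_cpu_hour=0.05)
```

The design point is less about the arithmetic than about attribution: once each VM is tagged with an owning team, the "resources are free" illusion disappears, because the bill lands in that team's budget.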
I don't think you can blame any one specific person for this, but I have noticed that corporate operations (especially ones running at larger scales), when they come across a problem, will at least 75% of the time choose to just throw money at it rather than invest in an efficient in-house solution. It's why companies like Oracle and IBM continue to make obscene profits on their enterprise solutions in spite of the multitude of competing options.
That sounds awful! I hope you get to make it better. I have definitely been there.
Right now, though, I'm working on a professional product on the desktop, and we spend an enormous amount of time working on performance. We hired a full-time optimization engineer, and the rest of us have recently been tasked with optimizing each of our own parts of the app to work better with newer hardware. (Which should also help with existing hardware, though to a lesser extent.)
I guess that's why the article rubbed me the wrong way a bit. I don't disagree that it's a general problem in our field, but at the same time, his narrative doesn't fully match with reality. I've been using computers since the late 70s and current applications and OSes definitely are doing a lot more than they used to. It's not perfect, but it is sooooo much better. I'd love to see it get even better, but I do appreciate how far we've come.
I agree with the feeling he has, but this article is almost completely wrong about the points it makes. His follow-up article talking about specific UI problems with specific apps is much better.
To elaborate on what I mean, in this article he says:
This is a problem mainly with the web and specifically with poorly-written apps, not with our OSes in general and not even that often with native apps (though it's certainly still possible to write slow native apps).
I specifically take exception to this:
Well for one thing, Emacs has an incredibly shitty user interface, so much of what text editors are doing is providing a usable and discoverable user interface that Emacs just gives up on. But beyond that, they're also supporting scalable vector fonts that display full Unicode with support for right-to-left languages, and often inline insertion of graphics and even video. Do you need that in a Unix terminal? Probably not, but you certainly do in a word processor or presentation software.
As an example, here's an article on how the Windows console application works and why typing in it is so much faster than typing in Word or other applications:
In other words, they have a bare-bones window with no modern UI (like tool tips, contextual menus, etc.), and they draw directly to the screen in a way that probably wouldn't be appropriate for other types of tools.
And the very next line in the original piece is also laughably wrong:
Bullshit! Looking back just 18 years or so ago, Apple released iPhoto for capturing, organizing, and editing photos. The initial version was basically those 3 features and not much else. Today, the Photos application which replaced iPhoto does those 3 things along with automatically finding faces in all your photos, automatically creating video montages of your photos, uploading your photos to your cloud account and keeping them in sync on all your devices, and more. In fact, even the editing features now work on the GPU allowing you to handle significantly larger files significantly faster. It also includes much more sophisticated editing features. So we are getting faster software with more features.
There was a time when the author's sentiment was spot-on. In the early 1990s software started getting incredibly bloated for the machines it ran on. When Microsoft released Word 6 for the Mac, it was dog slow. You could type and then stop and watch the words fill in. It was an embarrassment. But that's not what things are like today. Furthermore, in those days, we had mostly single-threaded applications with no protected memory. While you could run more than 1 app at a time, it was often a slow and painful experience. When one app was poorly written, it could bring down the whole machine. In the 90s, I had to restart my computer a few times a day regardless of what I was working on because it crashed that often.
Today, that's no longer the case. While there are still crashes, I can go days or weeks without seeing one. When an app does crash, it doesn't bring down my machine; I just restart that app and I'm good to go. Usually whatever I was working on was auto-saved, so I don't lose any work, either. Software is significantly better than it was before. Even in the last 10 years, I can remember Xcode going from crashing a couple times a day in version 4 to basically never crashing by version 8. Things aren't perfect today, but they're a hell of a lot better than they were. Some apps are bloated and slow because they're poorly written or use terrible technologies like Electron, but in general, most are significantly faster, have far more features, and are significantly more robust than they used to be.
I think there is actually a problem with design philosophy here. We've largely given up on making truly powerful tools with a learning curve in favor of stuff that's accessible to the lowest-common-denominator user. That's fine for general-use software, but it's telling that most of the stuff power users rely on these days feels like a throwback to the 90s. Those tools expected you to learn the esoteric art of using and interacting with the software, and the reward was that once you got good with them you could move super quickly and elegantly. Those options are gone now. Everything feels clumsy.
One of the things I have to deal with as a designer is making sure the stuff I make is easy to interact with. Treating it as a "common denominator" problem is reductive: unless you have a specific idea of the capabilities of your intended userbase, it only ever makes sense to make the software's capabilities as accessible as you can.
As far as I understand, good UX design – that is, the layer of product design that's about usability – is about making easy things easy and difficult things possible.
I don't think a good tool must necessarily be esoteric. It may – such is often a trade-off for power – but it needn't be.
One problem I see is that there is such a focus on making things easy to use (which is a good thing) that we tend to ignore the possibility of being extremely efficient with something. One thing in particular that bugs me is just how often I have to use a mouse. Keyboards have so many keys, and most software today simply forgets them, especially the function keys. Mice are excellent for a lot of the complex software we use, but they are slow; you can only get so fast with a mouse. Primarily keyboard-based interfaces can be blazing fast and can be used without even thinking. Even hyper-specific software today fails to take advantage of this.
Ideally it's best to use both, but modern software is almost entirely centered on the mouse, even when it isn't necessary, and that translates poorly to a good keyboard interface. Granted, touchscreens only make this more common, since vendors want to keep interfaces consistent across platforms. I guess I just miss feeling like the Flash whenever I'm forced to use software with a slow interface. Some productivity software in particular bugs me in this regard, as it seems trivial to improve upon.
Edit: @Akir below mentioned that the beautification of software affects performance and efficiency. I couldn't agree more.
Shouldn't you have that though? Like, shouldn't that be a major priority in specifying new features?
Power-user functionality, though, wants medium and difficult things made easier. But lots of bad design out there makes easy things easy, makes medium things hard, and doesn't bother with difficult things at all because they're "edge cases." And, too often, the idea of "easy things being easy" involves throwing up a "HERE'S WHAT'S NEW IN THIS VERSION" tutorial splash before you can use the application.
And in still more cases, it's not even about making easy things easy so much as making them pretty. I see tons of applications with long animations that look cool the first 4 times, and after that I feel like I'm just waiting for animations to finish all the time. These days I'm faster than many of the apps I use at work can keep up with, and a big part of that is extraneous animations.
Having what I described would be a fucking godsend.
Most of the time, feedback is severely diluted. Most users don't even bother reporting bugs or suggesting changes.
For those who do, the feedback is often filtered through multiple lenses of assumptions, false knowledge, disappointment, and/or any number of very human things that make arriving at a specific satisfactory conclusion very difficult, if not impossible.
Designers have to improvise, often via subtle deceit, to get the data.
A/B testing is one such feat of social engineering: giving one group of users the new design, leaving another group with the old one, and seeing which yields the better target metrics (clickthroughs, orders, views, likes, other sorts of interactions...).
Building personas – fictional characters who are meant to be targeted during development – is another, used internally during the design process.
Good old angry emails work, too, but these were uncommon to begin with.
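Of those, the A/B split is the most mechanical. Here's a minimal sketch of the assignment step – the function name, hashing scheme, and split ratio are my own illustration, not any particular product's implementation: hash a stable user ID so each user always lands in the same bucket.

```python
import hashlib

def ab_bucket(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to "A" (control) or "B" (treatment).

    Hashing user_id together with the experiment name keeps each user's
    bucket stable across sessions and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits onto [0, 1] and compare to the split ratio.
    score = int(digest[:8], 16) / 0xFFFFFFFF
    return "B" if score < treatment_share else "A"
```

Determinism is the point: a user who saw the new design yesterday sees it again today, so the metrics you compare per bucket aren't muddied by people bouncing between variants.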
Most of the information a designer gets, they have to try to make sense of before it even reaches the part of their brain that works on the solution.
That's not bad design, exactly: it's design that's overridden by things that are higher on the list of priorities.
Shipping new versions often, even if they suck.
Making more money from the users, even though your app is slow and tracks the user's every move.
Getting higher "user churn", even when your app doesn't exactly attract people as is.
(Also, if you ever use the phrase "user churn" in your app's design discussion, it's taken a bad, bad turn somewhere up the road.)
These all sound like problems that a decent "engagement manager" or "product owner" ought to manage around. I realize that's basically asking for a unicorn: someone who is really well organized, technically knowledgeable, a great communicator, and sensitive to design thinking and considerations. But... yeah... now that I say it like that, I guess I get why it's not common.
Hell, I'd settle for a single link with a form I can fill out saying, "When I do X the app crashes." Good luck finding anything like that on a typical website or product. With most companies having no way to contact them, and now trying to send you into a labyrinth of circular document references when you have a question, I just don't even bother trying to give feedback anymore. They don't want it and won't listen in most cases.
A Richard Feynman of design managers is what you're asking for.
Nikita's article reads like a painful plea from an idealistic engineer who can only operate on a certain level of quality. Anything else is deeply uncomfortable to him. I understand that, because I'm the same way.
Not on a top-of-the-line Macbook, no.
I've been using a $300 laptop for several years now. (Apparently, it cost ~$390 at launch.) I had to upgrade the RAM and swap the HDD for an SSD to get decent performance. I can't play most games on it because of the integrated graphics chip, a terrible foundation for graphics processing.
The only reason I use VS Code is that there isn't an equivalent IDE option that performs better.
The main reason I love Indigrid so much is that it is fast, on top of being uniquely useful; Nikita would've loved to see such a tool.
The reason I can't keep a table of personal records in Excel for long – regardless of its purpose – is that Excel is painfully slow to open.
I think Nikita's being dramatic and exaggerating, but his sentiment rings true with me. Software is not oriented toward performance: as long as it's fine on the developer's tricked-out PC, it's fine to put in production – or so I suspect. There isn't an adequate reason for consumer software not to support low-end hardware that isn't obsolete.
In the past, many game releases were plagued by performance issues. It makes total sense when you see what machines the developers were using (both the latest AND most expensive hardware). Nowadays it seems quite a bit better.
I think part of this might just be evolving standards, too. IIRC the vast, VAST majority of my issues in the past involved drivers, either for the video or sound cards, and version-compatibility issues with OpenGL or DirectX.
We also shouldn't underrate how much it helps to have consolidated platforms, like GoG or Steam, to manage those versions for us.
I get where you are coming from and I partially agree with you. But things were different on an app-by-app basis.
The biggest change in the past decade or so in software design is that modern commercial software is all required to be beautiful. Using fast, responsive, and predictable native UI widgets became strictly unfashionable. So programs ballooned in size as more and more media was required to make them pretty. Photoshop CS2 only took up about 300MB of hard drive space, but the current version takes up well over 1GB. Sure, the modern version does have more functionality than CS2, but not four times as much.
A big part of the problem is that developers often target devices that are better than what many consumers have, and they build their applications to take advantage of the better hardware. I'm calling attention to the trend of graphically intensive applications because they performed worst on consumer hardware, where they were constantly bottlenecked by storage speed. Commercial software was already being written assuming you had an SSD even while SSD prices were still sky-high. Apple is probably the worst example of this, offering HDD-only Macs while their OS constantly pulls data out of storage, but they seem to be getting better, since only their iMacs have that option today.
That would also explain why modern applications seem to run much better today; modern PCs are much more homogeneous as far as performance goes. The speed of a single CPU core has stagnated over the past few years, and AMD's resurgence has assured that basically every CPU is fast and usually at least quad-core now. Even low-end CPU-integrated graphics are good enough for most non-gaming tasks. Memory has gotten cheaper and faster, and so have SSDs. PC manufacturers are still offering HDD-only models on the lowest end, but SSDs are saturating the midpoint and lower-middle markets, and the lowest end is being largely supplanted with flash-backed Chromebooks.
Here’s a Twitter polemic from a few days ago that shares a similar disenchantment but approaches the issue from a more humanistic, design-oriented perspective.
She rails against poor design decisions, offers better alternatives, and digs into the underlying reasons why the poor decisions were made in the first place and why so much software seems to keep getting worse. And she provides lots of examples and resources for further exploration.
It’s incisive, unsparing, righteously indignant, and hilarious. Also very long ;-) (100+ tweets) — worth it though if you’re interested in these things.
Goddamn. I'm halfway(?) through the thread, and it's fascinating. Every second tweet makes me want to register on Twitter and ask Amy questions.
(She even talks about hierarchical lists. I'm making Intergrid, which is the epitome of hierarchical lists. I have so many questions I want to ask...)
Thank you very much for sharing the thread.
Number 71 really caught my eye.
Recently I was sweating creativity, so I fired up my music score editor. A few minutes later I was at a dead standstill. It's agonizing trying to compose on a computer: you can't just write notes and play with them; you have to conceive a whole musical fragment before you can even type anything.
I think there are some primitive attempts at solving this in the "plain text" domain. It just boggles me that apparently no current software treats music entry as "forming structure from interlinked musical ideas".
Do you have an idea – even a vague one – of how better software could be implemented in this field? What would you want to have that's achingly absent?
Yes, thanks for asking.
I have dozens of pages of notes on my ideal music notation software. My designs started from the visual interface and input methods, but they turned into more fundamental questions about representing music in computers.
Basically I think the problem is that modern notation programs are essentially MIDI editors, plus graphics editors. There is a single line of music that is strictly divided into measures, then decorated with markings.
Instead, we should treat fragments of music by themselves. I have an idea, so I type it and it just floats there. It's not attached to anything. I can edit it in place. Then, if I want, I can splice it into my score, setting a time signature, or whatever.
There's a lot more to deal with when you take those ideas as far as possible. I'm still working on the basic representation. But I really want to make this into software as soon as I can. And my other personal projects seem to be wrapping up...?
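A toy sketch of that "fragments first" idea – all the names and types here are my own invention, since the parent comment describes a design goal rather than an implementation: fragments are standalone, editable lists of notes, and a score is just the thing you eventually splice them into.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Note:
    pitch: str       # e.g. "C4"
    duration: float  # in beats

@dataclass
class Fragment:
    """A free-floating musical idea; it isn't attached to any score."""
    notes: List[Note] = field(default_factory=list)

    def edit(self, index: int, pitch: str) -> None:
        # Edit the idea in place, independently of any score it may join later.
        self.notes[index] = Note(pitch, self.notes[index].duration)

@dataclass
class Score:
    time_signature: Tuple[int, int]
    fragments: List[Fragment] = field(default_factory=list)

    def splice(self, fragment: Fragment, at: Optional[int] = None) -> None:
        # Only here does a fragment become part of the score.
        if at is None:
            self.fragments.append(fragment)
        else:
            self.fragments.insert(at, fragment)

idea = Fragment([Note("C4", 1.0), Note("E4", 1.0), Note("G4", 2.0)])
idea.edit(0, "D4")              # play with the idea while it still floats
score = Score(time_signature=(4, 4))
score.splice(idea)              # ...and splice it in once it's ready
```

The contrast with the MIDI-editor model is that the fragment exists and is editable before any measure, time signature, or score position is decided; attaching it is a separate, later step.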
Sounds like a fun project.
If you're ever ready to release it – or at least showcase – please let us know. I'm eager to see the solution you come up with.
...
...
A lot of this is client-driven, though, largely because people have no idea what reasonable expectations are, and purchasing decisions weigh speed and cost over quality under the hood. This is largely because speed and cost are measurable things you can hold people to in a contract, while things like how elegant the solution is are not. So where do you go with it?
If people were willing to let things unfold slowly it would work a lot better, but in almost every "digital transformation" I've seen, people aren't even making a big technology change until several years after it's overdue so it's always urgent. Nobody has been very forward looking about developing stuff, and the development time horizons are so long that you can't keep the lights on without shipping something.
One funny thing is, as software has insinuated itself more and more into the design and use of cars, planes, and architecture those have stopped "just working" too. The mechanical/physical stuff works better than it ever has before, but the software (and sometimes firmware) are just trash. And of course it is! If the software dev world can't manage to put out parsimonious and elegant software, what hope do companies for whom software isn't a core competency have?
I go back and forth on how much I agree with this guy's points, but there is definitely at least some truth to what he is saying. My company is going through this right now. The engineering teams that build our products and features are so far abstracted from everything, for the sake of "making it easy to build features," that shit is now terrible. We have thousands of CPUs just running node.js because teams think resources and VMs are free, since they come out of the ops budget, not the engineering budget. We have systems that work perfectly until they hit a slow network connection and then cascade into failures because engineers haven't thought about latency. New releases can take literal hours because of legacy Java code that is insanely inefficient but that no one will take the time to rewrite. People don't think about hardware specs at all anymore because the most common programming languages abstract memory and hardware away, and we do the same with Docker and VMs. You can always throw more resources at a problem if you have the budget.

We're currently revamping our infrastructure to bill engineering teams based on resource utilization (easy with servers and VMs, harder in Kubernetes) to get people to feel the pain of resource management, and we'll be there with documents upon documents of suggestions on how to be more efficient.
If you could, what would you change, in your case and in general, to improve the situation?
I don't think you can blame any one specific person for this, but I have noticed that corporate operations (especially ones running at larger scales), when they come across a problem, will choose at least 75% of the time to just throw money at it rather than invest in an efficient in-house solution. It's why companies like Oracle and IBM continue to make obscene profits on their Enterprise solutions in spite of the multitude of competing options.
That sounds awful! I hope you get to make it better. I have definitely been there.
Right now, though, I'm working on a professional product on the desktop, and we spend an enormous amount of time working on performance. We hired a full-time optimization engineer, and the rest of us have recently been tasked with optimizing each of our own parts of the app to work better with newer hardware. (Which should also help with existing hardware, though to a lesser extent.)
I guess that's why the article rubbed me the wrong way a bit. I don't disagree that it's a general problem in our field, but at the same time, his narrative doesn't fully match with reality. I've been using computers since the late 70s and current applications and OSes definitely are doing a lot more than they used to. It's not perfect, but it is sooooo much better. I'd love to see it get even better, but I do appreciate how far we've come.