Many of the searches that lead people to my blog end up at the article “Why does software suck?”. That article is written in Dutch, so it is not very understandable for most English-speaking people who land there. So I decided to translate it, and this is the result.
Why does software suck?
What is so complicated, complex, and inhumane about software that almost nobody seems to get it Just Right? The evidence that software sucks is abundant: viruses, spyware, bugs, crashes, updates, upgrades, versions, computer illiterates and devices that are impossible to operate.
“To Err is Human”
The question started wandering around in my head. And as so often happens (in my head), I started looking for causes. What is so inherently hard or complicated about software, or about creating it? I came across “The Design of Everyday Things” by Donald A. Norman, whose chapter 5 is titled “To Err is Human”. And that might as well be the core of the entire problem.
Computers (or, in a broader sense, technology) do not err. And because of that, they cannot cope very well with someone who makes mistakes while interacting with a device. But people do make mistakes, and often. In writing this piece, I have used the backspace and delete keys multiple times, I have struggled to get the link right, and at least one sentence has been completely rewritten. And that in fewer than 15 lines.
Slowly I have come to the conclusion that people simply don’t have the brains to write Good Software the way we can build houses, make music or write poetry. The basics of software are so inherently mathematical, and so far removed from the normal use of the human brain, that it is almost impossible to write error-free software. Yes, there are ‘solutions’ with good-sounding names such as ‘managed code’, but they solve the problems only in the lowest layers. They will stop you from making the most basic of errors (incorrect memory management, stack corruption, etc.), but they don’t solve the conceptual complications of software. But what are these conceptual complications?
Everyone who has ever touched a programming language knows the phrase “Hello World!”. Many might also know the next quote (or perhaps not; Google didn’t give me many results):
“Every program more complex than ‘Hello World!’ has bugs”
Those bugs can be brought back to several basic problems in human-computer interaction:
- computers do not accept inaccuracies, errors or guesswork. It’s one or zero, nothing in between, unless programmed or engineered otherwise (which brings us recursively back to base problem 1),
- the human brain works remarkably badly in an environment where everything has to be defined accurately,
- the world is a complex place, and combined with problems 1 and 2 this creates insurmountable problems.
And then, there are some side-problems that are not making things easier:
- computer software is written by writers of computer software, not by its users,
- for some reason, software needs to get more and more features (probably to keep the users paying for the mistakes the programmers make),
- the trade of computer programming is fairly young, thirty, maybe forty years; a lot still has to be developed before we are ‘there’, if we ever get there.
The last three problems have possible solutions: better testing, better specifications, and so on. They are not insurmountable or structural problems. The first three are where the real problem lies, and that is where I will go into detail.
Computers don’t accept inaccuracies
The first part of the problem is that computers do not accept inaccuracies unless designed to do so. If you want to solve this problem, you will have to program a system that not only accepts inaccuracies, but that can learn and develop as well, also in ways you never could have imagined initially. Especially that ‘could have’ is important: you have to anticipate what you cannot anticipate. This is still unsolved in current AI. There are some self-learning systems that accept inaccuracies, but systems that learn things that haven’t been meta-learnt do not, as far as I know, exist yet.
As a software engineer I know how important (and incredibly annoying) it is to have to describe everything accurately. Even trivial syntax errors are not fixed automatically. They are caught, but all in all we are not so confident that the compiler could change the code and recompile it by itself (or better yet, just compile the code with the minor syntax errors still in it).
Now, minor syntax errors are trivial and as such easily fixed. But there are a lot of other mistakes you can make, precisely because computers do not accept inaccuracies: memory allocation errors (reasonably well caught by managed code and many different languages), weird constructs that are inherent to the computer architecture or the language, or simply lazy programmers.
The human brain works remarkably badly in an environment where everything has to be defined accurately
First of all, I have to admit I am not a psychologist, neurologist or biologist. I am a software engineer who has read some books on the subject, so there is a good chance I am hopelessly wrong. But what I have found is that the human brain works best in an environment where not everything is laid out exactly. Trivial things are filled in and carried out unconsciously. Assumptions (including wrong ones) are made and errors are fixed. There is a trial-and-error process, there is learning and, that is what this is about, there are inaccuracies. Human functioning doesn’t seem to be black or white, but many shades of grey. That’s how we work and that is how the world works.
Hence the problems start when human beings have to translate problems into a computer program. Everything has to be defined exactly in the domain analysis phase. It is not possible to let the computer make the assumptions, or to give it values like ‘around 37.5’. You will have to define the limits within which the values are correct. For that matter, it has some analogy with laws. Luckily, there we have judges who can ‘humanise’ the process. Those judges do not exist in computer land. Boundaries are hard, and only because people are doing the trial-and-error cycles can a computer program adapt to the ‘real world’.
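To make this concrete, here is a minimal sketch (mine, not from the original article; the function name `is_about` and the 5% tolerance are assumptions) of what a vague human notion like ‘around 37.5’ forces a programmer to spell out:

```python
# A human says "around 37.5"; a computer needs explicit bounds.
# Hypothetical helper: the 5% default tolerance is an arbitrary choice.
def is_about(value, target, tolerance=0.05):
    """Return True if value lies within +/- tolerance (as a fraction) of target."""
    return abs(value - target) <= tolerance * abs(target)

print(is_about(37.2, 37.5))   # True: within 5% of 37.5
print(is_about(42.0, 37.5))   # False: outside the defined limits
```

Note that the fuzziness has not disappeared; it has merely been pushed into an exact number (`tolerance`) that a human still had to pick.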
This is true for programs that have already been made, but the writing of these programs introduces the same problem. Before a program arrives at its limited functionality, it goes through numerous human trial-and-error cycles. During programming you forget the trivial things and you make the same mistakes again and again. And a program that is ‘done’ isn’t done; it is just ‘done enough’. After years, errors can still surface that are the result of the human-computer discrepancy.
The world is complex
Like I said before: people make mistakes. In 15 lines I made several mistakes. Imagine how many mistakes exist in the 60,000 lines of a factory automation application, or the 40 million (!) lines of Windows XP. A lot of lines, a lot of code; that’s one side of the story. The other side is the complexity. Computers are complex machines (a lot of parts, a lot of modi operandi, a lot of possibilities) and hence the controlling software is complex as well. The world around us is also complex, with many objects, many interactions and a lot of unconscious acts. This should give us a good mapping, except that the ‘real-world’ complexities do not have to be controlled in an exact way. Originally I am an electrical engineer, and as such I have learnt that many variables do not have to be exact at all. Everything has some kind of resolution. If you calculate a resistance of 3.27 kΩ, it is more than okay to fit a resistor of 3.6 kΩ: a discrepancy of 10%. A closer-to-home example: you do not have to control your arm to the millimetre to pick up a cup of tea. Software does not accept these kinds of errors; 1 byte wrong in memory and your program makes mistakes, or in the best case, it crashes.
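Both halves of that contrast can be sketched in a few lines of Python (my illustration, not from the original article): the resistor discrepancy is harmless, while a single flipped bit in the same number’s in-memory representation is catastrophic.

```python
import struct

# 1. The resistor example from the text: a calculated 3.27 kOhm replaced
#    by a standard 3.6 kOhm part is roughly a 10% discrepancy, and the
#    circuit still works fine.
calculated, fitted = 3270.0, 3600.0
discrepancy = (fitted - calculated) / calculated
print(f"discrepancy: {discrepancy:.1%}")  # → discrepancy: 10.1%

# 2. Software gets no such slack: flip one bit in the 8-byte IEEE-754
#    representation of the same number and it becomes unrecognisable.
packed = bytearray(struct.pack(">d", calculated))
packed[0] ^= 0x10                         # flip a single exponent bit
corrupted = struct.unpack(">d", bytes(packed))[0]
print(corrupted)                          # an astronomically large number
```

A 10% error in the analogue world is within tolerance; a one-bit error in the digital world destroys the value entirely.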
Complexities in software and source code are among the biggest everyday problems of the software engineer. Decent memory management (also in managed languages such as Java and C#!), multithreading, (massive) parallelism and real-time aspects are all problems that are not easy for a human being to grasp, let alone to implement in a product without errors. What we should be looking for, or trying to develop, is a programming language, environment or device that catches these problems for us, or better yet, makes sure we don’t make them at all. There is research going on, and there are some products on the market, but we are not there by a long shot. If we ever get there, because in my opinion it is a conceptual problem. The solution may lie in biological computers, as they exist in eXistenZ. Who knows? We’ll see.
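As a small taste of why multithreading belongs on that list, here is a sketch (my example, not from the original article): two threads increment a shared counter, and only an explicit lock makes the result reliable, because the innocent-looking `count += 1` is really a read, a modify and a write that can interleave between threads.

```python
import threading

# Shared mutable state: the classic source of hard-to-grasp bugs.
count = 0
lock = threading.Lock()

def worker(iterations=100_000):
    global count
    for _ in range(iterations):
        with lock:          # remove this lock and updates may silently be lost
            count += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)  # 200000, but only guaranteed because of the lock
```

The unlocked version compiles and usually appears to work, which is exactly the kind of error a human brain is poorly equipped to anticipate.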