Is there any such thing as bug-free software?
I like to look at IT development from two angles: from the point of view of developers and from the point of view of society.
I can talk a great deal from the point of view of developers, as I was also a developer to varying degrees during my professional life. I do not wish to reel off my life story right here. Perhaps I should do that on a separate page.
Developers are the ones who, starting from a concept, develop the software that will implement that concept to run on a box of electronics, which will hopefully be available to an end-user.
I suppose we are familiar with the terms: hardware and software. Hardware is the box of electronics and software is the stuff that makes the hardware do things for us.
This broad statement needs to be broken down further.
There are many types of computers. The computer itself is a box of electronics which can do nothing by itself, even if you switch it on. But, for most of us, a computer is something that, when you switch it on, scrolls undecipherable stuff up the monitor and then allows you to do things.
This basic thing that makes the electronics box do something is the so-called operating system. You have heard of the popular ones: the Windows series, the Apple series, the Linux series and so on. There are also operating systems you normally wouldn't hear of - the ones that run the so-called mainframes, supercomputers, microcomputers, and embedded systems. But, without going into detail, I suppose the concept of operating systems is clear. So, you are an end-user of a certain kind if you have a machine that allows you to do things.
While the hardware (the electronic stuff) is physically built and assembled, operating systems (OS) have to be developed by developers. Without entering the domain of OS development, their development also starts with a set of concepts, or requirements. The developers must faithfully realize this blueprint and nothing else. This is where testing of the developed product comes into the picture. More on testing below.
Now, if your computer can only start up and blink, it is hardly of any use. It must do things for you. To get the machine to do different things, you need different items of software, like Office, Internet Explorer and so on. Who creates them? Developers again, again guided by requirements, with the development process faithfully following the intended requirements and nothing else. Testing will ensure that.
You, member of the public, become the end-user of such software.
Software is also developed to run businesses, guide missiles, navigate you from A to B, and do a million other things. Yet it all follows this generalized, simplified path.
Computer software is immensely complex. The design must cover not only what the software should productively do, but also guard the user against the user's own mistakes. It must also deal with problems in connected electronic equipment, with incoming and outgoing information, with synchronization of activities within it… Oh, the list is extremely long!
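To make "guarding the user against the user's own mistakes" concrete, here is a minimal sketch of my own (a hypothetical routine, not taken from any real product) that reads an age from user input. Notice that the productive work is a single line; everything else exists to anticipate the ways the input can go wrong.

```python
def parse_age(raw: str) -> int:
    """Parse a user-supplied age string, rejecting bad input.

    The productive job is `int(text)`; every other line guards
    against a possible user mistake.
    """
    text = raw.strip()
    if not text:
        raise ValueError("age must not be empty")
    try:
        age = int(text)
    except ValueError:
        raise ValueError(f"age must be a whole number, got {raw!r}")
    if age < 0 or age > 150:
        raise ValueError(f"age out of plausible range: {age}")
    return age
```

Multiply this pattern across every input field, every device, and every message a real system handles, and the scale of the design task becomes apparent.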
Now, the developers have to produce software that is robust enough to handle all those events, and that is also reliable, trustworthy, delivers reasonable throughput, and is cheap as well.
You ask, why these ills with computers, then?
The answer is simple: lack of proper testing!
Testing is an enormously tedious task, especially when hundreds or even thousands of developers, spread all over the globe, contribute to a piece of software. Even worse - when the product attempts to embrace, implement and deliver thousands of features.
What is stopping proper testing, then?
Number one is time. Number two is cost.
Why is time a factor? It is because there are competitors in the field. Unless I get my product into market before my competition, I am doomed! So, there is more than just an incentive to get into market, even though the product is not fully tested.
Among the developer community, a claim is doing the rounds that seems to lay down a principle: 100% testing is not possible.
What about cost? Yes, time is cost. The more features one wants to incorporate into a product, the more complex it becomes, and the testing task grows virtually exponentially.
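A toy model of my own makes the exponential growth plain. Suppose a product has n features that can each be switched on or off, and bugs can lurk in the interactions between them; exhaustively testing every combination then takes 2 to the power n test runs.

```python
from itertools import product

def combinations_to_test(n_features: int) -> int:
    # Each feature can be on or off, so exhaustive combination
    # testing needs 2**n runs.
    return 2 ** n_features

# Enumerating the small case shows where the count comes from:
# 3 on/off features yield 8 distinct configurations.
small = list(product([False, True], repeat=3))

# The count explodes as features are added.
for n in (10, 20, 30):
    print(f"{n} features -> {combinations_to_test(n):,} combinations")
```

Ten features already mean over a thousand configurations; thirty mean over a billion. No real product is tested this exhaustively, which is exactly why the "100% testing is not possible" claim has legs.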
So, what is the best strategy? Release the product and let the end-users test it for you. The release can come in several levels: alpha, beta, pre-release and so on. Testing of these releases will mostly be done by specialists and interested parties.
When the product finally hits the market, YOU test it for the producers. YOU pay the price of breakdowns and losses of all sorts.
The innocent public, that is YOU, are there to test the damn things, free of charge, while the creators rake in the profits.
Of course, they release service packs from time to time.
So, as a citizen, do you feel you are taken for granted, treated as a member of a passive market for new technology: hardware, software and services? Are there ways to minimize this exposure and aggravation?