Are we still moving forward?

With computers getting faster and software getting ever more complex, it would be easy to presume that we are moving forward, but here are a few things that might make you think differently.

The upgrade loop

Software companies make their money by selling software, and then by selling it again to existing users, adding some sort of new feature to entice them to upgrade. But what happens when software reaches its peak? We are still stuck in an upgrade loop, with extraneous features added to justify each new version, making the software bulky, memory hungry and slower.

Old vs New

We tend to turn a blind eye to this, but let’s compare a few historical figures to highlight the price we are paying. Take a typical machine from around 1998: a Pentium running at 90 MHz, with 64 MB of RAM and a 10 GB hard drive. Compare that to a modern machine, which is likely to have a CPU roughly 120 times faster, 12 GB of RAM and a couple of terabytes of hard drive space. Overall it has at least 120 times more of everything, so in theory it should do everything 120 times faster. But it doesn’t.

Today’s reality

For example, opening an industry-standard vector drawing program (mentioning no names) takes around 45 seconds. That’s a long time to wait for an initial blank page to appear (and what is it actually doing during that time?). It didn’t take that long back in 1998, and it’s certainly not opening 120 times faster than it used to. Compare its features with those of the 1998 version and it’s likely you’ll be using the same core features 95% of the time; the newer ones rarely come into play in an everyday workflow.

Back in the day, you had to get it right first time

If you look further into the past, to say the mid-80s, computers worked a little differently. Instead of loading an operating system from a hard drive, they held it on ROM chips: you turned them on and they were instantly ready to use. Sure, you had to load software from a tape, which took forever, but in principle the machine itself was ready as soon as you flicked the power on. What a great way to store an operating system: virus proof, not easily corrupted and instantaneous. Why did we drop that? Well, it is a costly way to provide an OS, and it’s fairly inflexible in that the ROM can’t be updated unless you buy another one, so adding the latest driver becomes a problem.

But here lies another issue: software companies don’t seem to test their software as well as they used to. The ease of updating code after its initial sale has relieved the pressure to test it quite so thoroughly. Add to that the huge amount of code needed to support all those features you’re unlikely to need, and you have an environment that encourages bugs. I’m not saying older software was totally bug free, but putting software on a ROM meant it had to be tested thoroughly before release, or the company would face the shame of shipping those errors for a long time to come.

More rushing = less progress

It’s not always the software itself that is at fault: sometimes the operating system changes, or another piece of software interferes. With everything changing all the time, it’s no wonder bugs keep appearing. Maybe everyone should stop adding new features and tweaking the look of their interfaces, and instead get their code 100% bug free. That would be a feature in itself!

Is the future modular?

Maybe there’s a way to make software more modular, so you can choose which features you want and have it load only what that functionality needs. Music software has been doing this for a while with things like VST plugins, and it works extremely well: load a minimal base package to start with and add only what is needed for the job in hand, with nothing loaded that isn’t of use. It would be nice to see a similar approach in other areas of software. A rough sketch of the idea follows below.
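To make the idea concrete, here is a small sketch in Python of how a plugin-style host might defer loading. It is only an illustration, not any real application’s code, and the module names (plugins.export_pdf and so on) are invented; the point is simply that nothing is loaded until it is first asked for.

import importlib

class PluginHost:
    """A tiny core that pulls in feature modules only on first use."""

    def __init__(self):
        self._loaded = {}  # cache: feature name -> imported module

    def feature(self, name):
        # Nothing is imported at start-up; the cost of a feature is paid
        # only when the user actually reaches for it.
        if name not in self._loaded:
            self._loaded[name] = importlib.import_module(f"plugins.{name}")
        return self._loaded[name]

# The base program starts with no plugins in memory, so it opens quickly.
host = PluginHost()

# Only when the user opens the export dialog does that code get loaded:
# pdf = host.feature("export_pdf")
# pdf.export(document, "drawing.pdf")

A VST host works along much the same lines: the host itself stays relatively small, and each instrument or effect lives in its own file that is only pulled in when you add it to a project.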

As with most things, simplicity should always be the rule.

Author

David Kingston

Digital Director