Distributed Computing, the Next Money Saving Step?

A password-cracking program called John the Ripper has been modified, as a test, to use the CPUShare pay-per-MIP parallel processing network. This means that anyone holding an MD5-hashed password can now look to rent enough machine power to take a serious shot at recovering the original password.
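John the Ripper's actual CPUShare integration isn't shown here, but the core of the job being parallelised is easy to sketch: hash each candidate password with MD5 and compare it against the stolen hash. Here is a minimal Python illustration; the wordlist and target hash are made up, and a real run would split millions of candidates across many rented machines, each testing its own slice.

```python
import hashlib

def md5_hex(password: str) -> str:
    """Return the MD5 digest of a password as a hex string."""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

def crack(target_hash: str, candidates):
    """Try each candidate and return the one whose MD5 matches the target."""
    for word in candidates:
        if md5_hex(word) == target_hash:
            return word
    return None

if __name__ == "__main__":
    # Hypothetical wordlist purely for illustration.
    wordlist = ["letmein", "password", "hunter2", "secret"]
    target = md5_hex("hunter2")   # stand-in for a stolen MD5 hash
    print(crack(target, wordlist))  # -> "hunter2"
```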

This is a step along the current path of technology, which is leading us away from single expensive solutions and towards more organic systems that can adapt to changes in usage as necessary. We've already seen file sharing start to move away from a single source serving hundreds or thousands of users over one connection, towards software that can use the computers of people downloading a file as sources for other users. This move is a good example of how software can draw on more computing power as more users make use of it (for example, if the computer of every person filing their tax return online could be used to cross-check other people's tax calculations, you suddenly wouldn't need a big server room full of machines doing nothing but checking calculations).

The next step for file delivery was content delivery networks, operated by companies such as Akamai, who took a customer's data, distributed it to their various data centers around the world, and, when users wanted the data, delivered it from whichever data center was most sensible in terms of speed and availability.

The latest step has been peer-to-peer (or P2P) file sharing, where every computer that downloads the data can also upload it to someone else. So if I download a program from IBM, another user can then download it from me, reducing the load on IBM's servers.
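Real P2P systems such as BitTorrent split files into chunks and coordinate peers through trackers or a distributed hash table, but the basic shape of the idea, that a downloader immediately becomes an uploader, can be sketched in a few lines of Python (the source URL below is just a placeholder):

```python
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# 1. Download the file from the original server (placeholder URL).
SOURCE_URL = "http://example.com/program.zip"
urllib.request.urlretrieve(SOURCE_URL, "program.zip")

# 2. Immediately start serving the current directory, so other peers
#    can fetch program.zip from this machine instead of the origin.
server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
print("Serving program.zip on port 8000 ...")
server.serve_forever()
```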

With computing power it has taken a bit more time to head down the same road, but we're getting there. We started off with a “one computer per job” mentality, which was not cost-efficient for jobs that left most of the available computing power idle most of the time. The next step was “one computer, many jobs”, which allowed various jobs to share the available computing power in an attempt to ensure none of it was wasted. This had one drawback: if one job caused the computer to become unresponsive, none of the other jobs would get done.
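As a rough illustration of the “one computer, many jobs” model, here is a small Python sketch in which several stand-in jobs share one machine's capacity; the job names and timings are invented, and the closing comment notes the drawback described above.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def job(name: str, seconds: int) -> str:
    """Stand-in for a batch job (payroll run, report generation, ...)."""
    time.sleep(seconds)   # pretend to do useful work
    return f"{name} finished"

# Several jobs share one machine's capacity instead of each job
# getting its own mostly idle computer.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(job, f"job-{i}", 1) for i in range(4)]
    for f in futures:
        print(f.result())

# The drawback: all of these jobs live on one machine, so a single
# runaway job that hangs or crashes the host stops every other job too.
```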

The next step companies have made is a move to “one computer made to look like many computers doing many jobs”. This is commonly called virtualisation, and it has the great benefit that if one of your virtual computers stops responding, the others can continue working. It gives us all the benefits of the “one computer, many jobs” approach without the possibility of one job stopping everything, but it isn't a silver bullet. The main problem with this approach is that you must have enough computing power to cope with every job being busy at once, so that when the computer has to work on all of its jobs simultaneously it can do so without suddenly slowing down.
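A quick back-of-the-envelope sketch shows why this sizing problem bites; the figures below are entirely made up and only illustrate the gap between average and worst-case demand.

```python
# Hypothetical figures: each virtual machine idles at 0.5 cores
# but can spike to 4 cores when its job gets busy.
vms = 10
idle_cores_per_vm = 0.5
peak_cores_per_vm = 4

average_demand = vms * idle_cores_per_vm      # 5 cores on a typical day
worst_case_demand = vms * peak_cores_per_vm   # 40 cores if all spike at once

print(f"Average demand:    {average_demand} cores")
print(f"Worst-case demand: {worst_case_demand} cores")
# To guarantee no slowdown you have to buy for the worst case,
# even though most of that hardware sits idle most of the time.
```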

So where to next? Well, the path has already been drawn out by people like those at CPUShare. In the same way that peer-to-peer file sharing uses the computers of end users as and when it can, distributed computing works on the same principle. You still need enough computing power to do all of the jobs necessary, but with networks like CPUShare you can rent some computing time to cover those busy periods. There's no need for someone to ship a physical computer to you. There's no need to spend money on techies to install, maintain, and remove a computer when you need a bit more power. All you do is go to the CPUShare site and buy in some more power.
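CPUShare's actual interface isn't shown here, but the shape of the idea can be sketched: handle what you can on your own hardware and push the overflow to rented machines. In the Python sketch below, run_remotely and LOCAL_CAPACITY are hypothetical stand-ins, not part of any real CPUShare API.

```python
from concurrent.futures import ThreadPoolExecutor

LOCAL_CAPACITY = 4  # work units our own hardware can handle at once

def run_locally(unit):
    return f"{unit} done locally"

def run_remotely(unit):
    # In reality this would ship the work unit to a rented machine
    # and collect the result; here it just pretends to.
    return f"{unit} done on rented CPU"

def process(work_units):
    """Run what fits locally, send the overflow to rented capacity."""
    local = work_units[:LOCAL_CAPACITY]
    overflow = work_units[LOCAL_CAPACITY:]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_locally, local))
        results += list(pool.map(run_remotely, overflow))
    return results

print(process([f"unit-{i}" for i in range(10)]))
```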

All that's needed now is for programmers to start writing programs this way, and we should find that solving a computing slowdown becomes as simple as buying some downloadable software.
