What’s a Grid?
The computer on your desktop, the one you’re using to read this email, only has so much processing power, only so much speed. Your vice-president, who’s gazing out the window and thinking about his golf game, has computing power to spare. If your company had a grid set up, your computer could draw unused processing power from his computer to speed up your number-crunching software.
Grid computing has been a particular boon to scientific research, allowing vast linkups of computers to make calculations in minutes that would take a single machine months.
Google has a setup that’s similar to grid computing. All 10,000 of its Linux servers, the ones doing the search work, are linked and share processing power. The difference is that they all think of themselves as one machine.
In a grid, each machine has its own identity, so you can keep your email from the curious eyes of Bob the intern, while still sharing your unused processing power with him when he’s trying to play some online pool.
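If you’d rather see the idea in code than in analogy, here’s a minimal sketch, in Python, of how an internal grid might farm work out to whoever has spare cycles. The machine names, the idle threshold, and the Worker class are illustrative assumptions of mine, not any real grid product’s API.

```python
from dataclasses import dataclass


@dataclass
class Worker:
    """One machine on the internal grid; it keeps its own identity and files."""
    name: str
    cpu_load: float  # fraction of the CPU currently busy, 0.0 to 1.0

    def is_idle(self, threshold: float = 0.25) -> bool:
        # A machine only volunteers its cycles when it's mostly sitting idle.
        return self.cpu_load < threshold

    def run(self, task: str) -> str:
        return f"{task} ran on {self.name}"


def dispatch(tasks: list[str], workers: list[Worker]) -> list[str]:
    """Send each task to the least-loaded idle machine, or queue it locally."""
    results = []
    for task in tasks:
        idle = [w for w in workers if w.is_idle()]
        if idle:
            target = min(idle, key=lambda w: w.cpu_load)
            results.append(target.run(task))
            target.cpu_load += 0.2  # crude accounting for the borrowed cycles
        else:
            results.append(f"{task} queued locally: no spare cycles on the grid")
    return results


if __name__ == "__main__":
    office = [
        Worker("your-desktop", cpu_load=0.90),    # busy number-crunching
        Worker("vp-desktop", cpu_load=0.05),      # thinking about golf
        Worker("intern-desktop", cpu_load=0.40),  # playing online pool
    ]
    for line in dispatch(["report-batch-1", "report-batch-2", "report-batch-3"], office):
        print(line)
```

In a real deployment the load numbers would come from the machines themselves and the tasks would be real jobs, but the division of labor is the point: every box keeps its own identity and only lends out what it isn’t using.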
The World Wide Grid
What we’ve discussed so far is an internal grid, a means of sharing processing power within a closed network of computers. What has the big fish biting is the idea of a worldwide grid that would allow your network in Nebraska to draw processing power from offices in China.
The worldwide grid would connect networks around the world and enable them to borrow, trade, rent, or buy processing power from anywhere. Imagine – leave your company’s computers on at night and lend their processing power to a company on the other side of the world that’s just opening up shop as you’re shutting down.
The next morning you’d arrive at work with a surplus of processing power from your partners in India, without the enormous investment in new hardware.
The worldwide grid would allow the trading and brokering of processing speed as a commodity.
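Nobody is running that exchange yet, but as a thought experiment, here’s a toy Python sketch of what brokering CPU-hours as a commodity could look like. The Offer and Bid structures, the prices, and the greedy matching rule are all hypothetical, invented here purely to illustrate the idea.

```python
from dataclasses import dataclass


@dataclass
class Offer:
    """Spare capacity a network puts up for sale overnight."""
    seller: str
    cpu_hours: float
    price_per_hour: float


@dataclass
class Bid:
    """Capacity a network on the other side of the world wants to buy."""
    buyer: str
    cpu_hours: float
    max_price: float


def match(bids: list[Bid], offers: list[Offer]) -> list[str]:
    """Greedy matching: fill each bid with the cheapest acceptable offers first."""
    trades = []
    offers = sorted(offers, key=lambda o: o.price_per_hour)
    for bid in bids:
        needed = bid.cpu_hours
        for offer in offers:
            if needed <= 0:
                break
            if offer.price_per_hour > bid.max_price or offer.cpu_hours <= 0:
                continue
            bought = min(needed, offer.cpu_hours)
            offer.cpu_hours -= bought
            needed -= bought
            trades.append(
                f"{bid.buyer} rents {bought:.0f} CPU-hours from {offer.seller} "
                f"at ${offer.price_per_hour:.2f}/hr"
            )
    return trades


if __name__ == "__main__":
    offers = [Offer("omaha-office", 500, 0.03), Offer("mumbai-office", 800, 0.02)]
    bids = [Bid("shanghai-startup", 600, 0.05)]
    for trade in match(bids, offers):
        print(trade)
```

The shape of the transaction is the interesting part: the Nebraska office posts its overnight surplus, the start-up in China posts what it needs, and the broker pairs them off, cheapest cycles first.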
The Nitty-Griddy
We’ve looked at the bright glowing future of grid computing, and it looks great on paper. The reality of grid computing is much less advanced. Developmentally, we’ve got a car and we’re aiming for the moon. We know the trip is possible, but we just don’t have the technology to make the journey yet.
Security is still a major issue. If you wanted your company to trade processing power off to Sri Lanka, you’d have to leave your network open to all manner of hackers, crackers and digi-whackers. Right now it’s like leaving the door to your factory open at night to let people use your machines. Some people might want to take a peek into your office.
University engineering and science research groups have been the primary grid users to date, linking supercomputers to make quick tallies of the number of dust particles in the universe. These folks aren’t particularly concerned about security, though, at least not the way IBM is concerned about protecting its customer lists.
Security’s not the only issue, either. There’s also the question of working across platforms, and of managing the logistics of the give and take of global processing power.
The Bottom Line
We’re still a long way off from the worldwide grid, so don’t sell off your servers just yet. The internal, company-wide grid is much closer though. Nathaniel Palmer, chief analyst with the Delphi Group, predicted that by next year his company will have enterprises sharing their processing power across a local grid.
The local grid will allow your company to utilize each machine to the max, which will in turn allow you to get by with fewer machines. Fewer machines mean less expense.
Will grid computing materialize? My magic eight ball says “wait and see.” The potential to cut costs is real, and saving money while boosting productivity is a promising sales pitch. However, don’t look for mass migration until this latest solution has lost its buzz and faced the sober world of making money.
Garrett French is the editor of murdok’s eBusiness channel. You can talk to him directly at WebProWorld, the eBusiness Community Forum.