
Back, somewhere before the dawn of time, when I first got involved with computers, computing resources were incredibly expensive. On the big iron machines that I first encountered, the cost of use was so high that users were allotted and billed for CPU seconds. And the cost per second could be quite significant.

Of course, this was also in the day of punch cards and batch jobs, when a couple of days of work spent punching cards was followed by dropping the deck off at the computing center so the operators could run the program at some point in the future. The cutting edge of the technology available to me was a teletype terminal with a paper tape reader and a 110-baud modem, which let me work remotely. But whatever I chose to do was limited by the CPU time allotted to my account.

While not completely obsolete, time-sharing on big iron these days is pretty much limited to academics arguing for cycles on supercomputers to run their pet projects. The business world has moved on to dedicated computing resources that don’t require explicit sharing between different applications and departments; the cost of the hardware and software has dropped so low (compared to the old mainframe days) that this makes business and economic sense.

But now it appears we have come full circle with Sun’s announcement on Sept. 21 of N1 Grid Computing Pay-Per-Use Cycles. Under this model, users will be able to buy computing cycles on other people’s computers on an as-needed basis. The pay-per-use (PPU) concept builds on the N1 Grid Container technology that was announced last March.

Grid Containers are a software partitioning technology within Solaris 10 that is designed to allow server resources to be used more efficiently by creating as many virtual servers within the physical hardware as the hardware can support, up to a maximum of 4,000 containers. Each container looks like its own Solaris server with a dedicated IP address, host name, memory space, file area and root password.
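
To give a sense of what that looks like in practice, here is a rough sketch of how such a container (a “zone,” in the terminology the Solaris 10 previews use) might be set up with the zonecfg and zoneadm administration tools; the zone name, network interface and address below are illustrative only:

  # zonecfg -z payroll
  zonecfg:payroll> create
  zonecfg:payroll> set zonepath=/zones/payroll
  zonecfg:payroll> add net
  zonecfg:payroll:net> set address=192.168.1.50
  zonecfg:payroll:net> set physical=hme0
  zonecfg:payroll:net> end
  zonecfg:payroll> verify
  zonecfg:payroll> commit
  zonecfg:payroll> exit
  # zoneadm -z payroll install
  # zoneadm -z payroll boot
  # zlogin -C payroll

The final zlogin command attaches to the new container’s console, where its own host name, root password and other identity settings are configured on first boot, independently of the global Solaris instance whose hardware it shares.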

With server consolidation a big play at the moment, Grid Containers are Sun’s answer to the consolidation question in the Solaris world. Since Sun sells both hardware and software, it makes sense for the company to offer an easy-to-implement partitioning solution for Solaris to push sales of its big boxes as consolidation servers, much as many Windows server vendors now offer VMware with their big SMP boxes to consolidate multiple Windows servers onto a single piece of hardware.

While the original N1 Grid vision involved virtualizing every service within your data center, the new PPU model expands that vision with the promise of making compute power available on an as-needed basis across the Internet. As Jonathan Schwartz pointed out in his blog, this technology is not for latency-sensitive workloads, since there is no way to overcome the inherent latency of any Internet connection. Instead, he expects it to be used for discrete workloads that can be handed off to a computational cluster, with the results delivered back to the customer when the job completes.

Aside from giving me flashbacks to the days of mainframe batch jobs, the concept has no inherent problems that I can see. What I do see is the need to convince customers that the job can be done for a reasonable price and that the security of the process can be completely trusted. Schwartz uses a conversation with the CIO of an investment bank to make his case, but my first thought was, “How do you convince any business to send sensitive data across the Internet for processing on computers outside its control, where at the very least the administrators of those machines have access to your corporate data?”

Schwartz does seem to think large companies will build their own N1 grids and offer their own PPU plans to the public, but there is a bit of a disconnect between the two ideas. If you are large enough to need your own grid, why would you buy time from someone else? And if your grid has enough excess capacity to rent to other companies, why did you build it that big in the first place? I can see answers to both questions, but I can’t guess which way the market will go.

I do see this concept becoming very popular within large corporate enterprises running Solaris, though not for the ability to buy external cycles. Rather, it offers a methodology for budgeting and charging for IT resources within the corporation. The same PPU model would work well in organizations that are constantly looking for ways to accurately bill IT time to the various departments or business units of a large company. As companies migrate applications to the grid, the virtualized resources would simplify management of the hardware and applications, reduce the cost of running line-of-business applications, and bring the scalability and flexibility of the grid computing model to a broader audience.
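
To make the internal chargeback idea concrete, here is a minimal sketch, in Python, of the sort of metering report such a scheme implies. It assumes a flat rate per CPU-hour, borrowing the $1-per-CPU-per-hour figure from Sun’s public announcement as the internal rate; the department names and usage figures are made up for illustration:

  # Chargeback sketch: bill each department for the CPU-hours its
  # containers consumed during the period, at a flat internal rate.
  # The rate mirrors Sun's announced $1 per CPU per hour; the
  # departments and usage numbers below are hypothetical.
  RATE_PER_CPU_HOUR = 1.00  # dollars

  usage = [                 # (department, CPU-hours consumed)
      ("Risk Analysis", 1200),
      ("Engineering", 430),
      ("Marketing", 85),
  ]

  total = 0.0
  for department, cpu_hours in usage:
      charge = cpu_hours * RATE_PER_CPU_HOUR
      total += charge
      print(f"{department:15s} {cpu_hours:6d} CPU-hours  ${charge:10,.2f}")

  print(f"Total charge: ${total:,.2f}")

In a real deployment the usage figures would come from the grid’s own metering rather than a hard-coded list, but the arithmetic, and the appeal to anyone who has tried to apportion a shared IT budget fairly, really is this simple.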

For now, details of the PPU process are scarce beyond the announcement itself and the price of $1 per CPU per hour, but Sun will open a test drive of the technology on Oct. 4. You can register for the test drive, which will give you access to your own PPU container, at www.sun.com/tech-center.

Check out eWEEK.com’s Utility Computing Center for the latest utility computing news, reviews and analysis.
