Sun Microsystems on Oct. 17 will unveil its Project Blackbox, an initiative designed to address such issues as power, cooling and infrastructure deployment for companies in such areas as Web 2.0 and high-performance computing.
The plan calls for delivering all the technology traditionally found in a 10,000-square-foot data center, from servers to storage to software, pre-integrated and ready to roll inside a standard shipping container.
Basically, a customer orders what they want, Sun builds it inside a container and within a few weeks the container is delivered to the customer’s site. The user simply plugs in the power, networking and chilled water and it’s ready to go.
Greg Papadopoulos, Sun’s executive vice president and chief technology officer, spoke with eWEEK Senior Editor Jeffrey Burt about the concept.
What was the inspiration behind the Blackbox Project?
Looking at the fact that everybody who is using computing today is actually sort of custom-building larger systems. And most of the computer business is giving people the piece parts. It’s like we gave people power generators and said, “Go build a power plant.” And we thought, “Maybe we should go look at what engineering could do at this level.”
So we sort of looked at the holistic problem, that computing is not just the server or storage or networking gear, but how those fit together. And then how they are powered, and how they’re cooled and what’s the facility for them. We wanted to engineer that.
So this is, how do you go after very high-scaled deployments that need to be exceptionally efficient, low-cost, ecologically responsible, and then basically challenging the assumption that you’ve had for so many years in computing that people and machines live together.
It goes back to the operator who used to hang tapes and change chad out. So we designed spaces that could handle both people and machines, and that was a happy thing to do for a long time until things like power and cooling and a whole bunch of other requirements became so excessive that it’s actually massively inefficient and time consuming to go design data centers now.
So was the drive behind this because of power and cooling, was it because of space constraints, was it because of the need for data centers to be more flexible and more dynamic?
Yes, yes, yes. All of that. The reality of how the idea got started was I was visiting Danny [Hillis, co-chairman and CTO at Applied Minds] in Burbank [Calif.], and we used to work together designing supercomputers, and we were talking about the trend toward smaller, faster servers and what’s the smallest, densest thing you could make and in the typical contrarian style, with Danny it was, well, what’s the biggest one you can make? And then you say, “Well, you know, if you make it any bigger than a shipping container, you can’t move it around easily. If you make it exactly a shipping container, then you get this whole, interesting worldwide infrastructure. OK, so then let’s use that as a design point. That’s going to be the size.”
Then we went through the capture of what really goes on and what’s important as people are building out grids and things. We’ve been building out our own grid, so we were taking a lot of that learning and incorporating it into the design.
You’ve got a number of patents pending on this, including two of them for the cooling technology. Can you talk about the cooling system in this and how it works?
It’s one of those things where we looked at the cooling for a long time. One thing you could think of is, well, let’s put kind of a raised floor there and do that kind of thing, up-down, sideways airflow. And then this very simple design came out that says, “Oh, let’s put all of the racks front to back in a circle inside the container, a ring around the outside, and then just circulate the air through each server and just keep circulating it.”
Of course, you do that for more than a few seconds, and you’ll have a blast furnace. So you interpose heat exchangers in between each rack: the exit of [air from] one rack is cooled down and is directly the cool air for the next rack. There are no other things getting in the way. In fact, it forms this kind of perfect cyclonic flow inside the box, and it’s very quiet, it’s very efficient, and it keeps all the air contained within the unit, so a lot of things like fire suppression get a lot easier. It lets you have it in atmospheres that, outside of the container, are not necessarily clean. You don’t have to worry about that. It’s just for uses inside. It’s a really elegant breakthrough.
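To make the closed-loop idea concrete, here is a minimal back-of-the-envelope Python sketch of air circulating around a ring of racks with a heat exchanger interposed after each one. The rack power, airflow, chilled-water temperature and exchanger effectiveness are illustrative assumptions, not Sun’s published figures; the point is only to show how the interposed exchangers keep the recirculating loop from becoming the blast furnace Papadopoulos describes.

# Toy steady-state model of the closed-loop "ring" cooling described above.
# All numbers (rack power, airflow, chilled-water temperature, heat-exchanger
# effectiveness) are illustrative assumptions, not Sun's specifications.

RACKS = 8              # racks arranged in a ring inside the container
RACK_POWER_KW = 20.0   # assumed heat load per rack, kW
AIRFLOW_KG_S = 3.0     # assumed air mass flow through each rack, kg/s
CP_AIR = 1.005         # specific heat of air, kJ/(kg*K)
WATER_IN_C = 12.0      # assumed chilled-water supply temperature, deg C
EFFECTIVENESS = 0.7    # assumed heat-exchanger effectiveness (0..1)

def steady_state_inlet_temps(sweeps=200):
    """Circulate air around the ring until rack inlet temperatures settle."""
    inlet = 25.0  # starting guess for air entering the first rack
    temps = []
    for _ in range(sweeps):
        temps = []
        for _ in range(RACKS):
            temps.append(inlet)
            # Air picks up the rack's heat: dT = P / (m_dot * cp)
            exhaust = inlet + RACK_POWER_KW / (AIRFLOW_KG_S * CP_AIR)
            # The interposed heat exchanger pulls the exhaust back toward
            # the chilled-water temperature before it feeds the next rack.
            inlet = exhaust - EFFECTIVENESS * (exhaust - WATER_IN_C)
    return temps

if __name__ == "__main__":
    for i, t in enumerate(steady_state_inlet_temps(), start=1):
        print(f"rack {i}: inlet air ~{t:.1f} C")

In this simplified model the loop settles to a uniform inlet temperature of roughly 15 degrees C for every rack, which is the intuition behind cooling each rack with the (re-chilled) exhaust of the one before it.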
So when you look at this, who do you envision being the primary users?
There are two classes of users. The sort of Google-dot-next, people who are on this curve of very high-end, growing infrastructure requirements to build things like Web services, software as a service, those kinds of things.
That’s characterized by scale that’s important, efficiency that’s incredibly important (what does this cost to buy, operate, etc.), as well as the reaction time, the ability to go provision these things. Google’s spending, what, two years to go put in a data center? That just seems archaic for technology that, as [Sun Chairman] Scott [McNealy] says, has the shelf life of a banana.
At that level, this is designed for really high efficiency. The other part of this is that there is a strong responsibility with this, which is, we will take it back. So at the end, when you’re done with it and it’s no longer the latest technology that you want to power that part of the network, we’ll come and pick it up and responsibly recycle it.
The other class is people who have really intense mobile requirements, who just need to be able to reactively site computing somewhere. That can be governments of various flavors, disaster recovery, certainly Web 2.0 kinds of companies that want to move computing more favorably toward where the power is, where the networking [is important]: let’s go plug our sites in Europe and Asia and Africa and the like.
Do you envision this as a temporary, stop-gap need for these companies, or is this a longer-term thing for them?
No, I think this is the way that computing gets done. It’s engineered infrastructure. It’s like, today what we do is we build all this computing stuff and there’s fierce competition among the components [makers].
We talk about industry standards and commoditization and high volume, all this stuff, and yet, at the very end of the game, someone does a fully architected custom view of a raised-floor data center, and all of them are different and you say, “Well, what’s that all about?”
It’s like you’re driving around, and every place you need to park your car has all these great scale economies [so] you have to build a custom garage. Well, we ought to engineer that, too, and get that into mass manufacturing. This is sort of another angle for you, as the mass manufacturing of the data center.
When you look at this, there certainly are advantages to the end user. What are the advantages to Sun operating in this way?
We’re fundamentally a systems company. Our model today is we sell systems that comprise hardware (servers and storage) and the software, Solaris, that goes on top of it. That’s our ideal sale. We’ll sell the components independently, we’ll give away the components independently, so that’s all the new modern business models here.
This is the system, the next era in system design and system engineering for us. It’s what we do. Think of it this way: we have been building computers that attach to networks, and now we’re building computers from networks. The other thing that’s in the middle of this container is a network. These hundreds of servers [and] storage units that are in one of these things typically are interconnected by a high-speed network that’s inside that container.
It sounds like, certainly on a much larger scale, something similar to what HP is doing with its Lights Out Project, the idea being to put the hardware together with the power and cooling and networking technology into a self-contained unit. Do you see this as a trend in the industry itself, outside of simply what Sun is doing?
I think there is a huge pent-up demand for somebody to figure this out. I think that step where we’ve done a pretty radical, out-of-the-box [move], if you will, is [asking], “So, what did you need the data center for in the first place?”
All these other designs [for cooling] are basically, yeah, we’re going to give a new wrap, or we’re going to bring chilled water into this rack, or gas exchange, or we’ll design a set of racks that do that. It’s always in the context of, install it in your machine room. This is, there is no machine room. There’s a container port.
Unless you talk about how a hundred of these things fit together, what your infrastructure is to support that, and how you built them and designed them and commissioned them in places different from where you deploy them.
Even lights out, as HP talks about it in the data center, people still have to bring it there and put it together. Here, again, because of the magic of this, you can ship these containers anywhere really cheaply around the world, you can put them together wherever you want them put together, and then ship them. You have them on the spot, hooked up and running, and that’s a very different cycle around not only speed of deployment, but also where you need the skills.
Are you able to do things like copy it exactly? A lot of these folks have patterns of pieces of their data center service or whatever, and they’d like to get that pattern exactly deployed in India, and they don’t want anybody messing with it.
How do you see what you’re doing with this project impacting what you offer for a more traditional data center environment?
I think for the time being, we’re going to be focused on innovation and driving this design point really hard. We have things for, OK, I have a data center and I want more traditional access to it. We’re doing a lot of work in that area, too. That’s not what Project Blackbox is about. But we do have a lot of what Andy Bechtolsheim is putting a lot of energy into.
Those markets don’t go away, those are important markets and any big customer of ours is a portfolio in any case. “Here’s my core IT stuff, and, no, it doesn’t make any sense for me to put my Siebel implementation into this thing.” On the other hand, if I go over to Salesforce.com, they’ll go, “Yeah, sure.”
Going back to that core enterprise, well that core enterprise might be a company in the package transportation business, and what they really need is high-performance computing because it solves the traveling salesman or cargo loading problem. Boy, that’s going to consume a lot of power and computing power, and how would they go about deploying that? And now here’s an opportunity to do that in a much more efficient style. These things will coexist.
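As a rough illustration of why a routing problem like the traveling salesman example above becomes a serious compute load, here is a tiny Python sketch: a greedy nearest-neighbor tour over a dozen made-up stops. Real carriers face orders of magnitude more stops, and exact solutions grow factorially with the number of stops, which is what pushes this kind of planning toward high-performance computing.

# Illustrative sketch only: a nearest-neighbor heuristic for a tiny traveling
# salesman instance, using made-up stop coordinates. This is a toy, not a
# production routing engine.
import math
import random

random.seed(0)
stops = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_tour(points):
    """Greedy tour: always drive to the closest unvisited stop, then return."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(0)  # return to the depot
    return tour

tour = nearest_neighbor_tour(stops)
length = sum(dist(stops[a], stops[b]) for a, b in zip(tour, tour[1:]))
print(f"greedy tour length: {length:.1f}")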
I understand that. But if I’m sitting there with a traditional data center and I’ve got some power and cooling problems, and all of a sudden I learn that Sun has this new cooling design for this Project Blackbox, I may wonder about the chances of seeing that design being available to my data center.
We are doing similar things in the data center. It is an important problem to solve. People kick around numbers like 70 percent of IT shops are out of data center space or power or cooling capacity. Certainly in talking with customers you feel it. I feel it when I talk to so many customers who say, “I’m out of space. Help me.”
Here’s a real quick way for them to incrementally solve that problem. So it may be that we should be focusing on how do we get the right requirements and engineering into it so that we’re capturing more and more of the customer requirements and just don’t look back at the data center from the point of view of this design point.
So as we said, there’s the really efficient high-scale scale-out stuff, there are people who have extreme mobility requirements, like government and those applications, and then this third area that I think we’re both talking about, which is core IT stuff where the data center just isn’t working right now. What role can this play?
What role will virtualization play in this project?
Virtualization, in the way that we think about it in a context like Blackbox is, it’s really an essential lubricant. It’s that lubricating plate between the physical hardware assets, whether they’re racked or bladed or whatever they are, from the logical demands that are on them in terms of operating system and the software stacks on top of them.
It’s a lubricating plate because you can move things around. You can take an image and put it on its own system, maybe put another one next to it that isn’t being efficiently utilized, maybe move it somewhere else if you need more horsepower, maybe put a thousand of them together in a grid so you can get something else accomplished. So it’s really that abstraction layer, or lubricating plate, between operating systems and hardware, so that we get out of the idea that when you deploy an application, it’s forever bound to the kit on which you deploy it.
That’s really an essential piece in thinking about how you get maximum utility out of a Blackbox design, because you really don’t want to care at the detailed level [that] my stack is running on exactly that [server] that is sitting on rack 3, position 2. That’s not the idea. It’s a sea of computing stuff.
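As a rough sketch of that “sea of computing” idea, the toy Python below places application images on whichever host has spare capacity and moves them later without the caller ever naming a rack slot. The Host class, the host names and the image names are all invented for illustration; this is not Sun’s provisioning software, just the abstraction-layer pattern Papadopoulos describes.

# Toy sketch of the "lubricating plate" idea: images land on whatever host has
# free capacity and can be moved later, instead of being bound forever to
# "rack 3, position 2". Everything here is hypothetical and for illustration.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free: int                       # free CPU cores
    images: list = field(default_factory=list)

def place(image, cores, pool):
    """Put an image on the host with the most free capacity."""
    host = max(pool, key=lambda h: h.cpu_free)
    if host.cpu_free < cores:
        raise RuntimeError("no host has enough free capacity")
    host.cpu_free -= cores
    host.images.append((image, cores))
    return host

def migrate(image, src, dst):
    """Move an image between hosts; callers never care which rack slot it is."""
    entry = next(e for e in src.images if e[0] == image)
    src.images.remove(entry)
    src.cpu_free += entry[1]
    dst.images.append(entry)
    dst.cpu_free -= entry[1]

pool = [Host("node-a", 16), Host("node-b", 16), Host("node-c", 16)]
first = place("web-frontend", 4, pool)
place("batch-job", 8, pool)
migrate("web-frontend", first, pool[2])   # need more headroom? just move it
for h in pool:
    print(h.name, h.cpu_free, [name for name, _ in h.images])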