Feeling Testy?
By Wayne Rash
If XP SP2 has shown us anything, it's that one way or another, it's vital to test your solutions before you ship them.
When I was interviewing resellers and integrators for a recent story about Windows XP SP2, I was struck by the unanimity of one piece of advice. It revolved around one word. "Test," they said. And they were right. It's advice I've come to appreciate, because testing has saved me more than once.
It was just a few weeks ago, for example, when I needed to integrate tape backup into an HP ML-310 server. Not a big task, and with HP's well-thought-out design, it should have been something that took only a few minutes.
What I didn't know until I started testing was that the tape drive provided by HP wouldn't work with Novell NetWare, which was running on the server. Apparently even HP didn't know this. But it turned out to be the case. Fortunately, I'd conducted testing on that solution right away and was able to return the drive for credit.
Earlier this year I was looking at a series of dual-processor workstations to see which would work better in an environment with very heavy multitasking demands.
One of the targets was a dual-processor machine from IBM that used AMD Opteron processors. Two others were an MPC and an HP that used Intel Xeon processors. Most people would simply follow the conventional wisdom and assume that the Opterons would be faster, but in the particular environment for which we were testing, that wasn't the case. But I'd never have known had it not been for testing.
And, of course, everywhere you look there are recommendations that you test SP2 before using it with your custom applications. But in fact, you should test it, and all other software for that matter, on every type of platform on which you wish to use it.
In one case I found that two nearly identical HP workstations had different results. With one, SP2 worked fine; with the other, it didn't. The only differences were a slight variation in processor speed and the fact that the machine that didn't work used a SCSI disk controller while the other used an ATA controller.
Custom applications have the potential for even more problems. Over the years I've seen apps that would mysteriously fail, only to find out later that some overlooked piece of code depended on something that wasn't available in just the same way on the target computer. It could be anything. In one case where this happened, the failure was due to the absence of a floppy drive, even though the app didn't use it.
The real bottom line here is that you can't assume anything. Regardless of what it is you're trying to do when you integrate a system, or even just sell a system for a specific use, you must test. That, of course, implies a couple of other things.
First, if you want to keep from making yourself crazy, simplify your environment as much as possible. That means you give everyone the same desktop computer and the same peripherals, at least to the extent possible. You keep everyone on the same operating system. You use the same applications. And this means really the same. Not different versions of XP or Word or whatever.
Second, for every variation from your standard, you need to test. This means that if the developers have different computers from the office staff, you need to test both. If the art department uses Macs, you need to test those, too, and make sure they will interoperate with your PCs or your server environment.
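One practical way to make sure no variation slips through is to write the test matrix down explicitly. A minimal sketch of that idea follows; the machine classes, operating systems, and applications listed are hypothetical placeholders standing in for whatever inventory your shop actually runs, not recommendations from this column.

```python
from itertools import product

# Hypothetical inventory: each entry stands in for a real machine class,
# OS build, or application in your environment.
machine_classes = ["office-desktop", "developer-workstation", "art-dept-mac"]
operating_systems = {
    "office-desktop": ["Windows XP SP2"],
    "developer-workstation": ["Windows XP SP2"],
    "art-dept-mac": ["Mac OS X"],
}
applications = ["Word 2003", "custom-line-of-business-app"]

def test_matrix():
    """Yield one (machine, os, app) combination per required test pass."""
    for machine in machine_classes:
        for os_name, app in product(operating_systems[machine], applications):
            yield (machine, os_name, app)

# Walking the matrix guarantees every variation gets its own test pass.
for combo in test_matrix():
    print(combo)
```

Even a list this simple forces the question the column raises: did the art department's Macs, or the developers' differently configured workstations, actually get tested, or were they assumed to behave like everything else?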
Finally, you must test to make sure everything will work with your networking environment and your storage systems. If nothing else encourages you to keep the environment under control, that will do it. Fail to do that, and you'll feel a lot worse than just testy.