BladeRunner Is Linux Cluster-Ready

By Francis Chu  |  Posted 2005-01-10

Penguin Computing's first blade system offers Linux clustering out of the box.

Penguin Computing Inc.'s first blade system, the BladeRunner, is a cluster-ready Linux solution with a reasonable price tag. eWEEK Labs' tests show that the BladeRunner is a good choice for server consolidation projects or for high-density Linux computing clusters in midsize companies.




At first glance, the BladeRunner is a typical Intel Corp.-based midrange blade system targeted at departmental environments. But Penguin also sells the BladeRunner preinstalled with the Scyld Beowulf cluster operating system from Scyld Software, a Penguin Computing company, providing enterprises with a cluster-ready blade system out of the box.

BladeRunner server blades are equipped with Low Voltage Intel Xeon processors, available in 2GHz and 2.4GHz versions. The blades have a 533MHz FSB (front-side bus) and two DIMM (dual in-line memory module) slots that can hold up to 4GB of memory. Dual on-board Gigabit Ethernet chips provide standard Wake on LAN and Preboot Execution Environment features.

The $23,400 BladeRunner we tested is an entry-level cluster in a box. The server blades in the system were configured as one master node and a cluster of five compute nodes. The master-node blade had dual 2.4GHz LV Xeon processors, 2GB of memory and a 60GB hard drive. The compute-node blades had the same chips and amount of memory but ran diskless (without any internal hard drive).

The BladeRunner's compact 4U (7-inch) chassis makes it a good fit for high-density computing. Supporting up to 12 dual-Xeon server blades per chassis, the BladeRunner can populate an industry-standard rack with as many as 240 processors in 120 Linux cluster nodes.
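The density figures above work out as simple arithmetic. A quick sketch (assuming a standard 42U rack, which is not stated in the review but is the common industry size; the chassis height and blade counts come from the text):

```python
# Rack-density arithmetic for the BladeRunner, as described in the review.
# The 42U rack height is an assumption; chassis and blade counts are from the text.
rack_units = 42            # assumed industry-standard rack height
chassis_units = 4          # BladeRunner chassis is 4U (7 inches)
blades_per_chassis = 12    # up to 12 dual-Xeon server blades per chassis
cpus_per_blade = 2         # each blade holds two LV Xeon processors

chassis_per_rack = rack_units // chassis_units           # 10 chassis fit
nodes_per_rack = chassis_per_rack * blades_per_chassis   # cluster nodes
cpus_per_rack = nodes_per_rack * cpus_per_blade          # total processors

print(chassis_per_rack, nodes_per_rack, cpus_per_rack)  # 10 120 240
```

Ten 4U chassis fill 40U of the rack, which yields exactly the 120 nodes and 240 processors the review cites.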

The BladeRunner competes with other Tier 1 blades such as Dell Inc.'s PowerEdge 1655MC, Hewlett-Packard Co.'s ProLiant BL20p and Sun Microsystems Inc.'s Sun Fire B1600.

The BladeRunner falls short of its rivals when it comes to hardware: It doesn't support 64-bit Xeon processors, and it uses slower memory and a slower front-side bus.

The BladeRunner falls between its competitors in terms of maximum blade density: Both the Sun Fire B1600 and the ProLiant BL20p support as many as 16 blades per chassis (although the ProLiant chassis is bigger), while the PowerEdge 1655MC supports as many as six blades per chassis.

The BladeRunner's chassis is easily serviceable and has good redundancy features. It houses four 660-watt power supplies in a 3+1 configuration; three fan cages with four hot-pluggable fans; a Gigabit Ethernet switch with as many as eight ports and an integrated system management processor; and a built-in KVM (keyboard, video and mouse)/KVM-over-IP module. (The BladeRunner can support two Gigabit Ethernet switches running in tandem for high availability.)

The BladeRunner chassis isn't as expandable as higher-end systems on the market, such as IBM's BladeCenter or HP's ProLiant BL40p, because it does not support Fibre Channel or InfiniBand switching options. The BladeRunner is still well-equipped to handle departmental environments: Server blades come standard with a PCI-X Mezzanine Card that can support a Fibre Channel interface or an additional network port, a less expensive option for connecting the BladeRunner to external storage systems.

The BladeRunner uses mobile ATA hard drives for internal blade storage, similar to the Sun Fire B1600. Each BladeRunner blade server can accommodate two 60GB, 5,400-rpm mobile ATA drives.


Penguin also offers an interesting SATA (Serial ATA) disk-blade option that's suitable for sites looking for more blade storage.

The expansion SATA disk blades support RAID 0 and RAID 1, and two SATA blades can be mirrored in a RAID 10 configuration. This optional setup consists of a standard server blade with a four-channel SATA minibackplane attached to two disk blades; each disk blade can be outfitted with two 250GB SATA drives. The configuration reduces processor density in favor of more internal storage capacity. Pricing for this option starts at $3,100 per blade.
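The usable capacity of that setup depends on the RAID level chosen. A back-of-the-envelope sketch, assuming the drive counts above and the standard capacity overhead of each RAID level (the review does not quote usable figures, so these are derived, not stated):

```python
# Usable capacity of the SATA disk-blade option described above.
# Drive counts come from the review; RAID overhead follows standard definitions.
drive_gb = 250
drives_per_disk_blade = 2
disk_blades = 2
total_drives = drives_per_disk_blade * disk_blades   # 4 drives total

raw_gb = total_drives * drive_gb   # 1000GB of raw disk
raid0_gb = raw_gb                  # RAID 0: striping, no redundancy
raid1_gb = raw_gb // 2             # RAID 1: every drive is mirrored
raid10_gb = raw_gb // 2            # RAID 10: two striped blades, mirrored

print(raw_gb, raid0_gb, raid1_gb, raid10_gb)  # 1000 1000 500 500
```

In other words, a fully populated disk-blade pair offers roughly 1TB raw, or about 500GB usable once mirroring is applied.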

IT managers who use the BladeRunner for Linux cluster applications can easily administer and provision nodes using Scyld Beowulf's built-in provisioning features.

Using the BladeRunner in noncluster environments will require that IT managers use third-party tools for provisioning and image deployment. Competitors such as HP and Sun, in contrast, package their blades with strong management suites—HP's ProLiant Essentials Rapid Deployment Pack and Sun's N1 Grid Provisioning Server, respectively.

Our test system had Scyld Beowulf preinstalled, and during tests we easily configured a five-node Beowulf cluster using the BeoSetup GUI.

The BeoSetup GUI provides a single administration point for the entire cluster and runs on the master node. Adding nodes to the cluster was a simple drag-and-drop procedure, but we also could have used the command line. The GUI provides a useful at-a-glance window, BeoStatus, for monitoring cluster performance metrics such as CPU load, memory usage and disk usage.

Scyld Beowulf provides advanced cluster administration features such as job mapping and file system replication.

During tests, we powered off a compute node, and the internal job mapper automatically stopped assigning jobs to the unavailable node. When we powered the blade back on, the system automatically detected the node and rejoined it to the cluster.

We ran the compute nodes as diskless blades, but IT managers can install hard drives on the nodes and use the storage space for caching or storing local copies of data sets. BeoSetup can be configured to automatically check and re-create the node file systems without user intervention.

During tests, we managed our BladeRunner system through the integrated management port in the eight-port Gigabit switch.

The integrated management software can be accessed remotely via SNMP, the command line or the Web interface.

We found the Web interface easy to use, offering quick links to system health status and hardware information.

The Web interface also provides links to switch management information such as the process for setting up VLANs (virtual LANs), but Linux-savvy administrators will likely be better off using the command-line interface via the serial console or Telnet.

Technical Analyst Francis Chu can be reached at francis_chu@ziffdavis.com.

Check out eWEEK.com for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.
