
Has the malware problem gotten out of control? An aggressive test of anti-virus products indicates that it has, at least by some measures. It’s a tough call.

I worry less and less personally about malware, even though I’m barraged by it day and night. I’ve got a gateway security device that scans for it with a Kaspersky scanner and my mail server runs Sunbelt Software’s Ninja, which uses both Authentium and Bitdefender to scan everything coming through, *and* I have desktop anti-virus on almost all my systems. I’ve got my belt and suspenders and my pants are nailed to my gut.

And it’s a good thing I’ve got all this protection, because tests by independent test group AV-Test paint a dark picture of the detection capabilities of most products.

For many years there has been a standard of sorts for testing anti-virus products called the “WildList.” The problem with the WildList is that it’s relatively small and contains only certain types of malware, and everyone knows its contents. As Andreas Marx of AV-Test puts it, “the WildList is not reflecting today’s threats, but more or less ‘historical’ threats only (e.g. which malware was widespread two months ago?).” So it’s not surprising that (according to AV-Test) most scanners can detect 100 percent of it. What you should worry about is the huge number of other threats out there.


AV-Test ran a huge test of backdoors (59,053), bots (70,658) and trojan horses (159,971), for a total of 289,682 malware samples, against 33 products. We have separate numbers on the bots, backdoors and trojans, but see the table below for the ranked results by overall detection percentage.
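The sample counts above can be sanity-checked with quick arithmetic (the figures are AV-Test's, as reported in this column):

```python
# Verify that the per-category sample counts add up to the
# reported total of 289,682 malware samples.
backdoors, bots, trojans = 59_053, 70_658, 159_971
total = backdoors + bots + trojans
print(total)  # 289682, matching the article's figure
```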

How do the numbers look? If you ask me, not good. Five vendors scored over 99 percent, which has to be considered excellent against so large a sample set. Another six scored over 95 percent, and another seven over 90 percent, which is roughly the median of 90.42 percent. Half the products did worse than that: 10 were under 75 percent, and four were under 50 percent. That's pretty bad.

Several of the best products, like my own mail security product, use multiple engines. The No. 1 product, WebWasher by Secure Computing, for example, detected 99.97 percent, or all but 87, of the 289,682 samples. It uses the AntiVir engine (the No. 2 product) in combination with an engine Secure Computing developed on its own. Not all products that use multiple engines score better as a result, as they may configure those engines less aggressively.
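The multi-engine idea can be sketched simply: flag a sample if any configured engine flags it, so the combined detection rate can only match or beat the best single engine. This is an illustrative toy, not any vendor's actual logic; the "engines" here are stand-ins with deliberately different blind spots.

```python
# Hypothetical multi-engine scan: a sample is flagged when ANY
# engine flags it. Engine behavior below is purely illustrative.

def multi_engine_detects(sample, engines):
    """Return True if at least one engine flags the sample."""
    return any(engine(sample) for engine in engines)

# Toy engines with different blind spots.
engine_a = lambda s: "bot" in s or "backdoor" in s   # misses trojans
engine_b = lambda s: "trojan" in s or "backdoor" in s  # misses bots

samples = ["bot.exe", "trojan.dll", "backdoor.sys", "clean.txt"]
flagged = [s for s in samples if multi_engine_detects(s, [engine_a, engine_b])]
print(flagged)  # all three threats caught; neither engine alone catches all
```

As the column notes, the combination only helps if both engines are configured aggressively; a weakly tuned second engine adds little.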

And then there are the many companies coming at the problem from the other direction: whitelisting the programs the user should be allowed to run and disallowing everything else. This is an old idea that has proven difficult to manage in the past, and it misses malicious code injected through vulnerabilities such as buffer overflows, which executes inside an already-approved program. I hear from just about all of these "alternative" approach vendors and I wonder if their time will ever come, but their mission is becoming more important.
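The whitelisting approach boils down to a default-deny check against a list of known-good binaries, often keyed by hash. A minimal sketch, with made-up file contents and an assumed SHA-256 keying scheme:

```python
# Hedged sketch of application whitelisting: only binaries whose
# hash is on the approved list may run; everything else is denied
# by default. The "binaries" here are illustrative byte strings.
import hashlib

approved_hashes = {
    hashlib.sha256(b"notepad-binary-contents").hexdigest(),
    hashlib.sha256(b"winword-binary-contents").hexdigest(),
}

def may_run(binary_bytes: bytes) -> bool:
    """Default-deny: allow execution only for whitelisted hashes."""
    return hashlib.sha256(binary_bytes).hexdigest() in approved_hashes

print(may_run(b"notepad-binary-contents"))  # True: on the whitelist
print(may_run(b"dropper-binary-contents"))  # False: denied by default
```

Note the limitation the column raises: a buffer overflow hijacks a process that already passed this check, so the whitelist never gets a say.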

Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.

Check out eWEEK.com's Security Center for the latest security news, reviews and analysis. And for insights on security coverage around the Web, take a look at eWEEK.com Security Center Editor Larry Seltzer's Weblog.
