Author Topic: Evaluating anti-virus tests: Why some reviews are better than others?!  (Read 3541 times)


Samker (SCF Administrator)
The fundamental job of an anti-virus product is to find and remove malicious and unwanted code. The key measure of its effectiveness is how well it’s able to isolate any threat and so prevent its spread. Sounds easy, but it's far from straightforward. Today’s threats are more complex than ever before. Much of today’s malware, including Trojans, backdoors and spammers’ proxy servers, is purpose-built to hijack users’ machines; and a single Trojan can easily be found on several thousand infected PCs. Malicious code may be embedded in e-mail, injected into fake software packs, or placed on ‘grey-zone’ web pages.

It’s not surprising that detection capability is considered by users to be a key factor in their selection of an anti-virus solution.

But how do you decide which product has the best detection? It's easy, right? You check out one of the anti-virus product reviews and see which product finds the most viruses. There are lots of them, so you just need to find the latest one. Sadly, it's not that simple. Comparative reviews are not all the same. Some are better than others at evaluating the detection capabilities of different anti-virus products. The complexity of today’s threats, and the environment they operate in, means that testing the detection capabilities of anti-virus products today is a difficult business. It's costly, it takes time and it requires a good deal of expertise.

So what types of review exist, and how do they shape up?


Magazine reviews

Most magazines simply do not have the resources necessary to conduct an effective anti-virus detection test. So unless the magazine is re-printing the results of a test carried out by an independent test organisation, the review is highly unlikely to offer a fair assessment of the detection capabilities of anti-virus products, for several reasons.

In the first place, if the magazine decides to provide its own test bed, the results are likely to be badly skewed by the ridiculously small number of samples used in the test. Let's imagine, for the sake of argument, that a comprehensive collection consists of 100 viruses. Two products being tested [let's call them SuperScan and TopScan] each miss ten percent of these viruses, but a different ten percent, as shown below:


[Figure: the full 100-virus collection, with the ten viruses missed by SuperScan and the ten missed by TopScan highlighted]
Now, if the magazine decides to base its detection test on just half of this collection, the results will be skewed in one of several possible ways, as shown in the diagrams below:


[Figure: possible 50-sample subsets the magazine might choose, each yielding different detection rates for SuperScan and TopScan]
It's not difficult to see how the figures could become further distorted if the test-set is even smaller [and many magazine reviews in the past have been based on just a handful of samples] and if more anti-virus scanners are included.
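
To put numbers on the distortion, here is a minimal Python sketch of the hypothetical scenario above; the collection size, product names and miss patterns are the ones invented for the example:

[code]
import random

# Hypothetical numbers from the example above: a 100-sample collection,
# and two products that each miss a different ten percent of it.
collection = set(range(100))
superscan_misses = set(range(0, 10))    # the 10 samples SuperScan misses
topscan_misses = set(range(10, 20))     # the 10 samples TopScan misses

def detection_rate(misses, test_set):
    """Fraction of the test set that the product detects."""
    return (len(test_set) - len(test_set & misses)) / len(test_set)

# On the full collection, both products score an identical 90%.
print(f"full set: SuperScan {detection_rate(superscan_misses, collection):.0%}, "
      f"TopScan {detection_rate(topscan_misses, collection):.0%}")

# A magazine testing on a random half of the collection gets results
# that depend heavily on which half it happens to pick.
for trial in range(3):
    subset = set(random.sample(sorted(collection), 50))
    print(f"trial {trial}: SuperScan {detection_rate(superscan_misses, subset):.0%}, "
          f"TopScan {detection_rate(topscan_misses, subset):.0%}")

# Worst case: the subset contains every sample SuperScan misses and
# none that TopScan misses: 80% versus 100% for two equal scanners.
skewed = superscan_misses | set(range(20, 60))
print(f"skewed:   SuperScan {detection_rate(superscan_misses, skewed):.0%}, "
      f"TopScan {detection_rate(topscan_misses, skewed):.0%}")
[/code]

Run it a few times and the random trials show two equally capable scanners drifting apart on the very same 50-sample test, purely through sampling luck.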

In addition, there may also be a problem with where the samples come from. Unless the reviewer has access to a virus collection belonging to a bona fide anti-virus researcher [and, for obvious reasons, researchers are very careful about who they give samples to], there's no guarantee that all the files will be real viruses. It's common for virus collections, particularly those that have not come from legitimate sources [those downloaded from a web site, for example], to contain garbage, non-viral samples. Why should this matter? Well, if SuperScan correctly fails to flag an infection in one of these garbage files, the reviewer [who considers all samples in the collection to be infected] will penalise the product. TopScan, on the other hand, which generates a false alarm by identifying a virus where there isn't one, will be rated the better product.
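
The scoring trap is easy to reproduce. Below is a small sketch, with invented file names and verdicts, of how a naive "every sample is infected" scorer penalises the accurate product and rewards the false-alarming one:

[code]
# Hypothetical collection: three files the reviewer believes are all
# infected, though one is actually non-viral garbage.
truth = {
    "virus_a.exe": True,   # genuinely infected
    "virus_b.exe": True,   # genuinely infected
    "garbage.bin": False,  # corrupt junk, not a virus
}

# SuperScan is accurate: it flags the real viruses and ignores the junk.
superscan = {"virus_a.exe": True, "virus_b.exe": True, "garbage.bin": False}
# TopScan "detects" everything, false-alarming on the garbage file.
topscan = {"virus_a.exe": True, "virus_b.exe": True, "garbage.bin": True}

def naive_score(verdicts):
    """Magazine-style scoring: every flagged file counts as a detection,
    because every file in the collection is assumed to be infected."""
    return sum(verdicts.values()) / len(verdicts)

print(f"naive: SuperScan {naive_score(superscan):.0%}")  # 67% -- penalised
print(f"naive: TopScan   {naive_score(topscan):.0%}")    # 100% -- rewarded

def fair_score(verdicts):
    """Score only against genuinely infected files, counting false alarms."""
    infected = [f for f, real in truth.items() if real]
    hits = sum(verdicts[f] for f in infected) / len(infected)
    false_alarms = sum(verdicts[f] for f, real in truth.items() if not real)
    return hits, false_alarms

print(f"fair:  SuperScan {fair_score(superscan)}")  # (1.0, 0)
print(f"fair:  TopScan   {fair_score(topscan)}")    # (1.0, 1)
[/code]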

There are, of course, exceptions. Virus Bulletin [hosted by the UK anti-virus vendor Sophos] and SC Magazine [which conducts fee-based anti-virus certifications] both carry out much more extensive anti-virus tests. However, both the Virus Bulletin ‘VB100%’ and the SC Magazine ‘Checkmark’ certifications are based on detection of ‘in the wild’ samples. And this raises other problems.


Tests and certifications based on the WildList

The WildList, established in the early 1990s by anti-virus researcher Joe Wells and now published monthly by the WildList Organization, aims to keep track of which viruses are spreading in the real world. Users are clearly most concerned about these threats [as opposed to those found only in the virus laboratory], and over the years detection of 'in the wild' viruses, as defined by the WildList, has become the de facto measure by which anti-virus products are judged. Fee-based anti-virus certifications, most notably those of ICSA Labs [part of TruSecure Corporation] and SC Magazine, are based on detection of WildList samples. In addition, as noted above, the Virus Bulletin ‘VB100%’ is awarded on the basis of a product's ability to detect WildList viruses. However, using WildList viruses as a yardstick to measure the detection capability of anti-virus products is not as clear-cut as it may at first seem.

To be included in the WildList, a virus must be reported by at least two separate WildList reporters [a group of 70 virus information professionals, many of whom work in the anti-virus industry]. However, there's no guarantee that what's reported gives an accurate picture of what's really out there. If a company's chosen anti-virus product finds and removes a virus without difficulty, will anyone bother to contact the vendor's support department to report the infection? It's much more likely that they will simply move on to the next job. So the WildList is more a measure of 'problem' viruses that prompted a support call than a reflection of all viruses found in the field.

Also, the WildList is compiled monthly, but it's a retrospective list of reported viruses: there is a time lag between receiving the reports and publishing the data. The WildList is always at least a month out of date!
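
As a rough illustration only [the real compilation process is more involved], here is a toy model of the two-reporter rule and the publication lag, with invented reporter and virus names:

[code]
# Toy model, not the real process: a virus qualifies once at least two
# separate reporters have reported it, and each month's list is
# published from the previous month's reports.
reports = {
    # (reporter, virus) pairs -- all names hypothetical
    "January": [("reporterA", "Worm.X"), ("reporterB", "Worm.X"),
                ("reporterA", "Worm.Y")],
    "February": [("reporterC", "Worm.Y"), ("reporterB", "Worm.Z"),
                 ("reporterC", "Worm.Z")],
}

MIN_REPORTERS = 2

def wildlist(month):
    """Viruses reported by at least MIN_REPORTERS distinct reporters."""
    seen = {}
    for reporter, virus in reports.get(month, []):
        seen.setdefault(virus, set()).add(reporter)
    return sorted(v for v, who in seen.items() if len(who) >= MIN_REPORTERS)

# The list published in February reflects January's reports, so Worm.Z,
# already spreading in February, cannot appear before March at best.
print(wildlist("January"))   # ['Worm.X']
print(wildlist("February"))  # ['Worm.Z']
[/code]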

Today's threats spread faster than ever before, and the risk of being hit by a new piece of malicious code is correspondingly higher. More than 80% of new malicious programs are found in the field, on real machines, not just in so-called ‘zoo’ collections. So the term ‘in the wild’ is somewhat outmoded.


Comprehensive anti-virus detection tests

Testing the detection capabilities of anti-virus scanners is a complex business, requiring time, money and expertise. To be truly effective, a detection test must be comprehensive in its approach. Several academic institutions have developed such expertise over many years. The Virus Test Center, University of Hamburg, AV-Test GmbH and AV-comparatives.org all conduct serious anti-virus detection tests.

So why are these tests a more effective measure of the detection capabilities of anti-virus products? There are several reasons.

1. They are truly independent. They receive no money from anti-virus vendors and have no commercial interest in the outcome of the tests.

2. Their detection tests are comprehensive in nature.

  • They include extensive collections, containing many types of threat, not just WildList samples.
  • They test on multiple platforms.
  • They test a product's ability to scan inside commonly used compressed and archive formats like ZIP, LHA, RAR and CAB [see the sketch after this list].
  • They test a product's reliability: does it generate false alarms [that is, mistakenly reporting a virus where there is none]?
  • They include proactive detection tests: how well does a product detect new, unknown viruses?
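
On the archive-scanning point, anyone can build a safe test case of their own. The sketch below packs the standard EICAR test string [a harmless file that anti-virus products detect by industry convention] into nested ZIP archives using Python's zipfile module; note that a resident scanner may quarantine the files the moment they are written, which is itself a useful result:

[code]
import zipfile

# The standard EICAR test string: harmless by design, but flagged as
# a "virus" by anti-virus products by industry convention.
EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
         r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

# Pack the test file inside a ZIP archive. A scanner configured to look
# inside archives should flag eicar_test.zip; one that scans only plain
# files will report it clean.
with zipfile.ZipFile("eicar_test.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("eicar.com", EICAR)

# Nesting the archive inside a second ZIP tests how deep the scanner
# is willing to recurse.
with zipfile.ZipFile("eicar_nested.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("eicar_test.zip")
[/code]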

Summary

Testing the detection capabilities of anti-virus products is a complex business, beyond the scope of non-specialist computer magazines unless they are using samples provided by an anti-virus research organisation or re-printing the results of a more in-depth study. Moreover, even the more in-depth certification schemes, like those of ICSA Labs and SC Magazine, are based on detection of WildList samples alone. So using them to differentiate between anti-virus products is problematic.

So how can you assess the detection capabilities of different anti-virus products? No product can claim to be ahead in every detection test. The key is to look for a consistent track record in multiple tests. And the more rigorous, independent tests carry greater significance because of their comprehensive nature.

(Copyright by David Emm / Kaspersky)

http://esac.kaspersky.fr/index.php?PageID=9


