A lookout point
Aiming both to keep the SPAM situation under control and to test our systems' efficiency, we set up a number of procedures that allow real-time monitoring of inbound email traffic towards Spin and its customers.
This has produced interesting (if not always obvious) results on certain trends, such as the continuing, perceptible growth of SPAM, and has also given us hints about the total amount of viruses on the Net.
All in all, we are building a small observatory for all the data (even the rather alarming figures) related to the signal-to-noise ratio on which email, the modern age's most marvellous means of communication, depends.
This website has been designed to host the data gathered by our systems and to make it available to anyone interested.
Obviously, what is reflected here is not, and cannot be, the true state of email, but only one point of view, a single analysis.
This is not irrelevant, as the quality of the sample strongly influences the kind of data obtained. Since most of our customers are businesses based in Italy, trusted traffic overnight and at weekends is very low, which makes rejection percentages higher at those times (as the spam/traffic ratio graphs show).
Service providers with a wider and more varied user base would clearly get quite different results, even using the same measurements.
Moreover, any sensible analysis of an entity such as email traffic has all the limitations typical of a quantitative measurement of a qualitative datum. How do we define what is SPAM and what is not?
The only guidance we can rely upon is: SPAM is what I consider SPAM, and hence it is to be blocked.
All the assumptions on which the analysis of the raw data is built are therefore to be read in the light of this initial statement, in the hope that it proves useful enough for its purpose, strengthened as it is by the low rate of false positives reported by our customers.
Graphs and analysis methods
Spam Rejection Detail
The first question we wanted to answer was: "How much of the total traffic do we actually block, and according to which rules?"
Many emails can be blocked through public Blocklists, such as those
maintained by Spamhaus. Many other rejections are the result
of dedicated work by Spin, such as the
LOSABL Blocklist, or of rules designed to block emails
sent by software created specifically to produce SPAM, rather than emails from senders already notorious as spammers.
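The public Blocklists mentioned above are DNS-based: in principle, a mail server checks a connecting client by reversing the octets of its IP address, prefixing them to the list's DNS zone, and querying for an A record (an answer means "listed"). A minimal sketch of that lookup, with the zone name passed in as a parameter since different lists use different zones:

```python
# Sketch of a DNSBL lookup. The query name is built by reversing the
# client IP's octets and appending the list's zone; an A-record answer
# means the address is listed, NXDOMAIN means it is not.
import socket

def build_query(ip, zone):
    """Build the DNSBL query name, e.g. 1.2.3.4 -> 4.3.2.1.<zone>."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip, zone):
    try:
        socket.gethostbyname(build_query(ip, zone))
        return True
    except socket.gaierror:  # NXDOMAIN: not listed
        return False

print(build_query("1.2.3.4", "zen.spamhaus.org"))
# 4.3.2.1.zen.spamhaus.org
```

In practice the lookup happens inside the MTA itself at SMTP time, so the message can be rejected before its body is ever accepted.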
Analysing real data on the blocks we perform therefore allows us to assess
our systems' actual efficiency compared with what "public domain" systems achieve, while also providing
a very precise estimate of how much of the total traffic we are actually blocking.
The result of this work is available
here, where the graphs cover different time spans.
The analysis is carried out directly on the logs of our external servers, where the rejection takes place,
and is updated in real time.
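The tallying step amounts to scanning the logs and counting rejections per rule. A minimal sketch, assuming a hypothetical log format in which each rejection line carries a `reject=` field naming the Blocklist or rule that triggered it (real MTA logs differ; the pattern would need adapting):

```python
# Count rejected messages per blocking rule/Blocklist from mail logs.
# The "reject=<rule>" field format is hypothetical, for illustration only.
import re
from collections import Counter

REJECT_RE = re.compile(r"reject=(\S+)")

def tally_rejections(log_lines):
    """Return a Counter mapping each rule name to its rejection count."""
    counts = Counter()
    for line in log_lines:
        m = REJECT_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    "May 01 10:02:11 mx1 smtp: client=1.2.3.4 reject=sbl.spamhaus.org",
    "May 01 10:02:15 mx1 smtp: client=5.6.7.8 accepted",
    "May 01 10:02:19 mx1 smtp: client=9.9.9.9 reject=losabl.local",
    "May 01 10:02:23 mx1 smtp: client=1.2.3.5 reject=sbl.spamhaus.org",
]

print(tally_rejections(sample))
# Counter({'sbl.spamhaus.org': 2, 'losabl.local': 1})
```

Dividing each count by the total number of delivery attempts in the same logs gives the blocked-traffic percentages shown in the graphs.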
The main side effect of this kind of analysis is that the relative efficiency of overlapping
Blocklists inevitably appears distorted. A SPAM source listed on two lists will be blocked entirely
by whichever list is applied first, to the detriment of the second list's apparent efficiency.
The graph of the block reasons
reproduced here, for example, clearly shows a huge gap between
Spamhaus's XBL and
DSBL, although the two lists are roughly equally effective.
This is because XBL is one of the first Blocklists to be looked up, whereas DSBL is among the last.
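A toy simulation makes the ordering effect concrete. The list names, memberships, and sources below are invented for illustration: two lists with nearly identical real coverage, where the one queried first claims every shared hit.

```python
# Toy simulation of how lookup order distorts apparent Blocklist efficiency.
# Memberships are made-up: each list really covers 4 of the 5 spam sources.
xbl = {"a", "b", "c", "d"}
dsbl = {"b", "c", "d", "e"}

lookup_order = [("XBL", xbl), ("DSBL", dsbl)]

def apparent_blocks(sources, ordered_lists):
    """Credit each blocked source to the first list, in lookup order, that matches it."""
    credit = {name: 0 for name, _ in ordered_lists}
    for src in sources:
        for name, members in ordered_lists:
            if src in members:
                credit[name] += 1
                break  # later lists never see this source
    return credit

spam_sources = ["a", "b", "c", "d", "e"]
print(apparent_blocks(spam_sources, lookup_order))
# {'XBL': 4, 'DSBL': 1} -- even though each list covers 4 of the 5 sources
```

Swapping the two entries in `lookup_order` would invert the apparent gap, which is why the block-reason graphs must be read with the lookup order in mind.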