As Wikipedia puts it, Google Safe Browsing “is a service provided by Google that provides lists of URLs for web resources that contain malware or phishing content”.
Over the past few days, this blog was flagged as a malware site by the Google Safe Browsing robot.
As a result, users of Chrome, Chromium and Firefox received a warning that this web site was hosting malware. Of course, this was not true. Although I asked Google several times to review their decision, I was unable to convince their robots.
Finally, I appealed to the humans running StopBadware, who duly noticed there was no malware hosted on this site, nor any link from this site pointing to malware hosted elsewhere. I am now removed from their listings (and Google’s). End of the story, you’d think.

Apart from the fact that this public shaming hurts my reputation, it also meant that visits to this blog dropped by 70%. I suspect part of the remaining 30% was the monitoring system I had put in place to check whether Google had fixed their listing. I do not have anything to sell, so I did not really lose any business. Still, I can imagine the nightmare it would be for an e-commerce site to be blacklisted in this way.

As a user, I can understand the value of being warned about dangerous web sites. However, the few times I received this warning when visiting web sites, it always looked like a false positive. Hence the question: is the cure not worse than the disease?
I just got an e-mail from someone currently attending the IGF meeting in Geneva. The e-mail ended up in my spam folder because the IP address used for the WLAN at the meeting is on a spambot/virusbot blacklist, namely cbl.abuseat.org. Apparently someone there has a computer infected by a spambot or a virusbot. Because the local network uses NAT, all the computers share the same public IP address. This means that all the attendees of the meeting risk seeing their e-mails blacklisted somewhere.
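For the curious, this is roughly how such a blacklist is queried: the client’s IPv4 address is reversed octet by octet and prepended to the blacklist zone, and a DNS A record in the answer means “listed”. A minimal sketch (the helper name is mine):

```python
import ipaddress

def dnsbl_query_name(ip: str, zone: str = "cbl.abuseat.org") -> str:
    """Build the DNS name to look up when checking `ip` against a DNSBL zone."""
    octets = ipaddress.IPv4Address(ip).exploded.split(".")
    # Reverse the octets and append the blacklist zone.
    return ".".join(reversed(octets)) + "." + zone

# To actually perform the check, resolve the name; NXDOMAIN means "not listed":
#   import socket
#   try:
#       socket.gethostbyname(dnsbl_query_name("192.0.2.1"))
#       print("listed")
#   except socket.gaierror:
#       print("not listed")

print(dnsbl_query_name("192.0.2.1"))  # 1.2.0.192.cbl.abuseat.org
```

Since everyone behind the NAT shares one public address, a single infected machine is enough to get that one name listed for the whole meeting.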
Funny that this comes from the very people who would like to set up strategies to fight cybercrime…
Lessons to be learned:
One: NATs are a nuisance. They are responsible for collateral damage.
Two: In a hostile networking environment, never ever trust the local network; fire up your ssh or IPsec tunnel to a machine you can trust.
Three: Give us IPv6 as soon as possible so we can get rid of NATs.
I recently switched to a new position in my day job. I moved to another campus and office, where I found on my desk a computer with the standard default configuration. The default browser in this configuration is Internet Explorer 6.
I am still in a state of shock. Over the last four years in my previous position, I had been using Firefox as my main browser, mostly because of AdblockPlus, a remarkably efficient advertisement blocker.
With IE6, I have rediscovered how distracting and invasive advertising on web sites can be. Suddenly, the pop-up windows, Flash animations and other nasties are there again. Unlike in a paper magazine, where you only need to turn the page to ignore them, advertisements on web sites really keep you from working until you close the pop-up window, stop the animation, turn off the volume, and so on.
I guess one could say that Wladimir Palant, the developer of Adblock Plus, is one of the greatest benefactors to computer productivity over the last few years. Thanks, mate. Great job. I am forever grateful.
Greylisting is a technique deployed on mail servers that has proved effective against spam. I use it here. However, I have yet to find a greylisting daemon for Postfix that works well with IPv6. This morning again, a message from an IPv6 SMTP host came in and the greylisting daemon did not know what to do, until I whitelisted the host in question.
I have tried both SQLGrey and Policyd. They work, to a degree, but are not yet as smart as they are on the IPv4 side.
Typically, such a daemon should automatically whitelist entire /64s for IPv6, just as it whitelists /24s for IPv4. Support for either PostgreSQL or MySQL would be even better.
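To make the granularity I have in mind concrete, here is a small sketch of how a daemon could key its greylisting state on the enclosing network rather than on the individual address (the function name is illustrative, not taken from SQLGrey or Policyd):

```python
import ipaddress

def greylist_key(client_ip: str) -> str:
    """Normalize a client address to the network the greylist state is keyed on:
    the /64 for IPv6 clients, the /24 for IPv4 clients."""
    addr = ipaddress.ip_address(client_ip)
    prefix = 64 if addr.version == 6 else 24
    # strict=False lets us pass a host address and get its enclosing network.
    return str(ipaddress.ip_network(f"{client_ip}/{prefix}", strict=False))

# Two hosts in the same IPv6 /64 share the same greylisting state:
print(greylist_key("2001:db8:1:2::25"))    # 2001:db8:1:2::/64
print(greylist_key("2001:db8:1:2::1:25"))  # 2001:db8:1:2::/64
print(greylist_key("192.0.2.25"))          # 192.0.2.0/24
```

Whitelisting the whole /64 matters because a single IPv6 site can legitimately retry a deferred delivery from a different address within the same subnet, which would otherwise restart the greylisting delay.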
Any suggestions welcome.
I added a page to this blog, detailing some of the tricks I use to keep spam at a minimum level. The first part covers Sendmail tricks I found here and there on mailing lists and web sites. I take this opportunity to thank the authors.
It seems to me that it is most efficient to fight spam at the SMTP session level: this saves CPU cycles, bandwidth and disk space. Spam filtering at a later stage, typically in the delivery agent or the mail reader, is less efficient. From the spammer’s point of view, if the message got past your SMTP gateway, then there is a chance that someone will read it.
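As an illustration of session-level rejection, here is the kind of restriction list one can put in a Postfix main.cf; the restriction names are standard Postfix, but the RBL zone and the policy socket are examples to adapt to your own setup:

```
# main.cf -- reject as much as possible during the SMTP session,
# before the message body is ever accepted.
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_non_fqdn_sender,
    reject_unknown_sender_domain,
    reject_rbl_client zen.spamhaus.org,
    check_policy_service unix:private/greylist
```

Each rejected session costs the spammer a connection and costs you almost nothing, whereas a message filtered after delivery has already consumed your bandwidth and disk.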
I will add and/or detail these tricks in the coming weeks.
Until we have ISPs really committed to eliminating spammers from their networks, either on their own initiative or because governments force them to, the best thing we can do is to frustrate the spammers as much as possible, so as to make their business unprofitable.