Results tagged “rant” from Just Another Hacker

Url scanning seems to be an emerging trend. Detecting malware distribution channels and preventing infections is easier than cleaning up the mess they make. The basis of the idea is good, but the current implementations fall short. I have been mulling this over for a while, ever since I read Russ McRae's post (rant?) on url shorteners needing to detect malware.

The initial problems that url scanners face are simple evasion techniques, such as the click-to-get-infected method that you can see in my previous post. This blogspot url scores quite cleanly.
And why shouldn't it? It doesn't contain anything directly malicious, so it should score cleanly until reputation or reactive defenses catch up with it. "Listen," you say, "who cares about the herding page? It doesn't do anything; it's the delivery page we care about. If a user visits a 'benign' page that redirects him to malware, he will still be stopped at the malicious page!"

Alas dear friend, a simple server side block is all it takes to stop the scanner from accessing the offending page.
Other documented techniques seen in the wild include only delivering the malicious payload on 1 of x requests, user agent filtering, JavaScript obfuscation that breaks automated deobfuscation, and more. I have seen an alert box break browser automation, so there is no shortage of options for the bad guys. However, considering how simple it is to shut down today's url scanners, I doubt we will see many advanced techniques yet. Url scanning might overcome these simple bypasses in the future, but it should not be considered a defense, and certainly not a replacement for your desktop AV.
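To make the tricks above concrete, here is a hypothetical sketch of the serving logic an attacker might use. The file names, user agent strings, and the 1-in-10 threshold are all illustrative, not taken from a real campaign:

```shell
#!/bin/sh
# Hypothetical sketch of the evasion techniques described above.
# decide_page <user-agent> <request-count> prints which page to serve.
decide_page() {
  ua=$1; count=$2
  case "$ua" in
    *MSIE*|*Firefox*) ;;              # looks like a real browser, fall through
    *) echo benign.html; return ;;    # scanners and bots only ever see the clean page
  esac
  if [ $((count % 10)) -eq 0 ]; then  # 1-of-x delivery: payload on every 10th hit
    echo payload.html
  else
    echo benign.html
  fi
}

decide_page "curl/7.19" 10                           # scanner UA -> benign.html
decide_page "Mozilla/4.0 (compatible; MSIE 7.0)" 10  # browser on the 10th hit -> payload.html
```

A url scanner fetching the page once with an automation user agent never sees anything but the benign page, which is exactly why a clean score means so little.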

The reason behind the change (from openDNS to Google public DNS) is a simple one. Google does not (currently) fudge NXDOMAIN records like openDNS does. Fudged NXDOMAINs have a tendency to break RBL queries; openDNS "solves" this problem by making exceptions for known RBLs. As you can see from this OLD discussion on the openDNS forums, this has been their policy for a long time.
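For context, an RBL query is just a DNS A lookup: reverse the client IP's octets, append the blocklist zone, and treat any A record that comes back as "listed" and NXDOMAIN as "clean". A resolver that rewrites NXDOMAIN into a real A record therefore makes every address look listed. A minimal sketch (the helper name is mine; spamhaus's zen zone is used as the example list):

```shell
# Build the RBL query name: reversed octets under the blocklist zone.
rbl_name() {
  ip=$1; zone=$2
  echo "$ip" | awk -F. -v z="$zone" '{print $4"."$3"."$2"."$1"."z}'
}

rbl_name 127.0.0.2 zen.spamhaus.org
# -> 2.0.0.127.zen.spamhaus.org
# The actual check would then be:
#   host -t A "$(rbl_name "$ip" "$zone")"
# Any answer at all counts as a hit, which is why a fudged NXDOMAIN
# turns every lookup into a false positive.
```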

The default RBL services used by the Movable Type spamlookup plugin, plus the additional lookups I use (stopforumspam, spamhaus and others), were constantly producing false positives for comments and trackbacks. Changing to Google solved all these problems. If you are using niche RBLs and openDNS, I would recommend that you test them.

$ host [RBL query via openDNS]
A [...]   !!! A record has zero ttl
$ host [RBL query via openDNS]
A [...]   !!! A record has zero ttl

$ host [RBL query via Google DNS]
does not exist at [...], try again
$ host [RBL query via Google DNS]
does not exist at [...], try again

I have taken the liberty of reporting these two to openDNS as they are common for MT users, however there are several other RBLs that I use which aren't covered by openDNS. By changing to google public DNS I don't have to put up with false positives. It also saves me the hassle of having to verify and "fix" RBLs every time I make changes.

If you want to make the change you can find the details at:

Stopping the cleanfeed

If you, like me, are concerned about the government's proposed cleanfeed, then TAKE ACTION.


Vote in smh's poll

Sign this petition

Add Conroy to Santa's naughty list

Write to a minister and get them to take action

Sign this petition too;

Participate in the online and offline blackout protest

Add a twibbon to your twitter avatar

Chime in at BorB, get the attention of ACS

She might be with the ALP, but she is listening. Leave a comment on Kate Lundy's blog;

For further calls to action and news, stay tuned at

Check back here for some more tools and filter bypass tutorials in the new year
Making me publish this stupid post, and managing yet another login to a site I don't personally use. Ok, so maybe it's not such a bad idea to allow blog claiming and to support logins... BUT they should be able to index blogs without having someone make a claim, or at the very least allow the authorization to be added as an html comment or as a separate file. I suppose they consider the forced posting to be a marketing tool. To me it tastes awful...


The changes to PackageKit which allow non-privileged users to install Fedora-signed packages without privilege escalation make me glad I'm not a Fedora user. There is just a crapton of potential for breakage and security abuse bundled in here, and since I'm a reasonable fellow I will even supply some examples

Graudit, reducing false positives

Some anon called "R" left a comment today, but it was on a page where I had accidentally left comments enabled, so I won't publish it. He complained about false positives in graudit, and it is not the first time I have heard this, or seen it for that matter. So I thought I would address it publicly. R's comment was:

"graudit seems to trip on things like "update_profile(", proudly hilighting "file(" :)"

This is true (I mostly see it around function names containing mail), and I would very much like to correct all the false positive matches, and avoid any false negatives too for that matter. However, this is a hobby project for me. I am not a company selling software, nor am I paid or given time off by my employer to work on graudit. Therefore my contribution to the project very much depends on my real life activities.

Graudit is meant to be a rough auditing tool. You run it against large/new projects so you can pick some starting points for your audit or even spot some low hanging fruit. It is not a complete solution and cannot validate whether what it highlights is exploitable or not. Since it uses grep it saves me from spending time on parsing engines for the supported languages, but it does make it harder to write signatures that are completely free of false positives. Regular expressions aren't that great for parsing :(
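To illustrate the exact false positive R describes with plain grep (the sample lines are mine, and GNU grep's `\b` word boundary is assumed): a substring signature for `file(` also fires on `update_profile(`, while an anchored pattern does not.

```shell
# Two sample lines: one harmless function name containing "file(",
# one real call to file().
printf 'update_profile($u);\n$h = file($path);\n' | grep -c 'file('      # prints 2
printf 'update_profile($u);\n$h = file($path);\n' | grep -cE '\bfile\('  # prints 1
```

The catch is that tightening every signature this way takes time, and each anchor is another chance to introduce a false negative instead.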

However, it is opensource, feel free to fix the issue and submit a patch, otherwise you will probably have to wait for version 1.5+ before any radical changes to the signatures happen. Until then I guess you will have to live with some false positives.
I've always had to deal with trackback spam, and I don't find MT's spam modules to be very helpful in easing the pain of managing it. So I thought it might just be worth blocking some IPs. I did a little grep, and without any further ado I present the numbers, taken from 6 months worth of apache logs;

root@localhost# zgrep tb.cgi access.log* | awk '{print $1}' | sort | uniq -c | sort -n -r |head -25

Sometimes I wish I could easily group by CIDR on the CLI
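It can be faked for /24s by truncating the last octet before the `sort | uniq -c` step. A sketch, with made-up sample lines standing in for the real access.log entries:

```shell
# Rough /24 "grouping": chop the last octet off each client IP,
# then count as before. Sample lines stand in for real log entries.
printf '%s\n' \
  '10.0.1.5 - - "POST /mt/tb.cgi"' \
  '10.0.1.9 - - "POST /mt/tb.cgi"' \
  '10.0.2.7 - - "POST /mt/tb.cgi"' |
  awk '{split($1, o, "."); print o[1]"."o[2]"."o[3]".0/24"}' |
  sort | uniq -c | sort -nr
# ->   2 10.0.1.0/24
#      1 10.0.2.0/24
```

Blocking whole /24s is blunter than per-IP blocking, but spam runs tend to cluster in the same netblocks, so it cuts the list down considerably.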
This weblog is licensed under a Creative Commons License.