Results tagged “rant” from Just Another Hacker

URL scanning seems to be an emerging trend. Detecting malware distribution channels and preventing infections is easier than cleaning up the mess they make. The basis of the idea is good, but the current implementations fall short. I have been mulling this over for a while, ever since I read Russ McRee's post (rant?) on URL shorteners needing to detect malware.

The initial problems that URL scanners face are simple evasion techniques, such as the click-to-get-infected method that you can see in my previous post. That Blogspot URL scores quite cleanly.
[Screenshot: the URL scanner rating the page clean (urlscanner-cleanly.jpg)]
And why shouldn't it? It doesn't contain anything directly malicious, so it should score cleanly until reputation or reactive defenses catch up with it. "Listen," you say, "who cares about the herding page? It doesn't do anything; it's the delivery page we care about. If a user visits a 'benign' page that redirects him to malware, he will still be stopped at the malicious page!"

Alas, dear friend, a simple server-side block is all it takes to stop http://scanner.novirusthanks.org from accessing the offending page (http://allhqpics.com/the-guy-with-the-largest-dick-on-the-planet.html).
[Screenshot: the scanner locked out by a server-side IP ban (av-ip-ban-avoidance.jpg)]
Other documented techniques seen in the wild include delivering the malicious payload on only 1 of x requests, user agent filtering, JavaScript obfuscation that breaks automated deobfuscation, and more. I have seen an alert box break browser automation, so there is no shortage of options for the bad guys. However, considering how simple it is to shut down today's URL scanners, I doubt we will see many advanced techniques yet. URL scanning might overcome these simple bypasses in the future, but it should not be considered a defense, and certainly not a replacement for your desktop AV.
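To make the cloaking concrete, here is a minimal sketch of what these tricks might look like rolled into one CGI script. The file names, the scanner address range and the 1-in-5 ratio are all invented for illustration:

#!/bin/bash
# Hypothetical cloaking CGI: every value below is a placeholder.
echo "Content-Type: text/html"
echo ""
# Server-side IP ban: known scanner addresses only ever see the clean page.
case "$REMOTE_ADDR" in
    192.0.2.*) cat clean.html; exit ;;
esac
# User agent filtering: obvious non-browsers get the clean page too.
case "$HTTP_USER_AGENT" in
    *[Ss]can*|*curl*|*[Ww]get*) cat clean.html; exit ;;
esac
# Deliver the payload on only 1 of 5 requests; repeat visits look clean.
if [ $((RANDOM % 5)) -eq 0 ]; then
    cat exploit.html
else
    cat clean.html
fi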

The reason behind my change from OpenDNS to Google Public DNS is a simple one: Google does not (currently) fudge NXDOMAIN records the way OpenDNS does. NXDOMAIN fudging has a tendency to break RBL queries; OpenDNS "solves" this problem by making exceptions for known RBLs. As you can see from this OLD discussion on the OpenDNS forums, this has been their policy for a long time.
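To see why fudging breaks things: an RBL client simply asks whether a name under the blocklist zone resolves at all, and any A record means "listed". A resolver that hands back a synthetic A record instead of NXDOMAIN therefore makes every address look listed. A rough sketch of the lookup convention (the IP is a placeholder):

#!/bin/bash
# DNSBL convention: reverse the octets of the IP, append the RBL zone
# and resolve it. Any A record at all means "listed", and host exits
# non-zero on NXDOMAIN -- so a resolver that never returns NXDOMAIN
# flags everything as spam.
ip="203.0.113.99"              # address to test (placeholder)
zone="bsb.spamlookup.net"      # one of spamlookup's default zones
rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
if host "$rev.$zone" > /dev/null 2>&1; then
    echo "$ip is listed (or your resolver is fudging NXDOMAIN)"
else
    echo "$ip is clean"
fi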

The default RBL services used by the Movable Type spamlookup plugin are bsb.spamlookup.net and sc.surbl.org. I also use additional lookups like stopforumspam, spamhaus and others. As a result I was constantly getting false positives on comments and trackbacks. Changing to Google solved all these problems. If you are using niche RBLs with OpenDNS, I would recommend that you test them.

[OpenDNS]
$ host nopes.grrrr.bsb.spamlookup.net 208.67.222.222
nopes.grrrr.bsb.spamlookup.net	A	208.69.32.132
 !!! nopes.grrrr.bsb.spamlookup.net A record has zero ttl
$ host nopes.grrrr.bsb.empty.us 208.67.222.222
nopes.grrrr.bsb.empty.us	A	208.69.32.132
 !!! nopes.grrrr.bsb.empty.us A record has zero ttl
FAIL!

[Google]
$ host nopes.grrrr.bsb.spamlookup.net 8.8.8.8
nopes.grrrr.bsb.spamlookup.net does not exist at google-public-dns-a.google.com, try again
$ host nopes.grrrr.bsb.empty.us 8.8.8.8
nopes.grrrr.bsb.empty.us does not exist at google-public-dns-a.google.com, try again
WINNAR!

I have taken the liberty of reporting these two to OpenDNS, as they are common for MT users; however, there are several other RBLs I use that aren't covered by OpenDNS's exceptions. By changing to Google Public DNS I don't have to put up with false positives, and it saves me the hassle of having to verify and "fix" RBLs every time I make changes.

If you want to make the change, you can find the details at http://code.google.com/speed/public-dns/
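On a typical Linux box the switch boils down to pointing the resolver at Google's two public addresses, for example in /etc/resolv.conf:

$ cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4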


Stopping the cleanfeed

If you, like me, are concerned about the government's proposed cleanfeed, then TAKE ACTION.

Visit http://nocleanfeed.com

Vote in the SMH's poll
http://www.smh.com.au/polls/politics/form.html

Sign this petition
http://act.ly/1jk

Add Conroy to Santa's naughty list
http://www.thegiftofcensorship.com/

Write to a minister and get them to take action
http://www.crikey.com.au/2009/12/16/dont-waste-your-time-waste-theirs-a-guide-to-writing-to-ministers/

Sign this petition too
http://www.getup.org.au/campaign/SaveTheNet/442

Participate in the online and offline blackout protest
http://www.internetblackout.com.au/

Add a twibbon to your Twitter avatar
http://bit.ly/6u7Uxy

Chime in at BorB, get the attention of ACS
http://beastorbuddha.com/2009/12/15/internet-filtering-trial-and-report-flawed/

She might be with the ALP, but she is listening. Leave a comment on Kate Lundy's blog
http://www.katelundy.com.au/2009/12/21/further-thoughts-on-the-filter/

For further calls to action and news, stay tuned at http://www.somebodythinkofthechildren.com/

Check back here for some more tools and filter bypass tutorials in the new year.

Making me publish this stupid post and manage yet another login to a site I don't personally use. OK, so maybe it's not such a bad idea to allow blog claiming and to support logins... BUT they should be able to index blogs without having someone make a claim, or at the very least allow the authorization to be added as an HTML comment or as a separate file. I suppose they consider the forced posting to be a marketing tool. To me it tastes awful...

QFFGFDWBFVD6

The changes to PackageKit that allow non-privileged users to install Fedora-signed packages without escalating privileges make me glad I'm not a Fedora user. There is just a crapton of potential for breakage and security abuse bundled in here, and since I'm a reasonable fellow I will even supply some examples.

Graudit, reducing false positives

Some anon called "R" left a comment today, but it was on a page where I had accidentally left comments enabled, so I won't publish it. He complained about false positives in graudit, and it is not the first time I have heard this, or seen it for that matter. So I thought I would address it publicly. R's comment was:

"graudit seems to trip on things like "update_profile(", proudly hilighting "file(" :)"

This is true (I mostly see it around function names containing "mail") and I would very much like to correct all the false positive matches, and avoid false negatives too for that matter. However, this is a hobby project for me. I am not a company selling software, nor am I paid or given time off by my employer to work on graudit, so my contribution to the project very much depends on my real-life activities.

Graudit is meant to be a rough auditing tool. You run it against large or unfamiliar projects to pick some starting points for your audit, or even to spot some low-hanging fruit. It is not a complete solution and cannot validate whether what it highlights is exploitable. Since it uses grep, it saves me from writing parsing engines for each supported language, but it also makes it harder to write signatures that are completely free of false positives. Regular expressions aren't that great for parsing :(
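R's example illustrates the tradeoff nicely: a bare pattern like file( matches inside update_profile(, while anchoring it with GNU grep's \b word boundary would not. A quick demonstration:

$ echo 'update_profile($x)' | grep -c 'file('
1
$ echo 'update_profile($x)' | grep -c '\bfile('
0
$ echo '$data = file($x)' | grep -c '\bfile('
1

Of course, the tighter the signature, the greater the risk of false negatives, which is the other half of the problem.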

However, it is open source; feel free to fix the issue and submit a patch. Otherwise you will probably have to wait for version 1.5+ before any radical changes to the signatures happen. Until then I guess you will have to live with some false positives.

Trackback spam is something I've always had to deal with, and I don't find MT's spam modules very helpful in easing the pain of managing it. So I thought it might just be worth blocking some IPs. I did a little grep, and without any further ado I present the numbers taken from 6 months' worth of Apache logs:

root@localhost# zgrep tb.cgi access.log* | awk '{print $1}' | sort | uniq -c | sort -n -r |head -25
   3390 74.86.238.186
    471 206.51.226.198
    451 208.53.130.221
    435 64.34.172.35
    329 66.96.208.53
    318 67.159.44.159
    299 65.60.37.195
    257 76.73.1.50
    248 208.85.242.212
    188 208.53.137.178
    169 72.167.36.70
    161 208.43.255.125
    148 212.227.114.150
    140 65.18.193.119
    139 74.63.64.94
    138 69.65.58.166
    137 66.197.167.120
    136 208.109.171.65
    129 74.86.60.98
    128 66.45.240.66
    120 64.59.71.191
    113 67.159.44.63
     99 64.202.163.76
     98 85.17.145.7
     93 64.191.50.30


Sometimes I wish I could easily group by CIDR on the CLI.
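For /24s at least, a rough approximation is to truncate the last octet before counting. A sketch building on the pipeline above (-h stops zgrep prefixing file names):

root@localhost# zgrep -h tb.cgi access.log* | awk '{print $1}' | awk -F. '{print $1"."$2"."$3".0/24"}' | sort | uniq -c | sort -rn | head -10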