The type of abuse recently seen on FreshPorts isn’t a big deal. I would ignore it if it was on my own server. However, I’m using a “paid” service and the credits go faster when pillocks do pillocky stuff.
While I hope I’ve covered everything I’ve done, I’ve been sick with a cold for a week and helping to look after two under-four-year-olds for two weekends in a row. Perhaps I’ve overlooked something and you can’t get started with what I’ve listed here. If so, please comment. In this post:
- FreeBSD 14.1
- fail2ban-1.1.0
- nginx-1.26.2
The main problem I’ve been seeing: one IP address doing way more traffic than any of the others. It’s not a search engine; it’s someone doing interesting (at least to them) stuff:
% sudo cut -f 1 -w /jails/nginx01/var/log/nginx/freshports.org-access.log.0 | sort | uniq -c | sort -rn | head
17187 165.154.192.177
 6917 65.21.35.183
 3618 91.242.162.4
 2118 66.249.64.103
 1639 178.170.197.187
 1156 151.201.145.126
 1146 54.146.212.13
 1108 34.228.185.123
  956 216.244.66.238
  875 47.76.209.138
Looking at another bad client
Let’s see some of the crap they do. Looking at another day, I found 13.92.235.212:
[0:10 aws-1 dan /usr/local/etc/fail2ban/jail.d] % sudo cut -f 1 -w /jails/nginx01/var/log/nginx/freshports.org-access.log.0 | sort | uniq -c | sort -rn | head
3710 13.92.235.212
1707 66.249.64.103
1155 151.201.145.126
 878 216.244.66.238
 864 2604:a880:800:10::3156:8001
 864 2603:1030:403:3::46b
 832 66.249.64.104
 807 103.30.197.6
 793 41.90.40.10
 674 146.212.23.195
Let’s save those away:
[0:16 aws-1 dan /usr/local/etc/fail2ban/jail.d] % sudo grep 13.92.235.212 /jails/nginx01/var/log/nginx/freshports.org-access.log.0 > ~/tmp/13.92.235.212
Of those 3710 requests, 2889 got a 429 response:
[0:16 aws-1 dan /usr/local/etc/fail2ban/jail.d] % sudo grep 13.92.235.212 /jails/nginx01/var/log/nginx/freshports.org-access.log.0 | grep -c ' 429 '
2889
It was performing nicely, and not requesting too fast (i.e. no 503 codes):
[0:16 aws-1 dan /usr/local/etc/fail2ban/jail.d] % sudo grep 13.92.235.212 /jails/nginx01/var/log/nginx/freshports.org-access.log.0 | grep -c ' 503 '
0
Let’s see what they were doing:
[0:16 aws-1 dan /usr/local/etc/fail2ban/jail.d] % head -20 ~/tmp/13.92.235.212
13.92.235.212 - - [02/Sep/2024:08:26:44 +0000] "GET /js HTTP/1.1" 404 12800 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:44 +0000] "POST /js/webforms/upload/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:44 +0000] "POST /magmi/web/magmi.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:44 +0000] "GET /magmi/plugins/ono/test.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:45 +0000] "POST /magmi/web/plugin_upload.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:45 +0000] "POST /magmi/web1/plugin_upload.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:45 +0000] "POST /magmi/web2/plugin_upload.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:45 +0000] "GET /magmi/plugins/test.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:45 +0000] "GET /db.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:45 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:46 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:46 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:47 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:47 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:48 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:48 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:49 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:49 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:50 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
13.92.235.212 - - [02/Sep/2024:08:26:50 +0000] "GET /downloader/index.php HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6"
[0:17 aws-1 dan /usr/local/etc/fail2ban/jail.d] %
This is a script probing for known vulnerabilities, most of the requests getting a 404.
How many 404s?
[0:18 aws-1 dan /usr/local/etc/fail2ban/jail.d] % grep -c ' 404 ' ~/tmp/13.92.235.212
821
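Out of curiosity, the same cut(1) trick shows which paths it probed most often. This is only a sketch: the field number assumes the default combined log format, where the request path is the seventh whitespace-separated field.

% cut -f 7 -w ~/tmp/13.92.235.212 | sort | uniq -c | sort -rn | head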
Going back to that first listing, the top IP jumps out, doing roughly 2.5 times the traffic of the next one, and roughly as much traffic as the other nine IP addresses combined. I’ve already added some rate limiting via nginx; however, now I want to start banning those IP addresses for some time.
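That rate limiting is plain nginx limit_req configured to return 429. It is out of scope for this post, but here is a minimal sketch for context; the zone name, rate, and burst values below are placeholders, not my actual configuration:

# http {} context: one shared-memory zone, keyed by client address
limit_req_zone $binary_remote_addr zone=perip:10m rate=2r/s;

# server {} or location {} context: apply the limit,
# and return 429 instead of the default 503
limit_req        zone=perip burst=10 nodelay;
limit_req_status 429;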
Getting started with fail2ban
First, I installed fail2ban on the host. It will monitor log files in the jails. I’m not going to explain what jails are. You’ll need to read their docs.
pkg install fail2ban
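The package installs an rc script but does not enable it; before service fail2ban start will work later on, the service needs enabling in rc.conf. I’m assuming the port’s usual fail2ban_enable rcvar here:

sudo sysrc fail2ban_enable=YES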
Then I created my new fail2ban jails (not to be confused with the FreeBSD jails above). I could have named these files with .conf, but the convention is for non-package files to be labelled .local.
[18:00 aws-1 dan /usr/local/etc/fail2ban/jail.d] % cat nginx-bad-urls.local
[nginx-bad-urls]
port     = http,https
filter   = botsearch-common
logpath  = /jails/nginx01/var/log/nginx/*access.log
maxretry = 40
findtime = 30m
bantime  = 128h
enabled  = true

[18:00 aws-1 dan /usr/local/etc/fail2ban/jail.d] % cat nginx-limit-req.local
[nginx-limit-req]
port     = http,https
filter   = nginx-limit-req
logpath  = /jails/nginx01/var/log/nginx/*error.log
maxretry = 10
findtime = 5m
bantime  = 128h
enabled  = true

[18:00 aws-1 dan /usr/local/etc/fail2ban/jail.d] % cat nginx-bad-clients.local
[nginx-bad-clients]
port     = http,https
filter   = nginx-bad-clients
logpath  = /jails/nginx01/var/log/nginx/*access.log
maxretry = 40
findtime = 3m
bantime  = 128h
enabled  = true
You will notice that some jails are monitoring access logs and some are monitoring error logs.
Here is the new filter; the filters referenced above and not shown below ship with fail2ban.
This is based on nginx-bad-request.conf, and I’m sure the journalmatch directive is not required in there. In this, I’m looking for 429 or 404 response codes. I’ve implemented the 429 return codes for hosts which are requesting too much too quickly (i.e. nginx rate limiting). That work is outside the scope of this post.
[18:02 aws-1 dan /usr/local/etc/fail2ban/filter.d] % cat nginx-bad-clients.local
# Fail2Ban filter to match web requests for misbehaving clients
#

[INCLUDES]

# Load regexes for filtering
before = botsearch-common.conf

[Definition]

failregex = ^<HOST> - \S+ \[\] "[^"]*" 429 .+$
            ^<HOST> - \S+ \[\] "[^"]*" 404 .+$

ignoreregex =

datepattern = {^LN-BEG}%%ExY(?P<_sep>[-/.])%%m(?P=_sep)%%d[T ]%%H:%%M:%%S(?:[.,]%%f)?(?:\s*%%z)?
              ^[^\[]*\[({DATE})
              {^LN-BEG}

# DEV Notes:
# Based on nginx-botsearch filter
#
# Author: Dan Langille
Testing the regex
One IP address was acting up, so I put all their records into one file. Running the regex against it, I found:
[18:08 aws-1 dan /usr/local/etc/fail2ban/filter.d] % fail2ban-regex ~/tmp/13.92.235.212 nginx-bad-clients.local

Running tests
=============

Use  filter file : nginx-bad-clients, basedir: /usr/local/etc/fail2ban
Use  datepattern : {^LN-BEG}%ExY(?P<_sep>[-/.])%m(?P=_sep)%d[T ]%H:%M:%S(?:[.,]%f)?(?:\s*%z)? ^[^\[]*\[({DATE}) {^LN-BEG} : Default Detectors
Use     log file : /usr/home/dan/tmp/13.92.235.212
Use     encoding : UTF-8

Results
=======

Failregex: 3710 total
|-  #) [# of hits] regular expression
|   1) [2889] ^<HOST> - \S+ \[\] "[^"]*" 429 .+$
|   2) [821] ^<HOST> - \S+ \[\] "[^"]*" 404 .+$
`-

Ignoreregex: 0 total

Date template hits:
|- [# of hits] date format
|  [3710] ^[^\[]*\[(Day(?P<_sep>[-/])MON(?P=_sep)ExYear[ :]?24hour:Minute:Second(?:\.Microseconds)?(?: Zone offset)?)
`-

Lines: 3710 lines, 0 ignored, 3710 matched, 0 missed
[processed in 0.18 sec]
It found 2889 lines with a 429 and 821 lines with a 404. Good.
In the following, I confirm those results:
[22:22 aws-1 dan /usr/local/etc/fail2ban/filter.d] % grep -c ' 429 ' ~/tmp/13.92.235.212
2889
[22:23 aws-1 dan /usr/local/etc/fail2ban/filter.d] % grep -c ' 404 ' ~/tmp/13.92.235.212
821
The filter is good.
Configuring fail2ban to ban with pf
I added this line to /etc/pf.conf:
table <f2b> persist
It’s with my other tables, near the top of the file.
That name is the default used by fail2ban.
After my first block statement, I placed these lines:
anchor "f2b/*" block drop in log quick on $PUBLIC from <f2b> to any
Key to understanding this: the banned IP addresses do not go into the f2b table. I’ll demonstrate that later.
jail.local
This is my jail.local file. It lists the jails I created and enables them. It also sets the default ban action to pf (see also /usr/local/etc/fail2ban/action.d/pf.conf).
[19:59 aws-1 dan /usr/local/etc/fail2ban] % cat jail.local
[DEFAULT]
banaction = pf[actiontype=<multiport>]
banaction_allports = pf[actiontype=<allports>]

[nginx-bad-clients]
enabled = true

[nginx-bad-urls]
enabled = true

[nginx-limit-req]
enabled = true
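Optionally, the configuration can be sanity-checked before starting the service; fail2ban-client’s -t flag parses the configuration and exits without starting the server:

sudo fail2ban-client -t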
With that in place, I started fail2ban. I have no idea what those errors are or how to fix them; however, all seems well. (This particular command was run after fail2ban had been running for a few days; I had first stopped it.)
[20:08 aws-1 dan /usr/local/etc/fail2ban] % sudo service fail2ban start
2024-09-08 20:08:29,530 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,530 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,530 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,530 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,530 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,530 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,530 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,531 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,531 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,531 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,531 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,531 fail2ban.configreader [82044]: ERROR No section: 'Definition'
2024-09-08 20:08:29,531 fail2ban.configreader [82044]: ERROR No section: 'Definition'
Server ready
[20:08 aws-1 dan /usr/local/etc/fail2ban] %
Look in /var/log/fail2ban.log; it has loads of information.
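fail2ban-client is also useful for a quick look at what the jails are doing; the second command uses one of the jail names defined above:

sudo fail2ban-client status
sudo fail2ban-client status nginx-bad-clients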
The blocked addresses
After a while, I had some blocked addresses:
[19:46 aws-1 dan ~] % sudo fail2ban-client banned
[{'nginx-limit-req': ['52.156.166.87', '84.241.193.134', '159.100.22.240', '13.50.224.91', '77.22.36.31']}, {'nginx-bad-clients': ['13.79.90.234', '139.59.92.140', '159.100.22.240', '20.42.217.170', '2402:3a80:1953:b303::2', '31.220.101.115', '46.101.1.149', '46.254.106.77', '52.156.166.87', '66.94.112.192', '66.94.113.11', '84.241.193.134', '91.92.247.106', '204.12.229.224', '143.110.177.105', '185.49.126.94', '88.117.127.30', '13.79.91.107', '13.74.63.110', '202.79.173.132', '13.74.98.138', '77.22.36.31']}, {'nginx-bad-urls': []}]
Let’s check with pf:
[19:46 aws-1 dan ~] % sudo pfctl -T show -t f2b | grep 52.156.166.87
[19:47 aws-1 dan ~] % sudo pfctl -T show -t f2b
[19:47 aws-1 dan ~] %
It’s empty? Yes, it is. That’s what sent me searching to see where I’d gone wrong.
Instead, I should have done this:
[19:50 aws-1 dan ~] % sudo pfctl -a f2b/nginx-bad-clients -T show -t f2b-nginx-bad-clients | grep 52.156.166.87
   52.156.166.87
WTF is that stuff? It’s using an anchor and a table name I never created. fail2ban created them. It means less pf work for you.
I know I’ll be coming back to this page for that command alone.
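While I’m here, a few related commands I expect to need again: list the sub-anchors fail2ban created, dump the rules inside one of them, and unban an address. The anchor and jail names are the ones from above, and the unban example reuses the IP address from earlier.

sudo pfctl -a f2b -s Anchors
sudo pfctl -a f2b/nginx-bad-clients -s rules
sudo fail2ban-client set nginx-bad-clients unbanip 52.156.166.87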
LibreNMS graphs
I use LibreNMS for gathering metrics. They have a fail2ban plug-in, so I enabled it.
I added this to /usr/local/etc/snmpd.conf:
extend fail2ban /usr/local/etc/snmp/fail2ban -c -C /var/cache/librenms/fail2ban
Create the directory:
sudo mkdir /var/cache/librenms
sudo chown snmpd:snmpd /var/cache/librenms
If you don’t create the directory, you’ll see this error:
% sudo /usr/local/etc/snmp/fail2ban -c -C /var/cache/librenms/fail2ban
'/var/cache/librenms/fail2ban' does not exist or is to old and -U was not given at /usr/local/etc/snmp/fail2ban line 220.
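After the snmpd.conf change and the directory creation, snmpd needs a restart before it serves the new extend (I’m assuming the stock net-snmp rc script name here):

sudo service snmpd restart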
If you use LibreNMS, you know how to enable this for the client: rediscover it.
Here are my graphs from later.
Room for faster blocks?
Perhaps I should come back to this one: FreeBSD: Fail2Ban With PF Doesn’t Kill Connection State
Old news
After doing this work, I found this 10-year-old post. Since then, I created a solution for ensuring fail2ban starts after the jails. It was suggested by both Garrett Wollman and feld: create a new rc.d script.
Not everyone needs this ordering. I may not need it. Some of you might: for example, the filesystems in your jail are not available until after the jail has started. If that applies to you, this solution is for you.
[17:43 aws-1 dan /usr/local/etc/rc.d] % cat fail2ban-after-jails
#!/bin/sh
#
# PROVIDE: fail2ban_after_jails
# REQUIRE: jail
# BEFORE: fail2ban
# KEYWORD:
rcorder(8) only reads the PROVIDE/REQUIRE/BEFORE comments, so the script needs no body; it exists purely to enforce ordering. This shows fail2ban starting last:
[17:43 aws-1 dan /usr/local/etc/rc.d] % rcorder /etc/rc.d/* /usr/local/etc/rc.d/* | tail -5
/usr/local/etc/rc.d/ec2_bootmail
/usr/local/etc/rc.d/ec2_loghostkey
/etc/rc.d/securelevel
/usr/local/etc/rc.d/fail2ban-after-jails
/usr/local/etc/rc.d/fail2ban