BitTorrent Sync

I have now totally replaced Dropbox with the new BitTorrent Sync. The main driving factor for me was storage space, but there is the added benefit of controlling every computer that stores my data, which minimises the risk of it being accessed without my permission (by the NSA, for example).

Dropbox is priced at $100 USD per year for 100GB of disk space. As a comparison, Backupsy offers 250GB for $5 USD per month, or 500GB for $7 USD per month (with a coupon code): a lower overall cost, with the option of monthly payments, and five times the storage.

I found this gem, which details an apt repository for Debian Linux variants so that BitTorrent Sync can be installed via apt.

It also seemed an advantage to be able to provide FTP access to selected parts of my synced files, with downloads at much faster speeds than my ADSL2+ connection can achieve, and to sync specific folders to a web server without needing multiple Dropbox accounts.

The only disadvantages are that Linux experience is required to maintain a VPS, and the .SyncArchive isn’t as user friendly as the Dropbox website; otherwise they’re virtually the same.

Whitelists in Postfix

I had to set up my own whitelists for my Postfix installation. Bigpond is one example of a sender that often ends up in RBL blacklists; while they don’t attempt spam against me all that often, they do send a lot of legitimate e-mail my way, so they deserve a whitelist entry.

On Debian, to set up a whitelist all you have to do is edit /etc/postfix/main.cf so that smtpd_client_restrictions includes something like 'check_client_access hash:/path/to/rbl_override'

For example this is what I have in my /etc/postfix/main.cf:

smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, check_client_access hash:/path/to/rbl_override, reject_rbl_client tor.dnsbl.sectoor.de, reject_rbl_client dnsbl.ahbl.org, reject_rbl_client dnsbl.sorbs.net, reject_rbl_client zen.spamhaus.org, permit

Some hosts publish SPF records in DNS to specify which mail servers legitimately send their e-mail. To look up this information you can run:

host -t txt bigpond.com

Then you need to create your rbl_override file. It should have something like this:

# Bigpond (taken from `host -t txt bigpond.com`)
61.9.168.0/24 OK
61.9.189.0/24 OK
61.9.169.0/24 OK
61.9.190.0/24 OK
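Turning an SPF record into access-map lines can be done mechanically. A minimal sketch, using a made-up SPF string rather than Bigpond's actual record:

```shell
# Sketch: extract ip4: netblocks from an SPF record and format them
# as Postfix access-map lines. The SPF text here is a fabricated sample.
spf='v=spf1 ip4:61.9.168.0/24 ip4:61.9.189.0/24 mx -all'

# One token per line, keep only ip4: entries, append the OK action
echo "$spf" | tr ' ' '\n' | sed -n 's/^ip4://p' | sed 's/$/ OK/'
```

which prints `61.9.168.0/24 OK` and `61.9.189.0/24 OK`, ready to paste into rbl_override.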


You will also need to run postmap on the rbl_override file (postmap /path/to/rbl_override), which compiles your whitelist into the machine-parseable rbl_override.db database that Postfix actually reads.

My Postfix installation is spread across several servers, but I already regularly synchronize some binaries and configs to every server with rsync, so I simply use that to distribute my whitelist.

Resetting the root password for MySQL on Debian

I have a half-configured VPS that I’m finally getting around to adding to the geographic cluster I’m developing.

I needed MySQL and realized I had already installed it, but I’d since forgotten the root password, and I had never stored a unique password for it in KeePass like I normally do.

So I had to reset it. It was a fairly easy process. From the root shell I executed:

/etc/init.d/mysql stop

/usr/bin/mysqld_safe --skip-grant-tables &

mysql --user=root mysql

When prompted for a password I just hit the return key, then I executed the SQL commands:

UPDATE user SET Password=PASSWORD('some password here') WHERE User='root';

flush privileges;

exit

Then back at the root shell I restarted MySQL so it started under normal conditions, and I tested that my new password worked:

/etc/init.d/mysql stop;sleep 2;/etc/init.d/mysql start

mysql --user=root --password

This time I made sure I saved the password in KeePass.

Daemonizing rsync on Debian

I use rsync a lot in my multi-server environment. For example, it’s handy for consolidating all the log files from Apache (which sometimes serves the same site from many servers) into one place; in my GeoIP BIND setup, where BIND’s normal zone transfers no longer work, a tool like rsync is required to replace them; and for relocating telephone recordings from Asterisk to another machine better suited to distributing them to those with access, as it has more storage and bandwidth.

There are two problems with rsync on Debian: Debian doesn’t start the rsync daemon and provides no init.d script to launch it at boot, and rsync has no facility for PID files when executing a file transfer.

The first solution is to add the following command to /etc/rc.local. The same command can also be crontab’ed to ensure the rsync daemon stays running, and it is superior to plain initialization because it checks the PID file and won’t start rsync if it’s already running:

/sbin/start-stop-daemon -p /var/run/rsyncd.pid -u root -x /usr/bin/rsync -n rsync -S -- --daemon

Further to that, crontab’ed rsync jobs can sometimes have a huge hit of data that will take some time to transfer. Or perhaps there are problems with the network causing slower than normal file transfers. There are many scenarios where the crontab’ed job would be executed again before a previous job has completed.

The solution again is to check PID files. My answer was to write a small shell script (below) which creates a PID file for rsync jobs based on a specified “name” (which probably should be associated with the rsync share name and/or the host specified in the job). The solution is good enough to allow runs of rsync every minute, or maybe even multiple times per minute.

To execute rsync with this script you would run:

./rsync-pid.sh "rsync -avz --compress-level=9 --delete --password-file=/path/to/password/file /path/to/local/data/* rsync://user@host/path/to/remote/data" processname


The script’s source code is as follows:

#!/bin/sh
# rsync-pid.sh: run an rsync job ($1) under a PID file named after $2,
# so a crontab'ed job is skipped while a previous run is still going.

# If a PID file exists but the process it records is no longer running,
# it's stale from an interrupted run, so remove it.
if [ -e /var/run/rsync/$2.pid ]
then
        pid=`cat /var/run/rsync/$2.pid`
        ps=`ps auwx | grep rsync | grep $pid | grep -v grep | wc -l`
        if [ $ps -eq 0 ]
        then
                /bin/rm /var/run/rsync/$2.pid
                unset pid
        fi
fi

# Only run the job if no PID file remains; release the PID file afterwards.
if [ ! -e /var/run/rsync/$2.pid ]
then
        pid=$$
        echo $pid >/var/run/rsync/$2.pid

        $1

        /bin/rm /var/run/rsync/$2.pid
else
        echo $2 is already running!
fi
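As an aside, util-linux ships flock(1), which can achieve the same run-once-at-a-time behaviour without hand-rolled PID files. A minimal sketch of that alternative (the "somejob" name and lock path are hypothetical; this is not the script above, just another way to get the same guarantee):

```shell
# Alternative sketch using flock(1) instead of PID files.
# "somejob" is a hypothetical job name; the lock path is an assumption.
lock=/var/run/rsync/somejob.lock

# -n makes flock fail immediately rather than queue up behind a
# previous run of the same job that still holds the lock.
flock -n "$lock" -c 'echo "job ran"' || echo "somejob is already running!"
```

The kernel releases the lock automatically when the command exits, so there is no stale-PID cleanup to worry about.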

The disadvantages of Exigent VPS accounts

I signed up for a $10 VPS account and put Exigent through a few runs and I must say I’m not very impressed.

First of all, there are a couple of hidden charges with Exigent. If an invoice goes overdue they’ll suspend your VPS, the same as any provider, but they’ll also charge you a $35 late fee.

Last night there was also a scheduled outage as Exigent planned a reboot of their VPS servers. That all went to plan. But it would have been nice if Exigent had restarted the VPS containers afterwards, instead of waiting for me to boot my VPS myself.

And finally there is the memory weirdness. Fair enough, at Jumba my VPS with 512MB of memory did not have enough to run LAMP (Linux + Apache + MySQL + PHP) plus a full-fledged e-mail server with amavisd-new and spamd for filtering. The moment I set up amavisd-new I found the Jumba VPS would frequently have killed processes, crashed services and so on. So I got the Exigent account simply for running an e-mail server. And guess what: with 1GB, double the memory, I still can’t run a full-fledged e-mail server. Comparing memory consumption between the Jumba and Exigent accounts shows that the Exigent account requires more memory to run the same services with the same software versions. I’m yet to find an explanation as to why.

Over the Christmas period I’ll have to either figure out the memory issues I have with Exigent, or discontinue them.

Combining the access_log from multiple web servers into a single file

Further to my blog post the other day about remote syslogd with Debian: I run numerous web servers that serve the same site, with visitors directed to them based on GeoIP in BIND.

Making sense of the log files is difficult as they’re spread over separate files on separate servers. Thankfully awstats comes with a tool that helps solve this problem.

logresolvemerge.pl ships with awstats; you pass each log file as a parameter, and it sorts the entries chronologically and outputs the result, so you just redirect the output into a file.

So for example on Debian you would:

/usr/share/awstats/tools/logresolvemerge.pl /var/log/apache2/somesite_access_log_node1 /var/log/apache2/somesite_access_log_node2 /var/log/apache2/somesite_access_log_node3 > /var/log/apache2/somesite_access_log

And it’s probably sensible to use rsync to send your logs to a centralized location.
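A hedged sketch of pulling each node's log into one place before merging; the host names and paths are hypothetical:

```shell
# Sketch: fetch each node's access log to the central log host.
# node names, domain and paths are assumptions for illustration.
for node in node1 node2 node3
do
        rsync -az "$node.example.com:/var/log/apache2/somesite_access_log" \
                "/var/log/apache2/somesite_access_log_$node"
done
```

Run from cron shortly before the merge so logresolvemerge.pl always sees fresh copies.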

Once you’ve used logresolvemerge.pl you can then use tools like awstats on the combined log file.

Delete bounced mail delivery reports from Postfix’s mailq

Today I examined one of my mail servers and found its mailq loaded with failed delivery reports to spammers that wouldn’t accept them.

The machine was continually retrying these deliveries, so naturally I wanted to delete them.

As there were over 200 of them, it needed some code to make the deletion easier.

So I ran:

mailq | grep 'MAILER-DAEMON' | awk '{ system("postsuper -d " $1) }'

And my mailq was then empty.
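To see what that one-liner is doing, here is a harmless demo on a made-up mailq excerpt: the queue IDs and layout below are fabricated, and it prints the postsuper commands instead of executing them.

```shell
# Fabricated two-entry mailq excerpt. Real output has header and extra
# columns; only field 1 (the queue ID) matters to the pipeline.
sample='B1F4C20ACF     4670 Mon Dec  2 10:15:01  MAILER-DAEMON
C2A5D31BE0     5123 Mon Dec  2 10:17:44  MAILER-DAEMON'

# Same grep/awk shape, but print rather than system() the deletion
echo "$sample" | grep 'MAILER-DAEMON' | awk '{ print "postsuper -d " $1 }'
```

which prints `postsuper -d B1F4C20ACF` and `postsuper -d C2A5D31BE0`; the real one-liner hands each of those commands to system() instead.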

Remote syslogd with Debian

I’m running a number of OpenVZ based Debian VPS accounts for my hosting needs and I’ve busted out into using GeoIP in BIND to direct clients of my hosted services to their nearest available server to maximize performance and add the ability to failover.

A crucial piece in this puzzle is logging. Most logging on Linux systems is done with syslogd, and Debian uses sysklogd. syslogd has had remote logging capabilities for years and it’s very simple to set up.

First I set up some “log servers” which will receive logs from other hosts.

I edited /etc/default/syslogd so that the SYSLOGD variable was defined like:

SYSLOGD="-r -s example.com -l node1.example.com:node2.example.com:node3.example.com"

Because OpenVZ hosting providers are bodgy and not all do proper reverse DNS, I created an /etc/hosts file that accurately produced reverse DNS on the logging hosts. I actually made just one file and used rsync to keep it in sync on other “log servers”.

I also edited /etc/logrotate.conf so that two options were tweaked:

# keep 1 year (52 weeks) worth of backlogs

rotate 52

# uncomment this if you want your log files compressed

compress

So then my “log servers” were all set up. With disk space so cheap in certain locations, I figured I may as well keep lots of logs. As it’s centralized, it’s easier to monitor for log over-runs.

I then went on to edit /etc/syslog.conf on each node so that it contained:

*.*    @syslog1.example.com

*.*    @syslog2.example.com

*.*    @syslog3.example.com

With this configuration every node will log in the same files and hosts will have their short hostname printed at the start of each log entry. The data is transmitted over UDP.
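To verify a node can actually reach a log server over UDP, a one-off test message can be sent with util-linux's logger. This is a sketch assuming a reasonably recent logger that supports the --server option:

```shell
# Send a single test message over UDP to one of the log servers.
# syslog1.example.com is the log server named in the config above.
logger --udp --server syslog1.example.com --port 514 "remote logging test from $(hostname)"
```

The message should then appear in /var/log/syslog on the log server, prefixed with the sending node's short hostname.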

Having numerous sites where logs are stored means that logging is redundant and a comprehensive log set is always available. Additionally, I never disabled the local logging facilities, so each node still has its own logs in /var/log.

My final step was to whinge to one provider, as the clock in OpenVZ VPS containers is set by the hosting provider. One provider’s clock was inaccurate by 67 seconds. Clearly accurate time is important in a logging application, and this effectively renders that node useless until the hosting provider fixes it.

An example of the advantage of this configuration: now, on one of the “log servers”, when I `tail -f /var/log/mail.log` I see the activity for every single mail server I operate.

Browser detection with PHP’s get_browser()

I had to make a small web page to play the Internet stream for the radio station this week.

We’ve had a lot of issues in the past with users of Internet Explorer having poor quality playback. This is because the web browser handles all HTTP requests for Adobe Flash, and IE doesn’t handle live MP3 streams too well.

So the idea is to detect when Internet Explorer is being used and instead of using Adobe Flash, embed Windows Media Player which plays the stream fine.

get_browser() does this task great. However it requires the php_browscap.ini file.

To keep this file up to date I created the cronjob:

1 1 */7 * *     root    /usr/bin/wget "http://browsers.garykeith.com/stream.asp?PHP_BrowsCapINI" -O /etc/php5/browscap.ini >/dev/null 2>&1

I then edited the php.ini file so that under [browscap] it contains:

browscap = /etc/php5/browscap.ini

Tuning a Brooktree TV Tuner card to an FM radio station on Debian Linux

I had a brainstorm a few weeks ago while planning some computer hardware upgrades at the radio station I work for. They have a Debian Linux server that does quite a few different tasks, including recording their broadcast and encoding their Internet stream.

To achieve this they had an FM tuner attached to the sound card on their server.

We got some new mainboards and we were upgrading the server anyway. I realized I had quite a few Brooktree TV tuner cards.

So I installed one in their server so the FM tuner can be controlled by software. Setting up the card wasn’t entirely straightforward as there wasn’t much documentation, but it was fairly easy.

Debian did not detect my card correctly and did not identify the FM tuner component. So I had to unload the kernel module and load it with a few parameters. So I executed:

rmmod bttv

rmmod tuner

modprobe bttv tuner=2 radio=1

modprobe tuner

And then the driver was loaded properly and /dev/radio0 existed. So I ran “aptitude install tuner” to install the tuner command line utility. Once installed, I could tune the FM receiver with the command:

tuner -c /dev/radio0 -f 96.1 -q

And then the radio was tuned.

To make it always work I then added all the rmmod, modprobe and tuner commands to /etc/rc.local so it’s all set up on boot.
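Collected together, the rc.local additions would look something like the fragment below. One assumption worth noting: Debian runs rc.local with sh -e, so the rmmod lines get an "|| true" in case the modules aren't loaded yet at boot.

```shell
# /etc/rc.local additions: reload the bttv driver with the FM tuner
# enabled, then tune the station. "|| true" keeps sh -e from aborting
# if a module isn't loaded yet.
rmmod bttv || true
rmmod tuner || true
modprobe bttv tuner=2 radio=1
modprobe tuner
tuner -c /dev/radio0 -f 96.1 -q
```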