Email move was simple

Following up from my last post, I went ahead and moved all my personal email domains over to Fastmail. I also pre-paid for a pretty lengthy subscription, since the long-term rate was so cheap, even cheaper than Zoho was if I paid for three years. That’s three fewer years to worry about email.

I very much debated setting up a cloud-based mail server that I could keep encrypted and totally private, but for now I think I’m okay with Fastmail. They’re not mining my emails for ads, and that was my primary concern. Total privacy would be nice, but it comes at a cost. I wasn’t really looking forward to managing another mail server; part of my job is watching over a few of them already, and I know it can be a pain, especially managing spam policies and keeping up with intrusion attempts, updates, etc.

The move was really simple. My DNS is all over the place; I use GoDaddy and Amazon Route 53, and I really need to clean that up and keep everything in one place. One of them is easy, free, and simple for DNS, so I should move everything there. Beyond just moving MX records, adding aliases and domains was all it took, and mail moved over without a hitch.
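For reference, the MX change itself is only a couple of records. A zone-file sketch of the sort of thing involved (the messagingengine.com hosts are what Fastmail’s setup docs listed at the time I moved; double-check their current instructions, and the domain here is just a placeholder):

```zone
; Replace example.com with your own domain.
example.com.  3600  IN  MX  10  in1-smtp.messagingengine.com.
example.com.  3600  IN  MX  20  in2-smtp.messagingengine.com.
```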

I’m contemplating migrating my 16,500 or so Gmail messages into the new service. Since I have the space, I think it might be easier to clean up old mail in the new interface. Google is getting more and more annoying with my email, and I feel like my privacy is violated a little more every year I stay with them (I’ve had a Gmail account since the beta!). The only problem: a few hundred people and companies have my Gmail address. I guess it’s time to start making a move towards privacy!

Email, Privacy, and You

I’ve been thinking a lot about email privacy lately. It seems the free accounts are now mining your email for ways to show you advertisements. This is not something I want, and it raises all sorts of privacy concerns. Last week, a friend mentioned in an email that he was thinking of going to Ireland. A few hours later, I checked my Gmail, only to see a message with the subject “flash deals on trips to Ireland!” This is not okay. If Google is reading all my email (with a machine), then it has everything about me: where I eat, where I shop, what my hobbies are, my doctor’s appointments, what my friends are doing, where I’m going on my calendar. This is not good.

I have plenty of private servers floating around that I could use as a private email server, and that’s probably the ideal scenario: I can build a system with encrypted communication and an encrypted filesystem, private to me, so nobody else can use it. I’ve built plenty of email servers before, and still manage a few for clients. The main issue for me with private servers is spam filtering. Managing spam is a big hassle: keeping lists updated, updating rules, packages, etc. If I have a service do the filtering, I’m still lacking privacy; someone who is mining my emails looking for spam may also be logging them for advertising purposes. On client systems we usually use Google for spam filtering, which works great, or we just give them G Suite altogether, or Office 365; both are pretty effortless to manage.

Maybe the next best thing is a service I can trust. My questions, though: can I use my own domains? Will it be around? Can my mail be encrypted on their disk so that only my login decrypts it? I use Zoho right now for my personal domains; it’s cheap and a good service, but I don’t know anything about their privacy policy, so I should check into it. I have a ProtonMail account; it’s great in theory, but I don’t use it much, since you have to pay a lot for the services I need (I have about 12 domains I need email access on). A friend of mine just made the switch from Gmail to Fastmail and gives it a good recommendation, and they happen to have a month-long free trial.

I think I’ll move a test domain to Fastmail and see how it goes for now. It’s going to cost me double per year over Zoho. We’ll see.

iSCSI Target Server Choices

I manage a small set of Citrix XenServer hosts for various infrastructure functions. For storage, I’ve been running Openfiler for about 3 years now; since the last reboot, my uptime is 1614 days! It’s pretty solid, but the interface seems buggy, and there’s a lot in there I don’t use. When I do need to go change something, it’s been so long between uses that I have to re-read the documentation to figure out what the heck it’s doing. I’ve got a new XenServer cluster coming online soon, and have been researching, thinking, and dreaming about what I’m going to use for VM storage this time.

Openfiler really has been mostly great. My server load runs at about 1.13 constantly, which somewhat bugs me, mostly due to conary (its package manager) running. Openfiler is almost never updated, which isn’t necessarily a bad thing, since the machine is inside our firewall without internet access unless I set a specific NAT rule for it. I’m running it on an old Dell 310 server with two 2TB drives in RAID1; it’s got 4GB of RAM and boots from the same drives Openfiler runs its magic on (this server was originally implemented as a quick fix, to get us off local Xen storage so we could do rolling restarts). It’s not a problem, but now, 3 years later, I notice the latest version IS THE SAME version I have installed and have been running for the last 1614 days… So maybe it’s time to find something new.

So I built out a nice Dell 530 server: dual 16GB flash cards, dual 120GB write-intensive SSDs, a bunch of 2TB SATA drives, dual six-core procs, 32GB of RAM, dual power supplies, and a nice RAID card. The system arrived, and I had a lot of good feedback on NAS4Free, both online (Googling, lots of Reddit threads) and even in-person recommendations. I was pretty excited about it, honestly. I’m a little unfamiliar with FreeBSD, but I’ve used it on and off in my now 20-year Linux career. I went ahead and installed the thing to the 16GB flash, as recommended. I disabled RAID on the server and set up all the drives as plain SATA. I booted the system and got rolling. It was really simple, seems easy to use, and does WAY more than I could actually want in a storage device. I set up a big LUN with ZFS and iSCSI, added the write-intensive SSDs as cache, installed all the recent updates, and was ready… Then I read the documentation a bit.

  • iSCSI can’t make use of the SSD write cache… Well, I guess I get an all-SSD LUN.
    • “A dedicated log device will have no effect on CIFS, AFP, or iSCSI as these protocols rarely use synchronous writes.”
  • Don’t use more than 50% of your storage space with ZFS and iSCSI… WHAT?
    • “At 90% capacity, ZFS switches from performance- to space-based optimization, which has massive performance implications. For maximum write performance and to prevent problems with drive replacement, add more capacity before a pool reaches 80%. If you are using iSCSI, it is recommended to not let the pool go over 50% capacity to prevent fragmentation issues.”
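That 50% guideline is easy to lose track of, so a tiny check helps. A sketch (the pool names and numbers are made up; in real use you’d pipe in `zpool list -H -o name,capacity` instead of the sample printf):

```shell
# Sample data standing in for `zpool list -H -o name,capacity` output
# (pool names "tank" and "scratch" are hypothetical).
# Flag any pool past the 50% iSCSI guideline.
warnings=$(printf 'tank\t62%%\nscratch\t41%%\n' |
  awk -F'\t' '{ cap = $2; sub(/%/, "", cap); if (cap + 0 > 50) print $1 " is over 50% (" cap "%)" }')
echo "$warnings"   # tank is over 50% (62%)
```

Drop something like that in cron and you get a nudge before fragmentation becomes a problem.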

So, this was some sad news: no write caching, and I can’t use more than 50% of my disk space. But I decided to press on, and went home for the night. The next morning I got a friendly email from my new server saying it had some critical updates. Cool, I thought, so I installed the updates, and now it wants to reboot. So I let NAS4Free reboot. Two days later, more critical updates and another reboot required… This is a bad thing for me. I run servers that really need to be up 24/7/365. Yes, we run everything clustered and redundant and can reboot a server without anyone noticing, but not the entire storage device; that kills the point of keeping all my VMs up. This is still okay, because we have a second VM cluster, which has “the sister machines” to all our cluster nodes on it. I just don’t want to have to fully shut down a VM cluster so the storage host can reboot once or twice a week. Kudos to the NAS4Free guys, though; it’s a really good thing they’re so active. It’s just not going to be the device for me.

So, I ripped it apart. I created a RAID1 set from the SSDs, a RAID10 set out of the 2TB drives, and installed my best friend Debian. Debian is rock solid; I only need to reboot for kernel updates, and those are very few. I installed iscsitarget, set up my block devices using LVM, and bam! Within 30 minutes I had an iSCSI target set up and connected to Xen.
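For the curious, the iscsitarget (IET) side of this is only a few lines. A sketch of an /etc/iet/ietd.conf entry, assuming a made-up volume group, LV, and IQN (the LV would come from something like `lvcreate -L 500G -n xen_lun0 vg_iscsi`):

```conf
# /etc/iet/ietd.conf -- all names below are hypothetical
Target iqn.2016-01.local.storage:xen-lun0
    # blockio does raw block I/O and skips the page cache; fileio is the default
    Lun 0 Path=/dev/vg_iscsi/xen_lun0,Type=blockio
```

Restart the iscsitarget service and the LUN shows up for the initiators.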

Reliability? I see a lot of ZFS fanboys touting that hardware RAID sucks, ZFS is awesome, good luck recovering your data, etc. I really haven’t had problems with RAID in the 15+ years I’ve been using it. We buy vendor-supported hardware; if something dies, Dell sends me a new one. I back up onsite and offsite. I haven’t had to restore from a backup (other than testing restores) in years. I think this will all be okay.

In my next article, I’ll write about setting up my iSCSI target, since there weren’t many decent articles out there on it. It’s really pretty simple, and I even have multipath I/O working.

No country for old men

Retiring a bunch of old servers over the next few months. I actually feel bad letting these guys go; they’ve done such a good job. This guy was about 10 years old, his last reboot was in 2011, and he’s still running like a champ. We replaced him about a year ago and left him running just in case, but it’s time to retire the old bugger. Thanks for lasting 2299 days without a reboot, CentOS 5!


Kamailio – Changing the From URI for Level3

So Level3 uses the E.164 recommendation for sending caller information. The problem is that they send a + prefix on the phone number, and a common desk phone (Polycom/Cisco/Yealink/Aastra) will either try to make an IP call to the number or just fail. It seems like only cell phones handle the + character in a number.

So to keep that plus out of the network, I added the following code to my kamailio.cfg to “filter” out the + before the call is passed along.

# Strip the "+1" prefix from the From URI and stash the result in an AVP.
$avp(s:from) = $(fu{re.subst,/\+1//g});
# If the substitution left an empty value, fall back to the original From URI.
if ($(avp(s:from){s.len}) == 0) { $avp(s:from) = $fu; }
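If the Kamailio transformation syntax looks opaque, the substitution it performs is the same as this shell one-liner (the URI is just a made-up example number):

```shell
# Strip the leading +1 from an E.164-style From URI, as the re.subst does.
from_uri='sip:+15551234567@example.com'
stripped=$(printf '%s' "$from_uri" | sed 's/+1//g')
echo "$stripped"   # sip:5551234567@example.com
```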

Maybe there is a better way, but this is working in production. Let me know if anyone has a better method!

The Microsoft/Android war: Which patents are at stake?

Good article over at Network World about patents Microsoft claims to hold. I’ve blogged about the silliness of patents like this in the past. The people at the patent office must not be tech savvy; I would NEVER have let such vague patents through.

Check some of them out: The Microsoft/Android war: Which patents are at stake?

Trixbox Polycom Directory of all extensions for IP650

I needed to quickly generate a full directory for a receptionist console. Since Trixbox doesn’t do this, I wrote some Perl to do it. It’s pretty simple: you will need to install Polycom::Contact::Directory from CPAN. The script connects to the localhost MySQL server, pulls all the extensions out, builds the XML, and saves it to the appropriate path. You will need to supply the MAC address; I guess I could modify it a bit to pull the MAC out of the Endpoint Manager table, but I like being able to just supply the MAC.

Thanks Zachary Blair for the easy module!

#!/usr/bin/perl -w
# Quick script to hack out a directory for a mac address. I use it for the
# receptionist's BLF on her IP650 with sidecars.
use strict;
use Polycom::Contact::Directory;
use DBI;

# Grab the MAC address from ARGV and make a file
my $mac = $ARGV[0] or die "No MAC Specified\n";
my $contactFile = "/tftpboot/polycom/contacts/$mac-directory.xml";

# Create a new Empty Directory
my $dir = Polycom::Contact::Directory->new();
# Connect to the trixbox MySQL DB
my $dbh = DBI->connect('dbi:mysql:asterisk:localhost:3306','root','passw0rd',{ RaiseError => 1});

# Pull an array ref for the extensions
my $userAry = $dbh->selectall_arrayref("SELECT extension,name FROM users ORDER BY extension");


# Set counter for speed dial index
my $x = 1;

# Loop through extensions
# Loop through extensions
for my $a (@$userAry) {
    # Split the trixbox name into first and last.
    my ($fn,$ln) = split(/\s+/, $a->[1], 2);
    $ln = '' unless defined $ln;    # single-word names have no last name

    # My contacts are generally dirty, I'll make them look better. Some people may want
    # to comment this out if you have people with unique capitalization.
    $fn = ucfirst(lc($fn));
    $ln = ucfirst(lc($ln));

    # Insert the record into the object.
    # I like the labels to be: extension firstname lastname "3721 Awesome Dude"
    #  -- buddy_watching lets the polycom monitor BLF status. For this to work,
    #     you must have feature.1.name="presence" feature.1.enabled="1" in
    #     /tftpboot/sip.cfg
    #  -- Check Polycom::Contact Documentation for Options
    $dir->insert(
        {   first_name     => $fn,                  # <fn> in xml
            last_name      => $ln,                  # <ln> in xml
            contact        => "$a->[0]",            # <ct> in xml
            label          => "$a->[0] $fn $ln",    # <lb> in xml
            buddy_watching => 1,                    # <bw> in xml
            speed_index    => $x,                   # <sd> in xml
            buddy_block    => 0,                    # <bb> in xml
            auto_divert    => 0,                    # <ad> in xml
            auto_reject    => 0,                    # <ar> in xml
        }
    );
    $x++;
}

# Save the contact file.
$dir->save($contactFile);


Setting up Memcached for HTML::Mason

Updated the corporate website today to include memcached. The site was hitting our legacy application’s MSSQL database (which we still have to use) a ton, and slowing down the *choke* Windows application.

Anyway, memcached saved the day! Way fewer hits on the database, and it only took a few simple hooks to implement! I know I could have used Mason’s cache, but it isn’t distributed across the other web servers.

We use HTML::Mason for the site, so just a few simple hooks did the job.
1) Preloaded the Cache::Memcached module into mod_perl.
2) Most of the website is driven off part-number lookups. Even non-parts are actually parts in our database; they just have content associated with them. So in the part-retrieval Mason page, I added a few lines to load up memcached.

my $memd = new Cache::Memcached {
    'servers' => [ "", "" ],    # host:port pairs for your memcached servers
};

I get a $pn variable in from all the other pages, so I check for its existence in the cache.

my $mPart = $memd->get($pn);

Then I just wrap my standard DB call in a check, set the cache after the pull, and assign from the cache if we hit the else.

if (!$mPart) {
    $partList = $dbh->selectrow_arrayref("SELECT blablabla FROM priceBook WHERE itemID = ?", undef, $pn);
    $memd->set($pn, $partList, 600); # Expire cache at 10 minutes (600 seconds).
} else {
    $partList = $mPart;
}

cURL issue with backwpup for WordPress

I noticed my Amazon S3 backups weren’t running lately using BackWPup for WordPress.

It took a little digging, but I had to take the certificate verification out of the Amazon AWS PHP API.
Commented out line 614:

//curl_setopt($curl_handle, CURLOPT_TIMEOUT, 5184000);

Changed lines 624 and 625 to false:

         curl_setopt($curl_handle, CURLOPT_SSL_VERIFYPEER, false);
         curl_setopt($curl_handle, CURLOPT_SSL_VERIFYHOST, false);

Ended up working great after that.