Email move was simple

Following up from my last post, I went ahead and moved all my personal email domains over to Fastmail. I also pre-paid for a pretty lengthy subscription, since the long-term rate was so cheap, even cheaper than Zoho if I paid for three years. That's three fewer years to worry about email.

I very much debated setting up a cloud-based mail server that I could keep encrypted and totally private, but for now I think I'm okay with Fastmail: they're not mining my emails for ads, and that was my primary concern. Total privacy would be nice, but it comes at a cost. I wasn't really looking forward to managing another mail server, since part of my job is watching over a few of them already; I know it can be a pain, especially managing spam policies and keeping up with intrusion attempts, updates, and so on.

The move was really simple. My DNS is all over the place: I use GoDaddy, dns.he.net, and Amazon Route 53, which I really need to clean up so everything lives in one place. I like he.net for DNS; it's easy, free, and simple, and I should move everything there. Beyond moving MX records, adding aliases and domains was all it took, and mail moved over without a hitch.
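
The MX swap itself is just a couple of records per domain. Here's a sketch of what one of my zones ended up looking like (example.com is a placeholder, and the Fastmail MX hostnames are the ones their documentation listed when I set this up, so double-check the current values):

; example.com zone -- point inbound mail at Fastmail
example.com.    3600    IN    MX    10 in1-smtp.messagingengine.com.
example.com.    3600    IN    MX    20 in2-smtp.messagingengine.com.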

I'm contemplating migrating my 16,500 or so Gmail messages into the new service; since I have the space, I think it might be easier to clean up old mail in the new interface. Google is getting more and more annoying about my email, and I feel like my privacy is violated a little more every year that goes by with them (I've had a Gmail account since the beta!). The only problem is that a few hundred people and companies have my Gmail address. I guess it's time to start making a move toward privacy!

Email, Privacy, and You

I've been thinking a lot about email privacy lately. It seems the free accounts are now mining your email for ways to show you advertisements. This is not something I want; it raises all sorts of privacy concerns. Last week, a friend said in an email that he was thinking of going to Ireland. A few hours later, I checked my Gmail, only to see a message with the subject: "flash deals on trips to Ireland!" This is not okay. If Google is reading all my email (with a machine), then it has everything about me: where I eat, where I shop, what my hobbies are, my doctor's appointments, what my friends are doing, where my calendar says I'm going. This is not good.

I have plenty of private servers floating around that I could use as a personal mail server, and that's probably the ideal scenario: I could build a system with encrypted communication and an encrypted filesystem, private to me, so nobody else can touch it. I've built plenty of email servers before, and I still manage a few for clients. The main issue for me with a private server is spam filtering. Managing spam is a big hassle: keeping lists updated, updating rules, packages, and so on. If I have a service do the filtering, I'm still lacking privacy; whoever is mining my emails looking for spam may also be logging them for advertising purposes. On client systems we usually use Google for spam filtering, which works great, or we just give them G Suite altogether, or Office 365; both are pretty effortless to manage.

Maybe the next best thing is a service I can trust. My questions, though: Can I use my own domains? Will it still be around? Can my mail be encrypted on their disks so that only my login decrypts it? I use Zoho right now for my personal domains; it's cheap and a good service, but I don't know anything about their privacy policy, so I should look into it. I have a ProtonMail account, which is great in theory, but I don't use it much, since you have to pay a lot for the services I need (I have about 12 domains I need email access on). A friend of mine just made the switch from Gmail over to Fastmail and gives it a good recommendation, and they happen to have a month-long free trial.

I think I'll move a test domain to Fastmail and see how it goes for now. It's going to cost me double per year over Zoho... We'll see.

iSCSI Target Server Choices

I manage a small set of Citrix XenServer hosts for various infrastructure functions. For storage, I've been running Openfiler for about three years now; since the last reboot, my uptime is 1614 days! It's pretty solid, but the interface seems buggy, and there are a lot of things in there I don't use. When I do need to change something, it's been so long between uses that I have to re-read the documentation to figure out what the heck it's doing. I've got a new XenServer cluster coming online soon, and I've been researching, thinking, and dreaming about what I'm going to use for VM storage this time.

Openfiler really has been mostly great. My server load always runs around 1.13, which somewhat bugs me, mostly due to conary (its package manager) running. Openfiler is almost never updated, which isn't a bad thing, since the machine is inside our firewall without internet access unless I set a specific NAT rule for it. I'm running it on an old Dell 310 server with two 2TB drives in RAID 1; it's got 4GB of RAM and boots from the same drives Openfiler runs its magic on (this server was originally implemented as a quick fix to get us off local Xen storage so we could do rolling restarts). It's not a problem, but now, three years later, I notice that the latest version IS THE SAME version I have installed and have been running for the last 1614 days… So maybe it's time to find something new.

So I built out a nice Dell 530 server: dual 16GB flash cards, dual 120GB write-intensive SSDs, a bunch of 2TB SATA drives, dual six-core procs, 32GB of RAM, dual power supplies, and a nice RAID card. The system arrived, and I had a lot of good feedback for NAS4Free, both online (Googling, lots of Reddit threads) and even from in-person recommendations. I was pretty excited about it, honestly. I'm a little unfamiliar with FreeBSD, but I've used it on and off in my now 20-year Linux career. I went ahead and installed it to the 16GB flash, as recommended, disabled RAID on the server, and set up all the drives as plain SATA. I booted into the system and got rolling. It was really simple, seems easy to use, and does WAY more than I could actually want in a storage device. I set up a big LUN with ZFS and iSCSI, added the write-intensive SSDs as cache, installed all the recent updates, and was ready… Then I read the documentation a bit.

  • iSCSI can’t make use of SSD write cache… Well, I guess I get an all-SSD LUN.
    • “A dedicated log device will have no effect on CIFS, AFP, or iSCSI as these protocols rarely use synchronous writes.”
  • Don’t use more than 50% of your storage space with ZFS and iSCSI… WHAT?
    • “At 90% capacity, ZFS switches from performance- to space-based optimization, which has massive performance implications. For maximum write performance and to prevent problems with drive replacement, add more capacity before a pool reaches 80%. If you are using iSCSI, it is recommended to not let the pool go over 50% capacity to prevent fragmentation issues.”
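
For context, what I had built in the GUI boils down to something like this at the command line (a rough sketch only; NAS4Free drives all of this through its web UI, and the device names and sizes here are hypothetical):

# Pool of mirrored 2TB SATA drives, with the two write-intensive SSDs
# attached as read cache (L2ARC) and log (SLOG) devices
zpool create tank mirror ada1 ada2 mirror ada3 ada4 cache da0 log da1
# A zvol to export as the iSCSI LUN -- which, per the docs above, should
# leave the pool well under 50% full
zfs create -V 1500G tank/xen-lun0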

So this was some sad news: no write caching, and I can't use more than 50% of my disk space. But I decided to press on and went home for the night. The next morning I got a friendly email from my new server saying it had some critical updates. Cool, I thought, so I installed the updates, and then it wanted to reboot. So I let NAS4Free reboot. Two days later, more critical updates and another reboot required… This is a bad thing for me. I run servers that really need to be up 24/7/365. Yes, we run everything clustered and redundant and can reboot a server without anyone noticing, but not the entire storage device; that kills the point of keeping my VMs up. This is still okay, because we have a second VM cluster, which holds "the sister machines" to all our cluster nodes. I just don't want to have to fully shut down a VM cluster so the storage host can reboot once or twice a week. Kudos to the NAS4Free guys, though; it's a really good thing they are so active. It's just not going to be the device for me.

So I ripped it apart: created two RAID 1 SSD sets and a RAID 10 set out of the 2TB drives, and installed my best friend, Debian. Debian is rock solid; I only need to reboot for kernel updates, and those are few. I installed iscsitarget, set up my block devices using LVM, and bam! Within 30 minutes I had an iSCSI target set up and connected to Xen.
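
The gist of it, as a rough sketch (the volume group, LV, and IQN names here are just examples):

# Install the target software and carve a block device out of the RAID 10 set
apt-get install iscsitarget iscsitarget-dkms
pvcreate /dev/sdb
vgcreate vg_iscsi /dev/sdb
lvcreate -L 1T -n xen_lun0 vg_iscsi

# /etc/iet/ietd.conf -- export the LV as a LUN for the XenServer hosts
Target iqn.2001-04.com.example:xen-lun0
        Lun 0 Path=/dev/vg_iscsi/xen_lun0,Type=blockio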

Reliability? I see a lot of ZFS fanboys claiming that hardware RAID sucks, ZFS is awesome, good luck recovering your data, and so on. I really haven't had problems with RAID in the 15+ years I've been using it. We buy vendor-supported hardware; if something dies, Dell sends me a new one. I back up onsite and offsite. I haven't had to restore from a backup (other than testing restores) in years. I think this will all be okay.

In the next article I'll write about setting up my iSCSI target, since there aren't many decent articles out there. It's really pretty simple. I even have multipath I/O working.

No country for old men

Retiring a bunch of old servers over the next few months. I actually feel bad letting these guys go; they've done such a good job. This guy was about 10 years old, his last reboot was in 2011, and he's still running like a champ. We replaced him about a year ago and left him running just in case, but it's time to retire the old bugger. Thanks for lasting 2299 days without a reboot, CentOS 5!


Kamailio – Changing the From URI for Level3

Level3 uses the E.164 recommendation for sending caller information, which means they send the phone number with a + prefix. The problem with the + in the caller number is that a common desk phone (Polycom/Cisco/Yealink/Aastra) will either try to make an IP call to the number or just fail. It seems like only cell phones handle the + character in a number.

So, to keep that plus out of the network, I added the following code to my kamailio.cfg to "filter" the + out of the caller number before sending the call on.

# Copy the From URI, strip the leading +1, and fall back to the
# original From URI if the substitution leaves an empty value.
$avp(s:from) = $fu;
$avp(s:from) = $(fu{re.subst,/\+1//g});
if ($(avp(s:from){s.len}) == 0) { $avp(s:from) = $fu; }
uac_replace_from("$avp(s:from)");
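
One note if you're copying this: uac_replace_from() comes from the uac module, so it needs to be loaded (with its restore handling configured) somewhere in your config, if it isn't already. Something along these lines:

loadmodule "uac.so"
modparam("uac", "restore_mode", "auto")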

Maybe there is a better way, but this is working in production. Let me know if anyone has a better method!

The Microsoft/Android war: Which patents are at stake?

Good article over at Network World about the patents Microsoft claims to hold. I've blogged about the silliness of patents like this in the past. The people at the patent office must not be tech savvy; I would NEVER have let such vague patents through.

Check some of them out: The Microsoft/Android war: Which patents are at stake?

Trixbox Polycom Directory of all extensions for IP650

I needed to quickly generate a full directory for a receptionist console. Since Trixbox doesn't do this, I wrote some Perl to do it. It's pretty simple; you will need to install Polycom::Contact::Directory from CPAN. It connects to the localhost MySQL server, pulls all the extensions out, builds the XML, and saves it to the appropriate path. You will need to supply the MAC address; I guess I could modify it a bit to pull the MAC out of the Endpoint Manager table, but I like being able to just supply the MAC.

Thanks Zachary Blair for the easy module!

#!/usr/bin/perl -w
# Quick script to hack out a directory for a mac address. I use it for the
# receptionist's BLF on her IP650 with sidecars.
use strict;
use Polycom::Contact::Directory;
use DBI;

# Grab the MAC address from ARGV and make a file
my $mac = $ARGV[0] or die "No MAC Specified\n";
my $contactFile = "/tftpboot/polycom/contacts/$mac-directory.xml";

# Create a new empty Directory
my $dir = Polycom::Contact::Directory->new();

# Connect to the trixbox MySQL DB
my $dbh = DBI->connect('dbi:mysql:asterisk:localhost:3306', 'root', 'passw0rd', { RaiseError => 1 });

# Pull an array ref for the extensions
my $userAry = $dbh->selectall_arrayref("SELECT extension,name FROM users ORDER BY extension");

$dbh->disconnect();

# Set counter for speed dial index
my $x = 1;

# Loop through extensions
for my $a (@$userAry) {
    # Split the trixbox name into first and last.
    my ($fn, $ln) = split(/\s+/, $a->[1], 2);

    # My contacts are generally dirty, I'll make them look better. Some people may want
    # to comment this out if you have people with unique capitalization.
    $fn = ucfirst(lc($fn));
    $ln = ucfirst(lc($ln));

    # Insert the record into the object.
    # I like the labels to be: extension firstname lastname "3721 Awesome Dude"
    #  -- buddy_watching lets the polycom monitor BLF status. For this to work,
    #     you must have feature.1.name="presence" feature.1.enabled="1" in
    #     /tftpboot/sip.cfg
    #  -- Check Polycom::Contact Documentation for Options
    $dir->insert(
        {
            first_name     => $fn,                # <fn> in xml
            last_name      => $ln,                # <ln> in xml
            contact        => "$a->[0]",          # <ct> in xml
            label          => "$a->[0] $fn $ln",  # <lb> in xml
            buddy_watching => 1,                  # <bw> in xml
            speed_index    => $x,                 # <sd> in xml
            buddy_block    => 0,                  # <bb> in xml
            auto_divert    => 0,                  # <ad> in xml
            auto_reject    => 0,                  # <ar> in xml
        },
    );
    $x++;
}

# Save the contact file.
$dir->save($contactFile);

1;

Setting up Memcached for HTML::Mason

Updated the corporate website today to include memcached. It was hitting our legacy application's MSSQL database (which we still have to use) a ton, and slowing down the *choke* Windows application.

Anyway, memcached saved the day! Way fewer hits on the database, and it only took a few simple hooks to implement! I know I could have used Mason's built-in cache, but it isn't distributed, so servers other than this web server couldn't share it.

We use HTML::Mason for the site, so just a few simple hooks did the job.
1) Preloaded the Cache::Memcached module into my mod_perl (see the sketch at the end of this post).
2) Most of the website is driven off part-number lookups. Even non-parts are actually parts in our database; they just have content associated with them. So in the part-retrieval Mason page, I added a few lines to set up memcached.

my $memd = new Cache::Memcached {
    'servers' => [ "10.10.1.44:11211", "10.10.1.40:11211" ],
};

I get a $pn variable passed in from everywhere else, so I check for its existence in the cache first.
$mPart = $memd->get($pn);

Then I just wrap my standard DB call: if the part isn't in the cache, pull it from the database and set the cache after the pull; otherwise, assign from the cached value.
if (!$mPart) {
    # Not cached: hit the database, binding $pn rather than interpolating it into the SQL
    $partList = $dbh->selectrow_arrayref("SELECT blablabla from priceBook WHERE itemID = ?", undef, $pn);
    $memd->set($pn, $partList, 600); # Expire cache at 10 minutes (600 seconds).
} else {
    $partList = $mPart;
}
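
For completeness, the "preload" in step 1 is nothing more than pulling the module in from the mod_perl startup file, so every Apache child shares the already-compiled module. A minimal sketch (assuming your httpd.conf already has a PerlRequire pointing at a startup.pl):

# startup.pl -- loaded once by mod_perl at server start
use strict;
use Cache::Memcached;
1;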

cURL issue with backwpup for WordPress

I noticed my Amazon S3 backups weren't running lately with the BackWPup plugin for WordPress.

It took a little digging, but I had to take the certificate verification out of the Amazon AWS PHP API.
wp-content/plugins/backwpup/libs/aws/lib/requestcore/requestcore.class.php:
Commented out line 614:

//curl_setopt($curl_handle, CURLOPT_TIMEOUT, 5184000);

Changed lines 624 and 625 to false:
curl_setopt($curl_handle, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($curl_handle, CURLOPT_SSL_VERIFYHOST, false);

Ended up working great after that.