Server Stuff

I use my Linode server for pretty much everything, but one of its main jobs is to handle all of my email. I have several domains that receive personal email, and it all collects on the server. For quite some time, I used fetchmail to collect my work mail and deliver it via the local mail server, where it would be filtered into a second inbox. I had three inboxes in total: one for my main account, a second for a backup account, and a third for my work email. The reason for the multiple mailboxes was to make it easier to set up email “profiles.” Using the console-based mutt email client, I have set up folder hooks: if I am in the work inbox, mutt switches to the specific settings I use for work, mainly the reply-to address, outgoing name, and signature.
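As an illustration, the folder hooks in my ~/.muttrc look something like this (the addresses and paths are made up); the catch-all hook comes first so the work hook can override it when I enter that folder:

folder-hook . 'set from="me@example.com" realname="My Name" signature="~/.sig-personal"'
folder-hook Work 'set from="me@work.example.com" realname="My Name" signature="~/.sig-work"'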

In a nutshell, fetchmail is a pain. Its config file is about the stupidest setup I’ve ever seen, and all in all, it seemed like a lot of extra steps for something that should be pretty simple.

Enter getmail. It is the same style of program, but it is much easier to set up and doesn’t require a local mail server to filter and deliver mail. Instead, it can save to local folders using several different storage formats (mbox file, Maildir, etc.).

It was time to add another inbox, this time for Gmail. So, if you are looking to grab mail remotely from Gmail and store it somewhere, here is a quick and simple getmail config file that gets the job done:

[retriever]
type = SimplePOP3SSLRetriever
server = pop.gmail.com
username = username@gmail.com
password = password

[destination]
type = Maildir
# the trailing slash matters, and the maildir must already exist
path = ~/.maildir/.Gmail/

[options]
# remove messages from Gmail once they have been retrieved
delete = true
verbose = 0
message_log = ~/.getmail/gmail.log

If that is the only mail you want getmail to fetch, create a ~/.getmail directory and name the file getmailrc. However, getmail can also gather mail from multiple locations using the --rcfile switch, e.g.:

getmail --rcfile ~/.getmail/getmailrc_gmail

Stick that in your crontab and you are off; see the example below. Last, I just set up the Gmail folder in mutt, and it will periodically check that folder for new mail. For security’s sake, please make sure you chmod 600 everything in your ~/.getmail directory, or snoopy users can read your email passwords.
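For example, a crontab entry like this polls Gmail every ten minutes (the interval and the getmail path are just my suggestions):

*/10 * * * * /usr/bin/getmail --rcfile $HOME/.getmail/getmailrc_gmail

And the permissions fix is a one-liner:

chmod 600 ~/.getmail/*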

I certainly got very close:

11:47:10 up 322 days, 2:35, 1 user, load average: 0.00, 0.00, 0.00

Almost a full year of uptime on a server that used to be very busy. That stat was run on my old colocated server before I switched to the virtual server at another facility. I hate to pull the plug on the old girl, but it must be done. Maybe I can bring the server back home and fire it up rather than putting it in the graveyard for good…

All of the websites have been migrated, and I decided to migrate mail ahead of schedule, just in case there were problems.

Well, the migration went nearly perfectly. I accidentally skipped one mail account, but got that fixed in a few seconds. Other than that, everything went off without a hitch. The new server is now handling 100% of the operations, with my home server acting as a backup MX for mail. The old server will stay at the old co-location facility probably until sometime at the end of September. I want to give that company some notice before I yank it out so they don’t give me any guff.

Other than that, I would have to say that migrating “stuff” in Linux is a snap. I’ve tried doing the same in Windows, and it’s not even worth the bother since it never worked anyway.

The next step is to grab the old colo server and turn that hardware into my new home server. Since I am already running gentoo-amd64 on the current home server, all I should have to do is get a new case (the colo case is rackmount), mount the hardware in it, and swap some kernel drivers. Other than that, things should just work. If you know of a large non-rackmount server case, shoot me a comment, as I am now looking for one.

Anyway, the migration was a snap. No data was lost, and my users hardly noticed. Linux++

The migration is coming along nicely thus far. I have most of the user webpages moved. The last thing I have to work on is the ton of Gallery installs on the server. That is going to take some time because the users have a LOT of pictures. So far, everything has gone off without a single issue. I’m actually pretty surprised.

After the Gallery installs are moved, it is on to user home directories. That won’t be a quick process either, because all of their mail lives in their home directories (~/.maildir, to be exact). The old co-location facility has been having some serious connection lag issues lately, which is making the transfer difficult. I mean 15 KB/s; that’s dial-up speed.
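On a link that flaky, a retry loop like this one (user, host, and paths are placeholders) keeps rsync plugging away; --partial means an interrupted transfer picks up roughly where it left off instead of starting over:

while ! rsync -az --partial --timeout=120 /home/user/.maildir/ user@newserver:/home/user/.maildir/; do sleep 30; done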

That should take care of itself soon enough and hopefully I’ll have some better connection speeds to finish the migration.

Knock on wood, but it really went a whole lot easier than I expected. A WHOLE LOT… Of course, now that I’ve said that, I’m sure I’ve put the jinx on the rest of the migration.

Oh yeah, and if you are reading this, you are on the new server. It has 4 (yes, they are “virtual”) processors and really does fly…

I am finally starting to outgrow my relationship with my current co-location service. In a nutshell, I built and configured this server and pay a company to run it in their facility. It’s been great for years and has been incredibly cheap compared to most co-location costs. However, due to rising costs as well as the maturing technology of virtually hosted machines, I will be moving to a new facility.

The biggest problem I have at this time is remote access. It takes me roughly 1.5 hours to get to the facility, so if and when there is a problem, that isn’t very convenient. I currently share a connection with eric, and he has been more than gracious about helping with problems since he lives far closer to the facility than I do. However, I hate relying on other people, and more so, I hate wasting eric’s precious time, so it is time to move. The new system gives me full console access to the machine, so if I have to reboot it or get at the boot process (failed kernel upgrade, etc.), I can do that via ssh or the web. Using a virtual system also means that hardware failures are no longer my problem; the facility takes care of those themselves. They will also handle hardware and system upgrades; all I need to worry about is the OS and software.

Now the hard part: I have to migrate all of the users and hosted web sites to the new system. I have what I believe is a decent, rock-solid plan. I have already purchased the virtual machine and have begun getting the necessary software and daemons working there. After some testing, I will start migrating the web data; that is going to take some time, because rsyncing 12GB of data won’t happen immediately. After that is complete, I plan to migrate user home directories, which include the mail setup. Again, I do not see a problem here, but transferring that much data across the Net will take a while. During that window, I will have to shut some things down and let the backup mail server at my home queue up incoming messages. Once the migration is complete, I can enable the mail system at the new facility and flush the queued messages manually from the backup server.
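The web data transfer itself is nothing fancy; something like the following (paths and hostname are placeholders), run a few times before the cutover so the final pass only has to move whatever changed:

rsync -avz --delete /var/www/ user@newserver.example.com:/var/www/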

Again, this is the plan. I’m sure I will run into a hiccup here and there, but that’s expected. My main goal is no data loss. I would hate to lose any of my email, and I bet my users agree.

The good news is that once the migration is complete, I will have one hell of a server left over. That means I will be able to upgrade my server at home and will really be set.

[GKrellM screenshot]

If you have a *nix workstation, you have probably seen or used GKrellM. It’s a handy-dandy program that gives you up-to-the-second stats on most of the important data on your system: disk space and activity, network traffic, CPU and memory usage, processes, the works.

If you are looking to monitor remote headless machines, ssh is your friend and makes life easier. Install the gkrellmd daemon on the remote box and start it. Then, in a local terminal, do the following:

ssh -N -f -L 19150:127.0.0.1:19150 user@host.com

Obviously, change “user@host.com” to your user and host. (-L forwards local port 19150 to port 19150 on the remote machine, -N tells ssh not to run a remote command, and -f sends it to the background.) You can change 19150 to pretty much anything; just make sure it matches the port in the gkrellmd.conf file. With the tunnel up and gkrellmd running remotely, you can connect to it by running:

/usr/bin/gkrellm -s 127.0.0.1 -P 19150

Again, change the 19150 port to match whatever you used in the step above. Configs are stored locally and can be edited by hand, or you can use the standard GUI setup. You can then run the above with different local ports for different hosts.
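For reference, the matching pieces of the remote gkrellmd.conf would look roughly like this (the port is the default, and the allow-host line restricts connections to the ssh tunnel):

# port gkrellmd listens on
port 19150

# only accept connections that come through the local tunnel
allow-host 127.0.0.1

# how many updates per second to send to clients
update-hz 3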

Happy monitoring!

Thanks to Eric, I finally found mailing list management software that is not only easy to set up, but super easy to use. If you are looking for something simple yet powerful, check out mlmmj. The configs as well as the message storage are all flat text files, so everything is fast and simple to configure. It only took a few minutes to get a new list up and running, and most of that time would have been saved if I had actually read the documentation.

But come on, what fun is that?
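For the curious, the rough shape of an mlmmj setup is: run the interactive mlmmj-make-ml.sh script to create the list’s directory tree, then wire the list into your MTA with an alias; something like this, where the list name and paths are just examples:

mlmmj-make-ml.sh

mylist: "|/usr/bin/mlmmj-receive -L /var/spool/mlmmj/mylist/"

The second line goes in /etc/aliases (followed by a run of newaliases).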

Testing. Testing. Are we on the air? I upgraded to WordPress 2.5 today. It was listed as a “major upgrade,” and boy, were they right. The entire admin section was completely rewritten. So far I like it. It hides a lot of the stuff that I rarely or never use, and that is always a plus. So far so good!