This article describes my way of setting up YATE and explains the reasoning behind some of the choices made. I use two machines for my setup, so that YATE's performance is not degraded by MySQL and web-frontend handling.
Ensure your case is well cooled, especially the hard drives. I have a cooler mounted in front of mine.
For the RAID 1 we will create later on, you need two hard drives of approximately the same capacity.
!!! Ensure that your server continues booting even with a missing keyboard or mouse and other things gone wrong (Halt On: "No Errors") !!!
TBD: Complete BIOS check.
Choosing a Linux Distribution
Debian Lenny is the distribution of my choice. I have been working with Debian for quite some time now; I am used to its powerful apt package management system, its editors, and its file layout. Still, apt is the main reason to go with Debian.
Lenny is the newest stable distribution at the time of writing this.
Note: Diana of Null Team will tell you to go with Fedora Core and PostgreSQL. While this MAY be a better choice, I have nearly zero experience with those, and lots with MySQL and Debian.
Drive and filesystem layout
This HowTo gives a nice overview of drive technologies and explains some of the choices. Unfortunately, the document is rather old; still, it is recommended in the Debian install guide.
Setting up a RAID 1 on a running LVM System (Debian Lenny).
- use parallel controllers / channels (one drive on each IDE channel)
- filesystem: ext3
(reiserfs seems to be unstable – as in data loss – for some people. This may no longer be the case with newer versions.)
- software RAID 1 = mirrored RAID with two drives
- LVM (logical volume manager):
partitions can be resized on the fly and – more importantly – one can take snapshots of partitions while working on them
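The snapshot feature mentioned above can be sketched as follows. The volume group and logical volume names ("vg0", "root") are placeholders, not my actual layout:

```shell
# Create a 2 GB snapshot of the "root" LV while the system keeps running
# (hypothetical names; substitute your own VG/LV):
lvcreate --size 2G --snapshot --name root-snap /dev/vg0/root

# Mount the snapshot read-only and back it up at leisure:
mount -o ro /dev/vg0/root-snap /mnt/snap
tar -czf /backup/root.tar.gz -C /mnt/snap .

# Discard the snapshot when done:
umount /mnt/snap
lvremove -f /dev/vg0/root-snap
```

The snapshot only stores blocks that change after its creation, so 2 GB is enough as long as less than 2 GB of the origin changes during the backup.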
The way to go is to set up one disk with LVM first, ignoring the second disk for the time being. After everything is set up, we create the RAID array (see the article for Debian Lenny above).
The reasoning behind the RAID is as follows: if one drive fails, the system will still continue to operate and boot. That leaves me time to replace the failed drive. In a backup-only situation the system would go down hard, and there would be no service until the problem was fixed. This would be fatal, of course.
This extra stability is bought with a performance hit on the CPU; writing, especially, is slower. As the system is going to be mostly reading (the database is on another server, remember?), this is not a huge issue.
TBD: add my drive layout
- Deselect the desktop system from the standard package sets to be installed. After installing the packages – which takes surprisingly little time – the system will reboot.
- The Debian installer used ext2 for the /boot partition; I converted it to ext3 using tune2fs -j /dev/hda1 and editing /etc/fstab.
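The conversion, sketched out (the device name is from my layout, and the exact fstab line format is an assumption; check yours before editing):

```shell
# Add an ext3 journal to the existing ext2 /boot partition:
tune2fs -j /dev/hda1

# Then change /boot's filesystem type in /etc/fstab from ext2 to ext3.
# sed -i.bak keeps a backup copy of the original file:
sed -i.bak 's|^\(/dev/hda1[[:space:]].*[[:space:]]\)ext2|\1ext3|' /etc/fstab
```

The change takes effect on the next mount; no reformatting is needed, since ext3 is ext2 plus a journal.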
TBD: apt setup, sshd, remove exim4 (boot log!), apt security package servers, temperature monitoring, GRUB failsafe, setting time
Partly based on this article ("Der perfekte Server – Debian Lenny" – DE).
- apt-get install ssh openssh-server
After setting up the SSH server, we can continue the setup from another computer, which will be more convenient.
- apt-get install molly-guard
Molly Guard requires you to type the name of the system before a reboot / halt / shutdown. This is useful if you have several SSH sessions open to different servers, protecting you from human error (rebooting the wrong server).
TBD: ssh on different port for more security?
TBD: Attempt to close device '/dev/cdrom' which is not open.
Set up your RAID by following this article ("Setting up RAID 1 on a running LVM System"). Setting up RAID on LVM systems is a complicated business.
My installation is a bit different from the one described in the article:
- I installed Debian on /dev/hda (the P-ATA drive). The other RAID 1 drive is going to be /dev/sda (the S-ATA drive).
- /dev/hda is partitioned according to the guided LVM partitioner of the Debian installer (/dev/hda1 = /boot; LVM on /dev/hda2)
sfdisk -d /dev/sda | sfdisk /dev/sdb
becomes, in my setup:
sfdisk -d /dev/hda | sfdisk /dev/sda
Simulate a hard-drive failure by disconnecting the drive's power supply:
cat /proc/mdstat – output if /dev/sda failed:

md1 : active raid1 hda2
      195109312 blocks [2/1] [U_]
md0 : active raid1 hda1
      248896 blocks [2/1] [U_]

cat /proc/mdstat – output if /dev/hda failed:

md1 : active raid1 sda2
      195109312 blocks [2/1] [_U]
md0 : active raid1 sda1
      248896 blocks [2/1] [_U]
Recovery in case of drive replacement. Also useful is this article.
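To sketch what recovery looks like with mdadm, assuming /dev/sda was the drive that failed and got replaced (device and array names from my layout; the linked articles have the authoritative steps):

```shell
# Copy the partition table from the surviving drive to the new one:
sfdisk -d /dev/hda | sfdisk /dev/sda

# Re-add the new partitions to the degraded arrays:
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2

# Watch the arrays rebuild until both show [UU] again:
watch cat /proc/mdstat
```

Until the rebuild finishes, the array keeps running degraded, so take care not to stress the surviving drive unnecessarily.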
In the next step we install OpenVZ using this article.
After rebooting, verify that you are indeed running an OpenVZ kernel: uname -r
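A small sketch of that check. The assumption here is that the Debian OpenVZ kernel packages put "openvz" in the release string (as in 2.6.26-2-openvz-686); adjust the pattern if your kernel is named differently:

```shell
#!/bin/sh
# Report whether a kernel release string looks like an OpenVZ kernel.
is_openvz_kernel() {
    case "$1" in
        *openvz*) echo "OpenVZ" ;;
        *)        echo "not OpenVZ" ;;
    esac
}

# Check the currently running kernel:
is_openvz_kernel "$(uname -r)"
```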
… edit /etc/sysctl.conf according to the article (networking settings).
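For reference, the networking settings commonly recommended for OpenVZ hosts in /etc/sysctl.conf look roughly like this. Treat it as a sketch, not my exact file, and check the values against the linked article:

```
# Packet forwarding between containers and the outside world:
net.ipv4.ip_forward = 1
# Disable proxy ARP by default:
net.ipv4.conf.default.proxy_arp = 0
# Source route verification:
net.ipv4.conf.all.rp_filter = 1
# Enable the magic SysRq key:
kernel.sysrq = 1
# Do not send ICMP redirects:
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
```

Apply the settings without rebooting via sysctl -p.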
Note: This file also includes further security settings for your networking environment, which I might research and post later.
I have currently stopped just before the Configuring section.