Jan 30 2011
 

And the Chinese keep "attacking" our server. Today I learned that within 30 days we had 125 GB of incoming traffic on our SIP VZ, most of it, of course, from a single Chinese IP. This is also nicely illustrated by the fact that outgoing traffic was only 5 GB (and since calls run both in and out, i.e. are proxied through our server, that figure is much closer to the real call volume).

I calculated that this corresponds to about 0.38 Mbit/s of sustained traffic, so we have plenty of headroom (with a 100 Mbit uplink that can be upgraded to 1 GBit if needed).
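As a quick sanity check on the shell (treating the 125 GB as 125 × 10^9 bytes spread evenly over 30 days):

echo "scale=3; 125*10^9*8 / (30*24*3600) / 10^6" | bc
# prints .385, i.e. roughly the 0.38 Mbit/s quoted above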

 Posted by at 12:37 pm
Jan 05 2011
 

In another post I already blocked the Chinese from scanning my YATE server (actually an attempted DoS attack? Perhaps it would bring other VoIP software down?)

Anyway, the Chinese are back, and they have changed their IP address to

218.72.254.43

Also the message is slightly different:

sip:venice@<my IP>

I will attempt to block them once again. If this happens a third time with yet another IP, I will think about other measures.
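For reference, blocking a single source address with iptables looks roughly like this (a minimal sketch of the idea; my actual firewall setup is not shown here):

iptables -A INPUT -s 218.72.254.43 -j DROP
# or, narrower, drop only their SIP traffic (assuming the standard port 5060):
iptables -A INPUT -s 218.72.254.43 -p udp --dport 5060 -j DROP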

 Posted by at 10:12 pm
Sep 03 2010
 

How can you test your YATE installation?

By trying to place a call from the YATE machine to one of your lines.

  • set up and activate music on hold (moh); I attached mine to madplay and configured it under the name madplay, so I get moh/madplay as an available data source (another would be tone/ring)
  • start YATE if it is not already running.
  • telnet localhost 5038 to log in to YATE (rmanager has to be activated!)
  • callgen set called=yourlinenumber
  • callgen set source=moh/madplay
  • callgen single

The device attached to the line should get a call with Caller ID "YATE", which will be routed to moh/madplay, and automatically disconnected after a minute.

You only need to set up your call once, and can keep testing with callgen single as often as you need to.
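Put together, a test session might look roughly like this (the line number is a placeholder, and moh/madplay assumes the music-on-hold setup described above):

telnet localhost 5038
callgen set called=0123456789
callgen set source=moh/madplay
callgen single
# the device on that line rings with Caller ID "YATE" and hears the music on hold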

Here's more information in the original documentation.

Please note: you can't call outgoing numbers (as set up in your regex routing table) this way.

Also, you may need to enable the callgen and rmanager modules in your yate.conf if they have been disabled.

An easier way: simply enter

call moh/madplay yourlinenumber


 Posted by at 7:29 pm
Oct 30 2009
 

This article describes my way of setting up YATE and explains the reasoning behind some of the choices made. I use two machines for my setup, so that YATE gets maximum performance, undisturbed by MySQL and web-frontend handling.

Hardware

Ensure your case is well cooled, especially the hard drives. I've got a cooler mounted in front of them.

For the RAID 1 we will be creating later on, you need two hard drives of approximately the same capacity.

BIOS

!!! Ensure that your server continues booting even with a missing keyboard, missing mouse, or other things gone wrong (set Halt On: "No errors") !!!

TBD: Complete BIOS check.

Choosing a Linux Distribution

Debian Lenny is the distribution of my choice. I've been working with Debian for quite some time now and am used to its powerful apt package management system, its editors, and its file layout. Still, apt is the main reason to go for Debian.

Lenny is the newest stable distribution at the time of writing this.

Note: Diana of Null Team will tell you to go with Fedora Core and PostgreSQL. While this MAY be a better choice, I have nearly zero experience there, and lots with MySQL and Debian.

Drive and filesystem layout

This HowTo has a nice overview of drive technologies and explains some choices. Unfortunately, the document is rather old; still, it is recommended in the Debian install guide.
Setting up a RAID 1 on a running LVM System (Debian Lenny).

  • use parallel controllers / channels (one drive on each IDE channel)
  • filesystem: ext3
    (reiserfs seems to be unstable – as in data loss – for some people. This may not be the case for newer versions of it.)
  • software RAID 1 = mirrored RAID with two drives
  • LVM (logical volume manager):
    partitions can be resized on the fly and – more importantly – one can take snapshots of partitions while working on them (see the sketch below)
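As a rough illustration of the snapshot feature (volume group and volume names are placeholders, not my actual layout):

# snapshot the logical volume "root" in volume group "vg0"
lvcreate --size 5G --snapshot --name root-snap /dev/vg0/root
# back it up or experiment, then drop the snapshot again
lvremove /dev/vg0/root-snap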

The way to go is to set up one disk with LVM first and just ignore the second disk for the time being. After everything is set up, we create the RAID array (see the Debian Lenny article above).

The reasoning behind the RAID is as follows: if one drive fails, the system will still boot and continue operating, which leaves me time to replace the failed drive. In a backup-only situation the system would come down hard, and there would be no service until the problem was fixed. This would be fatal, of course.

This extra stability is bought with a performance impact on the CPU; writing in particular is slower. As the system is going to do mostly reads (the database is on another server, remember?), this is not a huge issue.

TBD: add my drive layout

Installation

  • Deselect the desktop system from the standard package sets to be installed. After installing the packages – which takes surprisingly little time – the system will reboot.
  • The Debian installer used ext2 for the /boot partition; I converted it to ext3 using tune2fs -j /dev/hda1 and editing /etc/fstab, as sketched below.
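In rough outline (the fstab line is only an example of a typical default layout, adjust it to your own):

tune2fs -j /dev/hda1      # add a journal, turning ext2 into ext3
# then change the filesystem type of /boot in /etc/fstab, e.g.
#   /dev/hda1  /boot  ext2  defaults  0  2
# becomes
#   /dev/hda1  /boot  ext3  defaults  0  2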

TBD: apt setup, sshd, remove exim4 (boot log!), apt security package servers, temperature monitoring, GRUB failsafe, setting time

Packages

Partly based on this article ("Der perfekte Server – Debian Lenny" – DE).

  • apt-get install ssh openssh-server
    After setting up the SSH server, we can continue the setup from another computer, which will be more convenient.
  • apt-get install molly-guard
    molly-guard demands that you type the name of the host before a reboot / halt / shutdown. Useful if you have several SSH sessions open to different servers, as it protects you from human error (rebooting the wrong one).

TBD: ssh on different port for more security?

TBD: Attempt to close device '/dev/cdrom' which is not open.

RAID 1

Set up your RAID by following this article ("Setting up RAID 1 on a running LVM System"). Setting up RAID on LVM systems is a complicated business.

My installation is a bit different from the one described in the article:

  • I installed Debian on /dev/hda (the P-ATA drive). The other RAID 1 drive is going to be /dev/sda (the  S-ATA drive).
  • /dev/hda is partitioned according to the guided LVM partitioner of the Debian installer (/dev/hda1 = /boot; LVM on /dev/hda2)

Thus the article's

sfdisk -d /dev/sda | sfdisk /dev/sdb

becomes, in my setup:

sfdisk -d /dev/hda | sfdisk /dev/sda
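The array creation itself is covered by the linked article; as a hedged sketch of the approach (degraded arrays are created on the new drive first, and the original drive's partitions are added once the data has been moved over):

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sda1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sda2
# after migrating /boot and the LVM volumes onto md0/md1 as described in the article:
mdadm /dev/md0 --add /dev/hda1
mdadm /dev/md1 --add /dev/hda2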

Simulate a hard drive failure by disconnecting the drive's power cable:

cat /proc/mdstat – output if /dev/sda failed:

md1 : active raid1 hda2[0]
      195109312 blocks [2/1] [U_]
md0 : active raid1 hda1[0]
      248896 blocks [2/1] [U_]

cat /proc/mdstat – output if /dev/hda failed:

md1 : active raid1 sda2[1]
      195109312 blocks [2/1] [_U]
md0 : active raid1 sda1[1]
      248896 blocks [2/1] [_U]

Recovery in case of drive replacement. Also useful is this article.
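The basic steps after swapping in a replacement drive are roughly (device names as in my layout, assuming /dev/sda was the one replaced):

sfdisk -d /dev/hda | sfdisk /dev/sda   # copy the partition table to the new drive
mdadm /dev/md0 --add /dev/sda1         # re-add the partitions, the arrays rebuild
mdadm /dev/md1 --add /dev/sda2
cat /proc/mdstat                       # watch the resync progress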

OpenVZ

In the next step we install OpenVZ using this article.

After rebooting, verify that you are indeed running an OpenVZ kernel: uname -r

… edit /etc/sysctl.conf according to the article (networking settings).
Note: This file also includes further security settings for your networking environment, which I might research and post later.
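For orientation, the networking-related settings usually recommended for an OpenVZ host look roughly like this (quoted from memory of the OpenVZ documentation, so double-check against the article):

# /etc/sysctl.conf – typical OpenVZ host settings
net.ipv4.ip_forward = 1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.all.rp_filter = 1
kernel.sysrq = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
# apply with: sysctl -p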

I have currently stopped just before the Configuring section.

 Posted by at 9:16 pm