Using tc directly to manage rate limiting can be tricky and complex to understand. I have found that shorewall provides an easy-to-understand interface to achieve the same purpose. The concept is based on devices, rules and classes, as represented by three files in shorewall – tcdevices, tcrules and tcclasses.
The devices refer to the interfaces on which rate limiting should happen; the rules match the different types of traffic that you wish to shape – e.g. all inbound http traffic should be given a certain class; and the classes define how the rate limiting is done for each class.
In my example, I have a 10Mb/s connection to the internet that I need to share between office use, external servers such as DNS and HTTP, and some other things such as VPN access. In my /etc/shorewall/tcdevices I have specified my internet-facing interface:
#INTERFACE   IN-BANDWIDTH   OUT-BANDWIDTH
eth2         10mbit         10mbit
I then define some rules such as :
#MARK   SOURCE       DEST         PROTO   PORT(S)     SPORT
1       0.0.0.0/0    0.0.0.0/0    tcp     20,21,22
1       0.0.0.0/0    0.0.0.0/0    tcp     -           20,21,22
2       0.0.0.0/0    0.0.0.0/0    tcp     53
2       0.0.0.0/0    0.0.0.0/0    udp     53
The above ‘tags’ the packets that match the following rules. So, as an example, all ssh traffic is tagged as 1. In my full example I have rules that make up 4 tagged ‘classes’. In the tcclasses file I have the following:
#INTERFACE   MARK   RATE          CEIL          PRIORITY   OPTIONS
eth2         1      10*full/100   full          1
eth2         2      44*full/100   full          2
eth2         3      14*full/100   full          3
eth2         4      28*full/100   full          4
eth2         5      4*full/100    10*full/100   5
What I basically have now is the identification of the type of traffic I want to shape (done by the rules) and the interface over which I want to apply the shaping (done by the devices). The classes then determine the amount of bandwidth assigned to each mark.
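To sanity-check what those N*full/100 expressions actually work out to, here is a small sketch. It assumes ‘full’ is the 10mbit value declared in tcdevices, expressed as 10000 kbit:

```shell
#!/bin/sh
# Sketch: expand the N*full/100 expressions from tcclasses into kbit/s,
# assuming 'full' is the 10mbit link speed declared in tcdevices.
full=10000   # kbit

rate() {
  # guaranteed rate for a class defined as N*full/100
  echo $(( $1 * full / 100 ))
}

echo "mark 1: $(rate 10) kbit guaranteed, ceiling $full kbit"
echo "mark 2: $(rate 44) kbit guaranteed, ceiling $full kbit"
echo "mark 3: $(rate 14) kbit guaranteed, ceiling $full kbit"
echo "mark 4: $(rate 28) kbit guaranteed, ceiling $full kbit"
echo "mark 5: $(rate 4) kbit guaranteed, ceiling $(rate 10) kbit"
```

Note that the guaranteed rates for marks 1 to 4 should add up to less than the link speed, otherwise the guarantees cannot all be honoured at once.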
If you, like me, run a linux firewall you may find your connection doing weird things like taking a long time or sometimes dying completely – often with little to no load on the machine. One thing that is worth looking at is the maximum number of tracked connections that the firewall is maintaining:
# cat /proc/sys/net/ipv4/ip_conntrack_max
16640
# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
16640
The above shows that the maximum number of connections has been reached. Once this starts happening, then connections that can no longer be maintained will be dropped – causing all kinds of trouble.
kernel: ip_conntrack: table full, dropping packet.
The way around this is to increase the number of maximum connections. This can be done easily and on the fly by running the following command:
# echo 131072 > /proc/sys/net/ipv4/ip_conntrack_max
# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
44796
To make the configuration permanent across reboots the following line must be added to the /etc/sysctl.conf file:
net.ipv4.ip_conntrack_max = 131072
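With the limit raised, it is worth keeping an eye on usage so you notice before the table fills up again. A minimal sketch (the 90% threshold is my own arbitrary choice):

```shell
#!/bin/sh
# Sketch: report conntrack table usage and warn when it is nearly full.
pct() {
  # integer percentage of $1 out of $2
  echo $(( $1 * 100 / $2 ))
}

max_file=/proc/sys/net/ipv4/ip_conntrack_max
cnt_file=/proc/sys/net/ipv4/netfilter/ip_conntrack_count

if [ -r "$max_file" ] && [ -r "$cnt_file" ]; then
  max=$(cat "$max_file")
  cnt=$(cat "$cnt_file")
  use=$(pct "$cnt" "$max")
  echo "conntrack: $cnt / $max ($use%)"
  if [ "$use" -ge 90 ]; then
    echo "WARNING: conntrack table nearly full" >&2
  fi
fi
```

Dropping something like this into cron gives you a warning in the logs before packets start being dropped.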
ZDNet has an interesting article – 10 features that should be in Windows by default that are in Linux: http://blogs.zdnet.com/BTL/?p=28005&tag=nl.e539
Here are my 10 –
- SSH. Secure remote connectivity to a machine. How Windows admins survive with only VNC or Terminal Services is beyond me.
- Exchange support. Out of the box, Windows cannot connect to Exchange. This is absurd. Both Linux and Mac support it out of the box.
- A Web server. Being able to throw up a simple page for any reason is really handy at times.
- Syslog. To be able to capture all logs either locally or on a centralised logging server.
- Top. A quick text based view on what the hell is eating up all your resources.
- VLC. To this day I still struggle with ‘Codecs’. Nightmare. Install VLC and be done with it – watch any format of video and also play MP3s. It is also a streaming server.
- vi / vim. Edit and Notepad just do not cut the mustard.
- KVM. Virtual machine support out of the box.
- LVM. Built in enterprise class volume management system.
- Firefox. Yes, yes I know that windows has a browser… but it is shit.
If IPtables is too daunting for you, then shorewall is probably the easiest way to set up and maintain an iptables-based firewall on Linux. Let me go through the basics.
Interfaces & Zones
Shorewall builds some intelligence into your firewall thinking. Networks can be considered as ‘zones’ – i.e. your internet interface (ppp0) could be your ‘web’ zone and your internal network could be considered your ‘int’ zone. I do not know why shorewall limits zone names to a handful of characters… but that is not relevant here.
So – what I am doing is setting up a basic internet-sharing firewall. It has 2 zones (web & int) for the internet and the internal network. First thing, set up your interfaces file:
# vi /etc/shorewall/interfaces
#ZONE   INTERFACE   BROADCAST   OPTIONS
web     ppp+        -           dhcp,upnp
int     eth0
Here are my 2 zones – my dialup interface (ppp) and my internal network (eth0). The ‘+’ in ppp+ simply means it is a wildcard (ppp0, ppp1… etc). I have specified dhcp in my options to indicate that this interface gets its IP address dynamically. See man shorewall-interfaces for more information on interfaces.
For this example – /etc/shorewall/zones is very basic. Unless you are doing IPSEC tunnelling or any other fancy stuff, you can input the following:
# vi /etc/shorewall/zones
#ZONE   TYPE        OPTIONS     IN          OUT
#                               OPTIONS     OPTIONS
fw      firewall
web     -
int     -
You can see that there are three zones in the zones file – the extra fw zone is for the firewall itself.
This is where Shorewall makes some sense over other firewalls. Instead of having millions of rules from one zone to another, you can simply specify a policy for communication between entire zones. You need not have separate rules in addition, as some firewalls (e.g. Juniper) require. Here I have set up two policies that allow the firewall to get to the other zones:
# vi /etc/shorewall/policy
#SOURCE   DEST   POLICY   LOG   LIMIT   CONNECTLIMIT
$FW       web    ACCEPT
$FW       int    ACCEPT
I then want to ensure that the internal zone (int) can get out to the internet (web), and I want to be able to administer the firewall from any of my internal machines. You may not want the second policy, but I am just putting it in as an example.
int   web   ACCEPT
int   $FW   ACCEPT
I then want to implement a default drop policy for internet traffic (web) to the firewall or my internal network and it’s a good idea to have a reject ALL ALL rule at the end.
web   all   DROP
all   all   REJECT
So now you have default policies based on your zones. You do not need to manage individual rules for inter-zone traffic.
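The real validation is done by running shorewall check, but as an illustration of the catch-all idea, here is a hypothetical little helper (not part of shorewall itself) that just verifies your policy file ends with an all-to-all DROP or REJECT:

```shell
#!/bin/sh
# Sketch: check that the last non-comment line of a shorewall policy file
# is an all->all DROP or REJECT, so no traffic falls through unhandled.
# Hypothetical helper; 'shorewall check' does the real validation.
ends_with_catchall() {
  grep -v '^#' "$1" | awk 'NF' | tail -n 1 \
    | grep -Eq '^all[[:space:]]+all[[:space:]]+(DROP|REJECT)'
}

if ends_with_catchall /etc/shorewall/policy; then
  echo "policy file ends with a catch-all"
else
  echo "no catch-all policy found - consider 'all all REJECT'" >&2
fi
```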
Masquerade / NAT
Ok, this is the part that makes your internet connection sharing work. Effectively what you want to do is have all your internal machines route through your firewall. To do this, you must NAT your internal traffic to your internet interface on your firewall. Shorewall makes this so simple. One line:
# vi /etc/shorewall/masq
#INTERFACE   SOURCE
ppp+         192.168.2.0/24
Done. Restart the service – /etc/init.d/shorewall restart. You now have internet sharing for your internal network users through your linux firewall.
The last thing that you may want to do is to have traffic from the web reach a particular machine on the network. For example, you may have a web server on your internal network that you want to be accessible from the internet – this simple rule will allow traffic to be forwarded from the firewall to the correct machine on the internal network:
# vi /etc/shorewall/rules
#ACTION   SOURCE   DEST                PROTO   DEST PORT(S)
DNAT      web      int:192.168.2.100   tcp     80,443
The rule allows traffic from the internet (web) to reach the internal network machine – 192.168.2.100 – on tcp ports 80 and 443.
That is really just a basic introduction to shorewall. To see the syntax of any of the shorewall files, the man pages are accessible as follows:
# man shorewall-rules
Don’t you just hate it when you go through the Fedora install, specifically select KDE as your desktop manager, and yet you are still faced with GDM… It used to be the case that you could edit a sysconfig file but, for reasons best known to the gnome zealots at Fedora, it is now well and truly hidden.
Well, fear not. Edit:
and change the order of the display managers in the script.
Total fail imho.
I had a problem recently with spacewalk not correctly displaying virtual machines and the hosts they belong to. For example, I would register a host machine – in this case, a Xen hypervisor. The machine and all its details would show up, but there would be no information about the guests under the virtualization tab.
My setup is that I am running a CentOS5.3 environment and am using spacewalk to manage updates and packages for all machines – real and virtual.
It took me quite a while to figure out what my problem was. Firstly, the host machine must have rhn-virtualization packages installed. Probably not a common mistake, but one that I made.
# rpm -qa | grep -i virtualization
rhn-virtualization-host-5.2.0-5
rhn-virtualization-common-5.2.0-5
Once that was sorted out, the other thing I needed to make sure of was that I created 2 activation keys in spacewalk – one for the hypervisors and one for the guests. Using the rhnreg_ks command I could then register each machine with the activation key corresponding to its machine type:
# rhnreg_ks --activationkey=1-14bd43a98cb9a4d25d976748f9daf01f --serverUrl=https://spacewalk.server.com/XMLRPC --force
Once that was done, I could then see the virtual machines associated with the host.
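Since each machine type needs its own key, wrapping the registration in a tiny script helps avoid mixing them up. A sketch – the host key is the one from my example above, while the guest key is a placeholder (only one key appears in this post), and spacewalk.server.com is the example server URL:

```shell
#!/bin/sh
# Sketch: pick the spacewalk activation key by machine role and register.
# HOST_KEY is the key from the example above; GUEST_KEY is a placeholder
# for the second key you create for guests.
HOST_KEY=1-14bd43a98cb9a4d25d976748f9daf01f
GUEST_KEY=1-REPLACE_WITH_GUEST_KEY
SERVER_URL=https://spacewalk.server.com/XMLRPC

key_for() {
  case "$1" in
    host)  echo "$HOST_KEY" ;;
    guest) echo "$GUEST_KEY" ;;
    *)     echo "usage: register host|guest" >&2; return 1 ;;
  esac
}

register() {
  rhnreg_ks --activationkey="$(key_for "$1")" \
            --serverUrl="$SERVER_URL" --force
}

# example: register host
```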
After my bonjour woes yesterday, I started to fish around and read up some more about bonjour. It is a very useful protocol, but what stood out for me was that services can be advertised on linux via bonjour. I knew this was possible but did not fully appreciate the capabilities. I have hacked around before to get a time machine disk shared from linux using samba working on a Mac, but I never knew that it would be so easy to do via bonjour using avahi and netatalk – both open-source tools.
Install and configure netatalk. I used Fedora 11, so yum install netatalk. Do what you need to do on your distro to install. Then edit the config file:
# vi /etc/atalk/netatalk.conf
# Set which daemons to run (papd is dependent upon atalkd):
ATALKD_RUN=no
PAPD_RUN=no
CNID_METAD_RUN=yes
AFPD_RUN=yes
TIMELORD_RUN=no
A2BOOT_RUN=no
papd is used for sharing attached printers on your linux box to other Mac boxen – set PAPD_RUN=yes if you want that. CNID_METAD_RUN=yes is very important as it is likely your file systems are not hfs+, so you need this daemon to handle the metadata.
Next edit the file /etc/atalk/afpd.conf and check that the last line at the bottom is:
- -transall -uamlist uams_randnum.so,uams_dhx.so -nosavepassword -advertise_ssh
Now you need to add a shared volume. Here I will show you how to share users’ home directories and how to add a volume for time machine backups. Edit the /etc/atalk/AppleVolumes.default file and add a ~ at the bottom of the file if it is not already there:
# vi /etc/atalk/AppleVolumes.default
~
The above allows users to access their home directories from their Mac machines. To add a volume for Time Machine, add the following line (editing it to your prefs) at the bottom of the same file:
/Volumes/TimeMachine TimeMachine allow:username1,username2 cnidscheme:cdb options:usedots,upriv
Where username1 and username2 are the users allowed to access the volume. You can now start netatalk and set it to start at boot –
# /etc/init.d/atalk start
# chkconfig atalk on
The next thing is to configure avahi daemon – install first if not already done so:
# yum install -y nss-mdns
# yum install -y avahi-compat-libdns_sd
# yum install -y avahi
Edit /etc/nsswitch.conf to look as follows:
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 mdns
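To check that the hosts line actually picked up the change, here is a small sketch. It is a pure text check – the grep pattern just looks for any mdns module on the hosts line:

```shell
#!/bin/sh
# Sketch: verify that the hosts line in nsswitch.conf includes an mdns
# source, so .local names resolve via Avahi. Text check only; no daemon
# needs to be running.
has_mdns() {
  grep -E '^hosts:' "$1" | grep -q 'mdns'
}

if has_mdns /etc/nsswitch.conf; then
  echo "mdns enabled for host lookups"
else
  echo "mdns NOT enabled - .local names will not resolve" >&2
fi
```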
Avahi needs to be configured so that it knows which services it should advertise. Just for now, we are only advertising the afp sharing service. To advertise the service we add the following service config file (avahi reads these from /etc/avahi/services/ – e.g. /etc/avahi/services/afpd.service):
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
  <service>
    <type>_device-info._tcp</type>
    <port>0</port>
    <txt-record>model=Xserve</txt-record>
  </service>
</service-group>
That’s it! All you need to do now is start the avahi daemon:
# /etc/init.d/avahi-daemon start
Opening up your finder in mac should now show your linux machine in the ‘Shared’ section on the left:
If you look closely, you will see that the icon is not the BSOD Windows icon, but actually an Xserve icon – thanks to the model=Xserve txt-record above.
The last couple of things must be done on the Mac. First, instruct it to use ‘unsupported network volumes’:
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
And a bit of a hack – if you have an issue writing to the backup disk for the first time using Time Machine, you may need to create a sparse image locally and then copy it over to the mounted volume. Creating the sparse image is easy enough: open Disk Utility and from the File menu select New -> Blank Disk Image.
To get the file name that Time Machine wants to use – it is simply the machine name joined with the MAC address of the interface that the backup will be performed over. In my case the name would look like bonobo_pro_00236c8e678c.sparsebundle
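That joining can be scripted. A sketch, assuming the lowercase-with-underscores pattern from my example (the machine name "Bonobo Pro" is my guess at what produced it, and newer OS X versions may derive the name differently):

```shell
#!/bin/sh
# Sketch: build the sparsebundle name from machine name + MAC address,
# following the bonobo_pro_00236c8e678c.sparsebundle pattern above.
tm_bundle_name() {
  name=$(echo "$1" | tr 'A-Z ' 'a-z_')           # lowercase, spaces -> _
  mac=$(echo "$2" | tr -d ':' | tr 'A-Z' 'a-z')  # strip colons from MAC
  echo "${name}_${mac}.sparsebundle"
}

tm_bundle_name "Bonobo Pro" "00:23:6c:8e:67:8c"
```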
You can see that I changed the image format to sparse bundle disk image. Once the file has been created, it can be copied to the volume you wish to use for your time machine backups – and then voila! You can enable Time Machine.
As I find out what else I can do with bonjour, mac & linux – I will post up here…