How to set the hostname from DHCP in Ubuntu 14.04

I have a lot of virtual servers, and I like preparing a “golden image” and instantiating it many times. One of the steps is to set the hostname for each host from my private DHCP server. It usually works, but sometimes it fails and I didn’t know why. So I got tired of such indeterminism and this time…

I learned how to set the hostname from DHCP in Ubuntu 14.04

Well, so I have my dnsmasq server, which acts both as a DNS server and as a DHCP server. I investigated how a host gets its hostname from DHCP, and it seems that this happens when the DHCP server sends it (the hostname option).

I debugged the DHCP messages using this command from another server:

# tcpdump -i eth1 -vvv port 67 or port 68

And I saw that my DHCP server was properly sending the hostname (dnsmasq sends it by default), so I realized that the problem was in the dhclient script. I googled a lot and found some clever scripts that got the name from a nameserver and were started as hooks from the dhclient script. But if the DHCP protocol sets the hostname, why do I have to create another script to set it?

Finally I got to this blog entry and realized that it was a bug in the dhclient script: if there is an old DHCP entry in /var/lib/dhcp/dhclient.eth0.leases (or eth1, etc.), it does not set the hostname.

At this point you have two options:

  • The easy one: in /etc/rc.local, include a line that removes that file.
  • The clever one (suggested in the blog entry): include a hook script that unsets the hostname:
echo unset old_host_name >/etc/dhcp/dhclient-enter-hooks.d/unset_old_hostname
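For the easy option, this is the kind of line that could go in /etc/rc.local (a minimal sketch; it assumes the leases live in the default /var/lib/dhcp folder):

```shell
# Remove any stale dhclient leases so that, on the next DHCP
# negotiation, dhclient applies the hostname sent by the server
rm -f /var/lib/dhcp/dhclient.*.leases
```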

How to create a simple Docker Swarm cluster

I have an old computer cluster whose nodes do not have any virtualization extensions, so I’m trying to use it to run Docker containers. But I do not want to choose the internal node on which each container runs. So I am using Docker Swarm, and I will use the cluster as if it were a single Docker host: I call the main node to execute the containers, and the swarm decides the host on which each container will be run. So this time…

I learned how to create a simple Docker Swarm cluster with a single front-end and multiple internal nodes

The official documentation of Docker includes this post that describes how to do it, and while it is very easy, I prefer to describe my specific use case.


  • 1 master node with the public IP and the private IP
  • 3 nodes with the private IPs

I want to call the master node from another computer to create a container, and leave the master to choose the internal node on which the container is hosted.

Preparing the master node

First of all, I will install Docker

$ curl -sSL | sh

Now we need to install Consul, which is a backend for key-value storage. It will run as a container in the front-end (and it will be used by the internal nodes to synchronize with the master):

$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
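To check that Consul came up properly, you can query its HTTP API from the front-end (a quick sanity check; /v1/status/leader is a standard Consul endpoint):

```shell
# Prints the address of the current consul leader (itself, since
# we bootstrapped a single-server consul)
curl http://localhost:8500/v1/status/leader
```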

Finally I will launch the swarm master

$ docker run -d -p 4000:4000 swarm manage -H :4000 --advertise consul://

(*) Remember that Consul is installed in the front-end, but you could detach it and install it in another node if you want (or need) to.

Installing the internal nodes

Again, we should install Docker and expose the docker daemon through the node’s IP:

$ curl -sSL | sh

And once it is running, we need to expose the docker API through the IP address of the node. The easy way to test it is to launch the daemon using the following options:

$ docker daemon -H tcp:// -H unix:///var/run/docker.sock

Now you should be able to issue commands such as

$ docker -H :2375 info

or even from other hosts

$ docker -H info

The underlying aim is that, with swarm, you expose the local docker daemon so that it can be used remotely by the swarm.

To make the changes persistent, you should set the parameters in the docker configuration file /etc/default/docker:

DOCKER_OPTS="-H tcp:// -H unix:///var/run/docker.sock"

It seems that docker version 1.11 has a bug and does not properly use that file (at least in Ubuntu 16.04). So you can modify the file /lib/systemd/system/docker.service and set the new command line used to launch the docker daemon:

ExecStart=/usr/bin/docker daemon -H tcp:// -H unix:///var/run/docker.sock -H fd://
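After modifying the unit file, systemd has to reload it before the new command line takes effect:

```shell
# Pick up the modified docker.service and restart the daemon
systemctl daemon-reload
systemctl restart docker
```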

Finally, we have to launch swarm on each node:

  • On node
docker run --restart=always -d swarm join --advertise= consul://
  • On node
docker run --restart=always -d swarm join --advertise= consul://
  • On node
docker run --restart=always -d swarm join --advertise= consul://
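Once all the nodes have joined, you can ask the swarm manager for its view of the cluster (the front-end address is elided throughout this post; substitute your own):

```shell
# "docker info" against the swarm manager lists the joined nodes
docker -H tcp://<front-end-ip>:4000 info
```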

Next steps: communicating the containers

If you launch new containers as usual (i.e. docker run -it containerimage bash), you will get containers with overlapping IPs, because each individual docker server is using its default network scheme.

If you want to have a common network, you need to create an overlay network that spans across the different docker daemons.

But in order to do that, you need to change the way the docker daemons are started: you need a system to coordinate the network, and it can be the same Consul instance that we are already using.

So you have to append the following flags to the command line that starts docker:

 --cluster-advertise eth1:2376 --cluster-store consul://

You can add the parameters to the docker configuration file /etc/default/docker. In the case of the internal nodes, the result will be the following (according to our previous modifications):

DOCKER_OPTS="-H tcp:// -H unix:///var/run/docker.sock --cluster-advertise eth1:2376 --cluster-store consul://"

As stated before, docker version 1.11 has a bug and does not properly use that file. In the meantime, you can modify the file /lib/systemd/system/docker.service and set the new command line used to launch the docker daemon:

ExecStart=/usr/bin/docker daemon -H tcp:// -H unix:///var/run/docker.sock --cluster-advertise eth1:2376 --cluster-store consul://

(*) We are using eth1 because it is the device to which our internal IP address is assigned. You should use the device to which the 10.100.0.x address is assigned.

Now you must restart the docker daemons of ALL the nodes in the swarm.

Once they have been restarted, you can create a new network for the swarm:

$ docker -H network create swarm-network

And then you can use it for the creation of the containers:

$ docker -H run -it --net=swarm-network ubuntu:latest bash

Now the IPs will be assigned in a coordinated way, and each container will have several IPs (its IP in the swarm and its IP in the local docker server).
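You can verify the addresses assigned in the overlay network with docker network inspect (again, the manager address is a placeholder; substitute yours):

```shell
# Shows the containers attached to swarm-network and their IPs
docker -H tcp://<front-end-ip>:4000 network inspect swarm-network
```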

Some more words on this

This post was written in May 2016. Both docker and swarm are evolving, and this post may soon be outdated.

Some things that bother me on this installation…

  • While using the overlay network, if you expose one port using the flag -p, the port is exposed on the IP of the internal docker host. I think that you should be able to choose the IP on which you want to expose the port, or use the IP of the main server.
    • I solve this issue by using a tool that I developed, IPFloater: once I create the container, I get the internal IP on which the port is exposed and I create a redirection in IPFloater, so that the container can be accessed through a specific IP.
  • Consul fails A LOT. If I leave the swarm running for hours (e.g. 8 hours), Consul will probably fail. If I run a command like “docker run --rm=true swarm list consul://”, it states that there has been a failure. Then I have to delete the container and create a new one.


How to configure a simple router with iptables in Ubuntu

If you have a server with two network cards, you can set up a simple router that NATs a private range to a public one, simply by installing iptables and configuring it. This time…

I learned how to configure a simple router with iptables in Ubuntu


  1. I have one server with two NICs (eth0 and eth1), and several servers that have at least one NIC (e.g. eth1).
  2. The eth0 NIC is connected to a public network, with IP
  3. I want the other servers to have access to the public network through the main server.

How to do it

I will do it using IPTables, so I need to install it:

$ apt-get install iptables

My main server has the IP address in eth1. The other servers also have their IPs in the range of 10.0.0.x (either using static IPs or DHCP).

Now I will create some iptables rules in the server, by adding these lines to /etc/rc.local file just before the exit 0 line.

echo "1" > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s ! -d -j MASQUERADE
iptables -A FORWARD -d -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s -i eth1 -j ACCEPT

These rules mean that:

  1. We want to forward traffic.
  2. The traffic that comes from the 10.0.0.x network and is not directed to that same network gains access to the internet through NAT.
  3. We accept the return traffic of the connections made from the internal network.
  4. We accept any traffic that comes from the internal network.
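As an illustration, here is the same block with concrete values filled in; this is a sketch that assumes the private range is (the 10.0.0.x range mentioned above, with a /24 netmask) and that eth1 is the internal interface:

```shell
# Sketch with assumed values: private range on eth1;
# adjust both to your site.
echo "1" > /proc/sys/net/ipv4/ip_forward
# NAT everything coming from the private range that is not
# directed to the private range itself
iptables -t nat -A POSTROUTING -s ! -d -j MASQUERADE
# Let the replies of established connections back in
iptables -A FORWARD -d -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Let the internal network out
iptables -A FORWARD -s -i eth1 -j ACCEPT
```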

Easier to modify

Here is a script that you can use to customize the NAT for your site:

ovsnode01:~# cat > enable_nat <<\EOF
#!/bin/bash
case "$1" in
start)
        echo "1" > /proc/sys/net/ipv4/ip_forward
        # (add here the iptables rules from the previous section)
        exit 0;;
stop)
        echo "0" > /proc/sys/net/ipv4/ip_forward
        exit 0;;
*)
        echo "Usage: $0 start|stop"
        exit 1
esac
EOF
ovsnode01:~# chmod +x enable_nat
ovsnode01:~# ./enable_nat start

Now you can use this script in the Ubuntu startup. Just move it to /etc/init.d and issue the following command:

update-rc.d enable_nat defaults 99 00

How to disable IPv6 in Ubuntu 14.04

I am tired of trying to update Ubuntu and seeing that the installation hangs while trying to use IPv6 (occasionally forever). My solution is still to disable IPv6 until it is properly supported, so this time…

I learned how to disable IPv6 in Ubuntu 14.04

There are a lot of resources on how to disable IPv6 in Ubuntu 14.04, and this is yet another one (but this is mine).

It is as easy as

$ sudo su -
$ cat >> /etc/sysctl.conf << EOT
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
EOT
$ sysctl -p

To check if it has worked, you can issue a command like

$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6

If it returns 1, it has been properly disabled. Otherwise try to reboot and check it again.

How to create unprivileged LXC containers in Ubuntu 14.04

I am used to using LXC containers in Ubuntu, but I usually run them as the root user. I think that this is a bad practice, and I want to run them using my own user. The answer is unprivileged containers, and this time…

I learned how to create unprivileged LXC containers in Ubuntu 14.04

My prayers were answered very soon, and I found a very useful resource from Stéphane Graber (who is the LXC and LXD project leader at Canonical). I am summarizing the steps here as I carried them out, but his post is full of useful information and I really recommend reading it to know what is going on.

Starting point…

I had a working Ubuntu 14.04.4 LTS installation, and a working LXC 2.0.0 installation. I was already able to run privileged containers by issuing commands like

$ sudo lxc-start -n mycontainer


If you need info about installing LXC 2, please go to this link, but then it is possible that you do not need my post yet.

First we set up the uid and gid mappings

calfonso@mmlin:~$ sudo usermod --add-subuids 100000-165536 calfonso
calfonso@mmlin:~$ sudo usermod --add-subgids 100000-165536 calfonso

Now we authorize our user to bridge devices to lxcbr0 (with a quota of up to 10 veth devices; you can set the number that best fits your needs):

calfonso@mmlin:~$ sudo bash -c 'echo "calfonso veth lxcbr0 10" >> /etc/lxc/lxc-usernet'

Now let lxc access our home folder, because it needs to read the configuration file

calfonso@mmlin:~$ sudo chmod +x $HOME

And we create the configuration file (you can set your own values for the bridges to which you have been allowed access, the MAC addresses, etc.):

calfonso@mmlin:~$ mkdir -p $HOME/.config/lxc
calfonso@mmlin:~$ cat >> ~/.config/lxc/default.conf << EOT
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
EOT

And finally we are ready to create the unprivileged container

calfonso@mmlin:~$ lxc-create -t download -n myunprivilegedcont -- -d ubuntu -r xenial -a amd64
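If everything went well, the container can now be started and listed with your regular user, e.g.:

```shell
# Start the unprivileged container in background and list
# the containers of the current user with their state and IPs
lxc-start -n myunprivilegedcont -d
lxc-ls -f
```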

And that’s all 🙂

(*) Remember that you can still use privileged containers by issuing lxc-* commands with “sudo”.

Recommended information

If you want to know more about the uidmap, gidmap, how the bridging permissions work, etc., I recommend you to read this post.