How to deal with parameters in bash scripts like a pro

I often develop bash scripts, and I usually have a problem with flags and parameters. I like to handle parameters like a pro: supporting long flags (e.g. --flag) and short flags (e.g. -f), but I also want to allow combinations of several flags (e.g. -fc). And so this time…

I learned how to deal with parameters in bash scripts like a pro

This time I have started to use bash arrays, which are like C or Python arrays, but in bash 😉 I could explain my script little by little, but I am including the final script here (this is an extract from one of my developments: ec4docker):

CREATE=
TERMINATE=
ASSUME_YES=
CONFIG_FILE=
n=0
# First pass: expand combined short flags (e.g. -fct) into separate flags (-f -c -t)
while [ $# -gt 0 ]; do
    if [ "${1:0:1}" == "-" -a "${1:1:1}" != "-" ]; then
        for f in $(echo "${1:1}" | sed 's/\(.\)/-\1 /g' ); do
            ARR[$n]="$f"
            n=$(($n+1))
        done
    else
        ARR[$n]="$1"
        n=$(($n+1))
    fi
    shift
done
# Second pass: walk the expanded array and interpret each flag
n=0
while [ $n -lt ${#ARR[@]} ]; do
    case "${ARR[$n]}" in
        --create | -c)          CREATE=True;;
        --terminate | -t)       TERMINATE=True;;
        --yes | -y)             ASSUME_YES=True;;
        --config-file | -f)     n=$(($n+1))
                                [ $n -ge ${#ARR[@]} ] && usage && exit 1
                                CONFIG_FILE="${ARR[$n]}";;
        --help | -h)            usage && exit 0;;
        *)                      usage && exit 1;;
    esac
    n=$(($n+1))
done

In this way, you can issue commands like

$ ./myapp -ctyf config.conf

But you can also mix parameter styles:

$ ./myapp --create -ty --config-file myapp.conf
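
If you want to see what the parser extracted, a quick sanity check is to define the usage function that the snippet above expects (its exact message is up to you; this is just a sketch) and echo the resulting variables at the end:

# Hypothetical usage() needed by the parser above; define it before the parsing loops
usage() {
    echo "Usage: $0 [-c|--create] [-t|--terminate] [-y|--yes] [-f|--config-file <file>] [-h|--help]"
}

# After the parsing loops, print what was detected (for testing only)
echo "CREATE=$CREATE TERMINATE=$TERMINATE ASSUME_YES=$ASSUME_YES CONFIG_FILE=$CONFIG_FILE"

Running ./myapp -ctyf config.conf should then report that CREATE, TERMINATE and ASSUME_YES are set to True and that CONFIG_FILE points to config.conf.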

Technical details

I like the solution, but I also like the technical details (because I am a code freak). So I share some technical notes here:

  • The first “while” simply parses the command line to expand the combined parameters. In fact, it searches for expressions like ‘-fct’ and splits them into a set of expressions ‘-f’, ‘-c’, ‘-t’. So if you do not want to split parameters in this way, you can substitute the first “while” with
ARR=( "$@" )
  • The second “while” is needed because we want to allow flags that take an additional value (e.g. -f <config file>). Any time a flag expects a value, we need to check whether there are enough parameters left and, if not, raise an error. If none of your flags take extra values, you can substitute the second “while” with:
for ARRVAL in "${ARR[@]}"; do
  case "$ARRVAL" in
    ...
  esac
done

How to create a simple Docker Swarm cluster

I have an old computer cluster, and its nodes do not have any virtualization extensions, so I am trying to use it to run Docker containers. But I do not want to choose on which of the internal nodes each container has to run. So I am using Docker Swarm, and I will use it as a single Docker host: I call the main node to execute the containers, and the swarm decides the host on which each container will run. So this time…

I learned how to create a simple Docker Swarm cluster with a single front-end and multiple internal nodes

The official documentation of Docker includes this post that describes how to do it, but even though it is very easy, I prefer to describe my specific use case.

Scenario

  • 1 Master node with the public IP 111.22.33.44 and the private IP 10.100.0.1.
  • 3 nodes with the private IPs 10.100.0.2, 10.100.0.3 and 10.100.0.4

I want to call the master node from another computer (e.g. 111.22.33.55) to create a container, and let the master choose on which internal node the container is hosted.

Preparing the master node

First of all, I will install Docker

$ curl -sSL https://get.docker.com/ | sh

Now we need to install Consul, which is a backend for key-value storage. It will run as a container on the front-end (and it will be used by the internal nodes to synchronize with the master):

$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap

Finally, I will launch the Swarm manager:

$ docker run -d -p 4000:4000 swarm manage -H :4000 --advertise 10.100.0.1:4000 consul://10.100.0.1:8500

(*) Remember that Consul is installed on the front-end, but you could detach it and install it on another node if you want (or need) to.
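
To check that the manager is running, you can query it through the port we just exposed (this is only a sanity check; at this point it will not report any nodes because none has joined yet):

$ docker -H 10.100.0.1:4000 info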

Installing the internal nodes

Again, we should install Docker and expose the Docker daemon through the node's IP address.

$ curl -sSL https://get.docker.com/ | sh

Once it is running, we need to expose the Docker API through the IP address of the node. The easy way to test it is to launch the daemon with the following options:

$ docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock

Now you should be able to issue commands such as

$ docker -H :2375 info

or even from other hosts

$ docker -H 10.100.0.2:2375 info

The underlying aim is to expose the local Docker daemon so that it can be used remotely from the swarm.

To make the changes persistent, you should set the parameters in the docker configuration file /etc/default/docker:

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"

It seems that Docker version 1.11 has a bug and does not properly use that file (at least on Ubuntu 16.04). So you can modify the file /lib/systemd/system/docker.service and set the new command line used to launch the Docker daemon:

ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -H fd://
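
If you edit the systemd unit file, remember that systemd caches unit files, so you have to reload the units and restart the daemon for the change to take effect (standard systemd commands, nothing Docker-specific):

$ systemctl daemon-reload
$ systemctl restart docker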

Finally, we have to launch the swarm join agent on each node:

  • On node 10.100.0.2
docker run --restart=always -d swarm join --advertise=10.100.0.2:2375 consul://10.100.0.1:8500
  • On node 10.100.0.3
docker run --restart=always -d swarm join --advertise=10.100.0.3:2375 consul://10.100.0.1:8500
  • On node 10.100.0.4
docker run --restart=always -d swarm join --advertise=10.100.0.4:2375 consul://10.100.0.1:8500
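
Once the three nodes have joined, you can check that they are registered in Consul by listing them through the swarm image (the same subcommand is mentioned in the troubleshooting notes at the end of this post); it should print the three advertised addresses:

$ docker run --rm swarm list consul://10.100.0.1:8500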

Next steps: communication between the containers

If you launch new containers as usual (i.e. docker run -it containerimage bash), you will get containers with overlapping IPs. This is because you are using the default network scheme in the individual docker servers.

If you want to have a common network, you need to create an overlay network that spans across the different docker daemons.

But in order to do that, you need to change the way the Docker daemons are started. You need a system to coordinate the network, and it can be the same Consul deployment that we are already using.

So you have to append the following flags to the command line that starts Docker:

 --cluster-advertise eth1:2376 --cluster-store consul://10.100.0.1:8500

You can add the parameters to the Docker configuration file /etc/default/docker. In the case of the internal nodes, the result will be the following (according to our previous modifications):

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise eth1:2376 --cluster-store consul://10.100.0.1:8500"

As stated before, Docker version 1.11 has a bug and does not properly use that file. In the meantime, you can modify the file /lib/systemd/system/docker.service and set the new command line used to launch the Docker daemon.

ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise eth1:2376 --cluster-store consul://10.100.0.1:8500

(*) We are using eth1 because it is the device on which our internal IP address is configured. You should use the device to which the 10.100.0.x address is assigned.

Now you must restart the docker daemons of ALL the nodes in the swarm.

Once they have been restarted, you can create a new network for the swarm:

$ docker -H 10.100.0.1:4000 network create swarm-network
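
You can verify that the network has been created and that it spans the swarm by listing the networks through the manager (a plain listing, nothing specific to this setup):

$ docker -H 10.100.0.1:4000 network ls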

And then you can use it for the creation of the containers:

$ docker -H 10.100.0.1:4000 run -it --net=swarm-network ubuntu:latest bash

Now the IPs will be assigned in a coordinated way, and each container will have several IPs (its IP in the swarm network and its IP in the local Docker server).

Some more words on this

This post was written in May 2016. Both Docker and Swarm are evolving quickly, and this post may soon be outdated.

Some things that bother me about this installation…

  • While using the overlay network, if you expose a port using the -p flag, the port is exposed on the IP of the internal Docker host. I think that you should be able to specify on which IP you want to expose the port, or to use the IP of the main server.
    • I solve this issue by using a development of mine, IPFloater: once I create the container, I get the internal IP on which the port is exposed and I create a redirection in IPFloater, so that the container can be accessed through a specific IP.
  • Consul fails A LOT. If I leave the swarm running for hours (e.g. 8 hours), Consul will probably fail. If I run a command like “docker run --rm=true swarm list consul://10.100.0.1:8500”, it reports a failure. Then I have to delete the Consul container and create a new one.

 

How to configure a simple router with iptables in Ubuntu

If you have a server with two network cards, you can set up a simple router that NATs a private range to a public one, simply by installing and configuring iptables. This time…

I learned how to configure a simple router with iptables in Ubuntu

Scenario

  1. I have one server with two NICs: eth0 and eth1, and several servers that have at least one NIC (e.g. eth1).
  2. The eth0 NIC is connected to a public network, with IP 111.22.33.44
  3. I want the other servers to have access to the public network through the main server.

How to do it

I will do it using iptables, so I need to install iptables:

$ apt-get install iptables

My main server has the IP address 10.0.0.1 in eth1. The other servers also have their IPs in the range of 10.0.0.x (either using static IPs or DHCP).

Now I will create some iptables rules on the server, by adding these lines to the /etc/rc.local file just before the exit 0 line.

echo "1" > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE
iptables -A FORWARD -d 10.0.0.0/24 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 10.0.0.0/24 -i eth1 -j ACCEPT

These rules mean that:

  1. I want to forward traffic
  2. Traffic that comes from the 10.0.0.x network and is not directed to that same network gains access to the internet through NAT (masquerading).
  3. We forward the return traffic of connections initiated from the internal network back to the originating IP.
  4. We accept and forward the traffic that comes from the internal network.
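
For the internal servers to actually route through the main server, their default gateway has to point to 10.0.0.1. As an illustrative example (the address 10.0.0.2 and the interface name eth1 are just assumptions for one of the internal servers), the /etc/network/interfaces entry could look like this:

auto eth1
iface eth1 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    gateway 10.0.0.1
    dns-nameservers 8.8.8.8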

Easier to modify

Here is a script that you can use to customize the NAT for your site:

ovsnode01:~# cat > enable_nat <<\EOF
#!/bin/bash
IFACE_WAN=eth0
IFACE_LAN=eth1
NETWORK_LAN=10.0.0.0/24

case "$1" in
start)
echo "1" > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o $IFACE_WAN -s $NETWORK_LAN ! -d $NETWORK_LAN -j MASQUERADE
iptables -A FORWARD -d $NETWORK_LAN -i $IFACE_WAN -o $IFACE_LAN -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s $NETWORK_LAN -i $IFACE_LAN -j ACCEPT
exit 0;;
stop)
iptables -t nat -D POSTROUTING -o $IFACE_WAN -s $NETWORK_LAN ! -d $NETWORK_LAN -j MASQUERADE
iptables -D FORWARD -d $NETWORK_LAN -i $IFACE_WAN -o $IFACE_LAN -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -D FORWARD -s $NETWORK_LAN -i $IFACE_LAN -j ACCEPT
exit 0;;
esac
exit 1
EOF
ovsnode01:~# chmod +x enable_nat
ovsnode01:~# ./enable_nat start

Now you can run this script at Ubuntu startup. Just move it to /etc/init.d and issue the following command:

update-rc.d enable_nat defaults 99 00

How to create a multi-LXC infrastructure using custom NAT and DHCP server

I am creating complex infrastructures with LXC. This time I wanted to create a realistic infrastructure that would simulate a real multi-computer infrastructure, and I wanted to control all the components. I know that LXC will provide IP addresses through its DHCP server and will NAT the internal IPs, but I want to use my own DHCP server to provide multiple IP ranges. So this time…

I learned how to create a multi-LXC infrastructure using custom NAT router and DHCP server with multiple IP ranges

My virtual container infrastructure will consist of:

  • 1 container that will have 2 network interfaces: one in a “public” network (the LXC network) and one in a “private” network (a private bridge).
  • 1 container in a “private network” (with an IP in the 10.0.0.x range)
  • 1 container in another “private network” (with an IP in the 10.1.0.x range)

The containers in the private networks will be able to reach the outside world through NAT on the router.

I am using unprivileged containers (to learn how to create them, please read my previous post), but it is easy to execute all of this using the privileged containers (i.e. sudo lxc-* commands).

Creating the “private network”

First I create a bridge that will act as the switch for the private network:

calfonso@mmlin:~$ sudo su -
root@mmlin:~# cat >> /etc/network/interfaces << EOT
auto privbr
iface privbr inet manual
bridge_ports none
EOT
root@mmlin:~# ifup privbr
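
You can check that the bridge has been created before going on (ip link works out of the box; brctl is part of the bridge-utils package):

root@mmlin:~# ip link show privbr
root@mmlin:~# brctl show privbr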

Then I will give my user permission to add devices to that bridge

(*) This is an unprivileged-container-specific step.

calfonso@mmlin:~$ sudo bash -c 'echo "calfonso veth privbr 100" >> /etc/lxc/lxc-usernet'

Setting the router

Now I will create a container named “router”:

$ lxc-create -t download -n router -- -d ubuntu -r xenial -a amd64

And I will edit its configuration to set the proper network settings. Edit the file $HOME/.local/share/lxc/router/config and modify it so that it looks like this:

# Distribution configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.userns.conf
lxc.arch = x86_64

# Container specific configuration
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
lxc.rootfs = /home/calfonso/.local/share/lxc/router/rootfs
lxc.rootfs.backend = dir
lxc.utsname = router

# Network configuration
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:9b:1b:70

# Private interface for the first range
lxc.network.type = veth
lxc.network.link = privbr
lxc.network.flags = up
lxc.network.ipv4 = 10.0.0.1/24
lxc.network.hwaddr = 20:00:00:9b:1b:70

# Additional interface for the other private range
lxc.network.type = veth
lxc.network.link = privbr
lxc.network.flags = up
lxc.network.ipv4 = 10.1.0.1/24
lxc.network.hwaddr = 20:20:00:9b:1b:70

Now you can start your container, and check that you have your network devices:

calfonso@mmlin:~$ lxc-start -n router 
calfonso@mmlin:~$ lxc-attach -n router 
root@router:/# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:16:3e:9b:1b:70 
 inet addr:10.0.3.87 Bcast:10.0.3.255 Mask:255.255.255.0
 ...
eth1 Link encap:Ethernet HWaddr 20:00:00:9b:1b:70 
 inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0
 ...
eth2 Link encap:Ethernet HWaddr 20:20:00:9b:1b:70 
 inet addr:10.1.0.1 Bcast:10.1.0.255 Mask:255.255.255.0
...
lo Link encap:Local Loopback 
...

DHCP Server

And now, install DNSMASQ to be used as the nameserver and the DHCP server:

root@router:/# apt-get update
root@router:/# apt-get -y dist-upgrade
root@router:/# apt-get install -y dnsmasq
root@router:/# cat >> /etc/dnsmasq.d/priv-network.conf << EOT
except-interface=eth0
bind-interfaces
dhcp-range=tag:if1,10.0.0.2,10.0.0.100,1h
dhcp-range=tag:if2,10.1.0.2,10.1.0.100,1h
dhcp-host=20:00:00:*:*:*,set:if1
dhcp-host=20:20:00:*:*:*,set:if2
EOT
root@router:/# service dnsmasq restart
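
If you want to make sure that the configuration is correct before relying on it, dnsmasq can check the syntax of its config files without restarting the service (the leases it grants later end up in /var/lib/misc/dnsmasq.leases by default):

root@router:/# dnsmasq --test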

Setting up as router

I will use iptables to NAT the internal networks. So we’ll need to install iptables:

root@router:/# apt-get install iptables

Then I have to add the following lines to the /etc/rc.local file (just before the exit 0 line)

echo "1" > /proc/sys/net/ipv4/ip_forward

iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE
iptables -A FORWARD -d 10.0.0.0/24 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 10.0.0.0/24 -i eth1 -j ACCEPT

iptables -t nat -A POSTROUTING -s 10.1.0.0/24 ! -d 10.1.0.0/24 -j MASQUERADE
iptables -A FORWARD -d 10.1.0.0/24 -o eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 10.1.0.0/24 -i eth2 -j ACCEPT

And now it is time to restart the container

root@router:/# exit
calfonso@mmlin:~$ lxc-stop -n router 
calfonso@mmlin:~$ lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6 
myunprivilegedcont STOPPED 0 - - - 
router STOPPED 0 - - - 
calfonso@mmlin:~$ lxc-start -n router 
calfonso@mmlin:~$ lxc-attach -n router 
root@router:/# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-A POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE
-A POSTROUTING -s 10.1.0.0/24 ! -d 10.1.0.0/24 -j MASQUERADE
root@router:/#

And that’s all for our router.

(*) All these steps would also work for a regular server (either physical or virtual) that will act as a router and DHCP server.

Creating the other containers

Now we are ready to create containers in our subnets. We’ll create one container in network 10.0.0.x (which will be named node_in_0) and another container in network 10.1.0.x (which will be named node_in_1).

Creating a container in network 10.0.0.x

First we create a container (as we did with the router)

calfonso@mmlin:~$ lxc-create -t download -n node_in_0 -- -d ubuntu -r xenial -a amd64

And now we’ll edit its configuration file .local/share/lxc/node_in_0/config to set it like this:

# Distribution configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.userns.conf
lxc.arch = x86_64

# Container specific configuration
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
lxc.rootfs = /home/calfonso/.local/share/lxc/node_in_0/rootfs
lxc.rootfs.backend = dir
lxc.utsname = node_in_0

# Network configuration
lxc.network.type = veth
lxc.network.link = privbr
lxc.network.flags = up
lxc.network.hwaddr = 20:00:00:e9:79:10

(*) The most noticeable modifications are related to the network device: set the private interface in lxc.network.link and set the first octets of the hwaddr to match the pattern configured in the dnsmasq server (I left the remaining octets as LXC generated them).

Now you can start your container and it will get an IP in the range 10.0.0.x.

calfonso@mmlin:~$ lxc-start -n node_in_0 
calfonso@mmlin:~$ lxc-attach -n node_in_0 
root@node_in_0:/# ifconfig
eth0 Link encap:Ethernet HWaddr 20:00:00:e9:79:10 
 inet addr:10.0.0.53 Bcast:10.0.0.255 Mask:255.255.255.0
 ...

lo Link encap:Local Loopback 
...

Creating a container in network 10.1.0.x

Again, we have to create a container

calfonso@mmlin:~$ lxc-create -t download -n node_in_1 -- -d ubuntu -r xenial -a amd64

And now we’ll edit its configuration file .local/share/lxc/node_in_1/config to set it like this:

# Distribution configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.userns.conf
lxc.arch = x86_64

# Container specific configuration
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
lxc.rootfs = /home/calfonso/.local/share/lxc/node_in_1/rootfs
lxc.rootfs.backend = dir
lxc.utsname = node_in_1

# Network configuration
lxc.network.type = veth
lxc.network.link = privbr
lxc.network.flags = up
lxc.network.hwaddr = 20:20:00:92:45:86

And finally, start the container and check that it has the expected IP address

calfonso@mmlin:~$ lxc-start -n node_in_1 
calfonso@mmlin:~$ lxc-attach -n node_in_1 
root@node_in_1:/# ifconfig
eth0 Link encap:Ethernet HWaddr 20:20:00:92:45:86 
 inet addr:10.1.0.12 Bcast:10.1.0.255 Mask:255.255.255.0
...
lo Link encap:Local Loopback 
...

Ah, you can check that these containers are able to access the internet 😉

root@node_in_1:/# ping -c 3 www.google.es
PING www.google.es (216.58.201.131) 56(84) bytes of data.
64 bytes from mad06s25-in-f3.1e100.net (216.58.201.131): icmp_seq=1 ttl=54 time=8.14 ms
64 bytes from mad06s25-in-f3.1e100.net (216.58.201.131): icmp_seq=2 ttl=54 time=7.91 ms
64 bytes from mad06s25-in-f3.1e100.net (216.58.201.131): icmp_seq=3 ttl=54 time=7.86 ms

--- www.google.es ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 7.869/7.973/8.140/0.119 ms

(*) If you want to check that our router is effectively acting as the router (in case you don’t trust me 😉), you can verify it:

root@node_in_1:/# apt-get install traceroute
root@node_in_1:/# traceroute www.google.es
traceroute to www.google.es (216.58.201.131), 30 hops max, 60 byte packets
 1 10.1.0.1 (10.1.0.1) 0.101 ms 0.047 ms 0.054 ms
 2 10.0.3.1 (10.0.3.1) 0.102 ms 0.053 ms 0.045 ms
...
8 google-router.red.rediris.es (130.206.255.2) 7.694 ms 7.561 ms 7.659 ms
 9 72.14.235.18 (72.14.235.18) 8.100 ms 7.973 ms 11.170 ms
10 216.239.40.217 (216.239.40.217) 8.077 ms 8.114 ms 8.040 ms
11 mad06s25-in-f3.1e100.net (216.58.201.131) 7.976 ms 7.910 ms 8.106 ms

Last words on this…

Wow, there are a lot of things learned here…

  1. Creating a basic DHCP and DNS server with DNSMASQ
  2. Creating a NAT router with iptables
  3. Creating a bridge without a device attached to it

Each topic could have been a post of its own in this blog, but the important thing here is that all these tasks are done inside LXC containers 🙂

How to disable IPv6 in Ubuntu 14.04

I am tired of trying to update Ubuntu and seeing that it tries to use IPv6, which hangs the installation (occasionally forever). My solution is still to disable IPv6 until it is properly supported, so this time…

I learned how to disable IPv6 in Ubuntu 14.04

There are a lot of resources on how to disable IPv6 in Ubuntu 14.04, and this is yet another one (but this is mine).

It is as easy as

$ sudo su -
$ cat >> /etc/sysctl.conf << EOT
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
EOT
$ sysctl -p

To check if it has worked, you can issue a command like

$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6

If it returns 1, it has been properly disabled. Otherwise try to reboot and check it again.

How to create unprivileged LXC containers in Ubuntu 14.04

I am used to using LXC containers in Ubuntu, but I usually run them under the root user. I think that this is a bad practice, and I want to run them using my own user. The answer is unprivileged containers, and this time…

I learned how to create unprivileged LXC containers in Ubuntu 14.04

My prayers were answered very soon and I found a very useful resource from Stéphane Graber (who is the LXC and LXD project leader at Canonical). I am summarizing the steps here as I carried them out, but his post is full of useful information, and I really recommend reading it to know what is going on.

Starting point…

I had a working Ubuntu 14.04.4 LTS installation, and a working LXC 2.0.0 installation. I was already able to run privileged containers by issuing commands like

$ sudo lxc-start -n mycontainer

 

If you need info about installing LXC 2, please go to this link (but then it is possible that you do not need my post yet).

First we set up the uid and gid mappings

calfonso@mmlin:~$ sudo usermod --add-subuids 100000-165536 calfonso
calfonso@mmlin:~$ sudo usermod --add-subgids 100000-165536 calfonso
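
You can verify that the new ranges have been registered (usermod writes them to /etc/subuid and /etc/subgid):

calfonso@mmlin:~$ grep calfonso /etc/subuid /etc/subgid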

Now we authorize our user to bridge devices to lxcbr0 (with a maximum quota of 10 devices; you can set the number that best fits your needs)

calfonso@mmlin:~$ sudo bash -c 'echo "calfonso veth lxcbr0 10" >> /etc/lxc/lxc-usernet'

Now let lxc access our home folder, because it needs to read the configuration file

calfonso@mmlin:~$ sudo chmod +x $HOME

And create the configuration file (you can set your own values for the bridges to which you have been granted access, the MAC addresses, etc.)

calfonso@mmlin:~$ mkdir -p $HOME/.config/lxc
calfonso@mmlin:~$ cat >> ~/.config/lxc/default.conf << EOT
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
EOT

And finally we are ready to create the unprivileged container

calfonso@mmlin:~$ lxc-create -t download -n myunprivilegedcont -- -d ubuntu -r xenial -a amd64
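
As a quick check, you can start the container and list it with your regular user (the same lxc-ls -f listing used earlier in this post); it should appear as RUNNING without any sudo involved:

calfonso@mmlin:~$ lxc-start -n myunprivilegedcont
calfonso@mmlin:~$ lxc-ls -f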

And that’s all 🙂

(*) Remember that you can still use privileged containers by issuing lxc-* commands using “sudo”

Recommended information

If you want to know more about the uidmap, gidmap, how the bridging permissions work, etc., I recommend that you read this post.