Docker 1.12 – Deploying Docker Services within Docker Swarm Mode and Docker Machine


Source Repo: https://github.com/chrisleekr/docker-swarm-scalable-web-application-architecture

 

This is a proof-of-concept project to set up Docker Swarm in a development environment with a single command.

 

This project involves:

  • Docker Machine
  • Docker Swarm – Docker Built-in Orchestration
  • Local Docker Registry
  • Docker Network
  • Docker Service


Note: This project was created just for practice. It is not suitable for production use.
Prerequisites

Usage

 $ git clone https://github.com/chrisleekr/docker-swarm-scalable-web-application-architecture.git
 $ cd docker-swarm-scalable-web-application-architecture
 $ ./run_swarm.sh

If running on Windows, Git Bash is required: https://git-for-windows.github.io/.

 

After the shell script completes, you can connect to the instances at:

  • Visualizer UI: http://${MANAGER1_IP}:8080
  • Web Access: http://${MANAGER1_IP}
  • DB Access: tcp://${MANAGER1_IP}:3306

 

asciicast

Note: The services are accessible via any node's IP, not only the manager1 node IP.
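For example, once the script has finished you can reach the web service through a worker node's IP as well; a quick check, assuming the machine names used by run_swarm.sh:

$ WORKER1_IP=$(docker-machine ip worker1)
$ curl -I http://${WORKER1_IP}/   # the swarm routing mesh answers on any node's IP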

To stop all machines, run stop_swarm.sh

$ ./stop_swarm.sh

Features

  • Launch multiple Docker Machines and configure Docker Swarm automatically
  • Demonstrate Docker Swarm mode, introduced in Docker 1.12.1+
  • Deploy a local Docker Registry that can be used to pull images across swarm nodes (https://hub.docker.com/_/registry/)
  • Build a Docker image from a Dockerfile inside a Docker Machine and push it to the local Docker Registry
  • Create a service with the Docker Hub MySQL image (https://hub.docker.com/_/mysql/)
  • Create a service with a custom-built Docker image (./docker-app-config/Dockerfile)
  • Emulate scaling up a launched service

Screenshots
How it works
Note: This section is fairly descriptive, for reference purposes.

  1. Launching docker machine manager1
    1. Remove existing docker machine manager1
    2. Create new docker machine manager1
    3. Stop newly created machine to add shared folder
    4. Add current folder as shared folder to docker machine
    5. Restart docker machine manager1
    6. Create new folder /docker into docker machine manager1
    7. Mount shared folder to /docker
  2. Launching docker machine worker1
    1. Repeat aforementioned step #1-i to #1-vii
  3. Launching docker machine worker2
    1. Repeat aforementioned step #1-i to #1-vii
  4. Get lead manager manager1 host IP address
  5. Initialize swarm in lead manager manager1
    1. Get swarm join token for manager node
    2. Construct swarm join command for manager node
    3. Run join command to join manager nodes
  6. Join worker nodes to swarm
    1. Get swarm join token for worker node
    2. Construct swarm join command for worker node
    3. Run join command to join worker node worker1
    4. Run join command to join worker node worker2
  7. Launch docker container manomarks/visualizer
  8. Launch docker registry service registry:2
  9. Build docker image for service web (Apache+PHP)
    1. Go to /docker/docker-app-config and build docker image docker-app-php
    2. Tag built docker image to localhost:5000/docker-app-php
    3. Push localhost:5000/docker-app-php to local docker registry
  10. Create docker network frontend
  11. Run MySQL service in docker machine manager1
    1. Clean MySQL data folder
    2. Create a mysql service (single instance) running MySQL 5.7 on the manager1 node
  12. Run Apache & PHP docker-app-php service
    1. Create web service for docker image docker-app-php across swarm nodes
    2. Scale up web service to 4 instances
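
Once run_swarm.sh has finished, the resulting swarm can be inspected from the host machine. A minimal sketch, assuming a local docker client and the machine names used above:

$ eval $(docker-machine env manager1)   # point the local docker client at manager1
$ docker node ls                        # manager1, worker1 and worker2 should be listed
$ docker service ls                     # registry, mysql and web services
$ docker service ps web                 # shows how the web replicas are spread across nodes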

Source Repo: https://github.com/chrisleekr/docker-swarm-scalable-web-application-architecture

 

run_swarm.sh

#!/bin/bash

# Define list of machine names to launch
#   Currently, three machines will be launched and accessible via docker-machine ssh
#   $ docker-machine ssh manager1
machines=( "manager1"  "worker1" "worker2" )

# Loop defined machines
for machine in "${machines[@]}"
do

	echo "############################################"
	echo "==> 1. Launching machine - $machine"
	echo "############################################"

	echo "==> Remove existing machine if available - $machine"
	docker-machine rm $machine -f

	echo "==> Create new machine - $machine"
	docker-machine create -d virtualbox $machine

    echo "==> Stop newly created machine to add shared folder"
	docker-machine stop $machine

	echo "==> Add current folder as shared folder into docker machine"
	if [ "$(uname)" == "Darwin" ]; then
	    # Do something under Mac OS X platform
	    VBoxManage sharedfolder add $machine --name docker --hostpath $(pwd) --automount
	elif [ "$(expr substr $(uname -s) 1 5)" == "Linux" ]; then
	    # Do something under GNU/Linux platform
	    VBoxManage sharedfolder add $machine --name docker --hostpath $(pwd) --automount
	elif [ "$(expr substr $(uname -s) 1 10)" == "MINGW64_NT" ]; then
	    # Do something under Windows NT platform
	    "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" sharedfolder add $machine --name docker --hostpath $(pwd) --automount
	fi

    echo "==> Restart machine - $machine"
	docker-machine start $machine

    echo "==> Create new folder called /docker in root folder"
	docker-machine ssh $machine "sudo mkdir -p /docker"

    echo "==> Mount the shared folder to /docker path with read/write mode"
    if [ "$(uname)" == "Darwin" ]; then
    	docker-machine ssh $machine "sudo mount -t vboxsf -o defaults,dmode=777,fmode=666 docker /docker"
    elif [ "$(expr substr $(uname -s) 1 5)" == "Linux" ]; then
    	docker-machine ssh $machine "sudo mount -t vboxsf -o defaults,dmode=777,fmode=666 docker /docker"
   	elif [ "$(expr substr $(uname -s) 1 10)" == "MINGW64_NT" ]; then
		docker-machine ssh $machine "sudo mount -t vboxsf docker /docker"
	fi

	docker-machine ssh $machine "ls /docker"

done


echo "############################################"
echo "==> 2. Get lead manager host IP Address"
echo "############################################"
#IPADDR=$(docker-machine ssh manager1 ifconfig eth1 | grep 'inet addr:' | cut -d: -f3 | awk '{print $1}')
IPADDR=$(docker-machine ip manager1)
echo "==> Lead Manager IP Address: ${IPADDR}"


echo "############################################"
echo "==> 3. Initialize swarm in lead manager 1"
echo "############################################"
CMD_SWARM_INIT="docker swarm init --advertise-addr ${IPADDR}:2377 --listen-addr ${IPADDR}:2377"
docker-machine ssh manager1 "${CMD_SWARM_INIT}"

echo "==> Get swarm join token for manager from lead manager 1"
SWARM_JOIN_TOKEN=$(docker-machine ssh manager1 docker swarm join-token -q manager)
CMD_SWARM_JOIN="docker swarm join --token ${SWARM_JOIN_TOKEN} ${IPADDR}:2377"
echo "==> Got swarm join command for manager => ${CMD_SWARM_JOIN}"


echo "############################################"
echo "==> 4. Join worker nodes to swarm"
echo "############################################"

echo "==> Get swarm join token for worker from lead manager 1"
SWARM_JOIN_TOKEN=$(docker-machine ssh manager1 docker swarm join-token -q worker)
CMD_SWARM_JOIN="docker swarm join --token ${SWARM_JOIN_TOKEN} ${IPADDR}:2377"
echo "==> Got swarm join command for worker => ${CMD_SWARM_JOIN}"
echo "==> Run swarm join command to worker 1"
docker-machine ssh worker1 ${CMD_SWARM_JOIN}
echo "==> Run swarm join command to worker 2"
docker-machine ssh worker2 ${CMD_SWARM_JOIN}


echo "############################################"
echo "==> 5. Run Docker Swarm Visualizer for monitoring swarm nodes"
echo "		https://github.com/ManoMarks/docker-swarm-visualizer"
echo "############################################"
docker-machine ssh manager1 "docker run -it -d -p 8080:8080 -e HOST=${IPADDR} -v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer"
#echo "==> Visualizer URL: http://${IPADDR}:8080"


echo "############################################"
echo "==> 6. Run docker registry service to manager images in swarm"
echo "		https://hub.docker.com/_/registry/"
echo "############################################"
docker-machine ssh manager1 "docker service create --name registry --publish 5000:5000 registry:2"
#docker-machine ssh manager1 curl -sS localhost:5000/v2/_catalog
#docker-machine ssh manager1 docker pull alpine
#docker-machine ssh manager1 docker tag alpine localhost:5000/alpine
#docker-machine ssh manager1 docker push localhost:5000/alpine
#docker-machine ssh manager1 curl -sS localhost:5000/v2/_catalog

echo "############################################"
echo "==> 7. Build docker image for Apache & PHP "
echo "      Refer ./docker-app-config/Dockerfile"
echo "############################################"
echo "==> Go to /docker/docker-app-config and build docker image 'docker-app-php'"
docker-machine ssh manager1 "cd /docker/docker-app-config && docker build -t docker-app-php ."
echo "==> Tag built docker image to localhost:5000/docker-app-php"
docker-machine ssh manager1 "docker tag docker-app-php localhost:5000/docker-app-php"
echo "==> Push localhost:5000/docker-app-php to local docker registry"
docker-machine ssh manager1 "docker push localhost:5000/docker-app-php"

echo "############################################"
echo "==> 8. Create docker network 'frontend'"
echo "      All docker services will be laucnhed under the docker network 'frontend' to provide access to each node"
echo "############################################"
docker-machine ssh manager1 "docker network create frontend --driver overlay"

echo "############################################"
echo "==> 9. Run MySQL service in lead manager node"
echo "      Since MySQL replication is not been implemented, launch only one MySQL service in manager only"
echo "############################################"
echo "==> Clean MySQL data folder"
docker-machine ssh manager1 "sudo rm -rf /docker/docker-db-data"
docker-machine ssh manager1 "sudo mkdir -p /docker/docker-db-data"

echo "==> Create service (single instance) for MySQL:5.7 to lead manager node"
docker-machine ssh manager1 "docker service create --name mysql \
	--publish 3306:3306 \
	--network frontend \
	--replicas 1 \
	--mount type=bind,src=/docker/docker-db-data,dst=/var/lib/mysql,readonly=false  \
	--constraint 'node.hostname==manager1' \
	-e MYSQL_ROOT_PASSWORD=root \
	-e MYSQL_DATABASE=docker \
	-e MYSQL_USER=docker \
	-e MYSQL_PASSWORD=docker \
	mysql:5.7 "

echo "############################################"
echo "==> 10. Run Apache & PHP (docker-app-php) service"
echo "############################################"
echo "==> Create service for docker-app-php across swarm node"
docker-machine ssh manager1 "docker service create --name web \
	--publish 80:80 \
	--network frontend \
	--replicas 1 \
	--mount type=bind,src=/docker/docker-app,dst=/var/www/site,readonly=false \
	localhost:5000/docker-app-php:latest"
echo "==> Scale up service to 4"
docker-machine ssh manager1 "docker service scale web=4"

echo "############################################"
echo "==> Visualizer URL: http://${IPADDR}:8080"
echo "==> Web: http://${IPADDR}"
echo "==> MySQL: tcp://${IPADDR}:3306"
echo "############################################"

 

Installing Let's Encrypt SSL on a Raspberry Pi

  1. Find the Raspberry Pi's IP address, then set up a DMZ on the router so that connections to the external IP are forwarded to the Raspberry Pi.
  2. Install Apache on the Raspberry Pi.
  3. Connect via the external IP to confirm that Apache is installed.
  4. Attach a dynamic domain to the Raspberry Pi.
  5. Download and install the NoIP Dynamic Update Client.
  6. Confirm that the dynamic domain (http://xxxxx.ddns.net) is reachable.
  7. Download Certbot
    • Installation guide: https://certbot.eff.org/#pip-apache
      $ wget https://dl.eff.org/certbot-auto
      $ chmod a+x certbot-auto
      $ certbot-auto --apache
    • During installation it asks for a domain; enter the dynamic domain configured above. (See the renewal note after this list.)
  8. Once installation is complete, try connecting to https://<domain>.
  9. If the connection succeeds, run an SSL test at https://www.ssllabs.com/ssltest/analyze.html?d=<domain>.
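
Certificates issued by Let's Encrypt expire after 90 days, so it is worth scheduling automatic renewal. A minimal sketch, assuming certbot-auto was downloaded to /home/pi (adjust the path to wherever you placed it):

$ sudo crontab -e
# Check for renewal twice a day, as Certbot recommends (path is an assumption)
0 3,15 * * * /home/pi/certbot-auto renew --quiet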

– End –

Vagrant Scalable Web Application Architecture

Deploying scalable web application using Vagrant, Consul, Consul Template, nginx and HAProxy

Source Repo: https://github.com/chrisleekr/vagrant-scalable-web-application-architecture

This is a proof-of-concept project to spin up a scalable web application architecture with Vagrant.

The project involves:

  • Vagrant for launching multiple VM instances and persistent storage
  • Consul for health checking and service discovery
  • Consul Template for automated load balancer management
  • nginx for HTTP server load balancing
  • HAProxy for MySQL server load balancing


Note: This project was created just for practice. It is not suitable for production use.

Prerequisites

Usage

$ git clone https://github.com/chrisleekr/vagrant-scalable-web-application-architecture.git
$ cd vagrant-scalable-web-application-architecture
$ vagrant up

asciicast

After the Vagrant machines are running, you can connect to the instances at:

  • Consul WEB UI: 192.168.100.11:8500
  • Web Load Balancing Machine: 192.168.100.20
  • DB Load Balancing Machine: 192.168.100.21

After vagrant halt/suspend, you need to run the provisioning scripts again to synchronize MySQL from the master to the slave machine:

$ vagrant halt
$ vagrant up && vagrant provision

If you get the error message ‘The guest additions on this VM do not match the installed version of VirtualBox!’, run the following command before vagrant up:

$ vagrant plugin install vagrant-vbguest

Features

  • Launch a scalable web application architecture with a single command
  • Use Consul to manage server nodes, service discovery and health checking
  • Configure web server load balancing with Consul Template + nginx reverse proxy
  • Persistent storage for web servers using a Vagrant synced folder
  • Configure database server load balancing with Consul Template + HAProxy
  • Configure MySQL two-way Master-Master replication
  • Persistent storage for databases using a Vagrant synced folder

What to do after launching Vagrant

Once all Vagrant instances are up, you can access the Consul Web UI by opening http://192.168.100.11:8500 in a browser. You will see services such as consul, web-lb, web, db-lb and db. In the Nodes section, you will see nodes such as consul1, web-lb, web1, db-lb, db1 and so on. If you see services and nodes as in the following screenshot, everything is up and running successfully.


Now you can install WordPress to test the architecture. Open a browser and go to http://192.168.100.20. You will see the WordPress installation screen shown below. You can set up WordPress with the following information.

    Database Name: wordpress
    Username: root
    Password: root
    Database Host: db-lb.service.consul
    Table Prefix: wp_
    Site Title: [Any title you want, e.g. Test Website]
    Username: [Any username you want, e.g. admin]
    Password: [Any password you want]
    Your Email: [Any email you want]


After installing WordPress, you can check that the load balancing is working properly. In the browser, go to http://192.168.100.20/server.php. I added a simple PHP script that displays the web server IP and the database hostname. If the Web Server IP and DB Hostname change on refresh, load balancing is configured successfully.
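
A quick way to watch the rotation from the command line (a small sketch; the exact output format of server.php may differ):

$ for i in 1 2 3 4; do curl -s http://192.168.100.20/server.php; echo; done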

As this is test environment, you can access DB directly via any MySQL client tool.

    db-lb.local
    Host: 192.168.100.21
    Username: root
    Password: root

    db1.local
    Host: 192.168.100.41
    Username: root
    Password: root

    db2.local
    Host: 192.168.100.42
    Username: root
    Password: root

If you see exactly the same tables in the db1.local and db2.local databases, replication is configured successfully.
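
You can also ask MySQL itself whether replication is healthy on both masters. A sketch, assuming a local mysql client and the credentials above; both servers should report Yes for the IO and SQL threads:

$ mysql -h 192.168.100.41 -u root -proot -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"
$ mysql -h 192.168.100.42 -u root -proot -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"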

Environments


The Vagrant environment contains:

  • 3 x Consul servers
  • 1 x nginx load balancer for web servers
  • 3 x Apache web servers
  • 1 x HAProxy load balancer
  • 2 x MySQL master-master replication servers

Note: In order to reduce the launch time, 2 of the Consul servers are commented out, as a single Consul server still works well. Consul recommends running at least 3 Consul servers to avoid a single point of failure. In addition, 1 Apache web server is commented out so that it does not launch initially. If you would like to test the complete architecture, uncomment the VM definitions.

Following list depicts detailed environment configurations for each VM:

  • Consul servers
    • Consul 1 – Bootstrap, Web UI
      • Private IP: 192.168.100.11
      • Hostname: consulserver1.local
    • Consul 2
      • Private IP: 192.168.100.12
      • Hostname: consulserver2.local
      • Commented to not launch in initial checkout
    • Consul 3
      • Private IP: 192.168.100.13
      • Hostname: consulserver3.local
      • Commented to not launch in initial checkout
  • Web server load balancer
    • Private IP: 192.168.100.20
    • Hostname: web-lb.local
    • Web Access URL: http://192.168.100.20:80
    • Configured with Consul Template and nginx reverse proxy
    • This instance will be the access point for internet users.
  • Web servers
    • Web server 1
      • Private IP: 192.168.100.31
      • Hostname: web1.local
      • Configured with Apache web server
      • When the instance is launched, Consul Template on the web server load balancer will generate a new nginx config file.
    • Web server 2
      • Private IP: 192.168.100.32
      • Hostname: web2.local
      • Same as Web server 1
      • Commented to not launch in initial checkout
    • Web server 3
      • Private IP: 192.168.100.33
      • Hostname: web3.local
      • Same as Web server 1
      • Commented to not launch in initial checkout
  • Database load balancer
    • Private IP: 192.168.100.21
    • Hostname: db-lb.local
    • Database Access: tcp://192.168.100.21:3306
    • Configured with Consul Template and HAProxy
    • This instance will be the access point for web servers to access the database.
  • Databases
    • Database server 1
      • Private IP: 192.168.100.41
      • Hostname: db1.local
      • This instance is configured for Master-Master replication with Database server 2.
      • Database Name/Username/Password: wordpress/root/root
      • When the instance is launched, Consul Template on the database load balancer will generate a new HAProxy config file.
    • Database server 2
      • Private IP: 192.168.100.42
      • Hostname: db2.local
      • This instance is configured for Master-Master replication with Database server 1.
      • Same as Database server 1

How it works

Note: This section is fairly descriptive because I want to record in detail how it works, both to avoid making the same mistakes again and for future reference.

  1. Consul servers will be launched first.
    1. Consul server 1(consulserver1.local) will be launched and provisioning script will be executed.
    2. Update package list and upgrade system (Currently commented out. If need, uncomment it)
    3. Set the Server Timezone to Australia/Melbourne
    4. Enable Ubuntu Firewall and allow SSH & Consul agent
    5. Add consul user
    6. Install necessary packages
    7. Copy an upstart script to /etc/init so the Consul agent will be restarted if we restart the virtual machine
    8. Get the Consul agent zip file and install it
    9. Consul UI needs to be installed
    10. Create the Consul configuration directory and consul log file
    11. Copy the Consul configurations
    12. Start Consul agent
    13. Consul server 2(consulserver2.local) will be launched and provisioning script will be executed.
    14. Repeat aforementioned step #1-ii to #1-viii
    15. Create the Consul configuration directory and consul log file
    16. Copy the Consul configurations
    17. Start Consul agent
    18. Consul server 3(consulserver3.local) will be launched and provisioning script will be executed.
    19. Repeat aforementioned step #1-ii to #1-viii
    20. Create the Consul configuration directory and consul log file
    21. Copy the Consul configurations
    22. Start Consul agent
  2. Web load balancer (web-lb.local) will be launched next.
    1. Repeat aforementioned step #1-ii to #1-viii
    2. Create the Consul configuration directory and consul log file
    3. Copy the Consul configurations
    4. Start Consul agent
    5. Install and configure dnsmasq
    6. Start dnsmasq
    7. Create consul-template configuration folder and copy nginx.conf template
    8. Install nginx
    9. Download consul-template and copy to /usr/local/bin
    10. Copy an upstart script to /etc/init, so the Consul template and nginx will be restarted if we restart the virtual machine
    11. Start consul-template and nginx will be started via consul-template
  3. Web servers will be launched next.
    1. Web server 1(web1.local) will be launched and provisioning script will be executed.
    2. Repeat aforementioned step #2-i to 2-vi
    3. Install apache & php5 packages
    4. Copy apache site configuration files
    5. Start apache server
    6. Download latest WordPress file and extract to /var/www
    7. Web server 2(web2.local) will be launched and provisioning script will be executed.
    8. Repeat aforementioned step #3-i to 3-v
    9. Web server 3(web3.local) will be launched and provisioning script will be executed.
    10. Repeat aforementioned step #3-i to 3-v
  4. Database load balancer(db-lb.local) will be launched next.
    1. Repeat aforementioned step #2-i to 2-vi
    2. Install MySQL packages – mysql-client
    3. Install HAProxy
    4. Create consul-template configuration folder and copy haproxy.conf template
    5. Download consul-template and copy to /usr/local/bin
    6. Copy an upstart script to /etc/init so the Consul template and HAProxy will be restarted if we restart the virtual machine
    7. Start consul-template and HAProxy will be started via consul-template
  5. Database servers will be launched next.
    1. Database server 1(db1.local) will be launched and provisioning script will be executed.
    2. Repeat aforementioned step #2-i to #2-iv
    3. Install MySQL specific packages and settings – mysql-server mysql-client
    4. Setup MySQL server
      • Move initial database file to persistent directory
      • Setting up MySQL DB and root user
      • Set up root user’s host to be accessible from any remote
      • Create replication user
      • Create HAProxy user
      • Restart MySQL server
    5. Install and configure dnsmasq
    6. Start dnsmasq
    7. Database server 2(db2.local) will be launched and provisioning script will be executed.
    8. Repeat aforementioned step #5-i to #5-iv
    9. Set up MySQL replication, starting with installing sshpass to SSH into MySQL server 1
    10. Check MySQL server 1 connection
    11. Dump wordpress database from MySQL server 1 to /vagrant/data/wordpress.sql
    12. Import wordpress database to MySQL server 2 from /vagrant/data/wordpress.sql
    13. Get current log file and position in MySQL server 1
    14. Change master host to MySQL server 1, log file and position in MySQL server 2 machine
    15. Get current log file and position in MySQL server 2
    16. Change master host to MySQL server 1, log file and position in MySQL server 2 machine, and vice versa (see the sketch after this list)
    17. Test replication by creating table called test_table

Secure Raspberry Pi with iptables, PSAD, Fail2ban and OSSEC

Disable ping

$ sudo sh -c 'echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all'
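
The setting above only lasts until reboot; to make it persistent, add it to /etc/sysctl.conf and reload:

$ sudo nano /etc/sysctl.conf
net.ipv4.icmp_echo_ignore_all = 1
$ sudo sysctl -p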

 

Install iptables and iptables-persistent

$ sudo apt-get install iptables iptables-persistent
$ sudo service iptables-persistent start

 

Create shell script

$ nano reset_iptables.sh
#!/usr/bin/env bash
iptables -F
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
iptables -A INPUT -f -j DROP
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
iptables -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP
iptables -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP
iptables -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
#
# PACKETS chain
#
iptables -N PACKET
iptables -A PACKET -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -m limit --limit 1/sec -j ACCEPT
iptables -A PACKET -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK RST -m limit --limit 1/sec -j ACCEPT
#limit ping to 1 per second
#iptables -A INPUT -p icmp -j DROP
#iptables -A PACKET -p icmp -m icmp --icmp-type 8 -m limit --limit 1/sec -j ACCEPT
#
# STATE_TRACK chain (connection tracking)
#
iptables -N STATE_TRACK
iptables -A STATE_TRACK -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A STATE_TRACK -m state --state INVALID -j DROP
#
# PORTSCAN chain (drop common attacks)
#
iptables -N PORTSCAN
iptables -A PORTSCAN -p tcp --tcp-flags ACK,FIN FIN -j DROP
iptables -A PORTSCAN -p tcp --tcp-flags ACK,PSH PSH -j DROP
iptables -A PORTSCAN -p tcp --tcp-flags ACK,URG URG -j DROP
iptables -A PORTSCAN -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
iptables -A PORTSCAN -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
iptables -A PORTSCAN -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
iptables -A PORTSCAN -p tcp --tcp-flags ALL ALL -j DROP
iptables -A PORTSCAN -p tcp --tcp-flags ALL NONE -j DROP
iptables -A PORTSCAN -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
iptables -A PORTSCAN -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
iptables -A PORTSCAN -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP

# Disable ping response
iptables -A OUTPUT -p icmp -o eth0 -j ACCEPT
iptables -A INPUT -p icmp -j DROP

#allow all outgoing access
iptables -A OUTPUT -o eth0 -j ACCEPT
iptables -I INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
$ sudo chmod a+x reset_iptables.sh
$ sudo ./reset_iptables.sh
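
The rules loaded by reset_iptables.sh live only in memory; to have iptables-persistent restore them on boot, save the current ruleset (a sketch; /etc/iptables/rules.v4 is the default path used by iptables-persistent on Debian/Raspbian):

$ sudo sh -c 'iptables-save > /etc/iptables/rules.v4'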

 

Install PSAD (Port Scan Attack Detector)

References:

$ wget http://cipherdyne.org/psad/download/psad-2.4.3.tar.gz
$ tar xvfz psad-2.4.3.tar.gz
$ cd psad-2.4.3
$ sudo ./install.pl

Set configurations

$ sudo nano /etc/psad/psad.conf
EMAIL_ADDRESSES psad@loopback; # Don't want to receive any email
HOSTNAME raspberrypi; # Set to hostname
HOME_NET 192.168.1.0/24; # Set to internal IP
HTTP_PORTS 8080; # Set custom HTTP port
SHELLCODE_PORTS !8080; # Set custom HTTP port
ENABLE_AUTO_IDS Y; # Set to automatically configure iptables
AUTO_IDS_DANGER_LEVEL 1; # Set to 1 for strict
AUTO_BLOCK_TIMEOUT 999999999; # Make permanent
ENABLE_AUTO_IDS_EMAILS N;
IPT_AUTO_CHAIN1 DROP, src, filter, INPUT, 1, PSAD_BLOCK_INPUT, 1;
#IPT_AUTO_CHAIN2 DROP, dst, filter, OUTPUT, 1, PSAD_BLOCK_OUTPUT, 1;
#IPT_AUTO_CHAIN3 DROP, both, filter, FORWARD, 1, PSAD_BLOCK_FORWARD, 1;

Set IP addresses to be ignored

$ nano /etc/psad/auto_dl
192.168.1.0/24 0; # ignore internal IPs
221.229.1.0/24 5; # permanent ban
$ sudo psad --sig-update
$ sudo service psad restart
$ sudo service psad status
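
Once psad is running, you can check what it has detected and, if necessary, clear the automatically added blocking rules. A sketch, assuming the flags of psad 2.4.x:

$ sudo psad --Status   # summary of detected scans and danger levels
$ sudo psad --Flush    # remove all IPs auto-blocked by psad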

 

Install Fail2ban

References:

$ sudo apt-get install fail2ban postfix

Set configurations

$ sudo nano /etc/fail2ban/jail.conf
# Fail2Ban configuration file.
#
# This file was composed for Debian systems from the original one
#  provided now under /usr/share/doc/fail2ban/examples/jail.conf
#  for additional examples.
#
# To avoid merges during upgrades DO NOT MODIFY THIS FILE
# and rather provide your changes in /etc/fail2ban/jail.local
#
# Author: Yaroslav O. Halchenko <debian@onerussian.com>
#
# $Revision$
#

# The DEFAULT allows a global definition of the options. They can be overridden
# in each jail afterwards.

[DEFAULT]

# "ignoreip" can be an IP address, a CIDR mask or a DNS host
ignoreip = 127.0.0.0/24 192.168.1.0/24
bantime  = -1
maxretry = 3

# "backend" specifies the backend used to get files modification. Available
# options are "gamin", "polling" and "auto".
backend = auto

#
# Destination email address used solely for the interpolations in
# jail.{conf,local} configuration files.
destemail = fail2ban@loopback

#
# ACTIONS
#

# Default banning action (e.g. iptables, iptables-new,
# iptables-multiport, shorewall, etc) It is used to define
# action_* variables. Can be overridden globally or per
# section within jail.local file
banaction = iptables-multiport

# email action. Since 0.8.1 upstream fail2ban uses sendmail
# MTA for the mailing. Change mta configuration parameter to mail
# if you want to revert to conventional 'mail'.
mta = sendmail

# Default protocol
protocol = tcp

# Specify chain where jumps would need to be added in iptables-* actions
chain = INPUT

#
# Action shortcuts. To be used to define action parameter

# The simplest action to take: ban only
action_ = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]

# ban & send an e-mail with whois report to the destemail.
action_mw = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
              %(mta)s-whois[name=%(__name__)s, dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s"]

# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action_mwl = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
               %(mta)s-whois-lines[name=%(__name__)s, dest="%(destemail)s", logpath=%(logpath)s, chain="%(chain)s"]

# Choose default action.  To change, just override value of 'action' with the
# interpolation to the chosen action shortcut (e.g.  action_mw, action_mwl, etc) in jail.local
# globally (section [DEFAULT]) or per specific section
action = %(action_)s

#
# JAILS
#

# Next jails corresponds to the standard configuration in Fail2ban 0.6 which
# was shipped in Debian. Enable any defined here jail by including
#
# [SECTION_NAME]
# enabled = true

#
# in /etc/fail2ban/jail.local.
#
# Optionally you may override any other parameter (e.g. banaction,
# action, port, logpath, etc) in that section within jail.local

[ssh]

enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 3


[ssh-ddos]

enabled  = true
port     = ssh
filter   = sshd-ddos
logpath  = /var/log/auth.log
maxretry = 3

#
# HTTP servers
#

[apache]

enabled  = true
port     = http,https
filter   = apache-auth
logpath  = /var/log/apache*/*error.log
maxretry = 3

# default action is now multiport, so apache-multiport jail was left
# for compatibility with previous (<0.7.6-2) releases
[apache-multiport]

enabled   = true
port      = http,https
filter    = apache-auth
logpath   = /var/log/apache*/*error.log
maxretry  = 3

[apache-noscript]

enabled  = true
port     = http,https
filter   = apache-noscript
logpath  = /var/log/apache*/*error.log
maxretry = 3

[apache-overflows]

enabled  = true
port     = http,https
filter   = apache-overflows
logpath  = /var/log/apache*/*error.log
maxretry = 2

[invalidmethod]

enabled  = true
port     = http,https
filter	 = invalidmethod
logpath	 = /var/log/apache*/*access.log
findtime = 10800
maxretry = 3

#
# FTP servers
#

[vsftpd]

enabled  = true
port     = ftp,ftp-data,ftps,ftps-data
filter   = vsftpd
logpath  = /var/log/vsftpd.log
# or overwrite it in jails.local to be
# logpath = /var/log/auth.log
# if you want to rely on PAM failed login attempts
# vsftpd's failregex should match both of those formats
maxretry = 3


#
# Mail servers
#

[postfix]

enabled  = true
port     = smtp,ssmtp
filter   = postfix
logpath  = /var/log/mail.log



 

Update iptables-multiport.conf to persist bans in /etc/fail2ban/persistent.bans

$ sudo nano /etc/fail2ban/action.d/iptables-multiport.conf
# Fail2Ban configuration file
#
# Author: Cyril Jaquier
# Modified by Yaroslav Halchenko for multiport banning
# $Revision$
#

[Definition]

# Option:  actionstart
# Notes.:  command executed once at the start of Fail2Ban.
# Values:  CMD
#
actionstart = iptables -N fail2ban-<name>
              iptables -A fail2ban-<name> -j RETURN
              iptables -I <chain> -p <protocol> -m multiport --dports <port> -j fail2ban-<name>
	      cat /etc/fail2ban/persistent.bans | awk '/^fail2ban-<name>/ {print $2}' \
               | while read IP; do iptables -I fail2ban-<name> 1 -s $IP -j DROP; done

# Option:  actionstop
# Notes.:  command executed once at the end of Fail2Ban
# Values:  CMD
#
actionstop = iptables -D <chain> -p <protocol> -m multiport --dports <port> -j fail2ban-<name>
             iptables -F fail2ban-<name>
             iptables -X fail2ban-<name>

# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
actioncheck = iptables -n -L <chain> | grep -q fail2ban-<name>

# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    <ip>  IP address
#          <failures>  number of failures
#          <time>  unix timestamp of the ban time
# Values:  CMD
#
actionban = iptables -I fail2ban-<name> 1 -s <ip> -j DROP
	echo "fail2ban-<name> <ip>" >> /etc/fail2ban/persistent.bans

# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    <ip>  IP address
#          <failures>  number of failures
#          <time>  unix timestamp of the ban time
# Values:  CMD
#
actionunban = iptables -D fail2ban-<name> -s <ip> -j DROP
		sed -i /<ip>/d /etc/fail2ban/persistent.bans

[Init]

# Defaut name of the chain
#
name = default

# Option:  port
# Notes.:  specifies port to monitor
# Values:  [ NUM | STRING ]  Default:
#
port = ssh

# Option:  protocol
# Notes.:  internally used by config reader for interpolations.
# Values:  [ tcp | udp | icmp | all ] Default: tcp
#
protocol = tcp

# Option:  chain
# Notes    specifies the iptables chain to which the fail2ban rules should be
#          added
# Values:  STRING  Default: INPUT
chain = INPUT

Add an extra filter to prevent unauthorised HTTP access

$ sudo nano /etc/fail2ban/filter.d/invalidmethod.conf
# Fail2Ban configuration file
#
#
# $Revision: 1 $
#

[Definition]
# Option: failregex Notes.: Regexp to catch invalid method 
# abovementioned bots. Values: TEXT
#
failregex = ^<HOST> .*\"(GET|HEAD|PUT|POST) [^\"]+\" 401.*

# Option: ignoreregex Notes.: regex to ignore. If this regex 
# matches, the line is ignored. Values: TEXT
#
ignoreregex = ^<HOST> .*\"(GET|POST) [^\"]+\" 200.*
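
Before enabling the jail, the filter can be tested against an existing access log with fail2ban-regex (the log path below is an assumption; point it at whichever Apache access log you have):

$ sudo fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/invalidmethod.conf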

$ sudo nano /etc/fail2ban/jail.conf
[invalidmethod]

enabled  = true
port     = http,https
filter	 = invalidmethod
logpath	 = /var/log/apache*/*access.log
findtime = 10800
maxretry = 3
$ sudo /etc/init.d/fail2ban restart

 

Check the jails are configured correctly

$ sudo fail2ban-client status
Status
|- Number of jail:     	9
`- Jail list:  		invalidmethod, apache-noscript, postfix, ssh-ddos, apache-multiport, vsftpd, ssh, apache-overflows, apache
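
You can also inspect a single jail, for example the ssh jail, to see its currently banned IPs:

$ sudo fail2ban-client status ssh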

 

Install OSSEC (Open Source HIDS SECurity)

References

$ sudo apt-get install build-essential inotify-tools
$ wget https://bintray.com/artifact/download/ossec/ossec-hids/ossec-hids-2.8.3.tar.gz
$ tar -zxf ossec-hids-2.8.3.tar.gz
$ cd ossec-hids-2.8.3
$ sudo ./install.sh
(en/br/cn/de/el/es/fr/hu/it/jp/nl/pl/ru/sr/tr) [en]:

OSSEC HIDS v2.8 Installation Script - http://www.ossec.net

 You are about to start the installation process of the OSSEC HIDS.
 You must have a C compiler pre-installed in your system.
 If you have any questions or comments, please send an e-mail
 to dcid@ossec.net (or daniel.cid@gmail.com).

  - System: Linux kuruji 3.13.0-36-generic
  - User: root
  - Host: kuruji

  -- Press ENTER to continue or Ctrl-C to abort. --

1- What kind of installation do you want (server, agent, local, hybrid or help)? local

  - Local installation chosen.

2- Setting up the installation environment.

  - Choose where to install the OSSEC HIDS [/var/ossec]:
  - Installation will be made at  /var/ossec .

3- Configuring the OSSEC HIDS.

  3.1- Do you want e-mail notification? (y/n) [y]:

  - What's your e-mail address? sammy@example.com
  - We found your SMTP server as: mail.example.com.
  - Do you want to use it? (y/n) [y]:

--- Using SMTP server:  mail.example.com.

  3.2- Do you want to run the integrity check daemon? (y/n) [y]:

- Running syscheck (integrity check daemon).

  3.3- Do you want to run the rootkit detection engine? (y/n) [y]:

- Running rootcheck (rootkit detection).

  3.4- Active response allows you to execute a specific command based on the events received.  

   Do you want to enable active response? (y/n) [y]:

   Active response enabled.

  Do you want to enable the firewall-drop response? (y/n) [y]:

- firewall-drop enabled (local) for levels >= 6

   - Default white list for the active response:
      - 8.8.8.8
      - 8.8.4.4

   - Do you want to add more IPs to the white list? (y/n)? [n]:

3.6- Setting the configuration to analyze the following logs:
    -- /var/log/auth.log
    -- /var/log/syslog
    -- /var/log/dpkg.log

 - If you want to monitor any other file, just change
   the ossec.conf and add a new localfile entry.
   Any questions about the configuration can be answered
   by visiting us online at http://www.ossec.net .


   --- Press ENTER to continue ---

 - System is Debian (Ubuntu or derivative).
 - Init script modified to start OSSEC HIDS during boot.

 - Configuration finished properly.

 - To start OSSEC HIDS:
                /var/ossec/bin/ossec-control start

 - To stop OSSEC HIDS:
                /var/ossec/bin/ossec-control stop

 - The configuration can be viewed or modified at /var/ossec/etc/ossec.conf

    ---  Press ENTER to finish (maybe more information below). ---
$ sudo /var/ossec/bin/ossec-control start
$ cd /var/www
$ git clone https://github.com/ossec/ossec-wui.git ossec
$ cd ossec
$ ./setup.sh

Open http://{http_host}/ossec
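
To confirm that the OSSEC daemons are running after installation and the web UI setup:

$ sudo /var/ossec/bin/ossec-control status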