How To Use Traefik v2 as a Reverse Proxy for Docker Containers on Ubuntu 20.04

Introduction

Docker can be an efficient way to run web applications in production, but you may want to run multiple applications on the same Docker host. In this situation, you’ll need to set up a reverse proxy. This is because you only want to expose ports 80 and 443 to the rest of the world.

Traefik is a Docker-aware reverse proxy that includes a monitoring dashboard. Traefik v1 has been widely used for a while, and you can follow this earlier tutorial to install Traefik v1. But in this tutorial, you’ll install and configure Traefik v2, which includes quite a few differences.

The biggest difference between Traefik v1 and v2 is that frontends and backends were removed and their combined functionality spread out across routers, middlewares, and services. Previously a backend did the job of making modifications to requests and getting that request to whatever was supposed to handle it. Traefik v2 provides more separation of concerns by introducing middlewares that can modify requests before sending them to a service. Middlewares make it possible to define a single modification step once and reuse it across many different routes (such as HTTP Basic Auth, which you’ll see later). A router can also use many different middlewares.

In this tutorial you’ll configure Traefik v2 to route requests to two different web application containers: a WordPress container and an Adminer container, each talking to a MySQL database. You’ll configure Traefik to serve everything over HTTPS using Let’s Encrypt.

Prerequisites

To complete this tutorial, you will need the following:

      • One Ubuntu 20.04 server with a sudo non-root user and a firewall. You can set this up by following your Ubuntu 20.04 initial server setup guide.
      • Docker installed on your server, which you can accomplish by following Steps 1 and 2 of How to Install and Use Docker on Ubuntu 20.04.
      • Docker Compose installed using the instructions from Step 1 of How to Install Docker Compose on Ubuntu 20.04.
      • A domain and three A records, db-admin.your_domain, blog.your_domain, and monitor.your_domain. Each should point to the IP address of your server. You can learn how to point domains to DigitalOcean Droplets by reading through DigitalOcean’s Domains and DNS documentation. Throughout this tutorial, substitute your domain for your_domain in the configuration files and examples.
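
Once the records are in place, you can optionally confirm that each one resolves to your server, for example with dig (provided by the dnsutils package). The command below is only a spot check and should print your server’s IP address once DNS has propagated:

 $ dig +short blog.your_domain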

        Step 1 — Configuring and Running Traefik

        The Traefik project has an official Docker image, so you will use that to run Traefik in a Docker container.

        But before you get your Traefik container up and running, you need to create a configuration file and set up an encrypted password so you can access the monitoring dashboard.

        You’ll use the htpasswd utility to create this encrypted password. First, install the utility, which is included in the apache2-utils package:

         $ sudo apt-get install apache2-utils
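
        Next, generate the password with htpasswd. The example below creates credentials for a user named admin, which matches the output shown next; substitute secure_password with a password of your own choosing:

         $ htpasswd -nb admin secure_password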

        The output from the program will look like this:

        Output
        admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/

        You’ll use this output in the Traefik configuration file to set up HTTP Basic Authentication for the Traefik health check and monitoring dashboard. Copy the entire output line so you can paste it later.

        To configure the Traefik server, you’ll create two new configuration files called traefik.toml and traefik_dynamic.toml using the TOML format. TOML is a configuration language similar to INI files, but standardized. These files let you configure the Traefik server and the various integrations, or providers, that you want to use. In this tutorial, you will use three of Traefik’s available providers: api, docker, and acme. The last of these, acme, supports TLS certificates using Let’s Encrypt.

        Create and open traefik.toml using nano or your preferred text editor:

         $ nano traefik.toml

        First, you want to specify the ports that Traefik should listen on using the entryPoints section of your config file. You want two because you want to listen on ports 80 and 443. Let’s call these web (port 80) and websecure (port 443).

        Add the following configurations:

        traefik.toml
        [entryPoints]
          [entryPoints.web]
            address = ":80"
            [entryPoints.web.http.redirections.entryPoint]
              to = "websecure"
              scheme = "https"
        
          [entryPoints.websecure]
            address = ":443"

    Note that you are also automatically redirecting traffic to be handled over TLS.

    Next, configure the Traefik api, which gives you access to both the API and your dashboard interface. The heading of [api] is all that you need because the dashboard is then enabled by default, but you’ll be explicit for the time being.

    Add the following code:

    traefik.toml
    ...
    [api]
      dashboard = true

    To finish securing your web requests you want to use Let’s Encrypt to generate valid TLS certificates. Traefik v2 supports Let’s Encrypt out of the box and you can configure it by creating a certificates resolver of the type acme.

    Let’s configure your certificates resolver now using the name lets-encrypt:

    traefik.toml
    ...
    [certificatesResolvers.lets-encrypt.acme]
      email = "your_email@your_domain"
      storage = "acme.json"
      [certificatesResolvers.lets-encrypt.acme.tlsChallenge]

    This section is called acme because ACME is the name of the protocol used to communicate with Let’s Encrypt to manage certificates. The Let’s Encrypt service requires registration with a valid email address, so to have Traefik generate certificates for your hosts, set the email key to your email address. You then specify that you will store the information that you will receive from Let’s Encrypt in a JSON file called acme.json.

    The acme.tlsChallenge section tells Traefik how to prove to Let’s Encrypt that it controls your domains before a certificate is issued. You’re configuring it to use the TLS challenge, which completes this verification over port 443.

    Finally, you need to configure Traefik to work with Docker.

    Add the following configurations:

    traefik.toml
    ...
    [providers.docker]
      watch = true
      network = "web"

    The docker provider enables Traefik to act as a proxy in front of Docker containers. You’ve configured the provider to watch for new containers on the web network, which you’ll create soon.

    Our final configuration uses the file provider. With Traefik v2, static and dynamic configurations can’t be mixed and matched. To get around this, you will use traefik.toml to define your static configurations and then keep your dynamic configurations in another file, which you will call traefik_dynamic.toml. Here you are using the file provider to tell Traefik that it should read in dynamic configurations from a different file.

    Add the following file provider:

    traefik.toml
     [providers.file]
       filename = "traefik_dynamic.toml"

    Your completed traefik.toml will look like this:

    traefik.toml
    [entryPoints]
      [entryPoints.web]
        address = ":80"
        [entryPoints.web.http.redirections.entryPoint]
          to = "websecure"
          scheme = "https"
    
      [entryPoints.websecure]
        address = ":443"
    
    [api]
      dashboard = true
    
    [certificatesResolvers.lets-encrypt.acme]
      email = "your_email@your_domain"
      storage = "acme.json"
      [certificatesResolvers.lets-encrypt.acme.tlsChallenge]
    
    [providers.docker]
      watch = true
      network = "web"
    
    [providers.file]
      filename = "traefik_dynamic.toml"

    Save and close the file.

    Now let’s create traefik_dynamic.toml.

    The dynamic configuration values that you need to keep in their own file are the middlewares and the routers. To put your dashboard behind a password you need to customize the API’s router and configure a middleware to handle HTTP basic authentication. Let’s start by setting up the middleware.

    The middleware is configured on a per-protocol basis and since you’re working with HTTP you’ll specify it as a section chained off of http.middlewares. Next comes the name of your middleware so that you can reference it later, followed by the type of middleware that it is, which will be basicAuth in this case. Let’s call your middleware simpleAuth.

    Create and open a new file called traefik_dynamic.toml:

     $ nano traefik_dynamic.toml

    Add the following code. This is where you’ll paste the output from the htpasswd command:

    traefik_dynamic.toml
    [http.middlewares.simpleAuth.basicAuth]
      users = [
        "admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/"
      ]

    To configure the router for the api you’ll once again be chaining off of the protocol name, but instead of using http.middlewares, you’ll use http.routers followed by the name of the router. In this case, the api provides its own named router that you can configure by using the [http.routers.api] section. You’ll also configure the domain that you plan to use with your dashboard by setting the rule key to a host match, set the entrypoint to websecure, and include simpleAuth in the list of middlewares.

    Add the following configurations:

    traefik_dynamic.toml
    ...
    [http.routers.api]
      rule = "Host(`monitor.your_domain`)"
      entrypoints = ["websecure"]
      middlewares = ["simpleAuth"]
      service = "api@internal"
      [http.routers.api.tls]
        certResolver = "lets-encrypt"

    The web entry point handles port 80, while the websecure entry point uses port 443 for TLS/SSL. You automatically redirect all of the traffic on port 80 to the websecure entry point to force secure connections for all requests.

    Notice the last three lines here configure a service, enable tls, and configure certResolver to “lets-encrypt”. Services are the final step to determining where a request is finally handled. The api@internal service is a built-in service that sits behind the API that you expose. Just like routers and middlewares, services can be configured in this file, but you won’t need to do that to achieve your desired result.
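
    For illustration only, a manually defined service in this file would look something like the following sketch, which points a hypothetical service named my-service at a single backend URL (both names here are placeholders, and you do not need to add this to your file for this tutorial):

    [http.services.my-service.loadBalancer]
      [[http.services.my-service.loadBalancer.servers]]
        url = "http://private_ip:8080"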

    Your completed traefik_dynamic.toml file will look like this:

    traefik_dynamic.toml
    [http.middlewares.simpleAuth.basicAuth]
      users = [
        "admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/"
      ]
    
    [http.routers.api]
      rule = "Host(`monitor.your_domain`)"
      entrypoints = ["websecure"]
      middlewares = ["simpleAuth"]
      service = "api@internal"
      [http.routers.api.tls]
        certResolver = "lets-encrypt"

    Save the file and exit the editor.

    With these configurations in place, you will now start Traefik.

    Step 2 – Running the Traefik Container

    In this step you will create a Docker network for the proxy to share with containers. You will then access the Traefik dashboard. The Docker network is necessary so that you can use it with applications that are run using Docker Compose.

    Create a new Docker network called web:

     $ docker network create web

    When the Traefik container starts, you will add it to this network. Then you can add additional containers to this network later for Traefik to proxy to.

    Next, create an empty file that will hold your Let’s Encrypt information. You’ll share this into the container so Traefik can use it:

     $ touch acme.json

    Traefik will only be able to use this file if the root user inside of the container has unique read and write access to it. To do this, lock down the permissions on acme.json so that only the owner of the file has read and write permission.

     $ chmod 600 acme.json

    Once the file gets passed to Docker, the owner will automatically change to the root user inside the container.

    Finally, create the Traefik container with this command:

     $ docker run -d \
         -v /var/run/docker.sock:/var/run/docker.sock \
         -v $PWD/traefik.toml:/traefik.toml \
         -v $PWD/traefik_dynamic.toml:/traefik_dynamic.toml \
         -v $PWD/acme.json:/acme.json \
         -p 80:80 \
         -p 443:443 \
         --network web \
         --name traefik \
         traefik:v2.2

    This command is a little long. Let’s break it down.

    You use the -d flag to run the container in the background as a daemon. You then share your docker.sock file into the container so that the Traefik process can listen for changes to containers. You also share the traefik.toml and traefik_dynamic.toml configuration files into the container, as well as acme.json.

    Next, you map ports :80 and :443 of your Docker host to the same ports in the Traefik container so Traefik receives all HTTP and HTTPS traffic to the server.

    You set the network of the container to web, and you name the container traefik.

    Finally, you use the traefik:v2.2 image for this container so that you can guarantee that you’re not running a completely different version than this tutorial is written for.

    A Docker image’s ENTRYPOINT is a command that always runs when a container is created from the image. In this case, the command is the traefik binary within the container. You can pass additional arguments to that command when you launch the container, but you’ve configured all of your settings in the traefik.toml file.
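
    Before you open the dashboard, you can optionally confirm that the container is running and that ports 80 and 443 are mapped as expected:

     $ docker ps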

    With the container started, you now have a dashboard you can access to see the health of your containers. You can also use this dashboard to visualize the routers, services, and middlewares that Traefik has registered. You can try to access the monitoring dashboard by pointing your browser to https://monitor.your_domain/dashboard/ (the trailing / is required).

    You will be prompted for your username and password, which are admin and the password you configured in Step 1.

    Once logged in, you’ll see the Traefik interface:

    Empty Traefik dashboard

    You will notice that there are already some routers and services registered, but those are the ones that come with Traefik and the router configuration that you wrote for the API.

    You now have your Traefik proxy running, and you’ve configured it to work with Docker and monitor other containers. In the next step you will start some containers for Traefik to proxy.

    Step 3 — Registering Containers with Traefik

    With the Traefik container running, you’re ready to run applications behind it. Let’s launch the following containers behind Traefik:

      • A blog using the official WordPress image.
      • A database management server using the official Adminer image.

    You’ll manage both of these applications with Docker Compose using a docker-compose.yml file.

    Create and open the docker-compose.yml file in your editor:

     $ nano docker-compose.yml

    Add the following lines to the file to specify the version and the networks you’ll use:

    docker-compose.yml
    version: "3"
    
    networks:
      web:
        external: true
      internal:
        external: false

    You use Docker Compose version 3 because it’s the newest major version of the Compose file format.

    For Traefik to recognize your applications, they must be part of the same network, and since you created the network manually, you pull it in by specifying the network name of web and setting external to true. Then you define another network so that you can connect your exposed containers to a database container that you won’t expose through Traefik. You’ll call this network internal.

    Next, you’ll define each of your services, one at a time. Let’s start with the blog container, which you’ll base on the official WordPress image. Add this configuration to the bottom of the file:

    docker-compose.yml
    ...
    
    services:
      blog:
        image: wordpress:4.9.8-apache
        environment:
          WORDPRESS_DB_PASSWORD:
        labels:
          - traefik.http.routers.blog.rule=Host(`blog.your_domain`)
          - traefik.http.routers.blog.tls=true
          - traefik.http.routers.blog.tls.certresolver=lets-encrypt
          - traefik.http.services.blog.loadbalancer.server.port=80
        networks:
          - internal
          - web
        depends_on:
          - mysql

    The environment key lets you specify environment variables that will be set inside of the container. By not setting a value for WORDPRESS_DB_PASSWORD, you’re telling Docker Compose to get the value from your shell and pass it through when you create the container. You will define this environment variable in your shell before starting the containers. This way you don’t hard-code passwords into the configuration file.

    The labels section is where you specify configuration values for Traefik. Docker labels don’t do anything by themselves, but Traefik reads them so it knows how to treat containers. Here’s what each of these labels does:

      • traefik.http.routers.blog.rule=Host(`blog.your_domain`) creates a new router for your container and then specifies the routing rule used to determine whether a request matches this container.
      • traefik.http.routers.blog.tls=true specifies that this router should use TLS.
      • traefik.http.routers.blog.tls.certresolver=lets-encrypt specifies that the certificates resolver you created earlier, called lets-encrypt, should be used to get a certificate for this route.
      • traefik.http.services.blog.loadbalancer.server.port=80 specifies the container port that Traefik should route traffic to for this container.

    With this configuration, all traffic sent to your Docker host on port 80 or 443 with the domain of blog.your_domain will be routed to the blog container.

    You assign this container to two different networks so that Traefik can find it via the web network and it can communicate with the database container through the internal network.

    Lastly, the depends_on key tells Docker Compose that this container needs to start after its dependencies are running. Since WordPress needs a database to run, you must run your mysql container before starting your blog container.

    Next, configure the MySQL service:

    docker-compose.yml
    services:
    ...
      mysql:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD:
        networks:
          - internal
        labels:
          - traefik.enable=false

    You’re using the official MySQL 5.7 image for this container. You’ll notice that you’re once again using an environment item without a value. The MYSQL_ROOT_PASSWORD and WORDPRESS_DB_PASSWORD variables need to be set to the same value so that your WordPress container can communicate with MySQL.

    You don’t want to expose the mysql container to Traefik or the outside world, so you’re only assigning this container to the internal network. Since Traefik has access to the Docker socket, it will still create a router for the mysql container by default, so you add the label traefik.enable=false to tell Traefik not to expose this container.

    Finally, define the Adminer container:

    docker-compose.yml
    services:
    ...
      adminer:
        image: adminer:4.6.3-standalone
        labels:
          - traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`)
          - traefik.http.routers.adminer.tls=true
          - traefik.http.routers.adminer.tls.certresolver=lets-encrypt
          - traefik.http.services.adminer.loadbalancer.server.port=8080
        networks:
          - internal
          - web
        depends_on:
          - mysql

    This container is based on the official Adminer image. The network and depends_on configuration for this container exactly match what you’re using for the blog container.

    The line traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`) tells Traefik to examine the host requested. If it matches the pattern of db-admin.your_domain, Traefik will route the traffic to the adminer container over port 8080.

    Your completed docker-compose.yml file will look like this:

    docker-compose.yml
    version: "3"
    
    networks:
      web:
        external: true
      internal:
        external: false
    
    services:
      blog:
        image: wordpress:4.9.8-apache
        environment:
          WORDPRESS_DB_PASSWORD:
        labels:
          - traefik.http.routers.blog.rule=Host(`blog.your_domain`)
          - traefik.http.routers.blog.tls=true
          - traefik.http.routers.blog.tls.certresolver=lets-encrypt
          - traefik.http.services.blog.loadbalancer.server.port=80
        networks:
          - internal
          - web
        depends_on:
          - mysql
    
      mysql:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD:
        networks:
          - internal
        labels:
          - traefik.enable=false
    
      adminer:
        image: adminer:4.6.3-standalone
        labels:
          - traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`)
          - traefik.http.routers.adminer.tls=true
          - traefik.http.routers.adminer.tls.certresolver=lets-encrypt
          - traefik.http.services.adminer.loadbalancer.server.port=8080
        networks:
          - internal
          - web
        depends_on:
          - mysql

    Save the file and exit the text editor.

    Next, set values in your shell for the WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD variables:

    $ export WORDPRESS_DB_PASSWORD=secure_database_password
    $ export MYSQL_ROOT_PASSWORD=secure_database_password
    

    Substitute secure_database_password with your desired database password. Remember to use the same password for both WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD.
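
    If you’d like to double-check that Docker Compose is picking these values up, you can render the resolved configuration. Note that this prints the passwords in plain text to your terminal, so treat it as a quick sanity check only:

     $ docker-compose config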

    With these variables set, run the containers using docker-compose:

     $ docker-compose up -d
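
    You can confirm that the blog, mysql, and adminer containers are all running with:

     $ docker-compose ps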

    Now watch the Traefik admin dashboard while it populates.

    Populated Traefik dashboard

    If you explore the Routers section you will find routers for adminer and blog configured with TLS:

    HTTP Routers w/ TLS

    Navigate to blog.your_domain, substituting your_domain with your domain. You’ll be redirected to a TLS connection and you can now complete the WordPress setup:

    WordPress setup screen
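
    If you’d like to confirm the redirect from the command line as well, you can make a plain HTTP request and inspect the response headers; Traefik should answer with a redirect whose Location header points at the https URL:

     $ curl -I http://blog.your_domain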

    Now access Adminer by visiting db-admin.your_domain in your browser, again substituting your_domain with your domain. The mysql container isn’t exposed to the outside world, but the adminer container has access to it through the internal Docker network that they share using the mysql container name as a hostname.

    On the Adminer login screen, enter root for Username, enter mysql for Server, and enter the value you set for MYSQL_ROOT_PASSWORD for the Password. Leave Database empty. Now press Login.

    Once logged in, you’ll see the Adminer user interface.

    Adminer connected to the MySQL database

    Both sites are now working, and you can use the dashboard at monitor.your_domain to keep an eye on your applications.

    Conclusion
    In this tutorial, you configured Traefik v2 to proxy requests to other applications in Docker containers.

    Traefik’s declarative configuration at the application container level makes it easy to configure more services, and there’s no need to restart the traefik container when you add new applications to proxy traffic to, since Traefik notices the changes immediately through the Docker socket file it’s monitoring.

    To learn more about what you can do with Traefik v2, head over to the official Traefik documentation.

How To Serve Flask Applications with Gunicorn and Nginx on Ubuntu 20.04

Introduction
In this guide, you will build a Python application using the Flask microframework on Ubuntu 20.04. The bulk of this article will cover how to set up the Gunicorn application server, how to launch the application, and how to configure Nginx to act as a front-end reverse proxy.

Before starting this guide, you should have:

  • A server with Ubuntu 20.04 installed and a non-root user with sudo privileges. Follow our initial server setup guide for guidance.
  • Nginx installed, following Steps 1 and 2 of How To Install Nginx on Ubuntu 20.04.
  • A domain name configured to point to your server. You can purchase one on Namecheap or get one for free on Freenom. You can learn how to point domains to DigitalOcean by following the relevant documentation on domains and DNS. Be sure to create the following DNS records:
    • An A record with your_domain pointing to your server’s public IP address.
    • An A record with www.your_domain pointing to your server’s public IP address.
  • Familiarity with the WSGI specification, which the Gunicorn server will use to communicate with your Flask application. This discussion covers WSGI in more detail.

Step 1 — Installing the Components from the Ubuntu Repositories

Our first step will be to install all of the pieces we need from the Ubuntu repositories. This includes pip, the Python package manager, which will manage our Python components. We will also get the Python development files necessary to build some of the Gunicorn components.

First, let’s update the local package index and install the packages that will allow us to build our Python environment. These will include python3-pip, along with a few more packages and development tools necessary for a robust programming environment:

 $ sudo apt update
 $ sudo apt install python3-pip python3-dev build-essential libssl-dev libffi-dev python3-setuptools

With these packages in place, let’s move on to setting up a virtual environment in order to isolate our Flask application from the other Python files on the system.

Start by installing the python3-venv package, which will install the venv module:

 $ sudo apt install python3-venv

Next, let’s make a parent directory for our Flask project. Move into the directory after you create it:

 $ mkdir ~/myproject
 $ cd ~/myproject

Create a virtual environment to store your Flask project’s Python requirements by typing:

 $ python3 -m venv myprojectenv

This will install a local copy of Python and pip into a directory called myprojectenv within your project directory.

Before installing applications within the virtual environment, you need to activate it. Do so by typing:

 $ source myprojectenv/bin/activate

Your prompt will change to indicate that you are now operating within the virtual environment. It will look something like this: (myprojectenv)user@host:~/myproject$.

Now that you are in your virtual environment, you can install Flask and Gunicorn and get started on designing your application.

First, let’s install wheel with the local instance of pip to ensure that our packages will install even if they are missing wheel archives:

 $ pip install wheel

 

Note

Regardless of which version of Python you are using, when the virtual environment is activated, you should use the pip command (not pip3).

 

Next, let’s install Flask and Gunicorn:

 $ pip install gunicorn flask

Creating a Sample App

Now that you have Flask available, you can create a simple application. Flask is a microframework. It does not include many of the tools that more full-featured frameworks might, and exists mainly as a module that you can import into your projects to assist you in initializing a web application.

While your application might be more complex, we’ll create our Flask app in a single file, called myproject.py:

 $ nano ~/myproject/myproject.py

The application code will live in this file. It will import Flask and instantiate a Flask object. You can use this to define the functions that should be run when a specific route is requested:

~/myproject/myproject.py
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "<h1 style='color:blue'>Hello There!</h1>"

if __name__ == "__main__":
    app.run(host='0.0.0.0')

This defines the content to present whenever the root domain is accessed. Save and close the file when you’re finished.

If you followed the initial server setup guide, you should have a UFW firewall enabled. To test the application, you need to allow access to port 5000:

 $ sudo ufw allow 5000

Now you can test your Flask app by typing:

 $ python myproject.py

You will see output like the following, including a helpful warning reminding you not to use this server setup in production:

Output
* Serving Flask app "myproject" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

Visit your server’s IP address followed by :5000 in your web browser:

http://your_server_ip:5000

You should see something like this:

Flask sample app

When you are finished, hit CTRL-C in your terminal window to stop the Flask development server.

Creating the WSGI Entry Point

Next, let’s create a file that will serve as the entry point for our application. This will tell our Gunicorn server how to interact with the application.

Let’s call the file wsgi.py:

 $ nano ~/myproject/wsgi.py

In this file, let’s import the Flask instance from our application and then run it:

~/myproject/wsgi.py
from myproject import app

if __name__ == "__main__":
    app.run()

Save and close the file when you are finished.

Your application is now written with an entry point established. We can now move on to configuring Gunicorn.

Before moving on, we should check that Gunicorn can serve the application correctly.

We can do this by simply passing it the name of our entry point. This is constructed as the name of the module (minus the .py extension), plus the name of the callable within the application. In our case, this is wsgi:app.

We’ll also specify the interface and port to bind to so that the application will be started on a publicly available interface:

 $ cd ~/myproject
 $ gunicorn --bind 0.0.0.0:5000 wsgi:app

You should see output like the following:

Output
[2020-05-20 14:13:00 +0000] [46419] [INFO] Starting gunicorn 20.0.4
[2020-05-20 14:13:00 +0000] [46419] [INFO] Listening at: http://0.0.0.0:5000 (46419)
[2020-05-20 14:13:00 +0000] [46419] [INFO] Using worker: sync
[2020-05-20 14:13:00 +0000] [46421] [INFO] Booting worker with pid: 46421

Visit your server’s IP address with :5000 appended to the end in your web browser again:

http://your_server_ip:5000

You should see your application’s output:

Flask sample app

When you have confirmed that it’s functioning properly, press CTRL-C in your terminal window.

We’re now done with our virtual environment, so we can deactivate it:

 $ deactivate

Any Python commands will now use the system’s Python environment again.

Next, let’s create the systemd service unit file. Creating a systemd unit file will allow Ubuntu’s init system to automatically start Gunicorn and serve the Flask application whenever the server boots.

Create a unit file ending in .service within the /etc/systemd/system directory to begin:

 $ sudo nano /etc/systemd/system/myproject.service

Inside, we’ll start with the [Unit] section, which is used to specify metadata and dependencies. Let’s put a description of our service here and tell the init system to only start this after the networking target has been reached:

/etc/systemd/system/myproject.service
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target

Next, let’s open up the [Service] section. This will specify the user and group that we want the process to run under. Let’s give our regular user account ownership of the process since it owns all of the relevant files. Let’s also give group ownership to the www-data group so that Nginx can communicate easily with the Gunicorn processes. Remember to replace the username here with your username:

/etc/systemd/system/myproject.service
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target

[Service]
User=sammy
Group=www-data

Next, let’s map out the working directory and set the PATH environmental variable so that the init system knows that the executables for the process are located within our virtual environment. Let’s also specify the command to start the service. This command will do the following:

  • Start 3 worker processes (though you should adjust this as necessary)
  • Create and bind to a Unix socket file, myproject.sock, within our project directory. We’ll set an umask value of 007 so that the socket file is created giving access to the owner and group, while restricting other access
  • Specify the WSGI entry point file name, along with the Python callable within that file (wsgi:app)

Systemd requires that we give the full path to the Gunicorn executable, which is installed within our virtual environment.
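
If you aren’t sure of this path on your system, one way to check is to confirm that the binary exists inside the virtual environment (adjust the path if your project lives somewhere else):

 $ ls /home/sammy/myproject/myprojectenv/bin/gunicorn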

Remember to replace the username and project paths with your own information:

/etc/systemd/system/myproject.service
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target

[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
ExecStart=/home/sammy/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app

Finally, let’s add an [Install] section. This will tell systemd what to link this service to if we enable it to start at boot. We want this service to start when the regular multi-user system is up and running:

/etc/systemd/system/myproject.service
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target

[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
ExecStart=/home/sammy/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app

[Install]
WantedBy=multi-user.target

With that, our systemd service file is complete. Save and close it now.

We can now start the Gunicorn service we created and enable it so that it starts at boot:

 $ sudo systemctl start myproject
 $ sudo systemctl enable myproject

Let’s check the status:

 $ sudo systemctl status myproject

You should see output like this:

Output
● myproject.service - Gunicorn instance to serve myproject
     Loaded: loaded (/etc/systemd/system/myproject.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2020-05-20 14:15:18 UTC; 1s ago
   Main PID: 46430 (gunicorn)
      Tasks: 4 (limit: 2344)
     Memory: 51.3M
     CGroup: /system.slice/myproject.service
             ├─46430 /home/sammy/myproject/myprojectenv/bin/python3 /home/sammy/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app
             ├─46449 /home/sammy/myproject/myprojectenv/bin/python3 /home/sammy/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app
             ├─46450 /home/sammy/myproject/myprojectenv/bin/python3 /home/sammy/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app
             └─46451 /home/sammy/myproject/myprojectenv/bin/python3 /home/sammy/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app

If you see any errors, be sure to resolve them before continuing with the tutorial.

Our Gunicorn application server should now be up and running, waiting for requests on the socket file in the project directory. Let’s now configure Nginx to pass web requests to that socket by making some small additions to its configuration file.

Begin by creating a new server block configuration file in Nginx’s sites-available directory. Let’s call this myproject to keep in line with the rest of the guide:

 $ sudo nano /etc/nginx/sites-available/myproject

Open up a server block and tell Nginx to listen on the default port 80. Let’s also tell it to use this block for requests for our server’s domain name:

/etc/nginx/sites-available/myproject
server {
    listen 80;
    server_name your_domain www.your_domain;
}

Next, let’s add a location block that matches every request. Within this block, we’ll include the proxy_params file that specifies some general proxying parameters that need to be set. We’ll then pass the requests to the socket we defined using the proxy_pass directive:

/etc/nginx/sites-available/myproject
server {
    listen 80;
    server_name your_domain www.your_domain;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/sammy/myproject/myproject.sock;
    }
}

Save and close the file when you’re finished.

To enable the Nginx server block configuration you’ve just created, link the file to the sites-enabled directory:

 $ sudo ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled

With the file in that directory, you can test for syntax errors:

 $ sudo nginx -t

If this returns without indicating any issues, restart the Nginx process to read the new configuration:

 $ sudo systemctl restart nginx

Finally, let’s adjust the firewall again. We no longer need access through port 5000, so we can remove that rule. We can then allow full access to the Nginx server:

 $ sudo ufw delete allow 5000
 $ sudo ufw allow 'Nginx Full'
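
You can verify the resulting firewall rules if you’d like:

 $ sudo ufw status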

You should now be able to navigate to your server’s domain name in your web browser:

http://your_domain

You should see your application’s output:

Flask sample app

If you encounter any errors, try checking the following:

  • sudo less /var/log/nginx/error.log: checks the Nginx error logs.
  • sudo less /var/log/nginx/access.log: checks the Nginx access logs.
  • sudo journalctl -u nginx: checks the Nginx process logs.
  • sudo journalctl -u myproject: checks your Flask app’s Gunicorn logs.

To ensure that traffic to your server remains secure, let’s get an SSL certificate for your domain. There are multiple ways to do this, including getting a free certificate from Let’s Encrypt, generating a self-signed certificate, or buying one from another provider and configuring Nginx to use it by following Steps 2 through 6 of How to Create a Self-signed SSL Certificate for Nginx in Ubuntu 20.04. We will go with option one for the sake of expediency.

Install Certbot’s Nginx package with apt:

 $ sudo apt install python3-certbot-nginx

Certbot provides a variety of ways to obtain SSL certificates through plugins. The Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary. To use this plugin, type the following:

 $ sudo certbot --nginx -d your_domain -d www.your_domain

This runs certbot with the --nginx plugin, using -d to specify the names we’d like the certificate to be valid for.

If this is your first time running certbot, you will be prompted to enter an email address and agree to the terms of service. After doing so, certbot will communicate with the Let’s Encrypt server, then run a challenge to verify that you control the domain you’re requesting a certificate for.

If that’s successful, certbot will ask how you’d like to configure your HTTPS settings:

Output
Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel):

Select your choice then hit ENTER. The configuration will be updated, and Nginx will reload to pick up the new settings. certbot will wrap up with a message telling you the process was successful and where your certificates are stored:

Output
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/your_domain/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/your_domain/privkey.pem
   Your cert will expire on 2020-08-18. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le
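
Let’s Encrypt certificates are valid for 90 days, and the Certbot package is designed to renew them automatically before they expire. If you’d like to confirm that renewal will work, you can perform a dry run:

 $ sudo certbot renew --dry-run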

If you followed the Nginx installation instructions in the prerequisites, you will no longer need the redundant HTTP profile allowance:

 $ sudo ufw delete allow 'Nginx HTTP'

To verify the configuration, navigate once again to your domain, using https://:

https://your_domain

You should see your application output once again, along with your browser’s security indicator, which should indicate that the site is secured.

Conclusion

In this guide, you created and secured a simple Flask application within a Python virtual environment. You created a WSGI entry point so that any WSGI-capable application server can interface with it, and then configured the Gunicorn app server to provide this function. Afterwards, you created a systemd service file to automatically launch the application server on boot. You also created an Nginx server block that passes web client traffic to the application server, relaying external requests, and secured traffic to your server with Let’s Encrypt.

Flask is a very simple, but extremely flexible framework meant to provide your applications with functionality without being too restrictive about structure and design. You can use the general stack described in this guide to serve the Flask applications that you design.

Easy Guide to Add Free SSL in WordPress with Let’s Encrypt

In this tutorial, we will learn how to add free SSL in WordPress with Let’s Encrypt.

What is SSL and Let’s Encrypt?

Every internet user shares lots of personal information each day. We do that when shopping online, creating accounts, signing into different websites, etc.

If not properly encrypted, then this information can be spied upon and stolen. This is where SSL comes in. It provides the encryption technology to secure the connection between a user’s browser and the web server.

Each site is issued a unique SSL certificate for identification purposes. If a server claims to be serving HTTPS and its certificate doesn’t match, then most modern browsers will warn the user before connecting to the site.

Unsecure connection warning in Google Chrome

Previously, the only way to secure sites with SSL was by using a paid SSL certificate.

Let’s Encrypt is a free, open certificate authority that aims to provide SSL certificates to the general public. It is a project of the Internet Security Research Group (ISRG), a public benefit corporation. Let’s Encrypt is sponsored by many companies including Google, Facebook, Sucuri, Mozilla, Cisco, etc.

Let's Encrypt

Having said that, let’s take a look at how you can add free SSL certificate to your WordPress site with Let’s Encrypt.

Easy Way – Using a Host That Offers Built-in Free SSL

As Let’s Encrypt is becoming popular, some WordPress hosting companies have already started offering built-in easy SSL set up.

The easiest way to add Let’s Encrypt free SSL to WordPress is by signing up with a hosting company that offers a built-in integration.

Setting up Free SSL with Let’s Encrypt on SiteGround

SiteGround is one of the most trusted and well-known hosting companies offering built-in integration of free SSL. We use Siteground for our List25 website.

Here is how to enable Let’s Encrypt free SSL in SiteGround.

Simply login to your cPanel dashboard and scroll down to the security section. There you will need to click on the Let’s Encrypt icon.

Let's Encrypt icon in cPanel

This will bring you to the Let’s Encrypt install page. You will need to select the domain name where you want to use the free SSL, and then provide a valid email address.

Install Let’s Encrypt

You can now click on the install button. Let’s Encrypt will now issue a unique SSL certificate for your website. Once it’s finished, you will see a success message.

Let's Encrypt installed

That’s all, you have successfully added Let’s Encrypt free SSL to your WordPress site.

However, your WordPress site is not yet ready to use it. First you will need to update your WordPress URLs and then fix the insecure content issue.

Don’t worry, we will show you how to do that. Skip to the section on updating URLs and fixing insecure content issues.

Setting up Free SSL with Let’s Encrypt on DreamHost

DreamHost is another popular WordPress hosting service provider that offers a built-in integration to set up free SSL on any of your domains hosted with them.

First you need to login to your Dreamhost dashboard. Under the Domains section, you need to click on secure hosting.

Secure Hosting

On the secure hosting page, you need to click on the ‘Add Secure Hosting’ button to continue.

Dreamhost will now ask you to select your domain. Below that it will give you an option to add free SSL certificate from Let’s Encrypt. You need to make sure that this checkbox is checked.

Adding secure hosting

You can optionally choose to add a unique IP to your domain name. It is not required, but will improve compatibility with older versions of Internet Explorer on Windows XP.

Click on the ‘Add Now’ button to finish the setup. DreamHost will now start setting up your free SSL certificate with Let’s Encrypt. You will see a success message like this:

Success message after adding free SSL on DreamHost

You have successfully added a free SSL certificate with Let’s Encrypt to your WordPress site on DreamHost.

You still need to update your WordPress URLs and fix the insecure content issue. Jump to the section on updating WordPress URLs after setting up SSL.

Installing Let’s Encrypt Free SSL on Other Web Hosts

Let’s Encrypt free SSL is a domain-based SSL certificate. This means that if you have a domain name, then you can set it up on any web host.

However, if your web host does not offer an easy integration like SiteGround or DreamHost, then you will need to go through a somewhat lengthy procedure.

This procedure differs from one web host to another. Most hosting companies have a support document explaining how to do that. You can also contact their support staff for detailed instructions.

Bluehost, one of the official WordPress hosting providers, allows you to add a third-party SSL certificate to domains hosted with them. For detailed instructions, take a look at their SSL installation of 3rd party certificate page.

Updating WordPress URLs After Setting up SSL

After setting up the Free SSL certificate with Let’s Encrypt, the next step is to move your WordPress URL from HTTP to HTTPS.

A normal site without SSL certificate uses HTTP protocol. This is usually highlighted with http prefix in web addresses, like this:

http://www.example.com

Secure websites with SSL certificates use HTTPS protocol. This means that their addresses look like this:

https://www.example.com

Without changing the URLs in your WordPress site, you will not be using SSL and your site will not be secure for collecting sensitive data.

Having said that, let’s see how to move your WordPress URLs from HTTP to HTTPS:

For Brand New WordPress Website

If you are working on a brand new website, then you can just go to your WordPress admin area and click on Settings. There you will need to update the WordPress Address (URL) and Site Address (URL) fields to use https.

Setting up WordPress to use HTTPS in URLs for a new website

Don’t forget to save your changes.

For Existing WordPress Sites

If your site has been live for a while, then chances are that it is indexed by search engines. Other people may have linked to it using http in the URL. You need to make sure that all traffic is redirected to the https URL.

First thing you need to do is install and activate the Really Simple SSL plugin. For more details, see our step by step guide on how to install a WordPress plugin.

The plugin will automatically detect your SSL certificate and set up your website to use it. In most cases, you will not have to make any more changes. The plugin will also fix the insecure content issue.

Update Google Analytics Settings

If you have Google Analytics installed on your WordPress site, then you need to update its settings and add your new URL with https.

Login to your Google Analytics dashboard and click on ‘Admin’ at the top menu. Next, you need to click on property settings under your website.

There you will see the default URL option. Click on http and then select https.

Changing default URL in Google Analytics

Don’t forget to click on the save button to store your settings.

More Options:

How To Install Let’s Encrypt SSL With Nginx on CentOS 7

How To Install Let’s Encrypt SSL on Ubuntu With Apache

That’s all, we hope this tutorial helped you add Free SSL in WordPress with Let’s Encrypt.