How to Install and Configure Shadowsocks on an Ubuntu Machine

Installing and configuring Shadowsocks on an Ubuntu machine involves several steps, including updating the system, installing necessary dependencies, downloading and setting up Shadowsocks, and configuring it to run as a service. Here’s a step-by-step guide to help you through the process:

Step 1: Update the System

First, ensure your system is up to date:

sudo apt update
sudo apt upgrade -y

Step 2: Install Necessary Dependencies

Shadowsocks requires Python and pip (Python package manager). Install them with:

sudo apt install python3 python3-pip -y

Step 3: Install Shadowsocks

Use pip to install Shadowsocks:

sudo pip3 install shadowsocks

Note: the shadowsocks package on PyPI has not been updated in years. If installation or startup fails on a newer Ubuntu release, the actively maintained shadowsocks-libev package (available through apt) is a common alternative.

Step 4: Configure Shadowsocks

Create a configuration file for Shadowsocks. A conventional location for the configuration file is /etc/shadowsocks/config.json. You might need to create the directory first:

sudo mkdir -p /etc/shadowsocks

Then create the configuration file:

sudo nano /etc/shadowsocks/config.json

Here’s a sample configuration:

{
    "server": "0.0.0.0",
    "server_port": 8388,
    "local_address": "127.0.0.1",
    "local_port": 1080,
    "password": "your_password",
    "timeout": 300,
    "method": "aes-256-cfb",
    "fast_open": false
}

Replace "your_password" with a strong password. You can also adjust the "server_port" and "method" as needed.

Step 5: Run Shadowsocks

To start Shadowsocks manually, use the following command:

sudo ssserver -c /etc/shadowsocks/config.json

Step 6: Configure Shadowsocks to Run as a Service

To ensure Shadowsocks starts automatically on system boot, create a systemd service file:

sudo nano /etc/systemd/system/shadowsocks.service

Add the following content to the file:

[Unit]
Description=Shadowsocks Proxy Server
After=network.target

[Service]
ExecStart=/usr/local/bin/ssserver -c /etc/shadowsocks/config.json
Restart=on-failure

[Install]
WantedBy=multi-user.target

Save and close the file. Then, enable and start the Shadowsocks service:

sudo systemctl enable shadowsocks
sudo systemctl start shadowsocks

Step 7: Verify the Service

Check the status of the Shadowsocks service to ensure it is running correctly:

sudo systemctl status shadowsocks

If everything is set up correctly, the status should indicate that Shadowsocks is active and running.
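
You can also confirm that the server is listening on its port. The check below assumes the default port 8388 from the sample configuration:

sudo ss -tlnp | grep 8388

If the server is up, the output will show an ssserver process bound to port 8388.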

Additional Configuration

For enhanced security and performance, consider configuring additional settings such as:

  • Firewall Rules: Allow the Shadowsocks server port through the firewall.

sudo ufw allow 8388/tcp
sudo ufw allow 8388/udp
sudo ufw enable

  • Optimizations: Adjust TCP settings or use fast_open if your kernel supports it; a sketch follows below.
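
As a minimal sketch of the fast_open optimization: TCP Fast Open must be enabled in the kernel before you set "fast_open": true in config.json. On most modern kernels you can enable it persistently like this:

echo "net.ipv4.tcp_fastopen = 3" | sudo tee /etc/sysctl.d/99-tcp-fastopen.conf
sudo sysctl -p /etc/sysctl.d/99-tcp-fastopen.conf

The value 3 enables Fast Open for both incoming and outgoing connections.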

By following these steps, you should have a fully functional Shadowsocks server running on your Ubuntu machine.
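
To test the server from a client machine, you can install the same package there and start a local SOCKS5 proxy with sslocal, replacing server_ip and your_password with your own values:

sslocal -s server_ip -p 8388 -k your_password -m aes-256-cfb -l 1080

Applications on the client can then use 127.0.0.1:1080 as a SOCKS5 proxy.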

Read more in the Shadowsocks documentation

Initial Server Setup with Ubuntu

Introduction
As part of the initial setup for a brand-new Ubuntu server, you should carry out a few crucial configuration tasks. These changes will improve your server’s security and usability and lay a strong foundation for further activities.

Step 1 — Logging in as root

Log in now as the root user using the following command (substitute the highlighted portion of the command with your server's public IP address):

ssh root@your_server_ip

Accept the warning about host authenticity if it appears. If you are using password authentication, provide your root password to log in.

The root user is the administrative user in a Linux environment that has very broad privileges. Because of the heightened privileges of the root account, you are discouraged from using it on a regular basis. This is because the root account is able to make very destructive changes, even by accident.

The next step is setting up a new user account with reduced privileges for day-to-day use.

Step 2 — Creating a New User

Once you are logged in as root, you’ll be able to add the new user account. In the future, we’ll log in with this new account instead of root.

This example creates a new user called john, but you should replace that with a username that you like:

adduser john

You will be asked a few questions, starting with the account password.

Enter a strong password and, optionally, fill in any of the additional information if you would like. This is not required and you can just hit ENTER in any field you wish to skip.

Step 3 — Granting Administrative Privileges
Now we have a new user account with regular account privileges. However, we may sometimes need to do administrative tasks.

To avoid having to log out of our normal user and log back in as the root account, we can set up what is known as superuser or root privileges for our normal account. This will allow our normal user to run commands with administrative privileges by putting the word sudo before the command.

To add these privileges to our new user, we need to add the user to the sudo group.

usermod -aG sudo john
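
You can confirm the new group membership right away; the groups command lists every group a user belongs to:

groups john

The output should now include sudo.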

Step 4 — Setting Up a Basic Firewall
Ubuntu servers can use the UFW firewall to make sure only connections to certain services are allowed. We can set up a basic firewall using this application.

Applications can register their profiles with UFW upon installation. These profiles allow UFW to manage these applications by name. OpenSSH, the service allowing us to connect to our server now, has a profile registered with UFW.

You can see this by typing:

ufw app list
Output
Available applications:
  OpenSSH

We need to make sure that the firewall allows SSH connections so that we can log back in next time. We can allow these connections by typing:

ufw allow OpenSSH

Afterwards, we can enable the firewall by typing:

ufw enable

Type y and press ENTER to proceed. You can see that SSH connections are still allowed by typing:

ufw status

As the firewall is currently blocking all connections except for SSH, if you install and configure additional services, you will need to adjust the firewall settings to allow traffic in.
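
For example, if you later install a web server, you would open its port with a command like the following (shown here for HTTP on port 80):

ufw allow 80/tcp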

Step 5 — Enabling External Access for Your Regular User

Now that we have a regular user for daily use, we need to make sure we can SSH into the account directly.

If you logged in to your root account using a password, then password authentication is enabled for SSH. You can SSH to your new user account by opening up a new terminal session and using SSH with your new username:

ssh john@your_server_ip

After entering your regular user’s password, you will be logged in. Remember, if you need to run a command with administrative privileges, type sudo before it like this:

sudo command_to_run

You will be prompted for your regular user password when using sudo for the first time each session (and periodically afterwards).

How to connect to a remote MySQL database using Linux terminal

MySQL is an open-source relational database management system (RDBMS).

In this article we will connect to a remote MySQL database in a simple way. After you set up a user with the proper access rights, run the command below.

$ mysql -u yourUser -p -h your_server_ip

Let's explain the above command:

-u tells mysql what is your username

-p tells mysql that you have a password; it will prompt you to enter it after you press ENTER

-h tells mysql the hostname or IP address of your MySQL server
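
Putting it together, here is a hypothetical example that connects to a server at 203.0.113.5 as the user appuser and runs a quick statement to confirm the connection (the -e flag executes a statement and exits):

$ mysql -u appuser -p -h 203.0.113.5 -e "SHOW DATABASES;"

If the credentials and host are correct, you will see the list of databases that the user can access.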

You can learn more from the MySQL database documentation

How To Install OpenEMR on Ubuntu 20.04 with a LAMP Stack (Apache, MySQL, PHP)

Introduction

OpenEMR is an open source electronic health records and medical practice management tool. It is used by physicians and healthcare facilities to manage electronic medical records, prescriptions, patient demographic tracking, scheduling, reports, and electronic billing. At the time of this publication, OpenEMR supports more than 30 languages.

In this tutorial, you will install OpenEMR on an Ubuntu 20.04 server running a LAMP environment (Linux, Apache, MySQL, PHP).

Prerequisites

To complete this tutorial, you will need:

    • An Ubuntu 20.04 server with a non-root sudo-enabled user account and a basic firewall. This can be configured using our initial server setup guide for Ubuntu 20.04.
    • A fully installed LAMP stack, including Apache, MySQL, and PHP, with firewall settings adjusted to allow HTTP traffic. Instructions for installing a LAMP stack can be found in Steps 1 through 3 in our guide How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 20.04. Note that Steps 4 through 6 of the LAMP guide are optional as they are for testing purposes and unnecessary for this tutorial.

Step 1 — Installing Additional PHP Extensions

When setting up our LAMP stack, a minimal set of extensions was required to get PHP to communicate with MySQL. OpenEMR requires two additional PHP extensions that you will need to install for it to work correctly. Use apt to update your server's package list and install the php-xml and php-mbstring extensions:

 $ sudo apt update
 $ sudo apt install php-xml php-mbstring

After both extensions have been installed, you’ll need to reload the Apache web server for changes to take effect:

 $ sudo systemctl reload apache2

When your webserver has reloaded, you should be ready to proceed to the next step.

Step 2 — Creating a MySQL Database for OpenEMR

You will now create a database in MySQL for OpenEMR. First, log in to MySQL as the database root user:

 $ sudo mysql

Once you are logged into MySQL as the database root user, create a database named openemr with the following command:

mysql> CREATE DATABASE openemr;

Next, create a new user and assign them a password by replacing PASSWORD below with a strong password of your choosing:

mysql> CREATE USER 'openemr_user'@'localhost' IDENTIFIED BY 'PASSWORD';

Next, grant the new user permission to the openemr database:

mysql> GRANT ALL PRIVILEGES ON openemr.* TO 'openemr_user'@'localhost';

To enable these changes, enter the following command:

mysql> FLUSH PRIVILEGES;

Once you have flushed the privileges, you can now exit MySQL:

mysql> exit
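
If you want to confirm the grant before moving on, you can check it from the shell in one step:

 $ sudo mysql -e "SHOW GRANTS FOR 'openemr_user'@'localhost';"

The output should list ALL PRIVILEGES on openemr.*.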

You are now ready to proceed to the next step.

Step 3 — Configuring PHP for OpenEMR

In this step, you'll make some changes to the php.ini file as recommended by the OpenEMR documentation. If you followed the prerequisites on a fresh Ubuntu 20.04 server, the php.ini file that applies to your Apache web server should be located at /etc/php/7.4/apache2/php.ini. If you have a different PHP version, this path may be slightly different. Adjust as necessary and open the file with a text editor of your choice. Here, we'll use nano:

 $ sudo nano /etc/php/7.4/apache2/php.ini

Once you are in the php.ini file, you will change the values of several options as recommended by OpenEMR. If you are using nano, you can search for these options using CTRL + W. If there is a semicolon ; in front of the option you are adjusting, make sure to delete it as a semicolon is used to comment out an option.

Values for the following options should be changed:

max_input_vars

This option limits the number of variables your server can use in a single function. OpenEMR requires this option to have the value 3000:

/etc/php/7.4/apache2/php.ini

max_input_vars = 3000

max_execution_time

This option limits the amount of time (in seconds) a script is allowed to run before being terminated. OpenEMR requires this option to have the value 60:

/etc/php/7.4/apache2/php.ini


max_execution_time = 60

max_input_time

This option limits the time in seconds a script is allowed to parse input data. OpenEMR requires this option to have the value -1, which means that the max_execution_time is used instead:

/etc/php/7.4/apache2/php.ini

max_input_time = -1

post_max_size

This option limits the size of a post, including uploaded files. OpenEMR requires this option to have a value of 30M:

/etc/php/7.4/apache2/php.ini

post_max_size = 30M
memory_limit

This option limits the amount of memory a script is allowed to allocate. OpenEMR requires this option to have a value of 256M:

/etc/php/7.4/apache2/php.ini

memory_limit = 256M

mysqli.allow_local_infile

This option enables access to local files with LOAD DATA statements. OpenEMR requires this option to be turned on:

/etc/php/7.4/apache2/php.ini

mysqli.allow_local_infile = On

When you are done adjusting the options, save and exit the file. If you are using nano, you can do that by pressing CTRL+X, then Y and ENTER to confirm.

Next, you’ll need to reload the Apache web server for changes to take effect:

 $ sudo systemctl reload apache2
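
To double-check that every value was changed, you can grep for the option names directly in the file:

 $ grep -E "max_input_vars|max_execution_time|max_input_time|post_max_size|memory_limit|allow_local_infile" /etc/php/7.4/apache2/php.ini

Each option should appear with the value OpenEMR expects; any line that still begins with a semicolon is commented out.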

When your webserver has reloaded, you should be ready to proceed to the next step.
Step 4 — Downloading OpenEMR
In this step, you will download OpenEMR and prepare its files for installation. To start, download OpenEMR using the command wget, which retrieves files from the internet:

 $ wget https://downloads.sourceforge.net/project/openemr/OpenEMR%20Current/5.0.2.1/openemr-5.0.2.tar.gz

Next, extract the files using the tar command. The xvzf argument tells tar to extract the files (x), list the files as they are extracted (v), uncompress them with gzip (z), and read from the file named in the command (f).

 $ tar xvzf openemr*.tar.gz

When the files are done being extracted, you should have a directory named openemr-5.0.2. Change the directory name to openemr using the mv command:

 $ mv openemr-5.0.2 openemr

Next, move the directory to your HTML directory:

 $ sudo mv openemr /var/www/html/

You now need to change the ownership of the directory. Use the chown command with the -R flag to set the owner of all files and the group associated with openemr to www-data:

 $ sudo chown -R www-data:www-data /var/www/html/openemr

For the installation process, OpenEMR also requires you to change the permissions of the sqlconf.php file so that all users can read and write the file but cannot execute it. After the installation is finished, we'll change these permissions once again to secure your setup. These permissions can be granted with the chmod command using 666 as the argument:

 $ sudo chmod 666 /var/www/html/openemr/sites/default/sqlconf.php
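
You can confirm the new mode by listing the file; the permissions column should read -rw-rw-rw-:

 $ ls -l /var/www/html/openemr/sites/default/sqlconf.php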

After you change the permissions for the sqlconf.php file, you are ready to proceed to the next step.
Step 5 — Installing OpenEMR
In this step, you will install OpenEMR through a web browser and configure the Apache web server. Open a web browser and navigate to http://server_ip/openemr, replacing server_ip with the IP address of your server.

If everything is working correctly, the browser should display the OpenEMR Setup page:

OpenEMR setup page
Click Proceed to Step 1. You should now be directed to a new OpenEMR Setup page for Step 1 of the installation process:

OpenEMR setup page — Step 1
On the new page, select I have already created a database as you already created an OpenEMR database in Step 2 of this tutorial. Then click Proceed to Step 2.

Your browser should now display Step 2 of the OpenEMR Setup:

OpenEMR setup page — Step 2
In the Login and Password fields in the MySQL Server Details section, enter the username and password you picked in Step 2.

In the OpenEMR Initial User Details section, create an Initial User Login Name and password.

If you’d like to enable 2 Factor Authentication for the initial user, click the option Enable 2FA.

Then click Create DB and User. It may take a few minutes for the next page to load. This page will verify the successful creation of the user and database:

OpenEMR setup page — Step 3
Click Proceed to Step 4 to continue. The next page will confirm the creation and configuration of the Access Control List:

OpenEMR setup page — Step 4

Click Proceed to Step 5 to continue. The next page will show you the required PHP configurations for OpenEMR. Your current configuration should match their requirements as you already adjusted them in Step 3.

OpenEMR setup page — Step 5
Click Proceed to Step 6 to continue. The next page will show you how to configure your Apache Web Server for OpenEMR:

OpenEMR setup page — Step 6
To configure the Apache Web Server for OpenEMR, create a new configuration file named openemr.conf. You can do that from your terminal using the nano editor:

 $ sudo nano /etc/apache2/sites-available/openemr.conf

Inside the file, paste the following directives:

/etc/apache2/sites-available/openemr.conf
<Directory "/var/www/html/openemr">
    AllowOverride FileInfo
    Require all granted
</Directory>

<Directory "/var/www/html/openemr/sites">
    AllowOverride None
</Directory>

<Directory "/var/www/html/openemr/sites/*/documents">
    Require all denied
</Directory>
Save and close the file. Then, restart Apache so that the changes are loaded:

 $ sudo systemctl restart apache2
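
If Apache fails to restart, a quick syntax check will usually point at the problem:

 $ sudo apache2ctl configtest

A correct configuration returns Syntax OK.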

Next, return to the browser and click Proceed to Select a Theme. On the next page, select a theme and click Proceed to Final Step:

OpenEMR setup page — Step 7
You should now be directed to the final setup page with confirmation details regarding your installation:

OpenEMR setup page — Final Step
This page will also give the user name and password details for your initial user. Make sure to have these details available before you leave the page. When you are ready, click the link at the bottom to start using OpenEMR.

A window will pop up asking whether you want to register your installation. After making your choice, log in to OpenEMR with your initial user credentials. Once you are logged in, your browser should display the OpenEMR dashboard:

OpenEMR dashboard
Before going any further, make sure to change the file permissions as indicated in the next step.

Step 6 — Changing Filesystem Permissions
To improve the security of the system, OpenEMR advises users to change permissions of several files after installation. In this step, you will change the permissions of these files to further restrict read and write access.

First, you will change the permissions of the sqlconf.php file whose permissions you modified in Step 4 to give the owner read and write access and group members only read access.

These permissions can be granted with the chmod command using 644 as the argument:

 $ sudo chmod 644 /var/www/html/openemr/sites/default/sqlconf.php

Next, you will change the permissions of several other files to allow only the file owner to read and write the file.

Grant these permissions by using the chmod command with the 600 argument on the following files:

 $ sudo chmod 600 /var/www/html/openemr/acl_setup.php
 $ sudo chmod 600 /var/www/html/openemr/acl_upgrade.php
 $ sudo chmod 600 /var/www/html/openemr/setup.php
 $ sudo chmod 600 /var/www/html/openemr/sql_upgrade.php
 $ sudo chmod 600 /var/www/html/openemr/gacl/setup.php
 $ sudo chmod 600 /var/www/html/openemr/ippf_upgrade.php
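
Equivalently, you can apply the same mode to all six files with a single loop:

 $ for f in acl_setup.php acl_upgrade.php setup.php sql_upgrade.php gacl/setup.php ippf_upgrade.php; do sudo chmod 600 /var/www/html/openemr/$f; done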

Your files should now have more secure permission settings.

In addition to changing file permissions, OpenEMR’s documentation strongly advises additional steps for securing each of OpenEMR’s components. These steps include deleting scripts in OpenEMR after installation, enforcing strong passwords, enabling HTTPS-only traffic, adjusting the firewall, and hardening Apache. Make sure to visit OpenEMR’s security documentation to learn more about how you can best protect your OpenEMR installation.

Conclusion
You have now installed OpenEMR on an Ubuntu 20.04 server using Apache, MySQL, and PHP. For instructions on pointing a domain name to your server, you can follow our guide How To Point to DigitalOcean Nameservers From Common Domain Registrars. For OpenEMR documentation, you can visit the OpenEMR Wiki Page.

How to Install WordPress with LEMP on Ubuntu 20.04

Introduction

WordPress is the most popular content management system (CMS) on the internet today. It allows users to set up flexible blogs and websites using a MySQL backend with PHP processing. WordPress has seen an incredible adoption rate among new and experienced engineers alike, and is a great choice for getting a website up and running efficiently. After an initial setup, almost all administration for WordPress websites can be done through its graphical interface. These features and more make WordPress a great choice for websites built to scale.

In this tutorial, you’ll focus on getting an instance of WordPress set up on a LEMP stack (Linux, Nginx, MySQL, and PHP) for an Ubuntu 20.04 server.

Prerequisites

In order to complete this tutorial, you’ll need access to an Ubuntu 20.04 server. To successfully install WordPress with LEMP on your server, you’ll also need to perform the following tasks before starting this tutorial:

  • Create a sudo user on your server: The steps in this tutorial use a non-root user with sudo privileges. You can create a user with sudo privileges by following our Ubuntu 20.04 initial server setup tutorial.
  • Install a LEMP stack: WordPress will need a web server, a database, and PHP in order to correctly function. Setting up a LEMP stack (Linux, Nginx, MySQL, and PHP) fulfills all of these requirements. Follow this tutorial to install and configure this software.
  • Secure your site with SSL: WordPress serves dynamic content and handles user authentication and authorization. TLS/SSL is the technology that allows you to encrypt the traffic from your site so that your connection is secure. The way you set up SSL will depend on whether you have a domain name for your site.

When you are finished with setup, log in to your server as the sudo user to continue.

Step 1 — Creating a MySQL Database and User for WordPress

WordPress uses MySQL to manage and store site and user information. Although you already have MySQL installed, let’s create a database and a user for WordPress to use.

To get started, log in to the MySQL root (administrative) account. If MySQL is configured to use the auth_socket authentication plugin (which is default), you can log in to the MySQL administrative account using sudo:

 $ sudo mysql

If you have changed the authentication method to use a password for the MySQL root account, use the following command instead:

 $ mysql -u root -p

You will be prompted for the password you set for the MySQL root account.

Once logged in, create a separate database that WordPress can control. You can call this whatever you would like, but we will be using wordpress in this guide to keep it simple. You can create a database for WordPress by entering:

mysql> CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;

Note: Every MySQL statement must end in a semicolon (;). If you've encountered an error, check to make sure the semicolon is present.

Next, let’s create a separate MySQL user account that we will use exclusively to operate on our new database. Creating single-purpose databases and accounts is a good idea from a management and security standpoint. We’ll use the name wordpressuser in this guide — feel free to change this if you’d like.

In the following command, you are going to create an account, set a password, and grant access to the database you created. Remember to choose a strong password here:

CREATE USER 'wordpressuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL ON wordpress.* TO 'wordpressuser'@'localhost';

You now have a database and user account, each made specifically for WordPress.

With the database tasks complete, let’s exit out of MySQL by typing:

EXIT

The MySQL session will exit, returning you to the regular Linux shell.

Step 2 — Installing Additional PHP Extensions

When you set up the LEMP stack, only a minimal set of extensions was required to get PHP to communicate with MySQL. WordPress and many of its plugins leverage additional PHP extensions, and you'll use a few more in this tutorial.

Let’s download and install some of the most popular PHP extensions for use with WordPress by typing:

 $ sudo apt update
 $ sudo apt install php-curl php-gd php-intl php-mbstring php-soap php-xml php-xmlrpc php-zip

Note: Each WordPress plugin has its own set of requirements. Some may require additional PHP extension packages to be installed. Check your plugin documentation to discover its PHP requirements. If they are available, they can be installed with apt as demonstrated above.

When you are finished installing the extensions, restart the PHP-FPM process so that the running PHP processor can leverage the newly installed features:

 $ sudo systemctl restart php7.4-fpm
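
If the PHP command-line binary is installed, you can confirm that the new extensions are available by listing PHP's loaded modules and filtering for a few of them:

 $ php -m | grep -E 'curl|mbstring|xml'

Each extension you installed should appear in the output.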

You now have all of the PHP extensions needed, installed on the server.

Step 3 — Configuring Nginx
Next, let's make a few adjustments to our Nginx server block files. Based on the prerequisite tutorials, you should have a configuration file for your site in the /etc/nginx/sites-available/ directory configured to respond to your server's domain name or IP address and protected by a TLS/SSL certificate. We'll use /etc/nginx/sites-available/wordpress as an example here, but you should substitute the path to your configuration file where appropriate.

Additionally, we will use /var/www/wordpress as the root directory of our WordPress install in this guide. Again, you should use the web root specified in your own configuration.

Note: It’s possible you are using the /etc/nginx/sites-available/default default configuration (with /var/www/html as your web root). This is fine to use if you’re only going to host one website on this server. If not, it’s best to split the necessary configuration into logical chunks, one file per site.

Open your site’s server block file with sudo privileges to begin:

 $ sudo nano /etc/nginx/sites-available/wordpress

Within the main server block, let’s add a few location blocks.

Start by creating exact-matching location blocks for requests to /favicon.ico and /robots.txt, both of which you do not want to log requests for.

Use a regular expression location to match any requests for static files. We will again turn off the logging for these requests and will mark them as highly cacheable, since these are typically expensive resources to serve. You can adjust this static files list to contain any other file extensions your site may use:

/etc/nginx/sites-available/wordpress
server {
    . . .

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; allow all; }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
    . . .
}

Inside of the existing location / block, let's adjust the try_files list. Comment out the default setting by prepending the line with a pound sign (#) and then add the second try_files line shown below. This way, instead of returning a 404 error as the default option, control is passed to the index.php file with the request arguments.

This should look something like this:

/etc/nginx/sites-available/wordpress
server {
. . .
location / {
#try_files $uri $uri/ =404;
try_files $uri $uri/ /index.php$is_args$args;
}
. . .
}

When you are finished, save and close the file.

Now, let’s check our configuration for syntax errors by typing:

 $ sudo nginx -t

If no errors were reported, reload Nginx by typing:

$ sudo systemctl reload nginx

Next, let’s download and set up WordPress.

Step 4 — Downloading WordPress
Now that your server software is configured, let’s download and set up WordPress. For security reasons, it is always recommended to get the latest version of WordPress directly from the project’s website.

Change into a writable directory and then download the compressed release by typing:

 $ cd /tmp

This changes your directory to the temporary folder. Then, enter the following command to download the latest version of WordPress in a compressed file:

 $ curl -LO https://wordpress.org/latest.tar.gz

Note: The -LO flag is used to get directly to the source of the compressed file. -L ensures that fetching the file is successful in the case of redirects, and -O writes the output of our remote file with a local file that has the same name. To learn more about curl commands, visit How to Download Files with cURL.

Extract the compressed file to create the WordPress directory structure:

 $ tar xzvf latest.tar.gz

You will be moving these files into our document root momentarily, but before you do, let’s copy over the sample configuration file to the filename that WordPress actually reads:

 $ cp /tmp/wordpress/wp-config-sample.php /tmp/wordpress/wp-config.php

Now, let’s copy the entire contents of the directory into our document root. We’re using the -a flag to make sure our permissions are maintained, and a dot at the end of our source directory to indicate that everything within the directory should be copied (including hidden files):

 $ sudo cp -a /tmp/wordpress/. /var/www/wordpress

Now that our files are in place, you’ll assign ownership to the www-data user and group. This is the user and group that Nginx runs as, and Nginx will need to be able to read and write WordPress files in order to serve the website and perform automatic updates:

 $ sudo chown -R www-data:www-data /var/www/wordpress

Files are now in the server’s document root and have the correct ownership, but you still need to complete some additional configuration.

Step 5 — Setting up the WordPress Configuration File
Next, let’s make some changes to the main WordPress configuration file.

When you open the file, you’ll start by adjusting some secret keys to provide some security for our installation. WordPress provides a secure generator for these values so that you don’t have to come up with values on your own. These are only used internally, so it won’t hurt usability to have complex, secure values here.

To grab secure values from the WordPress secret key generator, type:

 $ curl -s https://api.wordpress.org/secret-key/1.1/salt/

You will get back unique values that look something like this:
Warning: It is important that you request unique values each time. Do NOT copy the values shown below!

Output
define('AUTH_KEY',         '1jl/vqfs DO NOT COPY THESE VALUES');
define('SECURE_AUTH_KEY',  'E2N-h2]Dcvp+aS/p7X DO NOT COPY THESE VALUES {Ka(f;rv?Pxf})CgLi-3');
define('LOGGED_IN_KEY',    'W(50,{W^,OPB%PB DO NOT COPY THESE VALUES');
define('NONCE_KEY',        'll,4UC)7ua+8<!4VM+ DO NOT COPY THESE VALUES #`DXF+[$atzM7 o^-C7g');
define('AUTH_SALT',        'koMrurzOA+|L_lG}kf DO NOT COPY THESE VALUES 07VC*Lj*lD&?3w!BT#-');
define('SECURE_AUTH_SALT', 'p32*p,]z%LZ+pAu:VY DO NOT COPY THESE VALUES C-?y+K0DK_+F|0h{!_xY');
define('LOGGED_IN_SALT',   'i^/G2W7!-1H2OQ+t$3 DO NOT COPY THESE VALUES t6**bRVFSD[Hi])-qS`|');
define('NONCE_SALT',       'Q6]U:K?j4L%Z]}h^q7 DO NOT COPY THESE VALUES 1% ^qUswWgn+6&xqHN&%');

These are configuration lines that you can paste directly in your configuration file to set secure keys. Copy the output you received now.

Now, open the WordPress configuration file:

 $ sudo nano /var/www/wordpress/wp-config.php

Find the section that contains the dummy values for those settings. It will look something like this:
/var/www/wordpress/wp-config.php
. . .

define('AUTH_KEY',         'put your unique phrase here');
define('SECURE_AUTH_KEY',  'put your unique phrase here');
define('LOGGED_IN_KEY',    'put your unique phrase here');
define('NONCE_KEY',        'put your unique phrase here');
define('AUTH_SALT',        'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT',   'put your unique phrase here');
define('NONCE_SALT',       'put your unique phrase here');

Delete those lines and paste in the values you copied from the command line:

/var/www/wordpress/wp-config.php
. . .

define('AUTH_KEY',         'VALUES COPIED FROM THE COMMAND LINE');
define('SECURE_AUTH_KEY',  'VALUES COPIED FROM THE COMMAND LINE');
define('LOGGED_IN_KEY',    'VALUES COPIED FROM THE COMMAND LINE');
define('NONCE_KEY',        'VALUES COPIED FROM THE COMMAND LINE');
define('AUTH_SALT',        'VALUES COPIED FROM THE COMMAND LINE');
define('SECURE_AUTH_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
define('LOGGED_IN_SALT',   'VALUES COPIED FROM THE COMMAND LINE');
define('NONCE_SALT',       'VALUES COPIED FROM THE COMMAND LINE');

Next, let's modify some of the database connection settings at the beginning of the file. You'll have to adjust the database name, the database user, and the associated password that you configured within MySQL.

The other change you should make is to set the method that WordPress uses to write to the filesystem. Since you’ve given the web server permission to write where it needs to, you can explicitly set the filesystem method to “direct”. Failure to set this with our current settings would result in WordPress prompting for FTP credentials when we perform some actions. Add this setting below the database connection settings, or anywhere else in the file:

/var/www/wordpress/wp-config.php
. . .

define( 'DB_NAME', 'wordpress' );

/** MySQL database username */
define( 'DB_USER', 'wordpressuser' );

/** MySQL database password */
define( 'DB_PASSWORD', 'password' );

. . .

define( 'FS_METHOD', 'direct' );

Save and close the file when you're done.

Step 6 — Completing the Installation Through the Web Interface
Now that the server configuration is complete, you can finish up the installation through WordPress’ web interface.

In your web browser, navigate to your server’s domain name or public IP address:

http://server_domain_or_IP/wordpress

Select the language you would like to use:

WordPress language selection

Next, you will come to the main setup page.

Select a name for your WordPress site and choose a username (it is recommended not to choose something like “admin” for security purposes). A strong password is generated automatically. Save this password or select an alternative strong password.

Enter your email address and select whether you want to discourage search engines from indexing your site:

WordPress setup installation

When you click ahead, you will be taken to a page that prompts you to log in:

WordPress login prompt

Once you log in, you will be taken to the WordPress administration dashboard:

WordPress administration dashboard

Conclusion
WordPress should be installed and ready to use! Some common next steps are to choose the permalinks setting for your posts (can be found in Settings > Permalinks) or to select a new theme (in Appearance > Themes). If this is your first time using WordPress, explore the interface a bit to get acquainted with your new CMS.

How To Use Traefik v2 as a Reverse Proxy for Docker Containers on Ubuntu 20.04

Introduction

Docker can be an efficient way to run web applications in production, but you may want to run multiple applications on the same Docker host. In this situation, you’ll need to set up a reverse proxy. This is because you only want to expose ports 80 and 443 to the rest of the world.

Traefik is a Docker-aware reverse proxy that includes a monitoring dashboard. Traefik v1 has been widely used for a while, and you can follow this earlier tutorial to install Traefik v1. But in this tutorial, you'll install and configure Traefik v2, which includes quite a few differences.

The biggest difference between Traefik v1 and v2 is that frontends and backends were removed and their combined functionality spread out across routers, middlewares, and services. Previously a backend did the job of making modifications to requests and getting that request to whatever was supposed to handle it. Traefik v2 provides more separation of concerns by introducing middlewares that can modify requests before sending them to a service. Middlewares make it easier to specify a single modification step that might be used by a lot of different routes so that they can be reused (such as HTTP Basic Auth, which you'll see later). A router can also use many different middlewares.

In this tutorial you’ll configure Traefik v2 to route requests to two different web application containers: a WordPress container and an Adminer container, each talking to a MySQL database. You’ll configure Traefik to serve everything over HTTPS using Let’s Encrypt.

Prerequisites

To complete this tutorial, you will need the following:

      • One Ubuntu 20.04 server with a sudo non-root user and a firewall. You can set this up by following your Ubuntu 20.04 initial server setup guide.
      • Docker installed on your server, which you can accomplish by following Steps 1 and 2 of How to Install and Use Docker on Ubuntu 20.04.
      • Docker Compose installed using the instructions from Step 1 of How to Install Docker Compose on Ubuntu 20.04.
      • A domain and three A records, db-admin.your_domain, blog.your_domain, and monitor.your_domain. Each should point to the IP address of your server. You can learn how to point domains to DigitalOcean Droplets by reading through DigitalOcean's Domains and DNS documentation. Throughout this tutorial, substitute your domain for your_domain in the configuration files and examples.

        Step 1 — Configuring and Running Traefik

        The Traefik project has an official Docker image, so you will use that to run Traefik in a Docker container.

        But before you get your Traefik container up and running, you need to create a configuration file and set up an encrypted password so you can access the monitoring dashboard.

        You’ll use the htpasswd utility to create this encrypted password. First, install the utility, which is included in the apache2-utils package:

         $ sudo apt-get install apache2-utils

        Next, generate the password entry, substituting secure_password with the password you want to use for the Traefik admin user:

         $ htpasswd -nb admin secure_password

        The output from the program will look like this:

        Output
        admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/

        You'll use this output in the Traefik configuration file to set up HTTP Basic Authentication for the Traefik health check and monitoring dashboard. Copy the entire output line so you can paste it later.

        To configure the Traefik server, you’ll create two new configuration files called traefik.toml and traefik_dynamic.toml using the TOML format. TOML is a configuration language similar to INI files, but standardized. These files let us configure the Traefik server and various integrations, or providers, that you want to use. In this tutorial, you will use three of Traefik’s available providers: api, docker, and acme. The last of these, acme, supports TLS certificates using Let’s Encrypt.

        Create and open traefik.toml using nano or your preferred text editor:

         $ nano traefik.toml

        First, you want to specify the ports that Traefik should listen on using the entryPoints section of your config file. You want two entry points because you want to listen on ports 80 and 443. Let's call these web (port 80) and websecure (port 443).

        Add the following configurations:

        traefik.toml
        [entryPoints]
          [entryPoints.web]
            address = ":80"
            [entryPoints.web.http.redirections.entryPoint]
              to = "websecure"
              scheme = "https"
        
          [entryPoints.websecure]
            address = ":443"

    Note that you are also automatically redirecting traffic to be handled over TLS.

    Next, configure the Traefik api, which gives you access to both the API and your dashboard interface. The heading of [api] is all that you need because the dashboard is then enabled by default, but you’ll be explicit for the time being.

    Add the following code:

    traefik.toml
    ...
    [api]
      dashboard = true

    To finish securing your web requests you want to use Let’s Encrypt to generate valid TLS certificates. Traefik v2 supports Let’s Encrypt out of the box and you can configure it by creating a certificates resolver of the type acme.

    Let’s configure your certificates resolver now using the name lets-encrypt:

    traefik.toml
    ...
    [certificatesResolvers.lets-encrypt.acme]
      email = "your_email@your_domain"
      storage = "acme.json"
      [certificatesResolvers.lets-encrypt.acme.tlsChallenge]

    This section is called acme because ACME is the name of the protocol used to communicate with Let’s Encrypt to manage certificates. The Let’s Encrypt service requires registration with a valid email address, so to have Traefik generate certificates for your hosts, set the email key to your email address. You then specify that you will store the information that you will receive from Let’s Encrypt in a JSON file called acme.json.

    The acme.tlsChallenge section allows us to specify how Let's Encrypt can verify the certificate request. You're configuring it to complete the TLS challenge over port 443.

    Finally, you need to configure Traefik to work with Docker.

    Add the following configurations:

    traefik.toml
    ...
    [providers.docker]
      watch = true
      network = "web"

    The docker provider enables Traefik to act as a proxy in front of Docker containers. You’ve configured the provider to watch for new containers on the web network, which you’ll create soon.

    Our final configuration uses the file provider. With Traefik v2, static and dynamic configurations can’t be mixed and matched. To get around this, you will use traefik.toml to define your static configurations and then keep your dynamic configurations in another file, which you will call traefik_dynamic.toml. Here you are using the file provider to tell Traefik that it should read in dynamic configurations from a different file.

    Add the following file provider:

    traefik.toml
    [providers.file]
      filename = "traefik_dynamic.toml"

    Your completed traefik.toml will look like this:

    traefik.toml
    [entryPoints]
      [entryPoints.web]
        address = ":80"
        [entryPoints.web.http.redirections.entryPoint]
          to = "websecure"
          scheme = "https"
    
      [entryPoints.websecure]
        address = ":443"
    
    [api]
      dashboard = true
    
    [certificatesResolvers.lets-encrypt.acme]
      email = "your_email@your_domain"
      storage = "acme.json"
      [certificatesResolvers.lets-encrypt.acme.tlsChallenge]
    
    [providers.docker]
      watch = true
      network = "web"
    
    [providers.file]
      filename = "traefik_dynamic.toml"

    Save and close the file.

    Now let’s create traefik_dynamic.toml.

    The dynamic configuration values that you need to keep in their own file are the middlewares and the routers. To put your dashboard behind a password you need to customize the API’s router and configure a middleware to handle HTTP basic authentication. Let’s start by setting up the middleware.

    The middleware is configured on a per-protocol basis and since you’re working with HTTP you’ll specify it as a section chained off of http.middlewares. Next comes the name of your middleware so that you can reference it later, followed by the type of middleware that it is, which will be basicAuth in this case. Let’s call your middleware simpleAuth.

    Create and open a new file called traefik_dynamic.toml:

     $ nano traefik_dynamic.toml

    Add the following code. This is where you’ll paste the output from the htpasswd command:

    traefik_dynamic.toml
    [http.middlewares.simpleAuth.basicAuth]
      users = [
        "admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/"
      ]

    To configure the router for the api you’ll once again be chaining off of the protocol name, but instead of using http.middlewares, you’ll use http.routers followed by the name of the router. In this case, the api provides its own named router that you can configure by using the [http.routers.api] section. You’ll configure the domain that you plan on using with your dashboard also by setting the rule key using a host match, the entrypoint to use websecure, and the middlewares to include simpleAuth.

    Add the following configurations:

    traefik_dynamic.toml
    ...
    [http.routers.api]
      rule = "Host(`monitor.your_domain`)"
      entrypoints = ["websecure"]
      middlewares = ["simpleAuth"]
      service = "api@internal"
      [http.routers.api.tls]
        certResolver = "lets-encrypt"

    The web entry point handles port 80, while the websecure entry point uses port 443 for TLS/SSL. You automatically redirect all of the traffic on port 80 to the websecure entry point to force secure connections for all requests.

    Notice the last three lines here configure a service, enable tls, and configure certResolver to "lets-encrypt". Services are the final step to determining where a request is finally handled. The api@internal service is a built-in service that sits behind the API that you expose. Just like routers and middlewares, services can be configured in this file, but you won't need to do that to achieve your desired result.

    Your completed traefik_dynamic.toml file will look like this:

    traefik_dynamic.toml
    [http.middlewares.simpleAuth.basicAuth]
      users = [
        "admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/"
      ]
    
    [http.routers.api]
      rule = "Host(`monitor.your_domain`)"
      entrypoints = ["websecure"]
      middlewares = ["simpleAuth"]
      service = "api@internal"
      [http.routers.api.tls]
        certResolver = "lets-encrypt"

    Save the file and exit the editor.

    With these configurations in place, you will now start Traefik.

    Step 2 — Running the Traefik Container

    In this step you will create a Docker network for the proxy to share with containers. You will then access the Traefik dashboard. The Docker network is necessary so that you can use it with applications that are run using Docker Compose.

    Create a new Docker network called web:

     $ docker network create web

    When the Traefik container starts, you will add it to this network. Then you can add additional containers to this network later for Traefik to proxy to.

    Next, create an empty file that will hold your Let’s Encrypt information. You’ll share this into the container so Traefik can use it:

     $ touch acme.json

    Traefik will only be able to use this file if the root user inside of the container has unique read and write access to it. To do this, lock down the permissions on acme.json so that only the owner of the file has read and write permission.

     $ chmod 600 acme.json

    Once the file gets passed to Docker, the owner will automatically change to the root user inside the container.

    Finally, create the Traefik container with this command:

     $ docker run -d \
       -v /var/run/docker.sock:/var/run/docker.sock \
       -v $PWD/traefik.toml:/traefik.toml \
       -v $PWD/traefik_dynamic.toml:/traefik_dynamic.toml \
       -v $PWD/acme.json:/acme.json \
       -p 80:80 \
       -p 443:443 \
       --network web \
       --name traefik \
       traefik:v2.2
    

    This command is a little long. Let’s break it down.

    You use the -d flag to run the container in the background as a daemon. You then share your docker.sock file into the container so that the Traefik process can listen for changes to containers. You also share the traefik.toml and traefik_dynamic.toml configuration files into the container, as well as acme.json.

    Next, you map ports :80 and :443 of your Docker host to the same ports in the Traefik container so Traefik receives all HTTP and HTTPS traffic to the server.

    You set the network of the container to web, and you name the container traefik.

    Finally, you use the traefik:v2.2 image for this container so that you can guarantee that you’re not running a completely different version than this tutorial is written for.

    A Docker image’s ENTRYPOINT is a command that always runs when a container is created from the image. In this case, the command is the traefik binary within the container. You can pass additional arguments to that command when you launch the container, but you’ve configured all of your settings in the traefik.toml file.
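
    If the container starts but the dashboard never loads, inspecting the container's logs is the quickest way to spot a configuration or certificate problem:

     $ docker logs traefik

    Traefik prints provider and Let's Encrypt errors here as they happen.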

    With the container started, you now have a dashboard you can access to see the health of your containers. You can also use this dashboard to visualize the routers, services, and middlewares that Traefik has registered. You can try to access the monitoring dashboard by pointing your browser to https://monitor.your_domain/dashboard/ (the trailing / is required).

    You will be prompted for your username and password, which are admin and the password you configured in Step 1.

    Once logged in, you’ll see the Traefik interface:

    Empty Traefik dashboard

    You will notice that there are already some routers and services registered, but those are the ones that come with Traefik and the router configuration that you wrote for the API.

    You now have your Traefik proxy running, and you’ve configured it to work with Docker and monitor other containers. In the next step you will start some containers for Traefik to proxy.

    Step 3 — Registering Containers with Traefik
    With the Traefik container running, you’re ready to run applications behind it. Let’s launch the following containers behind Traefik:

      • A blog using the official WordPress image.
      • A database management server using the official Adminer image.

    You'll manage both of these applications with Docker Compose using a docker-compose.yml file.

    Create and open the docker-compose.yml file in your editor:

     $ nano docker-compose.yml

    Add the following lines to the file to specify the version and the networks you’ll use:

    docker-compose.yml
    version: "3"
    
    networks:
      web:
        external: true
      internal:
        external: false

    You use Docker Compose version 3 because it’s the newest major version of the Compose file format.

    For Traefik to recognize your applications, they must be part of the same network, and since you created the network manually, you pull it in by specifying the network name of web and setting external to true. Then you define another network so that you can connect your exposed containers to a database container that you won’t expose through Traefik. You’ll call this network internal.

    Next, you’ll define each of your services, one at a time. Let’s start with the blog container, which you’ll base on the official WordPress image. Add this configuration to the bottom of the file:

    docker-compose.yml
    ...
    
    services:
      blog:
        image: wordpress:4.9.8-apache
        environment:
          WORDPRESS_DB_PASSWORD:
        labels:
          - traefik.http.routers.blog.rule=Host(`blog.your_domain`)
          - traefik.http.routers.blog.tls=true
          - traefik.http.routers.blog.tls.certresolver=lets-encrypt
          - traefik.port=80
        networks:
          - internal
          - web
        depends_on:
          - mysql

    The environment key lets you specify environment variables that will be set inside of the container. By not setting a value for WORDPRESS_DB_PASSWORD, you’re telling Docker Compose to get the value from your shell and pass it through when you create the container. You will define this environment variable in your shell before starting the containers. This way you don’t hard-code passwords into the configuration file.

    The labels section is where you specify configuration values for Traefik. Docker labels don’t do anything by themselves, but Traefik reads these so it knows how to treat containers. Here’s what each of these labels does:

    traefik.http.routers.blog.rule=Host(`blog.your_domain`) creates a new router for your container and then specifies the routing rule used to determine if a request matches this container.

    traefik.http.routers.blog.tls=true specifies that this router should use TLS.

    traefik.http.routers.blog.tls.certresolver=lets-encrypt specifies that the certificates resolver that you created earlier called lets-encrypt should be used to get a certificate for this route.

    traefik.port specifies the exposed port that Traefik should use to route traffic to this container.

    With this configuration, all traffic sent to your Docker host on port 80 or 443 with the domain of blog.your_domain will be routed to the blog container.

    You assign this container to two different networks so that Traefik can find it via the web network and it can communicate with the database container through the internal network.

    Lastly, the depends_on key tells Docker Compose that this container needs to start after its dependencies are running. Since WordPress needs a database to run, you must run your mysql container before starting your blog container.

    Next, configure the MySQL service:

    docker-compose.yml
    services:
    ...
      mysql:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD:
        networks:
          - internal
        labels:
          - traefik.enable=false

    You're using the official MySQL 5.7 image for this container. You'll notice that you're once again using an environment item without a value. The MYSQL_ROOT_PASSWORD and WORDPRESS_DB_PASSWORD variables will need to be set to the same value to make sure that your WordPress container can communicate with the MySQL container. You don't want to expose the mysql container to Traefik or the outside world, so you're only assigning this container to the internal network. Since Traefik has access to the Docker socket, the process will still expose a router for the mysql container by default, so you'll add the label traefik.enable=false to specify that Traefik should not expose this container.

    Finally, define the Adminer container:

    docker-compose.yml
    services:
    ...
      adminer:
        image: adminer:4.6.3-standalone
        labels:
          - traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`)
          - traefik.http.routers.adminer.tls=true
          - traefik.http.routers.adminer.tls.certresolver=lets-encrypt
          - traefik.port=8080
        networks:
          - internal
          - web
        depends_on:
          - mysql

    This container is based on the official Adminer image. The network and depends_on configuration for this container exactly match what you’re using for the blog container.

    The line traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`) tells Traefik to examine the host requested. If it matches the pattern of db-admin.your_domain, Traefik will route the traffic to the adminer container over port 8080.

    Your completed docker-compose.yml file will look like this:

    docker-compose.yml
    version: "3"
    
    networks:
      web:
        external: true
      internal:
        external: false
    
    services:
      blog:
        image: wordpress:4.9.8-apache
        environment:
          WORDPRESS_DB_PASSWORD:
        labels:
          - traefik.http.routers.blog.rule=Host(`blog.your_domain`)
          - traefik.http.routers.blog.tls=true
          - traefik.http.routers.blog.tls.certresolver=lets-encrypt
          - traefik.port=80
        networks:
          - internal
          - web
        depends_on:
          - mysql
    
      mysql:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD:
        networks:
          - internal
        labels:
          - traefik.enable=false
    
      adminer:
        image: adminer:4.6.3-standalone
        labels:
          - traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`)
          - traefik.http.routers.adminer.tls=true
          - traefik.http.routers.adminer.tls.certresolver=lets-encrypt
          - traefik.port=8080
        networks:
          - internal
          - web
        depends_on:
          - mysql

    Save the file and exit the text editor.

    Next, set values in your shell for the WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD variables:

    $ export WORDPRESS_DB_PASSWORD=secure_database_password
    $ export MYSQL_ROOT_PASSWORD=secure_database_password
    

    Substitute secure_database_password with your desired database password. Remember to use the same password for both WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD.

    With these variables set, run the containers using docker-compose:

     $ docker-compose up -d
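
    You can confirm that all three containers came up before moving to the browser:

     $ docker-compose ps

    Each service should show a State of Up.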

    Now watch the Traefik admin dashboard while it populates.

    Populated Traefik dashboard

    If you explore the Routers section you will find routers for adminer and blog configured with TLS:

    HTTP Routers w/ TLS

    Navigate to blog.your_domain, substituting your_domain with your domain. You’ll be redirected to a TLS connection and you can now complete the WordPress setup:

    WordPress setup screen

    Now access Adminer by visiting db-admin.your_domain in your browser, again substituting your_domain with your domain. The mysql container isn’t exposed to the outside world, but the adminer container has access to it through the internal Docker network that they share using the mysql container name as a hostname.

    On the Adminer login screen, enter root for Username, enter mysql for Server, and enter the value you set for MYSQL_ROOT_PASSWORD for the Password. Leave Database empty. Now press Login.

    Once logged in, you’ll see the Adminer user interface.

    Adminer connected to the MySQL database

    Both sites are now working, and you can use the dashboard at monitor.your_domain to keep an eye on your applications.

    Conclusion
    In this tutorial, you configured Traefik v2 to proxy requests to other applications in Docker containers.

    Traefik’s declarative configuration at the application container level makes it easy to configure more services, and there’s no need to restart the traefik container when you add new applications, since Traefik notices the changes immediately through the Docker socket file it’s monitoring.

    To learn more about what you can do with Traefik v2, head over to the official Traefik documentation.

How To Set Up Physical Streaming Replication with PostgreSQL 12 on Ubuntu 20.04

Physical Streaming Replication with PostgreSQL 12 on Ubuntu 20.04

Introduction

Streaming replication is a popular method you can use to horizontally scale your relational databases. It uses two or more copies of the same database cluster running on separate machines. One database cluster is referred to as the primary and serves both read and write operations; the others, referred to as the replicas, serve only read operations. You can also use streaming replication to provide high availability of a system. If the primary database cluster or server were to unexpectedly fail, the replicas can continue serving read operations, or one of them can be promoted to become the new primary cluster.

PostgreSQL is a widely used relational database that supports both logical and physical replication. Logical replication streams high-level changes from the primary database cluster to the replica databases, and you can use it to stream changes to just a single database or table. In physical replication, by contrast, changes to the Write-Ahead Logging (WAL) files are streamed and replayed on the replica clusters. As a result, you can’t replicate specific areas of a primary database cluster; instead, all changes to the primary are replicated.

In this tutorial, you will set up physical streaming replication with PostgreSQL 12 on Ubuntu 20.04 using two separate machines running two separate PostgreSQL 12 clusters. One machine will be the primary and the other, the replica.

To complete this tutorial, you will need the following:

  • Two separate Ubuntu 20.04 machines; one referred to as the primary and the other referred to as the replica. You can set these up with our Initial Server Setup Guide, including non-root users with sudo permissions and a firewall.
  • Your firewalls configured to allow HTTP/HTTPS and traffic on port 5432—the default port used by PostgreSQL 12. You can follow How To Set Up a Firewall with ufw on Ubuntu 20.04 to configure these firewall settings.
  • PostgreSQL 12 running on both Ubuntu 20.04 Servers. Follow Step 1 of the How To Install and Use PostgreSQL on Ubuntu 20.04 tutorial that covers the installation and basic usage of PostgreSQL on Ubuntu 20.04.

Step 1 — Configuring the Primary Database to Accept Connections

In this first step, you’ll configure the primary database to allow your replica database(s) to connect. By default, PostgreSQL only listens to the localhost (127.0.0.1) for connections. To change this, you’ll first edit the listen_addresses configuration parameter on the primary database.

On your primary server, run the following command to connect to the PostgreSQL cluster as the default postgres user:

 $ sudo -u postgres psql

Once you have connected to the database, you’ll modify the listen_addresses parameter using the ALTER SYSTEM command:

  • ALTER SYSTEM SET listen_addresses TO 'your_replica_IP_addr';

Replace 'your_replica_IP_addr' with the IP address of your replica machine.

You will receive the following output:

Output
ALTER SYSTEM

The command you just entered instructs the PostgreSQL database cluster to allow connections only from your replica machine. If you were using more than one replica machine, you would list the IP addresses of all your replicas separated by commas. You could also use '*' to allow connections from all IP addresses; however, this isn’t recommended for security reasons.
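
For example, if you had two replicas at the hypothetical addresses 203.0.113.5 and 203.0.113.10, you would run:

  • ALTER SYSTEM SET listen_addresses TO '203.0.113.5, 203.0.113.10';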

Note: You can also run the command on the database from the terminal using psql -c as follows:

 $ sudo -u postgres psql -c "ALTER SYSTEM SET listen_addresses TO 'your_replica_IP_addr';"

Alternatively, you can change the value for listen_addresses by manually editing the postgresql.conf configuration file, which you can find in the /etc/postgresql/12/main/ directory by default. You can also get the location of the configuration file by running SHOW config_file; on the database cluster.

To open the file using nano, run:

 $ sudo nano /etc/postgresql/12/main/postgresql.conf
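
Inside the file, find the commented-out listen_addresses line, uncomment it, and set it to your replica’s IP address:

/etc/postgresql/12/main/postgresql.conf
. . .
listen_addresses = 'your_replica_IP_addr'
. . .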


Once you’re done, your primary database is configured to accept connections from other machines; the new listen_addresses value takes effect when you restart the cluster at the end of the next step. Next, you’ll create a role with the appropriate permissions that the replica will use when connecting to the primary.

Step 2 — Creating a Special Role with Replication Permissions

Now, you need to create a role in the primary database that has permission to replicate the database. Your replica will use this role when connecting to the primary. Creating a separate role just for replication also has security benefits. Your replica won’t be able to manipulate any data on the primary; it will only be able to replicate the data.

To create a role, you need to run the following command on the primary cluster:

  • CREATE ROLE test WITH REPLICATION PASSWORD 'testpassword' LOGIN;

You’ll receive the following output:

Output
CREATE ROLE

This command creates a role named test with the password 'testpassword', which has permission to replicate the database cluster.
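
If you’d like to confirm that the role was created with the correct permissions, you can list all roles with psql’s \du meta-command; the test role will appear with Replication among its attributes:

  • \du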

PostgreSQL has a special replication pseudo-database that the replica connects to, but you first need to edit the /etc/postgresql/12/main/pg_hba.conf configuration file to allow your replica to access it. So, exit the PostgreSQL command prompt by running:

  • \q

Now that you’re back at your terminal command prompt, open the /etc/postgresql/12/main/pg_hba.conf configuration file using nano:

 $ sudo nano /etc/postgresql/12/main/pg_hba.conf

Append the following line to the end of the pg_hba.conf file:

/etc/postgresql/12/main/pg_hba.conf
. . .
host    replication     test    your-replica-IP/32   md5

This ensures that your primary allows your replica to connect to the replication pseudo-database using the test role you created earlier. The host value means to accept non-local connections via plain or SSL-encrypted TCP/IP sockets. replication is the name of the special pseudo-database that PostgreSQL uses for replication. Finally, the value md5 is the type of authentication used. If you want to have more than one replica, add the same line again to the end of the file with the IP address of the other replica, as in the example below.
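
For example, if you had a second replica at the hypothetical address 203.0.113.10, the end of the file would contain both entries:

/etc/postgresql/12/main/pg_hba.conf
. . .
host    replication     test    your-replica-IP/32   md5
host    replication     test    203.0.113.10/32      md5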

To ensure these changes to the configuration file are implemented, you need to restart the primary cluster using:

 $ sudo systemctl restart postgresql@12-main
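
You can confirm that the cluster came back up cleanly by checking its status:

 $ sudo systemctl status postgresql@12-main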

If your primary cluster restarted successfully, it is correctly set up and ready to start streaming once your replica connects. Next, you’ll move on to setting up your replica cluster.

Step 3 — Backing Up the Primary Cluster on the Replica

As you are setting up physical replication with PostgreSQL in this tutorial, you need to perform a physical backup of the primary cluster’s data files into the replica’s data directory. To do this, you’ll first clear out all the files in the replica’s data directory. The default data directory for PostgreSQL on Ubuntu is /var/lib/postgresql/12/main/.

You can also find PostgreSQL’s data directory by running the following command on the replica’s database:

  • SHOW data_directory;
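
On a default Ubuntu installation, this will return output similar to the following:

Output
       data_directory
------------------------------
 /var/lib/postgresql/12/main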

Once you have the location of the data directory, run the following command to remove everything:

 $ sudo -u postgres rm -r /var/lib/postgresql/12/main/*

Since the default owner of the files in the directory is the postgres user, you will need to run the command as postgres using sudo -u postgres.

Note:
In the exceedingly rare case that a file in the directory is corrupted and the command does not work, remove the main directory altogether and recreate it with the appropriate permissions as follows:

 $ sudo -u postgres rm -r /var/lib/postgresql/12/main
 $ sudo -u postgres mkdir /var/lib/postgresql/12/main
 $ sudo -u postgres chmod 700 /var/lib/postgresql/12/main


Now that the replica’s data directory is empty, you can perform a physical backup of the primary’s data files. PostgreSQL conveniently has the utility pg_basebackup that simplifies the process. It even allows you to put the server into standby mode using the -R option.

Execute the pg_basebackup command on the replica as follows:

 $ sudo -u postgres pg_basebackup -h primary-ip-addr -p 5432 -U test -D /var/lib/postgresql/12/main/ -Fp -Xs -R

This command uses the following options:

  • The -h option specifies a non-local host. Here, enter the IP address of the server running your primary cluster.
  • The -p option specifies the port number used to connect to the primary server. By default, PostgreSQL uses port 5432.
  • The -U option allows you to specify the user you connect to the primary cluster as. This is the role you created in the previous step.
  • The -D flag is the output directory of the backup. This is your replica’s data directory that you emptied just before.
  • The -Fp option outputs the backup in plain format instead of as a tar file.
  • The -Xs option streams the contents of the WAL log as the backup of the primary is performed.
  • Lastly, -R creates an empty file, named standby.signal, in the replica’s data directory. This file lets your replica cluster know that it should operate as a standby server. The -R option also adds the connection information about the primary server to the postgresql.auto.conf file. This is a special configuration file that is read whenever the regular postgresql.conf file is read, but the values in the .auto file override the values in the regular configuration file.

When the pg_basebackup command connects to the primary, you will be prompted to enter the password for the role you created in the previous step. Depending on the size of your primary database cluster, it may take some time to copy all the files.
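
Once the copy finishes, you can confirm that the -R option did its work by listing the replica’s data directory and checking for the standby.signal and postgresql.auto.conf files:

 $ sudo -u postgres ls /var/lib/postgresql/12/main/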

Your replica now has all the data files from the primary that it requires to begin replication. Next, you’ll restart the replica so that it starts in standby mode and begins replicating.

Step 4 — Restarting and Testing the Clusters

Now that the primary cluster’s data files have been successfully backed up on the replica, the next step is to restart the replica database cluster to put it into standby mode. To restart the replica database, run the following command:

 $ sudo systemctl restart postgresql@12-main

If your replica cluster restarted in standby mode successfully, it should have already connected to the primary database cluster on your other machine. To check if the replica has connected to the primary and the primary is streaming, connect to the primary database cluster by running:

 $ sudo -u postgres psql

Now query the pg_stat_replication table on the primary database cluster as follows:

  • SELECT client_addr, state FROM pg_stat_replication;

Running this query on the primary cluster will output something similar to the following:

Output
   client_addr    |  state
------------------+-----------
 your_replica_IP | streaming

If you have similar output, then the primary is correctly streaming to the replica.
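
As an additional end-to-end check, you can create a table on the primary and confirm that it appears on the replica. Here, replication_test is a hypothetical table name used only for this test. On the primary, run:

 $ sudo -u postgres psql -c "CREATE TABLE replication_test (id int);"

Then query the table on the replica:

 $ sudo -u postgres psql -c "SELECT * FROM replication_test;"

If replication is working, the query will succeed on the replica and return an empty table rather than an error. Since replicas are read-only, drop the test table on the primary when you’re done; the deletion will replicate automatically.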

Conclusion
You now have two Ubuntu 20.04 servers, each running a PostgreSQL 12 database cluster, with physical streaming replication between them. Any changes made to the primary database cluster will now also appear in the replica cluster.

You can also add more replicas to your setup if your databases need to handle more traffic.

If you wish to learn more about physical streaming replication including how to set up synchronous replication to ensure zero chance of losing any mission-critical data, you can read the entry in the official PostgreSQL docs.