NGINX

NGINX - Client Certificate Authentication

January 11, 2026

Summary

This guide sets up Nginx with client certificate authentication on Amazon Linux 2023, requiring clients to present a valid certificate signed by your CA to access the web server. Key steps include installing Nginx with TLS configuration, uploading server and CA certificates, configuring ssl_verify_client on for authentication, and installing client certificates on Windows machines. Common troubleshooting involves temporarily disabling SELinux if it blocks HTTPS traffic.

1. Launch EC2 Instance (Example operations)

# Amazon Linux 2023, t3.micro, in your VPC
# Security Group: Allow 443 from your network

sh-5.2$ cd ~
sh-5.2$ pwd
/home/ssm-user
sh-5.2$ aws s3 cp s3://gcs-share/certs/certs_v06.zip .
download: s3://gcs-share/certs/certs_v06.zip to ./certs_v06.zip
sh-5.2$ ls
certs_v06.zip
sh-5.2$ pwd
/home/ssm-user
sh-5.2$ sudo dnf install -y unzip tree
sh-5.2$ unzip certs_v06.zip
Archive:  certs_v06.zip
   creating: certs/
  inflating: certs/ca-bundle.crt
  inflating: certs/ca.crt
  inflating: certs/client-006.crt
  inflating: certs/client-006.pfx
  inflating: certs/client.key
  inflating: certs/dual-006.crt
  inflating: certs/dual-006.pfx
  inflating: certs/dual.key
  inflating: certs/mid-ca.crt
  inflating: certs/server-006.crt
  inflating: certs/server-006.pfx
  inflating: certs/server.key
sh-5.2$ tree certs
certs
├── ca-bundle.crt
├── ca.crt
├── client-006.crt
├── client-006.pfx
├── client.key
├── dual-006.crt
├── dual-006.pfx
├── dual.key
├── mid-ca.crt
├── server-006.crt
├── server-006.pfx
└── server.key

0 directories, 12 files

2. Install and Configure Nginx

sudo dnf update -y
sudo dnf install -y nginx
sudo systemctl enable nginx

# Create SSL directory
sudo mkdir -p /etc/nginx/ssl
sudo chmod 700 /etc/nginx/ssl

3. Upload Your Certificates

# Copy these files to /etc/nginx/ssl/:
# - server-006.crt (your server certificate)
# - server.key (your server private key)
# - ca-bundle.crt (mid-ca.crt + ca.crt concatenated)
sudo cp certs/* /etc/nginx/ssl/
sudo chmod 600 /etc/nginx/ssl/server.key
sudo chmod 644 /etc/nginx/ssl/ca-bundle.crt
sudo chmod 644 /etc/nginx/ssl/server-006.crt
sudo ls -l /etc/nginx/ssl/
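Before wiring these files into Nginx, it is worth confirming that the server certificate and private key actually belong together. The sketch below demonstrates the check on a throwaway self-signed pair; on the real host, point the two openssl commands at /etc/nginx/ssl/server-006.crt and /etc/nginx/ssl/server.key instead.

```shell
#!/bin/sh
# Demo of the cert/key match check using a throwaway pair; substitute the
# real server-006.crt and server.key paths on the host.
set -e
tmp=$(mktemp -d); cd "$tmp"
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key \
  -out server.crt -subj "/CN=demo" -days 1 2>/dev/null
# The public key extracted from the cert must equal the one derived
# from the private key; identical hashes mean they match.
a=$(openssl x509 -in server.crt -noout -pubkey | sha256sum)
b=$(openssl pkey -in server.key -pubout | sha256sum)
[ "$a" = "$b" ] && echo "cert and key match"
```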

4. Configure Nginx (/etc/nginx/nginx.conf)

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 443 ssl;
        server_name d11-lnx-web01.gcs.cloud;  # <-- SET THE FQDN OF THE SERVER

        # Server certificates
        ssl_certificate /etc/nginx/ssl/server-006.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

        # Client certificate authentication
        ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
        ssl_verify_client on;

        # SSL settings
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256;
        ssl_prefer_server_ciphers off;

        location / {
            root /usr/share/nginx/html;
            index index.html;

            # Add client cert info to response
            add_header X-Client-Cert-Subject $ssl_client_s_dn;
            add_header X-Client-Cert-Issuer $ssl_client_i_dn;
        }
    }
}
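With `ssl_verify_client on`, clients that present no certificate are refused during the TLS handshake, which can be hard to diagnose from a browser. During rollout you may prefer `optional`, which completes the handshake and lets you reject unverified clients with a clear 403 instead. A sketch of that variant (directives are standard ngx_http_ssl_module; the location body mirrors the config above):

```nginx
# Rollout variant (sketch): accept the handshake without a client cert,
# then return 403 from the location block when verification did not succeed.
ssl_verify_client optional;

location / {
    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }
    root /usr/share/nginx/html;
    index index.html;
}
```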

5. Create Sample HTML Page (Optional)

sudo tee /usr/share/nginx/html/index.html > /dev/null << 'EOF'
<!DOCTYPE html>
<html>
<head>
    <title>Client Certificate Authentication Test</title>
</head>
<body>
    <h1>Welcome! Client Certificate Authentication Successful</h1>
    <p>Your client certificate was verified successfully.</p>
    <p>This page requires a valid client certificate to access.</p>
</body>
</html>
EOF

6. Start Nginx

sudo nginx -t
sudo systemctl start nginx
sudo systemctl status nginx
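Once Nginx is up, mutual TLS can be exercised from the instance itself with curl. The host name and file paths below assume the setup above.

```shell
# Without a client certificate the request should fail during the handshake:
curl -k https://d11-lnx-web01.gcs.cloud/

# With the client certificate and key from the bundle, the page is returned:
curl -k --cert certs/client-006.crt --key certs/client.key \
     https://d11-lnx-web01.gcs.cloud/
```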

7. Client Setup (Windows machines)

  • Install the client certificate (client-006.pfx) in “Current User\Personal\Certificates”
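If you prefer scripting the import over clicking through the Certificates MMC, `certutil` can load the PFX into the Current User Personal store; the password placeholder is an assumption, use whatever the PFX was exported with.

```shell
certutil -user -p <pfx-password> -importPFX client-006.pfx
```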

NGINX Load Balancer - Secure gRPC

April 24, 2025

This guide extends our previous blog post on NGINX Load Balancer for WCF & gRPC by adding SSL connections to the gRPC protocol. The steps are similar; just update the config file bpserver-loadbalancer.conf.

Configuration File Location: /etc/nginx/conf.d/bpserver-loadbalancer.conf

# NGINX Load Balancer Configuration for Blue Prism Enterprise
# Defining two upstream blocks for different ports

upstream bpserver_backend_8199 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
}

upstream bpserver_backend_10000 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
}

server {
    listen 8199 ssl;
    server_name d11-lnx-alb01.gcs.cloud;
    
    ssl_certificate     /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    
    location / {
        proxy_pass https://bpserver_backend_8199;
        proxy_ssl_verify off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        proxy_pass_request_headers on;
        
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

server {
    listen 10000 ssl;  # Add ssl here
    http2 on;
    server_name d11-lnx-alb01.gcs.cloud;
    
    # Add SSL certificate configuration
    ssl_certificate     /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    
    location / {
        grpc_pass grpcs://bpserver_backend_10000;  # Change to grpcs:// for SSL
        
        # gRPC specific settings
        grpc_read_timeout 300s;
        grpc_send_timeout 300s;
        
        # Headers for gRPC
        grpc_set_header Host $host;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
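To confirm the listener on port 10000 really negotiates TLS with HTTP/2, an `openssl s_client` probe from any client machine is a quick check (host name as configured above):

```shell
# Look for "ALPN protocol: h2" in the output to confirm HTTP/2 over TLS.
openssl s_client -connect d11-lnx-alb01.gcs.cloud:10000 -alpn h2 -brief
```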

See also:

NGINX Load Balancer for WCF & gRPC

NGINX Load Balancer for WCF & gRPC

April 23, 2025

This guide extends our previous blog post on NGINX Load Balancing for WCF Applications by adding gRPC protocol support on port 10000. While the setup process remains similar, we’ll focus on the specific configuration changes needed in the bpserver-loadbalancer.conf file.

Configuration File Location: /etc/nginx/conf.d/bpserver-loadbalancer.conf

# NGINX Load Balancer Configuration for Blue Prism Enterprise
# Defining two upstream blocks for different ports

upstream bpserver_backend_8199 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
}

upstream bpserver_backend_10000 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
}

server {
    listen 8199 ssl;
    server_name d11-lnx-alb01.gcs.cloud;
    
    ssl_certificate     /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    
    location / {
        proxy_pass https://bpserver_backend_8199;
        proxy_ssl_verify off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        proxy_pass_request_headers on;
        
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

server {
    listen 10000; 
    http2 on;      # Add this line to enable HTTP/2
    server_name d11-lnx-alb01.gcs.cloud;
    
    location / {
        grpc_pass grpc://bpserver_backend_10000;  # Use grpc_pass instead of proxy_pass
        
        # gRPC specific settings
        grpc_read_timeout 300s;
        grpc_send_timeout 300s;
        
        # Headers for gRPC
        grpc_set_header Host $host;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
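Since this listener speaks plaintext HTTP/2 (h2c), a plain browser request will not upgrade to it; curl can talk h2c directly with prior knowledge, which makes a handy smoke test against the load balancer:

```shell
# -v shows the negotiated protocol; expect HTTP/2 frames in the trace.
curl -v --http2-prior-knowledge http://d11-lnx-alb01.gcs.cloud:10000/
```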

See also:

NGINX Load Balancer - Secure gRPC

NGINX Load Balancer for WCF App

April 21, 2025

This guide demonstrates how to implement a high-performance NGINX load balancer for WCF applications with the following features:

  • Enhanced security through SSL/TLS encryption
  • Reliable session management using IP-based persistence
  • Custom-tuned configurations for WCF service optimisation
  • Advanced timeout and buffer settings to handle complex WCF payloads

The configuration ensures reliable, secure, and efficient load balancing specifically optimised for WCF service applications, with built-in session persistence and performance tuning.

1. Install required packages and SSL certificates

NGINX Container - Load Balancer

April 13, 2025

Let’s build a Dockerized NGINX setup with:

  • SSL termination using a wildcard cert
  • Reverse proxy + Load balancing to 2 backend servers
  • Mounted volumes for certs and config

1. Updated Step for CA Chain

# Create the CA chain file:
cat mid-ca.crt ca.crt > ca-bundle.crt

Cert file        Purpose
server_001.crt   Wildcard cert for your domain
server.key       Private key for the wildcard cert
ca-bundle.crt    Combined mid-ca.crt + ca.crt (in that order)
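To sanity-check the bundle order, the sketch below builds a throwaway root → intermediate → server chain and verifies the leaf against a bundle assembled the same way (intermediate first, then root). All file names here are stand-ins for the real ca.crt, mid-ca.crt and server_001.crt.

```shell
#!/bin/sh
# End-to-end demo of the bundle order using throwaway certificates.
set -e
tmp=$(mktemp -d); cd "$tmp"

# Root CA (self-signed)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-root" -days 1 2>/dev/null

# Intermediate CA, signed by the root (must carry CA:TRUE)
openssl req -newkey rsa:2048 -nodes -keyout mid.key -out mid.csr \
  -subj "/CN=demo-mid" 2>/dev/null
printf 'basicConstraints=CA:TRUE\n' > mid.ext
openssl x509 -req -in mid.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -extfile mid.ext -out mid-ca.crt -days 1 2>/dev/null

# Server cert, signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=demo-server" 2>/dev/null
openssl x509 -req -in server.csr -CA mid-ca.crt -CAkey mid.key \
  -CAcreateserial -out server_001.crt -days 1 2>/dev/null

# Bundle intermediate + root, then verify the server cert against it
cat mid-ca.crt ca.crt > ca-bundle.crt
openssl verify -CAfile ca-bundle.crt server_001.crt
```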

2. Directory Structure (suggested)

sh-5.2$ tree
.
└── nginx-lb
    ├── Dockerfile
    ├── certs
    │   ├── ca-bundle.crt
    │   ├── ca.crt
    │   ├── mid-ca.crt
    │   ├── server-bundle.crt
    │   ├── server.key
    │   ├── server_001.crt
    │   └── server_001.pfx
    ├── docker-compose.yml
    ├── nginx
    │   └── nginx.conf
    └── nginx-log

3. Create Dockerfile

FROM nginx:alpine

# Create the log directory inside the container
RUN mkdir -p /var/log/nginx

# Copy NGINX config and certs into the image (will be overridden by volume)
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY certs/ /etc/nginx/certs/

# Expose port 443 for HTTPS
EXPOSE 443

4. Create nginx.conf

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    # Log format definition
    log_format detailed '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for" '
                       '"$proxy_host" "$upstream_addr"';

    # Access and error logs
    access_log /var/log/nginx/access.log detailed;
    error_log /var/log/nginx/error.log debug;

    ssl_certificate     /etc/nginx/certs/server_001.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca-bundle.crt;
    ssl_verify_client off;

    upstream backend_apis {
        server d11-api-demo1.gcs.cloud:443;
        server d11-api-demo2.gcs.cloud:443;
    }

    server {
        listen 443 ssl;
        server_name d11-alb-ngx01.gcs.cloud;

        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;

        location / {
            proxy_pass https://backend_apis;
            proxy_ssl_protocols TLSv1.2 TLSv1.3;
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;
            proxy_ssl_name $host;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}
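The directory tree above includes a docker-compose.yml that the post does not show; a minimal sketch consistent with that layout might look like the following (the service name and mount points are assumptions):

```yaml
services:
  nginx-lb:
    build: .
    ports:
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
      - ./nginx-log:/var/log/nginx
    restart: always
```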

NGINX Load Balancer - Bare Metal

April 13, 2025

  1. Install NGINX:
sudo apt update
sudo apt install nginx -y
  2. Set SSL Certificates
sh-5.2$ sudo mkdir -p /etc/nginx/ssl
sh-5.2$ sudo cp certs/* /etc/nginx/ssl/
sh-5.2$ sudo ls -l /etc/nginx/ssl/
total 32
-rw-r--r--. 1 root root 3830 Apr 13 15:08 ca-bundle.crt
-r--r--r--. 1 root root 1911 Apr 13 15:08 ca.crt
-r--r--r--. 1 root root 1919 Apr 13 15:08 mid-ca.crt
-rw-r--r--. 1 root root 6082 Apr 13 15:08 server-bundle.crt
-rw-------. 1 root root 1704 Apr 13 15:08 server.key
-rw-r--r--. 1 root root 2252 Apr 13 15:08 server_001.crt
-rw-------. 1 root root 3363 Apr 13 15:08 server_001.pfx
sh-5.2$
  3. Create the NGINX Load Balancing Config

Edit /etc/nginx/nginx.conf or (preferably) add a new file in /etc/nginx/conf.d/iis-loadbalancer.conf:
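The post stops before showing the file itself; below is a minimal sketch of what /etc/nginx/conf.d/iis-loadbalancer.conf could contain. The upstream host names are hypothetical placeholders; the certificate paths match the files copied in step 2.

```nginx
upstream iis_backend {
    ip_hash;                               # keep a client on one IIS node
    server d11-web-iis01.gcs.cloud:443;    # hypothetical backend hosts
    server d11-web-iis02.gcs.cloud:443;
}

server {
    listen 443 ssl;
    server_name d11-lnx-alb01.gcs.cloud;

    ssl_certificate     /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        proxy_pass https://iis_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```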

NGINX Container - Secure Web Page

April 10, 2025

Why Choose NGINX for Your Web Server?

  • It’s lightweight and high-performance
  • Excellent for serving static content and as a reverse proxy
  • Simple configuration syntax
  • Very popular in containerized environments

So, let’s create a Docker container with Nginx and SSL!

  1. First, create a directory structure:
cd ~
aws s3 cp s3://<bucket-name>/ . --recursive
sudo yum install unzip tree -y

mkdir nginx-ssl
unzip certs.zip
mv certs nginx-ssl/
unzip html.zip
mv html nginx-ssl/

cd nginx-ssl
mkdir conf
  2. Create nginx.conf in the conf directory (set server_name to your own host):
server {
    listen 443 ssl;
    server_name d11-vdi-lin04.gcs.cloud;

    root /usr/share/nginx/html;

    location / {
        index index.html;
    }

    ssl_certificate /etc/nginx/certs/server_001.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;
    ssl_protocols TLSv1.2 TLSv1.3;
}
  3. Create the full certificate chain by concatenating the certificates in the correct order:
cd certs
cat mid-ca.crt ca.crt > ca-bundle.crt
cat server_001.crt mid-ca.crt ca.crt > server-bundle.crt
  4. Create Dockerfile:
FROM nginx:alpine

RUN mkdir -p /etc/nginx/certs

# Copy SSL certificates
COPY certs/ca-bundle.crt /etc/nginx/certs/
COPY certs/server_001.crt /etc/nginx/certs/
COPY certs/server.key /etc/nginx/certs/

COPY conf/nginx.conf /etc/nginx/conf.d/default.conf
COPY html /usr/share/nginx/html

EXPOSE 443

CMD ["nginx", "-g", "daemon off;"]
  5. Make sure your HTML content is organized in a directory structure like this:
.
└── nginx-ssl
    ├── Dockerfile
    ├── certs
    │   ├── ca-bundle.crt
    │   ├── ca.crt
    │   ├── mid-ca.crt
    │   ├── server-bundle.crt
    │   ├── server.key
    │   ├── server_001.crt
    │   └── server_001.pfx
    ├── conf
    │   └── nginx.conf
    └── html
        ├── colour.conf
        ├── img
        │   └── GCS-AWS-logo_32_v02.png
        ├── index.html
        └── swagger
            └── ui
                └── index
                    ├── img
                    │   └── Tech-Task-v07.png
                    └── index.html
                    
  6. Build and run the container:
# Build the image
sudo docker build -t my-secure-nginx:latest .

# Run the container
sudo docker run -d --name secure-nginx \
  -p 443:443 \
  --restart always \
  my-secure-nginx:latest
  7. Check the status using the curl command.
# -k flag to allow insecure connections
curl -k https://localhost
# Or specify your domain
curl -k https://your-domain.com
# To get more detailed with -v (verbose) flag
curl -kv https://localhost

See also:

Deploy an Amazon Linux 2023