Technology


Static Website (VS Code, GitHub, Netlify)

February 24, 2026

Summary

This guide walks you through creating and deploying a static website using VS Code, GitHub, and Netlify. You’ll learn how to build a simple website with AI assistance, set up version control, and establish an automated CI/CD pipeline that deploys your changes whenever you push to GitHub.

Install VS Code and Plugins

Download and install VS Code from https://code.visualstudio.com/download

Install Plugins

  • Live Preview or Live Server (for previewing your website locally)
  • GitHub Copilot or Amazon Q (for AI-powered coding assistance)

yEd - Reset Toolbar

February 13, 2026

Delete *.layout file

Remove C:\Users\{USER ACCOUNT}\AppData\Roaming\yWorks\yEd\yed-?_?.layout

Espanso - Installation and Setup

February 13, 2026

Install Espanso (macOS) Using Homebrew

Open Terminal and install Espanso

brew install espanso
# Start the espanso service
espanso service register
espanso service start
# Check the status
~ ❯ espanso service status 
espanso is running
# Restart the service if required
~ ❯ espanso service restart
espanso started correctly!
# Unregister and Register the service if required
~ ❯ espanso service unregister
service unregistered correctly!
~ ❯ espanso service register
service registered correctly!

Ensure Accessibility Permission Is Enabled

Go to: System Settings → Privacy & Security → Accessibility

Python - Enable Virtual Environment

January 19, 2026

Open your repo in VS Code

cd your-new-repo
code .

Create a virtual environment

# MacOS / Linux
python3 -m venv .venv

# Windows
python -m venv .venv

Activate the virtual environment

# MacOS / Linux
source .venv/bin/activate

# Windows
.venv\Scripts\Activate.ps1

Install packages

pip install requests boto3

Freeze dependencies

pip freeze > requirements.txt

Later, anyone can recreate the env with:

pip install -r requirements.txt
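Assuming a POSIX shell, the whole workflow above can be exercised end-to-end in a throwaway directory:

```shell
set -e
cd "$(mktemp -d)"                 # throwaway project directory
python3 -m venv .venv             # create the environment
source .venv/bin/activate         # activate it (the prompt normally gains a ".venv" prefix)
python -c 'import sys; print(sys.prefix)'   # interpreter now lives inside .venv
pip freeze > requirements.txt     # snapshot installed packages (empty for a fresh env)
deactivate
```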

Python - Check Version

January 19, 2026

Check default Python versions in Mac OS

python3 --version

Check where Python is coming from

which python3

List all Python versions on your system

ls -l /usr/bin/python*

If you use Homebrew (very common)

brew list | grep python
brew info python
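As a cross-check, you can also ask the interpreter itself; this works regardless of whether Python came from the system, Homebrew, or elsewhere:

```shell
# Ask the interpreter directly where it lives and which version it reports
python3 -c 'import sys; print(sys.executable); print("%d.%d" % sys.version_info[:2])'
```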

Custom Domain - Cloudfront & Cloudflare

January 18, 2026

Step 1: Request ACM Certificate (AWS)

  1. Go to AWS Certificate Manager in us-east-1 region
  2. Click Request a public certificate
  3. Add domain names: e.g., heartbeat.intelliumx.com
  4. Select DNS validation
  5. Click Request


Step 2: Add Validation Records to CloudFlare

  1. In ACM, copy the CNAME name and value shown for validation
  2. Go to CloudFlare DNS settings
  3. Add a new CNAME record with the name and value from ACM
  4. Important: Set proxy status to DNS only (grey cloud) for validation
  5. Save the record

GitHub - Delete a Tag

January 13, 2026

To delete a tag from the LOCAL:

(.venv) coo:~/Documents/DevOps/GCS-Cloud$ git tag -d v1.0
Deleted tag 'v1.0' (was 6655ed0)
(.venv) coo:~/Documents/DevOps/GCS-Cloud$ git tag -d v1.0.0 
Deleted tag 'v1.0.0' (was ab35d04)
(.venv) coo:~/Documents/DevOps/GCS-Cloud$ git tag      
2.0
show

To delete a tag from the REMOTE:

# For example, to delete the v1.0 and v1.0.0 tags from remote:
git push origin --delete v1.0
git push origin --delete v1.0.0

# Check the remote tags again
git ls-remote --tags origin

Note that while the tags were deleted locally, they still appear in the remote repository when using "git ls-remote --tags origin". The "git tag" command only displays tags that exist in your local repository.
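The local-vs-remote behaviour described above can be reproduced in a throwaway repo; a local bare repository stands in for GitHub here:

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"     # throwaway stand-in for GitHub
git init -q "$tmp/work" && cd "$tmp/work"
git -c user.email=demo@example.com -c user.name=demo commit --allow-empty -m "init" -q
git remote add origin "$tmp/origin.git"
git tag v1.0
git push -q origin HEAD v1.0             # publish the branch and the tag
git tag -d v1.0                          # deletes the LOCAL tag only
git ls-remote --tags origin              # v1.0 is still listed on the remote
git push -q origin --delete v1.0         # now remove it from the remote too
```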

GitHub - Create a Tag

January 13, 2026

1. Displaying Git Tags

(.venv) coo:~/Documents/DevOps/GCS-Cloud$ git tag
2.0
show
v1.0
v1.0.0

2. Check out the main branch

git checkout main

3. Check the commit hash of main branch (last update)

# Show the latest commit on main (from remote)
git fetch origin
git log origin/main -1
# For example
PS C:\code\Quick-Installer> git log origin/main -1
commit 87b1cfd5fae54d85f9c1a77b134092fb9ed624a3 (origin/main, origin/HEAD)
Merge: ea44cbc 4195693
Author: gituser <[email protected]>
Date:   Mon Sep 15 15:59:32 2025 +1000

    Merge pull request #7 from gituser/patch/backward-compatibility

    fixed confi file downlaod issue

4. Create the tag

git tag v3.3.2 87b1cfd5fae54d85f9c1a77b134092fb9ed624a3

5. Push the tag to remote

git push origin v3.3.2

6. Verify

git show v3.3.2

Display tags with detailed information

(.venv) coo:~/Documents/DevOps/GCS-Cloud$ git ls-remote --tags origin
90e2de33d27de3116195bfe26dde6d750191889d        refs/tags/2.0
6655ed0a66e4b9fa141d5aa87ffd9c10fdfdc603        refs/tags/v1.0
ab35d0488d9e8b0a66502a0e900538449dc7190d        refs/tags/v1.0.0
47ecf4f18119d628cc3f61932c9d81d25690630c        refs/tags/v1.0.0^{}

See also:

GitHub - Create a New Repo

GitHub - Clean a Branch

January 13, 2026

After merging, we may still see the release branch like this

(.venv) coo:~/Documents/DevOps/GCS-Cloud$ git branch -a
  develop
* main
  release/2.0
  remotes/origin/HEAD -> origin/main
  remotes/origin/develop
  remotes/origin/feature/witt_test
  remotes/origin/main
  remotes/origin/release/2.0

After running "git fetch --prune", the stale remote-tracking branch was removed but the local branch remained unchanged.

(.venv) coo:~/Documents/DevOps/GCS-Cloud$ git fetch --prune
From https://dev.azure.com/nobuops/GCS-Cloud/_git/GCS-Cloud
 - [deleted]         (none)     -> origin/release/2.0
remote: Azure Repos
remote: Found 1 objects to send. (0 ms)
Unpacking objects: 100% (1/1), 328 bytes | 82.00 KiB/s, done.
   47ecf4f..90e2de3  main       -> origin/main
(.venv) coo:~/Documents/DevOps/GCS-Cloud$ git branch -a
  develop
* main
  release/2.0
  remotes/origin/HEAD -> origin/main
  remotes/origin/develop
  remotes/origin/feature/witt_test
  remotes/origin/main

To clean up the repository completely, manually delete the local branch
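A minimal sketch of that final cleanup in a throwaway repo (the branch name mirrors the example above; use -D instead of -d if the branch is unmerged):

```shell
set -e
cd "$(mktemp -d)" && git init -q
git -c user.email=demo@example.com -c user.name=demo commit --allow-empty -m "init" -q
git branch release/2.0        # simulate the leftover local branch
git branch -d release/2.0     # delete it; -D forces deletion if unmerged
git branch                    # release/2.0 is no longer listed
```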

GitHub - Pull Request (Azure DevOps)

January 13, 2026

This guide shows how to push changes from the feature/script-update-for-multi-regsions branch to the develop branch in Azure DevOps.


Step 1: Ensure all the changes are committed to the feature branch.

# Check the current branch
(.venv) coo:~/Documents/DevOps/GCS-Cloud/base$ git branch
  develop
* feature/script-update-for-multi-regsions
  main
  
# Check the status of the local repository
(.venv) coo:~/Documents/DevOps/GCS-Cloud/base$ git status
On branch feature/script-update-for-multi-regsions
nothing to commit, working tree clean

Step 2: Use Azure DevOps Web UI

  1. Go to your Azure DevOps project in the browser.

GitHub - Pull Latest Change

January 13, 2026

Option 1: Stash your changes (temporarily save)

If you’re not ready to commit, but want to pull the latest code:

git stash
git pull
git stash pop
  • git stash temporarily saves your changes
  • git pull fetches and merges the latest version
  • git stash pop restores your changes

Option 2: Discard your changes

⚠️ Only do this if you’re okay with losing your local changes:

git reset --hard origin/develop
git pull

See also:

GitHub - Create a New Repo

GitHub - Create a New Branch

January 13, 2026

This is an example of creating a “develop” branch by copying the “main” branch. The same steps apply when creating a “feature” branch or a “release” branch.


1. Ensure your repository is clean

git status
git branch -a

2. Preparation before creating a new branch

git checkout {ORIGINAL BRANCH} (e.g., main)
git pull origin {ORIGINAL BRANCH} (e.g., main)

3. Create the new branch locally

git checkout -b {NEW BRANCH} (e.g., develop)

# Sample of feature branch
git checkout -b feature/new_feature_branch_name

# Sample of release branch
git checkout -b release/new_release_branch_name

4. Push the new branch to GitHub

git push -u origin {NEW BRANCH} (e.g., develop)

5. Verify both branches exist

Run:

NGINX - Client Certificate Authentication

January 11, 2026

Summary

This guide sets up Nginx with client certificate authentication on Amazon Linux 2023, requiring clients to present a valid certificate signed by your CA to access the web server. Key steps include installing Nginx with TLS configuration, uploading server and CA certificates, configuring ssl_verify_client on for authentication, and installing client certificates on Windows machines. Common troubleshooting involves temporarily disabling SELinux if it blocks HTTPS traffic.

1. Launch EC2 Instance (Example operations)

# Amazon Linux 2023, t3.micro, in your VPC
# Security Group: Allow 443 from your network

sh-5.2$ cd ~
sh-5.2$ pwd
/home/ssm-user
sh-5.2$ aws s3 cp s3://gcs-share/certs/certs_v06.zip .
download: s3://gcs-share/certs/certs_v06.zip to ./certs_v06.zip
sh-5.2$ ls
certs_v06.zip
sh-5.2$ pwd
/home/ssm-user
sh-5.2$ sudo dnf install -y unzip tree
sh-5.2$ unzip certs_v06.zip
Archive:  certs_v06.zip
   creating: certs/
  inflating: certs/ca-bundle.crt
  inflating: certs/ca.crt
  inflating: certs/client-006.crt
  inflating: certs/client-006.pfx
  inflating: certs/client.key
  inflating: certs/dual-006.crt
  inflating: certs/dual-006.pfx
  inflating: certs/dual.key
  inflating: certs/mid-ca.crt
  inflating: certs/server-006.crt
  inflating: certs/server-006.pfx
  inflating: certs/server.key
sh-5.2$ tree certs
certs
├── ca-bundle.crt
├── ca.crt
├── client-006.crt
├── client-006.pfx
├── client.key
├── dual-006.crt
├── dual-006.pfx
├── dual.key
├── mid-ca.crt
├── server-006.crt
├── server-006.pfx
└── server.key

0 directories, 12 files

2. Install and Configure Nginx

sudo dnf update -y
sudo dnf install -y nginx
sudo systemctl enable nginx

# Create SSL directory
sudo mkdir -p /etc/nginx/ssl
sudo chmod 700 /etc/nginx/ssl

3. Upload Your Certificates

# Copy these files to /etc/nginx/ssl/:
# - server-006.crt (your server certificate)
# - server.key (your server private key)
# - ca-bundle.crt (mid-ca.crt + ca.crt concatenated)
sudo cp certs/* /etc/nginx/ssl/
sudo chmod 600 /etc/nginx/ssl/server.key
sudo chmod 644 /etc/nginx/ssl/ca-bundle.crt
sudo chmod 644 /etc/nginx/ssl/server-006.crt
sudo ls -l /etc/nginx/ssl/

4. Configure Nginx (/etc/nginx/nginx.conf)

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 443 ssl;
        server_name d11-lnx-web01.gcs.cloud;  # <-- SET THE FQDN OF THE SERVER

        # Server certificates
        ssl_certificate /etc/nginx/ssl/server-006.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

        # Client certificate authentication
        ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
        ssl_verify_client on;

        # SSL settings
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256;
        ssl_prefer_server_ciphers off;

        location / {
            root /usr/share/nginx/html;
            index index.html;

            # Add client cert info to response
            add_header X-Client-Cert-Subject $ssl_client_s_dn;
            add_header X-Client-Cert-Issuer $ssl_client_i_dn;
        }
    }
}

5. Create Sample HTML Page (Optional)

sudo tee /usr/share/nginx/html/index.html > /dev/null << 'EOF'
<!DOCTYPE html>
<html>
<head>
    <title>Client Certificate Authentication Test</title>
</head>
<body>
    <h1>Welcome! Client Certificate Authentication Successful</h1>
    <p>Your client certificate was verified successfully.</p>
    <p>This page requires a valid client certificate to access.</p>
</body>
</html>
EOF

6. Start Nginx

sudo nginx -t
sudo systemctl start nginx
sudo systemctl status nginx

7. Client Setup (Windows machines)

  • Install the client certificate (client-006.pfx) in “Current User\Personal\Certificates”

Setting Up Proxy Using WinHTTP in Windows

January 11, 2026

Summary

WinHTTP (Windows HTTP Services) allows you to configure proxy settings for applications that use the WinHTTP API. This is useful when you need to route HTTP traffic through a proxy server. Here’s a simple guide on how to manage proxy settings using the netsh winhttp command.

1. Check Current Proxy Settings

To view your current WinHTTP proxy configuration, use:

netsh winhttp show proxy

2. Set Proxy Server

To configure a proxy server, use the following syntax:
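The post ends before showing the syntax; the usual form of the command is sketched below (the proxy address and bypass list are placeholders, not values from this guide):

```
netsh winhttp set proxy proxy-server="proxy.example.com:8080" bypass-list="*.local;<local>"
```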

SSH Connection in Linux

January 11, 2026

1️⃣ Setup SSH service

Install OpenSSH server

sudo apt update
sudo apt install -y openssh-server

Enable and start the SSH service

sudo systemctl enable ssh
sudo systemctl start ssh

Check status:

systemctl status ssh

2️⃣ Allow SSH through the firewall (if enabled)

Ubuntu Server often has ufw disabled by default, but check:

sudo ufw status

If it’s active:

sudo ufw allow ssh
sudo ufw reload

3️⃣ Connect from Windows

From PowerShell on Windows:
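The excerpt stops here; the connection itself uses the standard OpenSSH client that ships with modern Windows (username and address below are placeholders):

```
ssh ubuntu@<server-ip>
```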

Proxy Squid Install

January 11, 2026

Summary

This guide provides a baseline setup for a Squid proxy server. It covers system updates, installing Squid with essential networking tools, and configuring a minimal proxy setup that allows local network access.

1️⃣ First steps after installation (baseline)

After installing Linux Server and logging in:

sudo dnf update
sudo dnf upgrade -y

Install basic tools (very helpful later):

sudo dnf install -y \
  squid \
  tcpdump \
  net-tools \
  iproute \
  curl \
  ca-certificates

Enable and start Squid:

GitHub - SSH Settings (ed25519)

January 10, 2026

This workflow ensures:

  • Secure key (with passphrase)
  • Convenient usage (Keychain remembers passphrase)
  • SSH-only workflow → no HTTPS credentials required

1) Check for existing keys (optional)

ls -la ~/.ssh
  • Look for id_ed25519 / id_ed25519.pub
  • If you already have a key you want to use, skip key generation

2) Generate a new ed25519 key

ed25519 refers to the Ed25519 elliptic-curve algorithm, which is the modern, faster, and more secure replacement for older RSA SSH keys. Replace the email with your GitHub email:
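The generation command the text is leading up to is the standard one; here it is run non-interactively into a temporary path so it can be verified. For real use you would keep the default ~/.ssh path and set a passphrase when prompted:

```shell
# Illustrative, non-interactive run into a temp path (-N "" = empty passphrase).
# For real use: ssh-keygen -t ed25519 -C "[email protected]"
key="$(mktemp -d)/id_ed25519"
ssh-keygen -q -t ed25519 -C "[email protected]" -f "$key" -N ""
cat "$key.pub"    # the public key line begins with "ssh-ed25519"
```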

WSL - Start up a Distribution

January 10, 2026

Summary

This guide explains how to manage and start up Linux distributions in Windows Subsystem for Linux (WSL).

  1. Check Available Linux Distributions
# Check which distribution is set as default (marked with *)
wsl --list --verbose
  2. Start Up a Distribution
# the distribution name is case insensitive
wsl -d fedoralinux-42
  3. Verify the Distribution (Optional)
cat /etc/os-release
  4. Set a Distribution as Default (Optional)
# Set a specific distribution as the default
wsl --set-default Ubuntu

See also:

WSL - Install a Linux Distribution

WSL - Install a Linux Distribution

January 10, 2026

Summary

This guide explains how to install a Linux distribution on Windows Subsystem for Linux (WSL). It covers the installation command using PowerShell, the password setup process, and how to verify the installation by listing all installed distributions.

  1. Install a Distribution
# for example, install "Ubuntu"
wsl --install -d ubuntu

# set a new password during the step
  2. Verify the installation
# Exit from the Distribution
exit

# Show the list of Distributions
wsl --list --verbose

See also:

WSL - Start up a Distribution

AWS Credentials for CLI (Profile)

December 23, 2025

🚀 Quick Start Guide

  • Create a named profile using aws configure --profile [name]
  • Never set a default profile permanently
  • This avoids accidental operations on the wrong AWS account
  • Activate profiles temporarily per session using $env:AWS_PROFILE
  • Or activate per command using --profile
  • Always clear the active profile when done to prevent unintended AWS operations

1. Create the profile (once only)

In PowerShell:

aws configure --profile nob

This creates:

  • ~\.aws\credentials
  • ~\.aws\config

2. Use the profile temporarily in PowerShell

Option A — Set environment variable only for the current session

What Is Zone Identifier

December 21, 2025

  • What it is

    Zone.Identifier is Windows-only metadata stored as an NTFS Alternate Data Stream (ADS).

  • Why it exists

    Windows uses it to mark files downloaded from the Internet for security purposes.

  • Why it appears on macOS/Linux

    NTFS ADS is hidden on Windows, but when files are copied, zipped, or committed to Git and then opened on non-Windows systems, the metadata becomes a visible file:

    filename:Zone.Identifier
    
  • Is it dangerous?

    ❌ No.
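On macOS/Linux the stray files can be deleted safely. A minimal sketch that simulates one and cleans it up (filenames are illustrative):

```shell
cd "$(mktemp -d)"
touch report.pdf 'report.pdf:Zone.Identifier'   # simulate the leftover metadata file
find . -name '*:Zone.Identifier' -delete        # remove every Zone.Identifier file
ls                                              # only report.pdf remains
```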

Customise Terminal

December 21, 2025

🎯 Goal

  • Hide @computername
  • Use a clean, professional prompt
  • Reload config safely

✅ Step 1: Open zsh config with vi

vi ~/.zshrc

✅ Step 2: Add the custom prompt

# --- Custom prompt ---

# Colors
BLUE="%F{blue}"
RESET="%f"

# Prompt with color
PROMPT="${BLUE}%1~${RESET} ❯ "

✅ Step 3: Save and exit vi


✅ Step 4: Reload the config

source ~/.zshrc

Your prompt updates immediately.


🔁 Do I always need to run source ~/.zshrc?

  • ✅ Yes → to apply changes now
  • ❌ No → if you open a new Terminal window

Entra ID and Cognito - OIDC

August 11, 2025

Overview

This guide walks through setting up Microsoft Entra ID (formerly Azure AD) as an OpenID Connect (OIDC) identity provider for AWS Cognito User Pools. This integration allows users to sign in to your applications using their Microsoft credentials.

Steps

1. Sign in to Azure Portal

Navigate to https://portal.azure.com and access Microsoft Entra ID (formerly “Azure AD”) from the main services menu.

2. Register a New Application

  1. In Microsoft Entra ID, select App registrations → New registration.

Entra ID and Cognito Integration - Step5

August 10, 2025

Step 5: Verify and Test the Integration

The hosted UI automatically displays both authentication options.

  • Native Cognito User Pool authentication (existing functionality)
  • Microsoft Entra ID federation (newly implemented)

Now, Amazon Cognito seamlessly handles the authentication flow regardless of which identity provider is used, as shown below.


See also:

Entra ID and Cognito Integration - Guide

Entra ID and Cognito Integration - Step1

Entra ID and Cognito Integration - Step2

Entra ID and Cognito Integration - Step4

August 10, 2025

Step 4: Update Cognito App Client Configuration

Enable the new identity provider in your Cognito App Client settings:

  1. Within your Cognito User Pool, navigate to App integration and select your app client
  2. Under Hosted UI, click Edit to modify the settings

  3. In the Identity providers section, select EntraID to enable it
  4. Save your changes to apply the configuration


See also:

Entra ID and Cognito Integration - Guide

Entra ID and Cognito Integration - Step3

August 10, 2025

Step 3: Integrate Identity Provider with Amazon Cognito

Now, configure Amazon Cognito to recognize Microsoft Entra ID as a federated identity provider:

  1. In the AWS Console, navigate to Amazon Cognito and select your User Pool
  2. Go to Sign-in experience and locate the Federated identity provider sign-in section
  3. Select Add identity provider and choose SAML as the provider type

  4. Configure the identity provider with these settings:
    • Provider name: “EntraID” (this name will appear on your login screen)

Entra ID and Cognito Integration - Step2

August 10, 2025

Step 2: Configure SAML Parameters

You’ll need to retrieve key information from your Cognito User Pool to properly configure the SAML parameters:

  1. In the AWS Console, locate your Cognito User Pool and note the User Pool ID

  2. Record the Cognito Domain from your User Pool settings

  3. In Azure Portal, configure the following SAML parameters:
    • Identifier (Entity ID): Format as urn:amazon:cognito:sp:{User pool ID}
    • Reply URL (Assertion Consumer Service URL): Format as {Cognito domain}/saml2/idpresponse
  4. Save your configuration changes

Entra ID and Cognito Integration - Guide

August 10, 2025

Overview

This comprehensive guide walks you through the process of integrating Microsoft Entra ID (formerly Azure AD) with Amazon Cognito to enable seamless federation authentication for your web application.

What we do

Our web application currently uses Amazon Cognito User Pool for authentication, limiting access to Cognito-registered users only. This guide demonstrates how to extend authentication capabilities by implementing Microsoft Entra ID federation, allowing your enterprise users to access the application using their existing Microsoft credentials.

Entra ID and Cognito Integration - Step1

August 10, 2025

Step 1: Configure SAML in Microsoft Entra ID

Begin by setting up a SAML application in Microsoft Entra ID to establish the identity provider side of the federation:

  1. In Azure Portal, access Microsoft Entra ID, then select Enterprise applications
  2. Select New application to create a custom application integration

  3. Click Create your own application to configure a custom SAML provider

  4. Configure the application with the following parameters:

    • Application name: (your preferred application name)
    • Select Integrate any other application you don’t find in the gallery (Non-gallery)
  5. Click Create to generate the application

GitHub - Reset and Start Again

June 25, 2025

📥 How to Reset and Start Again

Method 1: Reset your current branch

This will discard all local changes and make the local branch exactly match the remote.

# First, make sure you're on the right branch
git checkout release/v3.1.0

# Fetch the latest changes from remote
git fetch origin

# Reset your local branch to match the remote version
git reset --hard origin/release/v3.1.0

Method 2: Fresh checkout (if you want to start completely fresh)

# First, move to a safe location (if you have unsaved work)
git stash

# Then checkout the release branch, forcing a clean copy
git checkout -f release/v3.1.0

# Update to the latest version from remote
git pull origin release/v3.1.0

⚠️ Note: Both methods will discard any uncommitted changes! If you have work you want to keep, commit it to a temporary branch first:

GitHub - Best Practice Tagging

June 25, 2025

Daily Git Workflow with Tag Strategy

1. Continue to work on the feature branch

git checkout feature/v3.2.0
git pull origin feature/v3.2.0

2. Commit stable changes

git add .
git commit -m "Fixed export timeout issue"

3. Tag a stable version (optional, when ready)

# Use semantic versioning: v3.1.3, v3.1.4, etc.
# Without a commit argument, the tag points at the current HEAD commit.
git tag v3.1.2

4. Push your changes and tag to GitHub

git push origin feature/v3.2.0
git push origin v3.1.2

5. Verify your tag (if needed)

git tag                         # List local tags
git show v3.1.2                 # Show details of the tag
git branch --contains v3.1.2    # See which branch includes it

Bonus (Optional Advanced Commands)

  • List tags sorted by date (most recent first):
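The excerpt ends before showing the command; the usual option is --sort on the creatordate field. A self-contained demonstration (tag names and dates are illustrative):

```shell
set -e
cd "$(mktemp -d)" && git init -q
git -c user.email=demo@example.com -c user.name=demo commit --allow-empty -m "one" -q
git tag v1.0
GIT_COMMITTER_DATE="2030-01-01T00:00:00" \
  git -c user.email=demo@example.com -c user.name=demo commit --allow-empty -m "two" -q
git tag v1.1
git tag --sort=-creatordate   # newest first: v1.1, then v1.0
```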

API Gateway - Configuration

June 15, 2025

1. Create REST API

  1. Go to API Gateway console
  2. Create new REST API

  3. Create new resource and method
    • Add resource: e.g., “/user-list”
    • Add GET method
    • Integration type: Lambda Function
    • Select your Lambda function
  4. Enable CORS if needed
    • Actions → Enable CORS
    • Accept default settings for testing

2. Update “Method request”

3. Update “Integration request”

{
    "limit": "$input.params('limit')"
}

4. Deploy and Test

  1. Deploy API

  2. Note the API endpoint URL

API Gateway - Usage Plan

June 15, 2025

1. Create new usage plan

Rate and Burst

  • Rate: Set to 10-20 requests per second for development/testing
    • Recommended: Start with 10 req/sec for controlled testing
  • Burst: Set to 2x your rate (20-40)
    • Recommended: Start with 20 to handle short traffic spikes

Quota Settings

  • Quota period: MONTH (most common)
    • Alternative periods: WEEK, DAY
  • Requests per quota period: Start with 50,000/month
    • This allows approximately 1,600 requests per day
    • Can be adjusted based on actual usage patterns
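The daily figure is simple integer division over a 31-day month:

```shell
# 50,000 requests per month spread over 31 days
echo $((50000 / 31))   # 1612, i.e. roughly 1,600 requests per day
```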

Recommended Initial Configuration:

AWS STS - Temporary Access Tokens

June 15, 2025

1. Generate Temporary Credentials

First, use the AWS STS (Security Token Service) to generate temporary credentials:

# 3600 x 5 = 18000 (5 hours)
aws sts get-session-token --duration-seconds 18000

This will return something like:

{
    "Credentials": {
        "AccessKeyId": "ASIA...",
        "SecretAccessKey": "...",
        "SessionToken": "...",
        "Expiration": "2025-06-13T..."
    }
}

2. Set Environment Variables

Then set these environment variables:

# Replace the values with your actual credentials from the previous step.
export AWS_ACCESS_KEY_ID="your_access_key"
export AWS_SECRET_ACCESS_KEY="your_secret_key"
export AWS_SESSION_TOKEN="your_session_token"
export AWS_DEFAULT_REGION="ap-southeast-2"  # Sydney region

3. Verify the environment variables

env | grep AWS

After setting these variables, try running your Python script again. The credentials will be automatically picked up by the AWS SDK.

AWS Lambda - Create a Function

June 15, 2025

  1. Navigate to Lambda in AWS Console
  2. Click “Create function”
    • Choose “Author from scratch”
    • Runtime: Python 3.x
    • Name: e.g., “get-user-list”

Paste the Python code into the “Code” page and click the “Deploy” button

import boto3
from datetime import datetime
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('user_list')

def create_nested_structure(data, current_level, max_level):
    if current_level >= max_level:
        return data
    
    return {
        f"level_{current_level}": {
            "data": data,
            "nested": create_nested_structure(data, current_level + 1, max_level),
            "metadata": {
                "level_info": f"This is level {current_level}",
                "timestamp": datetime.now().isoformat(),
                "metrics": {
                    "depth": current_level,
                    "remaining_levels": max_level - current_level,
                    "complexity_score": max_level * current_level
                }
            }
        }
    }

def create_complex_response(user_data, nested_level):
    base_data = {
        "id": f"user_{user_data['user_id']}",
        "timestamp": datetime.now().isoformat(),
        "category": "Personnel",
        "details": {
            "name": {
                "first": user_data['first_name'],
                "last": user_data['last_name']
            },
            "company": {
                "name": user_data['company_name'],
                "web": user_data['web']
            },
            "contact_info": {
                "address": {
                    "street": user_data['address'],
                    "city": user_data['city'],
                    "state": user_data['state'],
                    "postcode": user_data['post']
                },
                "communication": {
                    "phones": [
                        {
                            "type": "primary",
                            "number": user_data['phone1']
                        },
                        {
                            "type": "secondary",
                            "number": user_data['phone2']
                        }
                    ],
                    "email": user_data['email']
                }
            }
        }
    }
    
    return create_nested_structure(base_data, 1, nested_level)

def lambda_handler(event, context):
    try:
        # Get parameters from the event body (fall back to defaults)
        limit = int(event.get('limit') or 10)
        nested_level = int(event.get('nested_level') or 1)
        
        # Validate nested_level
        if nested_level < 1:
            nested_level = 1
        elif nested_level > 30:  # Set a reasonable maximum 
            nested_level = 30    # 29 nested is the limit on Blue Prism
            
        # Scan DynamoDB table with limit
        response = table.scan(
            Limit=limit
        )
        items = response.get('Items', [])
        
        # Transform items into complex nested structure
        transformed_data = [create_complex_response(item, nested_level) for item in items]
        
        # Create final response
        return {
            "statusCode": 200,
            "headers": {
                "Content-Type": "application/json",
                "Access-Control-Allow-Origin": "*"
            },
            "success": True,
            "timestamp": datetime.now().isoformat(),
            "total_records": len(transformed_data),
            "limit_applied": limit,
            "nesting_level": nested_level,
            "data": transformed_data,
            "metadata": {
                "api_version": "1.0",
                "service": "user-data-api",
                "complexity_info": {
                    "max_depth": nested_level,
                    "structure_type": "recursive",
                    "total_nodes": len(transformed_data) * nested_level
                }
            }
        }
        
    except Exception as e:
        return {
            "statusCode": 500,
            "success": False,
            "message": "Error processing request",
            "error": str(e)
        }

AWS Credentials for CLI

June 15, 2025

1. Using AWS CLI Configuration

aws configure

This will prompt you to enter:

  • AWS Access Key ID
  • AWS Secret Access Key
  • Default region name
  • Default output format

2. Environment Variables

export AWS_ACCESS_KEY_ID="your_access_key"
export AWS_SECRET_ACCESS_KEY="your_secret_key"
export AWS_DEFAULT_REGION="your_region"

3. Credentials File

Create or edit ~/.aws/credentials:

[default]
aws_access_key_id = your_access_key
aws_secret_access_key = your_secret_key

4. Clear AWS CLI Configuration (OPTIONAL)

To clear your AWS CLI credentials, you have several options:

  • Delete the credentials file: rm ~/.aws/credentials
  • Delete the config file: rm ~/.aws/config
  • Clear a specific profile: delete its section from ~/.aws/credentials and ~/.aws/config (re-running aws configure and pressing Enter keeps the existing values, so it does not clear them)
# Remove both credentials and config files
rm ~/.aws/credentials ~/.aws/config

After clearing the credentials, you can reconfigure them using any of the methods described above.

AWS Lambda - Grant Access

June 15, 2025

  1. Go to AWS IAM Console

  2. Find your Lambda’s role

    • Click on the role name
    • Click “Add permissions” → “Create inline policy”
  3. In the JSON editor, paste this policy:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "dynamodb:Scan",
                    "dynamodb:GetItem",
                    "dynamodb:Query"
                ],
                "Resource": "arn:aws:dynamodb:ap-southeast-2:6850********:table/user_list"
            }
        ]
    }
    
  4. Click “Review policy”

    • Name it something like “DynamoDBScanPolicy”
    • Click “Create policy”

After adding this policy, wait a few seconds and try your Lambda function again. The error should be resolved.

Amazon DynamoDB - Import CSV Data

June 15, 2025

1. Save the CSV file in the same location as the Python code

user_id,first_name,last_name,company_name,address,city,state,post,phone1,phone2,email,web
U001,Rebbecca,Didio,"Brandt, Jonathan F Esq",171 E 24th St,Leith,TAS,7315,03-8174-9123,0458-665-290,[email protected],http://www.brandtjonathanfesq.com.au
U002,Stevie,Hallo,Landrum Temporary Services,22222 Acoma St,Proston,QLD,4613,07-9997-3366,0497-622-620,[email protected],http://www.landrumtemporaryservices.com.au
U003,Mariko,Stayer,"Inabinet, Macre Esq",534 Schoenborn St #51,Hamel,WA,6215,08-5558-9019,0427-885-282,[email protected],http://www.inabinetmacreesq.com.au
U004,Gerardo,Woodka,Morris Downing & Sherred,69206 Jackson Ave,Talmalmo,NSW,2640,02-6044-4682,0443-795-912,[email protected],http://www.morrisdowningsherred.com.au
U005,Mayra,Bena,"Buelt, David L Esq",808 Glen Cove Ave,Lane Cove,NSW,1595,02-1455-6085,0453-666-885,[email protected],http://www.bueltdavidlesq.com.au
U006,Idella,Scotland,Artesian Ice & Cold Storage Co,373 Lafayette St,Cartmeticup,WA,6316,08-7868-1355,0451-966-921,[email protected],http://www.artesianicecoldstorageco.com.au
U007,Sherill,Klar,Midway Hotel,87 Sylvan Ave,Nyamup,WA,6258,08-6522-8931,0427-991-688,[email protected],http://www.midwayhotel.com.au
U008,Ena,Desjardiws,"Selsor, Robert J Esq",60562 Ky Rt 321,Bendick Murrell,NSW,2803,02-5226-9402,0415-961-606,[email protected],http://www.selsorrobertjesq.com.au
U009,Vince,Siena,Vincent J Petti & Co,70 S 18th Pl,Purrawunda,QLD,4356,07-3184-9989,0411-732-965,[email protected],http://www.vincentjpettico.com.au
U010,Theron,Jarding,"Prentiss, Paul F Esq",8839 Ventura Blvd,Blanchetown,SA,5357,08-6890-4661,0461-862-457,[email protected],http://www.prentisspaulfesq.com.au
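Before loading the file into DynamoDB, it's worth confirming every row parses to the same number of fields as the header. A minimal sketch using Python's csv module, which correctly handles the quoted commas in fields like "Brandt, Jonathan F Esq" (the demo embeds only the header and first row, under /tmp):

```shell
# Sanity-check the CSV before importing: every row must have the same
# field count as the header. Python's csv module handles quoted commas.
cat > /tmp/user_list.csv <<'EOF'
user_id,first_name,last_name,company_name,address,city,state,post,phone1,phone2,email,web
U001,Rebbecca,Didio,"Brandt, Jonathan F Esq",171 E 24th St,Leith,TAS,7315,03-8174-9123,0458-665-290,[email protected],http://www.brandtjonathanfesq.com.au
EOF
python3 - <<'EOF'
import csv
with open("/tmp/user_list.csv", newline="") as f:
    rows = list(csv.reader(f))
header = rows[0]
for lineno, row in enumerate(rows[1:], start=2):
    assert len(row) == len(header), f"line {lineno}: {len(row)} fields, expected {len(header)}"
print(f"{len(rows) - 1} data row(s) OK, {len(header)} columns")
EOF
```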

2. Set a temporary token for VS Code

Reference: AWS STS - Temporary Access Tokens

Setup Python Environment

June 11, 2025

Step-by-Step Python Environment on MacOS

This guide walks through setting up a Python development environment on MacOS using Visual Studio Code, including package management, virtual environments, and recommended configurations.

1. Install/Verify Homebrew

# Check if Homebrew is installed
brew --version

# If not installed, install Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

2. Install/Update Python using Homebrew

# Install Python
brew install python

# Verify Python installation
python3 --version

3. Verify pip Installation

pip (Python package installer) comes with Python installed via Homebrew. Verify it:
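A minimal check (assuming the Homebrew install above; running pip as a module of python3 avoids any PATH ambiguity between pip and pip3):

```shell
# Confirm pip is available for the interpreter you'll actually be using
python3 -m pip --version
```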

Git - Move Branch

May 20, 2025

Move to “master”

Open the terminal (press Ctrl+`) and run:

git checkout master

If master isn’t checked out locally yet, do:

git fetch origin
git checkout master

If “master” was not updated in the local branch

If the branch switched successfully to master, but the files in your working directory didn’t update as expected, it could be one of these cases:

  1. Confirm you’re on the correct branch. Run this to verify:
git branch

You should see something like this. The asterisk * indicates your current branch.
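The asterisk behaviour is easy to reproduce in a throwaway repository (all paths and branch names below are made up for the demo; `git init -b` needs Git 2.28+):

```shell
# Throwaway repo to illustrate what `git branch` prints
rm -rf /tmp/branch-demo && mkdir /tmp/branch-demo && cd /tmp/branch-demo
git init -q -b master
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
git branch feature/x
git branch                  # the asterisk marks the current branch
git checkout -q feature/x
git branch --show-current   # prints the bare branch name: feature/x
```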

Azure DevOps - Complete (Merge)

May 20, 2025

Use Azure DevOps Web UI

Scenario:

This guide shows how to merge a pull request from the release/v1.0-america branch into the master branch using Azure DevOps or any Git repository.

Final Step (Complete the Pull Request):

  1. Once it’s approved, click the “Complete” button to merge it into master.

  2. You’ll get options like:

    • Squash or merge (depending on settings)
    • Delete source branch after merging (optional)
  3. Click “Complete merge” when ready

Azure DevOps - Approve

May 20, 2025

Use Azure DevOps Web UI

Scenario:

This guide demonstrates how to approve a pull request from the release/v1.0-america branch into the master branch in Azure DevOps or any Git repository.

Steps (Approve the Pull Request):

  1. Go to Azure DevOps in your browser
  2. Navigate to Repos > Pull Requests
  3. Find the pull request (from release/v1.0-america to master)
  4. Click on the PR to open it.
  5. On the right-hand side, you’ll see the “Reviewers” section

Azure DevOps - Pull Request

May 20, 2025

Use Azure DevOps Web UI

Scenario:

This guide shows how to push changes from the release/v1.0-america branch to the master branch in Azure DevOps (or any Git repository).


Steps:

  1. Go to your Azure DevOps project in the browser.
  2. Navigate to Repos > Branches.
  3. Find your branch release/v1.0-america
  4. Click on the "…" (More options) next to it and select “New pull request”
  5. Set:

    • Source: release/v1.0-america
    • Target: master
  6. Add a title, description (optional but helpful), and click “Create”

Memory Dump Analysis - WinDbg

May 19, 2025

WinDbg is the primary tool from Microsoft to analyze memory dump files.

This is a step-by-step guide to analyzing a memory dump with WinDbg.

1. Install WinDbg Preview

  • Open Microsoft Store and search for WinDbg Preview.

  • Or download it from WinDbg Preview

2. Open Your Dump File

  • Launch WinDbg Preview.

  • Click File > Open dump file.

  • Select your .dmp file (e.g., C:\Windows\MEMORY.DMP, C:\Windows\Minidump).

3. Use Basic Commands

  • After opening the file, type the following in the command window:
!analyze -v

NGINX Load Balancer - Secure gRPC

April 24, 2025

This guide extends our previous blog post on NGINX Load Balancer for WCF & gRPC by adding SSL connections to the gRPC protocol. The steps are similar—just update the config file bpserver-loadbalancer.conf

Configuration File Location: /etc/nginx/conf.d/bpserver-loadbalancer.conf

# NGINX Load Balancer Configuration for Blue Prism Enterprise
# Defining two upstream blocks for different ports

upstream bpserver_backend_8199 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
}

upstream bpserver_backend_10000 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
}

server {
    listen 8199 ssl;
    server_name d11-lnx-alb01.gcs.cloud;
    
    ssl_certificate     /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    
    location / {
        proxy_pass https://bpserver_backend_8199;
        proxy_ssl_verify off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        proxy_pass_request_headers on;
        
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

server {
    listen 10000 ssl;  # Add ssl here
    http2 on;
    server_name d11-lnx-alb01.gcs.cloud;
    
    # Add SSL certificate configuration
    ssl_certificate     /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    
    location / {
        grpc_pass grpcs://bpserver_backend_10000;  # Change to grpcs:// for SSL
        
        # gRPC specific settings
        grpc_read_timeout 300s;
        grpc_send_timeout 300s;
        
        # Headers for gRPC
        grpc_set_header Host $host;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

See also:

NGINX Load Balancer for WCF & gRPC

OpenSSL - Verify Certificate

April 24, 2025

Verify the certificate

openssl x509 -in server/certs/client.crt -text -noout
openssl x509 -in server/certs/server.crt -text -noout

Verify the certificate chain

# First, concatenate the CA certificates (leaf to root)
cat mid-ca.crt ca.cert > ca-bundle.crt

# Then verify using the chain file
openssl verify -CAfile ca-bundle.crt server/certs/client.crt
openssl verify -CAfile ca-bundle.crt server/certs/server.crt
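The same verification pattern can be exercised end to end with a throwaway CA, which is a quick way to confirm the command syntax before running it against your real certs (all file names below are made up for the demo):

```shell
# Self-contained illustration of the verification pattern above,
# using a throwaway root CA and a leaf certificate.
mkdir -p /tmp/verify-demo && cd /tmp/verify-demo

# 1. Throwaway root CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=Demo Root CA" -days 1

# 2. Leaf key + CSR, signed by that root
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=demo-server"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 1

# 3. Verify the leaf against the CA file, as in the post
openssl verify -CAfile ca.crt server.crt   # prints "server.crt: OK"
```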

See also:

OpenSSL - Initial Setup

OpenSSL (1) - Root CA

OpenSSL (2) - Intermediate CA

OpenSSL (3) - Server Certificate

OpenSSL (4) - Client Certificate

April 24, 2025

Create a Client Certificate

1. Generate a client key file

openssl genrsa -out server/private/client.key 2048
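Before moving on, you can sanity-check the generated key (demo path below; the post writes to server/private/client.key):

```shell
# Generate and validate a 2048-bit RSA key (throwaway demo path)
openssl genrsa -out /tmp/demo-client.key 2048
openssl rsa -in /tmp/demo-client.key -check -noout   # prints "RSA key ok"
chmod 600 /tmp/demo-client.key                       # keys should not be world-readable
```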

2. Generate a client Certificate Signing Request (CSR)

openssl req -config mid-ca/mid-ca.conf -key server/private/client.key -new -sha256 -out server/csr/client.csr

e.g., CN=GCS-Client-Certificate-v0x

3. Sign the client CSR using the client_cert extension

openssl ca -config mid-ca/mid-ca.conf -extensions client_cert -days 3650 -notext -in server/csr/client.csr -out server/client-certs/client.crt

4. Generate client PFX (if needed)

openssl pkcs12 -inkey server/private/client.key -in server/client-certs/client.crt -export -out server/client-certs/client.pfx -passout pass:

See also:

Download from CloudShell

NGINX Load Balancer for WCF & gRPC

April 23, 2025

This guide extends our previous blog post on NGINX Load Balancing for WCF Applications by adding gRPC protocol support on port 10000. While the setup process remains similar, we’ll focus on the specific configuration changes needed in the bpserver-loadbalancer.conf file.

Configuration File Location: /etc/nginx/conf.d/bpserver-loadbalancer.conf

# NGINX Load Balancer Configuration for Blue Prism Enterprise
# Defining two upstream blocks for different ports

upstream bpserver_backend_8199 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:8199 max_fails=3 fail_timeout=30s;
}

upstream bpserver_backend_10000 {
    ip_hash;
    server d11-app-bpe02.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe03.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
    server d11-app-bpe04.gcs.cloud:10000 max_fails=3 fail_timeout=30s;
}

server {
    listen 8199 ssl;
    server_name d11-lnx-alb01.gcs.cloud;
    
    ssl_certificate     /etc/nginx/ssl/server_001.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/ssl/ca-bundle.crt;
    ssl_verify_client off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    
    location / {
        proxy_pass https://bpserver_backend_8199;
        proxy_ssl_verify off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        proxy_connect_timeout 300s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        proxy_pass_request_headers on;
        
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

server {
    listen 10000; 
    http2 on;      # Add this line to enable HTTP/2
    server_name d11-lnx-alb01.gcs.cloud;
    
    location / {
        grpc_pass grpc://bpserver_backend_10000;  # Use grpc_pass instead of proxy_pass
        
        # gRPC specific settings
        grpc_read_timeout 300s;
        grpc_send_timeout 300s;
        
        # Headers for gRPC
        grpc_set_header Host $host;
        grpc_set_header X-Real-IP $remote_addr;
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

See also:

NGINX Load Balancer - Secure gRPC

NGINX Load Balancer for WCF App

April 21, 2025

This guide demonstrates how to implement a high-performance NGINX load balancer for WCF applications with the following features:

  • Enhanced security through SSL/TLS encryption
  • Reliable session management using IP-based persistence
  • Custom-tuned configurations for WCF service optimisation
  • Advanced timeout and buffer settings to handle complex WCF payloads

The configuration ensures reliable, secure, and efficient load balancing specifically optimised for WCF service applications, with built-in session persistence and performance tuning.

1. Install required packages and SSL certificates

HAProxy Container - Load Balancer

April 13, 2025

HAProxy Load Balancer with SSL Termination

1. Install Docker

sudo yum update -y
sudo yum install docker -y
sudo systemctl start docker
sudo systemctl enable docker

2. Install Docker Compose

# Download Docker Compose binary
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Make it executable
sudo chmod +x /usr/local/bin/docker-compose

# Verify the installation
docker-compose --version

3. Create a Docker Compose file (docker-compose.yml):

version: '3'
services:
  haproxy:
    image: haproxy:latest
    ports:
      - "443:443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - ./certs:/etc/ssl/certs:ro
    restart: always

4. Create SSL certificates directory and copy certificates:

mkdir certs
cp ~/certs/server-bundle.crt certs/
cp ~/certs/server.key certs/
cat certs/server.key certs/server-bundle.crt > certs/server.pem
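HAProxy reads both the private key and the certificate chain from that single server.pem, so it's worth confirming the concatenated file actually contains both parts. A sketch using a throwaway self-signed pair (the real server.key and server-bundle.crt come from your own CA):

```shell
# Build a demo server.pem the same way as step 4 and confirm it holds
# both the private key and the certificate.
mkdir -p /tmp/haproxy-demo && cd /tmp/haproxy-demo
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key \
  -out server-bundle.crt -subj "/CN=demo" -days 1
cat server.key server-bundle.crt > server.pem

# Both markers must appear in the combined file
grep -c "BEGIN CERTIFICATE" server.pem
grep -Ec "BEGIN (RSA )?PRIVATE KEY" server.pem
```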

5. Create HAProxy configuration file (haproxy.cfg):

global
    log /dev/log local0
    log /dev/log local1 notice
    daemon
    maxconn 2000

defaults
    log global
    mode http
    option httplog
    option forwardfor
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend https_front
    bind *:443 ssl crt /etc/ssl/certs/server.pem
    mode http

    # Add URL path rule for Swagger
    use_backend servers if { path_beg /swagger }
    default_backend servers

backend servers
    mode http
    balance roundrobin
    server win1 d11-api-demo1.gcs.cloud:443 ssl verify none check
    server win2 d11-api-demo2.gcs.cloud:443 ssl verify none check

This configuration will route any requests starting with /swagger to your backend servers. The only change needed is adding the path rule in the frontend section.

NGINX Container - Load Balancer

April 13, 2025

Let’s build a Dockerized NGINX setup with:

  • SSL termination using a wildcard cert
  • Reverse proxy + Load balancing to 2 backend servers
  • Mounted volumes for certs and config

1. Updated Step for CA Chain

# Create the CA chain file:
cat mid-ca.crt ca.crt > ca-bundle.crt

Cert file          Purpose
server_001.crt     Wildcard cert for your domain
server.key         Private key for the wildcard cert
ca-bundle.crt      Combined mid-ca.crt + ca.crt (in that order)

2. Directory Structure (suggested)

sh-5.2$ tree
.
└── nginx-lb
    ├── Dockerfile
    ├── certs
    │   ├── ca-bundle.crt
    │   ├── ca.crt
    │   ├── mid-ca.crt
    │   ├── server-bundle.crt
    │   ├── server.key
    │   ├── server_001.crt
    │   └── server_001.pfx
    ├── docker-compose.yml
    ├── nginx
    │   └── nginx.conf
    └── nginx-log

3. Create Dockerfile

FROM nginx:alpine

# Create the log directory inside the container
RUN mkdir -p /var/log/nginx

# Copy NGINX config and certs into the image (will be overridden by volume)
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY certs/ /etc/nginx/certs/

# Expose port 443 for HTTPS
EXPOSE 443

4. Create nginx.conf

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    # Log format definition
    log_format detailed '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for" '
                       '"$proxy_host" "$upstream_addr"';

    # Access and error logs
    access_log /var/log/nginx/access.log detailed;
    error_log /var/log/nginx/error.log debug;

    ssl_certificate     /etc/nginx/certs/server_001.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca-bundle.crt;
    ssl_verify_client off;

    upstream backend_apis {
        server d11-api-demo1.gcs.cloud:443;
        server d11-api-demo2.gcs.cloud:443;
    }

    server {
        listen 443 ssl;
        server_name d11-alb-ngx01.gcs.cloud;

        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;

        location / {
            proxy_pass https://backend_apis;
            proxy_ssl_protocols TLSv1.2 TLSv1.3;
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;
            proxy_ssl_name $host;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}

NGINX Load Balancer - Bare Metal

April 13, 2025

  1. Install NGINX:
sudo apt update
sudo apt install nginx -y
  2. Set SSL certificates
sh-5.2$ sudo mkdir -p /etc/nginx/ssl
sh-5.2$ sudo cp certs/* /etc/nginx/ssl/
sh-5.2$ sudo ls -l /etc/nginx/ssl/
total 32
-rw-r--r--. 1 root root 3830 Apr 13 15:08 ca-bundle.crt
-r--r--r--. 1 root root 1911 Apr 13 15:08 ca.crt
-r--r--r--. 1 root root 1919 Apr 13 15:08 mid-ca.crt
-rw-r--r--. 1 root root 6082 Apr 13 15:08 server-bundle.crt
-rw-------. 1 root root 1704 Apr 13 15:08 server.key
-rw-r--r--. 1 root root 2252 Apr 13 15:08 server_001.crt
-rw-------. 1 root root 3363 Apr 13 15:08 server_001.pfx
sh-5.2$
  3. Create the NGINX load-balancing config

Edit /etc/nginx/nginx.conf or (preferably) add a new file in /etc/nginx/conf.d/iis-loadbalancer.conf:

Change hostname - Linux

April 13, 2025

Change the hostname of a Linux machine that’s already joined to Active Directory

  1. First, leave the Active Directory domain:
sudo realm leave gcs.cloud
  2. Change the hostname using hostnamectl:
sudo hostnamectl set-hostname new-hostname.gcs.cloud
  3. Rejoin the Active Directory domain:
sudo realm join gcs.cloud

See also:

Deploy a Amazon Linux 2023

NGINX Container - Secure Web Page

April 10, 2025

Why Choose NGINX for Your Web Server?

  • It’s lightweight and high-performance
  • Excellent for serving static content and as a reverse proxy
  • Simple configuration syntax
  • Very popular in containerized environments

So, let’s create a Docker container with Nginx and SSL!

  1. First, create a directory structure:
cd ~
aws s3 cp s3://<BUCKET-NAME>/ . --recursive
sudo yum install unzip tree -y

mkdir nginx-ssl
unzip certs.zip
mv certs nginx-ssl/
unzip html.zip
mv html nginx-ssl/

cd nginx-ssl
mkdir conf
  2. Create nginx.conf in the conf directory (change server_name to match your host):
server {
    listen 443 ssl;
    server_name d11-vdi-lin04.gcs.cloud;
    
    root /usr/share/nginx/html;

    location / {
        index index.html;
    }

    ssl_certificate /etc/nginx/certs/server_001.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;
    ssl_protocols TLSv1.2 TLSv1.3;
}
  3. Create the full certificate chain by concatenating the certificates in the correct order:
cd certs
cat mid-ca.crt ca.crt > ca-bundle.crt
cat server_001.crt mid-ca.crt ca.crt > server-bundle.crt
  4. Create Dockerfile:
FROM nginx:alpine

RUN mkdir -p /etc/nginx/certs

# Copy SSL certificates
COPY certs/ca-bundle.crt /etc/nginx/certs/
COPY certs/server_001.crt /etc/nginx/certs/
COPY certs/server.key /etc/nginx/certs/

COPY conf/nginx.conf /etc/nginx/conf.d/default.conf
COPY html /usr/share/nginx/html

EXPOSE 443

CMD ["nginx", "-g", "daemon off;"]
  5. Make sure your HTML content is organized in a directory structure like this:
 .
└── nginx-ssl
    ├── Dockerfile
    ├── certs
    │   ├── ca-bundle.crt
    │   ├── ca.crt
    │   ├── mid-ca.crt
    │   ├── server-bundle.crt
    │   ├── server.key
    │   ├── server_001.crt
    │   └── server_001.pfx
    ├── conf
    │   └── nginx.conf
    └── html
        ├── colour.conf
        ├── img
        │   └── GCS-AWS-logo_32_v02.png
        ├── index.html
        └── swagger
            └── ui
                └── index
                    ├── img
                    │   └── Tech-Task-v07.png
                    └── index.html
                    
  6. Build and run the container:
# Build the image
sudo docker build -t my-secure-nginx:latest .

# Run the container
sudo docker run -d --name secure-nginx \
-p 443:443 \
--restart always \
my-secure-nginx:latest
  7. Check the status using the curl command.
# -k flag to allow insecure connections
curl -k https://localhost
# Or specify your domain
curl -k https://your-domain.com
# To get more detailed with -v (verbose) flag
curl -kv https://localhost

See also:

Deploy a Amazon Linux 2023

Create a MS SQL Server Container

April 4, 2025

# This is the current folder structure
sh-5.2$ tree
.
├── Dockerfile
├── backups
│   ├── APP-6.3.2-lab_Stage_2.bak
│   ├── APP-6.3.2-lab_Stage_3.bak
│   ├── APP-6.3.2-lab_Stage_4.bak
│   ├── v9.1.23_APP_632_lab_Stage_3.bak
│   └── v9.1.23_APP_632_lab_Stage_4.bak
├── certs
│   ├── server-bundle.crt
│   └── server.key
├── containers
│   └── sql1
│       ├── data [error opening dir]
│       ├── log [error opening dir]
│       └── secrets [error opening dir]
└── mssql.conf
  1. Create the Dockerfile
FROM mcr.microsoft.com/mssql/server:2022-latest

USER root

# Install required dependencies
RUN apt-get update && \
    apt-get install -y curl apt-transport-https gnupg2 && \
    mkdir -p /etc/apt/keyrings && \
    curl -sSL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > /etc/apt/keyrings/microsoft.gpg && \
    chmod 644 /etc/apt/keyrings/microsoft.gpg && \
    echo "deb [signed-by=/etc/apt/keyrings/microsoft.gpg] https://packages.microsoft.com/ubuntu/22.04/prod jammy main" > /etc/apt/sources.list.d/mssql-release.list && \
    apt-get update && \
    ACCEPT_EULA=Y apt-get install -y mssql-tools unixodbc-dev && \
    ln -s /opt/mssql-tools/bin/sqlcmd /usr/bin/sqlcmd && \
    ln -s /opt/mssql-tools/bin/bcp /usr/bin/bcp && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Switch back to default user
USER mssql
  2. Create the mssql.conf file
[network]
tlscert = /var/opt/mssql/secrets/server-bundle.crt
tlskey = /var/opt/mssql/secrets/server.key
tlsprotocols = 1.2
forceencryption = 1
  3. Build an image
# Build new image
sudo docker build -t mssql-with-tools .
  4. Test locally
# Run new container
sudo docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Password123' \
-p 1433:1433 \
-v /data/containers/sql1/data:/var/opt/mssql/data \
-v /data/containers/sql1/log:/var/opt/mssql/log \
-v sql-certs:/var/opt/mssql/secrets:ro \
-v /data/mssql.conf:/var/opt/mssql/mssql.conf:ro \
-v /data/backups:/var/opt/mssql/backups \
--restart always \
--name sql1 \
-d mssql-with-tools
  5. Build a custom container and push it into ECR in AWS.
# The container URI is below
ACCOUNTID.dkr.ecr.ap-southeast-2.amazonaws.com/gcs-sql-server:latest
  6. Then run the script to deploy an MS SQL container
#=============================================================================
# The following approach successfully copy "server.key"
#=============================================================================
# Create a Docker volume for the certificates
sudo docker volume create sql-certs

# Copy the necessary certificate files into the volume
sudo cp /data/certs/server-bundle.crt /var/lib/docker/volumes/sql-certs/_data/
sudo cp /data/certs/server.key /var/lib/docker/volumes/sql-certs/_data

# Change the ownership
sudo chown -R 10001:0 /var/lib/docker/volumes/sql-certs/_data/
sudo chmod -R 600 /var/lib/docker/volumes/sql-certs/_data/

# Retrieve an authentication token and authenticate your Docker client to your registry. Use the AWS CLI:
aws ecr get-login-password --region ap-southeast-2 | sudo docker login --username AWS --password-stdin ACCOUNTID.dkr.ecr.ap-southeast-2.amazonaws.com

# Deploy MS SQL Server container
sudo docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Password123' \
-p 1433:1433 \
-v /data/containers/sql1/data:/var/opt/mssql/data \
-v /data/containers/sql1/log:/var/opt/mssql/log \
-v sql-certs:/var/opt/mssql/secrets:ro \
-v /data/mssql.conf:/var/opt/mssql/mssql.conf:ro \
-v /data/backups:/var/opt/mssql/backups \
--restart always \
--name sql1 \
-d ACCOUNTID.dkr.ecr.ap-southeast-2.amazonaws.com/gcs-sql-server:latest
  7. After the deployment, check the status of the container
# Check the login
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'Password123'

#Check the files
sudo docker exec -it sql1 ls -l /var/opt/mssql/backups

Backup Restore Database by sqlcmd

April 3, 2025

1. Taking Full Backups with sqlcmd

# Run the commands when you reach an important point in the database configuration
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Password123' -Q "BACKUP DATABASE [v7.3.1_HUB_511_lab] TO DISK = '/var/opt/mssql/backups/v7.3.1_HUB_511_lab_Stage_3.bak' WITH FORMAT, INIT, NAME = 'Stage3';"
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Password123' -Q "BACKUP DATABASE [HUB-5.1.1-lab] TO DISK = '/var/opt/mssql/backups/HUB-5.1.1-lab_Stage_3.bak' WITH FORMAT, INIT, NAME = 'Stage3';"

# Check the result
sudo docker exec -it sql1 ls -l /var/opt/mssql/backups/

2. Restoring a Specific Backup

# Restore databases
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Password123' -Q "RESTORE DATABASE [v7.3.1_HUB_511_lab] FROM DISK = '/var/opt/mssql/backups/v7.3.1_HUB_511_lab_Stage_3.bak' WITH REPLACE, RECOVERY;"
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Password123' -Q "RESTORE DATABASE [HUB-5.1.1-lab] FROM DISK = '/var/opt/mssql/backups/HUB-5.1.1-lab_Stage_3.bak' WITH REPLACE, RECOVERY;"

3. Restoring a Specific Backup via SSM

# Restore database via SSM
aws ssm send-command \
    --instance-ids "i-0e0df3af14a11b3d1" \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=[
        "sudo docker exec sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '\''Password123'\'' -Q \"RESTORE DATABASE [v7.3.1_HUB_511_lab] FROM DISK = '\''/var/opt/mssql/backups/v7.3.1_HUB_511_lab_Stage_3.bak'\'' WITH REPLACE, RECOVERY;\""
    ]' \
    --region "ap-southeast-2"
# Check the Log in case of failure
aws ssm list-command-invocations --command-id abab87ca-7abb-4746-8666-fa6ebbe67b51 --details

SQL Server Container with Tools

April 2, 2025

File and Folder Structure at the end

  1. Create mssql.conf
[network]
tlscert = /var/opt/mssql/secrets/server-bundle.crt
tlskey = /var/opt/mssql/secrets/server.key
tlsprotocols = 1.2
forceencryption = 1
  2. Create Dockerfile:
# Use the official Microsoft SQL Server 2022 image as base
FROM mcr.microsoft.com/mssql/server:2022-latest

# Switch to root to install packages
USER root
    
# Install required dependencies
RUN apt-get update && \
    apt-get install -y curl apt-transport-https gnupg2 && \
    mkdir -p /etc/apt/keyrings && \
    curl -sSL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > /etc/apt/keyrings/microsoft.gpg && \
    chmod 644 /etc/apt/keyrings/microsoft.gpg && \
    echo "deb [signed-by=/etc/apt/keyrings/microsoft.gpg] https://packages.microsoft.com/ubuntu/22.04/prod jammy main" > /etc/apt/sources.list.d/mssql-release.list && \
    apt-get update && \
    ACCEPT_EULA=Y apt-get install -y mssql-tools unixodbc-dev && \
    ln -s /opt/mssql-tools/bin/sqlcmd /usr/bin/sqlcmd && \
    ln -s /opt/mssql-tools/bin/bcp /usr/bin/bcp && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Switch back to mssql user
USER mssql
  3. Build an image
# Build new image
sudo docker build -t mssql-with-tools .
  4. Run the container
# Run new container
sudo docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Password123' \
-p 1433:1433 \
-v /data/containers/sql1/data:/var/opt/mssql/data \
-v /data/containers/sql1/log:/var/opt/mssql/log \
-v sql-certs:/var/opt/mssql/secrets:ro \
-v /data/mssql.conf:/var/opt/mssql/mssql.conf:ro \
--restart always \
--name sql1 \
-d mssql-with-tools
  5. Verify installation:
# Test sqlcmd
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -?

Copy Files from a Docker to S3

April 2, 2025

Backup files from Docker Container

  1. Login to the machine running the Docker Container
  2. Copy the backup files from the Docker container to the current directory
sudo docker cp sql1:/var/opt/mssql/backups/HUB-5.1.1-lab_Stage_2.bak ./HUB-5.1.1-lab_Stage_2.bak
sudo docker cp sql1:/var/opt/mssql/backups/HUB-5.1.1-lab_Stage_3.bak ./HUB-5.1.1-lab_Stage_3.bak
sudo docker cp sql1:/var/opt/mssql/backups/HUB-5.1.1-lab_Stage_4.bak ./HUB-5.1.1-lab_Stage_4.bak
sudo docker cp sql1:/var/opt/mssql/backups/v7.3.1_HUB_511_lab_Stage_3.bak ./v7.3.1_HUB_511_lab_Stage_3.bak
sudo docker cp sql1:/var/opt/mssql/backups/v7.3.1_HUB_511_lab_Stage_4.bak ./v7.3.1_HUB_511_lab_Stage_4.bak
  3. Upload them to the S3 bucket
# Change the ownership of the files:
sudo chown ssm-user:ssm-user *.bak

# Create a timestamp variable
TIMESTAMP=$(date +%Y%m%d-%H%M%S)

# Upload both files to the timestamped folder
aws s3 cp HUB-5.1.1-lab_Stage_2.bak s3://gcs-share/db-backup/$TIMESTAMP/
aws s3 cp HUB-5.1.1-lab_Stage_3.bak s3://gcs-share/db-backup/$TIMESTAMP/
aws s3 cp HUB-5.1.1-lab_Stage_4.bak s3://gcs-share/db-backup/$TIMESTAMP/
aws s3 cp v7.3.1_HUB_511_lab_Stage_3.bak s3://gcs-share/db-backup/$TIMESTAMP/
aws s3 cp v7.3.1_HUB_511_lab_Stage_4.bak s3://gcs-share/db-backup/$TIMESTAMP/
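The timestamped folder name keeps each backup batch separate in S3. A quick local check of what the variable expands to (format: year-month-day, dash, hour-minute-second):

```shell
# Preview the timestamped S3 prefix before uploading
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
echo "$TIMESTAMP"                               # e.g. 20250402-143015
echo "s3://gcs-share/db-backup/$TIMESTAMP/"

# Sanity check: 8 digits, a dash, 6 digits
echo "$TIMESTAMP" | grep -Eq '^[0-9]{8}-[0-9]{6}$' && echo "format OK"
```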

RabbitMQ Container - SSL

March 30, 2025

Create a container (SSL)

  1. First, create a new working directory and prepare your certificate files:
mkdir gcs-secure-rabbit
cd gcs-secure-rabbit
mkdir certs
# Copy your certificates to gcs-secure-rabbit/certs:
# - ca.crt
# - mid-ca.crt
# - server-001.crt
# - server-001.key
  2. Set 644 permissions on these certificates
chmod 644 certs/*

  3. Create a rabbitmq.conf (gcs-secure-rabbit/rabbitmq.conf):
# RabbitMQ Configuration File
# Disable non-SSL listeners
listeners.tcp = none
listeners.ssl.default = 5671

# SSL configuration
ssl_options.cacertfile = /etc/rabbitmq/ssl/ca-bundle.crt
ssl_options.certfile = /etc/rabbitmq/ssl/server.crt
ssl_options.keyfile = /etc/rabbitmq/ssl/server.key
ssl_options.verify = verify_peer
ssl_options.depth = 2
ssl_options.fail_if_no_peer_cert = true

# Management SSL configuration
management.ssl.port = 15671
management.ssl.cacertfile = /etc/rabbitmq/ssl/ca-bundle.crt
management.ssl.certfile = /etc/rabbitmq/ssl/server.crt
management.ssl.keyfile = /etc/rabbitmq/ssl/server.key
  4. Create a Dockerfile (gcs-secure-rabbit/Dockerfile):
FROM rabbitmq:3.11.10-management

# Create SSL directory
RUN mkdir -p /etc/rabbitmq/ssl

# Copy certificates
COPY ca.crt mid-ca.crt /etc/rabbitmq/ssl/
COPY server-001.crt /etc/rabbitmq/ssl/server.crt
COPY server-001.key /etc/rabbitmq/ssl/server.key

# Create bundle certificate
RUN cat /etc/rabbitmq/ssl/mid-ca.crt /etc/rabbitmq/ssl/ca.crt > /etc/rabbitmq/ssl/ca-bundle.crt

# Copy config file
COPY rabbitmq.conf /etc/rabbitmq/rabbitmq.conf

# Expose SSL ports
EXPOSE 5671 15671

CMD ["rabbitmq-server"]
  5. Build and run the container:
# Build the image
sudo docker build -t gcs-secure-rabbit:latest .

# Run the container
sudo docker run -d --hostname secure-rabbit --name secure-rabbit \
-p 15671:15671 \
-p 5671:5671 \
--restart always \
gcs-secure-rabbit:latest
  6. Check the container logs after running it:
sudo docker logs secure-rabbit

See also:

RabbitMQ Container - HTTP

Upload Docker Image to ECR

March 29, 2025

Configure in AWS management console

  1. Stay in the working directory where Dockerfile is located (e.g., ~/gcs-rabbit)

  2. Open Repository page in Amazon ECR

  3. Create a repository with the command below

aws ecr create-repository --repository-name gcs-normal-rabbit --region ap-southeast-2

  4. Click “View push commands” and follow the instructions (prefixing the docker commands with sudo)


See also:

RabbitMQ Container - HTTP

RabbitMQ Container - SSL

RabbitMQ Container - HTTP

March 28, 2025

Create a container (HTTP)

  1. Install Docker
sudo yum install docker -y
sudo systemctl start docker
sudo systemctl enable docker
docker --version
sudo docker info
  2. Create a working directory
mkdir gcs-rabbit
cd gcs-rabbit
  3. Create “Dockerfile”
# Use the official RabbitMQ image from the Docker Hub
FROM rabbitmq:3.11.10-management

# Set the default RabbitMQ environment variables
ENV RABBITMQ_DEFAULT_USER=guest
ENV RABBITMQ_DEFAULT_PASS=guest

# Expose ports for RabbitMQ and the management UI
EXPOSE 5672 15672

# Copy rabbitmq.conf if you have additional configurations
COPY rabbitmq.conf /etc/rabbitmq/rabbitmq.conf

# Start RabbitMQ server
CMD ["rabbitmq-server"]
  4. Create “rabbitmq.conf”
# RabbitMQ Configuration File

# Listeners for AMQP (5672) and HTTP management (15672)
listeners.tcp.default = 5672
management.tcp.port = 15672

# Optional: Define a specific IP address to bind to
# (Uncomment the next line to specify a specific IP)
# listeners.tcp.default = 0.0.0.0

# Disable SSL (since you're focusing on HTTP only)
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
  5. Build a Docker Image
sudo docker build -t gcs-normal-rabbit:latest .
sudo docker images
  6. Test the Docker Image locally
sudo docker run -d --name brown -p 5672:5672 -p 15672:15672 gcs-normal-rabbit
sudo docker logs brown

See also:

RabbitMQ Container - SSL

AppStream 2.0 Image Builder

March 27, 2025

  1. Select the right image to be updated

  2. Configure Image Builder

  3. Configure Network

  4. Review

  5. Confirmation

SQL Server - Check Secure Connection

March 27, 2025

sqlcmd -S d11-sql-db001.gcs.cloud -U sa -P Password123
1>
2>
3> exit

sqlcmd -S d11-sql-db001.gcs.cloud -U sa -P Password123 -Q "SELECT session_id, encrypt_option FROM sys.dm_exec_connections WHERE session_id = @@SPID;"

session_id  encrypt_option
        53  FALSE

sysprep in AWS

March 27, 2025

  1. Delete all the items controlled by Group Policy (e.g., Certificates, Firewall Settings)
  2. Open “Amazon EC2Launch Settings” and click “Shutdown with Sysprep”

OpenSSL - Initial Setup

March 10, 2025

OpenSSL Initial Setup

1. Create a folder structure

mkdir -p certs/{ca,mid-ca,server}/{private,certs,newcerts,crl,csr}

2. Change the permissions

chmod -v 700 certs/{ca,mid-ca,server}/private

3. Create index files

touch certs/{ca,mid-ca}/index

4. Set a serial number

openssl rand -hex 16 > certs/ca/serial
openssl rand -hex 16 > certs/mid-ca/serial

5. Copy and place the configuration files

ca.conf - mid-ca.conf
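The steps above can be replayed end to end. A sketch in a scratch directory, written without brace expansion so it is safe for plain sh as well as bash:

```shell
# Replay steps 1-4 in a scratch directory, then list the layout.
set -e
cd "$(mktemp -d)"
for d in ca mid-ca server; do
    mkdir -p "certs/$d/private" "certs/$d/certs" "certs/$d/newcerts" "certs/$d/crl" "certs/$d/csr"
    chmod 700 "certs/$d/private"
done
touch certs/ca/index certs/mid-ca/index
openssl rand -hex 16 > certs/ca/serial
openssl rand -hex 16 > certs/mid-ca/serial
find certs -maxdepth 2 | sort
```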


See also:

OpenSSL (1) - Root CA

OpenSSL (2) - Intermediate CA

OpenSSL (3) - Server Certificate

Managing AWS Accounts in Terminal

February 27, 2025

Register AWS Accounts to the Terminal

  1. Set AWS credentials

  2. Check the current AWS credentials
aws sts get-caller-identity
  3. Clear the AWS credentials from the terminal
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
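Setting the credentials in step 1 usually means exporting the keys into the shell session. A sketch using AWS's documented example placeholders — substitute your own values:

```shell
# Register temporary AWS credentials in the current shell session.
# These are the official AWS documentation placeholders, not real keys.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_SESSION_TOKEN=AQoDYXdzEXAMPLETOKEN
```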

Deploy an Amazon Linux 2023

February 25, 2025

Deploy a Linux machine

  1. Update OS
sudo yum update -y
  2. Update Hostname and check it
sudo hostnamectl set-hostname DEV-VAR-OIDC2.apj.cloud
hostnamectl
  3. Update TimeZone and check it
sudo timedatectl set-timezone Australia/Sydney
timedatectl
  4. DNS Settings - Make sure all the DNS servers are registered
sudo vi /etc/resolv.conf

  5. Install some components for any Linux OS
sudo yum install sssd-ad sssd-tools realmd adcli
  6. Install some components for Amazon Linux 2023.
sudo yum install oddjob oddjob-mkhomedir
  7. Check the status of Active Directory
realm discover apj.cloud

Git - Ignore Settings

February 25, 2025

How to Ignore Uploading Folders and Files to GitHub

For example .venv folder

  1. Open your project folder in VS Code.

  2. Open .gitignore file in the root of the project

  3. Add the following line to .gitignore:

    .venv/
    
  4. Save the file. Git will then ignore the .venv folder, and it won’t be tracked in your repository.

If .venv was already committed before, you’ll need to remove it from Git history using:

git rm -r --cached .venv
git commit -m "Removed .venv from repository"
git push origin main  # or your current branch

You can check if .venv is ignored by Git using the following command
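The check command itself did not survive above; git check-ignore is the standard tool for this. A self-contained sketch — the throwaway repo below just makes the demo runnable anywhere; in your project, only the final command is needed:

```shell
# Throwaway repo so the demo stands alone
cd "$(mktemp -d)" && git init -q .
printf '.venv/\n' > .gitignore
mkdir .venv

# Prints the matching .gitignore rule when the path is ignored;
# exits non-zero when it is not.
git check-ignore -v .venv
```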

Create a New Blog Post

February 25, 2025

Create a new blog post in HUGO

  1. Create a new file
hugo new posts/create-a-new-blog-post.md
  2. Add tags and categories to the front matter
---
date: '2025-02-25T22:21:46+11:00'
draft: false
title: 'Create a New Blog Post'
tags: ["Hugo", "Blog","Homepage"]
categories: ["Technology"]
---
  3. Edit the blog page
  4. Check the page with the local development server (either command works)
hugo server
hugo serve
  5. Open the page from the browser at http://localhost:1313/

Terraform Link to "tfvars" File

February 18, 2025

Terraform tfvars file

When a tfvars file is in a different location, you must specify it using the “-var-file” option. However, creating a symbolic link can simplify the command operation.

  1. Create a symbolic link
ln -s ../common.tfvars terraform.tfvars
  2. Run a simple terraform command without the option
terraform plan
  3. Screenshot of the process
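The trick works because Terraform auto-loads any file named terraform.tfvars in the working directory. A self-contained sketch of the layout — the region variable is just an illustrative value:

```shell
# Shared tfvars one level up, symlinked into the working directory.
cd "$(mktemp -d)"
echo 'region = "ap-southeast-2"' > common.tfvars
mkdir env && cd env
ln -s ../common.tfvars terraform.tfvars
cat terraform.tfvars
# prints: region = "ap-southeast-2"
```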

How to Disable Sounds in VS Code

February 18, 2025

How to disable sounds in VS Code.

  1. Open the Command Palette (Ctrl + Shift + P).

  2. Search for “Preferences: Open Settings (JSON)” and select it.

  3. Add the following line inside the JSON file:
"editor.accessibilitySupport": "off",

Create ICO File From SVG File

February 10, 2025

Using GIMP

  1. Open your SVG file in “GIMP”
  2. Resize to “256x256” pixels
  3. Click “File” → “Export As”, choose “.ico” format
  4. Save the file

Setup Fleet Manager

February 9, 2025

How to Enable GUI Access via Fleet Manager

  1. Ensure SSM Agent is Installed and Running
    • Windows EC2 instances must have the “SSM Agent” installed and running.
    • Check the status with the PowerShell command
Get-Service AmazonSSMAgent
  2. Attach a role with the following policies
    • AmazonSSMManagedInstanceCore
    • AmazonSSMFullAccess (This is required for GUI access via Fleet Manager)


How to Access EC2 via Fleet Manager

  1. Go to “Systems Manager” → “Fleet Manager”

OpenSSL (3) - Server Certificate

February 9, 2025

Create a Server Certificate

1. Generate a key file (this can be a one-off operation)

openssl genrsa -out server/private/server.key 2048

2. Generate a Certificate Signing Request (CSR)

openssl req -config mid-ca/mid-ca.conf -key server/private/server.key -new -sha256 -out server/csr/server.csr

e.g., CN=GCS-Server-Certificate-v0x

3. Sign the request (CSR) by Sub-CA

openssl ca -config mid-ca/mid-ca.conf -extensions server_cert -days 3650 -notext -in server/csr/server.csr -out server/certs/server.crt

4. Generate PFX with NO password

openssl pkcs12 -inkey server/private/server.key -in server/certs/server.crt -export -out server/certs/server.pfx -passout pass:

5. Result

OpenSSL (2) - Intermediate CA

February 9, 2025

Create an “Intermediate CA” certificate

1. Generate a key file for “Intermediate CA”

openssl genrsa -aes256 -out mid-ca/private/mid-ca.key 4096

2. Change the permission of mid-ca.key

chmod 400 mid-ca/private/mid-ca.key

3. Generate a Certificate Signing Request (CSR)

openssl req -config ca/ca.conf -new -key mid-ca/private/mid-ca.key -sha256 -out mid-ca/csr/mid-ca.csr

4. Sign the request file by Root-CA

openssl ca -config ca/ca.conf -extensions v3_mid_ca -days 3650 -notext -in mid-ca/csr/mid-ca.csr -out mid-ca/certs/mid-ca.crt

5. Change the permission of mid-ca.crt

chmod 444 mid-ca/certs/mid-ca.crt

6. Check the backup file created in the newcerts directory

OpenSSL (1) - Root CA

February 9, 2025

Create a “Root CA” certificate

1. Generate a key file for “Root CA”

openssl genrsa -aes256 -out ca/private/ca.key 4096

2. Change the permission of ca.key

chmod 400 ca/private/ca.key

3. Check the content of ca.key

openssl rsa -noout -text -in ca/private/ca.key

4. Generate a certificate file for “Root CA”

openssl req -config ca/ca.conf -key ca/private/ca.key -new -x509 -days 3650 -sha256 -extensions v3_ca -out ca/certs/ca.crt

5. Change the permission of ca.crt

chmod 444 ca/certs/ca.crt 

6. Check the contents of ca.crt

openssl x509 -noout -text -in ca/certs/ca.crt

See also:

OpenSSL - Initial Setup

Deadlock Issue in SQL Server

February 9, 2025

What is a Deadlock?

A deadlock in SQL Server occurs when two or more processes hold locks on resources and each process is waiting for the other to release its lock, causing a cycle where none can proceed.

Why Does It Occur?

Deadlocks typically happen due to:

  1. Concurrent Transactions: Multiple transactions access the same resources in a conflicting order.
  2. Locking Order: Processes acquire locks in different sequences, leading to circular wait conditions.
  3. Long-Running Transactions: Holding locks for an extended period increases the chance of conflicts.
  4. Insufficient Indexing: Poor indexing leads to table scans, increasing lock contention.
  5. Blocking Issues: Heavy blocking can escalate to deadlocks if multiple processes wait indefinitely.

How Poor Maintenance Can Lead to Deadlocks

  1. Fragmented Indexes & Performance Degradation
    • If the database has grown significantly and indexes haven’t been maintained (i.e., no reindexing or rebuilding), queries will take longer to execute.
    • Longer query execution times mean locks are held for extended periods, increasing the chances of deadlocks.
  2. Full Logging & Large Transaction Logs
    • If transaction logs are continuously growing without proper backups or truncation, SQL Server might struggle with log management, leading to slower transaction processing.
    • Slow transactions hold locks for longer, making deadlocks more likely.
  3. Mass Deletes Without Reindexing
    • Deleting a large number of records without reindexing can leave fragmented pages and inefficient query plans.
    • The database engine might perform table scans instead of index seeks, leading to increased lock contention.
  4. Query Plan Changes (Due to Increased Data Size)
    • As the database grows, SQL Server might generate different execution plans that were not optimized for the current data size.
    • This can lead to more locking and blocking, increasing the chance of deadlocks.

Demo - Create a test table

1. Create a Large Table & Insert Sample Data

USE master;
GO
CREATE DATABASE ConflictTestDB;
GO
USE ConflictTestDB;
GO

CREATE TABLE Orders (
    OrderID INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID INT,
    OrderDate DATETIME DEFAULT GETDATE(),
    OrderAmount DECIMAL(10,2),
    Status VARCHAR(20)
);
GO

-- Insert ~1 Million Rows
SET NOCOUNT ON;
DECLARE @i INT = 1;
BEGIN TRAN
WHILE @i <= 1000000
BEGIN
    INSERT INTO Orders (CustomerID, OrderAmount, Status)
    VALUES (ABS(CHECKSUM(NEWID())) % 1000, RAND() * 1000, 'Pending');
    SET @i = @i + 1;
END
COMMIT TRAN;
GO

2. Create Index Fragmentation (Without Reindexing)

-- Delete a large number of records randomly to cause index fragmentation
DELETE FROM Orders WHERE OrderID % 10 = 0;
GO

-- Fill gaps with new inserts (in random order)
SET NOCOUNT ON;
DECLARE @i INT = 1;
BEGIN TRAN
WHILE @i <= 100000
BEGIN
    INSERT INTO Orders (CustomerID, OrderAmount, Status)
    VALUES (ABS(CHECKSUM(NEWID())) % 1000, RAND() * 1000, 'Pending');
    SET @i = @i + 1;
END
COMMIT TRAN;
GO

3. Check Fragmentation for All Indexes on [Orders] Table

SELECT 
    index_id,
    index_type_desc,
    avg_fragmentation_in_percent,
    avg_page_space_used_in_percent,
    page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Orders'), NULL, NULL, 'LIMITED');

avg_fragmentation_in_percent → Shows fragmentation level:

How to Setup PasteImage

February 9, 2025

How to Paste Images into Your Hugo Blog in VS Code on Mac

1. Install the “Paste Image” Extension in VS Code

  • Open VS Code
  • Press Cmd + Shift + X to open Extensions
  • Search for “Paste Image” by mushan
  • Click Install

2. Configure the Extension to Save Images in Your Blog Folder

  • Open Settings (Cmd + ,)

  • Search for “pasteImage.path”

  • Set it to “${currentFileDir}”. This ensures images are saved inside the same folder as the Markdown file.