This isn’t the “nginx proxy_pass to localhost:3000” tutorial. Here I document the NGINX configuration I run in production — multiple upstreams, zone-based rate limiting, auto-detection of missing configs, iterative error fixing, SSL automation with fallback, and Go application deployment with systemd hardening.
Note: Domains, paths, and service names have been anonymized. Configurations reflect real infrastructure.
Architecture: 4+ simultaneous upstreams
In real production, it’s not one service behind NGINX — it’s several:
upstream portal {
    server 127.0.0.1:4321;   # Astro SSR (public portal)
}
upstream dashboard_client {
    server 127.0.0.1:3001;   # React SPA (client panel)
}
upstream dashboard_admin {
    server 127.0.0.1:3002;   # React SPA (admin panel)
}
upstream backend_api {
    server 127.0.0.1:8000;   # FastAPI/Uvicorn (REST API)
}
Each upstream is a service with its own systemd unit, port, rate limit, and cache policy.
Main nginx.conf — what I actually use
user www-data;
worker_processes auto;
worker_rlimit_nofile 65535;
pid /run/nginx.pid;

events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    charset utf-8;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 1000;
    types_hash_max_size 2048;
    server_tokens off;

    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    client_max_body_size 10m;
    large_client_header_buffers 4 8k;

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_types
        text/plain text/css text/javascript
        application/javascript application/json application/xml
        image/svg+xml application/wasm;

    log_format main '$remote_addr - [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    'rt=$request_time urt=$upstream_response_time';
    access_log /var/log/nginx/access.log main buffer=32k flush=5s;
    error_log /var/log/nginx/error.log warn;

    # Zone-based rate limiting
    limit_req_zone $binary_remote_addr zone=api:10m rate=30r/m;
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=auth:10m rate=5r/m;
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    include /etc/nginx/conf.d/*.conf;
}
Complete Virtual Host — HTTPS + Security Headers
server {
    listen 80;
    listen [::]:80;
    server_name app.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    location / {
        return 301 https://$server_name$request_uri;
    }
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

    limit_conn addr 100;
    # API with strict rate limiting
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://backend_api;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection "";
        proxy_connect_timeout 5s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;
        # Careful: add_header here stops inheritance of the server-level
        # add_header directives for this location — repeat them if you
        # need the security headers on API responses too.
        add_header Cache-Control "no-store";
    }
    # Client dashboard — React SPA served by its own upstream
    location /client/ {
        proxy_pass http://dashboard_client/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    # WebSocket
    location /ws {
        proxy_pass http://backend_api;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
    # Static assets — served directly by NGINX (requires a root/alias
    # pointing at the built files so they exist on disk)
    location ~* \.(js|jsx|css|png|jpg|svg|woff|woff2|ttf|eot|wasm)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }
    # Portal (catch-all)
    location / {
        limit_req zone=general burst=50 nodelay;
        proxy_pass http://portal;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection "";
    }

    location ~ /\. { deny all; access_log off; }

    error_page 502 503 504 /50x.html;
    location = /50x.html { root /usr/share/nginx/html; internal; }
}
Generating NGINX configs via Go
I also generate nginx configs from Go. When provisioning SPA panels with an API proxy:
func generatePanelNginxConfig(domain, rootDir string, port int) string {
	return fmt.Sprintf(`server {
    listen %d;
    server_name %s;
    root %s;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location ~* \.(js|css|png|svg|woff2)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
}`, port, domain, rootDir)
}
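Before a generated block like this lands in sites-available, a cheap structural check catches template bugs early. This is a sketch, deliberately not a parser (braces inside quoted strings would confuse it); nginx -t remains the authority before any reload:

```go
package main

import "fmt"

// checkBraces is a quick structural sanity check on a generated nginx
// config: every '{' needs a matching '}'. It catches the most common
// template bug (a truncated or doubled block) before the file is
// written to disk.
func checkBraces(conf string) error {
	depth := 0
	for i, r := range conf {
		switch r {
		case '{':
			depth++
		case '}':
			depth--
			if depth < 0 {
				return fmt.Errorf("unexpected '}' at byte %d", i)
			}
		}
	}
	if depth != 0 {
		return fmt.Errorf("%d unclosed '{' block(s)", depth)
	}
	return nil
}

func main() {
	sample := "server {\n    location / { try_files $uri /index.html; }\n}"
	fmt.Println(checkBraces(sample))
}
```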
Auto-detection of missing configs
When deploying nginx configs that reference undefined upstreams, rate limit zones, or cache zones, NGINX fails. I solve this with auto-detection:
#!/bin/bash
AUTO_CONF="/etc/nginx/conf.d/auto-generated.conf"
> "$AUTO_CONF"

# Auto-detect undefined upstreams
UPSTREAM_NAMES=$(grep -rhoP 'proxy_pass\s+https?://\K[a-zA-Z_][a-zA-Z0-9_-]*' \
    /etc/nginx/sites-available/ 2>/dev/null | grep -v '^localhost$' | sort -u)
for UNAME in $UPSTREAM_NAMES; do
    if ! grep -rq "upstream[[:space:]]*${UNAME}[[:space:]]*{" /etc/nginx/; then
        SERVICE_BASE=$(echo "$UNAME" | sed 's/_backend$//' | sed 's/_upstream$//')
        PORT=$(grep -oP '\-\-port[= ]+\K\d+' \
            "/etc/systemd/system/${SERVICE_BASE}.service" 2>/dev/null | head -1)
        [ -n "$PORT" ] && echo "upstream ${UNAME} { server 127.0.0.1:${PORT}; }" >> "$AUTO_CONF"
    fi
done

# Auto-detect undefined rate limit zones
ZONES=$(grep -rhoP 'limit_req\s+zone=\K[a-zA-Z_]+' /etc/nginx/sites-available/ | sort -u)
for ZONE in $ZONES; do
    grep -rq "limit_req_zone.*zone=${ZONE}:" /etc/nginx/ || \
        echo "limit_req_zone \$binary_remote_addr zone=${ZONE}:10m rate=20r/s;" >> "$AUTO_CONF"
done
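When the provisioning tool is Go rather than bash (as with the generator above), the same upstream-name extraction is a small regexp job. A sketch; the pattern mirrors the grep above, so IP literals never match because the first character must be a letter or underscore:

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
)

// proxyPassRe mirrors the grep -rhoP pattern in the bash version.
var proxyPassRe = regexp.MustCompile(`proxy_pass\s+https?://([a-zA-Z_][a-zA-Z0-9_-]*)`)

// findUpstreamNames extracts candidate upstream names referenced by
// proxy_pass directives, deduplicated and sorted. localhost is skipped,
// matching the bash filter.
func findUpstreamNames(conf string) []string {
	seen := map[string]bool{}
	for _, m := range proxyPassRe.FindAllStringSubmatch(conf, -1) {
		if name := m[1]; name != "localhost" {
			seen[name] = true
		}
	}
	names := make([]string, 0, len(seen))
	for n := range seen {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	conf := "location / { proxy_pass http://portal; }\n" +
		"location /api/ { proxy_pass http://backend_api; }"
	fmt.Println(findUpstreamNames(conf)) // [backend_api portal]
}
```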
Iterative NGINX error fixing
With dozens of sites in sites-enabled, broken configs happen. Instead of debugging one by one, I use iterative fixing:
for attempt in $(seq 1 20); do
    ERRORS=$(nginx -t 2>&1)
    if echo "$ERRORS" | grep -q 'test is successful'; then
        echo " ✓ Nginx config OK"
        break
    fi
    BAD_SITE=$(echo "$ERRORS" | grep -oP 'in /etc/nginx/sites-enabled/\K[^:]+' | head -1)
    if [ -n "$BAD_SITE" ]; then
        echo " ✗ Disabling broken site: $BAD_SITE"
        rm -f "/etc/nginx/sites-enabled/$BAD_SITE"
    else
        echo "WARNING: Nginx invalid — $(echo "$ERRORS" | tail -1)"
        break
    fi
done
nginx -t -q && sudo systemctl reload nginx  # only reload once the config actually passes
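The “find the offending file” step translates directly to Go tooling too; here is a sketch that parses nginx -t output (the error format in the comment is what current nginx versions emit, but treat the exact shape as an assumption and keep the empty-string fallback):

```go
package main

import (
	"fmt"
	"regexp"
)

// badSiteRe matches the filename nginx names in its error output, e.g.
//   nginx: [emerg] unknown directive "proxxy_pass" in /etc/nginx/sites-enabled/app.conf:12
var badSiteRe = regexp.MustCompile(`in /etc/nginx/sites-enabled/([^:]+):`)

// badSiteFromNginxErr returns the first sites-enabled file mentioned in
// `nginx -t` output, or "" when no file is identified (in which case
// the caller should stop instead of deleting anything).
func badSiteFromNginxErr(output string) string {
	if m := badSiteRe.FindStringSubmatch(output); m != nil {
		return m[1]
	}
	return ""
}

func main() {
	out := `nginx: [emerg] unknown directive "proxxy_pass" in /etc/nginx/sites-enabled/app.conf:12`
	fmt.Println(badSiteFromNginxErr(out)) // app.conf
}
```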
SSL with Certbot + self-signed fallback
In production, certbot doesn’t always work first try (DNS not propagated, Let’s Encrypt rate limit). I automate the fallback:
DOMAIN="app.example.com"
certbot certonly --webroot -w /var/www/certbot \
    -d "$DOMAIN" --non-interactive --agree-tos \
    --register-unsafely-without-email
if [ $? -ne 0 ]; then
    echo "Certbot failed — generating self-signed fallback"
    mkdir -p "/etc/letsencrypt/archive/$DOMAIN" "/etc/letsencrypt/live/$DOMAIN"
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout "/etc/letsencrypt/archive/$DOMAIN/privkey1.pem" \
        -out "/etc/letsencrypt/archive/$DOMAIN/fullchain1.pem" \
        -subj "/CN=$DOMAIN"
    ln -sf "../../archive/$DOMAIN/fullchain1.pem" "/etc/letsencrypt/live/$DOMAIN/fullchain.pem"
    ln -sf "../../archive/$DOMAIN/privkey1.pem" "/etc/letsencrypt/live/$DOMAIN/privkey.pem"
fi
# Append the renewal job instead of clobbering the existing root crontab
( sudo crontab -l 2>/dev/null; echo "0 3 * * * certbot renew --quiet --post-hook 'systemctl reload nginx'" ) | sudo crontab -
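The fallback has a failure mode of its own: a domain can silently keep serving the throwaway self-signed cert forever. A small days-to-expiry check in Go can feed monitoring; a minimal sketch (selfSignedPEM exists only to keep the example self-contained; in practice you would read live/&lt;domain&gt;/fullchain.pem):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"errors"
	"fmt"
	"math/big"
	"time"
)

// daysUntilExpiry parses the first CERTIFICATE block in pemBytes and
// returns whole days until its NotAfter date.
func daysUntilExpiry(pemBytes []byte) (int, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		return 0, errors.New("no certificate PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return 0, err
	}
	return int(time.Until(cert.NotAfter).Hours() / 24), nil
}

// selfSignedPEM generates a throwaway cert so the example runs without
// touching /etc/letsencrypt.
func selfSignedPEM(days int) []byte {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "app.example.com"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Duration(days) * 24 * time.Hour),
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
}

func main() {
	days, err := daysUntilExpiry(selfSignedPEM(365))
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Printf("certificate expires in %d day(s)\n", days)
}
```

Alerting when the value drops below ~20 days also catches a renewal cron that silently stopped firing.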
Go production deploy with systemd hardening
Optimized build
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o /usr/local/bin/myapp .
systemd unit with full sandbox
[Unit]
Description=Production Go Service
After=network.target
[Service]
Type=simple
User=www-data
ExecStart=/usr/local/bin/myapp
Restart=always
RestartSec=5
Environment=PORT=8080
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/myapp/data
PrivateTmp=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectControlGroups=true
ProtectKernelModules=true
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
Go server with graceful shutdown
What I run in production isn’t log.Fatal(http.ListenAndServe(...)) — it’s graceful shutdown:
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	// ... register handlers on mux ...

	server := &http.Server{
		Addr:         "127.0.0.1:8080",
		Handler:      mux,
		ReadTimeout:  15 * time.Second,
		WriteTimeout: 15 * time.Second,
		IdleTimeout:  60 * time.Second,
	}

	go func() {
		log.Printf("Server starting on %s", server.Addr)
		if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	<-quit

	log.Println("Shutting down gracefully...")
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := server.Shutdown(ctx); err != nil {
		log.Fatalf("forced shutdown: %v", err)
	}
	log.Println("Server stopped cleanly")
}
Rate limiting middleware in Go
func (rl *RateLimiter) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip := r.Header.Get("X-Real-IP")
		if ip == "" {
			ip = r.RemoteAddr
		}
		rl.mu.Lock()
		now := time.Now()
		var valid []time.Time
		for _, t := range rl.requests[ip] {
			if now.Sub(t) < rl.window {
				valid = append(valid, t)
			}
		}
		rl.requests[ip] = valid
		if len(valid) >= rl.max {
			rl.mu.Unlock()
			w.WriteHeader(http.StatusTooManyRequests)
			return
		}
		rl.requests[ip] = append(rl.requests[ip], now)
		rl.mu.Unlock()
		next.ServeHTTP(w, r)
	})
}
Reading real headers from NGINX in Go
func getClientIP(r *http.Request) string {
	if ip := r.Header.Get("X-Real-IP"); ip != "" {
		return ip
	}
	if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
		return strings.TrimSpace(strings.Split(xff, ",")[0])
	}
	host, _, _ := net.SplitHostPort(r.RemoteAddr)
	return host
}
Post-deploy verification script
#!/bin/bash
sudo nginx -t

for svc in nginx myapp-api myapp-portal; do
    STATUS=$(systemctl is-active "$svc")
    [ "$STATUS" = "active" ] && echo " ✓ $svc" || echo " ✗ $svc: $STATUS"
done

# Anchor on ':port' so e.g. '80' doesn't also match inside '8080'
ss -tlnp | grep -E ':(80|443|8000|8080|4321)\b'

for PORT in 8000 8080 4321; do
    CODE=$(curl -s -o /dev/null -w '%{http_code}' "http://127.0.0.1:$PORT/")
    [ "$CODE" = "200" ] && echo " ✓ Port $PORT: HTTP $CODE" || echo " ✗ Port $PORT: HTTP $CODE"
done
Performance tips
- keepalive on upstream — reuses TCP connections between NGINX and Go
- gzip in NGINX, not Go — frees the app for business logic
- sendfile on for statics — kernel zero-copy, bypasses userspace
- HTTP/2 — automatic multiplexing with SSL
- access_log off for statics — reduces disk I/O
- Connection draining — graceful shutdown in Go prevents 502 during deploys
Conclusion
The NGINX + Go + systemd hardening combination is one of the most robust stacks available. NGINX handles SSL, compression, rate limiting, cache, and protection; Go focuses on business logic with native performance; and systemd ensures everything restarts, logs, and runs sandboxed.
The difference isn’t knowing how to configure proxy_pass — it’s knowing how to automate config generation, detect problems before reload, have certificate fallbacks, and monitor everything with self-running scripts.
This article documents real infrastructure configurations. Domains and service names have been anonymized. Use on your own infrastructure or with authorization.
Rafael Cavalcanti da Silva — Fullstack Developer & Security Specialist rafaelroot.com