This is not the "nginx proxy_pass to localhost:3000" tutorial. Here I document the NGINX setup I run in production — with multiple upstreams, per-zone rate limiting, auto-detection of missing configs, iterative error fixing, SSL automation with a fallback, and deployment of Go applications with systemd hardening.
Note: Domains, paths, and service names have been anonymized. The configurations reflect real infrastructure.
Architecture: 4+ simultaneous upstreams
In real production, it is not one service behind NGINX — it is several:
# /etc/nginx/nginx.conf
upstream portal {
server 127.0.0.1:4321; # Astro SSR (public portal)
}
upstream dashboard_client {
server 127.0.0.1:3001; # React SPA (client panel)
}
upstream dashboard_admin {
server 127.0.0.1:3002; # React SPA (admin panel)
}
upstream backend_api {
server 127.0.0.1:8000; # FastAPI/Uvicorn (REST API)
}
Each upstream is a service with its own systemd unit, port, rate limit, and cache policy.
The main nginx.conf — what I actually use
user www-data;
worker_processes auto;
worker_rlimit_nofile 65535;
pid /run/nginx.pid;
events {
worker_connections 4096;
multi_accept on;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
charset utf-8;
# Performance
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
keepalive_requests 1000;
types_hash_max_size 2048;
server_tokens off; # Hide the version
# Tuned buffers
client_body_buffer_size 16k;
client_header_buffer_size 1k;
client_max_body_size 10m;
large_client_header_buffers 4 8k;
# Gzip
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 4;
gzip_min_length 256;
gzip_types
text/plain text/css text/javascript
application/javascript application/json application/xml
image/svg+xml application/wasm;
# Logging with metrics
log_format main '$remote_addr - [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'rt=$request_time urt=$upstream_response_time';
access_log /var/log/nginx/access.log main buffer=32k flush=5s;
error_log /var/log/nginx/error.log warn;
# Rate limiting — separate zones per traffic type
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/m;
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=auth:10m rate=5r/m;
limit_conn_zone $binary_remote_addr zone=addr:10m;
include /etc/nginx/conf.d/*.conf;
}
Complete virtual host — HTTPS + security headers
# /etc/nginx/conf.d/app.conf
# HTTP → HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name app.example.com;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$server_name$request_uri;
}
}
# HTTPS
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name app.example.com;
# SSL — Let's Encrypt
ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
# SSL hardening
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
# Full set of security headers
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
# Connection limits
limit_conn addr 100;
# API with strict rate limiting
location /api/ {
limit_req zone=api burst=20 nodelay;
limit_req_status 429;
proxy_pass http://backend_api;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
proxy_connect_timeout 5s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
add_header Cache-Control "no-store";
}
# Client dashboard — React SPA behind the proxy
location /client/ {
proxy_pass http://dashboard_client/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# WebSocket support
location /ws {
proxy_pass http://backend_api;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
# Static assets — served directly by NGINX
location ~* \.(js|jsx|ts|tsx|mjs|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot|wasm)$ {
root /var/www/app; # adjust to wherever the built assets live — without a root these requests would 404
expires 30d;
add_header Cache-Control "public, immutable";
access_log off;
}
# Portal (catch-all)
location / {
limit_req zone=general burst=50 nodelay;
proxy_pass http://portal;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
}
# Block dotfiles
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
# Error pages
error_page 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
internal;
}
}
Generating NGINX configs from Go
One of the things I automate is generating NGINX configs programmatically. When I need to provision SPA panels with an API proxy:
func generatePanelNginxConfig(domain, rootDir string, port int) string {
return fmt.Sprintf(`server {
listen %d;
listen [::]:%d;
server_name %s;
root %s;
index index.html;
types {
application/javascript js jsx mjs;
text/typescript ts tsx;
text/css css;
text/html html htm;
application/json json;
image/svg+xml svg;
application/wasm wasm;
}
# SPA — every route falls back to index.html
location / {
try_files $uri $uri/ /index.html;
}
# API reverse proxy
location /api/ {
proxy_pass http://127.0.0.1:8000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Asset caching
location ~* \.(js|jsx|css|png|jpg|svg|woff|woff2|ttf|eot)$ {
expires 30d;
add_header Cache-Control "public, immutable";
}
}`, port, port, domain, rootDir)
}
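Before writing a generated config to disk, a cheap pre-check is brace balancing; `nginx -t` catches this too, but only after the file lands in sites-available. The `braceBalanced` helper below is a hypothetical addition, not part of the generator above:

```go
package main

import (
	"fmt"
	"strings"
)

// braceBalanced is a hypothetical sanity check for generated NGINX
// configs: it verifies every "{" has a matching "}" before the file
// is written out and nginx -t is invoked.
func braceBalanced(conf string) bool {
	depth := 0
	for _, r := range conf {
		switch r {
		case '{':
			depth++
		case '}':
			depth--
			if depth < 0 {
				return false // closing brace with no open
			}
		}
	}
	return depth == 0
}

func main() {
	conf := "server {\n  location / {\n    try_files $uri /index.html;\n  }\n}\n"
	fmt.Println(braceBalanced(conf))                            // true
	fmt.Println(braceBalanced(strings.TrimSuffix(conf, "}\n"))) // false
}
```

It will not catch every syntax error, of course, but it rejects the most common template bug (a missing closing brace) before NGINX ever sees the file.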
Auto-detection of missing configs
When I deploy NGINX configs that reference upstreams, rate-limit zones, or cache zones that do not exist, NGINX fails to load. I solve this with auto-detection that generates the missing definitions:
#!/bin/bash
# auto_nginx_fix.sh — detect and generate missing configs
AUTO_CONF="/etc/nginx/conf.d/auto-generated.conf"
> "$AUTO_CONF"
# 1. Auto-detect undefined upstreams
UPSTREAM_NAMES=$(grep -rhoP 'proxy_pass\s+https?://\K[a-zA-Z_][a-zA-Z0-9_-]*' \
/etc/nginx/sites-available/ 2>/dev/null | grep -v '^localhost$' | sort -u)
for UNAME in $UPSTREAM_NAMES; do
if ! grep -rq "upstream[[:space:]]*${UNAME}[[:space:]]*{" /etc/nginx/; then
# Try to discover the port from the systemd unit
SERVICE_BASE=$(echo "$UNAME" | sed 's/_backend$//' | sed 's/_upstream$//')
PORT=""
for SVC_FILE in /etc/systemd/system/${SERVICE_BASE}.service; do
[ -f "$SVC_FILE" ] || continue
PORT=$(grep -oP '\-\-port[= ]+\K\d+' "$SVC_FILE" | head -1)
[ -n "$PORT" ] && break
done
[ -n "$PORT" ] && echo "upstream ${UNAME} { server 127.0.0.1:${PORT}; }" >> "$AUTO_CONF"
fi
done
# 2. Auto-detect undefined rate-limit zones
LIMIT_REQ_ZONES=$(grep -rhoP 'limit_req\s+zone=\K[a-zA-Z_][a-zA-Z0-9_]*' \
/etc/nginx/sites-available/ | sort -u)
for ZONE in $LIMIT_REQ_ZONES; do
if ! grep -rq "limit_req_zone.*zone=${ZONE}:" /etc/nginx/; then
echo "limit_req_zone \$binary_remote_addr zone=${ZONE}:10m rate=20r/s;" >> "$AUTO_CONF"
fi
done
# 3. Auto-detect undefined proxy cache paths
CACHE_NAMES=$(grep -rhoP 'proxy_cache\s+\K[a-zA-Z_][a-zA-Z0-9_]*' \
/etc/nginx/sites-available/ | sort -u)
for CACHE in $CACHE_NAMES; do
if ! grep -rq "proxy_cache_path.*keys_zone=${CACHE}:" /etc/nginx/; then
echo "proxy_cache_path /var/cache/nginx/${CACHE} levels=1:2 \
keys_zone=${CACHE}:10m max_size=100m inactive=60m;" >> "$AUTO_CONF"
mkdir -p "/var/cache/nginx/${CACHE}"
chown www-data:www-data "/var/cache/nginx/${CACHE}"
fi
done
Iterative NGINX error fix
When you have dozens of sites in sites-enabled, broken configs are common. Instead of debugging them one by one, I use an iterative fix that disables problem sites until NGINX validates:
# Test and fix iteratively
for attempt in $(seq 1 20); do
ERRORS=$(nginx -t 2>&1)
if echo "$ERRORS" | grep -q 'test is successful'; then
echo " ✓ Nginx config OK"
break
fi
BAD_SITE=$(echo "$ERRORS" | grep -oP 'in /etc/nginx/sites-enabled/\K[^:]+' | head -1)
if [ -n "$BAD_SITE" ]; then
echo " ✗ Disabling broken site: $BAD_SITE"
rm -f "/etc/nginx/sites-enabled/$BAD_SITE"
else
echo "WARNING: Nginx config still invalid — $(echo "$ERRORS" | tail -1)"
break
fi
done
sudo systemctl reload nginx
SSL with Certbot + self-signed fallback
In production, certbot does not always work on the first try (DNS has not propagated, Let's Encrypt rate limits). I automate the fallback:
DOMAIN="app.example.com"
# Attempt 1: Certbot via webroot (must match the acme-challenge root in the vhost)
certbot certonly --webroot -w /var/www/certbot \
-d "$DOMAIN" --non-interactive --agree-tos \
--register-unsafely-without-email
if [ $? -ne 0 ]; then
echo "Certbot failed — generating a self-signed cert as fallback"
mkdir -p "/etc/letsencrypt/archive/$DOMAIN"
mkdir -p "/etc/letsencrypt/live/$DOMAIN"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout "/etc/letsencrypt/archive/$DOMAIN/privkey1.pem" \
-out "/etc/letsencrypt/archive/$DOMAIN/fullchain1.pem" \
-subj "/CN=$DOMAIN"
# Symlinks mimicking certbot's directory layout
ln -sf "../../archive/$DOMAIN/fullchain1.pem" \
"/etc/letsencrypt/live/$DOMAIN/fullchain.pem"
ln -sf "../../archive/$DOMAIN/privkey1.pem" \
"/etc/letsencrypt/live/$DOMAIN/privkey.pem"
fi
# Automatic renewal — append to root's crontab instead of overwriting it
( sudo crontab -l 2>/dev/null; \
echo "0 3 * * * certbot renew --quiet --post-hook 'systemctl reload nginx'" ) \
| sudo crontab -
Deploying Go to production with systemd hardening
Optimized build
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
go build -ldflags="-s -w" -o /usr/local/bin/myapp .
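The `-X` linker flag pairs well with `-s -w` (which strip the symbol table and DWARF debug info) for stamping a version into the binary at build time. A minimal sketch — `main.version` is a hypothetical variable, not part of the original build:

```go
package main

import "fmt"

// version is overwritten at build time, e.g.:
//   go build -ldflags="-s -w -X main.version=1.4.2" -o myapp .
// Without the flag it keeps its default value.
var version = "dev"

func main() {
	// Useful for a /api/health response or a --version flag.
	fmt.Println("myapp", version)
}
```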
systemd unit with full sandboxing
[Unit]
Description=Production Go Service
After=network.target
Wants=network-online.target
[Service]
Type=simple
User=www-data
Group=www-data
WorkingDirectory=/opt/myapp
ExecStart=/usr/local/bin/myapp
Restart=always
RestartSec=5
Environment=PORT=8080
# Hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/myapp/data
PrivateTmp=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectControlGroups=true
ProtectKernelModules=true
LimitNOFILE=65535
LimitNPROC=4096
StandardOutput=journal
StandardError=journal
SyslogIdentifier=myapp
[Install]
WantedBy=multi-user.target
Go server with graceful shutdown
What I run in production is not log.Fatal(http.ListenAndServe(...)) — it is a graceful shutdown with a timeout:
package main
import (
"context"
"log"
"net/http"
"os"
"os/signal"
"syscall"
"time"
)
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/api/health", healthHandler)
mux.HandleFunc("/", indexHandler)
server := &http.Server{
Addr: "127.0.0.1:8080", // localhost only — NGINX is the entry point
Handler: mux,
ReadTimeout: 15 * time.Second,
WriteTimeout: 15 * time.Second,
IdleTimeout: 60 * time.Second,
}
// Start the server in a goroutine
go func() {
log.Printf("Server starting on %s", server.Addr)
if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
log.Fatalf("server error: %v", err)
}
}()
// Graceful shutdown — wait for SIGINT/SIGTERM
quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
<-quit
log.Println("Shutting down gracefully...")
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if err := server.Shutdown(ctx); err != nil {
log.Fatalf("forced shutdown: %v", err)
}
log.Println("Server stopped cleanly")
}
Rate limiting in Go
type RateLimiter struct {
requests map[string][]time.Time
mu sync.Mutex
max int
window time.Duration
}
func NewRateLimiter(max int, window time.Duration) *RateLimiter {
return &RateLimiter{
requests: make(map[string][]time.Time),
max: max,
window: window,
}
}
func (rl *RateLimiter) Middleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ip := r.Header.Get("X-Real-IP") // NGINX sets this
if ip == "" {
ip = r.RemoteAddr
}
rl.mu.Lock()
now := time.Now()
// Drop requests that fell outside the window
var valid []time.Time
for _, t := range rl.requests[ip] {
if now.Sub(t) < rl.window {
valid = append(valid, t)
}
}
rl.requests[ip] = valid
if len(valid) >= rl.max {
rl.mu.Unlock()
w.WriteHeader(http.StatusTooManyRequests)
return
}
rl.requests[ip] = append(rl.requests[ip], now)
rl.mu.Unlock()
next.ServeHTTP(w, r)
})
}
Reading NGINX's forwarded headers in Go
func getClientIP(r *http.Request) string {
if ip := r.Header.Get("X-Real-IP"); ip != "" {
return ip
}
if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
parts := strings.Split(xff, ",")
return strings.TrimSpace(parts[0])
}
host, _, _ := net.SplitHostPort(r.RemoteAddr)
return host
}
func isHTTPS(r *http.Request) bool {
return r.Header.Get("X-Forwarded-Proto") == "https"
}
NGINX monitoring
# Internal status page
location /nginx_status {
stub_status;
allow 127.0.0.1;
deny all;
}
# Check active connections
curl -s http://127.0.0.1/nginx_status
# Active connections: 47
# server accepts handled requests
# 142857 142857 893421
# Reading: 0 Writing: 3 Waiting: 44
Performance check
curl -w "\nDNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTLS: %{time_appconnect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" \
-o /dev/null -s https://app.example.com/api/health
Post-deploy verification script
#!/bin/bash
echo "=== Post-deploy verification ==="
# 1. Valid config
sudo nginx -t
# 2. Services active
for svc in nginx myapp-api myapp-portal; do
STATUS=$(systemctl is-active $svc)
[ "$STATUS" = "active" ] && echo " ✓ $svc" || echo " ✗ $svc: $STATUS"
done
# 3. Listening ports
ss -tlnp | grep -E ':(80|443|8000|8080|3001|3002|4321)\b'
# 4. HTTP smoke test
for PORT in 8000 8080 4321 3001; do
CODE=$(curl -s -o /dev/null -w '%{http_code}' "http://127.0.0.1:$PORT/")
[ "$CODE" = "200" ] && echo " ✓ Port $PORT: HTTP $CODE" || echo " ✗ Port $PORT: HTTP $CODE"
done
echo "=== Verification complete ==="
Performance tips
- keepalive in the upstream — reuses TCP connections between NGINX and Go
- gzip in NGINX, not in Go — frees the app to focus on business logic
- sendfile on for static files — zero-copy in the kernel, bypassing userspace
- HTTP/2 — automatic multiplexing over SSL
- access_log off for static files — reduces disk I/O
- Buffer tuning — adjust proxy_buffer_size if you have large headers
- Connection draining — graceful shutdown in Go avoids 502s during deploys
Conclusion
The combination of NGINX + Go + systemd hardening is one of the most robust stacks there is. NGINX handles SSL, compression, rate limiting, caching, and protection; Go focuses on business logic with native performance; and systemd ensures everything restarts, logs, and runs sandboxed.
The differentiator is not knowing how to configure a proxy_pass — it is knowing how to automate config generation, detect problems before a reload, have a fallback for certificates, and monitor everything with scripts that run on their own.
This article documents real infrastructure configurations. Domains and service names have been anonymized. Use it on your own infrastructure or with authorization.
Rafael Cavalcanti da Silva — Fullstack Developer & Security Specialist rafaelroot.com