Why Nginx?
Nginx (pronounced "engine-X") uses an event-driven architecture instead of Apache's traditional process-per-connection model (the prefork MPM). Lower memory footprint under load, much better at serving static files, and the reverse proxy setup is straightforward.
I switched from Apache to Nginx in 2020 and never ran Apache again. Nginx handles static files faster and the config syntax makes more sense once you stop thinking in .htaccess. That mental shift took me about a week. After that, going back to Apache felt like writing XML by hand.
Don't edit /etc/nginx/nginx.conf directly for site configs. Create a file per site in /etc/nginx/sites-available/, symlink to sites-enabled, reload. Keeps things clean when you're managing multiple domains.
Installing Nginx
Pick your distro:
Ubuntu/Debian
sudo apt update
sudo apt install nginx
Fedora/RHEL/CentOS
sudo dnf install nginx
Arch Linux
sudo pacman -S nginx
Start and enable on boot:
sudo systemctl start nginx
sudo systemctl enable nginx # Start on boot
Verify:
sudo systemctl status nginx
Hit your server's IP in a browser. You should get the default Nginx welcome page.
Config Structure
Nginx config is hierarchical. Outermost to innermost:
- Main context — global settings, worker processes
- Events context — connection handling
- HTTP context — everything web-related
- Server blocks — one per domain/site
- Location blocks — URL path matching within a server
Realistically, you'll spend 95% of your time in server and location blocks. The main config just includes other files, so each site gets its own config file and you rarely touch nginx.conf itself.
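The nesting looks like this in a stripped-down nginx.conf. This is a sketch to show the hierarchy, not a complete working config:

```nginx
# Main context: global settings
user www-data;
worker_processes auto;

events {
    # Events context: connection handling
    worker_connections 768;
}

http {
    # HTTP context: everything web-related
    include /etc/nginx/mime.types;

    server {
        # Server block: one per domain/site
        listen 80;
        server_name example.com;

        location / {
            # Location block: URL path matching within the server
            root /var/www/example;
        }
    }
}
```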
Where Config Lives
/etc/nginx/nginx.conf # Main config file
/etc/nginx/sites-available/ # Site configs (not active yet)
/etc/nginx/sites-enabled/ # Symlinks to active sites
/etc/nginx/conf.d/ # Alternative location for configs
The sites-available/sites-enabled pattern is Debian's invention. It works but it's not how upstream Nginx does it. On Fedora/Arch you just edit /etc/nginx/conf.d/. Both are fine. I've used both on different servers and honestly the conf.d/ approach is simpler — one fewer step since you skip the symlink dance.
Your First Server Block
Basic static site config:
sudo nano /etc/nginx/sites-available/mysite
Contents:
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
root /var/www/mysite;
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
}
Line by line:
- listen 80 / listen [::]:80 — HTTP on IPv4 and IPv6
- server_name — domains this block responds to
- root — filesystem path to your site files
- index — default files to serve
- try_files — check for the file, then the directory, then return 404
Enable it:
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/
Create the web root and a test page:
sudo mkdir -p /var/www/mysite
echo "<h1>Hello from Nginx!</h1>" | sudo tee /var/www/mysite/index.html
Test config:
sudo nginx -t
Always run nginx -t before systemctl reload nginx. I've brought down production by reloading a config with a missing semicolon. The test catches syntax errors so your running config stays intact if something's wrong.
If it passes, reload:
sudo systemctl reload nginx
SSL with Let's Encrypt
Let's Encrypt + Certbot is the only sane way to do SSL in 2026. If you're still paying for SSL certs, stop. Free certs, auto-renewal, and Certbot handles the Nginx config changes for you. There's no excuse for running HTTP-only anymore.
Install Certbot
# Ubuntu/Debian
sudo apt install certbot python3-certbot-nginx
# Fedora
sudo dnf install certbot python3-certbot-nginx
Get a Certificate
sudo certbot --nginx -d example.com -d www.example.com
Certbot verifies domain ownership via HTTP challenge, issues the cert, modifies your Nginx config to add SSL directives, and sets up auto-renewal. It also drops in an HTTP-to-HTTPS redirect. The whole thing takes about 30 seconds.
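After it runs, your server block ends up with SSL directives along these lines. This is roughly what Certbot writes (the paths vary by domain), not something you type yourself:

```nginx
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
```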
Check Auto-Renewal
sudo certbot renew --dry-run
If the dry run passes, you're set. Certs renew before expiry via a systemd timer.
Reverse Proxy
This is where Nginx really earns its keep. App on port 3000, users hit port 80/443, Nginx sits in front and forwards traffic.
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
One gotcha here: watch the trailing slash on proxy_pass. proxy_pass http://localhost:3000 and proxy_pass http://localhost:3000/ behave differently when you have a location prefix. Without the slash, Nginx passes the full original URI. With the slash, it strips the matched location prefix. This has bitten me more than once — requests hitting the wrong path on the backend and I'm staring at 404s wondering what's wrong.
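A concrete sketch of the difference, showing two alternative versions of the same location block for a backend on port 3000 and a request for /api/users:

```nginx
# Without a URI part: the full original path is passed through.
# GET /api/users -> backend receives /api/users
location /api/ {
    proxy_pass http://localhost:3000;
}

# With a URI part ("/"): the matched prefix is replaced.
# GET /api/users -> backend receives /users
location /api/ {
    proxy_pass http://localhost:3000/;
}
```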
The headers matter:
- X-Real-IP — without this your app sees all requests from 127.0.0.1
- X-Forwarded-For — full proxy chain
- X-Forwarded-Proto — tells the app if the original request was HTTPS
- Upgrade / Connection — needed for WebSocket connections
I spent two hours once wondering why my reverse proxy returned 502. The upstream block pointed to port 3000 but the app was on 8080. Nginx error logs said "connection refused" which at least pointed me in the right direction. Always check /var/log/nginx/error.log first — it's usually more helpful than you'd expect.
Multiple Apps on One Server
Two ways to do this: subdomains or path-based routing. I prefer subdomains because the config is cleaner, but path-based works when you don't want to mess with DNS for every service.
By Subdomain
# api.example.com
server {
listen 80;
server_name api.example.com;
location / {
proxy_pass http://localhost:3001;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
# app.example.com
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://localhost:3002;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
By Path
server {
listen 80;
server_name example.com;
location /api {
proxy_pass http://localhost:3001;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
location / {
proxy_pass http://localhost:3002;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
Static File Serving
Let Nginx handle static assets directly — don't proxy them to your app. Your Node/Python/whatever backend has no business serving JPEGs.
server {
listen 80;
server_name example.com;
root /var/www/example;
# Static files with caching
location ~* \.(jpg|jpeg|png|gif|ico|css|js|pdf|txt)$ {
expires 30d;
add_header Cache-Control "public, immutable";
}
# Everything else goes to the app
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
Static assets get a 30-day cache header. Everything else goes to the app. This alone cut response times in half on a project I worked on — the app server stopped wasting cycles on files Nginx could serve from disk in microseconds.
Security Headers
Add these to your server blocks. Takes two minutes and stops a whole class of attacks:
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
Better yet, put them in a snippet file and include it (so you don't copy-paste the same block into every server config):
# In /etc/nginx/snippets/security-headers.conf
add_header X-Frame-Options "SAMEORIGIN" always;
# ... rest of headers
# In your server block
include snippets/security-headers.conf;
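One quirk to know about: add_header directives are only inherited from the enclosing level if a block defines none of its own. A location block with even a single add_header silently drops everything inherited from the server block, so re-include the snippet there:

```nginx
server {
    include snippets/security-headers.conf;

    location /downloads/ {
        # This add_header wipes the inherited security headers...
        add_header Content-Disposition "attachment";
        # ...so pull them back in explicitly
        include snippets/security-headers.conf;
    }
}
```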
Rate Limiting
Define zones in the http block, apply per-location. This is worth doing even on small projects — bots will find your login endpoint faster than you'd think.
# In the http block (nginx.conf or /etc/nginx/conf.d/rate-limiting.conf)
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
# In your server block
location /api/login {
limit_req zone=login burst=5 nodelay;
proxy_pass http://localhost:3000;
}
location /api {
limit_req zone=general burst=20 nodelay;
proxy_pass http://localhost:3000;
}
The general zone allows 10 req/s, the login zone 1 req/s. burst lets short spikes through without rejecting requests outright; set it too low and legitimate users get bounced during normal browsing.
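If you'd rather return 429 Too Many Requests than Nginx's default 503 for rejected requests (the directive exists since Nginx 1.3.15), set limit_req_status in the server or http block:

```nginx
# Return 429 instead of the default 503 for rate-limited requests
limit_req_status 429;
```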
Troubleshooting
502 Bad Gateway
Your backend isn't responding on the port in proxy_pass. Check if the app is actually running and on the right port. This is the number one Nginx question on Stack Overflow and it's almost always a port mismatch.
Permission Denied
Nginx runs as www-data (Debian) or nginx (RHEL). That user needs read access to your web root:
sudo chown -R www-data:www-data /var/www/mysite
sudo chmod -R 755 /var/www/mysite
Syntax Errors
Run nginx -t. It tells you the file and line number.
Changes Not Taking Effect
Check three things: did you actually reload (sudo systemctl reload nginx), is there a symlink in sites-enabled, and does server_name match exactly what you're typing in the browser. I've spent embarrassing amounts of time debugging configs that were fine — I just forgot to reload.
Useful Commands Reference
# Test configuration
sudo nginx -t
# Reload config (no downtime)
sudo systemctl reload nginx
# Restart nginx (brief downtime)
sudo systemctl restart nginx
# See what nginx is doing
sudo systemctl status nginx
# View error logs
sudo tail -f /var/log/nginx/error.log
# View access logs
sudo tail -f /var/log/nginx/access.log
# See all enabled sites
ls -la /etc/nginx/sites-enabled/
# Disable a site
sudo rm /etc/nginx/sites-enabled/sitename