Our backend now handles concurrency correctly.
But real systems must also deal with network failures and distributed systems issues.
## External APIs sometimes fail
When calling external services such as email providers, we might receive errors like:

`503 Service Unavailable`
Instead of failing immediately, systems use retries with exponential backoff.
## Exponential backoff
Example:

- Attempt 1 → wait 1 second
- Attempt 2 → wait 2 seconds
- Attempt 3 → wait 4 seconds
- Attempt 4 → wait 8 seconds
The wait time doubles after each failure.
This prevents overwhelming a service that is already struggling.
## Implementing retries in Python
```python
import time

def send_email_with_retry(student_email, max_retries=3):
    for attempt in range(max_retries):
        try:
            send_email(student_email)
            return True
        except Exception:
            if attempt < max_retries - 1:
                wait_time = 2 ** attempt  # doubles each attempt: 1, 2, 4, ... seconds
                time.sleep(wait_time)
            else:
                # Log the failure and move it to a dead letter queue
                log_failed_email(student_email)
                return False
```
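One refinement worth noting: if many clients fail at the same moment, plain exponential backoff makes them all retry in lockstep. Adding random jitter spreads the retries out. A minimal sketch of the delay calculation (the function name and the cap are illustrative, not from the code above):

```python
import random

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with full jitter: pick a random wait
    between 0 and the capped exponential delay."""
    exp = min(cap, base * (2 ** attempt))
    return random.uniform(0, exp)
```

The cap keeps late retries from waiting unreasonably long, and the randomness prevents synchronized retry storms against an already struggling service.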
## Authentication with headers
APIs must verify who is calling them.
Instead of sending credentials in URLs, systems use headers:

`Authorization: Bearer TOKEN`
Tokens are generated when a user logs in and stored by the client (browser, mobile app, etc).
Each request includes the token so the server can verify identity.
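On the client side, that just means setting the `Authorization` header on each request. A minimal sketch using Python's standard library (the URL and token are placeholders):

```python
from urllib.request import Request

# The token obtained at login is attached to every subsequent request.
token = "abc123"  # placeholder
req = Request(
    "https://api.example.com/register",
    headers={"Authorization": f"Bearer {token}"},
)
```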
## Example in FastAPI
```python
from fastapi import Header, HTTPException

@app.post("/register")
async def register(
    student: Student,
    authorization: str = Header(None)
):
    if not authorization or not authorization.startswith("Bearer "):
        raise HTTPException(status_code=401, detail="Unauthorized")

    token = authorization.split(" ")[1]
    user = verify_token(token)
    if not user:
        raise HTTPException(status_code=401, detail="Invalid token")

    # Process registration
    ...
```
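The header-parsing logic above can also be factored into a small helper, which is easy to test without running the app (this helper is an illustration, not part of FastAPI):

```python
def extract_bearer_token(authorization):
    """Return the token from an 'Authorization: Bearer <token>' value,
    or None if the header is missing or malformed."""
    if not authorization or not authorization.startswith("Bearer "):
        return None
    return authorization.split(" ", 1)[1]
```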
## Long-running tasks
Some tasks take time.
Ticket generation might involve:
• generate QR code
• create PDF
• upload ticket
• send email
Instead of blocking the API, these tasks run as background jobs.
```
Student registers
        ↓
API saves data
        ↓
Job added to queue
        ↓
Background worker processes task
```
## Using Redis Queue
```python
from redis import Redis
from rq import Queue

redis_conn = Redis()
queue = Queue(connection=redis_conn)

@app.post("/register")
async def register(student: Student):
    # Save the registration immediately
    save_registration(student)

    # Queue the ticket generation as a background job
    queue.enqueue(generate_and_send_ticket, student.id)

    return {"message": "Registration successful"}
```
The API responds immediately. The ticket generation happens in the background.
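The decoupling this buys can be sketched without Redis at all: the handler only records a description of the work, and a separate loop drains the queue later. A toy stand-in using an in-memory `queue.Queue` (purely illustrative; in production the queue lives in Redis so worker processes, possibly on other machines, can drain it):

```python
import queue

jobs = queue.Queue()

def register(student_id):
    """The API path: enqueue a job description and return at once."""
    jobs.put(("generate_and_send_ticket", student_id))
    return {"message": "Registration successful"}

def run_worker_once():
    """The worker path: pull one job off the queue and process it."""
    job_name, student_id = jobs.get_nowait()
    return f"processed {job_name} for student {student_id}"
```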
## Idempotency
Sometimes the client retries a request because the response was lost.
Without protection, the student might get registered twice.
To avoid this, APIs use idempotency keys:

`Idempotency-Key: abc123`
If the same key appears again, the server returns the previous result instead of executing the operation again.
## Implementing idempotency
```python
import json

import redis
from fastapi import Header

redis_client = redis.Redis()

@app.post("/register")
async def register(
    student: Student,
    idempotency_key: str = Header(None)
):
    if idempotency_key:
        # Check if we've seen this key before
        cached_response = redis_client.get(f"idempotency:{idempotency_key}")
        if cached_response:
            return json.loads(cached_response)

    # Process registration
    result = save_registration(student)

    # Cache the response so retries get the same result
    if idempotency_key:
        redis_client.setex(
            f"idempotency:{idempotency_key}",
            86400,  # 24 hours
            json.dumps(result),
        )

    return result
```
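On the client side, the key is generated once per logical operation and then reused on every retry of that same operation. A UUID works well (a sketch; the header name matches the example above):

```python
import uuid

# Generate one key per logical operation (e.g. one registration attempt)
# and send the same key on every retry of that operation.
idempotency_key = str(uuid.uuid4())
headers = {"Idempotency-Key": idempotency_key}
```

If the first attempt succeeded but the response was lost, the retry carries the same key and the server replays the cached result instead of registering the student twice.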
## The real life of a backend request
Behind every simple API endpoint, many systems work together:
- Workers handling concurrency
- Async operations managing I/O
- Locks preventing race conditions
- Atomic writes protecting data
- Retries handling network failures
- Authentication securing APIs
- Background workers handling long tasks
- Idempotency preventing duplicate actions
These are the concepts that turn simple API code into reliable production systems.
And that is the real story behind every backend request.
## The Life of a Backend Request Series
- Part 1: When Hundreds of Users Hit Your API
- Part 2: Why Async APIs Can Handle Thousands of Requests
- Part 3: Race Conditions, Locks and Safe Data Handling
- Part 4: Making APIs Reliable in the Real World