If you’ve transitioned from Docker to Podman, you’ve likely encountered a fundamental question: “How should my containers communicate?” Unlike Docker’s straightforward container-to-container networking, Podman introduces pods—a powerful concept borrowed from Kubernetes that adds both capability and complexity.
In this post, we’ll unravel the mystery of Podman communication patterns, helping you make informed decisions about when to use pods, when to use networks, and when to use both. Whether you’re developing locally or architecting production systems, understanding these patterns is crucial for building efficient, maintainable containerised applications.
Part 1: Understanding the Building Blocks
What Are Pods Really?
Think of a pod as a logical host machine within your system. Containers in the same pod share:
- Network namespace (same IP, can communicate via localhost)
- IPC namespace (inter-process communication)
- UTS namespace (hostname)
- Optionally, shared volumes and memory
# A simple pod with two containers
podman pod create --name myapp-pod -p 8080:80
podman run -d --pod myapp-pod --name web nginx:latest
podman run -d --pod myapp-pod --name log-agent fluentd:latest
# These containers can talk to each other via localhost:port
What Are Networks?
Networks in Podman function similarly to Docker networks—they’re virtual networks that containers (or pods) can join for isolated communication.
# Creating and using a custom network
podman network create app-network
podman run -d --network app-network --name database postgres:15
Part 2: The Decision Framework
When Pods Shine 🌟
Use pods when your containers exhibit these characteristics:
- Tight Coupling – Containers that are part of the same logical application unit
- Shared Lifecycle – They start, stop, and scale together
- Localhost Communication Needed – Performance-critical inter-container communication
- Sidecar Pattern – Main application + helper containers (log shippers, proxies, etc.)
- Kubernetes Development – Local development for K8s deployments
Real-world example: Web Application with Sidecars
podman pod create --name webapp --publish 8443:443
podman run -d --pod webapp --name app my-webapp:latest
podman run -d --pod webapp --name nginx nginx:reverse-proxy
podman run -d --pod webapp --name filebeat filebeat:latest
# All share network, nginx proxies to app on localhost:3000
# filebeat reads logs from shared volume
When Networks Are the Right Choice 🌐
Use standalone containers on networks when:
- Loose Coupling – Independent services with separate lifecycles
- Service Discovery Needed – Multiple consumers of the same service
- Network Isolation – Security segmentation (frontend/backend separation)
- Legacy Application Patterns – Traditional multi-tier architectures
- Resource Optimization – Avoiding pod overhead for simple services
Real-world example: Microservices Architecture
podman network create microservices-net
# Independent services
podman run -d --network microservices-net --name auth-service auth:latest
podman run -d --network microservices-net --name payment-service payment:latest
podman run -d --network microservices-net --name user-service user:latest
# Shared infrastructure
podman run -d --network microservices-net --name redis redis:7-alpine
podman run -d --network microservices-net --name postgres postgres:15
Part 3: The Powerful Hybrid Approach
Yes, Containers Can Talk to Pods!
This is where Podman’s flexibility truly shines. You can have pods and standalone containers on the same network, enabling complex architectures.
# Create a shared network
podman network create production-net
# Database as standalone container (multiple pods use it)
podman run -d --network production-net --name postgres-db \
-v pgdata:/var/lib/postgresql/data \
postgres:15
# Backend API as a pod (app + metrics sidecar)
podman pod create --name api-pod --network production-net
podman run -d --pod api-pod --name api-server go-api:latest
podman run -d --pod api-pod --name metrics prometheus-exporter:latest
# Frontend as another pod
podman pod create --name frontend-pod --network production-net -p 8080:80
podman run -d --pod frontend-pod --name nginx nginx:frontend
# Message queue as standalone
podman run -d --network production-net --name rabbitmq rabbitmq:management
Communication flow:
- frontend-pod → api-pod via DNS (api-pod resolves to the pod's IP)
- api-pod → postgres-db via DNS
- All services → rabbitmq via DNS
- Within api-pod: api-server → metrics via localhost:9090
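A quick way to sanity-check this wiring, reusing the names from the example above; the alpine test image and the availability of wget inside the api-server container are assumptions:
# Pod and container names resolve via the network's DNS
podman run --rm --network production-net alpine:latest nslookup api-pod
podman run --rm --network production-net alpine:latest nslookup postgres-db
# Inside api-pod, the metrics sidecar is reached over localhost
podman exec api-server wget -qO- http://localhost:9090/metrics   # assumes wget exists in the image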
Part 4: Advanced Patterns and Best Practices
Pattern 1: The Kubernetes-Compatible Setup
# Develop like you'll deploy on K8s
podman pod create --name my-service
podman run -d --pod my-service --name app myapp:latest
# Later, add a sidecar without changing networking
podman run -d --pod my-service --name istio-proxy istio/proxyv2
# Generate K8s YAML for easy migration
podman generate kube my-service > deployment.yaml
Pattern 2: Systemd-Integrated Production Services
Using Quadlet (.container units need Podman 4.4+; the .pod unit type requires Podman 5.0+). Each container gets its own unit file that points back at the pod unit:
# ~/.config/containers/systemd/api.pod
[Pod]
PodName=api-pod
Network=production-net

[Install]
WantedBy=default.target

# ~/.config/containers/systemd/api-server.container
[Container]
Image=myapp:latest
Pod=api.pod

# ~/.config/containers/systemd/api-proxy.container
[Container]
Image=docker.io/library/nginx:latest
Pod=api.pod
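Once the unit files are in place, reload systemd and start the generated service; a sketch assuming a rootless (user) setup and Quadlet's default naming, where api.pod becomes api-pod.service:
# Containers linked via Pod= are started as part of the pod's service
systemctl --user daemon-reload
systemctl --user start api-pod.service
systemctl --user status api-pod.service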
Pattern 3: Development vs Production Parity
# Development: Simple networking
podman network create dev-net
podman run -d --network dev-net --name db postgres
podman run -d --network dev-net -p 3000:3000 --name app myapp
# Production: Pod-based with resource limits
podman pod create --name prod-app --network prod-net \
--cpus=2 --memory=2g -p 80:3000
podman run -d --pod prod-app --name app myapp:prod
Part 5: Decision Checklist
Ask yourself these questions:
| Question | If YES → | If NO → |
|---|---|---|
| Do containers always run together? | Use Pod | Consider separate |
| Need localhost communication? | Use Pod | Network is fine |
| Sharing IPC or temporary volumes? | Use Pod | Network is fine |
| Targeting Kubernetes deployment? | Use Pod | Either works |
| Independent scaling needed? | Use Network | Pod might work |
| Security isolation required? | Separate Networks | Shared network OK |
| Simple, single-service app? | Either | Start with container |
Part 6: Common Pitfalls and Solutions
⚠️ Pitfall 1: Over-podding
Problem: Putting everything in pods “because we can”
Solution: Start with single-container pods, expand only when needed
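A minimal sketch of that progression, with illustrative names: publish the port on the pod up front, run a single container, and only add a sidecar once a concrete need (say, log shipping) appears.
# Start with one container in the pod
podman pod create --name svc -p 8080:8080
podman run -d --pod svc --name app myapp:latest
# Later, only if log shipping becomes a real requirement:
podman run -d --pod svc --name log-shipper fluentd:latest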
⚠️ Pitfall 2: DNS Resolution Issues
Problem: Containers can’t resolve pod names
Solution: Use podman network inspect to verify DNS configuration
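In particular, name resolution only works on user-created networks; the default podman network ships with DNS disabled. A quick check (the --format field name is an assumption and may vary by Podman version):
# Look for "dns_enabled": true in the JSON output
podman network inspect app-network
# Or print just that field
podman network inspect app-network --format '{{.DNSEnabled}}'   # assumed field name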
⚠️ Pitfall 3: Port Conflict Confusion
Problem: Multiple containers trying to bind same port in pod
Solution: Remember: pods share IP, so port conflicts happen!
# WRONG: both containers try to bind port 80 inside the same pod
# (note: ports are published on the pod itself, not on member containers)
podman pod create --name mypod -p 8080:80
podman run -d --pod mypod --name web1 nginx   # binds port 80
podman run -d --pod mypod --name web2 nginx   # ERROR: port 80 already in use
# RIGHT: make each container listen on its own internal port
podman pod create --name mypod -p 8080:80 -p 8081:8081
podman run -d --pod mypod --name web1 nginx                   # listens on 80
podman run -d --pod mypod --name web2 my-nginx-8081:latest    # illustrative image configured for port 8081
Conclusion: Start Simple, Evolve as Needed
My recommended approach for most projects:
- Begin with single-container pods on a custom network
- Add multi-container pods only when you need shared namespaces
- Use the hybrid model for microservices
- Leverage systemd/Quadlet for production services
Podman gives you the flexibility to choose the right abstraction for each part of your system. Don’t feel pressured to use pods everywhere—they’re a tool, not a religion. The best architecture depends on your specific requirements, team expertise, and deployment target.
Remember: The goal isn’t to use the “coolest” feature, but to build maintainable, efficient systems. Sometimes, a simple container on a network is the perfect solution.