My Podman containers on a custom network couldn’t resolve each other by name — even though everything looked correct. After methodically eliminating layers (network config, DNS daemon, container state, firewall), I discovered UFW was silently blocking UDP port 53 on the new podman4 bridge interface. The fix? One ufw allow rule. Here’s how I got there.
🔧 The Goal: Isolated, Secure Container Networking
I wanted a clean architecture for my development stack:
- A custom Podman network (appnet) to isolate apps
- Containers communicating by name (e.g., mathesar → postgres)
- No hardcoded IPs, no host port conflicts, and secure by design
I launched a Mathesar app and added PostgreSQL, all on the same network, using container names for connectivity. During provisioning, I noticed that the Mathesar container was unable to resolve PostgreSQL by hostname.
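For reference, the setup looked roughly like this (image tags and environment variables are illustrative placeholders, not the exact provisioning commands):

# Isolated network; with the netavark backend, DNS is enabled by default
podman network create appnet

# Database, reachable by its container name
podman run -d --name postgres --network appnet \
  -e POSTGRES_PASSWORD=changeme \
  docker.io/library/postgres:16

# App container configured to reach the database as "postgres", by name rather than by IP
podman run -d --name mathesar --network appnet <mathesar-image>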
The nslookup utility isn’t installed in the Mathesar container, so I used an Alpine container to replicate and test the same issue. When I ran:
podman run --rm -it --network appnet alpine nslookup postgres
I got:
;; connection timed out; no servers could be reached
Yet everything seemed right:
- Podman 4.9.3 (netavark backend)
- aardvark-dns installed and running
- Custom network created with dns_enabled: true
- Container’s /etc/resolv.conf pointed to 10.89.3.1 (the network gateway)
So why wasn’t DNS working?
Step 1: Verify the Basics
Is the container running?
$ podman ps --filter name=postgres
CONTAINER ID ... NAMES
c67617e5287d ... postgres ← Yes, Up 12 minutes
Is it on the right network?
$ podman inspect postgres | grep -A5 NetworkSettings
"Networks": {
"appnet": { ... }
}
Does the network have DNS enabled?
$ podman network inspect appnet
{
"dns_enabled": true,
"gateway": "10.89.3.1"
}
Is Aardvark-dns running?
$ ps aux | grep aardvark
/usr/lib/podman/aardvark-dns ... ← Yes
Is the container using the right DNS server?
Inside the container:
/ # cat /etc/resolv.conf
nameserver 10.89.3.1
search dns.podman
✅ All checks passed. But DNS still failed.
Step 2: Is Aardvark Actually Responding?
I checked if Aardvark was listening:
$ sudo ss -uln | grep ':53'
UNCONN 0 0 10.89.3.1:53 0.0.0.0:*
✅ Yes — it’s bound to the correct IP.
I even verified Aardvark had a record for postgres:
$ sudo cat /run/containers/networks/aardvark-dns/appnet
10.89.3.1
c67617e5287d80b890527720a82555bcda193c4c4d09d9886a7b231a711357f3 10.89.3.2 postgres,c67617e5287d
✅ The entry was there.
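As an extra cross-check (assuming dig from bind-utils/dnsutils is available on the host), you can query Aardvark directly from the host; if that answers while the in-container lookup still times out, the daemon itself is fine and the problem sits on the packet path from the container:

$ dig @10.89.3.1 postgres.dns.podman +short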
So why the timeout?
Step 3: Suspect the Firewall
Since Aardvark was listening and the DNS record existed, the only remaining layer was network packet flow.
I checked UFW (Uncomplicated Firewall), which I knew was active:
$ sudo ufw status verbose
Status: active
...
To Action From
-- ------ ----
10.89.2.1 53/udp on podman3 ALLOW IN Anywhere ← DNS allowed on podman3
...
Ah-ha!
My network appnet uses interface podman4 (as confirmed by ip addr show podman4),
but UFW had no rule for podman4 — only podman3.
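If you don’t want to eyeball interface names, Podman reports which bridge a network owns (the same template field used in the pro tip at the end of this post):

$ podman network inspect appnet -f '{{.NetworkInterface}}'
podman4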
That meant:
→ DNS queries (UDP to 10.89.3.1:53) were being silently dropped by the host firewall.
To confirm this, I checked the journalctl logs for UFW BLOCK entries, and the following confirmed it:
sudo journalctl -xef | grep "UFW BLOCK"
[UFW BLOCK] IN=podman4 OUT .... SRC=10.89.3.2 DST=10.89.3.1 .... PROTO=UDP SPT=51675 DPT=53
Step 4: Fix — Add UFW Rule for podman4
sudo ufw allow in on podman4 to any port 53 proto udp
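A tighter variant scopes the allow rule to the gateway IP, mirroring the existing podman3 entry; either form unblocks Aardvark. It’s also worth confirming the rule landed before retesting:

# Optional: restrict the rule to the network gateway, like the podman3 rule above
sudo ufw allow in on podman4 to 10.89.3.1 port 53 proto udp

# Confirm an ALLOW IN entry now exists for podman4
sudo ufw status verbose | grep podman4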
Then retest:
$ podman run --rm -it --network appnet alpine nslookup postgres
Server: 10.89.3.1
Address: 10.89.3.1:53
Name: postgres.dns.podman
Address: 10.89.3.2
🎉 Success!
Key Lessons Learned
- Podman networks = new bridge interfaces
Each podman network create generates a new podmanX interface. Firewall rules don’t auto-apply.
- DNS timeouts ≠ DNS misconfiguration
Even with a perfect DNS setup, a host firewall can silently break internal resolution.
- UFW + Podman require manual sync
If you use UFW, always add a rule for new Podman networks:
ufw allow in on <interface> to any port 53 proto udp
- nslookup timeout = network-level block
If even external names fail (nslookup google.com), suspect resolv.conf or the firewall, not just Aardvark (see the quick check below).
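A quick way to tell these cases apart, using the same throwaway Alpine trick as above, is to resolve an external name on the affected network. If that times out too, the firewall or resolv.conf is the suspect rather than missing Aardvark records:

$ podman run --rm --network appnet alpine nslookup google.com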
Pro Tip: Automate Network Creation + Firewall Rules
Add this to your shell config:
# ~/.bashrc
podman-net() {
  sudo podman network create "$1"
  local iface=$(sudo podman network inspect "$1" -f '{{.NetworkInterface}}')
  sudo ufw allow in on "$iface" to any port 53 proto udp
  echo "✅ Network '$1' ready with DNS on $iface"
}

Now:
podman-net appnet   # creates network + firewall rule in one step
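A matching teardown helper keeps UFW from accumulating stale rules when networks go away. This is a sketch under the same assumptions as podman-net above (rootful networks, the NetworkInterface template field):

# ~/.bashrc
podman-net-rm() {
  # Look up the bridge interface before the network (and its name) disappears
  local iface=$(sudo podman network inspect "$1" -f '{{.NetworkInterface}}')
  # Remove the network first (fails if containers are still attached)
  sudo podman network rm "$1"
  # Delete the matching DNS allow rule so UFW stays in sync
  sudo ufw delete allow in on "$iface" to any port 53 proto udp
  echo "🧹 Network '$1' removed; DNS rule on $iface deleted"
}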
Conclusion
What looked like a “Podman DNS bug” turned out to be a classic firewall oversight. By systematically validating each layer — container state → network config → DNS daemon → packet flow — I narrowed it down to a missing UFW rule.
This is the essence of effective troubleshooting:
Assume nothing. Verify everything. Eliminate variables one by one.
Now my containers resolve each other by name, my stack is isolated, and I’ve got a repeatable, secure pattern for future projects.