tl;dr - Make sure /etc/subuid and /etc/subgid have good ranges, podman stores its auth config at ${XDG_RUNTIME_DIR}/containers/auth.json, and make sure you use :z on volumes you want to re-use with different containers (ex. postgres).
After getting my podman setup working I ran into a few issues that were worth writing up real quick just in case anyone else runs into them.
While building a relatively large image with many layers I ran out of subuid/subgid space surprisingly quickly. After attempting to RTFM and doing a bit of wild google searching the most useful resources were:
I won’t go into it too deeply here (see the previous post or the existing body of literature on how “rootless containers” work), but basically what happened is that I did not give my user account enough sub-UIDs or sub-GIDs. The error message gives a good hint:
$ podman pull image/tag:1.2.3
Trying to pull docker.io/image/tag:1.2.3...
Getting image source signatures
Copying blob 40dca07f8222 done
Copying blob 6e83b260b73b done
Copying blob 8ee29e426c26 done
Copying blob d19e5a17f613 done
Copying blob b420ae9e10b3 done
Copying blob e26b65fd1143 done
Copying blob 1db464c93497 done
Copying blob 925405374b1e done
Copying blob 137ddab4acf8 done
Copying blob d8ee63bad8df done
Copying blob 893cf92b81aa done
Copying blob 50888863bed9 done
Copying blob fc172b401726 done
Copying blob a0917fc7b09f done
Copying blob d56a482d36a0 done
Copying blob abafe6754f84 done
Copying blob 46cdc1ac9eeb done
Copying blob 9a8a615d33e1 done
Copying config 6a6af371d8 done
Writing manifest to image destination
Storing signatures
Error processing tar file(exit status 1): there might not be enough IDs available in the namespace (requested 0:8377 for /etc/container_environment.json): lchown /etc/container_environment.json: invalid argument
The obvious way to fix this is to make sure that you have enough space allocated for sub-UIDs/GIDs! Since we know that this information is stored in files on disk (/etc/subuid, /etc/subgid) we should be able to fix it by allocating more space. For example, here’s what /etc/subuid had originally:

/etc/subuid:
mrman:165536:4096
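Each line in these files has the form user:start:count, and the count field is what matters here. A quick sketch of why the pull above failed with this original range (the 8377 comes from the "requested 0:8377" in the error message):

```shell
# /etc/subuid format is user:start:count
line="mrman:165536:4096"
count=$(echo "$line" | cut -d: -f3)
echo "sub-UIDs granted: $count"

# The failed lchown wanted to map ID 8377 inside the container's
# user namespace, which doesn't fit in a 4096-ID range:
requested=8377
if [ "$requested" -ge "$count" ]; then
  echo "ID $requested does not fit in a range of $count IDs"
fi
```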
To raise the limits we can simply change this file; here’s what it looks like after:

/etc/subuid:
mrman:100000:65535
After changing these files I ran podman system migrate, but I don’t think that’s strictly necessary.
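For completeness, newer shadow-utils can manage these ranges for you instead of you hand-editing the files. This is a sketch assuming the range above and shadow-utils 4.9+ (where usermod grew the --add-subuids/--add-subgids flags); it just prints the command rather than running it:

```shell
# Range from the edited /etc/subuid above: start 100000, count 65535
start=100000
count=65535
end=$((start + count - 1))

# On shadow-utils 4.9+ usermod can append sub-ID ranges directly:
echo "sudo usermod --add-subuids ${start}-${end} --add-subgids ${start}-${end} mrman"

# Either way, running `podman system migrate` afterwards makes podman
# re-read the mappings.
```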
podman login, where did my creds go?? (${XDG_RUNTIME_DIR}/containers/auth.json)
While working with podman login I wanted to save some auth credentials for reuse later, so I needed to copy the auth.json file out of ${XDG_RUNTIME_DIR}/containers/ and give it a meaningful name somewhere else (or use a different credentials provider altogether if you can swing it).
What $XDG_RUNTIME_DIR is set to is theoretically distribution/setup dependent as far as I know, but for most people it will be something like /run/user/1000.
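A small sketch of rescuing the credentials, assuming the default paths (the runtime dir is usually a tmpfs that gets wiped at logout/reboot; the destination filename here is just an example):

```shell
# Rootless podman writes login credentials here by default:
auth_file="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/containers/auth.json"
echo "auth file: ${auth_file}"

# ${XDG_RUNTIME_DIR} is usually tmpfs and wiped at logout/reboot, so copy
# the file somewhere persistent if you want to keep the credentials:
#   mkdir -p ~/.config/containers
#   cp "${auth_file}" ~/.config/containers/registry-auth.json
#
# Later commands can then be pointed at the saved copy explicitly:
#   podman pull --authfile ~/.config/containers/registry-auth.json docker.io/library/alpine
# ...or via the REGISTRY_AUTH_FILE environment variable.
```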
While I was running some dependencies with persistent data (ex. a DB) for some projects, I found that I could start them but not restart them. When I tried to restart I got this error:
$ make db-local
mkdir -p /path/to/my/project/infra/runtime/db/data
Running local DB...
/usr/bin/podman run --rm \
-it \
--env POSTGRES_PASSWORD="postgres" \
--env POSTGRES_USER="postgres" \
-p 5432:5432 \
-v /path/to/my/project/infra/runtime/db/data:/var/lib/postgresql/data \
--name "pg" \
"postgres":"12.3-alpine"
chmod: /var/lib/postgresql/data: Operation not permitted
So right off the bat you can tell it’s some kind of permissions error. It was a bit confusing, since of course I’m not running chmod myself, and the directory that is pointed to is inside the container. I imagine if others were debugging this, they might notice that it worked on first boot but then never again (if you keep the data dir around).
So this issue is actually SELinux springing into action to keep you a bit safer. It’s captured in a podman issue thread. The gist of what’s happening is that SELinux labels the volume data as used/owned by a given process, and that labeling is what prevents other processes (the second postgres instance, in my case) from taking ownership of the data.
There are a couple of ways to deal with this:

- The --security-opt label=disable option (OK… but I don’t want the labeling security feature completely gone…)
- The :z/:Z volume suffixes (which also match docker invocations)

So here’s a good time to RTFM on volume binding options in docker (or your container runtime of choice). If you want to really get the feel for the path I went through, check out these links in order:
This exposes one interesting thing: before you switch to podman on all your machines, have you made sure to set this label? I know I haven’t thought about it up until now.
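Concretely, the fix in my case was a one-character-ish change: the same podman run invocation as before, with :z appended to the bind mount (:z gives the data a shared label any container may use; :Z would give a private label for this one container):

```shell
podman run --rm -it \
  --env POSTGRES_PASSWORD="postgres" \
  --env POSTGRES_USER="postgres" \
  -p 5432:5432 \
  -v /path/to/my/project/infra/runtime/db/data:/var/lib/postgresql/data:z \
  --name "pg" \
  "postgres:12.3-alpine"
```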
Hopefully this article helps some people who might get stuck using podman in the future!