Currently I'm planning to dockerize some web applications, but I didn't find a reasonably easy way to create the images, host them in my repository, and pull them onto my server.
What I currently have is:
- A local computer with a directory where the application that I want to dockerize is located
- A "docker server" running Portainer without shell/SSH access
- A place where I can upload/host the Docker images and where I can pull the images from on the "docker server"
- Basic knowledge of how to write the needed Dockerfile
What I now need is a sane way to build the images WITHOUT setting up a fully featured Docker environment on the local computer.
Ideally something where I can build the images and upload them, but without that something "littering Docker-related files all over my system".
Something like a VM that resets on every start, maybe? So … build the image, upload to repository, close the terminal window, and forget that anything ever happened.
What is YOUR solution to create and upload Docker images in a clean and sane way?
Poorly
For the littering part, just type crontab -e
and add the following line:
@daily docker system prune -a -f
Careful: this will also delete your unused volumes (a volume whose container is stopped, for whatever reason, counts as unused). For this reason alone, always use bind mounts for volumes you care about.
You shouldn't need sudo to run docker; you can just create a docker group and add your user to it. That gives you the steps to run docker without sudo.
Edit: as pointed out below, please make sure that you're comfortable with giving these permissions to the user you're adding to the docker group.
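The setup itself is just a couple of commands. A sketch, with the wrapper function name made up but the commands being the standard ones:

```shell
# Hypothetical one-time setup wrapper; the group name "docker" is what the
# Docker packages expect.
setup_docker_group() {
    sudo groupadd -f docker          # -f: don't fail if the group already exists
    sudo usermod -aG docker "$USER"  # add the current user to the group
    # log out and back in (or run `newgrp docker`) for the change to take effect
}
```

After re-logging in, `docker ps` should work without sudo.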
run docker without sudo.
Doing that, you effectively give the user account passwordless root access:
docker run --volume /etc:/host_etc debian /bin/bash
-> can read/write anything below the host's /etc
directory, including the shadow file, etc.
Genuinely curious, what would the advantages be?
Also, what if the Linux distro does not have systemd?
I was just making a meme, dude. Personally, I like systemd; it's more complicated to learn, and I ended up reading books to really learn it properly. There's 100% nothing wrong with cron.
One of the reasons I like timers is journalctl integration. I can see everything in one place. Small thing.
The chances that I'll manage a Linux distro without systemd are low, but some systems (Arch, for example) don't have cron out of the box.
Not that big of a deal since it's easy to translate them all, but that's one of the reasons why I default to systemd/timer units.
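For reference, the @daily prune cron line from earlier in the thread could be expressed as a pair of systemd units along these lines (the unit names are made up; both files go in /etc/systemd/system/):

```ini
# docker-prune.service
[Unit]
Description=Prune unused Docker data

[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -a -f

# docker-prune.timer
[Unit]
Description=Run docker-prune.service daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now docker-prune.timer`; the runs then show up in `journalctl -u docker-prune.service`.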
I use Gitea and a Runner to build Docker images from the projects in the git repo. Since I'm lazy and only have one machine, I just run the runner on the target machine and mount the Docker socket.
BTW: if you manage to "litter your system with Docker-related files", you fundamentally misused Docker. That's exactly what Docker is supposed to prevent.
I already have Forgejo (a soft fork of Gitea) in a Docker container. I guess I need to check how I can access that exact same Docker server where it itself is hosted …
With littering I mean several Docker dotfiles and dot-directories in the user's home directory and other system-wide locations. When I installed Docker on my local computer, it created various images, containers, and volumes when I created an image.
This is what I want to prevent. Neither do I want nor do I need a fully-featured Docker environment on my local computer.
Maybe you should read up a bit on how Docker works; you seem to misunderstand a lot here.
For example, the "various images" are kind of the point of Docker. Images are layered, and each layer is its own image, so you might end up with 3 or 4 images despite only building one image.
This is something you can't really prevent. It's just how Docker works.
Anyway, you can mount the Docker socket into a container, and using that socket you can then build an image within the running container. That's essentially how most CI/CD systems work.
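A sketch of that socket-mounting approach, assuming the daemon's default socket path and the official docker:cli image (the wrapper function name and tags are illustrative):

```shell
# Build an image from inside a container by talking to the host's Docker
# daemon through its socket; the resulting image lands on the host.
build_via_socket() {
    tag="$1"      # e.g. myapp:latest
    context="$2"  # directory containing the Dockerfile
    docker run --rm \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v "$(cd "$context" && pwd)":/workspace \
        -w /workspace \
        docker:cli \
        docker build -t "$tag" .
}
# usage: build_via_socket myapp:latest ./myapp
```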
You could maybe look into podman and buildah; as far as I know, these can build images without a running Docker daemon. That might be a tad "cleaner", but it comes with other problems (like no caching).
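If podman fits, the daemonless equivalent might look like this (the function name and registry path are illustrative):

```shell
# Build, push, and remove the local copy without any Docker daemon involved.
podman_build_push() {
    image="$1"    # e.g. registry.example.com/myapp:1.0
    context="$2"  # directory containing the Dockerfile/Containerfile
    podman build -t "$image" "$context"
    podman push "$image"
    podman rmi "$image"   # leave no local copy behind
}
# usage: podman_build_push registry.example.com/myapp:1.0 ./myapp
```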
I have no problem with Docker creating several images and containers and volumes for building a single-image application. The problem is that it does not clean up afterwards and leaves me with multiple things I don't need for anything else.
I also don't care about caching or any "magic" stuff. I just ideally want to run one command (or a script doing it for me) to build an image, resulting in just this one image without any other traces left. … I just like a clean environment and the build process ideally being self-contained.
But I'll look into your suggestions, thanks!
Do you mean that you want to build the Docker image on one computer, export it to a different computer where it's going to run, and there shouldn't be any traces of the build process on the first computer? Perhaps it's possible with the "output option"… Otherwise you could write a small script which combines the commands for docker build, export to file, delete local image, and clean up the system.
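Such a script might look like this (the function name and paths are made up; the docker subcommands are the standard ones):

```shell
# Build an image, export it to a tarball for transfer, then remove all
# local traces of the build.
build_and_export() {
    image="$1"    # e.g. myapp:1.0
    context="$2"  # directory containing the Dockerfile
    out="$3"      # e.g. myapp-1.0.tar
    docker build -t "$image" "$context"
    docker save -o "$out" "$image"  # tarball to copy to the other machine
    docker rmi "$image"             # drop the local image
    docker system prune -f          # clean up dangling build leftovers
}
# usage: build_and_export myapp:1.0 ./myapp myapp-1.0.tar
# on the target: docker load -i myapp-1.0.tar
```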
For local testing: build and run tests on whatever computer I'm developing on.
For deployment: I have a self-hosted GitLab instance in a Kubernetes cluster. It comes with a registry all set up. Push the project, and let the CI/CD pipeline build, test, and deploy through staging into prod.
GitLab has a great set of CI tools for deploying Docker images, and it includes an internal registry of images automatically tied to your repo and available in CI.
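A minimal sketch of what that looks like in .gitlab-ci.yml, using GitLab's predefined CI_REGISTRY_* variables (the job name is illustrative; TLS and runner details vary by setup):

```yaml
build-image:
  image: docker:latest
  services:
    - docker:dind   # Docker-in-Docker daemon for the build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```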