6 posts tagged with "docker"
Exploring Docker Compose Support in Spring Boot 3.1
Let's take a brief look at the Docker Compose Support introduced in Spring Boot 3.1.
Please provide feedback if there are any inaccuracies!
Overview
When developing with the Spring framework, it seems that using Docker for setting up DB environments is more common than installing them directly on the local machine. Typically, the workflow involves:
- Using `docker run` before bootRun to prepare the DB in a running state
- Performing development and validation tasks using bootRun
- Stopping bootRun and using `docker stop` to stop the container DB
The process of running and stopping Docker before and after development tasks used to be quite cumbersome. However, starting from Spring Boot 3.1, you can use a `docker-compose.yaml` file to synchronize the lifecycle of Spring and Docker containers.
Contents
First, add the dependency:
dependencies {
// ...
developmentOnly 'org.springframework.boot:spring-boot-docker-compose'
// ...
}
Next, create a compose file as follows:
services:
  elasticsearch:
    image: 'docker.elastic.co/elasticsearch/elasticsearch:7.17.10'
    environment:
      - 'ELASTIC_PASSWORD=secret'
      - 'discovery.type=single-node'
      - 'xpack.security.enabled=false'
    ports:
      - '9200' # random port mapping
      - '9300'
During `bootRun`, the compose file is automatically recognized, and `docker compose up` is executed first.
However, if you are mapping the container port to a random host port, you may need to update `application.yml` every time `docker compose down` is triggered. Fortunately, starting from Spring Boot 3.1, once you write the compose file, Spring Boot takes care of the rest. It's incredibly convenient!
If you need to change the path to the compose file, simply modify the `file` property:
spring:
  docker:
    compose:
      file: infrastructure/compose.yaml
There are also properties related to lifecycle management, allowing you to adjust the container lifecycle appropriately. If you don't want the containers to stop every time you shut down Boot, you can use the `start-only` option:
spring:
  docker:
    compose:
      lifecycle-management: start-only # other options: none, start-and-stop
There are various other options available, so exploring them should help you choose what you need.
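For instance, here is a sketch of two other properties worth knowing (names as of Spring Boot 3.1; check the official docs for the full list):

```yaml
spring:
  docker:
    compose:
      enabled: true # turn the integration on or off
      skip:
        in-tests: true # skip Compose support when running tests (the default)
```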
Conclusion
No matter how much test code you write, verifying the interaction with an actual DB is essential during development. Setting up that environment used to feel like a tedious chore. While container technology made configuration much simpler, remembering to run `docker` commands before and after starting Spring Boot was definitely a hassle.
Now, starting from Spring Boot 3.1, developers no longer have to worry about forgetting to start a container, or forgetting to stop one and leaving it consuming memory. It allows developers to focus more on development. The seamless integration of Docker with Spring is both fascinating and convenient. Give it a try!
Could not find a valid Docker environment
Overview
After updating my Mac, Docker stopped working properly and I had to reinstall it. Afterwards, I ran into an error where containers would not start properly when running tests.
It turned out that `/var/run/docker.sock` was not properly configured. Here, I will share the solution to this issue.
Description
This problem occurs in Docker Desktop version `4.13.0`.
> By default Docker will not create the /var/run/docker.sock symlink on the host and use the docker-desktop CLI context instead. (see: https://docs.docker.com/desktop/release-notes/)
You can check the current Docker context using `docker context ls`, which will display something like this:
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock https://kubernetes.docker.internal:6443 (default) swarm
desktop-linux * moby unix:///Users/<USER>/.docker/run/docker.sock
To fix the issue, either set the default context or connect to `unix:///Users/<USER>/.docker/run/docker.sock`.
Solution
Try running the following command to switch to the default context and check if Docker works properly:
docker context use default
If the issue persists, you can manually create a symbolic link to resolve it with the following command:
sudo ln -svf /Users/<USER>/.docker/run/docker.sock /var/run/docker.sock
Docker Network
Overview
Since Docker containers run in isolated environments, they cannot communicate with each other by default. However, connecting multiple containers to a single Docker network enables them to communicate. In this article, we will explore how to configure networks for communication between different containers.
Types of Networks
Docker networks support various driver types, such as `bridge`, `host`, and `overlay`, depending on their purpose; the sketch after this list shows how a driver is selected.
- `bridge`: Allows multiple containers within a single host to communicate with each other.
- `host`: Used to run containers on the same network as the host computer.
- `overlay`: Used for networking between containers running on multiple hosts.
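The driver is chosen with the `-d` option when creating a network. A brief sketch (the `overlay` example assumes Swarm mode is enabled, so treat it as illustrative):

```bash
docker network create -d bridge my-bridge    # single-host networking
docker network create -d overlay my-overlay  # multi-host networking (requires Swarm mode)
```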
Creating a Network
Let's create a new Docker network using the `docker network create` command.
docker network create my-net
The newly added network can be verified using the `docker network ls` command, which confirms that it was created with the default `bridge` driver since the `-d` option was not specified.
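A sketch of that output (other networks omitted; the truncated ID matches the `inspect` output below):

```bash
docker network ls
# NETWORK ID     NAME      DRIVER    SCOPE
# 05f28107caa4   my-net    bridge    local
```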
Network Details
Let's inspect the details of the newly added network using the `docker network inspect` command.
docker network inspect my-net
[
{
"Name": "my-net",
"Id": "05f28107caa4fc699ea71c07a0cb7a17f6be8ee65f6001ed549da137e555b648",
"Created": "2022-08-02T09:05:20.250288712Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
By checking the `Containers` section, we can see that no containers are connected to this network yet.
Connecting Containers to the Network
Let's first run a container named `one`.
docker run -it -d --name one busybox
# af588368c67b8a273cf63a330ee5191838f261de1f3e455de39352e0e95deac4
If the `--network` option is not specified when running a container, it will connect to the default `bridge` network.
`busybox` is a lightweight image bundling common command-line utilities, officially provided by Docker and ideal for testing purposes.
docker network inspect bridge
#...
"Containers": {
"af588368c67b8a273cf63a330ee5191838f261de1f3e455de39352e0e95deac4": {
"Name": "one",
"EndpointID": "44a4a022cc0f5fb30e53f0499306db836fe64da15631f2abf68ebc74754d9750",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
#...
]
Now, let's connect the `one` container to the `my-net` network using the `docker network connect` command.
docker network connect my-net one
Upon rechecking the details of the `my-net` network, we can see that the `one` container has been added to the `Containers` section with the IP `172.18.0.2`.
docker network inspect my-net
[
{
"Name": "my-net",
"Id": "05f28107caa4fc699ea71c07a0cb7a17f6be8ee65f6001ed549da137e555b648",
"Created": "2022-08-02T09:05:20.250288712Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"af588368c67b8a273cf63a330ee5191838f261de1f3e455de39352e0e95deac4": {
"Name": "one",
"EndpointID": "ac85884c9058767b037b88102fe6c35fb65ebf91135fbce8df24a173b0defcaa",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Disconnecting a Container from the Network
A container can be connected to multiple networks simultaneously. Since the `one` container was initially connected to the `bridge` network, it is currently connected to both the `my-net` and `bridge` networks.
Let's disconnect the `one` container from the `bridge` network using the `docker network disconnect` command.
docker network disconnect bridge one
Connecting a Second Container
Let's connect another container named `two` to the `my-net` network.
This time, let's specify the network to connect to while running the container, using the `--network` option.
docker run -it -d --name two --network my-net busybox
# b1509c6fcdf8b2f0860902f204115017c3e2cc074810b330921c96e88ffb408e
Upon inspecting the details of the `my-net` network, we can see that the `two` container has been assigned the IP `172.18.0.3` and connected.
docker network inspect my-net
[
{
"Name": "my-net",
"Id": "05f28107caa4fc699ea71c07a0cb7a17f6be8ee65f6001ed549da137e555b648",
"Created": "2022-08-02T09:05:20.250288712Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"af588368c67b8a273cf63a330ee5191838f261de1f3e455de39352e0e95deac4": {
"Name": "one",
"EndpointID": "ac85884c9058767b037b88102fe6c35fb65ebf91135fbce8df24a173b0defcaa",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"b1509c6fcdf8b2f0860902f204115017c3e2cc074810b330921c96e88ffb408e": {
"Name": "two",
"EndpointID": "f6e40a7e06300dfad1f7f176af9e3ede26ef9394cb542647abcd4502d60c4ff9",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Container Networking
Let's test if the two containers can communicate with each other over the network.
First, let's use the `ping` command from the `one` container to ping the `two` container. Container names can be used as hostnames.
docker exec one ping two
# PING two (172.18.0.3): 56 data bytes
# 64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.114 ms
# 64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.915 ms
Next, let's ping the `one` container from the `two` container.
docker exec two ping one
# PING one (172.18.0.2): 56 data bytes
# 64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.108 ms
# 64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.734 ms
# 64 bytes from 172.18.0.2: seq=2 ttl=64 time=0.270 ms
# 64 bytes from 172.18.0.2: seq=3 ttl=64 time=0.353 ms
# 64 bytes from 172.18.0.2: seq=4 ttl=64 time=0.371 ms
Both containers can communicate smoothly.
Removing the Network
Finally, let's remove the `my-net` network using the `docker network rm` command.
docker network rm my-net
# Error response from daemon: error while removing network: network my-net id 05f28107caa4fc699ea71c07a0cb7a17f6be8ee65f6001ed549da137e555b648 has active endpoints
If there are active containers running on the network you are trying to remove, it will not be deleted.
In such cases, you need to stop all containers connected to that network before deleting the network.
docker stop one two
# one
# two
docker network rm my-net
# my-net
Network Cleanup
When running multiple containers on a host, you may end up with networks that no containers are connected to. In such cases, you can use the `docker network prune` command to remove all unnecessary networks at once.
docker network prune
WARNING! This will remove all custom networks not used by at least one container.
Are you sure you want to continue? [y/N] y
Conclusion
In this article, we explored various `docker network` commands:
- `ls`
- `create`
- `connect`
- `disconnect`
- `inspect`
- `rm`
- `prune`
Understanding networks is essential when working with Docker containers, whether you are containerizing databases or clustering containers; it is a key skill for managing multiple containers effectively.
Docker Volume
Overview
Docker containers are completely isolated by default, which means that data inside a container cannot be accessed from the host machine. It also means the data's lifecycle is entirely tied to the container's lifecycle. In simpler terms, when a container is removed, its data is lost as well.
So, what should you do if you need to permanently store important data like logs or database information, independent of the container's lifecycle?
This is where `volumes` come into play.
Installing PostgreSQL Locally
Let's explore volumes by installing and using PostgreSQL in a simple example.
Without Using Volumes
1. Pull the Image
docker run -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=1234 -d postgres
2. Connect to PostgreSQL
docker exec -it postgres psql -U postgres
3. Create a User
create user testuser password '1234' superuser;
4. Create a Database
create database testdb owner testuser;
You can also use tools like `DBeaver` or `DataGrip` to create users and databases.
When you're done, you can stop the container with `docker stop postgres`. Checking the container list with `docker ps -a` will show that the container is stopped but not removed.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c72a3d21021 postgres "docker-entrypoint.s…" 54 seconds ago Exited (0) 43 seconds ago postgres
In this state, you can restart the container with `docker start postgres`, and the data will still be there.
Let's verify this.
Using the `\list` command in PostgreSQL will show that the `testdb` database still exists.
postgres=# \list
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+------------+------------+-----------------------
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
testdb | testuser | UTF8 | en_US.utf8 | en_US.utf8 |
(4 rows)
But what happens if you completely remove the container using the `docker rm` command?
After running `docker rm postgres` and then `docker run` again, a new container is created, and you'll see that `testdb` and the user are gone.
$ docker rm postgres
postgres
$ docker run -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=1234 -d postgres
67c5c39658f5a21a833fd2fab6058f509ddac110c72749092335eec5516177c2
$ docker exec -it postgres psql -U postgres
psql (14.4 (Debian 14.4-1.pgdg110+1))
Type "help" for help.
postgres=# \list
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+------------+------------+-----------------------
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
(3 rows)
postgres=#
Using Volumes
First, create a volume.
$ docker volume create postgres
postgres
You can verify the volume creation with the `ls` command.
$ docker volume ls
DRIVER VOLUME NAME
local postgres
Now, run the PostgreSQL container with the created volume mounted.
$ docker run -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=1234 -v postgres:/var/lib/postgresql/data -d postgres
002c552fe092da485ee30235d809c835eeb08bd7c00e6f91a2f172618682c48e
The subsequent steps are the same as those without using volumes. Now, even if you completely remove the container using `docker rm`, the data will remain in the volume and won't be lost.
As mentioned earlier, for long-term storage of log files or backup data, you can use volumes to ensure data persistence independent of the container's lifecycle.
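A quick way to verify this (a sketch; container IDs and output abbreviated):

```bash
# Force-remove the running container, then recreate it with the same volume
docker rm -f postgres
docker run -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=1234 \
  -v postgres:/var/lib/postgresql/data -d postgres

# testdb is still listed, because its files live in the postgres volume
docker exec postgres psql -U postgres -c '\list'
```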
Conclusion
We have explored what Docker volumes are and how to use them through a PostgreSQL example. Volumes are a key mechanism for data management in Docker containers. By appropriately using volumes based on the nature of the container, you can manage data safely and easily, which can significantly enhance development productivity once you get accustomed to it. For more detailed information, refer to the official documentation.
Why Docker?
This article is written for internal information sharing and is explained based on a Java development environment.
What is Docker?
A containerization technology that lets you create and use Linux containers. It is also the name of the largest company supporting this technology, as well as the name of the open-source project.
The image everyone has seen at least once when searching for Docker
Introduced in 2013, Docker has transformed the infrastructure world into a container-centric one. Many applications are now deployed using containers, with Dockerfiles created to build images and deploy containers, becoming a common development process. In the 2019 DockerCon presentation, it was reported that there were a staggering 105.2 billion container image pulls.
Using Docker allows you to handle containers like very lightweight modular virtual machines. Additionally, containers can be built, deployed, copied, and moved from one environment to another flexibly, supporting the optimization of applications for the cloud.
Benefits of Docker Containers
Consistent Behavior Everywhere
As long as a container runtime is installed, Docker containers guarantee the same behavior anywhere. For example, team member A on Windows and team member B on macOS work on different operating systems, but by sharing an image built from a Dockerfile, they see the same results regardless of OS. The same goes for deployment: once a container has been verified to work correctly, it will run normally wherever it is deployed, without additional configuration.
Modularity
Docker's containerization approach focuses on the ability to decompose, update, or recover parts of an application without needing to break down the entire application. Users can share processes among multiple applications in a microservices-based approach, similar to how service-oriented architecture (SOA) operates.
Layering and Image Version Control
Each Docker image file consists of a series of layers, which are combined into a single image.
Docker reuses these layers when building new containers, making the build process much faster. Intermediate changes are shared between images, improving speed, scalability, and efficiency.
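You can inspect these layers for any local image with `docker history`; a sketch (image choice and sizes are illustrative):

```bash
docker history postgres:14
# IMAGE          CREATED       CREATED BY                                 SIZE
# 67c5c39658f5   2 weeks ago   /bin/sh -c #(nop)  CMD ["postgres"]        0B
# <missing>      2 weeks ago   /bin/sh -c apt-get update && apt-get ...   95.1MB
# ...
```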
Rapid Deployment
Docker-based containers can reduce deployment time to mere seconds. Since there is no need to boot an OS to add or move a container, deployment time is significantly reduced. Moreover, the fast deployment speed makes creating and deleting the data generated by containers cheap and easy, without users needing to worry about whether it was done correctly.
In short, Docker technology emphasizes efficiency and offers a more granular and controllable microservices-based approach.
Rollback
When deploying with Docker, images are used with tags. For example, if you deploy using version 1.2 of an image, and version 1.1 of the image is still in the repository, you can simply run a command to roll back, without needing to prepare the jar file again.
docker run --name app image:1.2
docker stop app
docker rm app # remove the stopped container so the name can be reused
# Run version 1.1
docker run --name app image:1.1
Comparing Before and After Using Docker
Using Docker containers allows for much faster and more flexible deployment compared to traditional methods.
Deployment Without Docker Containers
- Package the `jar` file to be deployed on the local machine.
- Transfer the `jar` file to the production server using a file transfer protocol such as `scp`.
- Write a service file so the app's status can be managed with `systemctl`.
- Run the application with `systemctl start app`.
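As a rough sketch of those steps (host names and paths are hypothetical):

```bash
# 1-2. Build the artifact locally and copy it to the server
./gradlew bootJar
scp build/libs/app.jar user@prod-server:/opt/app/

# 3-4. Manage it as a service on the server
sudo systemctl start app
sudo systemctl status app
```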
If multiple apps run on a single server, the complexity of tracking down stopped apps increases significantly. Running multiple apps across multiple servers is just as cumbersome, since commands must be executed on each server, making it a tiring process.
Deployment With Docker Containers
- Use a `Dockerfile` to create an image of the application. → Build ⚒️
- Push the image to a repository like Dockerhub or the GitLab registry. → Shipping 🚢
- Run the application on the production server with `docker run image`.
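Those three steps map to three commands; a sketch (registry and image names are hypothetical):

```bash
# Build ⚒️
docker build -t registry.example.com/team/app:1.0 .

# Shipping 🚢
docker push registry.example.com/team/app:1.0

# Run on the production server
docker run -d --name app -p 80:8080 registry.example.com/team/app:1.0
```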
You don't need to waste time on complex path settings and file transfer processes. Docker works in any environment, ensuring it runs anywhere and uses resources efficiently.
Docker is designed to manage single containers effectively. However, once you start using hundreds of containers and containerized apps, management and orchestration become very challenging. To provide services like networking, security, and telemetry across all containers, you need to step back and group them. This is where Kubernetes comes into play.
When Should You Use It?
Developers can find Docker extremely useful in almost any situation. In fact, Docker often proves superior to traditional methods in development, deployment, and operations, so Docker containers should always be a top consideration.
- When you need a development database like PostgreSQL on your local machine.
- When you want to test or quickly adopt new technologies.
- When you have software that is difficult to install or uninstall directly on your local machine (e.g., reinstalling Java on Windows can be a nightmare).
- When you want to run the latest deployment version from another team, like the front-end team, on your local machine.
- When you need to switch your production server from NCP to AWS.
Example
A simple API server:
docker run --name rest-server -p 80:8080 songkg7/rest-server
# Using curl
curl http://localhost/ping
# Using httpie
http localhost/ping
Since port 80 is mapped to the container's port 8080, you can see that communication with the container works well.
- `--name`: Assign a name to the container
- `-p`: Publish a container's port(s) to the host
- `--rm`: Automatically remove the container when it exits
- `-i`: Interactive, keep STDIN open even if not attached
- `-t`: Allocate a pseudo-TTY, creating an environment similar to a terminal
- `-v`: Bind mount a volume
Conclusion
Using Docker containers allows for convenient operations while solving issues that arise with traditional deployment methods. Next, we'll look into the `Dockerfile`, which is used to build an image of your application.