
Could not find a valid Docker environment

· One min read
Haril Song
Owner, Software Engineer at 42dot

Overview

After updating my Mac, Docker stopped working properly, so I had to reinstall it. Afterwards, however, running tests failed with an error saying the containers could not start.

It turned out that /var/run/docker.sock was not properly configured. Here I share how I resolved the issue.

Description

This problem occurs in Docker Desktop version 4.13.0.

By default, Docker no longer creates the /var/run/docker.sock symlink on the host and uses the docker-desktop CLI context instead (see: https://docs.docker.com/desktop/release-notes/).

You can check the current Docker context using docker context ls, which will display something like this:

NAME              TYPE    DESCRIPTION                                DOCKER ENDPOINT                                 KUBERNETES ENDPOINT                                  ORCHESTRATOR
default           moby    Current DOCKER_HOST based configuration    unix:///var/run/docker.sock                     https://kubernetes.docker.internal:6443 (default)    swarm
desktop-linux *   moby                                               unix:///Users/<USER>/.docker/run/docker.sock

To fix the issue, either set the default context or connect to unix:///Users/<USER>/.docker/run/docker.sock.

Solution

Try running the following command to switch to the default context and check if Docker works properly:

docker context use default

If the issue persists, you can manually create a symbolic link to resolve it with the following command:

sudo ln -svf /Users/<USER>/.docker/run/docker.sock /var/run/docker.sock
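If Docker responds normally afterwards, the fix worked. A quick sanity check, assuming Docker Desktop is running:

ls -l /var/run/docker.sock
# lrwxr-xr-x ... /var/run/docker.sock -> /Users/<USER>/.docker/run/docker.sock
docker ps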


Key Generation Error

· One min read
Haril Song
Owner, Software Engineer at 42dot
info

Here is a simple solution to resolve the error.

key generation error: Unknown signature subpacket: 34

While trying to register a GPG key on Keybase, the above error occurred. In search of a solution, I found the following workaround on GitHub.

$ gpg --edit-key mykey

gpg> showpref
[ultimate] (1). mykey
Cipher: AES256, AES192, AES, 3DES
AEAD: OCB, EAX
Digest: SHA512, SHA384, SHA256, SHA224, SHA1
Compression: ZLIB, BZIP2, ZIP, Uncompressed
Features: MDC, AEAD, Keyserver no-modify

gpg> setpref AES256 AES192 AES 3DES SHA512 SHA384 SHA256 SHA224 SHA1 ZLIB BZIP2 ZIP
Set preference list to:
Cipher: AES256, AES192, AES, 3DES
AEAD:
Digest: SHA512, SHA384, SHA256, SHA224, SHA1
Compression: ZLIB, BZIP2, ZIP, Uncompressed
Features: MDC, Keyserver no-modify
Really update the preferences? (y/N) y

gpg> save

After this, the key registration should go through smoothly. For more details, refer to the GitHub thread where I found the workaround.
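If Keybase still rejects the key, re-exporting the updated public key and retrying the registration is worth a try — a sketch, where <KEY_ID> is a hypothetical placeholder for your own key id:

# Export the updated public key, then retry the Keybase registration with it
gpg --armor --export <KEY_ID>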


How to Change Vimium Shortcuts

· One min read
Haril Song
Owner, Software Engineer at 42dot

Overview

Since I started using Vim recently, I have been aligning all of my environments with it. In doing so, I noticed that some shortcuts differ between Vimari, the Vim extension for Safari, and Vimium, its counterpart for Chrome. To unify them, I decided to remap specific keys. In this guide, I will introduce how to remap shortcuts in Vimium.

Vimium Options Window


Click the Vimium button in Chrome's extension area to open the options window.


By editing the key mappings field here, you can change the shortcuts; see the example below. The basic mapping syntax is the same as in Vim. Personally, I found it more convenient to switch tab navigation from q and w, as in Vimari, to J and K in Vimium.
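For example, the following lines can be added to the mappings field — a sketch of my preference; previousTab and nextTab are Vimium's built-in command names:

map J previousTab
map K nextTab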

If you're unsure which key to map to a specific action, you can click on "show available commands" next to it for a helpful explanation.


From here, you can find the desired action and map it to a specific key.

Enable Keyboard Key Repeat on Mac

· One min read
Haril Song
Owner, Software Engineer at 42dot

On a Mac, when you press and hold a key, a special-character popup (for accented characters such as umlauts) appears instead of repeating the key. This can be quite disruptive when navigating code in editors like Vim.

defaults write -g ApplePressAndHoldEnabled -bool false

After running the above command and restarting the application, the special-character popup no longer appears and key repeat is enabled.
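If you ever want the special-character popup back, you can delete the override to restore the system default:

defaults delete -g ApplePressAndHoldEnabled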

Docker Network

· 6 min read
Haril Song
Owner, Software Engineer at 42dot

Overview

Since Docker containers run in isolated environments, they cannot communicate with each other by default. However, connecting multiple containers to a single Docker network enables them to communicate. In this article, we will explore how to configure networks for communication between different containers.

Types of Networks

Docker networks support various types of network drivers such as bridge, host, and overlay based on their purposes.

  • bridge: Allows multiple containers within a single host to communicate with each other.
  • host: Used to run containers in the same network as the host computer.
  • overlay: Used for networking between containers running on multiple hosts.

Creating a Network

Let's create a new Docker network using the docker network create command.

docker network create my-net

The newly added network can be verified with the docker network ls command; since the -d option was not specified, it was created with the default bridge driver.
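The listing looks roughly like this (IDs elided); the bridge, host, and none networks are created by Docker itself:

docker network ls
# NETWORK ID     NAME      DRIVER    SCOPE
# <id>           bridge    bridge    local
# <id>           host      host      local
# <id>           my-net    bridge    local
# <id>           none      null      local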

Network Details

Let's inspect the details of the newly added network using the docker network inspect command.

docker network inspect my-net
[
    {
        "Name": "my-net",
        "Id": "05f28107caa4fc699ea71c07a0cb7a17f6be8ee65f6001ed549da137e555b648",
        "Created": "2022-08-02T09:05:20.250288712Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

By checking the Containers section, we can see that no containers are connected to this network.

Connecting Containers to the Network

Let's first run a container named one.

docker run -it -d --name one busybox
# af588368c67b8a273cf63a330ee5191838f261de1f3e455de39352e0e95deac4

If the --network option is not specified when running a container, it will by default connect to the bridge network.

info

busybox is a lightweight image bundling common command-line utilities, officially provided by Docker, and is ideal for testing purposes.

docker network inspect bridge
#...
"Containers": {
"af588368c67b8a273cf63a330ee5191838f261de1f3e455de39352e0e95deac4": {
"Name": "one",
"EndpointID": "44a4a022cc0f5fb30e53f0499306db836fe64da15631f2abf68ebc74754d9750",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
#...
]

Now, let's connect the one container to the my-net network using the docker network connect command.

docker network connect my-net one

Upon rechecking the details of the my-net network, we can see that the one container has been added to the Containers section with the IP 172.18.0.2.

docker network inspect my-net
[
    {
        "Name": "my-net",
        "Id": "05f28107caa4fc699ea71c07a0cb7a17f6be8ee65f6001ed549da137e555b648",
        "Created": "2022-08-02T09:05:20.250288712Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "af588368c67b8a273cf63a330ee5191838f261de1f3e455de39352e0e95deac4": {
                "Name": "one",
                "EndpointID": "ac85884c9058767b037b88102fe6c35fb65ebf91135fbce8df24a173b0defcaa",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Disconnecting a Container from the Network

A container can be connected to multiple networks simultaneously. Since the one container was initially connected to the bridge network, it is currently connected to both the my-net and bridge networks.

Let's disconnect the one container from the bridge network using the docker network disconnect command.

docker network disconnect bridge one
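You can confirm the disconnection by checking that the bridge network's Containers section is empty again — a quick check using the --format option:

docker network inspect bridge --format '{{json .Containers}}'
# {}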

Connecting a Second Container

Let's connect another container named two to the my-net network.

This time, let's specify the network to connect to while running the container using the --network option.

docker run -it -d --name two --network my-net busybox
# b1509c6fcdf8b2f0860902f204115017c3e2cc074810b330921c96e88ffb408e

Upon inspecting the details of the my-net network, we can see that the two container has been assigned the IP 172.18.0.3 and connected.

docker network inspect my-net
[
    {
        "Name": "my-net",
        "Id": "05f28107caa4fc699ea71c07a0cb7a17f6be8ee65f6001ed549da137e555b648",
        "Created": "2022-08-02T09:05:20.250288712Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "af588368c67b8a273cf63a330ee5191838f261de1f3e455de39352e0e95deac4": {
                "Name": "one",
                "EndpointID": "ac85884c9058767b037b88102fe6c35fb65ebf91135fbce8df24a173b0defcaa",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "b1509c6fcdf8b2f0860902f204115017c3e2cc074810b330921c96e88ffb408e": {
                "Name": "two",
                "EndpointID": "f6e40a7e06300dfad1f7f176af9e3ede26ef9394cb542647abcd4502d60c4ff9",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Container Networking

Let's test if the two containers can communicate with each other over the network.

First, let's use the ping command from the one container to ping the two container. Container names can be used as hostnames, since Docker provides built-in DNS resolution for containers on user-defined networks (this does not work on the default bridge network).

docker exec one ping two
# PING two (172.18.0.3): 56 data bytes
# 64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.114 ms
# 64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.915 ms

Next, let's ping the one container from the two container.

docker exec two ping one
# PING one (172.18.0.2): 56 data bytes
# 64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.108 ms
# 64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.734 ms
# 64 bytes from 172.18.0.2: seq=2 ttl=64 time=0.270 ms
# 64 bytes from 172.18.0.2: seq=3 ttl=64 time=0.353 ms
# 64 bytes from 172.18.0.2: seq=4 ttl=64 time=0.371 ms

Both containers can communicate smoothly.

Removing the Network

Finally, let's remove the my-net network using the docker network rm command.

docker network rm my-net
# Error response from daemon: error while removing network: network my-net id 05f28107caa4fc699ea71c07a0cb7a17f6be8ee65f6001ed549da137e555b648 has active endpoints

A network cannot be removed while containers are still attached to it.

In that case, you need to stop all containers connected to the network before deleting it.

docker stop one two
# one
# two
docker network rm my-net
# my-net

Network Cleanup

When running multiple containers on a host, you may end up with networks that have no containers connected to them. In such cases, you can use the docker network prune command to remove all unnecessary networks at once.

docker network prune
WARNING! This will remove all custom networks not used by at least one container.
Are you sure you want to continue? [y/N] y

Conclusion

In this article, we explored various docker network commands:

  • ls
  • create
  • connect
  • disconnect
  • inspect
  • rm
  • prune

Understanding networks is essential when working with Docker containers, whether you are containerizing a database or clustering containers. Networking is a key skill for managing multiple containers effectively.


Docker Volume

· 4 min read
Haril Song
Owner, Software Engineer at 42dot

Overview

Docker containers are completely isolated by default, which means the host machine cannot access data inside a container. It also means a container's data is tied to its lifecycle: when the container is removed, its data is lost with it.

So, what should you do if you need to permanently store important data like logs or database information, independent of the container's lifecycle?

This is where volumes come into play.

Installing PostgreSQL Locally

Let's explore volumes by installing and using PostgreSQL in a simple example.

Without Using Volumes

1. Run the Image

docker run -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=1234 -d postgres

2. Connect to PostgreSQL

docker exec -it postgres psql -U postgres

3. Create a User

create user testuser password '1234' superuser;

4. Create a Database

create database testdb owner testuser;

You can also use tools like DBeaver or DataGrip to create users and databases.

When you're done, you can stop the container with docker stop postgres. Checking the container list with docker ps -a will show that the container is stopped but not removed.

$ docker ps -a
CONTAINER ID   IMAGE      COMMAND                  CREATED          STATUS                      PORTS     NAMES
5c72a3d21021   postgres   "docker-entrypoint.s…"   54 seconds ago   Exited (0) 43 seconds ago             postgres

In this state, you can restart the container with docker start postgres and the data will still be there.

Let's verify this.

Using the \list command in PostgreSQL will show that the testdb database still exists.

postgres=# \list
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 testdb    | testuser | UTF8     | en_US.utf8 | en_US.utf8 |
(4 rows)

But what happens if you completely remove the container using the docker rm option?

After running docker rm postgres and then docker run again, a new container is created, and you'll see that the testdb and user are gone.

$ docker rm postgres
postgres
$ docker run -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=1234 -d postgres
67c5c39658f5a21a833fd2fab6058f509ddac110c72749092335eec5516177c2
$ docker exec -it postgres psql -U postgres
psql (14.4 (Debian 14.4-1.pgdg110+1))
Type "help" for help.

postgres=# \list
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(3 rows)

postgres=#

Using Volumes

First, create a volume.

$ docker volume create postgres
postgres

You can verify the volume creation with the ls command.

$ docker volume ls
DRIVER VOLUME NAME
local postgres
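You can also inspect the volume to see where the data actually lives on the host — the output below is a sketch of the local driver's typical fields, abbreviated:

docker volume inspect postgres
# [
#     {
#         "CreatedAt": "...",
#         "Driver": "local",
#         "Labels": {},
#         "Mountpoint": "/var/lib/docker/volumes/postgres/_data",
#         "Name": "postgres",
#         "Options": {},
#         "Scope": "local"
#     }
# ]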

Now, run the PostgreSQL container with the created volume mounted.

$ docker run -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=1234 -v postgres:/var/lib/postgresql/data -d postgres
002c552fe092da485ee30235d809c835eeb08bd7c00e6f91a2f172618682c48e

The subsequent steps are the same as those without using volumes. Now, even if you completely remove the container using docker rm, the data will remain in the volume and won't be lost.

As mentioned earlier, for long-term storage of log files or backup data, you can use volumes to ensure data persistence independent of the container's lifecycle.
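When a volume is no longer needed, it can be cleaned up explicitly; note that a volume cannot be removed while a container still references it:

# Remove a specific volume
docker volume rm postgres

# Or remove all unused local volumes at once
docker volume prune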

Conclusion

We have explored what Docker volumes are and how to use them through a PostgreSQL example. Volumes are a key mechanism for data management in Docker containers. By appropriately using volumes based on the nature of the container, you can manage data safely and easily, which can significantly enhance development productivity once you get accustomed to it. For more detailed information, refer to the official documentation.


[Jacoco] Aggregating Jacoco Reports for Multi-Module Projects

· 2 min read
Haril Song
Owner, Software Engineer at 42dot

Overview

Starting from Gradle 7.4, a feature has been added that allows you to aggregate multiple Jacoco test reports into a single, unified report. In the past, it was very difficult to view the test results across multiple modules in one file, but now it has become much more convenient to merge these reports.

Usage

Creating a Submodule Solely for Collecting Reports

The current project structure consists of a module named application and other modules like list and utils that are used by the application module.

By adding a code-coverage-report module, we can collect the test reports from the application, list, and utils modules.

The project structure will then look like this:

  • application
  • utils
  • list
  • code-coverage-report
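In a Gradle multi-project build, this structure would be declared in settings.gradle, roughly as follows (the root project name is a hypothetical placeholder):

// settings.gradle
rootProject.name = 'my-project'
include 'application'
include 'utils'
include 'list'
include 'code-coverage-report'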

Adding the jacoco-report-aggregation Plugin

// code-coverage-report/build.gradle
plugins {
    id 'base'
    id 'jacoco-report-aggregation'
}

repositories {
    mavenCentral()
}

dependencies {
    jacocoAggregation project(":application")
}

Now, by running ./gradlew testCodeCoverageReport, you can generate a Jacoco report that aggregates the test results from all modules.


warning

To use the aggregation feature, a jar file is required. If you have set jar { enabled = false }, you need to change it to true.
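In other words, each module's build script should leave the jar task enabled — for example:

// application/build.gradle
jar {
    enabled = true // the default; required for report aggregation
}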

Update 22-09-28

In the case of a Gradle multi-project setup, there is an issue where packages that were properly excluded in a single project are not excluded in the aggregate report.

By adding the following configuration, you can generate a report that excludes specific packages.

testCodeCoverageReport {
    reports {
        csv.required = true
        xml.required = false
    }
    getClassDirectories().setFrom(files(
        [project(':api'), project(':utils'), project(':core')].collect {
            it.fileTree(dir: "${it.buildDir}/classes/java/main", exclude: [
                '**/dto/**',
                '**/config/**',
                '**/output/**',
            ])
        }
    ))
}

Next Step

The jvm-test-suite plugin, introduced in Gradle alongside jacoco-report-aggregation, also seems very useful. Since the two plugins are complementary, it would be beneficial to use them together.


Why Docker?

· 5 min read
Haril Song
Owner, Software Engineer at 42dot
info

This article is written for internal information sharing and is explained based on a Java development environment.

What is Docker?

info

A containerization technology that allows you to create and use Linux containers; it is also the name of the open-source project and of the largest company supporting the technology.

The image everyone has seen at least once when searching for Docker

Introduced in 2013, Docker has transformed the infrastructure world into a container-centric one. Many applications are now deployed as containers, and writing a Dockerfile to build an image and deploy it as a container has become a common part of the development process. The 2019 DockerCon presentation reported a staggering 105.2 billion container image pulls.

Using Docker allows you to handle containers like very lightweight modular virtual machines. Additionally, containers can be built, deployed, copied, and moved from one environment to another flexibly, supporting the optimization of applications for the cloud.

Benefits of Docker Containers

Consistent Behavior Everywhere

As long as a container runtime is installed, Docker containers guarantee the same behavior anywhere. For example, team member A on Windows and team member B on macOS are working on different operating systems, but by sharing an image through a Dockerfile, they see the same results regardless of OS. The same goes for deployment: once a container has been verified to work correctly, it operates normally wherever it is run, without additional configuration.

Modularity

Docker's containerization approach focuses on the ability to decompose, update, or recover parts of an application without needing to break down the entire application. Users can share processes among multiple applications in a microservices-based approach, similar to how service-oriented architecture (SOA) operates.

Layering and Image Version Control

Each Docker image file consists of a series of layers, which are combined into a single image.

Docker reuses these layers when building new containers, making the build process much faster. Intermediate changes are shared between images, improving speed, scalability, and efficiency.

Rapid Deployment

Docker-based containers can reduce deployment time to mere seconds. Because no OS needs to boot to add or move a container, deployment time drops dramatically. The fast deployment speed also makes creating and deleting containers, and the data they generate, cheap and easy, without worrying about whether it was done correctly.

In short, Docker technology emphasizes efficiency and offers a more granular and controllable microservices-based approach.

Rollback

When deploying with Docker, images are used with tags. For example, if you deploy using version 1.2 of an image, and version 1.1 of the image is still in the repository, you can simply run the command without needing to prepare the jar file again.

docker run --name app image:1.2
docker stop app

## Run version 1.1
docker run --name app image:1.1

Comparing Before and After Using Docker

Using Docker containers allows for much faster and more flexible deployment compared to traditional methods.

Deployment Without Docker Containers

  1. Package the jar file to be deployed on the local machine.
  2. Transfer the jar file to the production server using file transfer protocols like scp.
  3. Write a service file using systemctl for status management.
  4. Run the application with systemctl start app.

If multiple apps run on a single server, the complexity increases significantly, for example when tracking down stopped apps. Running multiple apps across multiple servers is similarly cumbersome, since the commands must be executed on every server, making it a tiring process.

Deployment With Docker Containers

  1. Use a Dockerfile to create an image of the application. → Build ⚒️
  2. Push the image to a repository like Dockerhub or Gitlab registry. → Shipping🚢
  3. Run the application on the production server with docker run image.

You don't need to waste time on complex path settings and file transfer processes. Docker works in any environment, ensuring it runs anywhere and uses resources efficiently.
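As a rough sketch of step 1, a Dockerfile for a Java application can be as small as this — the base image and jar path are assumptions, not from this article:

# Dockerfile
FROM eclipse-temurin:17-jre
COPY build/libs/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]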

Docker is designed to manage single containers effectively. However, as you start using hundreds of containers and containerized apps, management and orchestration can become very challenging. To provide services like networking, security, and telemetry across all containers, you need to step back and group them. This is where Kubernetes¹ comes into play.

When Should You Use It?

Developers can find Docker extremely useful in almost any situation. In fact, Docker often proves superior to traditional methods in development, deployment, and operations, so Docker containers should always be a top consideration.

  1. When you need a development database like PostgreSQL on your local machine.
  2. When you want to test or quickly adopt new technologies.
  3. When you have software that is difficult to install or uninstall directly on your local machine (e.g., reinstalling Java on Windows can be a nightmare).
  4. When you want to run the latest deployment version from another team, like the front-end team, on your local machine.
  5. When you need to switch your production server from NCP to AWS.

Example

A simple API server:

docker run --name rest-server -p 80:8080 songkg7/rest-server
# Using curl
curl http://localhost/ping

# Using httpie
http localhost/ping

Since port 80 is mapped to the container's port 8080, you can see that communication with the container works well.

Commonly Used Docker Run Options

--name : Assign a name to the container

-p : Publish a container's port(s) to the host

--rm : Automatically remove the container when it exits

-i : Interactive, keep STDIN open even if not attached

-t : Allocate a pseudo-TTY, creating an environment similar to a terminal

-v : Bind mount a volume
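Putting several of these options together, a typical throwaway debugging container might look like this sketch:

# Interactive busybox shell with the current directory mounted; removed on exit
docker run --rm -it --name scratch -v "$(pwd)":/work busybox sh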

Conclusion

Using Docker containers allows for convenient operations while solving issues that arise with traditional deployment methods. Next, we'll look into the Dockerfile, which creates an image of your application.



Footnotes

  1. Kubernetes

Exploring Kubernetes

· 4 min read
Haril Song
Owner, Software Engineer at 42dot

What is Kubernetes?

Kubernetes provides the following functionalities:

  • Service discovery and load balancing
  • Storage orchestration
  • Automated rollouts and rollbacks
  • Automated bin packing
  • Automated scaling
  • Secret and configuration management

For more detailed information, refer to the official documentation.

There are various ways to run Kubernetes, but the official site uses minikube for demonstration. This article focuses on utilizing Kubernetes using Docker Desktop. If you want to learn how to use minikube, refer to the official site.

Let's briefly touch on minikube.

Minikube

Install

brew install minikube

Usage

The commands are intuitive and straightforward, requiring minimal explanation.

minikube start
minikube dashboard
minikube stop
# Clean up resources after use
minikube delete --all

Pros

Minikube is suitable for development purposes as it does not require detailed configurations like setting up secrets.

Cons

One major drawback is that the command to open the dashboard sometimes hangs. This issue is the main reason I am not using minikube while writing this article.

Docker Desktop

Install

Simply activate Kubernetes from the Docker Desktop menu.


Dashboard

The Kubernetes dashboard is not enabled by default. You can activate it using the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

Starting the Dashboard

kubectl proxy

You can now access the dashboard via the kubectl proxy URL, typically http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.


To log in, you will need a token. Let's see how to create one.

Secrets

First, create a kubernetes folder to store related files separately.

mkdir kubernetes && cd kubernetes
warning

Granting admin privileges to the dashboard account can pose security risks, so be cautious when using it in actual operations.

dashboard-adminuser.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dashboard-adminuser.yaml

cluster-role-binding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard

kubectl apply -f cluster-role-binding.yaml

Create Token

kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IjVjQjhWQVdpeWdLTlJYeXVKSUpxZndQUkoxdzU3eXFvM2dtMHJQZGY4TUkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjox7jU4NTA3NTY1LCJpYXQiOjE2NTg1MDM5NjUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW4lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW55Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZTRkODM5NjQtZWE2MC00ZWI0LTk1NDgtZjFjNWQ3YWM4ZGQ3In19LCJuYmYiOjE2NTg1MDM5NjUsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn1.RjoUaQnhTVKvzpAx_rToItI8HTZsr-6brMHWL63ca1_D4QIMCxU-zz7HFK04tCvOwyOTWw603XPDCv-ovjs1lM6A3tdgncqs8z1oTRamM4E-Sum8oi7cKnmVFSLjfLKqQxapBvZF5x-SxJ8Myla-izQxYkCtbWIlc6JfShxCSBJvfwSGW8c6kKdYdJv1QQdU1BfPY1sVz__cLNPA70_OpoosHevfVV86hsMvxCwVkNQHIpGlBX-NPog4nLY4gfuCMxKqjdVh8wLT7yS-E3sUJiXCcPJ2-BFSen4y-RIDbg18qbCtE3hQBr033Mfuly1Wc12UkU4bQeiF5SerODDn-g

Use the generated token to log in.

Successful access!

Creating a Deployment

Create a deployment using an image. For this article, I prepared a web server written in Go in advance.

kubectl create deployment rest-server --image=songkg7/rest-server

As soon as the command is executed successfully, you can easily monitor the changes on the dashboard.

The dashboard updates immediately upon deployment creation.

However, let's also learn how to check this via the CLI (the fundamental way!).

Checking Status

kubectl get deployments

get-deployment

When a deployment is created, pods are also generated simultaneously.

kubectl get pods -o wide

get-pods

Having confirmed that everything is running smoothly, let's send a request to our web server. Instead of using curl, we will use httpie¹. If you are more comfortable with curl, feel free to use it.

http localhost:8080/ping


Even though everything seems to be working fine, why can't we receive a response? 🤔

This is because our service is not exposed to the outside world yet. By default, Kubernetes pods can only communicate internally. Let's make our service accessible externally.

Exposing the Service

kubectl expose deployment rest-server --type=LoadBalancer --port=8080

Since our service uses port 8080, we open this port. Using a different port may result in connection issues.
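You can verify the exposure with kubectl; the rest-server service should be listed with type LoadBalancer and port 8080:

kubectl get service rest-server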

Now, try sending the request again.

http localhost:8080/ping

(This time, a 200 OK response is returned.)

You can see that you receive a successful response.



Footnotes

  1. Elegant httpie

[Java] Making First Collection More Collection-like - Iterable

· 2 min read
Haril Song
Owner, Software Engineer at 42dot

Overview

// The Java Collection interface extends Iterable.
public interface Collection<E> extends Iterable<E>

First-class collections are a very useful way to handle objects. However, despite the name "first-class collection", such a class only holds a Collection as a field and is not actually a Collection itself, so you cannot use the various methods that Collection provides. This article introduces a way to make a first-class collection behave more like a real Collection by using Iterable.

Let's look at a simple example.

Example

@Value
public class LottoNumber {
    int value;

    public static LottoNumber create(int value) {
        return new LottoNumber(value);
    }
}

public class LottoNumbers {

    private final List<LottoNumber> lottoNumbers;

    private LottoNumbers(List<LottoNumber> lottoNumbers) {
        this.lottoNumbers = lottoNumbers;
    }

    public static LottoNumbers create(LottoNumber... numbers) {
        return new LottoNumbers(List.of(numbers));
    }

    // Delegate isEmpty() to the underlying List.
    public boolean isEmpty() {
        return lottoNumbers.isEmpty();
    }
}

LottoNumbers is a first-class collection that holds LottoNumber as a list. To check if the list is empty, we have implemented isEmpty().

Let's write a simple test for isEmpty().

@Test
void isEmpty() {
    LottoNumber lottoNumber = LottoNumber.create(7);
    LottoNumbers lottoNumbers = LottoNumbers.create(lottoNumber);

    assertThat(lottoNumbers.isEmpty()).isFalse();
}

It's not bad, but AssertJ provides various methods to test collections.

  • has..
  • contains...
  • isEmpty()

You cannot use these convenient assertion methods with a first-class collection, because it is not a Collection and therefore does not expose them.

More precisely, they cannot be used because the elements cannot be traversed without iterator(). And to provide iterator(), you just need to implement Iterable.

The implementation is very simple.

public class LottoNumbers implements Iterable<LottoNumber> {

    //...

    @Override
    public Iterator<LottoNumber> iterator() {
        return lottoNumbers.iterator();
    }
}

Since the first-class collection already holds a Collection, you can simply return that collection's iterator, just as you delegated isEmpty().

@Test
void isEmpty_iterable() {
    LottoNumber lottoNumber = LottoNumber.create(7);
    LottoNumbers lottoNumbers = LottoNumbers.create(lottoNumber);

    assertThat(lottoNumbers).containsExactly(lottoNumber);
    assertThat(lottoNumbers).isNotEmpty();
    assertThat(lottoNumbers).hasSize(1);
}

Now you can use various test methods.

Not only in tests but also in functionality implementation, you can conveniently use it.

for (LottoNumber lottoNumber : lottoNumbers) {
    System.out.println("lottoNumber: " + lottoNumber);
}

This works because the enhanced for loop uses iterator() under the hood.
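Under the hood, the enhanced for loop above is equivalent to using the iterator explicitly — a sketch of what the compiler generates:

// Equivalent to the for-each loop above
Iterator<LottoNumber> iterator = lottoNumbers.iterator();
while (iterator.hasNext()) {
    LottoNumber lottoNumber = iterator.next();
    System.out.println("lottoNumber: " + lottoNumber);
}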

Conclusion

By implementing Iterable, you can use much richer functionality. The implementation is not difficult, and it comes close to genuinely extending the collection, so if you have a first-class collection, make active use of Iterable.