
Deep Dive into Java: The Path to Hello World - Part 3

· 12 min read
Haril Song
Owner, Software Engineer at 42dot

banner

In the previous chapter, we compiled Java and examined the bytecode structure. In this chapter, we will explore how the JVM executes the 'Hello World' code block.

Chapter 3: Running Java on the JVM​

  • Class Loader
  • Java Virtual Machine
  • Java Native Interface
  • JVM Memory Loading Process
  • Interaction of Hello World with Memory Areas

Class Loader​

To understand when, where, and how Java classes are loaded into memory and initialized, we first need to look at the Class Loader of the JVM.

The class loader dynamically loads compiled Java class files (.class) and places them in the Runtime Data Area, which is the memory area of the JVM.

The process of loading class files by the class loader consists of three stages:

  1. Loading: Bringing the class file into JVM memory.
  2. Linking: The process of verifying the class file for use.
  3. Initialization: Initializing the class file with appropriate values.

It is important to note that class files are not loaded into memory all at once but are dynamically loaded into memory when needed by the application.

A common misconception is the timing of when classes or static members within classes are loaded into memory. Many mistakenly believe that all classes and static members are loaded into memory as soon as the source is executed. However, static members are only loaded into memory when the class is dynamically loaded into memory upon calling a member within the class.
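To see this lazy behavior in action, here is a minimal sketch (the Config and LazyLoadingDemo classes below are hypothetical, not taken from the article): the static initializer of Config runs only at the moment the class is first used, not when the program starts.

class Config {
    static {
        // Runs when Config is initialized, not when the program starts.
        System.out.println("Config initialized");
    }

    static String name() {
        return "config";
    }
}

public class LazyLoadingDemo {
    public static void main(String[] args) {
        System.out.println("main started");
        System.out.println(Config.name()); // first use of Config triggers loading and initialization
    }
}

Running it prints main started before Config initialized, showing that Config stayed out of memory until it was actually needed.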

By using the verbose option, you can observe the process of loading into memory.

java -verbose:class VerboseLanguage

image

You can see that the VerboseLanguage class is loaded before 'Hello World' is printed.

info

Java 1.8 and Java 21 produce different log output formats, starting with the compilation results. As versions progress, optimizations are added and compiler behavior changes slightly, so it is important to check the version. This article uses Java 21 by default; other versions will be noted explicitly.

Runtime Data Area​

The Runtime Data Area is the space where data is stored during program execution. It is divided into Shared Data Areas and Per-thread Data Areas.

Shared Data Areas​

Within the JVM, there are areas where data is shared among all threads running in it, meaning multiple threads can access the same area simultaneously.

Heap​

Where instances of the VerboseLanguage class exist

The Heap area is where all Java objects or arrays are allocated when created. It is created when the JVM starts and is destroyed when the JVM exits.

According to the Java specification, this space should be automatically managed. This role is performed by a tool known as the Garbage Collector (GC).

There are no constraints on the size of the Heap specified in the JVM specification. Memory management is also left to the JVM implementation. However, if the Garbage Collector fails to secure enough space to create new objects, the JVM will throw an OutOfMemory error.
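As a rough illustration of that behavior (the class name and allocation sizes below are my own choices, not from the article), keeping every allocation reachable prevents the GC from freeing anything, so the Heap eventually runs out:

import java.util.ArrayList;
import java.util.List;

public class HeapExhaustionDemo {
    public static void main(String[] args) {
        List<long[]> retained = new ArrayList<>();
        try {
            while (true) {
                retained.add(new long[1_000_000]); // roughly 8 MB each, all still reachable
            }
        } catch (OutOfMemoryError e) {
            System.out.println("OutOfMemoryError after " + retained.size() + " allocations");
        }
    }
}

Running it with a small heap, for example java -Xmx64m HeapExhaustionDemo, makes the error appear almost immediately.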

Method Area​

The Method Area is a shared data area that stores class and interface definitions. Similar to the Heap, it is created when the JVM starts and is destroyed when the JVM exits.

A class's static variables are stored in this area, making them accessible from anywhere in the program from start to finish, and the Run-Time Constant Pool (covered below) is also part of it.

Specifically, the class loader loads the bytecode (.class) of a class and passes it to the JVM, which then generates the internal representation of the class used for creating objects and invoking methods. This internal representation collects information about fields, methods, and constructors of the class and interfaces.

In fact, according to the JVM specification, the Method Area is an area with no clear definition of 'how it should be'. It is a logical area and depending on the implementation, it can exist as part of the Heap. In a simple implementation, it can be part of the Heap without undergoing GC or compression.

Run-Time Constant Pool​

The Run-Time Constant Pool is part of the Method Area and contains symbolic references to class and interface names, field names, and method names. The JVM uses the Run-Time Constant Pool to find the actual memory addresses for references.

As seen when analyzing bytecode, the constant pool was found inside the class file. During runtime, the constant pool, which was part of the class file structure, is read and loaded into memory by the class loader.

String Constant Pool​

Where the "Hello World" string is stored

As mentioned earlier, the Run-Time Constant Pool is part of the Method Area. However, there is also a Constant Pool in the Heap, known as the String Constant Pool.

When creating a string using new String("Hello World"), the string is treated as an object and is managed in the Heap. Let's look at an example:

String s1 = "Hello World";
String s2 = new String("Hello World");

The string literal used inside the constructor is retrieved from the String Pool, but the new keyword guarantees the creation of a new and unique string.

 0: ldc           #7                  // String Hello World
 2: astore_1
 3: new           #9                  // class java/lang/String
 6: dup
 7: ldc           #7                  // String Hello World
 9: invokespecial #11                 // Method java/lang/String."<init>":(Ljava/lang/String;)V
12: astore_2
13: return

If we examine the bytecode, we can see that the string is 'created' using the invokespecial instruction.

The invokespecial instruction means that the object initialization method is directly called.

Why does the String Constant Pool exist in the Heap, unlike the Run-Time Constant Pool in the Method Area? 🤔

  • Strings can be very large objects, and it is hard to predict how many will be created, so a mechanism is needed to reclaim unused strings and use memory space efficiently. In other words, the String Constant Pool needs to live in the Heap.
    • Storing strings on the stack would make it hard to find space, and declaring a string could fail.
    • The stack size is typically around 320 KB to 1 MB on 32-bit systems and 1 MB to 2 MB on 64-bit systems.
  • Strings are managed as immutable. They cannot be modified and are always created anew. By reusing already created strings, memory space is saved (interning). However, unused (unreachable) strings may accumulate over the application's lifecycle. To efficiently utilize memory, there is a need to clean up unreferenced strings, which again leads to the need for GC.

In conclusion, the String Constant Pool needs to exist in the Heap to be under the influence of GC.

Comparing two strings character by character takes N operations for strings of length N. In contrast, with the pool, equality can be checked by comparing references, at a cost of O(1).

A string created with new, which lives outside the String Constant Pool, can be placed into (or matched against) the String Constant Pool by calling intern().

String greeting = new String("Hello World");
greeting = greeting.intern(); // replace the reference with the instance from the constant pool

// Now, reference comparison with the string literal in the SCP succeeds.
assertThat(greeting == "Hello World").isTrue();

While this was provided as a trick in the past to save memory, it is no longer necessary, so it is best to use strings as literals.

To summarize:

  1. Numbers have a maximum value, whereas strings, due to their nature, have an unclear maximum size.
  2. Strings can become very large and are likely to be used frequently after creation compared to other types.
  3. Naturally, high memory efficiency is required. To achieve this while increasing usability, they should be globally referable.
  4. If placed in the Per-Thread Data Area within the Stack, they cannot be reused by other threads, and if the size is large, finding allocation space becomes difficult.
  5. It is rational to have them in the Shared Data Area + in the Heap, but since they need to be treated as immutable at the JVM level, a dedicated Constant Pool is created within the Heap to manage them separately.

tip

While string literals inside constructors are retrieved from the String Constant Pool, the new keyword guarantees independent string creation. Consequently, there are two strings, one in the String Constant Pool and one in the Heap.
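A small sketch of that tip (the class below is mine, not from the article): the literal and the new-ed string are two distinct objects, and intern() hands back the pooled one.

public class StringPoolDemo {
    public static void main(String[] args) {
        String a = "Hello World";             // placed in (or reused from) the String Constant Pool
        String b = new String("Hello World"); // a separate object on the Heap

        System.out.println(a == b);           // false: two different references
        System.out.println(a.equals(b));      // true: same character sequence
        System.out.println(a == b.intern());  // true: intern() returns the pooled instance
    }
}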

Per-thread Data Areas​

In addition to the Shared Data Area, the JVM manages data for individual threads separately. The JVM actually supports the concurrent execution of quite a few threads.

PC Register​

Each JVM thread has a PC (program counter) register.

The PC register stores the current position of the execution of instructions to enable the CPU to continue executing instructions. It also holds the memory address of the next instruction to be executed, aiding in optimizing instruction execution.

The behavior of the PC depends on the nature of the method:

  • For non-native methods, the PC register stores the address of the currently executing instruction.
  • For native methods, the PC register holds an undefined value.

The lifecycle of the PC register is essentially the same as the thread's lifecycle.

JVM Stack​

Each JVM thread has its own independent stack. The JVM stack is a data structure that stores method invocation information: a new frame is created on the stack for each method invocation, holding the method's local variables and the return address. Primitive values are stored directly in the frame, while wrapper types hold a reference to an instance created in the Heap. This is why int and double have a slight performance advantage over Integer and Double.

Thanks to the JVM stack, the JVM can trace program execution and record stack traces as needed.

  • This is known as a stack trace. printStackTrace is an example of this.
  • In scenarios like WebFlux's event loop, where a single operation traverses multiple threads, a stack trace can be difficult to interpret.

The memory size and allocation method of the stack can be determined by the JVM implementation. Typically, around 1MB of space is allocated when a thread starts.

If a computation requires a larger stack than the JVM permits, a StackOverflowError is thrown. However, if a JVM implementation allows dynamic expansion of the JVM stack and a memory error occurs during expansion, the JVM may throw an OutOfMemory error instead.
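Deep recursion is the easiest way to watch the frame-per-call behavior described above fill up a thread's stack. A minimal sketch (the class and counter are hypothetical, and the exact depth depends on your stack settings):

public class StackOverflowDemo {
    static long depth = 0;

    static void recurse() {
        depth++;    // count how many frames were pushed before the stack ran out
        recurse();  // each call adds a new frame to this thread's JVM stack
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("Stack overflowed after roughly " + depth + " frames");
        }
    }
}

Shrinking the stack, for example with java -Xss512k StackOverflowDemo, makes the overflow happen after noticeably fewer frames.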

Native Method Stack​

Native methods are methods written in languages other than Java. These methods cannot be compiled into bytecode (as they are not Java, javac cannot be used), so they require a separate memory area.

  • The Native Method Stack is very similar to the JVM Stack but is exclusively for native methods.
  • The purpose of the Native Method Stack is to track the execution of native methods.

JVM implementations can determine how to manipulate the size and memory blocks of the Native Method Stack.

In the case of memory allocation errors originating from the Native Method Stack, a stack overflow error occurs. However, if an attempt to increase the size of the Native Method Stack fails, an OutOfMemory error occurs.

Finally, a JVM implementation may choose not to support native method calls at all; such an implementation does not need a Native Method Stack.

The usage of the Java Native Interface will be covered in a separate article.

Execution Engine​

Once the loading and storage stages are complete, the JVM executes the class file. The Execution Engine consists of three components:

  • Interpreter
  • JIT Compiler
  • Garbage Collector

Interpreter​

When a program starts, the Interpreter reads the bytecode one instruction at a time and converts it into machine code that the machine can understand.

Interpreters are generally slower. Why is that?

Compiled languages can define resources and types needed for a program to run during the compilation process before execution. However, in interpreted languages, necessary resources and variable types cannot be known until execution, making optimization difficult.

JIT Compiler​

The Just In Time Compiler was introduced in Java 1.1 to overcome the shortcomings of the Interpreter.

The JIT compiler compiles bytecode into machine code at runtime, improving the execution speed of Java applications. It detects frequently executed parts (hot code) and compiles them.

You can use the following keywords to check JIT-related behaviors if needed:

  • -XX:+PrintCompilation: Outputs JIT-related logs
  • -Djava.compiler=NONE: Deactivates JIT. You can observe a performance drop.
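As a quick way to watch the JIT at work, here is a hypothetical hot loop (not from the article); run it with -XX:+PrintCompilation and look for JitDemo::sum showing up in the compilation log.

public class JitDemo {
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // Call the method many times so it becomes "hot" and a candidate for JIT compilation.
        for (int i = 0; i < 100_000; i++) {
            result += sum(1_000);
        }
        System.out.println(result);
    }
}

Comparing its wall-clock time against a run with the JIT disabled should make the performance gap visible.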

Garbage Collector​

The Garbage Collector is a critical component that deserves a separate document, and there is already a document on it, so it will be skipped this time.

  • Optimizing the GC is not common.
    • However, there are cases where a delay of over 500ms due to GC operations occurs, and in scenarios handling high traffic or tight TTLs in caches, a 500ms delay can be a significant issue.

Conclusion​

Java is undoubtedly a complex language.

In interviews, you often get asked questions like this:

How well do you think you know Java?

Now, you should be able to answer more confidently.

Um... 🤔 Just about Hello World.

Reference​

Deep Dive into Java: The Path to Hello World - Part 2

· 9 min read
Haril Song
Owner, Software Engineer at 42dot

banner

Continuing from the previous post, let's explore how the code evolves to print "Hello World."

Chapter 2. Compilation and Disassembly​

Programming languages have levels.

The closer a programming language is to human language, the higher-level language it is, and the closer it is to the language a computer can understand (machine language), the lower-level language it is. Writing programs in a high-level language makes it easier for humans to understand and increases productivity, but it also creates a gap with machine language, requiring a process to bridge this gap.

The process of a high-level language descending to a lower level is called compilation.

Since Java is not a low-level language, there is a compilation process. Let's take a look at how this compilation process works in Java.

Compilation​

As mentioned earlier, Java code cannot be directly executed by the computer. To execute Java code, it needs to be transformed into a form that the computer can read and interpret. This transformation involves the following major steps:

The resulting .class file from compilation is in bytecode. However, it is still not machine code that the computer can execute. The Java Virtual Machine (JVM) reads this bytecode and further processes it into machine code. We will cover how the JVM handles this in the final chapter.

First, let's compile the .java file to create a .class file. You can compile it using the javac command.

// VerboseLanguage.java
public class VerboseLanguage {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}

javac VerboseLanguage.java

You can see that the class file has been created. You can run the class file using the java command, and this is the basic flow of running a Java program.

java VerboseLanguage
// Hello World

Are you curious about the contents of the class file? Wondering how the computer reads and executes the language? What secrets lie within this file? It feels like opening Pandora's box.

Expecting something, you open it up, and...

No way!

Only a brief binary content is displayed.

Wait, wasn't the result of compilation supposed to be bytecode...?

Yes, it is bytecode. At the same time, it is also binary code. At this point, let's briefly touch on the differences between bytecode and binary code before moving on.

Binary Code : Code composed of 0s and 1s. While machine language is made up of binary code, not all binary code is machine language.

Bytecode : Code composed of 0s and 1s. However, bytecode is not intended for the machine but for the VM. It is converted into machine code by the VM through processes like the JIT compiler.

Still, since this article claims to be a deep dive, we will reluctantly try to read it anyway.

Fortunately, our Pandora's box contains only 0s and 1s, with no other hardships or challenges.

While we succeeded in reading it, it is quite difficult to understand the content with just 0s and 1s 🤔

Now, let's decipher this code.

Disassembly​

During the compilation process, the code is transformed into bytecode composed of 0s and 1s. As seen earlier, interpreting bytecode directly is quite challenging. Fortunately, the JDK includes tools that help developers read compiled bytecode, making it useful for debugging purposes.

The process of converting bytecode into a more readable form for developers is called disassembly. Sometimes this process can be confused with decompilation, but decompilation results in a higher-level programming language, not assembly language. Also, since the javap documentation clearly uses the term disassemble, we will follow suit.

info

Decompilation refers to representing binary code in a relatively higher-level language, close to the code as it looked before compilation. Disassembly, on the other hand, represents binary code in a minimal human-readable form (assembly language).

Virtual Machine Assembly Language​

Let's use javap to disassemble the bytecode. The output is much more readable than just 0s and 1s.

javap -c VerboseLanguage.class

Compiled from "VerboseLanguage.java"
public class VerboseLanguage {
  public VerboseLanguage();
    Code:
       0: aload_0
       1: invokespecial #1                  // Method java/lang/Object."<init>":()V
       4: return

  public static void main(java.lang.String[]);
    Code:
       0: getstatic     #7                  // Field java/lang/System.out:Ljava/io/PrintStream;
       3: ldc           #13                 // String Hello World
       5: invokevirtual #15                 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
       8: return
}

What can we learn from this?

Firstly, this language is called virtual machine assembly language.

The Java Virtual Machine code is written in the informal "virtual machine assembly language" output by Oracle's javap utility, distributed with the JDK release. - JVM Spec

The format is as follows:

<index> <opcode> [ <operand1> [ <operand2>... ]] [<comment>]

index : An index into the method's bytecode array; think of it as an offset from the start of the method's code.

opcode : A mnemonic symbol for the instruction's opcode. Just as we memorize the rainbow colors with 'ROYGBIV', where each letter stands for a color, a mnemonic is a short symbol defined to make each opcode in the instruction set easier to tell apart.

operandN : The operand(s) of the instruction. An operand is the instruction's address field; here it points to where the data to be processed is stored in the constant pool.

Let's take a closer look at the main method part of the disassembled result.

Code:
   0: getstatic     #7                  // Field java/lang/System.out:Ljava/io/PrintStream;
   3: ldc           #13                 // String Hello World
   5: invokevirtual #15                 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
   8: return

  • invokevirtual: Call an instance method
  • getstatic: Get a static field from a class
  • ldc: Push a constant from the run-time constant pool onto the operand stack

The 3: ldc #13 on the third line means to push the constant stored at index #13 of the constant pool onto the operand stack, and the item being pushed is kindly indicated in the comment.

Hello World

Note that bytecode instructions like getstatic and invokevirtual are each represented by a single-byte opcode (getstatic=0xb2, invokevirtual=0xb6, and so on). This means Java bytecode can have at most 256 different opcodes.

JVM Instruction Set showing the bytecode for invokevirtual

If we look at the bytecode of the main method in hex, it would be as follows:

b2 00 07 12 0d b6

It might still be a bit hard to notice the pattern. As a hint, remember that earlier we mentioned the number before the opcode is the index in the JVM array. Let's slightly change the representation.

arr = [b2, 00, 07, 12, 0d, b6]
  • arr[0] = b2 = getstatic
  • arr[3] = 12 = ldc
  • arr[5] = b6 = invokevirtual

It becomes somewhat clearer what the index meant. The reason for skipping some indices is quite simple: getstatic requires a 2-byte operand, and ldc requires a 1-byte operand. Therefore, the ldc instruction, which is the next instruction after getstatic, is recorded at index 3, skipping 1 and 2. Similarly, skipping 4, the invokevirtual instruction is recorded at index 5.

Lastly, notice the comment (Ljava/lang/String;)V on the 4th line. Through this comment, we can see that in Java bytecode, classes are represented as L<classname>;, and void is represented as V. Other types also have their own unique representations, summarized as follows:

Java Bytecode    Type         Description
B                byte         signed byte
C                char         Unicode character
D                double       double-precision floating-point value
F                float        single-precision floating-point value
I                int          integer
J                long         long integer
L<classname>;    reference    an instance of class <classname>
S                short        signed short
Z                boolean      true or false
[                reference    one array dimension
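To make the descriptor notation concrete, here is a hypothetical method (not from the article) and how its signature is encoded:

public class DescriptorExample {
    // The descriptor of this method is ([ILjava/lang/String;)J
    //   [I                 -> int[] parameter
    //   Ljava/lang/String; -> String parameter
    //   J                  -> long return value
    static long compute(int[] data, String name) {
        return data.length + name.length();
    }

    public static void main(String[] args) {
        System.out.println(compute(new int[]{1, 2, 3}, "Hello")); // 8
    }
}

Running javap -s DescriptorExample should print this exact descriptor next to compute.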

Using the -verbose option, you can see a more detailed disassembly result, including the constant pool. It would be interesting to examine the operands and constant pool together.

Compiled from "VerboseLanguage.java"
public class VerboseLanguage
  minor version: 0
  major version: 65
  flags: (0x0021) ACC_PUBLIC, ACC_SUPER
  this_class: #21                         // VerboseLanguage
  super_class: #2                         // java/lang/Object
  interfaces: 0, fields: 0, methods: 2, attributes: 1
Constant pool:
   #1 = Methodref          #2.#3          // java/lang/Object."<init>":()V
   #2 = Class              #4             // java/lang/Object
   #3 = NameAndType        #5:#6          // "<init>":()V
   #4 = Utf8               java/lang/Object
   #5 = Utf8               <init>
   #6 = Utf8               ()V
   #7 = Fieldref           #8.#9          // java/lang/System.out:Ljava/io/PrintStream;
   #8 = Class              #10            // java/lang/System
   #9 = NameAndType        #11:#12        // out:Ljava/io/PrintStream;
  #10 = Utf8               java/lang/System
  #11 = Utf8               out
  #12 = Utf8               Ljava/io/PrintStream;
  #13 = String             #14            // Hello World
  #14 = Utf8               Hello World
  #15 = Methodref          #16.#17        // java/io/PrintStream.println:(Ljava/lang/String;)V
  #16 = Class              #18            // java/io/PrintStream
  #17 = NameAndType        #19:#20        // println:(Ljava/lang/String;)V
  #18 = Utf8               java/io/PrintStream
  #19 = Utf8               println
  #20 = Utf8               (Ljava/lang/String;)V
  #21 = Class              #22            // VerboseLanguage
  #22 = Utf8               VerboseLanguage
  #23 = Utf8               Code
  #24 = Utf8               LineNumberTable
  #25 = Utf8               main
  #26 = Utf8               ([Ljava/lang/String;)V
  #27 = Utf8               SourceFile
  #28 = Utf8               VerboseLanguage.java
{
  public VerboseLanguage();
    descriptor: ()V
    flags: (0x0001) ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1                  // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 1: 0

  public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: (0x0009) ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=1, args_size=1
         0: getstatic     #7                  // Field java/lang/System.out:Ljava/io/PrintStream;
         3: ldc           #13                 // String Hello World
         5: invokevirtual #15                 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
         8: return
      LineNumberTable:
        line 3: 0
        line 4: 8
}
SourceFile: "VerboseLanguage.java"

Conclusion​

In the previous chapter, we explored why a verbose process is required to print Hello World. In this chapter, we looked at the compilation and disassembly processes before printing Hello World. Next, we will finally examine the execution flow of the Hello World printing method with the JVM.

Reference​

Deep Dive into Java: The Path to Hello World - Part 1

· 10 min read
Haril Song
Owner, Software Engineer at 42dot

banner

In the world of programming, it always starts with printing the sentence Hello World. It's like an unwritten rule.

# hello.py
print("Hello World")

python hello.py
// Hello World

Python? Excellent.

// hello.js
console.log("Hello World");

node hello.js
// Hello World

JavaScript? Not bad.

public class VerboseLanguage {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}

javac VerboseLanguage.java
java VerboseLanguage
// Hello World

However, Java feels like it's from a different world. We haven't even mentioned yet that the class name must match the file name.

What is public, what is class, what is static, and going through void, main, String[], and System.out.println, we finally reach the string "Hello World". Now, let's go learn another language.1

Even for simply printing "Hello World", Java demands quite a bit of background knowledge. Why does Java require such verbose processes?

This series is divided into 3 chapters. The goal is to delve into what happens behind the scenes to print the two words "Hello World" in detail. The specific contents of each chapter are as follows:

  • In the first chapter, we introduce the reasons behind the Hello World as the starting point.
  • In the second chapter, we examine the compiled class files and how the computer interprets and executes Java code.
  • Finally, we explore how the JVM loads and executes public static void main and the operating principles behind it.

By combining the contents of the 3 chapters, we can finally grasp the concept of "Hello World". It's quite a long journey, so let's take a deep breath and embark on it.

Chapter 1. Why?​

Before printing Hello World in Java, there are several "why moments" that need to be considered.

Why must the class name match the file name?​

More precisely, it is the name of the public class that must match the file name. Why is that?

Java programs are not directly understandable by computers. A virtual machine called JVM assists the computer in executing the program. To make a Java program executable by the computer, it needs to go through several steps to convert it into machine code that the JVM can interpret. The first step is using a compiler to convert the program into bytecode that the JVM can interpret. The converted bytecode is then passed through an interpreter inside the JVM to be translated into machine code and executed.

Let's briefly look at the compilation process.

public class Outer {
    public static void main(String[] args) {
        System.out.println("This is Outer class");
    }

    private class Inner {
    }
}

javac Outer.java

Permissions Size User   Date Modified Name
.rw-r--r-- 302 haril 30 Nov 16:09 Outer$Inner.class
.rw-r--r-- 503 haril 30 Nov 16:09 Outer.class
.rw-r--r-- 159 haril 30 Nov 16:09 Outer.java

Java generates a .class file for every class at compile time.

Now, the JVM needs to find the main method for program execution. How does it know where the main method is?

Why does it have to find main() specifically? Just wait a little longer.

If the Java file name does not match the public class name, the Java interpreter has to read all class files to find the main method. If the file name matches the name of the public class, the Java interpreter can better identify the file it needs to interpret.

Imagine a file named Java1000 with 1000 classes inside. To identify where main() is among the 1000 classes, the interpreter would have to examine all the class files.

However, if the file name matches the name of the public class, it can access main() more quickly (since main exists in the public class), and it can easily access other classes since all the logic starts from main().

Why must it be public?​

The JVM needs to find the main method inside the class. If the JVM, which accesses the class from outside, needs to find a method inside the class, that method must be public. In fact, changing the access modifier to private will result in an error message instructing you to declare main as public.

Error: Main method not found in class VerboseLanguage, please define the main method as:
public static void main(String[] args)

Why must it be static?​

The JVM has found the public main() method. However, to invoke this method, an object must first be created. Does the JVM need this object? No, it just needs to be able to call main. By declaring it as static, the JVM does not need to create an unnecessary object, saving memory.

Why must it be void?​

The end of the main method signifies the end of Java's execution. The JVM cannot do anything with the return value of main, so the presence of a return value is meaningless. Therefore, it is natural to declare it as void.

Why must it be named main?​

The method name main is designed for the JVM to find the entry point for running the application.

Although the term "design" sounds grand, in reality, it is hard-coded to find the method named main. If the name to be found was not main but haril, it would have searched for a method named haril. Of course, the Java creators likely had reasons for choosing main, but that's about it.

mainClassName = GetMainClassName(env, jarfile);
mainClass = LoadClass(env, classname);

// Find the main method
mainID = (*env)->GetStaticMethodID(env, mainClass, "main", "([Ljava/lang/String;)V");

jobject obj = (*env)->ToReflectedMethod(env, mainClass, mainID, JNI_TRUE);

Why args?​

Until now, we omitted mentioning String[] args in main(). Why must this argument be specified, and why does an error occur if it is omitted?

As public static void main(String[] args) is the entry point of a Java application, this argument must come from outside the Java application.

All types of standard input are entered as strings.

This is why args is declared as a string array. If you think about it, it makes sense. Before the Java application even runs, can you create custom object types directly? 🤔

So why is args necessary?

By passing arguments in a simple way from outside to inside, you can change the behavior of a Java application, a mechanism widely used since the early days of C programming to control program behavior. Especially for simple applications, this method is very effective. Java simply adopted this widely used method.

The reason String[] args cannot be omitted is that Java only allows one public static void main(String[] args) as the entry point. The Java creators thought it would be less confusing to declare and not use args than to allow it to be omitted.
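For example (a throwaway class of my own, not from the article), args is simply the command-line input delivered as strings:

public class Greeter {
    public static void main(String[] args) {
        // Everything passed on the command line arrives as a String.
        String name = args.length > 0 ? args[0] : "World";
        System.out.println("Hello " + name);
    }
}

java Greeter Haril prints Hello Haril, while plain java Greeter falls back to Hello World.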

System.out.println​

Finally, we can start talking about the method related to output.

Just to mention it again, in Python it was print("Hello World"). 2

A Java program runs not directly on the operating system but on a virtual machine called JVM. This allows Java programs to be executed anywhere regardless of the operating system, but it also makes it difficult to use specific functions provided by the operating system. This is why coding at the system level, such as creating a CLI in Java or collecting OS metrics, is challenging.

However, there is a way to leverage limited OS functionality (JNI), and System provides this functionality. Some of the key functions include:

  • Standard input
  • Standard output
  • Setting environment variables
  • Terminating the running application and returning a status code

To print Hello World, we are using the standard output function of System.

In fact, as you follow the flow of System.out.println, you will encounter a writeBytes method with the native keyword attached, which delegates the operation to C code and transfers it to standard output.

// FileOutputStream.java
private native void writeBytes(byte b[], int off, int len, boolean append)
    throws IOException;

The invocation of a method with the native keyword works through the Java Native Interface (JNI). This will be covered in a later chapter.

String​

Strings in Java are somewhat special. No, they seem quite special. They are allocated separate memory space, indicating they are definitely treated as special. Why is that?

It is important to note the following properties of strings:

  • They can become very large.
  • They are relatively frequently reused.

Therefore, strings are designed with a focus on how to reuse them once created. To fully understand how large string data is managed in memory, you need an understanding of the topics to be covered later. For now, let's briefly touch on the principles of memory space saving.

First, let's look at how strings are declared in Java.

String greeting = "Hello World";

Internally, it works as follows:

Strings are created in the String Constant Pool and have immutable properties. Once a string is created, it does not change, and if the same string is found in the Constant Pool when creating a new string, it is reused.

We will cover JVM Stack, Frame, Heap in the next chapter.

Another way to declare strings is by instantiation.

String greeting = new String("Hello World");

This method is rarely used because there is a difference in internal behavior, as shown below.

When a string is used directly without the new keyword, it is created in the String Constant Pool and can be reused. However, if instantiated with the new keyword, it is not created in the Constant Pool. This means the same string can be created multiple times, potentially wasting memory space.

Summary​

In this chapter, we answered the following questions:

  • Why must the .java file name match the class name?
  • Why must it be public static void main(String[] args)?
  • The flow of the output operation
  • The characteristics of strings and the basic principles of their creation and use

In the next chapter, we will compile Java code ourselves and explore how bytecode is generated, its relationship with memory areas, and more.

Reference​

Footnotes​

  1. Life Coding Python ↩

  2. Life Coding Python ↩

How Many Concurrent Requests Can a Single Server Application Handle?

· 15 min read
Haril Song
Owner, Software Engineer at 42dot

banner

Overview​

How many concurrent users can a Spring MVC web application accommodate? 🤔

To estimate the approximate number of users a server needs to handle to provide stable service while accommodating many users, this article explores changes in network traffic focusing on Spring MVC's Tomcat configuration.

For the sake of convenience, the following text will be written in a conversational tone 🙏

info

If you find any technical errors, typos, or incorrect information, please let us know in the comments. Your feedback is greatly appreciated 🙇‍♂️

[System Design Interview] Implementing a URL Shortener from Scratch

· 5 min read
Haril Song
Owner, Software Engineer at 42dot

banner

info

You can check the code on GitHub.

Overview​

Shortening URLs started to prevent URLs from being fragmented in email or SMS transmissions. However, nowadays, it is more actively used for sharing specific links on social media platforms like Twitter or Instagram. It improves readability by not looking verbose and can also provide additional features such as collecting user statistics before redirecting to the URL.

In this article, we will implement a URL shortener from scratch and explore how it works.

What is a URL Shortener?​

Let's first take a look at the result.

You can run the URL shortener we will implement in this article directly with the following command:

docker run -d -p 8080:8080 songkg7/url-shortener

Here is how to use it. Simply input the long URL you want to shorten as the value of longUrl.

curl -X POST --location "http://localhost:8080/api/v1/shorten" \
-H "Content-Type: application/json" \
-d "{
\"longUrl\": \"https://www.google.com/search?q=url+shortener&sourceid=chrome&ie=UTF-8\"
}"
# You will receive a random value like tN47tML.

Now, if you access http://localhost:8080/tN47tML in your web browser,

image

You will see that it correctly redirects to the original URL.

Before Shortening

After Shortening

Now, let's see how we can shorten URLs.

Rough Design​

Shortening URLs​

  1. Generate an ID before storing the longUrl.
  2. Encode the ID to base62 to create the shortUrl.
  3. Store the ID, shortUrl, and longUrl in the database.

Memory is finite and relatively expensive. RDB can be quickly queried through indexes and is relatively cheaper compared to memory, so we will use RDB to manage URLs.

To manage URLs, we first need to secure an ID generation strategy. There are various methods for ID generation, but it may be too lengthy to cover here, so we will skip it. I will simply use the current timestamp for ID generation.

Base62 Conversion​

By using ULID, you can generate a unique ID that includes a timestamp.

val id: Long = Ulid.fast().time // e.g., 3145144998701, used as a primary key

Converting this number to base62, we get the following string.

tN47tML

This string is stored in the database as the shortUrl.

id               short      long
3145144998701    tN47tML    https://www.google.com/search?q=url+shortener&sourceid=chrome&ie=UTF-8

The retrieval process will proceed as follows:

  1. A GET request is made to localhost:8080/tN47tML.
  2. Decode tN47tML from base62.
  3. Obtain the primary key 3145144998701 and query the database.
  4. Redirect the request to the longUrl.

Now that we have briefly looked at it, let's implement it and delve into more details.

Implementation​

Just like the previous article on Consistent Hashing, we will implement it ourselves. Fortunately, implementing a URL shortener is not that difficult.

Model​

First, we implement the model to receive requests from users. We simplified the structure to only receive the URL to be shortened.

data class ShortenRequest(
    val longUrl: String
)

We implement a Controller to handle POST requests.

@PostMapping("/api/v1/shorten")
fun shorten(@RequestBody request: ShortenRequest): ResponseEntity<ShortenResponse> {
    val url = urlShortenService.shorten(request.longUrl)
    return ResponseEntity.ok(ShortenResponse(url))
}

Base62 Conversion​

Finally, the most crucial part. After generating an ID, we encode it to base62 to shorten it. This shortened string becomes the shortUrl. Conversely, we decode the shortUrl to find the ID and use it to query the database to retrieve the longUrl.

private const val BASE62 = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

class Base62Conversion : Conversion {
    override fun encode(input: Long): String {
        val sb = StringBuilder()
        var num = BigInteger.valueOf(input)
        while (num > BigInteger.ZERO) {
            val remainder = num % BigInteger.valueOf(62)
            sb.append(BASE62[remainder.toInt()])
            num /= BigInteger.valueOf(62)
        }
        return sb.reverse().toString()
    }

    override fun decode(input: String): Long {
        var num = BigInteger.ZERO
        for (c in input) {
            num *= BigInteger.valueOf(62)
            num += BigInteger.valueOf(BASE62.indexOf(c).toLong())
        }
        return num.toLong()
    }
}

The length of the shortened URL depends directly on the size of the ID number: the smaller the generated ID, the shorter the URL can be made.

If you want the length of the shortened URL to not exceed 8 characters, you should ensure that the size of the ID does not exceed 62^8. Therefore, how you generate the ID is also crucial. As mentioned earlier, to simplify the content in this article, we handled this part using a timestamp value.

Test​

Let's send a POST request with curl to shorten a random URL.

curl -X POST --location "http://localhost:8080/api/v1/shorten" \
-H "Content-Type: application/json" \
-d "{
\"longUrl\": \"https://www.google.com/search?q=url+shortener&sourceid=chrome&ie=UTF-8\"
}"

You can confirm that it correctly redirects by accessing http://localhost:8080/{shortUrl}.

Conclusion​

Here are some areas for improvement:

  • By controlling the ID generation strategy more precisely, you can further shorten the shortUrl.
    • If there is heavy traffic, you must consider issues related to concurrency.
    • Snowflake
  • Using DNS for the host part can further shorten the URL.
  • Applying cache to the Persistence Layer can achieve faster responses.

Exploring Docker Compose Support in Spring Boot 3.1

· 3 min read
Haril Song
Owner, Software Engineer at 42dot

Let's take a brief look at the Docker Compose Support introduced in Spring Boot 3.1.

info

Please provide feedback if there are any inaccuracies!

Overview​

When developing with the Spring framework, it seems that using Docker for setting up DB environments is more common than installing them directly on the local machine. Typically, the workflow involves:

  1. Using docker run before bootRun to prepare the DB in a running state
  2. Performing development and validation tasks using bootRun
  3. Stopping bootRun and using docker stop to stop the container DB

The process of running and stopping Docker before and after development tasks used to be quite cumbersome. However, starting from Spring Boot 3.1, you can use a docker-compose.yaml file to synchronize the lifecycle of Spring and Docker containers.

Contents​

First, add the dependency:

dependencies {
    // ...
    developmentOnly 'org.springframework.boot:spring-boot-docker-compose'
    // ...
}

Next, create a compose file as follows:

services:
  elasticsearch:
    image: 'docker.elastic.co/elasticsearch/elasticsearch:7.17.10'
    environment:
      - 'ELASTIC_PASSWORD=secret'
      - 'discovery.type=single-node'
      - 'xpack.security.enabled=false'
    ports:
      - '9200' # random port mapping
      - '9300'

image

During bootRun, the compose file is automatically recognized, and the docker compose up operation is executed first.

However, if you are mapping the container port to a random host port, you may need to update the application.yml every time docker compose down is triggered. Fortunately, starting from Spring Boot 3.1, once you write the compose file, Spring Boot takes care of the rest. It's incredibly convenient!

If you need to change the path to the compose file, simply modify the file property:

spring:
  docker:
    compose:
      file: infrastructure/compose.yaml

There are also properties related to lifecycle management, allowing you to appropriately adjust the container lifecycle. If you don't want the container to stop every time you shut down Boot, you can use the start_only option:

spring:
  docker:
    compose:
      lifecycle-management: start_and_stop # none, start_only

There are various other options available, so exploring them should help you choose what you need.

image

Conclusion​

No matter how much test code you write, verifying the interaction with the actual DB was essential during the development process. Setting up that environment felt like a tedious chore. While container technology made configuration much simpler, remembering to run docker commands before and after starting Spring Boot was definitely a hassle.

Now, starting from Spring Boot 3.1, developers can avoid situations where they forget to start or stop containers, preventing memory consumption. It allows developers to focus more on development. The seamless integration of Docker with Spring is both fascinating and convenient. Give it a try!

Reference​

A Yearlong Blogging Journey

· 5 min read
Haril Song
Owner, Software Engineer at 42dot

Overview​

This post holds a significant meaning for me. It is intended to be the final entry of the blog journey I have been on since the beginning of the year. As a review, I will summarize my blogging experience up to this point.

Criteria for Choosing a Blogging Platform​

I was looking for a platform that met the following criteria to facilitate convenient posting:

  • Easy use of Markdown
  • Convenient image uploading
  • Ongoing maintenance (especially for open-source platforms)

While platforms like Tistory lacked robust Markdown support and had cumbersome image uploading processes, Velog, although popular among developers, seemed neglected recently, so I decided against it. In the end, I found GitHub Page + Jekyll to be the most rational choice as it fully supports Markdown, makes image uploading easy, and allows for long-term maintenance. Although managing Jekyll requires some knowledge of Ruby, I had a basic understanding and committed to learning as needed, and have been operating with this setup to date.

SEO Struggles​

Despite my efforts to get all pages indexed, things haven't gone as smoothly as I hoped. When will the crawling finally start?

However, this journey has led me to study the field more and realize the importance of patience. Even though it's taking time for the pages to get indexed, I believe that with increased traffic, indexing will happen naturally. Gradually, I have noticed an increase in the number of indexed pages. While I am publishing content faster than the indexing speed, I have to accept that I cannot control the time it takes for the pages to get indexed and appear in search results due to Google's crawling policies.

image

Evolution of Content​

Initially, when I started my blog on Tistory, I focused on algorithm problem-solving as I was diving into algorithm studies.

image

As I delved into practical work, I realized that algorithm solutions are better explained on algorithmic problem-solving platforms, and simply listing knowledge felt redundant compared to consulting official documentation. I did not want my blog to become just another mundane one.

My desire to create a blog that is distinctive and personal, setting it apart from others has continued, driving me to enhance the quality and uniqueness of my content. Some posts that I find personally satisfying include my journey of creating open-source projects and implementing concepts rather than just reading about them.

image

info

In 2024, it evolved further into a blog using Docusaurus 😄.

Open-Sourcing Obsidian Plugin​

I have developed a plugin called O2 specifically for blog posting. It facilitates the continuity between Obsidian and Jekyll tasks. Developing this plugin required me to learn TypeScript as well 😅.

Fortunately, around 400 users have joined me in using this plugin as of July 2023. Although most probably uninstalled it within 10 minutes... DAU 1...

image

Initially, there were many bugs, but now, after addressing numerous minor issues, the plugin has entered a stable phase. If you are an Obsidian user who uses Jekyll as a blogging platform, I would appreciate it if you could show some interest in this plugin!

image

I have also obtained the plugin dev role in the Obsidian Discord Community and am actively participating. Feel free to ask any Obsidian-related questions!

Growth Metrics​

To maintain consistent motivation and direction when starting my blog, I believed that using Google Analytics was essential. Seeing the graph gradually trend upwards gave me a sense of accomplishment. Some argue that having few initial blog visitors can have a negative impact, but personally, it motivated me. It sparked a desire to attract more people to my blog.

Below is the growth rate of my blog over the past year.

image

Despite the dynamic appearance of the graph, the numbers are not as high compared to many influential bloggers. That's the paradox of statistics... Nevertheless, the overall upward trend is encouraging.

Participating in the writing program has made me pay more attention to the quality of my posts, and as a result, external links have started to generate more traffic. Especially, being curated frequently on the Serfit community site has significantly boosted traffic. I am grateful to the curator who selected my mediocre posts. I will strive to write more diligently and refine my work in the future.

Future Goals​

When summarizing my goals for the second half of this year and the next year, they can be outlined as follows:

  1. Strive to publish high-quality, distinctive, and practical posts beyond simple knowledge sharing.
  2. Reach over 30,000 new users.
  3. Publish at least two posts per month.
  4. Start posting in English for language learning purposes.

I am particularly pondering the best approach and platform for English posts. In the future, I would like to post in languages other than English, so considering multilingual support will be crucial. As I progress through the writing program (please select me for the 9th cohort), I will further refine these plans.

Thank you for accompanying me on my journey so far. I look forward to your continued support 🙏.

Saving EC2 Costs with Jenkins

· 3 min read
Haril Song
Owner, Software Engineer at 42dot

I would like to share a very simple method for optimizing resource costs when dealing with batch applications that need to run at specific times and under specific conditions.

Problem​

  1. Batches are only executed at specific times. For tasks like calculations, which need to run at regular intervals like daily, monthly, or yearly.
  2. Speed of response is not crucial; ensuring that the batch runs is the priority.
  3. Maintaining an EC2 instance for 24 hours just for resources needed at specific times is inefficient.
  4. Is it possible to have the EC2 instance ready only when the cloud server resources are needed?

Of course, it is possible. While there are various automation solutions like AWS ECS and AWS EKS, let's assume managing batches and EC2 servers directly with Jenkins and set up the environment.

Architecture​

With this infrastructure design, you can ensure that costs are incurred only when resources are needed for batch execution.

Jenkins​

Jenkins Node Management Policy​

image

Activates the node only when there are requests waiting in the queue, minimizing unnecessary error logs. Additionally, it transitions to idle state if there is no activity for 1 minute.

AWS CLI​

Installing AWS CLI​

With AWS CLI, you can manage AWS resources in a terminal environment. Use the following command to retrieve a list of currently running instances:

aws ec2 describe-instances

Once you have checked the information for the desired resource, you can specify the target and execute a specific action. The commands are as follows:

EC2 start​

aws ec2 start-instances --instance-ids {instanceId}

EC2 stop​

aws ec2 stop-instances --instance-ids {instanceId}

Scheduling​

By writing a cron expression for the batch to run once a month, you can set it up easily.

image

H 9 1 * *

Now, the EC2 instance will remain in a stopped state most of the time and will be activated by Jenkins once a month to process the batch.

Conclusion​

Keeping an EC2 instance in a running state when not in use is inefficient in terms of cost. This article has shown that with Jenkins and simple commands, you can use EC2 only when needed.

While higher-level cloud orchestration tools like EKS can elegantly solve such issues, sometimes a simple approach can be the most efficient. I hope you choose the method that suits your situation best as I conclude this article.

Changes in Spring Batch 5.0

· 3 min read
Haril Song
Owner, Software Engineer at 42dot

Here's a summary of the changes in Spring Batch 5.0.

What's new?​

@AutoConfiguration(after = { HibernateJpaAutoConfiguration.class, TransactionAutoConfiguration.class })
@ConditionalOnClass({ JobLauncher.class, DataSource.class, DatabasePopulator.class })
@ConditionalOnBean({ DataSource.class, PlatformTransactionManager.class })
@ConditionalOnMissingBean(value = DefaultBatchConfiguration.class, annotation = EnableBatchProcessing.class) // Added in 5.0
@EnableConfigurationProperties(BatchProperties.class)
@Import(DatabaseInitializationDependencyConfigurer.class)
public class BatchAutoConfiguration {
    // ...
}

In the past, you could activate Spring Batch's Spring Boot auto-configuration using the @EnableBatchProcessing annotation. However, now you need to remove it to use Spring Boot's auto-configuration. Specifying @EnableBatchProcessing or inheriting from DefaultBatchConfiguration now pushes back Spring Boot's auto-configuration and is used for customizing application settings.

Therefore, using @EnableBatchProcessing or DefaultBatchConfiguration will cause default settings like spring.batch.jdbc.initialize-schema not to work. Additionally, Jobs won't run automatically when Boot is started, so an implementation of a Runner is required.

Multiple Job Execution is no longer supported​

Previously, if there were multiple Jobs in a batch, you could execute them all at once. However, now Boot will execute a Job when it detects a single one. If there are multiple Jobs in the context, you need to specify the Job to be executed using spring.batch.job.name when starting Boot.

Expanded JobParameter support​

In Spring Batch v4, Job parameters could only be of types Long, String, Date, and Double. In v5, you can now implement converters to use any type as a JobParameter. However, the default conversion service in Spring Batch still does not support LocalDate and LocalDateTime, causing exceptions. Although you can resolve this by implementing a converter for the default conversion service, it is problematic that even though JobParametersBuilder provides related methods, the conversion does not actually occur and throws an exception. An issue has been opened regarding this, and it is expected to be fixed in 5.0.1.

JobParameters jobParameters = jobLauncherTestUtils.getUniqueJobParametersBuilder()
        .addLocalDate("date", LocalDate.now()) // if you use this method, it will throw an exception even though it is provided.
        .toJobParameters();

image

The issue was resolved in the release of 5.0.1 on 2023-02-23.

initializeSchema​

spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/postgres?currentSchema=mySchema
    username: postgres
    password: 1234
    driver-class-name: org.postgresql.Driver
  batch:
    jdbc:
      initialize-schema: always
      table-prefix: mySchema.BATCH_
  sql:
    init:
      mode: always

Specify the currentSchema option for proper functioning.

Reference​

[System Design Interview] Chapter 5: Consistent Hashing

· 12 min read
Haril Song
Owner, Software Engineer at 42dot

What are the essential components needed to design a large-scale system?

In this article, we will directly implement and discuss Consistent Hashing, which is commonly used in routing systems, and talk about it based on data.

info

You can check the complete code on Github.

Since the article is quite lengthy, from now on, we will use '~' for convenience in explanations. 🙏

What is Hashing?​

Before delving into Consistent Hashing, let's briefly touch on hashing.

The dictionary definition of hashing is 'a mathematical function that takes an arbitrary length data string as input and generates a fixed-size output, typically a hash value or hash code consisting of numbers and strings.'

In simple terms, it means that the same input string will always return the same hash code. This characteristic of hashing is used for various purposes such as encryption and file integrity verification.
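A tiny illustration of that property, using Java's built-in String.hashCode (which is unrelated to the hash function chosen later in this article):

public class HashDemo {
    public static void main(String[] args) {
        // The same input always produces the same hash code.
        System.out.println("Hello World".hashCode());
        System.out.println("Hello World".hashCode()); // identical to the line above
        // A different input (almost always) produces a different hash code.
        System.out.println("hello world".hashCode());
    }
}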

So, What is Consistent Hashing?​

Consistent Hashing is a technique used to evenly distribute data among distributed servers or services.

Even without using Consistent Hashing, it is not impossible to evenly distribute data. However, Consistent Hashing is focused on making horizontal scaling easier. Before exploring Consistent Hashing, let's understand why Consistent Hashing emerged through a simple hash routing method.

Node-Based Hash Routing Method​

hash(key) % n

image

This method efficiently distributes traffic while being simple.

However, it has a significant weakness in horizontal scaling. When the node list changes, there is a high probability that traffic will be redistributed, leading to routing to new nodes instead of existing nodes.

If you are managing traffic by caching on specific nodes, if a node leaves the group for some reason, it can cause a massive cache miss, leading to service disruptions.

image

In an experiment with four nodes, it was observed that if only one node leaves, the cache hit rate drops drastically to 27%. We will examine the experimental method in detail in the following paragraphs.
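Here is a tiny simulation of that weakness. It is not the experiment code behind the 27% figure, just a sketch of how many keys hash(key) % n reassigns when n changes from 4 to 3:

public class ModuloRoutingDemo {
    public static void main(String[] args) {
        int nodesBefore = 4;
        int nodesAfter = 3; // one node left the cluster
        int moved = 0;
        for (int key = 0; key < 1_000; key++) {
            int hash = Integer.hashCode(key); // stand-in for a real hash function
            if (Math.floorMod(hash, nodesBefore) != Math.floorMod(hash, nodesAfter)) {
                moved++;
            }
        }
        System.out.println(moved + " of 1000 keys map to a different node after the change");
    }
}

Most keys end up routed to a different node, which is exactly the mass cache miss scenario described above.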

Consistent Hash Routing Method​

Consistent Hashing is a concept designed to minimize the possibility of massive cache misses.

image

The idea is simple. Create a kind of ring by connecting the start and end of the hash space, then place nodes on the hash space above the ring. Each node is allocated its hash space and waits for traffic.

info

The hash function used to place nodes is independent of modulo operations.

Now, let's assume a situation where traffic enters this router implemented with Consistent Hashing.

image

Traffic passed through the hash function is routed towards the nearest node on the ring. Node B caches key1 in preparation for future requests.

Even in the scenario of a high volume of traffic, traffic will be routed to their respective nodes following the same principle.

Advantages of Consistent Hashing​

Low probability of cache misses even when the node list changes​

Let's consider a situation where Node E is added.

image

Previously entered keys are placed at the same points as before. Some keys that were placed between Nodes D and C now point to the new Node E, causing cache misses. However, the rest of the keys placed in other spaces do not experience cache misses.

Even if there is a network error causing Node C to disappear, the results are similar.

image

Keys that were directed to Node C now route to Node D, causing cache misses. However, the keys placed in other spaces do not experience cache misses.

In conclusion, regardless of any changes in the node list, only keys directly related to the changed nodes experience cache misses. This increases the cache hit rate compared to node-based hash routing, improving overall system performance.

Disadvantages of Consistent Hashing​

Like all other designs, Consistent Hashing, which may seem elegant, also has its drawbacks.

Difficult to maintain uniform partitions​

image

Nodes with different sizes of hash spaces are placed on the ring.

It is very difficult to predict the results of a hash function without knowing which key will be generated. Therefore, Consistent Hashing, which determines the position on the ring based on the hash result, cannot guarantee that nodes will have uniform hash spaces and be distributed evenly on the ring.

Difficult to achieve uniform distribution​

image

If a node's hash space is too wide, traffic can be concentrated.

This problem arises because nodes are not evenly distributed on the hash ring. If Node D's hash space is abnormally larger than other nodes, it can lead to a hotspot issue where traffic is concentrated on a specific node, causing overall system failure.

Virtual Nodes​

The hash space is finite. Therefore, the more nodes are placed on it, the smaller the standard deviation of each node's share becomes, so even if one node is removed, the next node will not be heavily burdened. The problem is that in the real world, every additional physical node costs money.

To address this, virtual nodes that mimic physical nodes are introduced as a clever workaround.

image

Virtual nodes internally point to their physical node. Think of them as a kind of duplication magic: the physical node itself is not placed on the hash ring; only its replicated virtual nodes wait for traffic there. When traffic is allocated to a virtual node, it is routed to the actual node that the virtual node represents.
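A minimal sketch of this idea could look like the following (the PhysicalNode interface and most names here are assumptions rather than the repository's exact code; only VirtualNode and getPhysicalNode also appear in the routing code later): a virtual node simply wraps its physical node and exposes a distinct key, so each replica lands at a different point on the ring.

// Minimal interface assumed for a physical node; the real project may differ.
interface PhysicalNode {
    String getName();
}

// A virtual node is a lightweight replica that stands on the hash ring
// in place of its physical node.
public class VirtualNode<T extends PhysicalNode> {

    private final T physicalNode;   // the real node this replica represents
    private final int replicaIndex; // distinguishes replicas of the same node

    public VirtualNode(T physicalNode, int replicaIndex) {
        this.physicalNode = physicalNode;
        this.replicaIndex = replicaIndex;
    }

    // The ring position is derived from "nodeName-replicaIndex",
    // so every replica hashes to a different point on the ring.
    public String getKey() {
        return physicalNode.getName() + "-" + replicaIndex;
    }

    public T getPhysicalNode() {
        return physicalNode;
    }
}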

DIY Consistent Hashing​

DIY: Do It Yourself

So far, we have discussed the theoretical aspects. Personally, I believe that there is no better way to learn a concept than implementing it yourself. Let's implement it.

Choosing a Hash Algorithm​

It may seem obvious since the name includes hashing, but when implementing Consistent Hashing, selecting an appropriate hash algorithm is crucial. The speed of the hash function is directly related to performance. Commonly used hash algorithms are MD5 and SHA-256.

  • MD5: Suitable for applications where speed matters more than security. Its hash space is smaller than SHA-256's, at 2^128.
  • SHA-256: Produces a longer hash and has stronger cryptographic properties. Slower than MD5. With a very large hash space of about 2^256, collisions are practically non-existent.

For routing, speed is more important than security, and since there are fewer concerns about hash collisions, MD5 is considered sufficient for implementing the hash function.

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class MD5Hash implements HashAlgorithm {

    MessageDigest instance;

    public MD5Hash() {
        try {
            instance = MessageDigest.getInstance("MD5");
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("no algorithm found");
        }
    }

    @Override
    public long hash(String key) {
        instance.reset();
        instance.update(key.getBytes());
        byte[] digest = instance.digest();
        // Fold the first 4 bytes of the 128-bit digest into a long hash value
        long h = 0;
        for (int i = 0; i < 4; i++) {
            h <<= 8;
            h |= (digest[i]) & 0xFF;
        }
        return h;
    }
}
tip

In Java, you can conveniently implement a hash function using the MD5 algorithm through MessageDigest.
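The HashAlgorithm interface itself is not shown here; judging from the class above, it is presumably just a single-method contract along these lines (an assumption, not code from the repository):

public interface HashAlgorithm {
    // Map an arbitrary key string to a position in the hash space
    long hash(String key);
}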

Hash Ring​

// Hash the businessKey and find the hashed value (node) placed on the ring.
public T routeNode(String businessKey) {
    if (ring.isEmpty()) { // If the ring is empty, it means there are no nodes, so return null
        return null;
    }
    Long hashOfBusinessKey = this.hashAlgorithm.hash(businessKey);
    // Find all ring positions greater than or equal to the key's hash value
    SortedMap<Long, VirtualNode<T>> biggerTailMap = ring.tailMap(hashOfBusinessKey);
    Long nodeHash;
    if (biggerTailMap.isEmpty()) {
        // Wrap around: if no larger position exists, use the first node on the ring
        nodeHash = ring.firstKey();
    } else {
        nodeHash = biggerTailMap.firstKey();
    }
    VirtualNode<T> virtualNode = ring.get(nodeHash);
    return virtualNode.getPhysicalNode();
}

The hash ring is implemented using a TreeMap. Since TreeMap keeps its keys (hash values) in ascending order, we can use the tailMap(key) method to find the first node whose position is greater than or equal to the key's hash value, and wrap around to the first key on the ring if no larger one exists.

info

If you are not familiar with TreeMap, please refer to this link.
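For completeness, here is a rough sketch of how physical nodes could be registered on and removed from such a TreeMap-based ring. It builds on the VirtualNode sketch above, and the field and method names are assumptions rather than the repository's exact code; the routeNode method shown earlier would live in the same class and consult the same ring field.

import java.util.TreeMap;

public class HashRing<T extends PhysicalNode> {

    private final TreeMap<Long, VirtualNode<T>> ring = new TreeMap<>();
    private final HashAlgorithm hashAlgorithm;
    private final int virtualNodeCount;

    public HashRing(HashAlgorithm hashAlgorithm, int virtualNodeCount) {
        this.hashAlgorithm = hashAlgorithm;
        this.virtualNodeCount = virtualNodeCount;
    }

    // Register one physical node by placing its virtual replicas on the ring.
    public void addNode(T physicalNode) {
        for (int i = 0; i < virtualNodeCount; i++) {
            VirtualNode<T> replica = new VirtualNode<>(physicalNode, i);
            ring.put(hashAlgorithm.hash(replica.getKey()), replica);
        }
    }

    // Removing a node frees only the hash ranges owned by its replicas;
    // keys in every other range keep routing to the same nodes as before.
    public void removeNode(T physicalNode) {
        for (int i = 0; i < virtualNodeCount; i++) {
            VirtualNode<T> replica = new VirtualNode<>(physicalNode, i);
            ring.remove(hashAlgorithm.hash(replica.getKey()));
        }
    }
}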

Testing​

How effective is Consistent Hashing compared to the standard routing method? Now that we have implemented it ourselves, let's resolve this question. The rough test design is as follows:

  • Process 1 million requests, then introduce changes to the node list and assume the same traffic comes in again.
  • 4 physical nodes

The numerical data was collected with a simple test code1 and graphed, resulting in six cases. Let's look at each one.
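As a rough illustration of how such a measurement works (the actual test is the SimpleHashRouterTest referenced in the footnote; the helper below is a hypothetical simplification), the idea is to record which node each key was routed to, change the node list, replay the same keys, and count how many still land on the cached node.

import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class HitRateSketch {

    // Share of keys still routed to the same node after the node list changes
    static double hitRate(List<String> keys,
                          Function<String, String> before,
                          Function<String, String> after) {
        Map<String, String> cachedOn = keys.stream()
                .collect(Collectors.toMap(k -> k, before));
        long hits = keys.stream()
                .filter(k -> cachedOn.get(k).equals(after.apply(k)))
                .count();
        return (double) hits / keys.size();
    }

    public static void main(String[] args) {
        List<String> keys = Stream.generate(() -> UUID.randomUUID().toString())
                .limit(1_000_000)
                .toList();
        // Simple hash routing with 4 nodes, then one node departs (4 -> 3)
        Function<String, String> fourNodes = k -> "node" + Math.floorMod(k.hashCode(), 4);
        Function<String, String> threeNodes = k -> "node" + Math.floorMod(k.hashCode(), 3);
        System.out.println("hit rate after departure: " + hitRate(keys, fourNodes, threeNodes));
    }
}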

Case 1: Simple Hash, No Node Changes​

image

After sending 1 million requests and then another 1 million identical requests, since there were no changes in the nodes, the cache hit rate was 100% from the second round onwards.

info

The cache hit rate was low, but the fact that some cache hits occurred even during the first round of requests (gray graph) is due to the random keys used in the test: duplicate key values appear with low probability, so a few keys were requested more than once.

Looking at the heights of the graphs for the nodes, we can see that the routing using hash % N is indeed distributing all traffic very evenly.

Case 2: Simple Hash, 1 Node Departure​

image

The cache hit rate, indicated by the green graph, significantly decreased. With Node 1 departing, the traffic was distributed to Nodes 2, 3, and 4. While some traffic luckily hit the cache on the same nodes as before, most of it was directed to different nodes, resulting in cache misses.

Case 3: Consistent Hash, No Node Changes, No Virtual Nodes​

image

info

Considering that physical nodes are not placed on the hash ring, using only one virtual node practically means not using virtual nodes.

Similar to Case 1, cache hits cannot occur during the first round of requests, so the red graph rises first. By the second round, the cache hit rate reaches 100%, and the heights of the green and red graphs align.

However, it can be observed that the heights of the graphs for each node are different, indicating the drawback of Consistent Hashingβ€”uneven traffic distribution due to non-uniform partitions.

Case 4: Consistent Hash, 1 Node Departure, No Virtual Nodes​

image

After Node 1 departs, the cache hit rate is dramatically better than in Case 2.

Upon closer inspection, it can be seen that the traffic originally directed to Node 1 then moved to Node 2 in the second traffic wave. Node 2 processed around 450,000 requests, including cache hits, which is more than twice the amount processed by Node 3 with 220,000 requests. Meanwhile, the traffic to Nodes 3 and 4 remained unchanged. This illustrates the advantage of Consistent Hashing while also highlighting a kind of hotspot phenomenon.

Case 5: Consistent Hash, 1 Node Departure, 10 Virtual Nodes​

To achieve uniform partitioning and resolve the hotspot issue, let's apply virtual nodes.

image

Overall, there is a change in the graphs. The traffic that was supposed to go to Node 1 is now divided among Nodes 2, 3, and 4. Although the partitions are not evenly distributed, the hotspot issue is gradually being resolved compared to Case 4. Since 10 virtual nodes seem insufficient, let's increase them further.

Case 6: Consistent Hash, 1 Node Departure, 100 Virtual Nodes​

image

Finally, the graphs for Nodes 2, 3, and 4 are similar. After Node 1's departure, there are 100 virtual nodes per physical node on the hash ring, totaling 300 virtual nodes. In summary:

  • Traffic is distributed evenly enough to be comparable to Case 1.
  • Even if Node 1 departs, the traffic intended for it is spread across multiple nodes, preventing the hotspot issue.
  • Apart from the traffic originally directed to Node 1, the caches still hit.

As we have seen, by placing a sufficient number of virtual nodes, routing with Consistent Hashing becomes far more advantageous for horizontal scaling than routing based on the modulo operation.

Conclusion​

We have examined Consistent Hashing as covered in Chapter 5 of the fundamentals of large-scale system design. We hope this has helped you understand what Consistent Hashing is and what problems it exists to solve.

Although not covered in a separate case, I was curious how many virtual nodes would be needed to achieve a perfectly uniform distribution. So I increased the number of virtual nodes to 10,000 and found that adding that many more had minimal effect. Theoretically, increasing the number of virtual nodes should drive the variance toward zero and yield a uniform distribution. However, more virtual nodes means more entries on the hash ring, which introduces unnecessary overhead: whenever a node is added or removed, its virtual nodes must be found and reorganized on the ring2. In a live environment, please choose an appropriate number of virtual nodes based on data.

Reference​

Footnotes​

  1. SimpleHashRouterTest ↩

  2. In particular, for a Hash Ring implemented using TreeMap, massive insertions and deletions are somewhat inefficient as the internal elements need to be rearranged each time. ↩