5 posts tagged with "test"

Make Testing Easy and Convenient with Fixture Monkey

· 6 min read
Haril Song
Owner, Software Engineer at 42dot

"Write once, Test anywhere"

Fixture Monkey is an open-source test object creation library developed by Naver. The name seems to be inspired by Netflix's open-source tool Chaos Monkey: by generating test fixtures randomly, it lets you experience a bit of chaos engineering in practice.

Since I first encountered it about 2 years ago, it has become one of my favorite open source libraries. I even ended up writing two articles about it.

I haven't written any additional articles as there were too many changes with each version update, but now that version 1.x has been released, I am revisiting it with a fresh perspective.

While my previous articles were based on Java, I am now writing in Kotlin to align with current trends. The content of this article is based on the official documentation with some added insights from my actual usage.

Why Fixture Monkey is Needed

Let's examine the following code to see what issues exist with the traditional approach.

info

I used JUnit5, which is familiar to Java developers, for the examples. However, personally, I recommend using Kotest in a Kotlin environment.

data class Product(
    val id: Long,
    val productName: String,
    val price: Long,
    val options: List<String>,
    val createdAt: Instant,
    val productType: ProductType,
    val merchantInfo: Map<Int, String>
)

enum class ProductType {
    ELECTRONICS,
    CLOTHING,
    FOOD
}

@Test
fun basic() {
    val actual: Product = Product(
        id = 1L,
        price = 1000L,
        productName = "productName",
        productType = ProductType.FOOD,
        options = listOf(
            "option1",
            "option2"
        ),
        createdAt = Instant.now(),
        merchantInfo = mapOf(
            1 to "merchant1",
            2 to "merchant2"
        )
    )

    // The preparation process is lengthy compared to the test purpose
    actual shouldNotBe null
}

Challenges of Test Object Creation

Looking at this test, it feels like a lot of code just to create an object for a single assertion. Because of how the class is implemented, leaving any property unset causes a compilation error, so even properties that are meaningless to the test must be written out.

When the preparation required for an assertion grows this long, the purpose of the test becomes unclear. Someone reading this code for the first time has to examine even the seemingly meaningless properties to check whether they carry hidden significance, which increases developer fatigue.

Difficulty in Recognizing Edge Cases

When directly setting properties to create objects, many edge cases that could occur in various scenarios are often overlooked because the properties are fixed.

val actual: Product = Product(
    id = 1L, // What if the id becomes negative?
    // ...omitted
)

To find edge cases, developers have to set properties one by one and verify them, but in reality, it is often only after runtime errors occur that developers become aware of edge cases. To easily discover edge cases before errors occur, object properties need to be set with a certain degree of randomness.

Issues with the Object Mother Pattern

To reuse test objects, there is a pattern called the Object Mother pattern: you create a factory class that produces test objects, and the test code then uses the objects that factory returns. A rough sketch of such a factory is shown below.
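
For illustration, a minimal sketch of such a factory for the Product class above (the ProductMother name and its fixed values are hypothetical):

object ProductMother {
    // Every test gets the same hand-picked values, so edge cases stay hidden
    fun simpleProduct(): Product = Product(
        id = 1L,
        productName = "productName",
        price = 1000L,
        options = listOf("option1", "option2"),
        createdAt = Instant.now(),
        productType = ProductType.FOOD,
        merchantInfo = mapOf(1 to "merchant1")
    )
}

val actual: Product = ProductMother.simpleProduct()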

However, this approach is not favored because the factory itself must be continuously maintained alongside the test code. Furthermore, as the fixed values above show, it does not help in identifying edge cases.

Using Fixture Monkey

Fixture Monkey elegantly addresses the issues of reusability and randomness as mentioned above. Let's see how it solves these problems.

First, add the dependency.

testImplementation("com.navercorp.fixturemonkey:fixture-monkey-starter-kotlin:1.0.13")

Apply KotlinPlugin() to ensure that Fixture Monkey works smoothly in a Kotlin environment.

@Test
fun test() {
    val fixtureMonkey = FixtureMonkey.builder()
        .plugin(KotlinPlugin())
        .build()
}

Let's write a test again using the Product class we used before.

@Test
fun test() {
    val fixtureMonkey = FixtureMonkey.builder()
        .plugin(KotlinPlugin())
        .build()

    val actual: Product = fixtureMonkey.giveMeOne()

    actual shouldNotBe null
}

You can create an instance of Product without the need for unnecessary property settings. All property values are filled randomly by default.

(image: fills in multiple properties nicely)
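
If you need several instances at once, the same random generation works for lists too. A small sketch, assuming the giveMe Kotlin extension that accompanies giveMeOne:

val products: List<Product> = fixtureMonkey.giveMe(5) // five Products, all randomly populated
products.size shouldBe 5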

Post Condition

However, in most cases specific property values are required. In the example above the id was generated as a negative number, but in reality an id is usually positive. There might be validation logic like this:

init {
    require(id > 0) { "id should be positive" }
}

Run the test a few times and it fails whenever the id happens to be generated as a negative number. This is exactly why generating all values randomly is so useful for finding unexpected edge cases.

Let's maintain the randomness but restrict the range slightly to ensure the validation logic passes.

@RepeatedTest(10)
fun postCondition() {
    val fixtureMonkey = FixtureMonkey.builder()
        .plugin(KotlinPlugin())
        .build()

    val actual = fixtureMonkey.giveMeBuilder<Product>()
        .setPostCondition { it.id > 0 } // Specify property conditions for the generated object
        .sample()

    actual.id shouldBeGreaterThan 0
}

I used @RepeatedTest to run the test 10 times.

You can see that all tests pass.

Setting Various Properties

When using setPostCondition, be cautious: if the condition is too narrow, object creation becomes costly, because generation is retried internally until an object satisfying the condition is produced. In such cases it is much better to fix the specific value with setExp.

val actual = fixtureMonkey.giveMeBuilder<Product>()
    .setExp(Product::id, 1L) // Only the specified value is fixed, the rest are random
    .sample()

actual.id shouldBe 1L

If a property is a collection, you can use sizeExp to specify the size of the collection.

val actual = fixtureMonkey.giveMeBuilder<Product>()
    .sizeExp(Product::options, 3)
    .sample()

actual.options.size shouldBe 3

Using maxSizeExp and minSizeExp, you can easily set maximum and minimum size constraints for a collection.

val actual = fixtureMonkey.giveMeBuilder<Product>()
    .maxSizeExp(Product::options, 10)
    .sample()

actual.options.size shouldBeLessThan 11

There are various other property setting methods available, so I recommend exploring them when needed.
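
Since the builder API is fluent, the settings above can also be chained together. For example, combining only the methods already shown:

val actual = fixtureMonkey.giveMeBuilder<Product>()
    .setExp(Product::productType, ProductType.FOOD) // fix the type
    .sizeExp(Product::options, 3)                   // exactly three options
    .setPostCondition { it.id > 0 }                 // keep the id positive
    .sample()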

Conclusion

Fixture Monkey really does resolve the inconveniences of writing unit tests. Beyond what was covered in this article, you can define conditions in a builder and reuse them (a sketch follows below), add randomness to properties, and discover edge cases you might otherwise have missed. As a result, test code becomes very concise, and the need for extra scaffolding like Object Mother disappears, making maintenance easier.
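
As a rough sketch of the reuse idea (builders are ordinary values, so they can be defined once and sampled repeatedly):

// Define the common conditions once...
val validProductBuilder = fixtureMonkey.giveMeBuilder<Product>()
    .setPostCondition { it.id > 0 }

// ...then reuse them; each sample() yields a fresh random object
val first = validProductBuilder.sample()
val second = validProductBuilder.sample()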

Even before the release of Fixture Monkey 1.x, I found it very helpful in writing test code. Now that it has become a stable version, I hope you can introduce it without hesitation and enjoy writing test code.

How Many Concurrent Requests Can a Single Server Application Handle?

· 14 min read
Haril Song
Owner, Software Engineer at 42dot

Overview

How many concurrent users can a Spring MVC web application accommodate? 🤔

To estimate how many users a server needs to handle to provide stable service, this article explores changes in network traffic, focusing on Spring MVC's Tomcat configuration.

For the sake of convenience, the following text will be written in a conversational tone 🙏

info

If you find any technical errors, typos, or incorrect information, please let us know in the comments. Your feedback is greatly appreciated 🙇‍♂️

Speeding up Test Execution, Spring Context Mocking

· 3 min read
Haril Song
Owner, Software Engineer at 42dot

Overview

Writing test code in every project has become a common practice. As projects grow, the number of tests inevitably increases, leading to longer overall test execution times. Particularly in projects based on the Spring framework, test execution can significantly slow down due to the loading of Spring Bean contexts. This article introduces methods to address this issue.

Write All Tests as Unit Tests

Tests need to be fast. The faster they are, the more frequently they can be run without hesitation. If running all tests once takes 10 minutes, it means feedback will only come after 10 minutes.

To achieve faster tests in Spring, it is essential to avoid @SpringBootTest. When every Bean is loaded, loading them takes overwhelmingly longer than actually executing the business logic under test.

@SpringBootTest
class SpringApplicationTest {

    @Test
    void main() {
    }
}

The above is basic test code for running a Spring application. All Beans configured by @SpringBootTest are loaded. How can we inject only the Beans necessary for testing?

Utilizing Annotations or Mockito

By using specific annotations, only the Beans needed for the test at hand are loaded automatically. Instead of loading every Bean through full context loading, only the truly necessary ones are loaded, minimizing test execution time.

Let's briefly look at a few annotations.

  • @WebMvcTest: Loads only Web MVC related Beans.
  • @WebFluxTest: Loads only Web Flux related Beans. Allows the use of WebTestClient.
  • @DataJpaTest: Loads only JPA repository related Beans.
  • @WithMockUser: When using Spring Security, creates a fake User, skipping unnecessary authentication processes.

Additionally, by using Mockito, complex dependencies can easily be replaced with mocks. With these two concepts used appropriately, most unit tests are not difficult to write; a small sketch follows below.
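
As an illustration, a web-layer slice test might look roughly like this (a sketch in Kotlin; UserController and UserService are hypothetical names):

import org.assertj.core.api.Assertions.assertThat
import org.junit.jupiter.api.Test
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest
import org.springframework.boot.test.mock.mockito.MockBean
import org.springframework.test.web.servlet.MockMvc

@WebMvcTest(UserController::class) // loads only Web MVC related Beans
class UserControllerTest {

    @Autowired
    lateinit var mockMvc: MockMvc

    @MockBean // a Mockito mock stands in for the real UserService Bean
    lateinit var userService: UserService

    @Test
    fun contextLoads() {
        // the web slice started without loading the full application context
        assertThat(mockMvc).isNotNull()
    }
}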

warning

If excessive mocking is required, there is a high possibility that the dependency design is flawed. Be cautious not to overuse mocking.

What about SpringApplication?

For a Spring application to start, SpringApplication.run() must be executed. Instead of inefficiently loading the entire Spring context just to verify that this method executes, we can mock SpringApplication, where the context loading happens, and verify only that run() is called, all without @SpringBootTest.

class DemoApplicationTests {

    @Test
    void main() {
        try (MockedStatic<SpringApplication> springApplication = mockStatic(SpringApplication.class)) {
            when(SpringApplication.run(DemoApplication.class)).thenReturn(null);

            DemoApplication.main(new String[]{});

            springApplication.verify(
                () -> SpringApplication.run(DemoApplication.class),
                only()
            );
        }
    }
}

Conclusion

In Robert C. Martin's Clean Code, Chapter 9 discusses the 'FIRST principle'.

This article focused on the first letter, F for Fast, and briefly introduced ways to keep tests quick. To emphasize the importance of fast tests once more, I conclude with this quote:

Tests must be fast enough. - Robert C. Martin

Fixture Monkey 0.4.x

· 3 min read
Haril Song
Owner, Software Engineer at 42dot
warning

As of May 2024, this post is no longer valid. Instead, please refer to Make Testing Easy and Convenient with Fixture Monkey.

Overview

With the update to FixtureMonkey version 0.4.x, there have been significant changes in functionality. It has only been a month since the previous post1, and there were already so many modifications that it felt a bit overwhelming, but taking comfort in the active community, I am writing a new post reflecting the updated features.

Changes

LabMonkey

An instance that adds experimental features has been introduced.

LabMonkey inherits from FixtureMonkey and supports existing features while adding several new methods. The overall usage is similar, so it seems that using LabMonkey instead of FixtureMonkey would be appropriate. It is said that after version 1.0.0, the functionality of LabMonkey will be deprecated, and the same features will be provided by FixtureMonkey.

private final LabMonkey fixture = LabMonkey.create();

Change in Object Creation Method

The responsibility has shifted from ArbitraryGenerator to ArbitraryIntrospector.

Record Support

Now, you can also create Record through FixtureMonkey.

public record LottoRecord(int number) {}

class LottoRecordTest {

    private final LabMonkey fixture = LabMonkey.labMonkeyBuilder()
        .objectIntrospector(ConstructorPropertiesArbitraryIntrospector.INSTANCE)
        .build();

    @Test
    void shouldBetween1to45() {
        LottoRecord lottoRecord = fixture.giveMeOne(LottoRecord.class);

        System.out.println("lottoRecord: " + lottoRecord);

        assertThat(lottoRecord).isNotNull();
    }
}

lottoRecord: LottoRecord[number=-6]

By using ConstructorPropertiesArbitraryIntrospector, you can create Record objects. In version 0.3.x the @ConstructorProperties annotation was required, but now no changes to the production code are needed, which is quite a significant improvement.

In addition, various Introspectors exist to support object creation in a way that matches the shape of the object.
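
For example, FieldReflectionArbitraryIntrospector is, as I understand it, one of the built-in alternatives: it creates objects by reflecting on fields, which suits classes with a no-args constructor and mutable fields. A sketch (in Kotlin):

val fixture: LabMonkey = LabMonkey.labMonkeyBuilder()
    .objectIntrospector(FieldReflectionArbitraryIntrospector.INSTANCE)
    .build()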

Plugin

private final LabMonkey fixture = LabMonkey.labMonkeyBuilder()
    .objectIntrospector(ConstructorPropertiesArbitraryIntrospector.INSTANCE)
    .plugin(new JavaxValidationPlugin())
    .build();

Through the fluent API plugin(), you can easily add plugins. By adding JavaxValidationPlugin, you can apply Java Bean Validation functionality to create objects.

It seems like a kind of decorator pattern, making it easier to develop and apply various third-party plugins.

public record LottoRecord(
    @Min(1)
    @Max(45)
    int number
) {
    public LottoRecord {
        if (number < 1 || number > 45) {
            throw new IllegalArgumentException("The lotto number must be between 1 and 45.");
        }
    }
}

@RepeatedTest(100)
void shouldBetween1to45() {
    LottoRecord lottoRecord = fixture.giveMeOne(LottoRecord.class);

    assertThat(lottoRecord.number()).isBetween(1, 45);
}

Conclusion

Most of the areas I pointed out as lacking in the previous post have been improved, and I am very satisfied using the library. Somehow, though, the documentation2 seems a bit more lacking than before...

Footnotes

  1. FixtureMonkey 0.3.0 - Object Creation Strategy

  2. FixtureMonkey

[Jacoco] Aggregating Jacoco Reports for Multi-Module Projects

· 2 min read
Haril Song
Owner, Software Engineer at 42dot

Overview

Starting from Gradle 7.4, a feature has been added that allows you to aggregate multiple Jacoco test reports into a single, unified report. In the past, it was very difficult to view the test results across multiple modules in one file, but now it has become much more convenient to merge these reports.

Usage

Creating a Submodule Solely for Collecting Reports

The current project structure consists of a module named application and other modules like list and utils that are used by the application module.

By adding a code-coverage-report module, we can collect the test reports from the application, list, and utils modules.

The project structure will then look like this:

  • application
  • utils
  • list
  • code-coverage-report
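
For reference, all of these modules must be included in the build's settings file. In the Gradle Kotlin DSL that might look like this (a sketch; the root project name is assumed, and the article itself uses the Groovy DSL):

// settings.gradle.kts (root project name is hypothetical)
rootProject.name = "demo"
include("application", "utils", "list", "code-coverage-report")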

Adding the jacoco-report-aggregation Plugin

// code-coverage-report/build.gradle
plugins {
    id 'base'
    id 'jacoco-report-aggregation'
}

repositories {
    mavenCentral()
}

dependencies {
    jacocoAggregation project(":application")
}

Now, by running ./gradlew testCodeCoverageReport, you can generate a Jacoco report that aggregates the test results from all modules.

warning

To use the aggregation feature, a jar file is required. If you have set jar { enabled = false }, you need to change it to true.

Update 22-09-28

In the case of a Gradle multi-project setup, there is an issue where packages that were properly excluded in a single project are not excluded in the aggregate report.

By adding the following configuration, you can generate a report that excludes specific packages.

testCodeCoverageReport {
    reports {
        csv.required = true
        xml.required = false
    }
    getClassDirectories().setFrom(files(
        [project(':api'), project(':utils'), project(':core')].collect {
            it.fileTree(dir: "${it.buildDir}/classes/java/main", exclude: [
                '**/dto/**',
                '**/config/**',
                '**/output/**',
            ])
        }
    ))
}

Next Step

The jvm-test-suite plugin, introduced alongside jacoco-report-aggregation in Gradle, also looks very useful. Since the two plugins are complementary, it would be worth using them together.
