Many examples, tutorials, and blog posts convey the impression that Python is the only language for AI. This blog post shows that for enterprise applications, Kotlin with Spring Boot is an excellent choice for building robust AI and agentic AI applications.
In today's fast-evolving landscape of AI-driven applications, integrating language models into enterprise solutions is essential. This post explores LangChain4j within a Spring Boot environment using Kotlin. I will demonstrate how to declare AI services using annotations and how to integrate system prompts, dynamic user prompts, and tools, all via declarative interfaces.
What is LangChain4j?
LangChain4j is a Java‑based framework inspired by LangChain that enables developers to build chains of language model interactions with ease. In this guide, we use Kotlin instead of Java. Kotlin is a modern, expressive, and concise programming language that reduces boilerplate code and works seamlessly with Spring Boot. The official documentation explains why Kotlin is an excellent choice for LangChain4j projects (see LangChain4j – Kotlin Tutorial).
DISCLAIMER
At the time of writing, LangChain4j's newest release is 1.0.0-beta1 from February 2025. A version with a stable API should be available soon, but until then you may want to hold off on production use.
Integrating LangChain4j with Spring Boot Using Kotlin
Let us dive into the implementation of a small example: an assistant that converts unstructured text into a Person object containing first name, last name, email, phone number, and address.
AI Service and Structured Output
With LangChain4j you define your AI service as a simple interface annotated with @AiService. System prompts (via @SystemMessage) set the static context, while dynamic user inputs (via @UserMessage) are injected at runtime. LangChain4j automatically adapts the prompt so that the output is transformed into the return type of the extractContact() method as structured output. This spares you from writing boilerplate code (see Structured Outputs).
package com.example.service

import dev.langchain4j.service.SystemMessage
import dev.langchain4j.service.UserMessage
import dev.langchain4j.service.V
import dev.langchain4j.service.spring.AiService

// Data class for contact information
data class Person(
    val firstName: String,
    val lastName: String,
    val email: String,
    val phone: String,
    val address: String
)

@AiService
interface AssistantService {

    @SystemMessage("""
        Extract structured contact information from unstructured text.
    """)
    @UserMessage("""
        The following text contains contact information.
        Extract the data and return a structured output:
        {{userMessage}}
    """)
    fun extractContact(@V("userMessage") userMessage: String): Person
}
Integrating Tools with @Tool
To extend the assistant's capabilities, you can define methods as tools. In this example, we provide tools that extract email addresses and phone numbers using regular expressions. These tools are automatically wired into the Spring context via Automatic Component Wiring.
package com.example.tools

import dev.langchain4j.agent.tool.Tool
import org.springframework.stereotype.Component

@Component
class ContactTools {

    @Tool("Extracts email addresses from a given text.")
    fun extractEmail(text: String): List<String> {
        val regex = Regex("[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}")
        return regex.findAll(text).map { it.value }.toList()
    }

    @Tool("Extracts phone numbers from a given text.")
    fun extractPhone(text: String): List<String> {
        // A simple regex pattern for demonstration – adjust as needed.
        val regex = Regex("(\\+\\d{1,3}[- ]?)?\\d{2,4}[- ]?\\d{3,4}[- ]?\\d{3,4}")
        return regex.findAll(text).map { it.value }.toList()
    }
}
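The automatic wiring is convenient, but it helps to see roughly what it does under the hood. As an illustration, the same assistant could be assembled programmatically with LangChain4j's AiServices builder. This is only a minimal sketch: the buildAssistant helper is hypothetical, it assumes a ChatLanguageModel instance is available, and the builder method names follow the pre-1.0 API, so they may change in the stable release.

import dev.langchain4j.model.chat.ChatLanguageModel
import dev.langchain4j.service.AiServices

// Hypothetical manual wiring, equivalent in spirit to what the Spring
// integration generates for the @AiService-annotated interface.
fun buildAssistant(model: ChatLanguageModel): AssistantService =
    AiServices.builder(AssistantService::class.java)
        .chatLanguageModel(model)   // the LLM that answers the prompts
        .tools(ContactTools())      // registers the @Tool methods explicitly
        .build()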
Configuring the AI Service in Spring Boot
LangChain4j automatically generates an implementation for your @AiService-annotated interface and registers it as a Spring bean at runtime using Proxies. You can then inject this service into a REST controller.
package com.example.controller

import com.example.service.AssistantService
import com.example.service.Person
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController

@RestController
class AssistantController(private val assistantService: AssistantService) {

    @GetMapping("/extractContact")
    fun getExtractContact(@RequestParam message: String): Person {
        return assistantService.extractContact(message)
    }
}
Using an Environment Variable for the API Key
The LLM provider’s API key (e.g., OpenAI) is retrieved from an environment variable. Configure a bean for the LLM provider as follows:
package com.example.config

import dev.langchain4j.model.openai.OpenAiChatModel
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class LangChainConfiguration {

    @Bean
    fun chatModel(): OpenAiChatModel {
        val apiKey = System.getenv("OPENAI_API_KEY")
            ?: throw IllegalStateException("OPENAI_API_KEY environment variable not set")
        return OpenAiChatModel.builder()
            .apiKey(apiKey)
            .modelName("gpt-4o-mini") // example model name – pick the one that fits your use case
            .build()
    }
}
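If you prefer configuration over code and add the OpenAI Spring Boot starter (dev.langchain4j:langchain4j-open-ai-spring-boot-starter), the model can also be auto-configured from application.properties. The property names below follow the starter's conventions as I understand them, and the model name is only an example, so double-check both against the version you use:

# application.properties (alternative to the @Configuration class above)
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4o-mini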
Gradle Wrapper – Ensuring a Consistent Build Environment
Using the Gradle Wrapper ensures that all team members and CI/CD pipelines use the same Gradle version, eliminating compatibility issues. The only prerequisite is a current Gradle installation on the machine that generates the wrapper; anyone who later builds the project does not need Gradle installed at all.
Installing the Gradle Wrapper
If your project does not yet have the Gradle Wrapper, generate it by running:
gradle wrapper
This command creates the following files:
.
├── gradlew        # Unix shell script to run Gradle
├── gradlew.bat    # Windows batch script to run Gradle
└── gradle
    └── wrapper
        ├── gradle-wrapper.jar         # Wrapper bootstrap
        └── gradle-wrapper.properties  # Specifies the Gradle version
Commit these files to your version control system to ensure a consistent build environment.
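The pinned Gradle version lives in gradle-wrapper.properties. A typical file looks roughly like the following; the distribution version shown here is only an example, so use whatever your project standardizes on:

distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-8.12-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists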
Using the Gradle Wrapper
Instead of running Gradle directly, always use the wrapper:
./gradlew build # On macOS/Linux
gradlew build # On Windows
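During development of the example above, the most useful wrapper task is the Spring Boot plugin's run task, which starts the application locally:

./gradlew bootRun   # starts the application on http://localhost:8080 by default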
GitHub Actions Integration
In a CI pipeline, the build runs through the same wrapper. A minimal GitHub Actions job looks like this:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '21'
      - name: Grant execute permission for Gradle Wrapper
        run: chmod +x gradlew
      - name: Build with Gradle Wrapper
        run: ./gradlew build
build.gradle.kts
plugins {
    id("org.springframework.boot") version "3.4.3"
    id("io.spring.dependency-management") version "1.1.0"
    kotlin("jvm") version "2.1.10"
    kotlin("plugin.spring") version "2.1.10"
}

group = "com.example"
version = "0.0.1-SNAPSHOT"
java.sourceCompatibility = JavaVersion.VERSION_21

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.springframework.boot:spring-boot-starter-web")
    implementation("com.fasterxml.jackson.module:jackson-module-kotlin") // JSON mapping for Kotlin data classes
    implementation("org.jetbrains.kotlin:kotlin-reflect")
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
    implementation("dev.langchain4j:langchain4j-spring-boot-starter:1.0.0-beta1") // @AiService support and automatic wiring
    implementation("dev.langchain4j:langchain4j-open-ai:1.0.0-beta1")             // OpenAiChatModel
    testImplementation("org.springframework.boot:spring-boot-starter-test")
}

tasks.withType<Test> {
    useJUnitPlatform()
}
Using the AI Service via cURL
Once your Spring Boot application is running, you can test the AI service using cURL. For instance, consider the following narrative text that includes an address:
curl -G \
--data-urlencode "message=Jon Doe lives in 123 Maple Street, Springfield. You can reach him at email: jon.doe@example.com and phone: +1-555-123-4567." \
http://localhost:8080/extractContact
A successful request returns structured JSON output similar to:
{
  "firstName": "Jon",
  "lastName": "Doe",
  "email": "jon.doe@example.com",
  "phone": "+1-555-123-4567",
  "address": "123 Maple Street, Springfield"
}
Scaling to More Complex AI Workflows
Beyond simple contact extraction, LangChain4j’s declarative approach enables you to build highly complex AI workflows with minimal code. With just a few annotations, you can:
- Develop Intelligent Chatbots: Maintain conversation context and respond to nuanced queries.
- Implement Agentic AI: Dynamically invoke specialized tools to handle a variety of tasks.
- Integrate Multiple AI Models: Combine different language models to leverage their individual strengths.
- Incorporate Memory: Enable conversation history and contextual awareness for richer interactions (see the sketch below).
- Deploy Retrieval-Augmented Generation (RAG): Seamlessly fetch and integrate external data to enhance responses.
This approach not only reduces boilerplate code but also allows for rapid development of sophisticated AI systems, making it ideal for enterprise applications.
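To make the memory point concrete, here is a minimal sketch of what a conversational assistant with per-user memory could look like. The ChatAssistant interface and ChatMemoryConfiguration class are hypothetical, the sketch assumes the automatic component wiring picks up the ChatMemoryProvider bean, and the API may still shift before the 1.0.0 release.

import dev.langchain4j.memory.chat.ChatMemoryProvider
import dev.langchain4j.memory.chat.MessageWindowChatMemory
import dev.langchain4j.service.MemoryId
import dev.langchain4j.service.UserMessage
import dev.langchain4j.service.spring.AiService
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

// Hypothetical conversational assistant that keeps separate context per user.
@AiService
interface ChatAssistant {
    fun chat(@MemoryId userId: String, @UserMessage message: String): String
}

@Configuration
class ChatMemoryConfiguration {
    // One sliding-window memory (last 10 messages) per conversation id.
    @Bean
    fun chatMemoryProvider(): ChatMemoryProvider =
        ChatMemoryProvider { MessageWindowChatMemory.withMaxMessages(10) }
}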
Conclusion
By leveraging annotations such as @SystemMessage, @UserMessage, and @Tool, LangChain4j enables the creation of robust AI services within a Spring Boot application. This approach facilitates the seamless integration of static context, dynamic user inputs, and tool functionality, all managed through declarative interfaces. Developers can efficiently build intelligent, modular applications with significantly reduced boilerplate code.
🤓 Exciting!
This article has been written with help from an LLM, which may make errors (like humans do 😇).