Package dyntabs.ai
Class ConversationBuilder
java.lang.Object
dyntabs.ai.ConversationBuilder
Builder for creating Conversation instances.
Every setting is optional. At minimum, you just need a configured
easyai.properties file on the classpath, then:
Conversation chat = EasyAI.chat().build();
Available Options
| Method | What it does | Default |
|---|---|---|
| withMemory(int) | How many messages to remember | 0 (no memory) |
| withSystemMessage(String) | Set the AI's personality/role | none |
| withModel(String) | Override the model name | from properties |
| withApiKey(String) | Override the API key | from properties |
| withProvider(String) | "openai" or "ollama" | from properties |
| withBaseUrl(String) | Custom API endpoint | provider default |
| withTemperature(double) | 0.0=precise, 1.0=creative | provider default |
| withMaxTokens(int) | Max response length | provider default |
Example: Using Local Ollama Instead of OpenAI
Conversation chat = EasyAI.chat()
.withProvider("ollama")
.withModel("llama3")
.withMemory(10)
.build();
Method Summary
| Method | Description |
|---|---|
| build() | Builds and returns a ready-to-use Conversation. |
| withApiKey(String apiKey) | Overrides the API key from configuration. |
| withBaseUrl(String baseUrl) | Overrides the API base URL. |
| withChatLanguageModel(dev.langchain4j.model.chat.ChatLanguageModel model) | Injects an externally created ChatLanguageModel. |
| withMaxTokens(int maxTokens) | Limits the maximum number of tokens in the AI response. |
| withMemory(int maxMessages) | Enables conversation memory. |
| withModel(String modelName) | Overrides the model name from configuration. |
| withProvider(String provider) | Overrides the AI provider. |
| withSystemMessage(String systemMessage) | Sets a system message that defines the AI's behavior and personality. |
| withTemperature(double temperature) | Sets the temperature (creativity) of the AI responses. |
-
Method Details
-
withMemory
Enables conversation memory. The AI will remember the last maxMessages messages (both user and AI messages count).
Example: withMemory(20) means the AI sees the last 20 messages as context when generating a response.
- Parameters:
maxMessages - number of messages to keep in memory (e.g. 10, 20, 50)
- Returns:
- this builder
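The memory behaves as a sliding window over the transcript: once the limit is reached, the oldest message drops out. A minimal self-contained sketch of that behavior in plain Java (an illustration of the documented semantics, not the library's actual implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative sliding-window buffer: keeps only the last maxMessages entries,
// mirroring what withMemory(maxMessages) is documented to do.
class MessageWindow {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    MessageWindow(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    void add(String message) {
        messages.addLast(message);
        if (messages.size() > maxMessages) {
            messages.removeFirst(); // oldest message falls out of context
        }
    }

    List<String> context() {
        return List.copyOf(messages);
    }
}
```

With a window of 2, adding "a", "b", "c" leaves only "b" and "c" in context.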
-
withSystemMessage
Sets a system message that defines the AI's behavior and personality. Examples:
- "You are a helpful Java tutor"
- "You are a customer support agent for an online store"
- "Always respond in JSON format"
- Parameters:
systemMessage - the system instruction for the AI
- Returns:
- this builder
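Using the EasyAI.chat() entry point shown in the intro, a system message is typically set once at build time and then applies to the whole conversation (a sketch, assuming the properties file from the intro is configured):

```java
Conversation tutor = EasyAI.chat()
    .withSystemMessage("You are a helpful Java tutor")
    .withMemory(10)
    .build();
```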
-
withModel
Overrides the model name from configuration. Examples: "gpt-4o", "gpt-4o-mini", "llama3"
- Parameters:
modelName - the model name
- Returns:
- this builder
-
withApiKey
Overrides the API key from configuration.
- Parameters:
apiKey - your API key
- Returns:
- this builder
-
withProvider
Overrides the AI provider. Supported values: "openai", "ollama".
- Parameters:
provider - the provider name
- Returns:
- this builder
-
withBaseUrl
Overrides the API base URL. Useful for proxies, Azure OpenAI, or self-hosted endpoints.
- Parameters:
baseUrl - the base URL (e.g. "http://localhost:11434/v1/")
- Returns:
- this builder
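One common use: Ollama exposes an OpenAI-compatible API under /v1, so you can keep the "openai" provider and just redirect it to a local endpoint. A sketch, using the EasyAI.chat() entry point from the intro (the "unused" key is a placeholder; local endpoints typically ignore it but some clients require a non-empty value):

```java
Conversation chat = EasyAI.chat()
    .withProvider("openai")
    .withBaseUrl("http://localhost:11434/v1/")
    .withModel("llama3")
    .withApiKey("unused")
    .build();
```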
-
withTemperature
Sets the temperature (creativity) of the AI responses.
- 0.0 = deterministic, always picks the most likely word
- 0.7 = balanced (good default)
- 1.0 = very creative, more random
- Parameters:
temperature - value between 0.0 and 1.0
- Returns:
- this builder
-
withMaxTokens
Limits the maximum number of tokens in the AI response. Roughly: 1 token ~ 4 characters in English.
- Parameters:
maxTokens - maximum tokens (e.g. 500, 1000, 4000)
- Returns:
- this builder
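The 4-characters-per-token figure is only a rule of thumb, but it is handy for sizing a budget before calling the API. A small self-contained helper (an illustration, not part of the library):

```java
// Rough token estimate using the "1 token ~ 4 characters in English" rule of thumb.
// Real tokenizers can differ noticeably, especially for code or non-English text.
class TokenBudget {
    static int estimateTokens(String text) {
        return (int) Math.ceil(text.length() / 4.0);
    }
}
```

For example, a 2000-character reply estimates to about 500 tokens, which matches the smallest suggested withMaxTokens value above.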
-
withChatLanguageModel
public ConversationBuilder withChatLanguageModel(dev.langchain4j.model.chat.ChatLanguageModel model)
Injects an externally created ChatLanguageModel. Useful for testing with a mock model, or when you need full control over model creation.
- Parameters:
model - a pre-built ChatLanguageModel instance
- Returns:
- this builder
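For unit tests, a canned-response model avoids any network calls. A sketch of such a test double, assuming langchain4j's pre-1.0 ChatLanguageModel interface (where generate(List&lt;ChatMessage&gt;) returning Response&lt;AiMessage&gt; is the single required method; in langchain4j 1.x the interface was reworked as ChatModel, so adjust accordingly):

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.output.Response;
import java.util.List;

// Test double: always returns the same reply, never contacts a provider.
class FixedReplyModel implements ChatLanguageModel {
    @Override
    public Response<AiMessage> generate(List<ChatMessage> messages) {
        return Response.from(AiMessage.from("canned reply"));
    }
}
```

Passing new FixedReplyModel() to withChatLanguageModel(...) lets tests exercise Conversation logic without an API key or network access.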
-
build
Builds and returns a ready-to-use Conversation.
- Returns:
- a new Conversation instance