
Description

Ollama makes it easy to get up and running with large language models locally.

Examples

Java

// Start an Ollama container and pull the all-minilm model inside it.
var ollama = new OllamaContainer("ollama/ollama:0.1.26");
ollama.start();
ollama.execInContainer("ollama", "pull", "all-minilm");
Go

ollamaContainer, err := ollama.Run(ctx, "ollama/ollama:0.1.26")
if err != nil {
	log.Fatalf("failed to start container: %s", err)
}
// Pull the all-minilm model inside the running container.
_, _, err = ollamaContainer.Exec(ctx, []string{"ollama", "pull", "all-minilm"})
JavaScript

const container = await new OllamaContainer().start();