AI-Powered Testing Using Docker Model Runner with Microcks for Dynamic Mock APIs

December 8, 2025 · 612 words · 3 min


The non-deterministic nature of LLMs makes them ideal for generating dynamic, rich test data, perfect for validating app behavior and ensuring consistent, high-quality user experiences. Today, we’ll walk you through how to use Docker Model Runner with Microcks to generate dynamic mock APIs for testing your applications.

Microcks is an open-source API mocking and testing tool that allows developers to quickly spin up mock services for development and testing. By providing predefined mock responses or generating them directly from an OpenAPI schema, you can point your applications at these mocks instead of hitting real APIs, enabling efficient and safe testing environments.

Docker Model Runner is a convenient way to run AI models locally within Docker Desktop. It provides an OpenAI-compatible API, allowing you to integrate sophisticated AI capabilities into your projects seamlessly, using local hardware resources.

By integrating Microcks with Docker Model Runner, you can enrich your mock APIs with AI-generated responses, creating realistic and varied data that is less rigid than static examples. In this guide, we’ll explore how to set up these two tools together, giving you the benefits of dynamic mock generation powered by local AI.

To start, ensure you’ve enabled Docker Model Runner as described in our earlier post on configuring Goose for a local AI assistant setup. Next, select and pull your desired model from Docker Hub. Then clone the Microcks repository and navigate to its Docker Compose setup directory.

You’ll need to adjust some configurations to enable the AI Copilot feature within Microcks. In the `application.properties` file, configure the AI Copilot to use Docker Model Runner. We’re using `http://model-runner.docker.internal` as the base URL for the OpenAI-compatible API: Docker Model Runner is available at that address from the containers running in Docker Desktop. Using it ensures direct communication between the containers and the Model Runner and avoids unnecessary networking through the host machine’s ports.
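A sketch of the AI Copilot section of `application.properties`, with property names following the Microcks AI Copilot documentation; the API key is a placeholder (Model Runner doesn’t check it), and the endpoint path and model name are assumptions you should match to your setup:

```properties
ai-copilot.enabled=true
ai-copilot.implementation=openai
# Model Runner ignores the key, but Microcks requires a value
ai-copilot.openai.api-key=irrelevant
# In-container address of Docker Model Runner's OpenAI-compatible API
ai-copilot.openai.api-url=http://model-runner.docker.internal/engines/v1/
# Must match a model you've pulled
ai-copilot.openai.model=ai/llama3.2
```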
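Collected as commands, the setup steps above might look like the following sketch; the model name is just an illustrative choice from Docker Hub’s `ai/` namespace, so substitute your own:

```shell
# Pull a model for Docker Model Runner (example model)
docker model pull ai/llama3.2

# Clone Microcks and enter its Docker Compose setup directory
git clone https://github.com/microcks/microcks.git
cd microcks/install/docker-compose
```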
Next, enable the copilot feature itself by adding the feature flag to the Microcks `features.properties` file. Then start Microcks with Docker Compose in development mode. Once it’s up, access the Microcks UI at `http://localhost:8080`. Install the example API for testing by clicking through these buttons on the Microcks page: Microcks Hub → MicrocksIO Samples APIs → pastry-api-openapi v.2.0.0 → Install → Direct import → Go.

Within the Microcks UI, open the imported API and select an operation you’d like to enhance. Open the “AI Copilot Samples” dialog, which prompts Microcks to query the configured LLM via Docker Model Runner. You may notice increased GPU activity as the model processes your request. After processing, the AI-generated mock responses are displayed, ready to be reviewed or added directly to your mocked operations.

You can easily test the generated mocks with a simple `curl` command. It returns a realistic, AI-generated response that enhances the quality and reliability of your test data. You could now use this approach in your tests: for example, in a shopping cart application that depends on an inventory service, realistic yet randomized mock data lets you cover more application behaviors with the same set of tests.

For better reproducibility, you can also specify the Docker Model Runner dependency and the chosen model explicitly in your Compose file. Starting the Compose setup will then pull the model and wait for it to be available, the same way it does for containers.

Docker Model Runner is an excellent way to run LLMs locally, and it provides compatibility with the OpenAI API, allowing for seamless integration into existing workflows. Tools like Microcks can leverage Model Runner to generate dynamic sample responses for mocked APIs, giving you richer, more realistic synthetic data for integration testing. If you have local AI workflows or just run LLMs locally, please discuss them with us in the community! We’d love to explore more local AI integrations with Docker.
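The feature flag mentioned above lives in Microcks’ `features.properties`; per the Microcks documentation it is a single line:

```properties
features.feature.ai-copilot.enabled=true
```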
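A sketch of the explicit model dependency described above, using Docker Compose’s top-level `models` element (the service shape is simplified and the model name is an example):

```yaml
services:
  microcks:
    image: quay.io/microcks/microcks:latest
    models:
      - llm   # Compose waits for this model, as it does for containers

models:
  llm:
    model: ai/llama3.2   # pulled automatically on "docker compose up"
```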
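The start-and-test steps above, sketched as shell commands; the compose file name comes from the Microcks repository’s `install/docker-compose` directory, and the mock URL is an assumed path for the pastry sample API, so adjust both to what your installation actually exposes:

```shell
# Start Microcks in development mode
docker compose -f docker-compose-devmode.yml up -d

# Hit the pastry sample mock to see an AI-generated response
curl http://localhost:8080/rest/API+Pastries/2.0.0/pastry/Millefeuille
```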