Adding the ESLint Tool to an AI Assistant: Improving Recommendations for JS/TS Projects
December 8, 2025 · 932 words · 5 min
Can an AI assistant help you write better JavaScript or TypeScript? Projects that rely heavily on JavaScript (JS) or TypeScript (TS) are synonymous with the web, so there is high demand for tools that can improve the consistency and quality of projects written in these languages. In previous posts, we’ve introduced the idea that tools both enable AI assistants to better understand our code and enable them to take action based on that understanding. In this article, we’ll try to enable our AI assistant to provide advice that is both helpful and actionable for linting JS/TS projects, and finally delve into the NPM ecosystem.

As we learned previously, you won’t get much help asking an LLM how to lint your project without providing any details. So, as before, we use our “linguist” tool to learn about the languages used in the project and augment the prompt (Figure 1).

In Figure 2, we see that GPT-4 recognizes that ESLint is highly configurable and does not work without a config, so it tries to provide one for us, either by helping us run ESLint’s init tool or by writing a config to use. However, this response gives us either a config that does not work for many projects or a boilerplate setup task for the user to perform manually. This is in contrast with other linters, like Pylint or golangci-lint, where linguist alone was enough for the LLM to find a clear path to linting. So, with ESLint, we need to add more knowledge to help the LLM figure this out.

StandardJS is a community-led effort to simplify ESLint configurations. Let’s start by nudging the assistant toward using it as a starting point. The ESLint config is published under its own package, StandardJS, so we can add the following prompt. We will also add a function definition so that our assistant knows how to run StandardJS. Note the container image defined at the bottom of the following definition. This definition will work for both TypeScript and JavaScript projects using an argument.
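The prototype’s actual function definition isn’t reproduced here, but a minimal sketch of what one could look like, following the common JSON-schema function-calling convention, is shown below. The name `run_standardjs`, the `typescript` argument, and the image tag are all illustrative assumptions, not the definition from our prototype.

```javascript
// Hypothetical sketch of a tool definition an assistant could use to run
// StandardJS. All names here (run_standardjs, the typescript flag, the
// container image tag) are illustrative, not the actual definition.
const standardjsTool = {
  name: "run_standardjs",
  description: "Lint a JavaScript or TypeScript project with StandardJS.",
  parameters: {
    type: "object",
    properties: {
      typescript: {
        type: "boolean",
        description:
          "Set to true for TypeScript projects (run ts-standard instead of standard).",
      },
    },
    required: ["typescript"],
  },
  // The container image the tool runs in -- this is how the linter can be
  // distributed without requiring a local Node.js install.
  container: { image: "example/standardjs:latest" },
};

console.log(JSON.stringify(standardjsTool, null, 2));
```

A single boolean argument is enough here because the assistant, not the user, decides how to set it from the project contents.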
The assistant uses the project content to determine how to optimally set the TypeScript property. When using StandardJS with TypeScript, two additional things happen in the container; with the right tools, this behavior is enabled by a single prompt.

Both ESLint and StandardJS run in Node.js environments. In our current prototype, our assistant uses three different Docker images. Docker is significant because of the previously mentioned requirement that ESLint run in a directory with a config. When we baked this logic into the Docker image, we effectively introduced a contract bridging the AI assistant, the linter tool, and the overall structure of the repository. After determining that a project uses JavaScript or TypeScript, our assistant also adds Git hooks (see our earlier posts for details). Docker gives us a way to reliably distribute these tools.

Linting output comes in the form of violations. A violation is attached to a range in the code file, along with the offending code and the reason for the violation. As mentioned previously, 75% of StandardJS violations are automatically fixable. Can we use the AI assistant to automatically fix the remaining violations? If you take, for example, the lint rule for type casting, all of the models we tested will replace the offending code with the corrected form. Here’s the response when we ask for fixes to lines with the violation:

If these models are able to fix these violations, why doesn’t ESLint just make them automatically fixable? In many cases, they represent riskier changes that still require some developer supervision. Perhaps the best thing an assistant can do is present these auto-fixes to the user directly in their editor. For example, a fix that has been generated by our assistant can be presented in VSCode (Figure 3). With the rise of tools like GitHub Copilot, developers are becoming accustomed to assistants being present in their editors (Figure 4). Our work shows that linting tools can improve the quality of these fixes.
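The violations described above have a machine-readable shape. A simplified sketch of one result, following the field names of ESLint’s JSON formatter (`eslint --format json`), is shown below; the file path, line numbers, and message text are illustrative.

```javascript
// Simplified sketch of ESLint's JSON output for a single file. The file
// path and message are illustrative; the field names follow ESLint's
// JSON formatter ("eslint --format json").
const result = {
  filePath: "src/index.js",
  messages: [
    {
      ruleId: "eqeqeq",
      severity: 2,
      message: "Expected '===' and instead saw '=='.",
      line: 4,
      column: 10,
      endLine: 4,
      endColumn: 12,
    },
  ],
  errorCount: 1,
  warningCount: 0,
  fixableErrorCount: 1,
  fixableWarningCount: 0,
};

// An assistant could compare fixable vs. total counts to decide which
// violations still need an LLM-generated (developer-supervised) fix.
const unfixable =
  result.errorCount + result.warningCount -
  result.fixableErrorCount - result.fixableWarningCount;
console.log(unfixable); // 0 here: this violation is auto-fixable
```

Because each message carries a range (`line`/`column` through `endLine`/`endColumn`), the assistant can hand an editor exactly the span to highlight or replace.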
For example, when we ask Copilot to fix the line from earlier, it lacks the additional context from ESLint (Figure 5). The assistant is unable to infer that there is a violation there. In this instance, Copilot is hallucinating because it was triggered by the developer’s editor action without any of the context coming from the linter. As far as Copilot knows, we just asked it to fix perfectly good code.

To improve this, we can use the output of the linter to “complain” about a violation. The editor then allows us to surface a quick action to fix the code. Figure 6 shows the same “fix using Copilot” action triggered by another violation, this time from VSCode’s “problems” window, which helps developers locate problems in the codebase. An assistant can use the editor to put the ESLint tool in a more effective relationship with the developer (Figure 7). Most importantly, we get an immediate resolution rather than a hallucination. We’re also hosting these tools in Docker, so these improvements do not require local installs of Node.js, NPM, or ESLint.

We continue to investigate the use of tools for gathering context and improving suggestions. In this article, we have looked at how AI assistants can provide significant value to developers. As always, feel free to follow along, and please reach out. Everything we’ve discussed in this blog post is available for you to try out on your own projects. For more on what we’re doing at Docker,