Road to containing iSCSI
December 8, 2025 · 806 words · 4 min
iSCSI is a popular protocol for block-level storage access, where the iSCSI initiator (client) communicates with an iSCSI target (storage server) over the network. The iSCSI target provides storage to the initiator in the form of one or more LUNs. iSCSI can be leveraged to provide persistent storage to containerized workloads.

In Kubernetes, iSCSI initiator operations are managed by processes running directly on the worker nodes. But there are situations where managing such initiator operations from within containers becomes essential. Developing storage plugins as containers for container orchestrators and running the kubelet within a container are popular use cases. For example, Docker Kubernetes Service (DKS) currently runs the kubelet as a container and supports the in-tree (shipped with Kubernetes) iSCSI plugin for Linux workloads. However, the iSCSI components were not designed with containers in mind, which makes managing iSCSI from within containers tricky. In this blog, we share the different options we explored to containerize iSCSI on Linux.

To provide iSCSI support, Kubernetes relies on the standard Linux iSCSI implementation; the open-iscsi packages are expected to be installed on cluster nodes. The packages provide:

- iscsiadm, the command-line tool for controlling iSCSI operations
- iscsid, the iSCSI daemon implementing the control path
- the iSCSI kernel modules implementing the data path, which are loaded and managed as part of the host kernel

Kubernetes uses iscsiadm to execute iSCSI commands on the node. For example, the kubelet performs the attach and detach of a persistent volume to a node, as well as the mount and unmount of a persistent volume to a pod, via iscsiadm commands. Apart from the in-tree plugin, iSCSI is also supported by CSI (Container Storage Interface) plugins.
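To illustrate the kind of iscsiadm invocations involved in attaching a volume, a typical discovery-and-login sequence looks like the following; the portal address and IQN are placeholders, and the commands require root and a reachable iSCSI target:

```shell
# Discover targets exposed by a portal (placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

# Log in to a discovered target (placeholder IQN); the LUNs then
# appear on the node as block devices (e.g. /dev/sdX)
iscsiadm -m node -T iqn.2001-04.com.example:storage.lun1 -p 192.168.1.10:3260 --login

# List active sessions
iscsiadm -m session
```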
iscsiadm works in conjunction with iscsid and the iscsi kernel modules. iscsiadm and iscsid communicate over a Unix domain socket, and iscsid communicates with the kernel modules over a netlink socket.

There are some cases where the call to the iscsiadm binary is made from a container and not from a host process. For the rest of this blog, we will refer to such containers as "iscsiadm containers". Popular use cases for managing iSCSI from a container are:

- Kubelet running as a container. Typically the kubelet runs as a host process. However, containerizing Kubernetes components has widely helped with ease of binary distribution and a uniform, predictable setup, and has proven invaluable in dev/test environments. In such cases the kubelet, alongside other Kubernetes components, runs in a container.
- CSI plugins. Typically there are two components to a CSI volume plugin: the Controller plugin and the Node plugin. It's common practice for such plugins to be deployed as Kubernetes Deployments (Controller plugin) and DaemonSets (Node plugin). Hence, a CSI plugin supporting iSCSI should be distributed as a container capable of issuing iscsiadm commands. Note that iscsiadm is invoked by the CSI Node plugin.

Most modern Linux distros use the open-iscsi project to build their iSCSI packages. Different distros use different versions of open-iscsi, and our experiments revealed that some versions of iscsiadm have additional library dependencies. These dependencies imply that the iscsiadm container should have access to those libraries as well.

There are several options to package and run the iSCSI components. We ran experiments on an array of Linux distros with different base images for the iscsiadm container. We've outlined our findings in the following three options.

Option 1: Install the open-iscsi package in the container image, run the container, and invoke iscsid and iscsiadm in the container.

Cons:
Hence, the kernel modules have to be managed from the host and the control plane components (iscsiadm and iscsid) have to be managed by the container. This is not a clean design.
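As a sketch of what option 1 looks like in practice, assuming a Debian-based base image (the image name, base image, and portal address are hypothetical, not prescribed by the text):

```shell
# Hypothetical Dockerfile fragment for option 1: the control plane
# components live entirely in the container image.
#   FROM debian:bookworm
#   RUN apt-get update && apt-get install -y open-iscsi

# The kernel modules (data path) must still be loaded on the host:
sudo modprobe iscsi_tcp

# Run iscsid in the foreground inside the container; --privileged is
# needed for the netlink socket and block device access.
docker run --privileged -d --name iscsi my-iscsi-image iscsid -f

# Invoke iscsiadm inside the same container.
docker exec iscsi iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
```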
Option 2: Install open-iscsi on the host and run iscsid on the host. Install open-iscsi in the container and invoke iscsiadm from the container. Run the container in the host network namespace. Here, the containerized iscsiadm accesses the host's iscsid, which is possible because both processes run in the same network namespace: iscsid's abstract Unix domain socket is scoped to the network namespace.

Cons: The iscsiadm in the container and the iscsid on the host may come from different open-iscsi versions; as noted above, different distros ship different versions, and mixing them is not guaranteed to work.

Option 3: Install open-iscsi on the host and run iscsid on the host. Run the container with the host root filesystem bind mounted into it. Then use either of these solutions:
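A minimal sketch of option 2's runtime invocation, assuming an image (here called my-iscsi-image, a placeholder) that has open-iscsi installed:

```shell
# iscsid runs on the host, e.g. managed by systemd.
sudo systemctl start iscsid

# The container shares the host network namespace (--net=host), so the
# container's iscsiadm can reach the host iscsid over its abstract
# Unix domain socket.
docker run --net=host --privileged my-iscsi-image \
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
```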
Use a wrapper script that chroots into the bind-mounted host root filesystem and executes the host's own iscsiadm there. Add this file to the container image, in place of a native iscsiadm, and grant it the right permissions (executable, owned by root).
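The wrapper can be sketched as follows; the mount point /host and the install path are assumptions for illustration, not prescribed by the text:

```shell
#!/bin/sh
# Hypothetical chroot wrapper, installed in the image as e.g.
# /usr/local/sbin/iscsiadm, ahead of any native iscsiadm on $PATH.
# Assumes the host root filesystem is bind mounted at /host, so the
# host's own iscsiadm binary and libraries are always the ones run.
exec chroot /host iscsiadm "$@"
```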
This ensures that every invocation of iscsiadm by the container calls the above chroot script. Trident uses this solution to containerize their Docker Volume Plugin.

Pros: Option 3 is a clean way to support containerized iSCSI environments across a heterogeneous set of Linux distros. Options 1 and 2 do not work when multiple Linux host distributions need to be supported by your solution; choosing option 3 ensures that dependencies across different Linux host distros are handled correctly, since the host's own iscsiadm and libraries are always the ones executed.

Running into roadblocks while containerizing iSCSI environments is expected. We ran into several while building iSCSI support for Docker Enterprise 3.0 and have documented our research and best practices here. We hope this is a useful guide for storage plugin authors and upstream Kubernetes developers.