Part 1: Top trends from analyzing the security posture of open-source Helm charts

In its many shapes and forms, infrastructure as code (IaC) has one goal—to make infrastructure provisioning efficient through modular and reusable code components. Terraform modules, CloudFormation templates, and Kubernetes Helm charts are all examples of reusable building blocks for configuration management.

Pre-configured IaC components are incredibly accessible via purpose-built repositories like the Terraform Registry and Artifact Hub, or open-source communities like GitHub. Typically, when IaC components are shared, they’re optimized for functionality, ease of use, and “minimum time to hello world”—not security. As we found in our State of Open Source Terraform Security report last year, it’s extremely common for IaC examples intended for public reuse to violate common security and compliance policies. When leveraging open-source IaC, it’s up to developers to put relevant security and compliance customizations in place and DevOps teams to enforce best practices throughout the pipeline.

For the containerized and distributed Kubernetes ecosystem, building default security configurations is equally challenging. Like other infrastructure languages, Kubernetes comes with its own set of industry security guidelines, including those from the Center for Internet Security (CIS). The CIS Kubernetes Benchmarks highlight numerous ways an object deployed to Kubernetes can be given too many permissions if a specific configuration is omitted from the Kubernetes YAML defining the object. It also makes recommendations against configuration options that may provide a larger surface area for attack.
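As an illustration of that guidance, here is a hypothetical pod spec (not taken from any scanned chart) with several options the CIS Benchmarks recommend against:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name for illustration
spec:
  hostPID: true            # shares the host's process namespace; CIS recommends against this
  hostNetwork: true        # shares the host's network namespace; CIS recommends against this
  containers:
    - name: app
      image: example/app:1.0
      securityContext:
        privileged: true   # grants near-host-level access; flagged by CIS guidance
```

Each of these options widens the attack surface, and each is easy to introduce (or fail to forbid) simply by omitting a safer explicit setting.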

When adding Helm packages (known as charts) to the mix, adding proper security and compliance configurations can be even more nuanced. By making Kubernetes manifests consumable through templatized abstractions, Helm adds another layer of complexity when determining what security best practices should be enforced where. Enforcing relevant security controls within Helm charts, however, can provide a solid, reproducible foundation for building secure Kubernetes applications.
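To sketch that extra layer (with a hypothetical chart excerpt), a Helm template often only renders a security control if the chart consumer supplies it in values.yaml, so whether the rendered manifest is secure depends on where, if anywhere, the value is defined:

```yaml
# templates/deployment.yaml (excerpt from a hypothetical chart)
# The securityContext block only renders when .Values.securityContext is set;
# an empty default in values.yaml silently omits these controls.
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      {{- with .Values.securityContext }}
      securityContext:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```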

To get an understanding of how secure-by-default reusable Kubernetes components are, we scanned thousands of open-source Helm charts available for reuse on Artifact Hub against common Kubernetes security and compliance policies via Checkov.

In this three-part series, we’re sharing our findings:

  • Part One (you are here): Covering the importance of addressing security at the Helm chart level, introducing our research methodology, and reviewing our high-level findings.
  • Part Two: Mapping commonly used Helm chart dependencies and the impact they have on Helm security, as well as providing guidance for how to improve your Kubernetes security posture.
  • Part Three: Analyzing one of the most commonly used Helm chart dependencies and its security posture.

First, why Helm?

Now in version three, Helm is by far the most popular way to manage Kubernetes applications and reduce the need for manually written and managed Kubernetes YAML. Helm is a graduated Cloud Native Computing Foundation (CNCF) project, and according to last year’s CNCF survey, it is used by 63% of developers.

Helm chart misconfigurations look exactly the same as Kubernetes YAML manifest misconfigurations, and thus many Kubernetes security best practices also apply to Helm. It’s essential to check these packaged templates for rendered manifests that lack built-in security and compliance considerations.

Since Helm’s release in 2016, charts were “blessed” via pull requests into a single GitHub repository, which became a nightmare to manage as Helm’s popularity grew. At the same time, other commonly reused components within the Kubernetes ecosystem—such as operators—needed a searchable, versionable home. And so, in January 2020, Artifact Hub was launched as a centrally queryable location for publishing Helm charts from multiple sources. The Helm community is growing fast, with 2,700+ charts available via Artifact Hub at the time of writing.

In contrast, Kustomize, another Kubernetes package manager, is template-free, meaning that specific objects such as charts aren’t needed, making it hard to gather the type of data required for this report. Kustomize is also a lot less mature, with only a small number of samples available.

Sourcing data from alternative locations such as GitHub was also tested, but in practice it yielded less practical and more complex data due to:

  • Abandoned, outdated, or incomplete charts
  • Multiple forks of charts that require extra processing to avoid unnecessarily inflating the data
  • The lack of a standard versioning solution for each result, making it difficult to model the same datasets over time

Data collection with Helm Scanner and Checkov

To scan Artifact Hub’s rendered Kubernetes manifests and collect data for use in this research, we created a simple tool, Helm Scanner. As part of this project, we open-sourced the scanner code and formatting of the CSVs so that anyone can produce the same dataset. For more information, check out the project on GitHub.

How Helm Scanner works

The scanner starts with an unfiltered API query to Artifact Hub to download the latest version of every Helm chart via its TGZ resource URI. Each chart is then unpacked and recorded along with its version, repository, and name.

Each Helm chart is then templated via the helm template command of Helm 3 to pull in required dependencies, ensuring dependency-injected security issues are captured. Those dependencies are recorded against their parent chart and added to the list of charts to scan.

The resulting dataset is based on the latest available release of 2.3k Helm charts and their dependencies, covering 6.3k rendered Kubernetes manifests across 618 repositories and 21 Kubernetes resource types. 

Following the templating process, we passed the resulting Kubernetes manifests through Checkov.

Checkov for Kubernetes and Helm

For those not familiar, Checkov is an open-source IaC static analysis tool that scans infrastructure configuration against hundreds of security, compliance, and DevOps best practices. Checkov has about 200 Kubernetes policies built into it, which you can check out in our policy index. Of those policies, 103 produced relevant results to our Helm inputs.

Once each Kubernetes manifest was checked and its passing and failing policies recorded, we made a few necessary tweaks to our scanner to better visualize the data and ensure we focused on relevant issues.

Because Helm charts can contain logic based on their destinations, tests at the chart stage don’t always paint a complete picture. A simple example is Checkov’s Default Namespace policies, which reported a concerning number of failures. These are not relevant findings: by default, Helm 3 dynamically creates a new namespace for each deployed chart, but that behavior isn’t captured in the local templating output.

Finally, we analyzed the data and are excited to share our findings with you!

Top findings and trends

Drilling into the resulting Helm chart security data, we found that, similar to our previous research on open-source IaC, misconfigurations were widespread:

  • 71% of the 618 scanned repositories contained misconfigurations.
  • 46.5% of the 2.3k scanned Helm charts contained misconfigurations.


When looking at the types of policies that failed at a higher-than-baseline rate, three areas in particular jumped out:

  • Modeling CPU and memory limits
  • Root containers, shared namespaces, and privilege escalation
  • Low-hanging fruit

Trend 1: Modeling CPU and memory limits

Despite advice from the CIS Kubernetes guidelines, at least 60% of scanned manifests were missing guardrails for CPU requests, CPU limits, and memory limits.


Without these defined, the end-user has no expectation of “common resource consumption” for a given application, and the Kubernetes scheduler has no way to usefully manage expected resource consumption for a given workload.
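These guardrails are a small addition to a container spec. A minimal sketch (the values here are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-limited-pod   # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:1.0
      resources:
        requests:
          cpu: 250m            # the scheduler uses requests to place the pod
          memory: 128Mi
        limits:
          cpu: 500m            # limits cap what the container can consume on the node
          memory: 256Mi
```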

Tools such as AWS’s CodeGuru Profiler may give rise to an automated way to append useful resource data to future charts, as a profile can be created from an application running on a Kubernetes cluster of known size and capacity.

There’s also an argument to be made that these values can only be truly tuned and selected by the end-user. This makes admission control with security tools like Checkov in the end user’s CI/CD pipeline or Kubernetes cluster just as critical as static analysis performed on public Helm resources. While analysis of an open-source Helm chart ensures secure defaults, checking again on the end user’s side ensures the applied resource settings are secure.

Trend 2: Root containers, shared namespaces, and privilege escalation

A number of high-priority failures such as privilege escalation, underlying host access, and shared group namespaces were also flagged in our dataset. In some cases, over 70% of the scanned manifests suffered from one of these three issue categories.


We’ll dive deep into a specific example in part three. Still, a common theme here was the lack of failsafe security context settings—likely omitted to keep the resulting manifests shorter.

When these items are omitted, security controls must be defined in a PodSecurityPolicy, which is usually managed by the cluster administrator, or they don’t get defined at all. Ensuring they are also set at the pod level is good practice, especially for templated environments such as Helm, which shouldn’t require manual editing.
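A sketch of what setting those failsafe items at the pod and container level can look like (an illustrative manifest with assumed values, not a chart from our dataset):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: locked-down-pod        # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true         # refuse to run containers as root
    runAsUser: 10001           # arbitrary non-root UID for illustration
  containers:
    - name: app
      image: example/app:1.0
      securityContext:
        allowPrivilegeEscalation: false  # block setuid-style escalation
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]        # drop all Linux capabilities by default
```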

Trend 3: Low-hanging fruit

After excluding the high-priority issues above, we saw a consistent baseline of charts where Kubernetes basics, such as image tagging and probe configuration, needed attention.


Nearly 25% of pod definitions didn’t have a liveness or readiness probe configured. Much like our previous findings, this prevents the machinery of the Kubernetes scheduler from taking action and maintaining cluster health as efficiently as it should. It can also potentially cause application downtime due to stale “active” paths in a load balancer or ingress.
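For reference, configuring probes (and pinning an image tag rather than relying on :latest) is a small addition to a container spec. This is an illustrative sketch, with the endpoint paths and port assumed:

```yaml
containers:
  - name: app
    image: example/app:1.4.2   # pinned tag instead of a mutable :latest
    livenessProbe:             # restarts the container if the app hangs
      httpGet:
        path: /healthz         # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:            # gates traffic until the app can serve
      httpGet:
        path: /ready           # assumed readiness endpoint
        port: 8080
      periodSeconds: 5
```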

Misconfigurations in these Kubernetes core competencies are important—and relatively straightforward—to address.

Coming up in part two

In part two of this series, we’ll dive into another interesting observation from the dataset and analyze the role Helm chart dependencies play in Helm security. Dependencies represent commonly reused charts and, thus, are great examples to examine more closely from a security and compliance perspective. We’ll also provide guidance and recommendations for automating and enforcing security best practices when leveraging open-source Helm charts.

Check out Helm Scanner to generate and analyze your own infrastructure. We’d love to hear what you find in our #CodifiedSecurity Slack community!