So many people complaining about Helm but I'll share my 2 experiences. At my last 2 companies we shipped Helm charts for administrators to easily deploy our stuff.
It worked fine and was simple enough which is what the goal was. But then people came along wanting all sorts of customisations to make the chart configurable to work in their environments. The charts ended up getting pretty unwieldy.
Helm is a product that serves users who like customization to the nth-degree. But everyone else hates it.
Personally, I would prefer it if the 'power users' just got used to forking and maintaining their own charts with all the tweaks they want. The reason they don't do that of course is that it's harder to keep up with updates - maybe that's the problem that needs solving.
After some work with Kubernetes, I must say: Helm is complexity hell. I'm sure it has many features, but most aren't needed and they increase the complexity nonetheless.
Also, please fix the "default" helm chart template, it's a nightmare of options and values no beginner understands. Make it basic and simple.
Nowadays I would much prefer to just use Terraform for Kubernetes deployments, especially if you use Terraform anyway!
Helm is my example of where DevOps lost its way. The insanity of multiple tiers of templating in a whitespace-scoped language... it blows my mind that so many of us just deal with it.
Nowadays I'm using CUE in front of TF & k8s, in part because I have workloads that need a bit of both and share config. I emit tf.json and YAML as needed from a single source of truth.
The problem with Kubernetes, Docker and anything CNCF related is what happens when everyone and their dog tries to make a business out of an OS capability with venture capital.
shudders.. `| nindent 12`..
I've been trying to apply CUE to my work, but the tooling just isn't there for much of what I need yet. It also seems really short-sighted that it is implemented in Go which is notoriously bad for embedding.
Both Jsonnet and CUE are implemented in Go, which happens to be the language Helm is written in. While I agree that this reduces general embeddability, it's ripe fruit for Helm to integrate either or both of these as alternatives to YAML templating.
We evaluated CUE, Jsonnet and CDK8s when we wanted to move on from Helm, and ended up using CDK8s. It's proven to be a good pick so far; it's in TypeScript.
> seems really short-sighted that it is implemented in Go
CUE was a fork of the Go compiler (Marcel was on the Go team at the time and wanted to reuse much of the infra within the codebase)
Also, so much of the k8s ecosystem is in Go that it was a natural choice.
> CUE was a fork of the Go compiler (Marcel was on the Go team at the time and wanted to reuse much of the infra within the codebase)
Ah, that makes sense, I guess. I also get the feeling that the language itself is still under very active development, so until 1.0 is released I don't think it matters too much what it's implemented in.
> Also, so much of the k8s ecosystem is in Go that it was a natural choice.
That might turn out to be a costly decision, imho. I wanted to use CUE to manage a repository of schema definitions, and from these I wanted to generate other formats, such as JSON schemas, with constraints hopefully taken from the high-level CUE.
I figured I'd try and hack something together, but it was a complete non-starter since I don't work within the Go ecosystem.
Projects like the CUE language live or die by an active community with related tooling, so the decision still really boggles my mind.
I'll stay optimistic and hope that once it reaches 1.0, someone will write an implementation that is easily embedded for my use-cases. I won't hold my breath though, since the scope is getting quite big.
Why don't you work with the Go ecosystem? You don't use K8s, terraform, etc? What ecosystem do you prefer?
what language would you have chosen?
> I wanted to use CUE to manage a repository of schema definitions, and from these I wanted to generate other formats, such as JSON schemas, with constraints hopefully taken from the high-level CUE.
Have you tried a Makefile to run cue? There should be no need to write code to do this
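For illustration, a minimal Makefile along those lines (the `schemas/` and `gen/` layout here is hypothetical; `cue export --out` is the standard export command, and recipe lines must be tab-indented):

```make
# Hypothetical layout: CUE schemas in ./schemas, generated files in ./gen
CUE_SRCS := $(wildcard schemas/*.cue)

.PHONY: all
all: gen/config.json gen/config.yaml

gen/config.json: $(CUE_SRCS)
	@mkdir -p gen
	cue export ./schemas --out json > $@

gen/config.yaml: $(CUE_SRCS)
	@mkdir -p gen
	cue export ./schemas --out yaml > $@
```

CUE also has importers/exporters for JSON Schema and OpenAPI, though the exact invocation depends on your CUE version, so check `cue help` for your release.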
Holos[1] is an interesting project I’ve been looking at trying out.
1. it seems like development has largely ceased since Sept
2. it looks to only handle helm, not terraform, I'm looking for something to unify both and deal with dependencies between charts (another thing helm is terrible at)
Back when my job involved using Kubernetes and Helm, the solution I found was to use `| toJson` instead: it generates one line that happens to be valid YAML as well.
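A sketch of the two styles side by side (field names are generic; `toYaml`, `toJson`, and `nindent` are standard Helm template functions):

```yaml
# nindent style: whitespace-sensitive, the indent count must match nesting depth
spec:
  template:
    spec:
      containers: {{- toYaml .Values.containers | nindent 8 }}

# toJson style: emits one line that is also valid YAML (YAML is a superset
# of JSON), so indentation no longer matters for the embedded value
spec:
  template:
    spec:
      containers: {{ toJson .Values.containers }}
```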
cue and argocd here. it is pretty neat.
the tf is still in hcl form for now.
RIP Ksonnet, we hardly knew what we were missing
jsonnet is the main DX issue therein
I don't think I've ever seen a Helm template that didn't invoke nightmares. Probably the biggest reason I moved away from Kubernetes in the first place.
We have several Helm charts we've written at my job and they are very pleasant to use. They are just normal k8s templates with a couple of values parameterized, and they work great. The ones people put out for public consumption are very complex, but it isn't like Helm charts have to be that complex.
In my book the main problem with Helm charts is that every customization option needs to be implemented explicitly by the chart author. There is no way for a chart consumer to change anything the author did not allow to be changed. That leads to the overly complex and config-heavy charts people publish - just to make sure everything is customizable for consumers.
I'd love something that works more like Kustomize but with other benefits of Helm charts (packaging, distribution via OCI, more straight forward value interpolation than overlays and patches, ...). So far none have ticked all my boxes.
Kustomize can render Helm charts. It's "very basic" as in Kustomize will call the Helm binary to render the template, ingest it and apply patches.
I wrote a tool called "easykubenix" that works in a similar way, render the chart in a derivation, convert the YAML to JSON, import JSON into the Nix module structure and now you're free to override, remove or add anything you want :)
It's still very CLI deploy centric using kluctl as the deployment engine, but there's nothing preventing dumping the generated JSON (or YAML) manifests into a GitOps loop.
It doesn't make the public charts you consume any less horrible, but you don't have to care as much about them at least
FluxCD brings a really nice helm-controller that allows changing manifests via a postRenderers stanza while still letting you use regular helm tooling against the cluster.
Yeah, but then it is yet another layer of configuration slapped on top of the previous layer of configuration. That can't be the best solution, can it? Same thing for piping helm template through Kustomize.
This. Our helm charts are flat and for years only passed in the image as a variable.
That's generally what I try to push for in my company.
A single-purpose chart for your project is generally a lot easier to grok and consume than the fully general alternative.
I think the likes of "kustomize" is probably a more sane route to go down. But our entire infrastructure is already helm so hard to switch that all out.
I've personally boiled down Helm vs. Kustomize to the following:
Does your Kubernetes configuration need to be installed by a stranger? Use Helm.
Does your Kubernetes configuration need to be installed by you and your organization alone? Use Kustomize.
It makes sense for Grafana to provide a Helm chart for Grafana Alloy that the employees of Random Corp can install on their servers. It doesn't make sense for my employer to make a Helm chart out of our SaaS application just so that we can have different prod/staging settings.
I'm ashamed to say it but I cannot for the life of me understand how kustomize works. I could not ever figure out how to do things outside the "hello world" tutorials they walk you through. I'm not a stupid person (citation needed lol), but trying to understand the kustomize docs made me feel incredibly stupid. That's why we didn't go with that instead of Helm.
Helm requires you to write a template and you need to know (or guess) up front which values you want to be configurable. Then you set sane defaults for those values. If you find a user needs to change something else you have to edit the chart to add it.
With Kustomize, on the other hand, you just write the default as perfectly normal K8s manifests in YAML. You don't have to know or care what your users are going to do with it.
Then you write a `kustomization.yaml` that references those manifests somehow (they could be in the same folder, or you can use a URL). Kustomize simply concatenates everything together as its default behaviour. Run `kubectl kustomize` in the directory with `kustomization.yaml` to see the output. You can run `kubectl apply -k` to apply it to your cluster (and `kubectl delete -k` to delete it all).
From there you just add what you need to `kustomization.yaml`. You can do a few basics easily like setting the namespace for it all, adding labels to everything and changing the image ref. Keep running `kubectl kustomize` to see how it's changing things. You can use configmap and secret generators to easily generate these with hashed names and it will make sure all references match the generated name. Then you have the all powerful YAML or JSON editing commands which allow you to selectively edit the manifests if you need to. Start small and add things when you need them. Keep running `kubectl kustomize` at every step until you get it.
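A minimal `kustomization.yaml` along those lines might look like this (resource and image names are made up for illustration):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: my-app            # set the namespace on everything
commonLabels:                # add labels to everything
  app.kubernetes.io/name: my-app

resources:                   # plain, untemplated manifests
  - deployment.yaml
  - service.yaml

images:                      # change the image ref without touching the manifest
  - name: my-app
    newTag: v1.2.3

configMapGenerator:          # generates a ConfigMap with a hashed name and
  - name: app-config         # rewrites all references to match
    literals:
      - LOG_LEVEL=info
```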
Yes, this is the key. Helm charts should basically be manifests with some light customization.
Helm is not good enough to develop abstractions with. So go the opposite way: keep it stupid simple.
Pairing helm with Kustomize can help a lot as well. You do most of the templating in the helm chart but you have an escape hatch if you need more patches.
What did you move to?
Infrastructure as code should, from the beginning, have been done through a strictly typed language with a solid dependency and packaging contract.
I know that there are solutions like CDK and SST that attempt this, but because the underlying mechanisms are not native to those solutions, it's simply not enough, and the resulting interfaces are still way too brittle and complex.
I mean terraform provides this but using it doesn't give a whole lot of value, at least IME. I enforce types but often an upstream provider implementation will break that convention. It's rarely the fault of the IAC itself and usually the fault of the upstream service when things get annoying.
I only wish terraform were more recognized by upstream projects, like postgres, tailscale, and ingress operators.
A one-time adoption from kubectl yaml or helm to terraform is doable - but syncing upstream updates is a chore.
If terraform (or another rich format) was popular as source of truth - then perhaps helm and kubectl yaml could be built from a terraform definition, with benefits like variable documentation, validation etc.
I don't think I want to use kubernetes (or anything that uses it) again. Nightmare of broken glass. Back in the day Docker Compose gave me 95% of what I wanted and the complexity was basically one file with few surprises.
Docker Compose still gets you 95% of what you need. I wish Docker Swarm had survived.
I got the impression it was like a smaller, more opinionated k8s. Like a mix between Docker Swarm and k8s.
It's rare that I see it mentioned though, so I'm not sure how big the community is.
I’d wager that like half the teams (at least) using kubernetes today should be using Nomad instead. Like the team I’m on now, where I’m literally the only one familiar with Kubernetes and everyone else only has familiarity with more classic EC2-based patterns. Getting someone to even know what Helm does is its own uphill battle. Nomad is a lot simpler. That’s what I like about it a lot.
For better or for worse, it's an orchestrator (for containers/scripts/JARs/bare metal), full stop.
Everything else is composable from the rest of the Hashicorp stack - Consul (service mesh and discovery), Vault (secrets) - allowing you to use as much or as little as you need, and it's truly able to scale to a large deployment as needed.
In the plus column, picking up its config/admin is intuitive in a way that helm/k8s never really is.
Philosophy-wise you can put it in the Unix way of doing things - it does one thing well and gets out of your way, and you add to it as you need/want.
Whereas k8s/helm etc. are one way or the highway - leaving you fighting the deployment half the time.
Mitchell Hashimoto was a genius when it came to opinionated design, and that was Hashicorp's biggest strength when it was part of their culture.
It's a shame Nomad couldn't overcome the K8s hype-wagon, but either way IBM is destroying everything good about Hashicorp's products and I would proceed with extreme caution deploying any of their stuff net-new right now...
What happened to it?
I'm still using it without a single issue (except when it messes up the iptables rules).
I still confidently upgrade Docker across all the nodes, workers and managers, and it just works. Not once has it caused an issue.
Docker the company bet big on Swarm being the de facto container orchestration platform for businesses. It just got completely overshadowed by k8s. Swarm continues to exist and be actively developed, but it’s doomed to fade into obscurity.
For some reason I assumed it was unsupported. That doesn't seem to be the case.
The original iteration of Docker Swarm, now known as Classic, is deprecated. Maybe you were thinking of that?
As I read more about it, yes, that is indeed the case.
If you can confidently get it done with docker-compose, you shouldn't even think about using k8s IMO. Completely different scales.
K8s isn't for running containers, it's for implementing complex distributed systems: tenancy/isolation and dynamic scaling and no-downtime service models.
One of the problems seems to be that most moderately complex companies where any one system would be fine with Compose would want to unify their operations, thus going to a complex distributed system with k8s. And then either your unified IT/DevOps team is responsible for supporting all systems on k8s, or all individual dev teams have to be competent with k8s. Worst case, both.
I've embraced kustomize and I like it. It's simple enough and powerful enough for my needs. A bit verbose to type out all the manifests, but I can live with it.
This is what I've done too. Just enough features easily available to handle everything i've ever needed in the simple deployments I use. Secrets, A/B configuration, even "dynamic reload" of a Deployment for Configmap changes.
Gets the job done.
I'm using sed on my yaml files. Currently considering kustomize instead, but I wouldn't touch Helm with a 10 foot pole.
Kustomize with ArgoCD is my go to
Could you explain this a bit? Is helm an optional part of the k8s stack?
Helm is not official or blessed or anything, just another third-party tool people install after installing k8s.
If you used helm + terraform before, you'll have no problem understanding the terraform kubernetes provider (as opposed to the helm provider).
It does make it challenging to track operators, as upstream usually only provides/documents helm installation.
If you write your own tf definition of operator x v1, it can be tricky to upgrade to v2 - as you need to figure out what changes are needed in your tf config to go from v1 to v2.
The way I understand it, helm is the npm of k8s.
You can install, update, and remove an app in your k8s cluster using helm.
And you release a new version of your app to a helm repository.
The thing I would add to this is that in most cases you need to manually provide config values to the install.
This sounds okay in principle, but I far too often end up needing to look through the template files (what helm deploys) to understand what a config option actually does, since documentation is hit or miss.
Helm is sort of like a docker (or maybe docker compose) for k8s, in terms of a helm chart is a prepackaged k8s "application" that you can ship to your cluster. It got very popular very quickly because of the ease of use, and I think that was premature which affects its day-to-day usability.
[deleted]
It's a client-side preprocessor essentially. The K8s cluster knows nothing about Helm as it just receives perfectly normal YAMLs generated by Helm on the client.
I really appreciate the k3s default with HelmChart type and operator installed. Makes working with charts simpler in my view
Yes, I use flux which has a similar HelmChart/HelmRelease resource. One of the things that took me a while to "get" with K8s is operators are just clients running on the cluster.
Incidentally, Terraform is the only way I want to use Helm at all. Although the Terraform provider for Helm is quite cumbersome to use when you need to set values.
Do you have any resources regarding using tf to handle deployments ?
I’d love to dig a bit.
The kubernetes provider mostly just works exactly as you expect
…but how do you install helm charts via terraform?
Is there a helm provider?
If not, what would be the right way to install messy stuff like nginx ingress, cert-manager, etc.?
There is a helm provider. Why would you need it?
Can't you just use the kubernetes provider?
People probably don't realize that helm is mostly templating for the YAML Kubernetes wants (plus a lot of other stuff that increases complexity).
There are many applications which are distributed as helm charts. Those charts install multiple deployments, service accounts and whatnot. They barely document all these things.
So if you want to avoid helm, you gotta do a whole lot of reverse-engineering. You gotta render a chart, explore all the manifests, explore all the configuration options, find out if they're needed or not.
An alternative is to just use helm, invoking it and forgetting about it. You can't blame people for going the easy way, I guess...
Yep, this 100%.
Every time a technology has become the "de facto" standard and people propose "simpler alternatives", this is the kind of practical detail that makes a GIANT difference and that's usually never mentioned.
Network effect is a thing, Helm is the de facto "package manger" for Kubernetes program distribution. But this time there are generally no alternative instructions like
Helm is truly a fractal of design pain. Even the description as a "package manager" is a verifiable lie - it's a config management tool at best.
Any tool that encourages templating on top of YAML, in a way that prevents the use of tools like yamllint on them, is a bad tool. Ansible learned this lesson much earlier and changed syntax of playbooks so that their YAML passes lint.
Additionally, K8s core developers don't like it and keep inventing things like Kustomize and similar that have better designs.
Imho, anyone who thought putting 'templating language' and 'significant whitespace' together was a good idea deserves to be in The Hague.
Seriously. I’ve lost at least 100 hours of my life debugging whitespace in templated yaml. I shudder to think about the total engineering time wasted since yaml’s invention.
You blame YAML but I blame helm. I can build a dict in Python and dump it as YAML. I've painlessly templated many k8s resources like this. Why can't we build helm charts in a DSL or more sensible syntax and then dump k8s manifests as YAML? Using Go templating to build YAML is idiocy and the root of the issue here.
There's lots of advice on StackOverflow against building your own JSON strings instead of using a library. But helm wants us to build our own YAML with Go templating. Make it make sense.
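A sketch of what that approach looks like in practice — a hypothetical helper, using the stdlib `json` module since kubectl accepts JSON as readily as YAML:

```python
import json

def deployment(name, image, replicas=1, env=None):
    """Build a k8s Deployment as a plain dict -- no text templating, no
    whitespace to get wrong; nesting is handled by the data structure."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "env": [{"name": k, "value": v}
                                for k, v in (env or {}).items()],
                    }],
                },
            },
        },
    }

# Serialize; the output can be piped straight to `kubectl apply -f -`
print(json.dumps(deployment("web", "nginx:1.27", replicas=3), indent=2))
```

The whole "chart" becomes ordinary functions you can compose, type-check, and unit test, instead of text you interpolate.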
Yaml wouldn't be so bad if they made the templates and editors indent-aware.
Which is a thing with some Python IDEs, but it's maddening to work on anything that can't do this.
autocmd FileType yaml setlocal et ts=2 ai sw=2 nu sts=0
I'm sure Emacs and others have something similar
we use cue straight to k8s resources. it made life way better.
but we don't have tons of infra so no idea how it would run for big thousands-of-employees corps.
Helm shines when you’re consuming vendor charts (nginx-ingress, cert-manager, Prometheus stack). It’s basically a package manager for k8s. Add a repo, pin a version, set values, and upgrade/rollback as one unit. For third-party infra, the chart’s values.yaml provides a fairly clean and often well documented interface
Yeah, I agree. Creating and maintaining helm charts sucks, but using them (if they are properly made and expose everything you want to edit in the values.yaml) is a great experience with gitops tools such as FluxCD or helmfile.
Helm is the number 1 reason I'm looking to leave behind my DevOps/SRE job. Basically every job or project I accept involves working with helm in some capacity and I'm just tired of working with mostly garbage helm charts, especially big meta-charts or having to fork a chart to add a config parameter value override somewhere. Debugging broken chart installs or incomplete upgrades is also nothing but pain. Most helm charts remind me of working with ansible-galaxy roles around ~2015.
Been using bjw-s' common library chart (& its app-template companion) [1] for my homelab and it improved my experience with helm by a lot, since you only have to edit the values.yaml without doing any weird text templating. Hope he gets more funding for maintenance so it can be used for more "production" systems.
Most people in this thread, it seems, just want a simple way to manage Kubernetes manifests, something that keeps track of different settings for different environments and what's in common for each environment in order to generate the final manifests for an environment. If so, Helm is over-engineered for your use-case. Stick with Kustomize or jsonnet.
Helm's contribution (as horrible as text templating on YAML is) is, yes, to be a package manager. Part of a Helm chart includes jobs ("hooks") that can be run at different stages (pre-install, pre-upgrade, etc.) as well as a job to run when someone runs "helm test", and a way to rollback changes ("helm rollback"), which is more powerful than just rolling back a Deployment, because it will rollback changes to CRDs, give you hooks/jobs that can run pre- and post-rollback, etc.
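Those hooks are declared with annotations on otherwise ordinary manifests. A sketch of a pre-install/pre-upgrade migration Job (the Job itself is hypothetical, but the `helm.sh/hook*` annotations are Helm's standard hook mechanism):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"               # ordering among hooks of the same phase
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: my-app:1.0.0
          command: ["/bin/migrate", "up"]
```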
Helm charts are meant to be written by someone with the relevant skills sitting next to the developers, so that it can be handed off to another team to deploy into production. If that's not your organization or process, or if your developers are giving your ops teams Docker images instead of Helm charts, you're probably over-engineering by adopting it.
As someone who started out with Helm and has not used any of its alternatives, I had no idea how hated it is. Maybe it's just because of how I use it, but once I got the hang of the template charts I don't feel like I'm running into any hurdles while using it.
I have several Docker hosts in my home lab as well as a k3s cluster and I'd really like to use k3s as much as possible. But when I want to figure out how to deploy basically any new package they say here are the Docker instructions, but if you want to use Kubernetes we have a Helm chart. So I invariably end up starting with the Docker instructions and writing my own Deployment/StatefulSet, Service, and Ingress yaml files by hand.
I've found it easier, in most cases, to run `helm template ...` on an existing chart, and then use the output as my starting point.
Amazing how people are complaining while proposing shit solutions. Seems like nobody is doing infra seriously there.
Probably they have a different experience! I love using helm, but then I got used to Go templates and subcharts done right. I use it at work a lot and at home on my homelab with no issues at all: I guess it's the usual tabs-vs-spaces debate.
The alternatives to helm are not that interesting to me: I still have nightmares from when I had to use jsonnet and kustomize just for istio, with upgrade hell.
So I am sticking with helm, as it feels way more straightforward when you need to change just a few things from an upstream open source project: far fewer lines to maintain and change!
So this is neither helm the Emacs completion framework nor helm the wavetable synthesizer?
I was also confused by the title. Off topic but Vital is the newer wavetable synth by the maker of the Helm synth, Matt Tytel. The synth Helm is a really good foss subtractive synth but not wavetable
WOW! This is great!! What is it?
Helm is the necessary evil that exists because Kubernetes chose YAML.
Helm works at text level. This approach could have worked with YAML, JSON, XML or any other text format. You can template C++ code with Helm if you really want. It's just golang templates below.
And that makes it wrong. YAML is a structured format, and proper templating should work with JSON-like data structures, not with text. Kustomize is a better example.
Came here to feel the temperature of the comments, and unsurprisingly, most folks seem to have plenty of gripes with Helm.
A Helm chart is often a poorly documented abstraction layer that makes it nearly impossible to relate the managed application's original documentation back to the Helm chart's "interface". The number of times I had to grep through the templates to figure out how to access a specific setting...
Running my home lab, I've grown sick of constant Renovate PRs against the helm charts in use. I recall one "minor" CoreDNS update not long ago that messed with the exposed ports in the service, and installs broke for a lot of folks.
If I need to run some software now, I `helm template` the resources and commit those to git. I'm so tired of some random "Extended helm chart to customise labels / annotations in $some resource" change notes. Traefik and Cilium are the only helm charts I use, the rest I `helm template` in to my gitops repo, customize and forget.
At my day job in the past, we've debugged various Helm issues caused by the sprig library it uses internally. We fear updating Argo CD and Helm for what surprises are in store for us, and we're starting to adopt the rendered-manifests pattern for greater visibility to catch such changes.
Can I hear from those of you who have had a good IAC experience? What tools worked well?
ArgoCD + Helm
But really any kind of reconciler, e.g. Flux or Argo with helm, works very well. Helm is only used as a templating tool, i.e. `helm template` is the only thing allowed. It works very well and I've run production systems for years without major issues.
I don't really understand how people have so much trouble with Helm. Granted, YAML whitespace + Go templating is sometimes awful, but it is the least bad tool out there that I have tried, and once you learn the arcane ways of {{- it's mostly a non-issue.
I would recommend writing your own charts for the most part and using external charts when they are simple or well proven. Most applications you want to run aren't that complicated; they are mostly a collection of environment variables, config files, and arguments.
If I could wish for a replacement of helm, it would be helm template with the chart implemented in a typed language, e.g. TypeScript, instead of go template but backwards compatible with go template.
Kubernetes API uses JSON. JSON is JavaScript Object Notation. So naturally the best approach to work with JSON is to write JavaScript or TypeScript code. You can just output JSON and consume it with kubectl. You can read data from whatever format you want, process it and output JSON. You can write your little functions to reduce boilerplate. There are many options that are obvious once you just embrace JavaScript.
Of course most other programming languages will work just as well, it's just JavaScript being the most natural fit for JSON.
There are some features of Kubernetes that are only available in the Go client like Informers. So Go is a much more natural fit (you can move between JSON and Go structs with one function call + error check)
I wrote Go and Python programs that constructed the manifests using the native Kubernetes types and piped them into kubectl apply. Had to write my own libraries for doing migrations too. But after that bootstrapping it worked great.
Reminds me of cdk8s, if one is looking for a framework (if it can be called that).
cdk8s.io
Probably an unpopular opinion, but for a couple of jobs now I’ve written “just Python” to generate k8s manifests, and it works really, really well.
There’s packages. You can write functions. You can write tests trivially (the output is basically a giant map that you just write out as yaml)…
I’m applying this to other areas too with great success, for example our snowflake IaC is “just python” that generates SQL. It’s great.
Like the others, I'm using a programming language except it is Javascript because we're a Node.js company. It actually works well enough
I'm quite happy with FluxCD + Helm. Helm also supports creating library charts (basically component libraries) that can improve the experience of creating and maintaining helm charts by a lot.
I really don't like helm. I think we have arrived at abstraction over abstraction over abstraction.
The last project I had to be involved with used kustomize for different environments, flux to deploy, helm to use a helmchart which took in a list of configmaps using "valuesFrom". Not only does kustomize template and merge together yaml but so does the valuesFrom thing, however at "runtime" in the cluster.
There's just not a single chance to get any coherent checking/linting or anything before deployment. I mean how could a language server even understand how all this spaghetti yaml merges together? And note that I was working on this as a developer in a very restricted environment/cluster.
Yaml is too permissive already, people really start programming with it. The thing is, kubernetes resources are already an abstraction. That's kind of the nice thing about it, you can create arbitrary resources and kubernetes is the management platform for them. But I think it becomes hairy already when we create resources that manage other resources.
And also, sure some infrastructure may be "cattle" but at some point in the equation there is state and complexity that has to be managed by someone who understands it. Kubernetes manifests are great for that, I think using a package manager to deploy resources is taking it too far. Inevitably helm charts and the schema of values change and then attention is needed anyway. It makes the bar for entry into the kubernetes ecosystem lower but is that actually a good thing for the people who then fall into it without the experience to solve the problems they inevitably encounter?
Sorry for the rant but given my second paragraph I hope there is some understanding for my frustrations. Having all that said, I am glad they try to improve what has established itself now and still welcome these improvements.
> Update any automation that uses these renamed CLI flags.
I wish software providers like this would realize how fucking obnoxious this is. Why not support both? Seriously, leave the old, create a new one. Why put this burden on your users?
It doesn't sound like a big deal but in practice it's often a massive pain in the ass.
Obligatory complaint about Bitnami rug-pulling and effectively ruining a very nice ecosystem.
Helm sucks.
Helm, and a lot of devops tooling, is fundamentally broken.
The core problem is that it is a templating language and not a fully functional programming language, or at least a DSL.
This leads us to the mess we are in today. Here is a fun experiment: Go open 10 helm charts, and compare the differences between them. You will find they have the same copy-paste bullshit everywhere.
Helm simply does not provide powerful enough tools to develop proper abstractions. This leads to massive sprawl when defining our infrastructure. This leads to the DevOps nightmare we have all found ourselves in.
I have developed complex systems in Pulumi and other CDKs: 99% of the text just GOES AWAY and everything is way more legible.
You are not going to create a robust solution with a weak templating language. You are just going to create more and more sprawl.
Maybe the answer is a CDK that outputs helm charts.
And it STILL uses text-based Go templates instead of a proper language based on structured input and output? This was always my main pain point with Helm, and for many others I talked to. This major upgrade was years in the making and they couldn't add support for a single one of the many available options like CUE, Jsonnet, or KCL? What an utter waste.
Now that y'all are here, has anyone tried timoni as an alternative to helm? I have it on my to-try list.
Yes, I currently have 2 timoni modules in production, deployed with ArgoCD, and it's great! It has a bit of a learning curve, and it takes some getting used to that there is no "overwriting" of values, but it saves so much time on template iteration.
The language server support for cue could be better, though.
No commits in 3 months.
Not everything needs to be worked on like it’s a full time job.
Imagine 1,000s of helm charts. Your only abstraction tools are an umbrella chart or a library chart. There isn't much more in helm.
I liked KRO's model a lot, but stringly typed text templating at the scale of thousands of services doesn't work; it's not fun when you need to make a change. I kinda like jsonnet plus the Google CLI whose name I forget right now, and the abstraction the Grafana folks did too, but ultimately I decided to roll my own thing and leaned heavily into type safety for this. It's ideal. With any luck I can open source it. There are a few similar ideas floating around now - Scala Yaga is one.
I've used it in the past (for a quite small deployment I must say), but have been very happy with it. Specifically the diff mode is very powerful to see what changes you'll apply compared to what's currently deployed.
nightmares (if anything went wrong I had to blow the Helm stuff away and start over) on top of nightmares (Kubernetes, when I was trying it, was tons of namespaces called beta, and you never knew what to update to, when you had to update, or what was incompatible), on top of the realization that no one should be using Kubernetes unless you have over 50 servers running many hundreds of services. Otherwise it's just a million times simpler to use Docker Compose.
Can you recommend any articles about minimum scale necessary to make Kubernetes worth it?
If you count 3 control plane nodes and at least one or two extra servers worth of space for pods to go when a node goes down, I'd say don't bother for anything less than 6-7 servers worth of infrastructure. Once you're over 10 servers, you can start using node affinity and labels to have some logical grouping based on hardware type and/or tenants. At that point it's just one big computer and the abstraction starts to really pay off compared to manually dealing with servers and installation scripts.
I'd say the abstraction is not worth it when you have only a steady 2-3 servers worth of infrastructure. Don't do it at "Hello, world!" scale, you win nothing.
(I work for a company that helps other companies set up and secure larger projects into environments like Kubernetes.)
I would always use Kubernetes if your server has 4 GB of RAM or more. It's just better than docker compose in every imaginable way. The only issue with Kubernetes is that it wants around 2 GB of RAM for itself.
The answer today is more than one node (instance/kernel running)
What is Charts v3? Please tell me it is LUA support.
I think what Charts v3 will be is still an open question. According to the currently accepted HIPs[0], there is some groundwork to enable a new generation of the chart format in general via HIP-0020, and most HIPs after that contain some parts that are planned to make it into Charts v3 (e.g. resource creation sequencing via HIP-0025).
would be nice, but we would also have to reimplement all of the charts we use, big ask/lift
DevOps has more friction for tooling changes because of the large blast radius
What do you prefer?
Just straight raw manifest files.
How do you have anything dynamic? How do you handle any differences at all between your infrastructure and what the authors built it for?
I get the feeling that most people commenting here have only surface level experience with deploying k8s applications. I don't care for helm myself but it's less bad than a lot of other approaches like hand rolling manifests with tools like envsubst and sed.
Kustomize also seems like hell when a deployment reaches a certain level of complexity.
Sorry, raw manifests and kustomize and a soupçon of regret.
We evaluated CUE, Jsonnet and CDK8s when we wanted to move on from Helm, and ended up using CDK8s. It's proven to be a good pick so far, it's in Typescript.
> seems really short-sighted that it is implemented in Go
CUE was a fork of the Go compiler (Marcel was on the Go team at the time and wanted to reuse much of the infra within the codebase)
Also, so much of the k8s ecosystem is in Go that it was a natural choice.
> CUE was a fork of the Go compiler (Marcel was on the Go team at the time and wanted to reuse much of the infra within the codebase)
Ah, that makes sense, I guess. I also get the feeling that the language itself is still under very active development, so until 1.0 is released I don't think it matters too much what it's implemented in.
> Also, so much of the k8s ecosystem is in Go that it was a natural choice.
That might turn out to be a costly decision, imho. I wanted to use CUE to manage a repository of schema definitions, and from these I wanted to generate other formats, such as JSON schemas, with constraints hopefully taken from the high-level CUE.
I figured I'd try and hack something together, but it was a complete non-starter since I don't work within the Go ecosystem.
Projects like the cue language live and breathe from an active community with related tooling, so the decision still really boggles my mind.
I'll stay optimistic and hope that once it reaches 1.0, someone will write an implementation that is easily embedded for my use-cases. I won't hold my breath though, since the scope is getting quite big.
Why don't you work with the Go ecosystem? You don't use K8s, terraform, etc? What ecosystem do you prefer?
what language would you have chosen?
> I wanted to use CUE to manage a repository of schema definitions, and from these I wanted to generate other formats, such as JSON schemas, with constraints hopefully taken from the high-level CUE.
Have you tried a Makefile to run cue? There should be no need to write code to do this
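For example, a minimal Makefile along those lines (the paths and package names here are hypothetical; `cue export` evaluates the configuration and emits concrete YAML/JSON, while `cue vet` validates data files against the CUE definitions):

```make
# Sketch: generate concrete output from CUE without writing any Go.
manifests.yaml: $(wildcard schemas/*.cue config/*.cue)
	cue export ./config --out yaml > $@

# Validate a data file against the schema package
validate:
	cue vet ./schemas data.json
```

No embedding required for this use-case; the CLI does the evaluation.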
Holos[1] is an interesting project I’ve been looking at trying out.
1. https://holos.run/
I've looked at Holos recently
1. it seems like development has largely ceased since Sept
2. it looks to only handle helm, not terraform, I'm looking for something to unify both and deal with dependencies between charts (another thing helm is terrible at)
Back when my job involved using Kubernetes and Helm, the solution I found was to use `| toJson` instead: it generates one line that happens to be valid YAML as well.
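For illustration, the two styles side by side in a hypothetical chart (the `.Values.podAnnotations` key is just a stand-in); JSON is valid YAML flow syntax, which is why the one-liner works:

```yaml
# With nindent: the rendered block must land at exactly the right column
metadata:
  annotations:
    {{- toYaml .Values.podAnnotations | nindent 4 }}

# With toJson: a single line that is valid YAML at any indentation
metadata:
  annotations: {{ .Values.podAnnotations | toJson }}
```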
cue and argocd here. it is pretty neat.
the tf is still in hcl form for now.
RIP Ksonnet, we hardly knew what we were missing
jsonnet is the main DX issue therein
I don't think I've ever seen a Helm template that didn't invoke nightmares. Probably the biggest reason I moved away from Kubernetes in the first place.
We have several Helm charts we've written at my job and they are very pleasant to use. They are just normal k8s templates with a couple of values parameterized, and they work great. The ones people put out for public consumption are very complex, but it isn't like Helm charts have to be that complex.
In my book the main problem with Helm charts is that every customization option needs to be explicitly implemented by the chart author. There is no way for a chart consumer to change anything the author did not allow to be changed. That leads to these overly complex and config-heavy charts people publish - just to make sure everything is customizable for consumers.
I'd love something that works more like Kustomize but with other benefits of Helm charts (packaging, distribution via OCI, more straight forward value interpolation than overlays and patches, ...). So far none have ticked all my boxes.
Kustomize can render Helm charts. It's "very basic" as in Kustomize will call the Helm binary to render the template, ingest it and apply patches.
I wrote a tool called "easykubenix" that works in a similar way, render the chart in a derivation, convert the YAML to JSON, import JSON into the Nix module structure and now you're free to override, remove or add anything you want :)
It's still very CLI deploy centric using kluctl as the deployment engine, but there's nothing preventing dumping the generated JSON (or YAML) manifests into a GitOps loop.
It doesn't make the public charts you consume any less horrible, but you don't have to care as much about them at least
fluxCD brings a really nice helm-controller that allows changing manifests via a postRenderers stub while still allowing regular helm tooling to be used against the cluster.
https://fluxcd.io/flux/components/helm/helmreleases/#post-re...
Yeah, but then it is yet another layer of configuration slapped on top of the previous layer of configuration. That can't be the best solution, can it? Same thing for piping helm template through Kustomize.
this, our helm charts are flat and for years only passed in the image as a variable
That's generally what I try to push for in my company.
A single purpose chart for your project is generally a lot easier to grok and consume vs what can be done.
I think the likes of "kustomize" is probably a more sane route to go down. But our entire infrastructure is already helm so hard to switch that all out.
I've personally boiled down Helm vs. Kustomize to the following:
Does your Kubernetes configuration need to be installed by a stranger? Use Helm.
Does your Kubernetes configuration need to be installed by you and your organization alone? Use Kustomize.
It makes sense for Grafana to provide a Helm chart for Grafana Alloy that the employees of Random Corp can install on their servers. It doesn't make sense for my employer to make a Helm chart out of our SaaS application just so that we can have different prod/staging settings.
I'm ashamed to say it but I cannot for the life of me understand how kustomize works. I could not ever figure out how to do things outside the "hello world" tutorials they walk you through. I'm not a stupid person (citation needed lol), but trying to understand the kustomize docs made me feel incredibly stupid. That's why we didn't go with that instead of Helm.
Helm requires you to write a template and you need to know (or guess) up front which values you want to be configurable. Then you set sane defaults for those values. If you find a user needs to change something else you have to edit the chart to add it.
With Kustomize, on the other hand, you just write the default as perfectly normal K8s manifests in YAML. You don't have to know or care what your users are going to do with it.
Then you write a `kustomization.yaml` that references those manifests somehow (could be in the same folder or you can use a URL). Kustomize simply concatenates everything together as its default behaviour. Run `kubectl kustomize` in the directory with `kustomization.yaml` to see the output. You can run `kubectl apply -k` to apply it to your cluster (and `kubectl delete -k` to delete it all).
From there you just add what you need to `kustomization.yaml`. You can do a few basics easily like setting the namespace for it all, adding labels to everything and changing the image ref. Keep running `kubectl kustomize` to see how it's changing things. You can use configmap and secret generators to easily generate these with hashed names and it will make sure all references match the generated name. Then you have the all powerful YAML or JSON editing commands which allow you to selectively edit the manifests if you need to. Start small and add things when you need them. Keep running `kubectl kustomize` at every step until you get it.
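The steps above can be sketched as a single `kustomization.yaml` (all the names here are made up; each field is one of the basics mentioned):

```yaml
# kustomization.yaml - concatenates the referenced manifests,
# then applies the transformations declared below
resources:
  - deployment.yaml
  - service.yaml

namespace: my-app            # set metadata.namespace on everything

labels:                      # add common labels across all resources
  - pairs:
      app.kubernetes.io/name: my-app

images:                      # change the image ref without touching the manifest
  - name: registry.example.com/my-app
    newTag: v1.2.3

configMapGenerator:          # generated with a content hash in the name;
  - name: my-app-config      # references are rewritten to match
    files:
      - config.toml
```

Run `kubectl kustomize` in this directory to see the merged output.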
Yes, this is the key. Helm charts should basically be manifests with some light customization.
Helm is not good enough to develop abstractions with. So go the opposite way: keep it stupid simple.
Pairing helm with Kustomize can help a lot as well. You do most of the templating in the helm chart but you have an escape hatch if you need more patches.
What did you move to?
Infrastructure as code should, from the beginning, have been done in a strictly typed language with a solid dependency and packaging contract.
I know that there are solutions like CDK and SST that attempt this, but because the underlying mechanisms are not native to those solutions, it's simply not enough, and the resulting interfaces are still way too brittle and complex.
I mean terraform provides this but using it doesn't give a whole lot of value, at least IME. I enforce types but often an upstream provider implementation will break that convention. It's rarely the fault of the IAC itself and usually the fault of the upstream service when things get annoying.
I only wish terraform was more recognized by upstream projects, like postgres, tailscale, ingress operators.
A one-time adoption from kubectl yaml or helm to terraform is doable - but syncing upstream updates is a chore.
If terraform (or another rich format) was popular as source of truth - then perhaps helm and kubectl yaml could be built from a terraform definition, with benefits like variable documentation, validation etc.
I don't think I want to use kubernetes (or anything that uses it) again. Nightmare of broken glass. Back in the day Docker Compose gave me 95% of what I wanted and the complexity was basically one file with few surprises.
Docker Compose still gets you 95% of what you need. I wish Docker Swarm survived.
> I wish Docker Swarm survived.
I heard good things about Nomad (albeit from before Hashicorp changed their licenses): https://developer.hashicorp.com/nomad
I got the impression it was like a smaller, more opinionated k8s. Like a mix between Docker Swarm and k8s.
It's rare that I see it mentioned though, so I'm not sure how big the community is.
I’d wager that like half the teams (at least) using kubernetes today should be using Nomad instead. Like the team I’m on now where I’m literally the only one familiar with Kubernetes and everyone else only has familiarity with more classic EC2-based patterns. Getting someone to even know what Helm does is its own uphill battle. Nomad is a lot more simple. That’s what I like about it a lot.
For better or for worse, it's an orchestrator (for containers/scripts/jars/bare metal), full stop.
Everything else is composable from the rest of the Hashicorp stack: Consul (service mesh and discovery), Vault (secrets), allowing you to use as much or as little as you need, and truly able to scale to a large deployment as needed.
In the plus column, picking up its config/admin is intuitive in a way that helm/k8s never really manages.
Philosophy-wise, you can put it in the Unix way of doing things: it does one thing well and gets out of your way, and you add to it as you need/want. Whereas k8s/helm etc. have one way or the highway, leaving you fighting the deployment half the time.
Mitchell Hashimoto was a genius when it came to opinionated design and that was Hashicorp's biggest strength when it was part of their culture.
It's a shame Nomad couldn't overcome the K8s hype-wagon, but either way IBM is destroying everything good about Hashicorp's products and I would proceed with extreme caution deploying any of their stuff net-new right now...
What happened to it?
I'm still using it without a single issue (except when it messes up the iptables rules)
I still confidently upgrade Docker across all the nodes, workers and managers, and it just works. Not a single time has it caused an issue.
Docker the company bet big on Swarm being the de facto container orchestration platform for businesses. It just got completely overshadowed by k8s. Swarm continues to exist and be actively developed, but it’s doomed to fade into obscurity.
For some reason I assumed it was unsupported. That doesn't seem to be the case.
The original iteration of Docker Swarm, now known as Classic, is deprecated. Maybe you were thinking of that?
As I read more about it, yes, that is indeed the case.
If you can confidently get it done with docker-compose, you shouldn't even think about using k8s IMO. Completely different scales.
K8s isn't for running containers, it's for implementing complex distributed systems: tenancy/isolation and dynamic scaling and no-downtime service models.
One of the problems seems to be that most moderately complex companies where any one system would be fine with Compose would want to unify their operations, thus going to a complex distributed system with k8s. And then either your unified IT/DevOps team is responsible for supporting all systems on k8s, or all individual dev teams have to be competent with k8s. Worst case, both.
I've embraced kustomize and I like it. It's simple enough and powerful enough for my needs. A bit verbose to type out all the manifests, but I can live with it.
This is what I've done too. Just enough features easily available to handle everything i've ever needed in the simple deployments I use. Secrets, A/B configuration, even "dynamic reload" of a Deployment for Configmap changes.
Gets the job done.
I'm using sed on my yaml files. Currently considering kustomize instead, but I wouldn't touch Helm with a 10 foot pole.
Kustomize with ArgoCD is my go to
Could you explain this a bit? Is helm optional part of the k8s stack?
Helm is not official or blessed or anything, just another third-party tool people install after installing k8s.
Yes, you really don't need to use helm if you have terraform. Just use https://registry.terraform.io/providers/hashicorp/kubernetes... .
If you used helm + terraform before, you'll have no problem understanding the terraform kubernetes provider (as opposed to the helm provider).
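As a sketch of what that looks like (names and image are made up), here is a Deployment managed directly by the kubernetes provider, no chart involved:

```hcl
resource "kubernetes_deployment" "app" {
  metadata {
    name      = "my-app"
    namespace = "default"
  }

  spec {
    replicas = 2

    selector {
      match_labels = { app = "my-app" }
    }

    template {
      metadata {
        labels = { app = "my-app" }
      }

      spec {
        container {
          name  = "my-app"
          image = "registry.example.com/my-app:v1.2.3"
        }
      }
    }
  }
}
```

You get plan/apply diffs, typed attributes, and references to other resources for free, which is exactly what the values.yaml indirection never gave you.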
It does make it challenging to track operators as upstream usually only provide/document helm installation.
If you write your own tf definition of operator x v1, it can be tricky to upgrade to v2 - as you need to figure out what changes are needed in your tf config to go from v1 to v2.
The way I understand, helm is the npm of k8s.
You can install, update, and remove an app in your k8s cluster using helm.
And you release a new version of your app to a helm repository.
The thing I would add to this is that in most cases, you need to manually provide config values to the install.
This sounds okay in principle, but I far too often end up needing to look through the template files (what helm deploys) to understand what a config option actually does since documentation is hit or miss.
Helm is sort of like a docker (or maybe docker compose) for k8s, in terms of a helm chart is a prepackaged k8s "application" that you can ship to your cluster. It got very popular very quickly because of the ease of use, and I think that was premature which affects its day-to-day usability.
It's a client-side preprocessor essentially. The K8s cluster knows nothing about Helm as it just receives perfectly normal YAMLs generated by Helm on the client.
I really appreciate the k3s default with HelmChart type and operator installed. Makes working with charts simpler in my view
Yes, I use flux which has a similar HelmChart/HelmRelease resource. One of the things that took me a while to "get" with K8s is operators are just clients running on the cluster.
Incidentally, Terraform is the only way I want to use Helm at all. Although the Terraform provider for Helm is quite cumbersome to use when you need to set values.
Do you have any resources regarding using tf to handle deployments ?
I’d love to dig a bit.
The kubernetes provider mostly just works exactly as you expect
Just use https://registry.terraform.io/providers/hashicorp/kubernetes... instead of helm...
…but how do you install helm charts via terraform?
Is there a helm provider?
If not, what would be the right way to install messy stuff like nginx ingress, cert-manager, etc.?
There is a helm provider. Why would you need it? Can't you just use the kubernetes provider?
People probably don't realize that helm is mostly templating for the YAMLs Kubernetes wants (plus a lot of other stuff that increases complexity).
There are many applications which are distributed as helm charts. Those charts install multiple deployments, service accounts and whatnot. They barely document all these things.
So if you want to avoid helm, you gotta do a whole lot of reverse-engineering. You gotta render a chart, explore all the manifests, explore all the configuration options, find out if they're needed or not.
An alternative is to just use helm, invoking it and forgetting about it. You can't blame people for going the easy way, I guess...
Yep, this 100%. Every time there is a technology which has become the "de facto" standard, and there are people proposing "simpler alternatives", this is the kind of practical detail that makes a GIANT difference and that's usually never mentioned.
Network effect is a thing, Helm is the de facto "package manger" for Kubernetes program distribution. But this time there are generally no alternative instructions like
Helm is truly a fractal of design pain. Even the description as a "package manager" is a verifiable lie - it's a config management tool at best.
Any tool that encourages templating on top of YAML, in a way that prevents the use of tools like yamllint on them, is a bad tool. Ansible learned this lesson much earlier and changed syntax of playbooks so that their YAML passes lint.
Additionally, K8s core developers don't like it and keep inventing things like Kustomize and similar that have better designs.
Imho, anyone who thought putting 'templating language' and 'significant whitespace' together is a good idea deserves to be in the Hague
Seriously. I’ve lost at least 100 hours of my life debugging whitespace in templated yaml. I shudder to think about the total engineering time wasted since yaml’s invention.
You blame YAML but I blame helm. I can build a dict in Python and dump it as YAML. I've painlessly templated many k8s resources like this. Why can't we build helm charts in a DSL or more sensible syntax and then dump k8s manifests as YAML? Using Go templating to build YAML is idiocy and the root of the issue here.
There's lots of advice on StackOverflow against building your own JSON strings instead of using a library. But helm wants us to build our own YAML with Go templating. Make it make sense.
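The dict-and-dump approach can be sketched in a few lines of stdlib Python. kubectl accepts JSON manifests as readily as YAML, so `json.dumps` works without even needing PyYAML (swap in `yaml.safe_dump` if you prefer YAML output); the resource names and image are placeholders:

```python
import json

def deployment(name: str, image: str, replicas: int = 1) -> dict:
    """Build a minimal Deployment manifest as a plain dict.

    No templating, no whitespace games: just data. Pipe the output
    to `kubectl apply -f -` (kubectl accepts JSON as well as YAML).
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment("my-app", "registry.example.com/my-app:v1.2.3", replicas=2)
print(json.dumps(manifest, indent=2))
```

Abstraction is a function call, and a typo is a NameError instead of a silently mis-indented block.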
Yaml wouldn't be so bad if they made the templates and editors indent-aware.
Which is a thing with some Python IDEs, but it's maddening to work on anything that can't do this.
we use cue straight to k8s resources. it made life way better.
but we don't have tons of infra so no idea how it would run for big thousands-of-employees corps.
Helm shines when you’re consuming vendor charts (nginx-ingress, cert-manager, Prometheus stack). It’s basically a package manager for k8s. Add a repo, pin a version, set values, and upgrade/rollback as one unit. For third-party infra, the chart’s values.yaml provides a fairly clean and often well documented interface
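For example, consuming the cert-manager vendor chart looks roughly like this (pin whichever version you've actually validated; `my-values.yaml` is your own overrides file):

```shell
# Add the vendor repo and refresh the index
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install or upgrade as one unit, pinned to an exact chart version
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.15.3 \
  -f my-values.yaml

# Roll the whole release back to a previous revision if it misbehaves
helm rollback cert-manager 1
```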
Yeah, I agree. Creating and maintaining helm charts sucks, but using them (if they are properly made and exposes everything you want to edit in the values.yaml) is a great experience with gitops tools such as FluxCD or helmfile.
Helm is the number 1 reason I'm looking to leave behind my DevOps/SRE job. Basically every job or project I accept involves working with helm in some capacity and I'm just tired of working with mostly garbage helm charts, especially big meta-charts or having to fork a chart to add a config parameter value override somewhere. Debugging broken chart installs or incomplete upgrades is also nothing but pain. Most helm charts remind me of working with ansible-galaxy roles around ~2015.
Been using bjw-s' common library chart (& its app-template companion) [1] for my homelab and it improved my experience with helm by a lot, since you only have to edit the values.yaml without doing weird text templating. Hope he gets more funding for maintenance so it can be used for more "production" systems.
[1]: https://github.com/bjw-s-labs/helm-charts/tree/main
See here for more examples on how people are using this chart:
https://kubesearch.dev/#app-template
Most people in this thread, it seems, just want a simple way to manage Kubernetes manifests, something that keeps track of different settings for different environments and what's in common for each environment in order to generate the final manifests for an environment. If so, Helm is over-engineered for your use-case. Stick with Kustomize or jsonnet.
Helm's contribution (as horrible as text templating on YAML is) is, yes, to be a package manager. Part of a Helm chart includes jobs ("hooks") that can be run at different stages (pre-install, pre-upgrade, etc.) as well as a job to run when someone runs "helm test", and a way to rollback changes ("helm rollback"), which is more powerful than just rolling back a Deployment, because it will rollback changes to CRDs, give you hooks/jobs that can run pre- and post-rollback, etc.
Helm charts are meant to be written by someone with the relevant skills sitting next to the developers, so that it can be handed off to another team to deploy into production. If that's not your organization or process, or if your developers are giving your ops teams Docker images instead of Helm charts, you're probably over-engineering by adopting it.
As someone who started out with Helm and has not used any of its alternatives, I had no idea how hated it is. Maybe it's just because of how I use it, but once I got the hang of the template charts I don't feel like I'm running into any hurdles while using it.
I have several Docker hosts in my home lab as well as a k3s cluster and I'd really like to use k3s as much as possible. But when I want to figure out how to deploy basically any new package they say here are the Docker instructions, but if you want to use Kubernetes we have a Helm chart. So I invariably end up starting with the Docker instructions and writing my own Deployment/StatefulSet, Service, and Ingress yaml files by hand.
I've found it easier, in most cases, to run 'helm template ...' on an existing chart, and then use the output as my starting point.
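Roughly this workflow (the chart reference and values file are placeholders):

```shell
# Render the chart locally - no cluster access needed
helm template my-release some-repo/some-app -f my-values.yaml > rendered.yaml

# From here rendered.yaml is plain manifests: edit, commit, apply
kubectl apply -f rendered.yaml
```

You lose helm's upgrade/rollback machinery, but you gain manifests you can actually read and diff.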
Amazing how people are complaining while proposing shit solutions. Seems like nobody is doing infra seriously there.
Probably they have a different experience! I love using helm, but I feel I got used to Go templates and sub-charts done right. I use it at work a lot and at home on my homelab with no issues at all: I guess it's the usual tabs vs. spaces.
The alternatives to helm are not that interesting to me: I still have nightmares from when I had to use jsonnet and kustomize just for istio, with upgrade hell.
So I am sticking with helm as it feels way more straightforward when you need to change just a few things from an upstream open source project: way fewer lines to maintain and change!
So this is neither helm the Emacs completion framework nor helm the wavetable synthesizer?
I was also confused by the title. Off topic but Vital is the newer wavetable synth by the maker of the Helm synth, Matt Tytel. The synth Helm is a really good foss subtractive synth but not wavetable
WOW! This is great!! What is it?
Helm is the necessary evil that follows from Kubernetes choosing YAML.
Helm works at text level. This approach could have worked with YAML, JSON, XML or any other text format. You can template C++ code with Helm if you really want. It's just golang templates below.
And that makes it wrong. YAML is structured format and proper templating should work with JSON-like data structures, not with text. Kustomize is better example.
Came here to feel the temperature of the comments, and unsurprisingly, most folks seem to have plenty of gripes with Helm.
A Helm chart is often a poorly documented abstraction layer which makes it impossible to relate the managed application's original documentation back to the Helm chart's "interface". The number of times I had to grep through the templates to figure out how to access a specific setting ...
Running my home lab, I've grown sick of constant Renovate PRs against the helm charts in use. I recall one "minor" CoreDNS update not long ago that was messing with the exposed ports in the service, and installs broke for a lot of folks. If I need to run some software now, I `helm template` the resources and commit those to git. I'm so tired of some random "Extended helm chart to customise labels / annotations in $some resource" change notes. Traefik and Cilium are the only helm charts I use; the rest I `helm template` into my gitops repo, customize and forget.
At Dayjob in the past, we've debugged various Helm issues caused by the internal sprig library used. We fear updating Argo CD and Helm for what surprises are in store for us and we're starting to adopt the rendered manifests pattern for greater visibility to catch such changes.
What about https://github.com/werf/nelm? Anyone here using it?
Can I hear from those of you who have had a good IAC experience? What tools worked well?
ArgoCD + Helm
But really, any kind of reconciler, e.g. Flux or Argo with Helm, works very well. Helm is only used as a templating tool, i.e. `helm template` is the only thing allowed. That setup has served me well and I've run production systems for years without major issues.
I don't really understand how people have so much trouble with Helm. Granted, YAML whitespace + Go templating is sometimes awful, but it is the least bad tool out there that I have tried, and once you learn the arcane ways of {{- it's mostly a non-issue.
I would recommend writing your own charts for the most part and using external charts only when they are simple or well proven. Most applications you want to run aren't that complicated; they are mostly a collection of environment variables, config files, and arguments.
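For anyone who hasn't hit it yet, the `{{-` dance mentioned above looks roughly like this (a hypothetical chart fragment, not taken from any real chart):

```yaml
# templates/deployment.yaml
# "{{-" trims the whitespace before the tag so empty template lines
# don't leak into the output; nindent re-indents the block so the
# rendered YAML lands at the right depth.
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          {{- with .Values.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
```

The `12` has to match the YAML nesting depth by hand, which is exactly the kind of invisible-character bookkeeping people complain about.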
If I could wish for a replacement of helm, it would be helm template with the chart implemented in a typed language, e.g. TypeScript, instead of go template but backwards compatible with go template.
Kubernetes API uses JSON. JSON is JavaScript Object Notation. So naturally the best approach to work with JSON is to write JavaScript or TypeScript code. You can just output JSON and consume it with kubectl. You can read data from whatever format you want, process it and output JSON. You can write your little functions to reduce boilerplate. There are many options that are obvious once you just embrace JavaScript.
Of course most other programming languages will work just as well, it's just JavaScript being the most natural fit for JSON.
There are some features of Kubernetes that are only available in the Go client like Informers. So Go is a much more natural fit (you can move between JSON and Go structs with one function call + error check)
I wrote Go and Python programs that constructed the manifests using the native Kubernetes types and piped them into kubectl apply. Had to write my own libraries for doing migrations too. But after that bootstrapping it worked great.
Reminds me of cdk8s if one is looking for a framework if it can be called that
cdk8s.io
Probably an unpopular opinion, but for a couple of jobs now I've written "just Python" to generate k8s manifests, and it works really, really well.
There are packages. You can write functions. You can write tests trivially (the output is basically a giant map that you just write out as YAML)…
I’m applying this to other areas too with great success, for example our snowflake IaC is “just python” that generates SQL. It’s great.
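A minimal sketch of that approach using only the Python standard library (the names and image are made up; real setups often layer typed models or a YAML library on top):

```python
import json


def deployment(name, image, replicas=1, env=None):
    """Build a Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "env": [{"name": k, "value": v}
                                for k, v in sorted((env or {}).items())],
                    }],
                },
            },
        },
    }


manifest = deployment("web", "nginx:1.27", replicas=3, env={"LOG_LEVEL": "info"})
# kubectl accepts JSON as well as YAML, so json.dumps is enough:
#   python gen.py | kubectl apply -f -
print(json.dumps(manifest, indent=2))
```

Because the manifest is just a dict, testing is an ordinary assertion on a data structure rather than diffing rendered text.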
Like the others, I'm using a programming language, except it's JavaScript because we're a Node.js company. It actually works well enough.
I'm quite happy with FluxCD + Helm. Helm also supports creating library charts (basically component libraries) that can improve the experience of creating and maintaining helm charts by a lot.
I really don't like helm. I think we have arrived at abstraction over abstraction over abstraction.
The last project I was involved with used Kustomize for different environments, Flux to deploy, and Helm to install a chart that took in a list of ConfigMaps via "valuesFrom". Not only does Kustomize template and merge together YAML, but so does the valuesFrom thing, except at "runtime" in the cluster.
There's just not a single chance to get any coherent checking/linting or anything before deployment. I mean how could a language server even understand how all this spaghetti yaml merges together? And note that I was working on this as a developer in a very restricted environment/cluster.
Yaml is too permissive already, people really start programming with it. The thing is, kubernetes resources are already an abstraction. That's kind of the nice thing about it, you can create arbitrary resources and kubernetes is the management platform for them. But I think it becomes hairy already when we create resources that manage other resources.
And also, sure some infrastructure may be "cattle" but at some point in the equation there is state and complexity that has to be managed by someone who understands it. Kubernetes manifests are great for that, I think using a package manager to deploy resources is taking it too far. Inevitably helm charts and the schema of values change and then attention is needed anyway. It makes the bar for entry into the kubernetes ecosystem lower but is that actually a good thing for the people who then fall into it without the experience to solve the problems they inevitably encounter?
Sorry for the rant but given my second paragraph I hope there is some understanding for my frustrations. Having all that said, I am glad they try to improve what has established itself now and still welcome these improvements.
> CLI Flags renamed
> Some common CLI flags are renamed:
> --atomic → --rollback-on-failure
> --force → --force-replace
> Update any automation that uses these renamed CLI flags.
I wish software providers like this would realize how fucking obnoxious this is. Why not support both? Seriously, leave the old, create a new one. Why put this burden on your users?
It doesn't sound like a big deal but in practice it's often a massive pain in the ass.
Obligatory complaint about Bitnami rug-pulling and effectively ruining a very nice ecosystem.
Helm sucks.
Helm, and a lot of devops tooling, is fundamentally broken.
The core problem is that it is a templating language and not a fully functional programming language, or at least a DSL.
This leads us to the mess we are in today. Here is a fun experiment: Go open 10 helm charts, and compare the differences between them. You will find they have the same copy-paste bullshit everywhere.
Helm simply does not provide powerful enough tools to develop proper abstractions. This leads to massive sprawl when defining our infrastructure. This leads to the DevOps nightmare we have all found ourselves in.
I have developed complex systems in Pulumi and other CDKs: 99% of the text just GOES AWAY and everything is way more legible.
You are not going to create a robust solution with a weak templating language. You are just going to create more and more sprawl.
Maybe the answer is a CDK that outputs helm charts.
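Not a CDK, but as a sketch of the kind of abstraction the parent means, assuming plain Python and made-up service names: one helper stamps out a Deployment plus a matching Service, so adding a service is one line instead of another copy-pasted chart.

```python
import json


def microservice(name, image, port):
    """One call replaces the usual wall of templated YAML:
    a Deployment and a Service wired together by shared labels."""
    labels = {"app": name}
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [
                    {"name": name, "image": image,
                     "ports": [{"containerPort": port}]},
                ]},
            },
        },
    }
    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name, "labels": labels},
        "spec": {"selector": labels,
                 "ports": [{"port": 80, "targetPort": port}]},
    }
    return [deployment, service]


# Each extra service is one tuple, not another chart to maintain.
objects = [obj for name, image, port in [
    ("api", "example/api:1.0", 8080),
    ("worker", "example/worker:1.0", 9090),
] for obj in microservice(name, image, port)]
print(json.dumps(objects, indent=2))
```

The copy-paste lives in exactly one function, so a cross-cutting change (new label, new probe) is a one-line edit instead of a sweep across charts.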
And it STILL uses text-based Go templates instead of a proper language based on structured input and output? This was always my main pain point with Helm and also of many others I talked to. This major upgrade was years in the making and they couldn't add support for a single of many available options like CUE, JSONNET, or KCL? What an utter waste.
Now that y'all are here, has anyone tried timoni as an alternative to Helm? I have it on my to-try list.
https://github.com/stefanprodan/timoni
Yes, I currently have 2 timoni modules in production, deployed with ArgoCD, and it's great! It has a bit of a learning curve, and it takes some getting used to that there is no "overwriting" of values, but it saves so much time on template iteration. The language-server support for CUE could be better, though.
No commits in 3 months.
Not everything needs to be worked on like it’s a full time job.
Imagine 1,000s of helm charts. Your only abstraction tools are an umbrella chart or a library chart. There isn't much more in helm.
I liked KRO's model a lot, but stringly-typed text templating at the scale of thousands of services doesn't work; it's not fun when you need to make a change. I kinda like Jsonnet plus the Google CLI whose name I forget right now, and the abstraction the Grafana folks did too, but ultimately I decided to roll my own thing and leaned heavily into type safety for this. It's ideal. With any luck I can open source it. There's a few similar ideas floating around now; Scala Yaga is one.
I'm curious what the google cli is that you're referring to. Could it be kubecfg (https://github.com/kubecfg/kubecfg)?
I've used it in the past (for a quite small deployment I must say), but have been very happy with it. Specifically the diff mode is very powerful to see what changes you'll apply compared to what's currently deployed.
Nightmares (if anything went wrong I had to blow the Helm stuff away and start over) on top of nightmares (Kubernetes, when I was trying it, was tons of namespaces called beta, and you never knew what to update to, when you had to update, or what was incompatible), on top of the realization that no one should be using Kubernetes unless you have over 50 servers running many hundreds of services. Otherwise it's just a million times simpler to use Docker Compose.
Can you recommend any articles about minimum scale necessary to make Kubernetes worth it?
If you count 3 control plane nodes and at least one or two extra servers worth of space for pods to go when a node goes down, I'd say don't bother for anything less than 6-7 servers worth of infrastructure. Once you're over 10 servers, you can start using node affinity and labels to have some logical grouping based on hardware type and/or tenants. At that point it's just one big computer and the abstraction starts to really pay off compared to manually dealing with servers and installation scripts.
I'd say the abstraction is not worth it when you have only a steady 2-3 servers worth of infrastructure. Don't do it at "Hello, world!" scale, you win nothing.
(I work for a company that helps other companies set up and secure larger projects into environments like Kubernetes.)
I would always use Kubernetes if you have 4 or more GB of RAM on your server. It's just better than Docker Compose in every imaginable way. The only issue with Kubernetes is that it wants around 2 GB of RAM for itself.
The answer today is more than one node (instance/kernel running)
What is Charts v3? Please tell me it's Lua support.
I think what Charts v3 will be is still an open question. According to the currently accepted HIPs[0], there is some groundwork to enable a new generation of the chart format in general via HIP-0020, and most HIPs after that contain parts that are planned to make it into Charts v3 (e.g. resource-creation sequencing via HIP-0025).
[0]: https://github.com/helm/community/tree/main/hips
Ugh, can we all just agree to stop using helm
would be nice, but we would also have to reimplement all of the charts we use, big ask/lift
DevOps has more friction for tooling changes because of the large blast radius
What do you prefer?
Just straight raw manifest files.
How do you have anything dynamic? How do you handle any differences at all between your infrastructure and what the authors built it for.
I get the feeling that most people commenting here have only surface-level experience with deploying k8s applications. I don't care for Helm myself, but it's less bad than a lot of other approaches, like hand-rolling manifests with tools like envsubst and sed.
Kustomize also seems like hell when a deployment reaches a certain level of complexity.
Sorry, raw manifests and kustomize and a soupçon of regret.