NIS2 and Kubernetes: What You Actually Need to Do

If you run Kubernetes in the EU, NIS2 is part of your day-to-day now. The directive has applied since October 2024, and each member state enforces it through its own national transposition. I have spent the last few months hardening real clusters for these requirements, so this post is the practical version of what I learned. This is not legal advice. It is the technical checklist I wish I had from day one. ...

February 16, 2026

Reclaiming Idle GPUs in Kubernetes Before They Burn Your Budget

Last month I finally looked at our GPU utilization dashboards properly. What I saw made me physically uncomfortable: 14 A100 GPUs across our cluster, average utilization hovering around 15%. We were paying for dedicated hardware that spent most of its time doing absolutely nothing. This is embarrassingly common. Teams request a full GPU for a workload that uses it for training bursts of 20 minutes, then idles for hours. Kubernetes treats GPUs as integer resources — you either have one or you don’t. There’s no native way to share. ...
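The teaser above notes that Kubernetes schedules GPUs as whole integers. One common workaround (not necessarily the one this post settles on) is time-slicing in the NVIDIA device plugin, which advertises each physical GPU as several schedulable replicas. A minimal sketch of that ConfigMap; the resource name is the standard `nvidia.com/gpu`, while the ConfigMap name and replica count are illustrative assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nvidia-device-plugin-config   # placeholder name
data:
  config.yaml: |
    version: v1
    sharing:
      timeSlicing:
        resources:
        - name: nvidia.com/gpu
          replicas: 4   # each physical GPU appears as 4 schedulable GPUs
```

With this applied, four bursty workloads can share one A100 instead of each pinning a dedicated card; note that time-slicing provides no memory isolation between sharers.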

February 15, 2026

CPU Limits Don't Kill Pods - The #1 Kubernetes Misunderstanding

I keep seeing the same debugging rabbit hole. A team adds CPU limits, latency gets weird, and the first question is: “Are pods getting killed?” Usually no. That’s memory behavior, not CPU behavior. CPU limits do not kill pods. They throttle them. That one distinction explains a lot of “everything looks fine but users are complaining” incidents.

The Misunderstanding

A lot of engineers assume this mapping:

- Memory limit exceeded → pod gets killed (OOMKill) ✅
- CPU limit exceeded → pod gets killed ❌

The second one is the trap. The official Kubernetes documentation spells it out: ...
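Throttling is measurable rather than guessable: cAdvisor exposes CFS period counters per container. As a hedged sketch (assuming the prometheus-operator PrometheusRule CRD is installed; the alert name and the 25% threshold are my own illustrative choices):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cpu-throttling   # placeholder name
spec:
  groups:
  - name: cpu
    rules:
    - alert: HighCPUThrottling
      # fraction of CFS periods in which the container was throttled
      expr: |
        sum by (namespace, pod) (rate(container_cpu_cfs_throttled_periods_total[5m]))
          / sum by (namespace, pod) (rate(container_cpu_cfs_periods_total[5m])) > 0.25
      for: 15m
```

If this fires while pods look “healthy”, you are likely seeing exactly the latency pattern described above.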

February 14, 2026

Kubernetes Node Readiness Controller - Finally, Proper Node Bootstrap Gates

Last week I ran into a familiar mess: pods landing on nodes before the CNI plugin was actually ready. Kubelet marks the node Ready, the scheduler starts placing workloads, and then everything sits in ContainerCreating because Calico is still coming up. I have worked around this with init containers and postStart tricks for way too long. Then I came across the Node Readiness Controller announcement on the Kubernetes blog. It is a new SIG project (v0.1.1), and it is basically what I wanted: custom readiness gates for nodes, managed through a CRD. ...

February 13, 2026

Detecting Kubernetes Nodes Running Only DaemonSet Pods, A Deep Dive

A real-world story about PromQL struggles, Helm templating, alert design, and operational savings by Dedico Servers.

Executive Summary

At Dedico Servers, we specialize in building efficient, cost-optimized Kubernetes clusters. In this article, we engineer a Prometheus-based alert to detect nodes running only DaemonSet pods, an operational and financial risk. By tackling this hidden inefficiency, we help our clients save thousands of dollars annually while improving the resilience of their clusters. ...
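The detection can be sketched with kube-state-metrics’ `kube_pod_info` series, which carries a `created_by_kind` label. This is my own minimal reconstruction, not necessarily the article’s final query; the alert name and duration are assumptions:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: daemonset-only-nodes   # placeholder name
spec:
  groups:
  - name: node-efficiency
    rules:
    - alert: NodeRunningOnlyDaemonSetPods
      # nodes that have pods, minus nodes that have any non-DaemonSet pod
      expr: |
        count by (node) (kube_pod_info)
          unless
        count by (node) (kube_pod_info{created_by_kind!="DaemonSet"})
      for: 30m
```

A production version would probably also filter out Succeeded/Failed pods so a lingering completed Job does not mask an otherwise idle node.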

April 10, 2025 · Dedico Servers

Scaling GitOps with ArgoCD ApplicationSets

Managing Kubernetes applications with ArgoCD is already a game-changer, but what if you need to deploy the same app across 10 clusters, or generate dynamic app configs based on Git branches or Helm values? That’s where ApplicationSets step in.

🚀 What is an ApplicationSet?

An ApplicationSet is a Kubernetes custom resource that tells ArgoCD how to automatically generate multiple Application resources from a template. It’s like templating your ArgoCD apps, letting you define how they should be generated and where they should go. ...
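As a concrete illustration of the generate-from-template idea, here is a minimal ApplicationSet with a list generator; the repo URL, cluster URL, and names are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - list:
      elements:                       # one Application is generated per element
      - cluster: dev
        url: https://kubernetes.default.svc
      - cluster: prod
        url: https://prod.example.com  # placeholder cluster API URL
  template:
    metadata:
      name: '{{cluster}}-guestbook'    # parameters from the generator fill the template
    spec:
      project: default
      source:
        repoURL: https://github.com/example/apps  # placeholder repo
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{url}}'
        namespace: guestbook
```

Swapping the list generator for a cluster or Git generator is how this scales to “10 clusters” or “per-branch configs” without touching the template.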

March 21, 2025

Kubernetes Introduction: When to Use It and When Not To

Kubernetes Is Not the Answer to Every Problem

I say this as someone who spends a significant part of their work building and operating Kubernetes clusters. Kubernetes is a fantastic tool — but it’s not for everything, and introducing it at the wrong time can cause more problems than it solves.

When to Use Kubernetes

- Many microservices (10+) that scale independently
- Variable load — autoscaling handles capacity automatically
- Multiple teams and environments — namespaces and RBAC provide clean separation
- High availability requirements (99.9%+ uptime) — self-healing, health checks, rolling updates
- Multi-cloud or hybrid strategy — Kubernetes abstracts the provider

When NOT to Use Kubernetes

- One or two simple applications — use a VPS, Docker Compose, or managed PaaS instead
- Small team with no K8s experience — the learning curve takes months
- No CI/CD pipeline yet — build that first; Kubernetes builds on top of it
- Cost-sensitive project — a minimum production EKS cluster costs $250-800/month
- Legacy stateful apps not designed for containers — significant refactoring needed

Decision Framework

Ask yourself: Do you have 5+ independently deployable services? Variable load needing autoscaling? K8s expertise on the team? Budget for minimum K8s costs? Containerizable services? ...

March 12, 2025

Using Tailscale with Kubernetes: Pod as a Client with Exit Node

Tailscale makes it incredibly easy to build secure, private networks between devices, and it works brilliantly inside Kubernetes too. In this guide, we’ll run a Kubernetes pod as a Tailscale client, routing its egress traffic through a Tailscale exit node.

✅ Use case: You want a pod to access the internet through a specific IP/location (e.g., a static home server) while maintaining full mesh connectivity over Tailscale.

🧱 Requirements

- A Kubernetes cluster (k3s, k8s, or managed service)
- A working Tailscale account
- An exit node already configured and enabled in Tailscale
- Linux container support (Debian-based preferred for Tailscale)

🐳 Step 1: Create a Tailscale-enabled Pod

Here’s a basic example using an init container to authenticate and set up Tailscale. ...
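The excerpt cuts off before the example, so here is a rough sketch of the general shape, using a sidecar rather than necessarily the post’s init-container approach. The Secret name and exit-node IP are placeholders, and this assumes the official `tailscale/tailscale` image’s `TS_AUTHKEY`/`TS_EXTRA_ARGS` environment variables:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tailscale-egress-demo   # placeholder name
spec:
  containers:
  - name: tailscale
    image: tailscale/tailscale:stable
    env:
    - name: TS_AUTHKEY                  # pre-generated auth key, stored in a Secret
      valueFrom:
        secretKeyRef:
          name: tailscale-auth          # placeholder Secret
          key: TS_AUTHKEY
    - name: TS_EXTRA_ARGS               # send all egress through the exit node
      value: "--exit-node=<exit-node-tailscale-ip>"   # placeholder IP
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]              # tailscaled needs a TUN device in kernel mode
  - name: app
    image: curlimages/curl:latest       # demo workload; shares the pod network namespace
    command: ["sleep", "infinity"]
```

Because both containers share the pod’s network namespace, routes set up by tailscaled apply to the app container’s traffic as well.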

March 21, 2024