
Tailscale Setup for Kubernetes Homelab Services

Securely exposing K8s services like API, MinIO, and databases using Tailscale sidecars and subnet routers for seamless access from anywhere.

By Victor Robin

When I first configured Tailscale for my Kubernetes homelab, I expected a quick weekend project. Instead, I spent three days wrestling with subnet router advertisements that refused to propagate and DNS resolution loops between CoreDNS and MagicDNS. The moment I finally ran curl http://myapp-api/health from a coffee shop and got a response back in under 50ms, the frustration melted away. What followed was a deep dive into the Tailscale Kubernetes Operator that fundamentally changed how I think about remote access to homelab services. This article captures the lessons I wish I had when I started.

Introduction

In a distributed homelab environment, secure and convenient access to internal services is paramount. Traditional methods involving port forwarding or complex VPN setups can be security risks or maintenance headaches. Tailscale offers a zero-config VPN based on WireGuard that simplifies this dramatically.

[How Tailscale Works] — Avery Pennarun, 2023-03-15

Why Tailscale for Homelabs Matters:

  • Zero Trust Security: Services are not exposed to the public internet; only authenticated devices on your Tailnet can access them.
  • Ease of Access: Access your K8s API, databases, and dashboards from anywhere without fiddling with router ports.
  • Seamless Integration: Works beautifully with Kubernetes via sidecars or operators.

What We’ll Build

In this guide, we will implement a secure networking layer for our Kubernetes cluster. You will learn how to:

  1. Deploy Tailscale Operator: Manage Tailscale resources directly from Kubernetes.
  2. Expose Services: Securely expose our API and MinIO.
  3. Connect Locally: Verify connectivity from a development machine.

Architecture Overview

We utilize the Tailscale Kubernetes Operator to expose services. This allows us to define Ingress or Service resources that automatically get assigned IP addresses on our Tailnet.

[Tailscale Kubernetes Operator] — Tailscale, 2024-06-01

flowchart LR
    subgraph "Local Dev Machine"
        DevUser[Developer]
        TailscaleClient[Tailscale Client]
    end

    subgraph "Kubernetes Cluster"
        Operator[Tailscale Operator]
        
        subgraph "Application Namespace"
            API[Archives API]
            Sidecar[Tailscale Sidecar]
        end
    end

    DevUser -->|Requests 100.x.y.z| TailscaleClient
    TailscaleClient -->|WireGuard Tunnel| Sidecar
    Sidecar -->|Local Traffic| API
    Operator -.->|Manages| Sidecar
    
    classDef primary fill:#7c3aed,color:#fff
    classDef secondary fill:#06b6d4,color:#fff
    classDef db fill:#f43f5e,color:#fff
    classDef warning fill:#fbbf24,color:#000

    class DevUser warning
    class API primary
    class Sidecar,Operator,TailscaleClient secondary

Implementation

1. Installing the Tailscale Operator

We use Helm to deploy the operator. Ensure you have your specific OAuth client credentials from the Tailscale admin console.

[WireGuard: Next Generation Kernel Network Tunnel] — Jason A. Donenfeld, 2017-06-28
helm-install.sh
helm repo add tailscale https://pkgs.tailscale.com/helmcharts
helm repo update

helm upgrade --install tailscale-operator tailscale/tailscale-operator \
  --namespace tailscale \
  --create-namespace \
  --set oauth.clientId=$TAILSCALE_CLIENT_ID \
  --set oauth.clientSecret=$TAILSCALE_CLIENT_SECRET \
  --wait
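Before moving on, it is worth confirming that the operator actually came up and joined the tailnet. A quick sanity check, assuming the default `tailscale` namespace created by the install command above:

```shell
# Confirm the operator pod is Running in the namespace Helm created.
kubectl get pods -n tailscale

# The operator should also appear as a machine in the Tailscale admin
# console; from any device already on the tailnet you can look for it:
tailscale status | grep -i operator
```

If the pod sits in CrashLoopBackOff, the most common cause is an OAuth client that lacks the required scopes or tags.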

2. Exposing the API

Instead of a standard NodePort or LoadBalancer, we annotate a Service to tell the Tailscale operator to expose it.

myapp-api-tailscale.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-api-ts
  namespace: archives
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/hostname: "myapp-api"
spec:
  selector:
    app: myapp-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Applying this manifest triggers the operator to create a proxy pod that joins your Tailnet with the hostname myapp-api.

[Tailscale on Kubernetes] — Tailscale, 2024-02-10
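Wiring this up is just an apply followed by watching for the proxy pod. The exact proxy pod name is generated by the operator, so the commands below are a sketch; list the operator's namespace and look for a pod referencing the `archives/myapp-api-ts` service:

```shell
# Apply the annotated Service from the manifest above.
kubectl apply -f myapp-api-tailscale.yaml

# The operator runs its proxy pods in its own namespace.
kubectl get pods -n tailscale

# Confirm the annotations were picked up on the Service itself.
kubectl get svc -n archives myapp-api-ts -o yaml | grep tailscale.com/
```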

3. Verifying Connectivity

Once the proxy pod is running, you can check your Tailscale admin console or, with MagicDNS enabled, resolve the hostname directly from your local machine.

[MagicDNS] — Tailscale, 2024-01-20
# From your local machine on the tailnet
curl http://myapp-api/health
# Output: Healthy
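When the curl fails, it helps to separate tailnet reachability from application health. `tailscale ping` exercises the WireGuard path directly, so a debugging pass might look like this (a sketch; run from any machine on the tailnet):

```shell
# 1. Is the proxy visible on the tailnet at all?
tailscale status | grep myapp-api

# 2. Can we reach it over WireGuard? This bypasses DNS and HTTP entirely.
tailscale ping myapp-api

# 3. Does MagicDNS resolve the short name?
nslookup myapp-api

# 4. Finally, the application-level check from above.
curl http://myapp-api/health
```

Working through the layers in this order usually pinpoints whether the problem is the proxy, DNS, or the application itself.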

Conclusion

By leveraging Tailscale, we’ve secured our infrastructure without sacrificing accessibility. We can now run database migrations, inspect Qdrant vectors, or debug the API from a coffee shop as securely as if we were sitting next to the server rack. This setup is foundational for a robust homelab DevOps lifecycle.

Looking back at this project, what surprised me most was how much the networking layer simplified everything downstream. Before Tailscale, I had a brittle chain of SSH tunnels and port forwards that broke every time my ISP rotated my IP. Now, the entire tailnet is stable, self-healing, and I rarely think about connectivity anymore. The initial DNS headaches were worth the payoff of a genuinely zero-trust overlay network.

[Zero Trust Networks: Building Secure Systems in Untrusted Networks] — Evan Gilman and Doug Barth, 2017-07-01

Next Steps

  • Explore Tailscale ACLs to enforce fine-grained access control between team members and services.
  • Set up subnet routers for accessing non-Kubernetes resources like NAS devices and IoT hardware on your homelab network.
  • Investigate Tailscale SSH to eliminate the need for managing SSH keys across your fleet.
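The subnet router idea in particular needs only one extra flag on any Linux host inside your LAN. A minimal sketch, assuming a 192.168.1.0/24 home network (substitute your own CIDR), run on a host that can reach both the LAN and the tailnet:

```shell
# Enable IP forwarding so the host can route traffic for other devices.
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise the LAN subnet to the tailnet.
sudo tailscale up --advertise-routes=192.168.1.0/24
```

Advertised routes must still be approved in the admin console (or via auto-approvers in your ACL policy) before other tailnet devices can use them.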

Further Reading

  • [Tailscale ACLs Documentation] — Tailscale, 2024
  • [WireGuard Whitepaper] — WireGuard, 2024
  • [Kubernetes Networking Deep Dive] — Kubernetes Authors, 2024