Introduction
Running Apache Kafka inside Kubernetes is straightforward — until you need external clients to connect. Kafka's broker advertisement model means that when a client connects, the broker hands back its own advertised address for subsequent communication. If that address is an internal cluster DNS name, external clients are dead in the water.
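You can watch this handshake directly with kcat (a command-line client built on librdkafka): the -L flag requests cluster metadata from the bootstrap address, and the broker list in the response contains the advertised addresses the client will dial next. The hostname and CA file here are the ones set up later in this guide, so treat this as an illustration rather than something to run now:

```shell
# Request cluster metadata; the broker addresses printed in the response
# are the advertised ones, not necessarily the address you connected to
kcat -L -b kafka.example.com:9094 \
  -X security.protocol=SSL \
  -X ssl.ca.location=kafka-ca.crt
```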
Strimzi is a CNCF-graduated operator that manages the full Kafka lifecycle on Kubernetes. It handles broker configuration, rolling upgrades, TLS certificate management, and — critically — external listener configuration. Strimzi supports several external access patterns: NodePort, LoadBalancer, Ingress, and ClusterIP with manual routing. Each has trade-offs.
In this guide we'll use the Ingress listener type with Nginx Ingress Controller configured for TCP passthrough. This approach:
- Works on any Kubernetes cluster without a cloud load balancer
- Preserves TLS all the way to the broker (no TLS termination at the ingress layer)
- Keeps broker advertisement addresses stable and predictable
- Scales cleanly as you add brokers
By the end you'll have a Kafka cluster where external producers and consumers can connect using a standard Kafka client pointed at a hostname like kafka.example.com:9094.
Prerequisites
Infrastructure:
- A running Kubernetes cluster (K3s, kubeadm, EKS, GKE — any will do)
- kubectl configured with cluster-admin access
- helm v3.12+
- Nginx Ingress Controller installed (the reconfiguration steps below assume the ingress-nginx namespace)
- A domain name you control, with DNS pointing to your ingress controller's external IP
Knowledge:
- Basic Kubernetes concepts (Deployments, Services, ConfigMaps)
- Familiarity with Kafka producer/consumer concepts
- Basic understanding of TLS certificates
Naming conventions used in this guide:
Cluster: my-kafka-cluster
Namespace: kafka
Domain: kafka.example.com
External port: 9094
Installing the Strimzi Operator
Strimzi uses the Operator pattern — you install the operator once, then declare Kafka clusters as Kafka custom resources.
Option A: Helm (Recommended)
helm repo add strimzi https://strimzi.io/charts/
helm repo update
helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator \
--namespace strimzi-system \
--create-namespace \
--version 0.40.0 \
--set watchNamespaces="{kafka}"
The watchNamespaces value tells the operator to manage Kafka resources in the kafka namespace. The chart also exposes a watchAnyNamespace=true option for cluster-wide watching, but scoping the operator to specific namespaces is better practice.
Option B: YAML Manifests
kubectl create namespace kafka
# Quote the URL so the shell doesn't interpret the '?'
kubectl apply -f \
  'https://strimzi.io/install/latest?namespace=kafka' \
  -n kafka
Verify the Operator is Running
kubectl -n strimzi-system get pods
# NAME READY STATUS RESTARTS AGE
# strimzi-cluster-operator-xxxxxxxxx-xxxxx 1/1 Running 0 60s
kubectl get crd | grep strimzi
# kafkas.kafka.strimzi.io
# kafkatopics.kafka.strimzi.io
# kafkausers.kafka.strimzi.io
# ... (several more)
Deploying a Kafka Cluster with an External Listener
Strimzi's Kafka custom resource lets you declare listeners directly in the spec. We'll configure two listeners:
- plain on port 9092 — internal cluster communication (no TLS)
- external on port 9094 — external access via Ingress with TLS
Create the Namespace
kubectl create namespace kafka
Deploy the Kafka Cluster
# kafka-cluster.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-kafka-cluster
  namespace: kafka
spec:
  kafka:
    version: 3.7.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: external
        port: 9094
        type: ingress
        tls: true
        configuration:
          bootstrap:
            host: kafka.example.com
          brokers:
            - broker: 0
              host: kafka-broker-0.example.com
            - broker: 1
              host: kafka-broker-1.example.com
            - broker: 2
              host: kafka-broker-2.example.com
          class: nginx
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.7"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 20Gi
          deleteClaim: false
    resources:
      requests:
        memory: 2Gi
        cpu: "500m"
      limits:
        memory: 4Gi
        cpu: "2"
    jvmOptions:
      "-Xms": 1024m
      "-Xmx": 2048m
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: false
    resources:
      requests:
        memory: 512Mi
        cpu: "250m"
      limits:
        memory: 1Gi
        cpu: "500m"
  entityOperator:
    topicOperator: {}
    userOperator: {}
kubectl apply -f kafka-cluster.yaml
Watch the cluster come up — Strimzi will create ZooKeeper pods first, then Kafka brokers:
kubectl -n kafka get pods -w
# NAME READY STATUS RESTARTS AGE
# my-kafka-cluster-zookeeper-0 1/1 Running 0 2m
# my-kafka-cluster-zookeeper-1 1/1 Running 0 2m
# my-kafka-cluster-zookeeper-2 1/1 Running 0 2m
# my-kafka-cluster-kafka-0 1/1 Running 0 90s
# my-kafka-cluster-kafka-1 1/1 Running 0 90s
# my-kafka-cluster-kafka-2 1/1 Running 0 90s
# my-kafka-cluster-entity-operator-xxxxxxxxx 2/2 Running 0 60s
Strimzi also creates Ingress resources automatically for the external listener:
kubectl -n kafka get ingress
# NAME CLASS HOSTS ADDRESS PORTS AGE
# my-kafka-cluster-kafka-external-bootstrap nginx kafka.example.com 203.0.113.10 80, 443 2m
# my-kafka-cluster-kafka-external-broker-0 nginx kafka-broker-0.example.com 203.0.113.10 80, 443 2m
# my-kafka-cluster-kafka-external-broker-1 nginx kafka-broker-1.example.com 203.0.113.10 80, 443 2m
# my-kafka-cluster-kafka-external-broker-2 nginx kafka-broker-2.example.com 203.0.113.10 80, 443 2m
Configuring Nginx Ingress for Kafka TCP Passthrough
Kafka uses a binary protocol over TCP — it is not HTTP. Standard Nginx Ingress HTTP routing won't work here. We need to configure Nginx to do TCP passthrough on port 9094, forwarding raw TCP connections directly to the Kafka broker services.
How TCP Passthrough Works
Nginx Ingress supports TCP and UDP proxying via a ConfigMap. You map an external port on the ingress controller to an internal Kubernetes service. The connection is forwarded at the TCP layer — Nginx doesn't inspect or terminate the TLS; it just passes bytes through.
This is important: with TCP passthrough, TLS is handled by the Kafka broker itself (using the certificates Strimzi generates), not by Nginx.
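Once the passthrough is configured (Steps 1-3 below), you can confirm this from outside the cluster: the certificate presented on port 9094 should be the broker's, issued by the Strimzi cluster CA, not the ingress controller's default certificate.

```shell
# Print the subject and issuer of the certificate presented on 9094.
# An issuer of the Strimzi cluster CA proves Nginx passed the bytes
# through without terminating TLS itself.
openssl s_client -connect kafka.example.com:9094 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```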
Step 1: Create the TCP Services ConfigMap
# nginx-tcp-services.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # Format: "namespace/service-name:port"
  # Port 9094 on the ingress controller forwards to the Kafka bootstrap service
  "9094": "kafka/my-kafka-cluster-kafka-external-bootstrap:9094"
kubectl apply -f nginx-tcp-services.yaml
Step 2: Update the Nginx Ingress Controller Deployment
If you installed Nginx Ingress with Helm, a single upgrade wires everything up: setting a tcp.<port> value makes the chart create its own TCP services ConfigMap, add the --tcp-services-configmap argument to the controller, and expose the port on its Service. In that case Steps 1 and 3 are only needed for manifest-based installs:
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set tcp.9094="kafka/my-kafka-cluster-kafka-external-bootstrap:9094"
If you installed Nginx Ingress via manifests, patch the Deployment directly:
kubectl -n ingress-nginx patch deployment ingress-nginx-controller \
  --type=json \
  -p='[
    {
      "op": "add",
      "path": "/spec/template/spec/containers/0/args/-",
      "value": "--tcp-services-configmap=ingress-nginx/tcp-services"
    }
  ]'
Step 3: Expose Port 9094 on the Ingress Service
The ingress controller's Service needs to expose port 9094 externally:
# patch-ingress-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ports:
    - name: kafka-external
      port: 9094
      targetPort: 9094
      protocol: TCP
Rather than editing the Service manifest directly, you can append the port with a JSON patch:
kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  --type=json \
  -p='[
    {
      "op": "add",
      "path": "/spec/ports/-",
      "value": {
        "name": "kafka-external",
        "port": 9094,
        "targetPort": 9094,
        "protocol": "TCP"
      }
    }
  ]'
Verify the port is now listed:
kubectl -n ingress-nginx get svc ingress-nginx-controller
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# ingress-nginx-controller LoadBalancer 10.96.100.50 203.0.113.10 80:30080/TCP,443:30443/TCP,9094:30094/TCP 5m
Configuring TLS for External Access
Strimzi automatically generates TLS certificates for the Kafka cluster using its internal CA. External clients need to trust this CA to establish a TLS connection.
Retrieve the Cluster CA Certificate
# Extract the CA certificate from the Kubernetes secret
kubectl -n kafka get secret my-kafka-cluster-cluster-ca-cert \
-o jsonpath='{.data.ca\.crt}' | base64 -d > kafka-ca.crt
# Verify the certificate
openssl x509 -in kafka-ca.crt -text -noout | grep -E "Subject:|Issuer:|Not After"
Option A: Use Strimzi's CA (Self-Signed)
For internal or development use, you can configure your Kafka client to trust the Strimzi CA directly:
# kafka-client.properties
bootstrap.servers=kafka.example.com:9094
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=changeit
Create the truststore from the CA cert:
keytool -import \
-alias strimzi-kafka-ca \
-file kafka-ca.crt \
-keystore kafka.truststore.jks \
-storepass changeit \
-noprompt
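Before touching a Kafka client, you can sanity-check both the truststore and the chain the broker actually presents:

```shell
# Confirm the CA entry landed in the truststore
keytool -list \
  -keystore kafka.truststore.jks \
  -storepass changeit \
  -alias strimzi-kafka-ca

# Verify the broker's certificate chain against the extracted CA;
# look for a "Verify return code" of 0 in the output
openssl s_client -connect kafka.example.com:9094 -CAfile kafka-ca.crt </dev/null 2>/dev/null \
  | grep "Verify return code"
```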
Option B: Use cert-manager with Let's Encrypt (Production)
For production, replace Strimzi's self-signed CA with a publicly trusted certificate using cert-manager.
First, create a Certificate resource for the bootstrap and broker hostnames:
# kafka-tls-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kafka-external-tls
  namespace: kafka
spec:
  secretName: kafka-external-tls-secret
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - kafka.example.com
    - kafka-broker-0.example.com
    - kafka-broker-1.example.com
    - kafka-broker-2.example.com
Then reference this secret in your Kafka listener configuration:
# Add to the external listener in kafka-cluster.yaml
configuration:
  brokerCertChainAndKey:
    secretName: kafka-external-tls-secret
    certificate: tls.crt
    key: tls.key
With a publicly trusted cert, clients don't need a custom truststore — the standard JVM truststore works out of the box.
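With that in place, the client configuration shrinks to just the bootstrap address and protocol; the truststore lines from Option A disappear (hostname as used throughout this guide):

```properties
# kafka-client.properties with a publicly trusted certificate
bootstrap.servers=kafka.example.com:9094
security.protocol=SSL
```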
DNS Configuration
Each broker needs its own DNS record pointing to the ingress controller's external IP. With TCP passthrough, Nginx routes based on the destination port, not the hostname — so all broker hostnames can point to the same IP.
# DNS records (A records or CNAMEs to your ingress IP)
kafka.example.com A 203.0.113.10
kafka-broker-0.example.com A 203.0.113.10
kafka-broker-1.example.com A 203.0.113.10
kafka-broker-2.example.com A 203.0.113.10
If you're using Cloudflare, set the proxy status to DNS only (grey cloud) for these records. Cloudflare's proxy doesn't support arbitrary TCP ports like 9094.
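Before moving on to client testing, it's worth confirming that every record resolves to the ingress IP; a quick loop with dig catches typos early:

```shell
# Each hostname should print the ingress controller's external IP
for host in kafka.example.com \
            kafka-broker-0.example.com \
            kafka-broker-1.example.com \
            kafka-broker-2.example.com; do
  echo "$host -> $(dig +short "$host" | tail -n 1)"
done
```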
Testing External Connectivity
Using kafka-console-producer and kafka-console-consumer
Download the Kafka binaries on a machine outside your cluster:
# Download Kafka 3.7.0 (older releases move off downloads.apache.org
# to the archive, which keeps every version)
curl -O https://archive.apache.org/dist/kafka/3.7.0/kafka_2.13-3.7.0.tgz
tar -xzf kafka_2.13-3.7.0.tgz
cd kafka_2.13-3.7.0
Create a client properties file:
# external-client.properties
bootstrap.servers=kafka.example.com:9094
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=changeit
Create a test topic:
bin/kafka-topics.sh \
--bootstrap-server kafka.example.com:9094 \
--command-config external-client.properties \
--create \
--topic external-test \
--partitions 3 \
--replication-factor 3
Produce some messages:
echo "hello from outside the cluster" | bin/kafka-console-producer.sh \
--bootstrap-server kafka.example.com:9094 \
--producer.config external-client.properties \
--topic external-test
Consume them back:
bin/kafka-console-consumer.sh \
--bootstrap-server kafka.example.com:9094 \
--consumer.config external-client.properties \
--topic external-test \
--from-beginning \
--max-messages 1
# hello from outside the cluster
# Processed a total of 1 messages
Verify Broker Advertisement
Check that each broker is advertising the correct external hostname by querying its live configuration (kafka-metadata-quorum.sh only applies to KRaft-mode clusters, so it is not useful here):
bin/kafka-configs.sh \
--bootstrap-server kafka.example.com:9094 \
--command-config external-client.properties \
--describe \
--broker 0 \
--all | grep advertised
# advertised.listeners=EXTERNAL://kafka-broker-0.example.com:9094,PLAIN://my-kafka-cluster-kafka-0.kafka.svc.cluster.local:9092
Check Ingress Controller Logs
If connectivity fails, the Nginx Ingress logs are your first stop:
kubectl -n ingress-nginx logs \
-l app.kubernetes.io/name=ingress-nginx \
--tail=100 \
| grep -i "9094\|kafka\|error"
Troubleshooting Common Issues
Client connects but immediately disconnects:
The broker's advertised address doesn't match what the client expects. Verify the configuration.brokers[*].host values in your Kafka CR match your DNS records exactly.
SSL handshake failure: The client doesn't trust the broker's certificate. Either import the Strimzi CA into your truststore, or switch to a publicly trusted cert via cert-manager.
Port 9094 not reachable from outside:
Check that the ingress controller Service exposes port 9094, the TCP ConfigMap is applied, and the --tcp-services-configmap argument is set on the controller Deployment.
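A quick way to separate a routing problem from a TLS problem is to test the raw TCP connection first:

```shell
# If this fails, the problem is DNS, the Service port, or the TCP
# ConfigMap — not TLS or Kafka itself
nc -vz kafka.example.com 9094
```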
Strimzi Ingress resources not created:
The class: nginx field in the listener configuration must match the ingressClassName of your Nginx Ingress Controller. Check with kubectl get ingressclass.
ZooKeeper connection refused:
If you're on Kafka 3.7+ and want to run in KRaft mode (no ZooKeeper), Strimzi 0.40+ supports it: move the broker definitions into KafkaNodePool resources, annotate the Kafka CR with strimzi.io/kraft: enabled and strimzi.io/node-pools: enabled, remove the zookeeper spec, and set spec.kafka.metadataVersion.
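As a hedged sketch (field names follow Strimzi's KafkaNodePool v1beta2 schema; the pool name is illustrative), a dual-role KRaft pool replacing the inline broker definition might look like this:

```yaml
# kraft-node-pool.yaml — a dual-role (controller + broker) pool
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  namespace: kafka
  labels:
    strimzi.io/cluster: my-kafka-cluster
spec:
  replicas: 3
  roles:
    - controller
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 20Gi
        deleteClaim: false
```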
Production Considerations
A few things to address before this setup handles real traffic:
- Broker-level TCP routing — The setup above routes all external traffic through the single bootstrap port, so a connection to an individual broker hostname can land on any broker. For production use, configure per-broker TCP ports (e.g., 9095, 9096, 9097), each mapped to that broker's service, so clients reach the exact broker named in the metadata they fetched.
- Authentication — Add a KafkaUser resource with SCRAM-SHA-512 or mTLS authentication. Strimzi's User Operator manages credentials as Kubernetes Secrets.
- Network policies — Restrict which pods and external IPs can reach the Kafka namespace using NetworkPolicy resources.
- Monitoring — Strimzi exposes JMX metrics via a Prometheus JMX Exporter sidecar. Add metricsConfig to your Kafka CR and scrape with Prometheus.
- Topic management — Use KafkaTopic resources instead of the CLI so topic configuration is version-controlled and reconciled by the Topic Operator.
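For the last point, the external-test topic created earlier with the CLI could instead be declared as a resource. A sketch (the retention value is illustrative, not from this guide):

```yaml
# external-test-topic.yaml — declarative replacement for kafka-topics.sh
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: external-test
  namespace: kafka
  labels:
    # Tells the Topic Operator which cluster owns this topic
    strimzi.io/cluster: my-kafka-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000  # 7 days, illustrative
```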
Conclusion
Strimzi makes Kafka on Kubernetes genuinely manageable — the operator handles the hard parts of broker configuration, certificate rotation, and rolling upgrades. The trickiest piece is external access, and TCP passthrough via Nginx Ingress is a clean solution that works across cloud and on-prem environments without requiring a cloud load balancer per broker.
The key insight is that Kafka's metadata protocol means clients need to reach individual brokers by their advertised addresses after the initial bootstrap. Getting those addresses right — and making sure DNS, TLS, and TCP routing all agree — is what makes external access work reliably.
From here, the natural next steps are adding SCRAM or mTLS authentication via KafkaUser resources, setting up the Prometheus JMX Exporter for broker metrics, and exploring KRaft mode to eliminate the ZooKeeper dependency entirely.