Have you just started using the built-in Kubernetes functionality in Docker for Mac? It is a promising alternative to docker-compose if you want to mirror your production infrastructure for local development. If you already run Kubernetes in production, you can reuse your existing pod definitions on your machine without having to set up a Kubernetes cluster such as minikube yourself. This short blog post shows you how to collect all logs for your local cluster.
You switch to your local cluster using kubectl config use-context docker-for-desktop and can then run all the usual kubectl commands, such as fetching logs for your pods and containers. Sometimes, however, you may want to store all your logs in one place for later analysis. Typically, that is a job for the ELK / Elastic Stack or tools such as Graylog or the Google Cloud Operations Suite, but those seem like total overkill for a local development environment. Unfortunately, there is nothing like docker-compose logs to collect all logs at once.
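For reference, the per-pod workflow looks like this (the pod name my-app below is just a placeholder for one of your own pods):

```shell
# Point kubectl at the local Docker for Mac cluster
kubectl config use-context docker-for-desktop

# The usual commands work as on any other cluster;
# "my-app" stands in for one of your pod names
kubectl get pods
kubectl logs -f my-app
```

This streams the logs of one pod at a time, which is exactly the limitation the script below works around.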
Using a little Bash, you can still collect all logs in one place. The following script leverages the fact that all Kubernetes pods on Docker for Mac are still run via Docker and pipes the log of each container into its own file:
#!/bin/bash

# If there is no logs directory, create it
if [[ ! -d logs ]]
then
    mkdir logs
fi

# Infinite loop to collect logs
while true
do
    # Get all Docker containers run by Kubernetes, except the Kubernetes
    # system containers such as the Kubernetes dashboard
    # via: https://stackoverflow.com/questions/36756751/view-logs-for-all-docker-containers-simultaneously
    for c in $(docker ps -a --format="{{.Names}}" | egrep k8s | egrep -v "kubernetes")
    do
        if [[ ! -f logs/$c.log ]]
        then
            echo "Creating log pipe for: $c"
            docker logs -f "$c" > "logs/$c.log" 2>&1 &
        fi
    done
    # Sleep ten seconds before the next attempt
    sleep 10
done
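If you are curious what the name filter in the loop actually keeps, here is a quick dry run of the same egrep pipeline against some made-up container names (Docker for Mac prefixes Kubernetes-managed containers with k8s_; the sample names are purely hypothetical):

```shell
# Sample output as `docker ps --format "{{.Names}}"` might produce it
# (the container names below are made up for illustration)
names='k8s_web_myapp-6d5f
k8s_db_myapp-6d5f
k8s_kubernetes-dashboard-7798
local_nginx_1'

# Keep Kubernetes containers, drop system ones such as the dashboard
echo "$names" | egrep k8s | egrep -v kubernetes
```

Only the two application containers survive the filter; the dashboard and the plain local container are dropped.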
How do you like this workaround? Just leave me a comment with your thoughts!