Where Should You Actually Run Jenkins?
I've run Jenkins on bare metal, in Docker containers, on Kubernetes, and even on a Raspberry Pi (don't ask). Here's what actually works, what doesn't, and when to use each approach.
The "where should I run Jenkins?" question comes up in every architecture discussion. The answer isn't simple because Jenkins is weird—it's stateful, it spawns jobs that need their own resources, and it has plugins that expect filesystem access. Get the deployment wrong, and you'll spend more time fighting Jenkins than actually using it.
Bottom line: Start with Docker Compose on a VM for small teams. Move to Kubernetes only when you need high availability or dynamic scaling. Bare metal is fine if you're okay with manual maintenance. Avoid running Jenkins controller pods in K8s unless you really need HA—the complexity usually isn't worth it.
The Jenkins Deployment Problem
Jenkins started life as Hudson in the mid-2000s and was forked under the Jenkins name in 2011. It predates Docker (2013), Kubernetes (2014), and modern CI/CD patterns. This means:
- It's stateful as hell: Jobs, build history, plugins, credentials—all stored on disk
- It wants to manage agents: Jenkins assumes it can SSH into machines or spin up new ones
- Plugin chaos: 1,800+ plugins with varying quality and assumptions about the runtime environment
- Java heap management: You need to tune JVM settings for your workload
This isn't a criticism—Jenkins works great. But these characteristics mean deployment strategy matters more than with stateless applications.
Best Way to Deploy Jenkins Controller: Docker Container on a VM
The Setup
Run Jenkins as a Docker container with a bind-mounted volume for persistence:
# docker-compose.yml
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    restart: unless-stopped
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock  # For Docker-in-Docker
    environment:
      - JAVA_OPTS=-Xmx2048m -Xms1024m
volumes:
  jenkins_home:
When This Works
This is my go-to for:
- Small to medium teams (1-50 developers): Simple, predictable, easy to troubleshoot
- Single-server simplicity: Everything on one machine, no orchestration complexity
- Quick setup: Five minutes from zero to running Jenkins
- Easy backups: docker-compose down, copy the volume, done (sketch below)
- Plugin testing: Spin up a test Jenkins in seconds
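A minimal backup sketch, assuming the named volume is called jenkins_home (Compose usually prefixes it with the project name, so check docker volume ls first):

# Stop Jenkins so the home directory is consistent, then archive the volume
docker-compose down
docker run --rm \
  -v jenkins_home:/var/jenkins_home:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/jenkins_home-$(date +%F).tar.gz -C /var/jenkins_home .
docker-compose up -d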
The Real-World Experience
I've run production Jenkins this way for a 30-person engineering team. It just works. Upgrades are docker-compose pull && docker-compose up -d. When disk fills up, you know exactly where to look. When something breaks, you're not debugging Kubernetes networking on top of Jenkins issues.
The only time this setup hurt us: the VM went down and everyone was blocked for 20 minutes while it restarted. For most teams, this is an acceptable trade-off compared to the complexity of HA setups.
What Are the Risks of Running Jenkins in Docker?
- Docker-in-Docker: If your builds need Docker, you're mounting /var/run/docker.sock. This works but is a security risk—jobs can escape containers
- Resource limits: Set them in docker-compose or Jenkins will eat all your RAM (see the example below)
- Backup strategy: Automate volume backups or you'll lose everything
- No high availability: VM goes down = Jenkins is down
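If you didn't set resource limits up front, you can cap the running container without touching the compose file. The numbers here are examples, not recommendations:

# Cap memory and CPU on the already-running container
docker update --memory 4g --memory-swap 4g --cpus 2 jenkins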
How to Install Jenkins on Linux VM (Bare Metal)
The Setup
Install Jenkins directly on a Linux VM:
# Ubuntu/Debian
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list
sudo apt update
sudo apt install fontconfig openjdk-17-jre  # Jenkins needs Java 17 or 21
sudo apt install jenkins
# Start Jenkins
sudo systemctl start jenkins
sudo systemctl enable jenkins
When This Works
- You're already managing VMs: No Docker/K8s in your stack
- Compliance requirements: Some orgs don't allow containers
- Heavy plugin usage: Some ancient plugins assume filesystem paths that break in containers
- Maximum performance: No container overhead (though it's negligible)
- Legacy systems: Existing Jenkins that's been running for years
The Honest Truth
This is how we ran Jenkins from 2018-2020. It worked fine, but upgrades were a pain. You're managing Java versions, system packages, plugin conflicts, and dealing with "works on my machine" problems when troubleshooting.
The killer issue: reproducibility. When Jenkins breaks, you can't just spin up an identical copy to test fixes. You're SSH'd into prod, making changes, and hoping.
Watch Out For
- Manual updates: You're responsible for OS patches, Java updates, Jenkins updates
- Difficult migrations: Moving to a new server means migrating plugins, jobs, configs manually
- No rollback: Update breaks something? Hope you have a snapshot
- Server drift: Over time, your Jenkins server becomes a unique snowflake
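That "hope you have a snapshot" problem is at least partly solvable: the Debian package keeps everything under /var/lib/jenkins, so a pre-upgrade copy gives you something to roll back to. A rough sketch (backup paths are ours, adjust to your layout):

# Stop Jenkins, archive JENKINS_HOME, then upgrade
sudo systemctl stop jenkins
sudo tar czf /backup/jenkins-home-$(date +%F).tar.gz -C /var/lib/jenkins .
sudo apt update && sudo apt install --only-upgrade jenkins
sudo systemctl start jenkins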
Should I Run Jenkins in Kubernetes? (Controller + Agents)
The Setup
Jenkins controller as a StatefulSet, agents as ephemeral pods:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
spec:
  serviceName: jenkins
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - containerPort: 8080
            - containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          resources:
            requests:
              memory: "2Gi"
              cpu: "1000m"
            limits:
              memory: "4Gi"
              cpu: "2000m"
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 100Gi
When This Works
- You're already on Kubernetes: Everything else runs there, why not Jenkins?
- Dynamic agent scaling: Spin up pods per build, destroy when done
- Resource isolation: Each build gets its own container with CPU/memory limits
- Multi-tenant builds: Different teams need isolated build environments
- Cloud-native stack: Your entire infrastructure is K8s-based
Why I Actually Recommend This (Sometimes)
We moved to this model when our build queue started backing up. With Kubernetes agents, build concurrency scales with available cluster resources—each build gets its own pod with allocated CPU/memory limits. When builds finish, pods disappear and resources are freed.
The Jenkins Kubernetes plugin works surprisingly well. Configure it once, and Jenkins automatically provisions pods for each build:
// Jenkinsfile
pipeline {
  agent {
    kubernetes {
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.8-jdk-11
    command: ['sleep']
    args: ['infinity']
  - name: docker
    image: docker:latest
    command: ['sleep']
    args: ['infinity']
'''
    }
  }
  stages {
    stage('Build') {
      steps {
        container('maven') {
          sh 'mvn clean package'
        }
      }
    }
  }
}
Common Problems Running Jenkins on Kubernetes
- Complexity explosion: Now you're debugging Jenkins AND Kubernetes
- Persistent volume drama: Jenkins needs ReadWriteOnce storage, which limits where pods can run
- Slow agent startup: Pulling images for each build adds 30-60 seconds
- Plugin compatibility: Some plugins don't play nice with K8s agents
- Network policies: Jenkins needs to reach agents, which can be tricky with strict network policies
Is Jenkins High Availability Worth It? (Kubernetes HA Setup)
The Reality Check
Running multiple Jenkins controllers for high availability is technically possible, but honestly: don't do it unless you have a very specific need.
The problem: Jenkins wasn't designed for active-active clustering. You can set up active-passive with shared storage, but then you're dealing with:
- ReadWriteMany storage (expensive, slow, complex)
- Session affinity for web UI
- Database-backed job storage (with CloudBees plugins)
- Complex failover logic
When This Actually Makes Sense
- You're paying for CloudBees: Their enterprise version handles this better
- Downtime costs serious money: Every minute of Jenkins downtime costs thousands of dollars
- You have a dedicated platform team: Someone's full-time job is keeping Jenkins running
For 99% of teams: just accept that Jenkins will be down during upgrades. It's not worth the complexity.
Jenkins Agents: Static vs Dynamic (Which Is Better?)
Here's something nobody talks about: where you run the Jenkins controller matters way less than how you run agents.
Static Agents (The Old Way)
Jenkins Controller → SSH → Permanent Agent VMs
Pros: Simple, predictable, works with every plugin
Cons: Wasted resources, configuration drift, manual scaling
We had 10 permanent agent VMs. On average, 7 were idle. That's thousands of dollars a month on idle VMs. Plus, every time we needed to update Docker or dependencies, we had to manually SSH into 10 machines.
Dynamic Agents (The New Way)
Jenkins Controller → Kubernetes/Docker → Ephemeral Agents
Pros: Scales infinitely, no configuration drift, cost-efficient
Cons: Slower startup, requires good image management
Now we run 0 permanent agents. Every build gets a fresh pod that's destroyed when done. Builds are more reliable because there's no state lingering between builds, and we only pay for compute during active builds rather than maintaining idle capacity.
My Recommendation
Regardless of where you run the Jenkins controller, use dynamic agents. Even if Jenkins itself runs in Docker Compose, configure it to spin up Docker containers as agents.
Which Jenkins Deployment Should You Choose?
You Should Use Docker Compose If:
- Team size: 1-50 developers
- You have one VM to dedicate
- Downtime of 10-30 minutes is acceptable
- You want simple backups and upgrades
- You're not on Kubernetes for other things
You Should Use Kubernetes If:
- Team size: 50+ developers
- You're already running K8s
- You need dynamic build agent scaling
- You run many concurrent builds
- You want resource isolation between builds
You Should Use Bare Metal If:
- You already have a VM and don't want to change
- Compliance requires it
- You have plugins that don't work in containers
- You're very comfortable with traditional Linux administration
You Should Skip HA Jenkins Unless:
- Downtime literally costs $10k+ per hour
- You're paying for CloudBees Enterprise
- You have a dedicated platform team
What We Actually Run
After trying everything, here's our current setup:
- Jenkins controller: Docker Compose on a dedicated EC2 instance
- Jenkins agents: Kubernetes pods that spin up per-build
- Backup: Daily snapshots of the jenkins_home volume to S3
- Monitoring: Prometheus metrics + PagerDuty for alerts (quick check below)
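If you export metrics the same way, one common route is the Prometheus metrics plugin, which with its default settings serves a scrape endpoint you can sanity-check like this:

# Quick check that Jenkins is exposing metrics (assumes the Prometheus metrics plugin is installed)
curl -s http://localhost:8080/prometheus/ | head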
Why not run the controller on Kubernetes? Because the complexity wasn't worth it. We get:
- Simple troubleshooting (just SSH to one machine)
- Fast startup (no image pulling for controller)
- Predictable performance (dedicated resources)
- Easy rollback (docker-compose down, restore volume, up)
But we get dynamic agent scaling and resource isolation by using K8s agents. Best of both worlds.
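The rollback path looks roughly like this, assuming the daily S3 snapshots mentioned above (bucket and file names are illustrative):

# Restore the controller from the latest snapshot
docker-compose down
aws s3 cp s3://my-jenkins-backups/jenkins_home-latest.tar.gz .
docker run --rm \
  -v jenkins_home:/var/jenkins_home \
  -v "$(pwd)":/backup \
  alpine sh -c 'rm -rf /var/jenkins_home/* && tar xzf /backup/jenkins_home-latest.tar.gz -C /var/jenkins_home'
docker-compose up -d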
The Mistakes I've Made
Mistake 1: Jenkins Controller in K8s Too Early
We moved to K8s-based Jenkins when we had 15 developers. The operational overhead of managing StatefulSets, PVCs, and debugging pod issues wasn't worth it. Went back to Docker Compose and didn't regret it.
Mistake 2: Not Planning for Disk Growth
Jenkins home directory grows fast—build artifacts, logs, workspaces. We started with a 50GB volume and filled it in 3 months. Now we use 200GB and have alerts at 80% usage.
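The 80% alert doesn't need anything fancy; a cron-able check like this is enough (the volume path assumes Docker's default local driver, the threshold is arbitrary, and you can swap mail for whatever alerting you use):

# Warn when the Jenkins volume passes 80% usage
USAGE=$(df --output=pcent /var/lib/docker/volumes/jenkins_home/_data | tail -1 | tr -dc '0-9')
if [ "$USAGE" -ge 80 ]; then
  echo "Jenkins volume at ${USAGE}%" | mail -s "Jenkins disk alert" ops@example.com
fi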
Mistake 3: Mounting Docker Sock Without Thinking
Mounting /var/run/docker.sock into Jenkins lets jobs control Docker, but it's basically root access to the host. One malicious Jenkinsfile can take over your entire server. We now use dedicated build agents with Docker-in-Docker instead.
Mistake 4: Not Using Configuration as Code
We configured Jenkins via the UI for years. Every time we rebuilt it, we had to remember dozens of settings. Now we use the Configuration as Code plugin and Jenkins configs live in Git. Game changer.
How to Secure Jenkins in Production
Jenkins security is often an afterthought until an incident happens. Here's what matters in production environments.
Plugin Dependency Isolation
Jenkins plugins run in the same JVM as the controller and share the classpath. This creates dependency conflicts and security risks:
- Version Conflicts: Plugin A needs Jackson 2.12, Plugin B needs 2.15. One breaks.
- Privilege Escalation: Malicious plugins can access credentials, modify jobs, or execute arbitrary code
- Supply Chain Attacks: Compromised plugin updates can affect all Jenkins instances automatically
Mitigation Strategies:
- Pin Plugin Versions: Don't auto-update plugins. Test updates in non-production first
- Audit Installed Plugins: Remove unused plugins. Each one is a potential vulnerability
- Use Plugin Bill of Materials: Jenkins publishes tested plugin version combinations
- Monitor Plugin Security Advisories: Subscribe to jenkins-advisories mailing list
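Pinning in practice: keep a plugins.txt under version control and resolve it with the jenkins-plugin-cli that ships in the official image. The plugin IDs below are real, but the versions are placeholders; take the actual versions from the plugin BOM or your last known-good instance:

# plugins.txt: one plugin per line as id:version
cat > plugins.txt <<'EOF'
configuration-as-code:1.55
docker-plugin:1.5
EOF

# Download the pinned set into ./plugins for auditing or baking into an image
docker run --rm -v "$(pwd)":/work jenkins/jenkins:lts \
  jenkins-plugin-cli --plugin-file /work/plugins.txt --plugin-download-directory /work/plugins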
Build Job Sandboxing
By default, Jenkins jobs run with significant access. In Docker or Kubernetes deployments, jobs can often:
- Access the Docker socket (equivalent to root on host)
- Read secrets from environment variables or mounted volumes
- Reach internal services over the network
- Consume unlimited CPU/memory
Implement Defense in Depth:
# Kubernetes Pod Security Standards
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: agent
      image: jenkins/inbound-agent:latest
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
      resources:
        limits:
          cpu: "2"
          memory: "4Gi"
        requests:
          cpu: "500m"
          memory: "1Gi"
Additional Sandboxing Techniques:
- Network Policies: Restrict agent egress to only required services (artifact repos, source control)
- Separate Namespaces: Isolate Jenkins agents by team or sensitivity level
- gVisor or Kata Containers: For high-security environments, use VM-isolated containers
- Resource Quotas: Prevent single jobs from consuming entire cluster
How to Manage Jenkins Secrets Securely
Jenkins manages credentials for source control, cloud providers, databases, and deployment targets. Poor secrets management is the most common Jenkins security failure.
Never Do This:
- Store secrets in job configurations or Jenkinsfiles
- Echo credentials in build logs
- Use the default Jenkins credentials store without encryption
- Share admin credentials across teams
Production-Grade Secrets Management:
// Use an external secret manager instead of hard-coding secrets
pipeline {
  agent any
  environment {
    // Credentials Binding plugin: values are masked in build logs
    AWS_CREDENTIALS = credentials('aws-prod-deployer')
  }
  stages {
    stage('Deploy') {
      steps {
        // HashiCorp Vault plugin: fetch the secret at runtime, masked in logs
        withVault(vaultSecrets: [[path: 'secret/database',
                                  secretValues: [[envVar: 'DB_PASSWORD', vaultKey: 'password']]]]) {
          sh '''
            aws s3 cp build.zip s3://artifacts/
          '''
        }
      }
    }
  }
}
Recommended Secret Backends:
- HashiCorp Vault: Industry standard, audit logs, dynamic secrets, credential rotation
- AWS Secrets Manager: Native AWS integration, automatic rotation
- Azure Key Vault: For Azure-centric environments
- Kubernetes Secrets: Acceptable for K8s deployments, but use External Secrets Operator
Critical: Enable Credentials Masking
Install the "Credentials Binding Plugin" and "Mask Passwords Plugin" to prevent accidental credential exposure in logs.
Network Policies and Segmentation
In Kubernetes deployments, Jenkins often has excessive network access. Apply principle of least privilege:
# Restrict Jenkins agent network access
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: jenkins-agent-policy
  namespace: jenkins-agents
spec:
  podSelector:
    matchLabels:
      app: jenkins-agent
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow the Jenkins controller to communicate with agents
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: jenkins
      ports:
        - protocol: TCP
          port: 50000
  egress:
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
    # Allow HTTPS out for downloading dependencies, but block the cloud
    # metadata service (prevents credential theft)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32
      ports:
        - protocol: TCP
          port: 443
Additional Network Controls:
- Firewall Rules: Restrict Jenkins controller ingress to VPN or corporate network
- Service Mesh: Use Istio/Linkerd to enforce mTLS between Jenkins and agents
- Egress Filtering: Prevent data exfiltration by blocking unexpected outbound connections
Role-Based Access Control
Jenkins' default authorization model is too permissive for organizations. Implement granular RBAC:
# Configuration as Code - Matrix Authorization
jenkins:
  authorizationStrategy:
    projectMatrix:
      permissions:
        # Admins
        - "Overall/Administer:jenkins-admins"
        # Developers can view and build
        - "Job/Build:developers"
        - "Job/Read:developers"
        - "Job/Cancel:developers"
        # Read-only for auditors
        - "Overall/Read:auditors"
        - "Job/Read:auditors"
        # Block anonymous access
        - "Overall/Read:authenticated"
Best Practices:
- Folder-Based Authorization: Use Jenkins folders to scope permissions by team/project
- Audit Logging: Enable audit trail plugin to track who changed what
- SSO Integration: Use SAML/OIDC for centralized authentication (LDAP is acceptable)
- Principle of Least Privilege: Developers shouldn't access production deployment jobs
Should You Mount Docker Socket in Jenkins? (Alternatives)
Mounting /var/run/docker.sock is convenient but dangerous—it grants root-equivalent access to the host. Alternatives:
1. Docker-outside-of-Docker (DooD) with Rootless Docker:
# Run Docker daemon in rootless mode
dockerd-rootless-setuptool.sh install
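Then point the Jenkins agent's Docker client at the rootless socket instead of the privileged one (the path below assumes the daemon runs as UID 1000):

# Rootless dockerd listens on the user's runtime dir, not /var/run/docker.sock
export DOCKER_HOST=unix:///run/user/1000/docker.sock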
2. Kaniko for Container Builds:
// Build container images without a Docker daemon
pipeline {
  agent {
    kubernetes {
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug  # :debug includes a shell, which the agent needs
    command: ["/busybox/cat"]
    tty: true
'''
    }
  }
  stages {
    stage('Build Image') {
      steps {
        container('kaniko') {
          sh '''
            /kaniko/executor \\
              --context=dir://$WORKSPACE \\
              --destination=myrepo/myapp:${BUILD_NUMBER}
          '''
        }
      }
    }
  }
}
3. Buildah or Podman:
Rootless container builds without privileged access.
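A rough sketch of the Podman route: the CLI is Docker-compatible enough that a shell build step needs no daemon and no socket mount (registry and image names are placeholders):

# Build and push an image rootlessly with Podman
podman build -t registry.example.com/myapp:latest .
podman push registry.example.com/myapp:latest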
Jenkins Docker Compose Setup (Quick Start Guide)
If you're starting fresh today, here's what I'd do:
Step 1: Docker Compose Setup
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    restart: unless-stopped
    user: root  # Needed for Docker socket access
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
      - ./casc.yaml:/var/jenkins_home/casc.yaml  # Configuration as Code file from step 2
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - JAVA_OPTS=-Xmx2048m -Xms1024m -Djenkins.install.runSetupWizard=false
      - CASC_JENKINS_CONFIG=/var/jenkins_home/casc.yaml
      - JENKINS_ADMIN_PASSWORD=${JENKINS_ADMIN_PASSWORD}  # Consumed by casc.yaml below
volumes:
  jenkins_home:
Step 2: Configuration as Code
# casc.yaml
jenkins:
  systemMessage: "Jenkins configured automatically"
  numExecutors: 0  # Controller doesn't run builds
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: admin
          password: ${JENKINS_ADMIN_PASSWORD}
  authorizationStrategy:
    globalMatrix:
      permissions:
        - "Overall/Administer:admin"
        - "Overall/Read:authenticated"
  clouds:
    - docker:
        name: "docker"
        dockerApi:
          dockerHost:
            uri: "unix:///var/run/docker.sock"
        templates:
          - labelString: "docker-agent"
            dockerTemplateBase:
              image: "jenkins/inbound-agent:latest"
              volumes:
                - type: "bind"
                  source: "/var/run/docker.sock"
                  target: "/var/run/docker.sock"
            remoteFs: "/home/jenkins/agent"
            connector:
              attach:
                user: "root"
            instanceCapStr: "10"
Step 3: Launch It
export JENKINS_ADMIN_PASSWORD=changeme
docker-compose up -d
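Give it a minute to boot, then confirm it's actually serving (an optional sanity check, not part of the setup itself):

# Watch startup logs and poke the login page
docker-compose logs --tail=50 jenkins
curl -sf http://localhost:8080/login > /dev/null && echo "Jenkins is up"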
That's it. Install the Configuration as Code and Docker plugins (the config above relies on both), and you've got Jenkins with dynamic Docker agents in about 5 minutes.
When to Migrate to Kubernetes
You'll know it's time when:
- You're running 50+ builds per day and docker-compose agents can't keep up
- You need resource quotas per team/project
- You're already running everything else on K8s and Jenkins is the oddball
- You need multiple agent types with different tooling (Java, Node, Python, etc.)
- Your team has K8s expertise and troubleshooting won't be a nightmare
But even then, consider keeping the controller on Docker Compose and only moving agents to K8s.
Final Thoughts
The best Jenkins deployment is the one you can actually maintain. Docker Compose is underrated—it's simple, reliable, and handles 90% of use cases. Kubernetes makes sense for dynamic agents, but running the controller there adds complexity you might not need.
Don't overthink it. Start simple, measure actual pain points, and add complexity only when it solves a real problem. Jenkins downtime for upgrades once a month is not a real problem for most teams. Spending a week debugging PVC issues in Kubernetes is.
TL;DR: Docker Compose for the controller, dynamic agents (Docker or K8s) for builds. Kubernetes for everything only if you're already deep in the K8s ecosystem and have the expertise. Bare metal if you hate yourself or have strict compliance requirements. Skip HA unless downtime actually costs you serious money.