JFrog Artifactory is a universal binary repository manager that stores, versions, and distributes build artifacts across your software delivery pipeline. It supports package formats including Docker images, Maven, npm, PyPI, Helm charts, and more – making it the central artifact hub for CI/CD workflows.
This guide walks through deploying JFrog Artifactory on a Kubernetes cluster using Helm charts and exposing it through an Ingress controller. We cover custom values configuration, persistent storage, SSL/TLS termination with cert-manager, repository setup for Docker/Maven/npm/PyPI, backup strategy, upgrades, and Prometheus monitoring.
Prerequisites
Before starting, make sure you have the following in place:
- A running Kubernetes cluster (v1.24+) with at least 3 nodes, each with 4GB RAM and 2 CPU cores. If you need to set one up, follow our guide on installing Kubernetes with kubeadm on Ubuntu
- kubectl installed and configured to communicate with your cluster
- Helm 3 installed on your workstation. See how to install and use Helm 3 on Kubernetes
- A StorageClass configured for dynamic volume provisioning (default or custom)
- An Ingress controller deployed in the cluster (NGINX Ingress Controller recommended)
- A domain name pointing to your Ingress controller’s external IP (e.g., artifactory.example.com)
- A JFrog Artifactory license key (for Pro/Enterprise edition), or use the OSS/Community edition
Verify your cluster is healthy before proceeding.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 12d v1.31.4
worker1 Ready <none> 12d v1.31.4
worker2 Ready <none> 12d v1.31.4
Check that a StorageClass is available.
$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION
standard (default) rancher.io/local-path Delete WaitForFirstConsumer false
If you need NFS-based persistent storage, follow our guide on configuring NFS for Kubernetes persistent storage.
Step 1: Add the JFrog Helm Repository
JFrog publishes official Helm charts for Artifactory. Add the repository and update the local chart index.
helm repo add jfrog https://charts.jfrog.io
helm repo update
Verify the repository was added successfully.
$ helm search repo jfrog/artifactory
NAME CHART VERSION APP VERSION DESCRIPTION
jfrog/artifactory 107.98.x 7.98.x Universal Repository Manager supporting all majo...
jfrog/artifactory-oss 107.98.x 7.98.x JFrog Artifactory OSS
Create a dedicated namespace for Artifactory.
kubectl create namespace artifactory
Step 2: Create a Custom values.yaml for Artifactory
The Helm chart ships with sensible defaults, but production deployments need custom configuration for persistence, database settings, Ingress, and resource limits. Create a values.yaml file with the settings below.
vim values.yaml
Add the following configuration.
# ----- Database Configuration -----
postgresql:
  enabled: true
  postgresqlUsername: artifactory
  postgresqlPassword: StrongDBPass2024
  postgresqlDatabase: artifactory
  persistence:
    enabled: true
    size: 50Gi
    storageClass: "standard"

# ----- Artifactory Core Settings -----
artifactory:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: "jdbc:postgresql://{{ .Release.Name }}-postgresql:5432/artifactory"
    username: artifactory
    password: StrongDBPass2024
  # Resource allocation
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"
  # Java options for the JVM
  javaOpts:
    xms: "2g"
    xmx: "4g"
  # Persistent storage for artifacts
  persistence:
    enabled: true
    size: 200Gi
    storageClass: "standard"
    type: file-system
  # Service configuration (keep under the same artifactory key –
  # a duplicate top-level key would silently override the settings above)
  service:
    type: ClusterIP

# ----- Ingress Configuration -----
ingress:
  enabled: true
  defaultBackend:
    enabled: false
  hosts:
    - artifactory.example.com
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  tls:
    - secretName: artifactory-tls
      hosts:
        - artifactory.example.com

# ----- Disable built-in Nginx (using Ingress instead) -----
nginx:
  enabled: false
Key settings in this file:
- PostgreSQL – The bundled PostgreSQL chart runs a database pod alongside Artifactory. For production, consider using an external managed database
- Persistence – Both the database and artifact storage use PersistentVolumeClaims backed by the standard StorageClass. Adjust the storageClass and size values to match your environment
- Ingress – NGINX Ingress Controller handles external traffic. The proxy-body-size: "0" annotation removes upload size limits, which is necessary for large artifact uploads
- Resources – Memory and CPU limits prevent Artifactory from consuming all node resources. Adjust based on your workload
Step 3: Deploy JFrog Artifactory With Helm
Install Artifactory using the custom values file. Replace artifactory.example.com in your values.yaml with your actual domain before running the install command.
helm upgrade --install artifactory jfrog/artifactory \
--namespace artifactory \
-f values.yaml \
--timeout 10m
The installation takes a few minutes. Watch the pod status until all containers are running.
$ kubectl -n artifactory get pods -w
NAME READY STATUS RESTARTS AGE
artifactory-0 1/1 Running 0 3m12s
artifactory-postgresql-0 1/1 Running 0 3m12s
Verify the Helm release status.
$ helm status artifactory -n artifactory
NAME: artifactory
LAST DEPLOYED: Thu Mar 19 10:15:32 2026
NAMESPACE: artifactory
STATUS: deployed
REVISION: 1
Check that the PersistentVolumeClaims are bound.
$ kubectl -n artifactory get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
artifactory-volume-artifactory-0 Bound pvc-abc123... 200Gi RWO standard
data-artifactory-postgresql-0 Bound pvc-def456... 50Gi RWO standard
Step 4: Configure Ingress for Artifactory
The Helm chart creates an Ingress resource automatically when ingress.enabled is set to true in values.yaml. Verify the Ingress was created.
$ kubectl -n artifactory get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
artifactory nginx artifactory.example.com 192.168.1.100 80, 443 5m
Make sure your DNS A record for artifactory.example.com points to the Ingress controller’s external IP address. You can find that IP with this command.
kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
If you do not have a LoadBalancer (e.g., bare-metal clusters), use a NodePort service or MetalLB. For testing, you can add an entry to your /etc/hosts file.
echo "192.168.1.100 artifactory.example.com" | sudo tee -a /etc/hosts
Step 5: Access the Artifactory Web UI
Open your browser and navigate to https://artifactory.example.com. The default admin credentials are:
- Username: admin
- Password: password
Change the default admin password immediately after the first login. Artifactory prompts you to set a new password, configure the base URL, and optionally set up a proxy. Complete the setup wizard to finish initial configuration.
If you cannot access the UI, check the Artifactory pod logs for errors.
kubectl -n artifactory logs artifactory-0 -c artifactory --tail=50
Verify the service is reachable from within the cluster.
kubectl -n artifactory get svc
kubectl -n artifactory run curl-test --image=curlimages/curl --rm -it --restart=Never -- curl -s -o /dev/null -w "%{http_code}" http://artifactory:8082/
A 200 response confirms Artifactory is running and accepting requests.
Step 6: Configure Repositories in Artifactory
Artifactory supports local, remote, and virtual repositories for different package types. Set up repositories for the most common formats through the web UI or the REST API.
Docker Registry Repository
Create a local Docker repository to store your container images.
curl -u admin:NewPassword123 -X PUT \
"https://artifactory.example.com/artifactory/api/repositories/docker-local" \
-H "Content-Type: application/json" \
-d '{
"key": "docker-local",
"rclass": "local",
"packageType": "docker",
"dockerApiVersion": "V2",
"description": "Local Docker registry"
}'
Create a remote Docker repository to proxy Docker Hub.
curl -u admin:NewPassword123 -X PUT \
"https://artifactory.example.com/artifactory/api/repositories/docker-remote" \
-H "Content-Type: application/json" \
-d '{
"key": "docker-remote",
"rclass": "remote",
"packageType": "docker",
"url": "https://registry-1.docker.io/",
"description": "Docker Hub proxy"
}'
Maven Repository
Create local and remote Maven repositories for Java build artifacts.
curl -u admin:NewPassword123 -X PUT \
"https://artifactory.example.com/artifactory/api/repositories/maven-local" \
-H "Content-Type: application/json" \
-d '{
"key": "maven-local",
"rclass": "local",
"packageType": "maven",
"handleReleases": true,
"handleSnapshots": true
}'
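To resolve and deploy artifacts through this repository from a build machine, Maven needs matching server credentials and a mirror entry in settings.xml. The sketch below is an example only – the server id, credentials, and the choice to mirror everything through maven-local are placeholders to adapt (in practice you would usually mirror through a virtual repository that aggregates maven-local and a remote Maven Central proxy):

```xml
<!-- ~/.m2/settings.xml – example only; id, credentials, and URL are placeholders -->
<settings>
  <servers>
    <server>
      <id>artifactory</id>
      <username>admin</username>
      <password>NewPassword123</password>
    </server>
  </servers>
  <mirrors>
    <mirror>
      <id>artifactory</id>
      <mirrorOf>*</mirrorOf>
      <url>https://artifactory.example.com/artifactory/maven-local</url>
    </mirror>
  </mirrors>
</settings>
```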
npm Repository
Set up a local npm registry for Node.js packages.
curl -u admin:NewPassword123 -X PUT \
"https://artifactory.example.com/artifactory/api/repositories/npm-local" \
-H "Content-Type: application/json" \
-d '{
"key": "npm-local",
"rclass": "local",
"packageType": "npm",
"description": "Local npm registry"
}'
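Clients can then point npm at the new registry. A project-level .npmrc might look like the following sketch; the auth token is a placeholder – generate a real one with npm login against the registry or from your Artifactory user profile:

```ini
# .npmrc – example only; the _authToken value is a placeholder
registry=https://artifactory.example.com/artifactory/api/npm/npm-local/
//artifactory.example.com/artifactory/api/npm/npm-local/:_authToken=REPLACE_WITH_TOKEN
```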
PyPI Repository
Create a PyPI repository for Python packages.
curl -u admin:NewPassword123 -X PUT \
"https://artifactory.example.com/artifactory/api/repositories/pypi-local" \
-H "Content-Type: application/json" \
-d '{
"key": "pypi-local",
"rclass": "local",
"packageType": "pypi",
"description": "Local PyPI registry"
}'
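To install Python packages through Artifactory, point pip at the repository's simple index. A pip.conf sketch, with placeholder credentials embedded for brevity (for anything beyond testing, prefer a keyring or netrc over credentials in the URL):

```ini
# ~/.pip/pip.conf – example only; credentials are placeholders
[global]
index-url = https://admin:NewPassword123@artifactory.example.com/artifactory/api/pypi/pypi-local/simple
```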
Verify all repositories were created.
$ curl -s -u admin:NewPassword123 \
"https://artifactory.example.com/artifactory/api/repositories" | python3 -m json.tool | head -20
[
{
"key": "docker-local",
"type": "LOCAL",
"packageType": "docker"
},
{
"key": "docker-remote",
"type": "REMOTE",
"packageType": "docker"
},
...
]
Step 7: Set Up Docker Registry With Artifactory
To use Artifactory as a Docker registry, you need a dedicated subdomain or port for Docker API access. The recommended approach is using a subdomain like docker.artifactory.example.com.
Add an Ingress rule for the Docker subdomain. Create a file called docker-ingress.yaml.
vim docker-ingress.yaml
Add this Ingress configuration.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: artifactory-docker
  namespace: artifactory
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-JFrog-Override-Base-Url https://docker.artifactory.example.com;
spec:
  tls:
    - secretName: docker-artifactory-tls
      hosts:
        - docker.artifactory.example.com
  rules:
    - host: docker.artifactory.example.com
      http:
        paths:
          - path: /v2/
            pathType: Prefix
            backend:
              service:
                name: artifactory
                port:
                  number: 8082
Apply the Ingress resource.
kubectl apply -f docker-ingress.yaml
Now configure Docker to use Artifactory as a registry. Log in to push and pull images.
docker login docker.artifactory.example.com
Username: admin
Password: NewPassword123
Login Succeeded
Tag and push an image to Artifactory.
docker tag myapp:v1.0 docker.artifactory.example.com/docker-local/myapp:v1.0
docker push docker.artifactory.example.com/docker-local/myapp:v1.0
Pull the image back to confirm the registry is working.
docker pull docker.artifactory.example.com/docker-local/myapp:v1.0
If your Kubernetes nodes need to pull from this registry, create an image pull secret. For more details on pull secrets, see our guide on adding registry pull secrets to Kubernetes.
kubectl create secret docker-registry artifactory-pull-secret \
--docker-server=docker.artifactory.example.com \
--docker-username=admin \
--docker-password=NewPassword123 \
--namespace=default
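Reference the secret from a Pod spec (or attach it to the namespace's default service account) so the kubelet can authenticate when pulling. A minimal Pod sketch using the image pushed earlier:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: default
spec:
  # Tells the kubelet which registry credentials to use for this Pod
  imagePullSecrets:
    - name: artifactory-pull-secret
  containers:
    - name: myapp
      image: docker.artifactory.example.com/docker-local/myapp:v1.0
```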
Step 8: Configure SSL/TLS With cert-manager
For production deployments, use cert-manager to automatically provision and renew TLS certificates from Let’s Encrypt. Install cert-manager if it is not already in your cluster.
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true
Verify cert-manager pods are running.
$ kubectl -n cert-manager get pods
NAME READY STATUS RESTARTS AGE
cert-manager-5b8f47f9b-xxxx 1/1 Running 0 45s
cert-manager-cainjector-7f8d7b8f-xxxx 1/1 Running 0 45s
cert-manager-webhook-6c77b8d9d-xxxx 1/1 Running 0 45s
Create a ClusterIssuer for Let’s Encrypt. Save this as cluster-issuer.yaml.
vim cluster-issuer.yaml
Add the ClusterIssuer definition.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx
Apply the ClusterIssuer.
kubectl apply -f cluster-issuer.yaml
Now update your values.yaml to include cert-manager annotations so certificates are issued automatically.
ingress:
  enabled: true
  hosts:
    - artifactory.example.com
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  tls:
    - secretName: artifactory-tls
      hosts:
        - artifactory.example.com
Apply the updated values.
helm upgrade artifactory jfrog/artifactory \
--namespace artifactory \
-f values.yaml
Check that the certificate was issued.
$ kubectl -n artifactory get certificate
NAME READY SECRET AGE
artifactory-tls True artifactory-tls 2m
Step 9: Backup Strategy for Artifactory
Regular backups protect against data loss. Artifactory supports built-in backup through the admin UI, but for Kubernetes deployments, a volume-level backup strategy is more reliable.
Built-in Artifactory Backup
Configure scheduled backups in the Artifactory UI under Administration > Services > Backups. Create a new backup with a cron expression for daily execution.
- Backup Key: daily-backup
- Cron Expression: 0 0 2 * * ? (runs at 2:00 AM daily)
- Retention Period: 7 days
- Exclude Builds: Disabled (include build info in backups)
Volume Snapshot Backup
If your storage provider supports VolumeSnapshots (e.g., AWS EBS, GCE PD, Ceph RBD), create periodic snapshots of the Artifactory PVC. Save this as backup-snapshot.yaml.
vim backup-snapshot.yaml
Add the VolumeSnapshot definition.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  # kubectl does not expand shell substitutions such as $(date ...),
  # so use a DATE placeholder and substitute it at apply time (see below)
  name: artifactory-backup-DATE
  namespace: artifactory
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: artifactory-volume-artifactory-0
Apply the snapshot, substituting today's date into the name.
sed "s/DATE/$(date +%Y%m%d)/" backup-snapshot.yaml | kubectl apply -f -
Database Backup
Back up the PostgreSQL database separately. Run a pg_dump from the PostgreSQL pod.
kubectl -n artifactory exec artifactory-postgresql-0 -- \
pg_dump -U artifactory -d artifactory -F c -f /tmp/artifactory-db-backup.dump
Copy the dump file to your local machine or backup storage.
kubectl -n artifactory cp artifactory-postgresql-0:/tmp/artifactory-db-backup.dump ./artifactory-db-backup.dump
Automate this with a CronJob that runs daily and pushes backups to object storage (S3, GCS, or MinIO).
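As a sketch of that automation, the pg_dump above can be wrapped in a Kubernetes CronJob. The image tag, inline password, and storage are assumptions to adapt: here the dump only lands in an emptyDir, so add a step that pushes it to your object storage, and reference the database password from a Secret rather than inlining it:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: artifactory-db-backup
  namespace: artifactory
spec:
  schedule: "0 2 * * *"  # daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:15  # match your PostgreSQL major version
              env:
                - name: PGPASSWORD
                  value: StrongDBPass2024  # better: valueFrom a Secret
              command:
                - /bin/sh
                - -c
                - >
                  pg_dump -h artifactory-postgresql -U artifactory -d artifactory
                  -F c -f /backup/artifactory-db-$(date +%Y%m%d).dump
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              emptyDir: {}  # replace with a PVC or add an upload step
```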
Step 10: Upgrade Artifactory on Kubernetes
Upgrading Artifactory is straightforward with Helm. Always back up before upgrading and check the JFrog release notes for breaking changes.
Update the Helm repository to get the latest chart version.
helm repo update
Check the available chart versions.
$ helm search repo jfrog/artifactory --versions | head -5
NAME CHART VERSION APP VERSION
jfrog/artifactory 107.98.x 7.98.x
jfrog/artifactory 107.90.x 7.90.x
jfrog/artifactory 107.84.x 7.84.x
If your values.yaml requires updates for the new chart version, make those changes first. Some major versions introduce new configuration keys or deprecate old ones.
Run the upgrade.
helm upgrade artifactory jfrog/artifactory \
--namespace artifactory \
-f values.yaml \
--timeout 10m
Monitor the rollout to make sure the new pods start successfully.
kubectl -n artifactory rollout status statefulset/artifactory
Verify the new version is running.
$ curl -s https://artifactory.example.com/artifactory/api/system/version
{
"version": "7.98.x",
"revision": "7980000",
"license": "Pro"
}
If the upgrade fails, roll back to the previous revision.
helm rollback artifactory -n artifactory
Step 11: Monitor Artifactory With Prometheus
Artifactory exposes metrics in OpenMetrics format that Prometheus can scrape. If you already run Prometheus in your cluster (see our guide on deploying Prometheus on Kubernetes), add a ServiceMonitor or scrape config to collect Artifactory metrics.
Enable the metrics endpoint in your values.yaml.
artifactory:
  metrics:
    enabled: true
  openMetrics:
    enabled: true
Apply the change.
helm upgrade artifactory jfrog/artifactory \
--namespace artifactory \
-f values.yaml
Create a ServiceMonitor resource for Prometheus Operator. Save this as servicemonitor.yaml.
vim servicemonitor.yaml
Add the ServiceMonitor definition.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: artifactory-metrics
  namespace: artifactory
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: artifactory
  endpoints:
    - port: http-artifactory
      path: /artifactory/api/v1/metrics
      interval: 30s
      scheme: http
Apply the ServiceMonitor.
kubectl apply -f servicemonitor.yaml
Verify that Prometheus is scraping Artifactory metrics. Open the Prometheus UI and check the Targets page. You should see the Artifactory target listed as UP.
Key metrics to monitor include:
- jfrog_rt_artifacts_gc_duration_seconds – Garbage collection duration
- jfrog_rt_artifacts_count – Total number of artifacts stored
- jfrog_rt_storage_used_bytes – Storage consumption
- jfrog_rt_http_connections_active – Active HTTP connections
- jfrog_rt_jvm_memory_used_bytes – JVM memory usage
Set up Grafana dashboards using the JFrog-provided dashboard templates available on Grafana’s dashboard marketplace (Dashboard ID: 16631).
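With metrics flowing, an alert on storage growth is a natural first rule. The PrometheusRule sketch below is an example under assumptions: the threshold is sized against the 200Gi PVC configured earlier, and the release: prometheus label must match whatever ruleSelector your Prometheus Operator installation uses:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: artifactory-alerts
  namespace: artifactory
  labels:
    release: prometheus  # must match your Prometheus ruleSelector
spec:
  groups:
    - name: artifactory
      rules:
        - alert: ArtifactoryStorageHigh
          # fire when usage approaches the 200Gi artifact PVC (~90%)
          expr: jfrog_rt_storage_used_bytes > 180 * 1024 * 1024 * 1024
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: Artifactory storage usage is above 180Gi
```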
Troubleshooting Common Issues
Here are solutions to problems you may encounter during deployment.
Pod stuck in Pending state – Check if PVCs are bound. If the StorageClass cannot provision volumes, pods will not schedule.
kubectl -n artifactory describe pvc
kubectl -n artifactory describe pod artifactory-0
Artifactory pod CrashLoopBackOff – Usually caused by insufficient memory. Increase resource limits in values.yaml and redeploy.
kubectl -n artifactory logs artifactory-0 -c artifactory --previous
Database connection errors – Verify the PostgreSQL pod is running and the credentials match between the postgresql and artifactory.database sections in values.yaml.
kubectl -n artifactory logs artifactory-postgresql-0
Ingress returns 502 Bad Gateway – The Artifactory service may not be ready yet. Wait a few minutes for the application to fully start. Check that the service port matches the Ingress backend port.
kubectl -n artifactory get endpoints artifactory
Large file uploads fail – Ensure the nginx.ingress.kubernetes.io/proxy-body-size: "0" annotation is set on the Ingress resource. Without this, NGINX limits uploads to 1MB by default.
Conclusion
You now have JFrog Artifactory running on Kubernetes with Helm, exposed through an Ingress controller with TLS certificates managed by cert-manager. The deployment includes repositories for Docker, Maven, npm, and PyPI packages, along with a backup strategy and Prometheus monitoring.
For production hardening, consider using an external PostgreSQL database with connection pooling, enabling Artifactory’s high availability mode with multiple replicas, setting up network policies to restrict pod-to-pod traffic, and configuring RBAC for fine-grained user access control.
Related Guides
- Install JFrog Artifactory on Ubuntu
- Configure JFrog Artifactory Behind Nginx and Let’s Encrypt SSL
- Install Harbor Registry on Kubernetes with Helm
- Install and Use Helm 3 on Kubernetes Cluster
- Configure NFS as Kubernetes Persistent Volume Storage