So you’ve been pushing your Docker images to Docker Hub like everybody else, and it works fine — until it doesn’t. Maybe you hit the rate limits. Maybe you realized your proprietary application code is sitting on someone else’s servers. Maybe your compliance team just walked into your office with That Look on their face.
Whatever brought you here, you’re about to learn how to run your own private Docker registry with Harbor, and honestly? It’s one of those rare self-hosting wins where the juice is absolutely worth the squeeze.
Why Self-Host a Container Registry?
Before we get our hands dirty, let’s talk about why you’d want to run your own registry instead of just paying Docker Hub or using a cloud provider’s offering.
Control. Your images, your infrastructure, your rules. No third party deciding to change their pricing model or terms of service at 2 AM on a Tuesday.
Security. When your container images contain proprietary code, API keys baked into layers (we’ve all done it, don’t lie), or sensitive configurations, keeping them on infrastructure you control is just good hygiene.
Performance. Pulling images from a local registry on your network is dramatically faster than pulling from a remote one. If you’re running a Kubernetes cluster that scales frequently, this matters a lot.
Cost. Cloud registry costs add up. Storage fees, egress fees, per-image fees — it’s like a subscription service that keeps finding new things to charge you for. A self-hosted registry on hardware you already own? Fixed cost.
Compliance. Some industries (healthcare, finance, government) have regulations about where data can live. “It’s on Docker Hub’s servers somewhere” is not an answer that makes auditors happy.
Availability. Your CI/CD pipeline shouldn’t fail because someone else’s service is having a bad day. A local registry means your deployments keep rolling even when the internet is being dramatic.
Harbor vs. The Basic Docker Registry
Now, Docker actually provides a basic registry image (registry:2) that you can spin up in about 30 seconds. So why not just use that?
Well, registry:2 is like a filing cabinet. It stores things. That’s it. No UI, no access control, no image scanning, no replication, no audit logs. It’s the bare minimum.
Harbor, on the other hand, is like a filing cabinet inside a secure building with badge access, security cameras, an automated sorting system, and a concierge who checks every document for problems before filing it.
Here’s what Harbor brings to the table that the basic registry doesn’t:
| Feature | Docker Registry | Harbor |
|---|---|---|
| Image Storage | Yes | Yes |
| Web UI | No | Yes |
| RBAC | No | Yes |
| Vulnerability Scanning | No | Yes (Trivy) |
| Image Signing | No | Yes (Cosign/Notation) |
| Replication | No | Yes |
| Garbage Collection | Manual | Built-in UI + Scheduled |
| Audit Logs | No | Yes |
| OIDC/LDAP Auth | No | Yes |
| Helm Chart Repository | No | Yes |
| Robot Accounts | No | Yes |
| Quotas | No | Yes |
It’s not even a fair fight. Harbor is a CNCF graduated project, which means it’s been through the gauntlet of enterprise adoption and community review. It’s the real deal.
Prerequisites
Before we start, you’ll need:
- A Linux server (Ubuntu 22.04+ or similar) with at least 4GB RAM and 2 CPUs
- Docker Engine 20.10+ installed
- Docker Compose v2 installed
- A domain name pointed at your server (e.g., registry.yourdomain.com)
- Ports 80 and 443 open on your firewall
If you’re doing this on a homelab machine, adjust accordingly. Harbor will run on modest hardware, but it does appreciate having some room to breathe.
Installing Harbor with Docker Compose
Harbor ships its own installer that generates Docker Compose files for you. It’s surprisingly smooth for an enterprise-grade tool.
Step 1: Download the Installer
Grab the latest release from Harbor’s GitHub. As of this writing, we’ll use v2.11.x, but check for the latest:
```bash
# Download the online installer (smaller download, pulls images during install)
curl -sL https://github.com/goharbor/harbor/releases/download/v2.11.0/harbor-online-installer-v2.11.0.tgz -o harbor-installer.tgz

# Or the offline installer (larger download, includes all images)
curl -sL https://github.com/goharbor/harbor/releases/download/v2.11.0/harbor-offline-installer-v2.11.0.tgz -o harbor-installer.tgz

# Extract it
tar xzf harbor-installer.tgz
cd harbor
```
Step 2: Configure Harbor
Copy the template configuration and edit it:
```bash
cp harbor.yml.tmpl harbor.yml
```
Now open harbor.yml and make these key changes:
```yaml
# The hostname or IP address of your Harbor instance
hostname: registry.yourdomain.com

# HTTPS configuration
https:
  port: 443
  certificate: /etc/harbor/certs/registry.yourdomain.com.crt
  private_key: /etc/harbor/certs/registry.yourdomain.com.key

# The initial password for the Harbor admin account
# CHANGE THIS. Seriously.
harbor_admin_password: SomethingBetterThanHarbor12345

# Database configuration
database:
  password: also-change-this-password
  max_idle_conns: 100
  max_open_conns: 900

# The default data volume for Harbor
data_volume: /data/harbor

# Trivy vulnerability scanner
trivy:
  ignore_unfixed: false
  security_check: vuln
  insecure: false

# Log configuration
log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 200M
    location: /var/log/harbor
```
A few notes on this config:
- hostname: This must match your SSL certificate. Don't use an IP address if you can avoid it.
- harbor_admin_password: The default is Harbor12345. If you leave this, you deserve what happens next.
- data_volume: This is where all your images will be stored. Make sure this path has plenty of disk space.
Setting Up HTTPS
Running a registry without HTTPS is like leaving your front door open with a sign that says “free stuff inside.” Don’t do it. Docker itself will refuse to push to an insecure registry without extra configuration, and for good reason.
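For completeness, the escape hatch Docker provides is the daemon's insecure-registries setting in /etc/docker/daemon.json (restart the Docker daemon after editing). It disables TLS verification for the listed registry, so treat it as a lab-only tool, never a shortcut for production; the address below is a placeholder:

```json
{
  "insecure-registries": ["registry.lab.local:5000"]
}
```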
Option 1: Let’s Encrypt with Certbot
The free and automated approach:
```bash
# Install certbot
sudo apt install certbot -y

# Get your certificate
sudo certbot certonly --standalone -d registry.yourdomain.com

# Copy certs to Harbor's expected location
sudo mkdir -p /etc/harbor/certs
sudo cp /etc/letsencrypt/live/registry.yourdomain.com/fullchain.pem /etc/harbor/certs/registry.yourdomain.com.crt
sudo cp /etc/letsencrypt/live/registry.yourdomain.com/privkey.pem /etc/harbor/certs/registry.yourdomain.com.key
```
Set up auto-renewal so you don’t wake up to expired certs:
```bash
# Drop a renewal job into /etc/cron.d (piping a single line into `crontab -`
# would overwrite any existing root crontab, so don't do that)
sudo tee /etc/cron.d/harbor-cert-renew > /dev/null <<'EOF'
0 0 1 */2 * root certbot renew --pre-hook 'cd /opt/harbor && docker compose down' --post-hook 'cp /etc/letsencrypt/live/registry.yourdomain.com/fullchain.pem /etc/harbor/certs/registry.yourdomain.com.crt && cp /etc/letsencrypt/live/registry.yourdomain.com/privkey.pem /etc/harbor/certs/registry.yourdomain.com.key && cd /opt/harbor && docker compose up -d'
EOF
```
Option 2: Self-Signed Certificates
For labs and internal networks where you control all the clients:
```bash
# Create a CA
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -sha512 -days 3650 \
  -subj "/C=US/ST=State/L=City/O=YourOrg/CN=registry.yourdomain.com" \
  -key ca.key -out ca.crt

# Generate a server certificate
openssl genrsa -out registry.yourdomain.com.key 4096
openssl req -sha512 -new \
  -subj "/C=US/ST=State/L=City/O=YourOrg/CN=registry.yourdomain.com" \
  -key registry.yourdomain.com.key -out registry.yourdomain.com.csr

# Create a v3 extensions file
cat > v3.ext <<EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=registry.yourdomain.com
DNS.2=registry
EOF

# Sign the certificate
openssl x509 -req -sha512 -days 3650 \
  -extfile v3.ext \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in registry.yourdomain.com.csr -out registry.yourdomain.com.crt

# Copy to Harbor's cert directory
sudo mkdir -p /etc/harbor/certs
sudo cp registry.yourdomain.com.crt /etc/harbor/certs/
sudo cp registry.yourdomain.com.key /etc/harbor/certs/
```
If you’re using self-signed certs, you’ll need to trust the CA on every machine that talks to the registry:
```bash
# On each Docker client machine
sudo mkdir -p /etc/docker/certs.d/registry.yourdomain.com/
sudo cp ca.crt /etc/docker/certs.d/registry.yourdomain.com/

# Also add to system trust store
sudo cp ca.crt /usr/local/share/ca-certificates/harbor-ca.crt
sudo update-ca-certificates

# Restart Docker
sudo systemctl restart docker
```
Running the Installer
With your config ready and certs in place, run the installer with Trivy enabled:
```bash
sudo ./install.sh --with-trivy
```
This will pull all the required Docker images, generate the Docker Compose file, and start everything up. You’ll see a bunch of containers spin up:
```
Creating harbor-log       ... done
Creating registryctl      ... done
Creating harbor-db        ... done
Creating redis            ... done
Creating registry         ... done
Creating harbor-portal    ... done
Creating harbor-core      ... done
Creating harbor-jobservice ... done
Creating nginx            ... done
Creating trivy-adapter    ... done
```
Once it’s done, hit https://registry.yourdomain.com in your browser. Log in with admin and the password you set in harbor.yml.
Welcome to your private registry. Take a moment to admire it. You built this.
Image Scanning with Trivy Integration
One of Harbor’s killer features is built-in vulnerability scanning powered by Trivy. This isn’t some bolted-on afterthought — it’s deeply integrated into the registry workflow.
How It Works
Every time an image is pushed to Harbor, Trivy can automatically scan it for known vulnerabilities in OS packages and application dependencies. It checks against multiple vulnerability databases (NVD, GitHub Advisory, etc.) and gives you a severity breakdown.
Configuring Automatic Scanning
In the Harbor UI, navigate to Administration > Configuration > Security:
- Enable “Automatically scan images on push” — this is the big one. Every image that lands in your registry gets scanned immediately.
- Set “Prevent vulnerable images from running” and choose a severity threshold. For example, block any image with Critical vulnerabilities from being pulled.
You can also configure this per-project:
Project Settings > Configuration > Automatically scan images on push
Viewing Scan Results
After pushing an image, go to the project, click the repository, and select a tag. You’ll see a vulnerability report showing:
- Total vulnerabilities by severity (Critical, High, Medium, Low)
- CVE IDs with descriptions
- Which package is affected
- Whether a fix is available
- The fixed-in version
This is incredibly powerful for maintaining security posture. No more “we’ll scan it later” — every image gets checked at the gate.
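If you want the same gate inside CI, before Harbor's pull-time policy ever fires, you can fetch the scan summary from Harbor's API and count severities. The counting logic is trivial; here is a sketch against made-up findings (the file format below is illustrative, not Harbor's actual JSON report):

```shell
# Illustrative scan findings (real data would come from Harbor's API)
cat > /tmp/scan-report.txt <<'EOF'
CVE-2023-0001 Critical openssl
CVE-2023-0002 High curl
CVE-2023-0003 High zlib
CVE-2023-0004 Medium bash
EOF

# Count Critical findings; fail the pipeline if there are any
critical=$(grep -c 'Critical' /tmp/scan-report.txt)
echo "critical=$critical"
if [ "$critical" -gt 0 ]; then
  echo "BLOCK: critical vulnerabilities present"
fi
```

In a real pipeline you would exit non-zero inside that `if` instead of echoing, so the job fails.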
Setting Vulnerability Policies
You can create policies that prevent images with certain severity levels from being pulled:
- Go to your project
- Click Configuration
- Under Deployment Security, enable “Prevent vulnerable images from running”
- Set the threshold (Critical, High, Medium, Low, None)
Now if someone tries to docker pull an image with a Critical vulnerability, Harbor will refuse. Your production environment just got a bouncer.
Role-Based Access Control (RBAC)
Harbor’s RBAC system is legitimately good. Here’s how the permission model works:
Users and Groups
- Admin: God mode. Can do everything.
- Project Admin: Full control within a specific project.
- Developer: Can push and pull images within a project.
- Guest: Read-only. Can pull images but not push.
- Limited Guest: Can only pull from specific repositories.
Setting Up Projects
Projects in Harbor are the primary organizational unit. Think of them like namespaces:
```
registry.yourdomain.com/my-project/my-app:latest
                        ^^^^^^^^^^
                        This is the project
```
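Since every client-side reference follows that registry/project/repository:tag shape, you can pick one apart with nothing but shell parameter expansion (a toy illustration using the example reference above):

```shell
ref="registry.yourdomain.com/my-project/my-app:latest"

registry=${ref%%/*}      # everything before the first slash
rest=${ref#*/}           # my-project/my-app:latest
project=${rest%%/*}      # my-project
repo_tag=${rest#*/}      # my-app:latest
repo=${repo_tag%%:*}     # my-app
tag=${repo_tag##*:}      # latest

echo "$registry / $project / $repo : $tag"
```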
To create a project:
- Log in to Harbor UI
- Click New Project
- Name it (e.g., production, staging, team-backend)
- Choose Public (anyone can pull) or Private (members only)
- Set a storage quota if needed
Adding Members
Go to your project, click Members, and add users with appropriate roles. For enterprise authentication, Harbor 2.x configures LDAP and OIDC through the UI rather than in harbor.yml: go to Administration > Configuration > Authentication, set Auth Mode to LDAP, and fill in:

- LDAP URL: ldaps://ldap.yourdomain.com
- LDAP Base DN: ou=people,dc=yourdomain,dc=com
- LDAP Search DN: cn=harbor,ou=service,dc=yourdomain,dc=com
- LDAP Search Password: the bind password for the search DN
- LDAP UID: uid
- LDAP Scope: Subtree
Robot Accounts
For CI/CD pipelines, you don’t want to use human credentials. Robot accounts are purpose-built for automation:
- Go to your project > Robot Accounts
- Click New Robot Account
- Name it (e.g., ci-pipeline)
- Set an expiration (or never, if you live dangerously)
- Select permissions (typically just push and pull)
You’ll get a token. Store it somewhere safe — you’ll only see it once.
Replication
Harbor can replicate images between registries, which is fantastic for:
- Disaster recovery: Mirror your registry to another site
- Multi-region deployments: Keep images close to where they’re needed
- Hybrid cloud: Sync between on-prem and cloud registries
Setting Up Replication
- Go to Administration > Registries and add your target registry (can be another Harbor instance, Docker Hub, AWS ECR, GCR, Azure ACR, etc.)
- Go to Administration > Replications and create a new rule
- Configure:
  - Source: Your projects/repositories (supports filters and wildcards)
  - Destination: The target registry
  - Trigger: Manual, scheduled, or event-driven (on push)
Example: replicate everything in the production project to your DR site every time an image is pushed:
```
Name:        prod-to-dr
Source:      production/**
Destination: dr-harbor.yourdomain.com
Trigger:     Event Based (on push)
```
Harbor supports both push-based and pull-based replication, so you can adapt to whatever network topology makes sense.
Garbage Collection
Over time, your registry accumulates deleted image layers that still take up disk space. Garbage collection cleans these up.
Why It Matters
When you delete an image tag, Harbor only removes the manifest reference. The actual blob data (layers) sticks around because other images might reference the same layers. Garbage collection identifies orphaned blobs and reclaims the space.
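The bookkeeping is easy to picture: GC diffs the set of blobs on disk against the set still referenced by at least one manifest, then deletes the difference. A toy model of that mark-and-sweep logic:

```shell
# Blobs physically present in storage
printf 'sha256:aaa\nsha256:bbb\nsha256:ccc\n' | sort > /tmp/blobs-on-disk.txt

# Blobs still referenced by at least one manifest
printf 'sha256:aaa\nsha256:ccc\n' | sort > /tmp/blobs-referenced.txt

# Orphans = on disk but unreferenced; this is what GC reclaims
orphans=$(comm -23 /tmp/blobs-on-disk.txt /tmp/blobs-referenced.txt)
echo "would delete: $orphans"
```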
Running Garbage Collection
Through the UI:
- Go to Administration > Clean Up
- You can configure:
  - Delete untagged artifacts: Remove images that lost all their tags
  - Schedule: Run GC on a cron schedule
- Click GC Now for an immediate run
Recommended schedule:
Set garbage collection to run weekly during off-hours. Something like Sunday at 2 AM:
```
0 2 * * 0
```
Important Warning
Garbage collection requires a brief period where the registry is read-only. During GC, pushes will be blocked. Plan accordingly. For most teams, running it during a maintenance window is fine.
Pushing and Pulling Images
Now for the part you’ve been waiting for — actually using this thing.
Log In to Your Registry
```bash
docker login registry.yourdomain.com
# Username: admin (or your user)
# Password: your-password
```
For scripted environments:
```bash
echo "your-password" | docker login registry.yourdomain.com -u admin --password-stdin
```
Tag and Push an Image
```bash
# Tag your local image for Harbor
docker tag my-app:latest registry.yourdomain.com/my-project/my-app:latest
docker tag my-app:latest registry.yourdomain.com/my-project/my-app:v1.2.3

# Push it
docker push registry.yourdomain.com/my-project/my-app:latest
docker push registry.yourdomain.com/my-project/my-app:v1.2.3
```
Pull an Image
```bash
docker pull registry.yourdomain.com/my-project/my-app:v1.2.3
```
Working with Helm Charts
Harbor also stores Helm charts, served as OCI artifacts (the legacy ChartMuseum-based chart repository API was removed in Harbor 2.8, so use the OCI workflow):

```bash
# Log in to the registry with Helm (OCI support is built in since Helm 3.8)
helm registry login registry.yourdomain.com --username admin --password your-password

# Push a chart
helm push my-chart-0.1.0.tgz oci://registry.yourdomain.com/my-project

# Pull a chart
helm pull oci://registry.yourdomain.com/my-project/my-chart --version 0.1.0
```
CI/CD Integration
This is where Harbor really earns its keep. Here are integration examples for popular CI/CD tools.
GitHub Actions
```yaml
name: Build and Push to Harbor

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: self-hosted  # Or use ubuntu-latest with proper network access
    steps:
      - uses: actions/checkout@v4

      - name: Login to Harbor
        uses: docker/login-action@v3
        with:
          registry: registry.yourdomain.com
          username: ${{ secrets.HARBOR_USER }}
          password: ${{ secrets.HARBOR_TOKEN }}

      - name: Build and Push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            registry.yourdomain.com/my-project/my-app:${{ github.sha }}
            registry.yourdomain.com/my-project/my-app:latest
```
GitLab CI
```yaml
stages:
  - build

build-image:
  stage: build
  image: docker:24.0
  services:
    - docker:24.0-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - echo "$HARBOR_PASSWORD" | docker login registry.yourdomain.com -u "$HARBOR_USER" --password-stdin
  script:
    - docker build -t registry.yourdomain.com/my-project/my-app:${CI_COMMIT_SHA} .
    - docker push registry.yourdomain.com/my-project/my-app:${CI_COMMIT_SHA}
  only:
    - main
```
Jenkins Pipeline
```groovy
pipeline {
    agent any

    environment {
        HARBOR_CREDS = credentials('harbor-credentials')
        REGISTRY = 'registry.yourdomain.com'
        IMAGE = "${REGISTRY}/my-project/my-app"
    }

    stages {
        stage('Build') {
            steps {
                sh "docker build -t ${IMAGE}:${BUILD_NUMBER} ."
            }
        }
        stage('Push') {
            steps {
                sh "echo ${HARBOR_CREDS_PSW} | docker login ${REGISTRY} -u ${HARBOR_CREDS_USR} --password-stdin"
                sh "docker push ${IMAGE}:${BUILD_NUMBER}"
            }
        }
    }
}
```
Using Robot Accounts in CI/CD
Remember those robot accounts we set up earlier? Here’s how to use them:
```bash
# The robot account username format (note the dollar sign):
#   robot$<account-name>            for system-level robots
#   robot$<project>+<account-name>  for project-level robots in recent versions
docker login registry.yourdomain.com -u 'robot$ci-pipeline' --password-stdin <<< "$ROBOT_TOKEN"
```
The dollar sign in the username trips people up constantly. Make sure your CI system handles it properly — in YAML, you might need to quote it or escape it.
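A quick demonstration of why the quoting matters: inside double quotes the shell treats `$ci` as a variable reference and silently mangles the username, while single quotes keep it literal:

```shell
unset ci  # make sure $ci is empty for the demo

bad="robot$ci-pipeline"    # $ci expands to nothing
good='robot$ci-pipeline'   # single quotes keep the dollar sign literal

echo "bad:  $bad"
echo "good: $good"
```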
Storage Backends
By default, Harbor stores images on the local filesystem. But you can configure it to use object storage for better scalability and durability.
S3-Compatible Storage
Edit harbor.yml before installation (or update and re-run the installer):
```yaml
storage_service:
  s3:
    accesskey: your-access-key
    secretkey: your-secret-key
    region: us-east-1
    bucket: harbor-registry
    regionendpoint: https://s3.amazonaws.com
    # For MinIO or other S3-compatible storage:
    # regionendpoint: https://minio.yourdomain.com:9000
```
MinIO (Self-Hosted Object Storage)
If you’re going full self-hosted (respect), pair Harbor with MinIO:
```yaml
# docker-compose.minio.yml
services:
  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - /data/minio:/data
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin-password
```
Then point Harbor’s storage at MinIO:
```yaml
storage_service:
  s3:
    accesskey: minioadmin
    secretkey: minioadmin-password
    region: us-east-1
    bucket: harbor
    regionendpoint: http://minio:9000
    secure: false
```
This gives you the scalability of object storage without sending a dime to a cloud provider.
Maintenance and Operations Tips
Monitoring Harbor
Harbor can expose Prometheus metrics, but they're disabled by default — you enable them in harbor.yml and re-run the installer. Once enabled, they're served on a dedicated port (9090 by default):

```
http://registry.yourdomain.com:9090/metrics
```
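Metrics are controlled by the metric section of harbor.yml; if your instance doesn't answer on /metrics, check that it's enabled. The port and path below are the template defaults in recent 2.x releases — verify against your own harbor.yml.tmpl:

```yaml
metric:
  enabled: true
  port: 9090
  path: /metrics
```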
Point your Prometheus scraper at it and build a Grafana dashboard. Keep an eye on:
- Registry storage usage
- Request latency
- Active connections
- Scan queue depth
Backing Up Harbor
Back up these three things and you can restore your entire Harbor instance:
- Database: PostgreSQL dump
- Registry data: The data_volume directory
- Configuration: Your harbor.yml and generated compose files
```bash
# Database backup
docker exec harbor-db pg_dump -U postgres registry > harbor-db-backup.sql

# Data backup (stop Harbor first for consistency)
cd /opt/harbor && docker compose down
tar czf harbor-data-backup.tar.gz /data/harbor
cd /opt/harbor && docker compose up -d
```
Upgrading Harbor
Harbor upgrades are straightforward but require care:
```bash
# 1. Back up everything (see above)

# 2. Stop Harbor
cd /opt/harbor && docker compose down

# 3. Download and extract the new version alongside the old one

# 4. Migrate your harbor.yml to the new version's format with the matching
#    prepare image (the database schema itself is migrated automatically when
#    the new version starts -- check the release notes for your version)
docker run -it --rm -v /:/hostfs goharbor/prepare:v2.11.0 migrate -i /opt/harbor/harbor.yml

# 5. Run the new installer with your migrated harbor.yml
./install.sh --with-trivy
```
Always read the release notes before upgrading. Harbor respects semver, but breaking changes between major versions do happen.
Common Gotchas
Because no guide is complete without a “things that will make you swear” section:
- Certificate issues: If docker push fails with "x509: certificate signed by unknown authority", your Docker daemon doesn't trust Harbor's CA. See the self-signed cert section above.
- The robot$ username: That dollar sign will ruin your day in shell scripts, CI/CD variables, and YAML files. Always single-quote it.
- Storage space: Images are big. Monitor your disk usage and set up garbage collection early, not after you run out of space at 3 AM.
- Database migrations: Always back up before upgrading. The database migrator generally works great, but "generally" isn't "always."
- DNS resolution inside Docker: If Harbor's containers can't resolve your hostname, you might need to add entries to the Docker daemon's DNS config or use extra_hosts in the compose file.
- Rate limits on pulls from Docker Hub: If you're using the online installer, it pulls images from Docker Hub. If you're behind a rate limit, use the offline installer.
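For the DNS gotcha, a compose override with extra_hosts is often the quickest fix. A sketch is below — the service names should match those in Harbor's generated docker-compose.yml (check yours, they may differ by version), and the IP is a placeholder for your environment:

```yaml
# docker-compose.override.yml
services:
  core:
    extra_hosts:
      - "registry.yourdomain.com:192.168.1.10"
  jobservice:
    extra_hosts:
      - "registry.yourdomain.com:192.168.1.10"
```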
Wrapping Up
Running your own container registry with Harbor is one of those infrastructure investments that pays dividends immediately. You get enterprise-grade security scanning, fine-grained access control, replication for disaster recovery, and the warm fuzzy feeling of knowing your container images aren’t sitting on someone else’s hardware.
Is it more work than just using Docker Hub? Sure, a little. But it’s also more control, more security, and more capability. And once it’s set up, Harbor mostly just hums along in the background doing its thing while you focus on actually building software.
The Harbor project is actively maintained, has a thriving community, and keeps getting better with each release. If you’re running anything beyond hobby projects, a private registry isn’t a luxury — it’s table stakes.
Now go push some images. Your registry is waiting.