# Harness Pipeline Error: Complete Troubleshooting Guide

Harness is a modern CI/CD platform that manages deployments across Kubernetes, cloud platforms, and traditional infrastructure. When Harness pipelines fail, it's usually due to delegate connectivity, connector authentication, deployment configuration, or workflow execution issues.

Let me walk through the most common Harness pipeline errors and how to fix each one systematically.

## Understanding Harness Architecture

Harness consists of:

  • Manager - The control plane (UI, API)
  • Delegate - Worker that executes tasks in your infrastructure
  • Connectors - Connections to Git, cloud providers, registries
  • Pipelines - Execution workflows

Most failures involve delegates or connectors.

## Fix 1: Delegate Connectivity Issues

The delegate is offline or unreachable.

Symptoms:

- "Delegate not available"
- "No delegate connected"
- Pipeline stays in "Waiting for delegate" state

Diagnosis:

```bash
# Check delegate status in the Harness UI:
# Account Setup → Account Resources → Delegates

# Check the delegate pod (for a Kubernetes delegate)
kubectl get pods -n harness-delegate

# Check delegate logs
kubectl logs -n harness-delegate harness-delegate-pod -f

# Check the delegate heartbeat:
# the delegate should send a heartbeat every minute
```

Solution A: Restart delegate:

```bash
# For a Kubernetes delegate
kubectl rollout restart deployment/harness-delegate -n harness-delegate

# Wait for the new pod
kubectl rollout status deployment/harness-delegate -n harness-delegate

# Check logs
kubectl logs -n harness-delegate -l app=harness-delegate
```

Solution B: Fix delegate registration:

The delegate needs to register with Harness:

  1. Go to Account Setup → Delegates → Download Delegate.
  2. Download the new delegate YAML.
  3. Apply it to the cluster:

```bash
kubectl apply -f harness-delegate.yaml -n harness-delegate
```
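Once the YAML is applied, the sketch below waits for the pod and scans recent logs for registration activity. The `harness-delegate` namespace and `app=harness-delegate` label are the defaults from the downloaded YAML, and the grep pattern is a loose assumption since the exact log wording varies by delegate version:

```bash
# Sketch: confirm the new delegate came up and is logging
# registration/heartbeat activity. Names below are the defaults
# from the downloaded delegate YAML; adjust for your install.
check_delegate_ready() {
  ns="$1"
  # Wait for the delegate pod to become Ready, then count recent
  # log lines that mention registration or heartbeats.
  kubectl -n "$ns" wait --for=condition=Ready pod \
    -l app=harness-delegate --timeout=120s \
    && kubectl -n "$ns" logs -l app=harness-delegate --tail=200 \
    | grep -icE "register|heartbeat"
}

# Only run the check when kubectl is available
if command -v kubectl >/dev/null 2>&1; then
  check_delegate_ready harness-delegate || true
fi
```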

Solution C: Configure delegate selector:

If the pipeline targets a specific delegate, the selector must match the delegate's tags:

```yaml
# In the pipeline YAML
delegateSelectors:
  - my-delegate  # Must match a delegate tag
```

Configure delegate tags:

```yaml
# In the delegate YAML, set tags via the environment
env:
  - name: DELEGATE_TAGS
    value: "my-delegate,production"
```

Solution D: Check network connectivity:

```bash
# From the delegate pod, test connectivity
kubectl exec -n harness-delegate harness-delegate-pod -- \
  curl -I https://app.harness.io

# The delegate needs outbound HTTPS access to app.harness.io
```

## Fix 2: Connector Authentication Failures

Connectors fail to authenticate with external services.

Symptoms:

- "Connector test failed"
- "Authentication failed"
- "Access denied"

Diagnosis:

In Harness UI:

  1. Go to Account Setup → Account Resources → Connectors.
  2. Click on the connector.
  3. Click "Test Connection".

Solution A: Fix Git connector:

For GitHub:

```yaml
connector:
  type: Github
  spec:
    url: https://github.com/org/repo
    authentication:
      type: Http
      spec:
        username: git-user
        passwordRef: github-token  # Secret reference
```

Test connection:

```bash
# Manual test
curl -H "Authorization: token YOUR_TOKEN" \
  https://api.github.com/repos/org/repo
```

For SSH:

```yaml
authentication:
  type: SSH
  spec:
    sshKeyRef: ssh-key-secret
```

Solution B: Fix Kubernetes connector:

```yaml
connector:
  type: K8sCluster
  spec:
    masterUrl: https://cluster-api.company.com
    auth:
      type: ServiceAccount
      spec:
        serviceAccountTokenRef: sa-token-secret
```

Test:

```bash
# Verify Kubernetes access
kubectl --kubeconfig=kubeconfig get namespaces
```
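The service-account token can also be exercised directly against the API server with a bearer token, which is essentially what the connector does under the hood. A sketch with placeholder URL and token variables:

```bash
# Sketch: test a service-account token against the API server.
# MASTER_URL and SA_TOKEN are placeholders; substitute your own.
MASTER_URL="${MASTER_URL:-https://cluster-api.company.com}"

list_namespaces_with_token() {
  # $1 = API server URL, $2 = bearer token
  # -k skips TLS verification; in practice pass --cacert with the
  # cluster CA instead.
  curl -sk -H "Authorization: Bearer $2" "$1/api/v1/namespaces"
}

# Only run when a token was actually provided
if [ -n "${SA_TOKEN:-}" ] && command -v curl >/dev/null 2>&1; then
  list_namespaces_with_token "$MASTER_URL" "$SA_TOKEN"
fi
```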

Solution C: Fix Docker registry connector:

```yaml
connector:
  type: DockerRegistry
  spec:
    dockerRegistryUrl: https://registry.company.com
    auth:
      type: UserPassword
      spec:
        username: docker-user
        passwordRef: docker-password-secret
```

Test:

```bash
# Prefer --password-stdin so the password stays out of shell history
echo "$DOCKER_PASSWORD" | docker login registry.company.com -u user --password-stdin
docker pull registry.company.com/image:tag
```

Solution D: Fix AWS connector:

```yaml
connector:
  type: Aws
  spec:
    accessKey: AWS_ACCESS_KEY
    secretKeyRef: aws-secret-key-secret
    region: us-east-1
```

Test:

```bash
aws configure set aws_access_key_id YOUR_KEY
aws configure set aws_secret_access_key YOUR_SECRET
aws s3 ls
```

## Fix 3: Deployment Failures

Deployments to target environments fail.

Symptoms:

- "Deployment failed"
- "Manifest apply failed"
- "Service not found"

Diagnosis:

```bash
# Check deployment logs in the Harness UI:
# Pipelines → [Execution] → [Failed Step] → Logs

# Check Kubernetes deployment status
kubectl get deployments -n target-namespace

# Check events
kubectl get events -n target-namespace
```

Solution A: Fix manifest errors:

```bash
# Ensure the manifest is valid before Harness applies it
kubectl apply -f deployment.yaml --dry-run=client

# Common issues to check for:
# - Invalid YAML syntax
# - Missing namespace
# - Invalid resource types
# - Resource name conflicts
```

Solution B: Fix Helm deployment:

```yaml
# In the Harness pipeline
deployment:
  type: Helm
  spec:
    chartName: my-chart
    chartVersion: 1.0.0
    releaseName: my-release
    namespace: target-namespace
    valuesYaml: |
      replicaCount: 3
      image:
        tag: "#{BuildNumber}"  # quote expressions: a bare # starts a YAML comment
```

Test Helm:

```bash
helm template my-release my-chart -f values.yaml
helm install my-release my-chart -n namespace --dry-run
```

Solution C: Fix Kubernetes direct deployment:

```yaml
deployment:
  type: K8sDirect
  spec:
    manifests:
      - manifest.yaml
    namespace: target-namespace
```

Solution D: Check resource quotas:

```bash
# Check whether the namespace has enough resources
kubectl describe namespace target-namespace

# Check resource quotas
kubectl get resourcequota -n target-namespace
```

## Fix 4: Build Stage Failures

CI builds fail during execution.

Symptoms:

- "Build failed"
- "Command returned non-zero exit code"
- "Step execution error"

Diagnosis:

```bash
# Check build logs in Harness:
# Pipelines → [Execution] → Build Step → Logs

# For Harness CI, check the build pod logs
kubectl logs -n harness build-pod-name
```

Solution A: Fix build command:

```yaml
# In the pipeline YAML
stage:
  type: CI
  spec:
    steps:
      - type: Run
        spec:
          command: |
            npm install
            npm run build
          image: node:20
```

Solution B: Handle dependency failures:

```yaml
steps:
  - type: Run
    spec:
      command: |
        npm ci --prefer-offline
      image: node:20
    resources:
      limits:
        memory: 4Gi
```

Solution C: Use caching:

```yaml
# Configure caching for faster builds
infrastructure:
  type: KubernetesDirect
  spec:
    namespace: harness-ci
    volumes:
      - name: npm-cache
        persistentVolumeClaim:
          claimName: ci-cache-pvc

steps:
  - type: Run
    spec:
      command: |
        npm config set cache /cache/npm
        npm ci
      volumeMounts:
        - name: npm-cache
          mountPath: /cache/npm
```

## Fix 5: Workflow Approval Issues

Approval gates don't work correctly.

Symptoms:

- "Waiting for approval"
- Approval notifications not sent
- Can't approve or reject

Solution A: Configure approval step:

```yaml
step:
  type: Approval
  spec:
    approvers:
      userGroups:
        - approvers-group
      minimumCount: 1
    timeout: 1h
    message: "Approve deployment to production"
```

Solution B: Configure notifications:

```yaml
notification:
  type: Slack
  spec:
    channel: "#deployments"
    message: |
      Pipeline #{PipelineExecutionId} waiting for approval
      Approvers: #{Approvers}
```

Solution C: Fix user group permissions:

In Harness:

  1. Go to Account Setup → Account Resources → User Groups.
  2. Add users to the approvers group.
  3. Ensure those users have pipeline permissions.

## Fix 6: Template Execution Errors

Pipeline templates don't execute correctly.

Symptoms:

- "Template not found"
- Template parameters missing
- Template validation failed

Solution A: Fix template references:

```yaml
pipeline:
  stages:
    - stage:
        type: Template
        spec:
          templateRef: my-template
          templateInputs:
            variable1: value1
            variable2: value2
```

Solution B: Configure template parameters:

```yaml
# Template definition
template:
  inputs:
    variable1:
      type: String
      required: true
    variable2:
      type: String
      default: "default-value"
```

Solution C: Validate template:

In the Harness UI: Templates → [Template] → Validate

## Fix 7: Secret Management Issues

Secrets not accessible in pipelines.

Symptoms:

- "Secret not found"
- "Invalid secret reference"
- Empty secret values

Solution A: Configure secrets correctly:

```yaml
# Reference a stored secret by name
connector:
  spec:
    passwordRef: account.docker-password

# Or use an expression, resolved at runtime
password: <+secrets.getValue("account.docker-password")>
```

Solution B: Use Harness expression syntax:

```yaml
# Harness expressions for secrets
env:
  API_KEY: <+secrets.getValue("project.api-key")>

# Account-level secret
password: <+secrets.getValue("account.my-secret")>

# Project-level secret
token: <+secrets.getValue("project.my-secret")>
```
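To confirm a secret actually resolved without leaking it into build logs, print only whether it is set and how long it is. A sketch for a Run step; the `API_KEY` name is illustrative:

```bash
# Sketch: verify a secret resolved without echoing its value.
# API_KEY would be populated by <+secrets.getValue(...)> in the
# step's environment; the name is a placeholder.
describe_secret() {
  name="$1"; value="$2"
  if [ -z "$value" ]; then
    echo "$name: EMPTY (secret did not resolve)"
  else
    # Length only; never print the value itself
    echo "$name: set, length ${#value}"
  fi
}

describe_secret API_KEY "${API_KEY:-}"
```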

Solution C: Configure secret manager:

```yaml
# For HashiCorp Vault
secretManager:
  type: Vault
  spec:
    vaultUrl: https://vault.company.com
    tokenRef: vault-token-secret
    basePath: harness/secrets
```

## Fix 8: Git Trigger Failures

Pipelines don't trigger on Git events.

Symptoms:

- "Trigger not firing"
- Webhook not received
- Pipeline not starting on push

Solution A: Configure Git trigger:

```yaml
trigger:
  type: Webhook
  spec:
    type: Github
    autoAbortPreviousExecutions: true
    payloadConditions:
      - key: <+trigger.payload.event_type>
        operator: Equals
        value: push
    actions:
      - type: TriggerPipeline
        spec:
          pipeline: my-pipeline
          inputs:
            branch: <+trigger.payload.branch>
```

Solution B: Verify webhook configuration:

In GitHub:

  1. Go to repository Settings → Webhooks.
  2. Check that a webhook exists for the Harness URL.
  3. Check "Recent Deliveries" for errors.
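The repository's webhooks can also be inspected from the command line via the GitHub REST API (`org/repo` and the token variable are placeholders):

```bash
# Sketch: list webhooks on a repository via the GitHub API.
list_repo_hooks() {
  # $1 = owner/repo, $2 = personal access token
  curl -s -H "Authorization: token $2" \
    "https://api.github.com/repos/$1/hooks"
}

# Only run when a token was provided
if [ -n "${GITHUB_TOKEN:-}" ]; then
  list_repo_hooks org/repo "$GITHUB_TOKEN"
fi
```

Each returned hook includes its target URL and a `last_response` field, which is a quick way to spot delivery failures.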

Harness webhook URL:

```
https://app.harness.io/api/webhooks/account/ACCOUNT_ID/pipeline/PIPELINE_ID
```

Solution C: Test trigger manually:

```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -H "X-Harness-ApiKey: YOUR_API_KEY" \
  https://app.harness.io/api/webhooks/trigger \
  -d '{"event_type":"push","branch":"main"}'
```

## Fix 9: Infrastructure Provisioning Failures

Infrastructure provisioning fails.

Symptoms:

- "Terraform apply failed"
- "Infrastructure not provisioned"
- Cloud resource creation errors

Solution A: Fix Terraform execution:

```yaml
step:
  type: Terraform
  spec:
    terraformWorkspace: default
    terraformVarFiles:
      - variables.tfvars
    terraformModule:
      type: Git
      spec:
        url: https://github.com/org/terraform-modules
        branch: main
        path: aws/infrastructure
```

Test:

```bash
# Verify the Terraform module locally
terraform init
terraform plan
```

Solution B: Handle Terraform state:

```yaml
spec:
  terraformBackendConfig:
    type: S3
    spec:
      bucket: terraform-state-bucket
      key: infrastructure.tfstate
      region: us-east-1
```
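When reproducing a state issue locally, the same backend settings can be passed to `terraform init` via `-backend-config`; the bucket, key, and region values below are example values, not part of any real setup:

```bash
# Sketch: initialize Terraform with an S3 backend from the CLI.
tf_init_s3_backend() {
  # $1 = bucket, $2 = state key, $3 = region (all placeholders)
  terraform init \
    -backend-config="bucket=$1" \
    -backend-config="key=$2" \
    -backend-config="region=$3"
}

# Only run when explicitly requested, to avoid touching local state
if [ "${RUN_TF_INIT:-}" = "1" ] && command -v terraform >/dev/null 2>&1; then
  tf_init_s3_backend terraform-state-bucket infrastructure.tfstate us-east-1
fi
```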

Solution C: Check cloud permissions:

```bash
# Check the AWS identity used by Terraform
aws iam get-user
aws sts get-caller-identity

# Verify required permissions:
# - EC2: create, describe, terminate instances
# - VPC: create and manage VPC resources
# - IAM: create roles and policies
```

## Fix 10: Variable and Expression Issues

Pipeline expressions return wrong values.

Symptoms:

- `#{...}` returns null
- Wrong substitution
- Expression errors

Solution A: Use correct Harness expression syntax:

```yaml
# Pipeline variables
branch: <+pipeline.variables.branch>

# Trigger payload
commit: <+trigger.payload.commitId>

# Build output
artifact: <+pipeline.stages.build.outputs.artifact>

# Stage outputs
image: <+pipeline.stages.build.spec.steps.buildStep.outputs.outputVariables.imageTag>

# Secrets
token: <+secrets.getValue("account.api-token")>
```

Solution B: Debug expressions:

Add debug step:

```yaml
step:
  type: Run
  spec:
    command: |
      echo "Branch: <+pipeline.variables.branch>"
      echo "Commit: <+trigger.payload.commitId>"
      echo "Artifact: <+pipeline.stages.build.outputs.artifact>"
```

Solution C: Set default values:

```yaml
# Use conditional expressions
branch: <+pipeline.variables.branch ? <+pipeline.variables.branch> : "main">

# Or the shorthand default operator in a step
value: <+trigger.payload.branch ?: "main">
```
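The same fallback idea applies inside a shell step once values arrive as environment variables; a minimal sketch (`TRIGGER_BRANCH` is a hypothetical variable name):

```bash
# Sketch: default to "main" when no branch was supplied.
resolve_branch() {
  # ${1:-main} substitutes "main" when $1 is unset or empty
  echo "${1:-main}"
}

BRANCH=$(resolve_branch "${TRIGGER_BRANCH:-}")
echo "Deploying branch: $BRANCH"
```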

## Quick Reference: Harness Errors

| Error | Component | Solution |
|---|---|---|
| Delegate unavailable | Delegate | Restart, check connectivity |
| Connector failed | Connectors | Fix authentication, test |
| Deployment failed | Pipeline | Fix manifest, check resources |
| Build failed | CI | Fix command, add caching |
| Approval stuck | Workflow | Configure approvers, notifications |
| Template error | Templates | Fix references, parameters |
| Secret not found | Secrets | Use correct expression |
| Trigger not firing | Triggers | Configure webhook, test |
| Terraform failed | Infrastructure | Fix module, permissions |
| Expression error | Pipeline | Use correct syntax |

## Debugging Commands

```bash
# Check delegate status
kubectl get pods -n harness-delegate

# Check delegate logs
kubectl logs -n harness-delegate harness-delegate-pod -f

# Test Kubernetes access
kubectl get namespaces

# Test Git access
curl -H "Authorization: token TOKEN" https://api.github.com/user

# Test Docker registry access
docker login registry.company.com

# Test AWS access
aws s3 ls

# Check the Harness API
curl -H "x-api-key: YOUR_KEY" https://app.harness.io/api/pipelines
```