Pivotal Knowledge Base


How to get into an App Container Manually with Garden-RunC Backend


Elastic Runtime versions 

  • 1.6.45 and above
  • 1.7.28 and above
  • 1.8.8 and above
  • 1.8.40 and above
  • Below 1.9.18
  • Below 1.10.5


For security reasons, SSH access to app containers via the "cf ssh" command is blocked in some Pivotal Cloud Foundry environments. However, it is sometimes necessary to SSH into an app container for troubleshooting purposes. This article outlines the steps to manually SSH into an app container on a Diego Cell running the Garden-runC backend.

Note that the Garden-runC backend is available only on ERT versions 1.6.45 and above, 1.7.28 and above, 1.8.9 and above, and 1.9.0 and above. Earlier versions of ERT ship with the Garden-Linux backend.


1) Find the app GUID for the app container that you want to SSH into using cf app <app-name> --guid, where <app-name> is the name of the app:


$ cf app spring-music --guid
e0afa9f2-4020-4da1-919c-59d1c66d7d3c

2) Find the IP of the Diego Cell where the app container is hosted:

cf curl /v2/apps/<app-guid>/stats | grep -w host

where <app-guid> is the GUID of the app from the command in step 1)


cf curl /v2/apps/e0afa9f2-4020-4da1-919c-59d1c66d7d3c/stats | grep -w host
"host": "", 

There may be multiple IPs returned from the above command if there are multiple app instances. There is one IP for each app instance. 
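When an app has many instances, picking the host IPs out of the raw JSON by hand gets tedious. As a sketch of what the grep is extracting (the instance indexes and IPs below are made up for illustration; in the /v2/apps/<app-guid>/stats response, each instance's host sits under its "stats" key):

```python
import json

# Made-up, abbreviated output of `cf curl /v2/apps/<app-guid>/stats`;
# the real response carries many more fields per instance.
stats_json = """
{
  "0": {"state": "RUNNING", "stats": {"host": "10.0.16.21", "port": 61012}},
  "1": {"state": "RUNNING", "stats": {"host": "10.0.16.24", "port": 61018}}
}
"""

stats = json.loads(stats_json)

# One Diego Cell IP per app instance, keyed by instance index.
hosts = {idx: inst["stats"]["host"] for idx, inst in sorted(stats.items())}
for idx, host in hosts.items():
    print(f"instance {idx} -> Diego Cell {host}")
```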

3) From the Ops Manager VM, find the name of the Diego Cell from the IP using the bosh vms | grep <IP> command, where <IP> is the IP address from step 2):

ubuntu@pivotal-ops-manager:~$ bosh vms | grep <IP>
Acting as user 'director' on 'p-bosh'
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
| diego_cell/0 (4c25dade-69dd-4ad7-90e3-5916a9a0b77e) | running | az1 | xlarge.disk | |

The VM name is denoted by "<job_name>/<index>". In the output above, "diego_cell" is the job name and "0" is the index.
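The job name and index can also be split out of the bosh vms output mechanically. A minimal sketch, reusing the example row from the output above (the VM name is the first pipe-delimited column, with job and index separated by "/"):

```python
# Example row from `bosh vms` output (taken from the step above).
row = "| diego_cell/0 (4c25dade-69dd-4ad7-90e3-5916a9a0b77e) | running | az1 | xlarge.disk | |"

# The VM name is the first pipe-delimited column; drop the trailing UUID,
# then split job name from index on "/".
vm_name = row.split("|")[1].strip().split(" ")[0]
job_name, index = vm_name.split("/")
print(job_name, index)  # diego_cell 0
```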

4) Target the Elastic Runtime deployment using the bosh deployment <path_to_ert_deployment_file> command, where <path_to_ert_deployment_file> is /var/tempest/workspaces/default/deployments/cf-<uuid>.yml:


ubuntu@pivotal-ops-manager:~$ bosh deployment /var/tempest/workspaces/default/deployments/cf-8da0e8993ba95fe2f8d4.yml 
Deployment set to '/var/tempest/workspaces/default/deployments/cf-8da0e8993ba95fe2f8d4.yml'

5) SSH into the Diego Cell using the bosh ssh <job_name> <index> command, replacing <job_name> and <index> with the values from step 3). Then run sudo -i to become root.


ubuntu@pivotal-ops-manager:~$ bosh ssh diego_cell 0
Acting as user 'director' on deployment 'cf-8da0e8993ba95fe2f8d4' on 'p-bosh'
Last login: Tue Jan 17 14:13:21 2017 from
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.


bosh_2es4lz1bw@8330e2d5-511e-48fa-aa35-521164251a5b:~$ sudo -i

6) Once SSH'ed into the Diego Cell, run the following command to get the <container-guid> of the app:

grep rep.executing-container-operation.ordinary-lrp-processor.process-reserved-container.run-container.containerstore-run.node-run.monitor-run.run-step.running \
/var/vcap/sys/log/rep/rep.stdout.log | grep <app-guid> | head -1 | python -m json.tool | grep 'container-guid'

Replace <app-guid> in the above command with the app-guid of the app from step 1)


# grep rep.executing-container-operation.ordinary-lrp-processor.process-reserved-container.run-container.containerstore-run.node-run.monitor-run.run-step.running \
/var/vcap/sys/log/rep/rep.stdout.log | grep e0afa9f2-4020-4da1-919c-59d1c66d7d3c | head -1 | python -m json.tool | grep 'container-guid'
"container-guid": "891090cc-259f-4648-4d45-43ea0d8d2af9"

7) Using the container-guid value from the output of the command in step 6), change directory to /var/vcap/data/garden/depot/<container-guid>. In this example, the container GUID is 891090cc-259f-4648-4d45-43ea0d8d2af9.

# cd  /var/vcap/data/garden/depot/891090cc-259f-4648-4d45-43ea0d8d2af9

Note: If you are using one of the following Elastic Runtime versions, you can skip step 8:

  • >= 1.8.40
  • >= 1.9.18
  • >= 1.10.5
  • 1.11 (all)
  • 1.12 (all)

8) Enter the guardian mount namespace:

# /var/vcap/packages/guardian/bin/inspector-garden -pid $(pidof guardian) /bin/bash

9) Drop into the container shell as root by running the following command:

# /var/vcap/packages/runc/bin/runc exec -t <container-guid> /bin/bash

Alternatively, to enter the container as a non-root user such as vcap, run the following command:

# /var/vcap/packages/runc/bin/runc exec -u 2000:2000 -t <container-guid> /bin/bash

10) Check the app processes inside the container:

# ps -ef
root 1 0 0 2016 ? 00:00:00 /proc/self/exe init
vcap 13 0 0 2016 ? 00:00:00 /tmp/lifecycle/diego-sshd --allowedKeyExchanges= --address= --allowUnauthenticatedClients=false --inheritDaemonEnv=true --allowedCiphers= --allo
vcap 19 0 0 2016 ? 00:22:36 /home/vcap/app/.java-buildpack/open_jdk_jre/bin/java -Djava.util.logging.config.file=/home/vcap/app/.java-buildpack/tomcat/conf/logging.properties -Djava.ut
root 8581 0 0 14:15 ? 00:00:00 /bin/bash
root 8606 8581 0 14:15 ? 00:00:00 ps -ef
root 20334 1 0 Jan07 ? 00:00:00 [sudo] <defunct>

Note: If you need to run tools like tcpdump from within the container, copy the tcpdump binary from /usr/sbin/tcpdump to /bin/tcpdump while inside the container.

Additional Information

For SSH'ing into a container with the Garden-Linux backend, please follow the procedure here.


  • Theo Cushion

    We found that using `/var/vcap/packages/runc/bin/runc exec -u 2000:2000 -t <container-guid> /bin/bash` worked better for us, as it sets both the uid and gid to 2000, as opposed to setting the uid to 2000 and the gid to 0. This was critical when we were trying to dump a Java heap using jmap. I think it is safe to update the documentation above to explicitly set the gid to 2000 to get around this kind of thing.
