Pivotal Knowledge Base

How to use the 'pks' and 'kubectl' commands from a local desktop to access clusters in GCP

Environment

Pivotal Container Service (PKS) Version 1.0
Google Cloud Platform (GCP)

Purpose

This article explains how to set up GCP and your local desktop environment so that you can use the pks and kubectl commands to access a remote PKS environment deployed on GCP.

Cause 

With a vSphere PKS environment, it is relatively straightforward to access the PKS environment. However, because of the default firewall policy on GCP, a few more steps are required to enable remote execution of pks and kubectl commands.

Procedure

Follow the steps below to set up remote access:

1. Install PKS on the GCP environment:
https://docs.pivotal.io/runtimes/pks/1-0/gcp.html
 
2. From the GCP console, open the following two TCP ports on the PKS (UAA) & Kubernetes master VMs. Go to Firewall Rules under VPC Networks.
For details on how to set up a GCP firewall, see https://cloud.google.com/vpc/docs/using-firewalls.
 
  9021 - For PKS API Service Access (for pks commands)
  8443 - For PKS UAA Service & K8s API Access (for pks login and kubectl commands)
 
Here is an example firewall configuration:
# Define a firewall rule opening ports tcp:8443 & tcp:9021
$ gcloud compute --project=<my-orgs-project> firewall-rules create allow-k8s-pks \
--direction=INGRESS --priority=1000 --network=<my-pks-virtual-net> \
--action=ALLOW --rules=tcp:8443,tcp:9021 --source-ranges=0.0.0.0/0 --target-tags=k8s-pks

# Add the network tag (in this case, 'k8s-pks') to the PKS & Kubernetes master VMs
$ gcloud compute instances add-tags <PKS-UAA-API-SERVER-VM-INSTANCE> --tags k8s-pks
$ gcloud compute instances add-tags <KUBERNETES-MASTER-VM-INSTANCE> --tags k8s-pks

# Review the Firewall setting
$ gcloud compute firewall-rules describe allow-k8s-pks
allowed:
- IPProtocol: tcp
  ports:
  - '9021'
- IPProtocol: tcp
  ports:
  - '8443'
creationTimestamp: '2018-03-09T15:24:44.556-08:00'
description: ''
direction: INGRESS
id: '2293378771523112345'
kind: compute#firewall
name: allow-k8s-pks
network: https://www.googleapis.com/compute/v1/projects/my-project/global/networks/<my-pks-virtual-net>
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/my-project/global/firewalls/allow-k8s-pks
sourceRanges:
- 0.0.0.0/0
targetTags:
- k8s-pks
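 
Optionally, you can verify from your desktop that both ports are reachable before moving on. This is a minimal check using netcat; <PKS-EXTERNAL-IP> is a placeholder for the external IP of the PKS/UAA VM (identified in step 3):
# Check that the PKS API & UAA ports answer from outside GCP
$ nc -vz <PKS-EXTERNAL-IP> 9021
$ nc -vz <PKS-EXTERNAL-IP> 8443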
 
3. Get the external IP address of the PKS server and register it with your /etc/hosts file or DNS server for name resolution.
The PKS server name should be the same as the UAA URL you entered during the PKS tile configuration. To identify the external IP address of your PKS UAA server, you first need to identify its internal IP.
 
In Operations Manager, go to the PKS tile's Status tab, which shows the internal IP address. Then, from the GCP console (Compute Engine > VM Instances), you can find the external IP in the column just to the right of the internal IP.
 
Ex)
/etc/hosts
35.189.5.5      pks-uaa.mycompany.local
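 
If you prefer the CLI to the console, the same information is available from gcloud. This is a sketch, assuming the gcloud SDK is installed and authenticated against your project:
# List all VM instances with their internal & external IPs,
# then match the internal IP shown in the PKS tile's Status tab
$ gcloud compute instances list --project=<my-orgs-project>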
 
4. From the Operations Manager terminal, add and configure a PKS admin user using the 'uaac' command.
If you already have the uaac command installed on your local machine (e.g. Mac), you can also run these commands locally. To install 'uaac' on your local machine, visit https://github.com/cloudfoundry/cf-uaac
$ uaac target https://pks-uaa.mycompany.local:8443 --skip-ssl-validation
$ uaac token client get admin -s <PKS_tile_Credentials_UAA-Admin-Secret> 
$ uaac user add myadmin --emails myadmin@mycompany.local -p password
$ uaac member add pks.clusters.admin myadmin 
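 
To double-check the new user and its group membership, the following uaac lookups should work (exact output format varies by UAA version):
# Confirm the user exists and is a member of pks.clusters.admin
$ uaac user get myadmin
$ uaac group get pks.clusters.admin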
 
5. Log in to the PKS API server. (Make sure you have already installed the pks command on your local machine, e.g. Mac.)
$ pks login -a pks-uaa.mycompany.local -u myadmin -p password --skip-ssl-verification
After the PKS login, you can run various pks commands. 
$ pks create-cluster kube01gcp --external-hostname kube01gcp.mycompany.local --plan small
$ pks clusters
Name       Plan Name  UUID                                  Status     Action
kube01gcp  small      d335c087-cda6-4da3-86b2-332ebfd4d42f  succeeded  CREATE
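 
You can also inspect a single cluster. If your pks CLI version supports it, the 'pks cluster' subcommand prints the cluster's details, including the master hostname and IP(s), which is useful for step 6 below:
# Show details of one cluster, including the master host & IP(s)
$ pks cluster kube01gcp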
 
6. From the GCP console, identify the external IP of the newly created Kubernetes master and register it with your /etc/hosts file or DNS server.
   
Ex)
/etc/hosts
35.189.5.10    kube01gcp.mycompany.local
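 
Because both VMs were tagged 'k8s-pks' in step 2, you can also find the master's external IP from the CLI instead of the console; a sketch:
# List only the instances carrying the 'k8s-pks' network tag
$ gcloud compute instances list --filter="tags.items=k8s-pks"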
 
7. Get the Kubernetes credentials for your cluster.
$ pks get-credentials kube01gcp
Now you can confirm that the credentials have been retrieved and placed in the ~/.kube/config file.
$ grep kube01gcp ~/.kube/config | grep server 
server: https://kube01gcp.mycompany.local:8443
8. Confirm that your Kubernetes context is now set to your cluster. (Make sure you have already installed the kubectl command on your local machine.)
$ kubectl config get-contexts
CURRENT   NAME                      CLUSTER                      AUTHINFO                               NAMESPACE
          docker-for-desktop        docker-for-desktop-cluster   docker-for-desktop
*         kube01gcp                 kube01gcp                    40673fed-da49-411f-b1f0-4dc04a179fe0
          kube01pez                 kube01pez                    6189170a-76ec-4bb1-a8bf-eaf32eee0888
          kube02pez                 kube02pez                    b4744546-8f2a-49b6-93a5-982e80f17c6d
          kubernetes-the-hard-way   kubernetes-the-hard-way      admin
          minikube                  minikube                     minikube
9. To switch to another Kubernetes context (cluster), use the 'kubectl config use-context' command.
$ kubectl config use-context minikube 
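 
As a final smoke test against the GCP cluster, switch back to its context and query the API server; any kubectl command that reaches the master confirms the end-to-end setup:
$ kubectl config use-context kube01gcp
$ kubectl cluster-info
$ kubectl get nodes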

Additional Information

  • Once you reboot the VMs (PKS & K8s servers), their external IPs will change unless you are using static ones, so you will have to update your name resolution (/etc/hosts or DNS server) after every VM restart. If you plan to use the environment long term, it is highly recommended to reserve static IPs (see the sketch after this list).
  • You cannot use the BOSH CLI through the external IP of the BOSH Director because its root CA certificate is generated only for the internal IP & localhost. You will first have to log in to the Operations Manager terminal to run bosh commands.
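 
For example, an ephemeral external IP can be promoted to a reserved static address with gcloud. This is a sketch where 'pks-uaa-static-ip' is a name of your choosing, 35.189.5.5 is the ephemeral IP from step 3, and <region> is the VM's region:
# Promote the currently-assigned ephemeral IP to a static address
$ gcloud compute addresses create pks-uaa-static-ip \
--addresses=35.189.5.5 --region=<region>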

 
