IBMPartnerDemo

View project on GitHub

Partner Demo Environments on IBM Cloud

Tech Zone (14 days availability)

  1. For product demos where you are just showing the product with canned assets, you can use the Data and AI Live demos

Note: Make sure that you are signed in with your IBMid and that it is linked to your company's PartnerWorld profile. Without this, your options will be limited. Also, the install pod has been removed, so you will not be able to add additional services to a cluster.

Back to Table of Contents

Cloud Pak for Data as a Service

In many situations, Cloud Pak for Data as a Service will work for an individual using some of the “lite” service plans. If you look to run this as a team in one account, there could be more costs as you move up the service plans.

Cloud Pak for Data as a Service implementation guide (being built)

Back to Table of Contents

Build on your own hardware or VMs

If you have hardware available, you can build your own Cloud Pak environment. You will follow the same instructions; however, in a non-cloud environment, Red Hat software entitlement is required to install IBM Cloud Paks. Follow these Red Hat software access instructions to join Red Hat’s Partner Connect program and download NFR (Not for Resale) Red Hat software at no additional charge.

Back to Table of Contents

Other options to think about

  • VAD options Need some links here
  • Cloud Engagement Funds Need links to how to get this started

Back to Table of Contents

Persistent Demo Environment (Cost)

If you are looking for something personalized, isolated, and persistent, you can go down the path of a cluster of your own on IBM Cloud. Following the steps below, you will be responsible for infrastructure costs, but we do have some offsets that can ease this monetary pain.

Back to Table of Contents

PartnerWorld Cloud Credits IBM Cloud account

  1. You may be entitled to IBM Cloud credits free of charge based on your PartnerWorld Program status, or you may choose to purchase an IBM Partner Package (with or without a Booster Pack) to get more. Click here to learn how many credits you can obtain, and request your credits to be activated via this form. Once requested, the IBM PartnerWorld team will help set up an account with the IBM Cloud credits loaded for you to use.
  2. Verify you can log into your company's account.
  3. If you need more IBM Cloud credits than your base Partner Package offers, you can also choose to purchase a Booster Pack to add more IBM Cloud credits to your account.
  4. Best practices on setting up your account
  5. Add additional people to the organization’s IBM Cloud account. Make sure these email IDs are also listed in your company's PartnerWorld profile.

Back to Table of Contents

Proof of Technology (POC) account

How do I request an account? What qualifies? Can my VAD help here? Currently, an IBM Tech Seller needs to create it. If you have one of these accounts, you can access it by:

  1. Logging into IBM Cloud
  2. Looking across the black top menu bar, where you will see Catalog, Docs, Support, and Manage. The next list box to the right is the one with your accounts. Toggle the list box to select the POC account.
  3. Most likely the IBM admin of the account has set it up with ACLs and an access group. Work with this person for further access capabilities.

Coming soon

Back to Table of Contents

Creating an IBM Cloud account without credits

  1. Sign up for IBM Cloud.
  2. Add credit card information or contact IBM Sales for a subscription.
  3. Best practices on setting up your account
  4. Add additional people to the organization’s IBM Cloud account. Make sure these email IDs are also listed in your company's PartnerWorld profile.

Back to Table of Contents

Prerequisite steps to get entitlement to Cloud Pak software:

  1. Subscribe to an IBM Partner Package. Every new package has the Software Access Catalog plus access to PartnerWorld Software developer-to-developer support, so choose the right Partner Package for your business. Note: You must be logged in to PartnerWorld to see details.
  2. Add employees to your IBM PartnerWorld profile. This will give them authorization to entitled software, on premises or in the Cloud Container Registry. Note: This could take up to 24 hours to take hold.
  3. Verify that you have access to the entitlement registry. Click Library; it should say IBM SOFTWARE ACCESS 1 YEAR.

Back to Table of Contents

Add IBM Cloud Security instructions and best practices

  1. Once you have access to the account, it is best to create a Resource group and Access Group in IAM.
  2. Read Best practices for organizing resources and assigning access
  3. Go to the Access (IAM) dashboard and click Resource Groups. This will make it easier to contain access.
  4. Click Create, then enter a name for your resource group. I will pick CPD.
  5. From the Console go to Manage>Access(IAM)>Groups
  6. Click Create and add your group name.
  7. Click the Users tab, then Add Users. This will produce a list of users that can be added. It is always good to add yourself.
  8. Click the Access Policies tab and select the following services and appropriate permissions.
    • Cloud Object Storage (Administrator or Editor)
    • Infrastructure Services (Administrator or Editor)
    • Cloud Foundry Enterprise Environment (Administrator or Writer)
    • Container Registry (Manager or Writer)
    • Kubernetes Service (Manager or Writer)
    • Certificate Manager (Manager or Writer)
    • User Management (Editor)
    • IAM Access Groups Service (Editor)
    • Internet Services (Administrator)
    • DNS Services (Administrator)
    • Add appropriate controls for your resource group

Note: For further reading, see Access Management on IBM Cloud.

Back to Table of Contents

Create an admin Virtual server

  1. Follow these instructions

Creating an OpenShift cluster

Via the UI

  1. Log into IBM Cloud
  2. Click on Catalog > Services
  3. Filter on Containers by checking the box on the left.
  4. Select Red Hat OpenShift on IBM Cloud
  5. On the right is a pane with the estimated monthly cost; the cost is based on hourly tiered Kubernetes compute plus the 30-day OpenShift license. If you are authenticated and authorized for the PartnerWorld Software Access Catalog per the above instructions, you are only responsible for the hourly compute used; however, you will be billed even if compute resources are down. This service does not have suspendible billing.
  6. On the left is where you will alter the version, license, type of compute, and number of worker nodes. Note: You are only paying for worker nodes, not master or bastion nodes.
  7. Going down the page:
  8. First select the OpenShift version
  9. OpenShift entitlement: select Apply my Cloud Pak entitlement to this worker pool
  10. Select a resource group: Default, unless you created your own, which makes things easier to organize later.
  11. Pick your Geography
  12. Pick your availability: Single zone
  13. Pick Worker Zone
  14. Click Change Flavor to select the appropriate flavor, then click Done. Example: Cloud Pak for Data uses either 16 core/64GB RAM or 32 core/64GB RAM.
  15. Pick the number of worker nodes. Example: for Cloud Pak for Data, start with 3 (suggestions on the bare minimum are in the table below). I am running Cloud Pak for Data plus all my favorite services on a 7-node, 16 core/64GB cluster. I tried 32 core/64GB RAM, but it didn’t seem to buy me much around deployment. With ROKS, it is easy to add to or subtract from your worker pool.
  16. Adjust your cluster name to your liking. Example: CloudPakDataDemo-HealthCare
  17. Click the blue Create button on the right-hand side. Just above it is your estimate; make sure there is a line item for Cloud Pak entitlement with a negative value.
  18. This can take 30 minutes to provision.
Cloud Pak                Nodes   Size: Cores/RAM
Apps                     1       4/16
Automation               3       8/32
Data                     4       16/64
Integration              3       8/32
Multi-Cloud Management   1       1/8
Security                 3       8/32

Note: Creating a 3.11 or 4.3 cluster on IBM Cloud uses the same steps, except for selecting the version.

Back to Table of Contents

Via the Command line

  1. Download and install the IBM Cloud command line interface, or CLI
  2. Log into your account using ibmcloud login
  3. List all the locations which provide infrastructure for OpenShift using the classic public cloud. There are other providers, vpc-classic and vpc-gen2, which are a little more complicated to create but give you isolation and the latest networking and infrastructure.
    • ibmcloud oc locations --provider classic
    • If you want to narrow it to a Geography: Americas (na), Europe (eu) or Asia Pacific (ap)
      Toms-MBP:bin tjm$ ibmcloud oc locations --provider classic | grep \(na\)
      dal13   Dallas (dal)†           United States (us)    North America (na)   
      wdc06   Washington DC (wdc)†    United States (us)    North America (na)   
      sjc04   San Jose (sjc)          United States (us)    North America (na)   
      dal12   Dallas (dal)†           United States (us)    North America (na)   
      wdc07   Washington DC (wdc)†    United States (us)    North America (na)   
      tor01   Toronto (tor)           Canada (ca)           North America (na)   
      sjc03   San Jose (sjc)          United States (us)    North America (na)   
      wdc04   Washington DC (wdc)†    United States (us)    North America (na)   
      hou02   Houston (hou)           United States (us)    North America (na)   
      dal10   Dallas (dal)†           United States (us)    North America (na)   
      mex01   Mexico City (mex-cty)   Mexico (mex)          North America (na)   
      mon01   Montreal (mon)          Canada (ca)           North America (na)  
      
  4. In this case, I have picked Washington DC datacenter 06 and I want to know what worker node configurations are offered. Generally I want either 16 core/64GB or 32 core/64GB servers. To get the list, execute: ibmcloud oc flavors --zone wdc06 --provider classic. Since I know that I only want virtual servers with 64GB of RAM, I will run the following: ibmcloud oc flavors --zone wdc06 --provider classic | grep x64 | grep -v physical
    Toms-MBP:bin tjm$ ibmcloud oc flavors --zone wdc06 --provider classic | grep x64 | grep -v physical
    b2c.16x64                 16      64GB     1000Mbps        UBUNTU_16_64   virtual       25GB         100GB               classic   
    b3c.16x64                 16      64GB     1000Mbps        UBUNTU_18_64   virtual       25GB         100GB               classic   
    c2c.32x64                 32      64GB     1000Mbps        UBUNTU_16_64   virtual       25GB         100GB               classic   
    c3c.32x64                 32      64GB     1000Mbps        UBUNTU_18_64   virtual       25GB         100GB               classic   
    m2c.8x64                  8       64GB     1000Mbps        UBUNTU_16_64   virtual       25GB         100GB               classic   
    m3c.8x64                  8       64GB     1000Mbps        UBUNTU_18_64   virtual       25GB         100GB               classic   
    
  5. You need a VLAN ID. Run ibmcloud oc vlan ls --zone wdc06
    Toms-MBP:bin tjm$ ibmcloud oc vlan ls --zone wdc06
    OK
    ID        Name   Number   Type      Router         Supports Virtual Workers   
    2942828          1250     private   bcr01a.wdc06   true   
    2942826          1315     public    fcr01a.wdc06   true   
    
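The two VLAN IDs from this listing are exactly what the cluster create command needs later; here is a small sketch of pulling them out of captured output with awk (the sample text is embedded so it runs offline, without the CLI):

```shell
# Output rows captured from `ibmcloud oc vlan ls --zone wdc06` (from the listing above)
vlans='2942828          1250     private   bcr01a.wdc06   true
2942826          1315     public    fcr01a.wdc06   true'

# Column 1 is the VLAN ID; match each row on its type keyword
private_vlan=$(printf '%s\n' "$vlans" | awk '/private/ { print $1 }')
public_vlan=$(printf '%s\n' "$vlans" | awk '/public/ { print $1 }')
echo "--public-vlan $public_vlan --private-vlan $private_vlan"
```

On a live account you would pipe the real command into the same awk filters.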
  6. You will need the OpenShift version for the command line. Use ibmcloud oc versions. You can get more specific using the --show-version flag; the options are openshift or kubernetes. Below I am listing only the OpenShift versions.
    Toms-MBP:bin tjm$ ibmcloud oc versions --show-version openshift
    OK
    OpenShift Versions   
    3.11.248_openshift   
    4.3.31_openshift (default)   
    4.4.17_openshift   
    
  7. Now you have all the information needed to build an OpenShift cluster. Here is the command line to build one in the Washington DC data center, using virtual servers with 16 cores and 64GB RAM. The cluster will use shared hardware with a worker pool of 3. I also want to use the entitlement from the Cloud Pak; without --entitlement cloud_pak your account will be charged for an OpenShift license. Run the following: ibmcloud oc cluster create classic --zone wdc06 --flavor b3c.16x64 --hardware shared --public-vlan 2942826 --private-vlan 2942828 --workers 3 --name commandline --version 4.3.31_openshift --public-service-endpoint --entitlement cloud_pak
    Toms-MBP:bin tjm$  ibmcloud oc cluster create classic --zone wdc06 --flavor b3c.16x64 --hardware shared --public-vlan 2942826  --private-vlan 2942828 --workers 3 --name commandline --version 4.3.31_openshift  --public-service-endpoint --entitlement cloud_pak
    Creating cluster...
    OK
    Cluster created with ID bt932qpw05g1h1gagch0
    
  8. Let’s list the clusters using ibmcloud oc cluster ls
    Toms-MBP:bin tjm$ ibmcloud oc cluster ls
    OK
    Name                        ID                     State           Created          Workers   Location          Version                 Resource Group Name   Provider   
    commandline                 bt932qpw05g1h1gagch0   normal          26 minutes ago   3         Washington D.C.   4.3.31_1536_openshift   Default               classic   
    

Checking that OpenShift was created and is working properly

  1. Open the Dashboard on IBM Cloud.
  2. Click the link for Clusters. This gives you a high-level view of the cluster. Take note of the version up to the underscore; you will need to match this version with your OpenShift client. Example:
  3. Click on the cluster name Example: CloudPakDataDemo-HealthCare
  4. From here you can see all vital information at one glance.
  5. Click Worker nodes. This lists the specifics for each virtual server.
  6. Click Worker pool. This shows you how many nodes are in the pool, the flavor, etc. The three dots to the right allow you to delete or resize. If you started with three and need more, you can resize the worker pool up or down.
  7. Click Access. This lists all the tools you will want for accessing the underlying OpenShift cluster from the command line. You can do things from the console, but many folks want to use the command line, and you will need these tools to provision additional Cloud Pak services. Go ahead and download them to your system. Make sure that the OpenShift client is the same version as your cluster.

Back to Table of Contents

Using OpenShift console

  1. Since master nodes are shared in the ROKS environment and you will use a token to gain access, you can get there via your IBM Cloud account.
  2. From the main IBM Cloud page, click on the Navigation menu in the upper left.
  3. Scroll down to OpenShift
  4. Click Clusters
  5. Click on your cluster. Mine is CloudPakData.
  6. If you are not in Overview, then click Overview
  7. Click OpenShift web console
  8. Since we want to install to a project other than default, click the + Create Project button in the upper right.
    • Add zen to Name
    • Add zen or Cloud Pak for Data to Display Name
    • Add something for your own knowledge to Description. I’ll leave mine blank.
  9. This is where you will install the Cloud Pak for Data Services.
  10. You can work with Deployments, jobs, volume claims and Pods later by accessing this space.

Installing the CLI environment

  1. Download the client. Note: This should match your server-side version.
  2. Unzip and copy oc to /usr/local/bin, or include it in your PATH.
  3. Execute oc version to check that everything is working.
  4. Retrieving the token to log into OpenShift on IBM Cloud
  5. While you can install the Cloud Paks into the default project, it’s a better idea to put them in their own project, or namespace (the terms are used interchangeably). Let’s create a new project; I’ll call mine zen. From the terminal, while logged in, issue oc new-project zen. This will create a zen project, and you will use this project name when creating the Cloud Pak.
  6. If you are going to customize the Cloud Pak cluster with other services, you will need to create a route to the internal container registry. These are the commands:
  • OCP 3.11
    • oc create route reencrypt --service=docker-registry -n default
    • oc annotate route docker-registry --overwrite haproxy.router.openshift.io/balance=source -n default
    • oc get routes -n default
  • OCP 4.x
    • oc get routes -n openshift-image-registry

Back to Table of Contents

Retrieving the token to log into OpenShift on IBM Cloud

  1. Go to the dashboard and in the upper right click the blue button/link for OpenShift Web Console.
  2. In the upper right hand corner, there should be a person icon, click the arrow and click Copy Login Command.
    • OCP 3.11: This will put the login command, with your token, into your copy buffer. This token is renewed daily.
    • OCP 4.x: This will launch a new page with a single URL Display Token. Click this URL. Copy the contents of the Log in with this token gray area. This token is renewed daily.
  3. Paste into your terminal window. oc login https://c106-e.us-south.containers.cloud.ibm.com:30783 --token=EAVMH6YNi0BA88H3VO90v_WidUoNNPOtF3u4Tg
  4. Test command line connectivity to the underlying OpenShift infrastructure with oc version or oc get pods. You can also do much of this through the OpenShift console.

Back to Table of Contents

Quick run through of some OpenShift CLI commands

In the installation section, you ran a few commands to validate the installation. Let’s go a little deeper to showcase what you might want to do from the command line.

  1. List all the pods in the cluster. On the left you will see the namespaces, or projects; moving right is the pod name, how many containers are ready, the state, restarts, and the age in days. oc get pods --all-namespaces
    Toms-MBP:bin tjm$ oc get pods --all-namespaces
    NAMESPACE                           NAME                                                              READY     STATUS      RESTARTS   AGE
    default                             docker-registry-6d6cdd559d-ll5rz                                  1/1       Running     0          3d
    default                             docker-registry-6d6cdd559d-w7pb6                                  1/1       Running     0          3d
    ...
    ibm-system                          ibm-cloud-provider-ip-169-60-227-155-6b8c7f49cf-nm9v5             1/1       Running     0          3d
    ...
    kube-proxy-and-dns                  proxy-and-dns-rgkzw                                               1/1       Running     0          18d
    kube-proxy-and-dns                  proxy-and-dns-tt6rz                                               1/1       NodeLost    0          7d
    ...
    kube-service-catalog                apiserver-64d8986ddd-22f2z                                        1/1       Running     0          3d
    ...
    kube-system                         vpn-5848948c84-c9frz                                              1/1       Running     0          3d
    ...
    openshift-ansible-service-broker    asb-58f44b5c6c-6472d                                              1/1       Unknown     0          3d
    openshift-ansible-service-broker    asb-58f44b5c6c-jz6zn                                              1/1       Running     0          18m
    ...
    openshift-monitoring                prometheus-operator-68ccbf549-dwxld                               1/1       Running     0          3d
    ...
    openshift-template-service-broker   apiserver-7df5d95cfb-9pzlz                                        1/1       Unknown     0          3d
    openshift-template-service-broker   apiserver-7df5d95cfb-knzdc                                        1/1       Running     0          3d
    
  2. List all the pods in a namespace, and compare this list to the last. Notice anything? The Unknown pod is gone, replaced with a different pod. OpenShift realized it was not responsive, spun up a different pod, then shut down the old one.
    Toms-MBP:bin tjm$ oc get pods -n openshift-template-service-broker
    NAME                         READY     STATUS    RESTARTS   AGE
    apiserver-7df5d95cfb-knzdc   1/1       Running   0          3d
    apiserver-7df5d95cfb-p9cxm   1/1       Running   0          26m
    

    Try another; notice NodeLost is gone. In this case the pod name is the same, so the updated state kept OpenShift from restarting or deleting the pod.

    Toms-MBP:bin tjm$ oc get pods -n kube-proxy-and-dns  
    NAME                  READY     STATUS    RESTARTS   AGE
    proxy-and-dns-2866f   1/1       Running   0          18d
    proxy-and-dns-5m5tr   1/1       Running   0          18d
    proxy-and-dns-b997g   1/1       Running   0          18d
    proxy-and-dns-d8jrr   1/1       Running   0          18d
    proxy-and-dns-fk74t   1/1       Running   0          18d
    proxy-and-dns-rgkzw   1/1       Running   0          18d
    proxy-and-dns-tt6rz   1/1       Running   1          7d
    
  3. If you have a pod that is Evicted or Terminating, you may want to delete or reboot it manually. You can gracefully terminate and restart a pod or force it down. We are going to delete a pod in the openshift-template-service-broker namespace. Note: in my second command, I forgot to add the namespace.
    Toms-MBP:bin tjm$ oc get pods -n openshift-template-service-broker
    NAME                         READY     STATUS    RESTARTS   AGE
    apiserver-7df5d95cfb-knzdc   1/1       Running   0          3d
    apiserver-7df5d95cfb-p9cxm   1/1       Running   0          35m
    Toms-MBP:bin tjm$ oc delete pod apiserver-7df5d95cfb-knzdc
    Error from server (NotFound): pods "apiserver-7df5d95cfb-knzdc" not found
    Toms-MBP:bin tjm$ oc delete pod apiserver-7df5d95cfb-knzdc -n openshift-template-service-broker
    pod "apiserver-7df5d95cfb-knzdc" deleted
    Toms-MBP:bin tjm$ oc get pods -n openshift-template-service-broker
    NAME                         READY     STATUS    RESTARTS   AGE
    apiserver-7df5d95cfb-55dcd   0/1       Running   0          13s
    apiserver-7df5d95cfb-p9cxm   1/1       Running   0          36m
    Toms-MBP:bin tjm$ oc get pods -n openshift-template-service-broker
    NAME                         READY     STATUS    RESTARTS   AGE
    apiserver-7df5d95cfb-55dcd   1/1       Running   0          35s
    apiserver-7df5d95cfb-p9cxm   1/1       Running   0          36m
    
  4. An Evicted pod might not go gracefully, so you can add --grace-period=0 --force to your delete command.
    Toms-MBP:~ tjm$ oc delete pods apiserver-7df5d95cfb-p9cxm -n openshift-template-service-broker --grace-period=0 --force
    warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    pod "apiserver-7df5d95cfb-p9cxm" force deleted
    Toms-MBP:~ tjm$ oc get pods -n openshift-template-service-broker
    NAME                         READY     STATUS    RESTARTS   AGE
    apiserver-7df5d95cfb-55dcd   1/1       Running   0          11m
    apiserver-7df5d95cfb-m67nk   1/1       Running   0          35s
    
  5. Always adding -n openshift-template-service-broker (or another namespace) can be annoying. If you think you will be in a specific namespace more often than not, like zen for Cloud Pak for Data, then execute oc project zen. This allows you to drop the -n <namespace> for anything in that namespace.
  6. If you want to look into the logs of a pod, run oc logs <pod name> -n <namespace>. Sometimes there is a lot of information, and other times not so much. You can watch, or follow, the logs by adding -f to the end of the command.
    Toms-MBP:~ tjm$ oc logs apiserver-7df5d95cfb-m67nk -n openshift-template-service-broker
    I0616 17:12:52.000294       1 serve.go:116] Serving securely on [::]:8443
    I0616 17:12:52.000593       1 controller_utils.go:1025] Waiting for caches to sync for tsb controller
    I0616 17:12:52.101071       1 controller_utils.go:1032] Caches are synced for tsb controller
    
  7. If this doesn’t yield much information, then you can use oc describe <resource>.
    Toms-MBP:~ tjm$ oc describe pod  apiserver-7df5d95cfb-m67nk -n openshift-template-service-broker
    Name:               apiserver-7df5d95cfb-m67nk
    Namespace:          openshift-template-service-broker
    Priority:           0
    PriorityClassName:  <none>
    Node:               10.95.7.50/10.95.7.50
    Start Time:         Tue, 16 Jun 2020 13:12:49 -0400
    ...
    ...
    Events:
      Type    Reason     Age   From                 Message
      ----    ------     ----  ----                 -------
      Normal  Scheduled  14m   default-scheduler    Successfully assigned openshift-template-service-broker/apiserver-7df5d95cfb-m67nk to 10.95.7.50
      Normal  Pulled     14m   kubelet, 10.95.7.50  Container image "registry.ng.bluemix.net/armada-master/iksorigin-ose-template-service-broker:v3.11.216" already present on machine
      Normal  Created    14m   kubelet, 10.95.7.50  Created container
      Normal  Started    14m   kubelet, 10.95.7.50  Started container
    

Advanced OpenShift tasks

Watch for all unsuccessful Pods

In another terminal, run watch "oc get pods | egrep -v 'Completed|1/1|2/2|3/3|4/4|5/5|6/6|7/7'". To verify it is working, delete a few pods and watch OpenShift recover. This is good to run in the background to see how things work; I like to run it during service installations to understand what is associated with what. In the example below, I scaled a StatefulSet for Data Virtualization down to 0, then scaled it back up. This is a capture of it coming back up.

Every 2.0s: oc get pods   | egrep -v 'Completed|1/1|2/2|3/3|4/4|5/5|6/6|7/7'                                                                                                     Thu Nov 11 08:51:50 2021

NAME                                                         READY   STATUS      RESTARTS   
c-db2u-dv-db2u-0                                             0/1     Init:0/3    0          
c-db2u-dv-db2u-1                                             0/1     Init:0/3    0          

You might want to watch all namespaces or projects and also see which node each pod is scheduled to run on. In that case, run the following in a terminal window (this is my default view): watch "oc get pods -A -o wide | egrep -v 'Completed|1/1|2/2|3/3|4/4|5/5|6/6|7/7'"

Every 2.0s: oc get pods -A -o wide   | egrep -v 'Completed|1/1|2/2|3/3|4/4|5/5|6/6|7/7'  
NAMESPACE       NAME                                            READY   STATUS              RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
kube-system     ibmcloud-block-storage-driver-lps26             0/1     ContainerCreating   0          3s    10.190.161.122   10.190.161.122   <none>           <none>
kube-system     ibmcloud-block-storage-plugin-58f87f78d9-xwvpp  0/1     ContainerCreating   0          2s    <none>           10.190.161.74    <none>           <none>
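The egrep -v filter above simply drops any line whose READY column shows all containers up, plus Completed jobs. A self-contained sketch of the same filter against captured sample output (the pod names here are made up for illustration):

```shell
# Sample `oc get pods` output captured as text, so this runs without a cluster
pods='NAME                READY   STATUS      RESTARTS
good-pod-abc12      1/1     Running     0
broken-pod-def34    0/1     Init:0/3    0
done-job-xyz99      0/1     Completed   0'

# Drop Completed jobs and fully-ready pods, leaving only pods needing attention
filtered=$(printf '%s\n' "$pods" | egrep -v 'Completed|1/1|2/2|3/3|4/4|5/5|6/6|7/7')
printf '%s\n' "$filtered"
```

Only the header and the not-ready broken-pod line survive the filter, which is exactly why the watch view stays quiet on a healthy cluster.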

Accessing Pod metadata

This might not be something you do often, but it helps to know how to query pod metadata.

I want to get the product name and product version from a particular Cloud Pak for Data pod.
This command will yield the values from a specific pod: oc get pod -n zen wdp-couchdb-0 -o template --template '{{"\n Service Name: "}}{{.metadata.annotations.productName}}{{"\n Service version number: "}}{{.metadata.annotations.productVersion}}{{"\n"}}'

  tjm$ oc get pod  -n zen wdp-couchdb-0 -o template --template '{{"\n Service Name: "}}{{.metadata.annotations.productName}}{{"\n Service version number: "}}{{.metadata.annotations.productVersion}}{{"\n"}}'

  Service Name:  IBM Cloud Pak for Data Common Core Services  
   Service version number:  3.5.3

Maybe I want to get this for all of the pods; this command will do that. Note: To save space, I removed the redundant entries, but for 185 pods this would yield 370 lines of output. oc get pod -n clonetest -o jsonpath='{range .items[*].metadata.annotations}{"Product name: "}{.productName}{"\n "}{"productVersion: "}{.productVersion}{"\n"}{end}'

  Toms-MBP:cpd352 tjm$ oc  get pod  -n clonetest -o jsonpath='{range .items[*].metadata.annotations}{"Product name: "}{.productName}{"\n "}{"productVersion: "}{.productVersion}{"\n"}{end}'

   Product name: IBM Cloud Pak for Data Common Core Services
   productVersion: 3.5.3
   Product name: IBM Watson Knowledge Catalog for IBM Cloud Pak for Data
   productVersion: 3.5.1
   Product name: IBM Cognos Dashboard Embedded
   productVersion: 3.5.1
   Product name: IBM Cloud Pak for Data Control Plane
   productVersion: 3.5.2
   Product name: IBM Data Virtualization
   productVersion: 1.5.0
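The jsonpath query is just reading the productName and productVersion annotations off each pod. A sketch of the same extraction done offline with sed, against a hypothetical one-pod sample (values copied from the output above):

```shell
# Hypothetical one-pod sample of the annotation data the jsonpath query reads
sample='{"items":[{"metadata":{"annotations":{"productName":"IBM Cloud Pak for Data Control Plane","productVersion":"3.5.2"}}}]}'

# Pull the two annotation values out with sed, mirroring the jsonpath template
name=$(printf '%s' "$sample" | sed -n 's/.*"productName":"\([^"]*\)".*/\1/p')
version=$(printf '%s' "$sample" | sed -n 's/.*"productVersion":"\([^"]*\)".*/\1/p')
printf 'Product name: %s\n productVersion: %s\n' "$name" "$version"
```

On a live cluster, the jsonpath template does this server-side in one call per namespace, which is why it scales to 185 pods without any shell parsing.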

Another common task is selecting a pod to access. Here is a command to get the pod name; you might need to make your search more specific. In this one, it is getting the install operator for Cloud Pak for Data: oc get pods | grep install | awk '{ print $1 }'. Why do I care? Maybe you will want to review the install operator, or something went wrong and you need to get into the pod. In that case you would use oc rsh $(oc get pods | grep install | awk '{ print $1 }')

   tjm$ oc get pods | grep install | awk '{ print $1 }'
   cpd-install-operator-69f55944f8-vggrb
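The pipeline works because awk '{ print $1 }' keeps only the first whitespace-separated column of whatever grep matched. A sketch against captured sample output (the second pod name is made up for illustration):

```shell
# Captured sample of `oc get pods` output, so this runs without a cluster
pods='NAME                                      READY   STATUS    RESTARTS   AGE
cpd-install-operator-69f55944f8-vggrb     1/1     Running   0          3d
wdp-couchdb-0                             2/2     Running   0          3d'

# grep narrows the listing to the install operator; awk keeps the name column
name=$(printf '%s\n' "$pods" | grep install | awk '{ print $1 }')
echo "$name"
```

That extracted name is what gets substituted into the oc rsh command.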

Stopping a Terminating object

From time to time, a pod, a custom resource definition, or a PVC will get stuck terminating. For example, the cc-home-pvc PVC is stuck in the Terminating state. To clear this condition, you can issue the following command:

oc patch pvc  cc-home-pvc -p '{"metadata":{"finalizers":[]}}' --type=merge
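The reason this works: deletion of an object blocks until its metadata.finalizers list is empty, and a JSON merge patch replaces array fields wholesale. A toy sketch of that merge behavior (plain string handling, since the real merge happens server-side in Kubernetes):

```shell
# Toy metadata for a stuck PVC (illustrative values, not real cluster state)
before='{"metadata":{"finalizers":["kubernetes.io/pvc-protection"]}}'
# The merge-patch body sent by the oc patch command above
patch='{"metadata":{"finalizers":[]}}'
# In JSON merge-patch semantics, an array in the patch replaces the target
# array wholesale, so the finalizers list simply becomes empty:
after=$patch
echo "$after"
```

With no finalizers left, the API server completes the pending deletion immediately.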

Performance issues on one node

How can I figure out what is running on my node and what percentage of resources is being consumed?

  1. Get a list of the nodes using oc get nodes
      Toms-MBP:cpd4 tjm$ oc get nodes
      NAME            STATUS   ROLES           AGE   VERSION
      10.191.185.13   Ready    master,worker   18d   v1.21.1+a620f50
      10.191.185.25   Ready    master,worker   21d   v1.21.1+a620f50
      10.191.185.5    Ready    master,worker   21d   v1.21.1+a620f50
      10.191.185.54   Ready    master,worker   18d   v1.21.1+a620f50
      10.191.185.6    Ready    master,worker   21d   v1.21.1+a620f50
      10.191.185.60   Ready    master,worker   18d   v1.21.1+a620f50
    
  2. Pick the node that is giving you issues and run a describe against it. oc describe node 10.191.185.13
  3. Looking at the output from the bottom up, you will see a high-level summary of the resource requests and limits scheduled on this node. One of the good things about Kubernetes/OpenShift is the ability to overcommit resource limits. The request is the initial ask of the node for scheduling; the limit is a defined value that gives the platform guardrails to avoid runaway processes. These can be adjusted by running oc set resources on a StatefulSet, ReplicaSet, or Deployment. Take note that if you want to change the resources on a pod, you need to change its template definition, which is either a StatefulSet, ReplicaSet, or Deployment. If you are getting OOMKilled states, you can increase the limit on the memory. If you increase the CPU limit, this could affect the number of VPCs you are consuming. I am not saying don't do it, just understand any side effects that might come from the change.
      Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests           Limits
      --------           --------           ------
      cpu                13755m (86%)       42330m (266%)
      memory             41944900480 (68%)  101969336192 (165%)
      ephemeral-storage  0 (0%)             0 (0%)
    
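The percentages in that summary are taken against the node's allocatable CPU (capacity minus system reservations), not raw capacity. A quick check of the math from the table above, assuming roughly 15900m allocatable on this 16-core worker (the allocatable figure is my back-of-the-envelope assumption, not part of the output):

```shell
# Values from the Allocated resources output above, in millicores;
# allocatable is an assumption (16 cores minus system reservations)
allocatable=15900
requests=13755
limits=42330
echo "requests: $(( requests * 100 / allocatable ))%"
echo "limits: $(( limits * 100 / allocatable ))%"
```

This reproduces the 86% requests and 266% limits figures, and shows concretely what "over 100 percent, i.e., overcommitted" means for the limits row.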
  4. This section is the Non-terminated Pods section. Here you will see the namespace, or project, and a listing of every pod.
      Non-terminated Pods:                      (62 in total)
      Namespace                               Name                                                             CPU Requests  CPU Limits   Memory Requests  Memory Limits  AGE
      ---------                               ----                                                             ------------  ----------   ---------------  -------------  ---
      calico-system                           calico-node-sbxgp                                                250m (1%)     0 (0%)       80Mi (0%)        0 (0%)         18d
      ibm-common-services                     db2u-operator-manager-779695d676-8qtbv                           500m (3%)     1 (6%)       500Mi (0%)       1000Mi (1%)    8d
      ibm-common-services                     ibm-cde-operator-69c7f6f587-jpzg6                                100m (0%)     500m (3%)    256Mi (0%)       1Gi (1%)       8d
      ibm-common-services                     ibm-cloud-databases-redis-operator-cf6c9676b-bjd2m               100m (0%)     200m (1%)    128Mi (0%)       256Mi (0%)     21h
      ibm-common-services                     ibm-cpd-datarefinery-operator-c596fbb6c-zqtct                    100m (0%)     500m (3%)    256Mi (0%)       1Gi (1%)       8d
    
  5. Next are the CIDRs, which until recently I ignored. It came to my attention that if you are running a dense compute-node cluster, it is possible to run into a limit on the number of pods you can create on a node, based on the CIDR value.
      PodCIDR:                                  172.30.4.0/24
      PodCIDRs:                                 172.30.4.0/24
    
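Why the pod CIDR can cap pod density: a /24 leaves at most 2^(32-24) = 256 pod IP slots on the node, and the schedulable pod count must stay below that. A one-line check of the arithmetic:

```shell
# Pod IP slots available from the /24 PodCIDR shown above
prefix=24
addresses=$(( 1 << (32 - prefix) ))
echo "$addresses"   # the node's max-pods setting must stay below this
```

On a dense node flavor, that ceiling can bite before CPU or memory does.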
  6. The last section I look at is Taints, Unschedulable, and Conditions. The first two can tell you why certain pods are assigned or scheduled (Taints) and why there might not be any or many pods scheduled (Unschedulable). When a node is unschedulable, generally someone has cordoned the node (oc adm cordon <node>). The Conditions give you a high-level status indicator of the resources on this particular node.
      Taints:             <none>
      Unschedulable:      false
      Conditions:
     Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
     ----                 ------  -----------------                 ------------------                ------                       -------
     NetworkUnavailable   False   Sat, 23 Oct 2021 11:11:19 -0400   Sat, 23 Oct 2021 11:11:19 -0400   CalicoIsUp                   Calico is running on this node
     MemoryPressure       False   Thu, 11 Nov 2021 08:56:17 -0500   Sat, 23 Oct 2021 11:10:04 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
     DiskPressure         False   Thu, 11 Nov 2021 08:56:17 -0500   Sat, 23 Oct 2021 11:10:04 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
     PIDPressure          False   Thu, 11 Nov 2021 08:56:17 -0500   Sat, 23 Oct 2021 11:10:04 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
     Ready                True    Thu, 11 Nov 2021 08:56:17 -0500   Sat, 23 Oct 2021 11:10:54 -0400   KubeletReady                 kubelet is posting ready status
    

MORE TO COME

Back to Table of Contents

Install your Cloud Pak of Choice

Back to Table of Contents

Install Netezza on IBM Cloud

Back to Table of Contents

Cloning an existing Cloud Pak for Data environment

Back to Table of Contents