Wednesday, October 26, 2022

Restore Archived Log into VMware Aria Operations for Logs (formerly known as vRealize Log Insight - vRLI)

Since we cannot keep all logs in the searchable space of the production vRLI system without running into performance and slowness issues, it is always recommended to archive logs based on a retention policy. Once a log is compressed, archived, and moved to an archive system (e.g., NFS), it is no longer searchable in vRLI.

Let us first look at how the logs are archived and stored. Below is a simple High-Level Diagram (HLD) of the log archiving structure in vRealize Log Insight (vRLI). vRLI keeps the logs in buckets with a bucket index, then compresses them and pushes them to NFS.

Here is a sample directory location of an archived log bucket.

nfs-dir/2022/10/26/10/c25b0ace-cd26-47c4-ab87-d8f302ddd7c4
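
To browse the archive from a shell on the vRLI server, the NFS export can be mounted and the bucket directories inspected. This is only a sketch; nfs-server:/nfs-dir and /mnt/nfs-dir are placeholder names for the export and mount point, and the bucket ID is the sample shown above.

# placeholder export and mount point; adjust to your environment
root@vrli82-test [ ~ ]# mount -t nfs nfs-server:/nfs-dir /mnt/nfs-dir
root@vrli82-test [ ~ ]# ls /mnt/nfs-dir/2022/10/26/10/c25b0ace-cd26-47c4-ab87-d8f302ddd7c4

Each bucket directory holds the data.blob file(s) referred to in Step 1 below.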



Due to operational requirements or regulatory needs, retrieving particular old logs from the archive system is an occasional ask. In such cases, we need another vRLI instance (other than the production system) into which we can import the archived logs. Once imported, we can search the data and extract whatever is necessary.


Importing the logs from an archive into vRLI takes just two simple steps.

Step 1: Copy the data.blob files from the NFS server to a directory on the new vRLI server.
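
For example, a minimal sketch of Step 1, copying the sample bucket from the mounted export (placeholder paths as in the sketch above) into /storage/core/import, the directory used in Step 2:

# copy one archived bucket onto the vRLI server's local storage
root@vrli82-test [ ~ ]# mkdir -p /storage/core/import
root@vrli82-test [ ~ ]# cp -r /mnt/nfs-dir/2022/10/26/10/c25b0ace-cd26-47c4-ab87-d8f302ddd7c4 /storage/core/import/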

Step 2: Run the below command to import the logs into vRLI:


root@vrli82-test [ /storage/core/import]# /usr/lib/loginsight/application/bin/loginsight repository import 1f6d813e-25d6-4b3e-96b0-9dfaecbf939e/

The operation is in progress and it may take some time.

Added one-shot task for '/storage/core/import/1f6d813e-25d6-4b3e-96b0-9dfaecbf939e' with filter list:'[]' with parser: default-imported




VMware Official Document



Thursday, October 6, 2022

Update VMware Tanzu Kubernetes Cluster (TKC) Version on vSphere Kubernetes

vSphere 7 onwards provides vSphere with Kubernetes (formerly Project Pacific), which natively supports VMs and containers on vSphere. The Tanzu Kubernetes Grid Service (TKGS) helps run fully compliant and conformant Kubernetes with vSphere. A vSphere Pod runs natively on vSphere, whereas a Tanzu Kubernetes Cluster (TKC) is a cluster managed by the Tanzu Kubernetes Grid Service, with its virtual machine objects deployed inside a vSphere Namespace.

A rolling update of a Tanzu Kubernetes Cluster (TKC), including the Kubernetes version, can be initiated by updating the Tanzu Kubernetes release, the virtual machine class, or the storage class. In this article, I will be updating the Kubernetes version of a cluster on vSphere.

Let's first inspect the environment: we have three separate Kubernetes clusters inside the vSphere namespace validation-test-1. The screenshot below is taken from the vSphere Client UI.


Let's now check the versions and update path of the Tanzu Kubernetes Clusters (TKC). To view this, we need to log in to the SupervisorControlVM. If you are not sure how to log in to the SupervisorControlVM, this article will help you.

[ ~ ]# kubectl get tanzukubernetesclusters -A
or
[ ~ ]# kubectl get tkc -A
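
To see exactly which release a single cluster is running, the same object can also be queried with a jsonpath expression; the field path below matches the v1alpha2 manifest used later in this article:

[ ~ ]# kubectl get tkc validation-tkgs-cluster -n validation-test-1 -o jsonpath='{.spec.distribution.fullVersion}'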

 
Let's now check the list of available Tanzu Kubernetes Releases:
[ ~ ]# kubectl get tanzukubernetesreleases
or
[ ~ ]# kubectl get tkr
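
The release list can be long, so it may help to narrow it down to one minor version or inspect a single release in detail, for example:

[ ~ ]# kubectl get tkr | grep v1.21
[ ~ ]# kubectl get tkr v1.21.6---vmware.1-tkg.1.b3d708a -o yaml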


To make the Kubernetes versions available, we first need to download the releases from here. Then, from the vSphere Client UI, a Content Library needs to be created, into which we upload the release files downloaded from that link. Once the releases have been downloaded from the VMware repository and uploaded via the vSphere Client UI, they will be available under the Content Library section.


To initiate the update, let's edit the Kubernetes cluster manifest file and set the release version details appropriately, as shown below:

[ ~ ]# kubectl edit tanzukubernetesclusters validation-tkgs-cluster -n validation-test-1

Once I have the manifest file in edit mode, I first need to check the API version, as the API version defines how the release names must be specified; check the API reference here. Then, in the spec section, update the Kubernetes release version accordingly. Lastly, in the topology >> tkr >> reference section, update the release version. The latest version supports different Kubernetes release versions on the ControlPlane nodes and Worker nodes; for uniformity, I am using the same version details in all three sections.

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster

spec:
  distribution:
    fullVersion: v1.21.6---vmware.1-tkg.1.b3d708a
    version: ""

  topology:
    controlPlane:
      replicas: 3
      storageClass: tanzu-policy
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1.b3d708a
      vmClass: best-effort-large
    nodePools:
    - name: workers
      replicas: 2
      storageClass: tanzu-policy
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1.b3d708a
      vmClass: best-effort-large


Now, while monitoring the cluster status, it will not be in the ready state while it performs a rolling update of all the ControlPlane nodes and Worker nodes.
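
To follow the rolling update from the SupervisorControlVM, the cluster object can be watched and the virtual machine objects in the namespace listed; this assumes the Supervisor cluster exposes the virtualmachines resource:

[ ~ ]# kubectl get tkc validation-tkgs-cluster -n validation-test-1 -w
[ ~ ]# kubectl get virtualmachines -n validation-test-1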


A series of actions can also be reviewed from the vSphere Client UI.


Cluster release version before update

Cluster release version after update


Once the update is completed in all three clusters



Here is the official documentation link for reference.


Cheers 😎



How to Login to SupervisorControlVM in vSphere Kubernetes

vSphere 7 onwards provides vSphere with Kubernetes (formerly Project Pacific), which natively supports VMs and containers on vSphere. The Tanzu Kubernetes Grid Service (TKGS) helps run fully compliant and conformant Kubernetes with vSphere. A vSphere Pod runs natively on vSphere, whereas a Tanzu Kubernetes Cluster (TKC) is a cluster managed by the Tanzu Kubernetes Grid Service, with its virtual machine objects deployed inside a vSphere Namespace.

For many administrative operations, a system or cloud administrator needs to log in to the SupervisorControlVM. This article describes a step-by-step process to log in to the SupervisorControlVM and perform administrative tasks.


The first step is to log in to vCenter via SSH and execute the decryptK8Pwd.py script under the /usr/lib/vmware-wcp/ directory. This will give us the SupervisorControlVM virtual IP (VIP) and login credential.

Connected to service
    * List APIs: "help api list"
    * List Plugins: "help pi list"
    * Launch BASH: "shell"

Command> shell

Shell access is granted to root

[ ~ ]# cd /usr/lib/vmware-wcp/
[ /usr/lib/vmware-wcp ]# ./decryptK8Pwd.py


Read key from file
Connected to PSQL

Cluster: domain-cxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx
IP: 10.10.10.2
PWD: sadfjhsdifudnnxjzxcnAIJDIDJFKASD-=+ASDJASDNksdjfhkcbbcdcbk
------------------------------------------------------------

Now I can log in to the SupervisorControlVM with the virtual IP and password obtained above.


[ /usr/lib/vmware-wcp ]# ssh 10.10.10.2

FIPS mode initialized

Password:

Last login: Tue Oct  4 05:35:57 2022 from 10.10.100.102

root@420f007a3156d05baab95084b457eb4c [ ~ ]#


I am now successfully logged in to the SupervisorControlVM and can perform administrative tasks.

root@420f007a3156d05baab95084b457eb4c [ ~ ]# kubectl get pods -n kube-system

NAME                           READY   STATUS    RESTARTS   AGE

coredns-855c5b4cfd-ftfx9       1/1     Running   0          3d2h

coredns-855c5b4cfd-lfcjf       1/1     Running   0          3d2h

coredns-855c5b4cfd-vzj42       1/1     Running   0          3d2h


Cheers 😎
