Wednesday, October 26, 2022

Restore Archived Log into VMware Aria Operations for Logs (formerly known as vRealize Log Insight - vRLI)

Since we cannot keep all logs in searchable storage on a production vRLI system without degrading performance, it is recommended to archive logs according to a retention policy. Once a log is compressed, archived, and moved to an archive system (e.g., NFS), it is no longer searchable in vRLI.

Let us first look at how the logs are archived and stored. Below is a simple High-Level Diagram (HLD) of the log archiving structure in vRealize Log Insight (vRLI): vRLI keeps logs in buckets with a bucket index; each bucket is then compressed and pushed to NFS.

Here is a sample directory location of an archived log bucket.

nfs-dir/2022/10/26/10/c25b0ace-cd26-47c4-ab87-d8f302ddd7c4
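The path encodes the archive time (year/month/day/hour) followed by the bucket UUID. If the NFS export is mounted somewhere, the buckets archived on a given day can be located with a standard find; a minimal sketch, assuming a hypothetical mount point /mnt/nfs-dir:

# List archived bucket directories for October 26, 2022 (mount point is hypothetical)
find /mnt/nfs-dir/2022/10/26 -mindepth 2 -maxdepth 2 -type d
# Each bucket directory holds the compressed data.blob file that vRLI can import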



Due to operational or regulatory requirements, there is an occasional need to retrieve particular old logs from the archive system. In such cases, we need another vRLI instance (separate from the production system) into which we can import the archived logs. Once imported, we can search the data and extract what is necessary.


Importing logs into vRLI from an archive takes just two simple steps.

Step 1: Copy the data.blob files from the NFS server to any directory on the new vRLI server.
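For example, assuming the NFS export is reachable over SSH from the new vRLI appliance (the host name and paths below are hypothetical), a bucket directory can be pulled into a staging directory like this:

root@vrli82-test [ ~ ]# mkdir -p /storage/core/import
root@vrli82-test [ ~ ]# scp -r nfs-host:/nfs-dir/2022/10/26/10/c25b0ace-cd26-47c4-ab87-d8f302ddd7c4 /storage/core/import/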

Step 2: Run the command below to import the logs into vRLI:


root@vrli82-test [ /storage/core/import]# /usr/lib/loginsight/application/bin/loginsight repository import 1f6d813e-25d6-4b3e-96b0-9dfaecbf939e/

The operation is in progress and may take some time; the command confirms the scheduled import task:

Added one-shot task for '/storage/core/import/1f6d813e-25d6-4b3e-96b0-9dfaecbf939e' with filter list:'[]' with parser: default-imported




VMware Official Document



Thursday, October 6, 2022

Update VMware Tanzu Kubernetes Cluster (TKC) Version on vSphere Kubernetes

vSphere 7 onwards provides vSphere with Kubernetes (formerly Project Pacific). Natively it supports VMs and containers on vSphere. Tanzu Kubernetes Grid Service (TKGS) helps to run fully compliant and conformant Kubernetes with vSphere. vSphere Pod runs natively on vSphere whereas Tanzu Kubernetes Cluster (TKC) is a managed cluster by the Tanzu Kubernetes Grid Service, with the virtual machine objects deployed inside of a vSphere Namespace.

A rolling update of a Tanzu Kubernetes Cluster (TKC), including its Kubernetes version, can be initiated by updating the Tanzu Kubernetes release, the virtual machine class, or the storage class. In this article, I will perform a Kubernetes version update of a cluster on vSphere.

Let's first inspect the environment: we have three separate Kubernetes clusters inside the vSphere namespace validation-test-1. The screenshot below is taken from the vSphere Client UI.


Let's now check the versions and update paths of the Tanzu Kubernetes Clusters (TKC). To view this, we need to log in to the SupervisorControlVM. If you are not sure how to log in to the SupervisorControlVM, this article will help you.

[ ~ ]# kubectl get tanzukubernetesclusters -A
or
[ ~ ]# kubectl get tkc -A

 
Let's now check the list of available Tanzu Kubernetes Releases:
[ ~ ]# kubectl get tanzukubernetesreleases
or
[ ~ ]# kubectl get tkr
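A specific release can also be inspected to see its full version details; a quick sketch using the release name that appears later in the manifest:

[ ~ ]# kubectl get tkr v1.21.6---vmware.1-tkg.1.b3d708a -o yaml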


To make the Kubernetes versions available, we first need to download the releases from here. Then, from the vSphere Client UI, a Content Library needs to be created, into which we upload the release files downloaded from the link. Once the releases are downloaded from the VMware repository and uploaded, they will be available under the Content Library section.


To initiate the update, let's edit the Kubernetes cluster manifest and set the release version details appropriately, as shown below:

[ ~ ]# kubectl edit tanzukubernetesclusters validation-tkgs-cluster -n validation-test-1

Once the manifest is open in edit mode, first check the API version; it defines how the release names must be written (check the API reference here). Then, in the spec section, update the Kubernetes release version accordingly. Lastly, update the release version in the topology >> tkr >> reference sections. The latest API version supports different Kubernetes release versions for the control plane nodes and the worker node pools; for uniformity, I am using the same version in all three sections.

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster

spec:
  distribution:
    fullVersion: v1.21.6---vmware.1-tkg.1.b3d708a
    version: ""

  topology:
    controlPlane:
      replicas: 3
      storageClass: tanzu-policy
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1.b3d708a
      vmClass: best-effort-large
    nodePools:
    - name: workers
      replicas: 2
      storageClass: tanzu-policy
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1.b3d708a
      vmClass: best-effort-large
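
As an alternative to interactive editing, the control plane reference can be changed non-interactively with kubectl patch; a minimal sketch, assuming the v1alpha2 schema above (note that a merge patch would replace the whole nodePools list, so worker pools are better updated via kubectl edit):

[ ~ ]# kubectl patch tkc validation-tkgs-cluster -n validation-test-1 --type merge \
  -p '{"spec":{"topology":{"controlPlane":{"tkr":{"reference":{"name":"v1.21.6---vmware.1-tkg.1.b3d708a"}}}}}}'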


Now, while monitoring the cluster status, the cluster will show as not ready while it performs a rolling update of all ControlPlane nodes and Worker nodes.
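The rollout can be followed from the SupervisorControlVM; the -w flag streams status changes, and listing the virtual machines in the namespace shows nodes being replaced one by one (commands assume the supervisor context used earlier):

[ ~ ]# kubectl get tkc validation-tkgs-cluster -n validation-test-1 -w
[ ~ ]# kubectl get virtualmachines -n validation-test-1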


The series of actions can also be reviewed from the vSphere Client UI.


Cluster release version before update

Cluster release version after update


Once the update is completed in all three clusters



Here is the official documentation link for reference.


Cheers 😎



How to Login to SupervisorControlVM in vSphere Kubernetes

 vSphere 7 onwards provides vSphere with Kubernetes (formerly Project Pacific). Natively it supports VMs and containers on vSphere. Tanzu Kubernetes Grid Service (TKGS) helps to run fully compliant and conformant Kubernetes with vSphere. vSphere Pod runs natively on vSphere whereas Tanzu Kubernetes Cluster (TKC) is a managed cluster by the Tanzu Kubernetes Grid Service, with the virtual machine objects deployed inside of a vSphere Namespace.

For many administrative operations, a system or cloud administrator needs to log in to the SupervisorControlVM. This article describes the step-by-step process to log in to the SupervisorControlVM and perform administrative tasks.


The first step is to log in to vCenter via SSH and execute the decryptK8Pwd.py script under the /usr/lib/vmware-wcp/ directory. This gives us the SupervisorControlVM virtual IP (VIP) and login credentials.

Connected to service
    * List APIs: "help api list"
    * List Plugins: "help pi list"
    * Launch BASH: "shell"

Command> shell

Shell access is granted to root

[ ~ ]# cd /usr/lib/vmware-wcp/
[ /usr/lib/vmware-wcp ]# ./decryptK8Pwd.py


Read key from file
Connected to PSQL

Cluster: domain-cxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx
IP: 10.10.10.2
PWD: sadfjhsdifudnnxjzxcnAIJDIDJFKASD-=+ASDJASDNksdjfhkcbbcdcbk
------------------------------------------------------------

Now I can log in to the SupervisorControlVM with the virtual IP and password obtained above.


[ /usr/lib/vmware-wcp ]# ssh 10.10.10.2

FIPS mode initialized

Password:

Last login: Tue Oct  4 05:35:57 2022 from 10.10.100.102

root@420f007a3156d05baab95084b457eb4c [ ~ ]#


I am now successfully logged in to the SupervisorControlVM and can perform administrative tasks.

root@420f007a3156d05baab95084b457eb4c [ ~ ]# kubectl get pods -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-855c5b4cfd-ftfx9       1/1     Running   0          3d2h
coredns-855c5b4cfd-lfcjf       1/1     Running   0          3d2h
coredns-855c5b4cfd-vzj42       1/1     Running   0          3d2h


Cheers 😎

Monday, March 14, 2022

Ansible playbook to install utility tools in Linux

Ansible is one of the most popular open-source software provisioning, configuration management, and application-deployment tools, enabling infrastructure as code. Ansible is mainly used as a DevOps tool and can automate many tasks that are otherwise time-consuming, complex, repetitive, and error-prone.

Let's write a simple Ansible playbook that installs a few of the utility tools a system administrator needs to troubleshoot a system. Say we have to install these tools on 10, 100, or even 1,000 servers; with the help of this tiny but powerful playbook, anyone can install the required tools on a thousand servers in minutes.

Assume we have an Ansible controller server with passwordless access to the destination servers (set up via ssh-copy-id). Additionally, we have an inventory file, e.g. inventory.ini, that lists all destination server IPs or hostnames in a group, like below:

# vi inventory.ini
[blablabla]
10.10.10.20
10.10.10.21
10.10.10.22
host1.abc.com
host2.abc.com

[db-nodes]
10.10.10.50
10.10.10.51

ssh-copy-id installs an SSH key on a server as an authorized key. Its purpose is to provision access without requiring a password for each login. This facilitates automated, passwordless logins and single sign-on using the SSH protocol.
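For example, with the inventory above, distributing a key to one of the hosts and verifying connectivity looks like this (root login is an assumption; adjust the user to your environment):

[root@ansible-controller ~]# ssh-copy-id root@10.10.10.20
[root@ansible-controller ~]# ansible -i inventory.ini blablabla -m ping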

Here is the playbook named required-packages.yaml

---
- hosts: blablabla
  tasks:
  - name: Installing net-tools package
    yum:
      name: net-tools
      state: present
  - name: Installing telnet package
    yum:
      name: telnet
      state: present
  - name: Installing sysstat package (provides iostat)
    yum:
      name: sysstat
      state: present
  - name: Installing dstat package
    yum:
      name: dstat
      state: present
  - name: Installing curl package
    yum:
      name: curl
      state: present
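
As a side note, the same playbook can be written more compactly by giving the yum module the whole package list in a single task; a minimal sketch, behaviorally equivalent to the version above:

---
- hosts: blablabla
  tasks:
  - name: Installing troubleshooting utilities
    yum:
      name:
        - net-tools
        - telnet
        - sysstat
        - dstat
        - curl
      state: present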

Now, from the Ansible controller, execute the command below to run the playbook against the destination server group defined in the inventory file, e.g. inventory.ini:

[root@ansible-controller ~]# ansible-playbook -i inventory.ini required-packages.yaml
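
Tip: to preview the changes without applying anything, the playbook can first be run in Ansible's built-in check mode:

[root@ansible-controller ~]# ansible-playbook -i inventory.ini required-packages.yaml --check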


voilà :-)


I am thankful to Tanzeeb and dedicate this post to him.

Friday, March 11, 2022

Retrieve Admin Credential of Embedded Harbor Registry on Supervisor Cluster - vSphere with Tanzu

Below are a few steps to retrieve the admin username and password of the embedded Harbor registry on the Supervisor Cluster in vSphere with Tanzu.


Step-1: Log in to vCenter via SSH and execute the decryptK8Pwd.py script under the /usr/lib/vmware-wcp/ directory. This gives us the Supervisor Control VM VIP and login credentials.

Connected to service
    * List APIs: "help api list"
    * List Plugins: "help pi list"
    * Launch BASH: "shell"

Command> shell

Shell access is granted to root

[ ~ ]# cd /usr/lib/vmware-wcp/
[ /usr/lib/vmware-wcp ]# ./decryptK8Pwd.py
Read key from file
Connected to PSQL

Cluster: domain-cxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx
IP: 10.10.10.2
PWD: sadfjhsdifudnnxjzxcnAIJDIDJFKASD-=+ASDJASDNksdjfhkcbbcdcbk
------------------------------------------------------------

Step-2: Log in to the Supervisor Control VM with the VIP and password obtained above.

[ /usr/lib/vmware-wcp ]# ssh 10.10.10.2
Password:

Step-3: Retrieve the namespace, associated pods, and secrets related to the Harbor registry.

Retrieve the namespace:
[ ~ ]# kubectl get namespace | grep -i registry 
vmware-system-registry                      Active   100d
vmware-system-registry-xxxxxxxx             Active   100d
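
The registry pods themselves live in the suffixed namespace; listing them is a quick health check before digging into the secrets (namespace suffix redacted as above):

[ ~ ]# kubectl get pods -n vmware-system-registry-xxxxxxxx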


Step-4: Retrieve the secret and its properties related to the Harbor registry.

Retrieve the secrets:
[ ~ ]# kubectl get secrets -n vmware-system-registry-xxxxxxxx
NAME                                  TYPE                                 DATA   AGE
default-token-ghcbt                   kubernetes.io/service-account-token  3      100d
harbor-xxxxxxxx-controller-registry   Opaque                                3      100d
harbor-xxxxxxxx-harbor-core           Opaque                                6      100d
harbor-xxxxxxxx-harbor-database       Opaque                                1      100d
harbor-xxxxxxxx-harbor-jobservice     Opaque                                1      100d
harbor-xxxxxxxx-harbor-registry       Opaque                                2      100d
harbor-xxxxxxxx-ssl                   Opaque                                3      100d
sh.helm.release.v1.harbor-xxxxxxxx.v1 helm.sh/release.v1                    1      100d


Check the secret and its properties:

[ ~ ]# kubectl describe secrets harbor-xxxxxxxx-controller-registry -n vmware-system-registry-xxxxxxxx
Name:         harbor-xxxxxxxx-controller-registry
Namespace:    vmware-system-registry-xxxxxxxx
Labels:       <none>
Annotations:  <none>
Type:  Opaque
Data
====
harborAdminPassword:     24 bytes
harborAdminUsername:     8 bytes
harborPostgresPassword:  24 bytes


Step-5: Retrieve the username using the harborAdminUsername property from the secret above. The stored value is base64-encoded twice, so we decode it twice:

[ ~ ]# kubectl get secrets harbor-xxxxxxxx-controller-registry -nvmware-system-registry-xxxxxxxx --template={{.data.harborAdminUsername}} | base64 -d | base64 -d
admin


Step-6: Retrieve the password using the harborAdminPassword property from the secret above. Again, the stored value is base64-encoded twice, so we decode it twice:

[ ~ ]# kubectl get secrets harbor-xxxxxxxx-controller-registry -nvmware-system-registry-xxxxxxxx --template={{.data.harborAdminPassword}} | base64 -d | base64 -d
da7SMxx&v#ZZR@w2tPP


Step-7: Verify login using the username and password obtained in Step-5 and Step-6.
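A simple way to verify the credentials is a docker login against the embedded Harbor address; a minimal sketch, where <harbor-registry-ip> is a placeholder for your registry VIP:

# docker login <harbor-registry-ip> --username admin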



voilà :-) 

