Thursday, March 25, 2021

Send Custom Application Log to Central Syslog/SIEM Server (e.g. graylog/QRadar)

Let's say our application, running on a Windows Server 2016 machine, generates custom logs and we want to send those logs to our central syslog or SIEM server. For log collection and forwarding we will use NXLog Community Edition, an open source log collection tool available at no cost.

Let's assume our syslog server (e.g. graylog or QRadar) is already installed and configured; we only need to collect and forward logs from the Windows server to that destination. To achieve that, let's follow the steps below:


1) Install NXLog Community Edition.
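
If you prefer to install from the command line instead of the GUI wizard, the downloaded MSI package can also be installed silently with msiexec (the file name below is only a placeholder; use the exact file you downloaded):

C:\> msiexec /i nxlog-ce-<version>.msi /qn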

2) Depending on the installation location, update the configuration file nxlog.conf. In my case, the file is located in C:\Program Files (x86)\nxlog\conf
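
For reference, the stock nxlog.conf that ships with the Community Edition typically begins with directives like the following, which define the ROOT path used by the rest of the file (shown here for the default 32-bit install location; adjust if NXLog was installed elsewhere):

define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data
LogFile   %ROOT%\data\nxlog.log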

3) Let's say we want to forward access and error logs. Add the configuration below to the above-mentioned configuration file. Beforehand, make sure the inputs are properly configured and the service is running on the respective destination server. In the case of graylog, a GELF input is configured and running (Picture-1)


# For sending logs to graylog, we'll use the xm_gelf module. Add the following to the configuration file.
# Assume the graylog server IP is 10.10.100.100 and the QRadar server IP is 10.10.100.200.
# The application generates custom logs in a shared folder under the directory \\10.20.30.40\LOG\WebApp\

<Extension _gelf>
    Module      xm_gelf
</Extension>

# send logs to graylog

<Input application_accesslog_graylog>
    Module          im_file
    File            '\\10.20.30.40\LOG\WebApp\Information\\accesslog_*'
    #File           'C:\Program Files (x86)\nxlog\data\\*.log'
    PollInterval    1
    SavePos         True
    ReadFromLast    True
    Recursive       False
    RenameCheck     False
    Exec            $FileName = file_name();
</Input>

<Input application_errorlog_graylog>
    Module          im_file
    File            '\\10.20.30.40\LOG\WebApp\Error\\errorlog_*'
    #File           'C:\Program Files (x86)\nxlog\data\\*.log'
    PollInterval    1
    SavePos         True
    ReadFromLast    True
    Recursive       False
    RenameCheck     False
    Exec            $FileName = file_name();
</Input>

<Output gelf>
    Module      om_tcp
    Host        10.10.100.100
    Port        12201
    OutputType  GELF_TCP
</Output>

<Route graylog>
    Path application_accesslog_graylog, application_errorlog_graylog => gelf
</Route>

# send logs to QRadar

# parse_syslog() used below is provided by the xm_syslog module, so load it as well
<Extension _syslog>
    Module      xm_syslog
</Extension>

<Input application_accesslog_qradar>
    Module          im_file
    File            '\\10.20.30.40\LOG\WebApp\Information\\accesslog_*'
    #File           'C:\Program Files (x86)\nxlog\data\\*.log'
    ReadFromLast    False
    Exec            parse_syslog();
    Exec            log_info("Input Event: " + $raw_event);
</Input>

<Input application_errorlog_qradar>
    Module          im_file
    File            '\\10.20.30.40\LOG\WebApp\Error\\errorlog_*'
    #File           'C:\Program Files (x86)\nxlog\data\\*.log'
    ReadFromLast    False
    Exec            parse_syslog();
    Exec            log_info("Input Event: " + $raw_event);
</Input>

<Output event-out-qradar>
    Module      om_tcp
    Host        10.10.100.200
    Port        514
</Output>

<Route qradar>
    Path application_accesslog_qradar, application_errorlog_qradar => event-out-qradar
</Route>

############################
#End of configuration file
############################

4) Save the configuration file and restart the nxlog service from services.msc
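
Alternatively, the service can be restarted from an elevated command prompt or PowerShell (assuming the default service name, nxlog):

C:\> net stop nxlog
C:\> net start nxlog

or from PowerShell:

PS C:\> Restart-Service -Name nxlog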


After logging in to the respective portal, verify the results.
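
If no events appear, NXLog's own log file is the first place to check for configuration or connection errors; with the default install location it is written to C:\Program Files (x86)\nxlog\data\nxlog.log, e.g. from PowerShell:

PS C:\> Get-Content 'C:\Program Files (x86)\nxlog\data\nxlog.log' -Tail 20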


Picture-2: graylog



Picture-3: QRadar



Cheers :-)



Sunday, March 21, 2021

Create Logical Volume Manager (LVM) in Linux

In Linux, Logical Volume Manager (LVM) is a device mapper framework that provides logical volume management for the Linux kernel. Most modern Linux distributions are LVM-aware to the point of being able to have their root file systems on a logical volume. Heinz Mauelshagen wrote the original LVM code in 1998 (Source: wiki)

Advantages of LVM: (Source: wiki)

  • Volume groups (VGs) can be resized online by absorbing new physical volumes (PVs) or ejecting existing ones.
  • Logical volumes (LVs) can be resized online by concatenating extents onto them or truncating extents from them.
  • LVs can be moved between PVs.
  • Creation of read-only snapshots of logical volumes (LVM1), leveraging a copy on write (CoW) feature, or read/write snapshots (LVM2)
  • VGs can be split or merged in situ as long as no LVs span the split. This can be useful when migrating whole LVs to or from offline storage.
  • LVM objects can be tagged for administrative convenience.
  • VGs and LVs can be made active as the underlying devices become available through use of the lvmetad daemon
LVM Elements: (Source: wiki)


Physical volumes (pv) are regular storage devices. LVM writes a header to the device to allocate it for management.

Volume groups (vg) are storage pools created by combining one or more physical volumes.

Logical volumes (lv) are the sliced portions of a volume group.

Create LVM:
Usually fdisk is used for drives that are smaller than 2TB, and either parted or gdisk is used for disks that are larger than 2TB. Here is a very good article that covers the differences, titled: The Differences Between MBR and GPT.

First, identify the hard disk:
server1:~# fdisk -l

As there are no partitions yet on /dev/sdb, let's create a partition on it. Assuming the hard disk is larger than 2TB, we will use the parted tool:

parted /dev/sdb
(parted) mklabel gpt
(parted) unit TB
(parted) print
(parted) mkpart
Partition name?  []?                 # with a GPT label, parted asks for a partition name (press Enter to leave it blank) instead of primary/extended
File system type?  [ext2]? xfs       # other known filesystems are ext3, ext4
Start? 0
End? 11TB
(parted) print # print newly created disk
(parted) quit

Now check the changes again:
server1:~# fdisk -l

Now create the PV, VG and LV. Once this is done, format the logical volume with the desired filesystem type and mount it under a mount point. For persistence across reboots, update the fstab file accordingly.

server1:~# pvcreate /dev/sdb1        # create physical volume
server1:~# pvs                         # view physical volume
server1:~# vgcreate vg_data /dev/sdb1  # create volume group
server1:~# vgs                         # view volume group
server1:~# vgdisplay -v vg_data        # display volume group details with FREE PE (Physical Extent) number
server1:~# lvcreate -l 2621437 -n lv_data vg_data    # create logical volume using the FREE PE (Physical Extent) count from vgdisplay (or use -l 100%FREE to allocate all free extents)
server1:~# vgdisplay -v vg_data        # view the volume group
server1:~# lvs                         # view the logical volume
server1:~# mkfs.ext4 /dev/vg_data/lv_data # format partition with ext4
OR
server1:~# mkfs.xfs /dev/vg_data/lv_data # format partition with xfs
server1:~# mkdir /data     # create a mount point
server1:~# mount /dev/vg_data/lv_data /data/ # mount newly created partition

server1:~# df -Th

server1:~# vim /etc/fstab

/dev/vg_data/lv_data /data ext4 defaults 0 0 # add fstab entry for a persistent automount after reboot (use xfs in the type field if the LV was formatted with xfs); the first 0 skips the dump backup and the second 0 disables filesystem check (fsck) at boot time
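
The new fstab entry can be tested immediately without a reboot by unmounting the volume and letting mount read it back from fstab:

server1:~# umount /data
server1:~# mount -a        # mounts everything in fstab that is not already mounted
server1:~# df -Th /data    # confirm the volume came back with the expected filesystem type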

Let's assume the hard disk /dev/sdb is < 2TB. In this case, we will use fdisk instead of parted:

server1:~# fdisk -l
server1:~# fdisk /dev/sdb
fdisk> m                              # for help
fdisk> n                              # create new partition
fdisk> Primary/ Extended? Primary
fdisk> Partition number: 1
fdisk> Start: [Enter for default]
fdisk> End: [Enter for default]
fdisk> p                              # print newly created partition
fdisk> t                              # change the partition type
fdisk> L                              # list all known partition types
fdisk> 8e                             # 8e is the partition type code for Linux LVM
fdisk> w                              # write changes and exit
server1:~# fdisk -l

Repeat the same process again, starting from the creation of the PV, VG and LV. Then format the partition and mount it as in the process above.
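
Since LVs can be resized online (one of the LVM advantages listed above), here is a minimal sketch of growing the lv_data volume later, assuming there are still free extents available in vg_data:

server1:~# lvextend -l +100%FREE /dev/vg_data/lv_data  # grow the LV by all remaining free extents in the VG
server1:~# resize2fs /dev/vg_data/lv_data              # then grow an ext4 filesystem online
OR
server1:~# xfs_growfs /data                            # then grow an xfs filesystem online (takes the mount point)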

Cheers :-)

Wednesday, February 17, 2021

NIC Bonding/Teaming in RHEL 6-7/CentOS 6-7

Linux allows administrators to bind multiple network interfaces together into a single channel using the bonding kernel module and a special network interface called a channel bonding interface. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy. Network bonding is a kernel feature and is also known as NIC teaming.

Let’s assume we are configuring bond0 with the interfaces enp25s0f0 and enp25s0f1 (configuration files ifcfg-enp25s0f0 and ifcfg-enp25s0f1).

We need to create a channel bonding interface configuration file in the /etc/sysconfig/network-scripts/ directory called ifcfg-bond<N>, replacing <N> with the number for the interface (such as 0), and specify the bonding parameters in that file. Here we are creating the ifcfg-bond0 file with the following contents:


# cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.20.10.11
NETMASK=255.255.255.0
GATEWAY=10.20.10.1
BONDING_OPTS="mode=4 miimon=200"  # mode 4 = 802.3ad (LACP); miimon = link monitoring interval in milliseconds

or 

# cat ifcfg-bond1
DEVICE=bond1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.20.10.11
NETMASK=255.255.255.0
GATEWAY=10.20.10.1
MTU=9000
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=slow xmit_hash_policy=layer2+3"  # LACP with slow LACPDU rate and a layer2+3 transmit hash policy



Below are the bonding modes:
  • mode=0 (Balance Round Robin)
  • mode=1 (Active backup)
  • mode=2 (Balance XOR)
  • mode=3 (Broadcast)
  • mode=4 (802.3ad)
  • mode=5 (Balance Transmit Load Balance (TLB))
  • mode=6 (Balance Adaptive Load Balance (ALB))

After the channel bonding interface is created, the network interfaces to be bound together must be configured by adding the MASTER= and SLAVE= directives to their configuration files. Below are the interface files:


# cat ifcfg-enp25s0f0 
DEVICE=enp25s0f0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPV6INIT=no
MASTER=bond0 #or bond1
SLAVE=yes



#cat ifcfg-enp25s0f1
DEVICE=enp25s0f1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPV6INIT=no
MASTER=bond0  # or bond1
SLAVE=yes



Now load the bonding driver, bring up the newly created bond0 or bond1 interface, and verify it with the following commands:

# modprobe bonding
# ifconfig bond0 up
# ifconfig
# ip a
# cat /proc/net/bonding/bond0
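
Depending on the release, it may also be necessary to restart the network service so the bond comes up cleanly with the new configuration files:

# service network restart      # RHEL/CentOS 6
# systemctl restart network    # RHEL/CentOS 7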



Cheers :-) 

Restore Archived Log into VMware Aria Operations for Logs (formerly known as vRealize Log Insight - vRLI)

As we cannot keep all logs in searchable space in vRLI production system due to performance and slowness issue, it is always recommended to ...