Supported Target Versions
Pacemaker: Pacemaker 1.1.23-1.el7_9.1
Non-Pacemaker: RGManager 6.5 (Linux nodes: redhat-6.2.0)
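
To confirm that a target node runs a supported cluster stack, you can query the installed packages; a minimal check, assuming RPM-based Red Hat hosts as listed above:

    # Pacemaker stack (e.g., RHEL/CentOS 7): report installed cluster package versions
    rpm -q pacemaker corosync pcs

    # RGManager stack (e.g., RHEL/CentOS 6): report package and cluster versions
    rpm -q rgmanager cman
    cman_tool version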

Application Version and Upgrade Details

Bug fixes and enhancements by application version:

2.0.0
  • Added support for native-type-wise discovery.
  • Added support for the following metrics:
    • linux_cluster_fence_status
    • linux_cluster_fence_failover_status
    • linux_cluster_service_failover_status
    • linux_cluster_failed_actions_count
1.0.9
  • Support added for metric label changes.
1.0.8
  • Fixed a discovery response parsing issue.
Earlier version updates:

1.0.7
  • Full discovery support added.
1.0.6
  • Fixed monitoring parsing issues for service status metrics.
  • Fixed the logic used to make the SSH connection for RGManager Linux clusters.
1.0.5
  • Fixed monitoring parsing issues.
1.0.4
  • Macro support for alert subject and description customization.
  • Support added to get the latest metric snapshot data (from Gateway v14.0.0).
  • Added support for template-level component filters.
1.0.3
  • Added support to alert on the gateway when initial discovery fails with connectivity/authorization issues.
1.0.2
  • Fixed the metric graphs issue.
1.0.1
  • Initial SDK 2.0 app discovery and monitoring implementation.

Introduction

A Linux cluster is a group of Linux computers or nodes and storage devices that work together and are managed as a single system. A traditional clustering configuration has two nodes connected to shared storage (typically a SAN). With Linux clustering, an application runs on one node, and clustering software is used to monitor its operation.

A Linux cluster provides faster processing speed, larger storage capacity, better data integrity, greater reliability and wider availability of resources.

Failover

Failover is the process by which a standby system takes over whenever a primary system, network, or database fails or is abnormally terminated, allowing operations to resume.

Failover Cluster

A failover cluster is a set of servers that work together to provide High Availability (HA) or Continuous Availability (CA). As mentioned earlier, if one of the servers goes down, another node in the cluster can take over its workload with minimal or no downtime. Some failover clusters use physical servers, whereas others involve virtual machines (VMs).

CA clusters allow users to keep accessing and working on services and applications without any timeouts (100% availability) in case of a server failure. HA clusters, on the other hand, may cause a short hiatus in service, but the system recovers automatically with minimal downtime and no data loss.

A cluster is a set of two or more nodes (servers) that transmit data for processing through cables or a dedicated secure network. Other clustering technologies also make load balancing, shared storage, and concurrent/parallel processing possible.

Linux Failover Cluster Monitoring

In a typical two-node failover cluster, Node 1 and Node 2 have common shared storage. Whenever one node goes down, the other picks up its workload. The two nodes share one virtual IP that all clients connect to.

Let us take a look at the two types of failover clusters: High Availability Failover Clusters and Continuous Availability Failover Clusters.

High Availability Failover Clusters

In a High Availability Failover Cluster, a set of servers shares data and resources in the system. All the nodes have access to the shared storage.

High Availability Clusters also include a monitoring connection that servers use to check the “heartbeat” or health of the other servers. At any time, at least one of the nodes in a cluster is active, while at least one is passive.

Continuous Availability Failover Clusters

This system consists of multiple systems that share a single copy of the operating system. Software commands issued by one system are also executed on the other systems. In the event of a failover, the user can still check critical data in a transaction.

There are several failover cluster types, such as Windows Server Failover Clusters (WSFC), VMware failover clusters, SQL Server failover clusters, and Red Hat Linux failover clusters.

Prerequisites

  • OpsRamp Classic Gateway 14.0.0 and above.
  • OpsRamp NextGen Gateway 14.0.0 and above.
    Note: OpsRamp recommends using the latest Gateway version for full coverage of recent bug fixes, enhancements, etc.
  • Prerequisites for Pacemaker
    • Credentials: root, or a non-root user that is a member of the “haclient” group.
    • Cluster management: Pacemaker
    • Accessibility: All nodes within a cluster should be accessible by a single credential set.
    • For non-root users: add the “pcs” command path to the “~/.bashrc” file on all cluster nodes (see the sketch after this list).
      Ex: add export PATH=$PATH:/usr/sbin as a new line in the ~/.bashrc file.
  • Prerequisites for RGManager (non-pacemaker)
    • Credentials: both root and non-root users are supported.

    • Cluster management: RGManager

    • Accessibility: All the nodes within a cluster should be accessible by a single credential set.

    • For non-root users: add the following commands to the “/etc/sudoers” file so that non-root users are allowed to execute them (see the sketch after this list):

      /usr/sbin/cman_tool nodes, /usr/sbin/cman_tool status, /usr/sbin/clustat -l, /sbin/service cman status, /sbin/service rgmanager status, /sbin/service corosync status, /usr/sbin/dmidecode -s system-uuid, /bin/cat /sys/class/dmi/id/product_serial

      Note: A Linux cluster is usually configured with a virtual IP, commonly called the cluster virtual IP. Use this IP when adding configurations during installation of the integration.

    • If a cluster virtual IP is not configured, provide the IP address of a reachable node associated with the cluster.
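
For illustration, a minimal non-root setup might look like the following sketch. The monitoring username "opsramp" is an assumption; substitute the user tied to your credential set.

    # Pacemaker nodes: make the pcs binary resolvable for the non-root user
    # (append as a new line to ~/.bashrc on every cluster node)
    export PATH=$PATH:/usr/sbin

    # RGManager nodes: allow the non-root user to run the required commands without
    # a password (edit /etc/sudoers with visudo; "opsramp" is a hypothetical user)
    opsramp ALL=(ALL) NOPASSWD: /usr/sbin/cman_tool nodes, /usr/sbin/cman_tool status, /usr/sbin/clustat -l, /sbin/service cman status, /sbin/service rgmanager status, /sbin/service corosync status, /usr/sbin/dmidecode -s system-uuid, /bin/cat /sys/class/dmi/id/product_serial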

Hierarchy of Linux Cluster

Cluster
  - Nodes

Application Migration

  1. As a prerequisite, check the gateway version: Classic Gateway 12.0.1 and above.
    Notes:

    • These steps are required only when you migrate from SDK 1.0 to SDK 2.0.
    • For a first-time installation, the steps below are not required.
  2. Disable all configurations associated with the SDK 1.0 adaptor integration application.

  3. Install the SDK 2.0 application and add the configuration to it.
    Note: Refer to the Configure and Install the Linux Failover Cluster Integration and View the Linux Failover Cluster Details sections of this document.

  4. Once all discoveries are completed with the SDK 2.0 application, follow one of the approaches below (a curl sketch follows this list).

    • Directly uninstall the SDK 1.0 adaptor application through the uninstall API, with skipDeleteResources=true in the POST request.

      End-Point: https://{{host}}/api/v2/tenants/{tenantId}/integrations/installed/{installedIntgId}

      Request Body:
          {
              "uninstallReason": "Test",
              "skipDeleteResources": true
          }


      (OR)

    • Delete the configurations one by one through the Delete adaptor config API, with the request parameter skipDeleteResources=true.

      End-Point: https://{{host}}/api/v2/tenants/{tenantId}/integrations/installed/config/{configId}?skipDeleteResources=true

    • Finally, uninstall the adaptor application through the API, with skipDeleteResources=true in the POST request.

      End-Point: https://{{host}}/api/v2/tenants/{tenantId}/integrations/installed/{installedIntgId}

      Request Body:
          {
              "uninstallReason": "Test",
              "skipDeleteResources": true
          }
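
As an illustration, either call can be issued with curl; a sketch for the uninstall request, where the host, tenant ID, installed integration ID, and authorization token are placeholders you must substitute:

    # Uninstall the SDK 1.0 adaptor while keeping its discovered resources
    curl -X POST "https://{host}/api/v2/tenants/{tenantId}/integrations/installed/{installedIntgId}" \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"uninstallReason": "Test", "skipDeleteResources": true}'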

Supported Metrics


Resource Type: Cluster

Native Type: Linux Cluster

Metric Name | Display Name | Unit | Application Version | Pacemaker / RGManager | Description
linux_cluster_nodes_status | Cluster Node Status | | 1.0.0 | Both | Status of each node present in the Linux cluster: 0 = offline, 1 = online, 2 = standby
linux_cluster_system_OS_Uptime | System Uptime | m | 1.0.0 | Both | Time elapsed since the last reboot, in minutes
linux_cluster_system_cpu_Load | System CPU Load | | 1.0.0 | Both | Monitors the system's last 1 min, 5 min, and 15 min load averages, reported per CPU core
linux_cluster_system_cpu_Utilization | System CPU Utilization | % | 1.0.0 | Both | Percentage of elapsed time the processor spends executing non-idle threads (this does not include CPU steal time)
linux_cluster_system_memory_Usedspace | System Memory Used Space | GB | 1.0.0 | Both | Physical and virtual memory usage in GB
linux_cluster_system_memory_Utilization | System Memory Utilization | % | 1.0.0 | Both | Physical and virtual memory usage as a percentage
linux_cluster_system_cpu_Usage_Stats | System CPU Usage Statistics | % | 1.0.0 | Both | Monitors the percentage of CPU time spent in various program spaces: User = time running user-space processes; System = time running the kernel; IOWait = time idle while waiting for an I/O operation to complete; Idle = time spent idle; Steal = time a virtual CPU waits for the hypervisor to service another virtual CPU on a different virtual machine; plus kernel time and total time
linux_cluster_system_disk_Usedspace | System Disk UsedSpace | GB | 1.0.0 | Both | Monitors disk used space in GB
linux_cluster_system_disk_Utilization | System Disk Utilization | % | 1.0.0 | Both | Monitors disk utilization as a percentage
linux_cluster_system_disk_Inode_Utilization | System Disk Inode Utilization | % | 1.0.0 | Both | Collects disk inode metrics for all physical disks in a server
linux_cluster_system_disk_freespace | System FreeDisk Usage | GB | 1.0.0 | Both | Monitors free disk space in GB
linux_cluster_system_network_interface_Traffic_In | System Network In Traffic | Kbps | 1.0.0 | Both | Monitors inbound traffic on each interface for Linux devices
linux_cluster_system_network_interface_Traffic_Out | System Network Out Traffic | Kbps | 1.0.0 | Both | Monitors outbound traffic on each interface for Linux devices
linux_cluster_system_network_interface_Packets_In | System Network In packets | packets/sec | 1.0.0 | Both | Monitors inbound packets on each interface for Linux devices
linux_cluster_system_network_interface_Packets_Out | System Network out packets | packets/sec | 1.0.0 | Both | Monitors outbound packets on each interface for Linux devices
linux_cluster_system_network_interface_Errors_In | System Network In Errors | errors/sec | 1.0.0 | Both | Monitors inbound errors on each interface for Linux devices
linux_cluster_system_network_interface_Errors_Out | System Network Out Errors | errors/sec | 1.0.0 | Both | Monitors outbound errors on each interface for Linux devices
linux_cluster_system_network_interface_discards_In | System Network In discards | packets/sec | 1.0.0 | Both | Monitors inbound discards on each interface for Linux devices
linux_cluster_system_network_interface_discards_Out | System Network Out discards | packets/sec | 1.0.0 | Both | Monitors outbound discards on each interface for Linux devices
linux_cluster_service_status_Pacemaker | Pacemaker Service Status | | 1.0.0 | Pacemaker | Status of the Pacemaker high-availability cluster manager: 0 = failed, 1 = active, 2 = unknown
linux_cluster_service_status_Corosync | Corosync Service Status | | 1.0.0 | Pacemaker | Status of the Corosync Cluster Engine, a group communication system: 0 = failed, 1 = active, 2 = unknown
linux_cluster_service_status_PCSD | PCSD Service Status | | 1.0.0 | Pacemaker | Status of the PCS GUI and remote configuration interface: 0 = failed, 1 = active, 2 = unknown
linux_cluster_Online_Nodes_Count | Online Nodes Count | count | 1.0.0 | Both | Count of online cluster nodes
linux_cluster_Failover_Status | Cluster FailOver Status | | 1.0.0 | Both | Cluster failover status: 0 = the cluster is running on the same node, 1 = a failover has occurred
linux_cluster_node_Health | Cluster Node Health Percentage | % | 1.0.0 | Both | Percentage of online Linux nodes available within the cluster
linux_cluster_service_Status | Linux Cluster Service Status | | 1.0.0 | Both | Cluster services status: 0 = disabled, 1 = blocked, 2 = failed, 3 = stopped, 4 = recovering, 5 = stopping, 6 = starting, 7 = started, 8 = unknown
linux_cluster_service_status_rgmanager | RGManager Service Status | | 1.0.0 | RGManager | RGManager service status: 0 = failed, 1 = active, 2 = unknown
linux_cluster_service_status_CMAN | CMAN Service Status | | 1.0.0 | RGManager | CMAN service status: 0 = failed, 1 = active, 2 = unknown
linux_cluster_fence_status | Linux Cluster Fence Status | | 2.0.0 | Pacemaker | Cluster fence status: 0 = disabled, 1 = blocked, 2 = failed, 3 = stopped, 4 = recovering, 5 = stopping, 6 = starting, 7 = started, 8 = unknown
linux_cluster_fence_failover_status | Cluster Fence FailOver Status | | 2.0.0 | Pacemaker | Cluster fence failover status: 0 = the fence is running on the same node, 1 = a failover has occurred
linux_cluster_service_failover_status | Cluster Service FailOver Status | | 2.0.0 | Pacemaker | Cluster service failover status: 0 = the resource group is running on the same node, 1 = a failover has occurred
linux_cluster_failed_actions_count | Cluster Failed Resource Actions Count | count | 2.0.0 | Pacemaker | Count of the cluster's failed resource actions
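
For reference, values comparable to the node, service, and failed-action metrics above can be inspected manually on a cluster node with the standard tooling. This is a sketch for cross-checking only; the integration's exact collection commands are not documented here.

    # Pacemaker: one-shot view of nodes, resources, and failed resource actions
    crm_mon -1
    pcs status

    # RGManager: cluster membership and service states
    clustat -l
    cman_tool status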

Resource Type: Server

Native Type: Linux Cluster Node

Metric Name | Display Name | Unit | Application Version | Pacemaker / RGManager | Description
linux_node_system_OS_Uptime | System Uptime | m | 1.0.0 | Both | Time elapsed since the last reboot, in minutes
linux_node_system_cpu_Load | System CPU Load | | 1.0.0 | Both | Monitors the system's last 1 min, 5 min, and 15 min load averages, reported per CPU core
linux_node_system_cpu_Utilization | System CPU Utilization | % | 1.0.0 | Both | Percentage of elapsed time the processor spends executing non-idle threads (this does not include CPU steal time)
linux_node_system_memory_Usedspace | System Memory Used Space | GB | 1.0.0 | Both | Physical and virtual memory usage in GB
linux_node_system_memory_Utilization | System Memory Utilization | % | 1.0.0 | Both | Physical and virtual memory usage as a percentage
linux_node_system_cpu_Usage_Stats | System CPU Usage Statistics | % | 1.0.0 | Both | Monitors the percentage of CPU time spent in various program spaces: User = time running user-space processes; System = time running the kernel; IOWait = time idle while waiting for an I/O operation to complete; Idle = time spent idle; Steal = time a virtual CPU waits for the hypervisor to service another virtual CPU on a different virtual machine; plus kernel time and total time
linux_node_system_disk_Usedspace | System Disk UsedSpace | GB | 1.0.0 | Both | Monitors disk used space in GB
linux_node_system_disk_Utilization | System Disk Utilization | % | 1.0.0 | Both | Monitors disk utilization as a percentage
linux_node_system_disk_Inode_Utilization | System Disk Inode Utilization | % | 1.0.0 | Both | Collects disk inode metrics for all physical disks in a server
linux_node_system_disk_freespace | System FreeDisk Usage | GB | 1.0.0 | Both | Monitors free disk space in GB
linux_node_system_network_interface_Traffic_In | System Network In Traffic | Kbps | 1.0.0 | Both | Monitors inbound traffic on each interface for Linux devices
linux_node_system_network_interface_Traffic_Out | System Network Out Traffic | Kbps | 1.0.0 | Both | Monitors outbound traffic on each interface for Linux devices
linux_node_system_network_interface_Packets_In | System Network In packets | packets/sec | 1.0.0 | Both | Monitors inbound packets on each interface for Linux devices
linux_node_system_network_interface_Packets_Out | System Network out packets | packets/sec | 1.0.0 | Both | Monitors outbound packets on each interface for Linux devices
linux_node_system_network_interface_Errors_In | System Network In Errors | errors/sec | 1.0.0 | Both | Monitors inbound errors on each interface for Linux devices
linux_node_system_network_interface_Errors_Out | System Network Out Errors | errors/sec | 1.0.0 | Both | Monitors outbound errors on each interface for Linux devices
linux_node_system_network_interface_discards_In | System Network In discards | packets/sec | 1.0.0 | Both | Monitors inbound discards on each interface for Linux devices
linux_node_system_network_interface_discards_Out | System Network Out discards | packets/sec | 1.0.0 | Both | Monitors outbound discards on each interface for Linux devices

Default Monitoring Configurations

The Linux Failover Cluster application has default Global Device Management Policies, Global Templates, Global Monitors, and Global Metrics in OpsRamp. Users can customize these default monitoring configurations for their business use cases by cloning the respective Global Templates and Global Device Management Policies. OpsRamp recommends doing this before installing the application to avoid noisy alerts and data.

  1. Default Global Device Management Policies

    OpsRamp has a Global Device Management Policy for each Native Type of Linux Failover Cluster. You can find these Device Management Policies at Setup > Resources > Device Management Policies; search with the suggested names in the global scope. Each Device Management Policy follows the naming convention below:

    {appName nativeType - version}

    Ex: linux-failover-cluster Linux Cluster - 1 (i.e, appName = linux-failover-cluster, nativeType = Linux Cluster, version = 1)

  2. Default Global Templates

    OpsRamp has a Global Template for each Native Type of LINUX-FAILOVER-CLUSTER. You can find these templates at Setup > Monitoring > Templates; search with the suggested names in the global scope. Each template follows the naming convention below:

    {appName nativeType 'Template' - version}

    Ex: linux-failover-cluster Linux Cluster Template - 1 (i.e, appName = linux-failover-cluster, nativeType = Linux Cluster, version = 1)

  3. Default Global Monitors

    OpsRamp has a Global Monitor for each Native Type that has monitoring support. You can find these monitors at Setup > Monitoring > Monitors; search with the suggested names in the global scope. Each monitor follows the naming convention below:

    {monitorKey appName nativeType - version}

    Example: Linux Failover Cluster Monitor linux-failover-cluster Linux Cluster 1 (i.e, monitorKey = Linux Failover Cluster Monitor, appName = linux-failover-cluster, nativeType = Linux Cluster, version = 1)

Configure and Install the Linux Failover Cluster Integration

  1. From All Clients, select a client.
  2. Navigate to Setup > Account.
  3. Select the Integrations and Apps tab.
  4. The Installed Integrations page is displayed, showing all installed applications. Note: If there are no installed applications, you are navigated to the Available Integrations and Apps page.
  5. Click + ADD on the Installed Integrations page. The Available Integrations and Apps page displays all the available applications along with the newly created application with the version.
  6. Search for the application using the search option available. Alternatively, use the All Categories option to search.
  7. Click ADD in the Linux Failover Cluster application.
  8. In the Configurations page, click + ADD. The Add Configuration page appears.
  9. Enter the following BASIC INFORMATION:
Object Name | Description
Name | Enter the name for the integration.
IP Address/Host Name | IP address or host name of the target.
Credentials | Select the credentials from the drop-down list. Note: Click + Add to create a credential.
Cluster Type | Select Pacemaker or RGManager from the Cluster Type drop-down list.

Note:

  • The IP Address/Host Name should be accessible from the gateway.
  • Select App Failure Notifications to be notified in case of an application failure, that is, a Connectivity Exception or an Authentication Exception.
  10. Select the following Custom Attribute:
Functionality | Description
Custom Attribute | Select the custom attribute from the drop-down list.
Value | Select the value from the drop-down list.

Note: The custom attribute that you add here will be assigned to all the resources created by the integration. You can add a maximum of five custom attributes (key and value pairs).

  11. In the RESOURCE TYPE section, select:
    • ALL: All the existing and future resources will be discovered.
    • SELECT: You can select one or multiple resources to be discovered.
  12. In the DISCOVERY SCHEDULE section, select Recurrence Pattern to add one of the following patterns:
    • Minutes
    • Hourly
    • Daily
    • Weekly
    • Monthly
  13. Click ADD.

The configuration is saved and displayed on the configurations page.
Note: From the same page, you may Edit and Remove the created configuration.

  14. Under ADVANCED SETTINGS, select the Bypass Resource Reconciliation option if you wish to bypass resource reconciliation when the same resources are discovered by multiple applications.

    Note: If two different applications provide identical discovery attributes, two separate resources are generated with the respective attributes from the individual discoveries.

  15. Click NEXT.

  16. (Optional) Click +ADD to create a new collector by providing a name, or use the pre-populated name.

  17. Select an existing registered profile.
  18. Click FINISH.

The application is installed and displayed on the INSTALLED INTEGRATION page. Use the search field to find the installed integration.

Modify the Configuration

View the Linux Failover Cluster Details

To view the discovered Linux Failover Cluster resources:

  1. Navigate to Infrastructure > Search > OS > Linux Failover Cluster.
  2. The LINUX FAILOVER CLUSTER page is displayed; select the application name.
  3. The RESOURCE page appears on the right.
  4. Click the ellipsis (...) at the top right and select View details.

View resource attributes

The discovered resources are displayed under Attributes. This page shows basic information about the resources, such as Resource Type, Native Resource Type, Resource Name, and IP Address.


View resource metrics

To confirm Linux Cluster monitoring, review the following:

  • Metric graphs: A graph is plotted for each metric that is enabled in the configuration.
  • Alerts: Alerts are generated for metrics according to the thresholds configured for the integration.

Supported Alert Custom Macros

Customize the alert subject and description with the macros below; alerts are then generated based on that customization.
The supported macro keys are:

  • ${resource.name}
  • ${resource.ip}
  • ${resource.mac}
  • ${resource.aliasname}
  • ${resource.os}
  • ${resource.type}
  • ${resource.dnsname}
  • ${resource.alternateip}
  • ${resource.make}
  • ${resource.model}
  • ${resource.serialnumber}
  • ${resource.systemId}
  • ${Custom Attributes in the resource}
  • ${parent.resource.name}
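
For example, a customized alert subject and description might look like the following sketch; the macros are replaced with the resource's actual values when the alert is generated:

    Subject: Threshold breached on ${resource.name} (${resource.ip})
    Description: Resource ${resource.name} of type ${resource.type}, running ${resource.os},
                 reported a metric threshold breach. Serial number: ${resource.serialnumber}.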

Risks, Limitations & Assumptions

  • The application can handle Critical/Recovery failure notifications for the following two cases when the user enables App Failure Notifications in the configuration:
    • Connectivity Exception
    • Authentication Exception
  • The application sends duplicate/repeat failure alert notifications every 6 hours.
  • Macro replacement is supported for threshold breach alerts (i.e., customization of the threshold breach alert's subject and description).
  • The application cannot control monitoring pause/resume actions based on the above alerts.
  • Metrics can be used to monitor Linux Failover Cluster resources and can generate alerts based on threshold values.
  • There is no support for showing the activity log and applied time.
  • This application supports both Classic Gateway and NextGen Gateway.
  • For the metric linux_cluster_failed_actions_count, an alert is generated if the failed actions count is greater than or equal to 1. If an alert has already been raised on a component, a repeat alert is generated on that component only after 6 hours, and only if the threshold breach still exists. The created alerts are not healed by the application.
  • For the metrics linux_cluster_fence_failover_status and linux_cluster_service_failover_status, an alert is generated when the node on which the service runs changes. The created alerts are healed by the application in a subsequent poll if the service is again running on the same node. Also, the metric graphs will show discontinuities because the component name is set as service_name:node.
  • The minimum supported version for the option to get the latest snapshot metric is NextGen Gateway 14.0.0.