Introduction

MicroK8s is a streamlined, lightweight Kubernetes distribution designed for minimal operational overhead. It simplifies the deployment, scaling, and management of containerized applications, and offers the essential features of Kubernetes in a compact package suitable for environments ranging from single-node setups to high-availability production clusters.

Note: MicroK8s is supported from Kubernetes agent version 17.0.0 onward.

You can monitor the following Kubernetes components using MicroK8s (a quick way to check them follows the list):

  • API Server (using Kubelite)
  • Kube Controller (using Kubelite)
  • Kube Scheduler (using Kubelite)
  • KubeDNS / CoreDNS
  • Kube State (not installed by default in the MicroK8s cluster)
  • Metrics Server (not installed by default in the MicroK8s cluster)
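
If you want to see which of these components are currently running in your cluster, you can list the kube-system pods and check the enabled MicroK8s addons. These are standard MicroK8s commands, not part of the integration itself; Kube State and Metrics Server appear only after they are installed (see the Addons section below).

    # List control-plane and addon pods in the kube-system namespace
    microk8s kubectl get pods -n kube-system

    # Show MicroK8s status, including which addons (for example, metrics-server) are enabled
    microk8s status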

Configure MicroK8s

Step 1: Install and Configure the MicroK8s Integration

  1. From All Clients, select a client.
  2. Navigate to Setup > Integrations.
  3. The Installed Integrations page displays all the installed applications. If no applications are installed, you are navigated to the Available Integrations and Apps page.
  4. Click + ADD on the Installed Integrations page. The Available Integrations page displays all the available applications, including any newly created applications, along with their versions. Note: You can search for the application using the search option, or filter by category using the All Categories option.
  5. Click ADD on the MicroK8s tile.
  6. In the Configurations page, click + ADD.
  7. On the configuration page, enter the following details:
    • Name: Name for the integration.
    • Deployment type: On-prem or Cloud (AWS, GKE, and AKS)
    • Container Engine: ContainerD (the default container engine for MicroK8s)
  8. Click Next.

Step 2: Deploy Agent on MicroK8s

  1. Copy the YAML content and paste it into a new file on the Kubernetes control plane node (example file name: opsramp-agent.yaml).

  2. Execute the following command on the Kubernetes control plane node (a verification check follows the command).

    microk8s kubectl apply -f opsramp-agent.yaml
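
After applying the manifest, you can confirm that the agent pods are running. The namespace below is a placeholder, not a value from this guide; use the namespace defined in your opsramp-agent.yaml file.

    # Verify the agent pods reached the Running state
    # Replace <agent-namespace> with the namespace defined in opsramp-agent.yaml
    microk8s kubectl get pods -n <agent-namespace>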

Environment Variables in an Agent YAML file

  • You can adjust the following environment variable to change the log level of the agent (a placement sketch follows this list):

      - name: LOG_LEVEL
        value: "warn"
      

  • Worker Agent: This deployment is responsible for collecting system performance metrics, container metrics (ContainerD), Kubelet metrics, and all the container application metrics.

  • Master Agent: This deployment is responsible for collecting microk8s-apiserver, microk8s-controller, microk8s-scheduler, microk8s-kube-state, microk8s-metrics-server, and microk8s-coreDNS / kubeDNS metrics.
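
As a sketch of where environment variables such as LOG_LEVEL (and the proxy settings described below) are placed: they belong in the env list of the agent container spec, in both the worker and master workloads. The container and image names below are placeholders; keep the names from the YAML file you copied in Step 2.

    # Illustrative placement only - the names and image are placeholders, not values from the shipped YAML
    containers:
      - name: opsramp-agent          # keep the container name from your copied YAML
        image: <agent-image>
        env:
          - name: LOG_LEVEL
            value: "warn"            # adjust the log level here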


Connecting Agents using a proxy

  • Use the following environment variables to connect the agent through a proxy (a YAML sketch follows this list):

      CONN_MODE=proxy
      PROXY_SERVER=<ProxyServerIP>
      PROXY_PORT=<ProxyPort>

  • If the proxy server requires authentication, configure the credentials using the following environment variables:

      PROXY_USER=<User>
      PROXY_PASSWORD=<Password>
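
In the agent deployment YAML, these proxy settings are defined as container environment variables. The following is a minimal sketch of how the entries could look in the env section; the bracketed values are placeholders for your environment.

    # Proxy settings as container environment variables (values are placeholders)
    env:
      - name: CONN_MODE
        value: "proxy"
      - name: PROXY_SERVER
        value: "<ProxyServerIP>"
      - name: PROXY_PORT
        value: "<ProxyPort>"
      # Include the following only if the proxy server requires authentication
      - name: PROXY_USER
        value: "<User>"
      - name: PROXY_PASSWORD
        value: "<Password>"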

Step 3: Apply Monitoring Templates and Create Device Management Policy

  1. Apply the appropriate Kubernetes template on the Integration resource (cluster resource) created after the deployment of the agent YAML file.
    Apply the above Kubernetes components template only on the Integration resource, not on the nodes.
  2. Apply the Container template and Kubelet template on each node created under the Integration resource in the application. Alternatively, you can create a Device Management Policy to perform steps 1 and 2 above.

Step 4: (Optional) Configure the Docker and Kubernetes Event

Configure Docker/Container Events

The agent supports the following three Docker events:

  • Start
  • Kill
  • Oom (Out of Memory)

By default, Docker events are disabled in the agent deployment YAML file. To enable the Docker events, change the DOCKER_EVENTS environment variable to TRUE.

Disabled by Default

- name: DOCKER_EVENTS
  value: "FALSE"

Enabled

- name: DOCKER_EVENTS
  value: "TRUE"

For agent versions 8.0.1-1 and above, the Docker events are sent as monitoring alerts. For older agent versions, the Docker events are sent as maintenance alerts to the OpsRamp alert browser.

Configure Kubernetes Events

The OpsRamp agent can forward the Kubernetes events that are generated in the cluster.

The events are categorized into the following two types:

  • Node
  • Other

By default, when event forwarding is enabled, the agent forwards all the supported Kubernetes events without requiring any configuration changes.

  • To forward only selected events, edit the kube events config map in the YAML file.
  • To stop forwarding an event, remove it from the agent deployment YAML file.

To add a new event, add the event (Kube Event Reason) under the other category, as sketched below. If the reason matches the actual Kubernetes event reason, the events are forwarded as alerts.
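
The exact name and layout of the kube events config map depend on the agent YAML you downloaded, so the following is only a hedged sketch of the node/other structure described above. FailedScheduling is taken from the default list below; BackOff is an example of adding a new Kubernetes event reason under the other category.

    # Sketch of a kube events configuration (structure illustrative, not the shipped config map)
    node:
      - NodeReady
      - NodeNotReady
    other:
      - FailedScheduling      # must match the Kubernetes event reason exactly
      - BackOff               # example of a newly added event reason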

By default, Kubernetes events are disabled in the agent deployment YAML file. To enable them, change the K8S_EVENTS environment variable to TRUE.

Disabled by Default

- name: K8S_EVENTS
  value: "FALSE"

Enabled

- name: K8S_EVENTS
  value: "TRUE"

For agent versions 8.0.1-1 and above, the Kubernetes events are sent as monitoring alerts. For older agent versions, the Kubernetes events are sent as maintenance alerts to the OpsRamp alert browser.

By default, all events are converted to warning alerts. To forward an event with a different alert state, append the alert state (Critical/Warning) to the event name, as shown below.

node:
  - RegisteredNode:Critical
  - RemovingNode:Warning
  - DeletingNode
  - TerminatingEvictedPod
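
The same suffix convention applies to reasons listed under the other category. For example, using reasons from the default list below:

    other:
      - FailedScheduling:Critical
      - FailedCreate:Warning
      - SuccessfulCreate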
  

Events supported by default

node
  • RegisteredNode
  • RemovingNode
  • DeletingNode
  • TerminatingEvictedPod
  • NodeReady
  • NodeNotReady
  • NodeSchedulable
  • NodeNotSchedulable
  • CIDRNotAvailable
  • CIDRAssignmentFailed
  • Starting
  • KubeletSetupFailed
  • FailedMount
  • NodeSelectorMismatching
  • InsufficientFreeCPU
  • InsufficientFreeMemory
  • OutOfDisk
  • HostNetworkNotSupported
  • NilShaper
  • Rebooted
  • NodeHasSufficientDisk
  • NodeOutOfDisk
  • InvalidDiskCapacity
  • FreeDiskSpaceFailed

other
  • FailedBinding
  • FailedScheduling
  • SuccessfulCreate
  • FailedCreate
  • SuccessfulDelete
  • FailedDelete

List of Metrics

Explore the MicroK8s metrics list along with descriptions and the monitors they apply to.

Addons

Install Kube State and Metrics Server manually to fetch and monitor metrics.

  • To deploy Kube State, get the latest version of the deployment YAML file from GitHub.
  • To deploy Metrics Server, get the latest version of the deployment YAML file from GitHub, or enable the metrics-server addon in MicroK8s.

Configure Addons

Configure Kube state

Step 1:

  1. To monitor kube-state-metrics, use the kube state YAML version that matches the Kubernetes version of the cluster for the deployment.
  2. To install kube-state-metrics, complete the following on the Kubernetes control plane node:
    • Clone the Kubernetes kube-state-metrics Github repo.
    • Run microk8s kubectl apply -f kube-state-metrics/examples/standard/.
  3. After deployment, make sure the kube-state-metrics service has a Cluster IP address assigned.
    The agent requires this address to fetch the metrics from kube state. If the Cluster IP is not set (shown as None), modify the service.yaml file and remove the clusterIP: None line so that an IP address is assigned.

Here is an example of the modified service.yaml file, followed by the commands to re-apply and verify it:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/name: kube-state-metrics
        app.kubernetes.io/version: 2.12.0
      name: kube-state-metrics
      namespace: kube-system
    spec:
      ports:
      - name: http-metrics
        port: 8080
        targetPort: http-metrics
      - name: telemetry
        port: 8081
        targetPort: telemetry
      selector:
        app.kubernetes.io/name: kube-state-metrics
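
After editing the file, re-apply it and confirm that a Cluster IP is now assigned. The path below assumes you cloned the kube-state-metrics repository as described in Step 1.

    # Re-apply the edited service definition from the cloned repository
    microk8s kubectl apply -f kube-state-metrics/examples/standard/service.yaml

    # Confirm that the kube-state-metrics service now has a Cluster IP assigned
    microk8s kubectl get svc kube-state-metrics -n kube-system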

Step 2:

  1. To check if kube-state-metrics is installed in the cluster, run the following command on the control plane nodes:
    microk8s kubectl get svc --all-namespaces | grep kube-state-metrics | grep -v grep
    The following sample output confirms that kube-state-metrics is installed in the cluster.
    kube-system kube-state-metrics ClusterIP 10.96.186.34 <none> 8080/TCP,8081/TCP 19d 

Configure Metrics-Server

Step 1:

  1. To monitor metrics-server, enable the metrics-server addon in MicroK8s by using the following command.
    microk8s enable metrics-server
  2. Once enabled, the metrics-server pod is deployed to the kube-system namespace. Alternatively, you can deploy it with the following command (a quick functional check follows the command):
    microk8s kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
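
Once the metrics-server pod is running, a quick functional check is to query node resource usage through the metrics API:

    # Returns CPU and memory usage per node when metrics-server is working
    microk8s kubectl top nodes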

Step 2:

  1. To check whether metrics-server is installed in the cluster, run the following command on the control plane nodes:
    microk8s kubectl get svc -n kube-system | grep metrics-server 
    The following sample output confirms that metrics-server is installed in the cluster.
    metrics-server   ClusterIP   10.152.183.142   <none>        443/TCP                  21m 

Next Steps

After a discovery profile is created:

  • To view the integration, navigate to Infrastructure > Resources.
  • Assign monitoring templates to the resource.
  • Validate that the resource was successfully added to OpsRamp.