Prometheus

Describes how to set up an integration to ingest Prometheus monitoring events.


Introduction

Prometheus is an open-source software application used for event monitoring and alerting.

Validated Version: Prometheus 2.14.0

OpsRamp configuration

Configuration involves the following:

  1. Installing the integration.
  2. Configuring the integration.

Step 1: Install the integration

To install:

  1. From All Clients, select a client.
  2. Go to Setup > Integrations > Integrations.
  3. From Available Integrations, select Monitoring > Prometheus.
  4. Click Install.

Step 2: Configure the integration

To configure the integration:

  1. From the API tab, provide the following:
    • Authentication: Copy the Tenant Id, Token, and Webhook URL. These settings are used to create an HTTP request template (see the verification sketch after these steps).
    • Map Attributes: Provide the mapping information for the third-party integration.
  2. From the Monitoring of Integration tab, click Assign Templates.
  3. From the Audit Logs, set up audit log criteria and time frame.
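
A quick way to verify these settings is to post a test alert directly to the copied Webhook URL. The following Python sketch is hypothetical and only for verification: the URL path and vtoken query parameter follow the webhook URL format shown in the Alert Manager configuration later on this page, the payload mirrors the sample payload at the end of this page, and every placeholder value must be replaced with the values copied from the API tab. In normal operation, Prometheus Alert Manager sends this request itself.

    # Hypothetical verification sketch; not an official OpsRamp client.
    import requests

    # Placeholders: replace with the Webhook URL, Tenant Id, and Token copied from the API tab.
    WEBHOOK_URL = "https://<webhook_url>/integrations/alertsWebhook/<tenantId>/alerts"
    TOKEN = "<TokenValue>"

    # Alert Manager-style payload, shaped like the sample payload shown later on this page.
    payload = {
        "receiver": "opsramp-webhook",
        "status": "firing",
        "alerts": [
            {
                "status": "firing",
                "labels": {
                    "alertname": "High Pod Memory",
                    "app": "opsramp",
                    "severity": "critical",
                    "metric": "container_memory_usage_bytes",
                    "description": "testing alert1",
                },
                "annotations": {"summary": "High Memory Usage"},
            }
        ],
    }

    # The token is passed as the vtoken query parameter, as in the Alert Manager webhook URL.
    response = requests.post(WEBHOOK_URL, params={"vtoken": TOKEN}, json=payload)
    print(response.status_code, response.text)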

Configuring the map attributes

To configure the mapping attributes:

  1. Select the required OpsRamp property from the drop-down.
  2. Click Add Mapping Attributes to map attributes for the specific OpsRamp alert property.
  3. Click + to define the mappings.
  4. From Create Alert Mappings on Status, define the mappings, parsing conditions, and default values, and Save.

The following table shows attribute mappings.

Property Mappings
| Third-Party Entity | OpsRamp Entity | Third-Party Property | OpsRamp Property | Third-Party Property Value | OpsRamp Property Value |
| --- | --- | --- | --- | --- | --- |
| Alert | ALERT | severity | alert.currentState | critical | Critical |
| Alert | ALERT | metric | alert.serviceName | container_memory_usage_bytes | NA |
| Alert | ALERT | description | alert.description | testing alert1 | NA |
| Alert | ALERT | summary | alert.subject | High Memory Usage | NA |
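
For illustration, the following Python sketch applies the mappings from the table to one incoming Prometheus alert. The dictionaries and variable names in it are hypothetical, not OpsRamp or Prometheus APIs; they only show which third-party fields feed which OpsRamp alert properties.

    # Illustrative only: how the table's mappings transform one incoming alert.
    # The variable names are hypothetical, not OpsRamp APIs.
    SEVERITY_MAP = {"critical": "Critical"}  # third-party property value -> OpsRamp property value

    prometheus_alert = {
        "labels": {
            "severity": "critical",
            "metric": "container_memory_usage_bytes",
            "description": "testing alert1",
        },
        "annotations": {"summary": "High Memory Usage"},
    }

    opsramp_alert = {
        "currentState": SEVERITY_MAP[prometheus_alert["labels"]["severity"]],
        "serviceName": prometheus_alert["labels"]["metric"],
        "description": prometheus_alert["labels"]["description"],
        "subject": prometheus_alert["annotations"]["summary"],
    }

    print(opsramp_alert)
    # {'currentState': 'Critical', 'serviceName': 'container_memory_usage_bytes',
    #  'description': 'testing alert1', 'subject': 'High Memory Usage'}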

Prometheus configuration

Routing of Prometheus alerts to OpsRamp is configured through the YAML definitions used during deployment. Use the customized labels shown in bold in the YAML definitions for the Prometheus integration.

Configuration involves:

  1. Configuring Prometheus Alert Manager
  2. Configuring alert rules

Step 1: Configure Prometheus Alert Manager

Alert Manager is the receiver that routes the alerts.

To configure alerts in Prometheus:

  1. Get the Webhook URL from the OpsRamp configuration.
  2. Use the Webhook URL in the Prometheus Alert Manager ConfigMap YAML file, as in the following example:
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager-config
  namespace: monitoring
data:
  config.yml: |-
    global:
    templates:
    - '/etc/alertmanager/*.tmpl'
    receivers:
    - name: default-receiver
    - name: opsramp-webhook
      webhook_configs:
      - url: "https://<webhook_url>/integrations/alertsWebhook/client_14/alerts?vtoken=<TokenValue>"
    route:
      group_wait: 10s
      group_interval: 5m
      receiver: default-receiver
      repeat_interval: 3h
      routes:
      - receiver: opsramp-webhook
        match_re:
          app: opsramp
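
The routes entry above forwards only the alerts whose app label matches the regular expression opsramp to the opsramp-webhook receiver; everything else falls through to default-receiver. The following Python sketch is a rough, hypothetical model of that routing decision (Alert Manager anchors match_re regular expressions, as the groupKey in the sample payload later on this page shows); it is useful for reasoning about which labels your alert rules must carry.

    import re

    # Hypothetical model of the Alert Manager route above: match_re on the app label,
    # fully anchored (compare the groupKey app=~"^(?:opsramp)$" in the sample payload).
    ROUTE_MATCH_RE = {"app": "opsramp"}

    def routed_to_opsramp(labels: dict) -> bool:
        """Return True if every match_re pattern fully matches the alert's labels."""
        return all(
            re.fullmatch(pattern, labels.get(name, "")) is not None
            for name, pattern in ROUTE_MATCH_RE.items()
        )

    print(routed_to_opsramp({"alertname": "High Pod Memory", "app": "opsramp"}))  # True
    print(routed_to_opsramp({"alertname": "High Pod Memory", "app": "other"}))    # False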

Step 2: Configure alert rules

This step configures the alert rules in Prometheus. Filtering rules are created using alerting profiles. The alert rules are labeled as OpsRamp alerts so that they are routed to the OpsRamp receiver.

To configure alert rules:

  1. Add the required OpsRamp labels in the prometheus.rules file (the ConfigMap for alert rules) so that alerts generated from these rules can be mapped to the corresponding OpsRamp entities in the OpsRamp alert browser:

YAML file

    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: prometheus-server-conf
      labels:
        name: prometheus-server-conf
      namespace: monitoring
    data:
      prometheus.rules: |-
        groups:
        - name: devopscube demo alert
          rules:
          - alert: High Pod Memory
            expr: sum(container_memory_usage_bytes) > 1
            for: 1m
            **labels:**
              **severity: critical**
              **app: opsramp**
              **metric: container_memory_usage_bytes**
              **description: testing alert1**
            annotations:
              **summary: High Memory Usage**
        - name: devopscube demo alert2
          rules:
          - alert: High Pod Memory2
            expr: sum(container_memory_usage_bytes) > 2
            for: 1m
            **labels:**
              **severity: VeryCritical**
              **app: opsramp**
              **metric: container_memory_usage_bytes**
              **description: testing alert2**
            annotations:
              **summary: High Memory Usage2**
      prometheus.yml: |-
        global:
          scrape_interval: 5s
          evaluation_interval: 5s
        rule_files:
          - /etc/prometheus/prometheus.rules
        alerting:
          alertmanagers:
          - scheme: http
            static_configs:
            - targets:
              - "alertmanager.monitoring.svc:9093"
        scrape_configs:
          - job_name: 'kubernetes-apiservers'

            kubernetes_sd_configs:
            - role: endpoints
            scheme: https

            tls_config:
              ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

            relabel_configs:
            - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
              action: keep
              regex: default;kubernetes;https

          - job_name: 'kubernetes-nodes'

            scheme: https

            tls_config:
              ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

            kubernetes_sd_configs:
            - role: node

            relabel_configs:
            - action: labelmap
              regex: __meta_kubernetes_node_label_(.+)
            - target_label: __address__
              replacement: kubernetes.default.svc:443
            - source_labels: [__meta_kubernetes_node_name]
              regex: (.+)
              target_label: __metrics_path__
              replacement: /api/v1/nodes/${1}/proxy/metrics

          - job_name: 'kubernetes-pods'
            kubernetes_sd_configs:
            - role: pod

            relabel_configs:
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              action: keep
              regex: true
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
              action: replace
              target_label: __metrics_path__
              regex: (.+)
            - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
              action: replace
              regex: ([^:]+)(?::\d+)?;(\d+)
              replacement: $1:$2
              target_label: __address__
            - action: labelmap
              regex: __meta_kubernetes_pod_label_(.+)
            - source_labels: [__meta_kubernetes_namespace]
              action: replace
              target_label: kubernetes_namespace
            - source_labels: [__meta_kubernetes_pod_name]
              action: replace
              target_label: kubernetes_pod_name

          - job_name: 'kubernetes-cadvisor'

            scheme: https

            tls_config:
              ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

            kubernetes_sd_configs:
            - role: node

            relabel_configs:
            - action: labelmap
              regex: __meta_kubernetes_node_label_(.+)
            - target_label: __address__
              replacement: kubernetes.default.svc:443
            - source_labels: [__meta_kubernetes_node_name]
              regex: (.+)
              target_label: __metrics_path__
              replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

          - job_name: 'kubernetes-service-endpoints'

            kubernetes_sd_configs:
            - role: endpoints

            relabel_configs:
            - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
              action: keep
              regex: true
            - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
              action: replace
              target_label: __scheme__
              regex: (https?)
            - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
              action: replace
              target_label: __metrics_path__
              regex: (.+)
            - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
              action: replace
              target_label: __address__
              regex: ([^:]+)(?::\d+)?;(\d+)
              replacement: $1:$2
            - action: labelmap
              regex: __meta_kubernetes_service_label_(.+)
            - source_labels: [__meta_kubernetes_namespace]
              action: replace
              target_label: kubernetes_namespace
            - source_labels: [__meta_kubernetes_service_name]
              action: replace
              target_label: kubernetes_name

Sample payload

{ "receiver": "opsramp-webhook", "status": "firing", "alerts": 
    [{ 
        "status": "firing", "labels": 
            { "alertname": "High Pod Memory", "app": "opsramp", "severity": "slack" }, 
        "annotations": { "summary": "High Memory Usage" }, 
        "startsAt": "2019-09-19T08:14:52.059731582Z", 
        "endsAt": "0001-01-01T00:00:00Z", 
        "generatorURL": "[http://prometheus-deployment-7bc6dc6f77-ds6j2:9090/graph?g0.expr=sum%28container_memory_usage_bytes%29+%3E+1u0026g0.tab=1](http://prometheus-deployment-7bc6dc6f77-ds6j2:9090/graph?g0.expr=sum%28container_memory_usage_bytes%29+%3E+1\\u0026g0.tab=1)", 
        "fingerprint": "243ccc9d065e8b26" 
        }, 
    { 
        "status": "firing", 
        "labels": 
            { "alertname": "Low Containers Count", "app": "opsramp", "severity": "page" }, 
        "annotations": { "summary": "Low Container Count" }, 
        "startsAt": "2019-09-19T08:14:53.135072669Z", 
        "endsAt": "0001-01-01T00:00:00Z", 
        "generatorURL": "[http://prometheus-deployment-7bc6dc6f77-ds6j2:9090/graph?g0.expr=sum%28kubelet_running_container_count%29+%3C+40u0026g0.tab=1](http://prometheus-deployment-7bc6dc6f77-ds6j2:9090/graph?g0.expr=sum%28kubelet_running_container_count%29+%3C+40\\u0026g0.tab=1)", 
        "fingerprint": "a95e6f948c14554a" 
     }
     ], 
  "groupLabels": { }, 
  "commonLabels": { "app": "opsramp" }, 
  "commonAnnotations": { }, 
  "externalURL": "[http://alertmanager-7b6d855bd8-7mvf2:9093](http://alertmanager-7b6d855bd8-7mvf2:9093/)", 
  "version": "4", 
  "groupKey": "{}/{app=~\\" ^ ( ? : opsramp) $\\"}:{}" 

}

 

What to do next

  • View alerts in OpsRamp.
    1. From the Workspace drop-down in the OpsRamp console, navigate to Alerts. On the Alerts page, search with the source name Prometheus. The related alerts are displayed.
    2. Click an Alert ID to view the alert details.