1. What is the OpsRamp NextGen Gateway, and how is it different from existing gateways?

The OpsRamp NextGen Gateway is an advanced, cloud-native successor to the OpsRamp Classic Gateway. It is purpose-built for modern hybrid IT environments and provides enhanced performance, reliability, and scalability.

Unlike traditional gateways, the NextGen Gateway uses a Kubernetes-based microservices architecture that enables improved resilience, operational flexibility, and simplified lifecycle management.

Core Advantages

  • High Availability (HA): Ensures continuous monitoring during component or node failures.
  • Horizontal Scalability: Scales dynamically to support growing infrastructure and workloads using Elastic profiles.
  • Cloud-Native Architecture: Designed for containerized environments and modern IT ecosystems.
  • Self-Healing Capabilities: Automatically recovers failed services through Kubernetes orchestration.
  • Simplified Onboarding: Accelerates deployment and configuration for hybrid environments.
  • Reduced Operational Overhead: Automation minimizes manual maintenance requirements.

The NextGen Gateway represents a strategic architectural evolution designed to meet the demands of dynamic, large-scale IT environments.

2. What is the difference between NextGen, Classic, and Windows gateways?

The OpsRamp gateway portfolio includes three deployment options, each designed for specific infrastructure requirements:

  • NextGen Gateway: A modern, cloud-native gateway built on Kubernetes architecture, optimized for scalability and high availability.
  • Classic Gateway: A traditional Linux-based gateway used in existing deployments.
  • Windows Gateway: Designed for lightweight monitoring in Windows environments.

For detailed comparisons, see the Comparison section.

3. What deployment models are available for the NextGen Gateway?

The NextGen Gateway supports three deployment models to accommodate diverse infrastructure environments:

  1. ISO-Based Deployment (OpsRamp Provided)
  2. OVA-Based Deployment (OpsRamp Provided)
  3. Bring Your Own Kubernetes (BYO-K8s) Deployment

Deployment Details

ISO-Based Deployment
In this model, OpsRamp provides an ISO appliance that includes a preconfigured environment for deploying the NextGen Gateway.

  • The ISO image contains a lightweight Kubernetes distribution called k3s.
  • It also includes a pre-bundled OpsRamp bootstrap tool that simplifies gateway installation and configuration.
  • This option is suitable for environments where administrators want a fully packaged deployment with minimal manual setup.

OVA-Based Deployment
In this model, OpsRamp provides a preconfigured OVA appliance that can be deployed directly on supported virtualization platforms such as VMware vSphere and HPE VME.

  • The OVA image also includes the lightweight k3s Kubernetes distribution.
  • The built-in bootstrap tool automates the gateway deployment process.
  • This method is ideal for organizations using traditional virtualization infrastructure.

Bring Your Own Kubernetes (BYO-K8s) Deployment
Organizations that already operate Kubernetes environments can deploy the NextGen Gateway directly on their existing or new clusters.

  • Deployment is performed using Helm charts provided by OpsRamp.
  • This model eliminates the need for additional virtual machines or prepackaged appliances.
  • It integrates seamlessly with existing containerized infrastructure.

Before deployment, ensure the Kubernetes cluster meets OpsRamp-defined resource and configuration requirements.
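For BYO-K8s clusters, the Helm-based install generally follows the shape below. This is a sketch only: the repository URL, chart name, release name, and values file are placeholders (assumptions), not the actual OpsRamp chart reference — use the chart location and values supplied by OpsRamp for your account.

```shell
# Add the OpsRamp Helm repository (URL is a placeholder):
helm repo add opsramp <opsramp-helm-repo-url>
helm repo update

# Install the gateway chart into its own namespace
# (chart name and values file are placeholders):
helm install nextgen-gateway opsramp/<gateway-chart-name> \
  --namespace opsramp-gateway --create-namespace \
  -f gateway-values.yaml    # activation details provided by OpsRamp
```

The values file typically carries the gateway registration details; confirm the exact parameters against the OpsRamp deployment documentation.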

4. Should I deploy the NextGen Gateway on a single-node or multi-node Kubernetes cluster?

The OpsRamp NextGen Gateway can be deployed on both single-node and multi-node Kubernetes clusters, depending on your infrastructure requirements, availability needs, and cost considerations.

Single-Node Deployment

Recommended for:

  • Development and test environments
  • Small deployments with limited infrastructure
  • Organizations prioritizing minimal resource investment

Characteristics:

  • Lower infrastructure cost
  • Simplified setup and management
  • No high availability (single point of failure)

Multi-Node Deployment

Recommended for:

  • Production environments
  • Deployments requiring high availability (HA)
  • Large-scale or mission-critical infrastructure monitoring

Characteristics:

  • High availability and fault tolerance
  • Optimized workload distribution across nodes
  • Higher infrastructure resource requirements

OpsRamp Recommendation

OpsRamp strongly recommends deploying the NextGen Gateway on a minimum of 3 Kubernetes nodes to achieve proper high availability and ensure resilience in case of node failures.

Summary

| Deployment Type | Best For | High Availability | Infrastructure Cost |
| --- | --- | --- | --- |
| Single-Node Cluster | Small environments, testing | Not Available | Low |
| Multi-Node Cluster (3+ nodes) | Production environments | Available | High |

5. What are the resource requirements per node when deploying the NextGen Gateway on a Kubernetes cluster?

The resource requirements depend on the number of NextGen Gateway instances deployed in the Kubernetes cluster.

Each Gateway instance requires the following minimum resources:

| Resource | Requirement per Gateway |
| --- | --- |
| CPU | 4 vCPU |
| Memory | 8 GB RAM |
| Disk | 50 GB |

For example, if a single gateway instance is deployed across a 3-node Kubernetes cluster for High Availability (HA), each node should have at least the above resources available to allow scheduling and failover.

To ensure reliability and resiliency, cluster capacity should also account for node failure scenarios.

Gateway Capacity Planning
The following table shows the minimum resource requirements for different numbers of Gateway instances.

| Number of Gateways | Total CPU Required | Total Memory Required | Total Disk Required |
| --- | --- | --- | --- |
| 1 | 4 vCPU | 8 GB | 50 GB |
| 2 | 8 vCPU | 16 GB | 100 GB |
| 3 | 12 vCPU | 24 GB | 150 GB |
| 4 | 16 vCPU | 32 GB | 200 GB |
| 5 | 20 vCPU | 40 GB | 250 GB |
| 6 | 24 vCPU | 48 GB | 300 GB |
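The totals above scale linearly with the per-gateway minimums (4 vCPU, 8 GB RAM, 50 GB disk), so capacity for any instance count can be computed directly. A minimal shell sketch:

```shell
#!/bin/sh
# Compute total minimum resources for N gateway instances,
# using the per-gateway minimums (4 vCPU, 8 GB RAM, 50 GB disk).
gateways=3

echo "Total CPU:    $(( gateways * 4 )) vCPU"    # for 3 gateways: 12 vCPU
echo "Total Memory: $(( gateways * 8 )) GB"      # for 3 gateways: 24 GB
echo "Total Disk:   $(( gateways * 50 )) GB"     # for 3 gateways: 150 GB
```

For `gateways=3` this reproduces the table row above (12 vCPU, 24 GB, 150 GB). Remember that these are application minimums; node-level capacity must also cover the failure scenarios described below.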

Recommended Kubernetes Cluster Size
To maintain high availability and fault tolerance, the number of cluster nodes should increase as the number of gateway instances grows.

| Number of Gateways | Recommended Cluster Nodes | Notes |
| --- | --- | --- |
| 1–2 | 3 Nodes | Supports HA and basic failover |
| 3–4 | 4–5 Nodes | Provides better workload distribution |
| 5–6 | 6+ Nodes | Recommended for production workloads |
| 7+ | 7–9 Nodes | Ensures scalability and resilience |

Failure Scenario Planning
Capacity planning should also consider node failure scenarios to ensure the cluster continues operating.

Single Node Failure Scenario
The cluster should be able to run all gateway instances even if one node fails.

| Gateways | Recommended Nodes | Nodes After Failure | Result |
| --- | --- | --- | --- |
| 1 | 3 | 2 | Gateway continues running |
| 2 | 3 | 2 | Both gateways remain operational |
| 3 | 4 | 3 | Load redistributed across nodes |
| 4 | 5 | 4 | No service interruption |

Two Node Failure Scenario (Higher Resilience)

For critical environments, clusters can be designed to tolerate two simultaneous node failures.

| Gateways | Recommended Nodes | Nodes After Failure | Result |
| --- | --- | --- | --- |
| 1–2 | 4 | 2 | Gateway remains operational |
| 3–4 | 6 | 4 | Workloads redistributed |
| 5–6 | 7–8 | 5–6 | High resiliency maintained |

Best Practices

  • Always deploy at least 3 Kubernetes nodes for production environments.
  • Maintain sufficient spare capacity to handle node failures.

6. What components are involved in the NextGen Gateway deployment?

The NextGen Gateway deployment consists of several Kubernetes pods that provide gateway processing, caching, messaging, proxy services, and supporting infrastructure.

The components deployed may vary depending on the deployment method (ISO/OVA-based deployment or customer-managed Kubernetes cluster).

Core Gateway Components

The following components are typically deployed as part of the NextGen Gateway.

| Component | Description |
| --- | --- |
| Gateway Pod | The primary gateway service responsible for processing monitoring and discovery data and for communication with the platform. This pod contains multiple containers, including **vProbe** (for discovery and monitoring tasks), **PostgreSQL** (for storing gateway configuration and operational data), and **Native Bridge** (for internal gateway communication). |
| Redis Pod | Provides caching support for SDK-based applications used by the gateway. |

Optional Components
The following components are deployed only when specific integrations or features are required.

| Component | Description |
| --- | --- |
| NATS Pod | Messaging system used for communication between the gateway and third-party applications such as NPM. |
| Squid Pod | Proxy service used by customer-managed OpsRamp agents to connect securely to the OpsRamp SaaS platform. |

Additional Kubernetes Infrastructure Components (ISO / OVA Deployments Only)
For ISO or OVA-based deployments, Kubernetes infrastructure components are installed as part of the deployment to support networking and storage.

| Component | Description |
| --- | --- |
| CoreDNS | Provides DNS-based service discovery within the Kubernetes cluster. |
| Service Load Balancer | Handles internal service exposure and traffic routing. |
| Local Path Provisioner | Provides local persistent storage for single-node deployments. |

Additional Components for Multi-Node Deployments

For multi-node Kubernetes clusters deployed using ISO/OVA, additional components are installed to support load balancing and distributed storage.

| Component | Description |
| --- | --- |
| MetalLB | Provides external load balancing for incoming SNMP trap and Syslog traffic in bare-metal Kubernetes environments. |
| Longhorn | Distributed storage solution that provides persistent storage and replication across cluster nodes. |

Deployment in Customer-Managed Kubernetes Clusters
When deploying NextGen Gateway in a customer-managed Kubernetes cluster, only the application-level components are deployed:

  • Gateway Pod (includes vProbe, PostgreSQL, and Native Bridge containers)
  • Redis Pod
  • Optional NATS Pod
  • Optional Squid Pod

All other infrastructure components such as storage, networking, DNS, and load balancing are expected to be provided and managed by the customer’s Kubernetes environment.

7. How many resources can a single NextGen Gateway manage?

The number of resources a gateway can manage depends on the allocated server capacity.

| Managed Resources | Recommended Server Capacity |
| --- | --- |
| Up to 100 resources | 4 CPU cores, 8 GB RAM |
| Up to 500 resources | 8 CPU cores, 16 GB RAM |
| More than 500 resources | Deploy multiple gateways |

Actual capacity may vary depending on factors such as monitoring frequency, protocol usage, and workload characteristics.

8. Which network ports must be open for NextGen Gateway communication?

The following ports should be open to enable communication between the gateway, managed resources, and the OpsRamp platform.

| Purpose | Protocol | Port |
| --- | --- | --- |
| SSH access to gateway nodes | TCP | 22 |
| Agent communication via proxy (optional) | TCP | 3128 |
| SNMP traps | UDP | 162 |
| Syslog messages | TCP/UDP | 514 |
| Gateway communication with OpsRamp Cloud | TCP | 443 |
| DNS resolution | TCP | 53 |
| NTP time synchronization | UDP | 123 |

These ports allow monitoring data, alerts, and device communication to flow correctly between the gateway and external systems.

9. How do you download the NextGen Gateway ISO or OVA image?

To download the gateway installation package:

  1. Navigate to Setup → Account → Collector Profile.
  2. Click Add Collector Profile.
  3. Select either Virtual Appliance (ISO) NextGen or Virtual Appliance (OVA) NextGen.
  4. Download the installation image.

The downloaded image is used to deploy the gateway as a virtual machine.

10. Why should the hostname be configured before installing Kubernetes?

It is recommended to set a unique hostname before installing K3s or Kubernetes because changing the hostname afterward may cause cluster configuration issues.

Setting a proper hostname also helps maintain consistent network identification and simplifies troubleshooting.

11. Can proxy settings be configured for the NextGen Gateway?

Yes. If your environment requires a proxy for outbound communication, you must configure proxy settings on the gateway node.

Proxy configuration is typically added to the following file:

/etc/environment

After updating the proxy settings, users must log in again for the changes to take effect.
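As an illustration, proxy entries in `/etc/environment` typically look like the following. The proxy host, port, and exclusion list are placeholder values (assumptions), not OpsRamp-specific settings:

```shell
# Example /etc/environment entries (all values are placeholders):
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=localhost,127.0.0.1,.example.com
```

Adjust the `no_proxy` list so that internal hosts and monitored devices are not routed through the proxy.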

12. Can proxy settings be modified after Kubernetes installation?

Yes. Proxy settings can be modified after installation by updating the configuration file:

/etc/systemd/system/k3s.service.env

After updating the file, restart the K3s service to apply the changes.

systemctl restart k3s
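For reference, this file holds uppercase proxy environment variables read by the K3s service. The values below are placeholders; including the default cluster pod/service ranges in `NO_PROXY` is a common K3s practice, not an OpsRamp-specific requirement:

```shell
# Example /etc/systemd/system/k3s.service.env entries (placeholder values):
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=localhost,127.0.0.1,10.42.0.0/16,10.43.0.0/16
```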

13. Why is a storage class required in Kubernetes deployments for non-ISO/OVA-based clusters?

For non-ISO/OVA-based clusters, a default storage class and CSI plugin must be configured because the gateway uses persistent volumes to store operational data such as logs, metadata, and monitoring information.

Without a storage class, Kubernetes cannot dynamically provision persistent storage for gateway components.

For ISO/OVA-based deployments, OpsRamp creates the storage class by default, so no user-side configuration is required.
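On a customer-managed (non-ISO/OVA) cluster, you can quickly confirm whether a default storage class exists using standard kubectl commands. The storage class name below is a placeholder:

```shell
# List storage classes; the default one is marked "(default)":
kubectl get storageclass

# If needed, mark an existing storage class as the default:
kubectl patch storageclass <storage-class-name> \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```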

14. What are some best practices before installing NextGen Gateway?

Before installation, it is recommended to:

  • Verify hardware and resource requirements.
  • Configure hostname and network settings.
  • Ensure required ports are open.
  • Whitelist the gateway IP addresses.
  • Confirm Kubernetes and Helm versions meet the supported requirements.
  • Validate storage and load balancer configuration for multi-node deployments.

15. What is K3s and why is it required for NextGen Gateway?

K3s is a lightweight Kubernetes distribution used to run containerized workloads. The NextGen Gateway runs its services as Kubernetes pods, so installing K3s is required to create the Kubernetes environment where the gateway components will operate.

K3s is preferred because it is optimized for edge deployments and requires fewer system resources compared to standard Kubernetes.

16. When should K3s be installed during the gateway deployment process?

K3s should be installed after the Gateway VM is prepared and the hostname is configured. Installing K3s creates the Kubernetes cluster environment required for running the gateway components.

It is recommended to finalize the following before installing K3s:

  • Hostname configuration
  • Network settings
  • Proxy configuration (if required)

Changing these settings after K3s installation may cause cluster configuration issues.

17. How do you install K3s for a single-node NextGen Gateway deployment?

For a single-node deployment, run the following command on the gateway VM:

opsramp-collector-start setup init

This command installs K3s and initializes the Kubernetes environment required to deploy the gateway components.

18. How do you verify that K3s installation completed successfully?

After installation, verify the K3s status using Kubernetes commands or by checking the node status.

service k3s status

kubectl get nodes

A successful installation typically shows:

  • Kubernetes services running
  • The node registered in the cluster
  • K3s services active on the host

19. Can custom network ranges be configured during K3s installation?

Yes. Custom network ranges for pods and services can be specified during installation.

Example command:

opsramp-collector-start setup init \
--cluster-cidr <cluster-cidr-ip> \
--service-cidr <service-cidr-ip>

These parameters allow administrators to avoid conflicts with existing network ranges in their environment.

20. How is K3s installed for a high availability (HA) NextGen Gateway cluster?

For HA deployments, K3s is installed with the HA option enabled:

opsramp-collector-start setup init --enable-ha=true --loadbalancer-ip {loadbalancerIP}

This command initializes the first node of the Kubernetes cluster and configures load balancing for the gateway services.

21. How do you add additional nodes to the K3s cluster?

To add a new node to the existing cluster:

  1. Generate the node token on the first node:

    opsramp-collector-start setup node token

  2. Join the new node using the generated token:

    opsramp-collector-start setup node add -u https://{NodeIP}:6443 -t {token}

Repeat the process for additional nodes to build the HA cluster.

22. What is Registry Configuration in K3s and why is it needed?

A container registry is a storage location where container images are hosted. These images are required for deploying applications inside Kubernetes clusters.

In the context of OpsRamp NextGen Gateway, Kubernetes (K3s) downloads container images for gateway components from a container registry. By default, these images are typically pulled from OpsRamp public registries such as GCR Repo (us-docker.pkg.dev).

However, in some environments, access to public registries may be restricted due to:

  • Corporate security policies
  • Firewall restrictions
  • Air-gapped environments
  • Compliance requirements

In such cases, organizations configure a private container registry where the required images are stored internally. K3s must then be configured to pull images from this private registry.

For more details, refer to the Install K3s on ISO/OVA page.
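As a sketch of the private-registry case described above: K3s reads registry settings from `/etc/rancher/k3s/registries.yaml`. The internal mirror hostname and credentials below are placeholders:

```shell
# Point K3s at a private mirror for images normally pulled from
# us-docker.pkg.dev (mirror URL and credentials are placeholders):
cat <<'EOF' | sudo tee /etc/rancher/k3s/registries.yaml
mirrors:
  us-docker.pkg.dev:
    endpoint:
      - "https://registry.internal.example.com"
configs:
  "registry.internal.example.com":
    auth:
      username: <registry-user>
      password: <registry-password>
EOF

# Restart K3s so the registry configuration takes effect:
sudo systemctl restart k3s
```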

23. What happens if the registry configuration is incorrect?

If the registry configuration is incorrect, Kubernetes may fail to pull container images. This will result in pods entering an error state such as:

  • ImagePullBackOff
  • ErrImagePull

These errors indicate that Kubernetes cannot download the required container images.

To resolve the issue, verify:

  • Registry URL
  • Authentication credentials
  • Network connectivity
  • TLS configuration
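These checks can be started with standard kubectl commands. The namespace and pod names below are placeholders:

```shell
# List pods and their states (ImagePullBackOff / ErrImagePull appear in STATUS):
kubectl get pods -A

# Inspect the Events section for the exact pull error message:
kubectl describe pod <pod-name> -n <namespace>

# Verify the node can reach the registry at all:
curl -v https://us-docker.pkg.dev
```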

24. What are Pod CIDR and Service CIDR ranges in Kubernetes (K3s)?

In Kubernetes environments, network communication between containers and services is handled using internal IP address ranges called CIDR blocks.

Two important network ranges are used:

Pod CIDR Range

  • Defines the IP address range assigned to Pods running in the Kubernetes cluster.
  • Each pod receives a unique IP address from this range.

Service CIDR Range

  • Defines the IP address range assigned to Kubernetes services.
  • These service IPs provide a stable endpoint to access pods.

These CIDR ranges enable internal communication between gateway components running inside the Kubernetes cluster.

25. Why would you need to customize Pod and Service CIDR ranges?

By default, Kubernetes assigns standard CIDR ranges for pods and services. However, these default ranges may conflict with existing network configurations in enterprise environments.

Custom CIDR ranges are useful when:

  • Your organization already uses the default Kubernetes CIDR ranges internally.
  • Network conflicts occur with existing infrastructure.
  • The gateway is deployed in a large enterprise network with strict IP address management policies.
  • You are integrating with multiple Kubernetes clusters in the same network.

Customizing CIDR ranges helps avoid IP conflicts and communication issues within the infrastructure.

26. What are the default CIDR ranges used by Kubernetes?

Although configurations may vary, the common default ranges used in Kubernetes are:

| Network Type | Default CIDR Range |
| --- | --- |
| Pod Network | 10.42.0.0/16 |
| Service Network | 10.43.0.0/16 |

These ranges are reserved for internal Kubernetes communication and are not typically exposed outside the cluster.

If these ranges overlap with existing network segments in your infrastructure, you should configure custom ranges before installation.

27. When should Custom Pod and Service CIDR ranges be configured?

Custom CIDR ranges must be configured before installing K3s.

You should configure custom ranges if:

  • The default Kubernetes network ranges conflict with your existing IP ranges.
  • Your organization follows a predefined network segmentation policy.
  • You are deploying gateways across multiple network zones or data centers.

Once the cluster is installed, changing CIDR ranges becomes complex and may require reinstalling the cluster.

28. How do you configure custom Pod and Service CIDR ranges in K3s?

Custom network ranges can be configured during the K3s installation process by specifying the CIDR values.

Example configuration:

--cluster-cidr=<POD_CIDR_RANGE>
--service-cidr=<SERVICE_CIDR_RANGE>

Example:

--cluster-cidr=172.16.0.0/16
--service-cidr=172.17.0.0/16

Explanation:

  • cluster-cidr defines the IP range for pods.
  • service-cidr defines the IP range for Kubernetes services.

These ranges should be carefully selected to ensure they do not overlap with other network segments.

For more details, refer to the Install K3s on ISO/OVA page.

29. What happens if Pod or Service CIDR ranges overlap with existing networks?

If the CIDR ranges overlap with other network segments, several networking issues can occur:

  • Pods may not be able to communicate with services.
  • Internal DNS resolution may fail.
  • Traffic routing may become unpredictable.
  • Monitoring and gateway services may fail to connect to resources.

In some cases, overlapping CIDR ranges can completely break cluster networking.

To prevent such issues, always validate CIDR ranges before installation.
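A quick overlap check can be scripted before installation. The helper below is a minimal bash illustration (an assumption, not an OpsRamp tool) that compares two IPv4 CIDR blocks at the shorter of the two prefix lengths:

```shell
#!/bin/bash
# Minimal illustration: detect whether two IPv4 CIDR blocks overlap.

ip2int() {                       # dotted quad -> 32-bit integer
  local ip=$1 a b c d
  a=${ip%%.*}; ip=${ip#*.}
  b=${ip%%.*}; ip=${ip#*.}
  c=${ip%%.*}; d=${ip#*.}
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidrs_overlap() {                # usage: cidrs_overlap 10.42.0.0/16 10.42.5.0/24
  local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/}
  local p=$(( p1 < p2 ? p1 : p2 ))          # compare at the shorter prefix
  local s=$(( 32 - p ))
  if [ $(( $(ip2int "$n1") >> s )) -eq $(( $(ip2int "$n2") >> s )) ]; then
    echo "overlap"
  else
    echo "no overlap"
  fi
}

cidrs_overlap 10.42.0.0/16 10.42.5.0/24    # default pod range vs an existing subnet -> overlap
cidrs_overlap 172.16.0.0/16 10.42.0.0/16   # -> no overlap
```

Run this against each existing network segment and the planned pod and service ranges before installing K3s.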

30. Can Pod and Service CIDR ranges be changed after installation?

Modifying CIDR ranges after cluster installation is technically possible but very difficult and not recommended.

Changing CIDR ranges may require:

  • Reconfiguring Kubernetes networking
  • Recreating cluster nodes
  • Redeploying applications
  • Restarting multiple services

Because of this complexity, it is strongly recommended to configure the correct CIDR ranges before installing K3s and the OpsRamp gateway.

31. How can you verify the configured Pod and Service CIDR ranges?

After the cluster is installed, you can verify the configured network ranges using Kubernetes commands.

Example command:

kubectl cluster-info dump | grep -i cidr

You can also inspect the node configuration or Kubernetes network settings to confirm the CIDR values.

This verification ensures the cluster networking has been configured correctly.

32. What is the OpsRamp Collector Bootstrap Tool?

The Collector Bootstrap Tool is a lightweight utility used to:

  • Deploy gateway components
  • Register the gateway with OpsRamp
  • Install necessary services

It simplifies the onboarding of collectors and gateways.