Installation

How to install IRONdb on a system.


System Requirements

IRONdb requires one of the following operating systems:

  • Ubuntu 22.04 LTS

  • Ubuntu 24.04 LTS

Additionally, IRONdb requires the ZFS filesystem. This is available natively on Ubuntu.

Hardware requirements will necessarily vary depending upon system scale and cluster size. An appendix with general guidelines for calculating cluster size is provided. Please contact us with questions regarding system sizing.

Apica recommends the following minimum system specification for the single-node, free, 25K-metrics option:

  • 1 CPU

  • 4 GB RAM

  • SSD-based storage, 20 GB available space

The following network protocols and ports are utilized. These are defaults and may be changed via configuration files.

  • 2003/tcp (Carbon plaintext submission)

  • 4242/tcp (OpenTSDB plaintext submission)

  • 8112/tcp (admin UI, HTTP REST API)

  • 8112/udp

  • 8443/tcp (admin UI, HTTP REST API when TLS configuration is used)

  • 32322/tcp (admin console, localhost only)

System Tuning

IRONdb is expected to perform well on a standard installation of supported platforms, but to ensure optimal performance, there are a few tuning changes that should be made. This is especially important if you plan to push your IRONdb systems to the limit of your hardware.

Disable Swap

With systems dedicated solely to IRONdb, there is no need for swap space. Configuring the system without swap at install time is ideal, but you can also run swapoff -a and comment out any swap lines in /etc/fstab.

Disable Transparent Hugepages

THP can interact poorly with the ZFS ARC, causing reduced performance for IRONdb.

Disable by setting these two kernel options to never:

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

Making these changes persistent across reboot differs depending on distribution.

For Ubuntu, install the sysfsutils package and edit /etc/sysfs.conf, adding the following lines:

kernel/mm/transparent_hugepage/enabled = never
kernel/mm/transparent_hugepage/defrag = never

Note: the sysfs mount directory is automatically prepended to the attribute name.
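
To confirm the setting after a reboot, read the sysfs attribute; the kernel reports the active mode in brackets. A small sketch of extracting that bracketed value (the sample status line below is illustrative):

```shell
# /sys/kernel/mm/transparent_hugepage/enabled contains a line like
# "always madvise [never]"; the bracketed token is the active mode.
thp_status="always madvise [never]"   # stand-in for: cat /sys/kernel/mm/transparent_hugepage/enabled
active=$(printf '%s\n' "$thp_status" | grep -o '\[[a-z]*\]' | tr -d '[]')
echo "$active"
```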

Installation Steps

Follow these steps to get IRONdb installed on your system.

System commands must be run as a privileged user, such as root, or via sudo.

Configure Software Sources

Install the signing keys:

sudo curl -s -o /etc/apt/trusted.gpg.d/circonus.asc \
  'https://keybase.io/circonuspkg/pgp_keys.asc?fingerprint=14ff6826503494d85e62d2f22dd15eba6d4fa648'

sudo curl -s -o /etc/apt/trusted.gpg.d/backtrace.asc \
  https://updates.circonus.net/backtrace/ubuntu/backtrace_package_signing.key

Create the file /etc/apt/sources.list.d/circonus.list with the following contents, depending on the version:

For Ubuntu 22.04:

deb https://updates.circonus.net/irondb/ubuntu/ jammy main
deb https://updates.circonus.net/backtrace/ubuntu/ jammy main

For Ubuntu 24.04:

deb https://updates.circonus.net/irondb/ubuntu/ noble main
deb https://updates.circonus.net/backtrace/ubuntu/ noble main

Finally, run sudo apt-get update.
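
As a convenience, the file can be written with a heredoc; this sketch shows the Ubuntu 22.04 ("jammy") entries and assumes passwordless sudo:

```shell
# Write the Circonus package sources (use "noble" instead of "jammy" on 24.04)
sudo tee /etc/apt/sources.list.d/circonus.list >/dev/null <<'EOF'
deb https://updates.circonus.net/irondb/ubuntu/ jammy main
deb https://updates.circonus.net/backtrace/ubuntu/ jammy main
EOF
sudo apt-get update
```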

Install Package

There is a helper package that works around issues with dependency resolution: IRONdb requires specific versions of its dependent Apica packages, which apt-get cannot resolve on its own. The helper package must be installed first; it cannot be installed in the same transaction as the main package.

sudo apt-get install circonus-platform-irondb-apt-policy
sudo apt-get install circonus-platform-irondb

Setup Process

Prepare site-specific information for setup. These values may be set via shell environment variables, or as arguments to the setup script. The environment variables are listed below.

NOTE: if you wish to use environment variables, you will need to run the install from a root shell, as sudo will clear the environment when it runs.

IRONDB_NODE_UUID

(required) The ID of the current node, which must be unique within a given cluster. You may use the uuidgen command that comes with your OS, or generate a well-formed, non-nil UUID with an external tool or website. Note that this must be a lowercase UUID. The uuidgen tool on some systems, notably MacOS, produces uppercase. Setup will warn and convert the UUID to lowercase.
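
A quick sketch of generating a compliant (lowercase, non-nil) UUID; the same approach works for IRONDB_CHECK_UUID below:

```shell
# uuidgen may emit uppercase on some systems (notably macOS); force lowercase.
node_uuid=$(uuidgen | tr '[:upper:]' '[:lower:]')
echo "$node_uuid"
```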

IRONDB_NODE_ADDR

(required) The IPv4 address or hostname of the current node, e.g., "192.168.1.100" or "host1.domain.com". Hostnames will be resolved to IP addresses once at service start. Failures in DNS resolution may cause service outages.

IRONDB_CHECK_UUID

(required) Check ID for Graphite, OpenTSDB, and Prometheus metric ingestion, which must be the same on all cluster nodes. You may use the uuidgen command that comes with your OS, or generate a well-formed, non-nil UUID with an external tool or website. Note that this must be a lowercase UUID. The uuidgen tool on some systems, notably MacOS, produces uppercase. Setup will warn and convert the UUID to lowercase.

IRONDB_TLS

(optional) Enables TLS configuration. This is currently an alpha feature, for testing only.

Note that OpenTSDB does not support TLS. Even if this option is set to "on", the listener on port 4242 will not use TLS.

Because of the certificate requirement, the service will not automatically start post-setup.

IRONDB_CRASH_REPORTING

(optional) Controls enablement of automated crash reporting. Default is "on". IRONdb utilizes sophisticated crash tracing technology to help diagnose errors. Enabling crash reporting requires that the system be able to connect out to the Apica reporting endpoint: https://circonus.sp.backtrace.io:6098. If your site's network policy forbids this type of outbound connectivity, set the value to "off".
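
Before enabling crash reporting behind a restrictive firewall, you can sketch a reachability check with nc (netcat); the 5-second timeout is an arbitrary choice:

```shell
# TCP connect test to the crash-reporting endpoint (no data is sent).
nc -z -w 5 circonus.sp.backtrace.io 6098 && echo reachable || echo blocked
```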

IRONDB_ZPOOL

(optional) The name of the zpool that should be used for IRONdb storage. If this is not specified and there are multiple zpools in the system, setup chooses the pool with the most available space.

Run Installer

Run the setup script. All required options must be present, either as environment variables or via command-line arguments. A mix of environment variables and arguments is permitted, but environment variables take precedence over command-line arguments.

/opt/circonus/bin/setup-irondb \
    -a <ip_or_hostname> \
    -n <node_uuid> \
    -u <integration_check_uuid>

Use the -h option to view a usage summary.
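
Equivalently, using the environment variables described above (run from a root shell, since sudo clears the environment; the address and check UUID here are placeholders):

```shell
# Placeholder values; the check UUID must be identical on every cluster node.
export IRONDB_NODE_UUID="$(uuidgen | tr '[:upper:]' '[:lower:]')"
export IRONDB_NODE_ADDR="192.168.1.100"
export IRONDB_CHECK_UUID="2f6c3a1e-0b7d-4c2a-9e5f-1a2b3c4d5e6f"
# then run:
# /opt/circonus/bin/setup-irondb
```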

Upon successful completion, it will print out specific information about how to submit Graphite, OpenTSDB, and Prometheus metrics. See the Integrations section for details.

Add License

(Optional)

IRONdb comes with an embedded license that allows all features with a limit of 25K active, unique metric streams. If you wish to obtain a more expansive license, please contact Apica Sales.

Add the <license> stanza from your purchased IRONdb license to the file /opt/circonus/etc/licenses.conf on your IRONdb instance, within the enclosing <licenses> tags. It should look something like this:

<licenses>
  <license id="(number)" sig="(cryptographic signature)">
    <graphite>true</graphite>
    <max_streams>25000</max_streams>
    <company>MyCompany</company>
  </license>
</licenses>

If you are running a cluster of IRONdb nodes, the license must be installed on all nodes.

Restart the IRONdb service:

  • /bin/systemctl restart circonus-irondb

Cluster Configuration

Additional configuration is required for clusters of more than one IRONdb node. The topology of a cluster describes the addresses and UUIDs of the participating nodes, as well as the desired number of write copies for stored data. Ownership of metric streams (deciding which node that stream's data should be written to) is determined by the topology.

The above setup script configures a single, standalone instance. If you have already been using such an instance, configuring it to be part of a cluster will cause your existing stored data to become unavailable. It is therefore preferable to complete cluster setup prior to ingesting any metric data into IRONdb.

Note for existing clusters: adding one or more nodes to an existing cluster requires a special "rebalance" operation to shift stored metric data to different nodes, as determined by a new topology. See Resizing Clusters for details.

Determine Cluster Parameters

The number and size of nodes you need is determined by several factors:

  • Frequency of measurement ingestion

  • Desired level of redundancy (write copies)

  • Minimum granularity of rollups

  • Retention period

The number of write copies determines the number of nodes that can be unavailable before metric data become inaccessible. A cluster with W write copies can survive W-1 node failures before data become inaccessible.

See the appendix on cluster sizing for details.

Topology Requirements

There are a few important considerations for IRONdb cluster topologies:

  • A specific topology is identified by a hash. IRONdb clusters always have an "active" topology, referenced by the hash.

  • The topology hash is determined using the values of id, port, and weight, as well as the ordering of the <node> stanzas. Changing any of these on a previously configured node will invalidate the topology and cause the node to refuse to start. This is a safety measure to guard against data loss.

  • UUIDs must be well-formed, non-nil, and lowercase.

  • The node address may be changed at any time without affecting the topology hash, but care should be taken not to change the ordering of any node stanzas.

  • If a node fails, its replacement should keep the same UUID, but it can have a different IP address or hostname.

Create Topology Layout

The topology layout describes the particular nodes that are part of the cluster, as well as aspects of operation for the cluster as a whole, such as the number of write copies. The layout file is not read directly by IRONdb; rather, it is used to create a canonical topology representation that the IRONdb config references.

A helper script, /opt/circonus/bin/topo-helper, is available for creating the topology:

Usage: ./topo-helper [-h] -a <start address>|-A <addr_file> -w <write copies> [-i <uuid,uuid,...>|-n <node_count>] [-s]
  -a <start address> : Starting IP address (inclusive)
  -A <addr_file>     : File containing node IPs or hostnames, one per line
  -i <uuid,uuid,...> : List of (lowercased) node UUIDs
                       If omitted, UUIDs will be auto-generated
  -n <node_count>    : Number of nodes in the cluster (required if -i is omitted)
  -s                 : Create a sided configuration
  -w <write copies>  : Number of write copies
  -h                 : Show usage summary

This will create a temporary config, which you can edit afterward, if needed, before importing. There are multiple options for generating the list of IP addresses or hostnames, and for choosing the node UUIDs.

The simplest form is to give a starting IP address, a node count, and a write-copies value. For example, in a cluster of 3 nodes, where we want 2 write copies:

/opt/circonus/bin/topo-helper -a 192.168.1.11 -n 3 -w 2

The resulting temporary config (/tmp/topology.tmp) looks like this:

<nodes write_copies="2">
  <node id="7dffe44b-47c6-43e1-db6f-dc3094b793a8"
        address="192.168.1.11"
        apiport="8112"
        port="8112"
        weight="170"/>
  <node id="964f7a5a-6aa5-4123-c07c-8e1a4fdb8870"
        address="192.168.1.12"
        apiport="8112"
        port="8112"
        weight="170"/>
  <node id="c85237f1-b6d7-cf98-bfef-d2a77b7e0181"
        address="192.168.1.13"
        apiport="8112"
        port="8112"
        weight="170"/>
</nodes>

The helper script auto-generated the node UUIDs. You may edit this file if needed, for example if your IP addresses are not sequential.

You may supply your own UUIDs in a comma-separated list, in which case the node count will be implied by the number of UUIDs:

/opt/circonus/bin/topo-helper -a 192.168.1.11 -w 2 -i <uuid>,<uuid>,<uuid>
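A comma-separated list for -i can be assembled up front. A minimal sketch, assuming a Linux host for the UUID source:

```shell
# Build a comma-separated list of 3 lowercase node UUIDs
uuids=""
for i in 1 2 3; do
  u=$(cat /proc/sys/kernel/random/uuid)
  uuids="${uuids:+$uuids,}$u"   # append with a comma separator
done
echo "$uuids"

# Then, for example:
#   /opt/circonus/bin/topo-helper -a 192.168.1.11 -w 2 -i "$uuids"
```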

If you wish to use DNS names instead of IP addresses, you can provide them in a file, one per line:

$ cat host_list.txt
myhost1.example.com
myhost2.example.com
myhost3.example.com

Then pass the filename to the helper script:

/opt/circonus/bin/topo-helper -A host_list.txt -n 3 -w 2

To configure a sided cluster, use the -s option. This will assign alternate nodes to side "a" or "b". If you wish to divide the list differently, you may edit the /tmp/topology.tmp file accordingly. If the -s option is omitted, clusters of fewer than 10 nodes will be non-sided. For clusters of 10 or more nodes, the helper script defaults to a sided configuration because of the significant operational benefits, described below.

When you are satisfied that it looks the way you want, copy /tmp/topology.tmp to /opt/circonus/etc/topology on each node, then proceed to the Import Topology step.

Sided Clusters

One additional configuration dimension is possible for IRONdb clusters. A cluster may be divided into two "sides", with the guarantee that at least one copy of each stored metric exists on each side of the cluster. For W values greater than 2, write copies will be assigned to sides as evenly as possible. Values divisible by 2 will have the same number of copies on each side, while odd-numbered W values will place the additional copy on the same side as the primary node for each metric. This allows for clusters deployed across typical failure domains such as network switches, rack cabinets or physical locations.

Even if the cluster nodes are not actually deployed across a failure domain, there are operational benefits to using a sided configuration, and as such it is highly recommended that clusters of 10 or more nodes be configured to be sided. For example, a 32-node, non-sided cluster with 2 write copies will have a partial outage of data availability if any 2 nodes are unavailable simultaneously. If the same cluster were configured with sides, then up to half the nodes (8 from side A and 8 from side B) could be unavailable and all data would still be readable.

Sided-cluster configuration is subject to the following restrictions:

  • Only 2 sides are permitted.

  • An active, non-sided cluster cannot be converted into a sided cluster as this would change the existing topology, which is not permitted. The same is true for conversion from sided to non-sided.

  • Both sides must be specified and non-empty (in other words, it is an error to configure a sided cluster with all hosts on one side).

To configure a sided topology, add the side attribute to each <node>, with a value of either a or b. If using the topo-helper tool in the previous section, use the -s option. A sided configuration looks something like this:

<nodes write_copies="2">
  <node id="7dffe44b-47c6-43e1-db6f-dc3094b793a8"
        address="192.168.1.11"
        apiport="8112"
        port="8112"
        side="a"
        weight="170"/>
  <node id="964f7a5a-6aa5-4123-c07c-8e1a4fdb8870"
        address="192.168.1.12"
        apiport="8112"
        port="8112"
        side="a"
        weight="170"/>
  <node id="c85237f1-b6d7-cf98-bfef-d2a77b7e0181"
        address="192.168.1.13"
        apiport="8112"
        port="8112"
        side="b"
        weight="170"/>
</nodes>
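Since both sides must be non-empty, it can be worth sanity-checking the layout before importing. A minimal sketch that counts nodes per side; it uses an inline sample layout, so point it at /tmp/topology.tmp in practice:

```shell
# Count nodes assigned to each side in a topology layout file.
# Sample data below; use /tmp/topology.tmp for a real check.
cat > topology.sample <<'EOF'
<nodes write_copies="2">
  <node id="7dffe44b-47c6-43e1-db6f-dc3094b793a8" address="192.168.1.11"
        apiport="8112" port="8112" side="a" weight="170"/>
  <node id="964f7a5a-6aa5-4123-c07c-8e1a4fdb8870" address="192.168.1.12"
        apiport="8112" port="8112" side="a" weight="170"/>
  <node id="c85237f1-b6d7-cf98-bfef-d2a77b7e0181" address="192.168.1.13"
        apiport="8112" port="8112" side="b" weight="170"/>
</nodes>
EOF

side_a=$(grep -c 'side="a"' topology.sample)
side_b=$(grep -c 'side="b"' topology.sample)
echo "side a: $side_a, side b: $side_b"
# Both sides must be non-empty for a valid sided topology
[ "$side_a" -gt 0 ] && [ "$side_b" -gt 0 ] && echo "sides OK"
```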

Import Topology

This step calculates a hash of certain attributes of the topology, creating a unique "fingerprint" that identifies this specific topology. It is this hash that IRONdb uses to load the cluster topology at startup. Import the desired topology with the following command:

/opt/circonus/bin/snowthimport \
  -c /opt/circonus/etc/irondb.conf \
  -f /opt/circonus/etc/topology

If successful, the output of the command will be "compiling to <long-hash-string>".

Next, update /opt/circonus/etc/irondb.conf and locate the topology section, typically near the end of the file. Set the value of the topology's active attribute to the hash reported by snowthimport. It should look something like this:

<topology path="/opt/circonus/etc/irondb-topo"
          active="742097e543a5fb8754667a79b9b2dc59e266593974fb2d4288b03e48a4cbcff2"
          next=""
          redo="/irondb/redo/{node}"
/>

Save the file and restart IRONdb:

/bin/systemctl restart circonus-irondb

Repeat the import process on each cluster node.
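The import-and-activate step can be scripted. This is a hypothetical sketch: it assumes snowthimport prints "compiling to <hash>" on success and that the topology stanza matches the pattern below, and it simulates both with sample data rather than running against a live node:

```shell
# On a real node, capture the actual command output instead:
#   out=$(/opt/circonus/bin/snowthimport -c /opt/circonus/etc/irondb.conf \
#           -f /opt/circonus/etc/topology)
out="compiling to 742097e543a5fb8754667a79b9b2dc59e266593974fb2d4288b03e48a4cbcff2"
hash=${out##* }   # take the last whitespace-separated field (the hash)

# Sample topology stanza; use /opt/circonus/etc/irondb.conf in practice.
cat > irondb.conf.sample <<'EOF'
<topology path="/opt/circonus/etc/irondb-topo"
          active=""
          next=""
          redo="/irondb/redo/{node}"
/>
EOF

# Set the active topology to the imported hash
sed -i "s|active=\"[^\"]*\"|active=\"$hash\"|" irondb.conf.sample
grep 'active=' irondb.conf.sample
```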

Verify Cluster Communication

Once all nodes have the cluster topology imported and have been restarted, verify that the nodes are communicating with one another by viewing the Replication Latency tab of the IRONdb Operations Dashboard on any node. You should see all of the cluster nodes listed by their IP address and port, and there should be a latency meter for each of the other cluster peers listed within each node's box.

The node currently being viewed is always listed in blue, with the other nodes listed in either green, yellow, or red, depending on when the current node last received a gossip message from that node. If a node is listed in black, no gossip message has been received from that node since the current node started. Ensure that the nodes can communicate with each other via port 8112 over both TCP and UDP. See the Replication Latency tab documentation for details on the information visible in this tab.

Updating

An installed node may be updated to the latest available version of IRONdb by following these steps:

Ubuntu:

On Ubuntu, a helper package works around issues with dependency resolution: IRONdb is very specific about the versions of its dependent Apica packages, and apt-get is unable to cope with this on its own. The helper package must be upgraded first; it cannot be upgraded in the same transaction as the main package.

/usr/bin/apt-get update && \
/usr/bin/apt-get install circonus-platform-irondb-apt-policy && \
/usr/bin/apt-get install circonus-platform-irondb && \
/bin/systemctl restart circonus-irondb

In a cluster of IRONdb nodes, service restarts should be staggered so as not to jeopardize availability of metric data. An interval of 30 seconds between node restarts is considered safe.
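The staggered restart interval described above can be scripted as a rolling loop. This is a dry-run sketch: the host_list.txt contents and the ssh invocation are assumptions, and the commands are printed rather than executed so the sequence can be reviewed first:

```shell
# Dry run of a staggered rolling restart across cluster nodes.
# host_list.txt: one node per line (sample data; use your real list).
cat > host_list.txt <<'EOF'
myhost1.example.com
myhost2.example.com
myhost3.example.com
EOF

while read -r host; do
  # Print the commands that would be run; drop the echoes once verified
  echo "ssh $host /bin/systemctl restart circonus-irondb"
  echo "sleep 30"   # safe interval between node restarts
done < host_list.txt
```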
