Reference: Audit data export configuration example - BlueCat Integrity - 9.5.0

Address Manager Legacy v1 API Guide

You can choose to export the audit data to an HTTP endpoint, Splunk server, Kafka cluster, or Elasticsearch server. If you are configuring audit data export to a Splunk server, ensure that you have the Splunk HTTP Event Collector (HEC) host and token information.
Attention:
  • Output to Kafka clusters and Elasticsearch servers can only be configured on Address Manager v9.5.0.
  • Starting in Address Manager v9.5.0, the audit data export service has been updated to output data as valid JSON that includes the hostname of the Address Manager server. This allows log management tools (such as Splunk servers) to properly parse the data as JSON, and helps users identify data sources in environments with multiple Address Manager servers. An illustrative event is shown after this list.
    Warning: Users with existing audit data export configurations may need to update the settings of their log management tool (data sink) after upgrading to v9.5.0 to ensure that messages continue to be received. If messages are no longer being received after the upgrade, ensure that the source and sink type are set to JSON and restart the log management tool.
  • The audit data export service stores event data in a buffer before it is exported to the HTTP, Splunk, Kafka, or Elasticsearch endpoint. If the service fails to export data to the endpoint, event data may be lost. If the service is enabled but not working, it consumes additional disk space to hold the audit data in the BAM database until the data is exported successfully to an external database.
  • Starting in Address Manager v9.5.0, the default number of Address Manager database rows sent per event has been reduced from 20 to 5 so that the default Splunk settings are not exceeded. However, if the Address Manager database table grows larger than the default, the number of rows sent per event scales upward accordingly to keep up with the table size. This means that the default Splunk limits may still be exceeded for large databases. Customers are advised to monitor audit data export output to ensure that Splunk settings allow for large amounts of data to be exported.
    The following Splunk settings can be modified to support the handling of large audit data export events (see the example stanza after this list):
    TRUNCATE—Defines the maximum number of characters per line; once the limit is reached, excess characters are dropped.
    MAX_EVENTS—Defines the maximum number of lines per multi-line event; once the limit is reached, the event is broken and additional lines are interpreted as new events (sometimes triggering detection of a new timestamp).
    A parameter can also be configured on Address Manager servers to specify an exact number of rows sent per event; refer to the next list item for more details.
  • Starting in Address Manager v9.5.0, a property can be configured on Address Manager servers to specify the exact number of database table rows to send per event, thus disabling the automatic increase. If such a configuration is required to avoid exceeding sink settings, contact Customer Care for assistance with configuration of this server property.
    Warning: With this property configured, the queue used to hold the data before it goes to the sink may grow on a busy system, increasing disk space usage. Users are advised to carefully monitor disk space if using this option.
  • When replicating the database for disaster recovery, ensure that the audit data export service is enabled on all BAMs before configuring replication. This ensures that the service and its settings are present on every BAM in replication, allowing failover to work and preventing the loss of audit data during failover.
  • If you have enabled database replication prior to configuring audit data export, contact BlueCat Customer Care for assistance with configuring audit data export in an existing replication environment.
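
The following is an illustration only of the general shape of an exported audit event in v9.5.0. Only the JSON formatting and the inclusion of the Address Manager hostname are documented behaviors; the field names shown are placeholders, and the exact payload depends on your deployment:

{
   "hostname":"<bam_hostname>",
   "timestamp":"<event_timestamp>",
   "data":"<audit_record_fields>"
}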
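
The TRUNCATE and MAX_EVENTS settings referenced above are typically raised in the Splunk props.conf stanza for the sourcetype that receives the audit data. The sourcetype name and values below are placeholders rather than recommended settings; choose limits appropriate to your Splunk deployment:

[<audit_sourcetype>]
# Maximum number of characters per line; characters beyond this limit are dropped
TRUNCATE = 100000
# Maximum number of lines per multi-line event; further lines start a new event
MAX_EVENTS = 2000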

Example http configuration

{
   "enable":true,
   "sinks":[
      {
         "type":"http",
         "uri":"https://10.0.0.1:9002",
         "healthCheck":true,
         "healthCheckUri":"http://10.0.0.1:9002/endpoint/healthcheck",
         "tls":{
            "caCert":"-----BEGIN CERTIFICATE-----\n<certificate_content>\n-----END CERTIFICATE-----",
            "verifyCertificate":false,
            "verifyHostname":false
         }
      }
   ]
}

Example splunk_hec configuration

{
   "enable":true,
   "sinks":[
      {
         "type":"splunk_hec",
         "healthCheck":true,
         "host":"https://192.168.218.178:8088",
         "token":"c7a1c0495dc64f6f844c3fa577ca7143",
         "tls":{
            "caCert":"-----BEGIN CERTIFICATE-----\n<certificate_content>\n-----END CERTIFICATE-----",
            "verifyCertificate":false,
            "verifyHostname":false
         }
      }
   ]
}

Example kafka configuration

{
   "enable":true,
   "sinks":[
      {
         "type":"kafka",
         "bootstrap_servers":"10.14.22.123:9092,10.14.23.232:9092",
         "topic":"topic-1234",
         "key_field":"user_id",
         "healthCheck":true,
         "tls":{
            "caCert":"-----BEGIN CERTIFICATE-----\n<certificate_content>\n-----END CERTIFICATE-----",
            "verifyCertificate":false,
            "verifyHostname":false
         }
      }
   ]
}

Example elasticsearch configuration

{
   "enable":true,
   "sinks":[
      {
         "type":"elasticsearch",
         "endpoint":"http://10.24.32.122:9000",
         "user":"user1",
         "password":"pass123",
         "index":"testIndex",
         "healthCheck":true,
         "tls":{
            "caCert":"-----BEGIN CERTIFICATE-----\n<certificate_content>\n-----END CERTIFICATE-----",
            "verifyCertificate":false,
            "verifyHostname":false
         }
      }
   ]
}

Parameters
  • enable—set to true to enable the audit data export service; set to false to disable it.
  • type—enter where the audit data will be exported. You can log data to an HTTP endpoint, Splunk server, Kafka cluster, or Elasticsearch server.
    If you enter http, enter the following additional parameters:
    • uri—enter the URI of the HTTP endpoint.
      Note:
      • BlueCat recommends entering the IP address of the endpoint in this field.
      • The URI for the uri field must follow the format outlined in RFC2396.
    • healthCheck—set to true to enable health check service; set to false to disable health check service. By default, the value is set to false.
    • healthCheckUri—enter the URI of the HTTP endpoint that will be consuming the health check information.
      Note: The URI for the healthCheckUri field must follow the format outlined in RFC2396.
    If you enter splunk_hec, enter the following additional parameters:
    • healthCheck—set to true to enable health check service; set to false to disable health check service.
      Note: When this option is enabled, the Address Manager server uses the default Splunk health check endpoint at /services/collector/health/1.0.
    • host—enter the URI of the Splunk HEC host. The standard format of the HEC URI in Splunk Enterprise is as follows:
      <protocol>://<FQDN or IP address of the host only>:<port>
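
      Example (using the host value from the sample configuration above): https://192.168.218.178:8088
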
      Note:
      • BlueCat recommends entering the IP address of the endpoint in this field.
      • Ensure that the HEC URI format is followed exactly as described above, without adding or omitting any pieces. The port is required, even if it is the default. Do not include extra slashes or path segments in the URI.
      • The URI for the host field must follow the format outlined in RFC2396.
    • token—enter the Splunk HEC token.
    If you enter kafka, enter the following additional parameters:
    • bootstrap_servers—enter a comma-separated list of host and port pairs for the Kafka brokers in a “bootstrap” Kafka cluster that a Kafka client connects to initially to bootstrap itself. This field supports IPv4, IPv6, and FQDN values.

      Example: 10.14.22.123:9092,10.14.23.232:9092

      Note:
      • BlueCat recommends using IP addresses in this field.
      • Do not include http or https in the addresses of the Kafka brokers.
    • topic—enter the name of the Kafka topic to write events to.
    • key_field—enter the log field name or tags key to use for the topic key. If the field does not exist in the log or in tags, a blank value will be used. If unspecified, the key is not sent. Kafka uses a hash of the key to choose the partition or uses round-robin if the record has no key. This field is optional.
    • healthCheck—set to true to enable health check service; set to false to disable health check service. Upon initialization, the health check ensures that the downstream service is accessible and can accept the audit data.
      Note: The health check URI is configured based on the Kafka broker address.
    If you enter elasticsearch, enter the following additional parameters:
    • endpoint—enter the Elasticsearch endpoint to send logs to. This field supports IPv4, IPv6, and FQDN values.

      Example: http://10.24.32.122:9000

      Example: https://example.com

      Example: https://user:password@example.com

      Note:
      • BlueCat recommends using the IP address of the endpoint in this field.
    • user—enter the basic authentication user name.
    • password—enter the basic authentication password.
    • index—enter the Elasticsearch index name to write events to.
    • healthCheck—set to true to enable health check service; set to false to disable health check service. Upon initialization, the health check ensures that the downstream service is accessible and can accept the audit data.
      Note: The health check URI is configured based on the Elasticsearch instance.
  • When configuring tls settings, enter the following parameters (an example with verification enabled is shown after this list):
    Attention: If you enter an HTTPS endpoint in the uri, healthCheckUri, host, bootstrap_servers, or endpoint field when configuring output, you must configure tls settings and enter TLS information.
    • caCert—enter the content of the CA certificate (trusted third party or self-signed) that will be used to authenticate the CA signature on the TLS server certificate of the remote host.
      Note: The CA certificate or certificate bundle must be in PEM format. To ensure a successful TLS handshake, the CA certificate provided to the client (BAM) should be the same CA certificate (and intermediate certificates if applicable) used by the server to authenticate the CA signature of its TLS server certificate. The CA certificate can be acquired via browser export or other trusted source, and converted to PEM format.
    • verifyCertificate—set to true to attempt a TLS handshake using the provided CA certificate with the remote host's TLS server certificate.
      Note: Verify Certificate does not verify the authenticity of the provided certificate. In this context, it only checks whether the CA certificate matches the TLS server certificate so that a successful handshake can be created.
      Note: If encountering errors with Verify Certificate, the CA/chain-CA certificates may have to be installed manually on the DNS/DHCP Server. Refer to KB-17944 on the BlueCat Customer Care portal for manual installation instructions.
    • verifyHostname—set to true to validate the hostname section of the URI against the CN (Common Name) or SAN (Subject Alternative Name) of the server certificate during the TLS handshake; set to false if you do not want to perform this validation.
      Note: If using self-signed certificates, users are advised to add a subject alternative name with the IP address (see RFC 5280, section 4.2.1.6), or set verifyHostname to false.
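
For example, a tls object with both verification checks enabled (the sample configurations above disable them) might look like the following. The certificate content is a placeholder, and the PEM line breaks are encoded as \n so that the value remains a single valid JSON string:

"tls":{
   "caCert":"-----BEGIN CERTIFICATE-----\n<certificate_content>\n-----END CERTIFICATE-----",
   "verifyCertificate":true,
   "verifyHostname":true
}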