Importing Creator Owned GSS-TSIG records - Adaptive Applications - BlueCat Gateway - 22.1

BlueCat Distributed DDNS Administration Guide

This section provides a download for a script that automates the import of Creator Owned GSS-TSIG records into the Data Node. The script reads the records to import from a CSV file.

Starting in Distributed DDNS v22.1, the Creator Owned GSS-TSIG import script is included on the Data Node under the /usr/local/bin directory. If you cannot locate it, click here to download the script.
Note: When you download the script from the BlueCat Product Documentation Portal, the name of the zip file is a hashed value. Once you extract the zip file, the correct script name appears.

Before you begin

Once you have downloaded the script, you must copy it to the Data Node container and ensure that it is executable, as shown in the example after these steps:
  1. Copy the script to the Data Node container using the following command:
    docker cp <path_to_script> <database_node_container_name>:/usr/local/bin/
  2. Change the permissions of the script to ensure that it is executable using the following command:
    docker exec -it <database_node_container_name> chmod +x /usr/local/bin/import_creator_owned_records
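For example, assuming the script was extracted to the current working directory and the Data Node container is named ddns_data_node (both values are placeholders; substitute the path and container name used in your deployment), the commands might look like the following:
docker cp ./import_creator_owned_records ddns_data_node:/usr/local/bin/
docker exec -it ddns_data_node chmod +x /usr/local/bin/import_creator_owned_records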

Creating the CSV file

When creating the CSV file of the Creator Owned GSS-TSIG records to be imported, you must ensure that the following columns are present:
  • zone_name: the name of the zone in which the resource record resides.
  • rr_name: the name of the resource record.
  • ttl: the TTL value of the resource record.
  • rr_type: the resource record type.
    Note: The following resource record types are supported: A, AAAA, SRV, and CNAME.
  • rr_data: the resource record data. Depending on the type of resource record being configured, the rr_data content must be in one of the following formats:
    • A records: IPv4 address
    • AAAA records: IPv6 address
    • SRV records: <priority> <weight> <port> <target>
    • CNAME records: a canonical name
  • client_name: the name of the client.
  • client_domain: the domain of the client.
The following shows the contents of an example CSV file:
zone_name,rr_name,ttl,rr_type,rr_data,client_name,client_domain
example.com,rr1,1000,A,1.1.1.1,client1,example.com
example.com,rr2,1000,AAAA,2404:6800:4003:c03::8b,client2,example.com
example.com,rr3,1000,SRV,1 50 88 srv-target,client3,example.com
example.com,rr4,1000,CNAME,cname_data,client4,example.com
Once you have created the CSV file, copy it to the same directory on the Data Node in which the import script resides using the following command:
docker cp <path_to_CSV_file> <database_node_container_name>:/usr/local/bin/
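For example, if the CSV file is saved as records.csv and the Data Node container is named ddns_data_node (placeholder names used for illustration), the command might look like the following:
docker cp ./records.csv ddns_data_node:/usr/local/bin/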

Executing the script to import Creator Owned GSS-TSIG records

After copying the script to the Data Node and creating the CSV file that contains the records, you can execute the script to import the records into the Data Node.
  1. Add zones and Creator Owned GSS-TSIG Record permissions for each of the zones that you are importing records into. For more information, refer to Permissions.
  2. Log in to the console of the Data Node that contains the import script and CSV file.
  3. Navigate to the /usr/local/bin/ directory that contains the import script and CSV file.
  4. Import the Creator Owned GSS-TSIG records using the following command:
    import_creator_owned_records --file <CSV_file_name>
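For example, if the CSV file copied to the Data Node is named records.csv (a placeholder name), the command would be:
import_creator_owned_records --file records.csv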

Once you execute the script, it reads the input file and raises an exception for any row whose zone or Creator Owned GSS-TSIG record permissions do not exist in the database, then continues to the next row of the CSV file. For each record, the script sends a dynamic update to the primary DNS zone; if the dynamic update fails, the record is not imported into the database and the script continues to the next row. Once the script has completed successfully, INFO logs are displayed in the console and DEBUG logs are written to a log file in the working directory.