====== YCE Exchange gateway and API ======

The YCE exchange gateway is intended for integrations with north-bound systems, although it can also be used to interface with peer systems. The gateway is an XML-based request-response system through which YCE can be instructed to perform an action or deliver information.

Several types of integrations have been realized to date, and due to its highly flexible and extensible implementation, new integrations can be added with relatively little effort.

At the core, each YCE server runs a service that accepts incoming requests to execute specific tasks. These tasks can be specific to YCE or customer-specific. The YCE API functions available include the preparation of (standardized) changes (e.g. adding new devices, setting up services, manipulating topology) as well as the scheduling of the provisioning of these changes and their monitoring. Customer-specific functions allow for interaction with YCE-connected systems like Infoblox to perform IPAM, DNS and DHCP tasks.

==== Authorization ====

The request header requires both a ''userid'' and a ''passwd''.

The password may be cleartext (not advised), the md5-hash from the NetYCE YCE.Users.Passwd column, or the des3-encrypted password that can be generated using the cli tool ''/...''.

The md5-hash taken from the indicated table cannot be self-generated since it is a hash created using a concatenation of the userid and a secret realm string.

==== Implementation ====

The Exchange or API system consists of two parts: a daemon and a series of plugins.

First there is the xch-daemon that permanently runs in the background to accept new requests from remote systems over the network. It listens on port 8888 by default and is available on any of the YCE servers of the installation.

The daemon can accept tcp socket calls over which it receives the request in XML format directly, but the method used most widely is the HTTP POST. In this case the XML formatted request is issued as a parameter of the POST. During the processing of the request the network connection is kept alive until a response is sent. Depending on the transaction type, the response is available immediately or can take several minutes.

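As an illustration of this transport, the sketch below submits an XML request document over HTTP POST from Python. Only the default port 8888 is taken from the text above; the server name, the URL path and the name of the POST parameter carrying the XML are placeholders/assumptions to be replaced with the values valid for your installation.

<code python>
import requests  # third-party HTTP client: pip install requests

# Placeholder endpoint: host, path and the POST parameter name "xml" are
# assumptions for illustration; only the default port 8888 is documented.
XCH_URL = "https://yce.example.net:8888/"

xml_request = """<task response="">
  <head userid="apiuser" passwd="secret" task_type="..." task_name="..." />
  <request />
</task>"""

# The connection stays open until the gateway returns its XML response,
# which may take from seconds up to several minutes depending on the task.
reply = requests.post(XCH_URL, data={"xml": xml_request}, verify=False, timeout=600)
print(reply.text)
</code>
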
The Exchange daemon is multi-threaded so that requests are processed in parallel. Up to 30 requests can be executed in parallel; any additional requests are queued until a slot is available. During the queuing the connection remains open. From the issuer's perspective these calls are identical, they just take a little longer.

The second part of the exchange gateway consists of the plugins. These plugins provide the actual implementation of the requests and are therefore highly modular and easily extensible. Most of the integrations between NetYCE and external NMS systems to date use xch-plugins. The various NetYCE API functions are also realized as xch-plugins.

The plugins currently available:

  * NetYCE command jobs - xch_jobs
  * NetYCE Service type and service task launcher - xch_st
  * NetYCE NCCM function - xch_nccm
  * NetYCE system maintenance functions - xch_system
  * Infoblox IPAM and DHCP provisioning - xch_ib_dhcp
  * Infoblox DNS provisioning - xch_ib_dns
  * Other modules are customer specific and deal with functions such as Maintenance Event suppression.

==== XCH configuration ====

Exchange plugins are registered in a configuration file, ''/...'', on the YCE server. Each section of this ini-file exposes one task to the xch server.

<code ini>
[system_status]
auth_agent = internal
user_level = 5
task_module = xch_system
task_sub = system_status

[system_fput]
auth_agent = internal
user_level = 5
task_module = xch_system
task_sub = fput

[system_get]
auth_agent = internal
user_level = 5
task_module = xch_system
task_sub = fget
</code>

In the section of the ini-file above, three different tasks are exposed to the xch server from the same module: system_status, system_fput and system_get. All three are intended for internal use only, which is reflected in the authorization agent that is to be used for these tasks. The plugin module is ''xch_system''.

<code ini>
[command_job]
auth_agent = yce
user_level = 2
task_module = xch_jobs
task_sub = command_job

[job_status]
auth_agent = yce
user_level = 3
task_module = xch_jobs
task_sub = job_status
</code>

In this example the command jobs are made accessible through the API. The submission of a job and the retrieval of the job-status are both registered in the plugin module ''xch_jobs''.
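
For a quick overview of which tasks a server exposes, the registration file can be read with any ini parser. A minimal sketch in Python; the file path is a placeholder since the actual location is installation specific.

<code python>
from configparser import ConfigParser

# Hypothetical path: substitute the actual location of the xch registration file.
XCH_INI = "/path/to/xch.ini"

config = ConfigParser()
config.read(XCH_INI)

# Each section name is a task exposed by the xch daemon.
for task in config.sections():
    module = config.get(task, "task_module", fallback="?")
    sub = config.get(task, "task_sub", fallback="?")
    agent = config.get(task, "auth_agent", fallback="?")
    print(f"{task}: module={module} sub={sub} auth={agent}")
</code>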

====== XCH Transaction types ======

===== Service type execution =====

==== Service task ====

Part of the YCE modeling is defined in Service types. A Service type mirrors in high detail the actions a designer performs when defining how a device must be connected or a service implemented. The process can be visualized as making a drawing of the design where nodes are added, lines are drawn, ports are assigned, vlans created and addresses mapped.

YCE uses Service types to define the standardized actions and have them executed by engineers or operators where the design (as modeled) allows them to do so. In this way a single click can result in an entire device layer being added and properly hooked up to the core devices, including all (management) IP addresses, vlan setup and port configurations of all devices involved.

The XCH Service task request allows remote systems to initiate the execution of a Service type (or service task). An example of such an XML request is shown below. The set of attributes provided is highly customizable; in the case below, the minimal set is used.

<code xml>
<task response="">
  <head
    passwd="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request
    client_type="..."
    service_class="..."
    service_type="..."
    service_task="..."
    client_code="..."
    site_code="..."
  />
</task>
</code>
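
Such a request document can also be assembled programmatically rather than templated as a string. A minimal sketch using Python's standard library; all attribute values are placeholders and must be replaced with names that exist in your YCE model and xch registration.

<code python>
import xml.etree.ElementTree as ET

def build_service_task(userid, passwd, task_name, task_type, request_attrs):
    """Assemble an XCH task document and return it as an XML string."""
    task = ET.Element("task", attrib={"response": ""})
    ET.SubElement(task, "head", attrib={
        "userid": userid,
        "passwd": passwd,
        "task_name": task_name,
        "task_type": task_type,
    })
    ET.SubElement(task, "request", attrib=request_attrs)
    return ET.tostring(task, encoding="unicode")

# Placeholder values only; use the service names defined in your model.
xml_doc = build_service_task(
    userid="apiuser", passwd="secret",
    task_name="...", task_type="...",
    request_attrs={
        "client_type": "...", "service_class": "...",
        "service_type": "...", "service_task": "...",
        "client_code": "...", "site_code": "...",
    },
)
print(xml_doc)
</code>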

==== Custom variables ====

In extension of the Service type above, custom variables can be inserted in the Service type using the API. The ''<custom>'' elements of the request carry these variables as name/value pairs.

The variable names of these custom variables in the API call can either be chosen freely or are pre-defined, depending on how the Service type is designed.

The Service type as defined above will need to be called by the API using the request below:

<code xml>
<task response="">
  <head passwd="..." task_name="..." task_type="..." userid="..." />
  <request
    client_type="..."
    service_class="..."
    service_task="..."
    service_type="..."
  >
    <custom name="..." />
    <custom name="..." />
    <custom name="..." />
    <custom name="..." />
    <custom name="..." />
    <custom name="..." />
    <custom name="..." />
    <custom name="..." />
    <custom name="..." />
  </request>
</task>
</code>

This method provides exact control over which API variable gets used in each of the Service type records, but will not allow the Service type to be used without the API.

A set of reserved names can be used where such a mix of API and front-end usage of the Service types is required. The Service types can then be designed with valid values included, which are replaced when the corresponding reserved custom variable is provided in the API call. The reserved variable names are named after their obvious use.

==== Retrieving information ====

The Service types API also allows information to be retrieved from the NetYCE network model.

By setting ''log_aliases'' in the request, the response will include the data-sets and aliases that the Service type located or created.

As an example, consider a Service type where a Node is located using Client_code and Site_code. The following XML call will retrieve the Client, Site, and Node data-sets.

<code xml>
<task response="">
  <head passwd="..." task_name="..." task_type="..." userid="..." />
  <request
    client_type="..."
    service_class="..."
    service_task="..."
    service_type="..."
    log_aliases="..."
  >
    <custom name="..." />
  </request>
</task>
</code>

<code xml>
<task response="">
  <head abort_on_error="..." error="" status="..." />
  <request
    client_type="..."
    log_aliases="..."
    request_id="..."
    service_class="..."
    service_task="..."
    service_type="...">
    <custom name="..." />
  </request>
  <response>
    <alias name="..." />
    <alias name="..." />
    <node Boot_loader="" />
    <port Bandwidth_down="" />
    <custom name="..." />
    <!-- remaining data-set elements omitted -->
  </response>
</task>
</code>
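
When the response arrives, the values of interest can be pulled from the returned XML. A minimal sketch assuming the layout shown above: a ''head'' element carrying the error/status attributes and ''alias'' elements carrying the resolved names; any further attribute names are installation specific.

<code python>
import xml.etree.ElementTree as ET

def summarize_response(xml_text):
    """Raise on a reported error and return the aliases of a service-task response."""
    root = ET.fromstring(xml_text)

    # The head element reports the status of the request as a whole.
    head = root.find("head")
    if head is not None and head.get("error"):
        raise RuntimeError(f"XCH request failed: {head.get('error')}")

    # Each <alias> element names a value resolved by the Service type;
    # return all of its attributes since their names vary per data-set.
    return {alias.get("name"): dict(alias.attrib) for alias in root.iter("alias")}
</code>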

===== Job execution =====

==== Command job ====

A command job is a generic tool to execute changes in the network. These changes are prepared in YCE using either the client(s) or remotely using the XCH Service task method. Once the modeled network has the desired change(s) incorporated, a job is used to generate the configuration commands and schedule them for the devices involved.

A range of tools is available for the operator to create these jobs. The Command job is the most versatile of these. The XCH Command_job task is its equivalent for remote use.

Standardized changes are available as "Stored jobs", requiring only the device (node) selection and the stored_job name. For non-standard jobs, the complete set of commands (or template names) can be specified in the job request. The same is true for the desired scenario: it is either defined in the stored job or can be defined in the job request (step by step or by task name).

The job request example below uses the full version where all available options are defined. Note that here a stored_job_name is still defined although both the command section and the scenario section are provided. In these cases the stored job functions as the default should one of these sections be left blank.

Because the commands and scenario sections support the full set of template and scenario syntaxes for parameter substitution and conditionals, highly flexible job requests can be defined.

<code xml>
<task response="">
  <head
    passwd="..."
    req_app="..."
    req_host="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request
    node_name="..."
    client_type=""
    site_code=""
    client_code="..."
    stored_job_name="..."
    sched_day="..."
    sched_time="..."
    sched_now="..."
    sched_queue="..."
    verbose_log="..."
  >
    <commands>
      <![CDATA[
! Change enable from '<...>' to '<...>'
enable secret <...>
!
      ]]>
    </commands>
    <scenario>
      <![CDATA[
Description <...>

Import_cfg -q -n <node> -f <node>.cmd -v
if Error
   LogAction -n <node> -a Command_job -m "..."
   stop
endif

Db_update -t SiteRouter -f Enable_secret -v '<...>'

Logaction -n <node> -a Command_job -m "..."
LogAction -n <node> -a Command_job -m "..."
      ]]>
    </scenario>
  </request>
</task>
</code>

The job request above lists the full version, not using any defaults. In the request below most of the defaults are used; only the node_name and the commands are specified. In this case the scenario used is the "Default command job". The example also demonstrates the use of the parameter substitution and conditional syntax in the commands section.

The default schedule time is 'tomorrow 05:05'. Other defaults are ''verbose_log="yes"'' and ''sched_now="no"''.

<code xml>
<task response="">
  <head
    passwd="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request
    node_name="..."
  >
    <commands>
!
my hostname is <...>
!
|hostname = '...
!
    </commands>
  </request>
</task>
</code>

Sample response (full):

<code xml>
<task response="">
  <head
    error=""
    passwd="..."
    sched_status="..."
    status="..."
    task_name="..."
    task_type="..."
    user_func="..."
    user_level="..."
    user_name="..."
    userid="..."
  />
  <request
    auth_agent="..."
    client_level="..."
    commands="
! my hostname is <...>
!
|hostname = '...
!"
    group_id="..."
    node_name="..."
    operator="..."
    sched_day="..."
    sched_time="..."
    stored_job_description="..."
    stored_job_name="..."
    task_module="..."
    task_sub="..."
    user_level="..."
    verbose_log="..."
  />
  <response
    client_code="..."
    client_type="..."
    commands="
! my hostname is <...>
!
|hostname = '...
!"
    job_descr="..."
    jobid="..."
    node_fqdn="..."
    node_name="..."
    node_type="..."
    scenario="
Description <...>
Command_job... task = Command_job "
    sched_job="..."
    sched_queue="..."
    sched_req="..."
    site_code="..."
    vendor_type="..."
    verbose_log="..."
  />
</task>
</code>

=== Parameters for a stored_job_name ===
When using a stored_job, additional parameters may be provided. These will be treated as if they were parameters defined in the stored job itself.
> NOTE: They will not override values that are already set!


<code xml>
<task response="">
  <head
    abort_on_error="..."
    passwd="..."
    req_app="/..."
    req_host="..."
    request_id="..."
    task_name="..."
    task_type="..."
    userid="..."
    usr_type="..."
    xml_decode="..."
  />
  <request
    change_id=""
    client_type=""
    commands=""
    description=""
    evaluate="..."
    node_name="..."
    sched_day="..."
    sched_epoch=""
    sched_now="..."
    sched_queue="..."
    sched_server="..."
    sched_time="..."
    stored_job_name="..."
    verbose_log="..."
    scenario="">
    <parameters
      parameter2="..."
      some_name="..."
    />
    <!-- further request elements omitted -->
  </request>
</task>
</code>


==== Basic Command job ====
The basic command job API call is exactly the same as the Command job described above.

**task_name** is set to ''...''.

A basic command job can point to either CMDB nodes (the default) or YCE nodes.


==== Job status ====

The results of any job can be retrieved using its jobID. While the job is in the RUNNING state, the details will keep pace with its progress. The job results can be retrieved from any YCE server once the job has become active.

<code xml>
<task response="">
  <head
    passwd="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request
    jobid="..."
  />
</task>
</code>

Sample response (full):

<code xml>
<task response="">
  <head
    error=""
    passwd="..."
    status="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request
    auth_agent="..."
    jobid="..."
    status_timestamp="..."
    task_module="..."
    task_sub="..."
    user_level="..."
    xch_server="..."
  />
  <response
    job_state="..."
    jobid="..."
    log_details="
Tasks: Command_job 03-Import_cfg (-q -n TESTRN01001 -f TESTRN01001.cmd -v)
00-ARGUMENTS
Command: import
Starting import on TESTRN01001
Session stopped
Node TESTRN01001 is unreachable at 10.10.62.192.
Aborted 2013-09-12 16:20:09 10.34.62.192 finished with Errors
ERROR import_cfg failed: Node TESTRN01001 is unreachable at 10.10.62.192.
Aborted 05-Logaction (-n TESTRN01001 -a Command_job -m "
Failed executing commands"
06-Stop ()
2013-09-12 16:20:09 ABORTED after 8 seconds "
    log_head="..."
    log_info="..."
    log_server="..."
    log_tail="..."
    log_timestamp="..."
    operator="..."
  />
</task>
</code>

If the job exists but still awaits execution by the scheduler, there are no job results yet; schedule information can, however, be retrieved if the XCH request was directed at the server where the job was scheduled.

The results for a job scheduled, but not yet active:

<code xml>
<task response="">
  <head
    error=""
    passwd="..."
    status="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request
    auth_agent="..."
    jobid="..."
    task_module="..."
    task_sub="..."
    user_level="..."
  />
  <response
    job_status="..."
    jobid="..."
    log_info="..."
    operator="..."
    sched_job="..."
    sched_queue="..."
    sched_start=""
  />
</task>
</code>
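
To wait for a job to finish, the job_status task can simply be polled until a final state is reported. A minimal sketch; the endpoint URL, the POST parameter name, the task_type/task_name values and the set of final state names are assumptions to verify against your installation.

<code python>
import time
import xml.etree.ElementTree as ET
import requests  # third-party: pip install requests

XCH_URL = "https://yce.example.net:8888/"   # placeholder endpoint

REQUEST_TEMPLATE = """<task response="">
  <head userid="{user}" passwd="{passwd}" task_type="job_status" task_name="job_status" />
  <request jobid="{jobid}" />
</task>"""

def poll_job(jobid, user, passwd, interval=30):
    """Poll the job_status task until the job reaches a final state."""
    while True:
        doc = REQUEST_TEMPLATE.format(user=user, passwd=passwd, jobid=jobid)
        reply = requests.post(XCH_URL, data={"xml": doc}, verify=False, timeout=120)
        root = ET.fromstring(reply.text)

        # The state is reported as a job_state/job_status attribute in the response.
        state = ""
        for el in root.iter():
            state = el.get("job_state") or el.get("job_status") or state

        # Assumed final states; ABORTED appears in the sample response above.
        if state.upper() in ("COMPLETED", "FINISHED", "ABORTED", "FAILED"):
            return state, reply.text
        time.sleep(interval)
</code>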

===== Reports =====

==== Fetch report ====

Previously generated custom reports can be retrieved using the API. The filename or query name is the only required attribute to fetch the CSV report and have it converted to XML.

=== using the URL ===

All reports are created in a CSV format and are converted to html (when viewing) or XML (for the API) when needed. If the original CSV is required, the download link is included when viewing the report. Generated reports can be downloaded as a CSV file directly using the URL below. Note that the file path is case sensitive but the report-name is not. To download the file using DOS formatting, append the ''&...'' argument to the URL.

Created reports are deleted automatically after 30 days, or after the period in days defined by the Lookup tweak ''...''.

<code>
https://<...
</code>
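
For scripted retrieval of the raw CSV, a plain HTTPS GET of that download URL is sufficient. A minimal sketch; the URL below is a placeholder and must be copied from the download link shown when viewing the report.

<code python>
import requests  # third-party: pip install requests

# Placeholder URL: take the actual link from the report viewer and remember
# that the file path is case sensitive (the report name itself is not).
report_url = "https://yce.example.net/path/to/MyReport.csv"

reply = requests.get(report_url, verify=False, timeout=60)
reply.raise_for_status()

with open("MyReport.csv", "wb") as fh:
    fh.write(reply.content)
</code>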

=== using XCH API ===

To retrieve the CSV report in XML format using the API, the ''report_name'' attribute must be provided. It can take either of two forms:

  * the case-insensitive custom report name
  * the path and filename of the custom report csv file

The latter format (the path and filename) refers directly to the csv file as it was created on the server.

If the report settings indicate it may not be overwritten, the report file will have the date appended to its name.

Sample request:

<code xml>
<task response="">
  <head
    userid="..."
    passwd="..."
    task_type="..."
    task_name="..."
  />
  <request
    report_name="..."
  />
</task>
</code>

The request results in the response below. Note that each row in the report is represented as a hash keyed with ''row_<nn>'', where <nn> is the two-digit row number.

Included are the timestamp of the generated report and the number of rows. The report columns are listed in their original (sql) order under a separate key.

Column names starting with a digit do not comply with the XML tag format and will automatically be protected by prepending a fixed string.

<code xml>
<task response="">
  <head error="">
    <!-- report metadata: timestamp, row count, column list -->
  </head>
  <request auth_agent="..." />
  <response>
    <row_01 ClientCode="..." />
    <row_02 ClientCode="..." />
    <row_03 ClientCode="..." />
    <row_04 ClientCode="..." />
    <row_05 ClientCode="..." />
    <row_06 ClientCode="..." />
    <row_07 ClientCode="..." />
    <row_08 ClientCode="..." />
    <row_09 ClientCode="..." />
    <row_10 ClientCode="..." />
    <row_11 ClientCode="..." />
    <row_12 ClientCode="..." />
  </response>
</task>
</code>
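
A response in this shape is easy to post-process. The sketch below collects the ''row_nn'' elements back into a list of dictionaries and writes them out as CSV again; it only assumes that the rows appear as attributes on elements whose tag starts with ''row_'', as in the sample above.

<code python>
import csv
import xml.etree.ElementTree as ET

def report_rows(xml_text):
    """Return the report as a list of {column: value} dicts, in row order."""
    root = ET.fromstring(xml_text)
    rows = [el for el in root.iter() if el.tag.startswith("row_")]
    rows.sort(key=lambda el: el.tag)          # row_01, row_02, ...
    return [dict(el.attrib) for el in rows]

def write_csv(rows, path):
    """Write the rows back out as a CSV file (column order taken from the first row)."""
    if not rows:
        return
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
</code>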

==== Run report ====

Similar to the fetch report request, but the named custom report is first (re)generated before its data is returned.

Since custom reports can be defined not to overwrite any results from previous days, the resulting csv report will have the date appended to the report name.

The response message will be identical to the fetch report response.

Sample request:

<code xml>
<task response="">
  <head
    userid="..."
    passwd="..."
    task_type="..."
    task_name="..."
    log_level="..."
  />
  <request
    report_name="..."
  />
</task>
</code>


====== Infoblox DNS ======

==== IPAM and DNS report ====

<color orange> >> This section has been replaced by the article on the [[guides:...]] page << </color>

==== Infoblox DNS registration ====

<color orange> >> This section will shortly be replaced by the article on the [[guides:...]] page << </color>

=== Add Host ===

The add_host request finds and allocates an IP-address for a new host name in a pre-existing zone. A free IP-address is located in the included set of IPAM subnets, where 'free' means that no DNS entry exists and the address is not part of a DHCP range. A new 'Host'-type DNS record is created by default, or an A-record if specified. When aliases are specified, those are added to the host record or created as Cname-records as appropriate.

<code xml>
<task response="">
  <head
    passwd="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request action_type="add_host">
    <host
      comment="..."
      host_domain="..."
      host_name="..."
      record_type="..."
      request_id="..."
    >
      <!-- subnet and alias elements -->
    </host>
  </request>
</task>
</code>

The changes are made directly to the live DNS GridMaster. The allocated ip-address and the registered DNS entry are returned. The task rejects non-existing zones and applies restrictions on the hostnames; e.g. no dotted hosts, no hosts starting with a numeric digit and no use of underscores.

In the request, the host name and zone are provided as the two attributes 'host_name' and 'host_domain'.

Multiple host requests may be included in the task. Each is expected to have a unique request_id (within the task). These hosts are processed in sequence before the task responds.
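
Because a single task may carry several host requests, it is convenient to build the document programmatically. A minimal sketch with ElementTree; the head values are placeholders, and the way aliases are passed as child elements of ''<host>'' is an assumption to verify against your installation.

<code python>
import xml.etree.ElementTree as ET

def build_add_host_task(userid, passwd, hosts):
    """Build an add_host task with one <host> element per entry in *hosts*.

    Each entry is a dict with host_name, host_domain and an optional list of
    aliases; a unique request_id (within the task) is assigned automatically.
    """
    task = ET.Element("task", attrib={"response": ""})
    ET.SubElement(task, "head", attrib={
        "userid": userid, "passwd": passwd,
        "task_type": "...", "task_name": "...",   # placeholders
    })
    request = ET.SubElement(task, "request", attrib={"action_type": "add_host"})
    for idx, entry in enumerate(hosts, start=1):
        host = ET.SubElement(request, "host", attrib={
            "request_id": str(idx),               # unique within the task
            "host_name": entry["host_name"],
            "host_domain": entry["host_domain"],
        })
        for alias in entry.get("aliases", []):
            # Assumed element/attribute names for aliases.
            ET.SubElement(host, "alias", attrib={"name": alias})
    return ET.tostring(task, encoding="unicode")
</code>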

=== Add alias ===

The add alias request updates an existing host record to include the aliases listed in the request. Existing or overlapping aliases are ignored. The response lists the resulting set of aliases. When no aliases are provided in the request, the existing set of aliases for this host is listed.

<code xml>
<task response="">
  <head
    passwd="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request action_type="add_alias">
    <host
      comment="..."
      host_domain="..."
      host_name="..."
      request_id="..."
    >
      <!-- one element per alias to add -->
    </host>
  </request>
</task>
</code>

=== Clear alias ===

The clear_alias request updates an existing host record to remove the aliases listed in the request. Existing aliases named in the request are removed, others are ignored. The response lists the resulting set of aliases. When no aliases are provided in the request, the existing set of aliases for this host is listed.

<code xml>
<task response="">
  <head
    passwd="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request action_type="clear_alias">
    <host
      comment="..."
      host_domain="..."
      host_name="..."
      request_id="..."
    >
      <!-- one element per alias to remove -->
    </host>
  </request>
</task>
</code>

=== Clear host ===

The clear host request removes the host record, including all its ip-addresses and aliases, if it is a host-record. When the DNS name belongs to an A-record or Cname, the appropriate record is removed from the DNS.

<code xml>
<task response="">
  <head
    passwd="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request action_type="clear_host">
    <host
      comment="..."
      host_domain="..."
      host_name="..."
      request_id="..."
    >
    </host>
  </request>
</task>
</code>

===== Infoblox IPAM and DHCP =====

==== Client IPAM tree and DHCP configuration ====

The IPAM and DHCP report is used to feed an IPAM and/or DHCP configuration tool. YCE includes such a tool for Infoblox, where this report is used internally, but the report can also be used externally.

The report requires the name (ClientCode) of a YCE client that is fully modeled and uses the YCE ip-plan(s). Combined with the information found in an IPAM definition table within YCE, a report is generated in which both the IPAM subnet tree and the associated DHCP scopes are fully defined. The DHCP definition includes the (customer defined) options and their calculated values.

The intended use for the report is to automate the (Infoblox) IPAM subtree and DHCP scope provisioning. When, for example, an operator adds a new location or some devices requiring ip-subnets, these are automatically assigned using the YCE ip-plans for this customer and used in the respective configurations. Next, the operator initiates the IPAM/DHCP update function for this customer, which results in the assigned subnets being added to the IPAM tree and the required DHCP scopes being activated, including all their options. Similarly, when removing or freeing a subnet, the same process removes the DHCP definitions and returns the subnet to the 'free' pool.

==== IPAM/DHCP tree ====

This report can also be extended to include the IPAM trees of all clients in a client type.

<code xml>
<task response="">
  <head
    passwd="..."
    task_name="..."
    task_type="..."
    userid="..."
  />
  <request
    client="..."
    client_type=""
  />
</task>
</code>

Sample report, showing a single subnet record out of tens of thousands:

<code xml>
<tree
  ddns="..."
  line_number="..."
  net_address="..."
  net_mask="..."
  net_name="..."
  net_options="..."
  net_size="..."
  net_tier="..."
  net_type="..."
  site_type=""
>
  <!-- further subnet elements omitted -->
  <option
    option_name="..."
    option_number="..."
  />
  <option
    option_name="..."
    option_number="..."
    option_val="..."
  />
  <option
    option_name="..."
    option_number="..."
    option_val="..."
  />
  <option
    option_name="..."
    option_number="..."
  >
    <!-- option values omitted -->
  </option>
  <option
    option_name="..."
    option_number="..."
    option_val="..."
  />
  <option
    option_name="..."
    option_number="..."
    option_val="..."
  />
  <option
    option_name="..."
    option_number="..."
    option_space="..."
  >
    <!-- option values omitted -->
  </option>
</tree>
</code>