The ExtraView – ExtraView peer daemon synchronizes issues between two or more ExtraView servers in near real time. The configuration allows you to control which issues are synchronized, based on specific criteria. For example, you may synchronize all issues within a Business Area, or all issues within a Business Area that have a specific Status value. To support this, specific metadata can be mapped between the ExtraView instances, for example creating values in the target instance that exist in the source but do not yet exist in the target.
The configuration is accessed and managed via an ExtraView task, which must be configured and running before the synchronization operates. More than one integration task may be configured if your requirement is to synchronize more than two ExtraView instances. You may even configure more than one task to synchronize the same two databases, for example with different tasks used to map issues between different business areas.
The synchronization supports all field types stored with issue data in the database, including file attachments. Relationships and repeating row records may also be synchronized. The daemon synchronizes specific metadata objects such as list values and allowed value combinations. However, metadata such as field definitions, layouts, and business rules is not synchronized; these objects are the domain of the XML export / import feature. Overall, issue data plus the metadata that directly supports that issue data is replicated between two ExtraView instances.
General Features
- The administrator is able to map user defined fields (UDFs) and their values between two ExtraView instances. It is not essential for the instances to have identical configurations
- It is possible to use the integration to synchronize issues from one ExtraView business area to another business area within the same instance
- List values for UDFs are replicated only when necessary: if a list value for a mapped list field does not exist in the target, the value and its title are added to the target metadata for that field; otherwise no metadata change is made (see the list value sketch following this list)
- Inbuilt field metadata is not synchronized. The issue data for inbuilt fields is moved, but the metadata values are not replicated; e.g., new STATUS field values are not inserted in a target instance. The inbuilt fields affected by this limitation are AREA, CATEGORY, MODULE_ID, PRIORITY, PRODUCT, PRODUCT_LINE, PROJECT, RESOLUTION, SEVERITY_LEVEL, and STATUS
- Allowed values on UDFs are replicated only when necessary. If field metadata changes are made to the target, these changes are reflected in the allowed values, if any, for that field. For example, if a new list value is added and it is a child of an allowed value, then a new allowed value is added to the allowed value type
- Relationship groups may be mapped between source and target instances. Mapped relationship groups must exist on both the source and target instances. The daemon may be configured to move relationship group issue relationships even if the underlying issues are not touched
- Attachments may be mapped between source and target instances, and are then replicated on the target instance. The daemon moves attachments even if the underlying issues are not touched
- Repeating row types may be mapped between source and target instances. Since repeating rows are defined by Layout Types, this in essence consists of a mapping of Layout Types
- An important aspect of the synchronization is that all updates made to an issue over time are synchronized, not just the current values within the issue
- When replicating a user to a target, matching is performed on the LOGIN_ID as well as the SECURITY_USER_ID, in order to handle users whose SECURITY_USER_ID matches but whose LOGIN_ID differs. Note that a target instance may have a user with the same SECURITY_USER_ID who represents a different person; no safeguard against this situation is implemented at this time
- Field security is enforced on issue updates at the target. Fields that are not writeable by the role under which the daemon is executing cannot be changed by the daemon; attempts to change them generate an error
- The ID field is not mapped from the source to the target instance. Each instance keeps track of its own unique ID field values for all issues
- ALT_ID across all instances: the daemon supports the transfer of a common ALT_ID that refers to the same issue across all connected instances. This primarily means that the ALT_ID is set by the source issue and not automatically generated in the target. Maintaining uniqueness is largely the responsibility of the administrator, e.g. by configuring unique prefixes for IDs within the different instances
- Errors and warnings may be reported from the daemon via email to a specific person or a user role
- Mapping configuration properties generally use regular expressions (regexes); see the regex mapping sketch following this list
- Filters defining which issues to replicate are configured using an expression language with substitutable variables (see the filter sketch following this list)
- Issues are migrated at the approximate rate of one per second, dependent upon the speed of the hardware and the complexity of the records being synchronized. For most purposes this is perfectly adequate to keep multiple databases synchronized in near real time. However, take this rate into account if the requirement is to use the task to perform an initial bulk population of data from one instance into another separate instance; at one issue per second, for example, an initial load of 100,000 issues takes on the order of 28 hours
- The task manager provides the capability to start or restart the synchronization from any time in the past.
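
To make the list value replication rule concrete, here is a minimal Python sketch of the general logic. It is an illustration only: the `TargetInstance` class and its methods are hypothetical stand-ins, not the actual ExtraView API.

```python
# List value sketch: "replicate only when necessary" for mapped list
# fields. TargetInstance and its methods are hypothetical stand-ins.

class TargetInstance:
    def __init__(self):
        # field name -> {value: title}
        self.list_values = {"SUPPORT_LEVEL": {"GOLD": "Gold"}}
        # (parent_value, child_value) allowed-value combinations
        self.allowed_values = set()

    def has_value(self, field, value):
        return value in self.list_values.get(field, {})

    def add_list_value(self, field, value, title):
        self.list_values.setdefault(field, {})[value] = title

def replicate_list_value(target, field, value, title, parent=None):
    """Create a list value in the target only if it is missing there."""
    if target.has_value(field, value):
        return  # value already exists: no metadata change is made
    target.add_list_value(field, value, title)
    if parent is not None:
        # The new value is a child of an allowed value, so the matching
        # allowed-value combination is added to the target as well.
        target.allowed_values.add((parent, value))

target = TargetInstance()
replicate_list_value(target, "SUPPORT_LEVEL", "PLATINUM", "Platinum",
                     parent="ENTERPRISE")
```

This mirrors the list value and allowed value bullets above: both are created in the target only when the synchronization actually needs them.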
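The regex-driven value mapping can be pictured as a list of pattern/replacement pairs applied to each source value. The regex mapping sketch below illustrates the idea in Python; the field, patterns, and replacements are invented for this example, and the real configuration property names and syntax are defined in the task configuration, not here.

```python
import re

# Illustrative regex value map for a STATUS-like field: each
# (pattern, replacement) pair rewrites a source value into its
# target equivalent. The patterns are invented for this example.
STATUS_MAP = [
    (re.compile(r"^OPEN$"), "NEW"),
    (re.compile(r"^FIXED.*$"), "RESOLVED"),
    (re.compile(r"^CLOSED(_.*)?$"), "CLOSED"),
]

def map_value(source_value):
    for pattern, replacement in STATUS_MAP:
        if pattern.match(source_value):
            return pattern.sub(replacement, source_value)
    return source_value  # unmapped values pass through unchanged

assert map_value("FIXED_IN_BUILD_1.2") == "RESOLVED"
assert map_value("DUPLICATE") == "DUPLICATE"
```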
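Similarly, the issue filters substitute field variables into an expression before evaluating it against each issue. The filter sketch below mimics that idea in Python; the `$FIELD$` variable syntax and the use of `eval()` are assumptions made for this toy example and do not describe ExtraView's actual expression language.

```python
import re

def evaluate_filter(expression, issue):
    """Substitute $FIELD$ variables with issue values, then evaluate.

    The $FIELD$ syntax is an assumption for this illustration; the
    real expression language's syntax may differ.
    """
    substituted = re.sub(r"\$(\w+)\$",
                         lambda m: repr(issue.get(m.group(1), "")),
                         expression)
    return bool(eval(substituted))  # acceptable only in a toy example

issue = {"AREA": "Support", "STATUS": "Open"}
replicate = evaluate_filter("$AREA$ == 'Support' and $STATUS$ != 'Closed'",
                            issue)
print(replicate)  # True: this issue matches the filter and would replicate
```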
Caveats
- Each instance should have the same locales defined; it is good practice to keep the locales identical within each instance. If the target does not have a locale that exists in the source, the update of that localization does not occur, although updates of all other localizations that do exist are made.
- Synchronization of behavior settings is not supported
- All databases must have ALT_ID defined if the common ALT_ID feature is required
- Mapped entities must have their metadata predefined on each instance. The daemon does not replicate metadata, except for the list values required for the synchronization along with the allowed values needed to support those list values
- Any issue that is currently being held on the source instance within the SAVE POINT feature is not mapped to the remote instance, as there is no certainty that the transaction will be completed
- The integration process does not support the moving of rankings between installations.
The Integration Process
- Upon initialization of the daemon, and again whenever the configuration is saved, all list value mappings are checked and validated in order to avoid transaction failures. With each issue update or add transaction from the source instance, metadata updates are performed first, followed by the issue data synchronization (see the metadata-first sketch at the end of this list)
- When validating mapped fields, the ExtraView integration daemon is not limited by the default Area and Project concept, nor by the Add, Edit, and Detailed Report layouts of the user accounts used to access the two ExtraView instances. Any valid, accessible field in the Data Dictionary may be mapped
- The ExtraView URL setting (the EVURL_FIELD), which is required by other integration daemons, is optional for this daemon
- The same ExtraView instance can be specified as both the source and the target instance in order to synchronize records between different Business Areas and/or Projects
- An administration tool within the Task Management utility configures the integrations. Multiple integration daemons may be created, so that integrations between multiple ExtraView instances are supported. Key settings for the integration, such as the sign-on credentials for each instance, the frequency of running the integration, and the mappings of fields and values, are managed within this tool
- Because the integration runs as a background process, email notifications are generated for errors encountered during the integration; these inform administrators of problems that need to be addressed. Optional emails may be generated to indicate successful synchronizations. All of these notifications are logged in the SYSTEM_LOG table as well as in the application server log file
- When the integration processes issues that contain relationships to other issues which are themselves processed and mapped to a second server, the parent issues are migrated before the related issues (see the ordering sketch below).
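
The metadata-first sketch below illustrates the processing order described in the first bullet of this section: any metadata an issue needs is created in the target before the issue data itself is written. All names and data here are illustrative, not the daemon's real internals.

```python
# Metadata-first sketch: metadata updates precede the issue write.
target_metadata = {"SUPPORT_LEVEL": {"GOLD": "Gold"}}  # field -> {value: title}
target_issues = {}                                     # issue id -> field values

transactions = [
    {"issue_id": 101,
     "new_list_values": [("SUPPORT_LEVEL", "PLATINUM", "Platinum")],
     "fields": {"STATUS": "OPEN", "SUPPORT_LEVEL": "PLATINUM"}},
]

def process(txn):
    # Step 1: metadata updates are performed first, so the subsequent
    # issue write cannot fail on a missing list value.
    for field, value, title in txn["new_list_values"]:
        target_metadata.setdefault(field, {})[value] = title
    # Step 2: the issue data synchronization itself.
    target_issues[txn["issue_id"]] = dict(txn["fields"])

for txn in transactions:
    process(txn)
print(target_issues)  # {101: {'STATUS': 'OPEN', 'SUPPORT_LEVEL': 'PLATINUM'}}
```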
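The parent-before-related rule amounts to a topological ordering of the relationship graph. The ordering sketch below shows the idea using Python's standard graphlib; the relationship data is invented, and the daemon's real scheduling is internal and may differ in detail.

```python
from graphlib import TopologicalSorter

# child issue -> parent issues it is related to (invented example data)
relationships = {
    2001: {1001},
    2002: {1001, 1002},
    1001: set(),
    1002: set(),
}

# static_order() yields each issue only after all of its parents,
# so parent issues are always migrated first.
migration_order = list(TopologicalSorter(relationships).static_order())
print(migration_order)  # e.g. [1001, 1002, 2001, 2002]
```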