Data Center: Deploying clustered Data Center on AWS and Azure


A multiple data center deployment implies a logical partitioning of all GWS nodes into segregated groups that use dedicated service resources, such as T-Servers, Stat Servers, and so on. The topology of a GWS Cluster can be thought of as a standard directory tree in which each leaf node is a GWS data center. The following diagram shows a GWS Cluster with two geographical regions (US and EU) and three GWS data centers (East and West in the US region, and EU as its own data center).
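To make the directory-tree analogy concrete, here is a minimal Python sketch that models the example cluster above and enumerates its leaf data centers as nodePath-style strings. The dict layout and helper function are illustrative, not part of GWS:

```python
# Minimal sketch: the GWS Cluster as a directory-style tree whose leaves
# are data centers. The names ("US", "EU", "East", "West") match the example
# in the text; the dict structure itself is an illustrative assumption.
cluster = {
    "US": {
        "East": [],   # leaf -> a GWS data center
        "West": [],
    },
    "EU": [],         # a region that is itself a single data center
}

def data_centers(tree, path=""):
    """Yield the nodePath of every leaf (data center) in the tree."""
    for name, child in tree.items():
        node_path = f"{path}/{name}"
        if isinstance(child, dict):
            yield from data_centers(child, node_path)
        else:
            yield node_path

print(list(data_centers(cluster)))  # ['/US/East', '/US/West', '/EU']
```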


You can find AWS Quick Start guides for Jira Software, Bitbucket, Jira Service Desk, and Confluence online, which use AWS CloudFormation templates to deploy Data Center on AWS.

We also have templates for Azure deployments. 
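As a rough illustration of the CloudFormation route, the sketch below launches a Quick Start stack with boto3. The stack name, template URL, and parameter key are placeholders rather than real Quick Start values; consult the relevant guide for the actual template and parameters:

```python
# Hedged sketch: launching a Data Center Quick Start CloudFormation stack
# with boto3. The stack name, template URL, and parameter key below are
# placeholders -- consult the Quick Start guide for the real values.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="jira-data-center",                      # illustrative name
    TemplateURL="https://example.com/quickstart.yaml", # placeholder URL
    Parameters=[
        {"ParameterKey": "ClusterNodeCount", "ParameterValue": "2"},  # assumed key
    ],
    Capabilities=["CAPABILITY_IAM"],  # Quick Starts typically create IAM roles
)
```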

Deploying clustered Data Center on your own hardware
Before you begin implementing clustered Data Center, it is important to consider the infrastructure required beyond the multiple application nodes. You will also need to provision and configure hardware, including a load balancer to share traffic between nodes, a shared file system for effective attachment and artifact management, and a database to manage metadata.

For an overview of Data Center architecture, see Jira Data Center or Clustering with Confluence Data Center as examples.

Load Balancer:

The load balancer is the first stop your requests make as they come in. Its purpose is to direct incoming traffic to the various application nodes within the cluster. You could configure it so that certain types of traffic are sent to particular nodes, so that certain teams have their own nodes, or in any other way that best meets your use case. Both hardware- and software-based load balancers are supported. The one requirement is that the load balancer be configured for cookie-based session affinity, or sticky sessions. This means that once a user enters the application, they remain on the same node for the entirety of their session.
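For example, on an AWS Application Load Balancer, sticky sessions can be enabled on the target group. The sketch below shows this with boto3; the target group ARN is a placeholder, and other load balancers have equivalent settings:

```python
# Hedged sketch: enabling cookie-based session affinity (sticky sessions)
# on an AWS Application Load Balancer target group with boto3. The target
# group ARN is a placeholder.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/dc-nodes/abc",  # placeholder
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},                     # ALB-managed cookie
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},  # 24 h sessions
    ],
)
```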

Note the following restrictions of the GWS Cluster architecture described earlier:

Only one Sync node is deployed within a GWS Cluster.
Each data center must have a dedicated list of Genesys servers, such as Configuration Servers, Stat Servers, and T-Servers.
The Cassandra Keyspace definition must comply with the number of GWS data centers (see the sketch after this list).
Each GWS data center must have its own standalone and dedicated Elasticsearch Cluster.
The GWS node identity must be unique across the entire Cluster.
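As an illustration of the Cassandra restriction, the keyspace's replication settings would name each GWS data center. This sketch uses the DataStax Python driver; the keyspace name, contact point, data center names, and replication factors are all illustrative assumptions:

```python
# Hedged sketch: defining a Cassandra keyspace whose replication settings
# list every GWS data center, using the DataStax Python driver. The keyspace
# name, contact point, data center names, and replication factors are
# illustrative.
from cassandra.cluster import Cluster

session = Cluster(["cassandra-seed-1"]).connect()  # placeholder contact point

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS gws
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'USEast': 3,   -- one entry per GWS data center
        'USWest': 3,
        'EU': 3
    }
""")
```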

Application Nodes:

The application nodes are where the actual Atlassian application lives. Each node has its own installation of the software. These nodes are configured in a cluster, acting as one, serving the application to your users. Each node in your Data Center cluster must run the same version of the application, and all nodes must reside in the same location. Keep in mind that Data Center pricing does not depend on the number of nodes you are using, meaning that you may have as many as you'd like without affecting your licensing costs. We have found that between two and four nodes is typically enough for nearly all organizations.

Database:

In Data Center, the main requirement is that the database be installed on its own node. Clustered database technology is supported and recommended, as it provides further resiliency for your system, although a clustered database is not required. Data Center supports the same databases as our Server offering, so be sure to consult the product's Supported platforms page to ensure your preferred database technology and version are supported.

Shared File System:

The shared file system is used by Data Center to store plugins, attachments, icons, user profiles, and avatars. This must be set up as its own node to be used by the Data Center deployment. You can use SAN, NFS, or NAS file sharing protocols for your shared file system, but note that distributed protocols such as DFS are not supported and will result in malfunction.
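One quick way to sanity-check the shared file system is to have each node append its identity to a marker file on the mount: on a correctly shared file system, every node sees every other node's entry. This is a sketch of the idea, not a vendor-supplied tool; the mount point and marker file name are illustrative:

```python
# Hedged sketch: verifying that every node sees the same shared file system.
# Run this on each node; the mount point and marker name are illustrative.
import socket
import time
from pathlib import Path

SHARED_HOME = Path("/mnt/shared-home")        # placeholder NFS/SAN mount point
marker = SHARED_HOME / "cluster-fs-check.txt"

# Append this node's identity; on a true shared file system every node
# appends to the same file.
with marker.open("a") as f:
    f.write(f"{socket.gethostname()} {time.ctime()}\n")

print(marker.read_text())  # should list every node that has run the check
```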

Disaster Recovery Infrastructure:

You can achieve disaster recovery by deploying an offsite DR system. This system will largely resemble the production system, but limited to one application node; once the DR system is up and running, you can add more application nodes. Next, implement a database replication strategy appropriate to the database technology you have chosen, to replicate your database from production to DR. Lastly, ensure the shared file system is also being replicated from production to DR. There are two ways to do this. The first is a standard replication process, in which the whole shared file system is replicated by a process you put in place. The second is to create a shared file system in DR and mount it on your production system; the application can then be configured to replicate the production file system to this mount automatically.
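A minimal sketch of the first option, a standard replication process you put in place yourself, is a scheduled rsync mirror of the production shared home to the DR site. The paths, host name, and interval here are assumptions to adapt:

```python
# Hedged sketch of the "standard replication process" option: periodically
# mirroring the production shared file system to the DR site with rsync.
# Paths, host names, and the interval are illustrative.
import subprocess
import time

SRC = "/mnt/shared-home/"          # production shared home (trailing slash: copy contents)
DST = "dr-host:/mnt/shared-home/"  # placeholder DR target reachable over SSH

while True:
    # --archive preserves permissions/timestamps; --delete keeps DR in sync
    # with deletions on production.
    subprocess.run(["rsync", "--archive", "--delete", SRC, DST], check=True)
    time.sleep(300)  # replicate every 5 minutes (tune to your RPO)
```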

When installing Data Center, we recommend starting with one application node to ensure the application is working as it should. When testing has confirmed proper functionality, add another application node to the Data Center cluster. At this point, test that the load balancer is directing traffic between the nodes properly; if so, the Data Center now has high availability. From here, you can add more nodes at any time if necessary.
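A simple way to test that traffic is being distributed is to issue a series of fresh (cookie-free) requests and count which node answered each one. The sketch below assumes the application reports the serving node in an X-Node-Id response header; that header name is an assumption, so substitute whatever your deployment actually exposes:

```python
# Hedged sketch: confirming the load balancer spreads fresh sessions across
# nodes. Assumes a node identifier is returned in an "X-Node-Id" response
# header -- an assumption, not a documented header.
from collections import Counter
import requests

BASE_URL = "https://jira.example.com/status"  # placeholder health-check URL

seen = Counter()
for _ in range(20):
    # A new Session each time means no sticky-session cookie is carried over,
    # so each request should be balanced independently.
    resp = requests.Session().get(BASE_URL, timeout=10)
    seen[resp.headers.get("X-Node-Id", "unknown")] += 1

print(seen)  # expect more than one node once the second node is in rotation
```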

We leave it up to you to choose which infrastructure to host your deployment on. Whether it’s bare metal servers, virtual machines, or a hosted environment, Data Center runs in whatever environment you prefer.

It may be worth noting that in a recent survey of Data Center customers, 85% of installations were at least partially virtualized. Infrastructure as a service is becoming more and more popular among advanced IT teams and is compatible with the Data Center deployment option. If you choose IaaS, however, ensure that all instances and services used by Data Center are as co-located as possible. This means that, to the best of your ability, all nodes are located in the same geographical location. For example, in AWS, ensure that all nodes are in the same region and subnet. This ensures Data Center will function properly.
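On AWS, a quick boto3 check like the following can confirm that all Data Center instances share one subnet (and therefore one region). The tag used to select the instances is an assumption; adjust the filter to match your environment:

```python
# Hedged sketch: checking with boto3 that all Data Center instances are
# co-located in a single subnet. The tag filter is an assumed convention.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Role", "Values": ["datacenter-node"]}]  # assumed tag
)["Reservations"]

subnets = {
    inst.get("SubnetId")
    for r in reservations
    for inst in r["Instances"]
}
print("co-located" if len(subnets) == 1 else f"spread across subnets: {subnets}")
```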

Development with Data Center

Interested in developing for your Data Center installation? Feel free to browse Developing for high availability and clustering in our Jira Server Developer Documentation.

If a disaster causes the Sync node to terminate or become unavailable, whether because of network issues or because the whole data center goes down, provisioning of any object in Configuration Server will not be reflected in the GWS Cluster until the Sync node is recovered. Other functionality related to agent activity is not affected in this case.

Configuration Server
The GWS Cluster Application object (typically named CloudCluster) in the Configuration Database must be configured with a specified location for each connection to Genesys servers, such as Configuration Server, Stat Server, T-Server, and so on. This setting defines which server instance is used by a GWS node, based on the node's position in the GWS Cluster. The resource visibility rule is based on comparing the node's nodePath attribute with the location specified on the connection.
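Interpreting that rule as a path-prefix match, which is an assumption based on the description above rather than documented behavior, a minimal sketch looks like this:

```python
# Hedged sketch of the resource visibility rule: a connection is taken to be
# visible to a GWS node when the location specified on the connection is a
# prefix of the node's nodePath. The prefix-matching interpretation is an
# assumption based on the surrounding text.
def is_visible(node_path: str, connection_location: str) -> bool:
    """Return True if a server at connection_location serves this node."""
    node_parts = node_path.strip("/").split("/")
    conn_parts = connection_location.strip("/").split("/")
    return node_parts[:len(conn_parts)] == conn_parts

print(is_visible("/US/West", "/US"))       # True: region-level server
print(is_visible("/US/West", "/US/West"))  # True: data-center-level server
print(is_visible("/EU", "/US/West"))       # False: different branch
```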


