
Prerequisites

Requirements

To deploy Atlassian's Data Center products, you'll need:

  1. An understanding of Kubernetes and Helm concepts.
  2. kubectl v1.21 or later; your client version must be compatible with your cluster's Kubernetes version.
  3. helm v3.3 or later.
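
You can sanity-check both tools from your shell (standard version commands; the output format varies by release):

    # Client must be v1.21 or later and compatible with your cluster
    kubectl version --client

    # Helm must be v3.3 or later
    helm version --short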

Environment setup

Before installing the Data Center Helm charts, you need to set up your environment:

  1. Create and connect to the Kubernetes cluster
  2. Provision an Ingress Controller
  3. Provision a database
  4. Configure a shared-home volume
  5. Configure a local-home volume

Elasticsearch for Bitbucket

We highly recommend using an external Elasticsearch installation for Bitbucket. When you run more than one Bitbucket node, a separate Elasticsearch cluster is required for code search to work. See Bitbucket Elasticsearch recommendations.
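
As an illustration, an external Elasticsearch is usually wired in through the Bitbucket chart's values. The key names below reflect the chart's elasticSearch stanza but may differ between chart versions, so verify them against your chart's values.yaml:

    bitbucket:
      elasticSearch:
        # URL of the external Elasticsearch cluster (assumed key names)
        baseUrl: http://elasticsearch.example.com:9200
        credentials:
          # Secret expected to hold the Elasticsearch username and password
          secretName: elasticsearch-credentials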


Create and connect to the Kubernetes cluster

  • To install the charts to your Kubernetes cluster (version 1.21+), your Kubernetes client configuration (kubeconfig) must point at the cluster, and you must have sufficient permissions to create resources in it.
  • Setting up security policies for the cluster is up to you.
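
Before installing anything, you can verify both connectivity and permissions with standard kubectl commands (the namespace here is just an example):

    # Confirm the client can reach the cluster in your kubeconfig
    kubectl cluster-info

    # Confirm the server is running Kubernetes 1.21+
    kubectl version

    # Check you may create the kinds of resources the charts deploy
    kubectl auth can-i create statefulsets --namespace atlassian
    kubectl auth can-i create services --namespace atlassian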

Provision an Ingress Controller

  • This step is necessary to make your Atlassian product available from outside the Kubernetes cluster after deployment.
  • The Kubernetes project supports and maintains the AWS, GCE, and nginx ingress controllers. A number of open-source third-party controllers are also available.
  • Because different Kubernetes clusters use different ingress configurations/controllers, the Helm charts provide Ingress Object templates only.
  • The Ingress resource provided as part of the Helm charts is geared toward the NGINX Ingress Controller and can be configured via the ingress stanza in the appropriate values.yaml (an alternative controller can be used).
  • For more information about the Ingress controller go to the Ingress section of the configuration guide.
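
For example, the NGINX Ingress Controller, which the bundled Ingress templates are geared toward, can be installed with Helm using the controller project's standard chart (shown here as one possible setup, not the only one):

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm install ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx --create-namespace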

Provision a database

  • Must be of a type and version supported by the Data Center product you wish to install.
  • Must be reachable from the product deployed within your Kubernetes cluster.
  • The database service may be deployed within the same Kubernetes cluster as the Data Center product or elsewhere.
  • Each product must be provided with the information it needs to connect to the database service. Configuration is mostly the same across products, with some small differences. For more information, go to the Database connectivity section of the configuration guide.
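
As a sketch, the connection details usually amount to a JDBC URL plus a Kubernetes Secret holding the credentials. The stanza below follows the pattern the charts use, but key names and accepted type values vary by product, so treat them as assumptions and check the relevant values.yaml:

    kubectl create secret generic product-db-credentials \
      --from-literal=username=atlassian \
      --from-literal=password='change-me'

The values then reference that Secret:

    database:
      type: postgres72        # assumed value; must match a supported database
      url: jdbc:postgresql://my-db.example.com:5432/product
      credentials:
        secretName: product-db-credentials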

Reducing pod to database latency

For better performance, consider co-locating your database in the same Availability Zone (AZ) as your product nodes. Database-heavy operations, such as a full re-index, become significantly faster when the database is co-located with the Data Center node in the same AZ. However, we don't recommend this approach for critical workloads, since concentrating everything in a single AZ reduces resilience to zone failure.
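
Assuming your nodes carry the standard topology labels and the chart exposes a nodeSelector value (most of the Data Center charts do, but verify in values.yaml), the product pods can be pinned to the database's zone like this:

    # values.yaml excerpt: schedule product pods only in the database's AZ
    # (the zone name is an example; use your database's actual AZ)
    nodeSelector:
      topology.kubernetes.io/zone: us-east-1a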

Configure a shared-home volume

  • All of the Data Center products require a shared network filesystem if they are to be operated in multi-node clusters. If no shared filesystem is available, the products can only be operated in single-node configuration.
  • Cloud-based options for a shared filesystem include AWS EFS and Azure Files. You can also stand up your own NFS server.
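
A minimal sketch of surfacing such a filesystem to the cluster, assuming an NFS server at a placeholder address (server, path, and size are illustrative):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-home
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany              # shared across all product pods
      nfs:
        server: nfs.example.com      # placeholder NFS server
        path: /exports/shared-home
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-home
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""           # bind statically to the PV above
      volumeName: shared-home
      resources:
        requests:
          storage: 10Gi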

Bitbucket shared storage

In the case of Bitbucket, the following must be taken into account.

  • Due to the high performance requirements on I/O operations, Bitbucket needs a dedicated NFS server providing persistence for the shared home.
  • Before choosing AWS EFS as the file system, review the prerequisites mentioned in cloud-managed storage services. We include an example for Bitbucket Mesh.
  • Bitbucket doesn't support other cloud-managed storage services, such as Azure Files.
  • The chosen storage type can be logically represented within Kubernetes as a PersistentVolume with an associated PersistentVolumeClaim in ReadWriteMany (RWX) access mode, as in the sketch above.
  • For more information about volumes see the Volumes section of the configuration guide.

See examples of creating shared storage.

Configure local-home volume

  • As with the shared-home, each pod requires its own volume for local-home, which the product uses to store node-specific operational data.
  • If not defined, an emptyDir will be utilised.
  • Although an emptyDir may be acceptable for evaluation purposes, we recommend that each pod is allocated its own volume.
  • A local-home volume can be logically represented within the cluster using a StorageClass. On AWS, for example, this can dynamically provision an EBS volume for each pod.

An example of this strategy can be found in the local storage example.
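
For instance, on AWS with the EBS CSI driver installed, a StorageClass like the following can back per-pod local-home volumes (the provisioner and parameters are AWS-specific assumptions; adapt them for your cloud):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-home
    provisioner: ebs.csi.aws.com          # requires the AWS EBS CSI driver
    volumeBindingMode: WaitForFirstConsumer
    parameters:
      type: gp3                           # general-purpose SSD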