Prerequisites ¶
Requirements ¶
To deploy Atlassian’s Data Center products, the following is required:
- An understanding of Kubernetes and Helm concepts.
- `kubectl` v1.21 or later, compatible with your cluster.
- `helm` v3.3 or later.
Environment setup ¶
Before installing the Data Center Helm charts, you need to set up your environment:
- Create and connect to the Kubernetes cluster
- Provision an Ingress Controller
- Provision a database
- Configure a shared-home volume
- Configure a local-home volume
Elasticsearch for Bitbucket
We highly recommend you use an external Elasticsearch installation for Bitbucket. When you run more than one Bitbucket node, a separate Elasticsearch cluster is required to enable code search. See Bitbucket Elasticsearch recommendations.
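For illustration, a minimal sketch of pointing Bitbucket at an external Elasticsearch instance via the chart's `values.yaml` might look like the following. The endpoint URL and Secret name are placeholders; confirm the exact key names against your chart version's `values.yaml`:

```yaml
bitbucket:
  elasticSearch:
    # Placeholder: endpoint of the external Elasticsearch cluster
    baseUrl: http://elasticsearch.example.com:9200
    credentials:
      # Placeholder: a Secret holding the Elasticsearch username and password
      secretName: elasticsearch-credentials
```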
Create and connect to the Kubernetes cluster ¶
- To install the charts to your Kubernetes cluster (version 1.21+), your Kubernetes client configuration must point at the cluster, and you must have the necessary permissions.
- It is up to you to set up security policies.
See examples of provisioning Kubernetes clusters on cloud-based providers.
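For illustration, a minimal kubeconfig entry (the client configuration read by `kubectl` and `helm`, typically at `~/.kube/config`) might look like the sketch below. The server URL, names, and credentials are all placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://my-cluster.example.com   # placeholder API server endpoint
      certificate-authority-data: <base64-encoded CA certificate>
users:
  - name: my-user
    user:
      token: <bearer token with permission to install the charts>
contexts:
  - name: my-context
    context:
      cluster: my-cluster
      user: my-user
current-context: my-context                    # context used by kubectl and helm
```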
Provision an Ingress Controller ¶
- This step is necessary to make your Atlassian product available from outside the Kubernetes cluster after deployment.
- The Kubernetes project supports and maintains ingress controllers for the major cloud providers, including AWS and GCE, as well as the NGINX Ingress Controller. A number of open-source third-party controllers are also available.
- Because different Kubernetes clusters use different ingress configurations/controllers, the Helm charts provide Ingress Object templates only.
- The Ingress resource provided as part of the Helm charts is geared toward the NGINX Ingress Controller and can be configured via the `ingress` stanza in the appropriate `values.yaml` (an alternative controller can be used); a sketch follows this list.
- For more information about the Ingress controller, go to the Ingress section of the configuration guide.
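For illustration, a sketch of the `ingress` stanza for the NGINX Ingress Controller is shown below. The hostname and TLS Secret name are placeholders; confirm the exact keys against your chart version's `values.yaml`:

```yaml
ingress:
  create: true               # generate an Ingress resource as part of the release
  nginx: true                # assume the NGINX Ingress Controller, enabling NGINX-specific annotations
  host: jira.example.com     # placeholder hostname at which the product will be reachable
  path: /
  https: true
  tlsSecretName: jira-tls    # placeholder: Secret containing the TLS certificate and key
```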
See an example of provisioning an NGINX Ingress Controller.
Provision a database ¶
- Must be of a type and version supported by the Data Center product you wish to install.
- Must be reachable from the product deployed within your Kubernetes cluster.
- The database service may be deployed within the same Kubernetes cluster as the Data Center product or elsewhere.
- Each product must be supplied with the information it needs to connect to the database service. Configuration for each product is mostly the same, with some small differences; a sketch follows this list. For more information, go to the Database connectivity section of the configuration guide.
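For illustration, a sketch of a `database` stanza (here for a hypothetical Jira deployment backed by PostgreSQL) is shown below. The JDBC URL, driver, and Secret name are placeholders, and the exact keys differ slightly between products:

```yaml
database:
  type: postgres72                                     # a database type supported by the product
  url: jdbc:postgresql://my-db.example.com:5432/jira   # placeholder JDBC URL, reachable from the cluster
  driver: org.postgresql.Driver                        # JDBC driver class (not required by every product)
  credentials:
    secretName: jira-db-credentials                    # placeholder: Secret holding the username and password
```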
Reducing pod to database latency
For better performance, consider co-locating your database in the same Availability Zone (AZ) as your product nodes. Database-heavy operations, such as a full re-index, become significantly faster when the database is co-located with the Data Center node in the same AZ. However, we don't recommend this if you're running critical workloads.
See an example of provisioning databases on cloud-based providers.
Configure a shared-home volume ¶
- All of the Data Center products require a shared network filesystem if they are to be operated in multi-node clusters. If no shared filesystem is available, the products can only be operated in single-node configuration.
- Some cloud-based options for a shared filesystem include AWS EFS and Azure Files. You can also stand up your own NFS server.
Bitbucket shared storage
In the case of Bitbucket, the following must be taken into account:
- Due to the high performance requirements on I/O operations, Bitbucket needs a dedicated NFS server providing persistence for its shared home.
- Before choosing AWS EFS as the filesystem, review the prerequisites mentioned for cloud-managed storage services. We include an example for Bitbucket Mesh.
- Bitbucket doesn't support other cloud-managed storage services, such as Azure Files.
- The logical representation of the chosen storage type within Kubernetes can be defined as `PersistentVolumes` with associated `PersistentVolumeClaims` in `ReadWriteMany (RWX)` access mode; a sketch follows this list.
- For more information about volumes, see the Volumes section of the configuration guide.
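For illustration, a sketch of a statically provisioned NFS-backed `PersistentVolume` and a matching `PersistentVolumeClaim` in `ReadWriteMany` mode is shown below. The server address, export path, and capacity are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-home
spec:
  capacity:
    storage: 10Gi              # placeholder size
  accessModes:
    - ReadWriteMany            # required so every product pod can mount the same volume
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com    # placeholder: address of the NFS server
    path: /shared-home         # placeholder: exported path on the server
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-home
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind to the pre-provisioned volume above, not a dynamic provisioner
  volumeName: shared-home
  resources:
    requests:
      storage: 10Gi
```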
See examples of creating shared storage.
Configure a local-home volume ¶
- As with the shared-home, each pod requires its own volume for `local-home`. Each product needs this for its operational data.
- If not defined, an `emptyDir` will be utilised.
- Although an `emptyDir` may be acceptable for evaluation purposes, we recommend that each pod is allocated its own volume.
- A `local-home` volume could be logically represented within the cluster using a `StorageClass`. This will dynamically provision an AWS EBS volume to each pod; a sketch follows this list.
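For illustration, a sketch of a `StorageClass` that dynamically provisions AWS EBS volumes is shown below. It assumes the AWS EBS CSI driver is installed in the cluster; the class name and volume type are placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-home
provisioner: ebs.csi.aws.com               # assumes the AWS EBS CSI driver is installed
parameters:
  type: gp3                                # placeholder EBS volume type
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer    # provision the volume in the AZ where the pod is scheduled
```

The charts can then reference this class by name, for example through a `volumes.localHome` stanza in `values.yaml` (confirm the exact keys for your chart version):

```yaml
volumes:
  localHome:
    persistentVolumeClaim:
      create: true                  # create a PVC per pod for local-home
      storageClassName: local-home  # the StorageClass defined above
```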
An example of this strategy can be found in the local storage example.