Can't wait to try out the PostgreSQL Operator? Let us show you the quickest possible path to getting up and running!

There are two paths to quickly get you up and running with the PostgreSQL Operator:

- [Installation via the PostgreSQL Operator Installer](#postgresql-operator-installer)
- Installation via a Marketplace
  - Installation via [Google Cloud Platform Marketplace](#google-cloud-platform-marketplace)
Marketplaces can help you get started more quickly in your environment as they provide a mostly automated process, but there are a few steps you will need to take to ensure you can fully utilize your PostgreSQL Operator environment.
# PostgreSQL Operator Installer
The following will guide you through the steps for installing and using the PostgreSQL Operator with an installer that works with Ansible.
## The Very, VERY Quickstart
If your environment is set up to use hostpath storage (found in things like [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) or [OpenShift Code Ready Containers](https://developers.redhat.com/products/codeready-containers/overview)), the following command could work for you:
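
The command itself is missing from this excerpt; a minimal sketch of what it could look like, assuming the installer manifest is published under the repository's `installers/kubectl/` path (verify the path against the release you are using):

```shell
# Hypothetical one-shot install for hostpath-storage environments;
# the manifest URL below is an assumption, not confirmed by this document.
kubectl create namespace pgo
kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/installers/kubectl/postgres-operator.yml
```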
If not, please read onward: you can still get up and running fairly quickly with just a little bit of configuration.
## Step 1: Configuration
### Get the PostgreSQL Operator Installer Manifest
You will need to download the PostgreSQL Operator Installer manifest to your environment, which you can do with the following command:
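
The download command is not shown in this excerpt; a sketch, assuming the manifest is published under the repository's `installers/kubectl/` path:

```shell
# Assumed manifest location -- confirm against the postgres-operator repository.
wget https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/installers/kubectl/postgres-operator.yml
```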
If you wish to download a specific version of the installer, you can substitute `master` with the version tag, e.g. `v4.3.0`.
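
For instance, pinning to the `v4.3.0` tag (the URL layout here is an assumption based on the repository structure):

```shell
# "master" swapped for a release tag; the path layout is an assumption.
wget https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.3.0/installers/kubectl/postgres-operator.yml
```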
There are many [configuration parameters]({{< relref "/installation/configuration.md">}}) to help you fine tune your installation, but there are a few that you may want to change to get the PostgreSQL Operator to run in your environment. Open up the `postgres-operator.yml` file and edit a few variables.
Find the `PGO_ADMIN_PASSWORD` variable. This is the password you will use with the [`pgo` client]({{< relref "/installation/pgo-client" >}}) to manage your PostgreSQL clusters. The default is `password`, but you can change it to something like `hippo-elephant`.
You will also need to set the default storage classes that you would like the PostgreSQL Operator to use. These variables are called `PRIMARY_STORAGE`, `REPLICA_STORAGE`, `BACKUP_STORAGE`, and `BACKREST_STORAGE`. There are several storage configurations listed in the configuration file under the headings `STORAGE[1-9]_TYPE`. Find the one that you want to use, and set these variables to that value.
For example, if your Kubernetes environment is using NFS storage, you would set these variables to the following:

```yaml
- name: BACKREST_STORAGE
  value: "nfsstorage"
- name: BACKUP_STORAGE
  value: "nfsstorage"
- name: PRIMARY_STORAGE
  value: "nfsstorage"
- name: REPLICA_STORAGE
  value: "nfsstorage"
```
For a full list of available storage types that can be used with this installation method, please review the [configuration parameters]({{< relref "/installation/configuration.md">}}).
## Step 2: Installation
Installation is as easy as executing:

```shell
kubectl create namespace pgo
kubectl apply -f postgres-operator.yml
```
This will launch the `pgo-deployer` container that will run the various setup and installation jobs. This can take a few minutes to complete depending on your Kubernetes cluster.
While the installation is occurring, download the `pgo` client setup script. This will help set up your local environment for using the PostgreSQL Operator:
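
The script URL is not shown in this excerpt; a sketch, assuming a `client-setup.sh` script published alongside the installer manifest (both the name and the location are assumptions):

```shell
# Assumed script name and location -- verify against your release.
curl -LO https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/installers/kubectl/client-setup.sh
```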
When the PostgreSQL Operator is done installing, run the client setup script:
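
Assuming the script was saved to the current directory as `client-setup.sh`:

```shell
# Make the downloaded script executable, then run it.
chmod +x client-setup.sh
./client-setup.sh
```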
This will download the `pgo` client and provide instructions for how to easily use it in your environment. It will prompt you to add some environment variables to your session, which you can do with the following commands:
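
The exact values are printed by the setup script itself; the set below is an assumption (paths under `~/.pgo` and a local API server address) meant only to illustrate the shape:

```shell
# Assumed values -- use whatever the client setup script prints
# for your environment.
export PGOUSER="${HOME?}/.pgo/pgo/pgouser"
export PGO_CA_CERT="${HOME?}/.pgo/pgo/client.crt"
export PGO_CLIENT_CERT="${HOME?}/.pgo/pgo/client.crt"
export PGO_CLIENT_KEY="${HOME?}/.pgo/pgo/client.key"
export PGO_APISERVER_URL='https://127.0.0.1:8443'
export PGO_NAMESPACE=pgo
```

Adding these lines to your shell startup file makes them persist across sessions.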
**NOTE**: For macOS users, you must use `~/.bash_profile` instead of `~/.bashrc`.
## Step 3: Verification
Below are a few steps to check if the PostgreSQL Operator is up and running.
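
One quick check, assuming the operator was installed into the `pgo` namespace as above, is to confirm that the operator deployment is running and that the `pgo` client can reach the API server:

```shell
# Confirm the operator deployment is up in the pgo namespace...
kubectl get deployments -n pgo
# ...and ask both the client and the API server for their versions.
pgo version
```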

```shell
pgo client version 4.3.0
pgo-apiserver version 4.3.0
```
## Step 4: Have Some Fun - Create a PostgreSQL Cluster
The quickstart installation method creates a namespace called `pgo` where the PostgreSQL Operator manages PostgreSQL clusters. Try creating a PostgreSQL cluster called `hippo`:
```shell
pgo create cluster -n pgo hippo
```
Alternatively, because we set the [`PGO_NAMESPACE`](/pgo-client/#general-notes-on-using-the-pgo-client) environmental variable in our `.bashrc` file, we could omit the `-n` flag from the [`pgo create cluster`](/pgo-client/reference/pgo_create_cluster/) command and just run this:

```shell
pgo create cluster hippo
workflow id 1cd0d225-7cd4-4044-b269-aa7bedae219b
```
This will create a PostgreSQL cluster named `hippo`. It may take a few moments for the cluster to be provisioned. You can see the status of this cluster using the `pgo test` command:
```shell
pgo test -n pgo hippo
```
When everything is up and running, you should see output similar to this:

```shell
cluster : hippo
primary (hippo-7b64747476-6dr4h): UP
```
The `pgo test` command provides you with the basic information you need to connect to your PostgreSQL cluster from within your Kubernetes environment. For more detailed information, you can use `pgo show cluster -n pgo hippo`.