AlloyDB Omni: Hybrid Transactional and Analytical Processing

Follow the steps to move your data from PostgreSQL to AlloyDB Omni, a Hybrid Transactional and Analytical database, and learn how to create a read-replica and enable the columnar engine for faster queries and insights.

Arthur VESTU
Google Cloud - Community


Image from Google Cloud: https://cloud.google.com/blog/products/databases/announcing-the-general-availability-of-alloydb-for-postgresql

Introduction

Looking for a unified database solution for both operational and analytical workloads? Look no further than Google Cloud’s AlloyDB. As a fully managed Hybrid Transactional and Analytical Processing (HTAP) service, AlloyDB offers a unified platform that efficiently organizes data using its innovative columnar engine. This results in exceptional performance while minimizing management overhead.

AlloyDB Omni, the downloadable edition, combines PostgreSQL compatibility with a high-performance columnar engine for real-time analytics on live transactional data. Whether you choose to deploy it on-premises, in the cloud, or both, this flexibility allows you to reduce costs and accelerate insights.

In this article, we will migrate a PostgreSQL database to AlloyDB Omni, set up a read-replica database, and enable the columnar engine for faster scans, joins, and aggregations. By the end, you will better understand how AlloyDB can manage workloads and deliver excellent performance for a wide range of queries.

As a bonus, this article concludes with a brief comparison between AlloyDB and Cloud SQL to help you choose between them.

Summary

Introduction
Part 1 Migrate a PostgreSQL database to AlloyDB Omni via export
Step 1: Create a PostgreSQL database server VM
Step 2: Create a pgAdmin server VM
Step 3: Generate data using pgbench
Step 4: Dump the PostgreSQL data using pg_dump
Step 5: Upload the dump file to Google Cloud Storage
Step 6: Create a VM with AlloyDB Omni
Step 7: Connect AlloyDB to pgAdmin
Step 8: Get the dump file from GCS
Step 9: Copy the dump file to the pg-service container
Step 10: Load the data using pg_restore
Part 2 Create a read-only replica
Step 1: Allow replica connection on the primary database
Step 2: Create the read-replica database
Step 3: Set up the read-replica connection to the primary database
Part 3 How to use Columnar engine
Step 1: Verify that Columnar engine isn’t enabled
Step 2: Enable Columnar engine
Step 3: Add a table to columnar engine
Step 4: Try it out

Part 1 Migrate a PostgreSQL database to AlloyDB Omni via export:

Step 1: Create a PostgreSQL database server VM

To migrate a PostgreSQL database to AlloyDB Omni, you first need to create a demonstration PostgreSQL database server VM. This will be the primary database server that you will export your data from. You can do this using the Google Cloud platform console or the Cloud Shell command line interface.

In the GCP console, you first need to create a firewall rule that allows ingress on port 5432:

  1. Go to VPC network > Firewall.
  2. Click create a firewall rule.
  3. Give it a name (here allow-pgport), a tag (here pgdb), a source range (here 0.0.0.0/0), and a rule (tcp:5432).

Then, still in the GCP console, create the virtual machine instance:

  1. Go to Compute Engine > VM instances.
  2. Click create instance.
  3. In Machine type choose e2-highmem-2.
  4. In Identity and API access, set Access scopes to Allow full access to all Cloud APIs.

Alternatively, if you want to create a PostgreSQL database server VM using the Cloud Shell, follow these steps:

  1. Open the Cloud Shell by clicking on the Activate Cloud Shell button (it looks like >_) in the GCP console.
  2. Run the following command to create a firewall rule to allow incoming connections to the PostgreSQL server port (TCP 5432):
gcloud compute --project=your-project-id firewall-rules create allow-pgport --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:5432 --source-ranges=0.0.0.0/0 --target-tags=pgdb

This command creates a firewall rule named allow-pgport that allows incoming TCP connections to port 5432 from any source IP address.

Still in Cloud Shell run the following command to create a virtual machine instance:

gcloud compute instances create postgres \
--project=your-project-id \
--zone=europe-west1-b \
--machine-type=e2-highmem-2 \
--tags=pgdb \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--create-disk=auto-delete=yes,boot=yes,device-name=postgres,image=projects/debian-cloud/global/images/debian-11-bullseye-v20230411,mode=rw,size=20,type=projects/your-project-id/zones/europe-west1-b/diskTypes/pd-balanced

This command creates a virtual machine instance named postgres in the europe-west1-b zone with a machine type of e2-highmem-2. It also adds the pgdb tag to the instance, which is associated with the firewall rule created in the previous step. Additionally, it sets the scopes parameter to allow full access to all Cloud APIs and creates a 20 GB boot disk for the instance.

After creating the database VM, you need to install Docker and run PostgreSQL in a container.

Install Docker using the following commands:

# Update the package index and install some dependencies
sudo apt-get update &&\
sudo apt-get install ca-certificates curl gnupg lsb-release

# Create a directory for the docker keyring and download it
sudo mkdir -m 0755 -p /etc/apt/keyrings &&\
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Add the docker repository to the sources list
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Update the package index again and install docker and its plugins
sudo apt-get update &&\
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Add the current user to the docker group and reload it
sudo usermod -aG docker $USER &&\
newgrp docker

These commands update the package repository, install dependencies, and install Docker.

Start the PostgreSQL server using the following command:

docker run --name postgresql -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=password -p 5432:5432 -v /data:/var/lib/postgresql/data -d postgres

This command starts a Docker container named postgresql running the latest version of the PostgreSQL image. It sets the database username and password to postgres and password, respectively, and maps the container’s port 5432 to the host’s port 5432. It also mounts the host directory /data into the container’s /var/lib/postgresql/data directory to persist the database data.

Once your PostgreSQL database server VM is up and running, you can proceed to the next step and create a pgAdmin server.

Note: pgAdmin is a web-based GUI tool used to interact with PostgreSQL database sessions, both locally and remotely.

Step 2: Create a pgAdmin server VM

Setting up a pgAdmin server VM will allow you to easily connect to your primary database server and manage your data.

To create a pgAdmin server VM using the GCP console, follow these steps:

  1. In the GCP console, click on the “Create Instance” button to create a new virtual machine instance.
  2. Choose “Deploy container” as the deployment option.
  3. Set the container image to “dpage/pgadmin4” and add the following environment variables:
     - PGADMIN_DEFAULT_EMAIL: set this to your preferred email address for pgAdmin login.
     - PGADMIN_DEFAULT_PASSWORD: set this to your preferred password for pgAdmin login.
  4. Allow HTTP traffic for the instance.
  5. Click on the “Create” button to create the pgAdmin server VM.

Alternatively, you can create a pgAdmin server VM using the Cloud Shell by running the following command:

gcloud compute instances create-with-container pgadmin --project=your-project-id --zone=europe-west1-b --machine-type=e2-medium --tags=http-server --image=projects/cos-cloud/global/images/cos-stable-105-17412-1-61 --boot-disk-size=10GB --boot-disk-type=pd-balanced --boot-disk-device-name=pgadmin --container-image=dpage/pgadmin4 --container-restart-policy=always --container-env=PGADMIN_DEFAULT_EMAIL=name@email.com,PGADMIN_DEFAULT_PASSWORD=admin

This command creates a virtual machine instance named pgadmin with a machine type of e2-medium in the europe-west1-b zone. It also adds the http-server tag to the instance to allow HTTP traffic, sets the image to the latest version of Container-Optimized OS, and sets the container image to dpage/pgadmin4. Additionally, it sets the environment variables for the pgAdmin login email and password.

Once the pgAdmin server VM is up and running, you can connect to it and add a new server.

Enter the PostgreSQL server details, including the internal IP address, username (postgres), password (password), and port number (5432).
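
If pgAdmin cannot reach the server, first confirm from the postgres VM that PostgreSQL is accepting connections. A minimal check using pg_isready, the connection-probe utility bundled with the PostgreSQL image:

# Probe the PostgreSQL server; it should print "accepting connections"
docker exec postgresql pg_isready -h localhost -p 5432 -U postgres

If this succeeds but pgAdmin still fails, re-check the allow-pgport firewall rule and the internal IP address you entered.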

Step 3: Generate data using pgbench

Note: pgbench is a program for running benchmark tests on PostgreSQL. It executes a sequence of SQL commands repeatedly, possibly in multiple concurrent database sessions, to simulate a workload against a PostgreSQL server.

In order to have some data to migrate, you can use pgbench to generate a sample dataset. pgbench can create a variety of different data sets for testing and benchmarking purposes.

To generate data in the PostgreSQL database using the pgbench tool, run the following command:

docker exec postgresql pgbench -U postgres -i -F 10 -n postgres

This command uses the pgbench tool to initialize (-i) the pgbench tables in the postgres database as the default postgres user. Note that -F 10 sets the table fill factor, not a partition count, and -n skips the initial vacuum. To generate a larger dataset, use the scale factor option shown below.
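
If you want a larger dataset, for example to make the columnar engine comparison in Part 3 more visible, you can instead initialize with a pgbench scale factor. A hedged example (each scale-factor unit adds 100,000 rows to pgbench_accounts; 10 is an arbitrary choice):

# Initialize with scale factor 10 (~1,000,000 rows in pgbench_accounts)
docker exec postgresql pgbench -U postgres -i -s 10 -n postgres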

Verify the generated data by running the following command:

docker exec -it postgresql psql -h localhost -U postgres

This command starts a psql prompt where you can query the database using SQL commands. You can use commands like SELECT or \dt to verify that data has been successfully generated in the database, as shown below.
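
For example, a quick sanity check from inside the psql prompt (the table names are the pgbench defaults):

-- List the pgbench tables
\dt
-- Count the generated rows in the accounts table
SELECT count(*) FROM pgbench_accounts;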

Step 4: Dump the PostgreSQL data using pg_dump

Once you have generated your sample data, you can use the pg_dump tool to export your PostgreSQL database. This will create a file that contains all of your database schema and data.

To create a dump file of the PostgreSQL database, you can use the pg_dump tool in the PostgreSQL server container. Follow these steps:

SSH into the PostgreSQL server container by running the following command:

docker exec -it postgresql bash

This command starts a bash shell in the running Docker container named postgresql.

Run the following command to create a dump file of the PostgreSQL database:

pg_dump -U postgres -Fc postgres > pg_dump.DMP &&\
exit

This command uses the pg_dump tool to create a compressed, custom-format dump of the postgres database and saves it as pg_dump.DMP in the container’s root directory. The exit command exits the container’s bash shell.
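
Back on the VM, you can optionally confirm that the dump file exists inside the container before copying it out:

# Confirm the dump file was created and check its size
docker exec postgresql ls -lh /pg_dump.DMP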

Step 5: Upload the dump file to Google Cloud Storage

To transfer your database dump file to your AlloyDB Omni instance, you can upload it to a cloud storage service like Google Cloud Storage (GCS). This will allow you to easily access the file from your AlloyDB VM and load it into your database.

Create a new GCS bucket in the GCP console by following these steps:

  1. Go to the GCP console and navigate to the Cloud Storage section.
  2. Click on the “Create bucket” button.
  3. Follow the prompts to create a new bucket with a unique name.

Alternatively, you can create a new GCS bucket using the Cloud Shell by running the following command:

gcloud storage buckets create gs://NEW_BUCKET_NAME

On the postgres server VM, copy the dump out of the container and use the gsutil tool to upload it to GCS by running the following commands:

docker cp postgresql:/pg_dump.DMP pg_dump.DMP &&\
gsutil cp pg_dump.DMP gs://NEW_BUCKET_NAME/pg_dump.DMP

This command copies the dump file pg_dump.DMP from the PostgreSQL container to the local file system, then uploads it to the GCS bucket named NEW_BUCKET_NAME.
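
You can verify the upload by listing the bucket contents:

# List the bucket to confirm the dump file is present
gsutil ls -l gs://NEW_BUCKET_NAME/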

Step 6: Create a VM with AlloyDB Omni

Next, you will need to create a new VM with AlloyDB Omni installed.

To create a new VM instance and install AlloyDB Omni, run the following commands in Cloud Shell:

gcloud compute instances create omni-primary \
--project=your-project-id --zone=europe-west1-b \
--machine-type=e2-highmem-2 \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--tags=pgdb \
--create-disk=auto-delete=yes,boot=yes,device-name=omni-primary,image=projects/debian-cloud/global/images/debian-11-bullseye-v20230411,mode=rw,size=20,type=projects/your-project-id/zones/europe-west1-b/diskTypes/pd-balanced

This command creates a new VM instance called omni-primary in the europe-west1-b zone with the specified machine type and tags. It also creates a new boot disk with the specified size and type.

# Update the package index and install some dependencies
sudo apt-get update &&\
sudo apt-get install ca-certificates curl gnupg lsb-release

# Create a directory for the docker keyring and download it
sudo mkdir -m 0755 -p /etc/apt/keyrings &&\
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Add the docker repository to the sources list
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Update the package index again and install docker and its plugins
sudo apt-get update &&\
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Add the current user to the docker group and reload it
sudo usermod -aG docker $USER &&\
newgrp docker

These commands install necessary dependencies, such as ca-certificates, curl, gnupg, and lsb-release. Then, they install Docker and add the current user to the docker group.

# Pull the images for pg-service and memory-agent from Google Cloud Registry
docker pull gcr.io/alloydb-omni/pg-service:latest &&\
docker pull gcr.io/alloydb-omni/memory-agent:latest

# Copy the latest installer from Google Cloud Storage to the current directory
gsutil cp -r gs://alloydb-omni-install/$(gsutil cat gs://alloydb-omni-install/latest) .

# Change to the installer directory and extract it
cd $(gsutil cat gs://alloydb-omni-install/latest) &&\
tar -xzf alloydb_omni_installer.tar.gz && cd installer

# Run the installer script with sudo privileges
sudo bash install_alloydb.sh

# Create a directory for alloydb data
mkdir /home/$USER/alloydb-data

# Edit the dataplane configuration file and replace the data directory path
sudo sed -i "s|^\(DATADIR_PATH=\).*|\1/home/$USER/alloydb-data|" /var/alloydb/config/dataplane.conf

These commands download and install the latest version of AlloyDB Omni from GCS, create a data directory for AlloyDB Omni, and update the dataplane configuration file to point to the new data directory.
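
Before moving on, it is worth checking that the dataplane service and its containers are running (depending on the installer version, you may need to restart the service after editing dataplane.conf):

# Check the AlloyDB dataplane service status
sudo systemctl status alloydb-dataplane
# The pg-service and memory-agent containers should be listed as running
docker ps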

Step 7: Connect AlloyDB to pgAdmin

After creating your AlloyDB VM, you can connect it to your pgAdmin server to manage your databases. This will allow you to easily view your AlloyDB databases and perform tasks like creating tables, running queries, and monitoring database performance.

Use the following command to modify the pg_hba.conf file and allow remote connections:

sudo chmod 777 /var/alloydb/config/pg_hba.conf
sudo sed -i '86s/127\.0\.0\.1\/32/0.0.0.0\/0/' /var/alloydb/config/pg_hba.conf

The first command grants write permission to the file /var/alloydb/config/pg_hba.conf for all users. The second command modifies the pg_hba.conf file to allow remote connections to the PostgreSQL server.

Restart the AlloyDB dataplane service by running the following command:

sudo systemctl restart alloydb-dataplane

This command restarts the AlloyDB dataplane service to apply the changes made to the pg_hba.conf file.
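
You can confirm the server is back up before switching to pgAdmin, for example with a trivial query:

# Quick check that AlloyDB Omni is accepting connections
docker exec -it pg-service psql -h localhost -U postgres -c "SELECT version();"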

Connect to pgAdmin and add a new server by following these steps:

  1. Open pgAdmin and click on the “Add New Server” button.
  2. Enter the AlloyDB server details, including the internal IP address and port number (5432).
  3. Enter the PostgreSQL user credentials to authenticate with the AlloyDB server; here, only the username postgres is required.
  4. Click on the “Save” button to add the AlloyDB server to pgAdmin.

Step 8: Get the dump file from GCS

Once you have connected your AlloyDB VM to your pgAdmin server, you can retrieve the dump file from GCS into your AlloyDB VM. This will allow you to transfer the data from your PostgreSQL database to your AlloyDB database.

Use the gsutil tool to get the dump file from GCS by running the following command:

gsutil cp gs://NEW_BUCKET_NAME/pg_dump.DMP pg_dump.DMP

This command downloads the dump file pg_dump.DMP from the GCS bucket with the name NEW_BUCKET_NAME to the local file system.

Step 9: Copy the dump file to the pg-service container

To load the data into your AlloyDB database, you will need to copy the dump file to the pg-service container. This is the container that runs the PostgreSQL database engine in your AlloyDB instance.

Use the following command to copy the dump file to the pg-service container:

docker cp pg_dump.DMP pg-service:/

This command copies the dump file pg_dump.DMP from the local file system to the root directory of the pg-service container.

Step 10: Load the data using pg_restore

Finally, you can use pg_restore to load your PostgreSQL data into your AlloyDB database. This will create all of the tables and insert all of the data from your PostgreSQL database into your AlloyDB database.

Use the pg_restore tool to load the data from the dump file to AlloyDB by running the following command:

docker exec -it pg-service pg_restore -h localhost -U postgres -d postgres pg_dump.DMP

This command uses the pg_restore tool to restore the data from the dump file pg_dump.DMP into the postgres database on the AlloyDB server.

Verify the data has been restored by running the following command:

docker exec -it pg-service psql -h localhost -U postgres -c "\dt"

This command opens a psql session in the pg-service container and lists the tables (\dt) in the postgres database to verify that the data has been successfully restored.

Conclusion:

Congratulations! You have successfully migrated your PostgreSQL data to AlloyDB Omni. In the next steps, we will set up a read-replica and enable the columnar engine.

Part 2 Create a read-only replica:

To create a read-only replica, you need to install AlloyDB Omni on a new instance.

Step 1: Allow replica connection on the primary database

To create a read-only replica of your AlloyDB database, you will need to first allow replica connections on your primary database. This will enable the replica server to connect to the primary server and replicate the data.

To allow replica connection on the primary database, run the following command on the omni-primary instance:

echo "host all         alloydbreplica 0.0.0.0/0 trust
host replication alloydbreplica 0.0.0.0/0 trust" >> /var/alloydb/config/pg_hba.conf &&\
sudo systemctl restart alloydb-dataplane

This command appends two lines to the pg_hba.conf file to allow replication connections from any IP address for the alloydbreplica user, then restarts the AlloyDB dataplane service to apply the changes.
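
You can confirm the entries were appended correctly:

# Show the last lines of pg_hba.conf to check the replica entries
tail -n 3 /var/alloydb/config/pg_hba.conf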

Step 2: Create the read-replica database

Once you have allowed replica connections on your primary database, you can create a read-replica database. This will be a copy of your primary database that is read-only and can be used for reporting or analytics purposes.

Create the read-replica by running the following command in Cloud Shell:

gcloud compute instances create omni-read-replica \
--project=your-project-id \
--zone=europe-west1-b \
--machine-type=e2-highmem-2 \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--tags=pgdb \
--create-disk=auto-delete=yes,boot=yes,device-name=omni-read-replica,image=projects/debian-cloud/global/images/debian-11-bullseye-v20230411,mode=rw,size=20,type=projects/your-project-id/zones/europe-west1-b/diskTypes/pd-balanced

Then, in the omni-read-replica VM, run:

# Update the package index and install some dependencies
sudo apt-get update &&\
sudo apt-get install ca-certificates curl gnupg lsb-release

# Create a directory for the docker keyring and download it
sudo mkdir -m 0755 -p /etc/apt/keyrings &&\
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Add the docker repository to the sources list
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Update the package index again and install docker and its plugins
sudo apt-get update &&\
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Add the current user to the docker group and reload it
sudo usermod -aG docker $USER &&\
newgrp docker
# Pull the images for pg-service and memory-agent from Google Cloud Registry
docker pull gcr.io/alloydb-omni/pg-service:latest &&\
docker pull gcr.io/alloydb-omni/memory-agent:latest

# Copy the latest installer from Google Cloud Storage to the current directory
gsutil cp -r gs://alloydb-omni-install/$(gsutil cat gs://alloydb-omni-install/latest) .

# Change to the installer directory and extract it
cd $(gsutil cat gs://alloydb-omni-install/latest) &&\
tar -xzf alloydb_omni_installer.tar.gz && cd installer

# Run the installer script with sudo privileges
sudo bash install_alloydb.sh

# Create a directory for alloydb data
mkdir /home/$USER/alloydb-data

# Edit the dataplane configuration file and replace the data directory path
sudo sed -i "s|^\(DATADIR_PATH=\).*|\1/home/$USER/alloydb-data|" /var/alloydb/config/dataplane.conf

These commands create a new instance named omni-read-replica with the specified configuration and install AlloyDB Omni on it.

Step 3: Set up the read-replica connection to the primary database

To ensure that your read-replica replicates the data and stays in sync with your primary database, you will need to set up a replication connection between the two databases. This will allow your read-replica to continuously receive updates from your primary database and stay up to date.

To set up the read-replica connection to the primary database, connect to the omni-read-replica instance and run the following commands:

sudo chmod 777 /var/alloydb/config/dataplane.conf &&\
sudo sed -i '$ d; $ d; $ d; a\INSTANCE_TYPE=READ_REPLICA\nPRIMARY_IP_ADDRESS=PRIMARY_IP_ADDRESS(internal)\nREPLICA_SLOT_NAME="alloydb_omni_replica"' /var/alloydb/config/dataplane.conf
sudo systemctl restart alloydb-dataplane

These commands modify the dataplane.conf file to set the instance type to READ_REPLICA, specify the IP address of the primary database, and set the name of the replication slot to alloydb_omni_replica. Replace PRIMARY_IP_ADDRESS(internal) with the internal IP address of the omni-primary instance before running them. Finally, they restart the AlloyDB dataplane service to apply the changes.

You can verify the connection from the primary VM using:

docker exec -it pg-service psql -h localhost -U postgres -c "select * from pg_stat_replication;"
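
On the replica VM itself, you can also check that the instance is running in recovery (read-only) mode and serving the migrated data. The first query uses the standard PostgreSQL function pg_is_in_recovery(); the second and third demonstrate that reads succeed while writes are rejected:

# Should return 't' (true) on a healthy read-replica
docker exec -it pg-service psql -h localhost -U postgres -c "SELECT pg_is_in_recovery();"
# Reads succeed on the replica...
docker exec -it pg-service psql -h localhost -U postgres -c "SELECT count(*) FROM pgbench_accounts;"
# ...while writes fail with a read-only transaction error
docker exec -it pg-service psql -h localhost -U postgres -c "INSERT INTO pgbench_history (tid) VALUES (1);"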

Conclusion:

You have successfully created a read-only replica of the primary database in AlloyDB Omni. Now, you can use this replica for read-only queries to offload read traffic from the primary database and improve performance.

Part 3 How to use Columnar engine

Step 1: Verify that Columnar engine isn’t enabled

Before using the Columnar engine in AlloyDB, you will need to verify that it is not already enabled. This can be done by checking the configuration settings for your AlloyDB primary instance.

To verify that the Columnar engine isn’t enabled, run the following command:

docker exec -it pg-service psql -h localhost -U postgres -c "
EXPLAIN (ANALYZE,COSTS,SETTINGS,BUFFERS,TIMING,SUMMARY,WAL,VERBOSE)
SELECT count(*) FROM pgbench_accounts WHERE bid < 189 OR abalance > 100;"

This command runs a SELECT query on the pgbench_accounts table and shows the execution plan with various details. If the Columnar engine isn’t enabled, the execution plan won’t include any Columnar engine-related details.

Step 2: Enable Columnar engine

To enable the Columnar engine in AlloyDB, you will need to update the configuration settings for your database and enable the flag. This will allow you to take advantage of the Columnar engine’s benefits, such as improved query performance and reduced storage requirements for certain types of data.

To enable the Columnar engine, run the following command:

docker exec -it pg-service psql -h localhost -U postgres -c "ALTER SYSTEM SET  google_columnar_engine.enabled=on;" &&\
sudo systemctl restart alloydb-dataplane

This command updates the database configuration to enable the Columnar engine and restarts the AlloyDB dataplane service to apply the changes.

To verify that the Columnar engine is now enabled, run the following command:

docker exec -it pg-service psql -h localhost -U postgres -c "SELECT name, setting FROM pg_settings WHERE name='google_columnar_engine.enabled';"

This command shows the value of the google_columnar_engine.enabled configuration parameter, which should be on if the Columnar engine is enabled.
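
The columnar engine keeps its column store in memory. If you later need to size that store, the AlloyDB documentation describes a google_columnar_engine.memory_size_in_mb parameter; verify the name on your Omni version before relying on it:

# Inspect the current size of the in-memory columnar store (in MB)
docker exec -it pg-service psql -h localhost -U postgres -c "SHOW google_columnar_engine.memory_size_in_mb;"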

Step 3: Add a table to columnar engine

Once you have enabled the Columnar engine, you can choose a table and add it to the engine. This will allow you to take advantage of the engine’s columnar storage format, which stores data in memory by column rather than by row.

Run the google_columnar_engine_add() function to add the table to the Columnar engine:

docker exec -it pg-service psql -h localhost -U postgres -c "SELECT google_columnar_engine_add('pgbench_accounts');"

This command runs the google_columnar_engine_add function to add the pgbench_accounts table to the Columnar engine.
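
To see which tables are currently held in columnar format, you can query the g_columnar_relations view (documented for AlloyDB’s columnar engine; treat the view name as something to verify on your Omni version):

# List relations currently loaded into the columnar store
docker exec -it pg-service psql -h localhost -U postgres -c "SELECT * FROM g_columnar_relations;"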

Step 4: Try it out

After adding a table to the Columnar engine, you can try running some queries on it to see the performance improvements. You should see faster query times.

To try out the Columnar engine, run a SELECT query on the pgbench_accounts table with the EXPLAIN command to see the execution plan with Columnar engine-related details. Run the following command:

docker exec -it pg-service psql -h localhost -U postgres -c "EXPLAIN (ANALYZE,COSTS,SETTINGS,BUFFERS,TIMING,SUMMARY,WAL,VERBOSE)
SELECT count(*) FROM pgbench_accounts WHERE bid < 189 OR abalance > 100;"

This command runs a SELECT query on the pgbench_accounts table with a WHERE clause and shows the execution plan with various details, including Columnar engine-related details. If the Columnar engine is working correctly, the execution plan should show that the Columnar engine is used to scan the table.
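
To quantify the improvement, you can re-run the same query with columnar scans disabled for the session and compare the execution times. This sketch assumes the google_columnar_engine.enable_columnar_scan session parameter documented for AlloyDB; check that it exists on your Omni version:

# Run the same query with columnar scans turned off for comparison
docker exec -it pg-service psql -h localhost -U postgres -c "
SET google_columnar_engine.enable_columnar_scan = off;
EXPLAIN (ANALYZE) SELECT count(*) FROM pgbench_accounts WHERE bid < 189 OR abalance > 100;"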

Congratulations! You have successfully enabled the Columnar engine for a table in your AlloyDB Omni database. Run some queries on your tables to see how the Columnar engine improves performance.

Conclusion

In this tutorial, we have learned how to migrate data from a PostgreSQL database to AlloyDB Omni, set up a read-replica, and enable the Columnar engine.

AlloyDB Omni is a powerful database that offers several benefits, such as unified storage for transactional and analytical workloads, real-time insights, reduced costs, PostgreSQL compatibility, and advanced analytics.

While AlloyDB may be more expensive than the basic version of Cloud SQL, it becomes more affordable when compared to Cloud SQL’s high-availability configuration.

Additionally, AlloyDB offers the flexibility of choosing between a fully managed version by Google Cloud or a downloadable self-managed version. Overall, AlloyDB is a great option for those looking for a highly scalable and performant database solution.
