Intro
VMware Tanzu Greenplum runs anywhere: the same software runs on-premises or in the cloud. However, each commercial cloud has its own characteristics, so optimizing Greenplum performance is broadly similar across clouds yet unique to each one.
Over the past 2+ years, VMware Tanzu has developed the Greenplum products on the AWS, Azure, and GCP Marketplaces with the following design goals:
- FAST
- Same Experience Across Clouds
- Leverage the Cloud Features
- Secure
- Automated Management
This blog post focuses on the tuning and configuration in the clouds that make the commercial cloud products as FAST as possible. This work resulted in the Hourly and Bring Your Own License (BYOL) products on the AWS, Azure, and GCP Marketplaces.
Note: This blog does not cover Installation Guide guidance such as disk mount options or operating system configuration. Those settings are constants; the variables tuned here are the cloud resources and the corresponding Greenplum configuration settings.
Picking Possible Instance Types
This is the first step in figuring out how to deploy Greenplum in the cloud. Examine the Network, CPU, Memory, and Disk performance of the available types to narrow down the possible instance types based on the specifications.
Network
VMware Tanzu recommends that Production deployments have at least a 10 Gbit network because of the demands of moving large amounts of data between nodes as you correlate and combine data inside Greenplum. So when picking the instance type to run Greenplum, you can quickly eliminate instance types that don't have at least a 10 Gbit network.
For AWS and Azure, the larger the instance type, the faster the network. For GCP, all instance types with at least 8 vCPUs have the same 10 Gbit speed.
- Pick Instance Types with at least a 10 Gbit Network
CPU
The speed of the CPU core is important, but not nearly as important as the quantity. Because Greenplum is a Massively Parallel Processing database, and because Postgres spawns new processes for every query, the architecture uses multiple processors on each host. Therefore, having more CPU cores means more Segment processes can run on each host, which increases performance.
In general, I have found that it is a good idea to have no more than 1 Segment process per 2 cores. You may even want to have 1 Segment process per 4 cores in an environment with lots of concurrency.
For AWS and GCP, a vCPU is a hyperthread, so to count physical cores, divide the vCPU count by 2. For Azure, some instance types have hyperthreading while others do not.
- For AWS and Azure, only instance types with many vCPUs meet the network requirement
- For GCP, anywhere from 8 to 64 vCPUs per instance type meet the network requirement
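As a rough illustration of the core-to-Segment ratios above, here is a minimal shell sketch (assuming the host reports its vCPUs via nproc) that derives a suggested Segment count from the vCPU count:

# vCPUs are hyperthreads on AWS and GCP, so divide by 2 for physical cores
VCPUS=$(nproc)
CORES=$((VCPUS / 2))
# Rule of thumb: at most 1 Segment per 2 cores, or 1 per 4 cores with heavy concurrency
echo "Max Segments per host (1 per 2 cores): $((CORES / 2))"
echo "Max Segments per host (1 per 4 cores): $((CORES / 4))"

On a 64 vCPU (32 core) host, for example, this suggests at most 16 Segments per host, or 8 for highly concurrent workloads.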
Memory
A key Greenplum memory configuration setting is “gp_vmem_protect_limit”. Here is the definition from the Greenplum Best Practices Guide:
gp_vmem_protect_limit
Use gp_vmem_protect_limit to set the maximum memory that the instance can allocate for all work being done in each segment database. Never set this value larger than the physical RAM on the system. If gp_vmem_protect_limit is too high, it is possible for memory to become exhausted on the system and normal operations may fail, causing segment failures. If gp_vmem_protect_limit is set to a safe lower value, true memory exhaustion on the system is prevented; queries may fail for hitting the limit, but system disruption and segment failures are avoided, which is the desired behavior.
The default value is 8GB, and I have found that in most cases this works well as the minimum amount of RAM to allocate per Segment.
Also helpful is the Greenplum Community website, which has a calculator you can use to determine how to set gp_vmem_protect_limit based on the amount of RAM, the amount of swap, and the number of Segments you will run per host.
Using the calculator, you can see that on a Segment host with 128GB of RAM, 32GB of swap, and 10 Segments per host, gp_vmem_protect_limit stays above 8GB per Segment and can be safely set to 8800 (MB).
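The calculator implements roughly the following arithmetic from the Best Practices Guide; this shell sketch (hard-coded with the example values above) reproduces it:

# gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7
# gp_vmem_protect_limit = gp_vmem divided by the primary Segments per host, in MB
RAM_GB=128; SWAP_GB=32; SEGMENTS=10
awk -v ram=$RAM_GB -v swap=$SWAP_GB -v segs=$SEGMENTS 'BEGIN {
  gp_vmem = ((swap + ram) - (7.5 + 0.05 * ram)) / 1.7
  printf "gp_vmem_protect_limit = %d MB\n", (gp_vmem / segs) * 1024
}'

This prints roughly 8800, matching the value above. The setting can then be applied cluster-wide with gpconfig -c gp_vmem_protect_limit -v 8800, followed by a restart.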
For AWS, there are several instance types to work with. Some limit the number of Segments per host because they have less RAM, while the "memory optimized" types have far more memory and can handle more Segments.
- More RAM will improve performance
Disk
Disk throughput is probably the most variable aspect of running Greenplum in the cloud. Every cloud puts throughput limits on VMs based on the instance type, and these limits are well documented. Throughput is measured in MB/s.
Each disk also has its own throughput cap, but for now, focus on the Instance Type limit.
Note: Don’t get caught up in IOPS when examining the disk performance. Greenplum performance is impacted by disk throughput more than anything else.
- Allocate at least 60 MB/s of disk throughput per Segment
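As a hypothetical example of this guideline, an instance type capped at 480 MB/s of disk throughput supports at most 8 Segments per host:

# 480 MB/s instance cap / 60 MB/s per Segment = 8 Segments
THROUGHPUT_MBS=480
echo "Max Segments per host by disk throughput: $((THROUGHPUT_MBS / 60))"

The 480 MB/s figure is only for illustration; use the documented limit for your instance type.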
Testing
Network Testing
Network testing can be done with the gpcheckperf utility, which is included as part of the Greenplum installation.
Note: Network performance is expected to be 10 Gbit, and since 1 Gbit equates to 125 MB/s, 10 Gbit should result in 1250 MB/s.
[gpadmin@mdw ~]$ gpcheckperf -f all_hosts.txt -r n -d /data1
-------------------
-- NETPERF TEST
-------------------
====================
== RESULT
====================
Netperf bisection bandwidth test
mdw -> sdw1 = 1145.000000
sdw1 -> mdw = 1145.010000
sdw2 -> sdw3 = 1144.810000
sdw3 -> sdw2 = 1144.940000
sdw4 -> mdw = 1144.740000
mdw -> sdw4 = 1145.080000
Summary:
sum = 6869.58 MB/sec
min = 1144.74 MB/sec
max = 1145.08 MB/sec
avg = 1144.93 MB/sec
median = 1145.00 MB/sec
- This test confirms that the network for this cluster is 10 Gbit
Disk Performance
Each disk has a throughput limit in addition to the Instance Type limit. For AWS and Azure, the per-disk limit is lower than the Instance Type limit, so in order to reach the instance type limit you must use multiple disks per VM. This also impacts the number of Segments because each Segment writes to only one data directory.
- The number of Segments cannot exceed the number of mounts
- The number of Segments must be a multiple of the number of mounts
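For example, in the gpinitsystem cluster configuration file, the DATA_DIRECTORY array controls both how many primary Segments run per host and which mount each one uses. Following the rules above, 8 primaries over 4 mounts would be listed two per mount (the paths are illustrative):

# gpinitsystem_config excerpt: 8 primary Segments per host, spread evenly over 4 mounts
declare -a DATA_DIRECTORY=(/data1/primary /data1/primary
                           /data2/primary /data2/primary
                           /data3/primary /data3/primary
                           /data4/primary /data4/primary)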
For AWS, (4) ST1 disks can reach the throughput limit of all tested Instance Types. The size of the ST1 disk also impacts performance. At 12.5TB, the ST1 performance is maximized.
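As a sketch of provisioning those volumes with the AWS CLI (the availability zone is an assumption, and 12800 GiB corresponds to 12.5 TiB), each of the four data disks could be created like this:

# Create one 12.5 TiB ST1 volume; repeat for each of the (4) data disks per host
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --volume-type st1 \
    --size 12800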
Azure Notes:
- Premium (SSD) and Standard (HDD) storage options are available, but the Instance Types that support Premium storage have lower throughput limits
- More disks are needed per host than the number of Segments the Instance Type can support (based on memory and CPU)
- Software RAID 0 is needed to reduce the number of mounts, but software RAID negatively impacts performance (a minimal sketch follows these notes)
  - The larger the number of disks in the software RAID, the worse the performance
  - The size of each disk impacts performance
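Since software RAID 0 is mentioned above, here is a minimal sketch of striping four data disks into a single mount. The device names and mount point are assumptions, and the filesystem and mount options from the Installation Guide still apply (they are omitted here):

# Stripe 4 disks into one RAID 0 device and present it as a single /data1 mount
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo mkfs.xfs /dev/md0
sudo mkdir -p /data1
sudo mount /dev/md0 /data1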
For Azure, much more testing was needed to optimize each instance type than for AWS or GCP.
For GCP, disk performance is relatively low for every instance type. The number and size of the disks do not impact throughput. The type of disk has a small impact on performance, with SSD slightly faster. HDD performance of 120 MB/s write and 180 MB/s read means GCP cannot handle very many Segments per host.
How to Test Disk Performance
With the database and pgBouncer stopped, execute gpcheckperf with the “-r ds” flags to test the disk performance. Be sure to use the “-d” option for every /data[1-4] volume in the cluster.
[gpadmin@sdw1 ~]$ gpcheckperf -h sdw1 -r ds -D -d /data1 -d /data2 -d /data3 -d /data4
--------------------
-- DISK WRITE TEST
--------------------
--------------------
-- DISK READ TEST
--------------------
--------------------
-- STREAM TEST
--------------------
====================
== RESULT
====================
disk write avg time (sec): 591.27
disk write tot bytes: 515224109056
disk write tot bandwidth (MB/s): 831.02
disk write min bandwidth (MB/s): 831.02 [sdw1]
disk write max bandwidth (MB/s): 831.02 [sdw1]
-- per host bandwidth --
disk write bandwidth (MB/s): 831.02 [sdw1]
disk read avg time (sec): 585.57
disk read tot bytes: 515224109056
disk read tot bandwidth (MB/s): 839.11
disk read min bandwidth (MB/s): 839.11 [sdw1]
disk read max bandwidth (MB/s): 839.11 [sdw1]
-- per host bandwidth --
disk read bandwidth (MB/s): 839.11 [sdw1]
stream tot bandwidth (MB/s): 16965.96
stream min bandwidth (MB/s): 16965.96 [sdw1]
stream max bandwidth (MB/s): 16965.96 [sdw1]
-- per host bandwidth --
stream bandwidth (MB/s): 16965.96 [sdw1]
Thorough testing for multiple configurations (especially Azure) has been performed to maximize performance for each instance type.
TPC-DS Test
This test creates a database schema, loads the data, executes 99 decision-support queries, and then executes those same 99 queries in random order across 5 concurrent sessions. The test simulates real-world activity in the database with queries ranging from simple to rather complex.
The test produces a score that can be used to quickly compare different configurations. Comparing the execution times for each step (load, single-user queries, and 5 concurrent sessions) was also helpful when comparing results.
This test was helpful in validating performance and optimizing the number of Segments per host. In some tests, loading and single-user query performance improved with more Segments per host, but performance then faltered in the concurrency test.
This test was instrumental in determining the best configuration for GCP. GCP's pricing for VMs is based on the number of vCPUs, so (2) 8 vCPU machines cost the same as (1) 16 vCPU machine. VMware Tanzu Greenplum pricing is also based on the number of cores.
- It is not the number of nodes that is important, but the number of cores deployed
Test: (16) n1-highmem-64 vs (128) n1-highmem-8, with 512 cores in each cluster at the same cost.

| Test | Loading | 1 User's Queries | 5 Users' Queries | Overall |
| --- | --- | --- | --- | --- |
| n1-highmem-8 | 64.47% Faster | 11.99% Faster | 18.81% Faster | 22.86% Faster |
Similar comparisons were performed in Azure and AWS to optimize the number of Segments per host. This also impacted memory allocation, and for Azure, it meant trying different disk configurations.
More information on TPC-DS: https://github.com/pivotalguru/TPC-DS
Summary
VMware is committed to making Greenplum run anywhere, and the AWS, Azure, and GCP Marketplaces make it easy for customers to deploy Greenplum in the cloud with the confidence that it is as FAST as possible. If you run in one of these clouds, leveraging these Marketplace offerings will net you the best Greenplum experience possible.
Links:
- AWS BYOL and Hourly
- Azure BYOL and Hourly
- GCP BYOL and Hourly
Notes:
- Hourly is billed by the hour and includes VMware Tanzu Support via email.