5 Essential Tips for Managing Greenplum Databases

Greenplum Database is a massively parallel processing (MPP) database designed for high-performance analytics and data warehousing. Like other MPP databases, it requires routine query tuning, resource management, and data protection. In this blog post, we cover five essential guidelines for managing Greenplum effectively.

1. Specify a Distribution Policy

Greenplum can be thought of as an MPP adaptation of PostgreSQL. It enhances PostgreSQL by introducing parallelism into its operations. To achieve this, Greenplum leverages the distribution of data across numerous machines, referred to as segments, to enable parallel query execution. Greenplum provides users with the flexibility to choose from three data distribution methods: hash, random, and replicated distribution.
 
By default, Greenplum employs hash distribution. When users create tables without specifying a distribution method, the system automatically distributes tuples based on the first hashable column. However, it’s advisable for users to explicitly define the distribution policy. When opting for hash distribution, it’s important to select a column that is commonly used in join operations or as a filter condition. Popular choices include primary keys or frequently accessed foreign keys.
 
Scenario: E-commerce database that contains a “Sales” table
To maximize query parallelism (e.g., join operations with related tables such as “Customers” and “Products”), use hash distribution on the “Customer ID” column or a composite key involving both the “Customer ID” and the “Date of Purchase.”

CREATE TABLE "Sales" (
    sale_id SERIAL,
    customer_id NUMERIC,
    customer_name VARCHAR(255),
    product_name VARCHAR(255),
    purchase_date DATE,
    sales_amount DECIMAL(10, 2)
) DISTRIBUTED BY (customer_id);

 

Random distribution is a less commonly used option in Greenplum. With random distribution, data is scattered across all segments in a round-robin fashion. However, no session-level state is maintained, so no particular distribution pattern is guaranteed. Random distribution can be a good choice if no prevalent query patterns rely heavily on specific columns.

 

Scenario: Mobile application that stores a “UserEvents” table
The “UserEvents” table records various user events such as clicks, app launches, and transactions. Events arrive continuously with no particular pattern. To store the user events efficiently and prevent data skew, use random distribution to accommodate varying user event rates.

CREATE TABLE "UserEvents" (
    event_id SERIAL,
    timestamp TIMESTAMP NOT NULL,
    user_id INT NOT NULL,
    event_type VARCHAR(255) NOT NULL
) DISTRIBUTED RANDOMLY;
 
Replicated distribution is another option provided by Greenplum. In this scenario, every segment retains a complete copy of all the tuples in a table. The advantage of maintaining a compact replicated table becomes evident when it’s used in conjunction with a larger table. In such cases, there’s no need to redistribute data across segments via the network. Instead, join operations can be executed directly on the segments in parallel.
 

Scenario: Healthcare organization that stores a “HospitalContacts” table

With a limited number of hospitals and minimal data changes, the objective is to reduce query latency and avoid data redistribution during joins. Using replicated distribution gives every segment a local, complete copy of the table.

CREATE TABLE "HospitalContacts" (
    hospital_name VARCHAR(255) NOT NULL,
    contact_name VARCHAR(100),
    phone_number VARCHAR(20),
    email_address VARCHAR(255),
    specialized_department_contacts TEXT[]
) DISTRIBUTED REPLICATED;

2. Choose the Right Partition Strategy

Data partitioning divides large tables into smaller units known as partitions. Greenplum provides two partitioning strategies: range partitioning, often the go-to option for organizing time-series data, and list partitioning, which creates partitions based on predefined value lists.
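As a small sketch of list partitioning, consider a hypothetical “StoreSales” table partitioned by sales region (the table and region values are illustrative, and the example assumes the PostgreSQL-style declarative partitioning syntax used elsewhere in this post):

CREATE TABLE StoreSales (
    sale_id SERIAL,
    region VARCHAR(20) NOT NULL,
    sale_amount DECIMAL(10, 2)
) DISTRIBUTED BY (sale_id)
  PARTITION BY LIST (region);

-- One partition per predefined region value
CREATE TABLE StoreSales_east PARTITION OF StoreSales FOR VALUES IN ('east');
CREATE TABLE StoreSales_west PARTITION OF StoreSales FOR VALUES IN ('west');

Queries filtering on a specific region can then skip the other partitions entirely.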

The Greenplum query optimizer is aware of the partitioning schemes. It generates query plans that leverage partition elimination, meaning the optimizer may bypass unnecessary partitions when executing queries. Partition elimination may occur during query planning (static partition elimination) or at runtime (dynamic partition elimination).

Combining partitioning with distribution offers a significant advantage for query performance. Data distribution maximizes parallelism, with each segment handling only a fraction of the total tuples. Simultaneously, data partitioning further reduces data volume by excluding irrelevant partitions. In the end, fewer table scans lead to shorter query execution times.

Scenario: Online sales transactions over many years
The “SalesData” table is distributed on the “customer_id” column. This ensures that tuples with the same customer ID are stored on the same segment, which is optimal for queries that involve customer-level analytics. Meanwhile, the table is partitioned by date to manage historical data efficiently. This partitioning scheme is especially helpful for archiving older data and speeding up queries that target a specific time frame.
				
CREATE TABLE SalesData (
    transaction_id SERIAL,
    customer_id INT,
    transaction_date DATE,
    product_id INT,
    order_amount DECIMAL(10, 2)
) DISTRIBUTED BY (customer_id)
  PARTITION BY RANGE (transaction_date);

-- Create partitions for different years
-- (the upper bound of FOR VALUES FROM ... TO is exclusive)
CREATE TABLE SalesData_2021 PARTITION OF SalesData
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');
CREATE TABLE SalesData_2022 PARTITION OF SalesData
    FOR VALUES FROM ('2022-01-01') TO ('2023-01-01');
-- Additional partitions for subsequent years can be created similarly
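To see partition elimination in action, EXPLAIN a query that filters on the partition key. The exact plan output varies by Greenplum version, so none is reproduced here, but the plan should scan only the matching partition:

EXPLAIN SELECT count(*)
FROM SalesData
WHERE transaction_date BETWEEN '2022-03-01' AND '2022-03-31';
-- The plan should show a scan on the 2022 partition only,
-- with the 2021 partition eliminated at planning time.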

				
			

3. Design Reasonable Indexes

Greenplum provides support for a variety of index types, including B-tree, bitmap, GIN, BRIN, hash, and spatial indexing via the PostGIS extension. While indexes offer substantial advantages, they come with associated costs: in addition to consuming extra disk space, they demand regular maintenance to stay current. Nevertheless, when thoughtfully designed, indexes can significantly amplify query performance. This is especially evident in index-only scans, where Greenplum retrieves data directly from an index, bypassing the underlying table. Well-designed indexes should include columns that frequently appear in queries.
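As a small sketch using the earlier “SalesData” example, an index that covers every column a query touches makes an index-only scan possible:

CREATE INDEX idx_customer_date ON SalesData (customer_id, transaction_date);

-- This query references only indexed columns, so it can be
-- satisfied by an index-only scan when the visibility map allows it.
SELECT customer_id, transaction_date
FROM SalesData
WHERE customer_id = 42;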
 
Greenplum offers support for both index scans and index-only scans. Moreover, Greenplum indexes are aware of data partitioning. Alongside the standard index and index-only scans that take place across all segments, Greenplum introduces dynamic index scans and dynamic index-only scans, which are exclusively performed on unpruned partitions.

Scenario: Online sales transactions over many years

Continuing from the “SalesData” example, we create a B-tree index on the order amount column. This enables a dynamic index scan, since the filter criteria include both the transaction date (the partition key) and the order amount (the indexed column).

				
					CREATE INDEX idx_order_amount ON SalesData USING btree (order_amount);

EXPLAIN SELECT product_id FROM SalesData WHERE transaction_date = '2022-01-01' AND order_amount = 200;
				
			

4. VACUUM and ANALYZE Regularly

So far we have discussed the importance of making informed choices regarding data distribution, partitioning, and indexing in Greenplum. For these decisions to truly demonstrate their value, it’s imperative that Greenplum accurately captures these aspects of the data. This means ensuring that the data statistics remain up-to-date.
 
VACUUM and ANALYZE are two maintenance operations, each serving a unique purpose. VACUUM frees up storage space by reclaiming areas occupied by dead tuples. This process is crucial in preventing the database from ballooning in size over time. As part of its operation, VACUUM updates the visibility map, marking a data block as “set” when every row on it is visible to all transactions, and leaving it “clear” otherwise.
 
The visibility information is pertinent for index-only scans. If a data block is “set”, the query can be answered directly from the index without touching the table. If the block is “clear”, the index-only scan must visit the table data to check row visibility.
 
ANALYZE, on the other hand, is responsible for gathering an array of statistics, including column histograms, most common values (MCV), row counts, the number of distinct values (NDV), and correlations between columns. These statistics are crucial in estimating cardinality, selectivity, and data skew. The query optimizer relies on these estimates to determine optimal index utilization, join strategies, filter conditions, and access methods.
 
Typically, ANALYZE is performed automatically by the background auto-analyze daemon, which scrutinizes tables that surpass a predefined threshold of updates and deletes. However, this threshold is seldom met for large tables. Consequently, it becomes important to manually execute ANALYZE, ideally in conjunction with VACUUM, at regular intervals to keep the data statistics accurate.
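A minimal maintenance pass over the hypothetical “SalesData” table might look like this (frequency and scope should be adapted to your workload):

-- Reclaim dead-tuple space and update the visibility map
VACUUM SalesData;

-- Refresh optimizer statistics (histograms, MCVs, NDVs, row counts)
ANALYZE SalesData;

-- Or do both in one pass
VACUUM ANALYZE SalesData;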

5. Query Optimization

Greenplum relies heavily on its query optimizer, a pivotal component tasked with selecting the most efficient route of operations, known as the query plan. To achieve optimal query performance, we need to understand how the Greenplum query optimizer operates.
 
When a query is submitted to Greenplum, it undergoes a parsing and rewriting phase, where basic transformations take place. The optimizer then takes the parsed query and explores a large space of execution possibilities, taking into account various factors such as join strategies, aggregation methods, the presence of indexes, and access methods. This exploration generates multiple potential query execution plans that fulfill the query’s requirements.
 
Greenplum employs a cost-based optimization strategy, where it assigns costs to different operations based on data statistics. The costing guides the optimizer to choose the most efficient plan. As mentioned earlier, maintaining accurate statistics through regular VACUUM and ANALYZE operations is vital for the query optimizer’s effectiveness.
 
Now that we’ve gained a fundamental understanding of the query optimizer’s functioning, we can use the EXPLAIN command to dissect the execution plans. This will provide insights into how Greenplum plans to execute the query and can help identify potential bottlenecks. When query execution takes more time than anticipated, keep an eye out for common issues, which may include:
  • High disk I/O when a significant amount of data is written to disk
  • The plan doesn’t use available indexes
  • Suboptimal join strategies, such as using nested loop joins when hash joins would be more efficient
  • Large intermediate output during joins, sorting, or aggregation
  • High data redistribution or broadcasting
  • Underestimated or overestimated cardinality
  • Lack of parallelism, when execution occurs on the coordinator instead of the segments
  • Filter conditions not pushed down
  • Unnecessary scanning of partitions or full tables
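To investigate such issues, EXPLAIN ANALYZE runs the query and reports actual row counts and timings alongside the plan (shown here against the hypothetical “SalesData” table; output varies by Greenplum version):

EXPLAIN ANALYZE
SELECT customer_id, sum(order_amount)
FROM SalesData
WHERE transaction_date >= '2022-01-01'
GROUP BY customer_id;
-- Compare estimated vs. actual row counts to spot stale statistics,
-- and look for Redistribute or Broadcast Motion nodes moving large row counts.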
At times, altering the query into an equivalent form can assist the query optimizer in generating a more efficient plan. This could involve revising joins, simplifying filter conditions, or decomposing complex queries into multiple manageable queries. If you’re grappling with persistent performance issues, consider reaching out to Greenplum support or the Greenplum community. Query optimization is an ongoing process, often requiring multiple iterations of analysis, adjustment, and testing to achieve the desired query performance.