Greenplum Command Center (GPCC) is the single application database administrators need to manage and monitor Pivotal Greenplum. In this post I will talk about some new changes that GPCC users should be aware of in the recent 6.0 release of GPCC, which is designed to work with Pivotal Greenplum version 6.
Turn on query history
GPCC collects query performance data and system metrics from a Greenplum cluster in real time and stores the data in its own history database. In the new release, history collection is turned on by default and uses a new set of agents that are native to GPCC, replacing the legacy gpperfmon agents. The new agents perform better and make more historical metrics data available than the old gpperfmon history.
GPCC history captures all queries by default. Users who have no interest in storing the history of quickly running queries can configure GPCC to skip collection for queries shorter than a time threshold. Besides query history, GPCC now also collects system metrics, disk usage history, and pg_log history. The historical data is saved to tables under the gpmetrics schema in the gpperfmon database. Please check the GPCC documentation for a detailed explanation of the tables and schema.
Please be sure to disable the legacy gpperfmon agents; the new agents have less overhead, so it is important to use them rather than the old ones.
When you turn off gpperfmon, please make sure that GPCC history is turned ON. The old data collected by gpperfmon will still be shown in GPCC, and no migration of the gpperfmon data is required. However, if you have existing scripts that read the old gpperfmon tables, some of those tables will no longer be updated; for example, the *_now and *_tail tables are not refreshed once gpperfmon is turned off. You can usually find the data you need in other tables and views under the gpmetrics schema. Please refer to the documentation of that schema's contents to help you revise your scripts to read the refreshed data.
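For instance, a script that used to read gpperfmon's system_history could read GPCC's history instead. The query below is a sketch; the column names ctime, hostname, cpu_user, and cpu_sys are assumptions that should be verified against the gpmetrics schema documentation for your GPCC version.

```sql
-- Average CPU utilization per host over the last hour, read from
-- GPCC's own history rather than gpperfmon's system_history.
SELECT hostname,
       avg(cpu_user + cpu_sys) AS avg_cpu_busy_pct
FROM   gpmetrics.gpcc_system_history
WHERE  ctime >= now() - interval '1 hour'
GROUP  BY hostname
ORDER  BY avg_cpu_busy_pct DESC;
```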
The gpperfmon_install utility can also be replaced by the new GPCC installer, which now performs the gpperfmon_install tasks during GPCC initialization, including creating the gpperfmon database and the gpmon user. There is one exception: gpperfmon_install lets users specify the gpmon password in plain text, and the GPCC installer does not offer that option. Instead, users can pass the "-W" option to type in an initial password interactively (it is not saved anywhere), or omit "-W" to get a default password (saved in the .pgpass file).
GPCC stores all the metrics and query history in tables under a schema named "gpmetrics" in GPDB:
- gpcc_alert_rule — saves alert rules configured on the Admin –> Alerts page.
- gpcc_alert_log — records an event when an alert rule is triggered.
- gpcc_database_history — saves summarized query activity information.
- gpcc_disk_history — saves disk usage statistics for each GPDB host file system.
- gpcc_plannode_history — saves plan node execution statistics for completed queries.
- gpcc_queries_history — saves execution metrics for completed GPDB queries.
- gpcc_system_history — saves system metrics sampled from GPDB segment hosts.
With custom queries, users can retrieve information that GPCC does not show on its web UI. For example, below is a query to find the top 100 queries executed today with the largest number of slices. It can help identify queries that are poorly written or tables that are not properly designed.
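A query along these lines can surface them. This is a sketch: the join keys (tmid, ssid, ccnt), the sliceid and tstart columns, and the exact table shapes are assumptions to verify against the gpmetrics schema documentation for your GPCC version.

```sql
-- Top 100 queries started today, ranked by number of plan slices,
-- counted from the per-plan-node history for each query.
SELECT q.tmid, q.ssid, q.ccnt,
       q.username,
       count(DISTINCT p.sliceid) AS slice_count,
       q.query_text
FROM   gpmetrics.gpcc_queries_history  q
JOIN   gpmetrics.gpcc_plannode_history p
       ON  q.tmid = p.tmid AND q.ssid = p.ssid AND q.ccnt = p.ccnt
WHERE  q.tstart >= date_trunc('day', now())
GROUP  BY q.tmid, q.ssid, q.ccnt, q.username, q.query_text
ORDER  BY slice_count DESC
LIMIT  100;
```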
Now let's see another example, which retrieves the latest 10 alerts triggered:
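A sketch of such a query is below; the column names (ctime, rule_id, id, rule_description, content) are assumptions to check against the gpmetrics schema documentation.

```sql
-- The 10 most recently triggered alerts, joined to the rule that fired.
SELECT l.ctime,
       r.rule_description,
       l.content
FROM   gpmetrics.gpcc_alert_log  l
JOIN   gpmetrics.gpcc_alert_rule r ON r.id = l.rule_id
ORDER  BY l.ctime DESC
LIMIT  10;
```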
For detailed information about the tables, please refer to the GPCC documentation.
Enable Alerts for Proactive Monitoring
Users can configure GPCC to send email alerts for selected events, such as a query running longer than 10 minutes or the number of connections exceeding 100. For a full list of configurable alerts, please refer to the picture below.
Moreover, users may extend GPCC's alerting capability with a script, so that alerts can be relayed over SMS, Slack, or other services. When an alert is triggered, GPCC executes a script named $MASTER_DATA_DIRECTORY/gpmetrics/send_alert.sh, if it exists. GPCC passes the alert information to the script, and in turn the script can forward the message to a specific destination via that destination's API.
Now let’s see an example which relays the alerts to a Slack channel using their webhook.
First, create a webhook on Slack by following the instructions on api.slack.com. Then copy the original send_alert.sh.sample to send_alert.sh and customize it to send the message through the Slack webhook. Please note that the variables in capital letters get their values from the script's caller, the GPCC web server.
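A minimal sketch of such a script is below. The webhook URL is a placeholder, and SUBJECT and SERVERNAME are assumed to be among the variables GPCC exports to the script; check your send_alert.sh.sample for the actual variable names.

```shell
#!/bin/bash
# send_alert.sh -- sketch of relaying a GPCC alert to Slack.
# Placeholder URL: paste the one Slack generated for your webhook.
WEBHOOK_URL="${WEBHOOK_URL:-https://hooks.slack.com/services/T000/B000/XXXX}"

# SUBJECT and SERVERNAME are assumed to be set by the GPCC web server
# when it invokes this script; defaults are only for standalone testing.
SUBJECT="${SUBJECT:-test alert}"
SERVERNAME="${SERVERNAME:-localhost}"

# Build the Slack message payload.
payload=$(printf '{"text": "GPCC alert on %s: %s"}' "$SERVERNAME" "$SUBJECT")
echo "$payload"

# Post it to the webhook (uncomment in the real script):
# curl -s -X POST -H 'Content-Type: application/json' -d "$payload" "$WEBHOOK_URL"
```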
Save it and restart GPCC. Then when the alerts are triggered, users will receive the messages like below on Slack:
Security Enhancements
Recent GPCC releases restrict SSL connections to TLS 1.2 and above for improved security. GPCC also provides a "-W" option to the installer and the gpcc utility to run without a saved password. In addition, GPCC supports Kerberos authentication. If Kerberos authentication is enabled on GPDB, GPCC can be configured to accept connections from Kerberos-authenticated users. Kerberos authentication can be enabled when installing GPCC, or you can use the "gpcc --krbenable" command to enable it after GPCC has been installed.
GPCC can handle the authenticated user’s connection request in one of three modes, called strict, normal, or gpmon-only.
- strict — GPCC has a Kerberos keytab file containing the GPCC service principal and a principal for every GPCC user. If the principal in the client's connection request is in the keytab file, GPCC grants the client access and connects to GPDB using the client's principal name. If the principal is not in the keytab file, the connection request fails.
- normal — The GPCC Kerberos keytab file contains the GPCC principal and may contain principals for GPCC users. If the principal in the client's connection request is in GPCC's keytab file, GPCC uses the client's principal for database connections. Otherwise, GPCC uses the gpmon user for database connections.
- gpmon-only — GPCC uses the gpmon database role for all GPDB connections. No client principals are required in GPCC's keytab file. This option can be used, for example, if GPCC users authenticate with Active Directory and you do not want to maintain client principals in the keytab file.
For more information, please check the GPCC documentation.
Correct Disk Usage Reporting
Disk space usage is one of the key metrics to watch in daily GPDB operations. However, older versions of GPCC have a flaw: in some cases the disk usage is not presented correctly. To demonstrate the problem, let's look at the example configuration below.
As the screenshot shows, we have two data disks: /dev/md0 mounted as /data1, and /dev/sdd mounted as /mnt. A symbolic link /data1/primary points to the directory /mnt/primary.
On GPCC 4.7.0 and earlier versions, the storage status page may look like this:
As you can see, the results do not correctly reflect reality. With the latest GPCC, users see the correct disks and their usage data, and they can also see which data directories are mapped to a specific disk.
By the way, some users get confused about the "GP Master" storage size. That size is actually the sum of the disk space on both the master host and the standby master host. We are going to improve this in future releases to make it clearer.
Workload Management
GPCC has a workload management page that allows users to view and edit their resource groups. Besides resource group editing, GPCC provides some other important features that help users manage their workloads more easily.
Assignment by role
This routes queries to a designated resource group based on the query owner's role.
Assignment by tag
This diverts queries by matching tags. A query tag is a user-defined <name>=<value> pair, set via the gpcc.query_tags parameter in the Greenplum Database session. (Note: it currently does not take effect when set in a command executed with "psql -c".)
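For illustration, a session could tag its queries like this. The tag names and values and the sales table are hypothetical; only gpcc.query_tags is the parameter named above, and the exact tag syntax should be confirmed in the GPCC documentation.

```sql
-- Tag this session's queries; a workload rule matching these tags
-- routes them to the intended resource group.
SET gpcc.query_tags TO 'dept=hr;report=monthly';

-- Queries issued after the SET are matched against tag-based assignments.
SELECT count(*) FROM sales;
```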
Terminate idle connections
Automatically terminates connections that have been idle for a prescribed period of time.
Upgrade More Freely
In the past, users were often blocked from upgrading to a new GPCC release by the prerequisite of upgrading GPDB first. This is no longer the case for GPCC 6.x users: we will make each GPCC 6.x release work with all GPDB 6.x versions. If you don't upgrade GPDB, you may not get some new metrics, but you will still get bug fixes and some new features. For GPCC 4.x users on GPDB 5.19 and above, you can freely upgrade to GPCC 4.8.0 without upgrading GPDB, and hopefully future GPCC 4.x releases will not require a GPDB upgrade either.
We now have our own tile on Pivotal Network; please go there to get the latest release. GPCC can still be downloaded from the GPDB tile, but we will stop uploading it there in the near future.
Besides the above changes, GPCC 4.8.0/6.0.0 also brings many other new features, bug fixes, and performance improvements.