
Altered States: Greenplum Alter Table Command by Howard Goldberg

A common question when performing maintenance on Greenplum tables is “Why does my ALTER TABLE add column DDL statement take so long to run?” Although it appears to be a simple command that should execute in minutes, its runtime varies widely depending on the table organization (heap, AO columnar compressed, AO row compressed), the number of range partitions, and the options used in the alter table command.

Depending on the size of the table, a table rewrite triggered by an alter table/column DDL command can take anywhere from minutes to multiple hours. During this time the table holds an ACCESS EXCLUSIVE lock, which can have cascading effects on other ETL processing, and while the rewrite is running there is no easy way to predict its completion time. Please note that since Greenplum supports polymorphic tables, a range-partitioned table can contain all three table organizations within a single parent table; this implies that some child partitions can trigger a rewrite while others may be altered quickly. However, all operations on a range-partitioned table must finish before the DDL operation completes.
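As an illustrative sketch of a polymorphic table (the table and partition names here are hypothetical), per-partition WITH options can mix storage organizations under one parent:

```sql
CREATE TABLE tb_poly (id int, dt date)
DISTRIBUTED BY (id)
PARTITION BY RANGE (dt)
( -- AO columnar compressed child partition
  PARTITION p2019 START ('2019-01-01') END ('2020-01-01')
    WITH (appendonly=true, orientation=column,
          compresstype=zlib, compresslevel=5),
  -- AO row-oriented child partition
  PARTITION p2020 START ('2020-01-01') END ('2021-01-01')
    WITH (appendonly=true, orientation=row),
  -- Heap child partition (the default organization)
  DEFAULT PARTITION other
);
```

An ALTER TABLE add column against tb_poly may rewrite some children but not others, yet the DDL only completes once every child partition has been processed.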

The table below lists, by table organization, the alter DDL commands that trigger a table rewrite operation.

Rewrite matrix

✔ – The entire table is recreated by the alter operation.

No – Only the column named in the operation is affected.

Operation | AO columnar | AO row | Heap (table organization)

When a DDL statement such as “alter table add column” is issued, an implicit transaction is created even if you do not wrap the DDL in an explicit transaction block (BEGIN/COMMIT). An explicit transaction block is recommended when issuing multiple DDL commands that should be treated as a set for rollback purposes. Furthermore, during the execution of the alter statement, Greenplum acquires an ACCESS EXCLUSIVE lock on the target object, which makes it inaccessible until the DDL transaction completes. Because of these implicit transaction controls, a long alter table add column operation can be safely terminated using pg_terminate_backend() or a CTRL-C from the client, and Greenplum will gracefully handle the rollback.
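For example, grouping related DDL statements in an explicit transaction block ensures they commit or roll back as a set (the table and column names below are illustrative):

```sql
BEGIN;
-- Both alters succeed or fail together; each table's ACCESS EXCLUSIVE
-- lock is held until the block ends.
ALTER TABLE tb_sales      ADD COLUMN region_id int DEFAULT 0;
ALTER TABLE tb_sales_hist ADD COLUMN region_id int DEFAULT 0;
COMMIT;  -- or ROLLBACK; to undo both DDL changes
```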

As a best practice for very large tables that would require a table rewrite based on the chart above, it is advisable to create a copy of the table and perform a select/insert-into/rename operation. During this operation, new columns can be added without impacting the original table. Another benefit of this approach is that the new target table will be more optimally organized due to the removal of dead space (MVCC holes), potentially tighter compression, and a reduction in the AO table files that may have accumulated during concurrent loads or insert operations. In addition, with the select/insert technique the size of the target table can be monitored as the operation progresses, and the source table is not locked with an ACCESS EXCLUSIVE lock. Upon completion of the select/insert operation, the original and new table names need to be swapped, which is a quick operation. As a note, large select/insert operations in Greenplum are quite fast, and running out of log space is not an issue as it is with some other DBMSs.
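A minimal sketch of this copy-and-rename approach, using the tb_ao_col table from the figures below (the new column and the name-swap steps are illustrative):

```sql
-- Build the replacement table with the new column already in place;
-- the original table stays readable during the copy.
CREATE TABLE tb_ao_col_new
WITH (appendonly='true', orientation='column',
      compresstype='zlib', compresslevel=5)
AS SELECT col1, col2, 0::int AS col3
   FROM tb_ao_col
DISTRIBUTED BY (col2);

-- Quick name swap once the copy is complete and verified.
ALTER TABLE tb_ao_col     RENAME TO tb_ao_col_old;
ALTER TABLE tb_ao_col_new RENAME TO tb_ao_col;
-- DROP TABLE tb_ao_col_old;  -- after validation
```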

Another cause of a slow “alter table add column” DDL command is a target table with a large number of range partitions. The alter DDL command executes on one partition at a time, so an alter table add column operation can take a significant amount of time on a large range-partitioned table (1,000+ range partitions).
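To gauge this exposure before altering, the number of child partitions can be checked in the pg_partitions catalog view (the table name here is illustrative):

```sql
SELECT schemaname, tablename, count(*) AS partition_cnt
FROM pg_partitions
WHERE tablename = 'tb_sales'
GROUP BY 1, 2;
```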

A Common Mistake

If an “alter table add column” DDL command is executed on an AO compressed table without the “encoding (compresstype=’zlib’,compresslevel=5)” attributes, the column will be added without compression. The Greenplum “alter table add column” command does not inherit the compression attributes from its parent table’s options. In Figure 1, compression was not enabled for col3 when the column was added using:

db1=# alter table tb_ao_col add column col3 int default 1;

Whereas compression on “col4” was enabled using:

db1=# alter table tb_ao_col add column col4 int default 0 encoding (compresstype='zlib',compresslevel=5);

Figure 1: Example of alter / add column

create table tb_ao_col (col1 int, col2 int)
with (appendonly='true', orientation='column', compresstype='zlib', compresslevel=5)
distributed by (col2);
Time: 101.839 ms
alter table tb_ao_col add column col3 int default 1;
Time: 48.285 ms

alter table tb_ao_col add column col4 int default 0 encoding (compresstype='zlib',compresslevel=5);
Time: 46.555 ms

(gpadmin@[local])[admin]> \d+ tb_ao_col
                              Append-Only Columnar Table "public.tb_ao_col"
 Column |  Type   | Modifiers | Storage | Compression Type | Compression Level | Block Size | Description
--------+---------+-----------+---------+------------------+-------------------+------------+-------------
 col1   | integer |           | plain   | zlib             | 5                 | 32768      |
 col2   | integer |           | plain   | zlib             | 5                 | 32768      |
 col3   | integer | default 1 | plain   | none←bad         | 0                 | 32768      |
 col4   | integer | default 0 | plain   | zlib←good        | 5                 | 32768      |
Checksum: t
Has OIDs: no
Options: appendonly=true, orientation=column, compresstype=zlib, compresslevel=5
Distributed by: (col2)

Based on these two commands, col4 is compressed but col3 is not. Customers should review any AO compressed tables that have been altered to add columns, as some columns may have inadvertently been added without compression. The query in Figure 2 can be used to identify compressed AO columnar tables with columns that were added without compression. There are two ways to convert a column from uncompressed to compressed:

  1. Create a copy of the table and perform a select/insert/rename table operation.
  2. Alter the table to add a new column with compression enabled, update the new column with the original column’s data, drop the old column, and rename the new column to the old name. For large range-partitioned tables, the update can be done at the child-partition level to manage resources. A vacuum/analyze is also recommended afterward to update the catalog information.
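Option 2 can be sketched as follows for the uncompressed col3 from Figure 1 (the statement sequence is illustrative; test it on a copy first):

```sql
-- Add a compressed replacement column, copy the data over,
-- then swap the column names.
ALTER TABLE tb_ao_col ADD COLUMN col3_new int DEFAULT 0
    ENCODING (compresstype='zlib', compresslevel=5);
UPDATE tb_ao_col SET col3_new = col3;
ALTER TABLE tb_ao_col DROP COLUMN col3;
ALTER TABLE tb_ao_col RENAME COLUMN col3_new TO col3;
VACUUM ANALYZE tb_ao_col;  -- refresh catalog statistics
```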

Figure 2: Query to find uncompressed columns in an AO table

select
n.nspname as schema,
c.relname as table_nm,
case when c.reloptions[1] like 'compresstype%'
then split_part(c.reloptions[1],'=',2)
when c.reloptions[2] like 'compresstype%'
then split_part(c.reloptions[2],'=',2)
when c.reloptions[3] like 'compresstype%'
then split_part(c.reloptions[3],'=',2)
when c.reloptions[4] like 'compresstype%'
then split_part(c.reloptions[4],'=',2)
when c.reloptions[5] like 'compresstype%'
then split_part(c.reloptions[5],'=',2)
when c.reloptions[6] like 'compresstype%'
then split_part(c.reloptions[6],'=',2)
when c.reloptions[7] like 'compresstype%'
then split_part(c.reloptions[7],'=',2)
else 'none'
end as "Table compresstype",
array_agg(a.attname) as uncompressed_cols,
count(*) as cnt
from pg_class c
inner join pg_attribute a
on a.attrelid = c.oid
inner join pg_type t
on a.atttypid = t.oid
inner join pg_namespace n
on n.oid = c.relnamespace
left outer join pg_catalog.pg_attribute_encoding e
on e.attrelid = a.attrelid
and e.attnum = a.attnum
where array_to_string(c.reloptions, ',') like '%compresstype%'
and a.attnum > 0
and c.relstorage = 'c'
and array_to_string(e.attoptions, ',') like '%none%'
and c.relname not in (select partitiontablename from pg_partitions)
group by 1,2,3
--having count(*) > 10  --Group uncompressed column counts
order by 5 desc, 1,2;

How was this “alter” behavior verified?

Every table in the Greenplum catalog is associated with a disk file. The relfilenode column in the pg_class table associates a relation (table) name with this disk file number. gp_dist_random() is a proprietary Greenplum function that returns the contents of a catalog table from every data segment.

By querying the pg_class table on the relfilenode column, combined with the gp_dist_random() function, simple DDL test cases can be developed to ascertain whether a Greenplum object’s underlying file structure has changed. Please refer to Figure 3 for an example.

Figure 3: Example alter DDL test

create table tb_ao_col (col1 int, col2 int)
with (appendonly='true', orientation='column', compresstype='zlib', compresslevel=5)
distributed by (col2);

-- generate data
insert into tb_ao_col select generate_series(1,500), generate_series(1,500);

-- Acquire file identifiers from the GP catalog

with abc as
(select gp_segment_id, relfilenode
from gp_dist_random('pg_class')
where relname = 'tb_ao_col'
order by 1)
select min( relfilenode) , max(relfilenode) , sum(relfilenode::bigint) from abc;

-- alter command.

--alter table tb_ao_col add column col3 int default 0;
--alter table tb_ao_col alter column col1 type float;
--alter table tb_ao_col add column col3 int default 0 encoding (compresstype='zlib',compresslevel=5)
alter table tb_ao_col add column col3 int default 0;

-- Acquire file identifiers from the GP catalog
-- If the sum(relfilenode) changes from the above query then a new file was created.

with abc as
(select gp_segment_id, relfilenode
from gp_dist_random('pg_class')
where relname = 'tb_ao_col'
order by 1)
select min( relfilenode) , max(relfilenode) , sum(relfilenode::bigint) from abc;

Note: I would like to thank Louis Mugnano, Advisory Data Engineer at Pivotal, who reviewed this article and provided valuable feedback and input.
