20 Jun

Greenplum 5.9.0: A Minor but Powerful Release

We have recently released Greenplum 5.9.0. The release includes a good number of exciting features.

This is very impressive for a minor release and an indicator of the pace of value creation for Greenplum users. It is possible thanks to Pivotal's agile development methodologies and close collaboration with the PostgreSQL community.


09 May

A New Era of Greenplum Monitoring & Workload Management – Greenplum Command Center v4

We’re excited to announce the Pivotal Greenplum Command Center v4 release. It is available for download from Pivotal Network for Enterprise Greenplum 5.7 or later.

Quickly identify and troubleshoot problematic queries with the new query monitoring capabilities. New workload management features improve the handling of mixed workloads, system resource management, and SLA support.


Monitor Queries in Real-time

In Command Center v4, query monitoring happens in real time: queries appear on the Query Monitor immediately when they are submitted to GPDB. There is no longer a minimum required runtime for queries to show up on the Query Monitor.

Long- and short-running queries mix on the Query Monitor.

Read More

07 May

Data Tells the Community’s Story – Greenplum Summit Highlights!

It has been a little over two weeks since the first Greenplum Summit wrapped, and it is my humble privilege to share the highlights with you. Jacque Istok, Head of Pivotal Data, wrote an engaging and passionate post before the event commenced. Greenplum Summit was a conference within a conference at PostgresConf, which took place in Jersey City in April 2018. It was where decision makers, data scientists, analysts, DBAs, and developers met to discuss, share, and shape the future of advanced open-source data technologies.
Read More

14 Apr

PLContainer: Customize and Secure Your Procedural Language Runtime

Greenplum is an advanced MPP database that stores and analyzes data in place. Procedural languages are among the analytical tools Greenplum provides: they enable users to write user-defined functions (UDFs) in a variety of languages. For example, Python and R are widely used among data scientists, and Greenplum supports them in the form of plpython and plr.

The implementations of plpython and plr are based on embedded Python and embedded R, where the Python or R code runs in the same process as the GPDB C code. This gives malicious Python or R code the chance to bring the whole GPDB core engine down. Worse, a user could even execute "rm -rf $MASTER_DATA_DIRECTORY" in UDF code to delete all of the data in the database. This is why plpython and plr are called untrusted languages, and why only a DBA can create UDFs in them. As a result, it is quite inconvenient for a data scientist to use Python or R for in-database analysis.
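
To make the risk concrete, here is a minimal sketch of an embedded-Python UDF (the function name and body are illustrative, and the plpythonu language must already be enabled):

CREATE OR REPLACE FUNCTION list_server_dir() RETURNS text AS $$
import os
# This runs inside the database server process, with the OS privileges
# of the database user; it could just as easily remove files.
return ','.join(os.listdir('.'))
$$ LANGUAGE plpythonu;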

To fix this problem, we introduce PLContainer, a Docker-container-based technology that secures and customizes the runtime of Python or R UDFs inside Greenplum. It provides a sandbox environment for the Python or R code, so any malicious operation is confined to the container. For example, UDF code cannot access the file system of the host, CPU and memory resources are bounded separately, and network access is also limited.

The architecture of PLContainer is shown in Figure 1. The GPDB query executor (QE for short) receives the query plan and parses the runtime name from the UDF body. It then looks up the runtime entry by that name in its configuration map, which is loaded from plcontainer_configuration.xml when the first PLContainer UDF is called. Next, the QE creates and starts a Docker container as the computing unit to execute the Python or R code, based on the configuration of the runtime entry. The function body and arguments are then encoded into a request message and sent from the QE to the container to do the real calculation. Finally, the container returns the results to the QE, which continues executing its plan tree.

Figure 1: Architecture of PLContainer

PLContainer is easy to use. We'll illustrate:

  • As a DBA, how to install and manage PLContainer.
  • As a data scientist, how to use PLContainer.

Install PLContainer

  1. Download the PLContainer binary from Pivotal Network
  2. Install the PLContainer package with the gppkg command
    gppkg -i plcontainer-1.1.0-rhel7-x86_64.gppkg
  3. Enable PLContainer as an extension for a database
    psql -d your_database -c "CREATE EXTENSION plcontainer;"

Manage PLContainer

  1. Add Docker images for Python and R. We provide two prebuilt Docker images, one for Python and the other for R. Both include data science packages preinstalled, so data scientists can use numpy, scipy, etc. directly.
    plcontainer image-add -f /home/gpadmin/plcontainer-python-images-1.0.0.tar.gz
    plcontainer image-add -f /home/gpadmin/plcontainer-r-images-1.0.0.tar.gz
  2. Add runtime entries to the PLContainer configuration file. A runtime entry specifies the container parameters, such as the image name, the memory limit, the CPU share, the logging switch, and so on. Data scientists can choose one of these runtimes to run their PLContainer UDFs.
    plcontainer runtime-add -r plc_python_shared -i pivotaldata/plcontainer_python_shared:devel -l python -s use_container_logging=yes
    plcontainer runtime-add -r plc_r_shared -i pivotaldata/plcontainer_r_shared:devel -l r -s use_container_logging=yes
  3. The DBA can review the resulting configuration in the plcontainer_configuration.xml file.

Use PLContainer

Data scientists use PLContainer UDFs to execute Python or R code for data analysis. To create a PLContainer UDF, the user specifies the runtime name in the format "# container: runtime_name" at the beginning of the UDF definition and sets the language type with "LANGUAGE plcontainer" at the end of the UDF definition.

The following example shows how to calculate the base-10 logarithm of each tuple in the table "test".

postgres=# CREATE OR REPLACE FUNCTION pylog10(i integer) RETURNS double precision AS $$
# container: plc_python_shared
import math
return math.log10(i)
$$ LANGUAGE plcontainer;

postgres=# CREATE TABLE test (i int);

postgres=# INSERT INTO test VALUES (10),(100),(1000),(10000);

postgres=# SELECT pylog10(i) FROM test ORDER BY i;

 pylog10
---------
       1
       2
       3
       4
(4 rows)
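
For completeness, a sketch of the equivalent UDF in R, using the plc_r_shared runtime added earlier (the function name is illustrative):

postgres=# CREATE OR REPLACE FUNCTION rlog10(i integer) RETURNS double precision AS $$
# container: plc_r_shared
return(log10(i))
$$ LANGUAGE plcontainer;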


Conclusion

PLContainer enables users to customize and secure the runtime of their Python or R code. Along with the MPP nature of Greenplum, it provides an excellent platform for data scientists to analyze big data in a distributed, secure, and customized way. In the future, we also plan to support PLContainer on PKS and Postgres to make it more extensible.

16 Mar

Greenplum Geospatial Big Data Analytics with PostGIS

It's great to see users leverage the native geospatial query capability of Greenplum and PostGIS to solve real-world problems.

This video discusses the use case of NICT, a department of the national government in Japan, which is helping the country better manage weather and traffic conditions using data analytics:

Shipping and logistics are also great use cases for Greenplum with PostGIS. This nice article showcases how to use open-source geospatial visualization on top of Greenplum for real-world shipping data. Anthony Calamito from Boundless Geospatial says:

In GeoServer, simply create a new Store using the PostGIS type, and enter the machine details for your Greenplum master host (which appears to clients as just another Postgres database). It really is just that simple. With almost no setup time you are off and running with a scalable GIS to meet your geospatial ‘big data’ needs.

And for folks who want to see basic examples of how to query geospatial data with SQL on Greenplum, check out this video:
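
To give a flavor of such queries, here is a minimal hypothetical sketch (the ships table and coordinates are made up, and PostGIS is assumed to be installed):

CREATE TABLE ships (
    ship_id  int,
    name     text,
    position geometry(Point, 4326)
) DISTRIBUTED BY (ship_id);

-- Find ships within 10 km of a given point; casting to geography
-- makes ST_DWithin measure distance in metres
SELECT name
FROM ships
WHERE ST_DWithin(
    position::geography,
    ST_SetSRID(ST_MakePoint(139.77, 35.62), 4326)::geography,
    10000
);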

One of the things I am looking forward to in the future is the ability to store and analyze LIDAR data in Greenplum.

Because of the voluminous nature of LIDAR data, storing and processing it in a big data database like Greenplum makes a ton of sense.

If you want to learn more and do a hands-on tutorial, I recommend the online tutorial from Boundless here.

Working on enterprise software since 2002, and on big data and database management systems since 2007. Started on Greenplum Database in 2009 as a performance engineer and worked in various R&D and support capacities until shifting into product management for the world’s greatest database: Greenplum.

14 Mar

PostgreSQL 9.0 is here in OSS Greenplum

On March 10th, Greenplum/PostgreSQL developer Heikki Linnakangas announced the completion of merging PostgreSQL 9.0 into the OSS Greenplum project:

We’ve completed merging PostgreSQL 9.0 into GPDB master. 9.0 was a relatively straightforward release. There was a bunch of refactoring needed, as there always is, this time e.g. around rewriting of VACUUM FULL in the upstream. See commit message (https://github.com/greenplum-db/gpdb/commit/e5d17790c185217831828169884f992be32502a6) for details.

Putting a PM hat on for a second: we’ve now merged three major releases in total. We did the 8.3 merge in spring 2016. It took about 6 months. Since then, we’ve done a lot of cleanup, refactoring, and we’ve learned a lot on how to do this. We did the 8.4 merge in about 3 months, and the 9.0 merge in a bit under 2 months.

Read More


31 Jan

Data Tells the Story at Greenplum Summit

As the time draws near to the first annual Greenplum Summit, a conference within a conference at PostgresConf taking place in Jersey City in April of this year, I have begun to reflect on all of the things that make an event like this successful.  It includes the venue and the ambiance of the rooms within that venue.  It includes the food and the drinks (caffeinated, alcoholic, and just plain ole hydrating).  It includes the vendors and partners, the quality of their products, and the attraction of their giveaways.  These events take months of effort, and when done correctly, they really kick off the excitement and passion that a community of like-minded individuals can rally around.

And passion isn't something that can be faked.  It's not something you can force.  It comes when you share the same ideas with others who face similar adversity (or opportunity) as you.  It comes when you feel that you're part of a movement that is even bigger than you or what you face on a day-to-day basis.  My colleagues and I at Pivotal carry this passion for a product that has its roots in Postgres.  We carry this passion for our embrace of open source.  We carry this passion for the innovation and power that we bring to our users.  Ultimately, Greenplum Summit is a place where we plan to tell our story.

For more than 10 years, I've personally held this passion, and it grows stronger every day.  Every day I see new data problems that are solved nicely and neatly with our product, and my passion grows.  Every day I see competitive products that blatantly copy our message and direction, and my passion grows.  Every day I see new open source projects pop up that try to emulate our capabilities, and my passion grows.  Greenplum Summit is going to be a great event where I can tell these stories.  But it won't be my story that I tell.  In fact, it won't even be Greenplum's story that I tell.  The real story to be told is one about data – and data tells the story for everyone.

Read More

Head of Data for Pivotal

19 Jan

Greenplum Filespaces and Tablespaces

Greenplum is a fast, flexible, software-only analytics data processing engine with the tools and features needed to make extensive use of the many hardware or virtual environments available for cluster deployment. One of those features, discussed here, is the use of filespaces to match data load and query activity with the underlying I/O volumes that support them. Once a physical filespace is created across the cluster, it is mapped to a logical tablespace, which is then used during table and index creation.
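
A minimal sketch of that flow, assuming a filespace named fastdisk_fs has already been created across the cluster with the gpfilespace utility (all names here are illustrative):

-- Map the physical filespace to a logical tablespace
CREATE TABLESPACE fastdisk FILESPACE fastdisk_fs;

-- Place a new table on that tablespace at creation time
CREATE TABLE daily_sales (
    sale_date date,
    amount    numeric
) TABLESPACE fastdisk
  DISTRIBUTED BY (sale_date);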

Read More

17 Jan

Greenplum 6, Development Updates, Jan 2018

Greenplum v5 launched in September 2017, and the Greenplum developers have been hard at work since then on the next major version, v6, code-named Mars, which is slated for release in September 2018. In this post I will provide some high-level updates on new development on the v6 code line.

Read More


11 Jan

Optimizing Greenplum Performance

Greenplum Database is an MPP relational database based on the Postgres core engine.  It is used for data warehousing and analytics by thousands of users around the world for business-critical reporting, analysis, and data science.

Optimizing the performance of your Greenplum system helps ensure your users are happy and getting the fastest responses to their queries.  Here are the top 5 things you can do to ensure your system is operating at peak performance: Read More
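
The full list is behind the link, but as a small taste of the kind of tuning involved (the sales table here is made up, and these are not necessarily the post's top 5), keeping optimizer statistics fresh and checking for data skew are common first steps:

-- Refresh optimizer statistics so plans reflect the current data
ANALYZE sales;

-- Count rows per segment; a large imbalance suggests a poor
-- distribution key for the table
SELECT gp_segment_id, count(*)
FROM sales
GROUP BY gp_segment_id
ORDER BY count(*) DESC;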
