Jacque Istok

  • hi lou – it sounds like your VirtualBox settings for that vm are wonky (or you’re on some super old machine or something). the way I read that, the vm is looking for a 64-bit machine and yours is only 32-bit.

  • using google translate – your best bet is going to be a CTAS (CREATE TABLE AS) to duplicate the table into a new tablespace.
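    A minimal sketch of the CTAS approach above, assuming a target tablespace named new_ts already exists (table names are illustrative):

    ```sql
    -- Duplicate the table's data into a new table stored in another tablespace.
    CREATE TABLE sales_copy TABLESPACE new_ts AS
      SELECT * FROM sales;

    -- Once verified, the tables can be swapped by renaming:
    -- ALTER TABLE sales RENAME TO sales_old;
    -- ALTER TABLE sales_copy RENAME TO sales;
    ```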

  • so first, it would be a duplicate set of data; however, it sounds like you could use an “aggregated” or more user-experience-oriented set of data anyway, so I’m not sure that would be my biggest concern. spark […]

  • well it depends on the use case. both external tables and PCF are going to get you parallel data passing back and forth between greenplum and hdfs, so your limitation is going to be network between the two […]

  • Hi – we have deployments that are on a single server with gigabytes, all the way up to deployments spanning 10s of racks with multiple petabytes. the limits (for loading, analyzing, or exporting) are really dependent on your […]

  • pgadmin should work – in fact, just about anything Postgres compatible should.

  • can you explain why you think it didn’t work?

  • I think we might need a little more info here – can you post the hostnames, ip addresses, and the contents of “select * from gp_segment_configuration” ?

  • As the time draws near to the first annual Greenplum Summit, a conference within a conference at PostgresConf which is taking place in Jersey City in April of this year – I have begun to reflect on all of the t […]

  • the short answer is yes. the longer answer is: what do you mean by “operate on the master node”? When you run a SQL statement from the master, it always operates in parallel, and so does a UDF. do you want to paste […]
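    As a hedged illustration of the point above (table, column, and function names are made up): a simple UDF issued from the master is evaluated by every segment in parallel on its own rows:

    ```sql
    -- An ordinary immutable UDF.
    CREATE FUNCTION c_to_f(c float8) RETURNS float8 AS $$
      SELECT c * 9.0 / 5.0 + 32.0;
    $$ LANGUAGE sql IMMUTABLE;

    -- readings is distributed across the segments, so each segment
    -- applies the function to its local slice of the data in parallel.
    SELECT c_to_f(temp_c) FROM readings;
    ```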

  • Analytics On IaaS Must Think Differently Than Its On-Premises Implementations

    We have always maintained that having a data platform that is portable is not only one of the key differentiators of Greenplum, but […]

  • I’m not sure I completely understand the statement – distributions are how we define where in the cluster the data goes. when it’s random, it will go to 1 of the segments and generally keep all the segments in […]
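    A hedged sketch of the two distribution policies described above (table and column names are hypothetical):

    ```sql
    -- Hash distribution: rows with the same customer_id always land on the same segment.
    CREATE TABLE orders (order_id bigint, customer_id bigint, amount numeric)
      DISTRIBUTED BY (customer_id);

    -- Random distribution: rows are spread round-robin, keeping the segments evenly loaded.
    CREATE TABLE orders_staging (LIKE orders) DISTRIBUTED RANDOMLY;
    ```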

  • I’m not 100% sure, but I would try setting max_statement_mem and restarting the cluster, then setting statement_mem.
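    A minimal sketch of that sequence (the values are illustrative, not recommendations):

    ```sql
    -- From the shell on the master, raise the ceiling and restart:
    --   gpconfig -c max_statement_mem -v 8GB
    --   gpstop -ar
    -- Then set the per-statement value, which must stay below max_statement_mem:
    SET statement_mem = '2GB';
    ```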

  • greenplum will work out of the box with any BI tool – most often I see Tableau, MicroStrategy, Cognos, Power BI, etc. anything that your end users are comfortable with and that you have today will work (via […]

  • I like to think of the external tables similar to a unix pipe. Data is streamed from the source to one or more segments whether it’s a lot or a little and then something can be done with it (insert to a physical […]

  • it’s a pretty varied answer. generally speaking, I personally prefer a dimensional model over something more snowflake’d out because I think it’s easier for our users to understand. it also necessitates less […]

  • binaries are planned to be available shortly for greenplum v5.2 and above.

  • there are examples where GPUs can be leveraged via language extensions (like python), so the short answer is yes.
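    A hypothetical sketch of that idea, assuming the PL/Python extension (plpythonu) is installed on the cluster; swapping numpy for a GPU-backed library such as CuPy on the segment hosts is where the GPU leverage would come in:

    ```sql
    CREATE FUNCTION array_total(vals float8[]) RETURNS float8 AS $$
      import numpy as np  -- hypothetically: import cupy as np for GPU execution
      return float(np.sum(vals))
    $$ LANGUAGE plpythonu;
    ```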

  • the short answer is yes. the longer answer is, can you give me more information..? 🙂
