From: Sam Mason <sam(at)samason(dot)me(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: reducing IO and memory usage: sending the content of a table to multiple files
Date: 2009-04-02 16:27:55
Message-ID: 20090402162755.GM12225@frubble.xen.chris-lamb.co.uk
Lists: pgsql-general
On Thu, Apr 02, 2009 at 11:20:02AM +0200, Ivan Sergio Borgonovo wrote:
> This is the work-flow I've in mind:
>
> 1a) take out *all* data from a table in chunks (M record for each
> file, one big file?) (\copy??, from inside a scripting language?)
What about using cursors here?
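A minimal sketch of what I mean, using the table and columns from your later example and a made-up chunk size of 10000 rows:

    BEGIN;
    DECLARE cur NO SCROLL CURSOR FOR
        SELECT a, b, c FROM verylargetable;
    -- repeat the FETCH until it returns no rows,
    -- writing each batch out to its own file
    FETCH FORWARD 10000 FROM cur;
    CLOSE cur;
    COMMIT;

Your scripting language would just run the FETCH in a loop and stream each batch straight out to the next file, so only one chunk is ever held in memory at a time.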
> 2a) process each file with awk to produce N files very similar each
> other (substantially turn them into very simple xml)
> 3a) gzip them
gzip uses significant CPU time; there are various lighter-weight compression schemes available that may be a better fit, depending on where this data is going.
> 2b) use any scripting language to process and gzip them avoiding a
> bit of disk IO
What disk IO are you trying to save and why?
> Does PostgreSQL offer me any contrib, module, technique... to save
> some IO (and maybe disk space for temporary results?).
>
> Are there any memory usage implication if I'm doing a:
> pg_query("select a,b,c from verylargetable; --no where clause");
> vs.
> the \copy equivalent
> any way to avoid them?
As far as I understand it, a plain SELECT like that will pull the entire result set into the client's memory before your code gets a chance to see any of it. For large datasets this obviously doesn't work well; CURSORs are your friend here.
--
Sam http://samason.me.uk/