This is the workflow I have in mind:
1a) pull *all* the data out of a table in chunks (M records per
file, or one big file?) (\copy?, or from inside a scripting language?)
2a) process each file with awk to produce N files that are very
similar to each other (essentially turning them into very simple XML)
3a) gzip them
2b) or use a scripting language to process and gzip them in one
pass, avoiding some disk IO (rough sketch below)
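
Something like this is what I have in mind for the 2b route (a rough
Python/psycopg2 sketch; the chunk size, file names and XML layout are
just placeholders, and real code would need proper XML escaping):

    import gzip
    import psycopg2

    CHUNK = 1000000                      # M records per output file
    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor(name='dump')       # server-side cursor: rows come
    cur.itersize = 10000                 # over in batches, not all at once
    cur.execute("SELECT a, b, c FROM verylargetable")

    out = None
    for i, (a, b, c) in enumerate(cur):
        if i % CHUNK == 0:               # start a new gzipped file every CHUNK rows
            if out:
                out.write("</rows>\n")
                out.close()
            out = gzip.open("chunk_%06d.xml.gz" % (i // CHUNK), "wt")
            out.write("<rows>\n")
        # values are written as-is; real code should escape them for XML
        out.write("  <row><a>%s</a><b>%s</b><c>%s</c></row>\n" % (a, b, c))
    if out:
        out.write("</rows>\n")
        out.close()
    conn.close()

This writes the XML straight into the gzip stream, so there are no
intermediate text files on disk.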
Does PostgreSQL offer any contrib module, or some other technique, to
save some IO (and maybe disk space for temporary results)?
Are there any memory usage implications if I do:
  pg_query("select a,b,c from verylargetable; --no where clause");
vs. the \copy equivalent? Is there any way to avoid them?
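
(For comparison, this is the difference I am worried about, shown with
Python/psycopg2 instead of PHP's pg_query; as far as I understand, the
plain-query pattern buffers the whole result set client-side, while a
server-side cursor fetches it in batches:)

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")

    # plain query: the whole result set is transferred and held in
    # client memory before the first row can be processed
    cur = conn.cursor()
    cur.execute("SELECT a, b, c FROM verylargetable")
    rows = cur.fetchall()                # memory use ~ size of the table

    # server-side (named) cursor: rows arrive in batches of itersize,
    # so memory use stays roughly constant
    scur = conn.cursor(name='stream')
    scur.itersize = 10000
    scur.execute("SELECT a, b, c FROM verylargetable")
    for row in scur:
        pass                             # process each row here

    conn.close()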
thanks
--
Ivan Sergio Borgonovo
http://www.webthatworks.it