From: Marc Mamin <M(dot)Mamin(at)intershop(dot)de>
To: William Nolf <bnolf(at)xceleratesolutions(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: copy/dump database to text/csv files
Date: 2014-07-25 19:53:13
Message-ID: B6F6FD62F2624C4C9916AC0175D56D8828AC2DA6@jenmbs01.ad.intershop.net
Lists: pgsql-general
> This is probably an easy one for most SQL users, but I don't use it very often.
>
> We have a postgres database that was used for an application we no longer use. However, we would
> like to copy/dump the tables to files, text or csv, so we can post them to SharePoint.
>
> COPY seems to be what I want, but it only does one table at a time. There are 204 tables with
> schemaname = 'public'. I need to copy the tables with data to files. I need something like a for
> loop which checks whether each table is populated and, if so, copies it to a tablename.csv file.
>
> Any ideas?
You could do it with plpgsql, or, if your schema is too large, use it to generate a script.
Something like:
DO $$
DECLARE
  test boolean;
  rec  record;
BEGIN
  FOR rec IN SELECT tablename FROM pg_tables WHERE schemaname = 'public'
  LOOP
    -- only emit a command for tables that contain at least one row
    EXECUTE 'SELECT EXISTS (SELECT * FROM public.' || quote_ident(rec.tablename) || ')'
      INTO test;
    IF test THEN
      RAISE NOTICE 'COPY public.% TO ''%.dump''', rec.tablename, rec.tablename;
    END IF;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
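If you want the loop to write the files directly instead of just printing the commands, the same loop can run a dynamic COPY. A minimal sketch, assuming /tmp as a server-writable target directory (an assumption; COPY ... TO writes files on the *server* and needs suitable privileges, so from a client machine use psql's \copy instead):

DO $$
DECLARE
  test boolean;
  rec  record;
BEGIN
  FOR rec IN SELECT tablename FROM pg_tables WHERE schemaname = 'public'
  LOOP
    EXECUTE format('SELECT EXISTS (SELECT * FROM public.%I)', rec.tablename)
      INTO test;
    IF test THEN
      -- server-side CSV export; '/tmp/' is an assumed path, adjust as needed
      EXECUTE format('COPY public.%I TO %L WITH (FORMAT csv, HEADER)',
                     rec.tablename, '/tmp/' || rec.tablename || '.csv');
    END IF;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

format() with %I/%L takes care of identifier and literal quoting, so odd table names are handled safely.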
regards,
Marc Mamin