From: Alexander Burbello <burbello(at)yahoo(dot)com(dot)br>
To: pgsql-admin(at)postgresql(dot)org
Subject: Exp/Imp data with blobs
Date: 2011-11-09 11:28:37
Message-ID: CAJcRiCUR7hwy=np6LvT=m=3a_0uMMb=yYa7FGA=WR09cfD2kiA@mail.gmail.com
Lists: pgsql-admin pgsql-general
Hi,
In one of my databases, a few columns use the blob datatype. This db holds
around 200MB of data today, and since it is still a development db, I am
replicating its data to another db for testing purposes using pg_dump and
pg_restore.
Exporting the data is pretty fast, about 3~4 minutes, which is acceptable.
However, when I import this data into another db (even on the same
machine), the pg_restore takes around 4 hours. During the process I can
see that most of the time is spent importing the blob records.
So I started to wonder whether I need to adjust this db with different
parameters, or whether this behavior is simply expected when working with
blobs.
Does anyone have suggestions on how I can tune this process?
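One thing worth trying, sketched below (this is not my current command line; "mydb", "mydb_test", and "dump.custom" are placeholder names): dump in the custom format and let pg_restore run several parallel jobs, since each large object is restored as its own TOC entry and parallelism can spread that work across workers.

```shell
# Dump in the custom format (-Fc), which pg_restore can read selectively
# and restore with multiple parallel jobs.
pg_dump -Fc -f dump.custom mydb

# Restore with parallel jobs (-j). Note that --single-transaction is
# incompatible with -j, so each job commits independently.
pg_restore -j 4 -d mydb_test dump.custom
```

The best -j value depends on the machine; on a small Windows box, 2 to 4 jobs is a reasonable starting point.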
Here is the basic info about my environment:
Windows 32-bit;
Postgres 9.1;
shared_buffers = 256M
maintenance_work_mem = 32M
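For reference, these are hypothetical settings one might raise temporarily during a bulk restore (exact values depend on available RAM, and a 32-bit backend limits how far they can go):

```
maintenance_work_mem = 256MB   # speeds up index and constraint rebuilds
checkpoint_segments = 16       # fewer, larger checkpoints during the load
synchronous_commit = off       # acceptable for a throwaway test restore
```

Some people also suggest fsync = off for throwaway restores, but that risks corruption on a crash, so I have left it out of the sketch.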
If you have any other questions, please let me know.
Thank you in advance.
Alex