From: anj patnaik <patna73(at)gmail(dot)com>
To: Melvin Davidson <melvin6925(at)gmail(dot)com>
Cc: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: question
Date: 2015-10-15 18:40:55
Message-ID: CAEQKwSk7nkUr4hvs0FYeLMmZcadvG2=02iLA433uwmZQ6Pv8Mg@mail.gmail.com
Lists: pgsql-general
It's a Linux machine with 8 CPUs. I don't have the other details.
I get "archive member too large for tar format."
Is there a recommended command or set of options for dumping very large
tables? This one has about 150K rows, and roughly half of them contain ~22 MB
of data each.
-bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w > /tmp/dump
pg_dump: [archiver (db)] connection to database "postgres" failed:
fe_sendauth: no password supplied
-bash-4.1$ ./pg_dump -t RECORDER postgres --format=t > /tmp/dump
Password:
pg_dump: [tar archiver] archive member too large for tar format
-bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
-bash: pg_dumpall: command not found
-bash: tmpdb.out-2015101510.gz: Permission denied
-bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
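From the docs, it looks like the custom format (-Fc) is not subject to tar's
per-member size limit, and the backslashes in the output path above were
being eaten by the shell. So I may try something like this next (untested;
paths are placeholders):

# custom format: compressed by default, no tar member-size limit;
# restore later with: ./pg_restore -d postgres /tmp/recorder.dump
./pg_dump -t RECORDER --format=custom -f /tmp/recorder.dump postgres

# whole-cluster dump, compressed, with a timestamped name (forward slashes)
./pg_dumpall | gzip > /tmp/db.out-`date +\%Y\%m\%d\%H`.gz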
Thank you so much for replying and accepting my post to this NG.
On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson <melvin6925(at)gmail(dot)com>
wrote:
> In addition to clarifying exactly what you mean by "a long time" to pg_dump
> 77K rows of your table:
>
> What is your O/S and how much memory is on your system?
> How many CPUs are in your system?
> Also, what is your hard disk configuration?
> What other applications are running simultaneously with pg_dump?
> What are the values of shared_buffers & maintenance_work_mem in
> postgresql.conf?
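>
> One way to check those values is from psql, e.g.:
>
> psql -c "SELECT name, setting FROM pg_settings WHERE name IN ('shared_buffers', 'maintenance_work_mem');"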
>
> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver
> <adrian(dot)klaver(at)aklaver(dot)com> wrote:
>
>> On 10/14/2015 06:39 PM, anj patnaik wrote:
>>
>>> Hello,
>>>
>>> I recently downloaded postgres 9.4 and I have a client application that
>>> runs in Tcl that inserts to the db and fetches records.
>>>
>>> For the majority of the time, the app will connect to the server to do
>>> insert/fetch.
>>>
>>> For occasional use, we want to remove the requirement to have a server
>>> db and just have the application retrieve data from a local file.
>>>
>>> I know I can use pg_dump to export the tables. The questions are:
>>>
>>> 1) is there an in-memory db instance or file based I can create that is
>>> loaded with the dump file? This way the app code doesn't have to change.
>>>
>>
>> No.
>>
>>
>>> 2) does pg support embedded db?
>>>
>>
>> No.
>>
>>> 3) Or is my best option to convert the dump to sqlite and then import the
>>> sqlite file and have the app read that embedded db?
>>>
>>
>> SQLite tends to follow Postgres conventions, so you might be able to use
>> the pg_dump output directly if you use --inserts or --column-inserts:
>>
>> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
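>>
>> For example, something along these lines might work (an untested sketch;
>> table and file names are placeholders):
>>
>> pg_dump --column-inserts --data-only -t RECORDER postgres > /tmp/recorder.sql
>> sqlite3 /tmp/local.db < /tmp/recorder.sql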
>>
>>
>>> Finally, I am noticing pg_dump takes a lot of time to create a dump of
>>> my table. right now, the table has 77K rows. Are there any ways to
>>> create automated batch files to create dumps overnight and do so quickly?
>>>
>>
>> Define long time.
>>
>> What is the pg_dump command you are using?
>>
>> Sure use a cron job.
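>>
>> For example, a crontab entry along these lines (a sketch; the install path
>> and file names are placeholders, and it assumes a ~/.pgpass file so
>> pg_dump does not prompt for a password) would dump the table nightly at
>> 2 AM:
>>
>> 0 2 * * * /usr/pgsql-9.4/bin/pg_dump -Fc -f /backups/recorder.dump postgres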
>>
>>
>>> Thanks for your inputs!
>>>
>>
>>
>> --
>> Adrian Klaver
>> adrian(dot)klaver(at)aklaver(dot)com
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize. Whether or not you
> wish to share my fantasy is entirely up to you.
>