From: Melvin Davidson <melvin6925(at)gmail(dot)com>
To: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
Cc: anj patnaik <patna73(at)gmail(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: question
Date: 2015-10-15 15:17:43
Message-ID: CANu8FiznG1-4i4yY0dCyRkyT6Xh=N_7=mJpvFk4=MtF_uMgh4A@mail.gmail.com
Lists: pgsql-general
In addition to explaining exactly what you mean by "a long time" to pg_dump
the 77K rows in your table:
What is your O/S, and how much memory is on your system?
How many CPUs are in your system?
Also, what is your hard disk configuration?
What other applications are running simultaneously with pg_dump?
What are the values of shared_buffers & maintenance_work_mem in
postgresql.conf?
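
You can check those last two directly from psql; for example (assuming your
database is named mydb, adjust as needed):

    psql -d mydb -c "SHOW shared_buffers;"
    psql -d mydb -c "SHOW maintenance_work_mem;"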
On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
wrote:
> On 10/14/2015 06:39 PM, anj patnaik wrote:
>
>> Hello,
>>
>> I recently downloaded postgres 9.4 and I have a client application,
>> written in Tcl, that inserts into the db and fetches records.
>>
>> For the majority of the time, the app will connect to the server to do
>> insert/fetch.
>>
>> For occasional use, we want to remove the requirement to have a server
>> db and just have the application retrieve data from a local file.
>>
>> I know I can use pg_dump to export the tables. The questions are:
>>
>> 1) is there an in-memory db instance or file-based one I can create that
>> is loaded with the dump file? This way the app code doesn't have to change.
>>
>
> No.
>
>
>> 2) does pg support embedded db?
>>
>
> No.
>
>> 3) Or is my best option to convert the dump to sqlite and then import the
>> sqlite and have the app read that embedded db.
>>
>
> SQLite tends to follow Postgres conventions, so you might be able to use
> the pg_dump output directly if you use --inserts or --column-inserts:
>
> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
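
If you try that, a data-only dump keeps the Postgres-specific DDL out of the
file. A rough, untested sketch (mydb, mytable, and local.db are placeholder
names; you would create the table in SQLite first, and you may still need to
strip the SET lines pg_dump emits at the top of the file):

    # dump one table as plain INSERT statements, data only
    pg_dump --data-only --inserts -t mytable mydb > mytable.sql
    # feed the INSERT statements to a local SQLite database
    sqlite3 local.db < mytable.sql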
>
>
>> Finally, I am noticing pg_dump takes a lot of time to create a dump of
>> my table. Right now, the table has 77K rows. Are there any ways to
>> create automated batch files to create dumps overnight and do so quickly?
>>
>
> Define "a long time".
>
> What is the pg_dump command you are using?
>
> Sure, use a cron job.
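
For example, a crontab entry along these lines would dump the database every
night at 2:00 AM (untested sketch; the path and database name are
placeholders, and note that % must be escaped as \% inside crontab):

    # nightly custom-format dump, one file per weekday (a week of history)
    0 2 * * * pg_dump -Fc -f /var/backups/mydb_$(date +\%a).dump mydb

The -Fc flag writes pg_dump's custom archive format, which you would restore
with pg_restore rather than psql.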
>
>
>> Thanks for your inputs!
>>
>
>
> --
> Adrian Klaver
> adrian(dot)klaver(at)aklaver(dot)com
>
>
--
*Melvin Davidson*
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.