From: jelle <jellej(at)pacbell(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Questions about 2 databases.
Date: 2005-03-11 21:43:00
Message-ID: Pine.LNX.4.61.0503111323180.24097@localhost.localdomain
Lists: pgsql-performance
On Fri, 11 Mar 2005, Tom Lane wrote:
[ snip ]
> COPY would be my recommendation. For a no-programming-effort solution
> you could just pipe the output of pg_dump --data-only -t mytable
> into psql. Not sure if it's worth developing a custom application to
> replace that.
I'm a programming-effort kind of guy, so I'll try COPY.
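
Roughly what I have in mind for the transfer (just a sketch; "buffer_db",
"master_db", and "session_log" are placeholder names for my setup):

    # stream a table out of the ramdisk db and straight into the master db
    psql -d buffer_db -c "COPY session_log TO STDOUT" \
      | psql -d master_db -c "COPY session_log FROM STDIN"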
>
>> My web app does lots of inserts that aren't read until a session is
>> complete. The plan is to put the heavy insert session onto a ramdisk based
>> pg-db and transfer the relevant data to the master pg-db upon session
>> completion. Currently running 7.4.6.
>
> Unless you have a large proportion of sessions that are abandoned and
> hence never need be transferred to the main database at all, this seems
> like a dead waste of effort :-(. The work to put the data into the main
> database isn't lessened at all; you've just added extra work to manage
> the buffer database.
The insert-heavy sessions average 175 page hits generating XML and 1000
insert/updates, which comprise 90% of the insert/update load; of those, 200
inserts need to be transferred to the master db. The other sessions are
read/cache bound. I'm hoping to get a speed-up from moving the temporary
stuff off the master db and using 1 transaction instead of 175 against the
disk-based master db.
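
The session-completion step would then be a single-transaction batch,
something like (again a sketch; the file and table names are made up, and
\copy keeps the files on the client side):

    -- transfer.sql, run with: psql -d master_db -f transfer.sql
    BEGIN;
    \copy orders from '/tmp/session_1234_orders.copy'
    \copy events from '/tmp/session_1234_events.copy'
    COMMIT;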
Thanks,
Jelle