Re: Questions about 2 databases.

From: Richard_D_Levine(at)raytheon(dot)com
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: jellej(at)pacbell(dot)net, pgsql-performance(at)postgresql(dot)org, pgsql-performance-owner(at)postgresql(dot)org
Subject: Re: Questions about 2 databases.
Date: 2005-03-11 20:51:07
Message-ID: OFEB461F61.135A85B8-ON05256FC1.00720265@ftw.us.ray.com
Lists: pgsql-performance

> this seems
> like a dead waste of effort :-(. The work to put the data into the main
> database isn't lessened at all; you've just added extra work to manage
> the buffer database.

True from the viewpoint of the server, but not from the viewpoint of
throughput in the client session. The client will have a blazingly fast
session with the buffer database. I'm assuming the buffer database's table
sizes are zero or very small. Constraints will be a problem if there are
PKs or FKs that need to be satisfied on the server but are not adequately
testable in the buffer. That might not be a problem if the full table fits
on the RAM disk, but you still have to worry about two clients inserting
the same PK.
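One 7.4-era way to guard against that duplicate-PK worry is to flush the
buffered rows into a staging table on the main database first, then insert
only the keys that don't already exist. This is just a sketch; the table
mytable, the staging table mytable_stage, and the columns id and payload
are assumed names, and PostgreSQL of this vintage has no ON CONFLICT
clause, hence the NOT EXISTS test:

```sql
-- Hypothetical sketch: mytable_stage holds rows copied from the buffer
-- database. Insert only rows whose primary key is not already present.
INSERT INTO mytable (id, payload)
SELECT s.id, s.payload
FROM mytable_stage s
WHERE NOT EXISTS (SELECT 1 FROM mytable m WHERE m.id = s.id);

TRUNCATE mytable_stage;  -- clear the staging area for the next session
```

Note this still races if two clients flush the same PK at the same
instant; serializing the flush step, or taking a lock on mytable, would be
needed to close that window.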

Rick


Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Sent by: pgsql-performance-owner(at)postgresql.org
To: jellej(at)pacbell(dot)net
cc: pgsql-performance(at)postgresql.org
Subject: Re: [PERFORM] Questions about 2 databases.

03/11/2005 03:33 PM

jelle <jellej(at)pacbell(dot)net> writes:
> 1) on a single 7.4.6 postgres instance does each database have its own WAL
> file or is that shared? Is it the same on 8.0.x?

Shared.

> 2) what's the high performance way of moving 200 rows between similar
> tables on different databases? Does it matter if the databases are
> on the same or separate postgres instances?

COPY would be my recommendation. For a no-programming-effort solution
you could just pipe the output of pg_dump --data-only -t mytable
into psql. Not sure if it's worth developing a custom application to
replace that.
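As a sketch of the COPY route Tom recommends (the table name mytable and
the file path are placeholders), the transfer could be done with one COPY
on each side:

```sql
-- Run against the buffer database: dump the ~200 rows to a flat file.
COPY mytable TO '/tmp/mytable.dat';

-- Run against the main database: load them into the similar table.
COPY mytable FROM '/tmp/mytable.dat';
```

Server-side COPY to or from a file requires superuser privileges; psql's
\copy variant streams the data through the client connection instead and
avoids that requirement.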

> My web app does lots of inserts that aren't read until a session is
> complete. The plan is to put the heavy insert session onto a ramdisk-based
> pg-db and transfer the relevant data to the master pg-db upon session
> completion. Currently running 7.4.6.

Unless you have a large proportion of sessions that are abandoned and
hence never need be transferred to the main database at all, this seems
like a dead waste of effort :-(. The work to put the data into the main
database isn't lessened at all; you've just added extra work to manage
the buffer database.

regards, tom lane

