A few questions

From: M Simms <grim(at)argh(dot)demon(dot)co(dot)uk>
To: pgsql-general(at)postgreSQL(dot)org
Subject: A few questions
Date: 1999-07-12 01:30:58
Message-ID: 199907120130.CAA07664@argh.demon.co.uk
Lists: pgsql-general

Hi

I asked these questions a couple of weeks ago and got no response whatsoever,
so I am going to try again.

I have just installed 6.5, and there are some things I cannot find in
the documentation.

1 ) When I use temp tables, is there a way to instruct PostgreSQL to
    keep these in memory rather than on disk, for faster access, or
    does it do this anyway with temp tables?
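
    For concreteness, the pattern I have in mind is roughly this
    (table and column names are invented for the example):

        -- build a per-session scratch table, query it repeatedly, drop it
        CREATE TEMP TABLE scratch (id int4, name text);
        INSERT INTO scratch SELECT id, name FROM customers WHERE active = 't';
        SELECT name FROM scratch WHERE id = 42;  -- ...many such lookups...
        DROP TABLE scratch;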

2 ) Is there an optimal number of updates and inserts to perform
    before vacuuming a database: some kind of formula, based on
    inserts and updates, that indicates when a vacuum would be most
    beneficial? I realise there cannot be an absolute rule for this,
    but a guideline would help, as I don't know if I will need to
    vacuum more than once a day on a busy database.
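
    In other words, on a busy database would a blind nightly vacuum
    from cron, along these lines, be enough (the path and database
    name are purely illustrative):

        # crontab: vacuum and refresh optimiser statistics at 3am nightly
        0 3 * * * /usr/local/pgsql/bin/psql -c 'VACUUM ANALYZE;' mydb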

3 ) Is there a way to instruct PostgreSQL to run a query, such as a
    daily maintenance job, at a lower priority, so that these jobs do
    not impact interactive use? I realise I can renice a process that
    is making calls to the database, but that doesn't have any effect
    on the backend spawned by the postmaster when I connect to it.
    If there is no such functionality, would people be interested in
    it if I were to code it and release it back to the main source
    tree?
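
    The only workaround I can see is to hunt down the spawned backend
    by hand and renice that, roughly as follows (12345 stands in for
    whatever pid ps actually reports):

        # find the backend serving this connection, then lower its priority
        ps auxww | grep postgres
        renice +15 -p 12345    # 12345 = pid of the relevant backend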

4 ) Is there an optimal ratio between the number of backends and the
    number of shared memory buffers? I realise there is a minimum of
    1:2, but do more shared memory buffers improve performance in
    some areas, or does the extra overhead of managing the buffers
    make the increase pointless?
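
    Put another way, given a postmaster started like this (the
    numbers are only examples), is raising -B well beyond the
    two-buffers-per-backend minimum actually a win:

        # 32 backends with the bare 1:2 minimum of 64 buffers...
        postmaster -i -N 32 -B 64 -D /usr/local/pgsql/data
        # ...versus the same 32 backends with 8 buffers apiece
        postmaster -i -N 32 -B 256 -D /usr/local/pgsql/data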

5 ) The final question (I promise): if I have a large number of
    inserts that I generate dynamically, is it quicker for me to
    perform these inserts one by one (maybe 10,000 of them at a
    time), or would it be faster and less CPU-intensive to generate a
    text file instead and then read it in via a single copy command?
    This file may at times hold over 100,000 entries, so if I take
    the route of the copy command, would I be better off splitting it
    into several smaller COPY commands, each its own transaction?
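
    That is, which of these two approaches is cheaper overall (the
    table, rows, and file path are invented for the example):

        -- a hypothetical target table
        CREATE TABLE log_entries (id int4, msg text);

        -- option 1: dynamically generated INSERTs batched in a transaction
        BEGIN;
        INSERT INTO log_entries VALUES (1, 'first row');
        -- ...thousands more INSERTs...
        INSERT INTO log_entries VALUES (10000, 'last row');
        COMMIT;

        -- option 2: dump the rows to a flat file and bulk-load it
        -- (server-side COPY needs database superuser rights; psql's
        --  \copy is the client-side equivalent)
        COPY log_entries FROM '/tmp/log_entries.dat' USING DELIMITERS '|';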

Thanks in advance, and I hope that this time someone will be able
to answer some or all of these questions.

M Simms

PS. Apologies to the person who receives this twice; I hit reply
    instead of group reply to your mail to this list, and so you got
    your own personal copy {:-)
