From: | Alfred Perlstein <bright(at)wintelcom(dot)net> |
---|---|
To: | "Igor V(dot) Rafienko" <igorr(at)ifi(dot)uio(dot)no> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Postgres-7.0.2 optimization question |
Date: | 2000-10-13 17:47:02 |
Message-ID: | 20001013104702.N272@fw.wintelcom.net |
Lists: | pgsql-general |
* Igor V. Rafienko <igorr(at)ifi(dot)uio(dot)no> [001013 05:09] wrote:
>
>
> Hi,
>
>
> I've got a slight optimization problem with postgres and I was hoping
> someone could give me a clue as to what could be tweaked.
>
> I have a couple of tables that contain relatively little data (around 500,000
> tuples each), and most operations take an insanely long time to complete. The
> primary keys in both tables are ints (int8, iirc). When I perform a delete
> (with a where clause on a part of a primary key), an strace shows that
> postgres reads the entire table sequentially (lseek() and read()). Since
> each table is around 200MB, things take time.
Postgresql fails to use the index on several of our tables as
well; an 'EXPLAIN <query>' would most likely show that it is
doing a 'sequential scan' instead of using the index.
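For example (the table and column names below are made up, and
I'm recalling the 7.0 EXPLAIN output format from memory, so the
exact text may differ):

    -- see which plan the planner picks for the slow statement
    EXPLAIN DELETE FROM mytable WHERE id = 42;
    -- a line like the following means the whole table gets read:
    --   Seq Scan on mytable  (cost=0.00..12345.67 rows=1 width=6)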
The only workaround I've been able to come up with is to issue a
'set enable_seqscan=off;' SQL statement before running most of my
queries to force postgresql to use an index.
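Something like this, all in the same session (again just a
sketch; the plan line is what I'd expect to see, not a verbatim
capture):

    SET enable_seqscan = off;
    EXPLAIN DELETE FROM mytable WHERE id = 42;
    -- if the index is usable, the plan should now look more like:
    --   Index Scan using mytable_pkey on mytable  (cost=...)
    SET enable_seqscan = on;  -- turn it back on afterwards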
hope this helps,
-Alfred