From: Arjen van der Meijden <acmmailing(at)tweakers(dot)net>
To: John Beaver <john(dot)e(dot)beaver(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: large tables and simple "= constant" queries using indexes
Date: 2008-04-09 21:21:20
Message-ID: 47FD3350.4060504@tweakers.net
Lists: pgsql-performance
First of all, there is the EXPLAIN ANALYZE output, which is pretty
helpful in PostgreSQL.
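For the query from your mail, that would be:

  EXPLAIN ANALYZE
  SELECT count(*) FROM gene_prediction_view WHERE gene_ref = 523;

If the plan shows a 'Seq Scan on gene_prediction_view' rather than an
index scan on ix_gene_prediction_view_gene_ref, the planner is indeed
reading the whole table.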
My guess is that PostgreSQL decides to do a table scan for some reason.
It might not have enough statistics for this particular table or column
to make a sound decision. What you can try is increasing the statistics
target, which is pretty easy:
ALTER TABLE gene_prediction_view ALTER gene_ref SET STATISTICS 200;
Valid values range from 1 (0?) to 1000; the default is 10, although on
my systems it's usually set to 100. For such a large table, I'd go with 200.
After that, you'll need to re-analyze your table and you can try again.
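Something like this (just a sketch, using the table and column from
your mail):

  ANALYZE gene_prediction_view;

Afterwards you can check what the planner knows about the column
through the pg_stats view:

  SELECT n_distinct, null_frac, most_common_vals
  FROM pg_stats
  WHERE tablename = 'gene_prediction_view' AND attname = 'gene_ref';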
Perhaps ANALYZE should try to establish its own best guess as to how
many samples it should take? The default of 10 is rather limited for
large tables.
Best regards,
Arjen
On 9-4-2008 22:58 John Beaver wrote:
> Hi, I've started my first project with Postgres (after several years of
> using MySQL), and I'm having an odd performance problem that I was
> hoping someone might be able to explain the cause of.
>
> ----My query----
> - select count(*) from gene_prediction_view where gene_ref = 523
> - takes 26 seconds to execute, and returns 2400 (out of a total of 15
> million records in the table)
>
> ---My problem---
> Using a single-column index to count 2400 records that all match one
> constant value doesn't sound like something that should take 26
> seconds. What's the slowdown? Any silver bullets that might fix this?
>
> ----Steps I've taken----
> - I ran VACUUM and ANALYZE
> - I upped shared_buffers to 58384 and raised some of the other
> postgresql.conf values as well. Nothing seemed to help significantly,
> but maybe I missed something that would help specifically for this
> query type?
> - I tried to create a hash index, but gave up after more than 4 hours
> of waiting for it to finish indexing
>
> ----Table stats----
> - 15 million rows; I'm expecting to have four or five times this
> number eventually.
> - 1.5 gigs of hard drive usage
>
> ----My development environment---
> - 2.6 GHz dual-core MacBook Pro with 4 GB of RAM and a 7200 RPM hard
> drive
> - OS X 10.5.2
> - Postgres 8.3 (installed via MacPorts)
>
> ----My table----
>
> CREATE TABLE gene_prediction_view
> (
>   id serial NOT NULL,
>   gene_ref integer NOT NULL,
>   go_id integer NOT NULL,
>   go_description character varying(200) NOT NULL,
>   go_category character varying(50) NOT NULL,
>   function_verified_exactly boolean NOT NULL,
>   function_verified_with_parent_go boolean NOT NULL,
>   function_verified_with_child_go boolean NOT NULL,
>   score numeric(10,2) NOT NULL,
>   precision_score numeric(10,2) NOT NULL,
>   CONSTRAINT gene_prediction_view_pkey PRIMARY KEY (id),
>   CONSTRAINT gene_prediction_view_gene_ref_fkey FOREIGN KEY (gene_ref)
>     REFERENCES sgd_annotations (id) MATCH SIMPLE
>     ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT gene_prediction_view_go_id_fkey FOREIGN KEY (go_id)
>     REFERENCES go_terms (term) MATCH SIMPLE
>     ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT gene_prediction_view_gene_ref_key UNIQUE (gene_ref, go_id)
> )
> WITH (OIDS=FALSE);
> ALTER TABLE gene_prediction_view OWNER TO postgres;
> ALTER TABLE gene_prediction_view OWNER TO postgres;
>
> CREATE INDEX ix_gene_prediction_view_gene_ref
>   ON gene_prediction_view
>   USING btree
>   (gene_ref);