From: | mark <markkicks(at)gmail(dot)com> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | how to make this database / query faster |
Date: | 2008-03-15 23:21:31 |
Message-ID: | 82fa9e310803151621j5cb1bd05nf04f85d3d8b70363@mail.gmail.com |
Lists: | pgsql-general |
Hi
I use PostgreSQL 8.3 on a dual quad-core Intel Xeon E5405 @ 2.00GHz, Fedora
Core 8 x86_64, with 32GB RAM.
Settings I changed in postgresql.conf:
shared_buffers = 1000MB # min 128kB or max_connections*16kB
effective_cache_size = 4000MB
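For reference, a quick way to confirm these settings are actually in effect after a restart is to check them from psql; a minimal sketch:

-- run in psql; shows the values the server is actually using
SHOW shared_buffers;
SHOW effective_cache_size;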
The users table structure is attached. I have around 2 million rows and am
adding about 10k-30k rows every day.
id is the primary key, and I have an index on session_key.
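Roughly, the relevant index looks like this (the index name is just a guess; the exact DDL is in the attached usertable.sql):

-- hypothetical name; the real definition is in usertable.sql
CREATE INDEX users_session_key_idx ON users (session_key);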
I iterate through the users table like this:
select * from users where session_key is not null order by id offset OFFSET
limit 300
I want to go through the whole table, but it gets really slow (more than 5
minutes) once the OFFSET is over 500,000.
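To be concrete, the full pass looks roughly like this (the OFFSET values are illustrative, stepping by the limit of 300):

-- each query fetches the next 300 rows in id order
select * from users where session_key is not null order by id offset 0 limit 300;
select * from users where session_key is not null order by id offset 300 limit 300;
select * from users where session_key is not null order by id offset 600 limit 300;
-- ...and so on until no rows are returned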
What is the best way to iterate through the whole table? Should I increase
the limit?
Thanks a lot!
Attachment | Content-Type | Size |
---|---|---|
usertable.sql | text/x-sql | 2.9 KB |