From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: Nico Sabbi <nicola(dot)sabbi(at)poste(dot)it>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Configuration tips for very large database
Date: 2015-02-12 23:19:46
Message-ID: CAGTBQpb4_sk1hebZh3pekL=QxgMJCFOC-nxE_LW5_usSWrBjPQ@mail.gmail.com
Lists: pgsql-performance
On Thu, Feb 12, 2015 at 7:38 PM, Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
> Nico Sabbi <nicola(dot)sabbi(at)poste(dot)it> wrote:
>
>> Queries get executed very very slowly, say 20 minutes.
>
>> I'd like to know if someone has already succeeded in running
>> postgres with 200-300M records with queries running much faster
>> than this.
>
> If you go to the http://wcca.wicourts.gov/ web site, bring up any
> case, and click the "Court Record Events" button, it will search a
> table with hundreds of millions of rows. The table is not
> partitioned, but has several indexes on it which are useful for
> queries such as the one that is used when you click the button.
I have a table with ~800M wide rows that serves reporting queries
quite efficiently (usually in seconds).
Of course, the queries don't traverse the whole table; that wouldn't
be efficient. That's probably the key there: don't make your database
process the whole thing every time if you expect it to be scalable.
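As a sketch (the table and column names here are hypothetical, just
to illustrate the point, not your actual schema), an index matching
the report's filter lets the planner read only the relevant slice
instead of all ~800M rows:

    -- Hypothetical reporting table; names are illustrative only.
    CREATE TABLE events (
        id         bigserial   PRIMARY KEY,
        account_id bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        payload    text
    );

    CREATE INDEX events_account_created_idx
        ON events (account_id, created_at);

    -- Touches only one account's recent rows via the index:
    SELECT count(*)
    FROM events
    WHERE account_id = 42
      AND created_at >= now() - interval '7 days';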
What kind of queries are you running that have slowed down?
Post an EXPLAIN ANALYZE so people can diagnose it. It's possibly a
query/indexing issue rather than a hardware one.
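For example (substitute one of your actual slow queries; the query
below is just a placeholder), something like:

    -- BUFFERS shows how much data the query actually reads.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM events
    WHERE created_at >= now() - interval '7 days';

and post the full output.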