From: Steve Crawford <scrawford(at)pinpointresearch(dot)com>
To: Navaneethan R <nava(at)gridlex(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Scaling 10 million records in PostgreSQL table
Date: 2012-10-08 20:09:59
Message-ID: 50733317.9080408@pinpointresearch.com
Lists: pgsql-performance
On 10/08/2012 08:26 AM, Navaneethan R wrote:
> Hi all,
>
> I have 10 million records in my Postgres table. I am running the database on an Amazon EC2 medium instance. I need to access the last week's data from the table.
> The simple query below takes a very long time to process, so it throws a timeout exception.
>
> query is :
> select count(*) from dealer_vehicle_details where modified_on between '2012-10-01' and '2012-10-08' and dealer_id=270001;
>
> After a lot of time it responds 1184 as count
>
> What can I do to increase the performance of this query?
>
> Insertions are also happening in parallel, since the table is updated in real time daily.
>
> What exactly could be the reason for this poor performance?
>
>
What version of PostgreSQL? You can use "select version();". Note
that 9.2 has index-only scans, which can result in a substantial
performance boost for queries of this type.
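For a count restricted by dealer and date range like the one above, a composite index is usually the first thing to try. This is a sketch using the column names from the quoted query; the right index depends on the actual table definition, which is why the structure matters:

```sql
-- Composite index matching the WHERE clause: the equality column
-- (dealer_id) first, then the range column (modified_on).
CREATE INDEX dealer_vehicle_details_dealer_modified_idx
    ON dealer_vehicle_details (dealer_id, modified_on);

-- On 9.2+, a count(*) that touches only these two columns can often be
-- answered by an index-only scan, skipping the heap entirely (provided
-- the visibility map is reasonably current, e.g. after a VACUUM).
```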
What is the structure of your table? You can use "\d+
dealer_vehicle_details" in psql.
Have you tuned PostgreSQL in any way? If so, what?
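Independently of tuning, the actual query plan usually tells the story. A sketch of how to capture it, using the query from the original post:

```sql
-- EXPLAIN ANALYZE executes the query and reports actual timings and
-- row counts; BUFFERS adds shared-buffer hit/read statistics (9.0+).
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM dealer_vehicle_details
WHERE modified_on BETWEEN '2012-10-01' AND '2012-10-08'
  AND dealer_id = 270001;
```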
Cheers,
Steve
Next Message: Navaneethan R, 2012-10-08 20:25:02, Re: Scaling 10 million records in PostgreSQL table
Previous Message: Larry Rosenman, 2012-10-08 19:53:48, Re: Scaling 10 million records in PostgreSQL table