From: Kido Kouassi <jjkido(at)gmail(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Read performance on Large Table
Date: 2015-05-21 14:54:44
Message-ID: CA+ebzukaZJVOjw2Vqg0cuCUqbaEEHHuCa-pNbsX6CWAZyga3Fw@mail.gmail.com
Lists: pgsql-admin
Hello Admin,
I am looking to build a database to log activities on my system.
Currently I have the following table:
CREATE TABLE table
(
attr1 bigint,
attr2 text,
attr3 timestamp without time zone,
attr4 integer,
attr5 character varying(50),
attr6 character varying(50),
attr7 character varying(50),
attr8 character varying(50),
attr9 character varying(50),
attr10 character varying(50)
);
Over 3 months the table has grown to 600+ million rows.
We mainly do inserts; we almost never do updates or deletes.
However, the table is not very useful as it stands, because any select we
run takes a very long time (hours) to complete.
Question 1: Is slowness to be expected with such a big table?
Question 2: Is there anything I am not doing that I should do to get
better performance?
I am thinking about 2 solutions (sketched below), but I wanted to ask more
experienced people before implementing them:
1: Break the table into multiple smaller tables, one for each week.
2: Create an index on the timestamp column, since it is the one we use the
most in queries.
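Concretely, something like the following is what I have in mind. This is
only a rough, untested sketch; I use activity_log as a stand-in name for
the table above, and the weekly split would be done with table inheritance
and CHECK constraints so constraint exclusion can skip weeks outside the
WHERE clause:

-- Option 2: index the columns we filter on
CREATE INDEX ON activity_log (attr3);
-- or a combined index matching the kind of query in the note below
CREATE INDEX ON activity_log (attr5, attr3);

-- Option 1: one child table per week, inheriting from the parent,
-- with a CHECK constraint describing which timestamps it holds
CREATE TABLE activity_log_2015w21 (
    CHECK (attr3 >= '2015-05-18' AND attr3 < '2015-05-25')
) INHERITS (activity_log);

With the weekly tables, inserts would also have to be routed to the right
child table, either from the application or with a trigger on the parent.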
Thank you in advance for your help,
--Kido K.
Note:
explain analyse select count(*) from table where attr5='value' and attr3 >
'05/19/2015 00:00:00' and attr3 < '05/21/2015 00:00:00';
It has been running for 30 minutes and counting...