From: | Naresh Soni <jmnaresh(at)gmail(dot)com> |
---|---|
To: | Kevin Grittner <kgrittn(at)ymail(dot)com> |
Cc: | pgsql-admin(at)postgresql(dot)org |
Subject: | Re: Hi Community |
Date: | 2015-02-02 15:14:53 |
Message-ID: | CADg8u6=smhK4+DREEMKUSM+hDnbr2FAGE4t-1vomB383ERsYtw@mail.gmail.com |
Lists: | pgsql-admin |
Hi Kevin,
Thanks for your response.
So you mean Postgres can handle such a huge number of records by default,
without any fine tuning required, except that we will need to use indexes
for searching?
On 02-Feb-2015 8:03 PM, "Kevin Grittner" <kgrittn(at)ymail(dot)com> wrote:
> Naresh Soni <jmnaresh(at)gmail(dot)com> wrote:
>
> > This is my first question on the list. I wanted to ask if
> > Postgres can handle multi-million-row tables? For example, there
> > will be 1 million records per table per day, so 365 million per
> > year.
>
> Yes, I have had hundreds of millions of rows in a table without
> performance problems. If you want to see such a table in action,
> go to the following web site, bring up a court case, and click the
> "Court Record Events" button. Last I knew the table containing
> court record events had about 450 million rows, with no
> partitioning. The total database was 3.5 TB.
>
> http://wcca.wicourts.gov/
>
> > If yes, then please elaborate.
>
> You will want indexes on columns used in the searches. Depending
> on details you have not provided it might be beneficial to
> partition the table. Do not consider partitioning to be some
> special magic which always makes things faster, though -- it can
> easily make performance much worse if it is not a good fit.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
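Kevin's advice above, index the columns you search on and partition only when it fits the workload, can be sketched in SQL. The table and column names below are hypothetical, and the daily range partitioning mirrors the "1 million records per table per day" pattern from the original question; declarative partitioning as shown requires PostgreSQL 10 or later (on the 9.x releases current in 2015 this was done with table inheritance and triggers instead):

```sql
-- Hypothetical events table, range-partitioned by day
-- (PostgreSQL 10+ declarative partitioning syntax).
CREATE TABLE events (
    event_id   bigint GENERATED ALWAYS AS IDENTITY,
    event_time timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (event_time);

-- One partition per day; these must be created ahead of the inserts,
-- typically by a scheduled job or an extension such as pg_partman.
CREATE TABLE events_2015_02_02 PARTITION OF events
    FOR VALUES FROM ('2015-02-02') TO ('2015-02-03');

-- Index the columns your searches actually filter on. An index
-- created on the parent table cascades to partitions in 11+;
-- on version 10 you index each partition individually.
CREATE INDEX ON events_2015_02_02 (event_time);
```

Queries that filter on `event_time` can then skip irrelevant partitions entirely (partition pruning), which is the case where partitioning helps; as Kevin notes, a workload whose queries do not filter on the partition key can easily get slower, not faster.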