Re: is postgres a good solution for billion record data.. what about mySQL?

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: nevita0305(at)hotmail(dot)com
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: is postgres a good solution for billion record data.. what about mySQL?
Date: 2009-10-24 20:34:56
Message-ID: dcc563d10910241334g44af7a9an9d155bee5686e157@mail.gmail.com
Lists: pgsql-general

On Sat, Oct 24, 2009 at 1:46 PM, Tawita Tererei <nevita0305(at)hotmail(dot)com> wrote:
> In addition to this what about MySQL, how much data (records) that can be
> managed with it?

That's really a question for the MySQL mailing lists / forums. I do
know there's an artificial limit somewhere in the low billions, and
that you have to create your table with a special option to get it to
handle more rows.
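
If memory serves, it's a CREATE TABLE option along these lines (the
table and the numbers here are made up purely for illustration; check
the MySQL docs for the actual limits and syntax):

    CREATE TABLE stats_hits (
        hit_time  DATETIME NOT NULL,
        page_id   INT UNSIGNED NOT NULL,
        hits      INT UNSIGNED NOT NULL
    ) ENGINE=MyISAM
      MAX_ROWS=2000000000
      AVG_ROW_LENGTH=32;  -- hints MyISAM to use a bigger row pointer
                          -- so the table can grow past the default limit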

As for pgsql, we use one of our smaller db servers to keep track of
our stats. It's got a 6-disk RAID-10 array of 2TB 5400 RPM SATA
drives (i.e. not that fast, really), and we store about 2.5M rows a
day in it, so over 365 days we'd see roughly 900M rows. Each daily
partition takes about 30 seconds to seq scan; on our faster servers
we can seq scan all 2.5M rows in about 10 seconds.
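
The partitioning itself is just the usual inheritance setup; here's a
stripped-down sketch of what one daily partition looks like (table
and column names are made up for illustration):

    CREATE TABLE stats (
        stat_time  timestamptz NOT NULL,
        host       text        NOT NULL,
        value      bigint      NOT NULL
    );

    -- one child table per day, with a CHECK constraint so the planner
    -- knows which range of timestamps lives in it
    CREATE TABLE stats_2009_10_24 (
        CHECK (stat_time >= '2009-10-24' AND stat_time < '2009-10-25')
    ) INHERITS (stats);

    -- with constraint_exclusion on, a one-day query only seq scans that
    -- day's ~2.5M-row child table instead of the whole history
    SET constraint_exclusion = on;
    SELECT count(*)
      FROM stats
     WHERE stat_time >= '2009-10-24'
       AND stat_time <  '2009-10-25';

Inserts normally go through a trigger or rule on the parent table
that routes each row to the right child.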

PostgreSQL can handle it, but don't expect good performance with a
single 5400 RPM SATA drive or anything.
