From: Christopher Browne <cbbrowne(at)acm(dot)org>
To: pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-09 14:04:48
Message-ID: 87veunlnnj.fsf@wolfe.cbbrowne.com
Lists: pgsql-advocacy
A long time ago, in a galaxy far, far away, greg(at)turnstep(dot)com ("Greg Sabino Mullane") wrote:
> I kicked this off last night before bed. It ran much quicker than
> I thought, due to that 27 hour estimate.
>
> Total time: 23 minutes 29 seconds :)
I'm jealous. I've got the very same thing running on some Supposedly
Pretty Fast Hardware, and it's cruising towards 31 minutes plus a few
seconds.
While it's running, the time estimate is...
  select (now() - '2006-03-09 13:47:49') * 1000000 /
         (select count(*) from pg_class
           where relkind = 'r' and relname ~ 'foo');
That pretty quickly converged to 31:0?...
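That query is a linear extrapolation: the elapsed time since the run started, scaled by 1,000,000 over the number of matching tables created so far. The same arithmetic can be sketched outside SQL (the sample timestamps and counts below are made up for illustration, not taken from the actual run):

```python
from datetime import datetime, timedelta

TARGET = 1_000_000  # total number of tables the run will create

def projected_total(started, now, created_so_far):
    """Linear extrapolation: scale elapsed time by how far along we are."""
    elapsed = now - started
    return elapsed * (TARGET / created_so_far)

# Hypothetical checkpoint: 500,000 tables done 15m30s into the run
start = datetime(2006, 3, 9, 13, 47, 49)
estimate = projected_total(start, start + timedelta(minutes=15, seconds=30),
                           500_000)
```

Being halfway done at 15m30s projects a 31-minute total, which is how the estimate "converges" as the run proceeds.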
> Maybe I'll see just how far PG *can* go next. Time to make a
> PlanetPG post, at any rate.
Another interesting approach to it would be to break this into several
streams.
There ought to be some parallelism to be gained, on systems with
multiple disks and CPUs, by having tables 1..100000 created in
parallel with 100001..200000, and so forth, for (oh, say) 10 streams.
Perhaps the parallelism turns out to be irrelevant; knowing whether it
helps or hurts would be nice...
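The multi-stream idea above might be sketched like this: partition the table numbers into contiguous ranges and create each range in its own worker process. The table name pattern (`foo<n>`), the database name, and driving it through `psql` are all assumptions for illustration, not details from the thread:

```python
import subprocess
from multiprocessing import Pool

TOTAL = 1_000_000   # tables to create in all
STREAMS = 10        # parallel workers

def ranges(total, streams):
    """Yield (start, end) table-number ranges, e.g. (1, 100000)."""
    step = total // streams
    for i in range(streams):
        yield (i * step + 1, (i + 1) * step)

def create_range(bounds):
    """Create all tables in one range via a single psql script (hypothetical)."""
    start, end = bounds
    sql = "".join(f"CREATE TABLE foo{n} (id integer);\n"
                  for n in range(start, end + 1))
    subprocess.run(["psql", "-q", "-d", "test"], input=sql,
                   text=True, check=True)

if __name__ == "__main__":
    with Pool(STREAMS) as pool:
        pool.map(create_range, ranges(TOTAL, STREAMS))
```

Whether this wins likely depends on where the bottleneck is: if system-catalog updates and WAL writes serialize on one disk, ten streams may just contend with each other.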
--
(format nil "~S(at)~S" "cbbrowne" "cbbrowne.com")
http://linuxfinances.info/info/rdbms.html
Where do you want to Tell Microsoft To Go Today?