From: "Guido Barosio" <gbarosio(at)gmail(dot)com>
To: "Jim Nasby" <jim(at)nasby(dot)net>
Cc: "Christopher Browne" <cbbrowne(at)acm(dot)org>, pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-11 00:40:58
Message-ID: f7f6b4c70603101640j51b29cb3uda58aaa1556f2b51@mail.gmail.com
Lists: pgsql-advocacy
Well,

This is a WTF case, but a year ago a request arrived from the
Develociraptors to the DBA team.

They needed a 2 terabyte db [with a particular need, read on]. They
benchmarked both mysql and postgresql, and believe me, it was funny,
because the DBA team refused to support the idea and left the funny and
wild developmentiraptors on their own.

The result? A script creating more or less 40,000 tables (oh yeah, like
the foo$i one) on a mysql db, making it almost impossible to browse. It
is live, currently in a beta stage, but frozen due to the lack of
support. (without the DBA team's support, again)

Lovely! But you never know with these things, you neeever know.
note: I've created 250k tables in 63 minutes using the perl script from a
previous post, on my own workstation. (RH3, short on ram, average CPU,
and a crappy drive bought used on ebay.com and shipped from the north to
the south)
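The perl script itself isn't quoted in this thread, so as a rough
illustration only, here is a minimal sketch of what such a bulk
table-creation run could look like, written in Python with psycopg2;
the dbname, table shape, and count are assumptions, not the original
script:

    import psycopg2

    N = 250000  # hypothetical count, matching the 250k note above

    conn = psycopg2.connect("dbname=test")  # assumed connection string
    conn.autocommit = True                  # commit each CREATE on its own
    cur = conn.cursor()
    for i in range(1, N + 1):
        # Table names can't be bound as query parameters, so inline them.
        cur.execute("CREATE TABLE foo%d (a integer)" % i)
    conn.close()

With autocommit on, each CREATE TABLE commits by itself, which is the
simplest way to mimic running the statements one at a time.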
g.-
On 3/11/06, Jim Nasby <jim(at)nasby(dot)net> wrote:
>
> I can't believe y'all are burning cycles on this. :P
>
> On Mar 9, 2006, at 8:04 AM, Christopher Browne wrote:
>
> > A long time ago, in a galaxy far, far away, greg(at)turnstep(dot)com
> > ("Greg Sabino Mullane") wrote:
> >> I kicked this off last night before bed. It ran much quicker than
> >> I expected, given that 27 hour estimate.
> >>
> >> Total time: 23 minutes 29 seconds :)
> >
> > I'm jealous. I've got the very same thing running on some Supposedly
> > Pretty Fast Hardware, and it's cruising towards 31 minutes plus a few
> > seconds.
> >
> > While it's running, the time estimate is...
> >
> > select (now() - '2006-03-09 13:47:49') * 1000000 / (select count(*)
> > from pg_class where relkind='r' and relname ~ 'foo');
> >
> > That pretty quickly converged to 31:0?...
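(For a sense of how that extrapolation works, with illustrative mid-run
numbers rather than figures from the actual run: if 500,000 of the
million foo tables exist 15.5 minutes in, the query yields
15.5 min * 1,000,000 / 500,000 = 31 min, in line with the ~31-minute
total above.)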
> >
> >> Maybe I'll see just how far PG *can* go next. Time to make a
> >> PlanetPG post, at any rate.
> >
> > Another interesting approach to it would be to break this into several
> > streams.
> >
> > There ought to be some parallelism to be gained, on systems with
> > multiple disks and CPUs, by having 1..100000 go in parallel to 100001
> > to 200000, and so forth, for (oh, say) 10 streams. Perhaps it's
> > irrelevant parallelism; knowing that it helps/hurts would be nice...
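As a sketch of what that multi-stream idea could look like (same hedged
Python as the earlier sketch; the stream count, ranges, and connection
string are assumptions):

    import multiprocessing

    import psycopg2

    def create_range(lo, hi):
        # Each stream uses its own connection, and so its own backend.
        conn = psycopg2.connect("dbname=test")  # assumed connection string
        conn.autocommit = True
        cur = conn.cursor()
        for i in range(lo, hi):
            cur.execute("CREATE TABLE foo%d (a integer)" % i)
        conn.close()

    if __name__ == "__main__":
        total, streams = 1000000, 10
        step = total // streams
        # Stream 0 creates foo1..foo100000, stream 1 foo100001..foo200000, ...
        ranges = [(i * step + 1, (i + 1) * step + 1) for i in range(streams)]
        with multiprocessing.Pool(streams) as pool:
            pool.starmap(create_range, ranges)

Whether it helps is exactly the open question above: every stream still
inserts rows into the same system catalogs (pg_class, pg_attribute) and
writes to the same WAL, so the streams may serialize there instead of
scaling with the extra disks and CPUs.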
> > --
> > (format nil "~S(at)~S" "cbbrowne" "cbbrowne.com")
> > http://linuxfinances.info/info/rdbms.html
> > Where do you want to Tell Microsoft To Go Today?
> >
>
> --
> Jim C. Nasby, Database Architect decibel(at)decibel(dot)org
> Give your computer some brain candy! www.distributed.net Team #1828
>
> Windows: "Where do you want to go today?"
> Linux: "Where do you want to go tomorrow?"
> FreeBSD: "Are you guys coming, or what?"
>
--
/"\ ASCII Ribbon Campaign .
\ / - NO HTML/RTF in e-mail .
X - NO Word docs in e-mail .
/ \ -----------------------------------------------------------------