From: Becky Neville <rebecca(dot)neville(at)yale(dot)edu>
To: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
Cc: Antoine <asolomon15(at)nyc(dot)rr(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: looking for large dataset
Date: 2003-05-03 15:13:17
Message-ID: Pine.LNX.4.44.0305031109280.32758-100000@termite.zoo.cs.yale.edu
Lists: pgsql-performance
If you can create a flat file with some rows, it's pretty easy to
duplicate them as many times as you need to get up to 50k (which, as
previously mentioned, is relatively small).
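For instance (untested, and the table name and file path here are made
up), you could load the file once with COPY and then keep doubling it,
assuming no unique constraints get in the way:

    -- load the flat file once
    COPY mydata FROM '/tmp/rows.txt';

    -- each run doubles the row count; repeat until you're past 50k
    INSERT INTO mydata SELECT * FROM mydata;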
This might not work if you need "real" data, but I started with 67k rows
of real data in my table, copied them to a temp table, updated the 3 key
fields to their previous value plus the max value from the original
table, and inserted the copies back into the original table. (Just to
ensure the new rows had new values for those 3 fields.)
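Something along these lines (untested; "mydata" and the key columns k1,
k2, k3 are invented names, so adjust to your schema):

    -- copy the real rows into a temp table
    SELECT * INTO TEMP tmp_copy FROM mydata;

    -- shift each key field past its current max so the copies
    -- don't collide with the originals
    UPDATE tmp_copy SET
        k1 = k1 + (SELECT max(k1) FROM mydata),
        k2 = k2 + (SELECT max(k2) FROM mydata),
        k3 = k3 + (SELECT max(k3) FROM mydata);

    -- append the shifted copies back onto the original table
    INSERT INTO mydata SELECT * FROM tmp_copy;

Repeating those three steps (with a fresh temp table each time) doubles
the row count on every pass.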
On Sat, 3 May 2003, Christopher Kings-Lynne wrote:
> That's a very small dataset :)
>
> Chris
>
> On 3 May 2003, Antoine wrote:
>
> > I was wondering where I could find a nice large dataset, perhaps 50
> > thousand records or more.
> > --
> > Antoine <asolomon15(at)nyc(dot)rr(dot)com>