From: Joe Conway <mail(at)joeconway(dot)com>
To: "David F(dot) Skoll" <dfs(at)roaringpenguin(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Large TEXT fields
Date: 2002-08-16 17:06:41
Message-ID: 3D5D3121.30503@joeconway.com
Lists: pgsql-admin
David F. Skoll wrote:
> I'm considering building a database with a table which stores entire
> e-mail messages, up to around 128kB.
>
> - How does PostgreSQL cope with tables with huge amounts of data in a
> TEXT column? Let's say 50,000 rows of e-mail messages with an average
> e-mail size of 20kB, with maybe 5% of the messages over 100kB.
What you describe is not what I would call huge, but of course a lot depends
on your hardware and your typical queries. Why don't you test it?

As an example, I did this (on a Pentium 833, 512 MB RAM, IDE hard drive,
Red Hat 7.3):
1. create table bigtest(f1 int, f2 text);
2. run a script to insert 50000 rows, each with a sequential id and a 130k
   string literal (took about 24 minutes; a sketch of such a script is below)
3. create index bigtest_idx on bigtest(f1); (took about 2 seconds)
4. analyze bigtest; (took about 2 seconds)
5. explain analyze select f1, substr(f2,1,20) from bigtest where f1 = 33333;
explain analyze select f1, substr(f2,1,20) from bigtest where f1 = 33333;
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Index Scan using bigtest_idx on bigtest  (cost=0.00..3.01 rows=1 width=36) (actual time=1.66..1.69 rows=1 loops=1)
   Index Cond: (f1 = 33333)
 Total runtime: 1.83 msec
(3 rows)
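
The step 2 load script itself isn't shown above. Here is a minimal sketch of
an equivalent load done entirely in SQL, assuming a PostgreSQL release that
has generate_series() and repeat() (the original was a separate script, so
treat this as illustrative only):

  -- hypothetical reconstruction of the step 2 data load:
  -- 50000 rows, each with a sequential id and a ~130 kB text value
  insert into bigtest (f1, f2)
  select i, repeat('x', 130 * 1024)
  from generate_series(1, 50000) as s(i);

Keep in mind that a constant string like this compresses extremely well when
it is TOASTed, so real e-mail bodies of the same length will use noticeably
more disk space than this test data.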
HTH,
Joe