| From: | Wells Oliver <wellsoliver(at)gmail(dot)com> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | Alternatives to very large tables with many performance-killing indicies? |
| Date: | 2012-08-16 20:54:23 |
| Message-ID: | CAOC+FBUSg2Lp+O5dcNxe+njGx0yGZ_RpokpgJ95fKbv6GnYO4w@mail.gmail.com |
| Lists: | pgsql-general |
Hey folks, a question. We have a table that's getting large (6 million rows
right now, but hey, no end in sight). It's wide-ish, too: 98 columns.
The problem is that each of these columns needs to be quickly searchable at
the application level, and I'm far too responsible an individual to put 98
indexes on a table. Wondering what you folks have come across in terms of
creative solutions that might be native to Postgres. I could build something
that indexes and caches the data outside of PG, but I wanted to exhaust the
native options first.
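(One native pattern that sometimes comes up for this: collapse the searchable columns into a single hstore column and cover all of them with one GIN index. A sketch, assuming a hypothetical table named `stats` — the table and column names here are made up for illustration:)

```sql
-- Hypothetical sketch: fold the 98 searchable columns into one hstore
-- column so a single GIN index covers lookups on any of them.
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE stats (
    id    bigserial PRIMARY KEY,
    attrs hstore NOT NULL          -- e.g. 'col1=>"abc", col2=>"42", ...'
);

-- One GIN index instead of 98 btree indexes:
CREATE INDEX stats_attrs_gin ON stats USING gin (attrs);

-- Containment queries on any "column" can then use the index:
SELECT * FROM stats WHERE attrs @> 'col42=>"some value"';
```

(Caveat: hstore values are text, so this helps equality/containment searches; range or numeric comparisons would still need expression indexes on the specific keys.)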
Thanks!
--
Wells Oliver
wellsoliver(at)gmail(dot)com
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Merlin Moncure | 2012-08-16 21:00:41 | Re: Alternatives to very large tables with many performance-killing indicies? |
| Previous Message | Steve Crawford | 2012-08-16 19:45:45 | Re: You cannot do PITR with streaming replication - true? |