Re: poor performance with regexp searches on large tables

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Grzegorz Blinowski" <g(dot)blinowski(at)gmail(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: poor performance with regexp searches on large tables
Date: 2011-08-10 17:17:44
Message-ID: 4E4276E8020000250003FD52@gw.wicourts.gov
Lists: pgsql-performance

Grzegorz Blinowski <g(dot)blinowski(at)gmail(dot)com> wrote:

> the problem is not disk transfer/access but rather the way
> Postgres handles regexp queries.

As a diagnostic step, could you figure out some non-regexp way to
select about the same percentage of rows with about the same
distribution across the table, and compare times? So far I haven't
seen any real indication that the time is spent in evaluating the
regular expressions, versus just loading pages from the OS into
shared buffers and picking out individual tuples and columns from
the table. For all we know, the time is mostly spent decompressing
the 2K values. Perhaps you need to save them without compression.
If they are big enough after compression to be stored out-of-line by
default, you might want to experiment with having them in-line in
the tuple.
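
As a rough sketch of that comparison -- the table and column names
here are placeholders, and you would need to find a non-regexp
predicate that happens to match about the same fraction of rows:

    -- the regexp search under investigation
    EXPLAIN ANALYZE
    SELECT count(*) FROM archive WHERE doc_text ~ 'foo|bar';

    -- a cheap non-regexp filter matching a similar share of rows;
    -- it reads the same pages and detoasts the same values
    EXPLAIN ANALYZE
    SELECT count(*) FROM archive WHERE length(doc_text) > 1000;

If both run in roughly the same time, the regexp engine isn't where
the time is going.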

http://www.postgresql.org/docs/8.4/interactive/storage-toast.html
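
To try the storage experiments, something along these lines (again
with placeholder table and column names):

    -- store out of line but without compression
    ALTER TABLE archive ALTER COLUMN doc_text SET STORAGE EXTERNAL;

    -- or keep the value compressed but in the main tuple when it fits
    ALTER TABLE archive ALTER COLUMN doc_text SET STORAGE MAIN;

Note that SET STORAGE only governs how newly stored values are
handled; existing rows keep their current representation until the
data is reloaded.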

-Kevin
