From: Ernest E Vogelsinger <ernest(at)vogelsinger(dot)at>
To: nikolaus(at)dilger(dot)cc
Cc: pgsql-performance(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org
Subject: Re: [PERFORM] Interesting incosistent query timing
Date: 2003-06-17 23:01:09
Message-ID: 5.1.1.6.2.20030618005902.02e31a38@mail.vogelsinger.at
Lists: pgsql-general pgsql-performance
At 00:45 18.06.2003, nikolaus(at)dilger(dot)cc said:
--------------------[snip]--------------------
>Thanks for providing the additional information that
>the table has 2.3 million rows.
>
>See, during the first execution you spend most of the
>time scanning the index id_mdata_dictid_string. And
>since that one is quite large it takes 1500 msec to
>read the index from disk into memory.
>
>For the second execution you read the large index from
>memory. Therefore it takes only 10 msec.
>
>Once you change the data you need to read from disk
>again and the query takes a long time.
--------------------[snip]--------------------
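(To see the effect Nikolaus describes, it is enough to run the same query
twice in a row and compare the EXPLAIN ANALYZE timings. The sketch below
uses guessed names - the thread only shows the index name
id_mdata_dictid_string, so I am assuming a table mdata with columns dictid
and string:)

    -- first run: the index pages are read from disk (~1500 msec here)
    EXPLAIN ANALYZE
    SELECT * FROM mdata WHERE dictid = 42 AND string = 'foo';

    -- second run, immediately afterwards: the index pages are still
    -- cached, and the same query takes only ~10 msec
    EXPLAIN ANALYZE
    SELECT * FROM mdata WHERE dictid = 42 AND string = 'foo';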
I came to the same conclusion - I installed a cron script that performs a
select against that index on a regular basis (every 3 minutes). After that
even the most complex queries against this huge table go like whoosssshhh ;-)
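For anyone who wants to copy this, the whole workaround is roughly the
following crontab entry (the database name, table, and WHERE clause are
placeholders - any cheap select that walks the index will do):

    # keep id_mdata_dictid_string warm by touching it every 3 minutes
    */3 * * * * psql -d mydb -c "SELECT count(*) FROM mdata WHERE dictid = 42" >/dev/null 2>&1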
It would be interesting to know what one could do to _not_ have to take
this basically clumsy approach...
--
>O Ernest E. Vogelsinger
(\) ICQ #13394035
^ http://www.vogelsinger.at/