From: nikolaus(at)dilger(dot)cc
To: ernest(at)vogelsinger(dot)at
Cc: pgsql-performance(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org
Subject: Re: [PERFORM] Interesting incosistent query timing
Date: 2003-06-17 22:45:38
Message-ID: 20030617154539.8987.h018.c001.wm@mail.dilger.cc.criticalpath.net
Lists: pgsql-general, pgsql-performance
Ernest,
Thanks for providing the additional information that
the table has 2.3 million rows.
During the first execution you spend most of the time scanning the index
id_mdata_dictid_string, and since that index is quite large it takes about
1500 msec to read it from disk into memory. For the second execution the
index is already in memory, so it takes only 10 msec. Once you modify the
data it has to be read from disk again, and the query takes a long time
once more.
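
If you want to see the effect directly, run the identical statement twice
in the same session and compare the "actual time" figures that EXPLAIN
ANALYZE reports. Below is a minimal sketch; the DISTINCT target (objid) and
the WHERE columns (dictid, string) are only guesses derived from your index
name, since the original query text isn't shown here:

  -- Hypothetical query: column names are guessed from the index name
  -- id_mdata_dictid_string; substitute your real statement.

  -- First run: the index pages are fetched from disk, so the reported
  -- "actual time" is large (your 1569 msec case).
  EXPLAIN ANALYZE
  SELECT DISTINCT t1.objid
  FROM rv2_mdata t1
  WHERE t1.dictid = 42
    AND t1.string = 'some value';

  -- Second run of the same statement: the pages now sit in the shared
  -- buffers / Linux buffer cache, so the identical plan reports a much
  -- smaller "actual time" (your 10 msec case).
  EXPLAIN ANALYZE
  SELECT DISTINCT t1.objid
  FROM rv2_mdata t1
  WHERE t1.dictid = 42
    AND t1.string = 'some value';
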
Regards,
Nikolaus
> For the first time run it executes in 1.5 - 2 seconds. From the second
> time, only 10 msec are needed for the same result:
>
> Unique (cost=3.84..3.84 rows=1 width=4) (actual time=1569.36..1569.39 rows=11 loops=1)
>   ->  Sort (cost=3.84..3.84 rows=1 width=4) (actual time=1569.36..1569.37 rows=11 loops=1)
>         ->  Index Scan using id_mdata_dictid_string on rv2_mdata t1 (cost=0.00..3.83 rows=1 width=4) (actual time=17.02..1569.22 rows=11 loops=1)
> Total runtime: 1569.50 msec
>
> Unique (cost=3.84..3.84 rows=1 width=4) (actual time=10.51..10.53 rows=11 loops=1)
>   ->  Sort (cost=3.84..3.84 rows=1 width=4) (actual time=10.51..10.51 rows=11 loops=1)
>         ->  Index Scan using id_mdata_dictid_string on rv2_mdata t1 (cost=0.00..3.83 rows=1 width=4) (actual time=0.60..10.43 rows=11 loops=1)
> Total runtime: 10.64 msec
On Tue, 17 Jun 2003 04:54:56 +0200, Ernest E Vogelsinger wrote:
>
> At 04:20 17.06.2003, Nikolaus Dilger said:
> --------------------[snip]--------------------
> >My guess is that the second execution of the query is
> >shorter since the data blocks are cached in memory.
> >When you modify the data then it needs to be read again
> >from disk which is much slower than from memory. The
> >short execution after restarting PostgreSQL seems to
> >indicate that your data is cached in the Linux buffer
> >cache.
> >
> >The only strange thing seems to be that you have so few
> >rows. Are you getting the data from a remote machine?
> >How many bytes does a single row have? Are they really
> >large???
> --------------------[snip]--------------------
>
> What exactly do you mean? This table is quite filled
> (2.3 million rows), but the query results are correct.
>
>
> --
> >O Ernest E. Vogelsinger
> (\) ICQ #13394035
> ^ http://www.vogelsinger.at/
>