From:       Bill Moran <wmoran(at)collaborativefusion(dot)com>
To:         "Jesper Krogh" <jesper(at)krogh(dot)cc>
Cc:         pgsql-performance(at)postgresql(dot)org
Subject:    Re: "Slow" query or just "Bad hardware"?
Date:       2008-03-27 17:07:44
Message-ID: 20080327130744.9bef536b.wmoran@collaborativefusion.com
Lists:      pgsql-performance
In response to "Jesper Krogh" <jesper(at)krogh(dot)cc>:
> Hi
>
> I have a table with around 10 million entries. The webpage being rendered
> hits at most 200 records, which are well distributed across the 10M, with
> an average of 2 "references" per entry.
>
> Is there any way to speed this query up beyond what it already is? ... yes,
> running it subsequently it is blazingly fast, but with a view of around
> 200/10M we most often don't hit the same query again.
While all the other advice is good, what you really need to do to address
this is figure out what's in your cache and whether it's the right things.
Once you _know_ that (and aren't just speculating) you can start to use
the solutions that others have suggested to improve on the situation. If
you just start trying things at random, you'll probably figure it out
eventually anyway, but I'm assuming you'll want a direct route.
So, I'm going to repeat something that I say on this mailing list about
twice a month: install MRTG or some equivalent and start graphing critical
database statistics.
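As a concrete example (just a sketch; the exact counters you graph will
depend on your setup), the buffer hit ratio from pg_stat_database is the
kind of number worth feeding to MRTG every few minutes:

  -- overall buffer cache hit ratio for the current database;
  -- graph this over time and watch for dips around the slow queries
  SELECT datname,
         round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
  FROM pg_stat_database
  WHERE datname = current_database();

A sustained drop in that ratio when this query runs slowly is a strong hint
that the working set is being pushed out of cache.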
In your case, install the pg_buffercache addon and use it to track how
much of your shared buffers each table is using. Based on your
description of the problem, I doubt it will take more than a few days
to have a clear view of exactly what's going on (i.e. you'll probably
see table X clearing table Y out of the buffers or something ...)
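Something along these lines (a sketch, adjust to taste; it assumes the
default 8KB block size) will show which relations are sitting in
shared_buffers right now:

  -- top consumers of shared_buffers, by number of 8KB buffers held
  SELECT c.relname,
         count(*) AS buffers,
         pg_size_pretty(count(*) * 8192) AS buffered
  FROM pg_buffercache b
  JOIN pg_class c ON c.relfilenode = b.relfilenode
  JOIN pg_database d ON d.oid = b.reldatabase
                    AND d.datname = current_database()
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 20;

Sample that every few minutes and graph it per table, and the picture of one
table pushing another out of the buffers should fall right out of it.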
From there you can start making all kinds of decisions:
* Do you need more RAM overall?
* Is enough RAM allocated to shared_buffers? (You don't provide any
details on config settings, so I can't guess at this.)
* Are there queries that can be better optimized to not fill up the
cache with data that they don't really need?
* Can switching up storage methods for TEXT fields help you out? (See the
sketch after this list.)
* Are your demands simply too high for what a SAN can provide, and would
you be better off with a big RAID-10 of SCSI disks?
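On the TEXT storage point, the idea (again just a sketch; whether it helps
depends on how wide those fields actually are, and "notes" is a guess at the
relevant column) is to keep large text values out of the main heap so the
rows you scan stay small and cache-friendly:

  -- store oversized values out-of-line in the TOAST table, uncompressed,
  -- so scans of "reference" touch fewer main-heap pages
  ALTER TABLE reference ALTER COLUMN notes SET STORAGE EXTERNAL;

  -- and while you're at it, see how much the server is allowed to cache
  SHOW shared_buffers;

Note that SET STORAGE only affects newly stored values; existing rows keep
their current representation until they are rewritten.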
HTH
> # explain analyze SELECT "me"."created", "me"."created_initials",
> "me"."updated", "me"."updated_initials", "me"."start_time",
> "me"."end_time", "me"."notes", "me"."id", "me"."sequence_id",
> "me"."database", "me"."name", "numbers"."reference_id",
> "numbers"."evidence" FROM "reference" "me" LEFT JOIN "number" "numbers" ON
> ( "numbers"."reference_id" = "me"."id" ) WHERE ( "me"."sequence_id" IN (
> 34284, 41503, 42274, 42285, 76847, 78204, 104721, 126279, 274770, 274790,
> 274809, 305346, 307383, 307411, 309691, 311362, 344930, 352530, 371033,
> 371058, 507790, 517521, 517537, 517546, 526883, 558976, 4894317, 4976383,
> 1676203, 4700800, 688803, 5028679, 5028694, 5028696, 5028684, 5028698,
> 5028701, 5028676, 5028682, 5028686, 5028692, 5028689, 3048683, 5305427,
> 5305426, 4970187, 4970216, 4970181, 4970208, 4970196, 4970226, 4970232,
> 4970201, 4970191, 4970222, 4350307, 4873618, 1806537, 1817367, 1817432,
> 4684270, 4981822, 3172776, 4894299, 4894304, 4700798, 1120990, 4981817,
> 4831109, 4831036, 4831068, 4831057, 4831105, 4831038, 4831044, 4831081,
> 4831063, 4831051, 4831086, 4831049, 4831071, 4831075, 4831114, 4831093,
> 2635142, 4660208, 4660199, 4912338, 4660150, 4662011, 5307782, 4894286,
> 4894292, 4894296, 4894309, 4894313, 1428388, 1932290, 5306082, 2010148,
> 3979647, 4382006, 4220374, 1880794, 1526588, 774838, 1377100, 969316,
> 1796618, 1121046, 4662009, 963535, 5302610, 1121105, 688700, 688743,
> 688836, 688763, 688788, 1056859, 2386006, 2386015, 2386023, 4265832,
> 4231262, 4265743, 5302612, 1121056, 1121090, 1121074, 688659, 688650 ) )
> ORDER BY "numbers"."reference_id";
>                                                        QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------
> Sort (cost=56246.18..56275.20 rows=11606 width=299) (actual
> time=2286.900..2287.215 rows=389 loops=1)
> Sort Key: numbers.reference_id
> -> Nested Loop Left Join (cost=388.48..55462.63 rows=11606 width=299)
> (actual time=475.071..2284.502 rows=389 loops=1)
> -> Bitmap Heap Scan on reference me (cost=388.48..23515.97
> rows=11606 width=191) (actual time=451.245..1583.966 rows=389
> loops=1)
> Recheck Cond: (sequence_id = ANY
> ('{34284,41503,42274,42285,76847,78204,104721,126279,274770,274790,274809,305346,307383,307411,309691,311362,344930,352530,371033,371058,507790,517521,517537,517546,526883,558976,4894317,4976383,1676203,4700800,688803,5028679,5028694,5028696,5028684,5028698,5028701,5028676,5028682,5028686,5028692,5028689,3048683,5305427,5305426,4970187,4970216,4970181,4970208,4970196,4970226,4970232,4970201,4970191,4970222,4350307,4873618,1806537,1817367,1817432,4684270,4981822,3172776,4894299,4894304,4700798,1120990,4981817,4831109,4831036,4831068,4831057,4831105,4831038,4831044,4831081,4831063,4831051,4831086,4831049,4831071,4831075,4831114,4831093,2635142,4660208,4660199,4912338,4660150,4662011,5307782,4894286,4894292,4894296,4894309,4894313,1428388,1932290,5306082,2010148,3979647,4382006,4220374,1880794,1526588,774838,1377100,969316,1796618,1121046,4662009,963535,5302610,1121105,688700,688743,688836,688763,688788,1056859,2386006,2386015,2386023,4265832,4231262,4265743,5302612,1121056,1121090,1121074,688659,688650}'::integer[]))
> -> Bitmap Index Scan on reference_seq_idx
> (cost=0.00..385.58 rows=11606 width=0) (actual
> time=422.691..422.691 rows=450 loops=1)
> Index Cond: (sequence_id = ANY
> ('{34284,41503,42274,42285,76847,78204,104721,126279,274770,274790,274809,305346,307383,307411,309691,311362,344930,352530,371033,371058,507790,517521,517537,517546,526883,558976,4894317,4976383,1676203,4700800,688803,5028679,5028694,5028696,5028684,5028698,5028701,5028676,5028682,5028686,5028692,5028689,3048683,5305427,5305426,4970187,4970216,4970181,4970208,4970196,4970226,4970232,4970201,4970191,4970222,4350307,4873618,1806537,1817367,1817432,4684270,4981822,3172776,4894299,4894304,4700798,1120990,4981817,4831109,4831036,4831068,4831057,4831105,4831038,4831044,4831081,4831063,4831051,4831086,4831049,4831071,4831075,4831114,4831093,2635142,4660208,4660199,4912338,4660150,4662011,5307782,4894286,4894292,4894296,4894309,4894313,1428388,1932290,5306082,2010148,3979647,4382006,4220374,1880794,1526588,774838,1377100,969316,1796618,1121046,4662009,963535,5302610,1121105,688700,688743,688836,688763,688788,1056859,2386006,2386015,2386023,4265832,4231262,4265743,5302612,1121056,1121090,1121074,688659,688650}'::integer[]))
> -> Index Scan using ecn_ref_idx on number numbers
> (cost=0.00..2.74 rows=1 width=108) (actual time=1.794..1.795
> rows=0 loops=389)
> Index Cond: (numbers.reference_id = me.id)
> Total runtime: 2287.701 ms
> (10 rows)
>
> .. subsequent run: 32.367ms
>
> On an X4600 server with 32GB of RAM and an Equalogic iSCSI SAN attached.
>
> Jesper
>
>
> --
> Jesper Krogh
>
>
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
wmoran(at)collaborativefusion(dot)com
Phone: 412-422-3463x4023