From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Clay Luther" <claycle(at)cisco(dot)com>
Cc: "Pgsql-General (E-mail)" <pgsql-general(at)postgresql(dot)org>
Subject: Re: 50K record DELETE Begins, 100% CPU, Never Completes 1 hour later
Date: 2003-09-11 20:26:33
Message-ID: 952.1063311993@sss.pgh.pa.us
Lists: pgsql-general
"Clay Luther" <claycle(at)cisco(dot)com> writes:
> By 32K I meant:
> sort_mem = 32768 # min 64, size in KB
Ah, so really 32M. Okay, that is in the realm of reason. But it would
still be worth your while to investigate whether performance changes if
you kick it up some more notches. If the planner is estimating that you
would need 50M for a hash table, it will avoid hash-based plans with
this setting. (Look at estimated number of rows times estimated row
width in EXPLAIN output to get a handle on what the planner is guessing
as the data volume at each step.)
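For example, given a hypothetical query and plan fragment (the table names
and figures here are invented for illustration):

    EXPLAIN DELETE FROM parent WHERE id IN (SELECT id FROM doomed);

    Hash Join  (cost=1000.00..25000.00 rows=500000 width=100)
      ...

500000 rows at 100 bytes apiece is about 50M of data at that step, which is
more than a 32M sort_mem allows, so the planner would pass over the
hash-based plan there.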
The rationale for keeping sort_mem relatively small by default is that
you may have a ton of transactions each concurrently doing one or several
sorts, and you don't want to run the system into swap hell. But if you
have one complex query to execute at a time, you should consider kicking
up sort_mem just in that session.
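Something along these lines (the 128M value is only an example; pick what
your machine can spare):

    SET sort_mem = 131072;    -- 128M, affects only this session
    DELETE FROM parent WHERE ...;
    RESET sort_mem;           -- back to the postgresql.conf default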
regards, tom lane