| From: | Alexy Khrabrov <deliverable(at)gmail(dot)com> | 
|---|---|
| To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> | 
| Cc: | "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org | 
| Subject: | Re: two memory-consuming postgres processes | 
| Date: | 2008-05-02 20:26:47 | 
| Message-ID: | 72E02D29-848B-467A-AE6B-401568010254@gmail.com | 
| Lists: | pgsql-performance | 
On May 2, 2008, at 1:13 PM, Tom Lane wrote:
> I don't think you should figure on more than 1GB being
> usefully available to Postgres, and you can't give all or even most of
> that space to shared_buffers.
So how should I divide, say, 512 MB between shared_buffers and, um, what else?  (new to pg tuning :)
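For concreteness, I take it the division is over settings roughly like these in postgresql.conf -- the parameter names are the ones I've seen mentioned for tuning, and the values below are placeholders I made up, not a proposal:

    shared_buffers = 256MB            # some slice of the 512 MB for the buffer cache
    work_mem = 16MB                   # per-sort / per-hash memory, used per operation
    maintenance_work_mem = 128MB      # used by VACUUM, CREATE INDEX, ALTER TABLE
    effective_cache_size = 1GB        # a hint about the OS cache, not an allocation

Is that the right set of knobs to be balancing, and if so, in what proportions?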
I naively thought that if I have a 100,000,000-row table of the form (integer, integer, smallint, date) and add a real column to it, it would scroll through memory reasonably fast.  Yet when I had shared_buffers=128 MB, it hung there for 8 hours before I killed it, and now with 1500 MB it is paging again, several hours in with no end in sight.  Why can't it just add the column one row at a time and be done with it soon enough? :)  It takes inordinately long compared to a FORTRAN or even Python program; there's no index usage for this table, just a sequential scan, so why all the paging?
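(The operation itself is of essentially this shape -- the table and column names and the fill value are illustrative, not my real schema:

    -- add the new real column, then populate it for every row
    ALTER TABLE ticks ADD COLUMN delta real;
    UPDATE ticks SET delta = 0.0;   -- the UPDATE is what walks all 100,000,000 rows

so I expected a single dumb sequential pass over the table.)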
Cheers,
Alexy