From: Vincenzo Melandri <vmelandri(at)imolinfo(dot)it>
To: Віталій Тимчишин <tivv00(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Seq scan on 10million record table.. why?
Date: 2012-10-30 18:18:51
Message-ID: CAHSd9GdYiqCMQVuGjZDC3AUybkhtBgRAo5w_bS9=jD46pAtEHg@mail.gmail.com
Lists: pgsql-performance
> 1) Make all types the same
> 2) If you are using some narrow type for big_table (say, int2) to save
> space, you can force narrowing conversion, e.g. "b.key1=ds.key1::int2". Note
> that if ds.key1 has any values that don't fit into int2, you will have
> problems. And of course, substitute your actual type for int2.
>
> Best regards, Vitalii Tymchyshyn
>
This fixed my problem :)
Thanks Vitalii!
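
For the archives, here's roughly what the fix looks like (a minimal
sketch; the table and column names are placeholders, assuming
big_table.key1 is int2 and the import table's key1 is a wider integer
type):

    -- Before: the int2 vs int4 mismatch kept the planner from using
    -- the index on big_table.key1, so it fell back to a seq scan:
    --   ... JOIN big_table b ON b.key1 = ds.key1
    -- After: a narrowing cast makes both sides the same type, so the
    -- index on big_table.key1 becomes usable again.
    SELECT b.*
      FROM import_ds ds
      JOIN big_table b ON b.key1 = ds.key1::int2;

As Vitalii noted, the cast is only safe if every ds.key1 value fits
into int2.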
As for the other suggestions from Gabriele: unfortunately I can't
implement proper data partitioning, because (obviously) it would be
quite a big job and the customer has used up this year's budget, so
unless I choose to work for free... ;)
--
Vincenzo.