From: durumdara <durumdara(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Web + Slicing/Paging datas
Date: 2009-04-23 07:09:39
Message-ID: 49F01433.3090205@gmail.com
Lists: pgsql-general
Hi!
In a mod_python application I want to write a wrapper that handles all PostgreSQL data views with paging/slicing.
For example:
I have 1,500 records, but I want to show only N (e.g. 15) records in the view; the other records are accessible through pager links:
[First, P-2, P-1, P, P+1, P+2, Last]
For example: First, 5, 6, {7}, 8, 9, Last
OK, I can implement this with a count query, and then use the result to set the SELECT's start (OFFSET) and LIMIT parameters.
But I have heard that count(*) is slow in PostgreSQL.
This paging is a typical problem: I have to pay twice for the data.
The first time I touch all the data but only count it; the second time I fetch only the slice of records I need.
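The count-then-slice approach can be sketched like this; SQLite stands in for PostgreSQL here (LIMIT/OFFSET behave the same way), and the `items` table is invented for the example:

```python
import sqlite3

# In-memory stand-in database with 100 sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("row %d" % i,) for i in range(1, 101)])

page, per_page = 3, 15

# Query 1: pay once for the full count.
total = conn.execute("SELECT count(*) FROM items").fetchone()[0]

# Query 2: pay again for just the slice we display.
rows = conn.execute(
    "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
    (per_page, (page - 1) * per_page),
).fetchall()

print(total)       # 100
print(rows[0][0])  # 31 -- first id on page 3
print(len(rows))   # 15
```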
As far as I have seen, some systems with less data do this:
1.)
Insert all records into a temp table.
Check the affected rows (as the count).
Compute the slice of records.
Fetch the sliced records.
Destroy the temp table.
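Solution 1.) can be sketched as follows, again with SQLite standing in for PostgreSQL and invented table names; the count is read back from the temp table rather than from the driver's affected-row counter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("row %d" % i,) for i in range(1, 51)])

page, per_page = 2, 15

# 1. Insert all matching records into a temp table (the costly bulk insert).
conn.execute("CREATE TEMP TABLE page_tmp AS SELECT * FROM items")

# 2. Take the count from the temp table.
total = conn.execute("SELECT count(*) FROM page_tmp").fetchone()[0]

# 3./4. Slice and fetch only the records for this page.
rows = conn.execute(
    "SELECT id, name FROM page_tmp ORDER BY id LIMIT ? OFFSET ?",
    (per_page, (page - 1) * per_page),
).fetchall()

# 5. Destroy the temp table.
conn.execute("DROP TABLE page_tmp")
```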
2.)
Select all records.
Fetch all records.
Drop the elements that are not needed.
Return the needed records.
Close the cursor.
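Solution 2.) looks like this in a sketch (same invented SQLite stand-in); every record crosses the wire, and the slicing happens in the application:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("row %d" % i,) for i in range(1, 41)])

page, per_page = 2, 15

cur = conn.execute("SELECT id, name FROM items ORDER BY id")
all_rows = cur.fetchall()                 # fetch every record
total = len(all_rows)                     # the count comes for free now
start = (page - 1) * per_page
rows = all_rows[start:start + per_page]   # drop what we don't need
cur.close()
```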
Both solutions are slow: 1.) because of storing the records (bulk insert), and 2.) because of fetching records that are not needed (network traffic).
So I want to ask: what do you do when you need paging/slicing of records?
Is the first (count + slicing) solution fast enough for you?
Thanks for your help:
dd