From: Andy Chambers <achambers(at)mcna(dot)net>
To: pgsql <pgsql-general(at)postgresql(dot)org>
Subject: Re: Batching up data into groups of n rows
Date: 2011-09-27 03:55:09
Message-ID: CAAfW55oh17_1m+77uQ+L+0rtX1_TLcmdC8uHVadL=Byb6bA3PA@mail.gmail.com
Lists: pgsql-general
On Mon, Sep 26, 2011 at 4:22 PM, Marti Raudsepp <marti(at)juffo(dot)org> wrote:
> On Mon, Sep 26, 2011 at 18:59, Andy Chambers <achambers(at)mcna(dot)net> wrote:
>> ...and I want to select the data from that table, adding a "group_id"
>> column, and a "record_id" column. The "group_id" would start at 1,
>> and increment by 1 every 100 rows, and the "record_id" would increment
>> by 1 every row, but restart at 1 for each new group_id.
>
> I can't see why you would want this, but regardless...
The addresses need to be sent to a third-party web service for
canonicalization. The web service accepts batches of <100 addresses.
I was wondering how I'd get Postgres to generate the XML for sending
100 addresses at a time to this web service.
> Since you didn't list a primary key for the table, I'm using "ctid" as
> a substitute. But that's a PostgreSQL-specific hack, normally you
> would use the real primary key instead.
>
> update addresses
>   set group_id = (nr / 100) + 1, record_id = (nr % 100) + 1
>   from (select ctid, row_number() over () - 1 as nr from addresses) as subq
>   where subq.ctid = addresses.ctid;
Cool! I don't need to actually store these ids in the database; they
just need to be generated on the fly and forgotten, but I think I can
adapt the example to do what I need.
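Something like this should do it as a read-only query (untested
sketch: it assumes the table has a "street" column, and
xmlelement/xmlagg need a Postgres build with XML support):

```sql
-- Number every row, derive a batch id with integer division,
-- then aggregate each batch of up to 100 rows into one XML document.
select group_id,
       xmlelement(name "batch",
                  xmlagg(xmlelement(name "address", street))) as payload
from (select street,
             ((row_number() over () - 1) / 100) + 1 as group_id
      from addresses) as numbered
group by group_id
order by group_id;
```

Since nothing is stored, the group numbering is only stable for the
duration of the query; an "order by" inside the over () clause would
make the batches deterministic.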
Thanks,
Andy