From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Gurjeet Singh <singh(dot)gurjeet(at)gmail(dot)com>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, PGSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Creating multiple indexes in one table scan.
Date: 2012-05-24 15:44:52
Message-ID: CA+TgmobL7Yuhfk6UPhRneb3DWt2poViRB762VmEZ1gxz2BWfTQ@mail.gmail.com
Lists: pgsql-hackers
On Thu, May 24, 2012 at 11:25 AM, Gurjeet Singh <singh(dot)gurjeet(at)gmail(dot)com> wrote:
> It'd be great if one of the standard utilities like pg_restore supported this,
> by spawning every concurrent index build in separate backends. Just a
> thought.
If parallel restore doesn't already take this into account when doing
job scheduling, that would be a worthwhile improvement to consider.
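For illustration, a minimal client-side sketch of the idea, assuming
psycopg2 plus hypothetical table and index names: each CREATE INDEX is
issued over its own connection, so each build runs in its own backend,
much as parallel pg_restore jobs would.

import threading
import psycopg2

DSN = "dbname=mydb"          # hypothetical connection string
INDEX_DDL = [                # hypothetical indexes on one table
    "CREATE INDEX t_a_idx ON t (a)",
    "CREATE INDEX t_b_idx ON t (b)",
    "CREATE INDEX t_c_idx ON t (c)",
]

def build_index(ddl):
    # One connection per index build, hence one server backend per build.
    # Plain CREATE INDEX takes a SHARE lock, so several builds on the same
    # table can run side by side.
    conn = psycopg2.connect(DSN)
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute(ddl)
    conn.close()

threads = [threading.Thread(target=build_index, args=(d,)) for d in INDEX_DDL]
for t in threads:
    t.start()
for t in threads:
    t.join()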
Personally, I think the big win in this area is likely to be parallel
sort. There may well be some more we can squeeze out of our existing
sort implementation first, and I'm all in favor of that, but
ultimately if you've got 60GB of data to sort and it's all in cache,
you want to be able to use more than one CPU for that.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company