| From: | Boris Sagadin <sagadin(at)gmail(dot)com> |
|---|---|
| To: | Peter Geoghegan <pg(at)bowt(dot)ie> |
| Cc: | Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-bugs(at)postgresql(dot)org> |
| Subject: | Re: BUG #14917: process hang on create index |
| Date: | 2017-11-21 08:19:02 |
| Message-ID: | CAF8kEZuD_buuCbP70zhzDHKqK0LJ-Vde9H3gYQFEF==F-j_J2A@mail.gmail.com |
| Lists: | pgsql-bugs |
No string is over 200 chars. From the function name I assumed get_next_seq
was PostgreSQL code, but now I've learned it's part of glibc. Sorting in the
shell with the same locale is indeed very slow: using just 1% of the data in
that column, the sort takes a few minutes to finish. Thanks, I'll check with
glibc.
Regards,
Boris
On Mon, Nov 20, 2017 at 7:17 PM, Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> Boris Sagadin <sagadin(at)gmail(dot)com> wrote:
>
>> After a fresh DB start and CREATE INDEX idx_table123_fast ON table123
>> USING btree (k, n), strace looks normal to a point:
>>
>> brk(0x7f13d970a000) = 0x7f13d970a000
>> mmap(NULL, 12587008, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
>
> Are these text strings individually very large?
>
> --
> Peter Geoghegan
>