From: "Mike Sofen" <msofen(at)runbox(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Re: UUIDs & Clustered Indexes
Date: 2016-08-31 03:33:38
Message-ID: 033201d20338$77a687a0$66f396e0$@runbox.com
Lists: pgsql-general
From: George Neuner Sent: Tuesday, August 30, 2016 5:54 PM
>Mike Sofen wrote: So in this scenario, I'm using BOTH bigserials as the
>PK and uuids as AKs in the core tables. I reference the bigints for all
>joins and (have to) use the uuids for the filters. It's been working ok
>so far; lookup performance on a table with a few million rows, using the
>uuid (indexed), is instantaneous. I'll soon have 100 million+ rows
>loaded into a single table and will know a bit more.
>
>The uuids are also design insurance for me in case I need to shard,
>since I'll need/want that uniqueness across servers.
FYI: articles about sharding using bigint keys.
http://instagram-engineering.tumblr.com/post/10853187575/sharding-ids-at-instagram
http://rob.conery.io/2014/05/29/a-better-id-generator-for-postgresql/
George
I remember reading these articles a long time ago, had forgotten about
them, and appreciate the reminder!
I really liked the enhanced Instagram function from Rob Conery in the
second link, but so far I haven't needed it. However, an upcoming project
may require huge data storage - approaching hundreds of billions of rows -
and I'm sticking with Postgres, so this will be a great way to test drive
the function. And I may try my hand at a further enhancement, time
permitting. Thanks for the links!
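For anyone following along, the scheme those articles describe composes a
64-bit id from a millisecond timestamp, a shard id, and a per-shard
sequence, which keeps ids roughly time-ordered while staying unique across
servers. The real implementations are plpgsql functions; here is a minimal
Python sketch of just the bit layout, where the epoch value and the helper
names (make_id, shard_of) are my own illustrative choices:

```python
import time

# 64-bit id layout (Instagram-style): 41 bits of milliseconds since a
# custom epoch, 13 bits of shard id, 10 bits of per-shard sequence.
EPOCH_MS = 1314220021721  # example custom epoch in ms; pick your own

def make_id(shard_id, seq, now_ms=None):
    """Compose a roughly time-ordered 64-bit id."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    ms_since_epoch = now_ms - EPOCH_MS
    # timestamp in the high bits, shard id in the middle, sequence low
    return (ms_since_epoch << 23) | ((shard_id % 8192) << 10) | (seq % 1024)

def shard_of(id_):
    """Recover the shard id from the middle 13 bits."""
    return (id_ >> 10) & 0x1FFF
```

Because the timestamp occupies the high bits, ids sort by creation time,
which plays nicely with a bigint PK's btree index.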
Mike
Next Message: Benoit Lobréau | 2016-08-31 07:45:52 | PGDATA / data_directory
Previous Message: George Neuner | 2016-08-31 00:53:56 | Re: UUIDs & Clustered Indexes