From: Francisco Olarte <folarte(at)peoplecall(dot)com>
To: Tobias Völk <tobias(dot)voelk(at)t-online(dot)de>
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: Improvement for query planner? (no, not about count(*) again ;-))
Date: 2020-07-20 10:56:37
Message-ID: CA+bJJbxEuHXV4oiVrB54y3-tBgd+cwmMCFUHTFH2VB8BcfTFdA@mail.gmail.com
Lists: pgsql-bugs pgsql-general
Tobias:
On Mon, Jul 20, 2020 at 12:09 PM Tobias Völk <tobias(dot)voelk(at)t-online(dot)de> wrote:
...
> Insert into newtable(name) select name1 from games on conflict do nothing;
> (and later on intended to do the same for the second column)
> However after hours it still wasn’t done, used only 1 cpu core to the max and read with 5 MB/s from my fast SSD.
> I’ve also tried inserting (select name1 from games union select name2 from games) but it always wanted to do it using sorting.
> But either the sorting or the preparations for the sorting were again only done using 1 core to the max and reading with 5 MB/s.
> Couldn’t find a fast query for my problem.
> So I wrote a java-program which read the whole table at a fetchsize of about 4 million and inserted the names into a HashSet.
> And surprisingly after only a few minutes the program was already 25% done o.O
Not surprising: 1.3E9 rows, let's say 400M rows for 25%, on an SSD; if
your network is fast enough, pg should be able to send you a million
rows a second to do that.
Have you tried doing a similar thing in postgres, like "select
distinct name1 from games", or "select distinct name1 from games union
select distinct name2 from games"? The distinct part is the equivalent
of putting everything in a hash set.
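A minimal sketch of that server-side equivalent, using the table and column names from your mail (games.name1, games.name2):

```sql
-- UNION (without ALL) removes duplicates both within and across the
-- two branches, so this is the server-side version of the hash set.
SELECT name1 AS name FROM games
UNION
SELECT name2 FROM games;
```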
As for it doing the work using sorting: maybe you do not have enough
work_mem or other things, but sorting is probably the right way to do
it under the constraints you have put on the engine, and I would not
bother with that before timing it. I routinely sort huge files (not in
pg) spilling to disk (they are about a hundred times available RAM):
sort a few gigabytes, spill, then read and merge in multi-megabyte
chunks. Compared against a pure RAM sort (with a file which fits in
RAM) it only costs a constant factor (2, IIRC: a RAM sort is
read+sort+write, spilling is read+write chunks + read chunks + write).
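To test the work_mem theory, something like this sketch (the 1GB value is just an assumption, size it to your RAM) shows whether the planner still spills to a disk sort:

```sql
-- Raise the per-sort/hash memory budget for this session only, then
-- check the plan: look for "external merge" vs "quicksort" / hash.
SET work_mem = '1GB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT name1 FROM games;
```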
> My question is, why isn’t postgres nearly this fast? Why doesn’t it just create a HashSet in RAM and read full speed from the disk?
The on conflict version may be slow because it is not optimized for
this kind of thing, and is doing nothing but testing for conflict on
every row (which may be needed). Also, you have set a primary key
before a bulk load, which is a big no-no, as pg has to maintain the
index as it loads.
I would try to do the equivalent of the hash set: create the table
without the PK, then try something like "select distinct name1 union
select distinct name2", which is similar to building two hash sets and
collapsing them, and then add the primary key afterwards. Test it in
steps.
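Those steps, sketched with the names from the thread (games.name1, games.name2 feeding a fresh newtable of unique names):

```sql
-- 1. Create the target table WITHOUT a primary key, so the bulk load
--    does not have to maintain an index row by row.
CREATE TABLE newtable (name text);

-- 2. Bulk-load the de-duplicated names in one pass; UNION removes
--    duplicates both within and across the two branches.
INSERT INTO newtable (name)
SELECT name1 FROM games
UNION
SELECT name2 FROM games;

-- 3. Add the primary key afterwards: building the index once over the
--    finished table is much cheaper than maintaining it during the load.
ALTER TABLE newtable ADD PRIMARY KEY (name);
```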
> I even created a hash index but it kept using its primary key b-tree and then I read that hash indices somehow don’t support checking for uniqueness.
Also more indexes => slower loading.
Francisco Olarte.