From: 007reader(at)gmail(dot)com
To: pgsql-bugs(at)postgresql(dot)org
Subject: Abnormal JSON query performance
Date: 2018-05-12 04:32:42
Message-ID: C3E7372D-153D-4276-8DB5-0D232ECD91E4@gmail.com
Lists: pgsql-bugs
I observed a performance anomaly with JSON that I believe indicates incorrect behavior in version 10, although I didn't test earlier versions.

Here is a test table:
create table test(
    id integer primary key,
    data json
);
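(For anyone trying to reproduce this, the table can be filled with synthetic documents of roughly the size described; the key names and padding below are hypothetical, not from the original report:)

```sql
-- Hypothetical test data: 92,000 rows, several keys per document,
-- padded so each document is on the order of 60KB
insert into test (id, data)
select i,
       json_build_object('key1', md5(i::text),
                         'key2', md5((i + 1)::text),
                         'pad',  repeat('x', 60000))
from generate_series(1, 92000) as s(i);
```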
A simple query works fine on a table of 92,000 records with an average record size of ~60KB:

select data->>'key1' from test limit 1000;
The query above takes about 155 ms. If I add an additional field to the list of fields, execution time increases linearly: retrieving 2 fields takes ~310 ms, 3 fields ~465 ms, and so on. The EXPLAIN output for 5 fields looks normal:
Limit (cost=0.00..34.10 rows=1000 width=160) (actual time=1.783..813.643 rows=1000 loops=1)
-> Seq Scan on loans (cost=0.00..3165.85 rows=92834 width=160) (actual time=1.753..805.629 rows=1000 loops=1)
Planning time: 8.546 ms
Execution time: 817.874 ms
Looks like every record gets parsed separately for each key... Unless I'm doing something incorrectly, this makes retrieval of individual keys nearly useless for practical cases. It would leave only the unattractive alternative of fetching the entire JSON document. In my case, instead of retrieving a few hundred bytes, it would force transferring ~60KB for each row...
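(A common workaround, not mentioned in the post and assuming the key names and types are known up front, is to expand all needed keys in one call with `json_to_record`, so each document is parsed once per row rather than once per extracted key:)

```sql
-- Parse each document once and pull out several keys together;
-- the column list (key1, key2, key3) is hypothetical
select r.key1, r.key2, r.key3
from test
cross join lateral
     json_to_record(data) as r(key1 text, key2 text, key3 text)
limit 1000;
```

Storing the column as jsonb instead of json may also help here, since jsonb is stored pre-parsed in a binary form, though it does not by itself eliminate per-row detoasting of large documents.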
-Bob Jones