Re: sql statement not using all primary key values and poor performance

From: James Pang <jamespang886(at)gmail(dot)com>
To: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: sql statement not using all primary key values and poor performance
Date: 2024-02-23 10:21:05
Message-ID: CAHgTRffjX2fvgzg_ygt6Y2oGH-1S_Q9_iq-x_h8=1UkffFWTqA@mail.gmail.com
Lists: pgsql-performance

It's a third-party vendor application, so it is not easy to change their code.
Is it possible to 1) set plan_cache_mode=force_custom_plan on the PostgreSQL JDBC
driver connection, or 2) work around this issue with some other parameter?
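
A minimal sketch of option 1, assuming the pgjdbc "options" connection property is
available in the driver version in use (host, database and credentials below are
placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ForceCustomPlanConnection {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "app_user");        // placeholder credentials
            props.setProperty("password", "secret");
            // Ask pgjdbc to send a server startup option so every statement on this
            // connection is planned with plan_cache_mode=force_custom_plan.
            props.setProperty("options", "-c plan_cache_mode=force_custom_plan");

            String url = "jdbc:postgresql://localhost:5432/testdb";   // placeholder URL
            try (Connection conn = DriverManager.getConnection(url, props)) {
                // Can be verified on the session with: SHOW plan_cache_mode;
            }
        }
    }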

Thanks,

James

Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at> wrote on Fri, 2024-02-23 at 5:17 PM:

> On Fri, 2024-02-23 at 15:20 +0800, James Pang wrote:
> > We have a SQL statement from PostgreSQL JDBC; the primary key is (bigint, varchar, bigint),
> > but in the plan the bigint values are cast to ::numeric, so the plan uses only the
> > "varchar" key column and applies the other two bigint keys as filters. What is the
> > cause of that?
> >
> >                          Table "test.xxxxxx"
> >       Column      |              Type              | Collation | Nullable | Default
> > ------------------+--------------------------------+-----------+----------+---------
> >  xxxid            | bigint                         |           | not null |
> >  paramname        | character varying(512)         |           | not null |
> >  paramvalue       | character varying(1536)        |           |          |
> >  sssid            | bigint                         |           | not null |
> >  createtime       | timestamp(0) without time zone |           |          |
> >  lastmodifiedtime | timestamp(0) without time zone |           |          |
> >  mmmuuid          | character varying(32)          |           |          |
> > Indexes:
> >     "pk_xxxxxx" PRIMARY KEY, btree (xxxid, paramname, sssid)
> >     "idx_xxxxxx_mmmuuid" btree (sssid, mmmuuid, paramname)
> >
> > SET extra_float_digits = 3
> >
> >  duration: 7086.014 ms  plan:
> >     Query Text: SELECT XXXFID, PARAMNAME, PARAMVALUE, SSSID, CREATETIME, LASTMODIFIEDTIME, MMMUUID FROM test.XXXXXX WHERE ( ( XXXID = $1 ) ) AND ( ( PARAMNAME = $2 ) ) AND ( ( SSSID = $3 ) )
> >     Index Scan using pk_xxxxxx on test.xxxxxx  (cost=0.57..2065259.09 rows=1 width=86) (actual time=7086.010..7086.011 rows=0 loops=1)
> >       Output: confid, paramname, paramvalue, sssid, createtime, lastmodifiedtime, mmmuuid
> >       Index Cond: ((xxxxxx.paramname)::text = 'cdkkifffff'::text)   <<< uses only one key column instead of all primary key columns
> >       Filter: (((xxxxxx.xxxid)::numeric = '18174044'::numeric) AND ((xxxxxx.sssid)::numeric = '253352'::numeric))   <<< the columns are bigint but are cast to numeric
> >       Buffers: shared read=1063470
> >       I/O Timings: read=4402.029
> >
> > It's from JDBC; we saw the JDBC driver set extra_float_digits = 3 before
> > running the SQL. Does that make the planner convert the bigint to numeric?
>
> Setting "extra_float_digits" is just something the JDBC driver does so as to not
> lose precision with "real" and "double precision" values on old versions of PostgreSQL.
>
> The problem is that you bind the query parameters with the wrong data types.
> Don't use "setBigDecimal()", but "setLong()" if you want to bind a "bigint".
> An alternative is "setObject()" with "targetSqlType" set to "Types.BIGINT".
>
> Yours,
> Laurenz Albe
>
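
To illustrate Laurenz's point about binding types, here is a minimal JDBC sketch;
the query and values are taken from the plan above, and the connection handling is
assumed to exist elsewhere:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BindBigintExample {
        // Bind the three primary key columns with types matching the table definition,
        // so the planner can use the full index pk_xxxxxx (xxxid, paramname, sssid).
        static void lookup(Connection conn) throws Exception {
            String sql = "SELECT xxxid, paramname, paramvalue, sssid FROM test.xxxxxx "
                       + "WHERE xxxid = ? AND paramname = ? AND sssid = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                // Wrong: setBigDecimal() sends the parameter as numeric, which forces the
                // bigint columns to be cast to numeric and pushed into a Filter:
                //   ps.setBigDecimal(1, new java.math.BigDecimal("18174044"));

                // Right: bind the bigint columns as long ...
                ps.setLong(1, 18174044L);
                ps.setString(2, "cdkkifffff");
                ps.setLong(3, 253352L);
                // ... or equivalently: ps.setObject(1, 18174044L, java.sql.Types.BIGINT);

                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("paramvalue"));
                    }
                }
            }
        }
    }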
