From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: 石勇虎 <SHIYONGHU651(at)pingan(dot)com(dot)cn>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: Reply: response time is very long in PG9.5.5 using psql or jdbc
Date: 2018-02-13 18:58:00
Message-ID: 30157.1518548280@sss.pgh.pa.us
Lists: pgsql-bugs
石勇虎 <SHIYONGHU651(at)pingan(dot)com(dot)cn> writes:
> Yes, we have more than 500 thousand objects, and the total size of the database is almost 10TB. Just as you said, we may need to reduce the number of objects, or do you have any better solution?
Hmph. I tried creating 500000 tables in a test database, and couldn't
detect any obvious performance problem in session startup. So there's
something very odd about your results. You might try looking at the
sizes of the system catalogs, e.g.:
select pg_size_pretty(pg_total_relation_size('pg_attribute'));
(In my test database, pg_class is about 80MB and pg_attribute about
800MB.)
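To survey them all at once, a query along these lines (a sketch; adjust the LIMIT as needed) will show the largest catalogs:

    -- list the ten largest system catalogs by total size
    select c.relname,
           pg_size_pretty(pg_total_relation_size(c.oid)) as total_size
    from pg_class c
    join pg_namespace n on n.oid = c.relnamespace
    where n.nspname = 'pg_catalog'
      and c.relkind = 'r'
    order by pg_total_relation_size(c.oid) desc
    limit 10;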
> And I also have a question: is a backend's internal catalog cache shared with other users or sessions? Would pgbouncer be useful?
pgbouncer or some other connection pooler would help, yes. But I don't
think the underlying performance ought to be this bad to begin with.
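If you go the pgbouncer route, here is a minimal pgbouncer.ini sketch (all names and values are placeholders; adjust for your environment) using session pooling, so server backends and their catalog caches get reused across client connections:

    ; map the client-visible database to the real server (placeholder values)
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; session pooling hands a server backend to one client at a time,
    ; so the backend (and its catalog caches) survives across connects
    pool_mode = session
    default_pool_size = 20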
regards, tom lane