From: Tobias Oberstein <tobias(dot)oberstein(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: lseek/read/write overhead becomes visible at scale ..
Date: 2017-01-25 21:27:40
Message-ID: a7fc2477-d9d6-c695-b40a-76ad341562c9@gmail.com
Lists: pgsql-hackers
Hi,
>> Synthetic PG workload or real world production workload?
>
> Both might work, production-like has bigger pull, but I'd guess
> synthetic is good enough.
Thanks! The box should get PostgreSQL in the not-too-distant future.
It'll get a backup from prod, but will act as the new prod, so it might
take some time until a job can be run and a profile collected.
>> So how would I do a perf profile that would be acceptable as proof?
>
> You'd have to look at cpu time, not number of syscalls. IIRC I
> suggested doing a cycles profile with -g and then using "perf report
> --children" to see how many cycles are spent somewhere below lseek.
Understood. Either profile manually or expand the function.
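
For reference, a run along those lines might look roughly like the
following (the backend PID and the 60-second duration are placeholders
for illustration, not values suggested in this thread):

    # sample on-CPU cycles of one backend, recording call graphs
    perf record -g -p <backend_pid> -- sleep 60

    # aggregate by callees too, so cycles spent below lseek are summed up
    perf report --children

The "Children" column in the report should then give the cumulative
share of samples that passed through lseek and its callees, which is
the number to weigh against total CPU time.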
> I'd also suggest sharing a cycles profile, it's quite likely
> that the overhead is completely elsewhere.
Yeah, could be. It'll be interesting to see for sure. I should get a
chance to collect such a profile, and then I'll post it back here.
/Tobias