From: "Ken Egervari" <ken(at)upfactor(dot)com>
To: "John Arbash Meinel" <john(at)arbash-meinel(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Help with tuning this query (with explain analyze finally)
Date: 2005-03-04 05:22:12
Message-ID: 000e01c5207a$1f668ac0$6a01a8c0@KEN
Lists: pgsql-hackers-win32, pgsql-performance
John,
Thanks so much for your comments. They are incredibly insightful and you
clearly know your stuff. It's so great that I'm able to learn so much from
you. I really appreciate it.
>Do you need the interior sort? It's taking ~93ms to get 7k rows from
>shipment_status, and then another 30ms to sort them. This isn't a lot,
>so it might be fine.
Running the query without the sort doesn't actually improve performance,
unfortunately, which I find strange. I think the explain analyze output is
giving misleading feedback here, because taking all of the sorts out
completely makes no difference in performance. Dan Tow's book actually says
the same thing: sorting rarely takes up the bulk of the work. I know you
didn't like his book much, but I've observed that in my experience too.
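For what it's worth, this is roughly how I compared the two; the real query
has more joins and the column name here is approximate, so treat it as a
sketch:

    -- With the interior sort in place (ss.date is approximate;
    -- the real column name may differ):
    EXPLAIN ANALYZE
    SELECT * FROM shipment_status ss
    WHERE ss.date >= current_date - 31
    ORDER BY ss.date DESC;

    -- And with the ORDER BY removed -- total runtimes came out
    -- about the same:
    EXPLAIN ANALYZE
    SELECT * FROM shipment_status ss
    WHERE ss.date >= current_date - 31;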
>Also, did you ever try CLUSTER current_status_date_idx ON shipment_status.
>This groups the rows in shipment_status by their status date, which
>helps put items with the same date next to each other. This may affect
>other portions of the query, or other queries. Also, if you are
>inserting sequentially, it would seem that the items would already be
>naturally near each other based on date.
Yes, this was one of the first things I tried, actually, and the table is
currently clustered. Since shipment status data comes into our system in
real time, the dates are more or less in order as well.
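For the archives, the commands I ran were along these lines (8.0 CLUSTER
syntax, using the index John mentioned):

    -- One-time physical reorder of shipment_status by status date:
    CLUSTER current_status_date_idx ON shipment_status;
    -- CLUSTER doesn't keep new rows in order, so it has to be re-run
    -- periodically; refresh the planner's statistics afterwards:
    ANALYZE shipment_status;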
>The next big cost is having to merge the 28k rows with the fast hash
>plan, which takes about 80ms.
>
>I guess the biggest issue is that you are doing a lot of work, and it
>takes time to do it. Also, I've noticed that this query is being run
>with exactly the same data. Which is good to compare two methods. But
>remember to test on multiple potential values. You might be better off
>one way with this query, but much worse for a different dataset. I
>noticed that this seems to have fewer rows than what postgres thinks the
>*average* number would be. (It predicts 60k and you only get 28k rows).
Well, the example where p.id = 355 is an above-normal case where performance
is typically bad. If a user's company has very few shipment and
shipment_status rows, performance isn't going to matter much, and those
queries usually run much faster. I really needed to tune this for the
larger customers, who do have thousands of rows across their entire company
and will probably reach 6 digits by the end of next year. Person 355 has
only been on the system for 3 months and already has 27,700 rows. Even if
this makes the smaller customers a bit slower, I think it's worth it if I
can speed up cases like this one, which all have very similar data
distributions.
>If this query is performed a lot, and you can be okay with a slight
>delay in updating, you could always switch to some sort of lazy
>materialized view.
I thought about this, but freshness really matters since shipment and
shipment_status are both updated in real time, 24/7/365. I might be able to
cache results within the application for 60 seconds at most, but that would
make little difference since people tend to refresh within that window
anyway. It's very important that the information be real-time.
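If we ever do go that route, I picture a plain summary table rebuilt by a
background job rather than a true materialized view, since 8.0 has nothing
built in for that. A rough sketch, with invented table and column names:

    -- Hypothetical "lazy materialized view": a plain table the app rebuilds.
    -- (shipment_current_view, ss.shipment_id are invented for illustration.)
    CREATE TABLE shipment_current_view AS
        SELECT s.*, ss.date AS status_date
        FROM shipment s
        JOIN shipment_status ss ON ss.shipment_id = s.id;

    -- A scheduled job would refresh it, e.g. every 60 seconds:
    -- BEGIN;
    --   DELETE FROM shipment_current_view;
    --   INSERT INTO shipment_current_view SELECT ...;
    -- COMMIT;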
>You could also always throw more hardware at it. :) If the
>shipment_status is one of the bottlenecks, create a 4-disk raid10 and
>move the table over.
>I don't remember what your hardware is, but I don't remember it being a
>quad opteron with 16GB ram, and 20 15k SCSI disks, with the transaction
>log on a solid state disk. :)
That sounds like an awesome system. I'd love to have something like that.
Unfortunately, the production server is just a single-processor machine with
1 GB of RAM. I think throwing more disks at it is probably the best bet,
moving the shipment and shipment_status tables over as you suggested.
That's great advice.
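If we're on 8.0 by then, tablespaces would make that move straightforward;
a sketch, with a made-up path and tablespace name:

    -- Put the two hot tables on their own RAID-10 volume (PostgreSQL 8.0+).
    -- '/mnt/raid10/pgdata' is a hypothetical mount point:
    CREATE TABLESPACE fastdisks LOCATION '/mnt/raid10/pgdata';
    ALTER TABLE shipment SET TABLESPACE fastdisks;
    ALTER TABLE shipment_status SET TABLESPACE fastdisks;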
>Why do you need the query to be 30ms? ~250ms is still pretty fast. If
>you are needing updates faster than that, you might look more into *why*
>and then handle it from a higher level.
30ms is a good target, although perhaps I was naive in setting that goal.
I've taken queries that ran at 600ms and, with 1 or 2 indexes, brought them
down to 15ms.
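To give a sense of what produced those wins, it was usually a composite
index matching a query's filter and join columns; the column names here are
hypothetical:

    -- Example of the kind of index that took a 600ms query to 15ms
    -- (company_id and date are invented for illustration):
    CREATE INDEX shipment_company_date_idx ON shipment (company_id, date);
    ANALYZE shipment;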
Let's say we have 200 users signed into the application at the same time.
The application refreshes their shipment information automatically to make
sure it's up to date on the user's screen. The application will execute the
query we are trying to tune every 60 seconds for most of these users. Users
can set the refresh time to be higher, but 60 is the lowest amount so I'm
just assuming everyone has it at 60.
Anyway, if you have 200 users logged in, that's 200 queries in the 60-second
period, which is about 3-4 queries every second. As you can see, the server
is getting maxed out, and with bad luck the queries bunch together and fire
in the same second, making 8-9 queries in that second, and that's where
performance starts to degrade. I just know that if I could get this down to
30ms, or even 100, we'd be okay for a few months without throwing hardware
at the problem. Also keep in mind that other application logic and Hibernate
mapping are happening too, so 3-4 queries a second is already no good when
everything is running on a single machine.
This isn't the best setup, but it's the best we can afford. We are just a
new startup company. Cheaper servers and open source keep our costs low.
But money is starting to come in after 10 months of hard work, so we'll be
able to replace our server within the next 2 months. It'll be a necessity
because we are signing on some big clients now and they'll have 40 or 50
users for a single company. If they are all logged in at the same time,
that's a lot of queries.
>And naturally, the most important thing is to test it under load. 250ms
>is pretty good, but if under load it goes back to 6s, then we probably
>should look for different alternatives. Also, what is the load that is
>causing the problem? Is it that you have some other big seqscans which
>are causing all of your tables to go out of cache?
No, this query and another very close to it are probably the most executed
in the system. In fact, even the page stats on the web server tell us that
the pages using these queries account for 80% of the pages viewed in our
application. If I can fix this problem, I've fixed our performance problems,
period. The statistics queries are very slow too, but I don't care about
that since hardly anybody visits them (maybe once a month, and people don't
mind waiting for that sort of information anyway).
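To rule out the cache question properly, I can watch the buffer hit ratios
on the two hot tables; the view is standard, assuming the stats collector
and stats_block_level are turned on:

    -- Blocks read from disk vs. found in shared buffers
    -- (requires stats_block_level = true):
    SELECT relname, heap_blks_read, heap_blks_hit,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
    FROM pg_statio_user_tables
    WHERE relname IN ('shipment', 'shipment_status');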
I'm very interested in those other alternatives since I may have to
experiment with them. I'm under the impression that this query is actually
performing quite well for what I'm throwing at it and the work that it's
doing.
>Also, I believe I remember you saying that your production server is a
>P4, is that a single P4? Because I know postgres prefers Opterons to
>Pentium Xeons when in a multiprocessor machine. Look through the
>archives about spinlocks and the context switch bug. (context storm,
>etc). Plus, since opterons are 64-bit, you can throw a lot more RAM at
>them. I believe opterons outperform xeons for the same cost, *and* you
>can scale them up with extra ram.
Yeah, we have nothing of that sort. It's really just a single 3.0 GHz P4
processor. Like I mentioned before, we just put computers together from
what we had and built our application on them. Our business is new; we
don't have a lot of money, and we're only just starting to build a good
client base. It's finally growing after all this time, but we are still
using the servers we started with.
>But remember, the biggest bottleneck is almost *always* the I/O. So put
>more & faster disks into the system first.
I will price out the RAID setup you recommended. That will probably be the
first change to our server if we don't just replace the entire thing.
Thanks again,
Ken