From: Yan Cheng Cheok <yccheok(at)yahoo(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Dann Corbit <DCorbit(at)connx(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: PostgreSQL Write Performance
Date: 2010-01-07 05:17:47
Message-ID: 336122.21521.qm@web65708.mail.ac4.yahoo.com
Lists: pgsql-general
Thanks for the valuable advice! I will take it into serious consideration.
From my point of view, my current requirement is limited by the so-called "overhead" of communicating with the database. See the following result from the SQL Shell:
SemiconductorInspection=# \timing on
Timing is on.
SemiconductorInspection=# ;
Time: 0.660 ms
SemiconductorInspection=# ;
Time: 0.517 ms
SemiconductorInspection=# ;
Time: 2.249 ms
SemiconductorInspection=#
I assume no hard disk activity is involved, as I am sending an "empty" SQL statement over.
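The sub-millisecond timings above suggest the cost here is the client/server round trip rather than disk. That pure round-trip overhead can be approximated with a loopback echo sketch (plain sockets, as an illustration; this does not involve PostgreSQL itself):

```python
import socket
import threading
import time

# Minimal loopback echo server: a stand-in for the psql <-> server
# round trip, with no parsing and no disk involved at all.
def serve(listener):
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.sendall(b";")  # one warm-up round trip
client.recv(64)

rounds = 1000
start = time.perf_counter()
for _ in range(rounds):
    client.sendall(b";")  # send an "empty statement"
    client.recv(64)       # wait for the reply
elapsed_ms = (time.perf_counter() - start) * 1000 / rounds
client.close()

# Often well under 1 ms on loopback; a real PostgreSQL round trip
# adds protocol parsing on top of this.
print(f"avg round trip: {elapsed_ms:.3f} ms")
```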
Thanks and Regards
Yan Cheng CHEOK
--- On Thu, 1/7/10, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
> From: Greg Smith <greg(at)2ndquadrant(dot)com>
> Subject: Re: [GENERAL] PostgreSQL Write Performance
> To: "Yan Cheng Cheok" <yccheok(at)yahoo(dot)com>
> Cc: "Dann Corbit" <DCorbit(at)connx(dot)com>, pgsql-general(at)postgresql(dot)org
> Date: Thursday, January 7, 2010, 12:49 PM
> Yan Cheng Cheok wrote:
> > The time taken to perform a measurement per unit is
> on the order of ~30 milliseconds. We need to record the
> measurement result for every single unit. Hence, the time
> taken to record the measurement result should be far
> less than a millisecond, so that it has nearly zero
> impact on the machine speed. (If not, the machine must wait
> for the database to finish writing before performing the
> measurement on the next unit.)
> >
>
> Saving a piece of data to a hard disk permanently takes a
> few milliseconds. As pointed out already, exactly how
> many depends on the drive, but it's probably going to be 8ms
> or longer on your system. There are a few options
> here:
>
> 1) Add a battery-backed write caching controller to your
> system. Then the battery will help make sure the data
> doesn't get lost even if the power goes out before the
> drive writes it out. This will cost you around $300.
>
> 2) Use some other type of faster storage, such as an SSD
> drive that has a battery on it to cache any unfinished
> writes. Probably also going to be around that price;
> the cheaper SSDs (and some of the expensive ones) don't take
> data integrity very seriously.
>
> 3) Write the data to a flat file. Periodically import
> the results into the database in a batch.
>
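Option (3) above can be sketched as follows (in Python, as an illustration; the table and column names are assumptions, not from this thread):

```python
import csv
import os
import tempfile

# Hypothetical per-unit measurement records; field names are made up
# for illustration only.
measurements = [
    {"unit_id": 1, "width_um": 102.4},
    {"unit_id": 2, "width_um": 101.9},
]

# Fast path, run on the measurement machine: append each result to a
# flat file. No database round trip, no commit wait.
path = os.path.join(tempfile.mkdtemp(), "measurements.csv")
with open(path, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["unit_id", "width_um"])
    for row in measurements:
        writer.writerow(row)

# Slow path, run periodically by a separate process: bulk-load the
# file. This needs psycopg2 and a running server, so it is only
# sketched in a comment:
#
# with psycopg2.connect(dsn) as conn, conn.cursor() as cur, open(path) as f:
#     cur.copy_expert(
#         "COPY measurement (unit_id, width_um) FROM STDIN WITH (FORMAT csv)",
#         f,
#     )

with open(path) as f:
    print(f.read())
```

COPY loads many rows in one transaction, so the per-row commit cost is paid once per batch instead of once per unit.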
> The thing you should realize is that using (3) is going to
> put you in a position where it's possible you've told the
> machine the measurement was saved, but if the system crashes
> it won't actually be in the database. If you're saving
> to a flat file now, you're already in this position--you
> can't write to a flat file and make sure the result is on
> disk in less than around 8ms either; you just probably
> haven't tested that out yet. Just because the write
> has returned successfully, that doesn't mean it's really
> stored permanently. Power the system off in the window
> between that write and when the memory cache goes out to
> disk, and you'll discover the data missing from the file
> after the system comes back up.
>
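The write-versus-durability gap described above can be observed directly (a Python sketch, added for illustration; actual timings depend entirely on the OS and drive):

```python
import os
import tempfile
import time

fd, path = tempfile.mkstemp()
payload = b"measurement result\n"

# write() returns as soon as the data is in the OS page cache --
# the data is NOT yet permanent at this point.
t0 = time.perf_counter()
os.write(fd, payload)
t_write = time.perf_counter() - t0

# fsync() returns only after the drive reports the data durable;
# this is where the milliseconds discussed above are spent.
t0 = time.perf_counter()
os.fsync(fd)
t_fsync = time.perf_counter() - t0

os.close(fd)
os.remove(path)
print(f"write: {t_write * 1000:.3f} ms, fsync: {t_fsync * 1000:.3f} ms")
```

A crash between the write and the fsync loses the data, whether the writer was PostgreSQL or a flat-file logger.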
> If you're OK with the possibility of losing a measurement
> in the case of a system crash, then you should just write
> measurements to a series of flat files, then have another
> process altogether (one that isn't holding up the machine)
> load those files into the database. The fact that it
> takes a few ms to write to disk is a physical limitation you
> can't get around without using more expensive hardware to
> improve the situation. If you haven't been seeing that
> in your app already, I assure you it's just because you
> haven't looked for the issue before--this limitation on disk
> write speed has been there all along, the database is just
> forcing you to address it.
>
> -- Greg Smith
> 2ndQuadrant Baltimore, MD
> PostgreSQL Training, Services and Support
> greg(at)2ndQuadrant(dot)com
> www.2ndQuadrant.com
>
>