From: Richard Huxton <dev(at)archonet(dot)com>
To: Madison Kelly <linux(at)alteeve(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: New to the list; would this be an okay question?
Date: 2004-06-22 17:38:52
Message-ID: 40D86EAC.1090801@archonet.com
Lists: pgsql-general
Madison Kelly wrote:
>
> The database is on the system's local disk and the destination drives
> are on the USB-connected drives. It is possible to include the server's
> local disk in a backup job. Without the DB calls I was able to process
> ~4000 files in ~2secs.
>
> 'vmstat 10' shows (system running):
>  r  b   swpd   free   buff  cache  si  so  bi  bo   in   cs  us sy id wa
>  0  0   3396 192356   7084 186508   0   0   0   5 1023   82   1  0 99  0
>
> 'vmstat 10' shows (while the program is running):
>  r  b   swpd   free   buff  cache  si  so  bi  bo   in   cs  us sy id wa
>  1  0    480 180252   8600 195412   0   0   6 163 1037  281  97  3  0  0
Hmm - well, your CPU usage is up to 97% userland (us 97) and your disk
activity is up (bi 6; bo 163) as you'd expect. With wa 0, though, you're
CPU-bound rather than waiting on disk.
> I'm afraid that I am pretty new to postgres (and programming, period)
> so I am not sure if the indexes are being used. I assumed they were,
> simply because I created them, but then again assumptions always get me
> in trouble. :p When you say "select from "pg_stat_indexes"" do you mean
> select those three fields from there instead of 'SELECT <blah> FROM
> file_dir'?
No - there is a set of statistics views called pg_stat_xxx (see ch 23.2
of the online manuals). For example, I have a unique index on
content_core.cc_id:
SELECT relname,indexrelname,idx_scan,idx_tup_read,idx_tup_fetch
FROM pg_stat_user_indexes
WHERE relname = 'content_core';
   relname    |   indexrelname    | idx_scan | idx_tup_read | idx_tup_fetch
--------------+-------------------+----------+--------------+---------------
 content_core | content_core_pkey |      717 |          717 |           717
(1 row)
SELECT * FROM content_core WHERE cc_id=2;
...
SELECT relname,indexrelname,idx_scan,idx_tup_read,idx_tup_fetch
FROM pg_stat_user_indexes
WHERE relname = 'content_core';
   relname    |   indexrelname    | idx_scan | idx_tup_read | idx_tup_fetch
--------------+-------------------+----------+--------------+---------------
 content_core | content_core_pkey |      718 |          718 |           718
(1 row)
As you can see (idx_scan went from 717 to 718), the index was used for
this query.
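Another way to check is to ask the planner directly with EXPLAIN before
running the real query. A quick sketch from Perl - the table and id here
are from my example above, not your schema, so adapt the SQL to your own
tables:

# EXPLAIN shows the plan without touching any rows;
# selectcol_arrayref collects the plan's text lines.
my $plan = $DB->selectcol_arrayref(
    "EXPLAIN SELECT * FROM content_core WHERE cc_id = 2"
);
print "$_\n" for @$plan;
# You want to see "Index Scan using content_core_pkey ..." here;
# "Seq Scan on content_core" would mean the index isn't being used.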
> I currently have "VACUUM ANALYZE" set to run after every big set of
> UPDATE/INSERTs but that isn't enough? Would it also help if I kept a
> count of how many records had been processed and running "VACUUM
> {ANALYZE:FULL}" every so many 'x' records?
>
> I have been trying to bunch jobs into a single transaction but for
> some reason whenever I try to do that Perl dies on me with an empty
> "$DBI::errstr" error (it shows what line it died on but it is blank where
> it normally says what went wrong). I was trying to do it like this:
>
> $DB->do("BEGIN TRANSACTION") || die $DBI::errstr;
> # Jump into the sub-routine:
> &list_files($real_dir, $exclude_list_num, $relative_dir, $file_backup,
> $file_restore, $file_display);
> $DB->do("COMMIT;") || die $DBI::errstr;
You can do this, but I think better practice is supposed to be:
$DB->begin_work;
...
$DB->commit;
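For example (a sketch only - RaiseError and the eval/rollback are my
additions, and the list_files() call stands in for your own). Note that
begin_work needs AutoCommit to be on; it switches it off until you
commit or roll back:

$DB->{RaiseError} = 1;   # die on DB errors rather than returning undef
$DB->begin_work;         # AutoCommit suspended until commit/rollback
eval {
    # ... your &list_files(...) call goes here ...
    $DB->commit;
};
if ($@) {
    warn "Transaction aborted, rolling back: $@";
    $DB->rollback;
}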
You might want to turn on statement logging in PostgreSQL -
details in the manuals (Ch 16.4).
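Something like this in postgresql.conf - parameter names from memory for
7.4, so do check Ch 16.4 for your version before copying:

log_statement = true                # log every statement as it arrives
log_min_duration_statement = 100    # also log anything taking over 100ms

Remember to reload the server after changing it.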
> Inside the subroutine I kept a count of how much time went by,
> committing the current transaction and starting a new transaction
> block every 250 secs:
>
> if ( $rec_num > $rec_disp )
> {
>     $rec_disp=($rec_disp+500);
>     $nowtime=time;
>     $processtime=($nowtime - $starttime);
>     if ( $processtime >= $committime )
>     {
>         $committime=$committime+250;
>         $DB->do("COMMIT") || die $DBI::errstr;
>         $DB->do("BEGIN TRANSACTION") || die $DBI::errstr;
>         system 'echo " |- Committed processed records to the database, updating committime to \''.$committime.'\' and proceeding:" >> '.$log;
>     }
>     system 'echo " |- Processing record number \''.$rec_num.'\', \''.$processtime.'\' seconds into operation." >> '.$log;
> }
>
>
> Is what I am doing wrong obvious at all? When I enter "BEGIN
> TRANSACTION" directly into 'psql' it seems to work fine so I am sure the
> syntax is right. Is it a Perl problem maybe?
Nothing leaping out at me - although I'd do something like:
use IO::File;
my $log = IO::File->new("> logfile.txt") or die "Can't open logfile: $!";
print $log "Processing record number $rec_num\n";
$log->close;
Or even just:
print STDERR "Processing...";
Note there is just a space between the filehandle and string to print.
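One more thing: those system 'echo ... >> '.$log calls fork a shell for
every record, which will hurt at ~4000 files a pass. Appending from
inside Perl avoids that - a sketch, assuming $log holds your log
filename as it appears to in your script:

use IO::File;
my $logfh = IO::File->new(">> $log") or die "Can't append to $log: $!";
$logfh->autoflush(1);   # flush each line immediately, as 'echo >>' did
print $logfh " |- Processing record number '$rec_num', '$processtime' seconds into operation.\n";
$logfh->close;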
Turn logging up in PostgreSQL (which will slow things down, of course),
then get the transaction blocks working, and we'll see what happens.
Oh, and don't forget the tuning document on varlena.com.
--
Richard Huxton
Archonet Ltd