Re: Postgresql INSERT speed (how to improve performance)?

From: "Lim Berger" <straightfwd007(at)gmail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Postgresql General List" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Postgresql INSERT speed (how to improve performance)?
Date: 2007-08-14 03:54:31
Message-ID: 69d2538f0708132054w2004e1e6s704ecc756aaec388@mail.gmail.com
Lists: pgsql-general

On 8/14/07, Lim Berger <straightfwd007(at)gmail(dot)com> wrote:
> On 8/14/07, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > "Lim Berger" <straightfwd007(at)gmail(dot)com> writes:
> > > I have a table in MySQL with three compound indexes. I have only three
> > > columns from this table also in PostgreSQL, which serves as a cache of
> > > sorts for fast queries, and this table has only ONE main index on the
> > > primary key!
> >
> > > INSERTing into MySQL takes 0.0001 seconds per insert query.
> > > INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
> >
> > You had better provide some details, because that's completely out of
> > line, assuming that by "insert query" you mean insert one row. For a
> > comparison point, I get this on a rather old and slow machine:
> >
> > regression=# create table t1 (f1 int, f2 int, f3 int,
> > regression(# unique(f1,f2),
> > regression(# unique(f2,f3),
> > regression(# unique(f1,f3));
> > NOTICE: CREATE TABLE / UNIQUE will create implicit index "t1_f1_key" for table "t1"
> > NOTICE: CREATE TABLE / UNIQUE will create implicit index "t1_f2_key" for table "t1"
> > NOTICE: CREATE TABLE / UNIQUE will create implicit index "t1_f1_key1" for table "t1"
> > CREATE TABLE
> > regression=# \timing
> > Timing is on.
> > regression=# insert into t1 values(1,2,3);
> > INSERT 0 1
> > Time: 9.048 ms
> > regression=# insert into t1 values(1,7,4);
> > INSERT 0 1
> > Time: 4.357 ms
> > regression=# insert into t1 values(11,7,5);
> > INSERT 0 1
> > Time: 3.998 ms
> > regression=#
>
>
>
> Thanks Tom. But on a newly minted table, sure, the performance would
> be great. My table now has about 3 million rows (both in MySQL and
> PG).
>
> Here's the table definition:
>
>
> Table "public.cachedstats"
> Column | Type | Modifiers
> -----------------------+-----------------------+------------------------------
> id | bigint | not null
> prof_name | character varying(20) | not null
> notes | text | not null
> inform_prof_on_change | character(1) | not null default 'N'::bpchar
>
> Indexes:
>     "cachedstats_pkey" PRIMARY KEY, btree (id)
>     "idx_cachedstats_unique_prof_name" UNIQUE, btree (prof_name)
>
>
> How can I test the bottleneck?
>

Btw, the query is as simple as:

INSERT INTO cachedstats
(id, prof_name, notes, inform_prof_on_change)
VALUES
(3190087, 'Lim Berger', '.....text of about 1000 chars', 'Y');

I am timing this with PHP's microtime() function; the query is issued
through PHP's pg_query() function. I know there could be some latency
from the overhead of PHP's PostgreSQL functions, but not an order of
magnitude more than "mysqli_query"! (I hope -- because this is quite a
common real-world situation, I would think.)
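
For reference, the timing loop looks roughly like this (a simplified
sketch -- the real script builds the values dynamically, and the
connection string here is just a placeholder):

<?php
// Rough sketch of how the INSERT is timed (simplified)
$conn = pg_connect("host=localhost dbname=mydb user=myuser");

$sql = "INSERT INTO cachedstats
          (id, prof_name, notes, inform_prof_on_change)
        VALUES
          (3190087, 'Lim Berger', '.....text of about 1000 chars', 'Y')";

$start   = microtime(true);         // high-resolution start time
$result  = pg_query($conn, $sql);   // run the INSERT
$elapsed = microtime(true) - $start;

printf("INSERT took %.4f seconds\n", $elapsed);
?>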
