From: | "Hanu Kurubar" <hanu(dot)kurubar(at)gmail(dot)com> |
---|---|
To: | "Arjen van der Meijden" <acmmailing(at)tweakers(dot)net> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Append table |
Date: | 2007-06-02 19:49:00 |
Message-ID: | 912b58490706021249t1eca473eyab9a54a2cab4d2d0@mail.gmail.com |
Lists: pgsql-performance
Thanks for the quick answer.
Previously I exported the table's records into an employee.csv file using the
COPY command; that export has 36,00,0000 records.
After that I added a few more entries in the database, and EmpId has been
incremented further.
I want to put the exported data back into the database while regenerating new
EmpId values, i.e. import all the old data without harming the existing data.
If I choose plain INSERTs, that is very time consuming.
I am thinking of creating a new (dummy) table, copying all the data into it
with COPY FROM, then manipulating the data so that EmpId is unique across the
parent table and the dummy table, and finally appending the two tables; a
rough sketch of that idea is below.
I feel creating views and joins will make things complex.
Do you have inputs on this?
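Something along these lines is what I have in mind (a rough sketch only; the
table name employee, the columns emp_id and emp_name, the sequence
employee_emp_id_seq and the CSV path are assumptions, not the real schema):

CREATE TABLE employee_staging (LIKE employee);    -- dummy table with the same layout

COPY employee_staging (emp_id, emp_name)          -- load the exported CSV back in
FROM '/tmp/employee.csv' WITH CSV;

INSERT INTO employee (emp_id, emp_name)           -- append, regenerating EmpId
SELECT nextval('employee_emp_id_seq'), emp_name   -- fresh ids cannot collide with existing rows
FROM employee_staging;

DROP TABLE employee_staging;                      -- drop the dummy table afterwards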
On 6/2/07, Arjen van der Meijden <acmmailing(at)tweakers(dot)net> wrote:
>
> There are two solutions:
> You can insert all the data from tabelB into tabelA using a simple
> INSERT ... SELECT statement, like so:
> INSERT INTO tabelA SELECT EmpId, EmpName FROM tabelB;
>
> Or you can combine them visually, without actually putting the records in
> a single table. That can be done with a plain SELECT ... UNION statement or
> with a view, something like this:
> SELECT EmpId, EmpName FROM tabelA UNION SELECT EmpId, EmpName FROM tabelB;
>
> You can use this query as a table-generating subquery in a FROM-clause,
> like so:
>
> SELECT * FROM (SELECT EmpId, EmpName FROM tabelA UNION SELECT EmpId, EmpName
> FROM tabelB) AS emps WHERE EmpId = 1;
>
> Or with the view:
> CREATE VIEW tabelC AS SELECT EmpId, EmpName FROM tabelA UNION SELECT EmpId,
> EmpName FROM tabelB;
>
> And then you can use the view as if it were a normal table (although
> inserts are not possible without attaching rules to it, see the manual
> for that).
>
> SELECT * FROM tabelC WHERE EmpId = 1;
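For reference, the rule mechanism mentioned above could look roughly like this
(a sketch only; routing the new rows into tabelA is an assumed choice, not
something spelled out in the thread):

CREATE RULE tabelC_insert AS
ON INSERT TO tabelC
DO INSTEAD
INSERT INTO tabelA (EmpId, EmpName)
VALUES (NEW.EmpId, NEW.EmpName);

With such a rule in place, an INSERT against the view is rewritten into an
INSERT on tabelA.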
>
> Best regards,
>
> Arjen
>
> On 2-6-2007 17:52 Hanu Kurubar wrote:
> > Any luck on appending two table in PostgreSQL.
> > Below are two table with same schema that have different values. In this
> > case EmpID is unique value.
> >
> > tabelA
> > ------------
> > EmpId (Int) EmpName (String)
> > 1 Hanu
> > 2 Alvaro
> >
> >
> > tabelB
> > ------------
> > EmpId (Int) EmpName (String)
> > 3 Michal
> > 4 Tom
> >
> >
> > I would be looking for the output below after appending tabelA with tabelB.
> > Is this possible in PostgreSQL?
> >
> >
> > tabelA
> > ------------
> > EmpId (Int) EmpName (String)
> > 1 Hanu
> > 2 Alvaro
> > 3 Michal
> > 4 Tom
> >
> >
> >
> > Thanks,
> > Hanu
> >
> >
> > On 5/30/07, *Hanu Kurubar* <hanu(dot)kurubar(at)gmail(dot)com> wrote:
> >
> > Can you help me append the values of two tables into a single table
> > without performing row-by-row INSERTs?
> > Note that these tables have the same schema.
> >
> > Is there any SQL command that supports this?
> >
> > Thanks,
> > Hanu
> >
> >
> > On 5/29/07, *Alvaro Herrera* <alvherre(at)commandprompt(dot)com> wrote:
> >
> > Michal Szymanski wrote:
> > > There is another strange thing. We have two versions of our test
> > > environment, one with a production DB copy and a second generated with a
> > > minimal data set, and it is odd that the update presented above executes
> > > in 170ms on the copy of production but takes 6s on the small DB !!!!
> > > >
> > > > How are you vacuuming the tables?
> > > >
> > > Using pgAdmin (the DB is installed on my laptop) and I use this tool for
> > > vacuuming. I do not think that vacuuming can help because I've tested on
> > > both databases just after importing.
> >
> > I think you are misunderstanding the importance of vacuuming the table.
> > Try this: on a different terminal from the one running the test, run a
> > VACUUM on the updated table with vacuum_cost_delay set to 20, on an
> > infinite loop. Keep this running while you do your update test. Vary
> > the vacuum_cost_delay and measure the average/min/max UPDATE times.
> > Also try putting a short sleep on the infinite VACUUM loop and see how
> > its length affects the UPDATE times.
> >
> > One thing not clear to me is if your table is in a clean state. Before
> > running this test, do a TRUNCATE and import the data again. This will
> > get rid of any dead space that may be hurting your measurements.
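A minimal sketch of what that loop amounts to, run from a second psql session
while the UPDATE test executes (the table name test_table is an assumption):

SET vacuum_cost_delay = 20;  -- throttle the vacuum's I/O cost as suggested
VACUUM test_table;           -- re-issue this in a loop, optionally with a short
                             -- sleep between runs, varying vacuum_cost_delay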
> >
> > --
> > Alvaro Herrera                http://www.advogato.org/person/alvherre
> > "The Postgresql hackers have what I call a "NASA space shot" mentality.
> > Quite refreshing in a world of "weekend drag racer" developers."
> >                                                   (Scott Marlowe)
> >
>
--
With best regards,
Hanumanthappa Kurubar
Mobile: 98 801 800 65