From: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
To: "pgsql-generallists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: A way to optimize sql about the last temporary-related row
Date: 2024-06-27 15:27:40
Message-ID: CANzqJaA1omaViNqAdEQzfKng8D4X6=Fku_spJhJNEvDt9Nt99Q@mail.gmail.com
Lists: pgsql-general
On Thu, Jun 27, 2024 at 11:20 AM agharta82(at)gmail(dot)com <agharta82(at)gmail(dot)com>
wrote:
[snip]
> -- insert 4M records
> insert into test_table(pk_id) select generate_series(1,4000000,1);
>
> -- now set some random data, distributed between specific ranges (as in
> my production table)
> update test_table set
> datetime_field_1 = timestamp '2000-01-01 00:00:00' + random() *
> (timestamp '2024-05-31 23:59:59' - timestamp '2000-01-01 00:00:00'),
> integer_field_1 = floor(random() * (6-1+1) + 1)::int,
> integer_field_2 = floor(random() * (200000-1+1) + 1)::int;
>
>
> -- indexes
> CREATE INDEX idx_test_table_integer_field_1 ON test_table(integer_field_1);
> CREATE INDEX xtest_table_datetime_field_1 ON test_table(datetime_field_1
> desc);
> CREATE INDEX idx_test_table_integer_field_2 ON test_table(integer_field_2);
>
>
Off-topic: save some resources by vacuuming before creating the indices; the
bulk UPDATE rewrote all 4M rows, leaving a dead tuple behind each one.
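A minimal sketch of that ordering, using the table and index definitions from the quoted setup (VACUUM first, then the index builds):

```sql
-- Reclaim the dead tuples left by the 4M-row UPDATE and refresh
-- planner statistics before building anything on top of the table.
VACUUM (ANALYZE) test_table;

-- Now build the indices against the cleaned-up heap.
CREATE INDEX idx_test_table_integer_field_1 ON test_table(integer_field_1);
CREATE INDEX xtest_table_datetime_field_1 ON test_table(datetime_field_1 DESC);
CREATE INDEX idx_test_table_integer_field_2 ON test_table(integer_field_2);
```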