From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>, Sawada Masahiko <sawada(dot)mshk(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, David Johnston <david(dot)g(dot)johnston(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: knowing detail of config files via SQL
Date: 2015-03-02 23:39:44
Message-ID: 54F4F4C0.30205@BlueTreble.com
Lists: pgsql-hackers
On 2/27/15 11:27 PM, Stephen Frost wrote:
>> >@@ -344,6 +346,21 @@ ProcessConfigFile(GucContext context)
>> > PGC_BACKEND, PGC_S_DYNAMIC_DEFAULT);
>> > }
>> >
>> >+ guc_file_variables = (ConfigFileVariable *)
>> >+ guc_malloc(FATAL, num_guc_file_variables * sizeof(struct ConfigFileVariable));
> Uh, perhaps I missed it, but what happens on a reload? Aren't we going
> to realloc this every time? Seems like we should be doing a
> guc_malloc() the first time through but doing guc_realloc() after, or
> we'll leak memory on every reload.
>
>> >+ /*
>> >+ * Apply guc config parameters to guc_file_variable
>> >+ */
>> >+ guc = guc_file_variables;
>> >+ for (item = head; item; item = item->next, guc++)
>> >+ {
>> >+ guc->name = guc_strdup(FATAL, item->name);
>> >+ guc->value = guc_strdup(FATAL, item->value);
>> >+ guc->filename = guc_strdup(FATAL, item->filename);
>> >+ guc->sourceline = item->sourceline;
>> >+ }
> Uh, ditto and double-down here. I don't see a great solution other than
> looping through the previous array and free'ing each of these, since we
> can't depend on the memory context machinery being up and ready at this
> point, as I recall.
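
For reference, the explicit cleanup would amount to something along these lines (a sketch only; free_guc_file_variables is a made-up helper operating on the patch's guc_file_variables / num_guc_file_variables, and it would have to run before the new count is computed):

	static void
	free_guc_file_variables(void)
	{
		int		i;

		/* Free everything the previous ProcessConfigFile() call strdup'd. */
		for (i = 0; i < num_guc_file_variables; i++)
		{
			free(guc_file_variables[i].name);
			free(guc_file_variables[i].value);
			free(guc_file_variables[i].filename);
		}

		/*
		 * guc_malloc/guc_strdup are thin wrappers over malloc/strdup, so
		 * plain free() is enough; free(NULL) makes the first call a no-op.
		 */
		free(guc_file_variables);
		guc_file_variables = NULL;
		num_guc_file_variables = 0;
	}

That said, I'm not sure we need to manage this by hand at all: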
MemoryContextInit() happens near the top of main(), before we call
InitializeGUCOptions(). So it should be possible to use memory contexts
here. I don't know why guc doesn't use palloc; perhaps for historical
reasons at this point?
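
If guc.c used a dedicated context for this, a reload could be handled by just resetting it, roughly like so (a sketch only: ConfigFileVarContext is a made-up name, it assumes utils/memutils.h, and it replaces the guc_malloc/guc_strdup calls quoted above):

	static MemoryContext ConfigFileVarContext = NULL;

	...

	if (ConfigFileVarContext == NULL)
		ConfigFileVarContext =
			AllocSetContextCreate(TopMemoryContext,
								  "config file variables",
								  ALLOCSET_DEFAULT_MINSIZE,
								  ALLOCSET_DEFAULT_INITSIZE,
								  ALLOCSET_DEFAULT_MAXSIZE);
	else
		MemoryContextReset(ConfigFileVarContext);	/* frees the old array */

	guc_file_variables = (ConfigFileVariable *)
		MemoryContextAlloc(ConfigFileVarContext,
						   num_guc_file_variables * sizeof(ConfigFileVariable));

	guc = guc_file_variables;
	for (item = head; item; item = item->next, guc++)
	{
		guc->name = MemoryContextStrdup(ConfigFileVarContext, item->name);
		guc->value = MemoryContextStrdup(ConfigFileVarContext, item->value);
		guc->filename = MemoryContextStrdup(ConfigFileVarContext, item->filename);
		guc->sourceline = item->sourceline;
	}

No per-field frees and no realloc bookkeeping; the whole array goes away in one MemoryContextReset() on the next reload.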
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com