From: Joshua Tolley <eggyknap(at)gmail(dot)com>
To: mladen(dot)gogala(at)vmsinfo(dot)com
Cc: "pgsql-novice(at)postgresql(dot)org" <pgsql-novice(at)postgresql(dot)org>
Subject: Re: Sphinx indexing problem
Date: 2010-05-24 00:48:49
Message-ID: AANLkTinR6Fo2M3gAuduV78sqYqPEOw0RX_5-BdbeMdZS@mail.gmail.com
Lists: pgsql-novice
On Sun, May 23, 2010 at 4:36 PM, Mladen Gogala
<mladen(dot)gogala(at)vmsinfo(dot)com> wrote:
> I am trying to create a Sphinx index on a fairly large Postgres table. My
> problem is the fact that the Postgres API is trying to put the entire
> result set into the memory:
>
> [root(at)medo etc]# ../bin/indexer --all
> Sphinx 0.9.9-release (r2117)
> Copyright (c) 2001-2009, Andrew Aksyonoff
>
> using config file '/usr/local/etc/sphinx.conf'...
> indexing index 'test1'...
> ERROR: index 'test1': sql_query: out of memory for query result
> (DSN=pgsql://news:***(at)medo:5432/news).
> total 0 docs, 0 bytes
> total 712.593 sec, 0 bytes/sec, 0.00 docs/sec
> total 0 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
> total 0 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
> Is there anything I can do to prevent the API from attempting to put the
> entire query result in memory?
Use a cursor, and fetch the result set one chunk at a time.
http://www.postgresql.org/docs/current/interactive/sql-declare.html
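For example, a minimal sketch of the idea (cursor name, fetch size, and the
table/column names are placeholders; substitute whatever your sql_query in
sphinx.conf selects):

    BEGIN;                          -- cursors only live inside a transaction
    DECLARE docs CURSOR FOR
        SELECT id, title, body FROM documents;
    FETCH FORWARD 10000 FROM docs;  -- pulls only 10000 rows into client memory
    FETCH FORWARD 10000 FROM docs;  -- repeat until a FETCH returns no rows
    CLOSE docs;
    COMMIT;

Each FETCH materializes only that batch on the client side, so the client
never has to hold the whole result set at once.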
--
Joshua Tolley / eggyknap
End Point Corporation