From: Jan Wieck <JanWieck(at)Yahoo(dot)com>
To: 吴德文 <windwood(at)jingxian(dot)xmu(dot)edu(dot)cn>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: query and pg_dump problem on my postgresql 6.5.3/Redhat
Date: 2003-12-04 20:43:08
Message-ID: 3FCF9C5C.2030507@Yahoo.com
Lists: pgsql-general
I would say you're losing your disk drive. Have you checked it for bad
blocks lately?
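
Something along these lines should show it (just a sketch; /dev/hda1 is
only a stand-in for whatever partition actually holds your PGDATA, and
e2fsck wants the filesystem unmounted, so stop the postmaster first):

    badblocks -sv /dev/hda1    # read-only scan, reports any bad blocks found
    e2fsck -c /dev/hda1        # or let fsck run badblocks and mark them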
Jan
吴德文 wrote:
> Help!
>
> A few days ago, my php page began to complain about this:
> ------
> Warning: PostgresSQL query failed: pqReadData() -- backend closed the
> channel unexpectedly. This probably means the backend terminated abnormally
> before or while processing the request.
> ------
>
> The SQL string in php page is:
> ------
> $sql.='Select news_id,title,summary,publish_time,is_html,if_use_url,url,news_pri ';
> $sql.='From newses N,classes C ';
> $sql.="Where N.class_id = C.class_id AND C.classname='$class' ";
> $sql.='Order by publish_time Desc,news_id Desc Limit '.$Nlimit;
> ------
>
> NOTE:
> I'm on Redhat 6.2 with Postgresql 6.5.3. The database is named "news",
> and the table "newses" looks like this (dumped with "pg_dump -s -t newses news"):
>
> CREATE TABLE "newses" (
> "news_id" int4 DEFAULT nextval ( '"newses_news_id_seq"' ) NOT NULL,
> "title" character varying(100) NOT NULL,
> "class_id" int4 NOT NULL,
> "summary" text DEFAULT '',
> "user_id" int4 NOT NULL,
> "url" character varying(100),
> "img_url" character varying(100),
> "publish_time" date NOT NULL,
> "if_show_news" bool DEFAULT bool 'f' NOT NULL,
> "if_use_url" bool DEFAULT bool 'f' NOT NULL,
> "is_html" bool DEFAULT bool 'f' NOT NULL,
> "view_count" int4 DEFAULT 0 NOT NULL,
> "news_pri" int4);
> CREATE UNIQUE INDEX "newses_pkey" on "newses" using btree ( "news_id" "int4_ops" );
>
> This table has 243 records, the max news_id is 253.
>
> Later I found that queries like these fail in psql:
> select news_id,title from newses order by news_id desc limit 10;
> select count(news_id) from newses;
>
> But these work fine:
> select * from newses where news_id < 300;
> select count(*) from newses where news_id <300;
> select count(news_id) from newses where news_id <300;
>
> A simple rule: if I run a query over the whole table without a
> condition, I get the same error message mentioned above.
>
> I thought my postgresql should be patched or upgraded, so I began to back up the
> database on it.
>
> But I found that pg_dump sometimes does not work on that very table,
> and sometimes runs for a very long time and then errors out.
>
> The following is the error message from "pg_dump news -t newses -f newses-data.sql":
> ------
> pqWait() -- connection not open
> PQendcopy: resetting connection
> SQL query to dump the contents of Table 'newses' did not execute correctly. After we read all the table contents from the backend, PQendcopy() failed. Explanation from backend: ''.
> The query was: 'COPY "newses" TO stdout;
> '.
> ------
>
> I read the generated file (14 MB) and found that after the normal records (the first 91 KB) there are many lines like these:
> ------
> \N \N \N \N \N \N \N \N \N \N \N \N \N
> \N \N \N \N \N \N \N \N \N \N \N \N \N
> \N \N \N \N \N \N \N \N \N \N \N \N \N
> \N \N \N \N \N \N \N \N \N \N \N \N \N
> \N \N \N \N \N \N \N \N \N \N \N \N \N
> \N \N \N \N \N \N \N \N \N \N \N \N \N
> 280368896 \N \N 0 \N f f f 0 0
> 280368896 \N \N 0 \N f f f 0 0
> 280368896 \N \N 0 \N f f f 0 0
> ------
> And it ends with
> ------
> \N \N \N \N \N \N \N \N \N \N \N \N \N
> \N \N \N \N \N \N \N \N \N \N \N \N \N
> \N \N \N \N \N \N \N \N \N \N \N \N \N
> \N \N \N \N \N \N \N \N \N \N \N \N \N
> \.
> ------
>
> It is a nightmare now, for I can't get my data back. I googled around with
> no luck.
>
> Can anyone help me get the data back and tell me what is going on?
>
>
> Yours Wind Wood
> windwood(at)jingxian(dot)xmu(dot)edu(dot)cn
> 2003-12-04
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
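
Since your queries restricted to news_id < 300 still work, you may be
able to salvage the readable rows before the drive gets any worse.
Roughly like this (a sketch only -- "newses_rescue" is just a name made
up here, and it assumes the same WHERE clause keeps working):

    # copy the rows that are still readable into a scratch table
    echo "SELECT * INTO TABLE newses_rescue FROM newses WHERE news_id < 300;" | psql news
    # then dump just that table, with the same pg_dump options you already used
    pg_dump news -t newses_rescue -f newses_rescue.sql

Then move that dump onto a different disk as soon as you can.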
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #