From: Tareq Aljabban <dee(dot)jay23(dot)me(at)gmail(dot)com>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Storage Manager crash at mdwrite()
Date: 2012-03-15 23:36:23
Message-ID: CAGOe0aLqb18ZYWw6C77z_rcUMQyPfA0W2sWGuT5VTxZ7YDmAfw@mail.gmail.com
Lists: pgsql-hackers
When configuring PostgreSQL, I'm adding the libraries needed to run the HDFS
C API (libhdfs):
./configure --prefix=/diskless/myUser/Workspace/EclipseWS1/pgsql
--enable-depend --enable-cassert --enable-debug CFLAGS="$CFLAGS
-I/diskless/myUser/Workspace/HDFS_Append/hdfs/src/c++/libhdfs
-I/usr/lib/jvm/default-java/include" LDFLAGS="$LDFLAGS
-L/diskless/myUser/Workspace/HDFS_Append/hdfs/src/c++/libhdfs
-L/diskless/myUser/Workspace/HDFS_Append/build/c++/Linux-i386-32/lib
-L/usr/lib/jvm/default-java/jre/lib/i386/server -ljvm -lhdfs"
I have made many changes so far to how the storage manager works. In
particular, I changed smgr.c so that, instead of calling the regular md.c
functions, it calls my own functions.
For simplicity, you can say that whenever mdwrite() was supposed to be
called, another function is now called alongside it. So what is now being
called is:
mdwrite(param1, param2, buffer, ....);
hdfswrite(param1, param2, buffer, ....);
where hdfswrite() is the function in which I write the buffer to HDFS.
I changed hdfswrite() so that it always writes the same (dummy) buffer to
HDFS storage; call it "dummyBufferFile". After writing this file 3 times to
HDFS, I get the message that I showed in my first email. The same
hdfswrite() code works without any issues when I run it in a separate
application.
Hope it's clear now.
On Thu, Mar 15, 2012 at 5:28 PM, Jeff Davis <pgsql(at)j-davis(dot)com> wrote:
> On Thu, 2012-03-15 at 13:49 -0400, Tareq Aljabban wrote:
> > I'm implementing an extension to mdwrite() at
> > backend/storage/smgr/md.c
> > When a block is written to the local storage using mdwrite(), I'm
> > sending this block to an HDFS storage.
> > So far I don't need to read back the values I'm writing to HDFS. This
> > approach is working fine in the initDB phase.
> > However, when I'm running postgres (bin/pg_ctl start), the first few
> > write operations run successfully, and then suddenly (after writing
> > exactly 3 files to HDFS), I get a 130 exit code with the following
> > message showing the JVM thread dump of HDFS:
> >
> >
> > LOG: background writer process (PID 29347) exited with exit code 130
> > LOG: terminating any other active server processes
>
> ...
>
> > This seems like an HDFS issue, but the same code worked properly in
> > initDB(). I replaced this HDFS write code with code that always writes
> > the same (empty) block to HDFS, regardless of the value received by
> > mdwrite(). I kept getting the same issue after writing 3 files.
> > I copied this exact code to a separate C application and ran it there
> > and it executed without any problems (I wrote/deleted 100 files).
> > That's why I'm doubting that it's something related to postgreSQL.
> >
> >
> > Any ideas on what's going wrong?
>
> What code, exactly, did you change in md.c, and anywhere else in
> postgres? Are you linking in new libraries/code from somewhere into the
> postgres backend?
>
> Regards,
> Jeff Davis
>
>