From: | Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> |
---|---|
To: | "pgsql-general(at)postgresql(dot)org >> PG-General Mailing List" <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: md5(large_object_id) |
Date: | 2015-10-07 11:30:24 |
Message-ID: | CAFj8pRDFQC5=4MzJwwZbf7gH-arsYwWBJhvAYQ3sHpCZUKmYDA@mail.gmail.com |
Lists: | pgsql-general |
2015-10-07 13:18 GMT+02:00 Karsten Hilbert <Karsten(dot)Hilbert(at)gmx(dot)net>:
> On Wed, Oct 07, 2015 at 12:55:38PM +0200, Karsten Hilbert wrote:
>
> > > > I am dealing with radiology studies (aka DICOM data); one would
> > > > want an md5 function which streams in parts of a large object
> > > > piece by piece using md5_update and md5_finalize or some such.
> > > It would certainly be possible to write a lo_md5(oid) function to do
> > > this, but as far as I'm aware nobody has yet done so. How are your
> > > C skills?
> >
> > I had hoped someone was going to say: "Yeah, right, low
> > hanging fruit, let's just do it for 9.next" :-)
>
> Someone _with_ C skills, that is.
>
If the blobs are smaller than 1GB, then it should be possible in
plpgsql too:
postgres=# \lo_import ~/Desktop/001.jpg
lo_import 24577
postgres=# select md5(lo_get(24577));
┌──────────────────────────────────┐
│ md5 │
╞══════════════════════════════════╡
│ 610ccaab8c7c60e1168abfa799d1305d │
└──────────────────────────────────┘
(1 row)
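(The md5(lo_get(oid)) trick above works only while the whole object fits in a
1GB bytea. The md5_update/md5_finalize idea quoted earlier avoids that limit;
here is a minimal sketch of that streaming pattern in Python's hashlib, just to
show that feeding a blob to one digest context in chunks yields the same hash
as digesting it whole — a server-side lo_md5() in C would loop over lo_read()
the same way. The chunk size and sample data are illustrative, not from the
thread.)

```python
# Sketch of the md5_update / md5_finalize streaming idea: a chunked
# digest equals the whole-buffer digest, so a large object never has
# to be materialized in memory at once.
import hashlib

def md5_streamed(chunks):
    """Feed byte chunks into one md5 context (the 'update' phase)."""
    h = hashlib.md5()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()  # the 'finalize' phase

data = b"x" * (1 << 20)  # stand-in for a large object's bytes
# 8KB pieces, roughly what repeated lo_read() calls would return
chunks = (data[i:i + 8192] for i in range(0, len(data), 8192))
assert md5_streamed(chunks) == hashlib.md5(data).hexdigest()
```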
Regards
Pavel
> Thanks for the plperlu suggestion, btw.
>
> Karsten
> --
> GPG key ID E4071346 @ eu.pool.sks-keyservers.net
> E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>