From: Jan Bilek <jan(dot)bilek(at)eftlab(dot)com(dot)au>
To: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Cc: "karsten(dot)hilbert(at)gmx(dot)net" <karsten(dot)hilbert(at)gmx(dot)net>, "scherrey(at)proteus-tech(dot)com" <scherrey(at)proteus-tech(dot)com>, "pavel(dot)stehule(at)gmail(dot)com" <pavel(dot)stehule(at)gmail(dot)com>
Subject: RE: Requirement PA-DSS 1.1.4
Date: 2019-06-06 23:51:02
Message-ID: SYCPR01MB521536891ED1C3BC9190AAEEB5170@SYCPR01MB5215.ausprd01.prod.outlook.com
Lists: pgsql-general
Thank you all - Karsten, Benjamin, Pavel, PostgreSQL team,
I've discussed all your inputs with our developers and they came up with a solution for this problem, which has already been agreed (at a high level) by our auditor.
I am adding it here so it can inspire others who may find themselves in the same situation.
Process For Managing Secure Data With PostgreSQL
This sets out the process we have developed for managing secure data with PostgreSQL. First, for any technique to work, we assume that you are using a filesystem and media compliant with NIST 800-88, in the sense that:
1. Your disk can be cleaned with multiple overwrite passes (if you have a platter disk)
2. Your disk can be cleaned by trimming (if using an SSD)
3. Once the disks are removed from use as secure storage, they are destroyed in line with NIST 800-88.
The problem to be solved is that data which is no longer used must be securely erased, even if it was stored encrypted. Imagine a transaction log used for settling transactions with batch entities overnight (standard UK processing): once you have finished with those card numbers, which are held encrypted, they must be securely erased from the system. Another use case is the expiry of keys that are no longer used by an application. In these cases we don’t want to destroy the entire table or database, but only a partition of the data.
We propose that data is stored in two ways:
1. Tables that are deleted when the data is no longer wanted (insert then drop only tables)
2. Rows that are deleted when the data is no longer wanted
For scenario 1, when the data is finished with, the table is sent to the “Secure Delete” process. For scenario 2, when data is finished with, the remaining rows are copied to a new instance of the table. Imagine a view sitting over an active and an inactive table (for example, a key_store view over key_store_active and key_store_inactive): you send inactive to the “Secure Delete” process, recreate inactive with SELECT * INTO inactive FROM active, then swap active and inactive, and finally “Secure Delete” the new inactive table, as sketched below.
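A minimal SQL sketch of one such cutover, assuming hypothetical names (key_store, key_store_active, key_store_inactive, columns key_id/key_material):

    -- The application reads through the view while the backing tables
    -- are rotated underneath it.
    CREATE TABLE key_store_active   (key_id uuid PRIMARY KEY, key_material bytea);
    CREATE TABLE key_store_inactive (LIKE key_store_active INCLUDING ALL);
    CREATE VIEW key_store AS
        SELECT * FROM key_store_active
        UNION ALL
        SELECT * FROM key_store_inactive;

    -- One cutover pass.
    BEGIN;
    -- Queue the old inactive table for the "Secure Delete" process below.
    ALTER TABLE key_store_inactive RENAME TO key_store_erase_a;
    -- Clean copy of the surviving rows (SELECT INTO copies data only,
    -- not indexes or constraints).
    SELECT * INTO key_store_inactive FROM key_store_active;
    -- Swap: the clean copy becomes active; the old, dirty active file
    -- becomes inactive and is queued for "Secure Delete" in turn.
    ALTER TABLE key_store_active   RENAME TO key_store_erase_b;
    ALTER TABLE key_store_inactive RENAME TO key_store_active;
    ALTER TABLE key_store_erase_b  RENAME TO key_store_inactive;
    -- PostgreSQL views bind to table OIDs, not names, so re-point the
    -- view after the swap.
    CREATE OR REPLACE VIEW key_store AS
        SELECT * FROM key_store_active
        UNION ALL
        SELECT * FROM key_store_inactive;
    COMMIT;

Until the new inactive table is erased, the view sees both copies of the surviving rows; this is where the extra view tricks mentioned further below come in.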
The secure dropping of a table would operate as follows (a SQL sketch follows the list):
* Drop all FKs from my_table_to_delete
* Drop all PKs from my_table_to_delete
* Generate my_new_uuid, a new v4 UUID
* Rename my_table_to_delete -> my_new_uuid
* Insert into pending_secure_erase values (my_new_uuid, filenameof(my_new_uuid)) (the filename can be determined by applying catalog functions to the OID of the table)
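A sketch of this enqueue in SQL, assuming a pending_secure_erase queue table as below and PostgreSQL 13+ for gen_random_uuid() (earlier versions can use the pgcrypto extension):

    CREATE TABLE IF NOT EXISTS pending_secure_erase (
        table_name text PRIMARY KEY,
        filename   text NOT NULL   -- path relative to the data directory
    );

    DO $$
    DECLARE
        new_name text := replace(gen_random_uuid()::text, '-', '_');
        c        record;
    BEGIN
        -- Drop FK and PK constraints; CASCADE also removes FKs in other
        -- tables that reference this one.
        FOR c IN SELECT conname FROM pg_constraint
                 WHERE conrelid = 'my_table_to_delete'::regclass
                   AND contype IN ('f', 'p')
        LOOP
            EXECUTE format(
                'ALTER TABLE my_table_to_delete DROP CONSTRAINT %I CASCADE',
                c.conname);
        END LOOP;

        -- pg_relation_filepath() is the "filenameof": it maps the table's
        -- OID to its on-disk file, relative to the data directory.
        INSERT INTO pending_secure_erase
        VALUES (new_name, pg_relation_filepath('my_table_to_delete'));

        EXECUTE format('ALTER TABLE my_table_to_delete RENAME TO %I', new_name);
    END $$;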
A separate process with permission to access the underlying data files then runs (probably as the postgres user):
1. Run the following forever:
* Wait for pending_secure_erase to contain something
* For each (table_name, filename) in pending_secure_erase:
* If the file exists, run a secure-erase tool such as shred(1) on it
* Drop table if exists table_name
* Delete the processed row from pending_secure_erase
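The intent above is an external daemon running as the postgres OS user. Purely as an illustration, one pass of that loop can also be expressed in SQL, using the superuser-only COPY ... TO PROGRAM to invoke shred(1) (the server's working directory is the data directory, so the relative filename recorded above resolves):

    -- One pass of the erase loop; schedule it (cron, pg_cron, ...) to
    -- run repeatedly.
    DO $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT table_name, filename FROM pending_secure_erase LOOP
            -- Overwrite the main fork in place if it exists; DROP TABLE
            -- then unlinks it.  A large table may also have segment
            -- files (filename.1, .2, ...) and _fsm/_vm forks that need
            -- the same treatment.
            EXECUTE format('COPY (SELECT 1) TO PROGRAM %L',
                           format('test ! -f %s || shred -- %s',
                                  r.filename, r.filename));
            EXECUTE format('DROP TABLE IF EXISTS %I', r.table_name);
            DELETE FROM pending_secure_erase
             WHERE table_name = r.table_name;
        END LOOP;
    END $$;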
In this way we have enabled:
1. Row deletes whose data is completely purged from disk
2. The database to be available for use even while the application is moving data (you can use more tricks in the view to insert/update both active and inactive during the copy to make it “more” online)
The limitation is that, where expiry is row-based rather than table-based, data isn’t securely erased until the above process runs, so your exposure is bounded by how often the cutover process is run.
Disclaimers:
- All credit to our principal architect (CD), who put this together; I am just the messenger here, and he prefers to stay in the background.
- Feel free to comment, but implementation on our side has already commenced.
Kind Regards,
Jan
CTO - EFTlab
On 2019-06-06 18:14:39+10:00 karsten(dot)hilbert(at)gmx(dot)net wrote:
On Thu, Jun 06, 2019 at 11:41:40AM +0700, Benjamin Scherrey wrote:
> You should never store such information
> in a database product unless you plan of decommissioning ALL of the media
> that stores the information once you're supposed to lose custody.
Use a tablespace on a dedicated disk.
Move the tablespace when requirements ask for deletion.
Wipe the storage medium after moving the tablespace.
Karsten
--
GPG 40BE 5B0E C98E 1713 AFA6 5BC0 3BEA AC80 7D4F C89B