transferring large encrypted images.

transferring large encrypted images.

Xen
Hi Folks,

I was wondering if I could ask this question here.

Initially, when I was thinking up how to do this, I expected block
encryption to stay consistent from one 'encryption run' to the next, but I
found out later that most schemes randomize the result by injecting a
random block or seed (an IV) at the beginning and basing all other
encrypted data on that.

I guess this is to prevent known-plaintext attacks (the block at the
beginning of many formats is always the same?) and also to prevent an
attacker from learning the key from multiple encryptions with the same
key.

However, the downside is that any delta-transfer optimization, such as
rsync's, is rendered useless.
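
You can see this with a quick openssl test (just an illustration, not what
my backup tool actually does):

    # Same input, same passphrase, two runs: the random salt makes
    # every ciphertext block differ, so rsync finds nothing to reuse.
    openssl enc -aes-256-cbc -salt -pass pass:secret -in image.img -out run1.enc
    openssl enc -aes-256-cbc -salt -pass pass:secret -in image.img -out run2.enc
    cmp run1.enc run2.enc    # the files differ almost from the start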

What is a best practice for this, if any?

The backup software I'm currently using (I'm on Windows) does encryption,
but since it has the key, it can create differentials/incrementals so the
whole image does not need to be retransferred. If it works, that is, but
that's another story.

Still, differentials and incrementals are all fine (grandfather, father,
son), but updating the main full image file itself would perhaps be much
more efficient still.

For some reason my host and rsync on Windows are rather slow; I get some
500 KB/s upload for a 20 GB file, which takes kinda long.

I might start splitting the files into smaller, gigabyte-sized chunks as
well, though.
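
Something like GNU split would do for that (just a sketch, filenames made
up):

    # Split into 1 GB pieces so a failed transfer only redoes one piece.
    split -b 1G image.enc image.enc.part-
    cat image.enc.part-* > image.enc    # reassemble on the other side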

Currently I'm sending it to another host at 1 MB/s, which rsyncs it to the
real target, where I'm less concerned about how long it takes.

But I'm sending it over with scp (pscp) because for some reason rsync is
also rather slow here (maybe it's my computer).
scp has no resume option (how silly), but I can just rsync if it fails.
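
I suppose plain rsync options cover the resume part (paths made up):

    # --partial keeps an interrupted file around; --inplace updates the
    # existing remote copy instead of rewriting a temporary one.
    rsync --partial --inplace -e ssh image.enc user@host:backups/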

Still, I wonder how other people are doing this, if they do something like
this.

Regards,

Xen.


--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Re: transferring large encrypted images.

Paolo Bolzoni
Why are you encrypting the files and not the filesystem and the channel?

Re: transferring large encrypted images.

Selva Nair
On Tue, Oct 13, 2015 at 12:54 PM, Xen <[hidden email]> wrote:

> What is a best practice for this, if any?

If the backup is from an encrypted volume to another, depending on the
scheme used, you could arrange for rsync to see only decrypted data (with
the transport protected by, say, ssh): for example, both destination and
source using eCryptfs could have the decrypted volumes mounted during the
backup.

But this may not be necessary: directly backing up an encrypted volume
could still make use of rsync's delta algorithm. In the case of eCryptfs,
for example, data is encrypted in blocks of page_size (e.g., 4 kB), so
only a few blocks may change during updates, and subsequent rsync runs
could be almost as efficient as on unencrypted volumes -- I haven't tested
this, though.
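
As a sketch of what I mean (assuming the stock ecryptfs-utils layout,
where ~/Private is the decrypted view and ~/.Private the encrypted lower
directory):

    # Back up the encrypted lower layer directly; between runs only the
    # extents that actually changed should differ.
    rsync -a --partial ~/.Private/ user@backuphost:backups/.Private/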

If encryption is only to protect the data during transport, you can simply
use ssh transport with rsync.

If the idea is to protect the data at a remote backup destination, say in
the cloud, rsync may not be the best option. For that I prefer duplicity,
which uses the rsync algorithm to transfer only deltas (it uses librsync)
but stores the backup as tar archives encrypted by GnuPG (both the initial
full backup and the incremental deltas). You lose the advantage of the
mirror archive that rsync can maintain.
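
Typical duplicity usage looks roughly like this (host and key ID made up):

    # Full backup the first time, incremental on later runs; the archives
    # stored remotely are GnuPG-encrypted tar volumes.
    duplicity --encrypt-key ABCD1234 /data sftp://user@backuphost/backups
    # Restoring pulls the full backup plus the deltas back down:
    duplicity restore sftp://user@backuphost/backups /restore/data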
 
Selva



Re: transferring large encrypted images.

Xen

Paolo Bolzoni <[hidden email]> wrote:

> Why are you encrypting the files and not the filesystem and the channel?

Because of what the other person mentioned.

If anything ever gets compromised, people may have access to the
filesystem(s) and the channel(s) before they get access to the file. That
is to say, yes, it is a remote host with a form of cloud service. I do not
think that I can encrypt that filesystem. Of course, I could encrypt it on
the spot, but then rsync would also not work.

They might take my private key (the one that does the transfer) from
somewhere, so to speak, but then they still won't have the file.

The local filesystem is encrypted, but not the others. I mean, I see the
advantage technically, but practically, having an encrypted file is way
superior for me.

They are images, so they are like filesystems themselves. It is a  
filesystem that is being stored on a filesystem.

That's why I encrypt the image. And I store them on remote hosts that  
I do not control.

Regards, X.


Re: transferring large encrypted images.

Xen
Selva Nair <[hidden email]> wrote:

> If the backup is from an encrypted volume to another, depending on the
> scheme used, you could arrange for rsync to see only decrypted data (with
> the transport protected by, say, ssh): for example, both destination and
> source using eCryptfs could have the decrypted volumes mounted during the
> backup.

Hmmm, I know, but it would be like mounting the image within, for
instance, a block container (I would create a block container the size of
my quota, and hope I can run LUKS or TrueCrypt there).

Then you mount that container and store the images/volumes in there, and
that then effectively is the volume's encryption. But I do not like that
scheme. The image itself is already a form of block container.

Mounting it would be pointless (it is not really a file-level  
container, more like block-level).

> But this may not be necessary: directly backing up an encrypted volume
> could still make use of rsync's delta algorithm. In the case of eCryptfs,
> for example, data is encrypted in blocks of page_size (e.g., 4 kB), so
> only a few blocks may change during updates, and subsequent rsync runs
> could be almost as efficient as on unencrypted volumes -- I haven't
> tested this, though.

That's what I mentioned. It depends on whether the encryption algorithm
"randomizes" the encryption runs to make them different each time, or not.
Because if they are the same, you could use gzip's --rsyncable and then
what you say would be correct. But in practice, thus far (I haven't tested
it extensively with what I'm currently using) you get a different
encryption each run, which means all the blocks are different.
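
For the record, the --rsyncable idea would look like this (it needs a gzip
with the rsyncable patch, e.g. Debian's, or a recent GNU gzip; filenames
made up):

    # --rsyncable resynchronizes the compressor periodically, so a local
    # change in the input stays local in the .gz output.
    gzip --rsyncable -c image.img > image.img.gz
    rsync -av --partial image.img.gz user@host:backups/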

> If encryption is only to protect the data during transport, you can simply
> use ssh transport with rsync.

Yeah, but it is more for remote storage. And even for local storage there
are different levels of "having to give up your passwords": you may have
to give up one (your first) but still be in a position to keep your second
or third.

I have had a scheme with at least 3 different sets of passwords, where I
could, at my own leisure so to speak, hand over the first when I felt like
it, and they would see an almost empty system, except that all the normal
applications are there -- just no email etc. Then there is another
password, and it only reveals non-offensive stuff. I mean, what to call
it. Non-controversial.

So when they get the second password they see only stuff that is not very
important. And the 3rd password even opens a hidden partition. Stuff like
that. I only forgot the password to the outer volume :P.

lol :(.

> If the idea is to protect the data at a remote backup destination, say in
> the cloud, rsync may not be the best option. For that I prefer duplicity,
> which uses the rsync algorithm to transfer only deltas (it uses librsync)
> but stores the backup as tar archives encrypted by GnuPG (both the
> initial full backup and the incremental deltas). You lose the advantage
> of the mirror archive that rsync can maintain.

So duplicity is a full solution. Meaning, probably, that it transfers the
data unencrypted or temporarily encrypted, and then encrypts it at the
remote host with the given solution? All of these schemes require some
process running at the remote host.

I also need this to be (in part) a Windows solution. That is to say,
either the software encrypts the image, or I do it myself. You can do a
cat over ssh, but that probably obviates the ability to have incremental
stuff, unless you devise it really well. You could then encrypt it
remotely as it is received, but that is not really what you want either. I
mean partial transfers, retransfers, or continuation of transfers. The
only real solution for what I want is to have a delta on the encrypted
blocks.

So given that that is not possible, you delta the unencrypted file but
encrypt it remotely. However, that doesn't work with the solution I'm
using. It would also imply you store the file remotely directly, not
storing it locally first. All impossible, today.

You probably cannot even mount a block device / file unless you are root.
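
That is, a loop mount like the following normally needs root (paths made
up):

    # Attaching an image file as a block device requires privileges:
    sudo mount -o loop image.img /mnt/image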

So yeah, I don't know yet. Thanks for the thought though, I will think  
about it.

Thanks, B.


Fwd: transferring large encrypted images.

Selva Nair
As I said before, just rsyncing the lower (encrypted) layer of an eCryptfs
volume may work well -- no multiple decryption-encryption cycles and what
not. Say you have an eCryptfs folder named ~/Private; then just keep your
images in ~/Private and rsync the lower layer in ~/.Private. If you enable
filename encryption, retrieving individual files may get a bit tricky,
though.

On Tue, Oct 13, 2015 at 5:03 PM, Xen <[hidden email]> wrote:

> Sure if the files are small and not encrypted. Or, not constantly
> changing (with their encryption).

Not so. For small files there is no advantage, as any change may change the
whole file. It's for large files where only a few blocks change that the
delta algorithm saves transfer time. And that's exactly where eCryptfs
could help.

You don't like file-level encryption, but that is exactly what you have
been asking about. You can't move out of Windows but still want a
scalable, stable solution. It's all a contradiction in terms.

Selva



Re: Fwd: transferring large encrypted images.

Xen
On Tue, 13 Oct 2015, Selva Nair wrote:

> On Tue, Oct 13, 2015 at 5:03 PM, Xen <[hidden email]> wrote:
>
>> Sure if the files are small and not encrypted. Or, not constantly
>> changing (with their encryption).
>
> Not so. For small files there is no advantage, as any change may change
> the whole file. It's for large files where only a few blocks change that
> the delta algorithm saves transfer time. And that's exactly where
> eCryptfs could help.

But you said "you can keep the image in the filesystem" or something like
that. Now, if I were backing up a single filesystem, obviously there
(normally) won't be encrypted images on it. But that means you can't use
the outer-layer (lower-layer, as you call it) eCryptfs, because it will
probably use randomized encryption.

That means you need to use the decrypted files, i.e. a mounted eCryptfs to
operate from. In that case there are no advantages to eCryptfs; you might
just as well encrypt the entire volume/partition/system.

Depending, perhaps, on whether you need user home directories that are
encrypted separately from the rest, etc.

> You don't like file-level encryption, but that is exactly what you have
> been asking about. You can't move out of Windows but still want a
> scalable, stable solution. It's all a contradiction in terms.

Ehm, no. Unless I'm mistaken, I have been asking about block-container
encryption, but perhaps that is the same to you? A container file is still
a file.

Anyway, Duplicity is the only system of this kind I've heard of (I had
heard about it before), and now that I've read up on it, it seems to work
well. I don't like GnuPG, but there you have it. On the other hand,
restoring Linux would require a live session with duplicity after manually
creating the filesystems, then chrooting and hopefully restoring the boot
manager; all fine and simple.

But that means you need to run a Linux system, as you say, which has its
own drawbacks ;-). The point of even backing up a system like that kinda
disappears. But all in all, these are filesystem deltas of real,
unencrypted files. It doesn't use rsync itself (by default; it doesn't
have to), but it uses the rsync algorithm to create diffs. And the
incremental diffs are stored remotely.

Well, that's what my Windows software does too. You see, it's all the same
in that regard. Perhaps it creates huge diffs -- that might be a flaw of
the software. Duplicity creates a lot of temp files, or uses a lot of temp
space; I take that to mean it first creates the tarball locally. So what
you have is a system that merely facilitates the transfer process and
makes it more intuitive to transfer to a remote location.

But that means Duplicity does what I do: I create encrypted "tarballs" and
encrypted "diffs" of those tarballs against the newest "filesystem", and
both are currently stored remotely through scp and/or rsync.

I could mount a remote filesystem (such as WebDAV, or whatever) and write
to it directly, and apart from some failure modes (what if I have a
network error?) it would do exactly the same thing in a better or more
pleasant way. Except that mounting remote filesystems by default also
gives away the location, etc.

What I might do is create a network share on a host I reasonably trust (a
VPS) and attempt to store backups there as it automatically rsyncs them to
a different host. All it requires then is for the writes to that network
share to succeed reasonably. I could have a script (some cron thing,
perhaps) that just checks whether it is running and, if not, fires up a
regular rsync job.
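
Something like this rough sketch, perhaps (host and paths made up):

    #!/bin/sh
    # Cron guard: if no rsync to the mirror host is running, kick off
    # a catch-up transfer of the staged backups.
    pgrep -f 'rsync .*mirrorhost' >/dev/null || \
        rsync -a --partial /srv/backups/ user@mirrorhost:/srv/backups/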

I guess I'll go about fixing that....

Regards..
