archive provided by lainchan.jp

lainchan archive - /sec/ - 3506

File: 1484080474466.png (13 KB, 300x127, nw-setup-1.png)


i'm currently thinking about how to set up an offsite backup in a secure manner. there are essentially two options i'm considering:
* encfs reverse filesystem, then rsync the encrypted files to offsite
* luks encrypted offsite storage, rsync the unencrypted files (still over a secure channel obviously)

my goal is to prevent anyone except me from viewing my files.

encfs:
+ files are encrypted before they leave the onsite location
+ i've used it successfully before and know how it works
- encfs has security issues if an attacker has repeated access to encrypted files
- encryption on a per-file level
- makes incremental updates difficult or impossible (?)
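for reference, the encfs option would look roughly like this (paths and the config location are examples, not a tested setup):

```shell
# reverse mode: present an encrypted view of the plaintext tree,
# then mirror that encrypted view offsite
ENCFS6_CONFIG=/root/encfs-backup.xml encfs --reverse /data /tmp/data-enc
rsync -a --delete /tmp/data-enc/ backup@offsite:/srv/backup/
fusermount -u /tmp/data-enc
```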

luks:
+ solid encryption (?)
+ encryption below filesystem level
- needs password entry over network on each boot (so no fully automated boot possible)
- files are only encrypted at offsite location (they are in the clear in offsite memory or with offsite OS access)
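and a rough sketch of the luks option, run on the offsite box (device name and mount point are examples):

```shell
# one-time setup: luks directly on the raid array, filesystem on top
cryptsetup luksFormat /dev/md0      # destroys existing data on md0
cryptsetup open /dev/md0 backup     # asks for the passphrase
mkfs.ext4 /dev/mapper/backup
mount /dev/mapper/backup /srv/backup

# after every reboot: passphrase entry again, e.g. over ssh
cryptsetup open /dev/md0 backup && mount /dev/mapper/backup /srv/backup
```

the files then go over as plain rsync to /srv/backup, so they are only encrypted at rest on the offsite disks, matching the drawback listed above.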

* I don't need to encrypt the root partition of the offsite server, encrypting the raid disks is enough
* offsite server has a usb port inside the case for a usb key boot partition (maybe needed for luks)
* offsite server has case-open detection switch ("tamper switch")
* I can't inspect the server on a regular basis, but I can do so occasionally (every 2-3 months) or when I suspect something is wrong, or when a harddisk fails

attacker model: "incompetent hardware access"
* attacker can do everything on the network, but i'm planning to use ssh or vpn or something anyway
* attacker has hardware access since it's offsite
* attacker has access to harddisks I throw away when they fail
* attacker can shutdown/disconnect server
* attacker cannot open case without shutting down server (tamper switch)
* attacker cannot disconnect server power without shutting down server (tamper detection)
* attacker cannot disconnect/remove harddisk while server is running without triggering a warning (either because of network disconnect or because of raid degradation)

encfs additional constraints on the attacker model:
* the attacker does not repeatedly read out harddisk contents and determine file contents via encfs shortcomings (one-shot attacker only)

luks additional constraints on the attacker model:
* the attacker is not byzantine, as in he would not cut open the side of the case and access the memory or hardware bus from there while the server is running.

Can you recommend one setup over the other? Which one makes more sense? Is there some problem with either setup i haven't taken into account? Is the attacker model realistic? Anything i should change? Should i use something other than rsync?

And, most important to me: How have you set up your offsite backups?


make sure you use an OS with ASLR. Cold-capturing RAM is a thing.

Anyway, I would do something like cock.li's Iron Dong. https://vc.gg/blog/announcing-the-iron-dong-hidden-service-backup-system.html


My offsite backups are rather simple, I create a local PGP-encrypted backup with duply and then sync the backup data with an Amazon S3 bucket via aws-cli. My backup is only about 100G, with infrequent access storage the total cost is less than $3/month.
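roughly, the whole workflow is just this (profile name and bucket are examples):

```shell
# duply wraps duplicity: creates gpg-encrypted backup volumes locally
duply offsite backup

# then push the encrypted volumes to s3 with infrequent-access pricing
aws s3 sync /var/backups/offsite s3://my-backup-bucket/ \
    --storage-class STANDARD_IA
```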

Interesting read, thanks for the link.


tahoe-lafs is the gold standard


>local PGP-encrypted backup
you mean, you make an image (or tarball, whatever) and then encrypt this and then send to offsite?

The crypto here is rock solid, yes, but this has the disadvantage that you need double the diskspace for generating the encrypted image before sending, if I understand this correctly.


Not exactly, duplicity/duply performs the backup by breaking your data up into volumes of a configurable size (200M by default), which are then encrypted through gpg.

Unfortunately, duplicity doesn't seem to handle renaming or moving of files very well, so moving a directory that contains 100G of stuff will cause the backup to grow by 100G.
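e.g. a full run looks something like this (key id and target url are examples):

```shell
# split /data into 200M volumes, encrypt each with gpg, upload over sftp
duplicity full --volsize 200 --encrypt-key ABCD1234 \
    /data sftp://backup@offsite//srv/backup
```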


Use PGP to encrypt each file individually. Then you only need to push "files modified since X", and they're encrypted in transit and at rest.

The only downside is that you need enough temporary space on your trusted net to hold a copy of everything while the encryption takes place.
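a minimal sketch of that approach (recipient, paths and the stamp file are all made up):

```shell
# encrypt only files modified since the last run into a staging tree,
# then mirror the staging tree offsite
find /data -type f -newer /var/lib/backup.stamp -print0 |
while IFS= read -r -d '' f; do
    mkdir -p "/staging/$(dirname "$f")"
    gpg --encrypt --recipient me@example.org --output "/staging$f.gpg" "$f"
done
touch /var/lib/backup.stamp
rsync -a /staging/ backup@offsite:/srv/backup/
```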


this is not good, there are several backup images and containers larger than 50G

Also I want to automate backup (and restore) without fiddly shellscripts if possible, but rsync and similar tools don't have a "pipe through encryption here" option, which makes pgp a bad choice


>The crypto here is rock solid, yes, but this has the disadvantage that you need double the diskspace for generating the encrypted image before sending, if I understand this correctly.

Not if you create a ramfs mount point and save your unencrypted image there, then encrypt it and save it to your harddisk, then delete it from your ramfs mount point.
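something like this (paths are examples; note that ramfs has no size limit, so a too-large image can eat all your ram):

```shell
# stage the unencrypted image in ram only, encrypt it to disk, then wipe ram
mount -t ramfs ramfs /mnt/stage
tar -cf /mnt/stage/backup.tar /data
gpg --encrypt --recipient me@example.org \
    --output /srv/outgoing/backup.tar.gpg /mnt/stage/backup.tar
rm /mnt/stage/backup.tar
umount /mnt/stage
```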


dear lord.

these are all encrypted filesystems, lain. they aren't backup programs. the constraints for data that are assumed to be frequently accessed are different from the constraints on data totally at rest.

just use duplicity. duplicity is the gold standard.


Forgot to make this clear in my first response: >>3661 is exactly what duplicity does, only it doesn't take in all of your data at once, but processes it in little chunks.


can someone recommend decent hosting services for this?


>these are all encrypted filesystems
yes. they are used to encrypt files.

> they aren't backup programs

that's what rsync does. is there a reason *against* using different tools for these two tasks?

> just use duplicity

maybe when they release a stable major version and it's no longer under heavy development. For backups, some people prefer not to use software still in pre-release.

>processes it in little chunks
note that the duplicity website states that they sometimes use up a lot of temp space too.


does it support synchronizing filesystems, without versioning or increments? as in, just have the same data (plus/minus encryption) on two servers?

The problem is that an incremental backup takes up the same or more space than a plain duplicate, which takes up the same space as the original. That would require more diskspace on the offsite storage than on the onsite storage, and it is difficult to tell in advance how much (as in, at harddisk-buying time).


OP here. what encryption does duplicity use?

which attacker model does its encryption protect against?