| Commit message | Author | Age | Files | Lines |
... | |
|
|
|
|
|
| |
The crawler now changes into the directories being crawled and calls
stat() on relative paths instead of using absolute paths for all
operations. This brings about a 10% reduction in crawling time.
|
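The chdir-plus-relative-stat() approach can be sketched as below. This is an illustrative guess at the technique, not the crawler's actual code; the function name `crawl` and its return shape are invented here. The speedup comes from the kernel resolving a single path component per lstat() instead of the whole absolute path.

```python
import os

def crawl(root):
    """Walk a tree by changing into each directory and calling lstat()
    on bare relative names. Hypothetical sketch of the commit's idea."""
    entries = []
    prev = os.getcwd()
    os.chdir(root)
    try:
        for name in sorted(os.listdir(".")):
            # Relative path: only one component for the kernel to resolve.
            st = os.lstat(name)
            entries.append((name, st))
            if os.path.isdir(name) and not os.path.islink(name):
                # Recurse; the callee restores the working directory.
                entries.extend(
                    (os.path.join(name, sub), sst) for sub, sst in crawl(name)
                )
    finally:
        os.chdir(prev)
    return entries
```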
| |
|
| |
|
| |
|
|
|
|
|
| |
This currently ignores files whose names are in an unknown encoding.
This is far from ideal, though.
|
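The skip-on-unknown-encoding behaviour might look roughly like this. A minimal sketch assuming UTF-8 is the expected encoding; the helper name `safe_name` is invented here.

```python
def safe_name(raw: bytes):
    """Return the decoded filename, or None when the raw bytes are not
    valid UTF-8 (such files are currently skipped). Illustrative only."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return None
```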
|
|
|
|
|
| |
This ensures that the correct old files are removed and the current file
is correctly renamed, avoiding problems when the encryption type is
changed.
|
|
|
|
|
|
| |
This integrates compression and encryption detection into the
read_file() function and filename expansion into the write_file()
function (adding arguments for compression and encryption).
|
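Detecting compression and encryption from a repository file name could work by peeling known suffixes, roughly as below. The `FILTERS` table and `detect_filters` name are assumptions for illustration; the repository's real detection surely differs in detail.

```python
# Hypothetical mapping from file-name extension to filter kind.
FILTERS = {".gz": "gzip", ".gpg": "gpg"}

def detect_filters(name):
    """Strip known suffixes off a repository file name, returning the
    base name and the filter chain (outermost first) they imply."""
    chain = []
    while True:
        for ext, filt in FILTERS.items():
            if name.endswith(ext):
                chain.append(filt)
                name = name[: -len(ext)]
                break
        else:
            return name, chain
```

So a name like `meta.json.gz.gpg` would imply decrypting first, then decompressing.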
|
|
|
|
| |
Move writing to a separate function and use the file name in the
repository to select the correct extract and decrypt commands for use in
restore.sh.
|
| |
|
|
|
|
|
| |
This replaces a call to os.walk() with one to os.listdir() to avoid
calling stat() twice on each file and directory encountered.
|
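The single-stat() listing can be sketched as follows: one os.lstat() per entry, with the directory/file split taken from the returned mode rather than a second stat(). The function name `scan` is illustrative, not the crawler's.

```python
import os
import stat

def scan(path):
    """List a directory with exactly one lstat() per entry, splitting
    files from subdirectories by st_mode. Sketch of the commit's idea;
    os.walk() would stat each entry again to classify it."""
    files, dirs = [], []
    for name in os.listdir(path):
        st = os.lstat(os.path.join(path, name))
        (dirs if stat.S_ISDIR(st.st_mode) else files).append((name, st))
    return files, dirs
```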
|
|
|
|
| |
This updates properties of the Config class from arguments on the
command line. It also includes a few configuration changes.
|
| |
|
|
|
|
|
|
| |
This configures which PGP keys should be able to decrypt the backup. For
at least one of the specified keys, the private key should be available
during backup runs so that repository meta-data can be verified.
|
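Encrypting to several keys typically means passing one `--recipient` per key to gpg, roughly as below. A sketch of the likely invocation, not the tool's actual code; the helper name is invented.

```python
def gpg_encrypt_argv(key_ids):
    """Build a gpg command line that encrypts to every configured key,
    so the holder of any one private key can decrypt the backup.
    Illustrative sketch; real flags in the tool may differ."""
    argv = ["gpg", "--batch", "--encrypt"]
    for key in key_ids:
        argv += ["--recipient", key]
    return argv
```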
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
This prepares for configuration of compression and encryption.
The subprocess-related readers and writers have been moved to a separate
module and abstracted by Filter objects. The repository now includes
functions for detecting compression and encryption, based on file names
in the repository.
The restore script is now partially generated from commands that are
specified in the configured filters.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
files
Entries are now stored in the order they are found while crawling, so
that a directory's contents are extracted together. It is not strictly
necessary to store directory entries after the files they contain, but
the two need to be together for tar to do the extracting correctly.
|
| |
|
|
|
|
|
|
|
|
| |
This scans the repository for archive files and updates the list of
archives in the metadata cache.
This also ensures that JSON is consistently encoded and checks
subcommand exit status.
|
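Consistent JSON encoding and exit-status checking can be done with the standard library, roughly as below. A hedged sketch: the helper names are invented, but `sort_keys` plus fixed separators is the usual way to make repeated dumps of the same data byte-identical.

```python
import json
import subprocess

def dump_json(obj):
    """Encode metadata deterministically: sorted keys, fixed separators,
    UTF-8 bytes. Same data always yields the same bytes."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")

def run_checked(argv):
    """Run a subcommand and raise CalledProcessError on non-zero exit,
    instead of silently ignoring the status."""
    subprocess.run(argv, check=True)
```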
|
|
|
|
|
|
|
|
|
| |
This encrypts the extractlist, makes a JSON dump of the files in the
snapshot and makes a JSON dump of each created archive. It also passes
-p to tar when extracting and creates a proper temporary directory for
temporary files during extraction.
This also refactors some bits of FileRepository to be simpler and to
allow for different backends in the future.
|
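The temporary-directory extraction step might look like this. An illustrative sketch only: the function name and prefix are assumptions, though `tar -p` (preserve permissions) and `-C` (change directory) are standard tar flags.

```python
import subprocess
import tempfile

def extract(archive_path):
    """Extract an archive into a fresh temporary directory, preserving
    permissions with tar -p. Sketch of the commit's approach."""
    workdir = tempfile.mkdtemp(prefix="backup-extract-")
    subprocess.run(["tar", "-xpf", archive_path, "-C", workdir], check=True)
    return workdir
```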
|
|
|
|
|
| |
This uses subprocesses for compression and encryption using file-like
Filter classes. Encryption is done using a (currently plain text)
passphrase file.
|
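A file-like Filter wrapping a subprocess could be shaped like this. The class here is a minimal sketch of the idea, not the project's actual Filter API; error handling and the read-side counterpart are omitted.

```python
import subprocess

class Filter:
    """Minimal file-like writer that pipes data through a filter
    subprocess (e.g. a compressor or gpg) into a sink file object.
    Hypothetical sketch of the commit's Filter classes."""

    def __init__(self, argv, sink):
        self.proc = subprocess.Popen(argv, stdin=subprocess.PIPE, stdout=sink)

    def write(self, data):
        self.proc.stdin.write(data)

    def close(self):
        # Closing stdin lets the filter finish; then check its status.
        self.proc.stdin.close()
        if self.proc.wait() != 0:
            raise RuntimeError("filter subprocess failed")
```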
| |
|
|
|
|
|
|
|
|
| |
This moves many of the queries from the MetaData class to the places
where they are used and adds some comments.
Archive names are built up from a timestamp and a random string to
avoid collisions while remaining orderable.
|
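The timestamp-plus-random naming scheme can be sketched as below. The exact format string is an assumption for illustration; the point is that lexicographic order follows creation time while the random suffix avoids collisions.

```python
import secrets
import time

def archive_name(now=None):
    """Build an archive name from a UTC timestamp plus a random hex
    suffix: sortable by creation time, collision-resistant.
    The format here is an illustrative guess, not the tool's exact one."""
    ts = time.strftime("%Y%m%dT%H%M%S", time.gmtime(now))
    return f"{ts}-{secrets.token_hex(4)}"
```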
|
|
|
|
|
| |
This records whether a path is a directory while crawling. It also
re-organises the code a bit and limits the extractlist to path elements
only.
|
|
|
|
|
| |
The script is supposed to be self-contained, and should be simple and
smart enough to easily do a full restore of a single snapshot.
|
|
|
|
|
| |
This does a dummy extract of the selected archives into a temporary
table to see what the system would look like.
|
| |
|
|
|