Delphi: backing up own files

There is a folder of about 7 GB containing a large number of binary files. The average size of a binary file is around 100 KB. These files are compressed into ZIP archives, and the average size of such a ZIP archive is around 2 MB. Therefore I need to back up the ZIP archives.

I need to back up the whole folder. It can be done without data compression; the main requirement is that the backup be fast. It must also be possible to restore files from the backup.

For example, an archive without data compression could be used, as long as adding and extracting files is fast. Alternatively, fast copying could be used.

Please tell me how to do this in Delphi, or share your suggestions.

Thanks!


How is the 7 GB of data laid out? How many files are there? What kind of files are they (DB, pictures, text, docx, doc)? What are the average/smallest/largest file sizes? How do the files change: are new files added, are files completely replaced, or are some bytes changed/appended in existing files?

This information matters when picking the right way to do the backup...

For instance, if your files are not small, just copying them to another folder would do the trick (you can find an algorithm for directory-level diffs in this unit).
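As a rough illustration of that plain-copy approach (not part of the original answer), here is a minimal Delphi sketch using the RTL's System.IOUtils in a modern Delphi; the paths in the example call are placeholders:

uses
  System.SysUtils, System.IOUtils;

// Copy every file under SrcDir into DstDir, preserving the relative folder layout.
// Existing files in the destination are overwritten, so re-running refreshes the backup.
procedure CopyFolderTree(const SrcDir, DstDir: string);
var
  SrcFile, RelPath, DstFile: string;
begin
  for SrcFile in TDirectory.GetFiles(SrcDir, '*', TSearchOption.soAllDirectories) do
  begin
    // path of the file relative to the source root
    RelPath := SrcFile.Substring(IncludeTrailingPathDelimiter(SrcDir).Length);
    DstFile := TPath.Combine(DstDir, RelPath);
    TDirectory.CreateDirectory(TPath.GetDirectoryName(DstFile));
    TFile.Copy(SrcFile, DstFile, True); // True = overwrite the previous backup copy
  end;
end;

// Example call (paths are hypothetical):
//   CopyFolderTree('C:\Data\Archives', 'D:\Backup\Archives');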

7 GB is huge for a zip archive. Of course, you can handle a zip of that size, but IMHO it is not the right format to use for an incremental backup.

One issue with the zip format is that to update a file, the previous version has to be deleted, therefore the whole zip content has to be rewritten... at best, the file content is moved.

So if you want to refresh the backup on a regular basis (e.g. each day), a regular archive format is not the best candidate.

I'd suggest using some flat-file format, like our Open Source Synopse Big Table, and storing either the plain content or a compressed version (don't use the zip/deflate format, which is slow; use for instance our SynLZ algorithm, which is faster than any other - even faster than LZO - for compression). If compression is faster than your disk access, the backup will be faster: zip/deflate compression is slower than disk access, but SynLZ is much faster, so you will save time by SynLZ-compressing the content.
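As a hedged sketch of what SynLZ-compressing a file's content before storing it could look like (the function names and signatures below are assumed from the SynLZ.pas unit distributed with the Synopse units; verify them against the copy you actually download):

uses
  System.SysUtils, System.IOUtils,
  SynLZ; // Synopse SynLZ.pas; function names below are assumed from that unit

// Compress a file's bytes with SynLZ before writing them into the backup store.
function CompressFileSynLZ(const FileName: string): TBytes;
var
  Plain: TBytes;
  CompLen: Integer;
begin
  Result := nil;
  Plain := TFile.ReadAllBytes(FileName);
  if Length(Plain) = 0 then
    Exit;
  // worst-case compressed size, as reported by SynLZ
  SetLength(Result, SynLZcompressdestlen(Length(Plain)));
  CompLen := SynLZcompress1(PAnsiChar(@Plain[0]), Length(Plain), PAnsiChar(@Result[0]));
  SetLength(Result, CompLen); // shrink to the actual compressed size
end;

// Restore the original bytes from a SynLZ-compressed buffer.
function DecompressSynLZ(const Comp: TBytes): TBytes;
begin
  Result := nil;
  if Length(Comp) = 0 then
    Exit;
  // the uncompressed length is stored in the SynLZ header
  SetLength(Result, SynLZdecompressdestlen(PAnsiChar(@Comp[0])));
  SynLZdecompress1(PAnsiChar(@Comp[0]), Length(Comp), PAnsiChar(@Result[0]));
end;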

With Big Table, you can keep in-memory metadata for the file attributes (name, version, date, attributes, previous version ID...) and leave the compressed content on disk.
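Big Table's own API is not reproduced here, but the kind of in-memory metadata described above could be modelled in Delphi along these purely illustrative lines, with only the compressed content kept on disk:

type
  // Hypothetical per-file metadata kept entirely in memory;
  // only the (compressed) file content itself lives in the on-disk store.
  TBackupEntry = record
    FileName: string;          // original relative path
    Version: Cardinal;         // incremented on each backup run
    PreviousVersionID: Int64;  // ID of the prior stored version, 0 if none
    Modified: TDateTime;       // last-write time of the source file
    Attributes: Integer;       // raw file attributes
    StoreOffset: Int64;        // where the stored content starts in the flat file
    StoreSize: Int64;          // size of the stored content in bytes
  end;

  TBackupIndex = array of TBackupEntry; // small enough to keep and search in RAM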

If the files are huge and only some bytes change, consider using a binary diff instead of writing the whole content every time, so that only the difference between main versions is stored. Here is a unit which is very fast, and which we use to store diffs between versions in the file-versioning part of our SynProject tool.


You can find many components which will help you zip/unzip files or folders here or here.
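If you stay with the RTL, recent Delphi versions ship System.Zip, which can also cover the "archive without compression" case from the question. A minimal sketch follows (the procedure names and paths are placeholders, and note that refreshing such an archive still rewrites it, as discussed above):

uses
  System.Zip; // RTL unit

// Store a whole folder in a zip container without compressing it (zcStored),
// which keeps adding and extracting fast at the cost of archive size.
procedure BackupFolderUncompressed(const SourceDir, ArchiveName: string);
begin
  TZipFile.ZipDirectoryContents(ArchiveName, SourceDir, zcStored);
end;

// Restore the whole archive into a target folder.
procedure RestoreBackup(const ArchiveName, TargetDir: string);
begin
  TZipFile.ExtractZipFile(ArchiveName, TargetDir);
end;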
