The goal of this project is to:
- List all duplicate files under a given parent directory
- Delete all duplicate files under a given parent directory
- Integrate a CLI interface that lets you:
  - execute commands (duplicate search, selection, deletion); one possible shape is sketched after this list
  - visualize the directory tree (pruned to the paths containing duplicates)
  - select which duplicates to delete
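For illustration, here is one possible shape of that CLI, sketched with Python's standard `argparse`. The program name (`dedup`) and the command names (`scan`, `tree`, `delete`) are assumptions for this sketch, not the project's confirmed interface:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="dedup",  # hypothetical program name
        description="Find and remove duplicate files under a parent directory",
    )
    sub = parser.add_subparsers(dest="command", required=True)

    scan = sub.add_parser("scan", help="list duplicate files")
    scan.add_argument("directory", help="parent directory to scan")

    tree = sub.add_parser("tree", help="show the directory tree, pruned to duplicates")
    tree.add_argument("directory", help="parent directory to scan")

    delete = sub.add_parser("delete", help="select and delete duplicates")
    delete.add_argument("directory", help="parent directory to scan")

    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)  # dispatch to the matching command here
```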
To fully understand how this project works, we first need to understand the following:
- A hash function is a mathematical algorithm that takes an input (or 'message') and produces a fixed-size string of characters, typically called a hash code.
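As a minimal illustration (assuming Python's standard `hashlib` with SHA-256; the exact algorithm is an implementation choice), two different messages give fixed-length digests that bear no resemblance to each other:

```python
import hashlib

# Both digests are exactly 64 hex characters (256 bits),
# regardless of the input length.
print(hashlib.sha256(b"hello world").hexdigest())
print(hashlib.sha256(b"hello world!").hexdigest())
```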
- When it comes to files, the contents of the file are the input to the hash function. The hash function then produces a fixed-length hash value (often represented as a hexadecimal number). Even a small change in the file content should result in a substantially different hash value.
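A sketch of how a file could be hashed, reading in chunks so large files never need to fit in memory; `file_hash` is a hypothetical helper name, and SHA-256 is one reasonable algorithm choice:

```python
import hashlib
from pathlib import Path

def file_hash(path: Path, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # iter() keeps calling f.read(chunk_size) until it returns b"".
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```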
- By comparing the hash values of files, you can quickly determine whether the file contents are identical. If two files have the same hash value, they are highly likely to be duplicates. This is because, for practical purposes, it's extremely rare for different files to produce the same hash.
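Using the `file_hash` sketch above (the file names here are placeholders), duplicate detection reduces to an equality check on the digests:

```python
if file_hash(Path("a.txt")) == file_hash(Path("b.txt")):
    print("a.txt and b.txt are almost certainly duplicates")
```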
- We browse the parent directory and save its entire tree in a data structure (an n-ary tree).
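One way that n-ary tree could look in Python; `DirNode` and `build_tree` are illustrative names for this sketch, not the project's actual types:

```python
from pathlib import Path

class DirNode:
    """One tree node: a directory, its files, and its sub-directories."""
    def __init__(self, path: Path):
        self.path = path
        self.files: list[Path] = []
        self.children: list["DirNode"] = []

def build_tree(path: Path) -> DirNode:
    node = DirNode(path)
    for entry in path.iterdir():
        if entry.is_dir():
            # Each sub-folder becomes a child node, built recursively.
            node.children.append(build_tree(entry))
        elif entry.is_file():
            node.files.append(entry)
    return node
```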
- Next, for each node of the n-ary tree (corresponding to a sub-folder), we browse its files and store each file's hash in another data structure (a dictionary), where key = file_hash and value = array of files that share that hash.
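A sketch of that grouping step, reusing the hypothetical `file_hash`, `DirNode`, and `build_tree` from the sketches above:

```python
from collections import defaultdict

def collect_hashes(node: DirNode, index: dict[str, list[Path]]) -> None:
    """Walk the tree and group every file path under its content hash."""
    for file in node.files:
        index[file_hash(file)].append(file)
    for child in node.children:
        collect_hashes(child, index)

index: dict[str, list[Path]] = defaultdict(list)
collect_hashes(build_tree(Path("/path/to/parent")), index)  # placeholder path
```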
- Finally, we browse the dictionary to show or delete all duplicate files in the tree.
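The final step might look like this, again reusing the hypothetical `index` from the previous sketch: any hash that maps to more than one path is a set of duplicates, and deletion keeps the first copy.

```python
def report_duplicates(index: dict[str, list[Path]], delete: bool = False) -> None:
    for digest, paths in index.items():
        if len(paths) < 2:
            continue  # a unique file, not a duplicate
        print(f"{digest}:")
        for p in paths:
            print(f"  {p}")
        if delete:
            # Keep the first copy, remove the rest.
            for extra in paths[1:]:
                extra.unlink()
```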