Find duplicate files in given directory trees, where "duplicate" is defined as
having the same MD5 hash digest.
It is roughly equivalent to the following one-liner:
```sh
find . -type f -exec md5sum '{}' \; | awk '{digest = $1; path = $2; paths[digest, ++count[digest]] = path} END {for (digest in count) {n = count[digest]; if (n > 1) {print(digest, n); for (i=1; i<=n; i++) {print " ", paths[digest, i]} } } }'
```
which, when indented, looks like:
```sh
find . -type f -exec md5sum '{}' \; \
| awk '
    {
        digest = $1
        path   = $2
        paths[digest, ++count[digest]] = path
    }

    END {
        for (digest in count) {
            n = count[digest]
            if (n > 1) {
                print(digest, n)
                for (i=1; i<=n; i++) {
                    print " ", paths[digest, i]
                }
            }
        }
    }'
```
and works well enough, until you start getting weird file paths that are more
of a pain to handle quoting for than rewriting this thing in OCaml :)
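To make the quoting pain concrete, here is a hedged, minimal demonstration (assuming GNU `md5sum` and a scratch directory from `mktemp`): a filename containing whitespace gets silently truncated by `awk`'s default field splitting, so `$2` is no longer the full path.

```shell
# Hedged demo of the quoting problem: awk splits md5sum's output on
# whitespace, so a path containing spaces is truncated at the first space.
dir=$(mktemp -d)
printf 'hello' > "$dir/a file with spaces"
find "$dir" -type f -exec md5sum '{}' \; | awk '{print $2}'
# Prints only the path up to the first space; the rest is lost.
```

Working around this in shell (NUL separators, careful re-quoting) is possible, but quickly becomes messier than a small program that handles paths as opaque strings.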
After building, run `dups` on the current directory tree:
```
Finished, 0 targets (0 cached) in 00:00:00.
Finished, 5 targets (0 cached) in 00:00:00.
df4235f3da793b798095047810153c6b 2
d41d8cd98f00b204e9800998ecf8427e 2
087809b180957ce812a39a5163554502 2
    "./_build/dups.native"
Processed 102 files in 0.025761 seconds.
```
Note that the report line (`Processed 102 files in 0.025761 seconds.`) is
written to `stderr`, so that `stdout` is safely processable by other tools.
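For instance, a hedged usage sketch, assuming the output format shown above (digest lines followed by indented, quoted paths): a small `sed` filter can keep only the duplicate paths, stripping the indentation and quotes. Sample input stands in for the tool's output here.

```shell
# Keep only the quoted path lines from dups-style output, stripping the
# leading indentation and the surrounding double quotes.
# (printf simulates `./dups .` output for a self-contained demo.)
printf '%s\n' \
    'd41d8cd98f00b204e9800998ecf8427e 2' \
    '    "./foo"' \
    '    "./bar"' \
| sed -n 's/^ *"\(.*\)"$/\1/p'
# Prints:
# ./foo
# ./bar
```

With the real tool this would just be `./dups . | sed -n 's/^ *"\(.*\)"$/\1/p'`, since the timing report on `stderr` never reaches the pipe.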