The best tool I’ve found for deleting duplicate files on Windows is sfk (a.k.a. “Swiss File Knife”).
Installation
1. Download Swiss File Knife from its homepage
2. Extract the archive and copy sfk.exe to C:\Windows (or any other directory on your PATH)
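3. Verify the setup: open a new Command Prompt and run

sfk

with no arguments; if the built-in help screen appears, the tool is on your PATH.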
Usage
sfk dupfind -dir anydir [-file .ext1 .ext2]
find and list duplicate files.
options
-diffdirs list only duplicates residing in different
root directories. this option requires that
you specify at least two dirs after -dir.
-listorg list all original filenames,
leave out any duplicate filenames.
-minsize=n compare only files with size >= n.
examples for n are:
5m = 5000000 bytes (5 mbytes)
100k = 100000 bytes (100 kbytes)
1M = 1048576 bytes (2^20 bytes)
9000b = 9000 bytes
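for example, to compare only files of roughly 1 MB and up (1m = 1000000 bytes per the table above) in a folder named photos, a hypothetical directory name:

sfk dupfind -minsize=1m -dir photos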
command chaining
– by default, this command passes the names
of found duplicate files to the next command.
– option -listorg does the opposite: it passes
only original filenames, but no duplicates,
to the next chain command.
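as a sketch of this default chaining behavior, the following passes every duplicate (but none of the originals) to a copy command; d:\dupes is a hypothetical target directory:

sfk dupfind -dir docs +run "copy $file d:\dupes"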
NOTE:
if identical files are found, the decision about what is listed
as “original” or “duplicate” is currently based on the
order in the file system: the file found first is listed as
the “original”. check carefully that this is what you expect
before cleaning up any duplicates.
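a safe habit is therefore to run dupfind with no chained command first and review the list, e.g. by redirecting the output to a text file:

sfk dupfind -dir docs > dupes.txt

only append +del once you have checked that the listed files are really the ones you want to lose.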
examples
sfk dupfind .
find all duplicates within the current directory tree.
sfk dupfind -dir docs1 docs2 docs3
find all dups across and within the given directories.
sfk dupfind -diffdirs -dir docs1 docs2 docs3
find dups between docs1/docs2, docs2/docs3 and docs1/docs3,
but do NOT list dups within the same root directory.
sfk dupfind docs .doc +del
find all duplicate .doc files, within the docs
directory tree, and delete them.
sfk dupfind -listorg docs .doc +run "copy $file docs2"
copy all .doc files from docs to docs2,
but leave out any duplicate files.
sfk dupfind -dir pic1 -dir pic2 -dir pic3
find duplicates across three different directory trees.
specifying multiple -dirs is also a way of influencing
the result order; if a file is found both in pic1 and pic3,
the file from pic1 will be listed as original, the other one
as the duplicate.
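combining this ordering rule with +del is one way to keep one tree authoritative: assuming master holds the copies you want to keep and incoming the new arrivals (both names hypothetical), the command below deletes only the duplicates found under incoming, because the files from master, listed first, count as the originals:

sfk dupfind -dir master -dir incoming +del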
sfk sel -dir pic1 pic2 pic3 -file .jpg +dupfind -minsize=1m
similar to the above, this example uses command chaining:
list all .jpg files from the pic directories, then pass
this to the dupfind command, also filtering by size.
Actually deleting the dupes
From the current directory:
cd "c:\some\path"
sfk dupfind . +del!
Another alternative is “DuplicateFilesDeleter”, a separate tool for finding and removing duplicate files.