Over the last year or so, I have been slowly scanning my father’s large collection of photographs. He was a bit of a stickler for collecting his negatives in an orderly fashion so I am able to work my way through numbered folders of negatives, each of which generally contains the content of a single roll of film.
The scanning process uses my Epson Perfection V370 Photo flatbed scanner. I found various online references citing a recommended resolution of 4000 dpi for archival scanning of 35mm negatives, so I have been scanning them to 4800 dpi TIFF files. However, the software used for this has been problematic at times.
I began with the Epson Scan software provided with the scanner. It does a reasonable job of detecting each frame on a strip of up to six negatives and saving them out to separate files. However, it has two weaknesses. First, it sometimes fails to detect frames properly (especially underexposed ones), and correcting these manually is a big job because you have to run an entire scanning pass for each frame that needs fixing. Second, it’s fiddly in operation: I constantly had to turn off “auto” settings that threatened to “process” the files for me. I want the files straight from the scanner; I have other software to process the photos for further use as required.
After hitting major frame-detection issues with some half-frame films, I set about trying to use Hamrick Software’s VueScan, which I already owned. This software gives a lot of control over both the scanner and the output. Its multi-frame handling, however, is obscure and complex and, I find, impossible to use.
I began to despair that neither piece of software was going to make the job as easy as I wanted. I have over 200 films to scan, so every inefficiency is magnified substantially. After a bit of a hiatus, I got back into the task and settled for scanning each entire strip in VueScan. This means I can use the single-crop mode, which is quick and simple. Using only a single scan pass for an entire strip also saves considerable time over Epson Scan’s multi-frame approach.
I was originally going to leave the strips intact and only separate out individual frames when I actually wanted to process one, but I found previewing the 220 MB image files to be a little cumbersome and some software, including Apple’s Preview app, just couldn’t cope with the size. It struck me that Affinity Photo’s Export Persona might offer a quick solution. It did.
Loading up a full strip into Photo takes a few seconds, then I switch to Export Persona and set the default output format to 16-bit B&W TIFF — the same format I am starting with from the scan. Once that’s done, I simply drag out the rectangles to mark each frame, and as each is created I add the film and frame number as a name, e.g. 110_24. Once all the frames are defined and named, a single click on the export button writes out the individual TIFF files. It’s an efficient process. Far more efficient than the same frame-setting in either Epson Scan or VueScan.
Once I have my individual frames, the strip scans are deleted and the frames end up in Adobe Lightroom, and across several backups, local and cloud. It was while beginning to upload to a new cloud backup that I noticed all of the earlier films, scanned with Epson Scan, were substantially larger than those created recently with the new process. I’m talking about 55 MB instead of 35 MB. That’s quite a difference in one file and enormous when considering the 3000+ frames already scanned!
A bit of detective work revealed the early files had no compression applied while the newer ones did. This is not something I ever thought to set, and I’m not sure I could. Certainly, Affinity Photo doesn’t seem to offer control of this. With the huge gains to be had in both local and cloud storage, I set about finding how to compress the older files. No apps I have seem to offer this capability but I remembered I had installed ImageMagick some time ago and began looking for ways this might help me.
While looking for how to invoke ImageMagick, I recalled that I had installed it using MacPorts. In my search for the command to run (which the online documentation inexplicably gets wrong) I stumbled across a bunch of commands with names beginning with “tiff”!
A vague recollection tells me that when I installed MacPorts I went through all of the available ports looking for stuff I thought might be interesting or useful, and the tiff port was one of those.
Looking through the commands, I found tiffinfo was able to give me some more insight into the compression used in the smaller files. The tiffcp command had the required chops to read one TIFF file and output another with specific attributes — including compression.
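By way of illustration, the tiffinfo check looked something like this (the file name is an example; an uncompressed file reports a Compression Scheme of None):

```shell
# Show how a TIFF is compressed (110_24.tif is an example file name)
tiffinfo 110_24.tif | grep -i compression
```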
Some research and experimentation later yielded the following command, which best compresses an existing, uncompressed TIFF file (the 2 after zip: selects the horizontal-differencing predictor, which usually improves compression on photographic images).
tiffcp -c zip:2 original.tif new.tif
This turned my 55 MB TIFF files into 35 MB TIFF files, and they were still happily viewable with QuickLook and Affinity Photo. Success! The next step was working out how to apply this compression to the 3000 or so files already scanned.
Some further experimenting yielded the following multi-command line that I ran in each of the 100 folders that needed attention.
for f in *.tif; do echo -n "."; tiffcp -c zip:2 "$f" "$f"f; rm "$f"; mv "$f"f "$f"; done; echo ""
Some of that is tidying and dressing up. Writing the target as “$f”f meant 110_24.tif was converted to 110_24.tiff, which let me remove the original and rename the new file back to a .tif extension, preserving the original filename (for Lightroom). The echo simply provided feedback that the loop was indeed cycling through all of the files in the directory.
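Incidentally, a slightly safer variant of the same loop chains the steps with &&, so that if tiffcp ever fails the original file is left untouched (a sketch, keeping the same zip:2 setting):

```shell
for f in *.tif; do
  printf "."
  # only replace the original if tiffcp succeeded
  tiffcp -c zip:2 "$f" "$f"f && mv -f "$f"f "$f"
done
echo ""
```

Here mv -f stands in for the separate rm and mv steps, and a failed conversion simply leaves the original .tif in place.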
I forgot to check the space before I embarked on this exercise across all of the folders, but I’ve saved at least 38 GB of space.
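For anyone attempting the same, the before-and-after totals are easy to capture with du (the path here is a placeholder for wherever the scan folders live):

```shell
# Total, human-readable size of the whole scan library
du -sh "$HOME/Scans"
```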
Here’s one of the more interesting photos to have come out of the exercise so far.