TIFF scanning and compression

Over the last year or so, I have been slowly scanning my father’s large collection of photographs. He was a bit of a stickler for collecting his negatives in an orderly fashion so I am able to work my way through numbered folders of negatives, each of which generally contains the content of a single roll of film.

The scanning process uses my Epson Perfection V370 Photo flatbed scanner. I found various online references citing a recommended resolution of 4000 dpi for archival scanning of 35mm negatives, so I have been scanning them to 4800 dpi TIFF files. However, the software used for this has been problematic at times.

I first began with the Epson Scan software provided with the scanner. It does a reasonable job of detecting each frame on a strip of up to six negatives and saving them out to separate files. However, it has two weaknesses. First, it sometimes fails to properly detect frames (especially underexposed ones), and correcting these manually is a big job because you need to go through an entire scanning process for each frame that requires correcting. Secondly, it’s fiddly in operation: I constantly had to turn off “auto” settings that threatened to “process” the files for me. I want the files straight from the scanner; I have other software to process the photos for further use as required.

After hitting major frame-detection issues with some half-frame films, I set about trying to use Hamrick Software’s VueScan, which I already owned. This software gives a lot of control over both the scanner and the output. Its multi-frame handling, however, is obscure and complex and, I find, impossible to use.

I began to despair that neither piece of software was going to make the job as easy as I wanted. I have over 200 films to scan, so every inefficiency is magnified substantially. After a bit of a hiatus, I decided to get back into the task and settled for scanning each entire strip in VueScan. This means I can use the single crop mode, which is quick and simple. Plus, using only a single scan pass for an entire strip actually saves considerable time over Epson Scan’s multi-frame approach.

I was originally going to leave the strips intact and only separate out individual frames when I actually wanted to process one, but I found previewing the 220 MB image files to be a little cumbersome and some software, including Apple’s Preview app, just couldn’t cope with the size. It struck me that Affinity Photo’s Export Persona might offer a quick solution. It did.

Loading up a full strip into Photo takes a few seconds, then I switch to Export Persona and set the default output format to 16-bit B&W TIFF — the same format I am starting with from the scan. Once that’s done, I simply drag out the rectangles to mark each frame, and as each is created I add the film and frame number as a name, e.g. 110_24. Once all the frames are defined and named, a single click on the export button writes out the individual TIFF files. It’s an efficient process. Far more efficient than the same frame-setting in either Epson Scan or VueScan.

Once I have my individual frames, the strip scans are deleted and the frames end up in Adobe Lightroom, and across several backups, local and cloud. It was while beginning to upload to a new cloud backup that I noticed all of the earlier films, scanned with Epson Scan, were substantially larger than those created recently with the new process. I’m talking about 55 MB instead of 35 MB. That’s quite a difference in one file and enormous when considering the 3000+ frames already scanned!

A bit of detective work revealed the early files had no compression applied while the newer ones did. This is not something I ever thought to set, and I’m not sure I could. Certainly, Affinity Photo doesn’t seem to offer control of this. With the huge gains to be had in both local and cloud storage, I set about finding how to compress the older files. No apps I have seem to offer this capability but I remembered I had installed ImageMagick some time ago and began looking for ways this might help me.

While looking for how to invoke ImageMagick, I recalled I had installed it using MacPorts. In my search for the command to run (which the online documentation inexplicably has wrong) I stumbled across a bunch of commands with names beginning “tiff”!

A vague recollection tells me that when I installed MacPorts I went through all of the available ports looking for stuff I thought might be interesting or useful, and the tiff port was one of those.

Looking through the commands, I found tiffinfo was able to give me some more insight into the compression used in the smaller files. The tiffcp command had the required chops to read one tiff file and output another with specific attributes — including compression.

Some research and experimentation later yielded the following command to best compress an existing, uncompressed TIFF file.

tiffcp -c zip:2 original.tif new.tif

This turned my 55 MB TIFF files into 35 MB TIFF files and they were still happily viewable with QuickLook and Affinity Photo. Success! So the next step was how to reasonably apply this compression to 3000 or so files.

Some further experimenting yielded the following multi-command line that I ran in each of the 100 folders that needed attention.

for f in *.tif; do echo -n "."; tiffcp -c zip:2 "$f" "$f"f; rm "$f"; mv "$f"f "$f"; done; echo ""

Some of that is tidying and dressing up. The “$f” source file and “$f”f target file meant 110_24.tif would be converted to 110_24.tiff which then enabled me to remove the original and rename it back to a .tif extension to maintain the filename (for Lightroom). The echo simply provided feedback that it was indeed cycling through all of the files in the directory.
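One caveat with that one-liner: if tiffcp ever failed on a file, the rm would still run and destroy the original. Chaining the steps with && avoids that. The sketch below shows the same convert-and-swap flow, with cp standing in for tiffcp -c zip:2 purely so it can be run (and tested) without libtiff installed:

```shell
# Demonstrate the convert-then-swap flow from the one-liner above.
# 'cp' is a stand-in for 'tiffcp -c zip:2' so this runs without libtiff.
demo=/tmp/tiffdemo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"
touch 110_24.tif 110_25.tif   # dummy stand-ins for scanned frames

for f in *.tif; do
  printf "."
  # && means a failed conversion never deletes the original file
  cp "$f" "$f"f && rm "$f" && mv "$f"f "$f"
done; echo ""

ls "$demo"   # the .tif names survive intact
```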

I forgot to check the space before I embarked on this exercise across all of the folders, but I’ve saved at least 38 GB of space.

Here’s one of the more interesting photos to have come out of the exercise so far.


Enums for hard-coded configuration

While writing my new Stretch Timer app, I needed to set lower and upper limits for three key numbers. I had them hardcoded as numeric literals but when I wanted to change one of them, it occurred to me there must be a better way.

After doing a bit of research on enums, I found a construct that not only factors the numbers out to a single place, but also makes their use a lot more readable.

enum pickerRange {
    static let Stretch = (min: 5, max: 60)
    static let Rest = (min: 5, max: 60)
    static let Repeat = (min: 3, max: 15)
}

It is fairly plain from the enum declaration what is being described, and it is equally evident what’s going on when the values are referenced.

return pickerRange.Stretch.max - pickerRange.Stretch.min + 1


When we observe the world around us, we frequently compare what we see with our previous experiences. “That’s a lot of rain” is not an absolute measure of the volume of rain without considering the observer – do they come from Thailand or southern California? So it should come as no surprise that the technology we are already familiar with has a significant bearing on our assessment of new gadgets. It recently occurred to me that my own judgement has been affected in this way, and I think it’s a common affliction.

My revelation came from my recent phone upgrade. In late 2014 I sold off my beloved iPhone 5 and “went big” with the iPhone 6 Plus. I remember the first week or two of using the giant new phone. I had pain in my hands and wrists until I got used to dealing with the size of the screen. Then, for three years, it was simply “my phone” and gave me no real problems except when it came to pocketability.

When I decided to upgrade this year, I opted for the iPhone 8 over the 8 Plus. The dual camera system of the Plus is not a real drawcard for me (I have a real camera) so that slight issue of pocketability led me to the “regular” sized phone this time around. It didn’t hurt that it was $150 cheaper, either. I ran my 6 Plus in “zoomed” mode for the last few weeks before the upgrade to get used to the amount of ‘stuff’ that would fit on the screen and now I’m perfectly happy with my new iPhone 8. It seems normal to me in every way now.

The revelation comes from observing the now reasonably high number of Plus-sized iPhones I see in public. They look huge to me now! Time and again my initial reaction has been disbelief that people are carrying around such enormous devices, so much bigger than any iPhone; bigger than the device I only just stopped carrying myself. But careful observation confirms that these are indeed Plus-sized iPhones.

The effect was clarified for me when I observed my son’s girlfriend using her iPhone 5S. It just looked like a “smaller” iPhone. Yet I remember well when I was using my 6 Plus, that troubleshooting issues on my wife’s 5C felt comical; I felt like I was working with an iPod Nano-sized device. It seems it does not take me long to get used to whatever sized phone I use every day and consider all other devices relative to it.

At the time of launch of the 6 Plus, there were a lot of people online claiming the phone was simply too big for “ordinary people.” I still know people who claim this. And yet, from my observations out in public, the Plus-sized phones are predominant among those who I have to assume are “normal people.” The phones haven’t shrunk and I don’t think pockets have become bigger. Certainly, hands haven’t. I believe what has changed is more people have experience of the larger iPhones now that they’ve been out for several years.

There have been other aspects of iPhones for which I have seen this effect in my online reading. Touch ID was going to be pathetic, or at least an unknown quantity, before it was released because all existing fingerprint readers were rubbish. Face ID seems to have followed the same pattern, although some smart commentators had spotted the Touch ID parallel.

Then there’s the much vaunted iPhone camera versus a real camera. It still astounds me the number of times I read that iPhones have amazing cameras. They don’t. Honestly, they produce very average images compared to most dedicated cameras. Your photos might look great on the phone but put them up full screen on any retina Mac and they will show their flaws readily. Photos from my $2500-ish camera setup are night and day better than anything an iPhone can produce. Even my “hundred dollar camera” can produce shots roughly equivalent at wide angle and they are markedly superior when zoomed due to its optical zoom capability. There is no denying the convenience of having a moderately capable camera on your phone – because it’s always in your pocket or purse – but they’re only good cameras for a phone.

I could add examples of software and even data plans but you get my point. Some people believe what they have is fine and anything more or different is unnecessary or overkill. Sometimes it is. If what you have does the job, then good for you. But progress is made by branching out.

I use a Mac because I wondered whether it was better than Windows. I started with a desktop (iMac) then upgraded to a laptop because I thought that would be better. Now I’ve gone back to an iMac and I know why it’s the better choice for me. I’ve even added a 12″ MacBook to my repertoire because I know what roles a laptop can fill for me.

I use Adobe Photoshop Lightroom because I wondered if it could do more than just collect my photos in folders. I use Affinity Photo because I wondered if it could be as good as Photoshop. I used Photoshop because I wondered if it was more powerful than PaintShop Pro.

I have had the luxury of funds to make all these changes and seek out different solutions over the years. Not everyone is so lucky. Which is why trial software is so important and, if you can, get your hands on different types of hardware to actually try them for some real tasks. Most importantly, think objectively about technology.

Banner image by leg0fenris.

Using regular expressions in QShell

It’s pretty rare that I publish anything about my work, but given the difficulty I had in figuring out this particular problem with online research (in the end, I only found the answer by experimenting) I thought it would be useful to others if I published the solution.

One of the first problems I always encounter when searching industry standard technologies as they apply to the IBM i platform is that the name of the platform is incredibly hard to include in search terms, so I’ll helpfully mention here that this also applies if your search terms are AS/400, AS400, iSeries, System i, or i5/OS. Heck, I’ll even throw in a gratuitous eServer reference!

With that out of the way, the problem to be solved: using QShell to apply Unix text manipulation commands to a stream file, making use of the power of regular expressions. In my case I had downloaded a CSV file from a public site and needed to take care of some formatting prior to using the CPYFRMIMPF command to load it into a database file. I’d used this method before, but hit an additional snag this time around.

There were three problems to solve with the file, all of which I knew I could attack with sed. If you’re not familiar with sed, here’s a brief introduction to how I use it.

sed -e 's/find this/replace with this/'

You can probably figure out from the example what it will do. It’s a simple search and replace (the initial ‘s’ means ‘substitute’). As with most Unix commands, in this default form it will take standard input, filter it, and write the result to standard output. Later on I’ll hook those up to the files I need.
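As a quick, runnable illustration (any POSIX sed will do, not just QShell’s):

```shell
# Substitute the first occurrence of a phrase on each input line
echo 'please find this here' | sed -e 's/find this/replace with this/'
# prints: please replace with this here
```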

The next thing to know for this example is how regular expressions work. That’s way too deep a subject for me to cover here. If you need to learn this, I recommend having a read of parts 17 and 18 of Bart Busschots’ Taming the Terminal series. I’ll be using some basic features plus back references to capture groups. They will be deployed within the sed command.

So to the first problem. The first two fields in each record were actually numeric but provided as quoted values, so CPYFRMIMPF would treat them as strings and complain about the target fields being numeric. Also, there was a requirement to concatenate these two values (ironically, as if they were strings) as a new value at the end of each record. The goal, then, to strip the quotes and to append a new value on the end of the record. Back references to the rescue.

sed -e 's/^"([0-9]{1,3})","([0-9]{1,4})"(.*)$/\1,\2\3,\1\2/'

That looks pretty complex but if you break it down, it’s simply looking for two quoted groups of digits. (I specified maximum lengths, but an open-ended match like [0-9][0-9]* would have done the job too – sed’s basic regular expressions don’t offer lazy matching.)

But here’s the key reason I’m writing this article – as written above, the command will not work in QShell. It’s relatively common to need to escape capture parentheses thus: \(  … \) and this is indeed required in QShell. But even then it won’t work. The trick that took me a good hour of experimenting to discover is that QShell also requires the cardinality braces { } to be escaped! I’ve not seen this on any other platform. Here’s what actually works.

sed -e 's/^"\([0-9]\{1,3\}\)","\([0-9]\{1,4\}\)"\(.*\)$/\1,\2\3,\1\2/'

It looks a complete mess with all the back-slashes required to escape both parentheses and braces. At least the square brackets do not need the same treatment!
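The escaped form is plain POSIX basic regular expression syntax, so it can be sanity-checked with any sed before going near QShell. The sample record here is invented:

```shell
# Strip the quotes from the two leading numeric fields and append
# their concatenation to the end of the record
echo '"12","3456",Smith,John' |
  sed -e 's/^"\([0-9]\{1,3\}\)","\([0-9]\{1,4\}\)"\(.*\)$/\1,\2\3,\1\2/'
# prints: 12,3456,Smith,John,123456
```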

With my new snag out of the way (I’d not previously used cardinality), let’s move on to the next problem in the file – the dates are presented as “2017-07-15” and we need them as an 8 digit number. So we need to strip the quotes and the hyphens.

sed -e 's/"\([0-9]\{4\}\)-\([0-9]\{2\}\)-\([0-9]\{2\}\)"/\1\2\3/g'

This is a pretty simple example, but adds the ‘g’ modifier on the end to enable it to replace all matching dates in the line, not just the first.
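A made-up line with two dates confirms the ‘g’ flag catches both:

```shell
# Convert every quoted ISO date on the line to an 8-digit number
echo 'id,"2017-07-15",x,"2016-01-02"' |
  sed -e 's/"\([0-9]\{4\}\)-\([0-9]\{2\}\)-\([0-9]\{2\}\)"/\1\2\3/g'
# prints: id,20170715,x,20160102
```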

Finally, there were a lot of empty fields in the file and because every field was quoted and some of these needed to be treated as numeric, I decided to remove all empty quotes. Which is to say, find any adjacent pair of double-quote characters and remove them. Again, this applies as many times as it can on the line.

sed -e 's/,""/,/g'

That was a really simple one. Once I had these working on the QShell command line, it was time to wrap them up for execution in my CL program, at which point I added the input file on the front by issuing a cat command and piping it into the multi-action sed command, and finally redirecting that output to my result file. Note the doubling up of the single quotes in the sed command to allow for the overall quoting of the CMD( ) parameter string.

STRQSH CMD('cat /tmp/inputfile.csv | +
sed -e ''s/^"\([0-9]\{1,3\}\)","\([0-9]\{1,4\}\)"\(.*\)$/\1,\2\3,\1\2/'' +
-e ''s/"\([0-9]\{4\}\)-\([0-9]\{2\}\)-\([0-9]\{2\}\)"/\1\2\3/g'' +
-e ''s/,""/,/g'' > /tmp/outputfile.csv')

And there you have it! Simple!

Cleaning up macOS Launchpad

It is a much maligned feature of macOS, but there are times when Launchpad is useful – such as when I want to scan my apps for recent additions that warrant a review. Sure, I can just open the Applications folder, but Launchpad is a much easier presentation.

But it does have its issues: for one, the inability to delete non-Mac App Store apps. This is a quick post on a method I found that allows you to do just that.

When searching for a solution, there were numerous older posts online that talked about issuing database deletes but many noted this didn’t work in more recent releases of the OS and indeed I found this to be true of Sierra.

Then I found this post at OS X Daily which offered what appeared at first to be a simple rearrangement of icons rather than removal of specific ones. However, it turns out it does exactly what I needed. If you delete an app from your Mac and then follow this tip, the icon will be removed from Launchpad.

So, here’s the sequence of steps required.

  1. Delete the app. If you have AppZapper or AppDelete, use one of those. If you have Hazel, simply drag the app to the trash and let Hazel clean up the extras*.
  2. Empty the trash. The one time I didn’t do this, weird things happened.
  3. Execute the below command in Terminal. If your browser wraps the line, just copy it as one and paste it into Terminal and it will be fine.

     defaults write com.apple.dock ResetLaunchPad -bool true; killall Dock

That’s it. You may notice when you next open Launchpad that some of the icons are absent momentarily, but they will appear before you have time to worry.

A side effect is that the first page will have the default layout of Apple apps and subsequent pages will have all the extra apps previously present (less those deleted) in alphabetical order.

The one time I didn’t empty the trash, most of the icons were drawn as dotted rectangles, hence my step 2.

*Hazel has a feature called App Sweep. I’m not sure if this is turned on by default. You can find the setting in the Trash tab of the Hazel preference pane.

Combine and conquer

A little over a week ago I wrote about my quest for software to ‘run’ my podcast production for The Sitting Duck Podcast. Specifically, some form of soundboard software that would work well for multi-track recording into Logic Pro X (hereafter referred to simply as Logic).

Since then I’ve discovered two new pieces of software and a new way to approach the multi-track solution.

As much as Apple’s Mac App Store is derided by the tech press, I have found some very useful software in it over the years and I’m still in love with the simplicity of installation and re-installation at the click of a button and maybe entry of a password. The downsides are the poor search results and the general unavailability of a ‘try before you buy’ option. (Some vendors make their apps free and unlock full functionality with an in-app purchase.)

So last night I set about searching for a ‘soundboard’ application in the App Store. I was surprised that only a dozen results appeared. I guess this is an indication of the smaller number of apps in the Mac App Store as compared to the iOS one.

Unsurprisingly, Ambrosia Software’s Soundboard appeared in the top spot. Also unsurprisingly, many of the others are clearly not what I’m after – Burp And Fart Piano, anyone? But there are two on the list that warrant further investigation.

SoundBoard FX

This is a fairly straightforward soundboard implementation. There is a useful number of tweaks that can be made to each sound, and the run-time interface is nice and clean. It also has a pop-out clock and a countdown timer (on the current clip). I was impressed at first glance.

However, it does not have the ability to route the audio differently for each clip.


MiX16 PRO

This app looks a lot more ‘professional,’ as its name suggests. Like QLab, it supports more than just audio, allowing for video and images as well. These won’t be useful for my situation but are, I think, an indication of the level of work that has gone into the app.

Sticking with the audio capabilities, there are a decent number of tweaks that can be made to each clip, including routing each to a specific sound device. I emailed the developer asking whether it would be possible to specify the channels on the target device and he quickly responded saying no, not yet.

When I saw his reply, something clicked in my brain and I suddenly thought of a possible solution to the lack of channel mapping. Only Sound Byte and QLab had this ability (the latter at a significant cost!) so it was a bit of a killer to the cause if I had to have it. This morning I played with my idea and I can now say MiX16 PRO is my front-runner.

Aggregate devices

I had wondered whether there was a way of playing songs from the command line that would give me enough control. I soon found the afplay command and also found that there did not seem to be an easy way to set the output device for this. However, I kept coming across all manner of posts on blogs and forums that mentioned aggregate devices.

I know what these are because I have played with them before. macOS’s built-in app, Audio MIDI Setup, has the ability to aggregate multiple physical sound devices into a single virtual one. Premier audio app developer Rogue Amoeba has built on this capability with their excellent Loopback app which, you may recall, I listed in my toolbox.

The thought that struck me was this: Could I create a whole bunch of simple, two-channel sound devices, to which I could direct the tracks – one to each – and then aggregate those into a mammoth single device which Logic could use as its source?

The short answer – yes!


I opened Loopback and began to set up the devices I thought I would need: eight simple ‘pass-through’ devices, though I could have used many more. These are very simple to create as they use all the default settings for a new device – simply click the ⊕ button to add a device, give it a name, and you’re done. I called mine “Tracker01” and so on to keep the names short, “Tracker” referring to the device’s role in accepting a track to feed the multi-track device.

Next, I created a device I called “Master” and to this, I added all of my “Tracker” devices plus my microphone. By default, this would mix all of the incoming stereo signals into a single stereo output. The magic is in setting up manual channel mappings.




The “Master” device now has all of the audio needed for Logic in a single device (Logic requires a single device). I should note here that the Loopback interface for mapping the channels is a little finicky. The trick is to select the target channel in the bottom selection by clicking its number, then select the source device in the top section by clicking its name, and then you can drag a channel token from top to bottom. If you don’t follow these steps the channel tokens either refuse to be dragged or refuse to land on their target. Weird.


Having created my monster, I needed to test it. For this, I chose to use Sound Byte. It has full device and channel mapping and it works in trial mode for a short time – enough to get my test done.

I edited the sample tracks I already had in Sound Byte for my previous testing by setting each of the four tracks to output to the “Tracker01” through “Tracker04” devices and the channels in each case to 1 and 2 (the defaults). The setting of the channels this way mimics what the non-channel-mapping software will do. I then opened up my Logic test project and modified it to use “Master” as its input device. The four-song tracks kept their channel mappings (3&4, 5&6, 7&8, 9&10), which was perfect, plus I added a microphone track using channels 1&2.

With all tracks set to record and to monitor, I fired up two songs simultaneously (not very pleasant) and the Mac’s speakers provided some low-level input to the microphone which I had turned on.



You can see in the image above that the meters in Logic show a signal coming from the relevant tracks (in green) in Sound Byte plus the low level into the microphone.

So now I have a solution that enables me to use any software that can at least route clips to a different device, even if it cannot route to specific channels.

I’m going to think on the whole situation a little while longer, but with this in place and the high quality, reasonable price, and responsive developer for MiX16 PRO, I think that will be the way I go.