Above and beyond

UPDATE: I’ve now added a follow-up post covering two new apps and a solution to channel routing.


There was a time when I used to record and publish a podcast every Thursday. It only really lasted a few years before I lost the passion, but it does still exist and I release new episodes from time to time when the bug bites me.

I’ve been skirting around the edges of putting a new episode out for the last couple of weeks but one thing is holding me back – software. The podcast was born on a Windows PC in the wonderfully simple CastBlaster software. After switching to a Mac, I used GarageBand for a time before discovering what would almost be the perfect tool for me – Ubercaster.

Unfortunately, Ubercaster is no more, so this week I set out to see if I could come close to the Ubercaster experience with modern Mac software.

Goals

 

Before beginning my search, I stopped to consider what features made Ubercaster the uber solution for The Sitting Duck Podcast. You can get a pretty decent feel for all of the features from this contemporary review over at MyMac.com.

The first key feature was the single workspace. I would spend a bit of time in the Prepare layer, adding the songs I would be playing and perhaps some sound clips, and tweaking titles and layouts so I would have a good visual flow during recording. I could also adjust volumes for consistency and trim out silent starts and ends on clips.

Then came the Record layer. I would generally record an entire episode in one take, effortlessly clicking on songs to ‘fire’ them, clicking on the microphone mute so I could take a drink, and keeping an eye on time-to-run on the current song plus the total recording length, and checking levels. Importantly, for those odd times when things didn’t quite go right, I knew I always had the Edit layer so I would just push through any hiccups.

The killer feature of Ubercaster was the Edit layer. Even though the recording was live, the result was a multi-track layout almost exactly like GarageBand or Logic Pro. I could edit out vocal flubs if I was so inclined, though I rarely did. If a song hadn’t fired when I wanted, or there had been a few seconds of silence at the start that I hadn’t caught in preparation, I could simply slide things around to make it seem like everything had gone perfectly. Sometimes I even started the wrong song, stopped it, and started the correct one. That was still easy to fix in post even when vocal and music overlapped.

Since the disappearance of Ubercaster, I have gone back to using GarageBand or, more recently, Logic Pro X. Using these means either ‘wiring up’ multiple applications like SoundBoard and Soundflower to feed the input tracks of the recording software and then juggling applications during recording, or non-live production, where I record a vocal part, then stop and insert the song, then record vocals again. The former is a lot more work and easy to get confused by, and the latter doesn’t have the same feel to it, although it can save rather handsomely on the total time to record!

So my goals for a new approach would be:

  • Preparing all audio before hitting record, so it will just need to be ‘fired’ at the right time.
  • Minimising the number of applications I will need to interact with during recording.
  • Recording multi-track*

* When it comes to multi-track recording, there are different levels of ‘multi’ to consider. It’s relatively easy to get vocals on a different track to music, but separating music from jingles or multiple music tracks from each other is ideal. In the latter case, it allows for a cross-fade that can still be edited after recording as with Ubercaster.

The Toolkit

While I would be looking for new software, there are a few applications I already have which could be pressed into service as part of the solution.

  • Logic Pro X, from Apple, is the obvious endpoint for recording because it has all the multi-track tools I could imagine and then some.
  • Audio Hijack, from Rogue Amoeba, is another option for recording; it can technically record multiple tracks but is not an editor. It can play a role in managing audio channels, however.
  • Loopback, from Rogue Amoeba, is a lower level tool than Audio Hijack which can also assist in routing audio channels.
  • SoundBoard, from Ambrosia Software, is outwardly an obvious choice for firing songs and jingles.
  • I have various others such as SoundSource, Fission, and Piezo, but I see these as unlikely contributors.

The Contenders

I’ve searched a number of times for an all-in-one solution approaching what Ubercaster had to offer and come up well short, so I have been looking mostly at applications to ‘run the show’ for recording in Logic Pro. I’ve owned a copy of SoundBoard for a while and had tried using this for recording, so it became the benchmark from which I would cast a net for alternatives. I visited alternativeto.net and typed in “SoundBoard” to see what it had to offer.

Note that SoundBoard is still a contender, but it lacks the audio routing to be the perfect solution and the interface is not to my liking. Not to mention I have concerns about Ambrosia Software: the SoundBoard Remote application for iOS is on the list of apps that will stop working in iOS 11 because it’s still 32-bit, and the pain of WireTap Studio still stings.

qwertyGO

This USD$39 application from SIR Audio Tools is clearly aimed at live performance. It appears to have all the controls needed, including channel routing, but the interface is designed around firing sounds by keyboard. While it does support MIDI (see below), the need for a clear interface at record time is not met: each sound clip is represented by a picture of a single keyboard key with limited information visible and no real options for layout. I have not trialled this one.

Sound Byte

This USD$39* application from Black Cat Systems is an interesting one. It has all the controls needed, including routing and layout (though it would be nice to be able to reduce the grid from 75 cells). If I were to judge it purely from the website, I might run away (1990 called and wants its design back), but ticking all the boxes put this one on my trial list.

In use, the preparation of clips is cumbersome but relatively easy to understand. It would benefit considerably by allowing the user to set some aspects of multiple clips at once. With some time on setup, the record-time interface (which is the same interface) is functional and has some nice customisations available such as flashing clips nearing their end, disabling played clips, and locking volumes.

* The $39 price point gets the Lite version which would be sufficient for me. More expensive versions increase the number of racks available from the single rack in Lite.

EventSoundControl

This €39.50 application bills itself as “professional audio software” but I think I might need a qualification before I can use it. It has most of the controls necessary but seems to be missing channel routing. At least, I think it’s missing: I couldn’t find anywhere to set per-clip routing, and when I did find something to do with output channels, I couldn’t figure out what it does. The interface is also confusing and I found it difficult to perform even the basic record-time actions. Not to mention adding clips is a nightmare – drag and drop is not supported! I went looking for the help, which just says “coming soon.” Given they’re up to version 4, I’m not hopeful.

Soundplant

This USD$50 application from Marcel Blum takes the same approach as qwertyGO, using a facsimile of the computer keyboard as its interface. It has all of the control features, but the interface paradigm kept it off my trial list.

BZ Soundboard

This free application looks like a very simple one. A little too simple, however, as it lacks the routing features. Also, it was last updated in 2011. This did not make it to my trial list.

ShowCueWeb

Although intriguing – this is an HTML5 sound cueing application – it will undoubtedly lack the required controls (being confined to a browser) and, well, any application whose homepage is a GitHub page isn’t going to be an easy time. Not on my trial list!

QLab

This application from Figure 53 seems to be the bee’s knees! It can cue audio, video, and lights and has plenty of features and a fantastic interface. It took me a little while to get to grips with the parts of the application, but this really is full featured and well thought out. There’s even a free version!

Just one problem. Audio routing is not available in the free version. That requires a license. A USD$399 license!

Outside The Box

After not finding the perfect solution from the above list, I began to wonder if there was a way to use what I already had to bolt something together.

The music I play lives mostly in iTunes. I’ve ‘Hijack’ed iTunes into Logic before as a means of firing the songs, but it’s finicky, requiring a double-click to launch the song from my playlist, followed by quickly clearing the Up Next list. Forget that second step and you get the next song when you don’t want it.

I looked at scripting iTunes to get better control. This is possible but will take some effort to figure out, and it’s simply not possible to route iTunes audio, other than ‘Hijack’ing it.

I looked at Audio Hijack to see if it had the ability to play files as sources, but it doesn’t.

MIDI

Quite literally outside the (computer) box, adding some form of MIDI controller could change which software would work. For example, qwertyGO and Soundplant both lost out because of their interfaces, but if I could supplant those interfaces with an external control surface, they might work well. I’m not sure how much a control surface would cost, nor whether dedicated physical controls would work as well as a clearly labelled on-screen interface.

Best So Far

I’m still on this journey and as yet I’ve not outlaid any money, but so far the best option looks like Sound Byte. The interface is good enough and it has the channel routing that, together with Loopback, will allow me to lay down multiple songs into multiple tracks along with vocals and jingles each in theirs.

My next best options are to use SoundBoard or QLab (free) and deal with the fact that all the songs will lay down on the same track. An acceptable trade-off if I can commit to getting the cross-fades just right first time!

Beats X Bluetooth earbuds – update

 

Two weeks ago I published a review of the BeatsX Bluetooth earbuds over at podfeet.com as part of my hosting duties standing in for Allison Sheridan. I had a few issues with them at the time I wrote the review but now I have a few additional thoughts on them, so here’s a quick update.

The issue I had with the power button effectively remains, but with practice it’s easy to judge the “more than a second” hold, and in addition to forming the habit of turning them off and on, I’ve pretty much stopped looking at the power light. The chime in my ears is enough to confirm I’ve turned them on or off – provided I leave at least one bud in an ear before pushing the button, so I can hear it. This last bit can be mentally challenging. When I get into my car, I engage my after-market Bluetooth system. Once connected, it ‘steals’ the audio from the BeatsX and my first instinct is to simply remove the buds from my ears – but I need to leave them in and power them off first, even while the audio is now playing on the car speakers.

The bowing out or in of the cables below my ears remains a problem. Every length of wire on this product is a ribbon shape, and this gives them a mind of their own. From time to time I still manage to twist an earbud – almost always the left – the wrong way, and the wire twists uncomfortably into my face. If there were one thing I would change in the design, it would be round wires from the earbuds to the ‘lumps’. On the positive side, the recent weather has given rise to wearing my ‘up-to-the-chin’, fleece-lined jacket, and I’ve had no problems zipping it up to the top against the rain with the buds staying put – although see the following point for a probable contributing factor.

I mentioned I had trouble with wind catching the flat wires and tending to blow the buds out of my ears. There were a couple of days when the winds were severe enough (around 50km/h+) that this became extremely annoying. One lunchtime I went for a walk and just about gave up trying to listen because the left bud kept edging out of my ear. I’d been experimenting with the different sized tips, even going with different sizes on each ear, but nothing would stay in. After that walk, I fetched out the optional “wings” that are provided and fitted what appears to be the smaller pair of the two provided. These have made all the difference. With the wings fitted, the buds don’t move at all. I’m fairly sure I still don’t have a perfect seal in my left ear, but even the strongest winds do not budge them. These also make the buds feel lighter in my ears.

There have been two days when I have absent-mindedly left my BeatsX at home. For such occasions, I have two pairs of Sennheiser wired earbuds stashed in my car, as it’s when I get out at the railway station that I usually realise my error. I have the pair I had been using full time prior to the BeatsX, and an earlier pair which are a little higher quality but less practical. Using either pair of Sennheisers made a definite difference to my listening experience. The sound of both sets was somehow more comfortable. I’m no sound engineer, so I can’t explain why that is. I also found the apparent weight in my ears was less with the wired sets, which surprised me. But… I struggled frequently with the wires, which was the whole point of going Bluetooth.

Just yesterday I discovered an unexpected benefit of the wireless earbud lifestyle. I had spent the last hour or so at work listening to some tunes while I got some work done – my ‘desk neighbours’ having left early. The iPhone was on my desk so I could easily see what track was playing and skip or like (I was listening to Apple Music’s My New Music Mix playlist). When it came time to pack up, I stood up and went about putting stuff away and even fetching my jacket from the nearby coat stand, all without having to fuss at all with where my phone was (it remained on the desk) or where any wires were dangling as I was reaching over and under my desk. When I was ready to leave, I simply placed the phone in my jacket inside pocket and walked out. The music had continued uninterrupted the entire time.

To summarise, the effect of wind on the flat wires is a bit of a design flaw that had me seriously thinking about giving them up, but the wings address that problem, if not truly fix it. With that out of the way, no other niggles make me regret my purchase.

You can purchase BeatsX directly from Apple for NZD$229.95.


All images copyright © Apple, Inc.

Making pi

You know what the internet is like. You click on a link on Twitter which takes you to YouTube which suggests another video, which then suggests another and before you know it you’ve spent far more time than you intended at the computer.

That’s the process that landed me on a series of Numberphile videos. I know about Numberphile through another, much longer series of links. I listened to the NosillaCast, which featured Bart Busschots, who co-hosted the International Mac Podcast (no longer active), on which I appeared with Andrew J Clark who recommended The Prompt (since replaced by Connected) which featured Myke Hurley, who started Relay.FM on which he joined CGP Grey for Cortex, where Grey mentioned Hello Internet which features Brady Haran who makes the Numberphile videos!

Cutting to the chase, I watched a whole bunch of Numberphile videos today on all manner of topics including a number which has long held a fascination for me – pi, or π.

Many years ago, when I was in my early twenties, I was boasting to my father that I had memorised a bunch of digits of pi. I forget how many, but I suspect it was something like 15 or so. He promptly grabbed a piece of paper and slowly wrote out 30 decimal places of pi. The first ones matched mine, so I had to assume he was correct with the rest. When I quizzed him on how he did it, he wrote out a poem in which the number of letters in each word corresponded to the decimal digits of pi. While trying (and failing) to find said poem online when writing this post, I discovered this technique is referred to as piphilology and, specifically, my father relied on a piem.

In an effort to one-up my father, I took the 30 digits he had furnished and set about learning them by rote. I created for myself an extra login step on the computer terminal at work which required me to enter all 30 digits to continue, and I used this several times a day – though I had a much shorter cut-out passphrase for use when the boss was waiting on me!

To this day, I can still recall those digits, plus another 10 I committed to memory many years later. I swear to you that this is typed entirely from memory.

3.1415926535897932384626433832795028841971

In fact, my Dad’s poem had a confusing word (I recall it had an apostrophe or similar) and for a long time I remembered the palindromic sub-sequence …46364… only later discovering it was correctly …46264…

After watching the pi-related videos today, I had a mind to get myself to a round 50 digits by committing the next 10 digits to memory. Now, while I could simply have looked up these digits online, I began to wonder, as I have before, whether I could use my Mac to calculate the digits.

A related video (not from Numberphile) that I had watched had a link to some software claiming to do exactly that, but on inspection, it appeared not to have been updated in a while and was only provided for Windows and Linux. While I could probably have got the Linux one to work (perhaps in a VM), I began a search for a Mac program that could do it.

It turns out, there’s a remarkably simple way to calculate digits of pi on any Mac or Linux system without any software beyond what comes as standard. There’s a Unix command bc which calculates with “arbitrary precision.” Give it the right equation and it’ll work with extraordinary numbers of digits.

This site I came across gives a remarkably short script to generate digits of pi to an output file. I’m not sure what that a( ) function is (it is intrinsically hard to search for!), but I ran it for 300 digits and it finished instantly. Then I ran it for 10,000 digits and it finished in 100 seconds. Before I went to bed I left it calculating 100,000 digits. It took – no kidding – 11 hours and 1 second.
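For the record, a(x) is the arctangent function from bc’s math library (loaded with the -l flag), and the classic one-liner computes pi as 4*a(1) at a chosen scale, since arctan(1) = π/4. The same idea can be sketched in Python with the standard decimal module. A plain arctan(1) series converges far too slowly to be practical, so this hypothetical pi_digits helper (function names are my own) uses Machin’s arctangent formula instead:

```python
from decimal import Decimal, getcontext

def arctan_inv(x: int, digits: int) -> Decimal:
    """arctan(1/x) via its Taylor series, good to roughly `digits` places."""
    eps = Decimal(10) ** -(digits + 5)
    power = Decimal(1) / x            # holds (1/x)^(2k+1)
    total = power                     # the k = 0 term
    term = power
    k = 0
    while abs(term) > eps:
        k += 1
        power /= x * x
        term = power / (2 * k + 1)
        total += term if k % 2 == 0 else -term  # alternating signs
    return total

def pi_digits(digits: int) -> str:
    """Pi as a string with `digits` decimal places, using Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10   # working precision incl. guard digits
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return str(pi)[: digits + 2]      # "3." plus `digits` decimals
```

As a sanity check, pi_digits(40) reproduces the 40 digits I typed from memory earlier; like bc, it slows down as the digit count grows, though nothing like 11 hours for the sizes you’d memorise.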

While the big one was still running I had a file with 10,000 digits of pi – what to do with it? I’d recently been fiddling with shapes in Affinity Designer, trying to come up with some kind of new wallpaper for my 27″ iMac. I would create a new wallpaper, which I could then use to help me learn those next 10 digits, and maybe more.

So what, exactly, did I have? I had a text file which contained 10,000 digits of pi arranged in lines of 68 digits, terminated by a backslash and newline. I figured I needed to combine pairs of lines to get the right sort of shape for fitting a lot of digits on a 16:10 screen. I turned to the Atom text editor and its regular expression search and replace.

Find   : (\d{68})\\\n(\d{68})\\
Replace: $1$2
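The same pairing-up can be done outside Atom, too; here is a minimal Python sketch of that substitution (the function name is my own):

```python
import re

def join_line_pairs(text: str) -> str:
    """Join pairs of 68-digit lines, dropping the trailing backslash
    continuations that bc emits at the end of each wrapped output line."""
    return re.sub(r"(\d{68})\\\n(\d{68})\\", r"\1\2", text)
```

Applied to the whole file’s contents, this halves the line count and removes the superfluous backslashes, exactly as the Atom find-and-replace did.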

Now I had half as many lines of 136 characters and no superfluous backslashes. I copied and pasted the lot into a text block in Affinity Designer and chose a suitable font – monospaced, of course – which was Menlo. With a suitable font size to allow the digits to be read, but not enormous, I then trimmed to 42 lines to fit the screen with some space top and bottom. That’s 5,710 decimal places (plus the “3.”).

For a bit of style, I added a black background and ran a subtle grey ‘shimmer’ gradient from corner to corner. I think it looks pretty snazzy.

 

5,710 decimal places of pi on my desktop.

But then I decided I wanted something a bit more… funky. One of the Numberphile videos included a number of clever and artsy representations of pi using various visual techniques including the use of colour. What if I coloured all of the 0s one colour, all of the 1s another colour, etc?

 

I set about choosing the colours. It quickly dawned on me that picking evenly spaced colours along the hue axis of the hue-saturation-luminance picker would be a good choice. I trialled one of each digit and it looked OK. But how to do roughly 571 of each without going completely batty?

I hit upon a relatively simple technique using a combination of Pages and Atom. In Pages, I created a new document with a single line of text “0123456789” and I coloured each of the digits appropriately. I then saved the file as rich text.

Opening the rich text file in Atom, it was reasonably easy to see how each colour was applied to each character. At the top, there was a definition of all of the colours and then for each character, there was a sequence like the following: \cf2 \strokec2 0

The colours were numbered from 2 through 11 in the order I had defined them, so all I needed to do was replace each "0" with "\cf2 \strokec2 0", then each "1" with "\cf3 \strokec3 1", and so on. It struck me that doing a find and replace on each of the digits was going to be problematic, considering replacing the 0s would introduce 2s (as part of the colour code), so I first did a bunch of search-and-replaces to switch out 0 through 9 with A through J. Then I was able to replace "A" with "\cf2 \strokec2 0" and so on.
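That two-pass letter-swap trick can be sketched as a small script; a hypothetical Python version (the function name is my own, and the colour numbering 2 through 11 follows the description above):

```python
def colourise_digits(digits: str) -> str:
    """Two-pass replacement: map each digit 0-9 to an RTF colour run
    \\cfN \\strokecN, where colours 2-11 are defined in the RTF header.
    A direct single pass would corrupt digits already replaced (e.g. the
    '2' in \\cf2), so digits are first swapped to the letters A-J."""
    letters = "ABCDEFGHIJ"
    # Pass one: digits -> placeholder letters.
    for d in range(10):
        digits = digits.replace(str(d), letters[d])
    # Pass two: letters -> coloured RTF runs, restoring the digit.
    for d in range(10):
        digits = digits.replace(letters[d], rf"\cf{d + 2} \strokec{d + 2} {d}")
    return digits
```

The placeholder letters never collide with the lowercase RTF control words, which is what makes the second pass safe.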

Having done the two rounds of replacements, I had a huge wodge of text which I then simply copied and pasted into the rich text file in the appropriate place. A quick preview showed it had worked! You might notice an opening extra “0” which is there because the first digit in the original file was prefixed with a bunch of other codes and so I left it there in case their order mattered. I later edited it out.

 

That looked pretty ugly on a white background, but when I copied it into the Affinity Designer file and set it in Menlo against black, it looked… bright! I reduced the opacity to 50% but it still didn’t look right. Time to add a shimmer! I used the transparency tool to create a transparency gradient that varied between 100% and 75%. It was looking better, but the random distribution of the digits still gave an overall flat appearance. What was needed was some kind of hero feature.

I quickly hit upon the idea of using the π symbol itself as a feature. Many fonts’ glyphs for π are rather dull and square but I eventually settled on the AppleMyunjo font which has a pleasingly stylish one. I added a giant π in 50% grey, set the blend mode to colour dodge so it would brighten up the colours below it, lowered the opacity until it seemed about right (75%), then finally added a moderate gaussian blur to soften the edges.

Tada!

 

So there you have it. 5,710 decimal places of pi, as art. I’m really pleased with the final version. You can click on the image above to see the full 2880 x 1620 pixel version I use on my iMac. Same for the monochrome one.

NC #620

With Allison and Steve away in the Galapagos Islands and Machu Picchu, I was given the task of hosting NosillaCast #620, which meant a week of blog posts to manage and then collate into the final product.

Topics include a miniature review of using the Apple Watch Series 2 for swim workouts, 26 Mac Apps you didn’t know you already had, two more videos from the CSUN Assistive Technology Conference, some recommendations for podcasts you might want to listen to that aren’t about technology, Terry delivering on his callout from Allison with a review of GhostReader text to speech software, and a review of the BeatsX Bluetooth earbuds with Apple W1 chip.

The best camera – update

Back in 2013, I wrote a blog post (since taken offline) about my disagreement that modern smartphone cameras “make compact cameras obsolete.” My premise was that for many types of photo – just about anything of an object out of reach – the lack of optical zoom is a severely limiting factor.

Later I purchased what I call “the hundred dollar camera” and have been carrying this in the bag I take to work every day, and sometimes – when I remember – in my pocket. My goal is to find and capture scenes that are simply impossible to capture on a phone, using a device that’s just as pocketable and super cheap.

On Friday morning, I was doing my usual walk down Wellington’s waterfront on a frankly gorgeous morning. The harbour was glassy and still – a state it doesn’t often achieve – and there were numerous people out enjoying it in vessels of different sizes.

SSV Robert C. Seamans is a 134-foot steel sailing brigantine operated by the Sea Education Association (SEA) for oceanographic research and sail training. She had been berthed at Wellington’s Queens Wharf the previous day, but on this particular morning, she was underway. (As it turned out, merely to another berth around the corner.) The sight of this beautiful tall ship on the glassy water with a grey overcast above was stirring enough that I decided I needed to capture the scene. I reached for my hundred dollar camera.

SSV Robert C. Seamans

This photo was at an equivalent focal length of 106mm – almost twice that possible with the latest technology in the iPhone 7 Plus. As shown above, it is a very slight crop, colour corrected, and with some noise removal applied, which really only seemed to affect the foliage on the hill (Mount Victoria) behind.

Viewed at full scale, the quality of the image is terrible, but it looks fantastic on my iPhone 6 Plus screen. Easily the equal of good photos taken on the phone itself. But of course, if taken with the iPhone, it would have to have been a major crop and the quality issues on iPhone photos would become apparent – certainly if taken with the 28mm equivalent standard lens.

So, you might get something approaching that quality with an iPhone 7 Plus. But you wouldn’t have a chance of getting this shot at 172mm equivalent.

SSV Robert C. Seamans

That’s a hair over three times the focal length of the iPhone 7 Plus and still comfortably inside the optical zoom range of the hundred dollar camera. The same types of processing have been applied as above and once again, it looks fantastic on my iPhone screen. In fact, it looks pretty darned good on my computer screen, too, if not at full zoom.

An iPhone shot would show a boat in a harbour. This shot shows people on a boat. This is a perfect example of my characterisation of “objects you can’t touch” which the iPhone camera is simply incapable of capturing well.

I’m not giving up my DSLR any time soon, even though I concede it is a bulky item to carry. I have carried my DSLR on my commute on a number of occasions, but it’s a little too heavy and bulky to be a regular practice. Or is it? As I wrote that sentence, it occurred to me the biggest pain with carrying the DSLR is the size of it in my laptop bag which is not designed to carry it. With some thought, I may be able to solve that.

But aside from issues of bulk with a DSLR, this tiny camera, which I can carry in the same pocket as my iPhone 6 Plus at the same time, clearly outperforms any model of iPhone for less money than you’ll spend upping the storage size on your next iPhone.


The “hundred dollar camera” is a Canon IXUS 160, which cost me NZD$110 in 2016.

The value of photographs

This post is a revision of one I published in 2015. The topic came to mind again as I was discussing my Adobe Lightroom workflows with an acquaintance who is currently making a switch to this software.

The question at hand is how to decide which of your hundreds or thousands of digital photos you should delete and which you should keep.

I have observed amongst some friends that the subject of culling engenders lengthy discussion: picking keepers, hiding duds, rating schemes, multiple passes, and the passage of time, all to try and make sure the right photos go in the right direction.

My approach does away with this angst, for the most part, by flipping the triage process on its head. Instead of deciding which photos to throw away, I decide which to publish, and keep the lot.

Back when I wrote the original piece, I had just seen the beginning of a training video in which it was claimed that 99% of the trainer’s photographs “don’t work” and 95% of them should be deleted. He was making selections based on how well he had executed the art of photography and any frame that was slightly out of focus, unbalanced, misaligned, or poorly composed did not deserve to be kept. Even a good frame did not deserve to be kept if there was a better one of the same subject. But I contend that photography is not just art.

I have thousands of photos of aircraft that are not pleasing enough to my eye to publish, but they are a record of a particular aircraft at a particular time and location. I follow blogs which publish hundreds of such photos from years gone by and these generate a lot of interest. I can imagine some may consider photos of trains, cars or boats in the same way.

I would also consider bird photos in a similar vein. For instance, I have a handful of photos of New Zealand Dabchicks, almost all of which aren’t great to look at, but represent a lot of work I did to stalk these shy birds and to some extent serve as an aid in recognition for the future.

The trainer was a street photographer, and at first I considered that subject matter unlikely to fall in line with my thinking. But what if you capture something which only takes on meaning long after you first review your photos? There are photos of people taken before they became famous, or later claimed to be “the last” before they died, many of which are artistically unremarkable yet historically important, or at the very least interesting.

What about photos of family which are memories? How many times have you seen a story about a tragic death in which the person is remembered by a photo which, usually, shows them in happy times but, also usually, is not a great piece of photographic art? It’s the memory that is important, whether in focus or not. Even if you have 5 photos of a person in the same place at the same time, maybe there’s something in the background of one of them – a favourite toy; a cherished painting; something that takes on a deeper meaning after later events unfold. You might even have an unremarkable photo of a landscape that later undergoes dramatic change.

It was this concept of ‘later significant’ photos that was explained to me many years ago, and since then I haven’t deleted a single photo except a small handful which were massively out of focus or accidental shots, say, of the ground. With my shift to Adobe Photoshop Lightroom to manage my photos I have doubled down on my keywording – cleaning up as much as I can – and in the process I have come across photos I forgot I had. Ones never before considered for publishing to the world but fascinating to rediscover, and some I have cleaned up and now published.

Furthermore, the phrase “storage is cheap” continues to ring true. Granted, my photo library is not enormous (~32,000 photos @ ~375 GB) but I now have it comfortably resting on a fast external SSD. Even if it were 10 times the size, I could spend a few hundred dollars on a 4TB USB drive.

So, I do not cull my photos and have no plans to start. Rather, I organise everything to be held for posterity and then select my best or most interesting for publishing.

Here’s the first one I stumbled across in my cleanup. I have no idea what frame of mind I was in that this very interesting and (in New Zealand) unique aircraft didn’t warrant publishing straight away…

 

_IGP3438

…whereas the very next frame from my camera did get published.

 

Is this a photographic work of art? Or a memory of a young Maine Coon called Snickers?


The header image on this post is of a Fouga Magister, ZK-FGA, taken on January 25th, 2004. This aircraft tragically crashed less than two months later. It’s not a fantastic photo of the aircraft, but it is the only (digital) one I have.

Not realising potential

This is a follow-up to my previous post and came about due to a discussion I had on that post with a friend.

One of the basic issues I identified with the cut-and-paste situation was that the touch interface is having to deal with an “old school” model of text editing that came, in fact, from the days before the mouse. However, I came to realise there are things that a touch interface should be really good at but which are still hamstrung by old ideas.

I’ll keep this post much shorter. Watch this video, and pay particular attention to the section on photos starting at the 02:34 mark. Scrub forward to that section if you wish. If you’ve seen it before I urge you to watch that section again.

OK, now open up the Photos app on your iPad – the one which has seen continuous improvements for the last 10 years. Which experience is it closer to – the original iPhoto app for Mac, or the demo in the video above?

It is clearly an iteration of the basic iPhoto design which debuted in 2002 and couldn’t even claim to be original back then. You get a grid of photos, some sorting options, some searching, and you can tap any photo to have it enlarge to full screen. The iOS app doesn’t even match the basic feature set of Photos for Mac today. Try adding a keyword to a photo. You can’t.

Why don’t we have something much closer to the demo by now? In case you didn’t know or notice, the demo took place in 2006. The year before the iPhone launched. Four years before the iPad launched. Modern iPads can play fantastically complex and detailed real-time video games – why can’t I organise and edit my photos in a natural fashion?

Trucks and cars

UPDATE: Please see the bottom of this post for an interesting side story.


There has been a recent resurgence of discussion in the Apple commentators’ world about the future of the Mac. In many cases, the discussion turns to how well, or not, iOS can take the place of macOS for many types of work.

I love my Mac and would hate to see it fade away. I’ve always had this feeling that some basic tasks are just more intuitive and simple on a Mac than on iOS but until a few days ago I couldn’t come up with any concrete examples.

I recently purchased a 9.7″ iPad Pro and have been using it for some writing – one of those tasks the commentators say an iPad is pretty darned good at. I wrote a fairly lengthy blog post for a friend’s blog using Ulysses, both on the iPad and my MacBook and iMac. Most of the initial writing was done on the iPad but editing occurred on the Macs. Again, it seemed like the easier option to edit on a Mac. But what was the truth of it? Here’s how I quantified the issue…

Having completed the blog post, I decided I should dig out a “to do” list I had created of future topics for my friend’s blog. It was a fairly old list and I found it sitting in Apple’s Notes app as a checklist. I decided it would be better to copy the items into OmniFocus so I could prioritise them, add notes, and mark off those completed.

OmniFocus on an iPad is a joy to use. It’s the type of app that really lends itself to a touch interface and I find it easier to use there than on my Macs unless I’m doing some major reorganisation. So I decided I would copy the 31 entries across on my iPad. Should be fun!

Here’s what it takes to copy list entries from a Note to an OmniFocus task. Hold on tight…

My Notes list

Multitasking makes everything easier. OmniFocus has been brought onto screen and has a new project ready for the new tasks.

 

Eagle-eyed readers may note the circles have disappeared from the Notes list in subsequent screen captures. That’s because I removed them after I discovered they would be included as “- [ ]” characters in the copied text. A quirk, but not relevant to my point here.

I have tapped the button to add a task to the project. Now what?

 

A tap on the Note places the cursor (not visible in the screen capture) in Notes. With care, the cursor is at the end of the text I want to use to create the task (after the “1” in “Item 1”).

 

I tap again to invoke the pop-up menu, from where I can enter text selection mode. I tap Select.

 

Initially, only the nearest word is selected, so I need to carefully tap-drag to select the whole line.

 

Now I have my line selected, I can tap Copy.

 

I have my line of text copied, but I’m still in Notes. I need to go over to OmniFocus with a tap over there.

 

The first tap sets OmniFocus as the active app, but I can’t paste anything yet.

 

Another tap brings up the Paste option, which I can then tap.

 

Boom! I have my text in the new task and I can save it. OmniFocus has the nifty “Save +” button which saves time by immediately opening a new task for entry.

 

At this point, it has taken 8 taps to copy a line of text from one application to the other. Several of those need to be made with some precision or additional taps will be required. This does not include those taps required to create and save the task in OmniFocus.

I repeated the steps to copy the second task across. But wait! Is there a better way? Does iOS offer better mechanisms to solve this simple task?

Well, there are action extensions, and OmniFocus most certainly has one. Let’s add the third item in the “modern” way.

First up, the same tap, tap, tap, drag is required to select the line of text in Notes.

But instead of going straight to the Copy function, we need to tap the arrow to get to action extensions.

 

Now we can tap the Share… button. We’re up to 6 taps now.

 

I had to scroll to find OmniFocus, but I could rearrange the extensions to make it instantly accessible. So just counting the tap on the OmniFocus extension, we’re up to 7 taps.

 

The OmniFocus extension pops up, but because I’m “coming through a different door” the context of my project is lost and therefore I need to select it. Again, some organisation could put the project at the top, so we’ll give that as a freebie, but the tap to select the project takes us to 8 taps total.

A final, ninth tap on Save creates the task in OmniFocus. The modern approach takes more taps than the old school.

 

I use action extensions reasonably often and find them mostly intuitive and simple and quick. But when it comes to a repetitive task like this, all of those attributes melt away. Even the intuitiveness! When copying 31 tasks for my real list, my brain would start to get muddled on which step needed to be performed next. This is true on the Mac as well, sometimes, but this is a very, very simple task – copy some text from one application to another.

This same task on the Mac is far simpler. Again, with both apps open side by side and the project container and new task created, it takes the following steps:

Click on Notes. Drag over the text. Cmd-C. Click on OmniFocus. Cmd-V.

While that’s still 5 operations, only one of those requires any dexterity – the drag. More of it can be accomplished with keyboard shortcuts, too. Cmd-Tab to switch between the apps and, outside of my counting scope, Cmd-N to create a new task in OmniFocus and Enter to save it. I could even select the text in Notes with the keyboard although I reckon that’s slower and more fiddly.

Granted, I could add a keyboard to my iPad, but should I need an expensive add-on just to do a simple copy and paste task? I have no idea how different it would be with a keyboard, but I suspect there’d still be a lot of touching the screen. On the Mac, the keyboard and its basic shortcuts (Cmd-N, C, V) are intrinsic to almost all apps because the keyboard is always present.

The nub of this issue, as I see it, is that a touch interface will never be good at detailed work that follows the same paradigms as the traditional desktop computer. Perhaps there is a clever way to multi-touch edit text that has yet to be thought of, but it’s not here now.

A final note. Those eagle-eyed readers may also note the time in my screen captures is out of order. There were so many individual taps that I found it hard to remember to take every screen capture the first time through. And the second time.


UPDATE

In this post, I used a specific task between Apple’s Notes and OmniFocus to illustrate a fairly basic concept. This was never intended to be a slight on either product, but rather the nature of iOS.

However, even though I did not reach out to the Omni Group, nor even complain about their OmniFocus product, the CEO of the Omni Group, Ken Case, evidently came across the post and reached out to me to explain that I could have done this particular task more easily. This is a fantastic level of support! And so I thought it deserved a callout here.


Postscript: After publishing this post I noticed that where I had used image captions to describe the steps, the text was too small in comparison to the few passages in regular paragraphs. In the space of a couple of minutes, I edited the post to move all of the caption text into text blocks, including creating most of those text blocks. One hand on the keyboard, one on the mouse, and my eyes planted firmly on the screen, this was a quick and fluid task. Ignoring the fact this task isn’t even possible on iOS, if it were, I don’t think I would have been done in two minutes!

A decade on

Today marks 10 years since I switched to the Mac and I thought, like that day, it deserved a blog post to mark the occasion.

A screen capture from the “Wayback Machine” at archive.org shows my blog post as it originally appeared in 2007 on the Sitting Duck blog.

I still think it is one of the best decisions I ever made to (mostly) abandon Windows. For all the complaints I have had over the years about the Mac, I still get a regular view into the Windows world and, as my friend Allison says, it’s like being prodded in the ribs every 5 minutes.

It’s a fascinating discussion to have with people who remain with Windows and say “it’s fine.” Very few people defending Windows have ever spent much time really immersing themselves in the Mac operating system, yet a large proportion (these days a majority, I reckon) of Mac users came from years of Windows use, like me, or have at least been exposed to Windows in an office environment. In my experience, Windows defenders never actually defend their choice of OS but rather attack my choice, often by explaining how “nasty” Apple is in its ways.

I had a fairly level-headed discussion recently where my ‘opponent’ was actually trying to defend Windows, but I kept pointing out to him that all of his positive points amounted to “it’s not as bad as it used to be.” I think that’s the nub of the issue. People just expect things to be difficult. That’s not to say things are always easy on the Mac, far from it. Yet I use Windows 7 five days a week at work and it is constantly bugging me in so many ways.

I’m not going to make this a long diatribe and try to convince anyone to switch. Truth be told, most who will read this will already be Mac users. No, I just wanted to mark the occasion and note that I’m still happy with the decision – 8 OS versions and 3 Macs later.