Making pi

You know what the internet is like. You click on a link on Twitter which takes you to YouTube, which suggests another video, which then suggests another, and before you know it you’ve spent far more time than you intended at the computer.

That’s the process that landed me on a series of Numberphile videos. I know about Numberphile through another, much longer series of links. I listened to the NosillaCast, which featured Bart Busschots, who co-hosted the International Mac Podcast (no longer active), on which I appeared with Andrew J Clark, who recommended The Prompt (since replaced by Connected), which featured Myke Hurley, who started Relay.FM, on which he joined CGP Grey for Cortex, where Grey mentioned Hello Internet, which features Brady Haran, who makes the Numberphile videos!

Cutting to the chase, I watched a whole bunch of Numberphile videos today on all manner of topics including a number which has long held a fascination for me – pi, or π.

Many years ago, when I was in my early twenties, I was boasting to my father that I had memorised a bunch of digits of pi. I forget how many, but I suspect it was something like 15 or so. He promptly grabbed a piece of paper and slowly wrote out 30 decimal places of pi. The first ones matched mine so I had to assume he was correct with the rest. When I quizzed him on how he did it, he wrote out a poem in which the number of letters in each word corresponded with the decimal digits of pi. While trying (and failing) to find said poem online when writing this post, I discovered this technique is referred to as piphilology and, specifically, my father relied on a piem.

In an effort to one-up my father, I took the 30 digits he had furnished and set about learning them by rote. I created for myself an extra login step on the computer terminal at work which required me to enter all 30 digits to continue, and I used this several times a day – though I had a much shorter cut-out passphrase for use when the boss was waiting on me!

To this day, I can still recall those digits, plus another 10 I committed to memory many years later. I swear to you that this is typed entirely from memory.

3.1415926535897932384626433832795028841971

In fact, my Dad’s poem had a confusing word (I recall it had an apostrophe or similar), and for a long time I remembered the palindromic sub-sequence …46364…, only later discovering it was correctly …46264…

After watching the pi-related videos today, I had a mind to get myself to a round 50 digits by committing the next 10 digits to memory. Now, while I could simply have looked up these digits online, I began to wonder, as I have before, whether I could use my Mac to calculate the digits.

A related video (not from Numberphile) that I had watched had a link to some software claiming to do exactly that, but on inspection, it appeared not to have been updated in a while and was only provided for Windows and Linux. While I could probably have got the Linux one to work (perhaps in a VM), I began a search for a Mac program that could do it.

It turns out, there’s a remarkably simple way to calculate digits of pi on any Mac or Linux system without any software beyond what comes as standard. There’s a Unix command bc which calculates with “arbitrary precision.” Give it the right equation and it’ll work with extraordinary numbers of digits.

This site I came across gives a remarkably short script to generate digits of pi to an output file. I wasn’t sure at first what that a( ) function was (it is intrinsically hard to search for!), but I ran the script for 300 digits and it finished instantly. Then I ran it for 10,000 digits and it finished in 100 seconds. Before I went to bed I left it calculating 100,000 digits. It took – no kidding – 11 hours and 1 second.
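That a( ) function, it turns out, is the arctangent – bc’s math library, loaded with the -l flag, names its functions with single letters – and since arctan(1) = π/4, multiplying by 4 gives π. I can’t vouch for the exact script on that site, but a minimal version of the same idea (here asking for 300 decimal places, written to a pi.txt of my own naming) looks like this:

echo "scale=300; 4*a(1)" | bc -l > pi.txt

The scale variable sets how many decimal digits bc carries through the calculation, which is why the running time balloons as you ask for more. One caveat worth knowing: bc truncates rather than rounds at the chosen scale, so the last few digits can be off – if you care about them, ask for a handful more digits than you need.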

While the big one was still running, I had a file with 10,000 digits of pi – what to do with it? I’d recently been fiddling with shapes in Affinity Designer, trying to come up with some kind of new wallpaper for my 27″ iMac. So that was it: I would create a new wallpaper, which I could then use to help me learn those next 10 digits, and maybe more.

So what, exactly, did I have? I had a text file which contained 10,000 digits of pi arranged in lines of 68 digits, terminated by a backslash and newline. I figured I needed to combine pairs of lines to get the right sort of shape for fitting a lot of digits on a 16:10 screen. I turned to the Atom text editor and its regular expression search and replace.

Find   : (\d{68})\\\n(\d{68})\\
Replace: $1$2
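Incidentally, the same join can be done without leaving the terminal. Here’s a hypothetical one-liner using exactly the same pattern (assuming the digits are sitting in pi.txt, with the joined output going to a pi-joined.txt of my own naming):

perl -0777 -pe 's/(\d{68})\\\n(\d{68})\\/$1$2/g' pi.txt > pi-joined.txt

The -0777 flag makes perl slurp the whole file as one string, so the pattern can match across the line boundaries.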

Now I had half as many lines of 136 characters and no superfluous backslashes. I copied and pasted the lot into a text block in Affinity Designer and chose a suitable font – monospaced, of course – which was Menlo. With a suitable font size to allow the digits to be read, but not enormous, I then trimmed to 42 lines to fit the screen with some space top and bottom. That’s 5,710 decimal places (42 lines × 136 characters = 5,712, minus the “3.”).

For a bit of style, I added a black background and ran a subtle grey ‘shimmer’ gradient from corner to corner. I think it looks pretty snazzy.

 

5,710 decimal places of pi on my desktop.

But then I decided I wanted something a bit more… funky. One of the Numberphile videos included a number of clever and artsy representations of pi using various visual techniques including the use of colour. What if I coloured all of the 0s one colour, all of the 1s another colour, etc?

 

I set about choosing the colours. It quickly dawned on me that picking evenly spaced colours along the hue axis of the hue-saturation-luminance picker would be a good choice – ten digits, so one hue every 36°. I trialled one of each digit and it looked OK. But how to do roughly 571 of each without going completely batty?

I hit upon a relatively simple technique using a combination of Pages and Atom. In Pages, I created a new document with a single line of text “0123456789” and I coloured each of the digits appropriately. I then saved the file as rich text.

Opening the rich text file in Atom, it was reasonably easy to see how each colour was applied to each character. At the top, there was a definition of all of the colours and then for each character, there was a sequence like the following: \cf2 \strokec2 0

The colours were numbered from 2 through 11 in the order I had defined them, so all I needed to do was replace each “0” with “\cf2 \strokec2 0”, then each “1” with “\cf3 \strokec3 1” and so on. It struck me that doing a find and replace on each of the digits was going to be problematic considering replacing the 0s would introduce 2s (as part of the colour definition), so I first did a bunch of search-and-replaces to switch out 0 through 9 with A through J. Then I was able to replace “A” with “\cf2 \strokec2 0” and so on.
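If you’d rather not do the twenty replacements by hand, the same two-pass trick can be scripted with standard macOS tools. This is a hypothetical sketch – assuming the bare digits have been extracted into a digits.txt, with the result pasted back into the RTF afterwards:

tr '0123456789' 'ABCDEFGHIJ' < digits.txt > coloured.txt   # first pass: digits become letters

letters=(A B C D E F G H I J)
for d in 0 1 2 3 4 5 6 7 8 9; do
  cf=$((d + 2))   # colours are numbered 2 through 11 in definition order
  # second pass: letters can't collide with the digits inside the colour codes
  sed -i '' "s/${letters[d]}/\\\\cf${cf} \\\\strokec${cf} ${d}/g" coloured.txt
done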

Having done the two rounds of replacements, I had a huge wodge of text which I then simply copied and pasted into the rich text file in the appropriate place. A quick preview showed it had worked! You might notice an opening extra “0” which is there because the first digit in the original file was prefixed with a bunch of other codes and so I left it there in case their order mattered. I later edited it out.

 

That looked pretty ugly on a white background, but when I copied it into the Affinity Designer file and set it in Menlo against black, it looked… bright! I reduced the opacity to 50% but it still didn’t look right. Time to add a shimmer! I used the transparency tool to create a transparency gradient that varied between 100% and 75%. It was looking better, but the random distribution of the digits still gave an overall flat appearance. What was needed was some kind of hero feature.

I quickly hit upon the idea of using the π symbol itself as a feature. Many fonts’ glyphs for π are rather dull and square, but I eventually settled on the AppleMyunjo font which has a pleasingly stylish one. I added a giant π in 50% grey, set the blend mode to colour dodge so it would brighten up the colours below it, lowered the opacity until it seemed about right (75%), then finally added a moderate Gaussian blur to soften the edges.

Tada!

 

So there you have it. 5,710 decimal places of pi, as art. I’m really pleased with the final version. You can click on the image above to see the full 2880 x 1620 pixel version I use on my iMac. Same for the monochrome one.

NC #620

With Allison and Steve away in the Galapagos Islands and Machu Picchu, I was given the task of hosting NosillaCast #620, which meant a week of blog posts to manage and then collate into the final product.

Topics include a miniature review of using the Apple Watch Series 2 for swim workouts, 26 Mac Apps you didn’t know you already had, two more videos from the CSUN Assistive Technology Conference, some recommendations for podcasts you might want to listen to that aren’t about technology, Terry delivering on his callout from Allison with a review of GhostReader text to speech software, and a review of the BeatsX Bluetooth earbuds with Apple W1 chip.

The best camera – update

Back in 2013, I wrote a blog post (since taken offline) disagreeing with the claim that modern smartphone cameras “make compact cameras obsolete.” My premise was that for many types of photo – just about anything of an object out of reach – the lack of optical zoom is a severely limiting factor.

Later I purchased what I call “the hundred dollar camera” and have been carrying this in the bag I take to work every day, and sometimes – when I remember – in my pocket. My goal is to find and capture scenes that are simply impossible to capture on a phone, using a device that’s just as pocketable and super cheap.

On Friday morning, I was doing my usual walk down Wellington’s waterfront on a frankly gorgeous morning. The harbour was glassy and still – a state it doesn’t often achieve – and there were numerous people out enjoying it in vessels of different sizes.

SSV Robert C. Seamans is a 134-foot steel sailing brigantine operated by the Sea Education Association (SEA) for oceanographic research and sail training. She had been berthed at Wellington’s Queens Wharf the previous day, but on this particular morning, she was underway. (As it turned out, merely to another berth around the corner.) The sight of this beautiful tall ship on the glassy water with a grey overcast above was stirring enough that I decided I needed to capture the scene. I reached for my hundred dollar camera.

SSV Robert C. Seamans

This photo was taken at an equivalent focal length of 106mm – almost twice what’s possible with the latest technology in the iPhone 7 Plus. As shown above, it has only a very slight crop, some colour correction, and some noise removal applied, which really only seemed to affect the foliage on the hill (Mount Victoria) behind.

Viewed at full scale, the quality of the image is terrible, but it looks fantastic on my iPhone 6 Plus screen. Easily the equal of good photos taken on the phone itself. But of course, if taken with the iPhone, it would have to have been a major crop and the quality issues on iPhone photos would become apparent – certainly if taken with the 28mm equivalent standard lens.

So, you might get something approaching that quality with an iPhone 7 Plus. But you wouldn’t have a chance of getting this shot at 172mm equivalent.

SSV Robert C. Seamans

That’s a hair over three times the focal length of the iPhone 7 Plus and still comfortably inside the optical zoom range of the hundred dollar camera. The same types of processing have been applied as above and once again, it looks fantastic on my iPhone screen. In fact, it looks pretty darned good on my computer screen, too, if not at full zoom.

An iPhone shot would show a boat in a harbour. This shot shows people on a boat. This is a perfect example of my characterisation of “objects you can’t touch” which the iPhone camera is simply incapable of capturing well.

I’m not giving up my DSLR any time soon, even though I concede it is a bulky item to carry. I have carried my DSLR on my commute on a number of occasions, but it’s a little too heavy and bulky to be a regular practice. Or is it? As I wrote that sentence, it occurred to me the biggest pain with carrying the DSLR is the size of it in my laptop bag, which is not designed to carry it. With some thought, I may be able to solve that.

But aside from issues of bulk with a DSLR, this tiny camera, which I can carry in the same pocket as my iPhone 6 Plus at the same time, clearly outperforms any model of iPhone at this kind of reach, for less money than you’ll spend upping the storage size on your next iPhone.


The “hundred dollar camera” is a Canon IXUS 160, which cost me NZD$110 in 2016.

The value of photographs

This post is a revision of one I published in 2015. The topic came to mind again as I was discussing my Adobe Lightroom workflows with an acquaintance who is currently making a switch to this software.

The question at hand is how to decide which of your hundreds or thousands of digital photos you should delete and which you should keep.

I have observed amongst some friends that the subject of culling engenders lengthy discussions about picking keepers, hiding duds, rating schemes, multiple passes, and allowing the passage of time, all to try and make sure the right photos go in the right direction.

My approach does away with this angst, for the most part, by flipping the triage process on its head. Instead of deciding which photos to throw away, I decide which to publish, and keep the lot.

Back when I wrote the original piece, I had just seen the beginning of a training video in which it was claimed that 99% of the trainer’s photographs “don’t work” and 95% of them should be deleted. He was making selections based on how well he had executed the art of photography and any frame that was slightly out of focus, unbalanced, misaligned, or poorly composed did not deserve to be kept. Even a good frame did not deserve to be kept if there was a better one of the same subject. But I contend that photography is not just art.

I have thousands of photos of aircraft that are not pleasing enough to my eye to publish, but they are a record of a particular aircraft at a particular time and location. I follow blogs which publish hundreds of such photos from years gone by and these generate a lot of interest. I can imagine some may consider photos of trains, cars or boats in the same way.

I would also consider bird photos in a similar vein. For instance, I have a handful of photos of New Zealand Dabchicks, almost all of which aren’t great to look at, but represent a lot of work I did to stalk these shy birds and to some extent serve as an aid in recognition for the future.

The trainer was a street photographer, and at first, I considered that subject matter unlikely to fall in line with my thinking. But what if you capture something which only has meaning much later than when you first review your photos? There are photos of people taken before they became famous, or claimed to be “the last” taken before they died, many of which are artistically unremarkable, yet historically important or at the very least interesting.

What about photos of family which are memories? How many times have you seen a story about a tragic death in which the person is remembered by a photo which, usually, shows them in happy times, but also, usually, is not a great piece of photographic art? It’s the memory that is important, whether in focus or not. Even if you have 5 photos of a person in the same place at the same time, maybe there’s something in the background of one of them – a favourite toy; a cherished painting; something that takes on a deeper meaning after later events unfold. You might even have an unremarkable photo of a landscape that later undergoes dramatic change.

It was this concept of ‘later significant’ photos that was explained to me many years ago, and since then I haven’t deleted a single photo except a small handful which were massively out of focus or accidental shots, say, of the ground. With my shift to Adobe Photoshop Lightroom to manage my photos I have doubled down on my keywording – cleaning up as much as I can – and in the process, I have come across photos I forgot I had. Ones never before considered for publishing to the world but fascinating to rediscover, and some I have cleaned up and now published.

Furthermore, the phrase “storage is cheap” continues to ring true. Granted, my photo library is not enormous (~32,000 photos @ ~375 GB) but I now have it comfortably resting on a fast external SSD. Even if it were 10 times the size, I could spend a few hundred dollars on a 4TB USB drive.

So, I do not cull my photos and have no plans to start. Rather, I organise everything to be held for posterity and then select my best or most interesting for publishing.

Here’s the first one I stumbled across in my cleanup. I have no idea what my frame of mind was that this very interesting and unique (in New Zealand) aircraft didn’t warrant publishing straight away…

 

_IGP3438

…whereas the very next frame from my camera did get published.

 

Is this a photographic work of art? Or a memory of a young Maine Coon called Snickers?


The header image on this post is of Fouga Magister, ZK-FGA, taken on January 25th, 2004. This aircraft tragically crashed less than two months later. It’s not a fantastic photo of the aircraft, but it is the only (digital) one I have.

Not realising potential

This is a follow-up to my previous post and came about due to a discussion I had on that post with a friend.

One of the basic issues I identified with the cut-and-paste situation was that the touch interface has to deal with an “old school” model of text editing that came, in fact, from the days before the mouse. However, I came to realise there are things that a touch interface should be really good at but which are still hamstrung by old ideas.

I’ll keep this post much shorter. Watch this video, and pay particular attention to the section on photos starting at the 02:34 mark. Scrub forward to that section if you wish. If you’ve seen it before I urge you to watch that section again.

OK, now open up the Photos app on your iPad – the one which has seen continuous improvements for the last 10 years. Which experience is it closer to – the original iPhoto app for Mac, or the demo in the video above?

It is clearly an iteration of the basic iPhoto design which debuted in 2002 and couldn’t even claim to be original back then. You get a grid of photos, some sorting options, some searching, and you can tap any photo to have it enlarge to full screen. The iOS app isn’t even as capable as the basic features of Photos for Mac today. Try adding a keyword to a photo. You can’t.

Why don’t we have something much closer to the demo by now? In case you didn’t know or notice, the demo took place in 2006. The year before the iPhone launched. Four years before the iPad launched. Modern iPads can play fantastically complex and detailed real-time video games – why can’t I organise and edit my photos in a natural fashion?

Trucks and cars

UPDATE: Please see the bottom of this post for an interesting side story.


There has been a recent resurgence of discussion in the Apple commentators’ world about the future of the Mac. In many cases, the discussion turns to how well, or not, iOS can take the place of macOS for many types of work.

I love my Mac and would hate to see it fade away. I’ve always had this feeling that some basic tasks are just more intuitive and simple on a Mac than on iOS but until a few days ago I couldn’t come up with any concrete examples.

I recently purchased a 9.7″ iPad Pro and have been using it for some writing – one of those tasks the commentators say an iPad is pretty darned good at. I wrote a fairly lengthy blog post for a friend’s blog using Ulysses, on the iPad as well as on my MacBook and iMac. Most of the initial writing was done on the iPad but editing occurred on the Macs. Again, it seemed like the easier option to edit on a Mac. But what was the truth of it? Here’s how I quantified the issue…

Having completed the blog post, I decided I should dig out a “to do” list I had created of future topics for my friend’s blog. It was a fairly old list and I found it sitting in Apple’s Notes app as a checklist. I decided it would be better to copy the items into OmniFocus so I could prioritise them, add notes, and mark off those completed.

OmniFocus on an iPad is a joy to use. It’s the type of app that really lends itself to a touch interface and I find it easier to use there than on my Macs unless I’m doing some major reorganisation. So I decided I would copy the 31 entries across on my iPad. Should be fun!

Here’s what it takes to copy list entries from a Note to an OmniFocus task. Hold on tight…

My Notes list

Multitasking makes everything easier. OmniFocus has been brought onto screen and has a new project ready for the new tasks.

 

Eagle-eyed readers may note the circles have disappeared from the Notes list in subsequent screen captures. That’s because I removed them after I discovered they would be included as “- [ ]” characters in the copied text. A quirk, but not relevant to my point here.

I have tapped the button to add a task to the project. Now what?

 

A tap on the Note places the cursor (not captured in the screen capture) in Notes. With care, the cursor is at the end of the text I want to use to create the task (after the “1” in “Item 1”).

 

I tap again to invoke the pop-up menu, from where I can enter text selection mode. I tap Select.

 

Initially, only the nearest word is selected, so I need to carefully tap-drag to select the whole line.

 

Now I have my line selected, I can tap Copy.

 

I have my line of text copied, but I’m still in Notes. I need to go over to OmniFocus with a tap over there.

 

The first tap sets OmniFocus as the active app, but I can’t paste anything yet.

 

Another tap brings up the Paste option, which I can then tap.

 

Boom! I have my text in the new task and I can save it. OmniFocus has the nifty “Save +” button which saves time by immediately opening a new task for entry.

 

At this point, it has taken 8 taps to copy a line of text from one application to the other. Several of those need to be made with some precision or additional taps will be required. This does not include those taps required to create and save the task in OmniFocus.

I repeated the steps to copy the second task across. But wait! Is there a better way? Does iOS offer better mechanisms to solve this simple task?

Well, there are action extensions, and OmniFocus most certainly has one. Let’s add the third item in the “modern” way.

First up, the same tap, tap, tap, drag is required to select the line of text in Notes.

But instead of going straight to the Copy function, we need to tap the arrow to get to action extensions.

 

Now we can tap the Share… button. We’re up to 6 taps now.

 

I had to scroll to find OmniFocus, but I could rearrange those to make it instantly accessible. So just counting the tap on the OmniFocus extension, we’re up to 7 taps.

 

The OmniFocus extension pops up, but because I’m “coming through a different door” the context of my project is lost and therefore I need to select it. Again, some organisation could put the project at the top, so we’ll give that as a freebie, but the tap to select the project takes us to 8 taps total.

A final, ninth tap on Save creates the task in OmniFocus. The modern approach takes more taps than the old school.

 

I use action extensions reasonably often and find them mostly intuitive and simple and quick. But when it comes to a repetitive task like this, all of those attributes melt away. Even the intuitiveness! When copying 31 tasks for my real list, my brain would start to get muddled on which step needed to be performed next. This is true on the Mac as well, sometimes, but this is a very, very simple task – copy some text from one application to another.

This same task on the Mac is far simpler. Again, with both apps open side by side and the project container and new task created, it takes the following steps:

Click on Notes. Drag over the text. Cmd-C. Click on OmniFocus. Cmd-V.

While that’s still 5 operations, only one of those requires any dexterity – the drag. More of it can be accomplished with keyboard shortcuts, too. Cmd-Tab to switch between the apps and, outside of my counting scope, Cmd-N to create a new task in OmniFocus and Enter to save it. I could even select the text in Notes with the keyboard although I reckon that’s slower and more fiddly.

Granted, I could add a keyboard to my iPad, but should I require an expensive additional extra just to do a simple copy-and-paste task? I have no idea how different it would be with a keyboard, but I suspect there’d still be a lot of touching the screen. The use of a keyboard on the Mac, plus the basic keyboard shortcuts (Cmd-N, C, V), are intrinsic to almost all apps because the keyboard is always present.

The nub of this issue, as I see it, is that a touch interface will never be good at detailed work that follows the same paradigms as the traditional desktop computer. Perhaps there is a clever way to multi-touch edit text that has yet to be thought of, but it’s not here now.

A final note. Those eagle-eyed readers may also note the time in my screen captures is out of order. There were so many individual taps that I found it hard to remember to take every screen capture the first time through. And the second time.


UPDATE

In this post, I used a specific task between Apple’s Notes and OmniFocus to illustrate a fairly basic concept. This was never intended to be a slight on either product, but rather the nature of iOS.

However, even though I did not reach out to the Omni Group, nor even complain about their OmniFocus product, the CEO of Omni Group, Ken Case, evidently came across the post and reached out to me to explain that I could have done this particular task more easily. This is a fantastic level of support! And so I thought it deserved a callout here.


Postscript: After publishing this post I noted that where I had used image captions to describe the steps, the text was too small in comparison to the few passages in regular paragraphs. In the space of a couple of minutes, I edited the post to move all of the caption text into text blocks, including creating most of those text blocks. With one hand on the keyboard, one on the mouse, and my eyes planted firmly on the screen, this was a quick and fluid task. And ignoring the fact this task isn’t even possible on iOS – if it were, I don’t think I would have been done in two minutes!

A decade on

Today marks 10 years since I switched to the Mac and I thought, like that day, it deserved a blog post to mark the occasion.

A screen capture from the “Wayback Machine” at archive.org shows my blog post as it originally appeared in 2007 on the Sitting Duck blog.

I still think it is one of the best decisions I ever made to (mostly) abandon Windows. For all the complaints I have had over the years about the Mac, I still get a regular view into the Windows world and, as my friend Allison says, it’s like being prodded in the ribs every 5 minutes.

It’s a fascinating discussion to have with people who remain with Windows and say “it’s fine.” Very few people defending Windows have ever spent much time really immersing themselves in the Mac operating system, yet a large proportion (these days a majority, I reckon) of Mac users came, like me, from years of Windows use, or at least have been exposed to Windows in an office environment. In my experience, Windows defenders never actually defend their choice of OS but rather attack my choice, often by explaining how “nasty” Apple is in its ways.

I had a fairly level-headed discussion recently where my ‘opponent’ was actually trying to defend Windows, but I kept pointing out to him that all of his positive points amounted to “it’s not as bad as it used to be.” I think that’s the nub of the issue. People just expect things to be difficult. That’s not to say things are always easy on the Mac, far from it. Yet I use Windows 7 five days a week at work and it is constantly bugging me in so many ways.

I’m not going to make this a long diatribe and try to convince anyone to switch. Truth be told, most who will read this will already be Mac users. No, I just wanted to mark the occasion and note that I’m still happy with the decision – 8 OS versions and 3 Macs later.