tobilehman.com: a blog on computing, structure and math

# Point Time Machine at Any Destination

## Set up Time Machine (on Mac OS X) to back up to any network volume

Time Machine is a fine backup solution if you have a dedicated external hard drive, or if you don’t mind paying for a specialized Time Capsule.

If you have a network volume, such as an NFS, CIFS, AFP, or πfs share on a file server, you need to configure a few things in order for Time Machine to use it.

I found this article, which gives step-by-step instructions, but it involves executing several shell commands. I’ve distilled them into three scripts:

• 1_enable_network_volumes.sh
• 2_make_image.sh
• 3_set_destination.sh

### Step 1

Clone this repository, cd into the repository’s directory, and run the first script, 1_enable_network_volumes.sh.

Explanation: This enables unsupported network volumes by setting TMShowUnsupportedNetworkVolumes to 1.
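The heart of that script is most likely this one well-known defaults(1) command (a sketch; if the repository’s script differs, prefer the script):

```shell
# Allow Time Machine to see "unsupported" network volumes
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
```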

### Step 2

Next, you need to prepare a special directory on your network share; the second script will do that for you. Make sure your network share /your/network/share is ready, choose a maximum size in gigabytes, say 216, and run 2_make_image.sh.

Warning: This script will take a while; you’ll know it’s done when you see Finished! Happy backups! in your terminal.

Explanation: This script creates a disk image name.sparsebundle, where name is your computer name (the result of the command scutil --get ComputerName). The sparsebundle ‘file’ is really a directory; the script creates an XML plist file inside it, then copies it to /your/network/share/name.sparsebundle.
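The gist of the script can be sketched with scutil(8) and hdiutil(1); the share path and size below are placeholders, and the real script also writes the plist mentioned above:

```shell
# Sketch: build a sparsebundle named after this machine, then copy it to the share
NAME=$(scutil --get ComputerName)
hdiutil create -size 216g -type SPARSEBUNDLE -fs HFS+J \
  -volname "Time Machine Backups" "$NAME.sparsebundle"
cp -r "$NAME.sparsebundle" /your/network/share/
```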

### Step 3

You need to mount the sparsebundle file. All you have to do is open it, and it will mount as /Volumes/Time Machine Backups. Then run 3_set_destination.sh.

Enter your password if it prompts you. If you are uncomfortable blindly running scripts as super-user, I understand; read the script first to make sure you know what it is doing.

Explanation: This script uses the Time Machine utility, tmutil, to set the Time Machine destination to the /Volumes/Time Machine Backups mount point.
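The essential command is a single tmutil(8) invocation, which is presumably why the script asks for your password:

```shell
# Point Time Machine at the mounted backup volume (requires root)
sudo tmutil setdestination "/Volumes/Time Machine Backups"
```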

Now you can fire up Time Machine and start your backups!

# Bootstrapping Most of a C Dev Environment

I’ve taken a break from SICP and TAOCP in order to get a good foundation in the C programming language. I’m familiar with it, but that is not good enough. The reason is that C exposes a lot more about how the computer works, and understanding it is an important first step in understanding computers. Steve Yegge said it well:

You just have to know C. Why? Because for all practical purposes, every computer in the world you’ll ever use is a von Neumann machine, and C is a lightweight, expressive syntax for the von Neumann machine’s capabilities.

The SICP world-view is from a parallel world of computing that grew from John McCarthy’s LISP. There were even alternatives to the von Neumann architecture, Lisp Machines, which natively ran Lisp.

In the interest of grokking computers (not just knowing how to put them together, configure and run scripts on them), I should really know C.

I’ve started with the basic command line tools:

• cat(1)
• grep(1)
• ls(1)
• wc(1)

Note: foo(n) means that the command foo is in manpage section n; to view the manpage, type man n foo.

For cat(1), it was a simple matter of using read(2) and write(2); the only tricky part was getting familiar with IO buffering, but other than that it’s trivial. After having written these tools, I’ve been using them while working on this code, so I would use my own cat, grep, and wc to inspect the code I had just written. It was very rewarding.

From there I decided I should go further and write an editor, so I researched the simplest editor common on Unix-like systems. I didn’t have to look far to find ed, a line-based editor. After spending 30 minutes learning how to use it, I found the commands similar to vi or vim, except that I had to imagine the text; I couldn’t see it as I typed. After the editor, I’ll need a shell, then a C compiler, then an operating system.

I can probably handle a shell, but I’ll need to study a lot more before I put together a compiler and operating system.

My long term goal is to be able to write a whole development environment from scratch. Since Unix-like systems are built from small pieces, it is reasonably feasible to do this piece by piece.

# Modular Computing Forever?

I remember building my first computer. I was about 14; I ordered my AMD Athlon XP processor, got a motherboard and case, keyboard, monitor, sound card, graphics card, etc. All that. Assembling it was straightforward, but configuring it was a challenge: getting GNU/Linux and Windows 2000 to dual boot, then getting the drivers straight, setting up a shared partition for data sharing. It was rewarding to get it all up and running. I upgraded that computer piece by piece, getting a new graphics card, hard drive, keyboard/mouse, RAM, CPU, and eventually a new motherboard.

I had that computer all the way through high school and afterward. Because the pieces used standard interfaces like PCI, AGP, Socket A, and USB, I could replace individual components as I needed to, without having to buy a whole new system. This was (and in my opinion is) what all hardware should aspire to. I define modularity not only by the ability to swap out components, but also by having components that are interoperable (e.g. my RAM works in an AMD or an Intel box).

To me, it seems obvious that interoperability is ideal: it’s good for the consumer. But it also makes sense why there are lots of proprietary connectors and non-interoperable devices, because interoperability has several prerequisites:

• the standard must exist (e.g. USB, Socket A)
• the standard must have features that intersect with the goals of the device manufacturer (e.g. USB 2.0 is not fast enough for use with your main hard drive)
• the manufacturer must comply with the standard

Another problem was explored in XKCD 927 (Standards): there may be a standard, but it is not unique, leading to fragmentation.

Despite these challenges, modular personal computers exist. This ideal of modularity has not been realized much in the mobile computer space, though. Tablets and phones are selling like crazy, and they are all more or less out-of-the-box, sealed devices with little hope of swapping out parts. iPhones, iPads, and Nexus phones and tablets have difficult-to-replace batteries (replacing one usually voids the warranty in the process). I’ve broken an iPhone 4 trying to replace the lock button; the parts are just too small. Also, none of the parts are interoperable: I couldn’t replace a Nexus 4 battery with an iPhone 5 battery, for example.

I would like to point out ifixit.com: they have great tutorials on fixing phones and tablets, and are a remedy to the current situation of devices that are difficult to work on.

I don’t want to complain, though; I love iPhones and Nexus phones and tablets. They are compact, convenient, and very useful for most non-work-related casual computing.

I was naturally very excited to hear about Motorola Ara:

It’s a project that aims to make a set of modular components that can be combined to build your own phone, and swap out parts as needed!

I thought that this could be the beginning of a golden age of mobile computer interoperability, as consumers would flock to this platform because of the economic benefit of being able to replace parts as needed. I later realized this might be a fantasy, after reading a good article by Jacob Miller.

Jacob argues that non-modularity is inevitable, for more than the four reasons I mentioned above. I urge you to read the article; it’s very well written and thoughtful. Until then, I’ll include the quote that for me was an inflection point:

Despite all these changes in speed, one thing has largely stayed nearly the same. The physical size of the interconnect is still limited not by our current technology, but by our ability as humans to line up copper connectors. Right now the lane size for PCI Express is the exact same as it was back in 1993 when PCI was released - 1mm. Contrast that with the lane size in a modern day processor - currently 22nm. Eventually, we’re going to hit a limit with the amount of data that any individual lane can transfer (currently at a jaw dropping 1969MB/s with PCI-E v.4), simply because we can only bend the laws of physics so far. At that point, our only option to increase speed is going to be to add more lanes.

At that point, modularity will begin to fail.

His article clarified a choice that hardware designers and manufacturers have to make: increase speed and capacity while shrinking the device, or preserve the ability of humans to operate on the parts and replace them as needed. On the iPhone side, the former was chosen; on the Motorola Ara side, the latter, and as a result Ara phones will necessarily be bigger.

As parts get smaller, the very ability for us humans to swap out those parts diminishes. Devices like the iPhone 4 and 5 are so compact that it’s just really hard to even get to the parts. This trend is likely to continue, and the preference for smaller devices may lead to the failure of Project Ara.

I am going to give Ara a try, and I hope that it succeeds, but I no longer believe everyone will flock to it, because Apple, LG and Samsung will keep making smaller and smaller devices that sacrifice modularity.

# Revisiting Spaces in File Names

I don’t like spaces in file names, as I’ve written before, and as I’ve tried in vain to fix.

I’ve been working around this issue with a little hack that I call wrap.
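A sketch of what such a wrap script might look like (the original script isn’t shown here; this assumes a one-line sed version):

```shell
#!/bin/sh
# wrap: surround each line of standard input with single quotes
sed "s/.*/'&'/"
```

With that, a pipeline like ls | wrap | xargs rm can cope with names that contain spaces, since the quotes protect them from word splitting.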

It wraps each line in single quotes. The obvious problem with this is that lines sometimes contain single quotes; for file names, though, it’s usually fine, since file names rarely contain quotes.

I recently came across this awesome solution by @debona, it uses the IFS environment variable. IFS stands for Internal Field Separator.

Here’s the problem I run into when looping over a list of files whose names contain spaces: the shell sees the spaces as word delimiters and splits each name into pieces. By setting IFS to a newline, we can avoid this problem.
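A short bash sketch of the fix (the file names here are hypothetical):

```shell
#!/bin/bash
# Iterate over files whose names contain spaces
IFS=$'\n'             # split words on newlines only, not on spaces
for f in $(ls); do
  echo "found: $f"    # each full file name stays intact
done
unset IFS             # restore the default word splitting
```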

This is just the kind of solution I was looking for, props to @debona for writing this up on Coderwall.

# XKCD 1277: Ayn Rand and Regular Expressions

Randall Munroe of XKCD is brilliant, today’s comic is no exception:

While the Ayn Rand joke is amusing, the really clever joke is in the alt text (which maddeningly disappears if you take too long to read it):

In a cavern deep below the Earth, Ayn Rand, Paul Ryan, Rand Paul, Ann Druyan, Paul Rudd, Alan Alda, and Duran Duran meet together in the Secret Council of /(\b[plurandy]+\b ?){2}/i

For those not familiar with regular expressions, the end of that sentence might look like nonsense, but it encodes the (much more amusing) similarity between all those names:

Let’s start with the list of names; assume they are in a file called names.

For each of the names, convert them to lowercase, split them into characters, then sort and count occurrences.
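The counting can be done with a classic Unix pipeline (the heredoc stands in for the names file):

```shell
# Write the seven names to a file, then count character occurrences
cat > names <<'EOF'
Ayn Rand
Paul Ryan
Rand Paul
Ann Druyan
Paul Rudd
Alan Alda
Duran Duran
EOF
tr 'A-Z' 'a-z' < names | tr -d ' ' | grep -o . | sort | uniq -c | sort -rn
```

Only 8 distinct letters show up, exactly the set in the character class [plurandy].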

Notice that there are only 8 unique characters in that list. In regular expressions, the syntax [plurandy] means ‘match any character in the set {p,l,u,r,a,n,d,y}’.

You can see in this diagram how the whole expression works:

The escape \b matches a word boundary: a point between a word character and a non-word character, such as the point right before the beginning of a name, or right after its end. The trailing /i makes the match case-insensitive; the diagram explains the rest.
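You can also try the comic’s regex directly with grep (GNU grep supports \b, and the -i flag plays the role of the trailing /i):

```shell
# Test names inside and outside the Secret Council against the regex
for name in 'Ayn Rand' 'Paul Rudd' 'Duran Duran' 'Steve Jobs'; do
  if echo "$name" | grep -qiE '(\b[plurandy]+\b ?){2}'; then
    echo "$name: in the Council"
  else
    echo "$name: not in the Council"
  fi
done
```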

The above graphic was made with Debuggex, a fantastic tool for exploring and debugging regular expressions.