How to build a Fisher-Price-style “My First Neural Net”

Note: this was first published in early 2019. Between then and early 2020 Keras and/or TensorFlow changed their default configurations, and the neural net built in this post now offers radically better accuracy even before the manual tuning described in Step 3. I’ve left the post unaltered because tuning is an important concept to understand even if it’s no longer strictly needed for this example. The pace with which neural net libraries are improving is mind-blowing!

Introduction

fisher-price
Remember the Fisher-Price “My First…” toys from the 70s and 80s? They were super-simple versions of common toys or household objects.

Let’s build a Fisher-Price-style My First Neural Net: the simplest possible piece of software that qualifies as a full-fledged neural net. Even though it will be as stripped down as possible, it will be capable of actual classification work just like an industrial-grade neural net.

Although no experience building neural nets is required to get the code up and running, this project will make more sense if you have some understanding of the basic concepts of neural nets:

  • nodes
  • weighted connections between nodes
  • hidden layers
  • output layers
  • prediction through forward-propagation
  • training through back-propagation

If you need an introduction to or a refresher on any of these, see the first section of Wikipedia’s Artificial Neural Network entry.

neural_network
Generic neural net architecture: this is even simpler than the super-basic neural net we’ll build

It will also help to be familiar with the basics of Python, and with writing and running Python scripts from an editor and the command line, in IPython, or in a Jupyter notebook.

The rest of this post will lead you through three tasks:

  1. Setting up our development environment
  2. Building the simplest neural net possible
  3. Making the neural net more accurate

The code for making our Fisher-Price-style My First Neural Net is spread throughout this post, but it’s also presented at the end of Steps 2 and 3 to make it easy to copy onto your own computer.

Step 1: Setting up our development environment

I’ll assume that you’re starting from a reasonably clean Linux or macOS machine. I haven’t tested these steps on Windows, but they’ll probably work with very few modifications.

This step collects the Lego blocks that we’ll snap together to make a neural net. We won’t actually assemble them until Step 2.

a. Install Miniconda

Thanks to the magic of the Miniconda package manager for Python, setting up our development environment is trivial.

Use the official instructions to install Miniconda.

Note that we could use Miniconda’s more full-featured cousin Anaconda instead, but Miniconda does everything we need and its stripped-down feature set makes it easier to use.

b. Make a new Python environment

In a terminal, use Miniconda to make a new Python environment to play around in so we don’t corrupt the rest of our system.

$ conda create -n fisher-price
$ conda activate fisher-price

Accept the default values for any prompts.

c. Install the Keras neural net library

We’ll build our neural net using the Python Keras library, which is a user-friendly wrapper on top of Google’s TensorFlow library. As of early 2019, Keras is probably the most accessible neural net package for newbies. It’s mature, robust, and has reasonable documentation. Install it with a single command in the terminal.

$ conda install keras

Accept the defaults for any prompts, watch Miniconda install twenty or so dependencies, and we’re done!

Step 2: Building the simplest neural net possible

First, an important note on the accuracy rates discussed in this post. When we create a new neural net, all of its weights are set to random values. As we train it, those weights change and ultimately converge on values that give the neural net its predictive power. But because different neural net instances start from different random weights, two nets trained on exactly the same data will end up with slightly different final weights and slightly different predictive accuracy. That’s just the nature of the beast when it comes to neural nets.

This means that if you run this code in your own environment, you can expect similar, but not identical accuracy to what I see in mine.
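
If you want your runs to be more repeatable, one common trick is to seed the random number generators before building the model. Here’s a minimal, hedged sketch that assumes a TensorFlow backend; the exact calls depend on your Keras and TensorFlow versions, and GPU training can still introduce some nondeterminism.

# optional: seed the random number generators for (mostly) repeatable runs
# assumes a TensorFlow 1.x backend; in TensorFlow 2.x use tf.random.set_seed(42)
import random
import numpy as np
import tensorflow as tf

random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)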

Now, let’s see how quickly we can make a real neural net with a Python script. Thanks to Keras, this takes remarkably little code. Let’s walk through each line.

At the very top of a new Python script, load Keras.

import keras

Next, load some data for our neural net to train on. The most common “hello world” dataset for learning about neural nets is MNIST, which comprises grayscale images of handwritten digits. A neural net can be trained to categorize each picture as a handwritten 0, 1, 2, or whatever. The MNIST dataset provides 60,000 images to train the neural net and 10,000 images to test the neural net’s accuracy.

But you don’t want to use the same dataset every other new data scientist uses! Instead, let’s use the Fashion MNIST dataset. This is exactly the same as MNIST in format (same number of pictures, same size of pictures, same grayscale), but consists of articles of clothing instead of digits. A neural net trained on Fashion MNIST learns to identify what category of clothing (t-shirt, dress, handbag, etc.) each picture belongs to.

Fashion MNIST images
A tiny portion of the images included in the Fashion MNIST dataset

Keras provides a helper method for importing the Fashion MNIST training data. We’re actually importing four separate sets of data with one command:

  • images to train the neural net on
  • category labels for those training images (“t-shirt”, “dress”, etc.)
  • images to test the neural net with once it’s been trained
  • category labels for those test images

Add this code to our script:

(train_images, train_labels), (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()

Keras hands us the training data as a three-dimensional array of size 60,000 x 28 x 28 (60,000 images, each 28 pixels by 28 pixels). To feed this data to our neural net, we first have to convert it into a two-dimensional array of size 60,000 x 784 (note that 784 = 28 x 28). We need to “reshape” the data this way because the layers we’ll use expect each image as a flat list of numbers rather than a 28 x 28 grid.

We need to do exactly the same thing for the 10,000 images that we’ll use for testing the trained neural net.

Here’s more code to add. (For the rest of this post, any code presented is meant to be added to our script.)

train_images = train_images.reshape(60000, 784)
test_images = test_images.reshape(10000, 784)
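
If you want to convince yourself that the reshape did what we expect, here’s a quick, optional sanity check you could add right after the two lines above:

# optional sanity check: each image is now a flat list of 784 grayscale values
print(train_images.shape)  # (60000, 784)
print(test_images.shape)   # (10000, 784)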

Now let’s create what’s called the “model.” This is the neural net itself, with nodes arranged in layers and weighted interconnections between various nodes.

We’ll use Keras to make a sequential model, the simplest kind of Keras model. In a sequential model, signals flow in one direction: from the input nodes, through one or more layers of hidden nodes, and finally to the output nodes. This model doesn’t include any fancy bells or whistles: it’s a plain vanilla neural net architecture.

neural_net = keras.models.Sequential()

It’s time to add some layers of nodes to our network.

First, let’s add a hidden layer of 100 nodes.

neural_net.add(keras.layers.Dense(100, input_dim=784))

There’s a lot going on in this line, so let’s step through it.

Dense is a type of layer that connects every node it contains to every node in the layer before it. This is the simplest and most common kind of layer, and it’s the basic building block of most neural nets.

100 specifies the number of nodes in this layer. I picked this number more or less at random; we’ll experiment with tweaking it later.

input_dim specifies how many pieces of data feed into each node of this layer. Typically only the first layer in a neural net needs this parameter. Our images start out as 28 x 28 grids of pixels, but we already flattened each one into a one-dimensional list of 784 numbers with the reshape calls above; each of those 784 numbers is a grayscale value from 0 to 255, and all 784 of them feed into each of the 100 nodes in this layer.

neural_net.add(keras.layers.Dense(10, activation='softmax'))

This second layer is the last layer in our network, so it will serve as the neural net’s output layer. Making the output layer Dense is standard practice, and it makes sense once you remember that a Dense layer’s connections run back to the layer before it, so it doesn’t matter that there are no layers after it.

This layer has ten nodes, each representing one clothing category label (t-shirt, dress, etc.).

The activation parameter, set to softmax, scales the outputs of the nodes in this layer so they all add up to 1.0, which lets us read them as confidence scores. If the neural net thinks a given image is a dress, the output node that represents “dress” might have an activation value of 0.9, whereas the output nodes that represent “handbag” and “t-shirt” might have activation values closer to 0.05.
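
To make the softmax idea concrete, here’s a tiny standalone sketch (plain NumPy, not part of our script) showing how softmax turns a list of raw scores into values that add up to 1.0:

# standalone illustration of softmax, not part of the neural net script
import numpy as np

raw_scores = np.array([2.0, 1.0, 0.1])  # made-up scores for three categories
exps = np.exp(raw_scores)               # exponentiate each score
softmax_values = exps / exps.sum()      # normalize so the values sum to 1.0
print(softmax_values)                   # roughly [0.66, 0.24, 0.10]
print(softmax_values.sum())             # 1.0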

Next we “compile” the network, which converts it from a blueprint of a network into a runnable network. We also pass a few “hyperparameters” to the network, which tune its operation.

neural_net.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

The optimizer parameter tells the network what algorithm to use when training. “Adam” is a general-purpose algorithm that’s usually a good place to start.

The loss parameter tells the network how to measure its accuracy, which not only makes it possible for us to understand how successful the training has been, but also helps with the training process itself. The exact details are unimportant, but “sparse categorical cross-entropy” works well for this type of classification task.

The final parameter, metrics, is optional. The way we’re using it here gives us ongoing reports of how the network’s accuracy improves as it is trained.

Now that we’ve specified and compiled the network, we need to train it using the Fashion MNIST training data. Once again, Keras makes this simple.

Although a modern CPU should be able to run this in less than a minute, this step could take much longer (hours or even days) if we had a more complicated neural net or more training data.

neural_net.fit(train_images, train_labels)

Now it’s time for the big payoff! Let’s feed test data to the trained neural net and see how accurately it classifies fashion images it’s never seen before.

print(neural_net.evaluate(test_images, test_labels))

Unless you’re following along in Jupyter or IPython, you’ll need to run the whole script at this point.

$ python neural-net.py # replace "neural-net.py" with your Python script name

Your output should look similar to this:

Epoch 1/1 
60000/60000 [==============================] - 5s 78us/step - loss: 1.0321 - acc: 0.6291
10000/10000 [==============================] - 0s 25us/step
[14.50628568725586, 0.1]

This is cryptic, but the important part is the second number in the last line of output: 0.1. That means that our trained neural net correctly identified… 10% of the fashion images in the test set.

The bad news: that’s pathetic.

The good news: there are several simple ways to modify our basic neural net to improve its accuracy. We’ll see how high we can get the accuracy with some simple tweaks.

But first, let’s catch our breath and take a look at all the code we have so far, all in one place.

# load the neural net library
import keras

# load the images and category labels we'll use to train the neural net, and then to test its accuracy
(train_images, train_labels), (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()

# flatten the images from two-dimensional arrays into one-dimensional arrays
train_images = train_images.reshape(60000, 784)
test_images = test_images.reshape(10000, 784)

# specify the neural net's architecture: one hidden layer and one output layer
neural_net = keras.models.Sequential()
neural_net.add(keras.layers.Dense(100, input_dim=784))
neural_net.add(keras.layers.Dense(10, activation='softmax'))

# convert the neural net blueprint into a runnable neural net
neural_net.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# using our dataset of training images, train the neural net by adjusting the weights between its connections
neural_net.fit(train_images, train_labels)

# using our dataset of test images, check our neural net's accuracy
print(neural_net.evaluate(test_images, test_labels))

Step 3: Making the neural net more accurate

For complicated technical reasons, neural nets often improve dramatically when each of their layers has something called an activation function, which modifies the activation level of each node. Remember that our neural net has two layers. We’ve already specified the softmax activation function for the second (output) layer, but we didn’t specify an activation function for the first (hidden) layer. Let’s add one to the first layer and see if that gives us better results. We’ll use tanh, a common activation function and a good default option.

Replace this line:

neural_net.add(keras.layers.Dense(100, input_dim=784))

With this:

neural_net.add(keras.layers.Dense(100, input_dim=784, activation='tanh'))

Let’s re-run and see how it does.

Epoch 1/1
60000/60000 [==============================] - 5s 78us/step - loss: 1.0321 - acc: 0.6291
10000/10000 [==============================] - 0s 28us/step
[0.9260841958999634, 0.6585]

Wow. Just adding the tanh activation function to the hidden layer catapulted accuracy to 66%!

Here’s something else to try: neural nets often work best when each piece of input data (in this case, the grayscale value of each pixel in an image) ranges from 0 to 1.0. Currently our image data ranges from 0 to 255. Let’s scale our data (for both the training and test images) so it fits in the 0 – 1.0 range, and see how that affects our accuracy.

Right after this line:

(train_images, train_labels), (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()

Add these lines:

train_images = train_images / 255.0 
test_images = test_images / 255.0

And re-run.

Epoch 1/1 
60000/60000 [==============================] - 5s 82us/step - loss: 0.4803 - acc: 0.8281 
10000/10000 [==============================] - 0s 31us/step 
[0.4426993363380432, 0.8401]

Better still: we’re up to 84%!

Next, let’s see what happens if we train the system on the training images not just once, but multiple times. Since training adjusts the weights of the connections only a little bit with each batch of training data, maybe it will continue to improve its accuracy if we let it take several passes at the same set of training data.

We do this by using the epochs parameter during training. One epoch is a single pass through all the training data, so specifying five epochs means we’ll train the neural net on the same training data five times.

Replace this line:

neural_net.fit(train_images, train_labels)

With this:

neural_net.fit(train_images, train_labels, epochs=5)

And re-run.

Epoch 1/5
60000/60000 [==============================] - 5s 83us/step - loss: 0.4766 - acc: 0.8290
Epoch 2/5
60000/60000 [==============================] - 5s 79us/step - loss: 0.3699 - acc: 0.8661
Epoch 3/5
60000/60000 [==============================] - 5s 80us/step - loss: 0.3374 - acc: 0.8765
Epoch 4/5
60000/60000 [==============================] - 5s 81us/step - loss: 0.3138 - acc: 0.8854
Epoch 5/5
60000/60000 [==============================] - 5s 80us/step - loss: 0.2966 - acc: 0.8910
10000/10000 [==============================] - 0s 33us/step 
[0.36925499482154844, 0.8619]

More improvement: we’re at 86%, meaning that our neural net correctly identifies almost nine out of every ten fashion images from the test set of 10,000 images.

It’s important to understand that the higher the accuracy gets, the harder it becomes to eke out even better accuracy, and the more important even tiny gains become. That’s why a 2% improvement from 84% to 86% is nothing to sneeze at.

Let’s declare victory and stop here.

In the interest of science, I did try a few other tweaks (a couple of them are sketched in code right after this list), but none of them improved the accuracy of our Fisher-Price neural net above what we’ve already achieved:

  • more nodes in the hidden layer
  • more hidden layers
  • different activation functions
  • more training epochs
  • different training batch sizes (where batch size is the number of images we feed through the neural net during training before adjusting the weights of the neural net’s connections)
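
For reference, here’s a hedged sketch of what a few of those tweaks might look like in Keras code. The specific numbers are arbitrary choices for illustration, not recommendations.

# illustrative only: a wider hidden layer, an extra hidden layer, a different
# activation function, more epochs, and an explicit batch size
neural_net = keras.models.Sequential()
neural_net.add(keras.layers.Dense(200, input_dim=784, activation='relu'))  # more nodes, relu instead of tanh
neural_net.add(keras.layers.Dense(100, activation='relu'))                 # a second hidden layer
neural_net.add(keras.layers.Dense(10, activation='softmax'))
neural_net.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
neural_net.fit(train_images, train_labels, epochs=10, batch_size=64)       # more epochs, explicit batch size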

I suspect we could get even better accuracy with a more complicated neural net architecture, but that’s a topic for another blog post.

Here’s all the code we ended up with after tuning.

# load the neural net library
import keras

# load the images and category labels we'll use to train the neural net, and then to test its accuracy
(train_images, train_labels), (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()
train_images = train_images / 255.0
test_images = test_images / 255.0

# flatten the images from two-dimensional arrays into one-dimensional arrays
train_images = train_images.reshape(60000, 784)
test_images = test_images.reshape(10000, 784)

# specify the neural net's architecture: one hidden layer and one output layer
neural_net = keras.models.Sequential()
neural_net.add(keras.layers.Dense(100, input_dim=784, activation='tanh'))
neural_net.add(keras.layers.Dense(10, activation='softmax'))

# convert the neural net blueprint into a runnable neural net
neural_net.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# using our dataset of training images, train the neural net by adjusting the weights between its connections
neural_net.fit(train_images, train_labels, epochs=5)

# using our dataset of test images, check our neural net's accuracy
print(neural_net.evaluate(test_images, test_labels))
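
If you’d like to watch the trained net classify a single image, you could append something like the following to the script. This is a hedged sketch rather than part of the original recipe: the category names listed here follow the label order documented for Fashion MNIST, but it’s worth double-checking them against the dataset’s documentation.

# optional: classify one test image with the trained neural net
# (the category names follow Fashion MNIST's documented label order)
import numpy as np

category_names = ['t-shirt/top', 'trouser', 'pullover', 'dress', 'coat',
                  'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']

predictions = neural_net.predict(test_images[:1])  # shape (1, 10): one score per category
best_guess = np.argmax(predictions[0])             # index of the highest-scoring category
print('predicted:', category_names[best_guess], '- actual:', category_names[test_labels[0]])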

Conclusion

Think about what we just did: with a dozen lines of Python code we created a neural net that categorizes pictures of clothing with 86% accuracy. This would have been unthinkable just ten years ago.

Although none of the concepts involved with neural nets are terribly difficult to understand, there are a bewildering number of ways you can build and tune a neural net to perform optimally for your particular categorization or prediction task. One unusual aspect of neural net engineering is that it’s as much art as it is science. In many cases, we don’t fully understand why certain neural net architectures or tunings perform better than others. The standard neural net development workflow consists of starting with a good general-purpose architecture and a best-guess set of hyperparameters, and then experimenting with variations as you watch the system’s accuracy move up and down. Once you hit on a combination that achieves the accuracy you need, you’re done. It’s hard to think of another branch of computer science that works in exactly this way (although maybe performance optimization comes close).

If you want to explore further, I recommend the official Keras documentation and tutorials, or the excellent book Deep Learning with Python by Francois Chollet, the lead designer of Keras.

 

Why I study Swedish / Varför studerar jag svenska

Abba
Probably the four most famous Swedes (including the one who’s Norwegian)

I’ve been studying the Swedish language (and to a lesser extent, Swedish culture) for about five years.

Why am I studying a language? I’m doing it partly to prove to myself that I really can achieve a minimal degree of competence in a language (after having failed several times with other languages in school), and partly because I’m embarrassed at being a monolingual American (though in another blog post I’ll go into reasons why we Americans shouldn’t be too hard on ourselves on that front). I’m not studying a language to help my career or connect me to ancestry or open up new parts of the world that otherwise would have been closed to me. Swedish won’t do any of those things.

Why Swedish, then? There are lots of good arguments against it: it’s spoken by only about ten million people (fewer than the combined population of Oregon and Washington), I have zero Swedish ancestry, I don’t get a head start from knowing any other Scandinavian language, and virtually all Swedish speakers under 40 speak great English.

There are a number of reasons I picked it. Each one individually might not seem too compelling, but add them all up and it doesn’t seem like such a crazy choice.

  • The United States Foreign Service Institute (the agency that trains our diplomats) puts Swedish in the “easiest” category of languages to learn for English speakers.
  • There are plenty of books and online courses on Swedish, so I don’t have to resort to formal (and expensive) classes.
  • The Swedish government sponsors two daily news podcasts (one in simple Swedish, the other in really simple Swedish) that are intended for immigrants new to the country, but are perfect for me too.
  • I respect Swedish culture, with its heavy liberal slant, emphasis on moderation in all things, and tradition of humane treatment of all sorts of people.
  • I’ve already failed to make any useful or satisfying progress at learning Spanish (four years of high school), German (two years in college), and French (one year in grad school).
  • As a non-traditional language for an American to learn, it’s something I can feel a sense of ownership toward, which provides ongoing motivation.
  • Since so many Swedes speak such great English, it feels respectful to return the favor.
  • I’m a big fan of Swedish (well, all Scandinavian) mysteries in TV, movie, or book form.
  • Danish sounds horrible (search it out on YouTube if you haven’t heard it), Norwegian confusingly has two written forms, Icelandic is spoken by only a handful of people, and Finnish is in a totally different league of difficulty (it’s seriously hard).
  • People keep telling me I look Swedish.

Language knowledge falls into four categories: reading, writing, speaking, and listening. Every language I’ve studied has presented an obvious and consistent order of difficulty: reading is by far the easiest, listening is significantly harder, writing is slightly harder still, and speaking is basically impossible unless I live in an area where I can get constant feedback and correction. (As an aside, I wonder if other people with differently wired brains would rank these skills in the same order?) Since I won’t be able to live in Sweden or a Swedish-speaking part of Finland any time soon, I don’t consider spoken competence to be a realistic goal, and it’s not something I’m working toward.

I do have concrete goals for the other three skills. Some day I want to be able to read or listen to the harder of the two Swedish-for-dummies news broadcasts (which are all transcribed into written form too thanks to the state-run radio network) and understand enough that I don’t have to look any words up in order to get the big picture and most of the details. I also want to be able to read a normal, not-for-dummies Swedish newspaper and get the general sense of what’s going on in the world (even though many of the details will escape me). As for written skills, I’d like to reach a level of competence where I could in theory (not in practice!) be a pen pal for a Stockholm fourth grader.

So that’s the rationale behind this project, and a summary of my goals. Future blog posts will cover progress updates (there’s a lot to report after five years of study!), changes to these goals, and observations about the language learning process.

January resolution: sleep

sleeping cat

I’m trying a new approach to resolutions this year: introduce one small resolution on the first day of each month and try to maintain it for the rest of the year, so the resolutions stack up as the months go by.

I’ll talk about all of the resolutions eventually, but for this post I’ll explain that my resolution for January of 2018 was to be in bed for eight full hours, six nights per week. Since it takes a while to fall asleep once I’m actually in bed, this rarely results in eight hours of actual sleep. But whatever: it’s way more sleep than I usually get. I’ve probably averaged between five-and-a-half and six-and-a-half hours for… decades? Honestly, the last time I was prone for eight hours was probably fifteen years ago.

There’s mounting evidence not only that we’re almost all underslept, but also that lack of sleep contributes to almost every physical and mental malady there is. NPR stories, newspaper articles, blog posts from all sorts of people, and sketchy click-bait articles all trumpet the importance of sleep, and the volume and stridency of these stories has rocketed in the last three years or so. Arianna Huffington seems to have sold the Huffington Post simply so she can dedicate her every waking hour to making sure other people have fewer, um, waking hours.

I can’t tell if my sub-par memory and general feeling of cluelessness are caused by aging or by the sleep thing, but being chronically underslept can’t help. Since sleep connects to everything in the body, who knows what might improve with more chance to rest and recover from the day’s stresses? Will my hair stop falling out? Will my weightlifting finally result in me being able to lift more weight? Will I crave sugar less often?

Being in bed for eight hours should be a natural occurrence instead of something that requires a resolution to make happen. I resist this for a number of reasons:

  • I’m rarely tired before midnight
  • Falling asleep is boring
  • Dreams are often stressful and unpleasant
  • The sooner I go to sleep the sooner I’ll have to get up and go to work again (even though work is always fine once I get there)
  • If I go to sleep it’s possible I’ll never wake up
  • There’s so much to do, and so few years in which to do it

We’re at the beginning of March, which means my experiment has been going on for two months. How well have I been able to follow the resolution, and have I seen any improvements in my physical or mental state?

I’ve been almost successful at sticking to this plan. I’ve pretty reliably gotten seven-and-a-half hours of horizontal time. (Mysteriously, making time for the full eight is enormously harder than seven-and-a-half.) Allowing for the time required to fall asleep once I’m lying down, this means I’m hitting around seven hours of actual sleep per night.

During this experiment, I’ve noticed that my sleep has become less consistent. Sometimes I sleep straight through the night, but in general I’m up tossing and turning, not quite asleep, many more hours per week than I used to be. (Before this year, I slept like a log during all of my not-enough hours each night.) Maybe this means my body doesn’t need any more sleep than it’s now getting? The jury’s still out on this, but it’s the main reason I’m not kicking myself for not quite making it to the eight-hour goal. Regardless, I’m still doing a lot better than my old underslept standard.

As for physical or mental changes, I guess I’m a little less tired during the day, and maybe my short-term memory has improved marginally, and perhaps my mood is slightly more upbeat, but wow that’s a lot of qualifiers. I’m not 100% convinced that those effects aren’t illusory. I can’t say that I’ve noticed any physical changes. But I don’t want to draw any conclusions after only two months, so I’ll report back with more confident results after I’ve been on this new schedule for the full year.

Also, there’s a potentially confounding factor here, in the form of one of my other 2018 resolutions. I’ll talk about that in a future blog post, but it’s possible that it’s leading to some of these changes which might (or might not) be occurring.

On the importance of ridiculousness

cary grant
Cary Grant being unserious in Charade (1963)

I remember a great line from an old movie in which somebody in their 60s (I’ll say it was Cary Grant, because it probably was) was praised by somebody else in their 30s (I’ll say it was Audrey Hepburn, because it probably was) for being “serious.” At which point the first somebody looked aggrieved and said something like “at my age, the last thing you want to do is be serious!”

Until that moment–and this is the reason I remember that line at all–it had never occurred to me that it could be a reasonable goal to become less serious with age. I had assumed without realizing it that becoming more serious (whatever that might mean, but probably involving a blend of financial responsibility, reduced sense of whimsy, and a more planned, less spontaneous approach to life) was not only inevitable with age, but should be thought of as one of the few good things that go along with getting older.

Now I’m starting to understand what he was talking about. For a certain group of people (including me), being young feels like a temporary waiting period until you can mature and be taken seriously by others. But once you reach the age at which you’d be considered by most people as fully formed, predictable, and stable, you start thinking about the endings of things, and you feel an urge to re-experience youth, along with its infinite sense of possibility and general feeling of physical and mental bullet-proof-ness.

Maybe life experience makes us better at being ridiculous? Maybe it takes some exposure to the dangers of over-seriousness (hubris, dullness, an accelerated perception of time) for us to realize the value of being silly. Honest striving works well in a world that appears black and white (if you follow the rules, you can’t help but get ahead), but once you’ve felt how powerful a force luck (in both good and bad forms) is in most lives, once you’ve had life laugh at your plans a few times, once you’ve discovered that the black-and-white parts of life are truly few and far between, you almost can’t help but embrace absurdity not only in the world, but in yourself.

Kermit was right

kermit
It really isn’t easy being green

I was reminded recently that “Reduce, Reuse, Recycle” are not supposed to be equally important instructions. Reducing the amount of stuff you buy has the biggest potential impact on the earth’s health. Reusing what you do buy is not as helpful as reducing, but is more helpful than recycling. In other words, if, like many of us, you’re conscientious about recycling but mostly ignore the reduce and reuse edicts, you’re doing it wrong.

The reason so many of us fall into that category is that reusing and especially reducing are much harder instructions to follow.

Consider reuse. In rare cases, you have an opportunity to reuse something immediately after you use it the first time. When you unpack your groceries from the paper bag, you might be able to immediately plop the bag under your sink to use for collecting recyclables. But in most cases you either don’t know how to reuse the object (the paper bag today’s baguette came in asks me in a lovely blue typeface to reuse it, but what am I supposed to do with a baguette-shaped bag?) or you do know how to reuse it, but you have a big stack of similar objects already in line for reuse (I have about 30 paper grocery bags in a stack waiting to be used to collect my recycling). In either case, you can hold onto the object until you either think of a way to reuse it or exhaust your supply of similar objects. But where do you put it? I don’t know about your home, but mine is pretty full of objects already. And once you do discover or create a space to store it, when/if the big moment for reuse arrives, how do you remember a) that you have it, and b) where you stashed it? Then when you’re done reusing it, you’ll presumably want to store it again for a second round of reuse. That’s all fine if the object is something like a glass jar that doesn’t get dinged and dented with multiple uses, but if it’s the kind of thing that wears out, when do you decide that it’s too worn to continue reusing? And no matter how hard you’re trying to reduce during this time, it’s inevitable that you’ll continue to bring some new objects into your life, which means your stack of ready-to-reuse objects will get bigger and bigger.

My point here is not that reuse is impossible, but that it requires an awful lot of decisions from someone whose life is probably way too full of decisions already. I find myself exhausted just thinking about the process, much less following through on it.

The more important act of reducing feels even harder to implement, since there are often no clear criteria for when you really truly need to buy something. For example, how do you know when it’s time to replace your car? Your current car can probably be repaired indefinitely. At some point the cost of repairing the car over the next n years might exceed the cost of buying a new one and maintaining it for those same n years, but it’s almost impossible to reliably identify that point. And the math is rarely so simple. Will insurance be more on a new car? Will it use less fuel? What price do you put on new safety features? How damaging to the earth is it to dispose of an old car? How damaging is it to make a new one? How much discomfort is your family expected to put up with as the kids get taller and no longer fit comfortably in your old, small car? If you think you need a new car for the sake of some lifestyle choice (maybe you want four-wheel drive for skiing), how do you factor in how much your interest in skiing is worth? What if you just really like cars? Does that excuse your decision not to reduce the number of cars you own?

A car is a big, earth-damaging purchase that involves a ton of hard-to-reason-about decisions, but smaller purchases present you with the same sorts of challenges (on a correspondingly smaller scale). Do I really need to buy or make dessert tonight? When is it reasonable to replace worn-out towels? How small should I let my kids’ clothes get before I get them new ones? And how should we respond to the horrifying amount of unnecessary packaging that comes with many goods today? Obviously buying food from bulk bins and filling your own reusable containers is a good way of minimizing this kind of waste. But many things that we genuinely need (foods and otherwise) can’t be bought in this way, and saddle us with obscene amounts of unnecessary packaging, thereby bypassing the “reduce” option completely.

I knew a minimalist in college who made a point of decluttering his life by making up songs in his head instead of buying CDs or tapes. I think he only owned three shirts, and a single pair of shoes. This seemed extreme to me, but is it? How much do we need, really? How much inconvenience, discomfort, or ostracism are we expected to endure in the name of reducing? Similarly, how much effort are we expected to put into figuring out how and what to reuse? How much of our limited time and money are we supposed to dedicate to recycling? Doing something is clearly better than doing nothing, but is there a point of diminishing returns for any of these three commandments?

Is the amount of effort we’re supposed to put into reducing, reusing, and recycling relative to the society in which we live? If no one around me is doing any of those things, does that let me off the hook? Or does it mean that I should try extra hard to follow these three commandments, to compensate for the laziness (or in some cases, inability) of those around me?

I’m sure some academic has thought all of this through (surely there’s a Peter Singer equivalent for the environment?), but again: how much time and effort can I reasonably be expected to expend in looking that up, reading it, and returning to it from time to time to keep my memory of the proposed answers fresh? Some eminent New York intellectual (I can’t remember who) claimed that the day he stopped recycling was the most liberating day of his life. I don’t want to be that guy, but I also don’t want to be driven crazy by the “no level of effort is enough” mentality that I’ve seen in others. I haven’t yet found a reasonable middle ground, but I’ll keep looking.

Living on the Long Tail

lemur

In 2004 a Wired magazine writer named Chris Anderson coined (or maybe just popularized) the phrase “long tail,” meaning the many, many items that don’t sell well individually, but which collectively make up a significant part of a company’s sales. For example (and I’m using completely made-up numbers here), Amazon might sell thousands of copies of a new Stephen King book in a day, but only two copies of The Odyssey on that same day. The Stephen King book and the hundred other megasellers of the moment make up the “head” of Amazon’s sales numbers, while The Odyssey and the hundreds of thousands of books that only sell a handful of copies make up the “long tail” of Amazon sales. And because there are so very many books making up the long tail, they might together make Amazon just as much money as the items in the head do.

I think it would be a fascinating experiment to live for a year on the long tail of as many areas of life as possible.

What are some examples of how you might do this?

  • At the supermarket, buy only foods that are high up, low down, far back, or otherwise hard to find. These are the low sellers. See that jar of artichoke pesto with pimentos hiding behind the better selling marinara and alfredo sauces? That pesto is on the long tail.
  • On your favorite music streaming service, type in random words and see what it pulls up. Odds are it won’t be the latest Justin Timberlake or Yo Yo Ma album. Gregorian Chant layered on top of Glen Campbell songs? I’d give that a spin.
  • Remember that bizarre looking novel you made fun of at Goodwill because it had a tentacled alien wearing a sheriff’s badge on the cover? Put down your Dostoyevsky and pick up the alien — that’s your bedtime reading for the next few days.
  • Why drive a Toyota, Honda, or Ford when you could be cruising in a Suzuki, Isuzu, or old Plymouth?
  • Cancel that Las Vegas vacation and check out Providence instead.

For this experiment to be bearable, the elements of the long tail must be low sellers because they’re old or out of fashion, not because they’re objectively worse than the items at the head. In some cases there’s probably a solid reason why a food/CD/book/car/city isn’t as popular as the top sellers. But if you can identify items that lack buzz and flash but still have a solid, worthwhile core, you might discover all sorts of wonderful things that otherwise would have passed you by (not to mention likely saving a lot of money).

Anyway, this is an experiment I’d like to try out on myself. I don’t know if the results should be recorded in a blog, magazine article, or nowhere at all. But that’s the kind of experiential journalism I’d enjoy reading about, so unless I can pitch it successfully to A.J. Jacobs, maybe I’ll explore the long tail a little myself.

Flag family: Counties of Southwest England

The three English counties of Cornwall, Devon, and Dorset occupy the southwestern corner of Great Britain. I’m a big fan of these counties not only because geographical extremities are fascinating, but also because they have the good sense to all belong to a single flag family. Behold!

cornwall-flag-static
Cornwall
devon-flag-static
Devon
dorset-flag-static
Dorset

Weirdly (to me), though some English counties have had unofficial heraldic symbols or banners for a long time, there was no recognized way to designate official county flags until the United Kingdom’s Flag Institute was created in 1971. And since then, the counties have adopted flags in a wonderfully haphazard, disorganized fashion. Many counties in Wales and especially Scotland have no official flags at all, even today.

The Cornish flag was the first of this trio, having roots back to the 12th century. Devon’s flag was designed in 2003 and adopted in 2006, while Dorset’s was designed in 2006 and adopted in 2008. While some Cornish nationalists consider the other flags to be appropriations of the ancient Cornish design, I love the vexillological family created by this variations-on-a-theme approach. This is pure speculation on my part, but I suspect that whatever sibling rivalry these counties feel, the bonds created by their geographical proximity, cultural similarity, and possibly shared anti-London sentiment are probably stronger. And what better way to demonstrate this than through related flags?

somerset-flag-static
Somerset

To my dismay, Somerset, the next county in this geographical sequence, deliberately broke the pattern when they adopted their flag in 2013. It’s a lovely design, but I do wish they had consulted me first.

The happy inconsistency of tea

assam

Tea is one of my favorite ingestibles. It’s huge fun experimenting with different terroirs (India lowland swamp vs. India highland vs. Sri Lanka vs. China vs. Japan), processing (green vs. oolong vs. black), steeping times (anywhere from two to five minutes), re-steep approaches, and water temperature. Ultimately I’ve settled on Assam (the maltier the better) steeped once for four minutes with boiling water as my preferred variety.

But I use the term “preferred” loosely, because I find the taste of tea to be hugely inconsistent. Two cups brewed identically from the same tin can taste quite different on different days. I don’t know what to chalk this up to: Body chemistry? Lingering flavors from other foods? Hunger levels? Mood? Biorhythms? Different chemical makeups of different spoonfuls of leaves? I have no idea. And when you add in the fact that different tins contain leaves picked from different plants with potentially varied growing conditions, and have possibly been processed slightly differently (I don’t know how careful the oxidation timing is at a typical tea plantation), it’s no surprise that there can be significant variations in taste. Also, who knows if the Portland tap water I use is chemically consistent from day to day? And if you’re a barbarian like me who pollutes his tea with milk and sugar, both of which are eyeballed rather than carefully measured, well, all bets are off when it comes to any hope of consistent flavor.

This inconsistency is sometimes frustrating, but it’s not always a bad thing. It’s nice to have some surprises left in life. For every time that I go through the tea brewing ritual and end up with a disappointingly weak or off-flavored cup, there’s another time that I hit the jackpot with a magical combination of chemistry, time, and heat that generates a few minutes of sublime pleasure. That curiosity about how this mug will turn out adds a little spice to an otherwise routine process. Obviously there are a ton of life experiences that work like this (e.g., every human interaction ever), but for some reason I notice and appreciate this unpredictability the most with my twice-daily tea.

I wonder: do coffee drinkers experience the same thing, or is their experience more consistent? How about beer?

Clojure and IntelliJ on a Raspberry Pi

Clojure_logo
raspberry_pi_case

One of my sons got a Raspberry Pi 3 for Christmas. He turned it into a retro game console, but once a Nintendo Switch showed up on the scene, the Raspberry Pi got shoved aside. At around the same time I started learning Clojure. These two facts led to the truly perverse idea of turning the RPi into a Clojure development box. What could go wrong?

It turns out you can–mostly–do it! Here’s how:

      1. Install Raspbian, the default Debian-based Linux distribution.
      2. Remove the pre-installed JDK and install Oracle’s latest. This is necessary in order for IntelliJ to work. This isn’t a completely straightforward apt-get install, but it’s not terrible either. Instructions abound on Google. Note: OpenJDK is (I’m told) unusably slow on RPi, so stick with Oracle’s official flavor. Also, make sure to install the ARM version, since that’s what’s lurking inside your RPi. Yeah, I didn’t know Oracle supported ARM either; this was a nice surprise.
      3. If you’re addicted to JetBrains’ IDEs like I am (RubyMine, PyCharm, IntelliJ — I love them all), download the Linux version of IntelliJ Community Edition. I don’t understand why this works, since I was under the impression that IntelliJ installs its own JRE which it then runs on, and surely that JRE doesn’t support ARM. But work it does.
      4. If you’re not a JetBrains fan, you’re on your own, IDE- or editor-wise.
      5. Fire up IntelliJ and disable as many plugins as you can live without. Which should be almost all of them.
      6. Install the Cursive plugin for IntelliJ, and sign up for a free non-commercial license at http://www.cursive-ide.com.
      7. Edit IntelliJ’s idea.vmoptions file (there’s even an item in the Help menu that opens it for you) and set -Xms128m and -Xmx512m in order to keep the IDE from being too greedy with your limited memory. Those might be the default values for your installation already (they were for me).
      8. And you’re off to the races! Go code some Clojure!

The good news is that this totally works. There’s significant bad news, too.

First, you can use Chromium at the same time as IntelliJ to view Clojure documentation, but don’t plan on having a lot of tabs open. Where by “a lot” I mean “more than one or maaaybe two, if they’re both lightweight pages.” For serious. The RPi’s 1GB of RAM can’t handle anything more than that without hitting the extremely slow swap file that lives on the RPi’s MicroSD card.

Second, although the REPL is fast enough to use as a learning tool, anything more complicated is… less satisfying. Compiling, uberjarring, and running Clojure code is a painful experience. Check this out:

  • Run a compiled Hello World Java class: 5 seconds.
  • Compile a Hello World Java class: 10 seconds.
  • Cold launch IntelliJ and load a trivial Java project: 1 minute 45 seconds.
  • Cold launch IntelliJ and load a trivial Leiningen-based Clojure project: 2 minutes.

In conclusion: you can learn and develop Clojure on a Raspberry Pi, but you probably shouldn’t. I’ll probably try a few more times as a novelty, and then put RetroPie back on it and revert to my Ivy Bridge desktop for Clojure.

Blog ground rules

Selectric_II
I really truly learned to type on one of these, thanks to Ainsworth Elementary’s sixth grade typing class

This blog is an experiment. Its theme is: topics and ideas that interest me. I’ll write when I feel like it, but will aim for one blog post per week. If it gets boring, or starts feeling like a burden, I’ll stop. If WordPress doesn’t work out as a platform, I’ll move the content elsewhere. I’ll try to write a post, give it a quick copy editing once-over, and then publish and move on. This process is aspirational and possibly unrealistic, considering the horror I usually feel when re-reading old writing and seeing all the things that need to be fixed. It’ll be good practice in satisficing.