Delivering Happiness

October 1, 2015

Tim Dobson and I swapped books. It was one of those crazy internet things where I randomly sent him a copy of Unlocking the Clubhouse, and Tim bought me a copy of Delivering Happiness. We don’t see each other much, so we did this using Amazon.

This is the story of that book. This is definitely not the sort of book I would normally read. So that’s a good thing. There’s no point in sending me a book that is already on my reading list: much more interesting to be sent something slightly quirky.

Tony Hsieh is CEO of Zappos.com, and the book is about how he discovered how to make Zappos.com great (hint: by “Delivering Happiness”). I hadn’t heard of Zappos.com when I got the book, and having read it, I’m still not sure if I should’ve heard of Zappos. I think this is mostly geographical. I’m in the UK and Zappos seem to be a North American shoe e-tailer. Haha. “e-tailer”. Yes, the book is largely a story about _that_ era, when selling things online was novel enough to be talked about (to quote: “There will never be another 1997”).

I have to get one thing out of the way. Typography. Generally, the typography and book design drift between mediocre and terrible. The main body text is just fine, but the line length is too long and the margins too small. There are many quotes in the book, extracts from emails and blog posts and so on, and in some sections Zappos employees have been invited to write a few paragraphs. These displays are poorly designed: no consideration of visual flow, and an alarming and inconsistent choice of fonts. The whole thing gives the impression of having been done by dumping the whole lot into a Word document, fiddling with the fonts a bit, and then posting it off to a print-on-demand publisher without it ever having been edited. As one of the reviewers in the blurb says: “honest, passionate and humble, fun and a little weird”.

It really could have done with a little bit of book design and/or editing.

What about the content and the message? It’s a fun read. Despite my quip about it not being edited, it has at least been proofread. It’s self-indulgent and egotistical. It’s written as if I already know who Tony Hsieh is and all about the California start-up scene of the late 1990s. Despite all that, I like it.

It is a story of journeys. Tony’s journey from childhood to Zappos, and then Zappos’ journey. There is a real central crisis: at one point Tony and other key people are deciding whether to kill Zappos, which they love, and which they didn’t create but did nurture, or to become even closer to it. The book is about Zappos, so obviously they decided to get closer. The second journey, of how Zappos grows, is the more interesting story. It’s illustrated with many amusing anecdotes, not just from Tony, but personal (and badly typeset) stories from staff and friends too.

It is in this journey that Tony discovers happiness. Or rather, discovers how happiness can improve a business, and ultimately realises that it may be the key to improving the world. The closing parts of the book read like a call to arms. Arms to embrace happiness and each other, not arming for war. The US edition has one of those flash things on the corner, but instead of saying “Fork me on GitHub” it says “Join the movement”. Kinda creepy.

The book is very quotable and says many things I already agree with: On quitting a boring job at Oracle: “be in control of our own destiny”. On making the mistake of buying logistics rather than building it: “we should never outsource our core competency”.

Tony writes openly and passionately. There are definitely interesting things to read. Ideas that you might want to consider applying to your company, your hires, your life, your friends. It won’t all stick, but I don’t think it has to.

Thanks Tim for delivering “Delivering Happiness” to me!


My Inner Fish

July 13, 2012

I was going to write a very brief wrap-up of the pop-science books I’ve recently read, then I found this barely started review of Neil Shubin’s “Your Inner Fish” which I borrowed from a friend of mine. So I’ll just tack on the rest of the reviews after that one.

“Your Inner Fish” is a pop science review of our ancestral biology dressed up with Tiktaalik on the cover. The strapline says “The amazing discovery of our 375-million-year-old ancestor”, and this definitely features in the book, but pretty much only in the first couple of chapters. I feel I have been a little misled by the cover. Still, I’m sure that’s the publisher’s fault and not Shubin’s.  There is a lot about teeth and nipples.

By now (having read The Ancestor’s Tale, and Life on Earth) I’m familiar with the “narrative arc” used in the book. Pick out a few species and illustrate salient features of our evolution by highlighting the aspects that species X and Homo sapiens inherit from their common ancestor.

Matt Ridley, “The Red Queen”.  Can’t actually remember much about this, but I guess it was okay.  Whenever I read Matt Ridley now, my thoughts are always tainted by the fact that he’s an idiot when it comes to climate change. Still didn’t stop me buying another book by him (“Genome”, which I haven’t read yet).

Mark S. Blumberg, “Freaks of Nature: And what They Tell Us about Evolution and Development”.  Lots of pictures of people with two heads, that sort of thing, and even more bizarre stuff; this of course makes it grotesquely intriguing.  It tries a little bit too hard to be a reaction against all the modern evo-devo stuff, but is good for all that.  Yes, we’ve heard of epigenetics, thanks.  No, that doesn’t explain how insects gained a second thorax segment.

Stephen Jay Gould “Wonderful Life”.  Burgess Shale.  Eye opening.  I had no idea animals were so diverse.  Excellent illustrations (many drawn by Marianne Collins who probably deserves a cover credit).  Gould clearly likes cracking open the history of the discoveries and their publication too. Particularly useful later when I was reading Holland’s book, as when Holland is whizzing from one phylum to the next I could remind myself that “Wonderful Life” describes enough bizarre animals to fill a book, and is mostly concerned with just one phylum!

Peter Holland, “The Animal Kingdom: A Very Short Introduction”.  This is an excellent, and very short, book.  Crammed full of head exploding stuff, mostly about the “upper echelons” of the animal phylogeny.  Nice summary of the recent phylogenetic redrawings of the Tree of Life.  But… surprisingly rubbish clade diagrams.  The other illustrations, mostly of species that represent the 30-odd phyla that don’t include us, are ace.  Pricey enough to make me borrow it from my library and I liked it enough to seek out a couple of other “Very Short Introductions” from the library too.

Christopher Wills, “Exons, Introns, & Talking Genes”.  Reading it now gives more of a historical perspective on the origins of the Human Genome Project (it was published in 1991, 8 years before Chromosome 22).  But I actually liked that; the stuff about the history and origins of the HGP is quite interesting.  Lots about how we might and might not use the HGP to cure diseases.  Also, forensics scandal!

Lewis Wolpert, “How We Live and Why We Die: the secret lives of cells”.  Good, but… no illustrations!  I’m really glad I read “The Manga Guide to Molecular Biology” before this, as I kept anchoring Wolpert’s discussion with images in my mind that I recalled from the Manga book (which is excellent, by the way).  At times felt like Wolpert was merely organising his lecture notes into a book, and sometimes drifted off into mere lists with little overall motivation.  Still worth reading though.

Wolpert and Holland both revive the sheet/epithelial theme that I’d first been exposed to in “Your Inner Fish”. Animals are all about sheets of epithelial cells, the folding and pinching of which is how we make stuff as diverse as teeth, guts, and nipples. This is really evident when we study development of the embryo, and it reinforces the evolutionary history.

You will not be surprised to learn that “Endless Forms Most Beautiful” has just arrived through my letterbox, and it looks beautiful from a quick flick through.

What I’d really like to see is a modern evo-devo book that features mostly plants instead of mostly animals.


Monbiot’s Heat

June 9, 2011

George Monbiot, Heat.

Most of this book is a survey of various sources of greenhouse gas (GHG) emissions and how we might try and reduce them by 90%. The first two chapters introduce the problem (of climate change and emissions, durr) and document the denialist industry respectively. Both worthwhile, and chapter 2 could do with a whole book (no, I haven’t read Merchants of Doubt yet). The real substance is Monbiot’s plan for reducing by 90% the emissions created by housing, transport, heating, shopping, and so on.

Naturally my thoughts on Monbiot’s book are coloured by the fact that I’ve already read MacKay’s Sustainable Energy—Without the Hot Air. Which is, on the whole, better. Monbiot has the facts but throws them around as gee-whiz figures: billions this, millions that. Often his calculations are more precise and more detailed than MacKay’s, but they are no more convincing for that. MacKay’s deliberate lack of precision is more convincing precisely because it captures the uncertainty in the problem. Monbiot himself makes this point, but fails to carry through on it, preferring to pore through endless reports to quote a figure for the carbon dioxide emissions per car passenger kilometre to 3 significant figures.

There are almost no graphs: two at the end of Chapter 1 are completely made up and little more than useless. They really would be better drawn in crayon (because then it would be more apparent that they are napkin sketches to illustrate a point).

Monbiot’s ability to quote numbers, whilst not really handling them, means he fails to circumscribe the scale of the problem. Is it worth talking about how much chicken shit we can burn? (Answer: a little bit. Is it worth the amount of time Monbiot spends on it? I don’t think so.) In a similar vein, is it worth a couple of pages tossing up whether we should be calling the effect whereby increased efficiency leads to increased consumption the Jevons effect or the Khazzoom–Brookes postulate? Nice history lesson, but spare us, really. Sometimes I think Monbiot wants to show us how much research he has done.

Compared to MacKay’s book, Monbiot’s is in a way more optimistic. Not more optimistic about averting climate change, but more optimistic about changing behaviour as a means of averting climate change. In Monbiot’s future we all shop online, we travel by (deluxe) coach, and we never go snorkelling in Africa with our dinner party friends. Maybe Monbiot is not being optimistic, but persuasive; he is a columnist, after all. Curiously, given that he has written on this subject before, he doesn’t advocate eating less meat. Just to rehearse the argument: eating less meat cuts some direct emissions from livestock, and it frees up land which was used to grow animal feed for other purposes, such as PV arrays or biofuel.

He tackles one sector that MacKay does not: cement. This is pretty reasonable; cement is an important industrial process and has emissions related to the chemistry of cement manufacture (and Monbiot even features some chemical equations: yay!). Apparently we should be using geopolymeric cements. Right. Been talking to the salesmen again, have we? For all I carp, MacKay’s book is about energy, and whilst that is the most important sector for emissions, someone needs to do a MacKay for non-energy emissions. Monbiot’s half chapter is a reasonable start. I wonder why he doesn’t tackle methane and nitrous oxide from agriculture? (About 8% of the UK’s GHG emissions in CO2 equivalent.)

On the whole, I would recommend this book, but be sure to follow up with MacKay’s if you haven’t already. It is a little bit of a pity that Heat came out before the IPCC’s 2007 Fourth Assessment Report. Many of Monbiot’s references are a little dated already. Time for a second edition?


Kinetochore reproduction

July 14, 2010

A pair of papers:

«Kinetochore reproduction in animal evolution: Cell biological explanation of karyotypic fission theory» and «Kinetochore reproduction theory may explain rapid chromosome evolution».

Recall that a kinetochore is the structure that attaches to the centromere of chromosomes during (eukaryotic) cell division.

I found these papers when reading Quammen’s «The Song of the Dodo». He somewhat outrageously backs up Mayr and says that sympatric evolution never happens. My first instinct is to think that surely sympatric evolution happens whenever chromosome number changes? So I started to look for stuff about chromosome number change.

These papers discuss one mechanism by which chromosome number might change: extra kinetochores get added during cell division, and a chromosome with two kinetochores splits into two. Specifically, the diploid number might double by having all the (metacentric) chromosomes fission. It turns out, much to my surprise, that there is good evidence to believe that two fission-product chromosomes can match up (synapse) with a copy of the original unsplit chromosome: two one-armed chromosomes match up with a normal two-armed chromosome. Fissioned pairs can thus be heterozygous with unfissioned singletons in a non-selective way throughout a population.

So the thing that historians of science will like is that some guy, Todd, came up with a fissioning theory years ago, but everyone ignored him. What modern biology brings to the table is a deeper understanding of cell function, and therefore a plausible mechanism by which this might happen. Tension-sensitive dephosphorylation: this is very cool.

So, I learn some cell biology, Todd gets his moment in the sun, but no, I haven’t yet shown that chromosome number change is an instant slapdown for “sympatric evolution never happens”.


Image Analysis for Biologist’s Microscopy Images

January 28, 2010

The paper is:

Ljosa V, Carpenter AE (2009) Introduction to the Quantitative Analysis of Two-Dimensional Fluorescence Microscopy Images for Cell-Based Screening. PLoS Comput Biol 5(12): e1000603. doi:10.1371/journal.pcbi.1000603

And being a PLoS journal there is an online version available. Yay for open access.

This paper is a tutorial and whilst I’m neither a biologist nor an image analyst, I know a bit about both and I found the level to be just about right for me. I think the intended readership is those overworked postdocs who are just about to design the protocol for a 10,000 slide experiment. Rather than attempting a comprehensive overview they refer to their primary example of “a cell-based fluorescence microscopy assay for DNA-damage regulators”. In other words, they took pictures of cells and counted the number of places where the DNA was damaged. In laying the groundwork they give a good number of motivating examples and also what looks to be a good selection of more comprehensive reviews and further reading.

I find their example quite good, and I expect that, at least in broad overview, the image pipeline they illustrate and the particular techniques they discuss will be applicable not just to different areas of biological assay but to image analysis more generally. They discuss quite a few image analysis techniques and their relevance to biology. For example, uneven illumination (from the microscope) is an effect that might be barely noticeable to the human eye but which can disrupt image processing algorithms, so it is important to correct for it.
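
To make that concrete, here is a minimal flat-field correction sketch in Python. This is my illustration, not the paper’s pipeline, and it assumes the illumination field can be estimated as the per-pixel average over a stack of images of the same plate (real pipelines fit a smooth function instead):

    import numpy as np

    def flatfield_correct(images):
        """Correct a stack of images, shape (n, h, w), for uneven illumination."""
        illumination = images.mean(axis=0)                 # crude estimate of the field
        illumination = illumination / illumination.mean()  # normalise to mean 1
        return images / illumination                       # divide the unevenness out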

There are two things I’m surprised they do not mention: non-linear intensity recording, and spectrally selective filters. By non-linear intensity recording I mean the fact that many image formats do not record a linear representation of light intensity, but instead gamma-correct it first. Gamma correction is incredibly useful for recording images intended to be viewed by humans, but may interfere with image processing algorithms. Who knows what a proprietary microscope does, but let’s hope it’s well documented. Incidentally, I do wonder if this gamma-oblivious attitude is responsible for their comment that “working with the logarithm of the intensities is often helpful because it can reduce the skewness of the intensity data”, because linear intensities and gamma-corrected intensities differ in their logarithms only by a constant factor (the exponent used for gamma correction). The right place to discuss this would be in box 2, just underneath “Image file bit depth”.
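
Here is the arithmetic behind that aside, as a quick sketch assuming a simple power-law gamma encoding (real transfer functions, sRGB for instance, have a linear toe and are more complicated):

    import numpy as np

    # If encoded = linear ** (1/gamma), then
    #     log(encoded) = log(linear) / gamma,
    # so the two differ only by a constant factor, and a log transform
    # behaves much the same whether or not the data were gamma corrected.
    gamma = 2.2
    linear = np.array([0.01, 0.1, 0.5, 1.0])
    encoded = linear ** (1 / gamma)
    assert np.allclose(np.log(encoded) * gamma, np.log(linear))

    # Undoing a power-law encoding before quantitative analysis:
    assert np.allclose(encoded ** gamma, linear)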

A spectrally selective filter is one that passes only a narrow band of optical wavelengths. In their figure 1 they show a colour source image being split (by channel) into images that reflect markers for DNA, cytoplasm, and DNA-damage respectively. It seems to me that careful use of spectrally selective filters would enable this step to be performed more accurately and reliably in the microscope at the image capture stage. It ought to enable many more markers to be used as well. Perhaps I show my biological naïvety here, but filters are commonly used in astronomy, and I’m surprised they don’t get a mention at all.

For working biologists I expect that the practical advice in box 2 will prove invaluable. All sorts of juicy nuggets, from using microplates with black sidewalls for laser-based autofocussing, to avoiding photographing the edge of a well, to not opening the lab door while a series is being photographed. And avoid JPEG.

On the whole I found this quite a useful introduction, well written, and occasionally fun too. I now have a few more items for my reading list and at least one image processing algorithm to implement.


Global nitrogen deposition and carbon sinks

July 29, 2009

“Global nitrogen deposition and carbon sinks”. Dave S. Reay, Frank Dentener, Pete Smith, John Grace, and Richard A. Feely. Nature Geoscience, July 2008. I read the PDF that I found by googling for the title and clicking on the first PDF in the results. Pay-walls suck.

A paper about nitrogen’s rôle in the carbon cycle, looking at what we know about how nitrogen influences the major carbon sinks (forests and soil on land, and the ocean). By the way, Nr is reactive nitrogen (they use this symbol, and so shall I).

Probably worth reading if you are starting to think about the secondary effects of emissions. But don’t let it distract from the big picture. We’re burning too much carbon, and it’s not going to help with that.

I am not a climate scientist. But this is not a deep paper; it is mostly an overview: it’s 8 pages long and mostly consists of summarising other works; it references 92 papers (and still misses at least one: they make use of the IPCC SRES scenarios, but fail to reference the IPCC Special Report on Emissions Scenarios). The paper does not discuss the nitrogen cycle at all (despite being a paper about nitrogen deposition). Nor does it discuss greenhouse gases other than CO2; in particular N2O, a greenhouse gas itself, is only discussed as a reactive nitrogen emission (from soil, for example) and for its effect on nitrating a CO2 sink. This seems odd. But to incorporate the nitrogen cycle and other greenhouse gases at the same time would be potentially confusing, and lead to a much less accessible paper.

It seems very comprehensive. I have not read most of the referenced works (in fact I’ve read only one, I think: part of the IPCC Fourth Assessment Report), but they seem to be reasonably summarised, and the paper as a whole covers a lot of ground in its 8 pages. The paper first discusses the current emissions and their likely increase (not everywhere; European reactive nitrogen emissions are likely to decline). The rest of the paper is split between the 3 main carbon sinks: forest, soil, ocean.

The main theme of the paper is uncertainty. Having reviewed the available literature it seems that the effects of reactive nitrogen have been difficult to quantify so far. For example, the effect on carbon sequestration in the boreal forest of reactive nitrogen is summarised as being somewhere between 40 g C per g Nr and 200 g C per g Nr. Quite a wide range.

The bottom line is… yes, a bit of extra sequestration in the oceans, some in the boreal forest. And not enough is known about the tropical forest. Which is a shame, because it looks like that’s where a lot of the future Nr is going to get dumped. Overall, the extra sequestration will be noticeable (amounting to not more than 3 billion tonnes CO2 per year), but not really enough to have any useful effect. And a lot of what effect it does have is negated by the greenhouse gas emissions themselves.

Niggles relegated to an appendix

Mostly the text uses petagrammes (Pg), “emissions reached 7.2 Pg of carbon per year”, that sort of thing, but the diagrams, borrowed from the IPCC, use gigatonnes (Gt). These are actually the same unit (1 Pg = 10^15 g = 10^9 t = 1 Gt). It would be better to choose one unit and stick to it.

On two pages we see global maps comparing the distribution and strength of nitrogen deposition: over land and oceans on page 432, and over the ocean on page 434. There are several problems with these maps, mostly in the inconsistent presentation. The first set of maps, page 432, shows the current (year 2000) nitrogen deposition and two different projections for the year 2030. The second set, page 434, shows the pre-industrial distribution of nitrogen deposition, current (1990s) deposition, and a projection under the SRES A1FI scenario. One set of maps puts the prime meridian in the centre and goes from -180 to +180; the other set puts the prime meridian at the left and goes from 0 to +360. Latitudes are marked “60ºN, 30ºN, EQ” on one set, and “90ºN, 45ºN, 0ºN” on the other. They are different sizes. One set of maps uses g N m⁻² yr⁻¹, the other uses mg N m⁻² yr⁻¹ (then, later in the text, kg Nr ha⁻¹ yr⁻¹; garhh!). The scales are different. The number and selection of colours is different. One set is labelled “Global distribution of total Nr deposition”, the other “Global distribution of oceanic nitrogen deposition”. Nr is the symbol they introduce for reactive nitrogen. Are the oceanic depositions reactive nitrogen? Probably, but they fail to say so, which introduces ambiguity.

I know why the maps are like that. It’s because they were borrowed from other papers. But this points to a problem in the scientific community. It should be easy to take visualisations from different sources and massage them into a consistent presentation format. The fact that it’s obviously not easy is bad.

Incidentally I note that the maps use the equirectangular projection; they don’t say this, and I never knew its name until I looked it up on Wikipedia for this review. I still find this a strange projection to use, but it does seem to be common in scientific communities.

Oh yeah, and for some reason the band from 45ºS to 90ºS on the oceanic set is strangely squashed.


Lambda: The Ultimate GOTO

July 16, 2009

Debunking the “Expensive Procedure Call” Myth; or, Procedure Call Implementations Considered Harmful; or, Lambda: The Ultimate GOTO. Guy Lewis Steele Jr. October 1977. MIT AI Memo 443.
PDF version hosted on this blog.

Essential reading for all computer scientists or those wishing to implement a language. I should’ve read this paper when it was half its age.

It’s 1977 and Steele is writing memos from an alternate reality. It’s a little hard to place oneself in this alternate universe: “Some programmers fear that their expressive power or style will be cramped if GOTO is taken away from them.” Steele does not think this is a good thing (hence the memo); it’s just a sign of how things were in that era. Was this really only 2**5 years ago?

His concerns about the subroutine’s perceived inefficiency are now laughable, and that’s no doubt partly due to Steele’s efforts in this memo. His other concern is the conflict between abstract programming concepts and concrete language constructs. And that concern is still valid.

Steele uses LISP (yes, in capitals!) for his examples with no introduction nor explanation. Cruel. But then, it is a Steele memo, and LISP has been around for 17 years already; it’s no new kid on the block (it has a whole 4 years over PL/I for example).

Part A shows how splitting up the traditional notion of what a procedure call might mean allows procedure calls to be implemented efficiently and also used for tail-calls. In other words, you don’t need to treat tail-calls as a special thing: organise your compiler properly and you’ll give the programmer a tool with which to express tail-calls. This is a good thing, and it crops up later on.
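
To illustrate with a toy of my own (not an example from the memo): a tail call is a call with nothing left to do afterwards, so a proper compiler can turn it into a jump. Python won’t do this for you, but the transformation can be done by hand:

    def gcd_recursive(a, b):
        if b == 0:
            return a
        return gcd_recursive(b, a % b)   # tail call: the caller's frame is dead

    # Steele's transformation, applied manually: the tail call becomes a
    # GOTO (the top of the while loop) and argument passing becomes
    # assignment. No stack growth, same result.
    def gcd_loop(a, b):
        while b != 0:
            a, b = b, a % b
        return a

    assert gcd_recursive(252, 105) == gcd_loop(252, 105) == 21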

He has an amusing rant about the syntax of procedure calls giving them, in most languages, a distinct flavour from built-in operators. Like the fact you can’t pass a Fortran statement function as an argument, or you have to use «CALL ... USING» in COBOL. This part still rings true. In Python we can’t pass «+» as a function (though we do have «operator.add»); «(2).__mul__» is not the same as «lambda x: 2*x».
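
To spell out that Python aside (the NotImplemented behaviour here is CPython 3’s):

    import operator

    # operator.add is the function-valued stand-in for «+»:
    assert list(map(operator.add, [1, 2], [10, 20])) == [11, 22]

    # But (2).__mul__ is not lambda x: 2*x. int.__mul__ returns
    # NotImplemented for a float argument; 2 * 3.0 only works because
    # Python then falls back to float.__rmul__.
    assert (lambda x: 2 * x)(3.0) == 6.0
    assert (2).__mul__(3.0) is NotImplemented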

In Part E (one of the best bits), Steele shows how Yourdon’s rat’s nest state machine can be transformed from the “traditional” implementation, with an explicit state variable and a loop, to a “procedural implementation”. Steele considers this “structured” (“structured” as in no GOTOs, a buzzword of the time), and points out a further benefit: state transitions can pass each other information as parameters to a procedure, rather than using global (shared) variables. I would put it slightly differently: the liveness information the programmer is giving to the compiler is more honest.
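
Here is a sketch of that transformation on a toy machine of my own devising (nothing like as tangled as Yourdon’s rat’s nest): three states cycling red → green → amber, counting completed cycles. In the procedural version each state is a procedure, each transition is a tail call, and the count travels as a parameter rather than living in a shared variable. Python lacks tail-call optimisation, so a long run would overflow the stack; in a Scheme these calls compile to jumps, which is Steele’s point.

    # Traditional implementation: explicit state variable, a loop, and a
    # shared counter mutated on the amber -> red transition.
    def run_traditional(ticks):
        state, cycles = "red", 0
        for _ in range(ticks):
            if state == "red":
                state = "green"
            elif state == "green":
                state = "amber"
            else:
                state, cycles = "red", cycles + 1
        return cycles

    # Procedural implementation: one procedure per state; each transition
    # passes the remaining ticks and the cycle count as parameters.
    def red(ticks, cycles):
        return cycles if ticks == 0 else green(ticks - 1, cycles)

    def green(ticks, cycles):
        return cycles if ticks == 0 else amber(ticks - 1, cycles)

    def amber(ticks, cycles):
        return cycles if ticks == 0 else red(ticks - 1, cycles + 1)

    assert run_traditional(30) == red(30, 0) == 10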

This part, implementing a state machine using procedures, forms merely one component of a larger argument. That programming concepts and programming language constructs do not have a one-to-one correspondence. That is, though we have constructs like procedures for encapsulating modularity, and WHILE for iteration, we might use assignment to implement modularity (and yes, he has quite a good example of this), and procedures for iteration. And, he argues further, it is not up to the language implementor to guess what the programmer might do with each language construct. Implementors should give programmers all the reasonable tools they can, “otherwise, programmers will merely misuse the constructs they are given”.

This point is expanded in a note (“Various Optimizations”); there should not be just one way to compile a particular language construct, but compiler writers should “try to determine from a given program the best of all possible interpretations and produce code accordingly.” Sound, but somewhat glib.