I’ve already tweeted about this, but for all you Python programmers who have gotten indigestion from the complex process of installing virtualenv and virtualenvwrapper, check out brainsik’s virtualenv-burrito. It wraps up both tools and installs them for you in a single step, no thinking required.
For most of the aughts, the various Linux desktop projects tried to catch up to Windows by copying features wholesale, while Mac OS X innovated like crazy and blew past both by building an amazing desktop experience.
The omission of Chrome & Safari from Paul Rouget's pro-Firefox "Is IE9 a modern browser?" reminds me of that misguided attempt. I hope Firefox doesn't make the same mistake as desktop Linux; it would be a shame if they focused all their energy on trying to catch Internet Explorer while missing the rapid user-interface innovation going on elsewhere.
Update: Here’s an example of a great idea from 2007, from Alex Faaborg, out of the user experience team at Mozilla, that didn’t make it into Firefox 3.
I am going traveling very soon, so things will get pretty quiet around here. My posts here are split up into three categories, and each has a separate feed:
- Essays (subscribe here), my longer, in-depth writings, usually about technical subjects.
- Travelog (subscribe here), which is where I have written about my travels. Although I will be traveling soon, I'm not sure if I'll continue to write travelog entries.
- Blurbs (subscribe here), mostly links to things I find interesting, with very short commentary. For the last few months I’ve been posting interesting links & short commentary via Twitter instead, so if you like my blurbs, or want to respond to anything on here, you could follow me on Twitter.
There’s of course also a master feed for all posts. You can follow my posts on Digg, or follow my account on the awesome soup.io, which aggregates this blog, and my twitter, t-shirts, 3-d printing, and various other online presences. (Update 2011 April 6: Digg has removed import by RSS)
Those who followed Is Git really better than X will enjoy this interesting article by Armin Ronacher on the finer distinctions between Git and Mercurial’s branching. (via brainsik)
I think it’s particularly fascinating to look at how scientific beliefs about the functioning of the human brain have progressed through a long series of misconceptions.
Aristotle couldn’t believe that the brain, an inert grey mass, could have anything to do with thought; he assumed that the heart, hot and pulsing, must be the source of cognition, and that the brain’s function was simply to cool the blood.
Descartes assumed that the brain, with its aperture-like “cavities and pores,” was, along with the heart, part of an elaborate hydraulic system that controlled the flow of “animal spirits” through the “pipes” of the nerves. More recently, there was a longstanding belief that the cellular structure of the brain was essentially fixed by the time a person hit the age of 20 or so; we now know, through a few decades’ worth of neuroplasticity research, that even the adult brain is quite malleable, adapting continually to shifts in circumstances and behavior.
Even more recently there’s been a popular conception of the brain as a set of computing modules running, essentially, genetically determined software programs, an idea that is now also being chipped away by new research. Many of these misconceptions can be traced back to the metaphors human beings have used to understand themselves and the world (as Robert Martensen has described in his book The Brain Takes Shape).
Descartes’ mechanistic “clockwork” metaphor for explaining existence underpinned his hydraulic brain system and also influenced our more recent conception of the brain as a system of fixed and unchanging parts.
Contemporary models of the brain’s functioning draw on the popular metaphorical connection between the brain and the digital computer. My sense is that many scientific misconceptions have their roots in the dominant metaphors of the time. Metaphors are powerful explanatory tools, but they also tend to mislead by oversimplifying.
What other contemporary metaphors are misleading us about our world today?
Most of Scott Chacon’s points in Why Git is Better than X are spot-on. The page could even be renamed “Why distributed version control systems are better than Subversion and Perforce,” since those two are the clear losers. (And yes, Bazaar is so slow I think it deserves to be listed twice in the speed section.)
I’ve used each of Git, Mercurial, and Bazaar for several months in medium sized teams; I’ve done quite a bit of branching and merging in Mercurial and Bazaar, and a fair amount in Git. Based on that experience, I am compelled to disagree here with a few of the points, specifically with regards to Mercurial.
Cheap local branching
Chacon argues that only Git offers cheap local branching, but Mercurial allows exactly the same work-flows that he outlines, and they are just as easy. Chacon claims that Git’s branching model is different:
Git will allow you to have multiple local branches that can be entirely independent of each other…
This isn’t quite correct. The real difference between Git’s branching model and the others is that Git does not assume a one-to-one relationship between a logical branch and a directory on the file-system. In Git, you can have a single directory on the file-system and switch between different branches with the `git checkout` command, without changing directories.
Mercurial (and Bazaar) default to[1] a one-to-one relationship between a logical branch and a directory on the file-system. You can, however, use the `hg clone` command to make a new, “entirely independent,” branch. You can clone a local directory, a remote directory over SSH, or a remote Mercurial repository. Cloning a repository that’s on the local file-system is the way to create cheap local branches in Mercurial. Mercurial will even create hard links to save disk space and time.
For example, in Git, you make a cheap local branch with:
```shell
git branch featurebranch
git checkout featurebranch
```
And in Mercurial, you make a cheap local branch with:
```shell
hg clone master featurebranch
```
In Git, you switch between branches with `git checkout branchname`. In Mercurial, since branches are in a one-to-one correspondence with directories, you switch between branches with `cd ../branchname`.

In Git, you pull changes from a local branch with `git merge branchname`. In Mercurial, you pull changes from a local branch with `hg pull -u ../branchname`.
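To make the comparison concrete, here’s a sketch of the same round trip in both tools. (The repository and branch names are invented, and the Mercurial half only runs if hg is installed.)

```shell
# Git: one directory, multiple branches, switched in place.
mkdir project && cd project
git init -q
git config user.name "Example" && git config user.email "ex@example.com"
echo "v1" > app.txt
git add app.txt && git commit -qm "initial"
git branch featurebranch
git checkout -q featurebranch        # switch without changing directories
echo "feature work" >> app.txt
git commit -qam "feature work"
git checkout -q -                    # back to the original branch
git merge -q featurebranch           # pull the feature in
cd ..

# Mercurial: one directory per branch; a local clone is the cheap branch.
if command -v hg >/dev/null; then
  hg init master
  (cd master && echo "v1" > app.txt && hg add -q app.txt \
    && hg commit -u example -m "initial")
  hg clone -q master featurebranch   # hard-linked, so cheap
  (cd featurebranch && echo "feature work" >> app.txt \
    && hg commit -u example -m "feature work")
  (cd master && hg pull -q -u ../featurebranch)  # bring the feature back
fi
```

Either way, the feature ends up merged back into the main line; the only real difference is whether “switching branches” means `git checkout` or `cd`.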
Chacon’s four example work-flows using cheap local branches are not only completely possible in Mercurial, they are just as trivial as they are in Git. (The first three are even equally trivial in Bazaar; I’m not sure about the last one in Bazaar, though.)
Chacon continues:

You can find ways to do some of this with other systems, but the work involved is much more difficult and error-prone.
Reasonable people might also disagree about the intuitiveness of the specific commands, but creating, updating, merging, and deleting branches in Mercurial take one or two commands in the shell, just like in Git. None of these tasks is more difficult or more error-prone in Mercurial.
Chacon also says:
when you push to a remote repository, you do not have to push all of your branches. You can only share one of your branches and not all of them.
This is a little misleading. In Mercurial, since there’s usually a one-to-one correspondence between branches and local directories, and since you can’t be in two directories at once, you’re usually only pushing a single branch. In Mercurial, you generally don’t have to think about which branch you’re pushing, or whether you’re pushing other branches that shouldn’t be pushed, because you generally only ever push one branch at a time. I would actually turn this argument around, and claim that Git’s default of keeping multiple logical branches in a single directory forces you to worry about which branches you’re pushing, and actually makes cheap local branches harder and more error-prone in Git than in Mercurial.
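As a sketch of what this looks like in practice (repository and branch names invented), here’s a Git session where only one of two local branches gets published:

```shell
git init -q --bare central.git            # stands in for the shared remote
git clone -q central.git work
cd work
git config user.name "Example" && git config user.email "ex@example.com"
echo "v1" > app.txt
git add app.txt && git commit -qm "initial"
git push -q origin HEAD                   # publish the current branch
git checkout -qb featurebranch
echo "wip" >> app.txt && git commit -qam "wip"
git checkout -qb private-experiment
echo "secret" >> app.txt && git commit -qam "secret"
git push -q origin featurebranch          # only featurebranch is shared
cd ..
git --git-dir=central.git branch          # private-experiment never left home
```

In Mercurial, `hg push` run from inside a clone directory does the equivalent implicitly: only that clone’s history goes up.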
Reasonable people might disagree about the intuitiveness of a branching model that does or does not assume a one-to-one relationship between logical branches and local directories, but that’s not what makes cheap local branches so powerful in Git. Rather, it’s the ability to clone and merge between repositories locally that enables cheap local branches, and that’s just as easy in Mercurial as in Git, and nearly as easy in Bazaar.
GitHub is unarguably the biggest community around any distributed version control system. But Chacon goes too far when he says:
This type of community is simply not available with any of the other SCMs.
BitBucket may not be as big as GitHub, but it’s a “socially-targeted” community where Mercurial users can “fork and contribute” to other Mercurial projects. It might not have as many projects or contributors as GitHub, but a sheer difference in size doesn’t translate to “simply not available.”
Bazaar’s not listed as inferior to Git on this point, I assume because of Launchpad. If Launchpad counts, why not BitBucket?
I’d also be interested to see usage numbers comparing GitHub and everyone’s favorite open-source community brontosaurus, SourceForge. Has GitHub surpassed them in projects or users or commits or downloads?
Minor haggling points
I also disagree with Chacon about the benefits of the staging area, or index, but that’s purely a matter of personal preference. I never wanted anything like the staging area before I started using Git, and I don’t miss it now when I’m using Mercurial. If anything, the extra concept of a staging area makes Git slightly harder to learn; newbies coming from any other version control system have to be taught about the `-a` option to `git commit` right away, or else they wonder why their changes aren’t getting committed.
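Here’s a minimal sketch of that newbie surprise (file and commit names invented):

```shell
git init -q demo && cd demo
git config user.name "Example" && git config user.email "ex@example.com"
echo "v1" > notes.txt
git add notes.txt && git commit -qm "initial"
echo "v2" > notes.txt               # edit an already-tracked file
git commit -qm "update" || true     # surprise: nothing gets committed
git commit -qam "update"            # -a stages tracked changes first
cd ..
```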
In the “Easy to Learn” section, Chacon highlights the `add` commands in both Mercurial and Git, indicating that they are the same; but they are pretty different. In Mercurial, `add` schedules previously un-tracked files to be tracked. In Git, `add` adds the current changes in a file to the staging area, or index, including, but not limited to, previously un-tracked files. That section is also missing Mercurial’s `mv` command. Other potentially confusing differences include Git’s `revert`, which corresponds to Mercurial’s `backout`; and Mercurial’s `revert`, which can be duplicated using Git’s `checkout`.
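To spell out that last pair of differences, here’s a sketch in Git, with the Mercurial equivalents noted in comments (file and commit names invented):

```shell
git init -q undo-demo && cd undo-demo
git config user.name "Example" && git config user.email "ex@example.com"
echo "good" > file.txt
git add file.txt && git commit -qm "good change"
echo "bad" > file.txt
git commit -qam "bad change"
git revert --no-edit HEAD        # like hg backout: a new commit undoing an old one
cat file.txt                     # back to "good"
echo "experiment" > file.txt
git checkout -- file.txt         # like hg revert: throw away uncommitted edits
cat file.txt                     # "good" again
cd ..
```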
- [1] Update: Thanks to David Wolever, who points out that `hg bookmark` can be used to have more than one branch per directory in Mercurial. I didn’t even know about these commands when I first wrote this post. But I think my central point still holds; creating cheap local branches in Mercurial is just as easy as in Git. [↩]
In the middle of a major, hairy refactoring recently, I codified a tactic for refactoring that I’d been using, unconsciously, for years. The tactic is to break down refactoring into two modes, and deliberately switch between them:
- Clean-up mode: Clean up and reorganize the code, without changing the code’s external functionality.
- Change mode: Make the smallest, simplest self-contained change, making sure the code is functional again when it’s completed.
You start in clean-up mode, and switch back and forth as needed. This tactic is probably unsurprising to experienced programmers, but for those of you not yet as hard-core as Donald Knuth, let me explain why it’s a good idea.
Separating clean-up and changes into discrete modes of working gives you less to think about at any one time. A refactor can be very invasive—code that was closely tied together is now completely decoupled, or vice-versa; functions, objects and modules become obsolete or nonsensical, or split apart or merge in unexpected ways; identifiers become ambiguous or collide. If you’re trying to reorganize everything at the same time you’re juggling old and new code in your head, it’s easy (for anyone) to get lost in the maze.
When you’re consciously in clean-up mode, you can focus just on tasks like making sure variables have unambiguous, correct names, object / module / function / &c. organization and boundaries are sensible, and so on. You’re not changing any functionality, so your tests (you do have tests, right?) still pass, and the application still behaves as it always has (you did test it manually, right?). And you can be liberal in your clean-up; if you end up improving code that isn’t ultimately affected by the refactor, there’s no harm in that.
Once you feel like the code is clean and ready for the changes in functionality, commit your clean-up (you are using version control, right?). I usually use a commit message like “clean-up in preparation for….”
Now switch to change mode, and start making the changes. Quite often, you’ll discover something else that needs to be cleaned up. But you’re in change mode, not clean-up mode. Since you’re only making the smallest, simplest self-contained change, this new clean-up can probably wait until later. But if it can’t wait until later, then roll back your changes (or save a patch, or store your work in the attic, or the stash, or shelve it, or whatever the kids are calling it these days). Complete the new clean-up, commit it, and then go back to change mode and your half-completed changes.
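In Git, for example, that detour might look like this sketch (file and commit names invented; Mercurial has the similar “shelve” extension):

```shell
git init -q refactor-demo && cd refactor-demo
git config user.name "Example" && git config user.email "ex@example.com"
echo "feature stub" > feature.txt
echo "messy helper" > util.txt
git add . && git commit -qm "clean-up in preparation for the feature"
echo "half-finished work" >> feature.txt   # change mode, mid-change
git stash -q                               # park the half-finished change
echo "tidy helper" > util.txt              # the clean-up that couldn't wait
git commit -qam "further clean-up"
git stash pop -q                           # back to change mode
cd ..
```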
This tactic has some additional benefits that might not be immediately obvious, too:
- Clean-up mode forces you to read over all of the code that’s going to be changed and ask, “How is this code going to be affected by the changes I’m planning?” Sometimes you discover unanticipated edge-cases or bugs in your planned changes. Sometimes you realize the whole plan is flawed, and have to go back to the drawing board. And sometimes, after a good, hearty clean-up session, you realize that you can make the required changes in a less invasive, simpler way. I find that most refactors are more clean-up than anything else.
- If the codebase has a “master” or “main” branch, and you use a “feature” or “working” branch for your refactor, you can put the clean-up commits into the master branch (they don’t change any functionality, right?) and only commit the functional changes to the feature branch. What’s the point of that? Well, everyone else gets to benefit from your code clean-up right away, and when you do end up merging your functionality changes into the master branch, because it’s a smaller delta, there’s a smaller chance of conflicts.
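A sketch of that split in Git (repository, branch, and file names invented):

```shell
git init -q split-demo && cd split-demo
git config user.name "Example" && git config user.email "ex@example.com"
echo "old code" > app.txt
git add app.txt && git commit -qm "initial"
echo "cleaner code" > app.txt
git commit -qam "clean-up in preparation for the feature"  # stays on the main branch
git checkout -qb feature             # functional changes go on the branch
echo "new behavior" >> app.txt
git commit -qam "add new behavior"
git checkout -q -                    # teammates can pull the clean-up now
git merge -q feature                 # later: a smaller delta, fewer conflicts
cd ..
```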
The next time you’re staring down the barrel of a nasty refactor, and cursing the person who didn’t fully think through the business requirements, try this. It won’t make every refactor a piece of cake, nor will it become the hot new programming methodology acronym down in the valley (2SR, anyone? 2SRT?), but I guarantee you’ll be glad you tried it.