Ops Pearl: fast-forwarding a Git clone without a full pull


tl;dr - How to fast-forward a git clone without a full git pull, shamelessly stolen from a fantastic Kubecon talk given by some team members of the Internet Archive

I’m still working my way through the videos from Kubecon 2018 (thanks to CockroachDB for sponsoring the recordings), but one of the talks I really enjoyed was given by two team members from The Internet Archive, who shared how they migrated some of their workloads to Kubernetes and got massive gains in usability without everything catching fire.

The talk, given by Tracey Jaquith and David Van Duzer, is titled “Migrating Internet Archive to Kubernetes”, and I’d encourage giving it a watch (Protip: watch at 1.5x or 2x speed).

The talk also discussed Gitlab and one of its best features, AutoDevOps, and in particular Review Apps – something I’ve only really seen at companies that are doing engineering really right (or can afford to). Gitlab really offers a crazy amount of tooling that turns into velocity for companies, but today’s post isn’t quite about how awesome Gitlab is.

One of the pearls from the talk (that I instantly thought was worth sharing) was a few lines of git commands to fast-forward a shallow clone (you can also watch the moment in the talk):

$ git checkout --detach
$ git fetch --depth 1 -f origin [<branch>|<hash>]
$ git checkout -f [<branch>|<hash>]
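To see the trick end-to-end, here’s a self-contained sketch you can run locally. The repo names and paths are made up for the demo, and where the talk’s commands take a branch name or hash, this sketch fetches the remote HEAD – the mechanics are the same:

```shell
#!/bin/sh
set -e

# Scratch area for the demo (hypothetical paths, just for illustration).
demo=$(mktemp -d)
cd "$demo"

# Create an "origin" repo with one commit.
git init -q origin-repo
cd origin-repo
git config user.email demo@example.com
git config user.name Demo
echo v1 > VERSION
git add VERSION
git commit -qm "v1"
cd ..

# Shallow-clone it at depth 1, like a deploy target would.
git clone -q --depth 1 "file://$demo/origin-repo" deploy

# Meanwhile, origin gains a new commit (e.g. a version bump).
cd origin-repo
echo v2 > VERSION
git commit -qam "v2"
new=$(git rev-parse HEAD)
cd ../deploy

# The trick from the talk: detach, fetch just the new tip (still depth 1,
# so the clone never deepens), and force-checkout what we fetched.
git checkout -q --detach
git fetch -q --depth 1 -f origin HEAD
git checkout -qf FETCH_HEAD

echo "deployed: $(cat VERSION) at $(git rev-parse HEAD)"
```

The clone stays one commit deep the whole time, which is exactly the point – you jump to the new tip without downloading the history in between.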

As mentioned, the point of these commands is to fast-forward to a commit (maybe a simple, quick change like a version increment) without pulling a potentially large repository. I’m personally interested in this because I’m pretty excited by how easy Gitlab makes it to trigger pipelines across projects – so easy, in fact, that I’ve written about automatically building Swagger-derived clients w/ Gitlab before, and actually do it in production projects (how I do it these days is a little different, but the old code should still work as well). Teams that I’ve worked with that have barely heard of Swagger/auto-generating clients are usually pretty impressed when a new one gets made @ every tag release.

Overall the video was fantastic – seeing their approach, how they moved data, and how they navigated and ported legacy systems. There’s lots of insight like this (though it can be hard to sift through the talks) in the Seattle Kubecon 2018 vids, so check them out!

Did you find this read beneficial? Send me questions/comments/clarifications.
Want my expertise on your team/project? Send me interesting opportunities!