29 August, 2025
I recently updated the articles list, and added a counter to the link at the bottom. Which, as I draft this post, shows a count of 99.
So this is the 100th post on this site. Big news!
Except of course it’s not. Not really. Not all my writing is on this site. I still have a bunch of “imported” articles sitting in the wings, retrieved from older versions of this site, some dating back longer than I care to admit. Many are more dated than old, if you know what I mean.
I read recently that if you’re not embarrassed by your old work, you haven’t really grown, which in many ways means I must have grown quite a lot in the last couple of decades. I mentioned on my updated about page that I have been writing online since the last century, and it’s true. My first website was published back in 1998, on some free space that came along with my first dial-up connection from Freeserve. I can’t even remember how I put it together, likely on some bootleg copy of Dreamweaver, which would have been brand new software at the time.
That original site was named after the first four letters of our family car’s registration plate - about the only thing that came to mind when we were asked for a username and found out that Barrett had already been taken. Turns out there are still some remnants of it on the Wayback Machine, albeit from a couple of years later.
Having had a quick revisit of that site, some of it is definitely toe-curlingly cringe (as I believe the kids call it). Some of it is familiar, though: links to my first few commercial web projects (sites for local bed and breakfasts, and one for a local hip hop label), some essays I had written, and a page of “rantings” that was clearly a blog before that term had been invented.
Halting, blustering, simultaneously full of my own importance and crushingly shy. Some things change, some things remain the same.
And so here I am, still bashing keys and making marks on a screen, hoping someone is reading and taking some sort of enjoyment from all this. Even if it turns out to just be me, some twenty plus years later.
25 August, 2025
I have written before about how critical it is to be clear on what we mean when we say certain things. Today, I would like to be clear on what I mean when I say “productivity”.
Productivity is often discussed in terms of business or commercial contexts: how to squeeze those extra drops of productivity out of your day so you can ace those TPS reports while also landing your own personal best on your marathon time. While that can certainly be an example of productivity, it can also be a path to burnout.
See, “productivity” is really just the name of a measurement. Taken literally (and I love taking things literally when looking to find clarity), it is simply a measure of “productive activity”.
Activity seems easy enough to understand: it’s the sum of actions taken in a particular time. Hopefully nothing controversial there.
But what does it mean for activity to be “productive”? Again, be literal: what is being produced?
In your boss’s mind, it may well be “value for the company”, specifically those TPS reports, or a new feature, or some bug fixes, or a strategy document. Those are all “products”. And so from a work perspective, those could be good examples of intended products.
There are also unintended products. You might be producing conflict with your activity. Or bugs. Or GDPR nightmares. These are also products, but it would be hard to argue that you’ve been “productive” by introducing a security hole that leaked your entire customer database.
So let’s refine this: we care about measuring activity that leads to intended products.
Is that enough to give us a definition? Let’s see if it fits:
Productivity is a measure of how successful our activities have been in outputting intended products.
There’s still something not quite right about this. I like being literal, but “product” feels off here. When I think about “product” I think about little packaged things. This is a very commercial definition, and actually quite limiting. There are so many things we “produce”, and some aren’t even really “things” at all. Can we broaden our thinking here to make this more universally applicable?
I like to reach into the world of stage magic, here: magicians often talk of producing an “effect”, like making a card disappear, or making a rabbit appear from a hat. In these cases, what is being produced is the “effect” on the audience, as much as it is the rabbit.
I prefer this idea of thinking of our productions in terms of the “effect” we have on the world around us. To channel Steve Jobs, what we’re really producing is our “dent in the universe”.
So let’s revise the definition:
Productivity is a measure of how successful our activities have been in producing intended effects.
This feels much more workable. It absolutely satisfies those TPS reports and that marathon personal best as being “high productivity”, but it also introduces one critical consideration: Intention.
Presumably your boss’s intentions at least somewhat align with yours (they want those TPS reports, you want to get paid). But the marathon? Are you doing that because you intended to? Or because you were trying to live up to someone else’s expectations?
Conversely, if your intention is to produce a completed series rewatch of Stranger Things, then bingeing on Netflix over the weekend absolutely counts as being productive. And this is not a bug, this is a feature.
See, all these “productivity” tools are just that, tools. You might watch a bunch of YouTube videos about being super efficient with a hammer, or read some articles about getting really skilled at wielding a to-do list, but none of these can really tell you what you intend to do with those tools.
So the really key part of all of this, and without which “productivity” becomes just another treadmill, is to put in the work to get really clear on your intentions.
Because you have more control over that to-do list than you think. Sure, you might not be able to say “no” to those TPS reports that you’re stuck with compiling, but you (hopefully) don’t live at work. And the tools that work on your to-do list in your job also work on your to-do list for the rest of your life too.
And no, that doesn’t mean you have to treat your downtime like “work”. That’s why the admission of that Stranger Things binge to the definition of “productivity” is absolutely a feature. However, if we’re not able to be intentional with our lives, just like at work, you can guarantee there are plenty of people who are ready to fill up our to-do lists with their intentions. As Hank Green points out, nobody is bragging about starting their second hour on TikTok.
But start our second hour we do. And even if you feel like you have very little time in your life that you truly have control over, I can almost guarantee that that first hour on TikTok was something you seemed to have complete control over.
Except you didn’t. TikTok did. It took control and convinced you that you were treating yourself, that it was your own choice. It felt like a well-earned “break”. And that’s where the “productivity” tools can help us avoid this trap of spending our break times working on someone else’s farm.
What if we spent some time coming up with a few things we actually want to do, whether it’s finishing that book, writing that article, painting that picture, going for that hike, lying in bed, whatever it is? Just took five minutes and wrote a list.
Then what if we reviewed all that “productivity” stuff and applied it to our list, instead of someone else’s?
Because there’s my definition, what I mean when I talk about productivity: taking a few minutes to decide what you want, right now, and then applying some activity to produce it.
9 August, 2025
There’s a story I love to tell when I’m talking about enabling autonomy in teams. It was the first time I remember consciously letting the team plot their own course without either abdicating my role as the lead, or trying to “Jedi mind trick” them into thinking they had plotted their own course.
It was pretty early on in my management career, possibly the first major project I had been involved in from the start. It was time for the team to sit down, look at the problem, and start to formulate a solution. I was terrified, and had spent the previous week doing nothing other than running through the context, the current implementation, various bits of tech debt and bugs: effectively running a dress rehearsal on the whole planning session ahead of time, myself.
Why was I terrified? Despite having been at the company for five years before moving to management, this particular team worked in a domain that I had very little knowledge about. How was I supposed to lead them unless I knew as much as they did about everything? I didn’t want to let them down by being clueless.
I knew exactly how this project needed to go, exactly how it needed to be broken down. I even had a good idea who should work on what, based on skills, experience, upcoming holidays, even what kind of growth the team members had on their career maps.
All I needed to do was present it to the team. I was prepped, they would feel properly looked after, it would be great.
But as I walked into the meeting room to get the projector set up ahead of the start, something buzzed in the back of my brain: something my predecessor had said. Our job isn’t to stop them driving off the cliff; rather it’s to be there to roll up our sleeves, help them pick up the pieces, and figure out what went wrong. There was a nagging feeling that despite all my prep, this was going to be a disaster.
Still, I had my documents all ready to present. Work breakdowns, maps of the code, Gantt charts, the full thing. I couldn’t just abandon that, could I?
People started to file in. We had one remote engineer to dial in, made sure they were able to see everything (we had a dedicated in-room buddy for every remote team member, so they were on an iPad that their buddy could move around to better see what was going on), and so we began.
I pulled up the brief document which outlined the problem we were trying to solve, the constraints, how we were going to measure success, who the stakeholders were. All the starting points for the planning I had done.
I read through it, let them ask some questions, and was all ready to skip to the next tab: the one I had lovingly called “The Plan”.
And I paused. This was where the disaster would start. My gut told me to ask a question.
“So,” I turned to the room rather than talking to the projector. “Where should we start?”
There was a brief pause before our remote engineer spoke up. “Well, we obviously need to chat with our contacts management team: this is going to bump into a bunch of code they manage.”
I breathed a sigh of relief. This was exactly on my plan, and so this might work. They were going to reinvent the plan I had for them. I wouldn’t need to be a dictator; the Jedi mind trick had worked.
“Hang on”, another voice jumped in. “No we don’t. The problem we’re actually solving for has nothing to do with contacts. That’s just in the success metrics. I can see why it’s there: it’s the easiest thing for us to measure. But it’s not actually needed to solve the actual business problem”.
I stopped. Wasn’t it? I skimmed the problem statement again. No, we could bypass the contacts altogether. I had completely missed that, as had our stakeholder. We had veered off course. I fought the panic for a moment.
“Okay,” I started, terrified that I’d fucked up but also genuinely curious. “Say we bypass touching contacts. How do we measure the impact?”
More silence. Then: “Those are just a proxy for usage of this new feature. We could measure directly if we added in some telemetry here and here. Hang on, let me show you.”
The projector switched to another laptop screen and up came some code I had seen but not fully understood. “Look”.
The next ten minutes saw the team fully engage on this new idea. Code was pulled up, a quick diagram was sketched on our remote whiteboard, and suddenly we were starting to form a plan. I kept on asking insightful questions (only insightful because I was genuinely curious why things were different from my plan, but they didn’t know that), and the conversation flowed for a further hour.
Some of my plan (just over half of it) ended up being reinvented, but what we landed on in the end added up to about 60% of the effort I had originally projected to myself, and had informally budgeted for when managing stakeholder expectations.
We had just manufactured four weeks of time. And all because I had the sense to keep my damn mouth shut.
See, what I realised later was that it wasn’t that I’d made a mistake doing the planning, but that it had been essential to help me be the best possible coach in the moment. Having an idea of how to solve the problem, but not sharing it, helped give me something concrete to compare to. I could ask helpful questions, not just dumb manager ones. But the shock of having a blind spot revealed to me so early helped me avoid poisoning the well by trying to steer them back to my plan.
I had context, useful knowledge, curiosity, and a genuine incentive to defer to their superior understanding of the existing implementations.
And by trusting that, the team now had a plan that they owned, that they felt genuinely invested in, that they understood and could adapt to changes, because it was their plan. Oh, and we had also managed to buy a month of refactoring at the end of the project.
It was at that point I resolved to avoid sharing my ideas till as late as possible in any conversation. I still fall into this trap too often, but it’s a powerful technique when managing a team that has been deep in the code for long enough, and is more in need of being guided in processes or business context.
In short, do your homework so you can ask good questions, rather than give good answers. Ask the questions. And then shut the fuck up.
21 July, 2025
One of the things I love about the span of time is stuff like the timeline of tool usage by humans. Homo Sapiens evolved around 300,000 years ago, but there’s evidence of hominid tool usage dating back over 2 million years.
That, it seems fair to say, is a long time.
It’s also fair to say that we Homo Sapiens know how to master our tools. We literally evolved alongside them, and have never known a tool-free world, as a species.
Which makes it so surprising to me that we still seem so skeptical of new tools when they come along. The synthesiser was seen as the death of music, because why would anyone want to learn the cello when you can just press a key to make the perfect sound every time? The keyboard was seen as the death of handwriting, since why bother learning how to write? The calculator the death of arithmetic, the camera the death of painting, the bicycle the death of walking.
Even writing (writing!) was seen by some ancient Greek thinkers as the death of memory.
And yet the human capacity for integrating new tools into our (literal) toolbox remains undefeated. Rather than tools limiting human creativity and capability, in every single instance the tools have always been additive. The trick is to avoid seeing the tool in terms of what it replaces, but rather in what it enables. Photography enables the capture of fidelity in a way that created a brand new branch of art using the camera, while also freeing painting from the need for realism. Electronic instruments allowed for new, previously impossible speeds and accuracy, while also freeing traditional musicians to be able to explore new areas of creativity inspired by their digital bandmates.
And yes, this is another post about AI. A reaction, this time, to the idea that the goal of AI is to somehow make everything effortless, and that by seeking to abolish effort, we somehow risk losing something essential about ourselves.
The idea goes that the hard work is the thing that makes the work itself capable of greatness. Remove the hard work, and the result will be bland. Unearned. Unoriginal. It will miss that human something. Further, our grit and our determination will atrophy, and we will find ourselves unable to create any more, subject only to the slop that AI can produce for us.
I argue that not only does our history with tools suggest this is nonsense, but also that it misses the point. Hard work is not the only signifier of endeavour. As a counter to this, consider the state of flow: that place where we find that we are tackling tasks with ease, effortlessly, our skills and our whims aligned to create what we want.
Is flow effortless? Effortlessness is one of its defining characteristics! Is it somehow bland and unearned? I would say not.
The “hard” part here is triggering that state. Flow can often feel like an accident.
But what if AI could be used as a tool to help, to make us “accident prone” as it were? What if AI could be used to coach through the blank page, the fear of failure, the fear of success? What if it could be used to nudge rather than solve, to offer different ways of looking at a problem?
Sure, some will use AI to simply solve the problem — one of the many things we have evolved is a fine sense of calorie efficiency — but the creative ones among us should be able to find ways to use AI to enhance their abilities. Not to tell them new ways of looking at the world, but to prompt them into finding their own new ways of seeing the world.
As with the previous centuries of tools, though, those creatives that learn to harness this new tool may not be the ones who were proficient with the old tools. And that’s a shame, because it’s the same deep curiosity that drives both. The same desire when confronted with a new idea to figure out how to use it to do more of what we love, better, faster, brighter.
And if that tool allows more people to participate? If it can get more people to write, to take photos, to compose, to push past their inhibitions and create? Isn’t that part of the goal of humanity in the first place?
This is why I choose optimism. I choose to hope that we can find a way through this current inflection point, just as we have before, just as we have for our entire existence, and just as our ancestors did for literally millions of years.
Yes, AI is different, but so was everything else. Our history, our pre-history, and the history of our entire species, is one of bending tools to our will.
I choose to believe this fire will be tamed.
13 June, 2025
For the longest time, I was sure these LLM chatbots were the next crypto grift: ideas that had existed for decades, implemented in the worst possible way, but with a lick of paint and a shiny marketing campaign, designed to separate the gullible from their money. The best strategy, I thought, was to sit it out, watch others lose their shirts, and wait for it all to blow over.
But I may have to go back a little further to see history repeating.
I think now that the current LLM craze is this generation’s dotcom bubble.
You see, the internet in general was a transformational technology, but it was the web, built on top of it, that made it tangible: it felt like we had productised it. We had the internet in a box now, and all that was left was to come up with clever ideas, package them, and become rich.
So along came the speculators with their late-90s ideas that would revolutionise humanity! They all shared two common features (besides being in cyberspace):
Whether it was that people would be comfortable ordering clothes sight unseen, or typing credit card details into online forms, or watching movies on tiny screens, or listening to music on crappy speakers, or waiting three days to download software that they’d be faster driving to the store to get, the ideas all raised huge sums of money by focusing on the hype, and glossing over the real problems that needed to be solved.
And a lot of people bought into the hype. And they spent a lot of money. And then it all blew up and people lost their jobs, or their savings, or both. And they were angry, felt cheated, felt lost, and struggled to see a path forward. Many wrote the web off as a fad.
So what’s the analogue now? Just like the early dotcom companies, there’s a lot of easy hype money to be made by selling the future, then cashing out when people realise it isn’t here yet. It’s also easy to roll our eyes at how ridiculous the idea is that an LLM could do whatever it is that we’re being told it can do, or get angry at how expensive it all is to run, at how wasteful and energy hungry the technology is, at how immoral the training data is, at how dystopian the disruption of artists and writers and programmers and musicians and actors will be.
All of this is true. But look back at the list of dotcom problems above, and the funny thing is that all those problems eventually got solved so comprehensively that they seem positively archaic now. Back in 2000, boo.com was the poster child for how ridiculous it all was. As if people would ever buy clothes online! Hah.
Maybe the stakes are higher now, but LLMs, or whatever comes next, will soon be able to do the things they can’t do now. And like the web, that will both change everything and change nothing at all.
What I do know, though, is that the people who navigated the dotcom bubble to stay relevant were the ones who saw that whatever happened, the toothpaste was out of the tube. The web was here to stay, so they rolled up their sleeves and started working to address those problems, to build the future, rather than laugh at it or yell at it. I suspect the same will be true today — we can try to put the toothpaste back in the tube, or be angry at the ones who squeezed it out.
Or maybe we can roll up our sleeves once again, figure out how to take control of the technology again, to use it to build the future we want. Maybe it’s impossible, or if it is possible it won’t last, but since when was that a reason not to try anyway?