Fred Brooks (1931-2022)

I just learned that Fred Brooks has passed away.

If you are outside software development, it is unlikely that you have ever heard of him, but inside the field, he was a giant. He is not known so much for his technical achievements, though they were impressive, as for his seminal work “The Mythical Man-Month”, first published in 1975, in which he made remarkable claims like “[a]dding manpower to a late software project makes it later”. The book is also famous for the observation that if it takes a pregnant person 9 months to give birth to a baby, adding 8 more people won’t reduce that to one month – showing why the man-month is a nonsensical unit for work that cannot be partitioned.
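Part of Brooks’ argument is that the effort added by new people gets eaten up by coordination, since the number of communication channels between n people grows as n(n-1)/2. A toy sketch of that arithmetic (my own illustration, not taken from the book):

```python
# Toy illustration (mine, not from the book) of one of Brooks' arguments:
# the number of communication channels between n people grows as n*(n-1)/2,
# so adding people to work that needs coordination quickly adds overhead.
for n in (2, 5, 10, 20):
    channels = n * (n - 1) // 2
    print(f"{n:2d} people -> {channels:3d} communication channels")
```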

The Mythical Man-Month is still a book that I recommend to people working in software development (I’d suggest getting the 20th anniversary edition from 1995, which has four extra chapters).

Knowing your limits

Yesterday, I fell down a rabbit hole of watching clips from past episodes of Masterchef Canada. I like the Canadian version of the show because the people in it are truly skilled, and the judges are not assholes (unlike the US version). While watching those clips, I came across this clip.

The setting is that contestants are split into two teams, who have to make the food for the customers at one of the judge’s restaurants.

The video is focused on the eventual winner of that season of Masterchef Canada, Beccy Stables, but I wanted to share the clip not because of her performance, which was outstanding, but because of the great example set by the team captain, Kaegan Donnelly. He realized that he was out of his depth, and instead of trying to cling to the position, he stepped aside and let Beccy Stables take over the role, allowing their team to win.

I have worked with many great people over the years, and this is one of the rare skills that set them apart – the ability to look beyond their ego, realize what is needed while knowing their limits, and then step aside and let others do the job.

Not everything advanced in Computer Science is AI

IEEE Spectrum has an article, Stop Calling Everything AI, Machine-Learning Pioneer Says, in which Michael I. Jordan addresses the overuse of the term artificial intelligence in Computer Science:

Artificial-intelligence systems are nowhere near advanced enough to replace humans in many tasks involving reasoning, real-world knowledge, and social interaction. They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley.

He notes that the imitation of human thinking is not the sole goal of machine learning—the engineering field that underlies recent progress in AI—or even the best goal. Instead, machine learning can serve to augment human intelligence, via painstaking analysis of large data sets in much the way that a search engine augments human knowledge by organizing the Web. Machine learning also can provide new services to humans in domains such as health care, commerce, and transportation, by bringing together information found in multiple data sets, finding patterns, and proposing new courses of action.

I think this is an important point. Machine learning is an incredibly powerful tool, but it is all too often bunched together with artificial intelligence, rather than considered a toolset in its own right – one that can be used for creating artificial intelligence.

There are many uses of machine learning – I have seen it used for fraud detection, document analysis, and even as a tool for creating faster builds and deploys for developers. In the latter case, machine learning was used to figure out which tests needed to run, depending on a number of factors, including what code was edited, the track record of the developer, and the complexity of the code. I have yet to come across artificial intelligence being used in a daily setting.
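To make the test-selection example a bit more concrete, here is a hypothetical sketch of how such a system could work; the features, data, and threshold are invented for illustration and do not describe any particular product.

```python
# Hypothetical sketch of ML-based test selection: predict which tests are
# worth running for a given change, based on features such as files touched,
# the author's history, and the complexity of the change.
# All feature names and numbers below are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [changed files overlapping the test's dependencies,
#            author's historical failure rate, complexity of the change]
X_train = np.array([
    [3, 0.20, 12],
    [0, 0.05, 2],
    [5, 0.40, 30],
    [1, 0.10, 4],
])
y_train = np.array([1, 0, 1, 0])  # 1 = the test failed for that change

model = LogisticRegression().fit(X_train, y_train)

# For a new change, run only the tests whose predicted failure probability
# exceeds a threshold; everything else is skipped to speed up the build.
candidate_tests = {"test_checkout": [2, 0.20, 8], "test_login": [0, 0.05, 1]}
for name, features in candidate_tests.items():
    p_fail = model.predict_proba([features])[0][1]
    if p_fail > 0.3:
        print(f"run {name} (predicted failure probability {p_fail:.2f})")
```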

As Michael I. Jordan points out, there are serious considerations to keep in mind when dealing with machine learning, without having to try to make it into something more grand:

“While the science-fiction discussions about AI and super intelligence are fun, they are a distraction,” he says. “There’s not been enough focus on the real problem, which is building planetary-scale machine learning–based systems that actually work, deliver value to humans, and do not amplify inequities.”

The article links to an interesting article by Michael I. Jordan going more into the subject of artificial intelligence being a way off: Artificial Intelligence—The Revolution Hasn’t Happened Yet

This article also gets into the difference between machine learning and artificial intelligence:

Most of what is labeled AI today, particularly in the public sphere, is actually machine learning (ML), a term in use for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions, and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical, back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase ‘data science’ emerged to refer to this phenomenon, reflecting both the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, as well as reflecting the larger social and environmental scope of the resulting systems. This confluence of ideas and technology trends has been rebranded as ‘AI’ over the past few years. This rebranding deserves some scrutiny.

Historically, the phrase “artificial intelligence” was coined in the late 1950s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. I will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the notion that the artificially-intelligent entity should seem to be one of us, if not physically then at least mentally (whatever that might mean). This was largely an academic enterprise. While related academic fields such as operations research, statistics, pattern recognition, information theory, and control theory already existed, and often took inspiration from human or animal behavior, these fields were arguably focused on low-level signals and decisions. The ability of, say, a squirrel to perceive the three-dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these fields. AI was meant to focus on something different: the high-level or cognitive capability of humans to reason and to think. Sixty years later, however, high-level reasoning and thought remain elusive. The developments now being called AI arose mostly in the engineering fields associated with low-level pattern recognition and movement control, as well as in the field of statistics, the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses, and decisions.

We need to dismantle the myth of the genius programmer

After writing the headline, I realized that there are actually two myths around genius programmers – the one I am going to address in this blogpost, and a myth surrounding the importance of genius programmers in teams, which I might have to address some other time (short hint: teams are more important than any individual).

Yesterday, I spent a couple of hours at the Emergent Works 2020 Summer showcase, where people who were part of the mentee program at Emergent Works showed what they had learned over the summer. It was really impressive, and showed how a good mentor can help you learn a lot in a short time.

During one of the presentations, one of the mentees said that one of the most important things he had learned was that you don’t have to be a genius to be a programmer – which had always been his impression before.

Since I have been working in the tech field for a couple of decades, I tend to forget how outsiders perceive the people in it, so this comment really made me think about the perception many people have of those working in tech – especially people who don’t really know anyone in the field. And it is true, there is the whole idea that to be a programmer, you have to be a genius.

This impression is perpetuated by the stories we get out of the tech field about the big successes, generating multi-million fortunes for the founders and early employees. Here the people involved, mostly young white men, are usually presented as geniuses who have done something that no normal person could have done.

The truth is, this is just a myth. A damaging myth.

There is obviously a level of skill involved, but a lot of it has to do with connections and the sheer dumb luck of being at the right place at the right time.

In reality, the tech field is not characterized by these people. Most people who work in the tech field are not geniuses, but rather ordinary people who have learned a particular skill set. Not necessarily an easy skill set to learn, but one that most people can learn if they have the chance.

It is also important to remember that many people who work in tech don’t program, but fulfill other roles, such as tester/QA, business analyst, or program manager. Here the skill set needed is different, and again something most people can learn.

In a time when we desperately need more people to go into tech, we need to dismantle this myth of having to be a genius to work in tech. We obviously welcome geniuses, but most people, including those working in tech, are ordinary people. We need to show everyone that tech is a viable path, even if you haven’t grown up with a computer, and even if you don’t spend all your spare time programming.

Note that I am not arguing that working in tech is necessarily easy. It is a field that is constantly changing, and where you need to put in some effort to keep up. But this is true for many other fields as well, and no one claims that you have to be a genius to work in those. Instead, people agree that you have to put in some effort to get into the field, and to stay in it.

So, in other words, the myth of having to be a genius to learn to program, or to work in tech, is one that needs to go. We need it to go, because it is a barrier for people who are well qualified to work in the field, but get turned away by the belief that it requires something extraordinary of them. This needs to end.


A note on Emergent Works. It is a wonderful organization, which describes itself as:

Emergent Works is a nonprofit software company that trains and employs formerly incarcerated people.

The organization has a special focus on Black and Latinx people, since they are so over-represented in prison and under-represented in the tech industry.

If you have money to spare, consider helping them with a donation. If you are in a leading role in a US tech company, consider hiring them for software development. If you work in tech and are willing to use some of your spare time, consider becoming a mentor.

Lazy linking – tech edition

I thought I’d share a few links about tech-related stuff that I have found interesting in recent times.

Extreme Programming Creator Kent Beck: Tech Has a Compassion Deficit

Before, Beck saw technologists as “us” and management as “them,” he said. Now, he is “them,” and his view has changed.

“I do my one-on-one coaching, but I’m also in the room helping make strategic decisions with very little information, and I’ve gained a lot of respect and empathy for those decision-makers,” he said. “As a punk-ass programmer, I’d grumble about ‘management.’ Well, they have a job to do, and it’s a really difficult job.”

So, the capital-M management is alright with him. But that doesn’t mean Beck’s view of tech leadership is entirely rosy. Many of his anxieties about the tech industry center on power players and their evolving stances on issues like remote compensation, racial justice and content moderation.

“Not a lot makes me hopeful,” he said. “You caught me in isolation [due to COVID-19 precautions]. So this is not my day for bright sunshine.”

Kent Beck was the creator of eXtreme Programming (XP), which is probably the most programmer-friendly agile methodology, and which introduced many of the techniques and tools that are widely used in systems development today. I found this interview interesting because it shows how Kent Beck has evolved and shifted his focus to a much broader perspective than in earlier days.

For doubters of agile, there is also a great question/answer:

You signed the Agile Manifesto almost 20 years ago. How do you feel about agile now?

It’s a devastated wasteland. The life has been sucked out of it. It’s a few religious rituals carried out by people who don’t understand the purpose that those rituals were intended to serve in the first place.

I think he is a bit too pessimistic, but I also understand where he is coming from. For those of us who have used agile for many years, it is sometimes scary to realize how little has improved over the years, and how little understanding there is of the ideas behind agile. When I try to explain to people that one of the main strengths of agile is rapid feedback, they all too often fail to understand that this is not just about implementing automated testing (though that is a given), but also about making measurements and giving outsiders the chance to provide feedback – either directly or through their behavior.

Agile and Architecture: Friend, not Foe

Continuing in the realm of agile, here is an article that is really partly a sales pitch for a book. It is still well worth reading.

As an architect, I am frequently asked about the role that architecture can play in environments that practice agile development methods. The core assumption behind this question is usually that agile teams don’t need architecture or at least don’t need architects.

I once took a course by Kevlin Henney called Architecture with Agility, which went into how software architecture and agility can co-exist. One of the major points in the course is that good architecture is a function over time. Things that were good decisions at one stage can turn into bad decisions later, when things change. As a natural consequence of this, you want to defer decisions as long as possible.

Gregor Hohpe seems to be making the same point, but he also makes the excellent point that software architecture allows you to defer certain decisions until you have the knowledge to make them.
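As a minimal sketch of what “deferring a decision” can look like in code (my own invented example, not taken from the book or the course): hide the decision behind an interface, and pick the concrete implementation later, once you know more.

```python
# Invented example of deferring a decision behind an abstraction:
# the application depends on an abstract storage interface, so the choice
# of an actual database can be postponed until more is known.
from abc import ABC, abstractmethod

class OrderStore(ABC):
    """The decision 'which database?' is hidden behind this interface."""
    @abstractmethod
    def save(self, order_id: str, payload: dict) -> None: ...

class InMemoryOrderStore(OrderStore):
    """Good enough for now; can be swapped for a real database later."""
    def __init__(self) -> None:
        self._orders: dict = {}

    def save(self, order_id: str, payload: dict) -> None:
        self._orders[order_id] = payload

def place_order(store: OrderStore, order_id: str, payload: dict) -> None:
    # The application logic only knows the interface, not the concrete store.
    store.save(order_id, payload)

place_order(InMemoryOrderStore(), "42", {"item": "book", "qty": 1})
```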

I have ordered his book, and am looking forward to reading it.

Blockchain, the amazing solution for almost nothing

We all have biases, and my bias regarding blockchain is that it is an over-hyped technology born out of a completely useless idea (cryptocurrency). There are many reasons why I feel this way, but I don’t think I have seen any article describe my feelings about the technology as well as this article by Jesse Frederik.

I’ve been hearing a lot about blockchain in the last few years. I mean, who hasn’t? It’s everywhere.

I’m sure I wasn’t the only one who thought: but what is it then, for God’s sake, this whole blockchain thing? And what’s so terribly revolutionary about it? What problem does it solve?

That’s why I wrote this article. I can tell you upfront, it’s a bizarre journey to nowhere. I’ve never seen so much incomprehensible jargon to describe so little. I’ve never seen so much bloated bombast fall so flat on closer inspection. And I’ve never seen so many people searching so hard for a problem to go with their solution.

I am sure that many blockchain fans can point to examples in the article where it is unfair, but that doesn’t change the overall message. Blockchain is, in its current state, completely over-hyped and largely useless. The article doesn’t go into this, but the performance issues of blockchain technology alone make it largely useless, and it seems like the only solution to the performance problems is to basically redefine the basic premises of how permissions should work (see e.g. Performance and Scalability of Blockchain Networks and Smart Contracts (pdf)).

Book review: Accelerate

Book review of Accelerate: The Science Behind DevOps – Building and Scaling High Performing Technology Organizations by Nicole Forsgren, PhD, Jez Humble, and Gene Kim.

If you have been at any programming or agile related conference within the last 8 months or so, you will probably have heard people recommend Accelerate. One of the reasons it is often recommended is that it explains the importance of DevOps for high performing tech organizations. This is not really anything new, but what Accelerate does is base it on actual science, and not just anecdotes and opinions – something we see all too often in the tech field.

The findings of Accelerate are based upon the survey data collected through the yearly “State of DevOps” survey in the years 2014-2017. Those data clearly demonstrated that high performing organizations performed much better than low performing organizations, but they could also be used to explain what caused these differences in performance.

The book is split into 3 parts, a conclusion, and some appendixes. The first part explains the findings, the second part, the science used, and the third part is a case study contributed by Steve Bell and Karen Whitley Bell. I will go through each part separately below.

The first part of the book is called “What we found”, and what they found is certainly noteworthy. They looked at some key indicators of software delivery performance, and found that high performing organizations had the following results compared to low performing organizations:

  • 46 times more frequent deployments
  • 440 times faster lead time from commit to deploy
  • 170 times faster mean time to recover from downtime
  • 5 times lower change failure rate

These numbers are from page 10 of the book, and show a much greater difference than most would expect, no matter how big a proponent of DevOps they are.
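To make the four indicators a bit more concrete, here is a small invented sketch of how a team might compute them from its own deployment records; the data structure and numbers are made up for illustration and are not from the book.

```python
# Invented sketch: computing the four software delivery metrics
# (deployment frequency, lead time, MTTR, change failure rate)
# from a team's own deployment records.
from datetime import datetime, timedelta
from statistics import mean

deployments = [
    # (commit time, deploy time, caused an incident?, time to restore)
    (datetime(2020, 9, 1, 9), datetime(2020, 9, 1, 11), False, None),
    (datetime(2020, 9, 2, 10), datetime(2020, 9, 2, 13), True, timedelta(hours=1)),
    (datetime(2020, 9, 3, 8), datetime(2020, 9, 3, 9), False, None),
]

days_observed = 30
deployment_frequency = len(deployments) / days_observed  # deploys per day
lead_time = mean((deploy - commit).total_seconds() / 3600
                 for commit, deploy, _, _ in deployments)  # hours, commit to deploy
failures = [restore for _, _, failed, restore in deployments if failed]
change_failure_rate = len(failures) / len(deployments)
mttr = mean(r.total_seconds() / 3600 for r in failures)  # hours to restore

print(f"{deployment_frequency:.2f} deploys/day, "
      f"{lead_time:.1f} h lead time, "
      f"{change_failure_rate:.0%} change failure rate, "
      f"{mttr:.1f} h mean time to recover")
```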

The rest of the first part of the book goes through their other findings, which identifies “24 key capabilities that drive improvement in software delivery performance, and, in turn, organizational performance”. According to the authors, “[t]hese capabilities are easy to define, measure, and improve”.

I won’t include the list here, but many of them relate to DevOps practices and lean management practices, though there are a couple related to other things, such as architecture. One thing I will mention is that the findings indicate that while culture has an influence on the use of DevOps techniques, the use of DevOps techniques also has an influence on culture, which changes as a result.

None of the mentioned capabilities are new, but here we have, for the first time, evidence that they actually work, and help improve performance.

The second part of the book, “The Research”, goes into how the research was conducted, and why survey data was suitable for the research. It doesn’t include the actual data, which would have been nice, but I can understand why that isn’t the case (breach of confidentiality etc).

I can’t recall seeing a similar chapter in any other book on programming/systems development, and I highly applaud it. I also think it was a smart move to put it in the second part, rather than the first part, as most people will be more interested in the findings, rather than the methods behind finding them.

The third part of the book, the case study contributed by Steve Bell and Karen Whitley Bell, is called “Transformation”. It takes us to ING Netherlands, a worldwide bank, which has been through a cultural change, started and led by the IT manager, enabling the organization to become high-performing.

To be honest, I find this part to be the weakest part of the book, both in content and in presentation, but it does provide a nice overview of practices on the team, management, and leadership level (it can be found online here).

All in all, the findings of the book should not be shocking to people who have worked with agile, DevOps, etc., but it is nice to have a list of proven capabilities to focus on. It is also a useful tool when debating these subjects with management.

I highly recommend the book to people working in any aspect of software engineering – be it as a programmer, a project manager, in a leadership position, or in any other capacity.

RIP Joe Armstrong, the author of Erlang

Sad news from the world of programming: Joe Armstrong, one of the authors of the Erlang language, has died.

I never worked much with Erlang, and have never met Joe Armstrong, but from everything I hear, he was a genuinely nice man.

If you want to know more about Erlang, and how it was used, you can watch Erlang the Movie.

To be honest, I highly doubt anyone outside the world of programming will get much out of that clip, but it is interesting to watch, since it shows what type of problems Erlang was developed to solve. It gives a view into the early days of digitizing telephony, which wasn’t that long ago, considering how long telephones and other forms of telecommunication have been around.