
Team Cognitive Load: Team Topology Series

Note to Audience: This is a post I wrote originally in Confluence. I will protect names and references to past companies. Since I’ve moved most of my writing to SeedTheWay on Substack, this is more a post for me to track topics I’ve written in the past.

One of the most important topics in the research supporting Team Topologies centers around the need for teams to assess and measure their overall team cognitive load. If you are unfamiliar with the definition of cognitive load, the Australian psychologist John Sweller defined it as “the total amount of mental effort being used in the working memory.”

When we talk about cognitive load, it’s easy to understand that any one person has a limit on how much information they can hold in their brain at any given moment. The same applies to any one team: its capacity is roughly the sum of all the team members’ cognitive capacities.

The Three Types of Cognitive Load

The Team Topologies authors introduce us to the science of cognitive load. They break down the three core types of cognitive load so that we can use them to classify and categorize the types of work, issues and knowledge that weigh on us cognitively.

Intrinsic Cognitive Load

Intrinsic cognitive load is the cognitive load that is inherent in the task itself. It is the amount of effort required to understand and complete the task.

There is a limit to how much intrinsic cognitive load can be handled at one time, so tasks that are complex or require a lot of mental processing will have a higher intrinsic cognitive load. This deals with the skills we need to understand. For instance, Python developers need to know how to write functions.

Extraneous Cognitive Load

Extraneous cognitive load is the cognitive load that is not inherent in the task itself but is added by the environment or the way the task is presented. In software delivery, extraneous cognitive load shows up in the tasks around delivery itself. A few examples the authors share are provisioning a resource, deploying an application, and monitoring an existing application.

Germane Cognitive Load

Germane cognitive load is the cognitive load that is necessary for completing the job beyond the mechanics of the work itself. It is limited to the knowledge that is essential for the task at hand, allowing people to focus and reducing distractions. This is the business domain we must know: if our application is a banking system, we need to understand how banking works, and those details can differ subtly from company to company.
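To make the three categories concrete, here is a minimal sketch, in Python, of the kind of classified list a team might build when it inventories its cognitive load. The work items and names are purely illustrative, not a prescribed tool.

```python
from dataclasses import dataclass
from enum import Enum

class LoadType(Enum):
    INTRINSIC = "intrinsic"    # skills inherent to the task (e.g., writing Python functions)
    EXTRANEOUS = "extraneous"  # delivery mechanics (provisioning, deploying, monitoring)
    GERMANE = "germane"        # business-domain knowledge (e.g., how billing works)

@dataclass
class WorkItem:
    description: str
    load_type: LoadType

# Hypothetical inventory a team might draft during a cognitive-load review.
inventory = [
    WorkItem("Move a service from Python/Zappa to Go", LoadType.INTRINSIC),
    WorkItem("Provision and monitor the staging environment", LoadType.EXTRANEOUS),
    WorkItem("Understand quarterly metering and billing rules", LoadType.GERMANE),
]

for item in inventory:
    print(f"{item.load_type.value:>10}: {item.description}")
```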

Cognitive Load When Not Addressed Can Be Dangerous to the Success of a Team

Many of us at XYZ Company have seen demands on our brains and minds in all three categories over the course of our journey. On the intrinsic side, a number of engineers have made the move from Python to Go. Some folks were completely new to Python or Go and came from a completely different language background, whether that be Node, Ruby, C# or Java. For those working in Python, the demands have increased here as well, since our Python is based on the Zappa framework, which has very limited adoption beyond our team.

We see the same kinds of challenges with extraneous and germane cognitive load. Here are two examples that plague our teams:

Example 1: A single on-call model across teams, which to some degree still exists. It requires whoever is on call to potentially be the forensic expert for domain areas of the product, as well as technology areas they have no experience working with.

Example 2: The monoliths that are both upstream and downstream impose a huge burden and demand on all committers to those code bases. It’s a collective playground that spans many business domains and problems. Our committers have to not only play nice, but they also have to be cognizant of big, unforeseen changes that impact all committers.

There are some serious dangers at play when cognitive load isn’t addressed. Teams overburdened with heavy cognitive demands tend to struggle with performance. They lose a high degree of their autonomy and ability to make “team-first decisions” with the intent of improving the team’s productivity.

The team struggles to gain mastery of their domain as they have to go into this anti-pattern of being “good” or “average” across many things when we want them to be exceptional and industry leading. One last point is that teams tend to lose their purpose.

We want teams to be able to identify competing demands, areas of focus outside of their domain boundary and areas where specialization of service and capability is needed. This arms us as leaders with a roadmap of how to better support our teams.

Why Team Structure Matters

Teams come in various forms and serve various purposes. One team type is the Enabling team. This is a team of experts in a given domain. We might have an SRE enabling team. They would work with teams that need to improve their observability, monitoring and service level definition strategies. The engagement would be limited while they train the team and help them build up their knowledge and autonomy in SRE fundamentals. Stream-aligned teams are intended to align with the business needs. They produce software for the business based on their area of domain expertise and specialization.

Apply Cognitive Load Techniques

I imagine that a number of you are going to dig into the archives of your team ownership. Most likely you will start with the domain inventory and identify legacy areas of ownership that you want to shed. I would say any team that wants to go through that identification exercise is welcome to do so. I stress that we can’t just shed something without finding it a new home.

My youngest daughter does this to me and my wife a lot. Every few months she performs a purge in our bedroom. Instead of bagging items up for charity, she stacks them in piles in the corner of our bedroom or in the hallway. At the time of this blog, we are a few weeks away from Thanksgiving, which is another opportunity for my daughter to purge. I say this in jest, but I am serious that we should resist the temptation to purge by simply removing our team’s name from the domain inventory.

A more thoughtful strategy would be to start identifying examples of cognitive load issues and beginning the dialogue with your managers and directors. We have an opportunity to retire parts of the product, recommend architectural changes and of course make future investments in specialization.

I would recommend that, as a team, you work through your Team API and start identifying potential areas of cognitive overload. This may pull directly from your domain inventory. It may be related to toil or friction the team experiences day to day. Start creating a list with examples and categorize the type of cognitive overload.

Why It’s Called Observability AND Monitoring

Note to Audience: This is a post I wrote originally in Confluence. I will protect names and references to past companies. Since I’ve moved most of my writing to SeedTheWay on Substack, this is more a post for me to track topics I’ve written in the past.

You will often hear the word monitoring appended to the word observability. It’s not by mistake. The two are not the same and it’s important to understand why.

Observability is defined as the ability of the internal states of a system to be determined by its external outputs. With the unknown unknowns of our software’s failure modes, we want to be able to figure out what’s going on just by looking at the outputs: we want observability.

I’ve seen this definition stolen by every ISV under the sun. They will often follow up the definition above with some additional marketing content about the three pillars of observability: logging, metrics and traces. Since today’s blog isn’t really about the inner depths of observability, I won’t go into each of the pillars. Rather, I will try to make a simplified point that observability is the process of gathering telemetry about a system as a learning exercise amongst unknowns.

We use observability to build up data stories and evidence. It could be evidence around architectural dependencies. It could be evidence around capacity planning and performance. It could be evidence around infrastructure spend. Ultimately, observability is about converting unknowns into potential knowns. Notice I’m not quite ready to commit to full-blown knowns. I use the word potential.

Monitoring is the sibling to observability. The system, or aspects of the system, are measured with telemetry. Monitoring is based in absolutes. I introduce a monitor to validate that a condition occurred or, in some cases, didn’t occur. I prove that the system performed X operations and therefore should have Y output. I reach a predefined threshold and therefore must trigger some conditional event.

What Inspired Today’s Blog?

Last night we had a situation. It wasn’t an incident. I think we all concluded it was a bug that warranted discussion and action by the team. Our metering service was 100% operational. A customer, VMware, had a single product/SKU that failed to process metering records for 10 or 11 days. It was the end of the quarter and VMware’s customer (Amtrak) was frustrated that their metering/billing was not accurate.

Our observability tooling captures telemetry on the running system. Our monitoring validates the service is up and running. There were no active observability exercises on-going with the Metering team. There were no signals in a sea of noise that implied metering as a service was problematic. The monitoring of the service would say the service was healthy.

Observability is a telemetry capturing methodology that excels with data exploration, hypothesis setting and spelunking of specific observability use cases.

The reality is that there was an issue.

This specific issue was very conditional, and I will reuse the word specific. VMware was metering on product XYZ. The product had a rich history of metered events. Then on October 20th, it stopped metering and we didn’t know until it was escalated to us 11 days later.

Where To Go From Here

Let me give kudos to the team. The customer was upset, but the team managed the issue well and remediated the problem quite rapidly. Their forensic process was second to none. The feedback this morning is that the customer is quite happy.

I know the team plans to run a retrospective, I believe today. I imagine there will be a lot of questions about the migration that occurred some time back. They will likely talk about the use of experimental flags. They may even talk about the on-call support and whether some recommendations should be made. I imagine they may even dig into the retry having to be processed manually.

I would recommend that the team ask some questions about discovery of the issue. The issue was brought to our attention. An observability forensic use case validated that the situation was audited in our observability tooling. There were no monitors for this use case. Is this a use case for a discrete monitor? Does the team need to perform some additional observability exercises to define other discrete monitoring conditions or events?
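If the team decides this does warrant a discrete monitor, a minimal sketch might look like the check below. It assumes a helper that knows the last metering record per SKU (faked here with an in-memory dict); the names and thresholds are hypothetical, not our actual tooling.

```python
from datetime import datetime, timedelta, timezone

# Placeholder data; a real check would query the metering store or
# observability backend rather than this in-memory dict.
LAST_RECORD_AT = {
    "product-xyz": datetime.now(timezone.utc) - timedelta(days=11),  # went quiet
    "product-abc": datetime.now(timezone.utc),
}

def check_metering_freshness(max_gap_hours: int = 24) -> list[str]:
    """Discrete monitor: flag any SKU with no metering records inside the
    allowed window, even when the service itself reports healthy."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_gap_hours)
    stale = [sku for sku, last in LAST_RECORD_AT.items() if last < cutoff]
    for sku in stale:
        print(f"ALERT: no metering records for {sku} since {LAST_RECORD_AT[sku]:%Y-%m-%d}")
    return stale

if __name__ == "__main__":
    check_metering_freshness()
```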

The Rhythm of OKRs

Note to Audience: This is a post I wrote originally in Confluence. I will protect names and references to past companies. Since I’ve moved most of my writing to SeedTheWay on Substack, this is more a post for me to track topics I’ve written in the past.

We are digging into OKRs as a company and more specifically as a P&E organization for our Q4 initiatives. For many folks this is a first time event. For some, it’s an opportunity to continue a method you likely had success with in the past. For others, it will be a chance to try it again with a new level of focus, insight and experience that maybe was missing from your last OKR efforts.

Let me start by saying, I’ve been a student of OKRs for many years. About five years ago, my neighbor and, I guess, gym buddy (Tim Meinhardt) was thinking about leaving his company to build a startup all around OKR coaching. He started his company, Atruity, nearly four and a half years ago. Basically, I got an MBA in OKRs every day at the gym.

At the time I was working at Contrast Security, a venture-backed startup as well. He wanted to sell me on OKRs while learning everything I could share with him about being part of a startup.

It was Tim who suggested I read the OKR bible, Measure What Matters. Of course, I gravitated towards Radical Focus. I love stories with subtle intentions. Wodtke’s book is both subtle and informative. Both books express how hard OKRs can be, but it took my gym sessions with Tim to really understand why a lot of teams fail with OKRs.

I wanted to use today’s blog to propose a short experiment that we run over the next few weeks. The experiment centers around establishing a rhythm around planning, reporting, measuring and assessing. These may look familiar if you studied Radical Focus, as they are the four quadrant alignment doc. Here are two examples: one and two.

I intentionally use the word rhythm. What I’ve learned from teams successful with OKRs is that there is consistency, predictability and visibility in a team’s OKR work. OKRs are the equivalent of running a marathon. You don’t want to jump out of the gate sprinting to build up a lead, only to fade and let others catch you. Rather, you want to set a healthy pace that will enable you to increase intensity and speed as you move through the OKR. Remember that OKRs are hard. They aren’t hard because you didn’t work smart. They are hard because you are challenging yourself and your teammates to reach new heights and goals that are presently out of reach.

Small Experiment to Our OKR Lifecycle

The current iteration of our weekly OKR updates has evolved to reference each Key Result, the owner and a short summary of contributions. The experiment calls for us to provide more information around planning, reporting, measuring and assessing. We should create a weekly four quadrant alignment document as the next iteration of the weekly OKR reporting.

Let’s start with planning. I’m suggesting we identify the key priorities for an OKR for a given week. As Wodtke notes in her book, priorities are either P1 (most important) or P2 (important). If they are not a P1 or a P2, they don’t belong in the team’s current focus.

A second area of planning is the runway of 3 to 4 weeks. This is about knowing your team’s horizon for both planning and anticipation.

The third quadrant is about maintaining a running forecast. We don’t want to get to the end of quarter and realize we are nowhere near capable of accomplishing our OKRs. This is an empirically driven methodology, hence it calls for its participants to assess and forecast progression of work.

The fourth quadrant is about measuring the health of the team and the business. Wodtke suggests a simple Red, Yellow or Green system.
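As a rough illustration of what a weekly four-quadrant update could capture, here is a minimal sketch, loosely following the Radical Focus layout. The field names and entries are hypothetical and only show the shape; the real artifact would live in our weekly OKR reporting doc.

```python
# Hypothetical weekly four-quadrant update.
weekly_update = {
    "priorities": [                      # quadrant 1: this week's P1/P2 priorities
        {"item": "Ship metering reconciliation report", "priority": "P1"},
        {"item": "Draft KR baseline dashboard", "priority": "P2"},
    ],
    "next_3_to_4_weeks": [               # quadrant 2: the runway
        "Customer beta of usage exports",
        "Partner pod integration testing",
    ],
    "forecast": {                        # quadrant 3: running confidence per key result
        "KR1: reduce p95 latency to 300 ms": "6/10",
        "KR2: 95% of tickets carry estimates": "4/10",
    },
    "health": {                          # quadrant 4: simple red/yellow/green signals
        "team": "green",
        "business": "yellow",
    },
}

for quadrant, contents in weekly_update.items():
    print(quadrant, "->", contents)
```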

Feedback Matters…So Does Direction

My motivation for recommending this experiment is to help the team with focus and clarity. Priorities should be shared and discussed as a team, but reviewed by stakeholders for feedback purposes. Progress updates that are measurable should be captured and enumerated. Studying health can serve as an early warning system.

Little Bits of Management Advice Part 1: One on One Meetings

Note to Audience: This is a post I wrote originally in Confluence. I will protect names and references to past companies. Since I’ve moved most of my writing to SeedTheWay on Substack, this is more a post for me to track topics I’ve written in the past.

This could be a one part entry or 100 parts. Who knows at this stage as I’m not quite sure what I would write in part 2. Today’s entry is all about a technique I have for one on one meetings. The technique is that I create a shared 1:1 Google Doc between me and my teammate. I started this habit several months back. It has been a game changer in a variety of ways. I will share my notes below as to why.

Some Basic Components in the Document

All of my 1:1 documents look like the image below. At the top of the document, we maintain a running list of outstanding issues from previous weeks. It’s a pretty ephemeral set of content. I generally go through historical agendas each week and anything that has not been checked gets copied to the outstanding issues list. Note that I copy it and leave it in the agenda for a given date. Technically it will be listed 2x in a document if it makes the outstanding issues list.

Each meeting, I create a blank agenda for the day like the example below. It has 2 sections: topics and action items. Topics can be created before the meeting or during the meeting by me or the participant of the 1:1.

I try to encourage my 1:1 attendee to put one or more entries in the document before the meeting. I do the same either at the beginning of the week or the morning of the call. Sometimes you don’t have time to pre-seed the meeting with topics. Often, I will start a 1:1 and suggest we take 2-3 minutes to create an agenda.

Usually I will keep notes in my notebook by hand. More often than not, I will add a bullet point or three below an agenda topic so that I can cover notes for reference purposes.

The action item section is always left blank for the agenda at the start of the meeting. As we go through the topics and add notes, we add action items by referencing the person’s name, what they are on the hook for, any additional people to reference and ideally a time to follow-up.

The Best Audit Record You Can Imagine

This is a pretty obvious outcome we get with a shared document. I can look up the agenda by date and scan to see what we discussed in the past. I often will reference the document for past conversations.

Make Your 1:1s Count More

This is probably the best advice I can give you about 1:1s. Most folks meet once every 2 weeks for 25 to 30 minutes for their formal 1:1s. Having a couple minutes of coffee talk is a good thing. It’s human nature to want to ask how someone is doing. I typically start my 1:1s with a more personal touch.

Both members should prepare an agenda in advance. This is valuable time between you and a teammate. Make it count. If you don’t have an agenda, give the time back. If you want to use the time differently, use the time to run through a demo, jump into progression and assess progress, show some code or review a pull request together. Ideally, the time is being spent in a productive manner.

Team Topologies: The Team API

Note to Audience: This is a post I wrote originally in Confluence. I will protect names and references to past companies. Since I’ve moved most of my writing to SeedTheWay on Substack, this is more a post for me to track topics I’ve written in the past.

Last week all of the Engineering Managers, Directors and our departmental business partners met for a multi-day summit in sunny Denver, Colorado. It was my first chance in my 12+ months here at XYZ Company to call a summit with the Engineering Management team. In past companies, I would try to have a summit or two per year. Working remote and distributed across time zones is an awesome experience, but nothing replaces the tactile feeling of high-fiving a teammate in a conference room or breaking bread at the dinner table.

Our focus on day 1 was 100% on the book Team Topologies. I mentioned the book a while back in a blog hoping it would inspire a number of you to add it to your reading list. I was pleasantly surprised over the last few weeks when I learned that a number of team members had picked up the book and started reading it. I will reiterate to any member of the Engineering team that you can expense the book. It’s essential to our growth and development as a team.

One of the most important exercises we accomplished during the summit was that each Pod/Team wrote a draft Team API. The authors of Team Topologies created some sample templates, which they published to GitHub, including a Team API template. We’ve borrowed the template and created a Confluence version that teams can easily extend and publish in their team space. I’m looking for all of the Pods/Teams to begin the journey of writing and maintaining a living Team API. First, let’s talk about the Team API to give you some context.

Here’s a great cheat sheet on Team Topologies.

What Is This Team API All About?

I joked with the team that the first 4x I read Team Topologies (I’ve read or listened to it 10x in the last 2+ years), I didn’t pick up on the idea that a team could and should publish an API so that other teams and members of the current team could understand team dependencies. The Team API encourages the team to make their dependencies (upwards, sideways and downwards) visible in an easy-to-consume way.

It’s simply a matter of fact that our Pods/Teams work across teams. We encounter issues such as communication challenges, scheduling issues and prioritization problems when two teams are working with each other. It’s often not intentional, but rather a lack of awareness that one team has a dependency on another.

The Team API gives each team a living document with the intent of defining and surfacing the dependencies between teams. There is a mutability that must exist in each Team’s API given that our work changes, evolves and matures over time. The nature of our relationships across teams will change, hopefully for the better in a more self-service way. Also, new teams will be introduced and we must consider that responsibilities and ownership may change over time as well.

I like to think of the Team API as the equivalent of my personal ReadMe. A good Team API helps internal and external members understand the best way to communicate with the team. The focus and areas of ownership are defined, thus minimizing the need to decode the team’s goals and purpose.

A Note About Living Documents

We’ve all had a lot of enthusiasm to write a document only to find it sits on the shelf collecting dust. When I look in my shelf of living documents, I basically have two that I maintain. My professional ReadMe is a document I try to maintain every few months or 1x a year if something changes. The other document is my Last Will and Testament, which frankly I don’t update much at all.

Living documents are really hard to maintain. The team has to be conditioned and empowered to maintain them. Ultimately, it means the documents have to be considered 1st class citizens. You should share it often. You should refer to it constantly. Members of the team should be trusted to maintain it and update it. It is a team document, hence the team needs to update it.

Think of it like a README in a GitHub repository for a product or artifact. You want your consumers of the repository to be self-sufficient making use of the repository. The README needs to be accurate, informative, simple and “just works”.

So What Goes in the Team API?

An effective Team API will consist of three informational sections that consumers (internal or external) will benefit from reading. The first section is metadata about the team’s overview and communication attributes. The second section focuses on the current work of the team. The third section is a small dependency map of teams that the current team interfaces with. I will cover each section in greater detail.

Section One: Team Overview and Communication

The opening section of the Team API helps the reader understand the purpose of the team and their role in the product. The area of focus should be descriptive and obvious enough that an outside consumer of the API will understand the core focus of the team.

Team Topologies introduces 4 types of teams: Stream-Aligned, Platform, Enabling and Complicated Sub-System. The vast majority of our teams are Stream-Aligned, with a healthy group of Enabling and Platform teams. As of right now, we have not identified a complicated sub-system team in our present topologies; we may down the line. It’s important that each team represent their topology for what it is at the present time. Below is a quick summary of each topology.

The other metadata about the team, such as versioning, service level, search terms, schedule and chat channels, is pretty self-explanatory. I will say it is only self-explanatory if you include and maintain the level of information necessary so that a new or outside consumer can easily understand it without having to have a 1:1 with a member of the team to go over the API. That’s the true litmus test of the Team API.

Section Two: Current Work of a Pod/Team

Interestingly enough, I believe the template the authors made should probably retitle this section as the “Current Area of Focus for the Pod”. Often teams have legacy responsibilities that are tied or bound to the team, but they have no efforts or initiatives focused on those legacy areas. I gather the authors are suggesting that teams surface the current work. Needless to say, these three questions are incredibly important. Some teams may even have additional sub-documents that are maintained as living documents. For example, the Cephalopods maintain this Ways of Working document.

Section Three: Lightweight Dependency Map

The authors of Team Topologies introduced another template called a Team Dependency Tracker. In our workshop, we opted to have teams write this small table as a lightweight exercise. We had originally anticipated performing a small exercise on writing dependencies (incoming and outgoing). We settled on simply writing them in the Team API.

As you can see from the table below, the goal of this section is to define the relationships and interactions between teams. Most teams took the perspective of teams they depend on. However, if you are aware that other teams depend on you, you should identify that relationship in the table. It’s essential to understand the Interaction Mode (Collaboration, X-as-a-Service or Facilitating).
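For anyone who wants to see the shape of a dependency entry in code form, here is a minimal sketch. The team names and notes are hypothetical; only the interaction modes come from Team Topologies.

```python
from dataclasses import dataclass
from enum import Enum

class InteractionMode(Enum):
    COLLABORATION = "collaboration"
    X_AS_A_SERVICE = "x-as-a-service"
    FACILITATING = "facilitating"

@dataclass
class Dependency:
    team: str          # the pod this entry belongs to
    other_team: str    # the pod on the other side of the dependency
    direction: str     # "incoming" or "outgoing"
    mode: InteractionMode
    notes: str = ""

# Hypothetical rows a pod might record in its lightweight dependency map.
dependency_map = [
    Dependency("Metering Pod", "Platform Pod", "outgoing",
               InteractionMode.X_AS_A_SERVICE, "Consumes the deployment pipeline"),
    Dependency("Metering Pod", "SRE Enabling Team", "incoming",
               InteractionMode.FACILITATING, "Coaching on SLO definitions"),
]
```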

The Five Fundamentals that Shaped my Early Career

Note to Audience: This is a post I wrote originally in Confluence. I will protect names and references to past companies. Since I’ve moved most of my writing to SeedTheWay on Substack, this is more a post for me to track topics I’ve written in the past.

Over the weekend I was thinking a lot about the impressionable years of my software development career. When I graduated college, I ended up working for a small, boutique consulting company. They specialized in building applications for financial and health care companies.

The company had a small number of salaried staff, of which I was one. Most of the engineers were contractors. The staff who were on the payroll were software architects. They spent their days designing elegant architectures and modeling their designs, mainly in UML.

I spent about a year or so at that first company, then was immediately thrown into the major leagues. My next gig was with a 10,000 person company called USWeb, which rebranded to marchFirst. My job there was as an integration developer for American Airlines, or aa.com. I worked on the Sabre integration for booking/reserving flights from the web. It was the first time I had to wear the figurative pager. AA.com was a high traffic website. We didn’t have any extensible APIs for the booking sites that were just starting to take off, so these crawlers would spam aa.com and 99% of the time be responsible for bringing down our service. It was there where I learned about the importance of monitoring in production. The monitoring we had at the time was rudimentary. Often we would have to codify our own debugging needs.

After a year or so there, I settled into a more traditional software route. I ended up moving on to a supply chain software company called Manugistics, which is now part of Blue Yonder. It was there where I learned how to be a better developer. I learned how to comment my code, write tests (shockingly, I never wrote a single test in my first 3 years as a software engineer), debug my code and profile my code.

I wanted to share a few of my notes with the team. Looking back over my 24+ years of software development, I look at these fundamentals as alive and well. I would say if they are something you are not doing, you may want to consider them.

Fundamental #1: The Importance of Design and Modeling

One of my all-time favorite movies is Pixar’s Ratatouille. The whole premise of the movie is that, in Gusteau’s mind, anyone can cook, even a Parisian sewer rat.

I would say the same is true around design and modeling. Anyone can model/design. I would go a little further and say you could throw in requirements definition as well. Anyone can author requirements.

In my first gig as a software engineer, I didn’t even know the language I was coding in. I came out of college with some rudimentary C and C++. In 1998/1999, the cool language of the day was Java.

Learning a new language with visuals made things a lot easier. Back then we wrote a lot of documentation. They weren’t in long form prose. Rather, we wrote a lot of UML diagrams in a product called Rational Rose. It was a really cool software product. My two earliest mentors were big Rational Rose nerds. They made their paychecks writing these incredible models. This is kind of how we wrote all of our requirements back in the day. They were in Use Case views. We would annotate our schemas in Rational as well (Entity Relationship Diagram). We would include sequence, state, activity, component, deployment and collaboration diagrams to go along with the Class, Use Case and Data Model diagrams.

Diagramming was heavy and expensive at the time. A lot of engineers I worked with were really bad at maintaining diagrams. We treated our Rational Rose files as code. We would iterate constantly when things changed. Eventually, we made our way to Visio. Diagramming became more accessible and cheaper since we had an enterprise license with Microsoft.

Diagramming fell by the wayside in the mid-2000s. It might be the biggest regret I have from that time. We wanted to move fast and iterate. Often we opted not to diagram. The end result was that we designed really bad software.

Miro has UML diagrams that our engineers can take advantage of today. Miro has dozens of templates for various software modeling techniques. If you are unfamiliar with UML and want to explore it with Miro, here’s a great article that will help you get started.

Fundamental #2: The Importance of Code Comments

I guess there are 3 schools of thought around commenting your code. Thought #1 is that writing comments in code isn’t something you think of at all. Thought #2 is that you should comment “all the things”. Thought #3 is: why comment when your code is “self-documented”?

In the early part of my career, I didn’t write comments often if at all. I never really thought anyone else would read my code. Like many folks who don’t know what they don’t know…I didn’t write a single comment in my first 2+ years of software engineering.

When I joined my first Enterprise Software team, I quickly realized that I was the only engineer on my team that wasn’t writing comments. At first I didn’t think much of it. Then I found myself constantly getting interrupts from the other developers on the team. They were pretty nice about it. They would be in the middle of reviewing one of my earlier changelists. They would hit a stumbling block and walk over to my cube. Nobody ever really said anything.

I later learned from my manager that my teammates were being really nice by not shaming me. They definitely asked me a lot of questions. They were intentional about the interrupts. As my manager put it, comments were considered code. It was our way of giving context and elaborating on requirements. We didn’t have a product like JIRA. Our requirements were in Lotus Notes. We experimented at times writing requirements as text files in our code repositories as well.

Comments were treated like code. If you changed the implementation, you were expected to make changes to the comments as well. What I realized was that the best engineers were the best engineers because their comments were so informative and context driven. They would often spend more time thinking about what they were going to write in the comments versus the implementation.

Comments were part of our coding standards. We had specific comment notations. Comments served a purpose. The purpose was context. Our code was intentionally clean. It was intentionally informative. Our coding standards helped you understand the WHAT behind the implementation. Our comments were used to explain the WHY and sometimes the WHEN. When we would see comments that explained the WHAT or HOW, we would remove them as they were redundant. The goal was to make comments a tool that the next person reviewing or leveraging your code would be faster and more efficient.
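Here is a small, hypothetical Python example of that distinction. The scenario in the comment is invented purely to show a WHY comment earning its keep where a WHAT comment would be redundant.

```python
import time

def read_config(path: str, max_attempts: int = 3) -> str:
    """Read a config file that may briefly disappear during a rollout."""
    for attempt in range(max_attempts):
        try:
            with open(path) as handle:
                return handle.read()
        except FileNotFoundError:
            # WHY, not WHAT: in this made-up setup the deploy tooling swaps
            # config files atomically, so the file can vanish for a few hundred
            # milliseconds mid-rollout. A short retry absorbs that window;
            # failing immediately would page on-call for a non-issue.
            time.sleep(0.5)
    raise FileNotFoundError(path)
```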

I’ve seen a few examples of impressive comments at XYZ Company. There’s a lot of goodness in those examples. I would probably add a few other things, such as identifying information about the author. Otherwise, the comments stand on their own for anyone who will be making use of the function.

I would encourage two other things. The first is to consider evaluating a comment generator that plugs into your IDE. There are a number of plugins available; I reference one below. Second, treat your comments like code. If the code changes, then change your comments. If you remove a block of code and comments are tied to that block, then remove the comments. Leave the code in better shape than when you found it.

If you happen to be using VSCode, there is a free plugin called Mintlify. The plugin helps you build the pattern of adding comments. It will format and annotate comments. It tells the WHAT and the HOW. It does not tell you the WHY. That’s context that you have. However, it is a great tool for you to consider using to help you go through the process of code commenting.

I recently played around with it on some Python code. I would give the comments a B+. I appended a little more context to the comments, which moved it IMHO from a B+ to A.

Fundamental #3: The Importance of Testing (Unit and Integration)

I probably could write 100 blogs about testing. Don’t worry…I will write them in the coming months. I’m going to come clean on something. I didn’t write a single test until my 3rd software engineering gig. Let me rephrase that…I didn’t write a single unit or integration test until 3 years into my career.

My earliest mentors were really good at design and architecture. My second set of mentors taught me the importance of production monitoring. Neither group ever asked me if I had written a test or not. I truly didn’t know what I didn’t know.

The same team that pressed me on code comments was the same team that challenged me on testing. Back in those days, we had manual testers who were embedded on our teams. We put a lot of upfront work into designing (modeling) and our implementations were very crisp. We didn’t have code reviews back then the way PRs are formed now. We had a very non-agile approach. We had a commit captain, the most senior person on the team, who would work with a dedicated build engineer. We probably had 5 build engineers supporting roughly 2050 software engineers. You would prepare your changelist for them. They would review your code. Back then, they would make edits directly to the code and merge your changes. The build engineer would handle the true last mile of the code line.

Our testers would get dedicated builds that they would install and setup. I remember our relationships were pretty collaborative. The tester role was often the gateway into engineering. They were 100% focused on user acceptance and end-to-end testing. In some ways, I really felt for the folks in that role. Because they were often the most junior folks, they didn’t have the confidence to challenge the team if their code wasn’t ready. They would simply log a bunch of bug tickets.

I remember that we had a situation where both of our dedicated test engineers were away for several months at the same time. This is where I learned about the importance of writing automated tests (unit and integration). Several of the implementations that I was working on were as buggy as could be. Let’s just say for about 3 days I felt like I broke our code and set our team back. I can’t remember all of the details of what I was working on, but I remember how I felt when three of my teammates stopped what they were doing to help me and my manager figure out what I broke.

Up until then, I didn’t really think about the tests I wrote. Testing, specifically writing JUnit tests, was so new to me. I would write really weak and meaningless tests. To my teammates the optics were that I was writing a ton of tests, so clearly I was thinking about the quality of my code. The reality was that I was writing a lot of tests that did nothing. They were all mocked, and if you dug into my mocking approach, you would have called BS on me. One of my teammates who dug into my tests equated their value to that of the Door Close button on an elevator.
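For anyone who hasn’t hit this first-hand, here is a contrived Python example of the “Door Close button” test next to one that actually exercises the code. The function under test is invented for illustration.

```python
import unittest
from unittest.mock import MagicMock

def apply_discount(price: float, percent: float) -> float:
    """Toy function standing in for the code under test."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_door_close_button(self):
        # Everything is mocked, so this only proves the mock returns what we
        # told it to return; apply_discount itself is never exercised.
        fake = MagicMock(return_value=90.0)
        self.assertEqual(fake(100, 10), 90.0)

    def test_actual_behavior(self):
        # Exercises the real code against concrete expectations.
        self.assertEqual(apply_discount(100.0, 10), 90.0)
        self.assertEqual(apply_discount(19.99, 0), 19.99)

if __name__ == "__main__":
    unittest.main()
```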

It was on this team where I truly learned the meaning of accountability. Our team was set up in a way that you could assume the developers wrote the code and the testers tested. That wasn’t the real case. The real case was that we were accountable for putting the best design, code, comments and tests (unit and integration) into our work. My code needed to do more than compile and build. I needed demonstrable evidence that it functioned to design.

A promise is a promise…expect to see more from me about unit and integration testing in some upcoming blogs.

Fundamental #4: The Essentials of Debugging Your Code

I was late to the game using an IDE. Debugging was really hard back in my early days of programming. We mainly dealt with thread safety (race conditions), synchronization issues or propagation issues. Debugging is a lot easier today. All of the modern-day IDEs have built-in debuggers.

It’s easy to add breakpoints and step through your code. Some of the best developers I worked with during the mid-2000s would often debug as their approach to code reviews. They would step through the debugger to see what was happening under the covers.

Debugging can be more than just setting breakpoints. I found myself enhancing the fidelity of logging all of the time so I could get under the covers of what was happening in our code without having to profile. It would make for really verbose logs, but it was a fast path to figuring out an issue.

I also found myself getting really defensive in my code. I would add a ton of condition checks and error handling whenever possible. So much of that is done for you in today’s modern frameworks. Getting stack traces made the forensic journey so much easier for me.
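A tiny Python sketch of both habits, with invented data: temporarily turning up log fidelity so the code narrates what it is doing, and defending against bad input with context instead of letting it blow up deeper in the stack.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("reconciler")

def reconcile(records: list[dict]) -> int:
    """Toy reconciliation loop with verbose DEBUG logging and defensive checks."""
    matched = 0
    for i, record in enumerate(records):
        # Defensive check: surface bad input with context instead of failing
        # later with a bare KeyError.
        if "amount" not in record:
            log.warning("record %d missing 'amount': %r", i, record)
            continue
        log.debug("record %d amount=%s sku=%s", i, record["amount"], record.get("sku"))
        matched += 1
    log.info("reconciled %d of %d records", matched, len(records))
    return matched

reconcile([{"amount": 10, "sku": "xyz"}, {"sku": "abc"}])
```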

VS Code has a fantastic debugger. It’s easy to configure and use. There’s a ton of documentation and reference material.

Fundamental #5: Why You Should Measure Your Code

I will keep this section really simple and on-point. Very few developers instrument their code. There are profiling techniques, and IMHO everyone should understand how to profile their code to understand cost (timings, frequency, size, object maps, etc.). Profiling was and remains a really expensive way to understand what’s going on under the covers.

Many have chosen to opt-out of profiling in favor of tracing, which will give you timing, counts and stack traces. I would argue that tracing might solve 95% of all forensic efforts without requiring profiling. Today’s tracing technologies (APM and Distributed Tracing) have improved the fidelity of data without having to jeopardize the cost of profiling.

Ultimately, tracing and profiling are just debugging on steroids. The difference is that code measurement (tracing and profiling) is often used to identify why something is slow or why something costs so much, whereas debugging helps us understand why something is functionally broken.
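As a starting point, here is a minimal Python sketch: a decorator that records call counts and timings (a poor man’s trace), followed by cProfile for the heavier, full-profile view. The function being measured is a throwaway example.

```python
import cProfile
import functools
import time

def timed(func):
    """Lightweight measurement: count calls and time each one."""
    calls = {"count": 0}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            calls["count"] += 1
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{func.__name__}: call #{calls['count']} took {elapsed_ms:.1f} ms")

    return wrapper

@timed
def slow_sum(n: int) -> int:
    return sum(i * i for i in range(n))

slow_sum(1_000_000)

# The heavier option: profile the same call to see where the time actually goes.
cProfile.run("slow_sum(1_000_000)")
```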

Setting Goals and Continuous Improvement

Note to Audience: This is a post I wrote originally in Confluence. I will protect names and references to past companies. Since I’ve moved most of my writing to SeedTheWay on Substack, this is more a post for me to track topics I’ve written in the past.

A big part of becoming a world class team, building a world class product in a world class organization is deeply rooted in goal setting, feedback loops, upfront coaching and of course continuous improvement. That’s definitely a mouthful to digest. If the phrase continuous improvement is new to you, it’s a pretty basic concept in which individuals, teams and organizations make small, incremental improvements to their processes, practices and behaviors.

Take Small Steps Regularly Over Giant Leaps

Of late, I’ve been talking about changing some behaviors when we start work. Specifically, I’ve been asking folks to think about evolving from a mostly feedback-driven approach to a slightly more expensive, upfront coaching approach. The first dart I threw at the wall with this topic was about the craftsmanship movement back in November of ’21. I was proactively encouraging folks to do more shadowing, pairing and also reverse shadowing (when the apprentice takes the lead and the craftsman watches).

There’s an underlying reason to all of these blogs of late. We are 11 months into my tenure here at XYZ Company, and I see so much opportunity for growth and development in the consistency and predictability of our work.

Our pods have more than doubled in size (number of teammates per pod). We have also doubled the number of pods from 6 to 13. We split up the TLM into 2 roles (Engineering Manager + Technical Lead). We have added a fair amount of depth to help teams achieve more autonomy, but also produce more value for our customers.

At the same time, we have made tremendous progress establishing the foundations of a product roadmap. We have put 7 months into release planning. Those 7 months have involved some process maturity around our ticketing, traceability and our estimation. It’s been a lot of incremental efforts to improve. We still have a ways to go, but the winds are changing from headwinds to tailwinds.

A Lot of Opportunity to Grow

Next month all of the pods will have their Release Planning meeting without me. I gave all of the pods visibility that the September Release Planning sessions should be all about the Pod plus their Partner Pods that are working on the same initiative. Due to my travel schedule for work, I literally won’t be able to dial into most, if not all, of the sessions. I will have to watch the Zoom recordings after they happen or catch up with @Minhe Oum for the TL;DR.

I wanted to come back to the whole team about two specific opportunities to grow as we head into September. The first is around breaking work down upfront. The second is more of a reminder about the need for practicing estimation.

Breaking Work Down

Let’s start with breaking work down. As our Product Management organization evolves, we are going to see the vast majority of Epic and Story writing come from the PMs. I would be willing to bet that by October, 100% of all Epics and Stories will originate in ProductBoard. They will integrate into JIRA. We will still have Technical Epics and Stories that originate out of Jira and are defined by our Engineers. Those items certainly won’t make the Product Roadmap. They will however make the Release Plan.

I have two suggestions that teams should consider around the topic of work decomposition. I will cover them below.

Recommendation 1: Use Checklists for a Single Branch Strategy

If your branching strategy is to treat your user story as a single branch (which also implies that you are the sole committer to this user story), you may want to consider using Jira’s Checklist functionality. The functionality is enabled in our Jira instance. Some of our teams are using the checklist functionality.

I recommend it as a way for teams to define their definition of done. Creating a checklist should be a step that the engineer performs as part of getting to a shared understanding of what is involved in the ticket, as well as preparing to add their value to the “Revised Estimate” field.

I suggest using checklists in a single branch strategy to make things less complicated. Ideally, each commit could line up to a checklist item. Teams could establish much of the requirements and expectations of the checklist in a template that we could automate. Ideally, the engineer would add a few checklist items that are specific to the work and would expand beyond the standard checklist template.

Recommendation 2: Use Sub-Tasks for a Multi-Branch Strategy

For years I’ve been very against sub-tasks. I’ve come to appreciate, thanks to a few teams (specifically some feedback I heard from @Dori Amara and @Tim Poulsen), how sub-tasks can be helpful for teams. I would recommend adding sub-tasks to a user story when there are multiple branches involved.

I might start with a feature branch associated with the story. I would rebase that branch with mainline for the entire life of the story. For each sub-task, I would create a working branch for the given sub-task. I would try to make PRs smaller and incremental. It will likely mean more PRs for the team, but they should be smaller and more atomic. I see this as a likely scenario when teams are touching multiple services or code artifacts (upstream vs. downstream).

Within each sub-task, I would make use of the checklist strategy again. I might still use a checklist approach for a user story with sub-tasks. It’s really up to discretion of the team.

What’s Behind Breaking Down the Work More

Teams need to establish their definition of done. It’s one thing to define it in a Confluence page or have a ceremony. It’s another thing to practice it, automate it and iterate against it. Call me old school, but I’m of the belief that when you want to get better at something (continuously improve), you practice it every time until it moves from habit to norm.

I try to encourage teams to break work into really small chunks. As I mentioned above, I encourage folks to break checklist items into single commits. Historically, I’ve encouraged teams to break tickets down in a way that they could be completed in 1 to 2 days. Sometimes it’s impossible to execute the whole lifecycle in under a day. When stories are small enough that they can be completed in under 2 days, it’s a win-win, but when stories are more complex and take 3 to 5+ days, that’s when I realize I need to keep breaking the work down into smaller chunks.

As I mentioned in a previous blog, we need to find our inner Tom Skerritt and break things down to smaller, more manageable chunks.

Practice Estimation

I offer the team some words of encouragement. Estimation is tough, but it is not impossible to become really good at it. It starts with breaking work down into smaller chunks (see above). If every team had a true definition of done that could be applied in a systematic way, estimating stories and tasks would transcend from feeling like a guess to being a derived calculation.

Remember, we are not throwing darts, but rather playing a game of horseshoes. We want to be close to our timeline. It’s a timeline that we set and establish. As I mentioned in some previous blogs on estimation, we need to treat estimation as a phased effort where we start with a small amount of information (a swag at the epic level). As we start decomposing work into stories and tasks, we put our original estimate on the story and update our epic if it changes. When the engineer takes on the work, they decompose their stories into checklists or sub-tasks plus checklists, then apply a revised estimate.

Looking in Jira for tickets resolved in the last 30 days, we had roughly 205 tickets that were closed with a resolution of done. The challenge that I’m still seeing is that a really small percentage have initial estimates. I’m thrilled that we have folks applying revised estimates more than original estimates. We still have a long way to go in which we apply original and revised estimates.

Total Tickets | Original Estimate | Revised Estimate | Contains Both
205 | 30 | 48 | 17

If folks are curious, I would say if the original estimate is, let’s say, 3 days and your revised estimate is the same, it’s OK to put the revised estimate in as 3d. The practice is in the application.

How Teaching and Feedback Loops are Different

Note to Audience: This is a post I wrote originally in Confluence. I will protect names and references to past companies. Since I’ve moved most of my writing to SeedTheWay on Substack, this is more a post for me to track topics I’ve written in the past.

I save 6 spots a week for office hours. The spots are intended for anyone in the department or company. Usually, the main attendees are our engineers. Occasionally, a manager will pop on the schedule if they see an open window.

Recently, I’ve asked my attendees to share how they are getting feedback. There was a noticeable theme. Feedback was given when someone encountered an issue and/or was blocked.

Alternatively, feedback would happen when a branch of work was ready for a Pull Request. When I say a noticeable theme, I literally mean out of an audience of 18, we had a perfect score of 18/18.

Somewhere around the 10th or 11th person, I decided to get a little savvier. I asked folks whether they were getting coached, taught or trained to do something. The answers weren’t as obvious. About half the group shared that they couldn’t cite any examples where they were being explicitly coached. They all said they were getting really good feedback and had great relationships with their TLs and EMs. Most of their learning came from feedback loops after the work was assigned and performed. They would be on the receiving end of feedback when blocked or on a PR. The other half of the group said they were given a reference example and were asked to work through that example asynchronously. Feedback would come once the work was in Pull Request.

Listening to the group, it sounded a bit Darwinian. We should really dig into this more as this will hold us back.

Here’s the Quick TLDR

My goal of this blog is to get folks thinking about proactive or upfront teaching. I’m really zeroed in on this topic of late. As a continuous learner, I’m always looking for new techniques, practices and approaches. I like to learn new topics on the regular. One of the things I personally struggle with is finding a mentor that will work with me in a different way than just giving me feedback. A lot of mentors I’ve worked with in the past put more emphasis on feedback, by observing how I’ve performed. I’ve realized over the years that sometimes you need more than feedback…sometimes you need someone to come in and teach you something completely new.

Back to the Post

When I first titled this blog, I originally put “How Coaching and Feedback Loops are Different”. I decided to swap the word coaching for teaching. I consider myself a coach, a mentor, a boss and most importantly a teacher. I’ve shared with many that when it’s time for me to leave tech, my plan is to go into teaching full-time. The reality is that a big part of my job is teaching.

Teachers do a lot of things in this world. They introduce new ideas. They coach you through problems and challenges. They give you feedback with the intent of improving. They are basically part of the lifecycle of learning (Introduce → Enhance → Feedback → Repeat).

The Reference Example…Nailed It

How many of you have watched the show Nailed It on Netflix? It’s a pretty unique show in which bakers compete against one another to “recreate” a culinary masterpiece prepared by a famous chef. The contestants are given a fixed amount of time to recreate the item. They get to look at it and take note of the design. They are given the supplies to recreate it as best they can. The winner takes $10,000!

I’ve probably watched a dozen episodes since it first aired. What I can say is a lot of the outcomes of the competition look like the example above.

I call out the show Nailed It because it often reminds me of when an engineer is asked to look at a previous implementation, specifically a really complex implementation, when they’ve never worked on anything that complex before. I see this a lot. The reality is that the person sharing an existing implementation has the best intentions. They are trying to be efficient, trusting and empowering to the engineer about to take on the work.

What they might not realize is that the person working on the ticket, a first-timer on a really complex ticket, might be going through some imposter syndrome. In fact, that was something many folks shared with me during our 1:1s. Usually someone experiencing imposter syndrome isn’t going to let on that they need help. That’s like rule #1 if you have imposter syndrome: don’t let them know that you don’t know something.

Teaching on the Front End

I wrote a two part blog about craftsmanship. I think I managed to write two blogs that failed to say the quiet part aloud. In trades that have true apprenticeship programs, the apprentice sometimes will spend weeks or months before they do the work. They spend their time observing, taking notes and preparing to be assessed. We should be doing more of this.

It’s also something that we should strive to do even after you graduate from an apprenticeship. We should be showing new techniques and approaches whenever possible.

I’m not necessarily advocating for us to implement a full-scale approach like this, but I’m suggesting that we consider adopting more “ride alongs” and “pairing” sessions at a greater pace in the first 90 to 120 days that folks are getting comfortable.

From a capacity standpoint, we plan for 3 to 4 months for on-boarding. A lot of that on-boarding tends to be asynchronous reading, reviewing and watching. There has to be a better way. Ideally, we want folks to get more synchronous time as the observer and then as the driver.

We need to show how we do something and discuss why we do something…not once or twice, but several times over and over until it becomes habit.

Always Room for Feedback

I want to encourage folks not to over-correct as we focus more on coaching and upfront teaching. Right now we are indexing hard on the back end with feedback. We simply need to balance that with more upfront and proactive teaching. It likely means things might at first take longer. Ideally, we can discover inefficiencies and areas of confusion. We can spend more upfront time teaching folks new ways, most likely more productive techniques for achieving their outcomes, reducing complexity and minimizing back-and-forth feedback loops. Ultimately, it might make sense to go a little slower now in order to go a lot faster in due time.

Simple Habits to Consider From an Old School Programmer

Note to Audience: This is a post I wrote originally in Confluence. I will protect names and references to past companies. Since I’ve moved most of my writing to SeedTheWay on Substack, this is more a post for me to track topics I’ve written in the past.

I spent the last 3 weeks plugging in office hours and one on ones with a number of engineers on the team, 15 to be exact. The conversations were very enjoyable. Unlike most office hours where I wait for my teammate to ask me questions, I selfishly used each session to ask the following…”How and when are you getting feedback?”

You could have been a wallflower in each and every meeting and heard very similar answers. Everyone had 1:1s with their managers. Most folks had them weekly for 30 minutes. Some had 1:1s with their Tech Leads. The predominant interaction with the Tech Lead was occurring in pair programming sessions or impromptu discussions about a ticket about to go into PR or already in PR.

I shared a number of perspectives with folks. So I figured I would use today’s blog to share my notes with everyone on the team.

Let me start off by saying, I never really considered myself to be an exceptional developer. Professionally, I mainly programmed in Java, but I spent a considerable amount of time working in C, Visual Basic, C++ and Perl. I wrote a lot of SQL by hand, predominantly in stored procedures and triggers.

My first code editor was in fact Notepad. I graduated to Notepad++, Visual SlickEdit and eventually Eclipse. When I had a little bit of cash, I made the move to IntelliJ. In the last few years I’ve settled into Microsoft Visual Studio Code. I would be remiss if I didn’t share that I was a Unix junkie in the late 90s, mainly working in SCO Unix, then promptly making the move to AIX and HP-UX. I call that out because I would often do a lot of my development in vi.

Before You Write a Single Line…Break Down the Problem (Again)

I wish I could say this was the first thing I learned, but sadly I didn’t get this advice until I was 4 or 5 years into my professional career. Back in the early 2000s, most of our requirements were stored in a document management system called Lotus Notes. We used it for email, calendaring and of course document management.

At the time it was a really innovative system, as it stored everything we needed in one place and easily tied into email. Note: there was no Slack. We had AIM and a local IRC setup, but back then our main form of communication was email.

We would have these big, bulky requirements documents in long form. They were often 10 to 20 pages when printed out. This is where the best advice came. A teammate, Kirk Everett, one of our principal engineers, taught me the importance of decomposing verbose requirements into single-line statements. Later in life, a good friend of mine, David Hafley, shared an example from A River Runs Through It, in which Tom Skerritt edits Joseph Gordon-Levitt’s paper and tells him to make it “half as long”. Both examples were about problem simplification. Break big problems into smaller ones; they are more manageable.

Assume There are Unknowns…Get to Shared Understanding

We like to think our Product Managers, Engineering Managers and our Tech Leads understand everything about a problem we are about to solve. The reality is they do not. They will be the first to tell you and admit they understand a fair amount about a problem, but there are still unknowns.

I wrote a blog about this, outside of XYZ Company a couple of years back called the 70% Theory on Specifications. As I wrote in the blog…

“The 70% Theory (which I’m coining) is that when a conversation starts, there is a likely outcome that the information presented and immediately understood between 2 parties is roughly 70% of a problem. The goal of the two people having the conversation is to elaborate with examples (concrete and discrete) to raise shared understanding from 70% to as close to 90% or more.”

We often take a user story or task and get started with coding. Here’s what I’m suggesting:

  1. Take 15 to 60 minutes at the onset of a ticket to read through the ticket and any supporting documentation.
  2. Break down the long-form narratives into explicit requirements.
  3. Write down some leading questions in the form of “What if…” (it’s OK to make them informal, in the form of scenarios/examples). A short sketch of this breakdown follows this list.
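To make this concrete, here is a rough sketch of what that breakdown might look like, captured as a small Python snippet. The ticket, requirements and questions are entirely made up for illustration:

# Hypothetical breakdown of one ticket into single-line requirements
# and "What if..." leading questions.
ticket_breakdown = {
    "ticket": "PAY-123: Let customers retry a failed payment",
    "requirements": [
        "A failed payment shows a Retry action in the billing UI",
        "A retry reuses the original payment method by default",
        "A retry is blocked once the invoice has been voided",
    ],
    "leading_questions": [
        "What if the customer retries twice before the first retry settles?",
        "What if the original payment method was deleted?",
        "What if the invoice amount changed since the failed attempt?",
    ],
}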

Ideally, this hour of time you are spending is not considered wasted by anyone. I for one wouldn’t consider it a waste. What you do next might be the most important part of this whole exercise.

Get some synchronous face time with the person who wrote the ticket. If it was your tech lead or manager, you may want to also grab your product manager. It doesn’t have to be a formal session. What you will need to do is review the user story or task, your breakdown of the requirements and then go through your leading questions.

You may not introduce a single new requirement…that’s totally fine. What you should accomplish in this session are two things:

  1. Your work decomposition helps you gain confidence that you have a deep understanding of the problem you are solving. In a blameless culture, getting alignment across all aspects of the lifecycle is a great way to keep software engineering a team sport.
  2. Your leading questions should become examples. Ultimately, these examples are practical test definitions, which you should author as part of your deliverable. A sketch of what that can look like follows this list.
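To show what I mean by leading questions becoming practical test definitions, here is a minimal pytest sketch. The scenarios and test names are invented for illustration; the point is that each “What if…” question turns into a named, stubbed test before any production code is written:

import pytest

# Each stub names a scenario taken directly from a "What if..." leading question.
def test_retry_is_blocked_when_invoice_is_voided():
    pytest.skip("TODO: implement once the retry logic exists")

def test_second_retry_is_rejected_while_first_is_pending():
    pytest.skip("TODO: implement once the retry logic exists")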

Create a Plan of Attack…At a Code Level

Whether you are refactoring existing code or creating a whole new set of logic in the application, this next piece of advice can be critical to your success in delivering a high-quality, high-value feature. At this point, I likely have not created a branch. I’m likely working out of my notebook (writing notes), and I’ve synced the latest mainline or development branch.

If I’m refactoring an existing area of the application, either due to a defect or an enhancement, I typically will spend 5 to 10 minutes glancing at the code. If it’s human readable, like my teammates always say it is, I will step through it manually as though I had a debugger with breakpoints. If I’m struggling to understand the code, then I actually will create a temporary/disposable branch and get to some debugging.
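In Python, “stepping through it as though I had a debugger” can be made literal with very little ceremony on that throwaway branch. Here is a minimal sketch, where apply_discount stands in for whatever code you are trying to understand (the function and its logic are invented for illustration):

# Drop a breakpoint where the confusing logic starts, then exercise that code path.
# breakpoint() is built into Python 3.7+ and opens pdb by default.
def apply_discount(order_total, customer_tier):  # hypothetical code under study
    breakpoint()  # step with 'n', inspect values with 'p order_total'
    if customer_tier == "gold":
        return order_total * 0.9
    return order_total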

I want to understand the following:

  • The intended logic of the existing code
  • Existing design patterns
  • Current testability
  • History of commits/churn
  • Recent authors who touched this code

I approach this effort like I do when putting IKEA furniture together. I take all of the parts and pieces out of the boxes. I neatly arrange them in their own piles in a way that is organized and orderly. While it might take extra time to organize, it helps me stay focused and come up with a true plan of attack for how I’m going to assemble everything.

I typically will take the five areas of understanding and from them put together an organized set of questions. I’ve included examples below, pairing each initial understanding with its plan-of-attack questions:

  • The intended logic of the current code: Do I understand the flow, logic and sequence of the current implementation? Will it change to support this new need?
  • Existing design patterns: Can I describe the current design pattern implemented? Do I recognize it? Is it the right pattern? Can my changes be made without abandoning the current patterns?
  • Current testability: What testability exists today? Will my change help me understand if I have broken anything and caused a regression failure? What can I do to leave this in better testability shape than it is in now?
  • History of commits and code churn: Have most of the commits been focused on new capabilities? Have any of the commits been the result of fixing a bug? Is this a part of the code with a high degree of churn? Is there anything I can decipher about this code from past tickets, older requirements, PR messages or code comments?
  • Recent authors who touched the code: Is there anyone on staff who has worked on this area of the code recently? If so, who? Do they remember this area of the code?

Stub and Comment

It’s no secret that I’m a huge fan of stubbing out or outlining code. One of my closest friends was really big on stubbing out his code as his first activity after branching. He would name classes and methods. I gave it a shot one day and realized that it helped me understand what was involved and what needed to be built.

I added the commenting as part of stubbing. I would write a few notes in new classes/methods. If I knew that I needed to refactor an existing part of the code, I would often annotate the comment with [Date – Ticket Number – Name] and then add a couple of lines in plain English describing what I was going to implement.
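Here is a minimal sketch of what that stub-and-comment pass might look like in Python. The class, methods, ticket number and name are all invented for illustration:

# [2016-04-12 - TICKET-456 - J. Smith] Add retry support for failed payments.
# Plan: look up the original payment method, re-submit the charge, record the attempt.
class PaymentRetryService:

    def find_original_payment_method(self, invoice_id):
        # Loads the payment method used on the failed attempt.
        raise NotImplementedError

    def submit_retry(self, invoice_id, payment_method):
        # Re-submits the charge and records the attempt for auditing.
        raise NotImplementedError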

I have said it before and I will say it again…I’m a huge fan of commenting my code. Some folks are not. I’m not going to force a behavior on anyone who doesn’t buy into it. A lot of folks emphasize elegant, clean code that is readable and has the appearance of being self-explanatory.

I typically would put one of four types of comments in my code. The first type explained the details of loaders/persisters that I implemented. Anytime I interacted with a data store, I wanted folks to understand the purpose behind the data access pattern. I would often include examples.

The second type covered caching. Whenever I made use of caching, I would put in details about the cache. Sometimes I would link to wiki notes about benchmark micro-tests I had run.

A third type of comment I would add would be around testability and test strategy. I put a lot of emphasis in my development efforts on automated tests. From 2004 through 2006, I practiced a lot of TDD, so more often than not I would stub out tests, and the comments about those tests, before I wrote any code.

The fourth type of comment would be to explain the details of a defect when I was refactoring. I’m not talking about cosmetic bugs. Rather, when I wrote comments about a bug, it was because the bug was pretty challenging and not at all obvious.
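To make those four flavors of comment concrete, here is a compressed, hypothetical Python sketch; the data store, cache and defect described are invented for illustration:

class AccountLoader:

    def load_accounts(self, account_ids):
        # Data access: reads accounts in batches by primary key to avoid a full
        # table scan, e.g. load_accounts([101, 102]) issues a single query.
        raise NotImplementedError

    def load_account_cached(self, account_id):
        # Caching: results held per account id with a 5-minute TTL; see the wiki
        # notes on the micro-benchmarks behind that number.
        # Test strategy: unit tests stub the repository; one integration test
        # covers the cache-expiry path.
        raise NotImplementedError

    def current_balance(self, account):
        # Defect note: guard against a None balance; the upstream import job can
        # emit accounts before their first ledger entry exists (non-obvious bug).
        raise NotImplementedError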

Feedback Is Omnipresent…Not Just At the End or a Problem

When I asked folks in my office hours when they get feedback, all of them called out that feedback happened when they got stuck or when they submitted the PR. Today’s blog takeaway is that feedback should be omnipresent. My suggestions above hint at getting feedback early…as early as reviewing tickets before a single line of code is written.

I am of the mindset that there are some really good nuggets of advice above. I understand that these suggestions are different and require a change in mindset to adopt. They also require folks to put in a little more upfront time. I encourage folks to give it a shot and try some of these techniques.

Happy programming!

Why I’m a Fan of Tuesdays, Wednesdays and Thursdays

Note to Audience: This is a post I wrote originally in Confluence. I will protect names and references to past companies. Since I’ve moved most of my writing to SeedTheWay on Substack, this is more a post for me to track topics I’ve written in the past.

It looks like it’s been a hot minute since I last blogged. Apologies to my readers who are anxiously waiting for those new nuggets of intellect, feedback and of course opinion. Today’s topic is more opinion than intellect or feedback driven. It’s based on my previous experience at past companies. I wrote a similar blog a while back about deploying around holidays. At the end of that blog I casually mentioned that I don’t have an issue with deploying on Fridays. I do stand by that belief. If your engine is repeatable, automated, timely and of course has the test coverage to give you the confidence to deploy, by all means, deploy.

What’s Different About Today’s Blog?

I was looking through the Release Plan in Jira, and one of the odd coincidences I noticed is that several of the active initiatives were scheduled to wrap up on Fridays. Here’s what I know about Fridays:

  • It’s the last line of defense between you and the weekend.
  • A lot of folks tend to take the day off to extend their weekend.
  • Some folks like to turn Thursday evening into their “Friday Eve” celebration day.
  • It’s the last day of the week after a long week of hard work and effort.

I’ll just come out and say it. I’m not a fan of closing out initiatives on Fridays. I’m fine with deploying on Fridays. In fact, every team should feel confident deploying on Fridays. There’s a difference between closing out an initiative and making a deploy.

When you close out an initiative in delivery, it typically involves a handoff to another team in a Go-To-Market function. Who wants to do a handoff on Friday? I for one don’t like to be on the receiving end of a handoff on Friday. It would be an absolute Debbie Downer for my weekend.

So let’s say you finish on a Friday, but then you have to perform the handoff on Monday. Well, that defeats the purpose of our intentional “No Meeting Mondays”.

Let’s not forget the other gotcha…let’s say something does go bad on a Friday. Are you really excited about the prospect of potentially working through the weekend to resolve the issue? Likely not…so let’s talk about some other options.

Tuesdays, Wednesdays and Thursdays

At my last company, a number of teams were making use of time-boxed sprints. Most of those teams started with the Monday-to-Friday cadence. They mostly ran one- or two-week sprints. Over time, the teams realized the issues above. They wanted a different experience.

Several of the teams shifted to a Wednesday-to-Tuesday schedule. I believe one team went Thursday to Wednesday as well. Work would wrap up during the week. New initiatives would start after planning, which was typically performed a few minutes after the sprint close-out.

While most of our teams/pods don’t make use of sprints, I would say you don’t need to in order to pull this off. First, all of our pods take advantage of no-meeting Mondays. To fully embrace the purpose of no-meeting Mondays, we shouldn’t be planning, reviewing or retro’ing that day. Second, all of our pods control their own schedule.

Nothing is really holding us back from doing it other than thinking through it and making a decision.