Why I Blog

From time to time people ask me why I blog. The best way for me to answer this is to give you the quick elevator pitch, as well as refer you to a passage below from a blog I wrote back in 2008. I started blogging internally, and then externally, when I realized that there was a potential audience of listeners. It wasn’t just about being heard. When I say listeners, I mean people who were curious about my work, my team’s work or the things that we as a team came across.

In the early days I used my blog to tell a story about a forensic exercise, a tool evaluation, an idea I had or even some deep intellectual stuff. I wanted a quick and easy way to document my own experiences in a scratchpad. I was really hoping that my blogging would prove contagious and that others on the team would start blogging as well.

I was trying to break a bad habit in my engineers. I noticed my engineers treated knowledge sharing as the final exercise in a project. It was kind of like their code commit patterns. Back in the early 2000s the developers I worked with were really unpredictable in committing code. We would have month-long projects and often we would see commits once or twice a week (if that) and then a couple of big commits at the end of the project. Documentation would come in the same cadence. Maybe we would see a TOC early in the project. Then all of the content would miraculously show up a week or two after the final commit (if we were lucky). I constantly felt in the dark about our progress and issues. The only time I really heard from my engineers was when they were about to miss a deadline and needed an extension…or if they wanted to share a success. What I really wanted was for my engineers to show their work as they went along. I wanted their work to be more transparent. Basically, I wanted them to develop some new good habits.

What I found quickly was that blogging was contagious. Nearly every member of my team took to blogging. Eventually they took to daily commits (some even more extreme…YEAH!!!). At Blackboard, we were considered not only the most transparent team, but often the most innovative as well. Many of our blogs were about experimentation and exploration with new technologies. Because we also shared our thoughts, processes and workflows (we just put them out there for all of Bb to criticize or commend), many teams viewed us as pioneers in thinking.

As I mentioned earlier, I posted a blog in 2008 about Transparency of Work. I’ve included a passage below from that entry. My thoughts in 2008 haven’t really changed all that much in 6 years. Take a look at the entry. Hopefully, you will start blogging as well.

Old Blog Post

Seven Habits of a Highly Effective Performance Engineer

This is really an extension of #3 Share Your Experience. For this point, I want to share a quick story. In high school, I had a Math teacher named Captain McIsaac. My high school was originally a feeder school for the Naval Academy, Coast Guard Academy and the Merchant Marine. So we had a lot of older teachers who were former Navy. Well anyways…Old Cap McIsaac was an interesting guy. He looked like Ted Kennedy’s twin and probably scored the same on most of his breathalyzer tests. He was a terrible Math teacher. Most of us thought he was awesome because he would give us the answers to the exam questions during the exam. We never had to show our work. That’s great for kids who cheat off each other. I have to admit…looking back, the guy was terrible. He didn’t hold us accountable for our work. It showed in all of my Math classes after Cap’s class. I did well because I love Math, but it takes an awfully long time to break bad habits. You can pick up a bad habit in seconds, but it takes weeks…sometimes years to break one.

There’s an important reason for showing your work…actually there are multiple. The number one reason is so that you personally can spend the time reviewing what you did and explaining it to your peers in a visual manner. Don’t worry if you change your ideas…you just write new blogs. The second reason is that we are a global team. Everyone on the team should get the opportunity to learn from other members of the team. It’s a good way to get feedback and share work. The third reason, which is sadly a bit lame, is that our days become so busy that sometimes we need to be able to comment on a blog rather than having a conversation or email thread.

Code is a Team Asset and Not Personal Property

 

“Code is a team asset, not personal property. No programmer should ever be allowed to keep their code private.”

 

I just finished this book this morning. I’ve been reading it the past 5 rides into the office. It’s a quick read and one any manager (new or experienced) should read. If you read my entry about transparency from earlier in the week, you probably get a sense that I’m a firm believer in openness and sharing. High-performing teams more often than not are very open and sharing. They put their thoughts out there in person, as well as in written form. They expose their artifacts, whether it be code or content, to be viewed, critiqued or commended on a continuous basis (daily being the longest cadence). 

Software teams that want to practice Continuous Integration have to think like Osherove suggests about their code. Developers have to be willing to commit often, knowing that the code they produce for their product or project is not their own art that they can keep protected on their laptop or even a personal GitHub account (I’ve seen this happen over and over, mind you). If they are contributing code to a product or project, then they have to be willing to share and integrate their code ‘as frequently as humanly possible’.

Step 1 is changing perspective. I may have written the code, but the whole team owns it. If for some reason I won the lottery and left the company, the team would still be accountable for the quality and functionality of that code. Step 2 is about creating the habit. The habit is to commit early and often. A commitment device that I would recommend is to set up a CI server like Jenkins or Bamboo. Set up a job that polls your source code tree every 10 seconds to see if something new has been checked in. Have that job do a simple compile. Eventually, daisy-chain steps like building, unit tests, static analysis, integration tests and acceptance tests. Step 3 is about sharing your CI server dashboards constantly in your team space and at the forefront of your morning stand-ups.
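To make Step 2 concrete, here is a toy sketch of that poll-compile-test loop. It is not a real CI server, just a Java illustration of what the Jenkins or Bamboo job would be doing for you; the Git and Maven commands and the SpotBugs static-analysis goal are assumptions about the project’s tooling.

```java
import java.io.IOException;

public class TinyCiLoop {

    public static void main(String[] args) throws Exception {
        String lastBuilt = "";
        while (true) {
            // Step 2's "poll the source code tree": ask Git for the current commit.
            String head = run("git", "rev-parse", "HEAD");
            if (head != null && !head.equals(lastBuilt)) {
                System.out.println("New commit detected: " + head);
                // Daisy-chained steps: compile, then unit tests, then static analysis.
                boolean green = run("mvn", "-B", "compile") != null
                        && run("mvn", "-B", "test") != null
                        && run("mvn", "-B", "spotbugs:check") != null; // assumes the SpotBugs plugin is configured
                System.out.println(green ? "Build green for " + head : "Build broken for " + head);
                lastBuilt = head;
            }
            Thread.sleep(10_000); // poll every 10 seconds, as in the post
        }
    }

    // Runs a command in the current directory and returns its output, or null if it failed.
    private static String run(String... command) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command).redirectErrorStream(true).start();
        String output = new String(p.getInputStream().readAllBytes()).trim();
        return p.waitFor() == 0 ? output : null;
    }
}
```

In practice you would let the CI server own this loop; the developer’s half of the deal is simply the habit of committing early and often so the loop always has something new to build.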

Give it a try…See what happens. 

Transparency…Training…Setting Expectations

I’ve been reading The Hard Thing About Hard Things by Ben Horowitz the past few days. Halfway through, there is an interesting chapter called “Why Startups Should Train Their People”. The chapter is essentially a replay of a blog Horowitz wrote about good product managers versus bad product managers. You can see the story here. Personally, I think the chapter should be renamed to “Why All Companies Should Train Their People…Continuously!”

Thoughts About His Blog First…

As technologists, I think we take for granted the training of our engineers and scientists. We see training as an ‘exercise’ that is done during the initial phase of on-boarding an employee to a team or a new job. Rarely is it seen as a continuous exercise. Most engineering organizations miss it big time during the on-boarding phase as well. They may spend a fair amount of time training new hires on the corporate policies, processes and norms. A little bit of time is spent training the engineer to get going, but mainly from an access and peripherals perspective.

Most technical managers overlook the fact that training should be a continuous exercise. Just because a software developer may have years of experience working with an IDE (Integrated Development Environment), a source code repository like Git or SVN and a defect tracking system like JIRA or TFS doesn’t necessarily mean that they are good to go with nothing more than a wiki document on how to get started. The good technical managers go through the experience together with their developers to make sure they are on the right course and that the little intricacies (rarely documented) become small bumps in the road rather than blockers to getting started and being productive.

Training is more than just getting a development environment up and running. It’s about setting a standard and defining expectations around performance and commitment to one’s craft. It’s about sharing the norms of the team culture, demonstrating them first hand and then working as a team to highlight mastery or achievement of those norms. For example, if the norm of the team is to commit code daily and the value is transparency of work and a dedication to a quality pipeline, then both the norm and the value should be taught to the new team member. I call that out because teams will often bring on new developers while having several norms in place that are poorly communicated to the new team member.

I remember that when I was more focused on Performance Engineering, the first thing I had a new member of my team, or a new software engineer in our development organization, do was read my blog on the habits good performance engineers exhibit. It was only the start of training for being on my team. I felt it was my responsibility to train all of my managers. I didn’t train all of the engineers, but every manager had to spend a fair amount of time with me so that we were really clear on expectations, as well as on the standard that I wanted my managers to uphold.

This isn’t a one-time thing either. I felt it was important that I continuously learn and share my experiences with my team. For example, we value good engineers not just because they write elegant, reusable code, fix bugs in a timely manner and share their work. We consider them good because they’re constantly evolving and learning through experiences outside of their daily assignments. They are working on Open Source projects, learning new languages and collaborating with industry peers. We call them good because they are growing. The same holds true for good managers. They don’t just do a set of tasks well and repeat them. They learn about new things, technology, practice, process, etc…they experiment, share and incorporate.

About the Article

I think anyone can replace the role of “Product Manager” with any job they want to insert. It could be “Budget Analyst” or “Software Architect” or “Release Engineer”. It really doesn’t matter from my perspective. What matters is the forethought about the beliefs Horowitz feels make up a good versus bad product manager, seen through his own ideological lens. I don’t necessarily agree with every statement that Horowitz writes, but I do agree that expectations should be set. Managers and team members cannot just share the “expected” norms with a new team member without also sharing the norms that detract from the team’s culture and success in the role. Being able to share both positive and negative norms is critical to establishing a stage of transparency for the team.

Ben Horowitz: Good Product Manager…Bad Product Manager

Good Product Manager/Bad Product Manager

Courtesy of Ben Horowitz

Good product managers know the market, the product, the product line and the competition extremely well and operate from a strong basis of knowledge and confidence. A good product manager is the CEO of the product. A good product manager takes full responsibility and measures themselves in terms of the success of the product. They are responsible for right product/right time and all that entails. A good product manager knows the context going in (the company, our revenue funding, competition, etc.), and they take responsibility for devising and executing a winning plan (no excuses).

Bad product managers have lots of excuses. Not enough funding, the engineering manager is an idiot, Microsoft has 10 times as many engineers working on it, I'm overworked, I don't get enough direction. Barksdale doesn't make these kinds of excuses and neither should the CEO of a product.

Good product managers don't get all of their time sucked up by the various organizations that must work together to deliver right product right time. They don't take all the product team minutes, they don't project manage the various functions, they are not gophers for engineering. They are not part of the product team; they manage the product team. Engineering teams don't consider Good Product Managers a "marketing resource." Good product managers are the marketing counterpart of the engineering manager. Good product managers crisply define the target, the "what" (as opposed to the how) and manage the delivery of the "what." Bad product managers feel best about themselves when they figure out "how". Good product managers communicate crisply to engineering in writing as well as verbally. Good product managers don't give direction informally. Good product managers gather information informally.

Good product managers create leveragable collateral, FAQs, presentations, white papers. Bad product managers complain that they spend all day answering questions for the sales force and are swamped. Good product managers anticipate the serious product flaws and build real solutions. Bad product managers put out fires all day. Good product managers take written positions on important issues (competitive silver bullets, tough architectural choices, tough product decisions, markets to attack or yield). Bad product managers voice their opinion verbally and lament that the "powers that be" won't let it happen. Once bad product managers fail, they point out that they predicted they would fail.

Good product managers focus the team on revenue and customers. Bad product managers focus the team on how many features Microsoft is building. Good product managers define good products that can be executed with a strong effort. Bad product managers define good products that can't be executed or let engineering build whatever they want (i.e. solve the hardest problem).

Good product managers think in terms of delivering superior value to the market place during inbound planning and achieving market share and revenue goals during outbound. Bad product managers get very confused about the differences amongst delivering value, matching competitive features, pricing, and ubiquity. Good product managers decompose problems. Bad product managers combine all problems into one.

Good product managers think about the story they want written by the press. Bad product managers think about covering every feature and being really technically accurate with the press. Good product managers ask the press questions. Bad product managers answer any press question. Good product managers assume press and analyst people are really smart. Bad product managers assume that press and analysts are dumb because they don't understand the difference between "push" and "simulated push."

Good product managers err on the side of clarity vs. explaining the obvious. Bad product managers never explain the obvious. Good product managers define their job and their success. Bad product managers constantly want to be told what to do.

Good product managers send their status reports in on time every week, because they are disciplined. Bad product managers forget to send in their status reports on time, because they don't value discipline.

 

Who Are Your Tech Role Models

This is kind of a fan-boy post and I’m hoping I get some discussion or follow-up blogs from my colleagues. I would like to use this blog to talk about a few of my role models in technology. I encourage anyone who comes across this blog to put their own list out there. For my list, I’m going to cover the technologists I look up to most for guidance and direction. I consider them my north star of technology. All have a background in software performance. Each is an engineer at heart. They have built products, systems and architectures.

Tech Role Model #1: Cary Millsap

My list is not in any particular order, by the way. If I did order it, I would probably put Cary #1 anyway. I first read Cary’s book Optimizing Oracle Performance in the fall of 2003. I was about to leave my job as a Performance Engineer at Manugistics to take on a new job as the Director of Performance Engineering at Blackboard. I think I read Cary’s book cover to cover 3 times over the course of a 10-day period. His methodology, which he calls Method R (also the name of a company he created), was the most pragmatic and practical approach to performance forensic analysis any engineer had presented in the last 10 years. I followed Cary’s career from Hotsos to Method R to Enkitec to Accenture.

I attended multiple Hotsos Symposiums and even hosted Cary for a week-long consulting engagement with my team. If I had to sum up in a sentence or less why Cary has influenced me, it really comes down to the paper he wrote called Thinking Clearly About Performance. In a little less than 15 pages, he’s able to distill my entire career of beliefs and practices about software performance.

Tech Role Model #2: Steve Souders

I first met Steve Souders, formerly of the Yahoo Exceptional Performance team, in 2008 at the first Velocity Conference. He was the public face of YSlow and one of the first engineers I came across in the industry who looked at performance from the standpoint of the user’s perceived experience. In my early years of performance engineering, I had been focused on the throughput and processing times of algorithms. When I moved to Blackboard, my attention shifted from the server and database to the complete full stack, including the client. Steve and his team were the first to really distill front-end performance. He was a true pioneer.

I’ve met Steve many times at various conferences. We have exchanged a couple of emails over the years as well. Like Cary, he appreciates his community of followers and welcomes the attention. The good thing is that he doesn’t seek or crave the attention. He takes it in stride. These days, Steve is no longer with Yahoo or Google. He’s moved on to Fastly as their Chief Performance Officer. The fact that a company has a Chief Performance Officer is a testament to Steve.

Tech Role Model #3: Adrian Cockcroft

Adrian Cockcroft was one of my earliest performance engineering heroes. He worked at Sun Microsystems back in the late ’90s and early 2000s in various roles. Back in 2002, I worked on a competitive benchmark with him and the Performance Engineering team at Sun. I was like a kid in a candy shop. Looking back, I probably didn’t appreciate the opportunity I was granted and the access I had to him. Like Steve and Cary, what makes Adrian special is not some insane degree of intelligence (though Adrian is ridiculously smart), but rather his pragmatic and practical thinking. Performance Engineering is an engineering discipline based on decomposition of time, demand and inertia. Smart, critical thinkers succeed.

It’s Not Goodbye, but See You Later…

I’ve never been one for goodbyes over the years, so I will leave you with a blog of hope that one day we will see each other again. We might be working together away from Blackboard or maybe one day I will come back. I just don’t know…What I do know is that for this blog I’m going to cover some parting words that hopefully will resonate with my reading audience. I’m hoping it will ignite some kind of spark and make an impact on the future.

 

When I started the Performance Engineering team in the fall of 2003, I set out on a mission to make Blackboard the fastest performing, most scalable e-Learning software platform in the world. I wanted us to be the benchmark in the software space, with companies looking at us from a distance with deep admiration and respect. That leads me to my first major point…

 

1) Set high expectations for yourself and your teammates…Do what you can to achieve them.

 

There were a lot of things I wanted to accomplish when I came to Blackboard, but the one thing I knew I didn’t want was to fail. I came here in my late twenties. I was a mere child in terms of professional experience. I was being entrusted to build a multi-million-dollar team for a $100 million company that wanted to go public and become a $1 billion company.

 

Our CEO, Michael Chasen, had high expectations for me; therefore I needed to set even higher expectations for myself and the team I was building. Setting expectations is really about setting goals and then being transparent about those goals. Achieving expectations is about being both strategic and operational at the same time.

You don’t have to have 20+ years of experience to be successful in any venture. You have to be smart, committed and resilient. The smarts come from planning, researching and, my personal favorite, continuous learning from experience. The commitment is about execution to plan, as well as a willingness to re-plan after learning from mistakes. The resilience is about perseverance when times are challenging.

 

2) Every day is a benchmark

 

I wrote a blog back in June of 2007 to my team about the importance of seizing the moment. Unfortunately, the blog was internal and I never posted it publicly. I was only half correct with the blog. The part I nailed was insisting that every day is a chance to start over. Every day is a chance for a new beginning.

 

I missed an essential part which, looking back, could have and should have fundamentally changed our team’s purpose. It was an aha moment; if I could do it all over again, I would have done it differently.


The focus of that blog was testing and benchmarking. In 2007, we were a very good testing and benchmarking organization. Some in the industry might have said we were one of the best given our maturity, practice and tooling. We should have looked at all of that production data in real time and built an analytics engine that studied live system data. That was the real data we needed more than anything. I’m not talking about volumetric sampling. I’m talking about APM (Application Performance Measurement).

 

We should have built the collection tools and engine to process the data. That would have been disruptive. It would have been game-changing. We didn’t, and as a result we failed to reset expectations and learn from our past experiences.

 

3) We cannot change the cards we are dealt…we can change how we play the hand

 

I don’t know the original author of this quote. The context for me was hearing it back in 2008 while watching a YouTube clip of Randy Pausch giving his famous Last Lecture. I think I watched that lecture a dozen times. I bought the book and read it over and over as well.


That quote has been in my head constantly for the last few months as I’ve been deciding whether to leave Blackboard or stick around. I’ve thought about it in the context of my 11 years here. I realized that over and over again my teammates and I were dealt blow after blow. Some of those blows were good…some were bad. Rarely did anything we planned as a group happen in its natural order. More often than not we found ourselves treading water or playing a defensive game of ping pong.

 

We got through it all. The way we got through it all was being adaptive and willing to change our plan.

 

Blackboard will hopefully outlive me by many decades. The folks who are a part of the future will hopefully adapt like my colleagues and I adapted over the years. Looking back, that’s what made this place so special. There was a simpatico ebb and flow to changing on the fly. Hopefully the people and the company won’t forget that going forward.

 

- Steve Feldman

Blackboard (2003 – 2014)

Continuous Delivery…Continuous Integration…Continuous Deployment…How About Continuous Measurement?

I spend a lot of my free time these days mentoring startups in the Washington, DC and Baltimore, Maryland markets. I mentor a few CEOs who are building software for the first time, as well as a few folks in the roles of VP of Engineering or Director of Development. It’s fun and exciting in so many ways. I feel connected to a lot of these startups and personally feel a lot of satisfaction mentoring some really great people who are willing to put it all out there for the sake of fulfilling an entrepreneurial spirit.

I’m not just partial to startups. I enjoy collaborating with peers and colleagues that work at more tenured companies. I think it’s important to get alternative perspectives and different outlooks on various subjects such as engineering, organizational management, leadership, quality, etc…


For about four years or so there’s been a common theme amongst many of my peers and the folks I mentor. Everyone wants to be agile. They also want to be lean. There’s a common misconception that agile = lean. Yikes! I’ve also noticed that a lot of them want to follow the principles of Continuous Delivery. Many assume that Continuous Delivery also means Continuous Deployment. The two are related, but they are not one and the same. Many of them miss that Continuous Integration is development oriented, while Continuous Delivery focuses on bridging the gap between Development and Operations (aka…the DevOps movement). Note: DevOps is a movement, people…not a person or a job.

The missing piece…and I say this with the most sincere tongue, by the way…is that there still remains a *HUGE* gap with regards to “What Happens to Software In Production?” My observation is that the DevOps movement and the desire for being Continuous prompted a lot of developers and operations folks to automate their code for deployment. The deployments themselves have become more sophisticated in terms of the packaging and bundling of code, auto-scaling, self-destructing resiliency tools, route-based monitoring, graphing systems galore, automated paging systems that make you extra-strong cappuccinos, etc…Snarky comment: developers and Operations Engineers can’t be satisfied with deploying an APM and/or RUM tool and calling it a day.


Continuous Measurement is really what I’m getting at. It’s not just the 250k metrics that Etsy collects, although that’s impressive, or maybe a little obsessive, to say the least. I would define Continuous Measurement as the process of collecting, analyzing, costing, quantifying and creating action from measurable data points. The consumers of this data have to be more than just Operations folks. Developers, architects, designers and product managers all need to consume this data. They *need* to do something with it. The data needs to be actionable, and the consumer needs to be responsive to the data and thoughtful about next-generation and future designs.

In the state of Web Operations today, companies like Etsy or Netflix derive a tremendous amount of meaning from the data they collect. The data drives their infrastructure and operations. Their environments are more elastic…more resilient and, most of all, scalable. I would ask some follow-up questions, though. For example, how efficient is the code? Do they measure the Cost of Compute (aka the cost to operate and manage executing code)?

Most companies don’t think about the Cost of Compute. With the rise of metered computing, it’s striking to consider the lost economic potential and the implied costs of inefficient code. Continuous Measurement should strive to recover that lost economic opportunity (aka…less profit). Compute should be measured, as best it can be, at the service level, the feature level and even the patch-set level.

A lot of software companies measure the Cost to Build. Some companies measure the Cost to Maintain. Even fewer measure the Cost to Compute. Every now and again you see emphasis placed on the Cost to Recover. Wouldn’t it be a more complete story with regards to Profit if one were able to combine the Cost to Build with the Cost to Maintain and the Cost to Compute?
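As a back-of-the-envelope illustration of what combining those costs might look like: every number below is invented purely for the sketch, but the point is that a feature that looks profitable on Cost to Build alone can tell a very different story once Cost to Maintain and Cost to Compute are folded in.

```java
// Hypothetical back-of-the-envelope math: every figure below is invented purely
// to illustrate combining Cost to Build, Cost to Maintain and Cost to Compute.
public class FeatureCostSketch {
    public static void main(String[] args) {
        double costToBuild    = 150_000;  // one-time engineering cost (assumed)
        double costToMaintain =  40_000;  // yearly bug fixes and upgrades (assumed)
        double costToCompute  =  60_000;  // yearly metered CPU, memory and storage (assumed)
        double yearlyRevenue  = 300_000;  // revenue attributed to the feature (assumed)

        System.out.printf("Margin on Cost to Build alone:            $%,.0f%n",
                yearlyRevenue - costToBuild);
        System.out.printf("Margin once maintain + compute are added: $%,.0f%n",
                yearlyRevenue - (costToBuild + costToMaintain + costToCompute));
    }
}
```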

Maybe the software community worries about the wrong things. Rather than being focused on speed of delivery of code and features, maybe there should be greater emphasis placed on efficiency of code and effectiveness of features. Companies like Tesla restrict their volume so that each part and component can be guaranteed. Companies like Nordstrom and the Four Seasons are very focused on profit margins, but at the same time they place great value on brand loyalty. I used to think that of Apple, but it’s painfully obvious that market domination and profitability have gotten in the way of reliable craftsmanship. I love my Mac and iPhone, but I wish they didn’t have so many issues.


I have no magic beans or formula for success per se. I would argue that if additional emphasis were placed on Continuous Measurement, many software organizations would have completely different outcomes in their never-ending quest to achieve Continuous Delivery, Continuous Integration and Continuous Deployment. It just takes a little bit of foresight to consider the notion that Continuous Measurement is equally important.

What Ever Happened to Software Patterns and Anti-Patterns

Thirteen years have passed since I first read Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software, which, if you know me well, you know I consider my bible of Software Performance Engineering. Note, I’m not a religious guy, so the fact that I referenced the bible is saying a lot. I still keep a copy of it about 5 feet from my desk. To this day I lend it out at least 3 or 4 times a year to members of my team. It’s a book that has clearly maintained its luster. I can’t call out anything in the book that doesn’t apply to today’s computer scientists.


My big takeaway from the book and the teachings of Smith and Williams is the notion of Software Performance Anti-Patterns. Earlier in my career and my studies, I was intimately familiar with Software Patterns. I read the Gang of Four’s classic Design Patterns, which was published in the mid-1990s and, in awesome fashion, is still relevant today. I have a copy of that book sitting next to my copy of Performance Solutions. I wonder if today’s CS graduates are even reading either of these books as solid references. It’s like a journalist or English major making it through undergraduate studies without reading Strunk and White’s Elements of Style. Is it possible to graduate without reading these books?


As a young engineer and computer scientist focused persistently on software performance, I lived and breathed patterns and anti-patterns. I used them for meaning, as well as for guidance in helping my fellow developers learn from simple coding mistakes that in the early days of Java were easily exposed. Early Java code in the days of Java 1.3 and 1.4 was wasteful. Heck, there’s still a lot of Java code today that’s wasteful as well. By wasteful I am referring to poor memory management and wasteful requests, to name a couple. Simple anti-patterns such as wide loads or tall loads were common. There was blatant disregard for understanding the lifecycle of data, how data was accessed and whether the data was ephemeral or long-lived. There was too much inter-communication transport between containers and relational data stores. Not that I’m trying to equate every software pattern to memory management or data transport, or advocating the use of caches to solve every problem. I’m just picking a few off the top of my head.
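To make the idea of a wasteful request concrete, here is a small, hypothetical Java/JDBC sketch (the table, the columns and the seven-day window are all invented for illustration). The first method drags every row and every column of a table into the heap and filters in Java; the second lets the database do the filtering and return only the one column that’s needed.

```java
// A made-up example of a wasteful "pull everything" query versus a leaner one.
// Table, column and schema names are invented for illustration only.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class RecentOrders {

    // Anti-pattern: a "tall and wide" load. Every row and every column comes back
    // over the wire and into the heap, and the filtering happens in Java.
    static List<String> recentOrderIdsWasteful(Connection con) throws SQLException {
        List<String> ids = new ArrayList<>();
        Instant cutoff = Instant.now().minus(Duration.ofDays(7));
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM orders")) {
            while (rs.next()) {
                if (rs.getTimestamp("placed_at").toInstant().isAfter(cutoff)) {
                    ids.add(rs.getString("order_id"));
                }
            }
        }
        return ids;
    }

    // Leaner: the database filters, and only the single needed column is returned.
    static List<String> recentOrderIdsLean(Connection con) throws SQLException {
        List<String> ids = new ArrayList<>();
        String sql = "SELECT order_id FROM orders WHERE placed_at > ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setTimestamp(1, Timestamp.from(Instant.now().minus(Duration.ofDays(7))));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getString("order_id"));
                }
            }
        }
        return ids;
    }
}
```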


So my goal in this post is not to sound like a curmudgeon. I’m too young to be old and crabby. I’m not a purist either. I’m more of a pragmatic engineer who likes to constantly ask questions. That’s the forensic side of my personality, I guess. The question that’s on my mind these days is: do the developers of today…specifically the developers who are going hard after languages such as PHP, NodeJS and Ruby…think about software patterns and anti-patterns? Do we see an absence of thought around challenging our designs because our code can be quickly assembled via frameworks tied to these languages and others? Has Lean Thinking made developers more conscious of code productivity than of code efficiency?

I’m sure I’m going to get lambasted by a few readers who are passionate about these modern web frameworks and the new stacks. That’s cool and fine. I personally am a big fan of these stacks and capabilities, mainly because they make the development of software more accessible to everyone. That’s not my point in being interrogative about design patterns and anti-patterns. I guess I’m more curious about whether developers today are thinking about design patterns and are able to identify anti-patterns, or whether they are more focused on writing code faster and with less effort. Don’t forget code that’s testable from the start.

That’s actually one of the greatest observations I’ve made about today’s developers using these newer languages and frameworks. They are social coders who fork and pull all of the time. They are following more agile practices around TDD and BDD. They write their own automation. A lot of these developers take more accountability for the quality of their code than any other generation of developers that I’ve witnessed. Note, I am young (38) and really have only worked with 2 or 3 generations of coders. A lot of these developers are focused on the deployment of their code as well. They make use of the accessibility of systems through providers like Amazon, Rackspace, Azure, Google and Heroku. They leverage deployment tools like Capistrano or RunDeck. They write automation with configuration management frameworks like Chef, Puppet and Ansible. They love APM tools like New Relic or AppDynamics. All indications support the thesis that today’s more modern developers take more accountability for many facets of development.

We should commend those developers for what they are doing. Greater consolidation of languages, frameworks and tools increases the likelihood that the community of contributors to these technologies will give back. It also leaves open, at least in my small sample size, the possibility of blissful unawareness of good design, structure and scale. There are more developers today than at any other time, and more outlets than ever for social coding, open source contributions, etc…Is it possible that a larger percentage of developers are really just coders, assembling software as a commodity? This is more passion and theory than empirical analysis…

I did some unscientific research…aka lots of Google searches…here are my observations:

1. The few relevant postings I saw about software design patterns and anti-patterns were more scholarly. They were traditional research papers written by academics and posted on ACM or IEEE. While I used to be a big ACM and IEEE reader back in the day, few of my contemporaries use them or refer to them. In fact, I haven’t read an ACM or IEEE article since 2010, which is kind of sad.

2. A lot of blog posts and even SlideShare presentations used anti-patterns in the software context to describe developer habits or bad behavior. This annoyed me the most because some of them came from folks who run in the same circles that I do. They weren’t talking about code design (good or bad), but rather behavior.

3. The one community that anecdotally had the most entries around design patterns and anti-patterns was the Scala community. That made a lot of sense to me as every Scala developer I know was a hard-core Java developer who made a run at Ruby for a project or two and decided that Play was even cooler than Rails. 

4. The MVVM community, big on BackboneJS, AngularJS, EmberJS, etc…, didn’t really write much about patterns or anti-patterns at all. There were blogs and presentations. Some were good…some were so-so. Most were about developer behavior or code efficiency. There was this one blog about BackboneJS that was ok. Nothing game-changing…nothing that would act as a slap in the face to developers to think about the efficiency of their design, their ability to scale and the cost of compute.

That last phrase, I guess, is what blows my mind (i.e., “cost of compute”). I got into the world of software performance and optimization at a time when compute was really expensive. We then saw years of compute (CPU and memory) becoming a commodity. If our code couldn’t scale, we would simply add more memory, more CPUs, more bare-metal systems, more VMs, more storage, etc…The public cloud makes that access to more compute so simple…

The compute of 2014 and beyond is metered now…or, better yet, metered again. It was metered back in the early days of timeshare computing. Today, I find myself getting out of the game of running private data centers or using colos. I buy less hardware and storage each year. My private data center footprint is the smallest it’s been in years…not because of virtualization and consolidation, but rather because I’m moving more stuff out to the public cloud.

Each month I look closely at the bill I get from Amazon and Rackspace. That meter is constantly running. Pennies add up, and you don’t realize it until it’s late in the game. It turns out a lot of that waste exists because the efficiency of our code (aka…my cost of compute) isn’t as good as it could be. We write a lot less code, but it’s not necessarily all that efficient.

I’m hoping this blog starts a conversation, not a fight. The thing I see is that innovation in the software world is at an all-time high. We do have more developers than we have ever had in our lifetime. We have more languages and frameworks today than ever before. We have more choice…more variation…more outlets. At the same time, I can’t help but think about questions like:

  • Are we producing more coders than developers because we have a supply/demand problem?
  • Are the CS grads we are producing around the world blissfully unaware of solid architectural design?
  • Do developers focus less on good system design and sustainable, long-lasting architectures meant to last for years, and place more emphasis on quick applications?
  • Has profiling become an ancient practice, done by the few developers and avoided by coders?
  • Has the accessibility of cheap compute blinded our awareness of cost?

I probably could go on for longer…