DevOps Enterprise 2014…A Conference Review of Sorts

I had the chance to attend a new conference on the DevOps circuit called DevOps Enterprise earlier in the month. For those of you who did not have a chance to attend, it was a conference co-hosted by one of the greatest Tech Connectors to have ever walked this Earth, Gene Kim, his colleagues at IT Revolution Press and the main sponsor, Electric Cloud. The timing of the conference was spectacular, as it happened to fall around the five-year anniversary of the DevOps movement.

https://www.flickr.com/photos/bethaniehines/15478309207/in/set-72157648631801518

Source: Flickr

I first learned about the conference from a friend and colleague in the Washington, DC area named Jeff Gallimore, a longtime friend of Gene Kim and a collaborator on the DevOps Defense Audit Toolkit. Jeff had a speaking role at the conference with a few auditors and technologists, acting as the moderator on a panel about the Toolkit. Simon Storm, Josh Corman and Byron Miller participated in the panel and have been part of the Toolkit project for quite some time. I won’t go into too much about the Toolkit, other than to suggest reading the blog post and joining the Google Group if you have an interest in participating.

Pre-Conference Perspective

When I first thought about going to the conference, I was a little wary of what the topics would be like and who the attendees would be. The thought of DevOps concepts and culture inside the enterprise wasn’t something I naturally aligned with. As a long-time Velocity Conference attendee, I was concerned that this conference would miss the mark and not teach me something new about the movement or introduce me to new ideas and practices. I personally watched the DevOps movement come out of Velocity, long before the term DevOps was even coined, so any other conference built around DevOps culture would have to be top notch. Since Gene was running it, I gave him the benefit of the doubt.

Then I thought about my new role as VP of Engineering at Contrast, specifically about who our customers are. The bulk of our customers are not Silicon Valley startups or technology plays. Rather, they are enterprise players. Naturally, I thought it would be a good idea to see how the “Enterprise” was thinking about the DevOps movement and how they associated it with building and operating software systems.

Thoughts on the Conference

I wrote a ton of crazy notes about each of the sessions from Day 1. I guess you can call me the “King of Horses” then. There was one session that stood out in my mind more than any other, and that’s the Target presentation. Ross Clanton (Ops) and Heather Mickman (Dev) presented on how the DevOps culture was brought into the organization at scale. It was how Dev + Ops found a home together at a large Enterprise player with years of legacy code and systems. You can also see the slides here.

Target is/was/will always be a big horse. They are doing the things that make them unicorn-like to their competitors and their colleagues in other companies. What are those things, you might ask? Well, first, they are breeding a culture of transparency through a few core means. The first is making the move to collaborative development via Git using Pull Requests. Everyone uses Git on both the Dev and Ops side. They share repos and access across teams. Second, not only are they big players in Minneapolis’ DevOps Days, but they are running their own internal DevOps Days so that everyone in the organization can participate. Third, they are sharing with the outside world little by little via their Github blog.

The most impressive thing I walked away from the talk with was the notion of Flash Builds. As they described it, Flash Builds = flash mob + scrum + hackathon = awesome: an 8-hour day with two 4-hour sprints (including retrospectives, planning, etc.). They promised a blog post about the idea. Since it’s on Twitter and the Internet, it has to happen ;). From their description, it sounds like Flash Builds are mini-hackday events that accomplish a full development and operations lifecycle. What this team showed as well as anyone is the need to break down silos and become one culture.

DevOps: More than Tooling

One of the pleasant surprises of the conference was the alignment of DevOps and Continuous Integration. I have to admit that I was totally expecting a ton of talks about the token DevOps tool sets for automation (Chef, Puppet, Ansible, Salt, etc.). I was expecting a ton of Docker, Vagrant, AWS Services and of course monitoring/reporting (Graphite, StatsD, CollectD, New Relic, etc.). The tooling was called out in a few presentations, but it wasn’t three days of “Tool Overkill” and presentations about those tools.

I definitely felt like the enterprise players put a great amount of emphasis on build pipelines, feedback loops to development teams and collaborative deployment architectures involving development and operations working together. There were more references to Jenkins-CI than any other tool, followed by Git. In my mind, that tells me the crowd was focused on the development delivery pipeline, striving for continuous development and feedback.

Unicorns, Horses and Ninjas

One of the main themes of the conference was the notion of horses and unicorns. The companies and players associated with the early DevOps movement were, and are, often referred to as Unicorns, as their work is considered magical and mystical. This has been echoed in papers, blogs and countless slides at DevOps Days, Velocity, FlowCon, OSCON and other conferences. It was fitting that Gene used this theme, as I believe he calls out Project Unicorn in The Phoenix Project, a must-read for any and all.

During the conference I found myself writing down some notes on what I thought a Unicorn looked like in the movement, as well as a Horse so I could put a picture to it. I even added a third persona called a DevOps Ninja. I’ll briefly post my comments on the three below. Note, I’m not intending to be snarky, but it reads pretty snarky.

Unicorns are companies that simply get it. They have broken down the culture barriers. There are no silos. Everyone practices Continuous Delivery. They post all of the awesome tools that they built in-house on Github. When they don’t build their own tools, they make time for Pull Requests on the latest and greatest projects on Github. They attend all of the latest conferences as presenters and sometimes as sponsors. Their favorite part about conferences is the “Hallway Track”. They never sleep. It’s almost like they are vampire unicorns or something. If they aren’t coding on a plane, attending a conference or participating in a hack-event for a declining population of Siamese albino whales, then they are guest speakers on podcasts. When do they have time to sleep or, better yet…play golf?

https://drawception.com/pub/panels/2012/5-6/D59KOb3YZ7-10.png

What exactly are horses then? I guess it’s fair to say that they come to the movement well after it has gained steam, once it has all but become a fad. Why? Because they have been heads-down on some important projects for the last few years and haven’t had time to pick their heads up. They manage their own source code (SVN, TFS or Perforce). They buy enterprise monitoring tools from IBM, HP and CA. They run their own data centers. They choose Perl and shell as their scripting languages of choice. They go to conferences, but tirelessly take notes and even pictures of the slides on their smartphones and tablets. They don’t like to get up from their seats at the conference for fear of losing their spot.

http://image13.spreadshirt.com/image-server/v1/compositions/1001659506/views/1,,,appearanceId=231/horse---nerd-Women-s-T-Shirts.jpg

I decided to add a 3rd persona to the mix. Let’s call them DevOps Ninjas. Who are these ninjas we speak of? Well, they are the guys from Docker. Ah…just kidding. Well, they might be from Docker, but essentially these are the guys and gals who skip the conference altogether and have an “Un-Conference” or sponsor secret meetups. They are full-stack engineers who code-automate-deploy-fix-redeploy. They work purely in the cloud. They drift from Starbucks to Starbucks daily.

http://kyleart.com/wp-content/uploads/2008/07/07_ninjastarbucks.jpg

Should We Aspire to Be Unicorns or DevOps Ninjas?

I think it’s fair to say that there are no such things as unicorns or ninjas. They really don’t exist. I know a lot of folks who work for companies that are considered both, and between you, me and this blog post, they all have issues. Don’t get me wrong…they love their jobs. They feel empowered to make decisions. They collaborate a ton. But they don’t live in bliss. Heck, watch an episode of My Little Pony and you will see that even the unicorn ponies on that show have drama. (Note: I am the father of two daughters ages 8 and 5, so we have My Little Pony on a queue at our house.)

http://www.hdwallpapersos.com/wp-content/uploads/2014/08/My-Little-Pony-Wallpaper-Photos.png

I guess it’s fair to say that all companies can be compared to horses. You have ponies, jumpers, thoroughbreds, Clydesdales, etc. Companies can be any or all at the same time depending on their attitude and culture. That was probably the best takeaway from the conference, in my opinion. It makes me think of a great post by John Willis back in 2010 called What DevOps Means to Me. In the post, Willis references C.A.M.S. (Culture…Automation…Measurement…Sharing), which many in the community identify as the four pillars of DevOps. I think the conference presenters nailed the C and S. Nearly every presentation focused on Culture and Sharing. Throw in the word Collaboration and you have a more robust picture of the mood and attitude of the presentations. Collaboration was truly being called out between the traditional roles of Dev and Ops.

I think the Enterprise represents the line of demarcation between Development and Operations so well. For years, you often had the development folks working in cubicles in the bowels of the corporate headquarters, while the Operations folks worked in some NOC or Data Center miles, sometimes many states, away. Most of the time the developers didn’t know the operations folks and the operations folks didn’t know the developers. What the conference highlighted, and the DevOps movement highlights so well, is that the barriers have to be removed. The two cultures have to become one. These groups need to be together in order to breed new ideas and perspectives together.

Would I Go Back in 2015?

Absolutely…but I want to go back as a presenter. I felt like there were gaps in the stories I heard. The biggest gap is that Performance and Security continue to be called out as important to the Continuous Delivery pipeline, but it’s only lip service in my mind. The only folks making strides with Performance and Security continue to be the Operations folks. Incorporating Performance and Security into the build pipeline is not happening with the diligence and intensity that it should. These are by far the most important non-functional requirements of any system or application. Yet, they continue to be treated as second-class problems by development teams.


The Power of Rundeck

A big part of the DevOps movement is the passion and commitment to “automate everything” and provide as much self-service as humanly possible. I’m a big believer in automation…not because I’m lazy, but rather because I have a desire to make all things repeatable, reliable and robust. I call those the “Three R’s” and they are a huge part of why I became a big believer in a 4th “R” which is called Rundeck. Note, I’m not the author of Rundeck. The awesome guys at SimplifyOps were the authors. I’m just a user, fan and admirer of cool, easy to use technology. Below is a quick passage about Rundeck…

Rundeck is an open-source job scheduler and runbook automation system for automating routine processes across development and production environments. It combines task scheduling, multi-node command execution and workflow orchestration, and it logs everything that happens. An access-control policy governs who can execute actions across nodes via the configured “node executor” (the default for Unix uses SSH), and no additional remote software needs to be installed on the nodes. Jobs and plugins can be written in scripting languages or Java. The workflow system can be extended by creating custom step plugins to interface with external tools and services.

Wikipedia Comparison of Open Source Automation Tools

I’m a big fan of Rundeck for a number of reasons. My first reason is pretty straightforward: it’s a simple web application that provides the basic controls and workflow for self-service. The simple web GUI is so easy to use that anyone can understand it with little training. My second reason is that it can make use of pretty much any automation/scripting framework on the market. Third, it gives developers, operations engineers or even support staff a simple workflow for doing work on a server without ever logging into the server. Fourth, and certainly not least, it provides an audit and tracking system. There are other key things such as scheduling and reporting, which are super easy-to-use features to enjoy as well.

Source: Rundeck.org

Long before the SimplifyOps guys built Rundeck, my old team at Blackboard built an automation engine we called Galileo, written in Groovy/Grails. It was a lot like Rundeck, but not as simple to contribute to and extend. It served a great purpose during its time and helped us achieve so many of the needs I listed above. However, it required a listener on each destination client. Rundeck works without an installation on the client system. All that’s needed is an SSH key, or simply passing login credentials for a trusted user within the script.

Crowd-Sourcing Development

One of the cool things that the SimplifyOps guys do is crowd-source their development via Trello, which is one of the best kanban boards available (for freemium btw) on the market. Their board is public for everyone to follow, vote and even contribute.

Making Time for a Side Project Using a Commitment Device

My team is getting used to my style and attitude about work. One core value I believe in is making time for other work (relevant to one’s career) outside of the normal velocity of a sprint to accomplish additional learning. If you have a chance, take a look at my presentation about PTOn, which is about applying a commitment device (i.e., scheduling Paid Time On) to ensure that the work is accounted for and is not disruptive to a team’s work velocity.

A really good friend of mine (David Hafley) sent me this article today, which is directly in line with my presentation about PTOn (Paid Time On). Teams (and individuals) need time to work through a work problem (project of role) or a problem that could yield incredible inspiration (project of passion). The challenge that I see with software development (engineering teams in general) is that teams schedule every ounce of time imaginable. If the team has 12 months in a year and they follow a one-month velocity, then they have 12 units. The same applies to a two-week velocity, in which the team works off a 26-unit schedule. What I’m really getting at is that software teams tend to build utilization models that account only for work and vacation. Occasionally, these models account for training or an off-site. You get my point: teams tend to over-schedule their team members like they are a bunch of Carbon Based Units.


If you read the article closely, you will see it emphasizes creating “personal time”, which I personally find difficult. I have a wife, kids, hobbies, etc. I will agree that finding personal time is important, but in the same vein, I would suggest that in the 40+ hours we spend at work (some 60+), we need to “find work time” for learning.

 

Balancing Testing versus Measurement

One of the advantages of having a SaaS application is the ability to capture true production telemetry. This telemetry consists of functional and non-functional (performance and security) data points. These data points can and should be used to make our development team more informed about the quality of our product. This by no means implies that live production metrics should be leveraged 100% in lieu of testing. There should be a balance of testing and measurement.


I covered my testing philosophy in one of my earliest blogs, in which I stressed and advocated the need for robust build/test pipelines complete with quality inspection (unit, static, integration and acceptance). This pipeline is nothing original or unique that I’m proposing. The pipeline is a component of Continuous Integration in which developers commit early and often. The pipeline grows in complexity and maturity in an iterative fashion as the team’s commits become a robust product or module ready for deployment. Consider this early phase more of an incubation phase in which the product is nothing more than executable code, but not deployable or usable. When code is being incubated, teams should place more emphasis on testing and evaluation. This testing is more Unit and API, not acceptance testing.
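The incubation-phase pipeline I’m describing can be sketched as a toy in a few lines. The stage names mirror the quality-inspection steps above; the checks themselves are stand-in predicates on a fake “build”, not a real build system.

```python
# A toy version of the build/test pipeline: quality-inspection stages run
# in order, and the first failure stops the run. Stage names mirror the
# post; the checks are stand-in predicates on a fake "build" dict.

def unit_tests(build):        return build.get("units_pass", False)
def static_analysis(build):   return build.get("static_clean", False)
def integration_tests(build): return build.get("integration_pass", False)
def acceptance_tests(build):  return build.get("acceptance_pass", False)

PIPELINE = [
    ("unit", unit_tests),
    ("static", static_analysis),
    ("integration", integration_tests),
    ("acceptance", acceptance_tests),
]

def run_pipeline(build):
    """Run stages in order; return (deployable?, stages that actually ran)."""
    ran = []
    for name, check in PIPELINE:
        ran.append(name)
        if not check(build):
            return False, ran   # still incubating: executable, not deployable
    return True, ran

print(run_pipeline({"units_pass": True, "static_clean": False}))
```

The point of the toy is the ordering: cheap inspections gate the expensive ones, and a build only earns “deployable” by surviving the whole chain.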


If the product is ready for acceptance testing, then the product is ready for a deployment (synthetic or production). If the product is deployed, then it should be measured with deep telemetry (dynamic analysis) such as RUM (Real User Measurement), APM (Application Performance Management) and ASM (Application Security Management). Artifacts such as log files and live telemetry from component systems (queuing systems, ephemeral caches, RDBMS and non-relational stores) should be captured and used. Why? Because the data is there. Why ignore passive data that can be captured, analyzed and organized in an automated fashion?

I can’t really explain why the data often gets ignored. It simply does because so many development organizations focus on the discrete activities of testing. They often fail to capture the more meaningful data that comes from embedded telemetry into the development process. That same telemetry data that can be captured in the testing process can be captured from live production systems. It’s like a golden egg that gets laid every day. The team has to take advantage of this goldmine of data.


I had the chance to talk with Badri Sridharan from LinkedIn about a year ago. Badri and I have both run Performance Engineering practices in our careers. We were exchanging perspectives on the current state and future of Performance Engineering. During the call, Badri shared insight into a system called EKG that the Development and Operations teams introduced at LinkedIn. The blog was written by the Operations team, so it shows a lot of infrastructure data points visually. If you look toward the bottom of the blog, you will see the reference to exception counts and a “variety of other metrics”. Those other metrics, as Badri explained, are functional verification data points. Teams at LinkedIn can get live production data for their Canary and A/B deployments before they promote code throughout the whole system.

EKG compares exception counts, network usage, CPU usage, GC performance, service call fanout, and a variety of other metrics between the canary and the control groups, helping to quickly identify any potential issues in the new code.
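A hedged sketch of what that canary-versus-control comparison might look like in code. The metric names and the 20% tolerance are invented for illustration, and “higher is worse” is assumed for every metric here:

```python
# An EKG-style canary check, sketched. Metric names and the 20%
# tolerance are invented for illustration; "higher is worse" is
# assumed for every metric in this toy.

def flag_regressions(canary, control, tolerance=0.20):
    """Return metrics where the canary is worse than control by > tolerance."""
    flagged = []
    for metric, base in control.items():
        value = canary.get(metric)
        if value is None or base == 0:
            continue                      # nothing to compare against
        if (value - base) / base > tolerance:
            flagged.append(metric)
    return sorted(flagged)

control = {"exceptions": 10, "cpu_pct": 40, "gc_pause_ms": 50}
canary = {"exceptions": 25, "cpu_pct": 42, "gc_pause_ms": 90}
print(flag_regressions(canary, control))  # ['exceptions', 'gc_pause_ms']
```

Even a crude diff like this turns passive production telemetry into an automated go/no-go signal for promoting a canary.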

I’m still learning what telemetry exists in our systems right now. I’m eager to hear from all of our teams about what data is captured, where it is stored, how it’s made actionable and how the data is brought back into the development process. 

Why I Blog

From time to time people ask me why I blog. The best way for me to answer this is to give you the quick elevator pitch, as well as refer you to a passage from a blog I wrote back in 2008 below. I started blogging internally and then externally when I realized that there was a potential audience of listeners. It wasn’t just about being heard. When I say listeners, I mean people who were curious about my work, my team’s work or the things that we as a team came across. 

In the early days I used my blog to tell a story about a forensic exercise, a tool evaluation, an idea I had or even some deep intellectual stuff. I wanted a quick and easy way to document my own experiences in a scratchpad. I was really hoping that by me blogging, it would become contagious and others on the team would start blogging.

I was trying to break a bad habit in my engineers. I noticed my engineers treated knowledge sharing as the final exercise in a project. It was kind of like their code commit patterns. Back in the early 2000s, the developers I worked with were really unpredictable in committing code. We would have month-long projects and often we would see commits 1x or 2x a week (if that) and then a couple of big commits at the end of the project. Documentation would come in the same cadence. Maybe we would see a TOC early in the project. Then all of the content would miraculously show up a week or two after the final commit (if we were lucky). I constantly felt in the dark about our progress and issues. The only time I really heard from my engineers was when they were about to miss a deadline and needed an extension…or if they wanted to share a success. What I really wanted was for my engineers to show their work as they went along. I wanted their work to be more transparent. Basically, I wanted them to develop some new good habits.

What I found quickly was that blogging was contagious. Nearly every member of my team took to blogging. Eventually they took to daily commits (some even more extreme…YEAH!!!). At Blackboard, we were considered not only the most transparent team, but often considered the most innovative. Many of our blogs were about experimentation and exploration with new technologies. Because we also shared our thoughts, processes and workflows (we just put them out there for all of Bb to criticize or commend), many teams viewed us as pioneers in thinking. 

As I mentioned earlier, I posted a blog in 2008 about Transparency of Work. I’ve included a passage below from that entry. My thoughts in 2008 haven’t really changed all that much in 6 years. Take a look at the entry. Hopefully, you will start blogging as well.

Old Blog Post

Seven Habits of a Highly Effective Performance Engineer

This is really an extension of #3 Share Your Experience. For this point, I want to share a quick story. In high school, I had a Math teacher named Captain McIsaac. My high school was originally a feeder school for the Naval Academy, Coast Guard Academy and the Merchant Marine. So we had a lot of older teachers who used to be former Navy. Well anyways…Old Cap McIsaac was an interesting guy. He looked like Ted Kennedy’s twin and probably scored the same on most of his breathalyzer tests. He was a terrible Math teacher. Most of us thought he was awesome because he would give us the answers to the questions on our exams during an exam. We never had to show our work. That’s great for kids who cheat off each other. I have to admit…looking back the guy was terrible. He didn’t hold us accountable for our work. It showed in all of my Math classes after Cap’s class. I did well because I love Math, but it takes an awfully long time to break bad habits. You can pick-up a bad habit in seconds, but it takes weeks…sometimes years to break a bad habit.

There’s an important reason for showing your work…actually there are multiple. The number one reason is so that you personally can spend the time reviewing what you did and explaining it to your peers in a visual manner. Don’t worry if you change your ideas…you just write new blogs. The second reason is that we are a global team. Everyone on the team should get the opportunity to learn from other members of the team. It’s a good way to get feedback and share work. The third reason, which is sadly a bit lame is that our days become so busy, that sometimes we need to be able to comment on a blog rather then having a conversation or email thread.

Code is a Team Asset and Not Personal Property

 

“Code is a team asset, not personal property. No programmer should ever be allowed to keep their code private.”

 

I just finished this book this morning. I’ve been reading it the past 5 rides into the office. It’s a quick read and one any manager (new or experienced) should read. If you read my entry about transparency from earlier in the week, you probably get a sense that I’m a firm believer in openness and sharing. High-performing teams more often than not are very open and sharing. They put their thoughts out there in person, as well as in written form. They expose their artifacts, whether it be code or content, to be viewed, critiqued or commended on a continuous basis (daily being the longest cadence). 

Software teams that want to practice Continuous Integration have to think like Osherove suggests about their code. Developers have to be willing to commit often, knowing that the code they produce for their product or project is not their own art to keep protected on their laptop or even a personal Github account (I’ve seen this happen over and over, mind you). If they are contributing code to a product or project, then they have to be willing to share and integrate their code as frequently as humanly possible.

Step 1 is changing perspective. I may have written the code, but the whole team owns it. If for some reason I won the lottery and left the company, the team is still accountable for the quality and functionality of that code. Step 2 is about creating the habit. The habit is to commit early and often. A commitment device that I would recommend is to set up a CI server like Jenkins or Bamboo. Set up a job that polls your source code tree every 10s to see if something new has been checked in. Have that job do a simple compile. Eventually, daisy chain steps like building, unit tests, static analysis, integration tests and eventually acceptance tests. Step 3 is about sharing your CI-server dashboards constantly in your team space and at the forefront of your morning stand-ups.
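The Step 2 habit loop can be sketched in miniature. In real life a CI server like Jenkins or Bamboo does the polling for you; here the repo is faked as a dict so the loop itself is visible:

```python
# The Step 2 habit in miniature: notice a new revision, run the compile
# step. A real CI server (Jenkins, Bamboo) does this for you; the repo
# here is faked as a dict so the loop is visible.

def poll_and_build(repo, last_seen, compile_fn):
    """If HEAD moved since last_seen, run the compile step; return HEAD."""
    head = repo["head"]
    if head != last_seen:
        compile_fn(head)          # first link in the daisy chain
    return head

builds = []
repo = {"head": "abc123"}
last = poll_and_build(repo, None, builds.append)   # new commit -> build runs
last = poll_and_build(repo, last, builds.append)   # nothing new -> no build
print(builds)  # ['abc123']
```

Swap `compile_fn` for the daisy chain of build, unit tests, static analysis and so on, and the commitment device is complete: every commit triggers the whole inspection chain.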

Give it a try…See what happens. 

Transparency…Training…Setting Expectations

I’ve been reading The Hard Thing About Hard Things by Ben Horowitz the past few days. Halfway through, there is an interesting chapter called “Why Startups Should Train Their People”. The chapter is essentially a replay of a blog Horowitz wrote called Good Product Manager/Bad Product Manager. You can see the story here. Personally, I think the chapter should be renamed “Why All Companies Should Train Their People…Continuously!”

Thoughts About His Blog First…

As technologists, I think we take for granted the training of our engineers and scientists. We see training as an ‘exercise’ that is done during the initial phase of on-boarding an employee to a team or a new job. Rarely is it seen as a continuous exercise. Most engineering organizations miss it big time during the on-boarding phase as well. They may spend a fair amount of time training new hires on corporate policies, processes and norms, but only a little time is spent training the engineer to get going, and mainly from an access and peripherals perspective.

Most technical managers take for granted that training is something that should be part of a continuous exercise. Just because a software developer may have years of working with an IDE (Integrated Development Environment), a source code repository like Git or SVN and a defect tracking system like JIRA or TFS, doesn’t necessarily mean that they are good to go with simply a wiki document on how to get started. The good technical managers go through the experience together with their developers to make sure they are on the right course and that the little intricacies (rarely documented) are small bumps in the road to get started and be productive.

Training is more than just getting a development environment up and running. It’s about setting a standard and defining expectations around performance and commitment to one’s craft. It’s about sharing the norms of the team culture, demonstrating them first hand and then working as a team to highlight mastery or achievement of those norms. For example, if the norm of the team is to commit code daily and the value is transparency of work and a dedication to a quality pipeline, then both the norm and the value should be taught to the new team member. I call that out because teams often bring on new developers with several norms in place that are poorly communicated to the new team member.

I remember that when I was more focused on Performance Engineering, the first thing I had a new member of my team, or a new software engineer in our development organization, do was read my blog on the habits good performance engineers exhibit. It was only the start of training for being on my team. I felt it was my responsibility to train all of my managers. I didn’t train all of the engineers, but every manager had to spend a fair amount of time with me so that we were really clear on expectations, as well as setting the standard that I wanted my managers to uphold.

This isn’t a one-time thing either. I felt it was important that I continuously learn and share my experiences with my team. For example, we value good engineers not just because they write elegant, reusable code, fix bugs in a timely manner and share their work. We consider them good because they’re constantly evolving and learning through experiences outside of their daily assignments. They are working on Open Source projects, learning new languages and collaborating with industry peers. We call them good because they are growing. The same holds true for good managers. They don’t just do a set of tasks well and repeat them. They learn about new things (technology, practices, processes, etc.), then they experiment, share and incorporate.

About the Article

I think anyone can replace the role of “Product Manager” with any job they want to insert. It could be “Budget Analyst” or “Software Architect” or “Release Engineer”. It really doesn’t matter from my perspective. What matters is the forethought about the beliefs Horowitz feels make up a good versus bad product manager through his own ideological lens. I don’t necessarily agree with every statement that Horowitz writes, but I do agree that expectations should be set. Managers and team members cannot just share the “expected” norms with a new team member without sharing the norms that detract from the team’s culture and success in the role. Being able to share both positive and negative norms is critical to establishing a stage of transparency for the team.

Ben Horowitz: Good Product Manager…Bad Product Manager

Good Product Manager/Bad Product Manager

Courtesy of Ben Horowitz

Good product managers know the market, the product, the product line and
the competition extremely well and operate from a strong basis of
knowledge and confidence. A good product manager is the CEO of the
product.  A good product manager takes full responsibility and measures
themselves in terms of the success of the product. They are responsible
for right product/right time and all that entails. A good product
manager knows the context going in (the company, our revenue funding,
competition, etc.), and they take responsibility for devising and
executing a winning plan (no excuses).

Bad product managers have lots of excuses. Not enough funding, the
engineering manager is an idiot, Microsoft has 10 times as many engineers
working on it, I'm overworked, I don't get enough direction. Barksdale
doesn't make these kinds of excuses and neither should the CEO of a
product.

Good product managers don't get all of their time sucked up by the
various organizations that must work together to deliver right product
right time. They don't take all the product team minutes, they don't
project manage the various functions, they are not gophers for
engineering. They are not part of the product team; they manage the
product team. Engineering teams don't consider Good Product Managers a
"marketing resource." Good product managers are the marketing counterpart
of the engineering manager. Good product managers crisply define the
target, the "what" (as opposed to the how) and manage the delivery of the
"what." Bad product managers feel best about themselves when they figure
out "how". Good product managers communicate crisply to engineering in
writing as well as verbally. Good product managers don't give direction
informally. Good product managers gather information informally.

Good product managers create leveragable collateral, FAQs, presentations,
white papers. Bad product managers complain that they spend all day
answering questions for the sales force and are swamped. Good product
managers anticipate the serious product flaws and build real solutions.
Bad product managers put out fires all day. Good product managers take
written positions on important issues (competitive silver bullets, tough
architectural choices, tough product decisions, markets to attack or
yield). Bad product managers voice their opinion verbally and lament that
the "powers that be" won't let it happen. Once bad product managers fail,
they point out that they predicted they would fail.

Good product managers focus the team on revenue and customers. Bad
product managers focus team on how many features Microsoft is building.
Good product managers define good products that can be executed with a
strong effort. Bad product managers define good products that can't be
executed or let engineering build whatever they want (i.e. solve the
hardest problem).

Good product managers think in terms of delivering superior value to the
market place during inbound planning and achieving market share and
revenue goals during outbound. Bad product managers get very confused
about the differences amongst delivering value, matching competitive
features, pricing, and ubiquity. Good product managers decompose
problems. Bad product managers combine all problems into one.

Good product managers think about the story they want written by the
press. Bad product managers think about covering every feature and being
really technically accurate with the press. Good product managers ask the
press questions. Bad product managers answer any press question. Good
product managers assume press and analyst people are really smart. Bad
product managers assume that press and analysts are dumb because they
don't understand the difference between "push" and "simulated push."

Good product managers err on the side of clarity vs. explaining the
obvious. Bad product managers never explain the obvious. Good product
managers define their job and their success. Bad product managers
constantly want to be told what to do.

Good product managers send their status reports in on time every week,
because they are disciplined. Bad product managers forget to send in
their status reports on time, because they don't value discipline.