Tag Archives: performance

Interesting Review of Java 7 Hotspot Options

I came across this interesting article from JavaWorld while working with Chris on the Windows SP10 hardware comparison project. The article is a review of a presentation given by Charlie Hunt from Oracle’s Java Performance Team. Charlie also authored a book on Java performance in 2011.

Definitely worth browsing…

Our Performance Core

We do a lot of things in Performance Engineering. Some good…some not so good. Because we do so many things, at times we get distracted. Our ability to progress is hampered by our inability to remove work from our plate. We may not even want to remove the work. The work for some is so appealing that at times it becomes more interesting than the work that might be better positioned for our team. I digress…

As we finish up 2012, the 9th full year of performance engineering here at Blackboard, we have the opportunity to get back to our core with a little more rigor and a little more direction. I wanted to use this blog as a starting thread about our core. We need to re-establish our identity and remember who we are and what our team needs to be. That’s a little different from the notion of who the individual wants to be. Our team is the performance team. Now individually, many of us have passions and goals beyond performance. That’s great…heck, it’s encouraged. But we have to be true to our core and realize that our personal wants can’t come at its expense.

I’ve simplified our practice areas to four main services: Performance Design, Customer/Production Forensics, Performance Development and Benchmarking/Testing. I will cover each core area in the section below. All four of these are simply components of a larger enterprise. That enterprise I’m speaking about is Continuous Delivery. We all (beyond Performance Engineering) have to get on the same page that we need to be connected to all of the teams that make, support and operate this product.

Core Service Brief Descriptions

Performance Design: We often refer to this as SPE (Software Performance Engineering). I think we need to stop, because it’s a mistake to call it SPE; SPE is more encompassing and crosses into other service areas. The heart of this service is building the product for performance and scalability. We collaborate with development from customer need, through requirements, into design and then through development. We model, assess risk, build test artifacts, evaluate code (static and dynamic), micro-benchmark, refactor and start the cycle all over before releasing to system test. We need to view this as a service that happens before deployment.

Production Forensics: We have been doing Tier-3 escalation for years and have been doing it well, but we need to continue to improve in this area. There are two main areas where I think we can improve. The first and most obvious is a greater emphasis on root cause analysis: what was the cause of the performance defect, and why was it introduced? The second is the missing piece to our success: how can we solve the problem of why it was missed during performance design, or even during system benchmarking/testing? That leads to a follow-on effort of figuring out whether we are exposed to future risk because we are missing something.

Performance Development: We really have two development initiatives on this team, and we have to prioritize based on need. The most prominent from a need perspective is our maintenance development, specifically for patches and defects that come in as diagnosed defects. We need to be on top of our game for this work. We cannot have mistakes, and our degree of confidence in this work must be high at all times. We must put in as much testing, profiling and micro-benchmarking as possible to make sure this work resolves our customers’ issues. The second is prototype development for the sake of performance. This is new feature or platform development for the sake of performance and/or scalability. We have to be 100% confident that what we are developing makes our product better and does not expose our product to greater risks.

Benchmarking and Testing: This is our bread and butter. To many in the organization, this is one of the most important, if not the most important, exercises we perform as a group. The focus is incorporating benchmarking and testing into our Continuous Delivery pipeline (see image above). We need to fit in multiple places in the pipeline, not just one, providing both developer feedback loops and system deployment/configuration guidance.
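The micro-benchmarking step mentioned above can start very simply: time a candidate method after a warm-up phase so the JIT has had a chance to compile it. A minimal sketch follows — the method under test (`buildCsv`) and the iteration counts are hypothetical, and a real harness such as JMH is the better tool for anything we would act on:

```java
public class MicroBench {
    // Hypothetical candidate under test: building a CSV string in a loop.
    static String buildCsv(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            if (i > 0) sb.append(',');
            sb.append(i);
        }
        return sb.toString();
    }

    // Run the candidate through a warm-up phase, then report the
    // average nanoseconds per call over the measured iterations.
    static long timeNanos(int warmup, int iterations, int n) {
        for (int i = 0; i < warmup; i++) buildCsv(n);
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) buildCsv(n);
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        System.out.println("avg ns/op: " + timeNanos(10000, 10000, 64));
    }
}
```

This kind of quick check is only a smoke test; a proper harness also controls for dead-code elimination and GC pauses, which this sketch does not.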
 

Blackboard Performance Tuning and Optimization Guide

I’m hoping a few lurkers in my blog area will chirp up once they see the title of this document. Yes, we are about to undertake a new version of the Blackboard Learn Performance Tuning and Optimization Guide this fall. We have a big benchmark project running from September through November, with the intention to publish a new guide in the winter. I’m hoping this will be our last “big” guide, and that going forward we could start building more wiki-like articles, smaller in size and scope.

A dream scenario would be that we as a community put our heads together and co-author a more collaborative document that’s living and breathing. It always feels as though once we put out a PDF, the document is immediately obsolete. Hopefully that won’t be the case with this guide. We will have to see.

We are in the early outlining stages of the document right now. I’m mainly putting together a master list of platform components to focus on. The most glaring hole I see in the last guide is how little we covered Blackboard-specific topics such as Snapshot, Content Exchange and Bb-Tasks, to name a few. That’s my bad, as I organized the last document and wrote most of the content. The other big hole I’m seeing right now is Lucene. We have a few implementations of Lucene in the product that never get the appropriate care and feeding.

My ultimate goal is to get the customer community involved in writing this guide. I did a quick Twitter post a few minutes ago soliciting some early thoughts. I’m hoping to get some feedback from the community about what they want in the guide, and what they would remove. Now is everyone’s chance to have their voice heard.

Send me a DM to @seven_seconds if you are interested in participating.

BbWorld 2010 Presentations

Wow…I’m the biggest blogging slacker on the face of the earth. I do apologize to my 5 loyal readers 😉 for not posting more over the past few months. I have a ton to write…I just need to convert my posts from my internal Bb blog over to this one. I wanted to post links to my presentations from BbWorld 2010 in Orlando. It was a great conference, and I can’t wait until next year. I’m hoping to have an entire performance track next year in Las Vegas, with every intention of covering Load Testing, Code Profiling, Designing for Performance, Query Analysis, System Optimization, Product Optimization, etc. If anyone has thoughts on presentations they would like me, my team or even yourself to present, throw them my way!

DevCon Presentation: Deploying a Highly Available Blackboard Solution

Day 1 Session: Best Practices for Optimizing Your Blackboard Learn Environment

Day 2 Session: Scaling Blackboard Technology for Large Scale Distance Learning and Online Communities

BbWorld09 In the Books

I just took a nice little 3-hour nap. Boy did that feel good after a long, but very enjoyable, week with colleagues. I gave my last session this morning to a small, intimate group of fellow Performance Tribe members. I really enjoyed the last session most of all. It just seemed like everyone came together to work with each other on their performance and scalability issues. I also got to meet some great folks from the ANGEL community at Penn State.

I’ve attached the slides from the session. Hope everyone enjoys…

See you next year at BbWorld in Orlando.

Happy Reading!

Steve

Five Articles Worth Reading About Client-Side Performance

The days of focusing solely on server performance appear to be numbered. We need to build up our skills in the area of client-side processing. While I’ve made a number of posts on the subject over the past 18 months, new posts may become more of a daily or weekly pattern. There are five articles I would like the team to read. They are quick reads; four of the five should take less than 10 minutes, though the PowerPoint presentation from Yahoo might take a little longer. Abstracts below:

A Study of Ajax Performance Issues

The first article comes from the blog Direct from Web2.0. It covers six points, primarily about the competing browsers of early 2008. Nothing is captured about Google Chrome, as that browser was not yet available at the time. It’s definitely a good primer to read.

  • Array Is Slow on All Browsers
  • HTML DOM Operation Performance in General
  • Calculating Computed Box Model and Computed Style
  • FireFox Specific Performance Issues
  • IE Specific Performance Issues
  • Safari Specific Performance Issues

Optimizing Page Load Time

The second article is about optimizing page load time in web applications. The author, Aaron Hopkins, covers much of the material the YSlow team has written about over the past two years. He offers a very comprehensive list of tips, plus about a half-dozen links to comparative information on the topic of client-side performance. This is definitely worth reading, links and all.

HTTP Caching and Cache Busting Presentation from Yahoo

This third article takes the prize for being the most comprehensive of the group. It’s really a presentation by Michael Radwin of Yahoo from the Apache Conference back in 2005. For those of you who want to know more about HTTP caching, this is the one to study.

Caching Tutorial

This is a somewhat obscure article from Mark Nottingham. What I like about this posting is how he simplifies the topic of web caching. He doesn’t make the sophisticated reader feel bored or the unknowing reader feel stupid. He simply presents the material in clear, easy-to-understand terms.
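To make the freshness rules these caching articles describe concrete, here is a minimal sketch of the core check a cache performs: a stored response is fresh while its current age is below the max-age from its Cache-Control header. The parsing here is deliberately simplified (it ignores directives such as s-maxage and the Expires fallback):

```java
public class CacheFreshness {
    // Extract max-age (in seconds) from a Cache-Control header value,
    // e.g. "public, max-age=3600"; returns -1 if the directive is absent.
    static long maxAgeSeconds(String cacheControl) {
        for (String directive : cacheControl.split(",")) {
            String d = directive.trim();
            if (d.startsWith("max-age=")) {
                return Long.parseLong(d.substring("max-age=".length()));
            }
        }
        return -1;
    }

    // A cached response is fresh while its current age is below max-age.
    static boolean isFresh(String cacheControl, long ageSeconds) {
        long maxAge = maxAgeSeconds(cacheControl);
        return maxAge >= 0 && ageSeconds < maxAge;
    }

    public static void main(String[] args) {
        // A response cached 10 minutes ago under a one-hour max-age is
        // still fresh; the same response two hours later is stale.
        System.out.println(isFresh("public, max-age=3600", 600));  // true
        System.out.println(isFresh("public, max-age=3600", 7200)); // false
    }
}
```

A real cache computes the current age from the Date, Age, and response-time values rather than taking it as a parameter, but the freshness comparison itself is exactly this simple.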

Circumventing Browser Connection Limits for Fun and Profit

This fifth article comes from Ryan Breen at Gomez, the author of Ajax Performance. Breen’s point is that not all browsers behave the same: most load resources in a synchronous fashion, which introduces latency when interacting with a client-rich page. These browsers can be coaxed into doing parallel operations, but that requires configuration changes. The author makes a great case for why these changes can really speed up performance.
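The trick Breen describes amounts to spreading assets across extra hostnames so the browser opens more parallel connections, since its per-hostname connection limit applies to each name separately. A hypothetical sketch of the server-side piece — the shard hostnames are made up, and each asset must map to the same hostname every time so it stays cacheable:

```java
public class AssetSharder {
    // Hypothetical shard hostnames; in a real deployment these would be
    // DNS aliases all pointing at the same static content.
    static final String[] SHARDS = {
        "static1.example.com", "static2.example.com"
    };

    // Assign each asset path to a shard deterministically by hashing the
    // path, so repeat page views hit the same hostname and the browser
    // cache is not defeated.
    static String shardedUrl(String path) {
        int shard = (path.hashCode() & 0x7fffffff) % SHARDS.length;
        return "http://" + SHARDS[shard] + path;
    }

    public static void main(String[] args) {
        System.out.println(shardedUrl("/img/logo.png"));
        System.out.println(shardedUrl("/css/site.css"));
    }
}
```

Two to four shards was the usual guidance in this era; beyond that, the extra DNS lookups and connection setup start to cost more than the added parallelism buys.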

Old Blog Post: SQL Server 2005 Performance Dashboard Reports

Originally Posted on December 17, 2007

Let me start off by saying this is by no means new. Microsoft released the SQL Server 2005 Performance Dashboard Reports back in early March 2007. It took me until now to stumble across the tool, mainly because I’ve been out of the thick of things from a benchmark perspective. I spent the better part of the day playing with the reports. The tool is quite impressive and easy to configure.

The Performance Dashboard Reports are targeted toward SQL Server administrators and other users; the objective of the report set is to act as both a health monitoring and diagnostic tool. Although it relies upon Reporting Services definition files (.rdl), Reporting Services does not need to be installed to use the Performance Dashboard Reports. This custom report set relies upon SQL Server’s dynamic management views (DMVs) as a data source, providing the wealth of data the dynamic management views contain while insulating viewers of the information from the views and the structures underlying them. No additional sources, data capture or tracing are required to access and use this storehouse of performance information. Other obvious benefits of using these prefabricated views are the constant availability of the information they contain and their inexpensive nature (from the tandem perspective of collection and querying) as a source of server monitoring.

The report set comes with a primary dashboard report file, as we shall see in the hands-on installation procedure that follows. This report file is loaded directly as a custom report in SQL Server Management Studio. The other Performance Dashboard Reports are accessed via the Reporting Services drill-through mechanism, each path of which is initially entered when the user clicks a navigation link on the main page. The linkages are pre-constructed, and, once the primary dashboard report is loaded as a Custom Report in Management Studio, the rest of the reports work “out of the box” automatically, without any additional setup.

You have to start by installing the add-on, which takes about 20 seconds. Once you have run the installer, go to the directory in which the files were placed. There you will find a SQL script called setup.sql. Run this against the SQL Server instance you want to report on. The instructions are a little misleading: they make it seem as though you have to run the script for every schema in your 2005 instance. That’s not the case; it’s once for every named instance you have installed. From the same directory, open the performance_dashboard_main.rdl file. It will render as an XML file; close it, and you are now ready to play with the Dashboard. To open the Dashboard, open SQL Server Management Studio and right-click on the named instance. From here, select Reports followed by Custom Reports. If you navigate to your install directory, you will see the performance_dashboard_main.rdl file again. Open this and voilà, you have your report.

Check out this article for screen shots.

Start with this article from William Pearson. He breaks down each and every aspect of the report. Another article from Brad McGehee on SQL-Server-Performance.com is not as descriptive as the first article, but is pretty good. While I was on the SQL-Server-Performance.com site I came across other links worth taking a look at.
Other Interesting Links

* SQL Server 2005 Waits and Queues
* DBCC SHOWCONTIG Improvements in SQL Server 2005 and comparisons to SQL Server 2000
* Troubleshooting Performance Problems in SQL Server 2005
* Script Repository: SQL Server 2005
* Top 10 Hidden Gems in SQL Server 2005
* Top SQL Server 2005 Performance Issues for OLTP Applications
* Storage Top 10 Best Practices