
Velocity Workshop: Complete Web Monitoring

Information About the Session

Metrics 101: What to Measure on Your Website

Sean Power (Watching Websites)
9:00am Tuesday, 06/22/2010
Web Performance Ballroom AB

Please note: this is a workshop, for which there is an additional fee required to attend.
This session will help you build a complete web monitoring strategy. We’ll cover the many different metrics you can collect, from latency and uptime to usability and navigation – and show you how to tie them to the goals of your web business.

Taught by the co-authors of O’Reilly’s Complete Web Monitoring, this packed workshop will look at every part of your online measurement strategy, with a particular focus on what web operators need to know. We’ll cover:

  • The elements of web latency
  • Strategies for collecting end user experience
  • How performance and availability fit into the rest of the monitoring picture
  • Linking performance to business metrics like time-on-site, conversion rates, and engagement
  • Measuring sites you rely on, but don’t control
  • How to roll up measurement data to share it with other stakeholders

About the Speaker

Watching Websites
Sean Power is a consultant, analyst, author, and speaker. He is the co-founder of Watching Websites, a boutique consulting firm focused on early-stage startups, products, and non-profits as they emerge and mature in their niches. He has built professional services organizations and traveled across North America delivering engagements to Fortune 1000 companies. He helps executives understand their competitive landscape and the future of their industry. He did the technical editing for Troubleshooting Linux Firewalls for Addison-Wesley and co-authored Complete Web Monitoring for O’Reilly Media with Alistair Croll.

Sean has first-hand experience creating and implementing social computing strategies with larger companies like MTV and smaller startups like Akoha. He is active in the social computing space, using Twitter and blogs as his communication platforms of choice. He often speaks on product acceleration, measurement, and social computing in clinics, workshops, presentations, and one-on-one training.

Notes on the Session

Steve Souders kicked off the session. He mentioned that the conference finally sold out. It’s quite amazing how many people are here for the workshop; it feels as though there are 2x more attendees this year than in the past. Originally, Alistair Croll, formerly of Coradiant, was going to give the presentation, but he bailed, so his co-author gave the session instead. The goal of this session is to really understand the basics of web monitoring. Apparently Sean is a former Coradiant guy as well; he ran the services division at Coradiant for a number of years.

Note to Self: It might be a good idea to (a) pick up their book and (b) consider whether we’d be willing to bring them in as consultants.

A new book called Web Operations from Jesse Robbins is coming out today. Might want to pick this up as well.

Power started the session off discussing the cost of downtime. His example: every hour Amazon goes down, it loses a million dollars, while typical consumer sites lose about $50k per hour. He had an interesting slide mapping availability percentages to total cost or loss of revenue.

Next he went into the role of planning. Study data quarter over quarter and year over year; trends are used for strategic planning. Automation is absolutely critical, not just for operations but for monitoring. Then we need measurement for optimization.

  • Everything starts with a baseline
    • Account for what is happening (Accounting Analytics)
      • Know what is worst
      • What you can optimize
    • Make it better through optimization
      • It’s a cyclical process: collection, reporting, institutionalizing, KPI/ROI, repeat… repeat
  • Have to do more than just collect
    • Must tie to something or some other aspect of business
  • Understand your business goals
    • Amazon’s goal is to maximize sales
    • Four types of businesses on the web
      • Transaction Sites
      • Collaboration Sites
      • Software as a Service (SaaS)
      • Media Sites

Slow sites suck! Obvious… but let’s get into why. Slow sites hurt conversion rates and are less likely to keep users loyal. Poor performance can cost money (refunds or service credits), and customers may find other ways to reach you.

How TCP can affect performance. He used an abstract example of the postal service: follow the rules (put on a stamp, get to the mailbox before 5pm, use the right address) and it just works. TCP is no different than the mail; there are rules and a natural order/sequence to everything. TCP creates an end-to-end link, and HTTP has rules as well for requesting web objects.
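To make the layering concrete, here’s a minimal sketch (assuming Node.js; example.com is just a stand-in host) that opens the TCP “end-to-end link” and then follows HTTP’s rules over it by hand:

```typescript
import * as net from "node:net";

// Open the end-to-end TCP link, then speak HTTP's rules over it:
// a correctly addressed request, delivered in the expected order.
const socket = net.connect(80, "example.com", () => {
  socket.write(
    "GET / HTTP/1.1\r\n" +      // the request line: what we want
    "Host: example.com\r\n" +   // the "address on the envelope"
    "Connection: close\r\n" +
    "\r\n"                      // blank line: end of headers
  );
});

socket.on("data", (chunk) => process.stdout.write(chunk));
socket.on("end", () => console.log("\n-- connection closed --"));
```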

Random Note: Was searching around SlideShare for this presentation, but found this one instead from a few years back.

Where to Measure

  • Three tiers of data
    • WAN Accessibility: Can users reach the site and how long does it take to process?
    • App Functionality: places and tasks
    • Tiered tests
  • Use analytics to drive synthetic tests.

Synthetic Testing

  • Use internal systems like Nagios or HP OpenView
  • Use a monitoring service outside of firewall
    • Should we consider using the SOASTA appliance from the DR location to run both web/HTTP tests and functional/Selenium tests from outside of the network?
      • Browser puppetry
  • Synthetic monitoring is simply not enough
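Since Nagios-style internal checks alone aren’t enough, here is a minimal sketch of an outside-the-firewall synthetic probe (assuming Node.js 18+ for the built-in fetch; the URLs are hypothetical, and in practice analytics would pick the paths to test):

```typescript
// Hypothetical pages to watch; let analytics drive which paths go here.
const targets = [
  "https://www.example.com/",
  "https://www.example.com/login",
];

async function probe(url: string): Promise<void> {
  const start = Date.now();
  try {
    const res = await fetch(url, { redirect: "follow" });
    await res.arrayBuffer(); // drain the body so we time the full download
    console.log(`${url} -> ${res.status} in ${Date.now() - start}ms`);
  } catch (err) {
    console.log(`${url} -> DOWN after ${Date.now() - start}ms (${err})`);
  }
}

// Poor man's synthetic monitor: poll every minute.
setInterval(() => targets.forEach((t) => void probe(t)), 60_000);
```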

Math Rules

  • Averages Suck… a little play on Millsap’s skew (don’t use averages)
    • Let’s actually stop using averages on the team.
  • Use percentiles and histograms (see the sketch after this list)
  • Use traffic requests per second
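A quick sketch of what “percentiles and histograms, not averages” looks like in practice (the sample numbers are made up):

```typescript
// Report percentiles and a crude text histogram for a set of response times.
function percentile(sortedMs: number[], p: number): number {
  const idx = Math.ceil((p / 100) * sortedMs.length) - 1;
  return sortedMs[Math.min(sortedMs.length - 1, Math.max(0, idx))];
}

function report(samplesMs: number[]): void {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const avg = sorted.reduce((s, v) => s + v, 0) / sorted.length;
  console.log(`avg=${avg.toFixed(0)}ms p50=${percentile(sorted, 50)}ms p95=${percentile(sorted, 95)}ms`);

  // Crude histogram: 500ms buckets, one '#' per sample.
  const buckets = new Map<number, number>();
  for (const t of sorted) {
    const b = Math.floor(t / 500) * 500;
    buckets.set(b, (buckets.get(b) ?? 0) + 1);
  }
  for (const [b, n] of buckets) console.log(`${b}-${b + 499}ms ${"#".repeat(n)}`);
}

// One 4.3s outlier drags the average (1066ms) far above the median (560ms).
report([120, 340, 410, 560, 620, 980, 1200, 4300]);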

Another Cool Idea
Show the histogram of a “business process” or clickpath. How cool would it be if we showed the histogram for CPIDs?
Consider using an image of the Count from Sesame Street for presentation at BbWorld.

You know how we show histograms of individual transactions? Would it be possible to take full CPIDs (adding up all transactions for a user in a CPID) and develop a histogram for those (see the sketch below)? Let’s say that CPID 1001 (taking an assessment) is performed 350 times. It would be good to show a histogram of the entire CPID (total time).

Another view would be a scatter plot of the CPID over time, plotting each dot by its total time against when it finished. If we didn’t do a scatter plot, we could offer a line chart from start to end.
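A hedged sketch of the CPID idea, assuming we can export one row per transaction with a user and CPID attached (field names are made up): sum each user’s transactions per clickpath, then feed the totals into the histogram sketch above.

```typescript
interface Txn {
  user: string; // hypothetical field names; use whatever our export really has
  cpid: number;
  ms: number;
}

// Total time per user for one clickpath, e.g. CPID 1001 (taking an assessment).
function clickpathTotals(txns: Txn[], cpid: number): number[] {
  const perUser = new Map<string, number>();
  for (const t of txns) {
    if (t.cpid !== cpid) continue;
    perUser.set(t.user, (perUser.get(t.user) ?? 0) + t.ms);
  }
  return [...perUser.values()]; // histogram these totals
}
```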

What do you think?

Target Metrics for Audience
The speaker suggested Apdex… not really interested in going down that route.
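Even though we’re not going down that route, the Apdex math itself is simple, for reference: samples at or under a target T are “satisfied”, samples under 4T are “tolerating” and count half, everything slower counts zero.

```typescript
// Apdex = (satisfied + tolerating / 2) / total, for a target threshold T.
function apdex(samplesMs: number[], tMs: number): number {
  let satisfied = 0;
  let tolerating = 0;
  for (const s of samplesMs) {
    if (s <= tMs) satisfied++;
    else if (s <= 4 * tMs) tolerating++;
  }
  return (satisfied + tolerating / 2) / samplesMs.length;
}

apdex([300, 900, 2500, 6000], 500); // => (1 + 1/2) / 4 = 0.375
```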

Tools Mentioned

  • Site Meter: Like Google Analytics
  • Net Optics: Network Tap for User Performance Monitoring
  • Tealeaf: Customer Experience Monitoring
  • Dotcom Monitor
  • Aternity
  • Beatbox
  • Atomic Labs
  • Moniforce (Oracle)
  • Unica
  • IBM Coremetrics
  • WebPagetest
  • Monitor.us
  • Jiffy
  • Juice Analytics
  • Dashboard Spy

Velocity Workshop: 90-Minute Optimization Life Cycle

The 90-Minute Optimization Life Cycle: “Fast by Default” Before Our Eyes?

Joshua Bixby (Strangeloop Networks), Hooman Beheshti (Strangeloop Networks)
1:45pm Tuesday, 06/22/2010
209/210
Please note: this is a workshop, for which there is an additional fee required to attend.
By now, we’ve all internalized Steve Souders’ rules for optimizing web performance, but the question is: do you need to spend 6 months and raise an army of top developers to make your sites fast by default? In this workshop, we’ll subject an unsuspecting website to real-time optimization, following Google and Yahoo’s rules for high-performance websites.

Over the course of the workshop, we’ll witness the entire optimization life cycle:

  • We’ll choose our guinea pig site and use various measurement tools to benchmark current performance, focusing on load time, start render time and round trips.
  • We’ll implement A/B segmentation to measure key business metrics like conversion, bounce rate and page views/visit.
  • We’ll iterate through acceleration best practices.
  • We’ll analyze results from different geographical locations using different browsers.

Sponsored By Strangeloop Networks

  • People planning to attend this session also want to see:
  • Psychology of Performance
  • Building Performance Into the New Yahoo! Homepage
  • A Day in the Life of Facebook Operations
  • Stupid Web Caching (and Other Intermediary) Tricks

About Joshua Bixby

Strangeloop Networks
As President of Strangeloop Networks, Joshua defines Strangeloop’s strategic marketing and product direction. Prior to founding Strangeloop, he co-founded and served as President and CEO of IronPoint Technology, helping lead the company to successful acquisition by The Active Network in 2006. Joshua also served as Senior Vice President, Marketing and Product Development, at NTS Internet Solutions, after having held senior marketing and product roles at MNK and GRAPAD.


About Hooman Beheshti

Strangeloop Networks
Hooman is a pioneer in the application acceleration space. In 1997, he helped design one of the original load balancers. Since then, he has defined and driven the development of load balancing, web acceleration, and application delivery products, while also leading the technical evangelism initiatives behind them. Prior to becoming Vice President, Product, at Strangeloop Networks, Hooman was VP of Technology at Crescendo Networks and CTO of Radware Inc.

Notes on Session

Funny presenters… they are using the “cooking show” approach for presenting the material. They wrote a nice little blog post on this presentation. They started with the 8-second rule from 1999, then the 2-second rule from Forrester. He called it the “Barbie” rule… everything needs to be pretty.

Visualizing the Problem
In the example we have a web performance problem, and the front-end part is getting worse. The amount of data on the web keeps increasing; the visual he showed covered how the web changed from 1995 to 2010. He starts with a waterfall chart of 60+ HTTP objects. The page takes 9.5s: the first 1 to 2 seconds is the page itself (the backend request), and the remaining 7 seconds is the actual front-end object requests.

I did a quick experiment on my Managed Hosting site. Sadly, our home page took 7s to load. You can probably see it here. We have so many issues that we still need to address:

  • Still no compression of images
  • Need image optimization
  • Need to minify
  • Need to change the order of loading

Makes me think that we should put this behind our NetScaler and turn on compression. This would be a good way to demonstrate the value of compression; we could also turn it on at the system level. That’s worthwhile from an experimentation perspective.
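As a rough sketch of what “turning it on at the system level” could look like (a toy Node.js origin, not our actual NetScaler config), gzip text responses only when the client advertises support:

```typescript
import * as http from "node:http";
import * as zlib from "node:zlib";

// Toy origin that compresses text responses when the client allows it.
http.createServer((req, res) => {
  const body = "<html><body>" + "lorem ipsum ".repeat(500) + "</body></html>";
  const acceptsGzip = (req.headers["accept-encoding"] ?? "").includes("gzip");
  if (acceptsGzip) {
    res.writeHead(200, { "Content-Type": "text/html", "Content-Encoding": "gzip" });
    res.end(zlib.gzipSync(body)); // ~6KB of repetitive HTML shrinks to a few hundred bytes
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(body);
  }
}).listen(8080);
```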

Random Note: Research this project from Souders: http://stevesouders.com/p3pc/

Another random note: they showed a video comparison of the screens as they load. It’s a capability of the WebPagetest app; the screen goes gray when the page is fully loaded. It’s a cool idea… might want to do this for BbWorld.

What They Optimized

  • Keep-alives and compression made about a 6s difference
    • Keep-alives reduced the connections from 97 to 19
  • Caching: RFC 2616, Section 13 (see the header sketch after this list)
    • Dropped from 10s to 5s for the first page load and from 6s to 2s for the second
  • Used a CDN for content
    • Got about 20% savings after adding the CDN
  • Reduce roundtrips
    • Combine images, JS, and CSS
    • Minify CSS and JS
    • Image compression
    • Increase concurrency with more domain sharding
      • Is it possible for us to deploy a sharding technique in our application architecture?
    • Went from 8.3s to 3.8s
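For the RFC 2616 Section 13 item, here’s a minimal sketch of the kind of caching headers that produce that second-load drop (a toy Node.js handler; the paths and max-age are illustrative):

```typescript
import * as http from "node:http";

// Long-lived caching for versioned static assets, revalidation for HTML.
http.createServer((req, res) => {
  if (req.url?.startsWith("/static/")) {
    res.writeHead(200, {
      "Content-Type": "text/css",
      // A year in the cache; bust by changing the asset's URL/version.
      "Cache-Control": "public, max-age=31536000",
    });
    res.end("body{margin:0}");
  } else {
    // HTML should revalidate so users always get fresh markup.
    res.writeHead(200, { "Content-Type": "text/html", "Cache-Control": "no-cache" });
    res.end("<html><body>hello</body></html>");
  }
}).listen(8080);
```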

Note to Self: OK… Anand’s recommendation from a while back to look at Strangeloop is something we should seriously consider.

Velocity Workshop on Progressive Enhancement

Progressive Enhancement: Tools and Techniques

Anne Sullivan (Google)
3:45pm Tuesday, 06/22/2010
Web Performance 209/210
Please note: to attend, your registration must include workshops.
The painful impact JavaScript has on page load times is well understood – scripts block downloads and rendering, even in newer browsers. The solution to this problem is progressive enhancement – rendering the visible elements immediately as HTML and adding JavaScript interactivity later. Web performance experts often recommend using progressive enhancement to optimize page load times. However, there isn’t a lot of practical information available on how to implement progressive enhancement in a complex web application. In this workshop, we’ll cover tools and techniques for implementing progressive enhancement including Closure Compiler, Google Page Speed, and other JavaScript frameworks.

People planning to attend this session also want to see:

  • Psychology of Performance
  • Stupid Web Caching (and Other Intermediary) Tricks
  • Building Performance Into the New Yahoo! Homepage
  • Keeping Track of Your Performance Using Show Slow

Notes on this Session

The first question is “what is progressive enhancement?” Take a look at this post, which does a good job of explaining it. Here’s another good explanation, and here’s a good presentation.

Progressive Enhancement is a powerful methodology that allows Web developers to concentrate on building the best possible websites while balancing the issues inherent in those websites being accessed by multiple unknown user-agents. Progressive Enhancement (PE) is the principle of starting with a rock-solid foundation and then adding enhancements to it if you know certain visiting user-agents can handle the improved experience.

PE differs from Graceful Degradation (GD) in that GD is the journey from complexity to simplicity, whereas PE is the journey from simplicity to complexity. PE is considered a better methodology than GD because it tends to cover a greater range of potential issues as a baseline. PE is the whitelist to GD’s blacklist.
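A tiny sketch of that journey from simplicity to complexity (the markup and class name are hypothetical): the server renders a plain link that works in every user-agent, and a script upgrades it in place only where it runs.

```typescript
// Baseline HTML (works with no JS at all):
//   <a class="gallery" href="/gallery">View gallery</a>
// Enhancement: capable browsers intercept the click and render inline instead.
document.addEventListener("DOMContentLoaded", () => {
  const link = document.querySelector<HTMLAnchorElement>("a.gallery");
  if (!link) return; // nothing to enhance; the baseline page still works
  link.addEventListener("click", (event) => {
    event.preventDefault();
    fetch(link.href)
      .then((res) => res.text())
      .then((html) => { link.outerHTML = html; })              // swap link for widget
      .catch(() => { window.location.href = link.href; });     // fall back to navigation
  });
});
```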

Implementing Progressive Enhancement

  • Finding serial requests
  • Finding requests that block rendering

What is the Closure Compiler for JavaScript? Can it be integrated into our build environment?

Fixing UI Generated by JS

  • Progressively enhance HTML
    • Research the impact on accessibility (speaker mentioned ARIA)
  • Event Queuing
  • Late loading
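Event queuing and late loading tend to go together. A hedged sketch (my own illustration, not from the slides): a tiny inline stub records clicks while the heavy script loads late, then replays them once the real handler arrives.

```typescript
// Stage 1: tiny inline stub, present from the first byte of HTML.
const queued: MouseEvent[] = [];
const stub = (e: MouseEvent) => { queued.push(e); };
document.addEventListener("click", stub);

// Stage 2: the heavy script arrives late (async/deferred) and takes over.
function activate(realHandler: (e: MouseEvent) => void): void {
  document.removeEventListener("click", stub);
  document.addEventListener("click", realHandler);
  queued.forEach(realHandler); // replay whatever the user did while we loaded
  queued.length = 0;
}
```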

Tools

  • Webkit Timeline
  • MySpace Performance Tracker
  • Google Page Speed
  • Rhino for Static Analysis of JavaScript
    • Closure Compiler Finds Unused JavaScript

Velocity Fast By Default Keynote

Day 2 of the Velocity conference, which is going really well. Souders and Robbins are in their usual character up on the stage right now. Souders says there are 1,000 attendees this year, which is sick. Year 1 there were 200 of us; in year 2 we might have had 400 or 500. So this is huge. Souders mentioned that he had a waiting list of dozens more. Next year is going to be crazy…

Speaker One: James Hamilton

High-scale cloud services and internet search depend upon efficient mega-scale infrastructure. The pace of innovation is ramping up in datacenter power distribution, mechanical systems, intra-datacenter networking, and server hardware. When the scale is massive and infrastructure costs dominate, technology changes quickly. This talk inventories high-scale service infrastructure costs and some of the innovations driven by optimizing for work done per joule and work done per dollar.

Amazon Web Services
James is VP and Distinguished Engineer at Amazon Web Services where he focuses on infrastructure efficiency, reliability, and scaling. Prior to AWS, James was architect on the Microsoft Data Center Futures team and, over the years, has held leadership roles on several high-scale services and products, including Exchange Hosted Services, Microsoft SQL Server, and IBM DB2. James loves all things server related and is interested in optimizing all components from data center power and cooling infrastructure, through server design, and to the distributed software systems they host. He maintains a high scale services blog at http://perspectives.mvdirona.com.

Notes About the Speaker

Hamilton is one of the infrastructure engineers at Amazon. I’ve run into the guy a few times in the elevator but haven’t introduced myself. His focus is on data center infrastructure. Cloud services are huge and will have a long-term impact, driving innovation up and cost down. 34% of the costs go to power, which is amazing since their systems at AWS are at only 30% draw.

The next few slides (yawn…) were about power distribution efficiency. I think everyone in the audience was bored. Too bad the US/Algeria World Cup game had already finished; you probably would have seen half the crowd leave to watch it. I think Robbins pumped him up as a speaker because they knew this guy was way too boring.

The only funny point he made was that we could run our data centers hotter, by which he means 90 to 105 degrees (note to self… really?). His observation is that game consoles run in hot spaces and burn up a lot of power.

Speaker Two: Urs Hölzle

Google
Urs Hölzle served as the company’s first vice president of engineering and led the development of Google’s technical infrastructure. His current responsibilities include the design and operation of the servers, networks and datacenters that power Google. He is also renowned for both his red socks and his free-range Leonberger, Yoshka (Google’s top dog). Urs joined Google from the University of California, Santa Barbara where he was an associate professor of computer science. He received a master’s degree in computer science from ETH Zurich in 1988 and was awarded a Fulbright scholarship that same year. In 1994, he earned a Ph.D. from Stanford University, where his research focused on programming languages and their efficient implementation.

As one of the pioneers of dynamic compilation, also known as “just-in-time compilation,” Urs invented fundamental techniques used in most of today’s leading Java compilers. Before joining Google, Urs was a co-founder of Animorphic Systems, which developed compilers for Smalltalk and Java. After Sun Microsystems acquired Animorphic Systems in 1997, he helped build Javasoft’s high-performance Hotspot Java compiler.

In 1996, Urs received a CAREER award from the National Science Foundation for his work on high-performance implementations of object-oriented languages. He was also a leading contributor to DARPA’s National Compiler Infrastructure project. Urs has served on program committees for major conferences in the field of programming language implementation, and is the author of numerous scientific papers and U.S. patents.

Notes on Session

Speed matters… he started off with an awesome presentation about Chrome being faster than a speeding potato. Absolutely funny… here’s a link to the video. He wants the web to be as fast as turning a page in a book, on the order of 100ms.

He puts emphasis on the Chrome browser: HTML5, the V8 JavaScript engine, DNS prefetching, the VP8 codec… it’s open source and spurs competition. He calls out that IE and FF are still suffering from latency; average page latency is still greater than 2s.

Note to Self: At one point during yesterday’s session and today’s keynote I got the strange suspicion that a lot of folks presenting and a lot of folks listening are struggling with fresh ideas for tackling performance gains. It’s almost as though they feel the low-hanging fruit is all picked and that some transformative change is necessary. No one could really articulate what that transformation is or will be. Kind of strange…I only say this because after 3 years of Velocity and several mid-year webinars, I get the sense that there’s not a lot of innovation. Maybe I’m wrong, but given the repetition of the subject I’m struggling with this…

He mentioned using Public DNS and its benefits for performance. Check this link that discusses it.

Tools He Mentioned

  • Auto Spriter
  • Speed Tracer
  • Closure Compiler
  • Page Speed

Speaker Three: Vik Chaudhary

Keynote Systems, Inc.
Vik Chaudhary serves as vice president of product management and corporate development. He is responsible for leading Keynote’s product strategy, sales enablement, and executing on the company’s acquisitions and partnerships. Mr. Chaudhary has spent 19 years in chief executive, marketing, and engineering positions at blue-chip and start-up technology companies. At Keynote, he previously served as vice president of marketing and corporate development, extending the company into new markets via ten acquisitions. Before joining Keynote, he was CEO of on-demand analytics company Bizmetric, ran product management at database pioneer Gupta Technologies, and led core software engineering teams at Oracle. Mr. Chaudhary is a frequent speaker at industry events on software strategy and M&A, and has been featured in the New York Times and on the ABC News Nightline program. Mr. Chaudhary holds a B.S. in Computer Science and Engineering from the Massachusetts Institute of Technology.

Notes on Session About Mobile Web Performance

Great…another year…another sales presentation by Keynote. I know they sponsor the event, but it’s an absolute travishamockery that these guys get up year after year and peddle their products. This one is about monitoring and testing mobile applications.

They claim the product, called MITE 2.0, is 100% free and designed from the ground up for mobile. So, question for us… can we really use this? It looks like they have simply built a simulator. MITE produces a score similar to a YSlow score. This might be something to share with the mobile team.

http://mite.keynote.com/

Velocity Lightning Demos

Lightning Demos

Andreas Grabner (dynaTrace Software), John J. Barton (IBM), Stoyan Stefanov (Yahoo! Inc), Bryan McQuade (Google)
10:40am Wednesday, 06/23/2010
Keynotes Ballroom ABCD
This presentation will be streamed live along with the other keynotes.
Demos from dynaTrace, Firebug, YSlow, and Page Speed.

dynaTrace software Inc.

dynaTrace is the innovator and emerging leader in application performance management (APM). The company offers the only continuous APM system on the market – one that can monitor all transactions at all times and one that is used by all key contributors to application performance – architects, development, test and production. More than 200 customers including Sears, Pershing, Renault, Zappos, BBVA, Fidelity, and Thomson Reuters use dynaTrace’s patent pending technology to gain deep visibility into application performance, identify problems sooner and reduce the mean time to repair issues by 90%. Leading companies rely on dynaTrace to proactively prevent performance problems from happening and quickly resolve those that do occur – saving time, money and resources.

Firebug

The Firebug Velocity demo will feature new features in the Net panel, new comprehensive breakpoint support, and a sneak peek at our next version. In Firebug 1.5 we reimplemented the Net panel to dramatically improve the timing accuracy and to support exporting the traffic analysis info. The export format (HAR) was designed to be flexible enough that it can be adopted across projects and various tools. We added breakpoint support for every panel: I’ll demo using JavaScript breakpoints from XHR events and cookies. Firebug 1.6 integrates Firebug extensions to give you a jolt of new features, all pretested with Firebug.

dynaTrace Software
Andreas Grabner has 10 years of experience as an architect and developer in the Java and .NET space. In his current role, Andi works as a Technology Strategist for dynaTrace Software in the Methods and Technology team. He influences the dynaTrace product strategy and works closely with customers on implementing performance management solutions across the entire application lifecycle. He is a frequent speaker at technology conferences on performance and architecture topics and regularly publishes articles and blog posts on blog.dynatrace.com.

John J. Barton

IBM
John J. Barton is the manager of Interaction Science, an IBM Almaden Research group specializing in fundamentals of human-computer interaction (HCI) technologies, especially multi-device interaction. Interaction Science studies users, invents new techniques and technology, then validates progress by scientific tests with real users. Current projects in my group include text input on handheld devices, integration of information across devices via instant messaging, adapting web pages for mobile devices, and extending web debugging to support more dynamic applications and environments.

John has 21 years of experience in industrial research with over 60 publications in the diverse fields of ubiquitous and mobile computing, compiler technology and programming languages, physics of electron scattering, and chemistry on surfaces. After early work in quantum chemistry at NASA’s Jet Propulsion Lab in Pasadena, CA, he got his MS in Applied Physics at Caltech and moved to Berkeley. There he worked at the Lawrence Berkeley Lab and got his PhD at UC Berkeley. John joined the Physics department at IBM Watson to work on photoelectron holography, moving to Computer Science in 1991 to work on C++ compilers and co-author a book, “Advanced C++”, with Lee Nackman. John managed the Jikes Java Research Virtual Machine team until 1998, when he moved to HP Labs Palo Alto, where he was part of the Cooltown web-based ubiquitous computing project. When he isn’t playing computer games with his sons or working on his deck, he contributes to the Firebug open-source JavaScript debugger.

Stoyan Stefanov

Yahoo! Inc
I work for Yahoo!’s Exceptional Performance team. My daily tasks include research, experiments and building tools (such as YSlow) to improve the performance of the Yahoo! properties worldwide. I’m also a contributor to several open source projects and author of a few books and numerous online articles. Creator of the smush.it (http://smushit.com) online image optimization tool.

Bryan McQuade

Google
During Bryan’s time at Google, he has contributed to various projects that make the web faster, including Shared Dictionary Compression over HTTP, optimizing web servers to better utilize HTTP, and most recently, the Page Speed web performance tool. Prior to working on web performance, Bryan was the first full time engineer on the Google TV Ads team, where he helped to build some of Google’s TV ad auction and video management systems.

Notes on the Session

Dynatrace Demo

Starts off with dynaTrace and an unsolicited blog post from John Resig, the father of jQuery. The speaker starts by talking about a blog post he wrote about FIFA.com (given the World Cup is happening right now). Take a look at that blog.

He installs the dynaTrace AJAX client, runs it against FIFA.com, and traverses through a couple of pages, collecting a live session of clicking. Now he can work offline to review the diagnostics. The new version has an overall web site performance report, similar to YSlow and Page Speed. It shows time to first visual impression, time for the page to load, and time for the full page load, color-coded by thresholds. Take a look at the best practices document to understand the thresholds. Note to Self: Might want to consider experimenting with the thresholds from Galileo.

They include a tab on server-side performance rank, fully integrated with the server-side performance tool. Right now they focus on IE6, IE7, and IE8; a Firefox integration is in the works. Can also upload to ShowSlow.com.

Firebug

Slides and demo are based on Firebug 1.6. First he talked about Firebug swarms (tested extension collections that install together). They added a scroll bar and drop-downs, and you can opt out of certain features if you don’t like them. Accuracy of the Net panel is improved to show paint events. There’s a new net export format (HAR), supported by HttpWatch, Page Speed, and ShowSlow; you can upload directly to ShowSlow. You can also set network breakpoints.

YSlow

He calls the session 3-in-1. First, YSlow is a lint tool. Second, it’s a monitoring tool (GTmetrix.com or ShowSlow.com). Third, it’s a platform/framework. Two key concepts: there are rules and rulesets.

New extension called YSpy (security check). Another is called WTF (Web Testing Framework).

Page Speed

You need both Firefox and Firebug to make it work. It produces specific suggestions ordered from highest impact to lowest, and even has the ability to download minified content. It addresses third-party content analysis: you can study ads, trackers, and content.

They built a C++ SDK to work with other browsers. You can now run this against other browsers.

They talked about time to first paint, focusing on identifying JavaScript and CSS that are candidates for deferral. This is interesting, as it breaks the rule that JS files always need to load early. It can tell you which JS functions were not called.

Overall bad demo…too many technical difficulties.

Velocity Keynote About 3rd Party Content

Don’t Let Third Parties Slow You Down

Arvind Jain (Google), Michael Kleber (Google)
11:45am Wednesday, 06/23/2010
Keynotes Ballroom ABCD
Ads, widgets and other third-party content bring many benefits to your web pages and users. However, they often slow down your pages. We’ll share data on how page speed is affected by such content. We’ll also discuss recent work at Google to make ads as fast as possible, and what site owners and third-party content providers can do to make sure pages are not slowed down by them.

Google
Michael is a mathematician who has spent the last ten years working on efficient algorithms in different contexts. Now he works for Google making the internet faster.

Notes on the Session

Average page load time is 4.9s. Pages are complex, pulling 44+ resources totaling 320KB from 7 domains, and a lot of what’s loaded is third-party content. He calls out several widgets that really slow down performance. Google created something called the Knockout Lab to study performance/latency from 3rd-party content.

  • Digg widget: lots of JS blocking
  • AdSense: responsible for 12.8% of page load time
  • Google Analytics: now offered asynchronously
  • DoubleClick: responsible for 11.5% of latency

Making Google AdSense Fast by Default

  • Want to minimize the blocking of the publisher page
  • No retagging
  • Put the ad right here
  • Must run in the publisher domain

They now have a working solution…

  • Make show_ads.js a tiny loader script
  • Loader creates a same domain iframe
  • Loads the rest of show_ads very quickly
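A hedged sketch of that loader pattern, same-domain (“friendly”) iframe and all; the names are illustrative, not Google’s actual show_ads internals:

```typescript
// Tiny loader: create a same-origin iframe and pull the heavy ad script
// inside it, so the publisher page keeps parsing and rendering unblocked.
function loadAdAsync(slot: HTMLElement, implUrl: string): void {
  const frame = document.createElement("iframe");
  frame.style.border = "0";
  slot.appendChild(frame);

  const doc = frame.contentDocument;
  if (!doc) return;
  doc.open();
  // Escape "</script" so the outer parser doesn't close our own script tag.
  doc.write(`<script src="${implUrl}" async><\/script>`);
  doc.close();
}

// Usage (hypothetical slot id and script path):
// loadAdAsync(document.getElementById("ad-slot")!, "/ads/show_ads_impl.js");
```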

Velocity Session on Performance Impact: More Findings from the Front Lines of Web Acceleration

Performance Impact, Part Two: More Findings from the Front Lines of Web Acceleration

Joshua Bixby (Strangeloop Networks)
1:00pm Wednesday, 06/23/2010
Velocity Culture, Web Performance Ballroom AB
Last year at Velocity, Hooman Beheshti presented the findings from phase one of Strangeloop’s long-term research into the relationship between web performance and business benefits. The results were also published in Watching Websites. Since then, we’ve received a barrage of questions from the web performance community, which fueled phase two of our study. Today I’ll be presenting our most recent findings.

Some of the community’s questions were:

  • Who were the clients?
  • How fast were the pages?
  • What acceleration techniques were implemented?
  • What happened to the key page components (such as JS size, payload and roundtrips) of the websites?
  • How did changing key variables (page load time, payload, number of roundtrips, etc.) affect the outcome?

We’ve been collecting and analyzing data to help us answer these questions, as well as some new ones we’ve thought up along the way. Join us as we present our findings, and help us consider what areas deserve further study.

People planning to attend this session also want to see:

  • Psychology of Performance
  • Building Performance Into the New Yahoo! Homepage
  • A Day in the Life of Facebook Operations
  • TCP and the Lower Bound of Web Performance

Joshua Bixby

Strangeloop Networks
As President of Strangeloop Networks, Joshua defines Strangeloop’s strategic marketing and product direction. Prior to founding Strangeloop, he co-founded and served as President and CEO of IronPoint Technology, helping lead the company to successful acquisition by The Active Network in 2006. Joshua also served as Senior Vice President, Marketing and Product Development, at NTS Internet Solutions, after having held senior marketing and product roles at MNK and GRAPAD.

Notes on the Session

He made an interesting point about “mortal” companies, unlike the “big” players (Amazon, Google, Yahoo, Facebook, Twitter, etc.): let’s look at the small players. I like this point because it shows how smaller deployments (which are much like our customers) can really gain performance with small investments, without having to mobilize an entire army of developers. Emphasis should be on KPIs, since they demonstrate the business value of performance gains.

He put together an experiment in which each incoming request is served either an optimized or a non-optimized page (see the split sketch after the list below), trying to demonstrate how Strangeloop can beat non-accelerated delivery every single time. The KPIs he showed were around business analytics.

  • Optimize Caching
  • Minimize Roundtrips
  • Minimize Payload
  • Optimize browser ordering
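A minimal sketch of that kind of split (a toy Node.js front door; the cookie name and 50/50 ratio are assumptions): bucket each visitor once, then compare business KPIs per bucket downstream.

```typescript
import * as http from "node:http";

// Assign each visitor to "optimized" or "control" on first visit via a cookie.
http.createServer((req, res) => {
  const match = /bucket=(optimized|control)/.exec(req.headers.cookie ?? "");
  let bucket = match?.[1];
  if (!bucket) {
    bucket = Math.random() < 0.5 ? "optimized" : "control";
    res.setHeader("Set-Cookie", `bucket=${bucket}; Path=/; Max-Age=2592000`);
  }
  // Here you'd route "optimized" traffic through the accelerator and log the
  // bucket with each conversion event so KPIs can be compared per arm.
  res.end(`served via ${bucket} path\n`);
}).listen(8080);
```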

Josh then brought up a speaker from AutoAnything, a small site focused on how people buy car products. They did some Strangeloop optimization and showed some major business impact.