The anticipation is building this morning for the real start of the conference. Yesterday we had several workshops, which were great, but the real meat of the show happens today. This morning the entire congregation meets up to listen to Souders, Rauser, O’Reilly (Tim, that is… not Bill) and a few other guest speakers. It will be 20-minute slam sessions in which the speakers come up, do their schtick, and then pass the microphone to the next person.
I’m debating my first session. I could listen to “Real-Time Real-Fast” or “CSS3 & HTML5 – Beyond the Hype!”
As web applications continue to become more interactive and sophisticated, real-time messaging and updates are becoming increasingly prevalent. One of the hottest new APIs in HTML5 is WebSocket, which enables true duplex communication without the overhead, complexity, and extraneous latency of HTTP-based solutions. In this talk, we will see how WebSocket removes these barriers to create optimal real-time delivery of messages from servers to browsers. Although WebSocket is an exciting new API, we will see how we can easily fall back to HTTP-based techniques with Dojo’s Socket API when WebSocket is not available. The server side is equally important, and real-time messaging has pushed the need for asynchronous I/O on the server. We will look at how we can create scalable real-time applications using the Node.js platform, which is so perfectly suited for Comet, together with the Tunguska library. The presentation will cover the use of streaming abstractions to minimize buffering. We will also consider the performance implications of topic-based publish-subscribe distribution versus filtering techniques.
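That closing comparison — topic-based publish-subscribe versus filtering — is easy to see in miniature. The broker below is a hypothetical sketch for illustration, not Tunguska’s actual API:

```javascript
// Topic-based broker: messages are delivered only to subscribers of
// the matching topic, so no per-message filtering happens downstream.
function createBroker() {
  const topics = new Map(); // topic name -> array of callbacks
  return {
    subscribe(topic, cb) {
      if (!topics.has(topic)) topics.set(topic, []);
      topics.get(topic).push(cb);
    },
    publish(topic, message) {
      (topics.get(topic) || []).forEach((cb) => cb(message));
    },
  };
}

// Contrast: broadcast-and-filter delivers every message to every
// subscriber's filter, paying a per-subscriber cost on each publish.
function createBroadcaster() {
  const subs = []; // each entry: { filter, cb }
  return {
    subscribe(filter, cb) { subs.push({ filter, cb }); },
    publish(message) {
      subs.forEach((s) => { if (s.filter(message)) s.cb(message); });
    },
  };
}
```

With many subscribers and many topics, the topic map keeps each publish proportional to the number of interested parties, while the filtering scheme scales with the total subscriber count.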
HTML5 and CSS3 can improve performance, or they can be disastrous to your site speed. In this session you’ll learn which features are ready to use now and which to avoid.
The session will cover: hacks, shims, data URIs, border-radius, animations, gradients, offline storage, rgba, aria roles, and many other features. We will discuss browser support in detail. By the end of the session, you will know how to boost your site’s performance while maintaining fallbacks for older browsers. You will learn about places where the new technologies break down and discuss ways of giving IE users the best possible experience.
We’ll take a look at the bleeding edge, and learn how to apply the techniques to websites with real traffic. The session will draw on examples from Nicole’s work with Box.net, Salesforce.com, and other large-scale sites.
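One of the features on that list, data URIs, inlines a resource directly into the page instead of requesting it separately. A quick Node-style sketch of building one (the text payload here is just a stand-in for real image bytes):

```javascript
// Build a data URI: base64-encode the payload and prefix it with the
// MIME type. Inlining saves an HTTP request, at the cost of roughly a
// 33% size increase from the base64 encoding.
function toDataUri(mimeType, buffer) {
  return `data:${mimeType};base64,${buffer.toString('base64')}`;
}

// Tiny text payload standing in for image bytes.
const uri = toDataUri('text/plain', Buffer.from('hello'));
console.log(uri); // data:text/plain;base64,aGVsbG8=
```

In a stylesheet you would drop the resulting string into a `url(...)`, keeping in mind that older IE versions do not support data URIs.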
The debate is between “Writing Fast Client-Side Code: Lessons Learned from SproutCore” and “Performance Measurement and Case Studies at MSN”
The SproutCore framework has evolved over the past five years to be an extremely high-performance framework that focuses on making it possible to build native-like applications in the browser.
This means handling problems like working with extremely large data-sets, inconsistent connectivity, and complex DOMs. Lately, it has meant figuring out how to properly use new browser features that can make a big difference to perceived performance, like hardware acceleration.
In this talk, Yehuda will cover some of the techniques that SproutCore has used historically to enable extremely complex applications to perform well in the browser, as well as what new technologies the team is looking at to leverage the latest browser technologies in building compelling content for the web.
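One classic technique for the “extremely large data-sets” problem (a generic sketch here, not SproutCore’s actual implementation) is list-view virtualization: only materialize DOM nodes for the rows that intersect the viewport.

```javascript
// Given a scroll offset, compute which rows of a long list actually
// need DOM nodes. Everything outside this window stays unrendered.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight));
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1
  );
  return { first, last };
}

// 100,000 rows, 30px tall, 500px viewport: only 17 rows need to
// exist in the DOM at this scroll position.
console.log(visibleRange(4000, 500, 30, 100000)); // { first: 133, last: 149 }
```

The window is recomputed on scroll, and row nodes outside it are recycled rather than destroyed.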
MSN is on a mission to be the world’s fastest portal. Driving this mission requires a paradigm shift in how we measure performance and its impact. In this session we describe why existing metrics used at MSN and the industry in general are deficient, and need to evolve from an internal system view to a human view – namely, to represent rendering and responsiveness. We describe the requirements and gaps in this space, and offer a Call to Action to browser makers, tool makers, and the performance community in general to address these gaps. We also describe the range of performance measurement systems used at MSN, spanning both synthetic and real user environments.
Effectively measuring performance and its impact also requires assessing business impact. We describe how MSN uses A/B testing to assess the impact of performance changes on business metrics such as Page Views and Searches. A/B testing is critical, as business impact is the ultimate truth of whether a change is worthwhile.
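The core arithmetic of such an A/B readout is simple: the relative lift of a business metric in the treatment group versus control. The numbers below are invented for illustration, not MSN data:

```javascript
// Relative lift of a business metric (e.g. page views per visit)
// in a treatment group compared to a control group.
function relativeLift(controlMean, treatmentMean) {
  return (treatmentMean - controlMean) / controlMean;
}

// Hypothetical: a performance change moves page views per visit
// from 4.0 (control) to 4.2 (treatment).
const lift = relativeLift(4.0, 4.2);
console.log(`${(lift * 100).toFixed(1)}% lift`); // 5.0% lift
```

In practice you would also check statistical significance before declaring the change worthwhile; the lift number alone is only half the story.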
We also will share the results and insights of several case studies, including the quantified impact on business metrics of implementing specific performance improvements and best practices. Lastly, we will discuss the techniques we’re using to drive the performance mission inside of MSN.
I’m not too sure about the third session. I am going to choose between “Understanding Mobile Web Browser Performance” and “Where Is Your Data Cached (And Where Should It Be Cached)?”
In this session, attendees will participate in an in-depth discussion of two key aspects of understanding mobile Web browser performance: (i) the specifics of 3G/4G mobile networks, and (ii) the constraints on browser software architecture from operating on a mobile device.
(i) Mobile networks generally exhibit much higher latencies than seen on wired networks. Bandwidth can fluctuate, and there are power and latency considerations related to bringing the radio connection up and down in response to network traffic patterns. We will use examples to illustrate this concept and suggest guidelines for front-end engineers to optimize for mobile networks.
(ii) In order to understand the impact of constraints on mobile browser software architecture, we will look at the Android browser. Because it is open-source, Android allows for unique insights into the overall performance of the mobile browser.
The Android Web browser has some unusual features that have a significant impact on page download performance. For instance, by default it uses four HTTP processing threads, which means that it can process requests on at most four sockets concurrently. The impact of this is mitigated significantly by another key feature, which is that the Android browser pipelines HTTP requests (and it is perhaps the only mainstream browser to do so by default). In addition to these two features, a third factor that has a significant impact on page load times is the browser’s caching policies. It is also important to understand the role that DNS lookups and TCP algorithms play in the overall page load process.
With the constraints above, simple design choices can have a significant impact on page load time. We will take a look at specific case studies, including sharding across multiple domains, and the impact on the browser cache of tiny sprites that decode to over a megabyte. We will demonstrate how various tools can be used to analyze and understand page load behavior on Android Web browsers (such as pcapperf, htracr, and Qualcomm’s own Web optimization tool for developers).
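The “tiny sprites that decode to over a megabyte” point is easy to quantify: a decoded bitmap costs roughly width × height × 4 bytes (one byte each for R, G, B, and alpha), regardless of how small the compressed file was.

```javascript
// Approximate in-memory cost of a decoded image: 4 bytes per pixel
// (RGBA). A highly compressible sprite can be a few KB on the wire
// yet occupy over a megabyte once decoded in the browser's cache.
function decodedBytes(width, height) {
  return width * height * 4;
}

// A 640x480 sprite decodes to about 1.2 MB, whatever its file size.
console.log(decodedBytes(640, 480)); // 1228800
```

On a memory-constrained mobile browser, a handful of such sprites can evict more useful entries from the decoded-image cache.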
The level of this session is intermediate – some knowledge of the basics of DNS, TCP and HTTP will be helpful to attendees (e.g. at the level of John Rauser’s or Tom Hughes-Croucher’s talks from last year’s Velocity). We will quickly go over the basics so that people unfamiliar with the protocols can still follow the presentation.
As a result of this session, attendees will have a deeper understanding of how Internet protocols, wireless networks, browser software architecture and Web page design all come together to determine page load performance in mobile Web browsers, especially for the Android OS. The talk should help front-end engineers in particular to optimize their design and content for the mobile environment.
Taking a look at the many layers of caching in the modern web stack can lead to some interesting optimizations. We know that RAID card caches, disk caches and CPU caches all exist at the hardware level, but how do they interact with database caching, application code caching and rendered page caching? Where are the redundancies, and where is the most optimal location for your services to cache? Are you risking data integrity by using both disk and RAID card caching? Is there a similar risk in using both filesystem and database caching? These are hard-won lessons if you have to learn them during an outage; they can be avoided with some forethought and benchmarking. I’ll call out the many layers where data is cached and talk about some of the risks and potential performance gains that we’ve found by selectively disabling and adding particular caching layers.
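Whatever the layers are — CPU cache over page cache over disk, or app cache over memcached over the database — a layered lookup has the same shape. A minimal sketch (Maps standing in for real cache tiers):

```javascript
// Walk cache layers from fastest to slowest; on a hit, backfill the
// faster layers that missed, so the next lookup stops earlier.
function layeredGet(key, layers, origin) {
  const missed = [];
  for (const layer of layers) {
    if (layer.has(key)) {
      const value = layer.get(key);
      missed.forEach((m) => m.set(key, value)); // backfill faster layers
      return value;
    }
    missed.push(layer);
  }
  const value = origin(key); // fall through to the source of truth
  missed.forEach((m) => m.set(key, value));
  return value;
}

// Two Map-backed tiers standing in for, say, app cache and memcached.
const l1 = new Map(), l2 = new Map();
l2.set('user:1', 'alice');
console.log(layeredGet('user:1', [l1, l2], () => 'db')); // alice
console.log(l1.get('user:1')); // alice (now backfilled into the fast tier)
```

The redundancy question from the abstract is visible right here: every layer you add is another copy of the data that can go stale or waste space, which is why benchmarking which tiers actually earn their keep matters.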
10 tricks for mobile performance by Josh Fraser will cover the following topics and more:
Understanding the differences between devices
Every mobile device acts a little differently than the others. We’ll talk about these differences and figure out which ones matter most.
Resizing images for mobile screens
Why download a 10-inch image to a 2.5-inch screen? We’ll look at the best way to resize images for mobile devices.
Lazy-loading images below the fold
Sometimes there’s no need to load all the images right away. We’ll look at the best techniques for lazy-loading images that fall below the fold.
Working with different cache sizes on mobile
Mobile browsers have different cache sizes. We’ll talk about what this means for mobile WPO.
Preloading content for the next pageview after the onload
Take advantage of the time while the user is reading a page and start downloading the next one.
Knowing the effects of iframes on mobile
Iframes are typically bad for performance. We’ll compare how iframes act on mobile vs. desktop.
Evaluating the various tradeoffs with mobile
WPO is a series of tradeoffs. In this talk, we’ll talk about some of the tradeoffs that are specific to mobile.
Understanding loading indicators on mobile
Loading indicators are important for giving feedback to the user on how the page is loading. We’ll look at how mobile browsers handle loading indicators and how you can work with them.
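The lazy-loading trick above hinges on one question: is an image below the fold right now? The decision logic is a pure function (a sketch here; a real implementation wires this up to scroll and resize handlers and swaps in the `src` attribute when an image qualifies):

```javascript
// Decide which images to load: anything whose top edge falls within
// the current viewport (plus an optional look-ahead margin) gets
// fetched; everything further down waits until the user scrolls near.
function imagesToLoad(imageOffsets, scrollTop, viewportHeight, margin = 0) {
  const fold = scrollTop + viewportHeight + margin;
  return imageOffsets.filter((offsetTop) => offsetTop < fold);
}

// Images at 100px, 900px, and 3000px; a 600px-tall viewport at the top
// of the page only needs the first one.
console.log(imagesToLoad([100, 900, 3000], 0, 600)); // [ 100 ]
```

The `margin` parameter gives you a look-ahead buffer so images start downloading slightly before they scroll into view — a useful lever on high-latency mobile networks.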
OK…so the debate is between “Take it all off! Lossy Image Optimization” and “WebPagetest Update”
For the majority of websites, images consume the most bandwidth and are the most requested content type. The focus over the last few years has been on lossless image optimization tools, which typically achieve a 5% to 15% reduction in size. With images constituting such a large percentage of the web, what else can be done to reduce file size while preserving the perceived quality of images?
This is where lossy image optimization comes in. Lossy image optimization allows a 30% to 70% file size reduction by discarding some of the image data. This lossy aspect has caused most people to immediately discount lossy image optimization as a realistic option. However, this is shortsighted. MP3s achieve enormous size reductions by using knowledge about how we hear and process sound to discard audio data without significant losses in perceived quality. Similarly, by intelligently approaching images and their content, we can apply different image formats and lossy compression schemes to achieve substantially smaller file sizes while maintaining image quality and user experience.
In this presentation, we will discuss different techniques and approaches to further optimize your web images without noticeable quality loss. We will provide guidelines that show when lossy image optimization is appropriate and how to apply it. We will show before and after pictures to illustrate when lossy optimizations succeed and fail, and how much savings can be achieved. We will demonstrate how free tools can be used to automate the detection and optimization of candidate images for lossy compression. Finally we will discuss how new web image formats like WebP and JPEG-XR might change the image optimization landscape.
No debate…going to see “Web Site Acceleration with Page Speed Technologies”
The best practices for creating fast web pages are both established and evolving. Page Speed is an open-source web page analysis and optimization tool that helps web developers to make their sites faster. Page Speed Automatic is a technology for dynamically accelerating web pages by rewriting them as they are served. Its initial implementation is mod_pagespeed, an open-source Apache module.
In this talk we will discuss:
- The evolving state of the art of fast web design
- The newest Page Speed suggestions
- Optimizing sites for modern browsers vs older browsers
- Optimizing sites for Mobile vs Desktop
- Technology to help you implement these best practices
- Page Speed run from desktop browsers and other tools
- mod_pagespeed on Apache httpd
- Page Speed Automatic: open source APIs to add acceleration to web software
- Best Practices in Action
- How Page Speed has helped to improve individual sites
- mod_pagespeed’s impact on the web: a data-driven review of best practices and their measured improvements on latency and bandwidth across a large number of web sites
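For a sense of what turning this on looks like, here is a minimal mod_pagespeed configuration for Apache — a sketch only: directive names follow the mod_pagespeed documentation, but the module path varies by distribution and the right filter set is site-specific.

```apache
# Load and enable the module (path varies by distribution).
LoadModule pagespeed_module /usr/lib/apache2/modules/mod_pagespeed.so
ModPagespeed on

# A conservative starter set of rewriting filters.
ModPagespeedEnableFilters collapse_whitespace,combine_css
ModPagespeedEnableFilters extend_cache,rewrite_images
```

The filters rewrite pages as they are served, so no changes to the site’s source are required to pick up the optimizations.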
A lot of user-facing features have been added to WebPagetest over the last year, and this talk will help you make sure you get the most benefit from the tool. The presentation will also cover running your own private instances of WebPagetest and using the API for automation or integration with existing systems. You will also be introduced to some of the more advanced capabilities, from browser scripting/automation to selective content blocking, and learn how to effectively use them when analyzing a live site.