The funniest intro started the morning. They led with a YouTube clip of Conan O’Brien and Louis CK about people appreciating technology. It was absolutely hilarious. Shortly after Steve Souders talked, a guy named Jeremy Bingham from DailyKos.com came up to talk about surviving the 2008 elections. It was quite possibly the worst presentation in the history of mankind. I thought the guy was going to freeze on stage and someone would have to rescue him Tarzan-like, swinging from one end of the stage to the other. I haven’t jumped on Twitter, but I’m almost certain the Twitterati flogged him.
Then the real speaker came up: Jonathan Heiliger from Facebook, talking about Facebook scalability. A couple of takeaways from this speech:
- Facebook tackled internationalization (i18n) in a completely different way than any other company has.
- FB defines an active user as a user who comes to the site within the last 30 days.
- FB doesn’t have a QA team; ENG is responsible for all test case development, execution, and even deployment.
- They do have an Ops team that works with ENG to assist with deployment.
- FB has a suite of tools for performance; I’m still looking for documentation on this.
- FB has a performance engineering team
- One major point the speaker made: we really need to consider testing with real users, not just depending on automated performance tests.
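The real-user measurement point above can be sketched as a tiny timing harness. This is purely illustrative (the function names are my own, not anything Facebook showed); in a page, the report callback would typically fire an image beacon back to a collection server.

```javascript
// Minimal real-user timing sketch (illustrative; not Facebook's tooling).
// Times a unit of work and hands the measurement to a reporting callback.
function timeWork(work, report) {
  var start = Date.now();
  work();
  var elapsed = Date.now() - start;
  // In a browser, report might do: new Image().src = '/beacon?t=' + elapsed
  report(elapsed);
  return elapsed;
}

// Usage: measure a synthetic chunk of work and collect the sample.
var samples = [];
timeWork(function () {
  for (var i = 0, s = 0; i < 1e6; i++) { s += i; }
}, function (ms) { samples.push(ms); });
```

The value of doing this with real users is that you capture real networks, real browsers, and real cache states, which no lab-based automated test reproduces.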
Eric Schurman from the Microsoft Bing team and Jake Brutlag from Google put together a joint presentation on the effects of artificial delays injected into page responsiveness and their effects on user behavior with search engines. The two teams worked on this independently and discovered their data had a lot of similarities. They studied three things: server delays, page weight increases, and progressive rendering.
They determined that server delays ranging from 50ms to 2s had a drastic effect on behavior. Users quickly lost interest in working with the search site and often abandoned their work. They also found that page weight had little impact; they tested increases of 1.05x to 5x page size, and for higher-bandwidth users it simply didn’t make a difference.
Progressive rendering, which is based on chunked transfer encoding, provided a positive experience for users. Users felt more engaged and subsequently kept working within their application. This is definitely something we need to investigate further.
- Delays under 0.5s impact business
- The number of bytes in a response matters less than what they are and when they are sent
- Progressive rendering should be used to get quick feedback to users
- Make an investment in experimentation platforms
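Since progressive rendering rides on HTTP/1.1 chunked transfer encoding, here is a rough sketch of how a chunked body is framed on the wire (hex length, CRLF, data, CRLF, terminated by a zero-length chunk). The helper names are mine, and the length math assumes ASCII content for simplicity:

```javascript
// Frame one piece of data as an HTTP/1.1 chunk: hex length, CRLF, data, CRLF.
// (Assumes ASCII, so string length equals byte length.)
function encodeChunk(data) {
  return data.length.toString(16) + '\r\n' + data + '\r\n';
}

// A chunked body is a sequence of chunks ending with a zero-length chunk.
function encodeChunkedBody(parts) {
  var out = '';
  for (var i = 0; i < parts.length; i++) {
    out += encodeChunk(parts[i]);
  }
  return out + '0\r\n\r\n';
}

// The server can flush the first chunk (say, the <head> and page header)
// immediately, so the browser starts rendering while later chunks
// (search results, ads) are still being computed.
var body = encodeChunkedBody(['<head>...</head>', '<body>results</body>']);
```

The win is not fewer bytes; it’s that the user sees something useful sooner, which lines up with the "when they are sent" point above.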
This was just a marketing presentation about Keynote’s new product called Transaction Perspective 9. It was cool, but definitely too much marketecture.
The best part of this presentation was the opening. They showed the YouTube clip for Cool Guys Don’t Look at Explosions. It’s a must watch…
I will keep this short. One thing this guy talked about was an idea at Twitter called Whale Watching. Apparently Twitter has had some scalability issues over the past year. They try to keep their whales per second (HTTP 503 errors) below a whale threshold.
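The whales-per-second idea reduces to a trivial rate check: count the 503s in a time window and compare against a threshold. The functions and threshold below are my own illustration, not Twitter’s actual code:

```javascript
// Count HTTP 503 responses ("whales") in a window of status codes
// and express them as a per-second rate.
function whalesPerSecond(statusCodes, windowSeconds) {
  var whales = 0;
  for (var i = 0; i < statusCodes.length; i++) {
    if (statusCodes[i] === 503) { whales++; }
  }
  return whales / windowSeconds;
}

// Alert when the whale rate exceeds the allowed threshold.
function overWhaleThreshold(statusCodes, windowSeconds, threshold) {
  return whalesPerSecond(statusCodes, windowSeconds) > threshold;
}

// 3 whales in a 10-second window = 0.3 whales/sec.
var rate = whalesPerSecond([200, 503, 200, 503, 404, 503], 10);
```

The interesting part is operational, not mathematical: picking a threshold that pages someone before users start seeing the fail whale everywhere.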
Another interesting thing they have done is make their website performance completely transparent. Take a look here for an uptime report.
I’ll keep this brief as well. Page Speed is cool. It’s not a replacement for YSlow, Fiddler, or HTTPWatch. It tries to be the replacement, but fails to do just that. I see it as providing similar, yet different data to those other tools. One thing it does is optimize images for you to place back in your code. It also tells you which JS is unused and which could be deferred. It also minifies for you…
The team built in rules for determining inefficient CSS selectors. They built in the rules from David Hyatt’s CSS best practices that I talked about in yesterday’s blog. That’s pretty cool… They also have an activity panel that will soon show reflow (paint events).
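Hyatt’s rules largely follow from the fact that engines match selectors right to left, so the rightmost ("key") part of the selector matters most. Here is a toy heuristic that flags the classic offenders, descendant selectors keyed on a bare tag or the universal selector. It’s my own simplification for illustration, not Page Speed’s actual rule set:

```javascript
// Engines match CSS selectors right-to-left, so a rule like "div ul li a"
// forces an ancestor walk for every <a> on the page. This toy check flags
// descendant selectors whose key (rightmost) part is a bare tag or "*".
function isInefficientSelector(selector) {
  var parts = selector.trim().split(/\s+/);
  if (parts.length < 2) { return false; }        // no descendant combinator
  var key = parts[parts.length - 1];
  return key === '*' || /^[a-z]+$/i.test(key);   // universal or bare tag key
}
```

The practical fix is usually to put a class on the element you actually want (e.g. `.nav-link`) so the key selector narrows matching immediately.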
The three top dogs from IE, FF, and Chrome did back-to-back-to-back sessions about their browsers. It was cool, as I got to meet Christian Stockwell from the IE team in person. He worked with me on some of our Grade Center issues last year. Mike Belshe from Chrome and Christopher Blizzard from Firefox also spoke.
- IE team says they focused on layout, JScript and Networking with IE8 improvements.
- Chrome team says they focused on rendering, JS and Network with Chrome 3 improvements
- FF team says they focused on Network Performance (HTTP stack) and DNS prefetching
- IE 8 has native JSON support, raised connections from 2 to 6 and a new Selectors API
- Chrome is based on 3 processes (Browser, Renderer and Plugin)
- Chrome uses WebKit for rendering
- Chrome uses V8 as its scripting engine
- FF uses TraceMonkey (JS engine) and Gecko for DOM rendering
One last point…need to look at Chrome’s community page
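The native JSON support noted in the IE8 bullet means `JSON.parse`/`JSON.stringify` without `eval()` or a library shim. A quick sketch with a feature-detect fallback (the wrapper function and error message are my own illustration):

```javascript
// Native JSON (IE8+, and all modern browsers) avoids eval() and shims
// like json2.js. Feature-detect before relying on it in older browsers.
function parseUser(text) {
  if (typeof JSON === 'undefined' || !JSON.parse) {
    throw new Error('No native JSON; load a shim such as json2.js');
  }
  return JSON.parse(text);
}

var user = parseUser('{"name":"alice","active":true}');
```

Native parsing is both faster and safer than `eval()`, since the input is never executed as code.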
Ok… I have to admit I’ve never been on MySpace. I don’t have an account, nor have I ever logged in. I’ve seen shots from my wife’s computer and in the news, but I never made the jump. The PE team from MySpace presented a tool they wrote called MSFast. The tool looks cool… it’s a JS injector. I doubt I would use it. I’ll give it a spin when I get back and make a final judgment.
So the father of AJAX performance from Yahoo presented. Sadly, I walked away from the presentation disappointed. I think the only thing I walked away with was that IE doesn’t implement JavaScript arrays as true arrays, but rather uses linked lists internally.
- How big is our cache at Bb for the typical customer?
- Have we considered HTTP chunking?