mPulse

Friday, November 30, 2012

Effective Web Performance - Will A CDN Really Solve The Problem?

Content Delivery Networks (CDNs) have been available for over a decade, with companies leaning on these distributed networks to accelerate the delivery of their content, absorb the load of peak traffic periods, and help deal with the problem of geography. During my career I have helped companies assess and evaluate CDN solutions so that they understood the performance benefit they would receive for the money they were spending.

But something bothered me during these evaluations. I felt that companies were rushing into the decision because it was the "right" thing to do, that their site could only get faster if they used a CDN. This rush often left questions unasked and unanswered.

In this post, I want to pose four questions that you should ask before deploying a CDN - questions that will help you confirm that a CDN is the performance solution you need right now.

Are the performance issues due to slow application components?


If the performance issue can be tracked down to application elements, then a CDN is likely not the immediate starting point for you and your team. If generating dynamic content or performing database lookups is causing the performance issue, the fix is on you, in your datacenter.

CDNs do not make slow applications run faster; CDNs move bits to customers more efficiently.
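One rough way to check where the time is going is to break a page load into phases with the browser's Navigation Timing API. The sketch below is a minimal example, not a full measurement strategy; the "backend" number still includes a network round trip, so treat it as a hint rather than proof.

```typescript
// Minimal sketch using the Navigation Timing API to see where time goes.
// Run this after the load event so loadEventStart is populated.
// "backend" lumps server think-time with one network round trip, so it is
// a hint, not a precise measurement of application speed.
const t = performance.timing;

const dns      = t.domainLookupEnd - t.domainLookupStart;
const connect  = t.connectEnd - t.connectStart;
const backend  = t.responseStart - t.requestStart;   // waiting for the first byte
const download = t.responseEnd - t.responseStart;    // moving the bytes
const frontend = t.loadEventStart - t.responseEnd;   // parsing, rendering, sub-resources

console.log({ dns, connect, backend, download, frontend });
```

If the backend number dominates, the problem lives in your application or datacenter, and a CDN will not make it go away.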

How are we measuring page response times?


If you are measuring response times using the load time of the entire page, then you may be inflating your response times by including all of the third-party content on the page. You may want to set up parallel measurements that allow you to compare the full page with a measurement that includes only the content you are directly responsible for managing. If you find that third-party content is slowing down your pages, then a CDN can't help you.

Full page load times only give you one performance perspective. Modern performance measurement teams need to understand how long it takes for a page to be ready and usable for the customer - the perceived rendering or page ready time. What you could discover is that customers can do 90% of what they want on your page well before all of the content fully loads. If this is true, then the perceived load time may be enough for your company to determine that a CDN isn't necessary right now.
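If you collect your own browser-side data, one way to build that parallel measurement is to split resource timings by hostname and to record your own "page ready" point. The sketch below assumes the Resource Timing and User Timing APIs are available, and FIRST_PARTY_HOSTS is an illustrative list you would maintain yourself - none of this is a specific vendor's feature.

```typescript
// Sketch: separate first-party from third-party resource time and record a
// custom "page ready" mark. Run after the load event for complete data.
// FIRST_PARTY_HOSTS is a made-up constant listing the domains you manage.
const FIRST_PARTY_HOSTS = ['www.example.com', 'static.example.com'];

const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

let firstPartyLastByte = 0;
let thirdPartyLastByte = 0;

for (const r of resources) {
  const host = new URL(r.name).hostname;
  if (FIRST_PARTY_HOSTS.indexOf(host) !== -1) {
    firstPartyLastByte = Math.max(firstPartyLastByte, r.responseEnd);
  } else {
    thirdPartyLastByte = Math.max(thirdPartyLastByte, r.responseEnd);
  }
}

// Call this from whatever logic decides the page is usable for the customer.
performance.mark('page-ready');

console.log({ firstPartyLastByte, thirdPartyLastByte });
```

Comparing the first-party number against the full page load makes it clear how much of the slowdown a CDN could even touch.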

Have we done everything to optimize our own performance?


Often companies choose the easy way (CDN) to solve a hard problem (improve performance). Unfortunately, the easy way can be more expensive than taking the time to design pages and content that ensure long-term and sustainable performance. A CDN could mask a bad design until it slows down so much that even the CDN can no longer help.

Measure your page and challenge the devops team to tweak the existing design until they don't think they can get any more performance out of it. Then, during your CDN evaluations, set up measurements that compare the performance of origin-delivered and CDN-delivered pages. You might find that making the pages as fast as you possibly can before purchasing the services of a CDN makes the difference between the origin and CDN times less dramatic than it would have been before optimization.
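One simple way to run that origin-versus-CDN comparison during an evaluation is to fetch the same asset from both hostnames and compare the medians. The sketch below assumes a recent Node.js runtime with a global fetch; both URLs are placeholders for your own.

```typescript
// Sketch: compare median fetch time for the same asset served from the origin
// and from a CDN hostname. Both URLs are placeholders.
const ORIGIN_URL = 'https://origin.example.com/assets/app.js';
const CDN_URL    = 'https://cdn.example.com/assets/app.js';

async function medianFetchMs(url: string, samples = 9): Promise<number> {
  const times: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    const res = await fetch(url, { cache: 'no-store' });
    await res.arrayBuffer();                     // force the full download
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return times[Math.floor(times.length / 2)];
}

async function main() {
  console.log('origin (ms):', await medianFetchMs(ORIGIN_URL));
  console.log('cdn    (ms):', await medianFetchMs(CDN_URL));
}

main();
```

Run the comparison from more than one location; the gap that matters is the one your customers actually see.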

Can we effectively estimate if the CDN cost will be offset by an increase in revenue?


CDNs don't come cheap, so a decision to deploy one had better pay for itself over time. Challenge the CDNs you are evaluating to present you with ROI calculations that show how their service more than pays for itself, and with case studies (or customers) that prove that the cost of the CDN was offset by a business metric that can be easily tracked.
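Even before the vendors produce their numbers, you can frame the question yourself with some back-of-the-envelope arithmetic. Every figure in the sketch below is a placeholder to be replaced with your own traffic, conversion, and pricing data.

```typescript
// Back-of-the-envelope CDN ROI sketch; every number is a placeholder.
const monthlyCdnCost     = 15_000;      // USD per month (assumed quote)
const monthlySessions    = 2_000_000;
const baselineConversion = 0.020;       // 2.0% of sessions convert today
const expectedConversion = 0.021;       // assumed lift after deploying the CDN
const averageOrderValue  = 85;          // USD

const extraOrders  = monthlySessions * (expectedConversion - baselineConversion);
const extraRevenue = extraOrders * averageOrderValue;

console.log(`Extra revenue:       $${extraRevenue.toFixed(0)}`);
console.log(`CDN cost:            $${monthlyCdnCost}`);
console.log(`Net monthly impact:  $${(extraRevenue - monthlyCdnCost).toFixed(0)}`);
```

If the only way the numbers work is with an aggressive assumed conversion lift, that is itself an answer.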

Summary


Don't misunderstand the questions I am posing here: I am still a strong advocate for the use of CDNs. However, the days of companies simply purchasing the services of a CDN because it's the "right thing to do" are long over.

As companies evolve their perspective on web and mobile performance, they need to ensure they have done everything they can to make their own applications faster. Once the hard work of tuning and optimization is complete, the process of choosing a CDN must include deeper, more probing questions about the performance and business benefits that come along with the service.

 

Tuesday, November 27, 2012

Web Performance - At What Cost? Trends for 2013

Having moved through the traditional start of the holiday shopping season (Thanksgiving / Black Friday / Cyber Monday), it is clear that most sites were prepared for what was coming. No big names went down, no performance slowdowns rose to the headlines, and online revenue - both web and mobile - appears to have increased over 2011.

But when these companies do their year-end reviews, they need to take a step back and ask: "Could we have done it better?"

While performance events were few and far between (if they occurred at all), companies will need to examine the cost of scaling their sites for performance. When reviewing the peak period, they will need to assess whether simply scaling up to handle increased traffic and sales could have been managed more effectively by implementing sites that were not only fast, but also efficient.

Joshua Bixby (here) noted that web page size has increased 20% in the last 6 months, an indication that efficiency is not always top of mind when new web content is presented to visitors. In order to deliver ever more complex web content, companies are spending more on services such as CDNs and cloud hosting to deliver their own content, while incorporating ever-increasing numbers of third-party items into their pages to supply additional content and services (analytics, performance, customer service, help desk, and many more) that they have outsourced.

Increasing page size, outside acceleration and cloud services, and third-party services - a potent mix that companies need to assess critically, with an eye to understanding what all of these mean for the performance experienced by their visitors and customers. Add in the increasing importance of the mobile internet, with its variable connection speeds and service quality, and things become even more interesting.

In 2013, I see companies assessing these three trends with a focus on making sites perform the same (or better!) at the same (or lower!) cost as they did in 2012.

Over the next 12 months, I will be watching the performance industry news to see if those companies that have been successful at making their sites perform under the heaviest loads increasingly focus not just on speed and availability, but on efficient delivery of their entire site at a lower cost with the best user experience possible.

The key strategic questions that online businesses will be asking in 2013 will be:

  • Have we optimized our content? This does not mean make it faster; it means make it better and more efficient. It is almost absurdly easy to make a big, inefficient site fast, but it is harder to step back and "edit" the site in a way that delivers the same content with less work - think Chevy Volt, not Cadillac Escalade.

  • Are we in control of our third-party services? Managing what services get placed on your site is only the first step. Understanding where the content you have added comes from and whether it is optimized for the heaviest shared loads will also become important checklist items for companies.

  • Can we deliver the design and functionality our customers want at a lower cost? This is the hardest one to be successful at, as each company is different. But devops teams should be prepared to be accountable not just for cool, but also for the cost of creating, deploying, and managing a site.


image courtesy of Corey Seeman - http://www.flickr.com/photos/cseeman/

Tuesday, November 20, 2012

Managing Performance Measurement: Who uses this stuff anyway?

Clogged Pipe - staale.skaland - Flickr

One of the least glamorous parts of managing performance measurement data is the time I have to take every month to wade through my measurements and decide which stay on and which get shut off. Since I'm the only person who uses my measurement account, this process usually takes less than 10 minutes, but can take longer if I've ignored it for too long.

With large organizations that are collecting data on multiple platforms, this process may be more involved. By the time you look at the account, the tests have likely been accumulating for months or years, collecting data that no one looks at or cares about. They remain active only because no one owns the test, so no one can ask for it to be disabled.

What can you do to prevent this? Adding some measurement management tasks to your calendar will help prevent Performance Cruft from clogging your information pipes.

  1. Define who can create measurements. When you examine account permissions on your measurement systems, do you find that far more people than are necessary (YMMV on this number) have measurement creation privileges? If so, why? If someone should not have the ability to create new measurements, then take the permissions away. Defining a measurement change policy that spells out how measurements get added will help you reduce the amount of cruft in your measurement system.

  2. Create no measurement without an owner. This one is relatively easy - no new measurement gets added to or maintained on any measurement system without having one or more names attached to it. Making people take responsibility for the data being collected helps you with future validations and, if your system is set up this way, with assigning measurement cost to specific team budgets. It's likely that management will make this doubly enforceable by assigning the cost of any measurement that has no owner to the performance team.

  3. Set measurement expiry dates. If a measurement will be absolutely critical only during a specific time range, then only run the measurement for that time. There is no sense collecting data for any longer than is necessary, as you have likely already saved the data you need from that time for future analysis or comparisons (see the sketch after this list for one way to track owners and expiry dates).

  4. Validate measurement usage monthly or quarterly. Once names have been associated to measurements, the next step is to meet with all of the stakeholders monthly or quarterly to ensure that the measurements are still meaningful to their owners. Without a program of continuous follow-through, it will take little time for the system to get clogged again.

  5. Cull aggressively. If a measurement has no owner or is no longer meaningful to its owners, disable it immediately. Keep the data, but stop the collection. If it has no value to the organization, no one will miss it. If stopping the data leads to much screaming and yelling, assign the measurement to those people and reactivate.
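For teams that want to automate steps 2 through 5, even a tiny registry of measurements with owners and expiry dates makes the periodic review much faster. The sketch below is illustrative only - the registry format and field names are assumptions, not any vendor's schema.

```typescript
// Sketch of a measurement registry audit. Field names are assumptions.
interface Measurement {
  id: string;
  name: string;
  owners: string[];    // people or teams responsible for this measurement
  expires?: string;    // ISO date after which it should be disabled
}

// Flag measurements with no owner or with an expiry date in the past.
function findCruft(registry: Measurement[], today: Date): Measurement[] {
  return registry.filter(m =>
    m.owners.length === 0 ||
    (m.expires !== undefined && new Date(m.expires) < today)
  );
}

const registry: Measurement[] = [
  { id: 'hp-01',    name: 'Homepage load',      owners: ['ops-team'] },
  { id: 'promo-07', name: 'Holiday promo page', owners: ['marketing'], expires: '2012-12-31' },
  { id: 'old-42',   name: 'Legacy checkout',    owners: [] },
];

console.log(findCruft(registry, new Date('2013-01-15')).map(m => m.id));
// -> [ 'promo-07', 'old-42' ]
```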


Managing data collection is not the sexiest part of the web performance world, but admitting you have a data collection cruft problem is the first step along the path of effective measurement management.

Monday, November 12, 2012

Real User Measurement - A tool for the whole business

The latest trend in web performance measurement is the drive to implement Real User Measurement (RUM) as a component of a web performance measurement strategy. As someone who cut their teeth on synthetic measurements using distributed robots and repeatable scripts, it took me a long time to see the light of RUM, but I am now a complete convert - I understand that the richness and completeness of RUM provides data that I was blocked from seeing with synthetic measurements alone.

The key for organizations now is to realize that RUM is not a replacement for synthetic measurements. In fact, the two are integral to each other for identifying and solving tricky external web performance issues that can be missed by using a single measurement perspective.

My view is that the best way to drive RUM collection is to shape the metrics along the same lines you have chosen to segment and analyze your visitors with traditional web analytics. The time and effort already invested there can inform RUM configuration by determining the following (a minimal beacon sketch follows the list):

  • Unique customer populations - registered users, loyalty program levels, etc.

  • Geography

  • Browser and Device

  • Pages and site categories visited

  • Etc.
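To make that segmentation concrete, the RUM beacon itself can carry the same dimensions your analytics team already uses. The sketch below is a generic illustration - the field names and the /beacon endpoint are assumptions, not a particular RUM product's API.

```typescript
// Sketch of a RUM beacon that carries analytics-style dimensions.
// Field names and the /beacon endpoint are illustrative assumptions.
interface RumBeacon {
  url: string;
  loadTimeMs: number;
  pageGroup: string;        // e.g. "product-detail", "checkout"
  customerSegment: string;  // e.g. "registered", "loyalty-gold", "anonymous"
  country: string;
  device: string;
}

function sendRumBeacon(beacon: RumBeacon): void {
  // navigator.sendBeacon avoids blocking the page while the data is sent.
  navigator.sendBeacon('/beacon', JSON.stringify(beacon));
}

// Send after the load event so the timing fields are populated.
const t = performance.timing;
sendRumBeacon({
  url: location.href,
  loadTimeMs: t.loadEventStart - t.navigationStart,
  pageGroup: 'product-detail',      // derived from your own page grouping rules
  customerSegment: 'loyalty-gold',  // derived from your own session data
  country: 'CA',
  device: 'mobile',
});
```

Because each beacon carries the same dimensions the business already reports on, the performance data can be sliced in the reports they already read.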


This information needs to bleed through so that it can be linked directly to the components of the infrastructure and codebase that were used when the customer made their visit. But to limit this vast new data pool to the identification and solving of infrastructure, application, and operations issues isolates the information from a potentially huge population of hungry RUM consumers - the business side of any organization.

This side of the company, the side that fed its web analytics data into the setup of RUM, now needs to see the benefit of that effort. By sharing RUM with the teams that use web analytics and aligning the two strategies, companies can directly tie detailed performance data to existing customer analytics. With this combination, they can begin to truly understand the effects of A/B testing, marketing campaigns, and performance changes on business success and health. But business users need a different language to understand the data that web performance professionals consume so naturally.

I don't know what that language is, but developing it means taking the data to business teams and seeing how it works for them. What companies will likely find is that the data used by one group won't be the same as for the other, but there will be enough shared characteristics to allow the groups to speak a common dialect of performance with each other.

This new audience presents the challenge of clearly presenting the data in a form that is easily consumed by business teams alongside existing analytics data. Providing yet another tool or interface will not drive adoption. Adoption will be driven by attaching RUM to the multi-billion dollar analytics industry so that the value of these critical metrics is easily understood by and made actionable for the business side of any organization.

So, as the proponents of RUM in web performance, the question we need to ask is not "Should we do this?", but rather "Why aren't we doing this already?".