mPulse

Thursday, December 20, 2012

Performance Trends for 2013 - Smarter Systems

IF-repair by Yo Mostro (Flickr)

Most of the trending items that I have discussed in the last two weeks are things that can be done today, problems that we are aware of and know need to be resolved. One item on my trend list, the appearance of smarter performance measurement systems, is something the WPO industry may have to wait a few years to see.

A smarter performance measurement system is one that can learn what, when, and from where items need to be monitored by analyzing the behavior of your customers/employees and your systems. A hypothetical scenario of a smarter performance measurement system at work would be in the connection between RUM and synthetic monitoring. All of the professionals in WPO claim that these must be used together, but the actual configuration relies on humans to deliver the advantages that come from these systems. If RUM/analytics know where your customers are, what they do, and when they do it, then why can't these same systems deploy (maybe even create and deploy!) synthetic tests to those regions automatically to capture detailed diagnostic data?
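
As a purely hypothetical sketch of what that hand-off from RUM to synthetic might look like - the types, thresholds, and function names below are inventions for illustration, not any vendor's API:

    // Hypothetical: derive synthetic test deployments from observed RUM traffic by region.
    interface RumRegionSummary { region: string; visitsPerDay: number; topPages: string[]; }
    interface SyntheticTest { region: string; url: string; frequencyMinutes: number; }

    function proposeSyntheticTests(summaries: RumRegionSummary[], minVisits: number): SyntheticTest[] {
      return summaries
        .filter(s => s.visitsPerDay >= minVisits)              // only regions with meaningful traffic
        .flatMap(s => s.topPages.map(url => ({
          region: s.region,
          url: url,
          frequencyMinutes: s.visitsPerDay > 10 * minVisits ? 15 : 60,  // busier regions, more frequent tests
        })));
    }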

Why do measurement systems rely on us to manually configure the defaults for measurements? Why can't we take a survey when we start with a system (and then every month or so after that) that helps the system determine the what/when/where/why/how of data and information we are looking to collect and have the system create a set of test deployment defaults and information displays that match our requirements?

The list of questions goes on, but it doesn't have to. Measurement systems have, for too long, been built to rely on expert humans to configure and interpret results. Now we have a chance to step back and ask "If we built a performance measurement system for a non-expert, what would it look like?"

More data isn't the goal of performance measurement systems - more information is what we want.

Thursday, December 6, 2012

Performance Trends 2013 - Employees are Customers too

Peter Levine from Andreessen Horowitz wrote an article on The Renaissance of Enterprise Computing yesterday that finally sprouted the seed of an idea that has been dormant at the back of my brain for a few months. While the ideas of enterprise computing and web/mobile performance seem disconnected, they're not.

When companies begin to rely on outside services (Levine mentions Box, Google Docs, and others in his article), they have given part of their infrastructure over to an outside organization. And when they do that, any performance hiccups that affect us as consumers can have a very major effect on us as employees.

Even if your company decides to purchase and deploy an enterprise application within your own infrastructure or datacenters, the performance and experience your employees encounter when using it on their desktops or mobile devices can affect productivity and effectiveness in the workplace. An unmanaged (read: unmonitored) solution can shut down groups in the company for minutes or hours.

Think of the call-center. No matter the industry you're in, two things increase customer calls: slow performance and a poor experience with the web/mobile application. Now, if your employees rely on a variant of that same web application to answer questions in the call-center, have you actually improved the customer experience or increased employee productivity?

Some considerations when managing, designing, or buying an enterprise application in the coming year:

  • What do your peers tell you about their experience implementing the solution or using an outside service - has it made employees more effective and efficient?

  • Are employees already using a "workaround" that makes them more effective and efficient? Why aren't they using the internal or mandated solution?

  • Is performance and experience a driving factor in the lack of adoption of the mandated solution?

  • Do you have clear and insightful performance information that shows when employees are experiencing issues performing critical tasks? Can you clearly understand what the root cause is?

  • Are employees experiencing issues using the application in certain browsers or on certain mobile devices? How quickly can your own teams or your outside service respond to these issues?

  • Are you reviewing the chosen solution regularly to understand how usage is changing and how this could affect the performance of the application in the future?


Performance issues do not affect only the customers you serve. Your own employees use many of the same systems and applications in their day-to-day tasks, so a primary goal of managing these applications, whether they are developed in-house or purchased as software or SaaS, should be to ensure they deliver performance and an experience that encourage employees to use them.

Tuesday, December 4, 2012

Web Performance Trends 2013 - Third Party Services

Every site has them. Whether they're for analytics, advertising, customer support, or CDN services, third-party services are here to stay. However, for 2013, I believe that these services will face a level of scrutiny that many have avoided up until now.

Recent performance trends indicate that while web site content has been tested and scaled to meet even the highest levels of traffic, the third-party services that these sites have come to rely on (with a few exceptions) are not yet prepared to handle the largest volumes of traffic that occur when many of their customers experience a peak on the same day.

In 2013, I see web site owners asking their third-party service providers to verify that their systems can handle the highest volumes of traffic on their busiest days, with an additional amount of overhead - I suggest 20% - available for growth and to absorb "super-spikes". Customer experience is built on the performance of the entire site, so leaving one component of site delivery untested (and definitely unmonitored!) leaves companies exposed to brand and reputation damage as well as performance degradation.

In your own organizations, make 2013 the year you:

  • Implement tight controls over how outside content is deployed and managed

  • Implement tight change control policies that clearly describe the process for adding third-party content to your site, including the measurement of performance impacts

  • Define clear SLAs and SLOs for your third-party content providers, including the performance levels at which their content will be disabled or removed from the site.
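
To make that last point concrete, here is a minimal sketch of what such an SLO record and "kill switch" check might look like; the field names and thresholds are illustrative assumptions, not any vendor's standard.

    // Illustrative third-party SLO record and kill-switch check; names and values are assumptions.
    interface ThirdPartySlo {
      provider: string;        // e.g. an analytics or advertising tag vendor
      targetP95Ms: number;     // performance level agreed with the provider
      disableAboveMs: number;  // level at which the tag is pulled from the page
    }

    // True when the observed 95th-percentile load time breaches the disable threshold.
    function shouldDisable(observedP95Ms: number, slo: ThirdPartySlo): boolean {
      return observedP95Ms > slo.disableAboveMs;
    }

    shouldDisable(2400, { provider: "example-tag", targetP95Ms: 800, disableAboveMs: 2000 }); // true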


When you speak to your third-party content and service providers about their plans for 2013, ask them to:

  • Explicitly detail how they handled traffic on their busiest days in 2012, and what they plan to do to effectively handle growth in 2013

  • Clearly demonstrate how they are invested in helping their customers deliver successful mobile sites and apps in 2013

  • Lay out how they will provide more transparent access to system performance metrics and what the goals of their performance strategy for 2013 are.


Take control of your third-party content. Don't let it control you.

Monday, December 3, 2012

Web Performance Trends for 2013 - Performance Optimization

As we approach the end of 2012, I will be looking at a few trends that will become important in 2013. In a previous post, I identified optimization as an important performance trend to watch. It is one of the items on a performance checklist that companies can directly influence through the design and implementation of their web and mobile sites.

The key to optimization in any organization is to think of objects transmitted to customers, regardless of where they originate, as having a cost to you and to the customer. So, a site that makes $100,000 in a day and transfers 10 million objects to customers has an object-to-revenue ratio of 100. But, if the site is optimized and only 7.5 million objects are transferred to make $100,000, that ratio goes down to 75; and if the reduction in objects causes revenue to go up to $150,000, the ratio drops to 50.
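
As a quick sanity check of that arithmetic, here is a minimal sketch of the ratio; the function name is mine, not an industry standard.

    // Object-to-revenue ratio: objects delivered per dollar of revenue (lower is better).
    function objectToRevenueRatio(objectsDelivered: number, dailyRevenue: number): number {
      return objectsDelivered / dailyRevenue;
    }

    objectToRevenueRatio(10_000_000, 100_000); // 100
    objectToRevenueRatio(7_500_000, 100_000);  // 75
    objectToRevenueRatio(7_500_000, 150_000);  // 50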

This approach is simplistic and does not include the actual cost to deliver each object, which includes costs for bandwidth, CDN services, customer service providers, etc. as well as revenue generated by third-party ads and services you present to customers. The act of balancing the cost of the site (to develop and manage), the performance you measure, the revenue you generate, the experience your customers have, and the reputation of your brand is an ongoing process that must be closely considered every time someone asks, "And if we add this to the site/app...".

There is no optimal figure for site optimization. But there are some simple rules:

  • Use Sprites where you can. Combining multiple small images into aggregated image maps that you can display with CSS gives you a double-plus good improvement - fewer objects to download and more text (HTML, JavaScript, and CSS) that can be delivered to visitors in a compressed format

  • Combine JavaScript and CSS files. Listen to your designers - they will likely try to convince you that each file needs to be separate for some arcane reason. Listen and then ask if this is the most efficient way to deploy this particular function or formatting. Ask the developer to produce a cost/benefit analysis of doing it their way versus using something that is already in place

  • Control your third-party services. This means having a sane method for managing these services, and shutting them off if necessary. Have every team that is responsible for the site meet to approve (or deny) the addition of new third-party services. And those who want a new one had better come with a strong cost/benefit analysis.


Optimization is the act of making the sites you create as effective and efficient as the business you run. No matter how "low" the cost to operate a web site is, each object on a site can cost the company more money than it is worth in revenue. And if that object slows the site down, it could turn a profitable transaction into a lost customer.

Friday, November 30, 2012

Effective Web Performance - Will A CDN Really Solve The Problem?

Content Delivery Networks (CDNs) have been available for over a decade, with companies leaning on these distributed networks to accelerate the delivery of their content, absorb the load of the peak traffic periods, and help deal with the problem of geography. During my career I have helped companies assess and evaluate CDN solutions so that they understood the performance improvement benefit they would receive for the money they were spending.

But something bothered me during these evaluations. I felt that companies were rushing into the decision because it was the "right" thing to do, that their site could only get faster if they used a CDN. This rush often left questions unasked and unanswered.

In this post, I want to provide 4 questions that you should ask before deploying a CDN, questions that will help you ensure that a CDN is the performance improvement solution you need right now.

Are the performance issues due to slow application components?


If the performance issue can be tracked down to application elements, then a CDN is likely not the immediate starting point for you and your team. If generating dynamic content or database lookups are causing the performance issue, this is on you, in your datacenter.

CDNs do not make slow applications run faster; CDNs move bits to customers faster and more efficiently.

How are we measuring page response times?


If you are measuring response times using the load time of the entire page, then you may be inflating your response times by including all of the third-party content on the page. You may want to set up parallel measurements that allow you to compare the full page with a page measurement that only includes the content you are directly responsible for managing. If you find that third-party content is slowing down your pages, then a CDN can't help you.

Full page load times only give you one performance perspective. Modern performance measurement teams need to understand how long it takes for a page to be ready and usable for the customer - the perceived rendering or page ready time. What you could discover is that customers can do 90% of what they want on your page well before all of the content fully loads. If this is true, then the perceived load time may be enough for your company to determine that a CDN isn't necessary right now.
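
If you want to collect both numbers yourself, a minimal sketch using the browser's Navigation Timing and User Timing APIs might look like the following; it assumes your team has already decided what "page ready" means and calls performance.mark("page-ready") at that point.

    // Compare the full page load with a team-defined "page ready" point.
    window.addEventListener("load", () => {
      // loadEventEnd is only populated after the load handler returns, hence the timeout.
      setTimeout(() => {
        const t = performance.timing;
        const fullPageMs = t.loadEventEnd - t.navigationStart;
        // Elsewhere, performance.mark("page-ready") is called when the primary content
        // is usable; fall back to the full load time if the mark never fires.
        const readyMark = performance.getEntriesByName("page-ready")[0];
        const perceivedReadyMs = readyMark ? Math.round(readyMark.startTime) : fullPageMs;
        // Beacon or log both numbers so the two perspectives can be compared over time.
        console.log({ fullPageMs: fullPageMs, perceivedReadyMs: perceivedReadyMs });
      }, 0);
    });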

Have we done everything to optimize our own performance?


Often companies choose the easy way (CDN) to solve a hard problem (improve performance). Unfortunately, the easy way can be more expensive than taking the time to design pages and content that ensure long-term and sustainable performance. A CDN could mask a bad design until it slows down so much that even the CDN can no longer help.

Measure your page and challenge the devops team to tweak the existing design until they don't think they can get any more performance out of it. Then, during your CDN evaluations, set up measurements that compare the performance of origin-delivered and CDN-delivered pages. You might find that making the pages as fast as you possibly can before purchasing the services of a CDN makes the difference between the origin and CDN times less dramatic than it would have been before optimization.

Can we effectively estimate if the CDN cost will be offset by an increase in revenue?


CDNs don't come cheap, so choosing to deploy one had better pay for itself over time. Challenge the CDNs you are evaluating to present you with ROI calculations that show how their service more than pays for itself, and case studies (or customers) that prove that the cost of the CDN was offset by a business metric that can be easily tracked.

Summary


Don't misunderstand the questions I am posing here: I am still a strong advocate for the use of CDNs. However, the days of companies simply purchasing the services of a CDN because it's the "right thing to do" are long over.

As companies evolve their perspective on web and mobile performance, they need to ensure they have done everything they can to make their own applications faster. Once the hard work of tuning and optimization is complete, the process of choosing a CDN must include deeper, more probing questions about the performance and business benefits that come along with the service.

 

Tuesday, November 27, 2012

Web Performance - At What Cost? Trends for 2013

As we moved through the traditional start of the holiday shopping season (Thanksgiving / Black Friday / Cyber Monday), it is clear that most sites were prepared for what was coming. No big names went down, no performance slowdowns rose to the headlines, and online revenue - both web and mobile - appears to have increased over 2011.

But when these companies do their year-end reviews, they need to take a step back and ask: "Could we have done it better?"

While performance events were few and far between (if they occurred at all), companies will need to examine the cost of scaling their sites for performance. When planning for the peak performance period, companies will need to assess whether simply scaling up to handle increased traffic and sales could have been managed more effectively, by building sites that were not only fast, but also efficient.

Joshua Bixby (here) noted that web page size has increased 20% in the last 6 months, an indication that efficiency is not always at the top of mind when new web content is presented to visitors. In order to deliver ever more complex web content, companies are spending more on services such as CDNs and cloud services to deliver their own content, while incorporating ever increasing numbers of third-party items into their pages to supply additional content and services (analytics, performance, customer service, Help Desk, and many more) that they have outsourced.

Increasing page size, outside acceleration and cloud services, and third-party services - a potent mix that companies need to assess critically, with an eye to understanding what all of these mean for the performance experienced by their visitors and customers. Add in the increasing importance of the mobile internet, with its variable connection speeds and service quality, and things become even more interesting.

In 2013, I see companies assessing these three trends with a focus on making sites perform the same (or better!) at the same (or lower!) cost than they did in 2012.

Over the next 12 months, I will be watching the performance industry news to see if those companies that have been successful at making their sites perform under the heaviest loads increasingly focus not just on speed and availability, but on efficient delivery of their entire site at a lower cost with the best user experience possible.

The key strategic questions that online businesses will be asking in 2013 will be:

  • Have we optimized our content? This does not mean make it faster, this means make it better and more efficient. It is almost absurdly easy to make a big, inefficient site fast, but it is harder to step back and "edit" the site in a way that you deliver the same content with less work - think Chevy Volt, not Cadillac Escalade.

  • Are we in control of our third-party services? Managing what services get placed on your site is only the first step. Understanding where the content you have added comes from and whether it is optimized for the heaviest shared loads will also become important checklist items for companies.

  • Can we deliver the design and functionality our customers want at a lower cost? This is the hardest one to be successful at, as each company is different. But Devops teams should be prepared to be accountable not just for cool, but also for the cost of creating, deploying, and managing a site.


image courtesy of Corey Seeman - http://www.flickr.com/photos/cseeman/

Tuesday, November 20, 2012

Managing Performance Measurement: Who uses this stuff anyway?

Clogged Pipe - staale.skaland - FLICKR

One of the least glamorous parts of managing performance measurement data is the time I have to take every month to wade through my measurements and decide which stay on and which get shut off. Since I'm the only person who uses my measurement account, this process usually takes less than 10 minutes, but can take longer if I've ignored it for too long.

With large organizations that are collecting data on multiple platforms, this process may be more involved. By the time you look at the account, the tests have likely accumulated for months and years, collecting data that no one looks at or cares about. They remain active only because no one owns the test, so no one can ask to have it disabled.

What can you do to prevent this? Adding some measurement management tasks to your calendar will help prevent Performance Cruft from clogging your information pipes.

  1. Define who can create measurements. When you examine account permissions on your measurement systems, do you find that far more people than are necessary (YMMV on this number) have measurement creation privileges? If so, why? If someone should not have the ability to create new measurements, then take the permissions away. Defining a measurement change policy that spells out how measurements get added will help you reduce the amount of cruft in your measurement system.

  2. Create no measurement without an owner. This one is relatively easy - no new measurement gets added to or maintained on any measurement system without having one or more names attached to it. Making people take responsibility for the data being collected helps you with future validations and, if your system is set up this way, with assigning measurement cost to specific team budgets. It's likely that management will make this doubly enforceable by assigning the cost of any measurement that has no owner to the performance team.

  3. Set measurement expiry dates. If a measurement will be absolutely critical during only a specific time range, then only run the measurement for that time. There is no sense collecting data for any longer than is necessary, as you have likely already archived the data you need from that time for future analysis or comparisons.

  4. Validate measurement usage monthly or quarterly. Once names have been associated to measurements, the next step is to meet with all of the stakeholders monthly or quarterly to ensure that the measurements are still meaningful to their owners. Without a program of continuous follow-through, it will take little time for the system to get clogged again.

  5. Cull aggressively. If a measurement has no owner or is no longer meaningful to its owners, disable it immediately. Keep the data, but stop the collection. If it has no value to the organization, no one will miss it. If stopping the data leads to much screaming and yelling, assign the measurement to those people and reactivate it.
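
A minimal sketch of how rules 2, 3, and 5 might be expressed in code; the record fields are hypothetical and not tied to any particular measurement product.

    // Hypothetical measurement record; the field names are illustrative only.
    interface Measurement {
      name: string;
      owners: string[];   // rule 2: at least one named owner
      expiresOn?: Date;   // rule 3: optional expiry for time-boxed measurements
      active: boolean;
    }

    // Rule 5: active measurements with no owner or a past expiry date are candidates to disable.
    function cullCandidates(measurements: Measurement[], today: Date): Measurement[] {
      return measurements.filter(m =>
        m.active && (m.owners.length === 0 ||
          (m.expiresOn !== undefined && m.expiresOn.getTime() < today.getTime()))
      );
    }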


Managing data collection is not the sexiest part of the web performance world, but admitting you have a data collection cruft problem is the first step along the path of effective measurement management.

Monday, November 12, 2012

Real User Measurement - A tool for the whole business

The latest trend in web performance measurement is the drive to implement Real User Measurement (RUM) as a component of a web performance measurement strategy. As someone who cut their teeth on synthetic measurements using distributed robots and repeatable scripts, it took me a long time to see the light of RUM, but I am now a complete convert - I understand that the richness and completeness of RUM provides data that I was blocked from seeing with synthetic data.

The key for organizations now is to realize that RUM is not a replacement for Synthetic Measurements. In fact, the two are integral to each other for identifying and solving tricky external web performance issues that can be missed by using a single measurement perspective.

My view is that the best way to drive RUM collection is to shape the metrics in a manner similar to the way you have chosen to segment and analyze your visitors in traditional web analytics. The time and effort already invested in that segmentation can inform RUM configuration by determining (a sketch of how these segments might ride along on a RUM beacon follows the list):

  • Unique customer populations - registered users, loyalty program levels, etc

  • Geography

  • Browser and Device

  • Pages and site categories visited

  • Etc.
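
As a sketch of how those segments might travel with the performance data, consider a hypothetical beacon payload like the one below; the field names mirror the analytics dimensions above and are assumptions, not any RUM product's API.

    // Hypothetical RUM beacon carrying the analytics segments alongside the timing data.
    interface RumBeacon {
      page: string;
      loadTimeMs: number;
      segment: {
        customerTier?: string;  // e.g. "registered", "gold-loyalty"
        geo?: string;           // country or region
        browser?: string;
        device?: string;
        siteCategory?: string;
      };
    }

    // Assumes a /rum collection endpoint on your own infrastructure.
    function sendRumBeacon(beacon: RumBeacon): void {
      navigator.sendBeacon("/rum", JSON.stringify(beacon));
    }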


This information needs to bleed through so that it can be linked directly to the components of the infrastructure and codebase that were used when the customer made their visit. But limiting this vast new data pool to the identification and solving of infrastructure, application, and operations issues isolates the information from a potentially huge population of hungry RUM consumers - the business side of any organization.

This side of the company, the side that fed their web analytics data into the setup of RUM, needs to now see the benefit of their efforts. By sharing RUM with the teams that use web analytics and aligning the two strategies, companies can directly tie detailed performance data to existing customer analytics. With this combination, they can begin to truly understand the effects of A/B testing, marketing campaigns, and performance changes on business success and health. But business users need a different language to understand the data that web performance professionals consume so naturally.

I don't know what the language is, but developing it means taking the data into business teams and seeing how it works for them. What companies will likely find is that the data used by one group won't be the same as the data used by the other, but there will be enough shared characteristics to allow the groups to speak a common dialect of performance with each other.

This new audience presents the challenge of clearly presenting the data in a form that is easily consumed by business teams alongside existing analytics data. Providing yet another tool or interface will not drive adoption. Adoption will be driven by attaching RUM to the multi-billion dollar analytics industry so that the value of these critical metrics is easily understood by and made actionable to the business side of any organization.

So, as the proponents of RUM in web performance, the question we need to ask is not "Should we do this?", but rather "Why aren't we doing this already?".

Tuesday, October 30, 2012

The Rule of Thirds: The Web Performance Analyst

Blurry Man - Brian Auer - http://www.flickr.com/photos/brianauer/2929494868/

Recently, there has been a big push for the Dev/Ops culture, an integrated blending of development and operations teams who work closely together to ensure that poorly performing web and mobile applications don't make it out the door. They have become the rockstars of the conference circuit and the employment boards.

I fit into neither of these categories. I have never run anything more than a couple of linux servers with Apache and MySQL. I write code because I'm curious, not because I'm good at it - in fact, I write the worst code in the world and I am willing to prove it!

I am a member of a web and mobile performance culture that is language and platform independent, to use some buzzwords.

I am a web and mobile performance consultant and analyst.

I can take apart reams of data to find statistical patterns and anomalies. I believe that averages are evil, and have believed this for more than a decade. I have been using frequency and percentile distributions for almost as long and watched as the industry finally caught up.

I can link the business issue that faces your company with the technical concerns you are facing and help guide you to the middle ground where performance and the balance sheet are in careful equilibrium.

I don't care what you write your code in. I don't care what you run it on. Now, don't get me wrong: I respect and admire the Dev/Ops folks I have met and know. I am just not in their tribe.

Tuesday, September 25, 2012

HTTP Compression - Have you checked ALL your browsers?

Apache has been my web server of choice for more than a decade. It was one of the first things I learned to compile and manage properly on linux, so I have a great affinity for it. However, there are still a few gotchas that are out there that make me grateful that I still know my way around the httpd.conf file.


HTTP compression is something I have advocated for a long time (just Googled my name and compression - I wrote some of that stuff?) as just basic common sense.

Make Stuff Smaller. Go Faster. Use Less Bandwidth. Lower CDN Charges. [Ok, I can't be sure of the last one.]

But, browsers haven't always played nice. At least up until about 2008. Since then, I can be pretty safe in saying that even the most brain-damaged web and mobile browsers could handle pretty much any compressed content we threw at them.

Oh, Apache! But where were you? There is an old rule that is still out there, buried deep in the httpd.conf file, that can shoot you in the foot. I actually caught it yesterday when looking at a site using IE8 and Firefox 8 measurement agents at work. The page measured with Firefox was about 570K, while with IE it was nearly 980K. Turns out the server was not compressing CSS and JS files sent to IE due to this little gem:
 BrowserMatch \bMSIE !no-gzip gzip-only-text/html

This was in response to some issues with HTTP Compression in IE 5 and early versions of IE6 - remember them? - and was appropriate then. Guess what? If you still have this buried in your Apache configuration (or any web server or hardware device that does compression for you), break out the chisels: it's likely your httpd.conf file hasn't been touched since the stone age.

Take. It. Out.

Your site shouldn't see traffic from any browsers that don't support compression (unless they're robots and then, oh well!), so rules that might accidentally deny compression can only cause trouble. Turn the old security ACL rule around for HTTP compression:

Allow everything, then explicitly disable compression.
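
As a rough httpd.conf sketch of that approach (assuming Apache with mod_deflate enabled; adjust the MIME types and exceptions to your own content):

    <IfModule mod_deflate.c>
        # Compress the text formats for every browser by default.
        AddOutputFilterByType DEFLATE text/html text/css application/javascript application/json
        # Explicitly exempt content that should never be compressed (the standard image example):
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip
    </IfModule>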


That should help prevent any accidents. Or higher bandwidth bills due to IE traffic.

Monday, September 17, 2012

OCSP and the GoDaddy Event

Image by vissago - http://www.flickr.com/photos/vissago/

The GoDaddy DNS event (which I wrote about here) has been the subject of many a post-mortem and water-cooler conversation in the web performance world for the last week. In addition to the many well-publicized issues that have been discussed, there was one more, hidden effect that most folks may not have noticed - unless you use Firefox.

Firefox uses OCSP lookups to validate SSL certificates. If you go to a new site and connect using SSL, Firefox has a process to check the validity of the SSL cert. The results of the lookup are cached and stored for some time (I have heard 3 days, this could be incorrect) before checking again.

Before the security wonks in the audience get upset, realize I'm not an OCSP or SSL expert, and would love some comments and feedback that help the rest of us understand exactly how this works. What I do know is that anyone who came to a site that relied on an SSL cert provided and/or signed by GoDaddy at some point in its cert validation path discovered a nasty side-effect of this really great idea when the GoDaddy DNS outage occurred: if you can't reach the cert signer, the performance of your site will be significantly delayed.

Remember this: It was GoDaddy this time; next time, it could be your cert signing authority.

How did this happen? Performing an OCSP lookup requires opening a new TCP connection so that an HTTP request can be made to the OCSP provider. A new TCP connection requires a DNS lookup. If you can't perform a successful DNS lookup to find the IP address of the OCSP host...well, I think you can guess the rest.

Unlike other third-party outages, these are not ones that can be shrugged off. These are ones that will affect page rendering by blocking the download of the mobile or web application content you present to customers.

I am not someone who can comment on the effectiveness of OCSP lookups in increasing web and mobile security. OCSP lookups in Firefox are simply one more indication of how complex the design and management of modern online applications is.

Learning from the near-disaster state and preventing it from happening again is more important than a disaster post-mortem. The signs of potential complexity collapse exist throughout your applications, if you take the time to look. And while something like OCSP may look like a minor inconvenience, when it affects a discernible portion of your Firefox users, it becomes a very large mouse scaring a very jumpy elephant.

Thursday, April 26, 2012

Web Performance: Your opinion is only somewhat relevant

Project 365 - Year 2 : Day 004 : 04/01/10 - Peter Gerdes

Context is everything. Where you stand when reading or watching something shapes the way you experience it. Just as Einstein explained to us in the Train/Platform Thought Experiment, the position of the observer dictates how the event is described and recorded.

There is no difference with web performance. When a company develops an online application and presents it to customers (it doesn't matter if they are outside/retail or inside/partner/employee), the perspective of the team that approved, created, tested, and released the application becomes, as a VP at a previous company explained to me, "interesting, but irrelevant".

Step away from the world of online application performance for a minute, and put yourself in the shoes of the customer; become a consumer. How do you feel when a site, application, or mobile app is slow to give you what you want? I'll give you some idea:
The stress levels of volunteers who took part in the study rose significantly when they were confronted with a poor online shopping experience, proving the existence of ‘Web Stress’. Brain wave analysis from the experiment revealed that participants had to concentrate up to 50% more when using badly performing websites, while EOG technology* and behavioural analysis of the subjects also revealed greater agitation and stress in these periods. ("Web Stress: A Wake Up Call for European Business", emphasis mine)

I know it comes from a competitor, but it is true. It applies to me; it applies to you. And web performance professionals need to step away from the screens for a minute and put themselves in the shoes of the people standing on the platform.

Every day, your online applications change, grow, fail, falter, and evolve - the train is always moving. To the people on the platform, all they see is your train and how it's moving compared to the other trains they have watched go by. You worked hard on your train, polishing the brass, adding new cars, even upgrading the engine. To you, the train is a magnificent achievement that everyone should admire, especially now that the new engine makes it so much faster!

The customer on the platform is measuring how your updated train is moving compared to the MAGLEV bullet train on the super-conducting rail next to you and asking "How come this train is so slow?"

The complexity of a modern web site is astounding, and improving performance by 0.4 seconds is often a feat worthy of applause...among web performance professionals. From the perspective of your customers, that 0.4 second improvement is still not enough.

Web performance is a numbers game. As an industry, we have been focused on one set of numbers for too long. The customer experience, not the stopwatch, has to drive your company to the next level of performance maturity. To do that, you have to step off your online application train and take a cold hard look at what you deliver to your customers, alongside them down on the platform.

Wednesday, February 1, 2012

Company Culture is your Company Reputation

Building on the theme from yesterday, I am now more motivated than ever by an article on the Fast Company site today: Culture Eats Strategy For Lunch

A number of books on my list this past month (Tribal Leadership and Delivering Happiness to name two) showed me just how critical a true, strong, and real culture is in allowing any organization to step beyond the brand. When a company can step beyond its brand, it has the rare opportunity to demonstrate what it means to be a great, not merely a good, company.

How do you do it? The examples are everywhere, and they all show the same thing - the company comes last.

Ok, so maybe not last, but you get the point. Doing what's right for the company (and in really bad companies, what's right for me!) has turned so many companies into examples of corporate inertia: If we keep doing this, maybe they won't hate us.

How has your company REALLY (no lip service allowed!) put the customer first today?

Can you find an example where the whole company put the customer (not A CUSTOMER) first?

Image courtesy of Jacob Nielsen

Tuesday, January 31, 2012

Still trying to brand yourself? That must hurt...

I've been enjoying the articles posted by Matthew Prince on PandoDaily from the WEF in Davos over the last week. But the one that got me in the right place at the right time is the one where he described how Paddy Cosgrave, inspired by the desire to make something happen in Ireland, created the F.ounders conference.

How did this hit me? It focused on how someone stood up and created a reputation that he can carry anywhere he goes. Not a brand; a reputation.

As I have said before: Personal Branding is all about you, closed source. Everything has to come back to the "I" that's not in team (although there is a "me", so a person can still screw up a team).

Taking what you have, and giving it to others to advance everyone, that builds reputation.

Are you building a personal brand or a personal reputation?

Tuesday, January 24, 2012

The Three Pillars of Web Performance

Had a great conversation with a colleague today. She and I were bouncing around some ideas, and I listed my top 3 topics in Web performance as "Speed, Revenue, and Experience". She was quick to correct me.

"No, not revenue, conversions".

She was right. Just last week, I talked about how critical it is to convert visitors into customers. Doing this in some businesses doesn't mean that there is any revenue, but the goal remains the same.

Speed is the one everyone thinks is the same as Web Performance. It's not. It's the don't be that guy measure of Web Performance, the one that can be easily quantified and put on display. But performance for an online application is so much more than raw speed.

Experience is the hardest of the three to measure, because what it is depends on who you ask. Is it design, flow, ease of use, clarity, or none of these things? But a fast application can still make people cranky. There are online applications that are clearly designed to make the customer do things the way the vendor demands and these are the ones that make you go "Why am I here?".

Now, can all the metrics that measure Web Performance be distilled to Speed, Conversions, and Experience? If you stepped away from the very product specific terms the Web Performance industry uses every day, what would describe the final, bottled, and served essence of Web Performance?

Web Performance: The Myopia of Speed

In February 2010, Fred Wilson spoke to the Future of Web Apps Conference. He delivered a speech emphasizing 10 things that make a Web application successful.

The one that seems to have stuck in everyone's mind is the first of these. People have focused, quoted, and written almost exclusively about number one:
First and foremost, we believe that speed is more than a feature. Speed is the most important feature.

Strong words.

Fred has worked with Web and mobile companies for many years, so he comes at this with a modicum of experience. And for years, I would have agreed with this. But Fred goes on to describe 9 other items that don't get the same Google-juice that this one quote does. There are probably 10 more that companies could come up with.

But a maniacal focus on speed means that in some companies, all else is tossed aside in pursuit of some insane, straight-line, one-dimensional goal. Some companies are likely investigating faster-than-light technologies to make the delivery of online applications even faster.

Can you base your entire business on having the fastest online application? What do you have to do to be fast?

Strip it down. Lose the weight, the bloat, the features. And what's left is a powerful beast designed to do one party trick, likely at the expense of some other aspect of the business that supports the application.

If a company focuses on a few metrics, a few key indicators, they might evolve up to NASCAR, where it is not just speed, but cornering, that matters. Only left-hand corners, mind you, but corners nonetheless. Here speed is important, but is balanced against availability and consistency to ensure that a complete view of the value of the site is understood.

But is that enough? Do your customers always want to go left in your application? What happens if you are asked to allow some customers to go right? Do all of the other performance factors that you have worked on suddenly collapse?

As you can tell, growing up means that my taste in fast cars and racing forms has evolved, become more complex. Straight-line speed, followed by multi-dimensional perspectives have led me to realize that speed is only one feature.

So, if top-fuel and stock-car racing aren't my gig, what is?

For a number of years in the 1980s and again since 2008, I have had a love of Formula One. The complexity of what these machines are trying to achieve boggles the mind.

Formula One is speed, of that there is no doubt. But there is cornering (left and right), weight distribution, brake temperature, fuel mix, traffic, uphill (and downhill, sometimes with corners!), street courses and track courses. And there are 24 answers to the same question in every race.

And then, there is a driver. In Formula One, a driver with an "inferior" car can win the day, if that inferiority is what is particularly suited to that course, in the hands of a skilled manager.

There is no doubt that like Formula One, speed is key to coming out on top. But if the organization is focused solely on speed, then your view of performance will never evolve. The key to ensuring a complete Web performance experience is a maniacal focus on a matrix of items: speed, complexity, third-parties, availability, server uptime, network reliability, design, product, supply-chain, inventory management integration, authentication, security, and on and on.

The Web application is just that: a web - multiple interdependent factors and performance indicators that must be weighed, balanced, and prioritized to succeed. No web application, no online application, fixed or mobile, will survive without speed.

However, if speed is all you have, is that enough to keep someone coming back?

Is your organization saying that speed is all there is to performance?

Monday, January 23, 2012

The Customer Investment

Who uses the products or services your company sells?

The usual answer, once you get through the marketing spin and positioning, is customers.

Companies spend a large amount of time, resources, and treasure converting prospects into customers, but where is the investment in keeping customers from becoming anti-customers?

The mobile phone business is an ideal example for this ebb and flow, a prime case study for customer investment.

I'm a T-Mobile USA customer; have been since 2004. This year, T-Mobile USA has decided that 2012 Is The Year T-Mobile Fixes Churn. Does this mean just the customers at the end of their contract or the ones leaving because of the lack of the iDevice they want?

Or will T-Mobile USA extend this churn-loss plan (Go New Year's Resolution!) proactively to all customers?

Will T-Mobile bother to personally contact (hey, with a phone call?) every one of its current customers?

Will T-Mobile ask customers who are leaving why? Not in a stupid, aggressive way, but in a way that admits that they didn't do enough for that person, but they really want to understand what went wrong.

Will T-Mobile USA take the time to invest in their customers?

Investing in customers means proactively working with them to ensure that the service they are getting:

  1. Meets the customer's current needs

  2. Is flexible enough to adjust to the evolution of the customer's business.


 

Joseph Michelli discusses the concepts of service velocity and service recovery in The Zappos Experience. These are items that companies need to consider. Customers want you to adapt and evolve to meet their post-sales needs (service velocity) and then be truthful, upfront, and solution-focused when there is a problem (service recovery). Customers want you to invest in them, in sickness and in health.

It's so much easier to keep a customer than it is to get a new one to replace them. So why are so many companies lacking focus and discipline when investing in their customers?

Saturday, January 21, 2012

Career Reform - The Changing Face of Expertise

Empty Road - William Warby

In August 2011, I took the title "consultant" off my business card after having it for eight years. It was sad to see the old friend leave, but it was for the best - for both consultant and for me.

Two years ago (22 months for those of you who are more precise), I composed two pieces on what it meant to me as I evolved out of the role of "analyst" and into the role of "consultant" (here) and how this meant developing the skills of a "selling consultant" (here). It was a heady time. I was learning a lot of new skills, meeting the challenges of a post-technical role, managing to a new level of "success".

Many things have changed since then. But the key lesson that I learned is that the career path that was in front of me was not headed in the direction I wanted to go. The true sign of this, that I ignored at the time, but which is so obvious to me now, is when I started counting down the days to my annual vacation.

Having just finished Onward and Delivering Happiness, I read that these moments come to all people. It's how they choose to face them that determines their happiness after.

Due to a series of weird misfortunes, fortune shined upon me. A new opportunity was presented to me, and I was able to use it to shape a new path forward, one I think that many maturing consultants imagined their role would look like when they started their journey.

My new role is to act as a consultant to the entire organization. And what does that mean? My goal (and I get to invent the role as I go along) is to develop and share the knowledge of the strategic use of the product line, approached from a technical and sales perspective, to help current and new members of the company not only learn the How of the product line, but the Why that motivates prospects to become company customers. I also get to see how the product plan morphs, shifts to meet new information and new ideas.

Am I happy? Yes. When I began my change from analyst to consultant, I had hoped this is where I would end up.

If I stayed the course, would I have ended up here?

Despite being a counter-factual question, I think that the answer is no. I was being squeezed, shaped, and directed by the role of consultant. I had lost control of my own career and was being driven to the next destination in a blacked-out van.

Now I have gotten out of the van, checked my bearings, and started walking in the direction I want to go in.

What's next? Well, I'm sure that in two years, I'll have something to share.

Friday, January 20, 2012

Customer Experience: The Vanishing Reviews

SJE is an excellent supporter of the online economy. However, she is also very focused on the experience she suffers through on many online retail applications. The question I get frequently from the other end of the living room (Retail and Wardrobe Management Control Center - see image) is: "Is Company X a customer? Because their site (is slow | is badly designed | doesn't work | sucks)!".

Most of the time, there isn't much to do, and the site usually responds and SJE is able to complete the task she is focused on.

Last night, however, a retailer did something that strayed into new territory. This company unwittingly affected the customer experience to such a degree that they actually destroyed the trust of a long-term customer.

This isn't good for me, as I wear a lot of fine products from this retailer. But even in my eyes, they committed a grievous sin.

This retailer decided, for reasons that are known only to them, to delete a number of negative comments, reviews, and ratings for a product that they have for sale.

I just checked, and sure enough, all of the comments, including my wife's very strong negative feedback about the quality, are gone.

I can think of a number of really devious and greedy reasons why a company might do this. It could also be an accident. If it was an accident, they might want to post a note admitting that reviews and comments for this product were accidentally lost.

Now, if you went to a retailer and saw that your comments and reviews had been deleted, how would you feel? Would you trust that retailer ever again? What would happen if the twittering masses picked up the meme and started to add fuel to the bonfire?

A strong business, a solid design, an amazing presentation, and unrivaled delivery aren't enough for some businesses. As a company, there is substantial effort, time, and treasure dedicated to converting visitors into customers. And it sometimes takes only one boneheaded move to turn a customer into the anti-customer.


Customer Experience: Standing on your own four legs

Tables. They're pretty ubiquitous. You might even be using one right now (although in the modern mobile world, you may not. LAMP POST!).

A strong business is like a table, supported by four legs.

  • The Business. The reason that resources and people have been gathered together. There is a vision of what the group wants to do and what success looks like.



  • The Design. Don't think style; think Design/Build. This is where the group takes the business idea and determines how they will make it happen, where the stores will be, what a datacenter looks like, who they will partner with.



  • The Presentation. How the Business and the Design are shown to people. How the shelves are stocked, the landing pages look, the advertising is placed, how the business looks to potential customers.



  • The Delivery. This is the critical part of how the business uses the systems they have designed and the presentation they have crafted to deliver something of value to the potential customer.


Without any one of these, an organization will fail to meet the most critical goal it has set to be successful: an experience that turns a visitor or browser into a customer.

All the Business and MBA grads in the audience are yawning, and slapping their Venti non-fat, no-whip, decaf soy lattés down on the table. This message isn't for you. Well, it is, but you can stand up and give your chair to one of the people behind you.

Now that I have Dev, QA, and Operations sitting with me (remember, the Business guys are still in the back of the room, tapping away on their Blackberries), tell me what you think of this conceptual table. How does the Table of Customer Experience relate to you?

Ok, put down the Red Bulls and Monsters and listen: Everything that Dev, QA, or Operations does has an effect on the experience (negative or positive) of the potential customer. If one of the table legs is broken (or even shorter than the others), the rippling shockwaves will eventually affect the entire operation.

So, if I were to ask the members of your organization how their daily activities supported the online application in each of these four areas, do you think they could answer?

Grab a white board. This is going to be a long day.

Picture courtesy of sashafatcat

Thursday, January 19, 2012

The Nomenclature Problem (or "What's in a name?")

Someone walks into your store. They say hello, poke around the racks, ask a few questions. Then they walk out.

Now, if I asked you, how would you describe that person?

Customer? Visitor? Yes?

I have been asking this question in preparation for some sessions for a group of motivated partners and employees in Singapore and Bangalore. As I prepare the presenter slides (not the dense textbook slides the participants get - thank you Garr Reynolds!), I keep correcting the words, typing customer to describe a visitor who is not one.

When you and your teams discuss deep topics like conversion rates and transaction abandonment (WAKE UP! NO MEDITATION!), does the group classify non-buying, real people as customers or visitors?

The label customer should be reserved for those visitors who complete the transaction and provide the revenue/information to the company whose online application they are interacting with. This means that the customer is the visitor who has bought into the entire online application experience.

A visitor becomes a customer only when they are happy with:

  • The Business

  • The Design

  • The Presentation

  • The Delivery


Where in the four areas has your application let the company down before?

If you asked a random visitor why they haven't become a customer, what do you think the typical answer would be right now? Next week? A year from now?

Then ask your parents (or your spouse, if you're brave) to use your application. You must show incredible restraint during this exercise (I suggest a remote-operated camera and 6,000 miles of separation) to stop yourself from leaping in and telling them what to do, shaping their experience and guiding them to your expected and desired outcome.

Can they do it? Would your parents or spouse become a customer?

When you look at your online applications tomorrow, use beginner's mind to truly look at what you are doing in the four key areas. If you find yourself shaking your head and saying that this doesn't make sense, put yourself in the visitors' shoes.

You may ask yourself if the application you provide to support your business is truly improving the visitor experience. What you have a strong chance of finding is that your application is designed for customers at the expense of visitors.

When a visitor doesn't complete the tasks you defined for them to reach the goal of becoming one of your customers, what do you call them?

And do you know what to do next?

Wednesday, January 18, 2012

Overcoming the Momentum of Traditional Web Performance

When I asked if traditional Web performance still mattered, the post generated a flurry of comments and questions that I haven't seen in a long time.

After some reflection and discussions with people who have been tackling this problem for longer than I have, the answer is yes, it does matter. However, synthetic Web performance measurement will not matter the way it does now. The synthetic approach will decrease in importance within fully evolved companies, organizations that have strong cultures of Web performance.

In these organizations, the questions change as the approach becomes foundational and integral to the operation of the online business. Ways of examining competition and performance improvement evolve, and the focus moves - from the perspective of We have a problem to one of Our customers / visitors have a problem.

[Diagram: The Focus of Web Performance]

The shift is fundamental and critical. For as long as I have been in the business, synthetic measurements have served as a proxy for customer experience. But unless you get into the browser, out to where and how the customer uses the online application, the margin of error will remain large.

The customer is not an operational issue. There is no technical fix for perceived performance.

There is no easy solution for evolving the experience of performance.

Image courtesy of james_gordon_los_angeles

Monday, January 2, 2012

What does success mean for you?

Talking to customers always teaches me new ways of looking at the industry I'm in. I don't talk to as many customers as I used to, but when I do, it is interesting how many companies, be they large and established or small and emerging, are focused on the problems of now.

I've talked about the different perspectives on the problems of now that I have seen in my industry (here and here), but if you ask any consultant or analyst, similar questions can be traced through any company/organization in any industry/sector.

I always look at the problems of now as a passing fad. As I answer each question, a new one arises, appearing from the ashes of the previous one. To prevent endless flailing about, dancing from question to question like a tactical pinball, I ask the most important question of any scoping process as early as I can:

What does a successful engagement look like to you?


Innocuous. Simplistic. But powerfully effective. Putting this simple question into the scoping process helps the customer explain to you how you're going to help them be successful, because they know what success means to them.

What applies in the macro can often trickle-down to the personal. I am notorious for not having a "plan" - I have driven a number of Type A (A+++?) managers to madness with my lack of a plan. But I know what success means to me; I just leave the details of how I achieve it open-ended.

Have you considered what success looks like to you?