mPulse

Wednesday, December 17, 2008

SLA: The myth of simplicity

Service Level Agreements. SLAs.

Three of the most contentious words, and the most contentious acronym, in the technology sector. Arguments are had, suits are filed, and relationships are broken and strained as a result of this single concept.

How can something as seemingly simple as setting an agreed-upon level of service delivery be so problematic and misunderstood?

The word agreement is the key to the problem. SLAs assume that all parties understand and agree on the level of service. And how that information is to be reported. And who is responsible for reporting the data. And how long you have to file grievances. And who handles problems. And...well, lawyers are involved.

As Guy Kawasaki states regarding the lies of venture capitalists: there is no such thing as a vanilla term sheet.

There is also no such thing as a vanilla SLA. A company that tries to present you with a standardized SLA is trying to pull something over on you.

Some rules about SLAs.

  1. The vendor does not define the SLA. If the vendor selling the product tells you, the customer, what your expected level of service is, then they don't care about you. Find another vendor.

  2. The customer does not define the SLA. If the customer tells you that they cannot sign an SLA unless you, the vendor, agree to their conditions, walk away from the deal.

  3. An SLA is not an SLO. Service Level Objectives are the targets of success defined by both parties within the SLA. These numbers, however, are not the alpha and the omega of an SLA.

  4. A customer-initiated penalty condition is always in the vendor's favor. If the vendor states that the client must initiate the SLA grievance conversation when SLOs are violated, then the vendor is assuming that you are not looking at the data.

  5. SLOs should never be based on single, aggregated metrics from the data. If some bozo tries to say that they provide 99% availability and 3-second average performance, walk away. That is not an SLO (see the sketch after this list).

  6. SLAs are not set in stone. If something is not working, or if targets change, or anything changes, then the parties have to be willing to sit down on a schedule (defined in the SLA) and renegotiate their SLA.

  7. The vendor and the customer have transparent access to the data used for the SLO. If the customer cannot see the data that the vendor is using in the SLO anytime it wants, there will always be a level of mistrust. If you like having all your customers mistrust you, this is a great strategy.

  8. The Problem and Issue Management processes are clearly defined. When something bad happens, or a change needs to be made, the customer and the vendor have to have very clearly defined roles in the process. Responsibility and trust. Do you have that in your current SLA?

  9. The customer and the vendor decide when a problem or issue is resolved. It is not up to one side in an SLA to decide when an issue or problem is resolved. As there are likely penalties involved the longer the abnormal state exists, the vendor has a vested interest in quick resolution. As there is likely lost revenue on the table, the customer has the same interest. But the customer also has the seemingly unreasonable idea that this will never happen again, that it will be clearly documented, and that getting the right solution is better than getting a solution.

  10. Communication is the key to a good SLA. In the 9 previous points, the emphasis is on communication, the sharing of information. Current SLAs seem to be designed to hide information from each side, and only release it under the most dire circumstances. People talk. The information will get out. Do you want your well-crafted brand to implode because you have a reputation for being sneaky and untrustworthy?
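
To make point 5 concrete, here is a quick sketch in Python of the difference between a single aggregated number and a distribution-aware SLO check. The measurement records, field layout, and thresholds are all made up for illustration; they are not from any real SLA or vendor.

```python
import statistics

# Hypothetical per-measurement records: (response_seconds, succeeded)
measurements = [(1.8, True), (2.1, True), (9.4, True), (2.0, False),
                (1.9, True), (8.7, True), (2.2, True), (2.3, True)]

times = [t for t, _ in measurements]
availability = sum(ok for _, ok in measurements) / len(measurements)

# The "bozo" numbers: one average for the whole period hides the slow tail.
average = statistics.mean(times)

# A distribution-aware SLO looks at percentiles per measurement period
# (and, in practice, per region and per business-process step).
p95 = statistics.quantiles(times, n=20)[-1]  # 95th percentile

print(f"availability={availability:.1%}  average={average:.2f}s  p95={p95:.2f}s")
# The average can look acceptable while the 95th percentile exposes the tail --
# which is why a single aggregated metric is not an SLO.
```

The point is not the arithmetic; it is that an SLO has to name the percentile, the measurement period, and the population being measured before the number means anything.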


I've likely missed many of the key points, but these are the ones that I see, from both sides of the field, on a pretty regular basis.

In the end, an SLA is not simple. It is not standardized. It is not defined by one side or the other. It is a negotiated treaty of behavior that, in the end, defines the daily operational relationship between two organizations. If you enter an SLA process with both sides trying to find the best way to work together in the long term, there is a good chance that the SLA will be easier than if you go in as stone-cold adversaries.

Thursday, December 11, 2008

Upgrade to 2.7 Complete

Newest Industry is now upgraded to WordPress 2.7. You will notice no changes. I like that!

Monday, December 8, 2008

Why Web measurements? The Series.

In my life as a consultant, I often discuss what Web performance data means and how to interpret it to solve problems. Solving the problems, however, depends inherently on whether the data that is collected is meaningful. In trying to find data that is meaningful, we have found that Web performance measurements fall into four categories: Customer Generation, Customer Retention, Business Operations, and Technical Operations.

Customer Generation


How can you use Web performance measurement data to outperform your competition and impress your prospects? Read it here!

Customer Retention


Impress your customers with your skill and responsiveness, and keep the competition from sneaking in the back door. Read it here!

Business Operations


Know how you are doing against your competition and prioritize what you need to do to stay ahead. Read it here!

Technical Operations


Know what to measure and how often to keep a detailed eye on your internal systems and external performance. Read it here!

Why Web Measurements? Part IV: Technical Operations

In the first three parts of this series, the focus has been on the business side of the business: Customer Generation, Customer Retention, and Business Operations. The final component of any discussion of why companies measure their Web performance comes down to Technical Operations.

Why is Technical Operations last?


This part of the conversation is the last, mainly because it is the most mature. A technical audience will understand the basics of a distributed Web performance measurement system, or a Web analytics system, or a QA testing tool without too much explanation. The problems that these tools solve are well-defined and have been around for many years.

Quickly thinking about these types of problems makes it clear, however, that the kind of data needed in a technical operations environment is substantially different than that which is needed at the Business Operations level. Here, the devil is in the details; at Business Operations, the devil is in the patterns and trends.

What are you trying to measure?


The short answer is that a Technical Operations team is trying to measure everything. More data is better data at this level. The key is the ability to correlate multiple sources of system inputs (Web performance data, systems data, network data, traffic data, database queries, etc.) to detect the patterns of behavior which could indicate an impending crisis, a complete system outage, or simply slower-than-expected response times during peak business hours.
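
As a toy illustration of that correlation step, the sketch below lines up per-minute Web response times with a server-side metric and computes a simple correlation coefficient. The metric names, timestamps, and values are invented for the example, and a real team would do this across far more signals with proper tooling; this only shows the shape of the idea.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical per-minute samples keyed by "HH:MM"
response_time = {"09:00": 2.1, "09:01": 2.3, "09:02": 4.8, "09:03": 5.2, "09:04": 2.2}
db_connections = {"09:00": 40, "09:01": 45, "09:02": 180, "09:03": 195, "09:04": 50}

# Align the two series on the minutes they have in common
minutes = sorted(response_time.keys() & db_connections.keys())
x = [response_time[m] for m in minutes]
y = [db_connections[m] for m in minutes]

# A high coefficient is a hint that the database pool deserves a closer look
# when response times spike; it is a starting point, not a diagnosis.
print(f"correlation(response time, db connections) = {correlation(x, y):.2f}")
```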

And while Technical Operations teams thrive on data, they are not always good at explaining this data to others. So the metrics which are important in one organization may not be the key ones in another. Or they may be called by a completely different name. Which is why Technical Operations teams sigh and throw up their hands in despair when talking to management who are working from Business Operations data.

How do you measure it?


Measure early. Measure often.

This sums up the philosophy of most Technical Operations teams. They want to gather as much data as possible. So much data that the gathering of it is often one step away from affecting the performance of their own systems. This is how the scientific mind works. So, be prepared to balance this urge to measure and instrument everything against the need to ensure that the system is operationally sound.

Summary


Even in the well-developed area of Technical Operations, there is still opportunity to ensure that you are measuring the right things the right way. Do an audit of your measurements. Ask the question "why do we measure this, this way?".

Measure meaningful things in a meaningful way.

SEESMIC: Leopards of the Living Room

Jungle cats of the living room: http://seesmic.com/embeds/wrapper.swf

Friday, December 5, 2008

Why Web Measurements? Part III: Business Operations

In the Customer Generation and Customer Retention articles of this series, the focus was on Web performance measurements designed to serve an audience outside of your organization. Starting with Business Operations, the focus shifts toward the use of Web performance measurements inside your organization.

Why Business Operations?


When I was initially developing these ideas with my colleague Jean Campbell, the idea was to call this section Reporting and Quality of Service. What we found was that this didn't completely encompass all of the ideas that fall under these measurements. The question became: which part of the organization do reporting and QoS measurements serve?

What was clear was that these were the metrics that reported on the health of the Web service to management and the company as a whole. This was the measurement data that the line of business tied to revenue and analytics data to get a true picture of the health of the online business.

What are you measuring?


Measurements for business operations need to capture the key metrics that are critical for making informed business decisions.

  • How do we compare to our competitors?

  • Are we close to breaching our SLAs?

  • Are the third-parties we use close to breaching their SLAs?

  • What parts of the site affect performance / user experience the most so we can set priorities?

  • How does Web performance correlate with all the other data we use in our online business?


Every company will use different measures to capture this information, and correlate the data in different ways. The key is that you do use it to understand how Web performance ties into the line of business.

How often do I look at it?


Well, honestly, most people who work in business operations only need to examine Web performance once a day in a summary business KPI report (your company has a useful daily KPI report that everyone understands and uses, right?), and in greater detail at weekly and monthly management meetings.

The goal of the people examining business operations data is not to solve the technical problems that are being encountered, but to understand how the performance of their site affects the general business health of the company, and how it plays in the competitive marketplace.

What metrics do I need?


Business operations teams need to understand (a rough reporting sketch follows this list):

  • End-to-end response time for measured business processes

  • Page-level response times for measured business processes

  • Success rate of the transaction during the measurement period

  • How third-parties are affecting performance

  • How Web analytics and Web performance relate

  • How different regions are affected by performance

  • How performance looks from customer ISPs and desktops
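
Here is the rough reporting sketch mentioned above: a few of these numbers rolled up from hypothetical transaction measurements into the kind of summary a daily KPI report might carry. The record layout and step names are assumptions made up for the example, not any vendor's data format.

```python
# Hypothetical transaction measurements: one dict per measurement run
runs = [
    {"steps": {"home": 1.2, "search": 2.4, "checkout": 3.1}, "success": True},
    {"steps": {"home": 1.4, "search": 2.1, "checkout": 2.9}, "success": True},
    {"steps": {"home": 1.3, "search": 6.8, "checkout": 0.0}, "success": False},
]

success_rate = sum(r["success"] for r in runs) / len(runs)
good_runs = [r for r in runs if r["success"]]
end_to_end = [sum(r["steps"].values()) for r in good_runs]

# Page-level averages across successful runs, to show which step to prioritize
page_avg = {page: sum(r["steps"][page] for r in good_runs) / len(good_runs)
            for page in runs[0]["steps"]}

print(f"success rate: {success_rate:.0%}")
print(f"average end-to-end time: {sum(end_to_end) / len(end_to_end):.1f}s")
print("average page-level times:", {p: round(t, 1) for p, t in page_avg.items()})
```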


Detailed technical data is lost on this audience. Their role is to take all of the data they have, present a picture of the application as it affects the business, and frame the technical challenges they face in terms of how those challenges affect the business.

Summary


For people who work at an extremely detailed level with Web measurement data (the topic for the next part of this series), Business Operations metrics seem light, fluffy, and often meaningless. But these metrics serve a distinct audience: the people who run the company. Frankly, if the senior business leaders at an organization were worried on a daily basis about the minute technical details that go into troubleshooting and diagnosing performance issues, I would be concerned.

The objective of Business Operations measurements is to convey the health of the Web systems that support the business, and correlate that health with other KPIs used by the management team.

Tuesday, December 2, 2008

Why Web Measurements? Part II: Customer Retention

In the first part of this series, using Web performance measurements to generate new customers was the topic. This article focuses on using the same data to keep the customers you have, and make them believe in the value of your service.

Proving the Point


Getting a customer is the exciting and glamorous work. Resources are often drawn from far and wide in an organization to win over a prospect and make them a customer.

Once the deal is done, the day-to-day business of making the customer believe that they are getting what they paid for is the job of the ongoing benchmarking measurements. CDNs and third-party services need to prove that they are delivering the goods, and this can only be done with an agreed-upon measurement metric.

Some people leap right into an SLA / SLO discussion. As a Web performance professional, I can tell you that there are few SLAs that are effective, and even fewer that are enforceable.

Start with what you can prove. Was the performance that was shown to me during the pre-sales process a fluke, or does it represent the true level of service that I am getting for my money?

Measure Often and Everywhere


The Web performance world has become addicted to the relatively clean and predictable measurements that originate from high-quality backbone measurement locations. This perspective can provide a slightly unrealistic view of the Web world.

How many times have you heard from the people around you about site X (maybe this is your site) behaving badly or unpredictably from home connections? Why, when you examine the Web performance data from the backbone, doesn't this show up?

Web connections to the home are unpredictable, unregulated, and have no QoS target. They are definitely best effort. This is especially true in the US, where there is no incentive (some would say that there is a barrier) to delivering the best quality performance to the home. But that is where the money is.

As a service provider, you had better be willing to show that your service is able to surmount these obstacles and deliver Web performance advantages at both the Last Mile and the Backbone.

Don't ever base SLAs on Last Mile data - this is Web performance insanity. But be ready to prove that you can deliver high quality performance everywhere.

Show me the data


As a customer of your service, I expect you to show me the measurements that you are collecting. I expect you to be honest with me when you encounter a problem. I do not want to hear or see your finger-pointing, especially when you try to push the blame for any performance issues back to me.

As a service provider, you live and die by the Web performance data. And if you see something in the data, not related to your business, but that could make my site faster and better, tell me about it.

Remember that partnership you sold me on during the Customer Generation phase? Show it to me now. If you help me get better, this will be added to the plus column on the decision chart at renewal time, when your competitor comes knocking on my door with a lower price and Web performance data that shows how much you suck.

Shit Happens. Fess up.


The beauty of Web performance measurement is that your customers can replicate exactly the same measurements that you run on their behalf. And, they may actually measure things that you hadn't thought about.

And sure as shooting, they will show up at a meeting with your team one day with data that shows that your service FUBAR'd on a massive scale.

It's the Internet. Bad shit happens on the Internet. I've seen it.

If you can show them that you know about the problem, explain what caused it, how you resolved it, and how you are working to prevent it, good.

Better: Call them when the shit happens. Let them know that you know about the problem and that you have a crack team of Web performance commandos deployed worldwide to resolve the problem in non-relativistic time. Blog it. Tweet it. Put a big ol' email in their inbox. Call your primary contact, and your secondary contact, and your tertiary contact.

Fess up. You can only hide so much before your customers start talking. And the last thing you want is prospects seeing your existing customers talking smack about your service.

Summary


Web performance measurement doesn't go away the second you close the deal. In fact, the process has only just begun. It is a crazy, competitive world out there. Be prepared to show that you're the best and that you aren't perfect every single day.

Monday, December 1, 2008

GrabPERF: What and Why

Why GrabPERF?


About four years ago, I had a bright idea that I would like to learn more about how to build and scale a small Web performance measurement platform. I've worked in the Web performance industry for nearly a decade now, and this was an experimental platform for me to examine and encounter many of the challenges that I see on a daily basis.

The effort was so successful and garnered enough attention during the initial blogging boom that I was able to sell the whole platform for a tiny (that is not a typo) sum to Technorati.

The name is taken from another experimental tool I wrote called GrabIT2 which uses the PHP cURL libraries to capture timings and HTML data for HTTP requests. It is an extension of my articles and writings on Web performance that started at Webperformance.org, and that have since moved to this blog.

What is GrabPERF?


GrabPERF is a multi-location measurement platform, based on PERL, cURL, PHP, and MySQL, that is designed to (a minimal sketch of a single measurement follows the list):

  • Measure the base HTML or a single-object target using HTTP or HTTPS

  • Report the data to a central database (located in the San Francisco Area)

  • Report the data using a GUI or through text-based download
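
As promised above, here is a minimal sketch of what a single GrabPERF-style measurement step might look like. The production agents are PERL scripts talking to the central MySQL database; this is a Python approximation using the pycurl bindings, with the reporting step left out.

```python
import pycurl
from io import BytesIO

def measure(url):
    """Fetch a single object and return cURL's timing breakdown."""
    body = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEDATA, body)
    c.setopt(pycurl.FOLLOWLOCATION, True)
    c.perform()
    record = {
        "status":     c.getinfo(pycurl.RESPONSE_CODE),
        "dns":        c.getinfo(pycurl.NAMELOOKUP_TIME),
        "connect":    c.getinfo(pycurl.CONNECT_TIME),
        "first_byte": c.getinfo(pycurl.STARTTRANSFER_TIME),
        "total":      c.getinfo(pycurl.TOTAL_TIME),
        "bytes":      len(body.getvalue()),
    }
    c.close()
    return record

if __name__ == "__main__":
    # In the real system, this record would be reported to the central database.
    print(measure("http://www.example.com/"))
```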


Why not Full Pages with all Objects?


Reason 1: I work for a company that already does that. Lawyers and MBAs among you, do the math.

Reason 2: I am an analyst, not a programmer. The best I can say about my measurement script is that it is a hack job.

Why is the GrabPERF interface so clunky?


See reason 2 above.

If you want to write your own interface to the data, let me know.

Why has the interface not changed in nearly three years?


The current interface works. It's simple, clean, and delivers the data that I and the regular users need to analyze performance issues. If there is something more that you would like to see, let me know!

I like what I see. How can I host a measurement location?


Just contact me, and I can provide you with a list of PERL modules you will need to install on your Linux server. In return, I need the static IP address of the machine hosting the measurement agent.

How stable is GrabPERF?


Most of the time, I forget it's even running. I have logged onto the servers, typed in uptime, and discovered that it has been six months or more since the servers were rebooted.

It was designed to be simple, because that's all I know how to do. The lack of complexity makes it effectively self-managing.

Shouldn't all systems be that way?

What if my question isn't asked / answered here?


You should know the answer to this by now: contact me.

Why Web Measurements? Part I: Customer Generation

Introduction to the Series


This is the first of a four-part series focusing on the reasons why companies measure their Web performance. This perspective is substantially different from those posited by others in the field, as it focuses on the meat-and-potatoes reasons rather than the sometimes more difficult to imagine future effects that measurement will bring.

Reason One: Customer Generation


It is critical that a company be able to show that its Web services are superior to others, especially in the third-party services and delivery sectors of the Web. In this area, Web performance measurement is key to demonstrating the value and advantage of a service versus the option of self-delivering or using a competitor's service.

Comparative benchmarking that clearly demonstrates the performance of each of the competitive services in the geographic regions that are of greatest interest to the prospect is key to these Web performance measurements. To achieve truly competitive benchmarks and prove the value of a service, measurements must be realistic and flexible.

In the CDN field, a one-object-fits-all approach is no longer valid. CDNs are responsible for delivering not just images or static objects, but may also host an entire application on their edge servers, serving both HTTP and HTTPS content. In other cases, the application may not be hosted at the edge, but the edge server may act as a proxy for the application, using advanced routing algorithms to deliver the requested dynamic content to the visitor more quickly (in theory) than the origin server alone.

This complex range of services means that a CDN has to be willing to demonstrate effective and efficient service delivery before the sale is complete. A CDN has to be willing to expose their system not just to the backbone-based measurements offered in a traditional customer generation process, but to take measurements from the real-user perspective.

Ad-providers have to be willing to show that their service does not affect the overall performance of the site they are trying to place their content on. Web analytics firms face the same challenge. Web analytics firms have one advantage: if their object doesn't load properly, it may not affect the visitor experience. However, neither ad-providers nor Web-analytics providers can hide from Web measurement collection methods that show all of the bling and the blemishes.

Using Web performance measurements to generate customers is a way that a firm can clearly show that they have faith enough in their service to openly compare it to other providers and to the status quo.

But woe betide the firm that uses Web performance metrics in a way that tries to show only their good side. Prospects become former prospects very quickly if a firm using Web performance data to generate new business is found to be gaming the system to its advantage. And it will happen.

Customer Generation is a key way in which firms use Web performance measurements to clearly show how their service is superior to what a prospect currently has, or is also considering. However, this method does come with substantial caveats, including:

  • The need to measure what is relevant

  • The need to measure from where the prospect has the greatest interest

  • The need to consider that gaming the system to show advantage will cost a firm in the end.

Saturday, November 29, 2008

Black Friday 2008: The pain, the horror, the suffering

The GrabPERF Black Friday Dashboard is done for another year, and there were two sites that suffered the most, in Web performance terms, at the hands of the onslaught of bargain-hunters.

Some caveats that I need to mention about the GrabPERF measurement methodology:

  1. Only the base HTML file of each site is measured.

  2. Only the base HTML of the homepage is measured. This means that any issues that arose in the shopping process were not captured.


All of the sites in the GrabPERF Holiday Retail Measurement Index can be continually monitored on the GrabPERF Black Friday Dashboard. This page will be available until January 1, 2009.

That said, the two primary performance victims this year are HP Shopping and Sears. We focus here on those that did not do well, because sites that have met the Web performance challenge and survived to fight another year are not as interesting from a learning perspective.

HP Shopping


hp-shopping-blackfriday-2008

HP suffered the greatest response time problems, effectively becoming unresponsive as of 09:00 EST. The greatest effect on overall response time came from the First Byte time metric, which is a solid proxy for server or application load, as it is the time between the initial client HTTP request and the server's HTTP response.

Factored into the poor performance analysis is the fact that GrabPERF only captures data for the base HTML object. If the performance seen here carried over to the download of all of the graphical content on the page, I would be surprised if anyone was able to make any kind of purchase on the HP Web site on Black Friday.

Today, response times have returned to substantially lower levels, indicating that this application was simply not ready for the amount of traffic it received, or ran into a completely unexpected issue when the load increased.

Recommendation for 2009: Load Test the application using this year's traffic metrics as a baseline for validating the scalability of the application.

Sears


sears-shopping-blackfriday-2008

Sears is a returning visitor from last year's Black Friday measurements. Unfortunately, they return for exactly the same reason they appeared last year - scaling/capacity issues that show up as errors.

And these are the worst kind of errors. As can be seen in the graphic below, the Sears Web site announced to the whole world that it had overreached and could not handle the incoming volume of traffic.

What is interesting is that Sears owns properties that survived the day very well, namely Lands End. The question that must be posed is why does the parent site fail so badly when the child sites handle the traffic without difficulty?
sears-error-image-blackfriday

Recommendation for 2009: Load testing for capacity, and meeting with the Lands End team to understand what they are doing to handle the load.

Thursday, November 27, 2008

Wednesday, November 26, 2008

Why Terms Matter: Consultant v. SME v. Evangelist

The term consultant is bandied about so much in this new economy that it has lost its meaning. Wikipedia defines a consultant as

A consultant (from the Latin consultare, "to discuss", from which we also derive words such as consul and counsel) is a professional who provides advice in a particular area of expertise....


A consultant is usually an expert or a professional in a specific field and has a wide knowledge of the subject matter. A consultant usually works for a consultancy firm or is self-employed, and engages with multiple and changing clients. Thus, clients have access to deeper levels of expertise than would be feasible for them to retain in-house, and to purchase only as much service from the outside consultant as desired.


http://en.wikipedia.org/wiki/Consultant



What this definition misses is that a good consultant, especially in a small firm, is not simply a person with specific subject-matter expertise (a subject-matter expert, or SME); a good consultant is a jack-of-all-trades.

A simple list of the skills needed by a good consultant includes:

  • Sales

  • Project Management

  • Product Management

  • Educator

  • Trainer

  • Mentor and Coach

  • Business Manager

  • Subject-Matter Expert


In large consulting organizations, these functions are broken out into specific team members. In a small consultancy, everyone has to be able to manage all of these items.

And then there is another leap: How does a consultant move to being an evangelist? These two roles are substantially different.

While both are SMEs, an evangelist takes that one final step, from being a functional expert who is able to make things happen and work in a product, to a place where they can stand in front of any audience and make the product sing. It is not just about the ability to do anymore; it is about the ability to show.

Go through the list of people that you or your organization work with. Do you work with true consultants, SMEs, or evangelists? Which group is most effective in helping your organization get better?  Are you using consultants as expert problem-solvers, or are you simply using them as staff augmentation?

To draw on my experience, I am learning to be a better small-firm consultant. I have developed my skills as an SME and Evangelist over the last decade, but I have not had to worry about any of the things listed above until the last two years, when I started working in a more structured consulting/Professional Services environment.

What has your experience been? Did you start as an SME and become a consultant? Or did you come out of B-school and then develop into an SME?

How has your development as a consultant affected the clients you have worked with and experiences you have had?

Monday, November 24, 2008

UAL - Thank you for flying, but to hell with your Premier Status.

Flying back from SFO after a long and frustrating week introduced me to a new rule that UAL gate staff have been asked to start enforcing. Apparently, my Premier status, which I realize is the lowest of the frequent-flyer levels, means even less now than it did in the past.

Over my career, I had settled on UAL as my carrier of choice. Flying out of SFO for my first 4.5 years in the US meant that UAL was the primary choice to get anywhere. After a while, I became a devoted UAL fan when I realized that, in this day of limited overhead-bin room, having Premier status got you the vaunted 1 on your boarding pass.

I could accept that First-Class and 1K flyers got to board ahead of me - hell, they're on a first-name basis with most of the flight crews. This didn't bother me because I knew that I got to board next.

Friday, that changed.

Apparently, the rule is that Premier Executive now rates between the Red Doormat Club and the Premier status flyers.

I have commented in the past about how people who travel a great deal assume too much from their airline frequent-flyer plans. I do not want to become one of those people. All I ask is that this single privilege I had grown accustomed to having be reinstated. I know my travel money doesn't have a huge effect on your bottom line, but I stuck with you through thick and thin.

But now this is a really thick move, and my patience has grown thin.

Friday, November 14, 2008

Wednesday, November 12, 2008

GrabPERF: FiOS and BitTorrent - Don't Play Nice

I fired up the Boston FiOS measurement location today after a couple of days off, and found that suddenly FiOS doesn't like BitTorrent.

The line of purple dots indicates measurements that reported an error code. All of those measurements come from Boston FiOS. See the real-time graph here.

Accident? Design? That I cannot comment on. I simply report on what I see.

Tuesday, November 11, 2008

GrabPERF: Three New Measurement Locations

In the last 24 hours, thanks to the help of some willing volunteers, GrabPERF has seen the addition of three new measurement locations:

  • Dallas, TX (USA)

  • Virginia (USA)

  • London, UK


All of these locations have been graciously provided by the team at e-planning.

Thanks to all of you who volunteer your machines and bandwidth for this project.

As always, we are looking for more measurement locations. It would be great if we could get some data from the Asia-Pacific region.

Monday, November 3, 2008

Two Weeks with the MacBook

My new MacBook arrived two weeks ago, and I felt that I had spent enough time with it to actually make some useful comments on the good, the bad, and the headbangingly frustrating.

The Finder


Dear Apple: Shoot the Finder development team. Thanks.

I have switched to Path Finder as a Finder replacement. Truly, the Finder is one of the most debilitating pieces of software I have ever used. Nautilus on Gnome is a far superior file management system.

Software, in general


On the whole, I have found replacements for most of the Windows tools I use on a regular basis. But, as I am not made of money, I am using GIMP for Mac, and that is just clunky in the X11 environment.

Living in the browser makes my life much more tolerable than those who require the Windows environment. I haven't got the money to buy Parallels or VMWare Fusion right now, so I am using RDC to connect to my Windows box. Slap Windows in Space #3 Fullscreen, and no one would know the difference.

Haven't found a good Mind Map tool. And BBEdit is also muchos dineros. So Smultron is the text editor.

Usability


I rate this very high. Other than adjusting to the lack of certain keys (DEL, Pg up/dn, etc.), the transition has been seamless. The trackpad is a dream; when I'm back on my Dell laptops, I miss being able to throw stuff around on the trackpad the way I can on the MacBook. I do find I leave apps hanging, as I am still adjusting to CMD-Q closing the app.

Dashboard. What can I say? It's what I need - high-level data at a glance, including the Prem Tables!

Overall


After four years waiting for a MacBook, I can say that it has been worth the wait. Solid, dependable, and slowly becoming my primary computer.

The only concern that I have is the aluminum case. I have an aluminum sensitivity, and if my hands start to peel or otherwise end up in bad shape, I will have to find a solution to that issue.

Friday, October 31, 2008

Video: The mistake of the personal brand

Personal Brand, Reputation, and The Mistake of Closed Source: a video description of why reputation outranks brand every time. http://seesmic.com/embeds/wrapper.swf

Thursday, October 23, 2008

Sunday, October 19, 2008

Moving from Windows - My First Week With Ubuntu (Hardy Heron)

For the last week, I have been using Ubuntu 8.04 (Hardy Heron) on my personal laptop. I can say that the experience has been mostly transparent for me, even with the need for a complete re-build last night after an attempt to install a complex theme replacement.

I can say that it has been transparent because I have been using Linux desktops in one form or another on an intermittent basis since 1999. When business was slow in the Fall and Winter of 2001/2002, I was the Guinea Pig in my organization to see if Linux could be a corporate replacement for Windows for all desktops and laptops.

So, when I say that the process has been transparent, you will have to realize that I have been a technical user of these desktop interfaces for a number of years. But I can say that, since my first positive experiences with Red Hat Fedora and the Ximian GNOME replacement interface, things have come a very long way.

Ubuntu 8.04 is the first real interface that seems to work predictably, efficiently, and effectively with external devices and programs that are business friendly. This is especially the case if most of the tools are Web-based, as Firefox and Opera work seamlessly. OpenOffice 2.4 can open DOCX files, and media players support most of the files I want to watch/listen to.

It prints to the home network printer.

It accesses the home file server.

I can share and synchronize files among my computers using DropBox.

Some caveats to my positive experience.

  • I work mainly on the Web

  • I do not play games

  • I have been using Linux in various forms and editions since 1999.


If you have a technically savvy friend, or really want to push and expand your knowledge of computers and highly configurable operating systems, I would definitely suggest giving Ubuntu a try on the extra computer you have lying around. My laptop is at least 3.5 years old, and not anywhere near as fast as my work laptop running XP. However, with Linux, the two are comparable in speed and performance.

Go on. Try it. I know you want to.

Thursday, October 16, 2008

Web Performance: Nice Display. Now Show Me the Data.

Today's Web interfaces are all about the Flash (literally). Smooth charting, cool effects, callouts to references -- ways to try and simplify complex data collections.

Problem-solving and diagnosis require a far deeper dive than the flashiest interface could ever provide, because it comes down to the numbers. The actual measurements that make up the flashy chart. If you look at the data used by a professional trader and by someone at home looking at stock charts, there is a substantial difference.

When you get down to that level of analysis, the interface becomes irrelevant. Any analyst worth her or his salary (or salt - same thing) can tell you more from a spreadsheet full of relevant numbers than they can from any pretty graphic. This is true in any field.

When do traders or Web performance analysts use pretty charts? When they have to explain complex issues to non-technical or non-specialist audiences. When these analysts work on solving the sticky problems faced in the everyday world, they always fall back on the numbers.

Web performance data consists of the same few components, regardless of which company is providing the data. In effect, beyond a few key pieces of information about how the measurement data is captured, all Web performance data is the same.

Just because the components that make up the data are the same does not guarantee that the data from two different providers is of the same quality. In an imaginary system, Web performance data from all the major providers could flow into a centralized repository and be transformed using XSLT or some other mangler so that, in most cases, it would be impossible to tell which firm was the source.
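
A trivial sketch of that kind of normalization, with invented field names for two imaginary providers (no real vendor's export format is being described here):

```python
# Hypothetical field mappings from two providers into a common schema
PROVIDER_A = {"ts": "timestamp", "ttfb_ms": "first_byte_ms", "ttlb_ms": "total_ms", "err": "errors"}
PROVIDER_B = {"time": "timestamp", "firstByte": "first_byte_ms", "fullPage": "total_ms", "fail": "errors"}

def normalize(record, mapping):
    """Rename provider-specific fields into the repository's common schema."""
    return {common: record[src] for src, common in mapping.items() if src in record}

a = {"ts": "2008-12-17T09:00", "ttfb_ms": 420, "ttlb_ms": 1830, "err": 0}
b = {"time": "2008-12-17T09:00", "firstByte": 390, "fullPage": 1710, "fail": 0}

# Once normalized, records from either provider land in the same shape, which is
# exactly why the source becomes hard to spot -- and why data quality, not data
# format, is what ends up separating the providers.
print(normalize(a, PROVIDER_A))
print(normalize(b, PROVIDER_B))
```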

But a skilled analyst would quickly learn to recognize the data that can be trusted. That would be the data that quickly and accurately represented the issues he was trying to diagnose. The data that flowed with the known patterns of the Web site. The data that helped him do his job more effectively.

In the end, a pretty interface can go a long way toward hiding the quality of the data that is being represented. A shiny gloss on poor data does not make it better data. It is critical that the data underlying that pretty chart is able to live up to the quality demands of the people who use it every day.

Selling the interface is selling the brand. Trust in the data builds the reputation.

Which one sold you when you chose your Web performance measurement provider?

Web Performance: The Strength of Corporate Silos

When I meet with clients, I am always astounded by the strength of the silos that exist inside companies. Business, Marketing, IT, Server ops, Development, Network ops, Finance. In the same house, sniping and plotting to ensure that their team has the most power.

Or so it seems to the outsider.

Organizations are all fighting over the same limited pool of resources. Also, the organization of the modern corporation is devised to create this division, with an emphasis on departments and divisions over teams with shared goals. But even the Utopian world of the cross-functional team is a false dream, as the teams begin to fight amongst themselves for the same meagre resources at a project, rather than a department, level.

I have no solution for this rather amusing situation. Why is it amusing? As an outsider (at my clients and in my own company) I look upon these running battles as a sign of an organization that has lost its way. Where the need to be managed and controlled has overcome the need to create and accept responsibility.

Start-ups are the villages of the corporate world. Cooperation is high, justice is swift, and creative local solutions abound. Large companies are the Rio de Janeiros of the economy. Communication is so broken that companies have to run private phone exchanges to other offices. Interesting things have to be accomplished in the back-channel.

This has a severe effect on Web performance initiatives. Each group is constantly battling to maintain control over its piece of the system, and to ensure that its need for resources is fulfilled. That means one group wants to test K while another wants to measure Q and yet a third needs to capture data on E.

This leads to a substantial amount of duplication and waste when it comes to solving problems and moving the Web site forward. There is no easy answer for this. I have discussed the need for business and IT to find some level of understanding in previous posts, and have yet to find a company that is able to break down the silos without reducing the control that the organization imposes.

Friday, October 10, 2008

Performance Alerting: Is Louis Gray the Canary in Your Coal Mine?

Yesterday in the Fast Company Live Fail Whale session [mention on Scoble's blog here], Paul Bucheit of FriendFeed jokingly said that his company's external alerting mechanism was Louis Gray.

I cringed when I read that, as the last people who should be letting you know you have an issue are your visitors or customers. I know that FriendFeed is new and may not have the ops team that Dorion Carroll and Technorati have developed over the years, but alerting is still critical.

You have done a lot as a company to build a brand. Don't let your internal and external performance sully your reputation. There are a number of low-cost and free ways to watch your performance and alert you before things break.

Louis Gray is a great guy. But he is not an objective and reliable way to alert you when something is wrong with your site.

Thursday, October 9, 2008

Technorati 1,000,000 - Help Me Break Into It!

<sarcasm>

Currently, the Newest Industry sits at #1,106,225 in the Technorati Charts. I'm looking to break into the Technorati 1,000,000 before Christmas.

If Chris Brogan can get into the Technorati 100, I know I can do this!

Help me break into this elite group!

</sarcasm>

Why Do I Do This? - Educate, Guide, and Solve

This is the year I turn 40. As a result, I am looking back upon my life, my career, and trying to determine what I do best. If I could make my life into an elevator pitch, what would it be?

I decided to take what I do right now and see how far I could boil it down. What does my career boil down to?

It came down to three simple words: Educate, Guide, and Solve.

Each of these describes a facet of my career that provides a profound sense of personal satisfaction. Each of these is unique in that they give me the chance to share what I know with others, while still gaining new experiences in the process.

These three things are simultaneously selfish and selfless. I believe that in order to have a successful, productive, and fulfilling career, these three things need to serve as the foundation of everything I do.

Educate


I work in a small community of Web performance analysts. I have spent years training myself to see the world through the eyes of a Web site and how it presents to the outside world. As I taught myself to see the world this way, I was asked to share what I knew with others.

At first I did this through technical support and a training course I helped develop. Then I moved into consulting. I began to blog and comment on Web performance.

I needed to share what I knew with others, because it is meaningless to hoard all of your knowledge. While I am paid well as a consultant, it is also important that as many people as possible learn from me; and that doesn't always need to be sold to the highest bidder.

Guide


While some may say that there is no difference between Guide and Educate, I see a profound chasm between the two.

We have all been educated at some point. We have sat through classes and lectures and labs that convey information to us, and have provided the foundation for what we know.

But we have also encountered people who have shown us how to step beyond the information. They place the information that they are giving us in a larger context, allowing us to see problems as a component of the whole.

That is what I strive to do. Not only do I want to give people the functional tools they need to interpret the data, I want them to then take that data and see the patterns in it. I work closely with colleagues and customers, helping them see the patterns, understand how they tie to the things I say every day, and then be able to solve this type of problem on their own the next time.

A guide is only useful when the path is not known. Once I have shown someone the path, I can return to my place, in the knowledge that they are as experienced on the path as I am.

Solve


Once you have shown someone what the data can do, how to see the patterns, it is critical that they understand how to take that pattern and change it for the better. Seeing a pattern and understanding its cause are only the beginning.

I can share my experiences, share how others have solved problems similar to this one, help them fix the problem.

And then be able to show that the problem is solved. An unmeasured, yet resolved, problem is meaningless.

Summary


This is the skeletal description of what I want to achieve in my career. I could expand these topics for a lot longer, but the question I propose is: What three concepts can you boil your career down to?

Monday, October 6, 2008

Branding v. Reputation: Idea Pairing

I spent some time today pairing ideas that separate Branding from Reputation. These came from my discussion of Branding being closed-source and Reputation being open-source [here].

It's just a start, but it's a start.

Marketing and Social Media: The Bullseye of Communicating



Marketing has traditionally been a two-pronged attack on your mind and your wallet, designed to find the most effective ways to reach your mind, and get you to part with your money.

The techniques used to identify who to go after, how to go after them, and why this message will work drive a social media campaign as much as they do an old-school marketing campaign. The traditional layers in this model are targeting and messaging.

What is interesting is that the emergence of social media has turned a two-layer model into a three-layer model. The third layer has always been there, it just hasn't been large enough to matter to anyone until the last 2-3 years.

The navel-gazing that is occurring in the social media marketing community is due to the rise of this third layer, the layer that is concerned with communicating.

This is not the communications that so many organizations confuse with branding. This is the communication that focuses on the best way to isolate conversations, identify engaged audiences, and participate in communities.

Targeting


The science of marketing lives here. Demographics are the foundation of the targeting phase of any marketing campaign. What does the market we are trying to reach look like?

In this area, Lookery and QuantCast provide organizations with the data they need to decide when and where their message should go.

Messaging


This is where the science becomes visible. Advertising and branding create the message that portrays the product to the customers, using the information gathered in the targeting phase.

Advertising and branding are not the same thing. Branding is the overarching vision that a product wants to push to the world, while advertising is the ephemeral visual and aural method used to get the brand embedded in the consciousness of a population.

Communicating


The third, and most critical circle in this cycle is communication. It is the one that companies so often get wrong, and that is garnering such a great deal of interest now. I would argue that until recently, companies have not understood communication, preferring to try and shape communication remotely, using advertising and branding, rather than engaging in it directly.

An organization that actively engages in communication is one that has a willingness to walk out from behind the safety of its brand and its advertising and talk to customers. Participate in conversations. Shape communities that emerge either for or against the product.

This is what companies are having so much difficulty with.

Attention and Reputation


Communicating with clients is the smallest circle because so few companies are doing it at all, and those that do it find it so hard to get right. What organizations have found is that attempting to use communication in the same way they use their existing marketing tools leads to failure here.

Getting the attention of a population of key customers is a targeting and messaging success. Holding the attention of these customers doesn't require new advertising and a constantly refreshed brand. The people who we listen to most have a reputation, have opinions we trust.

It will be interesting to watch the Corporate Communication (Corporate Conversations?) circle truly evolve over the next few years.

Saturday, October 4, 2008

Peter Kim's discussion of Social Media Marketing and Scalability

If you are interested in the area of social media marketing, head over to Peter Kim's blog and check out Social Media Marketing's Scalability Problem. The post is excellent, and the comments are the kind of conversation that needs to be had in this area.

The best comments so far:

The interesting thing is that this post is nearly two months old. And without realizing it, that's about the time I started writing about conversation and community, branding v. reputation, and how the content-based advertising algorithms are failing the social media market.

I agree with the commenters and Peter Kim that there is a scalability problem when you are trying to have a conversation. That's why companies rely so much on branding. However, if you take the time to build a community, you don't have to scale your own conversation, as you will have a community willing to build your reputation.

Conversations and community happen around the reputation of brands, people, and products. And where there is a gap between the branding message and the reputation conversation, that's when the greatest problems arise.

Friday, October 3, 2008

(Personal) Branding is Closed-Source

Last night I asked myself what would happen if blogs and social-media sites were no longer allowed to have advertising on them. What would be the revenue model for them? How would they generate income?

I fell back to the position that these sites were not originally created to be driven by advertising, but to develop "personal brands", a topic that has been discussed by Chris Brogan [here and here] and others.

Then I realized something else: The idea of a personal brand, and the concepts of community and conversation, are mutually exclusive.

How can a brand interact with a community? How can a brand participate in a conversation?

People do these things. And while brands are important to people when thinking about companies, when dealing with people, there is a far more important factor that gives a person's opinion weight in a conversation: Reputation.

In a conversation and in a community, how you are perceived, regarded, and trusted is critical to allowing what you say to matter. If you have no reputation, your opinion may be politely listened to, and promptly ignored.

It comes to this: Branding and Brands, be they corporate or personal, are closed-source. By their nature a brand is something that is directed and defined by the brand-ee, not the community.

Reputation is the opposite of that. Reputation is what a brand gets from the community, from the conversation had outside the branded entity.

What does this mean?

Branding is closed-source. Reputation is open-source.

Wednesday, October 1, 2008

PageRank for Social Media is a Broken Metaphor

When I posted Advertising to the Community: Is PageRank a Good Model for Social Media? a couple of days ago, I was working in a vacuum. I was responding to some degree to the infamous BusinessWeek article, and to the comments Matt Rhodes made on the idea of PageRank being used to rate social media participation.

Turns out I am not alone in criticizing this simplistic approach to rating the importance and relevance of conversations and community. Mark Earls comments on the power of super-users [here], and how the focus on these influencers misses the entire point of community and conversation. John Bell of the Digital Influence Mapping Project and Ogilvy points out that the relationships in social media and online communities are inherently more complex than a value based on the number of interactions someone has with a community [here].

This conversation is becoming very interesting. There are a lot of very bright people who are considering many different approaches to ranking the importance of a conversation or a community based not only on who is participating, but how engaged people are.

If communities or conversations are run and directed by a select group of people, then they are called dictatorships or lectures. Breaking down, rather than erecting, barriers is why social media is such a powerful force.

Tuesday, September 30, 2008

FriendFeedHolic - A Social Media Ranking Model for Advertising and Marketing Success

One of the most challenging things in social media is finding the conversation leaders. Those people who drive the conversation, and create a community.

FriendFeedHolic (ffholic) has taken the base knowledge that exists in FriendFeed and added a ranking mechanism to it based on input and output. In fact, they weight the participation in the FriendFeed community more heavily than participation in other communities.

This is important. Although FriendFeedHolic is separate from FriendFeed, they have found a way to isolate and target those users who are most likely to participate and create conversations. These users, be they Scoble or Mona N, are where advertisers and marketers can target their money.

How would they do this?

Think about it. If someone that is a large commenter or conversation-creator on FriendFeed creates new content, they are assigned a higher ranking in the new conversation-driven ad-discovery model that advertisers will have to create to succeed.

This new targeted advertising logic will be forced to discover:

  • The content of the conversation

  • The context of the conversation

  • The tone of the conversation

  • The participants in the conversation


This model will be able to identify when it is an inward-facing conversation that involves mostly super-users, or a conversation that engages a wide spectrum of people.

Conversations among super-users will lead to more passive advertising being shown, as that is a spectator event, with only a few participants.

Conversations created by super-users, or that involve super-users, but have a higher participation from the general community will get more intelligent attention to ensure that the marketing messages and advertising shown fit the four criteria above.

In this new model, advertisers will have to see that they can't simply slap a set of ads up on the popular kids' Web sites. They will have to understand who leads a community, who generates buzz, and who can engage the most people on a regular basis.

In this model, the leader has far less power than the community that they create. And maintain.

Monday, September 29, 2008

Ferrari Full Service: FAIL


Full Story here.

Advertising to the Community: Is PageRank a Good Model for Social Media?

In previous posts about advertising and marketing to the new social media world [here and here], I postulated that it is very difficult to assign a value to a stream of comments, a community of followers, or a conversation.

As always, Google seems (to think) it has the answer. BusinessWeek reports the vague concept of PageRank for the People [here]. Matt Rhodes agrees with this idea, and that advertising will become more and more focused on the community, rather than on the content.

Where the real value in this discussion lies is in targeting the advertising to be relevant to the conversation. It's not just matching the content. It's all about making the advertising relevant to the context.

Is the tone of the conversation about the brand positive or negative? I like to point out that my articles about Gutter Helmet create a content match in the AdSense logic that drives this product to be advertised. What is lost in the logic that AdSense uses is that I am describing my extremely negative experience with Gutter Helmet.

Shouldn't the competitors of Gutter Helmet be able to take advantage of this, based on the context of the article? Shouldn't Gutter Helmet be trying to respond to these negative posts by monitoring the conversation and actively trying to turn a bad customer experience into a positive long-term relationship?

Conversation and community marketing is a far more complex problem than a modified PageRank algorithm. It is not just about the number of connections, or the level of engagement. In the end, it is about ensuring that advertisers can target their shrinking marketing dollars at the conversations that are most important.

Injecting irrelevant content into a conversation is not the way to succeed in this new approach. Being an active participant in the conversation is the key.

In effect, the old model, based on reaching the most eyeballs at the lowest cost, is failing. A BuzzLogic model that examines conversations and encourages firms to intelligently and actively engage in them is the one that will win.

The road to success is based on engagement, not eyeballs.

The Dog and The Toolbox: Using Web Performance Services Effectively

The Dog and The Toolbox


One day, a dog stumbled upon a toolbox left on the floor. There was a note on it, left by his master, which he couldn't read. He was only a dog, after all.

He sniffed it. It wasn't food. It wasn't a new chew toy. So, being a good dog, he walked off and lay on his mat, and had a nap.

When the master returned home that night, the dog was happy and excited to see him. He greeted his master with joy, and brought along his favorite toy to play with.

He was greeted with yelling and anger and "bad dog". He was confused. What had he done to displease his master? Why did the master keep yelling at him, and pointing at the toolbox? He had been good and left it alone. He knew that it wasn't his.

With his limited understanding of human language, he heard the words "fix", "dishwasher", and "bad dog". He knew that the dishwasher was the yummy cupboard that all of the dinner plates went into, and came out less yummy and smelling funny.

He also knew that the cupboard had made a very loud sound that had scared the dog two nights ago, and then had spilled yucky water on the floor. He had barked to wake his master, who came down, yelling at the dog, then yelling at the machine.

But what did fix mean? And why was the master pointing at the toolbox?

The Toolbox and Web Performance


Far too often, I encounter companies that have purchased a Web performance service that they believe will fix their problems. They then pass the day-to-day management of this information on to a team that is already overwhelmed with data.

What is this team supposed to do with this data? What does it mean? Who is going to use it? Does it make my life easier?

When it comes time to renew the Web performance services, the company feels cheated. And they end up yelling at the vendor who sold them this useless thing, or at their own internal staff for not using the tool.

To an overwhelmed IT team, Web performance tools are another toolbox on the floor. They know it's there. It's interesting. It might be useful. But it makes no sense to them, and is not part of what they do.

Giving your dog the toolbox does not fix your dishwasher. Giving an IT team yet another tool does not improve the performance of a Web site.

Only in the hands of a skilled and trained team does the Web performance of a site improve, or the dishwasher get fixed. As I have said before, a tool is just a tool. The question that all organizations must face is what they want from their Web performance services.

Has your organization set a Web performance goal? How do you plan to achieve your goals? How will you measure success? Does everyone understand what the goal is?

After you know the answers to those questions, you will know that, as amazing as he is, your dog will never be able to fix your dishwasher.

But now you know who can.

Friday, September 26, 2008

Managing Web Performance: A Hammer is a Hammer

Give almost any human being a hammer, and they will know what to do with it. Modern city dwellers, ancient jungle tribes, and most primates would all look at a hammer and understand instinctively what it does. They would know it is a tool to hit other things with. They may not grasp some of the subtleties, such as that it is designed to drive nails into other things and not to beat other creatures into submission, but they would know that this is a tool that is a step up from the rock or the tree branch.

Simple tools produce simple results. This is the foundation of a substantial portion of the Software-as-a-Service (SaaS) model. SaaS is a model which allows companies to provide a simple tool in a simple way to lower the cost of the service to everyone.

Web performance data is not simple. Gathering the appropriate data can be as complex as the Web site being measured. The design and infrastructure that supports a SaaS site is usually far more complex than the service it presents to the customer. A service that measures the complexity of your site will likely not provide data that is easy to digest and turn into useful information.

As any organization that has purchased a Web performance measurement service, a monitoring tool, or a corporate dashboard expecting instant solutions will tell you, there are no easy solutions. These tools are the hammer, and just having a hammer does not mean you can build a house or craft fine furniture.

In my experience, there are very few organizations that can craft a deep understanding of their own Web performance from the tools they have at their fingertips. And the Web performance data they collect about their own site is about as useful to them as a hammer is to a snake.

Tuesday, September 23, 2008

Web Performance and Advertising: Latency Kills

One of the ongoing themes here is the way that slow or degrading response times can have a negative effect on how a brand is perceived. This is especially true when you start placing third-party content on your site. Jake Swearingen, in an article at VentureBeat, discusses the buzz currently running through the advertising world that Right Media is suffering from increasing latency, a state that is being noticed by its customers.

In the end, the trials and tribulations of a single ad-delivery network are not relevant to world peace and the end of disease. However, the performance of an advertising platform has an effect on the brands that host the ads on their sites and on the brand of the ad platform itself. And in a world where there are many players fighting for second place, it is not good to have a reputation for being slow.

The key differentiators between advertising networks fighting for revenue are not always the number of impressions or the degree to which they have penetrated a particular community. An ad network is far more palatable to visitors when it can deliver advertising to a visitor without affecting or delaying the ability to see the content they originally came for.

If a page is slow, the first response is to blame the site, the brand, the company. However, if it is clear that the last things to load on the page are the ads, then the angst and anger turn toward those parts of the page. And if visitors see ads as inhibitors to their Web experience, the ad space on a page is more likely to be ignored or seen as intrusive.
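
A rough way to see whether the ads really are the drag on a page is to attribute load time by host. The sketch below uses invented timings and domains, not measurements of any real ad network:

    # Toy sketch: attribute page load time to first-party vs. third-party hosts
    # from a list of (url, load_ms) pairs. Figures are invented for illustration.
    from urllib.parse import urlparse
    from collections import defaultdict

    resources = [
        ("http://www.example.com/index.html", 310),
        ("http://www.example.com/styles.css", 120),
        ("http://ads.thirdparty.net/tag.js", 1450),    # hypothetical ad server
        ("http://ads.thirdparty.net/banner.gif", 900),
    ]

    totals = defaultdict(int)
    for url, ms in resources:
        totals[urlparse(url).netloc] += ms

    for host, ms in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{host}: {ms} ms")   # the slowest host is usually the one that gets blamed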

Monday, September 22, 2008

Welcome Back!

If you can see this post, the DNS system has finally propagated my new host information out to the Web, and you have reached me at the new server, located at BlueHost.

After my LinkedIn request last night, I got two separate recommendations for BlueHost, both from folks I highly respect.

Let me know what you think.

Web Performance: Managing Web Performance Improvement

When starting with new clients, finding the low-hanging fruit of Web performance is often the simplest thing that can be done. By recommending a few simple configuration changes, these early-stage clients can often reap substantial Web performance gains.

The harder problem is helping organizations build on these early wins and create an ongoing culture of Web performance improvement. Stripping away the simple fixes often exposes deeper, more fundamental problems that may not have anything to do with technology. In some cases, there is no Web performance improvement process simply because of the pressure and resource constraints the team faces.

In other cases, a deeper, more profound distrust between the IT and Business sides of the organization leads to a culture of conflict, a culture where it is almost impossible to help a company evolve and develop more advanced ways of examining the Web performance improvement process.

I have written about how Business and IT appear, on the surface, to be a mutually exclusive dichotomy in my review of Andy King's Website Optimization. But this dichotomy only exists in those organizations where conflict between business and technology goals dominates the conversation. In an organization with more advanced Web performance improvement processes, there is a shared belief that all business units share the same goal.

So how can a company without a culture of Web performance improvement develop one?

What can an organization crushed between limited resources and demanding clients do to make sure that every aspect of their Web presence performs in an optimal way?

How can an organization hampered by a lack of transparency and open distrust between groups evolve to adopt an open and mutually agreed upon performance improvement process?

Experience has shown me that a strong culture of Web performance improvement is built on three pillars: Targets, Measurements, and Involvement.

Targets


Setting a Web performance improvement target is the easiest part of the process to implement. It is almost ironic that it is also the part of the process that is most often ignored.

Any Web performance improvement process must start with a target. It is the target that defines the success of the initiative at the end of all of the effort and work.

If a Web performance improvement process does not have a target, then the process should be immediately halted. Without a target, there is no way to gauge how effective the project has been, and there is no way to measure success.

Measurements


Key to achieving any target is the ability to measure progress toward it. However, before success can be measured, how to measure it must be determined. There must be clear definitions of what will be measured, how, from where, and why the measurement is important.

Defining how success will be measured ensures transparency throughout the improvement process. Allowing anyone who is involved or interested in the process to see the progress being made makes it easier to get people excited and involved in the performance improvement process.
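
As a minimal example of what an agreed-upon measurement definition can boil down to, consider a hypothetical target of a 95th percentile load time under 4 seconds, measured from external agents. The numbers below are invented:

    # Hypothetical example: an agreed target of "95th percentile page load
    # under 4 seconds, measured from external agents". Sample data is invented.

    def percentile(samples, pct):
        """Nearest-rank percentile of a list of measurements."""
        ordered = sorted(samples)
        rank = max(1, round(pct / 100 * len(ordered)))
        return ordered[rank - 1]

    load_times_sec = [2.1, 2.4, 2.2, 3.8, 2.9, 5.6, 2.3, 2.7, 3.1, 2.5]
    TARGET_SEC = 4.0

    p95 = percentile(load_times_sec, 95)
    print(f"95th percentile: {p95:.1f}s -> {'MET' if p95 <= TARGET_SEC else 'MISSED'}")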

Involvement


This is the component of the Web performance improvement process that companies have the greatest difficulty with. One of the great themes that defines the Web performance industry is the openly hostile relationships between IT and Business that exist within so many organizations. The desire to develop and ingrain a culture of Web performance improvement is lost in the turf battles between IT and Business.

If this energy could be channeled into proactive activity, the Web performance improvement process would be seen as beneficial to both IT and Business. But what this means is that there must be greater openness to involve the two parts of the organization in any Web performance improvement initiative.

Involving as many people as is relevant requires that all parts of the organization agree on how improvement will be measured, and what defines a successful Web performance improvement initiative.

Summary


Targets, Measurements, and Involvement are critical to Web performance initiatives. The highly technical nature of a Web site and the complexities of the business that this technology supports should push companies to find the simplest performance improvement process that they can. What most often occurs, however, is that these three simple process management ideas are quickly overwhelmed by time pressures, client demands, resource constraints, and internecine corporate warfare.

Web Performance: Outages and Reputation

In the last few months, I have talked on a couple of occasions about how an outage can affect a brand, be it personal or corporate [here and here].

Yesterday my servers experienced an 11-hour network outage due to a broken upstream BGP route.

It's sometimes scary to see how worn the cobbler's shoes are.

Sunday, September 21, 2008

GrabPERF Network Outage

Today, there was a network outage that affected the servers from September 21 2008 15:30 GMT until September 22 2008 01:45 GMT.

The data from this period has been cut and hourly averages have been re-calculated.

We apologize for the inconvenience.
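
For the curious, the re-aggregation is conceptually simple. The sketch below uses invented timestamps and values, not the actual GrabPERF schema or code:

    # Illustrative sketch: drop samples inside an outage window and recompute
    # hourly averages. Timestamps and values are invented for this example.
    from datetime import datetime
    from collections import defaultdict

    OUTAGE_START = datetime(2008, 9, 21, 15, 30)
    OUTAGE_END = datetime(2008, 9, 22, 1, 45)

    samples = [
        (datetime(2008, 9, 21, 14, 10), 1.8),
        (datetime(2008, 9, 21, 16, 5), 42.0),   # falls inside the outage, gets cut
        (datetime(2008, 9, 22, 2, 20), 2.1),
        (datetime(2008, 9, 22, 2, 40), 2.3),
    ]

    clean = [(ts, v) for ts, v in samples if not (OUTAGE_START <= ts <= OUTAGE_END)]

    hourly = defaultdict(list)
    for ts, v in clean:
        hourly[ts.replace(minute=0, second=0, microsecond=0)].append(v)

    for hour, values in sorted(hourly.items()):
        print(hour, round(sum(values) / len(values), 2))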

Saturday, September 20, 2008

Metrics in Conversational and Community Marketing

There is clear dissatisfaction with the current state of marketing among the social media mavens.

So what can be done? Jeff Jarvis points out that the problem lies with measurement. I agree, as there is only value in a system where all of the people involved agree on what the metric of record will be, and how it can be validly captured.

Currently, CPM is the agreed-upon metric. In a feed-based online world, how does a CPM model work? And, most importantly, why would I continue to place your ads on my site if all you're doing is advertising to people based on the words on the page, rather than who is looking at the page and how often that page is looked at?

In effect, advertisers should be the ones trying to figure out how to get into the community, get into the conversation. As an advertiser, don't you want to be where the action is? But how do you find an engaged audience in an online world that makes a sand castle on the beach in a hurricane look stable?

The challenge for advertisers is to be able to find the active communities and conversations effectively. The challenge for content creators and communities is to understand the value of their conversations, the interactions that people who visit the site have with the content.

In effect, a social media advertising model turns the current model on its head. Site owners and community creators become attractive to advertisers because of the community, not because of the content. And site owners who understand who visits their site, what content most engages them, and how they interact with it will be able to reap the greatest rewards by selling their community as a marketable entity.
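
A back-of-the-envelope sketch shows why a community-based metric could price the same inventory very differently than raw CPM. The numbers and weights here are entirely invented and only illustrate the argument, not a real pricing model:

    # Invented numbers: contrast a pure CPM payout with a crude
    # engagement-weighted value for the same page.

    impressions = 50_000
    cpm_rate = 2.00                        # dollars per 1,000 impressions
    cpm_value = impressions / 1000 * cpm_rate

    comments, shares, repeat_visitors = 120, 300, 4_000
    engagement_value = comments * 0.50 + shares * 0.10 + repeat_visitors * 0.02

    print(f"CPM-based value:        ${cpm_value:.2f}")         # $100.00
    print(f"Engagement-based value: ${engagement_value:.2f}")  # $170.00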

And Steven Hodson rounds out the week's thinking on communities by throwing out the subversive idea that communities are not always free (as in 'beer', not as in 'land of'). If a community has paid for the privilege of coming together to participate in communal events and discussions, then can't that become an area for site owners to further control the cost of advertising on their site?

While reduced or absent marketing content is a benefit of many for-pay communities, site owners can still use it by offering advertisers access to the for-pay community at the cost of higher ad rates and smaller ads. The free community operates under a completely different set of rules, but there are also areas in the free community that are of higher value than others.

In summary, the current model is broken. But there is no way to measure the value of a Twitter stream, a FriendFeed conversation, a Disqus thread, or a Digg rampage. And until there is, we are stuck with an ad model that is based on the words on the page, and not the community that created the words.

Friday, September 19, 2008

Blog Advertising: Fred Wilson has Thoughts on Targeted Feed-vertising

Fred Wilson adds his thoughts to the conversation about a more intelligent way to target blog and social media advertising. His idea plays right into the ideas I discussed yesterday, ideas that emphasize that a new and successful advertising strategy can be dictated by content creators and bloggers by basing advertising rates on the level of interaction that an audience has with a post.

Where the model I proposed is one that is based on community and conversation, Fred sees an opportunity for firms that can effectively inject advertising and marketing directly into the conversation, not added on as an afterthought.

Today's conversations take place in the streams of Twitter and FriendFeed, and are solidly founded on the ideas of community and conversation. They are spontaneous, unpredictable. Marketing into the stream requires a level of conversational intelligence that doesn't exist in contextual advertising. It is not simply the words on the screen, it is how those words are being used.

For example, there is no sense trying to advertise a product on a page or in a conversation that is actively engaged in discussing the flaws and failings of that product. It makes an advertiser look cold, insensitive, and even ridiculous.

In his post, Fred presents examples of subtle, targeted advertising that appears in the streams of an existing conversation without redirecting or changing the conversation. As a VC, he recognizes the opportunity in this area.

Community- and conversation-focused marketing is potentially huge and likely very effective, if done in a way that does not drive people to filter their content to prevent such advertising. The advertisers will also have to adopt a clear code of behavior that keeps them from being seen as nothing more than new-age spammers.

Why will it be more effective? It plays right to the marketer's sweet spot: an engaged group, with a focused interest, creating a conversation in a shared community.

If that doesn't set off the buzzword bingo alarms, nothing will.

It is, however, also true. And the interest in this new model of advertising is solely driven by one idea: attention. I have commented on the attention economy previously, and I stick to my guns that a post, a conversation, a community that holds a person's attention in today's world of media and information saturation is one that needs to be explored by marketers.

Rob Crumpler and the team at BuzzLogic announced their conversation ad service yesterday (September 18 2008). This is likely the first move in this exciting new space. And Fred and his team at Union Square recognize the potential in this area.

Thursday, September 18, 2008

Blog Advertising: Toward a Better Model

This week, I have been discussing the different approaches to blog analytics that can be used to determine which posts from a blog's archive are most popular, and whether a blog is front-loaded or long-tailed. The thesis is that the words in a post are not always the most important thing about it.

In a guest post this morning at ProBlogger, Skellie discusses how the value of social media visitors is different and inherently more complex than the value of visitors generated from traditional methods, such as search and feedreaders. Her eight points further support my ideas that the old advertising models are not the best suited for the new blogging world.

Stepping away from the existing advertising models that have been used since blogging popularity exploded in 2005 and 2006, it is clear that the new, interactive social web model requires an advertising approach that centers on community and conversation, rather than the older idea of context and aggregated readership.

The Current Model


Current blog advertising falls into two categories:

  1. Contextual Ads. This is the Google model, and is based on the ad network auctioning off keywords and phrases to advertisers for the privilege of seeing their ad links or images appear on pages that contain those words or phrases.

  2. Sponsored Ads. Once a blog is popular enough and can prove a well-developed audience, the blogger can offer to sell space on his blog to advertisers who wish to have their products, offerings or companies presented to the target audience.


In my opinion, these two approaches fail blog owners.

Contextual ads understand the content of the page, but do not understand the popularity of the page, or its relationship to the popularity of other pages in the archive. Contextual ads lack a sense of community, a sense of conversation. While the model has proven successful, it does not maximize the reach that a blog has with its own audience.

Sponsored ads understand the audience that the blog reaches, but do not account for posts that draw the readers' attention for the longest time, both in terms of time spent reading and thinking about the post as well as over time in an historical sense. The sponsored ad model assumes that all posts get equal attention, or drive community and conversation to the same degree.

The New Model


In the new model, more effective use of visitor analytics is vital to shaping the type and value of the ads sold. Studying the visitor statistics of a blog will allow the owners to see whether the blog is, in general, front-loaded or long-tailed.

If the blog has a front-loaded audience, the most recent posts are of higher value and could be auctioned off at higher prices. In order for this to work, both the ad-hoster and the advertiser would have to agree on the value of the most recent posts using a proven and open statistical analysis methodology. In the case of front-loaded blogs, this analysis would have to demonstrate that there is a higher traffic volume for posts that are between 0-3 days old (setting a hypothetical boundary on front-loading).

For blogs that are long-tailed, those posts that continue to draw consistent traffic would be valued far more highly than those that fall out into the general ebb and flow of a blogger's traffic. These posts have proven historically that they rank highly in search results and are visited often.

In addition to the posts themselves, the comment stream has to be considered. Posts that generate an active conversation are far more valuable than those that don't. Again, showing the value of the conversation relies on the ability to track the number of people in the conversation (through Disqus or some other commenting system).

This model can be further augmented by using a tool like Lookery that helps to clearly establish the demographics of the blog audience. Being able to pinpoint not only where on a blog to advertise, but also who the visitors are who view those pages, provides a further selling point for this new model and helps build faith in the virtues of a blog that sells space using this new, more effectively targeted advertising pricing structure.

Now, I have separated front-loaded and long-tailed blogs as if they were distinct. Obviously, these categories apply to nearly every blog, as there are new posts that suddenly capture the imagination of an audience, and older posts that continue to provide specific information that draws a steady stream of traffic.
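
As a rough sketch of what such an analysis could look like, using the hypothetical 0-3 day boundary suggested above, classifying a blog from its own analytics does not need to be complicated. The traffic figures and threshold below are invented:

    # Hypothetical sketch: decide whether a blog skews front-loaded or
    # long-tailed by comparing traffic to recent posts (0-3 days old) with
    # traffic to the archive. Post ages and view counts are invented.

    def classify_blog(posts, recent_days=3, front_loaded_share=0.6):
        """posts: list of (age_in_days, views). Returns a rough label."""
        total = sum(views for _, views in posts)
        if total == 0:
            return "no traffic"
        recent = sum(views for age, views in posts if age <= recent_days)
        return "front-loaded" if recent / total >= front_loaded_share else "long-tailed"

    posts = [(0, 500), (2, 350), (45, 320), (300, 710), (600, 680)]
    print(classify_blog(posts))   # -> 'long-tailed' with these invented numbers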

Summary


This is a very early stage idea, one that has no code or methodology to support it. However, I believe that the current contextual advertising model, one based solely on the content of the post, is not allowing the content creators and blog entities to take advantage of their most valuable resource - their own posts and the conversations that they create.

I also believe that blog owners are not taking advantage of their own best resource, Web analytics, to help determine the price for advertising on their site. Not all blog posts are created or read equally. Being able to very clearly show what drives the most eyeballs to your site is a selling point that can be used in a variable-price advertising model.

By providing tools to blog owners that intimately link the analytics they already gather and the advertising space they have to sell, a new advertising model can arise, one that is uniquely suited to the new Web. This advertising model will be founded in the concepts of conversation and community, providing more discretely targeted eyeballs to advertisers, and higher ad revenues to blog owners and content creators.

UPDATES


It appears that BuzzLogic has already started down this path. VentureBeat has commentary here.