Monday, September 28, 2009
I grew up amongst the Canadian Rocky Mountain Parks. Dead center amongst them, you might say. Within a two-hour drive, there were six spectacular parks - Yoho, Banff, Jasper, Kootenay, Glacier, and Mt. Revelstoke.
All of these parks played a part in my childhood, adolescence, and young adult life. It has been nearly 20 years since I spent any time in these parks, but the experiences I had there have shaped how I see the world around me. But only now can I really appreciate what these parks mean to us all, in all places.
The parks are a powerful reminder of the transitory effect that man has. Each of them contains some amount of ruins as a visible reminder of man's failed attempts to exploit and tame the parks. The carcasses of hotels, remains of viaducts, the skeletons of towns litter these refuges.
A part of that failed heritage is something I carry with me, as I am descended from one of the last permanent residents of an industrial town in a Canadian National Park: my grandfather lived for a time in the now-abandoned town of Bankhead, Alberta. My family took me to this place as a child and told me that 'Grandpa lived here', a concept I could not understand, as I was in a National Park, wasn't I? I had no idea of the conflict over what it meant to be a Canadian National Park at the time, as I saw them as the refuges and preserves they had become.
Growing up amongst these special places has left me with a certain jaded perspective on beauty in the world. Yosemite does not awe me the way it awes others, as I was raised surrounded by beauty comparable to Yosemite, and perhaps exceeding it. But now I give my unrestrained thanks to those who made the effort to preserve, protect, and conserve these places.
Within the gently protective walls of the Canadian Mountain Parks, I have seen the sublime and the ridiculous. The commercial and the ethereal. Untouched wilderness and unabashed capitalism. And despite protests on both sides, it is clear that they work together, for without the treasure and largesse of one type of visitor, the other would not have a place to go.
To those who see the parks as preserves of untrammeled wilderness, Banff is the greatest eyesore. However, if Banff had not existed, the desire and initiative needed to protect the other parks would not have gained ground. So a commercial pit keeps the wilderness protected, a balance that we can accept in a day of far greater compromises.
So though the idea of a National Park may have originated in the US, Canada has done well to develop the idea on its own terms. Only now that I am many thousands of miles removed from them can I appreciate what they have done to shape me. These memories leave me breathless in the realization of the great privilege I have taken for granted for all of these years.
Friday, September 11, 2009
Wednesday, September 9, 2009
But fixing the 80% of performance issues that occur on the front-end of a Web site doesn't fix the 80% of the problems that occur in the company that created the Web site.
Huh? Well, as Inigo Montoya would say, let me explain.
The front-end of a Web site is the final product of a process, (hopefully) shaped by a vision, developed by a company delivering a service or product. It's the process, that 80% of Web site development that is not Web site development, that let a Web site with high response times and poor user experience get out the door in the first place.
Shouldn't the main concern of any organization be to understand why the process for creating, managing, and measuring Web sites is such that after expending substantial effort and treasure to create a Web site, it has to be fixed because of performance issues detected only after the process is complete?
Souders' 80% will fix the immediate problem, and the Web site will end up being measurably faster in a short period of time. The caveat to the technical fix is that unless you can step back and determine how a Web site that needed to be fixed was released in the first place, there is a strong likelihood that the old habits will appear again.
Yahoo! and Google are organizations that are fanatically focused on performance. So, in some respects, it's understandable how someone (like Steve Souders) who comes out of a performance culture can see all issues as technical issues. I started out in a technical environment, and when I locked myself in that silo, every Web performance issue had a technical solution.
I've talked about culture and web performance before, but the message bears repeating. A web performance problem can be fixed with a technical solution. But patching the hole in the dike doesn't stop you from eventually having to look at why the dike got a hole in the first place.
Solving Web performance problems starts with not tolerating them in the first place. Focusing on solving the technical 80% of Web performance leaves the other 80% of the problem, the culture and processes that originally created the performance issues, untouched.
Friday, September 4, 2009
Updates will be posted here.
UPDATE - Sep 4 2009 22:00 GMT: The database listener is up and data is flowing into the database and can be viewed in the GrabPERF interface. However, I have lost all of the management scripts that aggregate and drop data. These will be critical as the new database server has a substantially smaller drive. There is a larger attached drive, and I will try and mount the data there.
It will likely take more time than I have at the moment to maintain and restore GrabPERF to its pre-existing state. You can expect serious outages and changes to the system in the next few weeks.
[Whining removed. Self-inflicted injuries are always the hardest to bear.]
UPDATE - Sep 5 2009 03:30 GMT: The Database is back up, and absorbing data. Attempts to move it to the larger drive on the system failed, so the entire database is running on an 11GB partition. <GULP>.
The two most vital maintenance scripts are also running the way they should be. I had to rewrite those from very old archives.
Status: Good, but not where I would like it. I will work with Technorati to see if there is something that I'm missing in trying to use the larger partition. Likely it comes down to my own lame-o linux admin skillz.
I want to thank the ops team from Technorati for spending time on this today. They did an amazing job of finding a machine for this database to live on in record time.
I have also learned the hard lesson of backups. May I not have to learn it again.
UPDATE - Sep 5 2009 04:00 GMT: Thanks again to Jerry Huff at Technorati. He pointed out that if I use a symbolic link, I can move the db files over to the large partition with no problem. Storage is no longer an issue.
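For anyone facing the same partition squeeze, the fix can be sketched roughly like this (the paths and init scripts are hypothetical; adjust them to your own layout, and never move live database files while MySQL is running):

```shell
sudo /etc/init.d/mysqld stop          # stop MySQL before touching the datadir

sudo mv /var/lib/mysql /data/mysql    # relocate the data files to the large partition
sudo ln -s /data/mysql /var/lib/mysql # symlink keeps the original path valid

sudo /etc/init.d/mysqld start         # MySQL follows the link transparently
```

Depending on the distribution, SELinux contexts or AppArmor profiles may also need updating before MySQL will follow the link.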
[And, why you ask, is Tara Hunt (@missrogue) on this post. Hey, when I asked Tagaroo for Technorati images, this is what it gave me. It was a bit of a shock after 8 hours of mind-stretching recovery work, but hey, ask and ye shall receive.]
UPDATE - Sep 7 2009 01:00 GMT: Seems that I got myself into trouble by using the default MySQL configuration that came with the CentOS distro. As a result, I ran out of database connections! Something that I have chided others for, I did myself.
The symptom appeared when I reactivated my logging database, which runs against the same MySQL installation, just in a separate database. It started to use up the default pool of connections (100) and the agents couldn't report in.
This has been resolved and everything is back to normal.
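For reference, the connection ceiling lives in my.cnf. A minimal sketch of the kind of change involved (the value shown is illustrative, not a recommendation - size it to your own workload and memory):

```ini
# /etc/my.cnf
[mysqld]
# The stock CentOS config leaves max_connections at the MySQL
# default of 100. With two databases sharing one server, the
# measurement agents were starved of connections.
max_connections = 250
```

At runtime, `SHOW VARIABLES LIKE 'max_connections';` reports the ceiling, and `SHOW STATUS LIKE 'Max_used_connections';` shows how close to it you have actually come.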
Friday, August 28, 2009
The resistance of this camp is strong, and they will appear without warning, even from amongst the most enlightened of companies.
How can they be recognized?
You will hear their battle-cry, their mantra, their fundamental belief that their application, their infrastructure is a misunderstood victim. That if they could only get their one idea across, the whole of the company would be enlightened.
The fundamental tenet of this group is simple and short.
How can we manage the Internet?
The obvious fallacy of this argument is clear to any Web performance professional or business analyst: Customers get to our business across the Internet, not via psychic modem. In order to keep close tabs on the experience of our customers, the site, application, code must be measured from the outside-in.
To avoid making enemies and perpetuating already ossified corporate silos, take the initiative. Gently steer the discussion in a new direction by turning this incredibly vast problem into one everyone in the company can understand. By adding a single word to the initial question, the fearful and reactive perspective can be dramatically shifted to one that could make the members of this camp see the light.
Make the question:
How can we manage for the Internet?
Now the focus of the discussion is proactive - is there something we are missing that could reduce the problems and/or prevent them from ever happening?
Taking the all-encompassing and awe-inspiring challenge that is the Internet and turning it into a Boy Scout moment may reinvigorate the internal conversation, and give people a sense of purpose. Now they will be galvanized to consider whether everything in their power is being done to prevent performance issues before bits hit the Internet.
Effective Web performance hinges on taking the obvious challenges that face all Web sites, and turning them into solutions that mitigate these challenges as much as possible. So, in the next team meeting, the next time you hear someone say that it's just the Internet, ask what can still be done to manage the application more effectively for the Internet.
Thursday, August 27, 2009
The longer I have this phone, the more of a clunker it becomes. My list of complaints include:
- In the last 24 hours, the battery has started to drain for no apparent reason - and yes, WiFi, Bluetooth, and background apps are all off. There appears to be no reason or logic behind this. The phone drained itself sitting on my bedside table last night, supposedly doing its standby routine
- Windows Mobile 6.1 is underpowered and ancient. There are a lack of (Social Media) apps for the Windows Mobile platform. All development seems to be focused on the iPhone, Android, and Blackberry platforms. And with Windows Mobile 6.5/7.0 delayed or underwhelming, it's not going to get any better anytime soon
- It has weird behavior with bluetooth headsets. For every call, you have to manually tell the phone that "Hey! How about setting this to handsfree?"
- I got it for Active Sync, but frankly I could do better by hacking my way to near-Active Sync using Google Sync and routing my work email through a GMail account
- 3G? Maybe. The T-Mobile 3G network is definitely not developed outside of major metro centers. I spend most of my time in EDGE mode, so the upgrade I thought I was getting isn't really there
Overall, this phone gets a monstrous thumbs-down from me. But, I'm stuck with it. I can't afford to replace it, and the more I handle it, the crankier I get. I'm to the point that I may drag my old, underpowered, EDGE Blackberry 8100 out of the drawer and stop using the Dash 3G altogether.
Buyer's Remorse is sometimes hard to swallow. Looks like I have to swallow it for another 23 months.
Tuesday, August 25, 2009
Companies facing dire and obvious Web performance issues will want immediate results, leading them to fall into the CDN-First camp. Deploying a CDN will have a positive effect on response times, increase user satisfaction, and may even increase customer conversions, in the short term.
In six months, deeper questions may start to be asked. A core question that will need to be answered by CDN-First organizations will be "Are we using the CDN effectively and efficiently?".
A company that makes the leap to CDN deployment without assessing the overall performance environment of their Web site may be faced with a situation where they can't tell if they need more, less, or different CDN strategies in order to continue to succeed.
As a result of the buyer's remorse that can result from the leap directly to a CDN, I highly recommend the Measurement-First approach when selecting a CDN.
To help you become an advocate for the Measurement-First approach, come to the table during the CDN discussions and ask three questions. The answers will allow your organization to make the best and most appropriate CDN decision.
1. Is the CDN necessary?
In most cases, the answer to this is a resounding yes. But what can happen with a sudden shift to the CDN is that an organization overlooks those things that they can do themselves to gain some initial performance improvements.
Baselining the existing site before deploying a CDN will allow items and elements that need to be improved to be clearly identified. In some cases, an organization can fix some of these on their own to improve performance before investing in a CDN. In other cases, measuring the performance of a site may clearly indicate that third-party content is responsible for the performance issues, which would likely not be fixed by a CDN deployment.
A Measurement-First policy helps clearly identify the geographies that have the worst performance before deploying the CDN. If performance in the US is acceptable, while performance in Europe or Asia-Pacific is intolerable, then the CDN deployment may initially be targeted to respond to the greatest pain first.
Understanding the current performance of your existing site can reduce the cost of the initial deployment and maximize the long-term effectiveness of the deployment.
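As an illustration of what "baselining" can mean in practice, here is a minimal sketch in Python. The URL and run count are placeholders, and a real baseline would measure every page element from multiple geographies and connection types, not just the base HTML from one machine:

```python
import math
import time
import urllib.request
from statistics import mean, median

def time_fetch(url):
    """Wall-clock seconds to fetch a URL once (base HTML only)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def summarize(samples):
    """Reduce a list of timings to the numbers worth baselining."""
    ordered = sorted(samples)
    # Nearest-rank 95th percentile: the slow visits a mean hides.
    p95_index = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return {
        "mean": mean(ordered),
        "median": median(ordered),
        "p95": ordered[p95_index],
    }

# Usage (hits the network, so shown as a comment):
# runs = [time_fetch("https://www.example.com/") for _ in range(10)]
# print(summarize(runs))
```

Recording numbers like these before the CDN goes live is what makes the post-deployment comparison meaningful.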
2. Which CDN is best for us?
For a complex modern Web site, content comes in many different shapes, sizes, and formats. The thing is, so do CDNs. As I've discussed before, understanding what the CDNs vying for your business do and do well is as critical as the process of vetting their effectiveness compared to delivering the site yourself. The performance boost given to your site by a CDN may vary by region, leading your team to select one CDN for Europe and another for the Asia-Pacific region.
CDN performance can also vary based on the content you are asking them to accelerate. One CDN may be good at streaming media, while another may be better at static content (JS, CSS, Images, etc.), while yet another is better at accelerating the delivery of dynamic content.
Choose your CDN(s) based on what you need them to deliver. In some cases, one size does not fit all.
3. Is the CDN delivering?
This may look like a question for after the purchase has been completed and the solution deployed, but you will never know if the solution is working effectively unless you have a baseline of your performance before the deployment, and from your origin servers after deployment.
Measuring the performance of the CDNs under all conditions and from all perspectives (Datacenter, Last Mile, and from within the Browser) doesn't stop with the selection of a CDN(s). It becomes even more critical once the CDN solution(s) is rolled into production in order to ensure that the level of service that was promised during the sales cycle is delivered once you become a customer.
Constantly validate the performance of the CDN-accelerated site with the performance of the non-accelerated origin site. Have regular meetings with, and channels of communication into, your CDN(s) to discuss not only existing performance, but how changes you and/or the CDN provider are planning may affect performance in the future.
CDNs are a critical component for any Web business that wants to scale and deliver services to a national or global audience. But selecting a CDN should come after you have a very strong understanding of the current performance of your own Web site.
After you have measured and identified the items you can do to improve your own performance, your team will have greater insight into the areas of your site where the services of a CDN(s) can have the greatest impact.
The Measurement-First approach to selecting a CDN will ensure that you select a set of services that exactly meets the unique performance challenges of your site.
Monday, August 24, 2009
About three weeks ago, we were contacted by the regional Gutter Helmet franchisee for New England, trying to find out what they could do to make us happy.
Why would they do this?
Well, if you do a Google search for "gutter helmet", one of my blog posts detailing our negative experience back in 2005 is the third unpaid item on the list. It seems that they have gotten wise to the effect my simple blog posts were having on their reputation and brand.
Just for reference, I went back and looked in my logs for as long as I have them. I get on average 500 distinct page views a month for my Gutter Helmet posts and this number goes up substantially during peak gutter installation season.
"Gutter Helmet" is the number one search term coming into my site. And what people see when they get to my site is likely not the message the Gutter Helmet folks want to get across.
[Editor's Note: As someone who writes mainly about Web performance and social media, this traffic trend is disturbing. But it also shows that you never know what will resonate with your audience until you write it!]
This past Friday, Gutter Helmet sent someone out to repair the situation. He found incorrectly installed (or missing) flashing and an improperly installed helmet on both sides of the house. In the end, the entire helmet installation was replaced, and new flashing was installed.
Until we get Fall rains and the Winter snow returns, we won't really know how successful this new install was, so stay glued to your browsers for further updates.
Tuesday, August 18, 2009
Six years ago, if you had asked me what the most important problems in Web performance were, I would have reeled off a list that was focused on technology and configuration: HTTP compression, HTTP persistent connections, caching, etc. In fact, six years on, these are still the concepts that dominate Web performance conversations.
Slowly, glacially, shaped by six years of working with customers and clients, listening to the Web performance conversations that flow across the Web and within companies, I realize that technology is only one component of the Web performance solution.
Web Performance is NOT Just Technology
Most organizations focus too much of their efforts on solving the technical problems because they are discrete, easy to track, and produce quantifiable results.
But a site that pairs a highly tuned engine with a rusted chassis, four flat tires, and a voided warranty still has a Web performance problem, even if the engine is technically sound.
The complexity of the issue arises from the terminology used. Web performance, in current parlance, refers almost completely to the delivery of the site in an appropriate and measurable manner.
Web performance is not simply the generation and delivery of HTML and other objects. Web performance is a conversation that defines the basic nature of any Web site.
Approaching Web performance, as I had for so many years, as a technical problem with a discrete solution overlooks the true nature of Web performance. A culture of effective Web performance absorbs a number of different inputs, and then ensures that the site performs across many different vectors, not just the two-dimensional response/success over time graph.
Web Performance is Culture and Communication
Web performance is an issue of culture. And at the root of all cultures lies communication.
The Web performance conversation has three components, each one shaping the potential response to the problem and providing elements of the solution.
1. Technical Capabilities
Technical organizations spend a great deal of their time defining what they can't do. In an organization that has a culture of effective Web performance, the technical teams provide clear definitions of the current capabilities, and clearly demonstrate how far they can take the organization down the chosen path, hopefully without spending all of the company's treasure.
2. Business Objectives
Just as the technical organization has to define what they can do with what they have, the business organization has to come to the table with a clear definition of what they want to achieve. If a business goal is clearly stated to the technical team, then a conversation about where there may be challenges and opportunities can occur. When business and IT talk and listen, a company is becoming far more effective at delivering the best site they can.
3. Customer Expectations
Neglected, forgotten, nay, even ignored, the role of the customers' expectations in the Web performance equation is just as critical as the other two participants. With clear business objectives and defined technical capabilities, a site can still be seen as a Web performance failure if the expectations of the customer are not met. And it is not simply listening to customers and providing everything they want. It's understanding why they need a feature/function/option in order to be more successful at what they do, and balancing that against the other two players in the conversation.
But where does an organization that wants to take Web performance beyond the technical problem, and into the realm of the strategic solution go?
Do a search on any search engine and you will find page upon page of technical solutions to a supposedly technical problem. Web performance is not solely a technical problem. In many cases, the site is configured and tweaked and tuned and accelerated to such a degree that you have to wonder if it is under-performing out of spite more than any other reason.
Scratch the surface. Look beyond the shiny toys and massively-scaled infrastructure and you will find that technology is not the issue. The demands placed on the site by the business are bogging the site down in ways that no amount of tuning could improve.
Perhaps the business goals of the site, the need to support the business, have pushed the technology to its breaking point or beyond, but the technology team cannot clearly articulate what the problem or solution is.
Maybe customers, used to competitors delivering one level of Web performance and experience are simply not happy with the site, no matter how tuned it is and how clearly the call to action may be.
Making a Web site perform effectively means stepping back and asking some key questions:
- Why do we have a site?
- How does this site help our business?
- Why do our customers use our site?
- Do we like using our site?
- What are our competitors doing?
- What are the best Web companies doing?
These seem like silly questions. But you may be surprised by the differing answers you get.
And from there, the conversation can start.
Simply put, Web performance is not about understanding how to make your site faster. Web performance is about understanding what you can do to make your site better. An effective Web site is one that is shaped by a culture of effective Web performance.
Striving to make a better, more effective Web site may lead to such profound cultural and organizational changes that the process ends up making a better company. A company where the Web site is seen as an active conversation shared with employees, shareholders, investors, and customers.
A conversation where you explain what can be done, why you are doing it, and how you will do it. A conversation where you listen to what must be done, how it is expected to work, and what the customer defines as success.
So when you wake up six years from now, and realize that the day you stopped treating your Web site as a technical problem that needed to be fixed, and started seeing it as an opportunity to create a more effective business, I hope you smile.
Effective Web performance demands that an organization take responsibility for the entire site, not just the parts under direct local management. Why? Because customers see a problem with your site, not with a provider.
How can the performance of all of the third-party content on a site be managed? Using exactly the same strategies already in place to manage the performance of local content.
Measure from the outside-in
Customers come from the Internet, so it should be no surprise that measuring the performance of a site from the perspective of visitors is mentioned here. Critical to this part of managing third-parties is the ability to see into the page and determine if there are performance issues requesting and transmitting data from third-parties.
In the first article of this series, I detailed a number of approaches to actively gathering performance data. This method, whether from the datacenter or the last mile, will provide the early warning signs that there is an issue with a third-party, and feed this data into the performance issue management plan.
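One small piece of seeing into the page can be sketched as follows: a naive Python helper (the hostnames in the usage note are invented) that splits a page's resource URLs into first-party and third-party buckets, so slow measurements can be attributed to the right provider:

```python
from urllib.parse import urlparse

def classify_resources(page_host, resource_urls):
    """Naively split resource URLs into first- and third-party lists.

    A suffix match on the hostname is a rough heuristic - good
    enough for attribution, not for security decisions.
    """
    first, third = [], []
    for url in resource_urls:
        host = urlparse(url).netloc
        if host == page_host or host.endswith("." + page_host):
            first.append(url)
        else:
            third.append(url)
    return first, third
```

Feeding each bucket's response times into separate alerting rules is one way to tell "our problem" from "their problem" before the finger-pointing starts.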
Measure from inside the browser
The network and application performance of a third-party page component is just the start of the process, as this is what it takes to get the object to the browser. But what if this object then launches a number of actions, or starts to render on the screen? This may lead to a whole different range of issues that are a blind spot when analyzing Web performance.
Measuring the performance of discrete page elements from within the visitor's browser will provide deeper insight into what effects the customer sees and which third-parties will need to be approached in order to improve the overall Web performance of the site.
Have clear and useful SLOs and SLAs
Service level objectives and service level agreements are often thrown about whenever there is the suspicion that there is a Web performance issue. Using these documents and frameworks as a club to beat up partners with is counter-productive.
SLOs and SLAs should clearly detail:
- the performance expectations of the Web site owner
- the performance and delivery capabilities of the third-party provider
Guess what? Arriving at this in a way that doesn't lead to resentment and mistrust on both sides requires open and honest discussion.
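To make the objective a shared yardstick rather than a club, it helps to express it as a simple computation both sides can run against the same data. A hedged sketch in Python (the threshold and target in the usage note are invented examples, not recommended values):

```python
def slo_met(samples, threshold_s, target_ratio):
    """True if at least target_ratio of samples finished within threshold_s."""
    if not samples:
        return True  # no data yet: nothing to hold against the provider
    within = sum(1 for s in samples if s <= threshold_s)
    return within / len(samples) >= target_ratio

# Hypothetical objective: 95% of ad-server responses in 1.5s or less.
# slo_met(measured_response_times, threshold_s=1.5, target_ratio=0.95)
```

When both the site owner and the vendor compute compliance from the same measurement feed, the conversation shifts from whose numbers are right to what to do about them.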
If Web site owners and third-parties are going to work together to ensure the most effective Web performance strategy possible, then data must flow freely. Vendors will need access to the same data that Web site owners have (and vice versa) in order to ensure that if an issue is detected, everyone can examine all of the available data, and solve the problem quickly.
A recurring and critical theme when establishing a culture of effective Web performance is communication. When working with third-parties, this is even more critical, as the performance culture of one organization may be completely different from another. The Web site owner may have one set of criteria that determines a Web performance issue, while the vendor has another, and unless these are understood, problems will occur.
Clear communication paths must be baked into the SLA. Named contacts or contact paths will be there, as will expected response times for inbound requests, and escalation procedures.
When there is a performance issue, both sides will need to be very clear about how each other will respond.
Third-party content on Web sites is a fact. It shouldn't be a headache. Effective Web performance measurement strategies, shared sources of Web performance data, and clearly understood paths and methods of communication will make using third-party content less stress-inducing to everyone.
Monday, August 17, 2009
I have a 10/90 rule. If your budget is $100 then spend $10 on tools and professional services to implement them, and spend $90 on hiring people to analyze data you collect on your website.
The web is quite complex, you are going to access multiple sources of data, you are going to have to do a lot of leg work. Blood, sweat and tears. You don't just need tools for that (remember 85% of the data you get from any tool, free or paid is essentially the same). You need people!
Hire the best people you can find, tools will never be a limitation for them.
Staring at this as I sipped my coffee stopped me dead.
Beside me I have two full pages of notes on what makes up the Web performance culture of a company, and here is one of the most succinct points summed up for me in two short paragraphs.
Web performance is not just about tools and methodologies. Effective Web performance requires dedicated and trained human resources. And those people need to be able to work in a culture that values and understands the importance of Web performance to the business. Without a culture of Web performance, any tool, technology, and methodology purchased to make things better is useless.
In a previous post I touched on the question of whether an organization sees Web performance as a technology or business issue. Answering this question is key to understanding a company's perspective on Web performance issues.
Start by asking Who is responsible for Web performance? at a company. Is there a cross-functional team that meets regularly to discuss current performance, long-term trends, the competitive landscape, effects on customer experience, and how performance concerns are shaping and guiding upcoming development efforts?
Or is Web performance a set of anonymous charts and tables that have no context, originating from the inscrutable measurement system, bundled up into an executive report by an unnamed staff member for a once-a-month meeting?
Most companies understand Web performance is crucial. They understand it affects the bottom line and customer experience. They understand all of the ideas and concepts of Web performance. But like the proverbial horse and water, they don't drink from the stream in front of them. They don't drink because they are too busy watching for cougars, wolverines, and poachers. They have too much going on to make Web performance a priority.
Part of developing a strong culture of Web performance is creating a business culture that is customer-centric. When a company turns their perspective around and makes delighting the customer a part of everything they do, the customer experience on the Web becomes a critical component of the culture.
The key to making Web performance a part of a customer-centric culture is to shift Web performance discussions from the abstract (full of numbers and charts representing the potential of Web performance to affect customers) to the real (effect of Web performance on towns and cities and people and the bottom line). Attaching a name, a place, or a value to every number on a Web performance chart makes it easier for people in an organization to absorb the effect it has on them as an employee.
Moving the discussion about Web performance from the testing lab and NOC to the breakroom and the hallway takes a greater effort. It starts by making Web performance data available to all, not just those who are tasked with monitoring it.
A culture of Web performance means that the $90 you spent on people is supplemented by a team of avid amateurs who notice changes and trends that may slip through the cracks. These amateurs are encouraged to participate in Web performance discussions, where the experts are encouraged to listen, then contribute.
Why listen to avid amateurs? In many cases, they are the people who work directly with customers and use the products on a daily basis. Their feedback comes from real experience, set alongside abstract values. Once a measurement has a story, it makes it easier to understand the problem.
An example of the success of amateurs is Wikipedia. A population of amateur contributors, as well as a core of experts in certain fields, have ensured that this is a useful resource. A Web performance culture full of avid amateurs allows comments and stories to flow from the customer-centric parts of an organization into the technology and business parts of the organization. These stories and inputs make the Web performance more real, and make a chart in a report more important.
A culture of Web performance is one that is adopted by an entire company. It is a way of examining the reality of a site that is customer-centric and customer-driven. A strong Web performance culture absorbs information from many sources, filters it through the lens of the customer, and makes every measurement count.
When I say lose control of the performance, I mean that despite everything that has been done to ensure scalability and capacity, the Web is inherently an infrastructure that is out of anyone's direct ability to manage.
This is something that needs to be accepted. While the datacenter is the only part of the application/infrastructure/network stack that a Web site's owners can manage directly, a company has to accept that the real datacenter is the Internet. Not a datacenter that is on the Internet; the Internet as the datacenter.
Now that your head is spinning, let's step back and consider this idea for a minute. The whole concept of the Internet being the datacenter makes operations and IT folks very uncomfortable. Why? There is no way for one company to manage the Internet. As a result, the general perspective is that the Internet can't be trusted, and all that can be done is manage what can be managed directly.
This mindset lets many organizations leave the Internet entirely out of their application and performance planning. They will measure and monitor, and they may even employ third parties to help improve performance. But when the shiny exterior is peeled back, it's pretty clear that these organizations have built their entire performance culture on the assumption that if a problem exists on the Internet, there is nothing they can do to fix it.
This may be effectively true. But it is not a positive way to ensure effective Web performance.
Having a what-if, emergency response plan in place is never a bad idea. If a problem appears on the Internet, and it affects your Web site, what are you going to do about it? Whine and moan and point fingers? Or take actions that effectively and clearly communicate to customers the steps you are taking to make things right?
Wait. Managing the Internet through customer communication?
I argue that besides working feverishly behind the scenes to resolve the problem, customer communication is the next most critical component of any Web performance issue management plan.
Web performance issue management plan. You have one, don't you?
Well, when you get around to it, here are some concepts that should be built into the plan.
Effectively monitor your site
How can measurement and monitoring be part of issue management? Well, isn't it always good policy to detect and begin investigating problems before your customers do?
Key to the measurement plan is monitoring the parts of your application that customers actually use. A homepage test will not give you vital information about issues with your authentication process; relying on it is like saying the car starts while ignoring the four flat tires.
If you aren't effectively monitoring your site, your business is at risk.
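If it helps to make this concrete, here is a minimal Python sketch of what transaction-level synthetic monitoring looks like. The URLs, steps, and success checks are invented for illustration, and a stub fetcher stands in for real HTTP calls so the idea is clear without any network plumbing:

```python
import time

# Hypothetical transaction steps -- these URLs and checks are assumptions,
# not a real site. Each step declares what "success" looks like.
STEPS = [
    ("homepage", "https://www.example.com/", lambda body: "Welcome" in body),
    ("login",    "https://www.example.com/login", lambda body: "Sign in" in body),
    ("search",   "https://www.example.com/search?q=widgets", lambda body: "results" in body),
]

def run_transaction(fetch):
    """Run every step, timing each one; fetch(url) -> body string."""
    results = []
    for name, url, ok in STEPS:
        start = time.monotonic()
        try:
            body = fetch(url)
            success = ok(body)
        except Exception:
            success = False
        results.append((name, success, time.monotonic() - start))
    return results

# Offline demo: a stub fetcher stands in for real HTTP requests.
fake_pages = {
    "https://www.example.com/": "Welcome to Example",
    "https://www.example.com/login": "Sign in to your account",
    "https://www.example.com/search?q=widgets": "0 results found",
}
report = run_transaction(fake_pages.__getitem__)
for name, success, elapsed in report:
    print(f"{name}: {'OK' if success else 'FAIL'} ({elapsed * 1000:.1f} ms)")
```

The point is the shape: every key function of the site gets its own step, its own success check, and its own timing, instead of one homepage ping.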
Measure where the customers are
If your organization is focused on what it can control, then it will want to measure from locations that are controlled, and can provide stable, consistent, repeatable data.
Hate to break this to you, Sparky, but my Internet connection isn't an OC-48 provisioned through a large carrier with a written SLA. Real people have provider networks that are congested, under-built, and deliver bandwidth using the old best effort approach.
Some customers may have given up on wires altogether, and access the site through wireless broadband or mobile devices.
Understand how your customers use your site. Then plan your response to managing the Internet from the outside-in.
Test with what your customers use
The greatest cop-out any Web site can make is Our site is best viewed using...
I'm sorry. This isn't good enough.
Customers demand that your site work the way they want it to, not the other way around. If a customer wants to use Safari on a Mac, or Chromium on Linux, then understanding how the site performs and responds with these browsers is critical.
The one-browser/one-platform world no longer exists. If a large number of customers with one particular configuration indicate that they are having a problem with the new site, what is the proper reaction?
And why did this happen in the first place?
Monitor and respond to social media
No, this isn't just here for buzzwords and SEO. In the last year, Twitter and Facebook have become the de-facto soapboxes for people who want to announce that their favorite site isn't working. Wouldn't hurt to monitor these sites for issues that might not be detected by traditional performance monitoring.
This approach means that you have to be willing to accept responsibility when something affects your site performance or availability, even if it isn't your fault. No need to tell folks exactly what the problem is, but acknowledging that there is a legitimate issue that you recognize will go a long way toward making visitors/customers more understanding of the situation.
Get your message out effectively
Communicating about a performance issue means that the Marketing and PR teams will have to be brought in.
What? Marketing and Operations/IT working together? Yes. In a situation where there is a major outage or issue, Marketing will DEMAND to be involved. Wouldn't it be easier if these two parts of the organization knew each other and had a plan for responding to critical performance issues?
If Marketing understands the degree of the problem, what it will take to fix, and what is being done about it, they can craft a message that handles any question that might come in, while acknowledging that there is an issue.
A corollary to this: if there is an issue, don't deny it exists. Denying a problem when it is clear to anyone using the site that there is one is worse than saying nothing at all.
Practicing effective Web performance means a company understands that directly managing the Internet is impossible, but having a process to respond to Internet performance issues is critical. A Web performance incident plan shows that you understand that stuff happens on the Internet and you're working on it.
Thursday, August 13, 2009
When working with CDNs, it is critical to understand some of the terms and concepts you will be presented with. Each CDN will present them in its own unique way, using its own unique terminology. With an understanding of the underlying concepts, you will be able to have discussions with CDNs that are more meaningful and more targeted to your needs.
The Massively Distributed Model
CDNs fall into one of two categories, the first being the massively distributed model. CDNs that use this method will demonstrate how they have hardware and caching content servers in almost every city and town of any size in the world. As well, they have their systems located on every major consumer network in order to ensure that they are as close to the end-user as possible.
The CDN everywhere model, while far-reaching and seemingly extremely effective, does have its disadvantages. First, the CDN infrastructure relies on having extremely accurate maps of the Internet in order to direct visitors to the most proximate CDN server location. However, these maps are only truly effective when visitors use DNS servers on the same network they are. Services such as OpenDNS and DNS Advantage can seriously affect the proximity algorithms of the distributed CDN by removing the key piece of localization information needed to ensure that the best cache location is selected.
As with any proxy caching methodology, this model also relies on use. More popular items stay in the cache longer, while less popular items may be pushed aside or stored further upstream at parent caches for retrieval, adding a few extra milliseconds to the initial request. In addition, new content has to be pushed out to the edge, and may take a few hours to be completely propagated.
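A toy simulation makes the popularity effect easy to see. This sketch is my own illustration, not any CDN's actual eviction policy; it models an edge node as a small LRU cache, where the constantly requested object survives while one-off objects churn through and would have to be fetched upstream again:

```python
from collections import OrderedDict

class EdgeCache:
    """Toy LRU cache standing in for a CDN edge node (capacity and keys are made up)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)     # popular items stay fresh
            self.hits += 1
            return True
        self.misses += 1                    # would be fetched from a parent cache upstream
        self.items[key] = True
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # least recently used item is evicted
        return False

cache = EdgeCache(capacity=3)
# "logo.png" is requested constantly; the one-off objects churn through the cache.
for key in ["logo.png", "a.css", "logo.png", "b.js", "logo.png", "c.jpg", "logo.png", "d.png"]:
    cache.get(key)
print(f"hits={cache.hits} misses={cache.misses}")
print("still cached:", list(cache.items))
```

After this request stream, the popular logo is still at the edge while the one-off objects have mostly been evicted, which is exactly the behavior described above.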
The Massively Concentrated Model
CDNs that use this model rely on a smaller number of locations than the massively distributed model. However, these locations tend to be massive and incredibly well connected, relying on the concept that even if they are a few more hops away, their content is always there and ready for requests.
These sites have massive amounts of storage and rely on private networks to ensure that new content is immediately pushed out to the super-nodes as soon as it is added. And while they may be those extra few hops away, the performance difference may not be enough for the average site visitor to notice.
The disadvantage of the massively concentrated model is the flip side of its strength: it is great for serving places where there is a lot of traffic, but in regions with less traffic, or less developed infrastructures, the lack of boots on the ground may begin to have an effect on performance.
Other CDN Concepts
CDNs offer many institutions the ability to use their network for all incoming requests, even if they are for dynamic content that will require processing in the client datacenter. In these instances, the CDN acts as an application proxy, using its superior knowledge of routing and traffic patterns to move requests from the edge of the Internet back to the datacenter more effectively.
Remember: Just because the CDN is providing fast routing and delivery to the visitor, your application is still the bottleneck. Poor app design or slow queries will affect the application in exactly the same way that it would if the call was coming straight to your datacenter.
In certain circumstances, security and regulatory concerns completely eliminate the ability of a business to use the standard CDN model. Banks, government agencies, and health-care providers cannot store data in an environment whose security they cannot vouch for, no matter how many safeguards are put in place.
These organizations still need to be able to deliver a good customer experience, so there has to be a way to help accelerate their content without taking control of it. Traffic acceleration serves this purpose by using proprietary network protocol adaptations that remove some of the overhead associated with standard network protocols.
Content is intercepted at the datacenter and routed across private networks, using the streamlined network protocols, to a network location that is as close to the visitor as possible. Once it has reached the appropriate location, it is converted back to standard TCP and passed to the visitor.
The method above describes how a standard Web request works, but this can also be extended to true point-to-point VPNs with endpoints separated by great network and/or physical distances.
Validating the Claims
A key component of choosing or using a CDN is quantifying the effectiveness of the solution. The standard for many years has been the bake-off method of comparison: the prospect's origin site is measured against the same site delivered by one or more CDNs. The CDN vendor with the fastest performance and the best price usually wins.
Before walking into a bake-off, come prepared. Turn your CDN bake-off into an episode of Iron Chef. Come to the table with the ingredients, and make the CDNs prepare a solution that meets your needs.
The standard base measurement that CDNs will use in a bake-off is a single-object or single-page measurement. But your visitors do not just visit a single page, so ensure that the CDN has an effective solution that produces noticeable performance improvements across all the key functions of your site, including the secure components, where the money is made.
Measure from the Edge
Backbone measurements are great for baselining and detecting operational issues that require a consistent and stable dataset. Your customers, however, do not have direct connections to high-priced datacenters with fat pipes.
The two CDN models will react differently under certain circumstances, and this will appear in edge measurements. Measuring on the ground, from the ISPs your customers use, will give you a clear sense of how much improvement a CDN provides when compared to the performance of your origin datacenter.
The edge is messy, chaotic, and what your customers deal with everyday.
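As a hypothetical example of what edge data can tell you, here is a small Python sketch comparing made-up origin and CDN measurements gathered from three imaginary consumer ISPs. Notice how the improvement is dramatic on some networks and marginal on others; that variation is exactly what backbone-only measurement hides:

```python
from statistics import median

# Invented edge measurements (seconds): the same pages served from the
# origin datacenter and through a CDN, measured from consumer ISPs.
origin = {"ISP-A": [3.1, 2.9, 3.4], "ISP-B": [4.8, 5.2, 4.6], "ISP-C": [2.2, 2.0, 2.4]}
cdn    = {"ISP-A": [1.4, 1.5, 1.3], "ISP-B": [2.1, 2.3, 2.0], "ISP-C": [1.9, 2.0, 1.8]}

for isp in origin:
    o, c = median(origin[isp]), median(cdn[isp])
    print(f"{isp}: origin {o:.1f}s -> CDN {c:.1f}s ({(1 - c / o) * 100:.0f}% faster)")
```

With numbers like these in hand, the bake-off conversation shifts from "is the CDN faster?" to "is it faster where my customers actually are?"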
Understand the SLAs/SLOs
CDNs will always provide a service level agreement (SLA) with service level objectives (SLOs) stated in it. This topic is at once recognizable and about as well understood as 11 Dimensional Theoretical Physics.
I have written briefly about SLAs and SLOs before [here and here]. Do your research before you wade into this polite version of white-collar trench warfare.
Make sure you understand what the goal of the SLA is. Make sure that the SLOs are clear, measurable, valid, and enforceable. Then ensure that the method used to measure the SLOs is one that your organization can understand and can accept as valid.
Finally, ensure that the SLOs are reviewed monthly.
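Reviewing an SLO does not have to be complicated. Here is a deliberately simple sketch of a monthly availability review; the check counts, the failure points, and the 99.9% target are all invented for illustration:

```python
def slo_compliance(checks, target_pct):
    """checks: list of (timestamp, succeeded) tuples for one month of monitoring."""
    up = sum(1 for _, ok in checks if ok)
    availability = 100.0 * up / len(checks)
    return availability, availability >= target_pct

# 1000 synthetic checks with 3 failures, reviewed against a hypothetical 99.9% SLO.
month = [(i, i not in (101, 102, 700)) for i in range(1000)]
availability, met = slo_compliance(month, target_pct=99.9)
print(f"availability: {availability:.2f}% -> SLO {'met' if met else 'MISSED'}")
```

Even at this toy scale, three bad checks out of a thousand are enough to miss a 99.9% objective, which is why the measurement method behind the SLO matters so much.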
Understanding the foundational technology that underlies the CDNs you use or are considering using will help you make better decisions.
Wednesday, August 12, 2009
Effective Web performance is something that requires planning, preparation, execution, and the willingness to try more than once to get things right. I have discussed this problem before, but wanted to expand my thoughts into some steps that I have seen work in organizations that have established Web performance improvement strategies that actually deliver.
This process, in its simplest form, consists of five steps. Each step seems simple, but skipping any one of them will likely leave your Web performance process only half-baked, unable to help your team effectively improve the site.
1. Identification - What do we want/need to measure?
We want to measure everything. From everywhere.
This is an ineffective approach to Web performance measurement. This approach leads to a mass of data flowing towards you, causing your team to turn and flee, finding any way possible to hide from the coming onslaught.
Work with your team to carefully choose your Web performance targets. Identify two or three things about your site's performance that you want to explore. Make these items discrete and clearly understood by everyone on your team. Clearly state their importance to improving Web performance. Get everyone to sign off on this.
Now, what was just said above will not be easy. There will be disagreements among people, among different parts of the organization, about which items are the most crucial to measure. This is a good thing.
Perhaps the greatest single hindrance to Web performance improvement is the lack of communication. An active debate is better than quiet acceptance and a grudging belief that you are going the wrong way. Corporate silos and a culture of assurance will not allow your company to make the decisions you need to have an effective Web performance strategy.
2. Selection - What data will we need to collect?
In order to identify a Web performance issue (which is far more important than trying to solve it), the data that will be examined will need to be decided on. This sounds easy - response time and success rate. We're done.
Now, if your team wants to be effective, they have to understand the complexity of what they are measuring. Then an assessment can be made of what useful data can be extracted to isolate the specific performance issue under study.
Choose your metrics carefully, as the wrong data is worse than no data.
3. Execution - How will we collect the data?
Once what is to be measured is decided on, the mechanics of collecting the data can be decided on. In today's Web performance measurement environment, there are solutions to meet every preferred approach.
- Active Synthetic Monitoring. This is the old man of the methods, having been around the longest. A URL or business process is selected, scripted, and then pushed out to an existing measurement network that is managed/controlled. These measurements have the advantage of providing stable, consistent metrics that can be used as baselines for long-term trending. However, they are locked to a single process, and do not respond to or indicate where your customers are going now.
- Passive User Monitoring - Browser-Side. A relative newcomer to the measurement field, this process allows companies to tag pages and follow the customer performance experience as they move through a site. This methodology can also be used to discretely measure the browser-side performance of page components that may be invisible to other measurement collection methods. It does have a weakness in that it is sometimes hard to sell within an organization because of its perceived similarity to Web analytics approaches and its need to develop an effective tagging strategy.
- Passive User Monitoring - Server-Side. This method follows customers as they move through a site, but collects data from a user's interaction with the server, rather than with the browser. It is great for providing details of how customers moved through a site and how long it took to move from page to page. It is weak in providing data on how long it took for content to be delivered to the customer, and how long it took their browser to process and render the requested data.
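To illustrate the server-side method, here is a small sketch that derives page-to-page times from imaginary server-side events. Notice what it can and cannot tell you: it shows how long a customer lingered between pages, but nothing about how long the browser took to render them:

```python
from collections import defaultdict

# Imaginary server-side events: (user, timestamp_seconds, page_requested)
events = [
    ("alice", 0.0, "/home"), ("alice", 8.5, "/search"), ("alice", 22.0, "/checkout"),
    ("bob",   1.0, "/home"), ("bob",  95.0, "/search"),
]

# Group each user's requests in time order, then compute page-to-page gaps.
paths = defaultdict(list)
for user, ts, page in sorted(events, key=lambda e: (e[0], e[1])):
    paths[user].append((ts, page))

transitions = []
for user, visits in paths.items():
    for (t1, p1), (t2, p2) in zip(visits, visits[1:]):
        transitions.append((user, p1, p2, t2 - t1))
        print(f"{user}: {p1} -> {p2} in {t2 - t1:.1f}s")
```

The 94-second gap in bob's path could be a slow page, a coffee break, or a confusing form; server-side data alone cannot distinguish them, which is why the methods below are best combined.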
Organizations often choose one of the methods, and stay with it. This has the effect of seeing the world through hammer goggles: If all you have is a hammer, then every problem you need to solve has to be turned into a nail.
Successful organizations have a complex, correlative approach to effective Web performance analysis. One that draws performance data from multiple inputs and finds the relationships between different data sets.
If your team isn't ready for the correlative approach, then at least keep an open mind. Not every Web performance problem is a nail.
4. Information - How do we make the data useful?
Your team now has a great lump of data, collected in a way that is understood, and providing details about things they care about.
Web performance data is simply the raw facts that come out of the measurement systems. It is critical that, during the process of determining why, what, and how to measure, you also decide how you are going to process the data to produce metrics that make sense to your team. Some options:
- Feeding the data into a business analytics tool
- Producing daily/weekly/monthly reports on the Key Performance Indicators (KPIs) that your team uses to measure Web performance
- Annotate change, for better or worse
- Correlate. Correlate. Correlate. Nature abhors a vacuum.
Providing a lot of raw data is the same as a vacuum - a whole bunch of nothing.
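As a tiny example of turning raw data into information, this sketch (with invented response times) reduces a day of measurements to a few KPIs. The average is dragged up by two outliers while the median stays low, which is exactly the kind of story raw numbers alone don't tell:

```python
# Hypothetical raw response times (seconds) for one day of measurements.
samples = [1.1, 1.3, 0.9, 5.8, 1.2, 1.0, 1.4, 9.5, 1.1, 1.2]

def percentile(data, pct):
    """Nearest-rank percentile, good enough for a sketch."""
    ordered = sorted(data)
    idx = min(len(ordered) - 1, int(round(pct / 100.0 * (len(ordered) - 1))))
    return ordered[idx]

kpis = {
    "average": sum(samples) / len(samples),
    "median (p50)": percentile(samples, 50),
    "p95": percentile(samples, 95),
}
for name, value in kpis.items():
    print(f"{name}: {value:.2f}s")
```

A report that says "average 2.45s" and one that says "median 1.2s, p95 9.5s" describe the same raw data, but only the second one tells your team that most visitors are fine and a few are suffering badly.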
5. Action - How do we make meaningful Web performance changes?
Data has been collected and processed into meaningful information. People throughout the organization are having a-ha moments, coming up with ideas or realizations about the overall performance of the site. There are cries to just do something.
Stick to the plan. And assume that the plan will evolve in the presence of new information.
Prioritizing Web performance improvements falls into the age-old battle between the behemoths of the online business: business and IT.
Business will want to focus on issues that have the greatest effect on the bottom-line. IT will want to focus on the issues that have the greatest effect on technology.
They're both wrong. And they're both right.
Your online business is just that: a business that, regardless of its mission, is based on technology. Effective Web performance relies on these two forces being in balance. The business cannot be successful without a sound and tuned online platform, and the technology needed to deliver the online platform cannot exist without the revenue that comes from the business done on that platform.
Effective Web performance relies on prioritizing issues so that they can be addressed within the business and technology plans. And an effective organization is one that has communicated (there's that word again) what those plans are. Everyone needs to understand that the business makes decisions that affect technology and vice versa. And that if these decisions are made in isolation, the whole organization will either implode or explode.
Effective Web performance is hard work. It takes a committed organization that understands that running an online business requires that everyone have access to the information they need, collected in a meaningful way, to meet the goals that everyone has agreed to.
Tuesday, August 11, 2009
- Active Sync works like a dream with our work Exchange 2007 Server
- Evernote Mobile is great for collecting stuff and works flawlessly.
- Skype Mobile over WiFi rocks, and will be very useful if I ever get to travel outside the US and Canada again.
- Threaded SMS conversations remind me of the old Treo 600 I once had. It is a nice touch that the Blackberry really didn't do well.
- I'm not always sure where I am in the interface. The Windows Mobile platform that the Dash 3G uses makes it darn difficult to figure out where you are and how to get to where you want. I sometimes find myself navigating through a number of layers to find out how to get back to certain apps.
- Files? Where are my files? It shouldn't be this hard to figure out where images/videos/audio files are stored by the default applications.
- Camera can too easily be set to video mode, and it is not intuitive how to switch it back to camera-only mode.
- Lack of native Google Mobile app for email. I loved the GMail app for Blackberry. The only option I appear to have on the Dash 3G / Windows Mobile platform is their native IMAP client which is a clunky hack, IMHO.
- No intuitive way to sync Google and Active Sync calendar and contacts, a la Google Sync for the Blackberry.
- No intuitive way to join PEAP WiFi networks. The wireless network at my office uses PEAP to authenticate, which Windows Mobile, despite being a Windows-like product, appears to have no clue about. I have helped at least one person setup their iPhone to join the PEAP network without difficulty.
- Why can't the Shortcut key launch any app, not just the ones Windows Mobile wants you to launch? Mobile IE sucks compared to SkyFire, but I can't immediately start SkyFire without going through those nasty, non-intuitive Windows to find it.
- Why is sending an MMS so hard? It isn't clear if you are doing the right thing, and I'm never sure if the damn thing has worked properly. This is a key functionality that needs to be fixed, ASAP.
- Why offer the option to check for Windows Mobile Updates if you can't connect to the server?
- And, despite trying to hide it, it is still Windows. Occasionally apps just crash without warning, especially if the device has been on for more than 3-4 days continuously. I only had to restart my Blackberry when installing some apps and updating the firmware/OS.
As a smartphone, it is a good starter phone.
However, I am having some pretty large pangs of envy and regret about the myTouch, with full knowledge that the Android OS is not yet ready for the modern office environment, i.e. no ability to Active Sync. If Android gets Active Sync capabilities anytime soon, I will truly regret my decision to go with the Dash 3G.
OS: 4/10 - Still Windows. Can we hack Android with Active Sync onto this platform?
Apps: 4/10 - Complex menus. Lack of an App Store location or interesting/goofy utilities. Lacking Google apps (Google Sync and an independent GMail app).
Hardware: 7/10 - No light for the camera, keyboard a little small for Jolly Green Giant Hands, proprietary HTC plugs for headsets and power
Call Quality: 8.5/10 - Some fade out in quality when switching from 3G to EDGE
Data Quality: 5/10 - Mostly because I paid for 3G and I'm getting EDGE/GPRS in the Boston 'burbs. T-Mobile's slow roll of their 3G infrastructure shows
Frankly, I found having ads up on my site extremely hypocritical, as I do everything in my power to avoid seeing ads of any kind during my day-to-day Web use. My browsers have ad-blocking plugins, or pass through ad-blocking proxies to eliminate the content I see as intrusive and unwanted.
Still, I spent a long time thinking about ad-placement on my own blog, and what I could do to drive traffic to get revenue, from something I didn't believe in myself.
Yes, my blog doesn't get huge amounts of traffic. And yes, I have been paid out exactly four times by AdSense in the five years I have been blogging. In those years, I have made $400 from the ads on my site.
I find ads intrusive, invasive, repulsive, and, in many cases, extremely ugly. So why should visitors to my site have to suffer with them?
Effective Sunday, August 9 2009, the ad code, in all its various forms, has been eliminated from my site. My blog is now officially ad-free. And it will stay that way.
For me, ad-revenue is ineffective. It takes away from the true reason I started writing this blog: I have something to say. If I am always thinking "How will this play with the contextual ad providers?", then I am not writing in my own voice. I am writing to meet the criteria of an algorithm that triggers on certain words and will provide advertising that might make me money.
By presenting ads to visitors, the same ads that I despise.
When you step back and think about your blog, consider the following.
- Do you think about every word in your posts, considering its effect on your SEO?
- Do you change your site design often to try and discover the optimal ad layout?
- Is ad revenue more important than your reputation as a blogger?
- Do you always think about branding in terms of dollars instead of in terms of authority and reputation?
Blogging is not about the money. And while I read Darren Rowse and other pro-blogging advocates, I also realize that their focus is on quality content for an appreciative audience.
I feel that ad revenues can lead to the loss of your blogging voice. And my voice and reputation are what are most vital to me, not dollars from ugly ads.
Monday, August 10, 2009
This does not bode well for an Internet that is shifting toward true read/write, data- and interaction-heavy Web sites. That shift needs home broadband that is not only fast, but offers equal inbound and outbound connection speeds.
But will faster home broadband really make that much of a difference? Or will faster networks just show that even with the best connectivity to the Internet money can buy, Web sites are actually hurting themselves with poor design and inefficient data interaction designs?
For companies on the cutting edge of Web performance, who are trying to push their ability to improve the customer experience as hard as possible, who are moving hard and fast to the read/write Web, here are some ways to ensure that you can still deliver the customer experience your visitors expect.
Confirm your customers' bandwidth
This is pretty easy. Most reasonably powerful Web analytics tools can confirm this for you, breaking traffic down by dialup and broadband connection types. It's a great way to ensure that your preconceptions about how your customers interact with your Web site meet the reality of their world.
It is also a way to see just how unbalanced your customers' inbound and outbound connection speeds are. If it is clear that traffic is coming from connection types or broadband providers that are heavily weighted toward download, then optimization exercises cannot ignore the effect of data uploads on the customer experience.
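A sketch of what this analysis might look like, using entirely made-up analytics rows; the connection types, advertised speeds, and visit counts are all assumptions for illustration:

```python
# Imaginary analytics rows: (connection_type, advertised_down_kbps, advertised_up_kbps, visits)
rows = [
    ("cable",  8000, 768,  5200),
    ("dsl",    3000, 512,  3100),
    ("dialup",   56,  33,   400),
    ("mobile", 1200, 300,   900),
]

total = sum(v for *_, v in rows)
for conn, down, up, visits in rows:
    share = 100.0 * visits / total
    print(f"{conn}: {share:.0f}% of visits, down/up ratio {down / up:.1f}:1")
```

If most of your traffic shows a ten-to-one download/upload skew, every chatty upload your AJAX design makes is fighting the narrow half of the pipe.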
Design for customers' bandwidth
Now that you've confirmed the structure of your customers' bandwidth, ensure that your site and data interaction design take it into account. A design that makes a number of inefficient data calls behind the scenes in order to be more AJAXy may hurt itself when it tries to make those calls over a network that's optimized for download, not upload.
Measure from the customer perspective
Web performance measurement has been around a long time. But understanding how the site performs from the perspective of true (not simulated) customer connectivity, right where they live and work, will highlight how your optimizations may or may not be working as expected.
Measurements from high-throughput, high-quality datacenter connections give you some insight into performance under the best possible circumstances. Measure from the customer's desktop, and you may find that even the most thoughtfully planned optimization efforts were like attacking a mammoth with a closed safety pin: ineffective, and it annoys the mammoth [to paraphrase Hugh MacLeod].
As well as synthetic measurements, measure performance right from within the browser. Understanding how long it takes pages to render, how long it takes to show content above the fold, and how long complex Flash and AJAX events within the page take will give you even more control over finding the things you can fix.
In the end, even assuming your customers have the best connectivity, and you have taken all the necessary precautions to get Web performance right, don't assume that the technology can save you from bad design and slow applications.
Be constantly vigilant. And measure everything.
Friday, August 7, 2009
In March 2009, 23% of mobile phone sales in the US were smartphones. Yet this is where all the energy of tech writers and analysts is focused. What about the 77% of the market that uses what would be considered dumb-phones? Is there nothing interesting going on in this market?
Smartphone market share is growing, and quickly. But, if you step back and ask yourself what you want from your phone, your decision to buy a smartphone may start to slip a bit.
Go through a checklist of must haves before making a phone decision.
- Do you need to check your email all the time?
- Do you need access to social-networking sites?
- Do you need access to your calendar?
- Do you crave shiny new apps that entertain you?
- Will this device be a single mobile computing/communication/entertainment device?
- Do you need to make calls?
- Do you need to take pictures?
- Do you need to send SMS messages?
Advocates of smartphones will tell me that it is the fastest growing market share in the mobile phone market. Great.
But does the latest and greatest smartphone serve the needs that I have (or you have, or your mom has, or your sister has) for mobile communication?
I am an advocate for smartphones. I have one and I use it. I find that it serves the needs I have everyday. But I am not a phone-user. I am a data-user and a messaging-user. I have a massive phone plan, but unless I am travelling, I make very few calls (more due to my personality than anything, I suppose).
So I ask readers: do you carry more than one phone? Do you have a smartphone and a standard mobile phone? And if you do, why?
Is your smartphone a ball and chain for work, and when you aren't working, you carry something that works for you? Do you have one plan for data and one for calling or messaging?
And if you have had a smartphone, have you found it a good thing? Or have you wished you could go back to something simpler?
Monday, August 3, 2009
It's a mistake to consider Web performance a technology problem. Web performance is really a business problem that has a technological solution.
Business problems have solutions that any mid-level executive can understand. A site that can't handle the amount of traffic coming in requires tuning and optimization, not the firing of the current VP of Operations and a new marketing strategy.
Can you imagine the fate of the junior executive who suggested that a new marketing strategy was the solution to brick-and-mortar stores that are too small and crowded to handle the number of prospective customers (or former prospective customers) coming in the door?
Every Web performance event costs a company money, in the present and in the future. So when someone presents your company with the reality of your current Web performance, what is your response?
Here are some simple ideas for living with the reality that poor Web performance hurts business.
- Be able to explain the issue to everyone in the company and to customers who ask. Gory details and technical mumbo-jumbo make people feel like there is something being hidden from them. Tell the truth, but make it clear what happened.
- Do not blame anyone in public. A great way to look bad to everyone is to say that someone else caused the problem. Guess what? All that the people who visited your site during the problem will remember is that your site had the problem. Save frank discussions for behind closed doors.
- Be able to explain to the company what the business cost was. While everyone is pointing fingers inside your company, remind them that the outage cost them $XX/minute. Of course, you can only tell them that if you know what that number is. Then gently remind everyone that this is what it cost the whole company.
- Take real action. I don't mean things like "We will be conducting an internal review of our processes to ensure that this is not repeated". I mean things like listening and understanding what technology or business process failed and got you into this position in the first place. Was it someone just hitting the wrong switch? Or was it a culture of denial that did not allow the reality of Web performance to filter up to levels where real change could be implemented?
- Demand quantitative proof that this will never happen again. Load test. Monitor. Measure. Correlate data from multiple sources. Decide how Web performance information will be communicated inside your company. Make the data available so people can ask questions. Be prepared to defend your decisions with real information.
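The cost-per-minute point above can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch with hypothetical numbers; the revenue and conversion-loss figures below are placeholders, not data from any real site:

```python
# Illustrative sketch: estimating what a Web performance event costs.
# All figures here are hypothetical placeholders, not real data.

def outage_cost_per_minute(hourly_revenue, conversion_loss_pct):
    """Revenue lost per minute while the site is degraded."""
    return (hourly_revenue / 60) * (conversion_loss_pct / 100)

def total_outage_cost(hourly_revenue, conversion_loss_pct, minutes_down):
    """Total revenue lost over the whole event."""
    return outage_cost_per_minute(hourly_revenue, conversion_loss_pct) * minutes_down

# Example: a site earning $120,000/hour that loses 75% of
# conversions during a 30-minute slowdown.
per_minute = outage_cost_per_minute(120_000, 75)   # $1,500/minute
total = total_outage_cost(120_000, 75, 30)         # $45,000 total
print(f"${per_minute:,.0f}/minute, ${total:,.0f} total")
```

Even a rough number like this turns finger-pointing into a shared, quantified problem that the whole company can understand.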
The most successful Web companies have done one thing very well. It is the core of their success and it is what makes them ruthlessly strive for Web performance excellence.
These companies understood that in order to succeed they needed to create a culture where business performance and Web performance are the same thing.
Wednesday, July 29, 2009
A tool is designed to deliver a single unique function, such as a hammer or Twitter. Yes, Twitter is a tool. It is designed to take customer input in a variety of formats and from a number of sources and blast that content out to a variety of other formats and destinations.
Twitter is the tool. The items that feed into Twitter could be other tools (Tweetdeck as an example), or they could be true services, such as Ping.fm. What separates these two?
TweetDeck is a tool that feeds input into Twitter, and helps you manage output.
Ping.fm takes your input, and sends it where you want, modifying the format appropriately and hiding it all from you. It took the complexity of a problem (How do I post to multiple social media sites simultaneously?) and delivered a service solution, not a tool solution.
The problem with tool providers is that the problem, no matter what it is, always is a great fit for their tool. All customer problems fit neatly into the boundaries of what they know, and can be solved by what they sell. TweetDeck's answer to helping you with Twitter is to give you more Twitter your way. But it doesn't extend or build on Twitter to create something that is truly new.
Solution providers look at the customer problem and see something new. The team at Ping.fm took a look at their personal social media management issues and found a way to create a social media input service. FriendFeed and FaceBook looked at the social media world and created a social media output service.
While tools are cool and shiny, they inevitably face the "Hammer v Screw" moment: the point when the tool reaches the outer limit of its usefulness.
Having many different hammers isn't the solution. Heck, throwing in a wrench and a screwdriver isn't the answer either. You're still just selling tools.
When you step back and think about your business, when you consider what you deliver to your customers, can you really say that you provide a service that extends and adds value to the tools at your disposal?
Or does everything just look like a nail?
Once again it is time to analyze browser usage in the US for the last month. July saw the appearance of Firefox 3.5, which has replicated the pattern seen with Internet Explorer 8, where it supplants the previous version slowly and linearly as people get around to upgrading.
Can MSIE 8 overtake MSIE 7 in August? How much will Firefox 3.5 usage grow in August and will it replace FF 3.0 as the dominant version in the Firefox family?
As with previous analyses, Internet Explorer 6 retains its iron grip on the corporate, custom Web application market. The question is not when, but if, this browser will actually fade away. It is unlikely that Internet Explorer 6 will disappear until Windows 2000 and Windows XP percentages are in serious decline.
This points to a larger concern that organizations will have to face within the next 18 months: What do they do when the Windows 2000 lifecycle terminates in July 2010 and as Windows XP sees fewer updates moving toward lifecycle retirement in 2014? [See the Microsoft LifeCycle information here]
Hiding from the inevitable just makes changes that much more dramatic and difficult.
It is not likely that the patterns in the StatCounter data will change until the summer vacation season is over in the US, and students bring their shiny new computers online at the start of the school year.
Friday, July 17, 2009
UPDATED: Impressions after the first month can be found here.
For the last 12-14 months, I have carried a Blackberry Pearl 8100 on my hip. And, as far as smartphones go, it served as a good, basic starter phone. There were some issues though, including:
- No push email, due to limited corporate licenses for the Blackberry Enterprise server and an additional $30/month cost for Enterprise Support through T-Mobile. I did get my work email, but it was through a hack, and my work calendar and contacts had to be supported through Google Sync on the desktop and on the phone
- Poor camera. It was a 1.3MP camera. The quality of the camera varied over time, especially through various OS upgrades I put the phone through.
- Slow media support. Opening the picture folder took up to 10 minutes, and it got worse as more media was added
I was in the market for a new phone. The qualifications were:
- Better Camera
- 3G Support
- Active Sync Support
- Full physical keyboard
- WiFi (optional)
As a T-Mobile customer, the Active Sync and physical keyboard ruled out the new myTouch. The battery life on the G1 ruled that machine out.
Then I learned about the Dash 3G on Monday. It was exactly what I wanted.
Based on the HTC Snap, this phone is a serious upgrade to the old EDGE/GPRS Dash, which had always interested me. But, I was not willing to settle for EDGE/GPRS speeds. And though the T-Mobile 3G network may not be as built out as the AT&T 3G network, it is likely to improve over time.
Now, I have had the Dash 3G since Wednesday night. Comments so far?
- Moving contacts. This was sort of difficult, but it took only a few minutes once I had it sorted. Keeping my BlackBerry contacts synced with Google made this easy, as I exported my Google contacts, and imported them to Outlook
- GMail support. It's done through IMAP, which is great, but it created a whole bunch of new labels in GMail that weren't there before.
- Speed. Very fast, even when looking through folders of images.
- Camera. While 2MP is pretty low-end these days, it's perfect for what I'm looking for.
- Ease of Use. Moving from one OS to another is always a challenge. But, Windows Mobile 6.1 benefits from being like Windows. And as much as you might disagree with that, it is a model that most of the world is used to.
- Battery life. Not bad, but I am always power-conscious. I only turn the WiFi on at home, and I haven't fired up the Bluetooth yet. Using mainly the 3G network, battery life appears to be quite good. I plan to put it through a drain test over the next 36-48 hours to see how long the battery truly lasts.
Overall, through nearly 2 days, I am very happy with the new phone. If you are looking for a work-ready phone on the T-Mobile network, I highly recommend moving to the Dash 3G.
Wednesday, July 1, 2009
In the US browser market, Internet Explorer 8 continues its slow replacement of Internet Explorer 6 and 7, finally overtaking MSIE 6 on June 11 [Stats courtesy of StatCounter].
The great news is that Internet Explorer 6 is slowly falling off the pace, relegated to large companies with proprietary code and a degree of inertia that impedes their adoption of new browsers.
The two-month trend does show some very dramatic changes, most notably with Internet Explorer 7 and Firefox 3.
While these changes appear dramatic, the lack of absolute values underlying the StatCounter graphs means that it's very difficult to determine whether these values are a result of a shift in the actual browser market, or a result of decreasing numbers of visitors to sites with the StatCounter tracking code.
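The ambiguity in percentage-only data is easy to demonstrate. In this sketch (with made-up visit counts, not real StatCounter figures), two very different scenarios produce identical market-share numbers:

```python
# Two hypothetical scenarios producing the same percentage shares.
# Visit counts are invented for illustration only.

def share(counts):
    """Convert raw visit counts into percentage shares."""
    total = sum(counts.values())
    return {browser: round(100 * n / total, 1) for browser, n in counts.items()}

# Scenario A: users actually switch browsers (total traffic unchanged).
june_a = {"IE7": 300, "FF3": 400, "Other": 300}

# Scenario B: nobody switches, but IE7-heavy sites simply
# lose a quarter of their tracked traffic.
june_b = {"IE7": 225, "FF3": 300, "Other": 225}

print(share(june_a))  # identical shares in both scenarios
print(share(june_b))
```

Because both scenarios yield the same percentages, share data alone cannot tell us which one actually happened; absolute visit counts would be needed to distinguish them.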
Worldwide for June, the primary trend is that the decrease in Internet Explorer 7 is matched almost precisely by the increase in the use of Internet Explorer 8.
Firefox 3 and Internet Explorer 6 remained almost completely unchanged through June, indicating that the US trend is very different from that seen throughout the rest of the world. The tracking trend indicates that Firefox 3 could have overtaken Internet Explorer 7 by the end of July.
Could have is used purposely here, as the release of Firefox 3.5 will fragment the market share for this browser, and it is not likely that it will match the stats for Firefox 3 immediately.
Despite all the claims that the browser war is over, and that applications have moved beyond the browser, it is highly unlikely that this dream will be realized in the consumer browser market until late 2010, when the effect of Windows 7 can be seen on the use of Internet Explorer 8.
Overall, June 2009 was a month of substantial shifts in the US browser market, which will be further aggravated with the release of Firefox 3.5, and the slow and steady adoption of Internet Explorer 8 by consumer and business users.
UPDATE: TechCrunch has noted the ongoing shifts to the browser share market [here].
Monday, June 29, 2009
When I left Canada, I always assumed that my stay in the Excited States would be short. I was in the country to learn, grow, take advantage of the experience that the giant next door would give me. Then we would return to Canada and settle into a quiet life.
Ten years on, I have a Green Card (don't ask about that nightmare) and when I come home to Canada, I realize how far I have wandered from the country that I still refer to as home.
Add to that the fact that most of the visits we make are to one of the fastest growing and most expensive cities in the country and every trip out to places I once knew brings 'Where the hell did that come from?' moments.
I wax nostalgic for this place, this city near the mountains, surrounded by the sea. It is the city of my young adult life, where I learned the skills I needed to get on in life; where I met my wife; where I felt at home.
What a difference ten years makes. We have placed our roots in another place, a very different place. A place that couldn't be more different from here.
I joke that I am legally prevented from voting in two countries. As a transplant I will never be completely at home in the place where I live. The country of my birth is an interesting and lovely place, a stranger that I rediscover a little bit on every visit.
When we travel back to the country of my birth, I realize that the move to the US saved me, saved from being trapped by narrow goals and shortened horizons. But the move came with a price.
As someone who lives with a gardener, I know the value of a plant in the right place. Often you don't discover that a plant is in the wrong spot until you have had it for a few years. Then, one season, you transplant it, and, with some more sun and a little more water, it blossoms, it thrives.
The gardener who does that is always pleased with the results, but is frustrated by the time lost by having a wonderful plant fight for life, wasting its energy and effort to survive rather than to thrive.
To thrive, I had to leave. But I left a piece of me behind.