How to create a sustainable browser testing strategy

The announcement of Microsoft’s IE 10 platform preview set me thinking about creating a sustainable and easy to manage browser support strategy.

There are so many browsers, operating systems and platforms in the market that in reality it’s just not possible to test in them all.

There are automated tools available – a search on Google will show you what you need – but that doesn’t really answer the question about which browsers you should actively support and test on.

Standards are at the core of your strategy

The most important part of your strategy is the quality of your code. If your HTML, CSS and JavaScript are not standards-compliant then your job is going to be much harder going forward.

If you agree on the standards you will support and validate your code against them, life will be much easier.

What do your visitors use?

Do you know what your customers use? If you’re a B2B organisation, chances are there will be a focus on Windows and Internet Explorer. In the consumer space, it becomes much more variable and depends on your target audience. The best way to find out is to use your analytics package.

For example, Google Analytics has a very useful browser capabilities report under the “Visitors” tab (under “Technology” in the new Beta).
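Whichever package you use, the arithmetic behind a share report is simple. As a sketch (the browser names and visit counts below are made up for illustration, not taken from any real analytics export), turning raw visit counts into percentage shares might look like this:

```python
# Hypothetical visit counts per browser, e.g. exported from an
# analytics package. These figures are illustrative only.
visits = {
    "Internet Explorer": 5200,
    "Firefox": 2900,
    "Chrome": 1400,
    "Safari": 350,
    "Opera": 150,
}

def browser_share(visits):
    """Return each browser's share of total visits as a percentage."""
    total = sum(visits.values())
    return {name: 100.0 * count / total for name, count in visits.items()}

shares = browser_share(visits)
for name, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {share:.1f}%")
```

With these made-up figures, Internet Explorer comes out at 52.0% and Opera at 1.5% – numbers you can then feed into whatever support thresholds you choose.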

Remember tomorrow

Microsoft’s IE10 is due out in 2012. You can get access to the platform preview now – does it form part of your test strategy? If not, it should.

How long is your site or application going to be around for? Chances are it will be more than a year or two. What will the browser and system landscape look like then?

Taking a more forward-looking view of your standards will also help in the long-term.

Regular reviews

Review your support list every 6 months. Take into account what you know about your customers and upcoming changes in browser-land.

Don’t be afraid to change your list of supported browsers and systems.

Once a year review your overall code standards – do you need to change them?

The 5% rule

To make it easier for my team, we have a simple 5% rule. This isn’t hard and fast – for instance, you have to take into account the potential future growth of systems and browsers (I’m looking at you, Messrs Smartphone and Tablet), and the needs of particular customer groups – but as a rule of thumb it works:

  • Full functionality test – if a browser has more than 5% market share, it should be fully tested. Everything should work properly and look correct. You should fully test on core browsers every time you make a change or develop new functionality or content.
  • Light testing – between 1% and 5% market share, all functionality still has to work properly, but minor visual issues can be ignored.
  • Minor testing – below 1%, systems should only be tested reactively, based on customer feedback.

Remember, you can’t choose what your customers use, nor should you, but you can make it easier to support them.