Assessing Accessibility Part 1: The SOCITM Story
There’s been a bit of a furore over the last couple of weeks in some quarters as regards assessing the accessibility of different websites and comparing them against one another. Basically, what has happened is that the SOCITM Better Connected 2007 report was published, looking at the state of Local Government websites across the UK.
One part of that report — one small part — was looking at accessibility. The accessibility part of the survey was carried out by the RNIB on SOCITM’s behalf.
The multi-award-winning ClacksWeb, home of Clackmannanshire Council and driven by my chum Dan Champion, failed to reach the Double-A conformance level. This is not in dispute. Dan accepts that there were indeed validation errors on some pages of the site, and in order for the site to be seen as conforming to level AA, there should be no validation errors anywhere on the site.
However, something of a fuss has been caused because sites with no more than X validation errors were put in a “marginal” category and allowed to continue through to the next round, whereas sites with a few more errors were deemed to have failed. ClacksWeb fell at this point. Other sites — which still had validation errors, and were therefore in breach of the Priority 2 checkpoint “create documents that validate to published formal grammars” — were passed through to the more comprehensive testing stage and were potentially allowed to claim Double-A conformance, despite WCAG being rather explicit on this point:
Conformance Level “Double-A”: all Priority 1 and 2 checkpoints are satisfied; (WCAG 1.0, Conformance section)
This caused considerable discussion and debate, on Public Sector Forums, on AccessifyForum, on Dan’s personal site and also on the RNIB Web Access Blog, and I’ve stuck my own oar in there a few times and whirled it about.
I do sincerely hope that the RNIB web team haven’t been at all upset or offended where I have been critical of the report — I’d hope they would realise that I’ve not been critical of their testing per se, as they have provided a very detailed breakdown of how they have carried out this testing, using a combination of automated tools and manual techniques. They’ve even explained (at my request) how they check whether or not quotations have been marked up properly. I cannot fault the way in which the RNIB have provided information and tried to answer the questions put to them on this topic.
The RNIB web team know a great deal about accessibility, about accessibility testing, about being open and accountable, and they obviously take a great pride in what they do, and I commend them for it. They are also lovely people.
However, I feel the problem with the Better Connected report is really quite simple. It is the way of the world that people want to be able to compare things. Indeed, as I’ll be applying for my eldest’s school place later this year, I’ve been looking into various measures of success for nearby schools, because I want to try and give him the best possible start in life. To do this easily, you have to have some sort of comparison indicator.
I know just as well as anyone else that just because school A scores 285 and school B scores 240 doesn’t necessarily indicate that school A has better teachers, or will help my child achieve better. It could be that school B does poorly because it has a greater proportion of children with English as a second language. It could be that three of the classrooms in school B burned down in the year, somewhat disrupting lessons. I know it’s not a perfect indicator, but the instinctive reaction is to think school A is considerably better, which may not be the case.
I could look at the schools in greater detail, perhaps reading the Ofsted reports. But with ninety-two primary schools in the area, that would take some doing. So I’ll restrict myself to the schools nearest our house, see which ones among them are performing well, and then review the Ofsted reports for this cut-down list. It’s simply not practical to do this for every school.
And that, in a nutshell, is the problem the Better Connected report faces. The only real international standard for accessibility is WCAG 1.0, as produced by the WAI. It’s out of date. I know it’s out of date, Dan knows it’s out of date, the RNIB knows it is out of date, and so does the WAI. But currently, it’s the only one we’ve got.
WCAG specifies three possible conformance levels, giving us four options:
- A site fails to achieve even the basics of accessibility — the Priority 1 criteria. This site does not conform to WCAG.
- A site meets all of the Priority 1 criteria, but fails one or more of the Priority 2 criteria. This site conforms to WCAG at the Single-A conformance level.
- A site meets all of the Priority 1 and 2 criteria, but fails one or more of the Priority 3 criteria. This site conforms to WCAG at the Double-A conformance level.
- A site meets all of the Priority 1, 2 and 3 criteria. This site conforms to WCAG at the Triple-A conformance level.
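To make the pass/fail logic explicit, here is a minimal sketch (my own illustration, not anything from the report) of how those four outcomes fall out of the checkpoint results:

```typescript
// A minimal sketch of WCAG 1.0 conformance logic. "passesAllP1" means
// every Priority 1 checkpoint is satisfied, and so on; these names are
// my own, invented for illustration.
type ConformanceLevel = "None" | "A" | "AA" | "AAA";

interface CheckpointResults {
  passesAllP1: boolean;
  passesAllP2: boolean;
  passesAllP3: boolean;
}

function wcagConformance(r: CheckpointResults): ConformanceLevel {
  if (!r.passesAllP1) return "None"; // fails the basics
  if (!r.passesAllP2) return "A";    // all P1, but not all P2
  if (!r.passesAllP3) return "AA";   // all P1 and P2, but not all P3
  return "AAA";                      // everything satisfied
}

// One validation error anywhere is a failed Priority 2 checkpoint, so:
console.log(wcagConformance({ passesAllP1: true, passesAllP2: false, passesAllP3: true }));
// -> "A", not "AA"
```

What the sketch makes concrete is that a single failed Priority 2 checkpoint, such as one page that doesn’t validate, drops a site from Double-A to Single-A. WCAG itself has no “marginal” category.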
It is far from easy for large-scale sites, such as Local Government sites, to achieve all of the Priority 2 criteria across the site. This is made more difficult if you allow content to be entered by a wider range of people — you need to ensure that everyone who might be entering content knows that direct quotations need to be marked up in a particular way, that lists need to be marked up in a particular way, and so on. It’s far from easy. In fact, I’ve covered this very point before myself in my much earlier article why AA Conformance is not easy.
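As an illustration of why this is so hard to police, here is a hypothetical lint-style check of the kind a CMS could run as a first pass over published pages. To be clear, this is my own sketch, not the RNIB’s method; the element selectors and the quote-spotting regular expression are assumptions, and a heuristic like this can only flag candidates for a human to review:

```typescript
// Heuristic sketch: flag elements whose text *looks* like it contains a
// direct quotation but which sit outside any <q> or <blockquote>.
function findSuspectQuotations(root: Document): HTMLElement[] {
  const suspects: HTMLElement[] = [];
  const quotePattern = /[“”"].{10,}[“”"]/; // a quoted run of 10+ characters

  root.querySelectorAll<HTMLElement>("p, li, td").forEach((el) => {
    const insideQuoteElement = el.closest("q, blockquote") !== null;
    const hasQuoteChild = el.querySelector("q, blockquote") !== null;
    if (!insideQuoteElement && !hasQuoteChild && quotePattern.test(el.textContent ?? "")) {
      suspects.push(el); // a person still has to judge each one
    }
  });
  return suspects;
}

// Usage, in a browser console on the page under test:
// findSuspectQuotations(document).forEach((el) => console.warn(el));
```

Even then, the machine can only guess; deciding whether a run of quoted text really is a direct quotation is exactly the sort of judgement the RNIB’s manual checks had to make.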
So what am I saying? I’m saying that I feel the RNIB have done a good job — on SOCITM’s behalf — of assessing the accessibility of multiple websites to produce a kind of league table. The problem — and it’s a fairly big problem — is simply that this kind of league table does not necessarily accurately reflect the accessibility of websites.
It is simply not possible to definitively declare that website A is more accessible than website B by looking at a league table — you need to do the equivalent of reading the full Ofsted report for each website in order to have an accurate comparison. And that plainly is not practical.
So maybe it’s time to stop “ranking” websites in terms of accessibility? Some websites which failed to achieve Double-A conformance might actually have been more accessible to users with disabilities than some of those that did.
But then again, this mania for easy comparisons probably requires that we continue to have some form of scoring system to give an at-a-glance comparison for the people who can’t be bothered to read the finer detail. But if that is what we are going to do, then why are we basing it on a set of guidelines which are now 8 years old? Why not use real user experiences?
Instead of just looking at arbitrary tests, isn’t it time we started to look at actual user data? Let’s get people with — and without — disabilities to test the websites. Let’s ask them to complete various tasks — the same tasks for each website. If they can achieve them easily, give the site a high mark. If they struggle, mark them down.
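For illustration only, here is one hypothetical way such task results could be turned into a comparable score. The outcome categories and the weights are invented for the sake of the sketch; a real survey would have to justify its own:

```typescript
// Hypothetical scoring: every site gets the same task list, each tester
// records how each task went, and the site's score is the average
// across all testers and tasks.
type TaskOutcome = "easy" | "struggled" | "failed";

const outcomeScore: Record<TaskOutcome, number> = {
  easy: 1.0,      // completed without difficulty
  struggled: 0.5, // completed, but with real difficulty
  failed: 0.0,    // could not complete the task
};

function siteScore(results: TaskOutcome[][]): number {
  // results[tester][task] -> how that tester got on with that task
  const all = results.flat();
  const total = all.reduce((sum, outcome) => sum + outcomeScore[outcome], 0);
  return all.length ? total / all.length : 0;
}

// Two testers, the same three tasks:
console.log(siteScore([
  ["easy", "struggled", "easy"], // e.g. a screen reader user
  ["easy", "easy", "failed"],    // e.g. a tester with a motor impairment
])); // -> 0.75, comparable across any sites given the same tasks
```

The scores would be as arguable as any league table, but at least they would be arguing about what real users could actually do.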
Let’s stop this nonsense where a site that uses accesskeys — which in general aren’t of benefit to anyone — in such a way as to prevent users from operating their browser or machine as expected is given more credit than a site that doesn’t use accesskeys at all.
Let’s stop this nonsense whereby someone who writes a piece of JavaScript and tests it with dozens of assistive technologies to make sure it works is marked down, while someone who has a clunkier client-side interface because they haven’t used JavaScript is marked up.
Let’s stop saying that WCAG is a measure of accessibility. It isn’t. WCAG shows you things you can do that will likely make your site more accessible. But throw some real people at your site and it either will or it won’t be accessible to them, irrespective of what your WCAG conformance level says.
Remember: WCAG Conformance is not the same as accessibility.
Real people. Real tests. Real progress?
Dan says:
March 16th, 2007 at 11:42 pm
Bravo, nice summary Jack. We’re all in the business of improving accessibility across the board, and that’s where most of my frustration with Better Connected stems from – its accessibility reporting doesn’t do much to help. Here’s hoping we can persuade the Insight Team to adopt a more positive approach next year and come up with some advice and recommendations that have practical benefits for its readership.
Stefan Haselwimmer says:
March 17th, 2007 at 1:33 am
Hi Jack... nice summary of the debate and I agree with your general conclusions about the need to conduct more disabled user testing. However my impression from the debate so far is that no one seems to be aware that disabled user testing was carried out for Better Connected 2007. This was the first year that Better Connected included the results of disabled user testing of websites (carried out by Usability Exchange, see BC 2007, section 5.5, also Appendix 5). Last year Socitm also commissioned the Usability Exchange to carry out disabled user testing of Socitm’s Top 20 local authority websites – although this happened after Better Connected 2006 had been published, as the Usability Exchange was only launched in March last year.
Having worked with Socitm over the last year, my sincere impression is that they are well aware of the limitations of technical accessibility guidelines and are following the guidance of PAS 78 which stresses the need for disabled user testing when assessing accessibility (see BC 2007, section 2.6).
Best regards,
Stefan Haselwimmer
Managing Director
Usability Exchange
paul canning says:
March 20th, 2007 at 3:34 am
Hi Jack
Blogged a response here:
http://paulcanning.blogspot.com/2007/03/better-connected-accessibility-and-all.html
Yes, I agree that going beyond WCAG is a good idea: I suggest where.
cheers
Paul Canning
ThePickards » Blog Archive » Assessing Accessibility Part 2: ThePickards Audit says:
March 20th, 2007 at 8:44 pm
[...] offering my own thoughts on the Better Connected 2007 thing, where I mentioned some of the problems in trying to assess accessibility, particularly when it [...]
Web accessibility surveys – results are frequently disappointing « The ‘58 sound says:
November 15th, 2009 at 10:46 pm
[...] there have been concerns that surveys may have a negative impact on ‘usable accessibility.’ If the methodology [...]