I spent much of yesterday at the Government Digital Service (GDS) in London observing a four-hour(!) service assessment. GDS service assessments currently take place at three points: at alpha, before beta, and before go live. You can see the full process outlined in the GDS service design manual. It’s worth a read, even if you are not looking at running or attending an assessment, as it provides a lot of intelligent questions to ask when a service is being designed or redesigned.
So, what did I get out of sitting in someone else’s meeting?
Well, for a start, it’s always nice to get out of the office and see others working towards the same goal as us: a better online experience for customers! I also got a lot of ideas from the session about how we can improve our in-house reviews of online services and how we can approach user testing.
The first part of the assessment was of most interest to me: the first three points of the standard, which focus on user research and user testing. I have pages of notes that basically repeat the same user-centric message:
- ‘ask the user’
- ‘find the evidence’
- ‘run comparison testing’
- ‘check it works for the lowest-skilled user and, if it doesn’t, how you are going to assist them’
This is where the assessment hit home to me that we need to place more emphasis on user testing.
We are just getting started with our navigation testing and we know we want to do more; we just haven’t got there yet.
As we move into alpha and beta versions of our new website, we will look to do a lot more user testing, both remotely through online channels and in person through lab days or pop-up sessions in the local shopping centre. This research should then reassure us that what we are doing is what the customer needs or, if it’s not, give us the knowledge and understanding to change and improve the customer experience.