Bonnie John holds a Ph.D. in cognitive psychology and is a leader in the UX community. She gave a talk at the NYCTech Council Meetup called “Predicting Success of a User Experience Without a Line of Code.” In her research and career, she has used cognitive modeling to test and evaluate the effectiveness of user interfaces. Here are some of my notes from her talk…
She started by discussing the fields of psychology and engineering, and how dissimilar they can be. In engineering, there’s a way to get to an answer. The field of psychology, by contrast, is all about experiments and theories, not facts and answers. So how do you apply psychology to a world of building things with specs and deadlines? Especially given that when the rubber hits the road on a tough deadline, the first thing to go is validating, testing, and iterating on UX.
“Scientists want to know the truth. Engineers want to know what’s good enough to get things done.”
Unfortunately, the benefits of human behavior modeling are perceived to be too low, and the costs too high. But in Bonnie’s experience, cognitive models have been used to validate designs and interface concepts on an enormous scale. For example:
– The IRS put out a request for proposals for a new office automation system and software.
– Three companies responded with proposals above $700 million.
– The IRS used quantitative analysis with models to conclude that the vendor charging $1.4 billion would increase users’ productivity the most, despite nearly double the cost.
Typically UX design starts with understanding what creates value for the customer, designing to those values, and then testing the heck out of the designs.
UI DESIGN –> TESTING –> CHANGES
Usability testing is the “gold standard” in the industry, and follows the simple formula of getting many people in front of the product and seeing how they interact. But this is expensive and time consuming (especially if recruiting specific types of users is involved). And testing often happens too late, because it depends on having some type of working product.
Wouldn’t it be great if we could replace user testing with simulated models? aka: “COGNITIVE CRASH DUMMIES”
– Computer-based computational model that mimics human behavior: perception, cognition, motor actions
– Can be run against a storyboard (no working system or code needed)
– Produce quantitative predictions of: task efficiency, discoverability and errors, effects of moderator variables (like fatigue, time of day, etc.)
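One classic way these models predict task efficiency is the Keystroke-Level Model (KLM), which CogTool builds on: a task is broken into primitive operators, each with an average duration, and the predicted time is their sum. A minimal sketch, using illustrative operator times drawn from the KLM literature (the exact values, and this tiny API, are my own approximation, not CogTool’s):

```python
# Minimal Keystroke-Level Model sketch: predicted task time is the sum of
# average operator durations. Times below are illustrative values from the
# KLM literature, not CogTool's internals.

OPERATOR_TIMES = {
    "K": 0.28,  # press a key or button (average typist)
    "P": 1.10,  # point with a mouse to a target on screen
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def predict_task_time(operators):
    """Sum operator durations for a sequence like ['M', 'P', 'K']."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Example: think, point at a field, click, think again, then type 4 letters
task = ["M", "P", "K", "M"] + ["K"] * 4
print(f"Predicted time: {predict_task_time(task):.2f} s")
```

The appeal is exactly what the talk describes: nothing needs to be built, yet two candidate designs can be compared quantitatively by listing the operator sequences each one requires.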
Cognitive models are great for testing and comparing the efficiency of UIs. And there is some research on the correlation between cognition and emotion. For example, there’s an emotional connection between efficiency and satisfaction.
But when it comes to testing emotional elements like desirability or downright “love” of a product — these models break.
There is enormous value to testing UI and design concepts before investing in development efforts. And when many stakeholders are involved, the value of testing increases. Because with testing “It’s no longer opinion — it’s quantitative analysis.”
CogTool was created to reduce the cost of learning through a “toe in the door” philosophy: make the simplest models easily accessible.
How To Use CogTool
1) mock up a design in a storyboard
2) demonstrate tasks
3) hit compute and predictions appear in a table
UI designers can use this tool and the visualization output from it to extract design recommendations.
What about non-task driven testing?
– Predicting Exploratory Behavior, aka “Discoverability”
– Information-finding tasks and performing new tasks are supported by a theory called “information foraging.” It’s called that because there’s a mathematical theory for predicting the foraging behavior of birds and animals: the likelihood that they’ll continue to forage in the same area vs. venture out to a different area.
CogTool can test for “information scent” — testing displayed information against a set goal. Factors that influence “information scent”:
– placement on the page
– adjacency and grouping with other objects and information
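The core idea of information scent is scoring how strongly each link’s label relates to the user’s goal; the link with the strongest scent is where exploration is predicted to go. Real models use rich semantic similarity measures, so the word-overlap scoring below is a deliberately crude, hypothetical sketch just to make the idea concrete:

```python
# Hypothetical information-scent sketch: score each link label by how many
# of the goal's words it contains. Real cognitive models use semantic
# similarity, not raw word overlap; this only illustrates the concept.

def scent(goal, label):
    """Fraction of the goal's words that appear in the link label (0..1)."""
    goal_words = set(goal.lower().split())
    label_words = set(label.lower().split())
    return len(goal_words & label_words) / len(goal_words)

goal = "track my tax refund status"
links = ["Where's My Refund", "Forms and Instructions", "Refund Status Lookup"]

# Rank candidate links by scent; exploration is predicted to follow the top one
ranked = sorted(links, key=lambda label: scent(goal, label), reverse=True)
print(ranked)
```

Under this kind of scoring, a label like “Forms and Instructions” has near-zero scent for a refund-tracking goal, which is the model’s way of predicting the user will look elsewhere on the page — the same judgment a designer makes when weighing placement and grouping.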