A week or two ago, I sat in a meeting where we attempted to weigh the intelligence of our faculty and students. Oh, we were too polite to call it that exactly; ostensibly at least, we were discussing how we could make our wikis more user-friendly. The discussion covered a range of possible cures for the perceived disease, from more intensive faculty training to student scaffolding to more and better tutorials. All well and good, this desire to make things easier for our end users. As a fairly recent convert to Usability, I embrace its tenets and evangelize for its centrality in our design process. Still, I wonder if things are really so difficult for our users, and if so, for whom and how many? Are we looking at a usability issue or a user issue? Are we simply reacting to the squeakiest wheel?
We use PBworks for collaborative workspaces in select online courses at SNL Online. Most of these spaces have been used successfully, and difficulties can usually be traced to unfamiliarity with the user interface, difficulties generally addressed and resolved with PBworks’ video tutorials and the task-specific tutorials we produce in-house. However, some students and faculty struggle mightily with that same interface, and the same support materials do little to alleviate their distress. Why is this, and what should be done about it?
Now, I will grant you that users differ in their facility with tools. And I will further grant you that we designers sometimes fail to make things as clear as they might be.
However. When the only appreciable difference between successful and unsuccessful engagement with a technology is the user set, I feel we have to examine whether we’re responding to a real usability issue, one that is intrinsic to the design of the technology and its interface, or to a problem of poor users.
I fear too often it’s the latter. Lacking any data on our users, we respond to the complaints of a few and extrapolate their difficulties to the general population. If a faculty member or student complains about problems using a tool, we immediately conclude that the tool is defective and devote hours of support and development time to creating resources to ameliorate the perceived deficiencies, resources we assume are better than those already produced by the makers of the technology. All this effort is expended to solve the problems of a small set of users who will never be made comfortable with a new technology by any level of support. Further, all of this activity occurs without any data indicating real need and without any cost-benefit analysis.
I’m not sure what the solution to this might be. My department doesn’t have the resources to conduct extensive user testing for each technology we might introduce. However, we also don’t have limitless resources to chase the ephemeral perfect tutorial or provide one-on-one student and faculty support. Perhaps we have to admit we can’t help everyone, every time. Perhaps, once in a while, the squeaky wheel must go ungreased.
Hi there. Regarding your post, I’m wondering who the “We” is that jumps to conclusions and trains everyone when confusion is expressed by some users. Wouldn’t it make sense to just survey folks, using a simple free tool like SurveyMonkey, to see which areas require additional documentation or training?
Thanks for the feedback, Catherine. The “we” includes me, my teammates, and other stakeholders in the department I work for. Surveys might be useful if we had the resources (time and personnel) to implement them and make use of the data. And stats are only as good as the sample, method, and interpretation. But that’s my point: in the absence of good user data we’re reactive and disproportionately expend resources to solve problems that, I believe, affect only a small percentage of users.