Time on task is a simple way of studying how long it takes to perform a given function or task. It’s an effective way of measuring the efficiency of a workflow or design. Time on task is quantified as the elapsed time from the beginning of a task until its end.
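As a concrete illustration, here is a minimal sketch of capturing elapsed time around a task in code. The helper name and the example task are my own, purely for illustration:

```python
import time

def time_on_task(task, *args):
    """Run `task` once and return (result, elapsed seconds)."""
    start = time.perf_counter()   # monotonic, high-resolution clock
    result = task(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Example: time a trivial "task".
result, elapsed = time_on_task(sum, range(1000))
print(f"task finished in {elapsed:.6f}s")
```

In a real study you would of course instrument the actual user-facing workflow rather than a function call, but the principle is the same: timestamp the start, timestamp the end, record the difference.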
More often than not the phrase appears as a measurement within usability testing and human factors. There’s a fairly basic notion that if a user can perform operation X in time Y, where Y is desirable, then the experience is better. That blends well with most of my beliefs and studies about page responsiveness. The faster a page responds, the more engaged and comfortable the user becomes interacting with it. Could you imagine a user actually complaining about how fast a page responded?
I have to admit, I have yet to work with a user who complained a page request was too fast. There have been some cases where users questioned the experience entirely, and speed was one attribute of their questioning, but the context centered on the validity of the data. In the few isolated cases I can recall users questioning responsiveness, the page request may have come back in sub-second time, yet the data returned was disjointed or inaccurate. In those cases the experience was muffed.
It’s not necessarily fair, either, to say that time on task is better when it’s faster. From an academic sense, there’s widely accepted research suggesting faster isn’t always better. I’m by no means arguing whether this is right or not. What I can say is that it makes sense to me that in many educational scenarios, the goal is to determine the appropriate boundaries of time to complete a task, rather than simply assuming faster is better.
Inside the software world, it’s important for certain tasks to take minimal amounts of time. Tasks that require minimal thinking and are unlikely to exercise the brain are optimal candidates for short time on task measurements. Tasks that users perform repeatedly are additional candidates, though with repetition comes the opportunity to optimize the workflow. Critical tasks that can make or break adoption of a feature set are also candidates for short time on task. I believe they are important purely from the perspective that if a task is too kludgy to perform or simply takes too long, users performing that task are going to quickly become frustrated. The savviest of users will look for shortcuts. When they can’t find a shortcut, they either lose interest or abandon. Both cases directly affect adoption of a new task.
Usability engineers use time on task as a core metric for observing the efficiency of a task. Elapsed time is often measured directly (stopwatch or embedded timers) or indirectly via recording tools. Multiple samples of the same task are studied and analyzed. Often the data is presented in the same fashion we present data in Performance Engineering: mean values (specifically the geometric mean) to complete the task, as well as confidence intervals (UCI and LCI), are studied to present a statistical view of time on task.
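A minimal sketch of those statistics, using only the standard library. The sample durations here are made up for illustration, and the interval is a rough normal approximation on the log scale rather than a rigorous treatment:

```python
import math
import statistics

# Elapsed times (seconds) for repeated runs of the same task (illustrative data).
samples = [12.4, 10.9, 14.2, 11.7, 13.1, 12.0, 15.3, 11.2]

# Geometric mean: average in log space, then exponentiate back.
logs = [math.log(s) for s in samples]
geo_mean = math.exp(statistics.mean(logs))

# Approximate 95% confidence interval, using the normal critical value 1.96.
se = statistics.stdev(logs) / math.sqrt(len(logs))
lci = math.exp(statistics.mean(logs) - 1.96 * se)
uci = math.exp(statistics.mean(logs) + 1.96 * se)

print(f"geometric mean: {geo_mean:.2f}s  LCI: {lci:.2f}s  UCI: {uci:.2f}s")
```

The geometric mean is a sensible choice here because task durations tend to be right-skewed: a few very slow runs would drag an arithmetic mean upward, while the log transform damps their influence.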
I think the key piece of information that needs to be applied to SPE has to do with task efficiency. When it comes to responsiveness, we try to place a cognitive value on a task. The way we do that is to apply a utility value for performing the task. We combine the utility value with a patience rating for the user, and the combined utility plus patience dictates the abandonment factor.
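To make that concrete, here is a hypothetical sketch of how utility and patience might combine into an abandonment decision. The function name, the multiplicative combination, and the choice of seconds as the tolerance unit are all my own assumptions for illustration, not an established SPE formula:

```python
def likely_to_abandon(utility, patience, response_time):
    """Hypothetical model: a user's tolerance (in seconds) grows with both
    the utility of the task and the user's patience rating; the user is
    assumed to abandon once response time exceeds that tolerance."""
    tolerance = utility * patience
    return response_time > tolerance

# A high-utility task with a patient user tolerates a slow response...
print(likely_to_abandon(utility=3.0, patience=4.0, response_time=10.0))  # False
# ...while a low-utility task at the same speed does not.
print(likely_to_abandon(utility=1.0, patience=4.0, response_time=10.0))  # True
```

The point of the sketch is simply that abandonment is a function of both how much the task is worth and how long the user is willing to wait, which is exactly where an expected time-on-task figure would slot in.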
I believe time on task is in fact the missing piece of data that would make our abandonment decisions more meaningful. Come to think of it, our abandonment decisions are really guesses about the rational behavior of a user who becomes frustrated. They rely on arbitrary response time factors to determine whether a user will become frustrated or not.
How exactly can we be the authority on the likelihood of abandonment if we do not have much context on expected time on task?