Evaluation

Final Evaluation Results

We went through a few iterations before settling on our final implementation. For example, we attempted to implement a decision tree so that the application could adapt to each user's brain waves. Based on our observations, however, users became frustrated that the application did not work out of the box and instead needed to be trained for each person.
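For context, the sketch below shows roughly what that per-user calibration could look like. The band-power feature layout, the drowsy/attentive labels, and the use of scikit-learn's DecisionTreeClassifier are illustrative assumptions, not a record of our actual implementation.

```python
# Illustrative sketch of per-user calibration with a decision tree.
# Assumptions (not from our actual code): each sample is a vector of
# EEG band powers, and labels mark drowsy (1) vs. attentive (0) states.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_user_model(band_powers, labels):
    """Fit a small decision tree on one user's labeled calibration data."""
    # A shallow tree keeps the model from overfitting a short session.
    model = DecisionTreeClassifier(max_depth=3)
    model.fit(band_powers, labels)
    return model

# Hypothetical calibration session: delta/theta/alpha/beta power per sample.
X = np.array([
    [0.8, 0.6, 0.3, 0.2],  # high delta/theta -> drowsy
    [0.2, 0.3, 0.7, 0.8],  # high alpha/beta  -> attentive
    [0.7, 0.7, 0.2, 0.3],
    [0.3, 0.2, 0.8, 0.7],
])
y = np.array([1, 0, 1, 0])

model = train_user_model(X, y)
print(model.predict([[0.75, 0.65, 0.25, 0.25]]))  # -> [1] (drowsy)
```

The friction testers felt came from this calibration step itself: collecting even a short labeled session per user was enough to make the application feel like it did not work out of the box.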

Here are the questions we asked our testers:

  1. On a scale of 1-10, what would you rate the accuracy?
  2. On a scale of 1-10, what would you rate the interface?
  3. What worked well?
  4. What could be improved?
  5. Would you use this headset?
  6. If yes, why? If no, what improvements would it take for you to use it?

Average Perceived Accuracy Rating

7.6/10

Average Interface Rating

9.4/10

Key Trends of What Worked Well

Accuracy

"It worked well when I was zoning out while everyone was talking, and when I was just staring at the whiteboard."

"It's accurate in terms of detecting when I'm paying attention and when I'm feeling tired."

Alert

"The sharpness of the alarm tone would wake me up."

"The alert methods got my attention very easily. I would imagine the vibration to be the most applicable to us college students."

Key Trends of What Could Be Improved

Comfort

100% of our testers stated that the headset needed to be more comfortable.

"The comfortability of the headset. It just hurt a little on the right node on the forehead."

Accuracy

"The accuracy of telling when eyes are closed."

False positives, on the other hand, were not really an issue; the complaints centered on missed detections.

Would you use this headset?

80% of our testers said "Yes."

However, most said yes on the condition that the headset be made more comfortable. When we probed why they would wear it, we found that people place a high value on their lives and would wear the headset for safety reasons. This is consistent with our needfinding interviews.

Full Reviews