Overview
This was my MS-HCI capstone project: two full semesters of user and competitive research, design, user evaluation, and iteration.

The wearable scanner came first, as an engineering prototype and team project. My adviser and I agreed that I would pursue the concept further as my individual MS-HCI project. User research led me to evolve the concept into iterative prototypes of a mobile app, testing whether smartphone capabilities and interactions could be leveraged to capture text from the environment for use as digital text in other apps.

Prior Work
The ThimbleScan engineering prototype was developed with Tara Ramanan as a research project for a class in Fall 2016. It was a scanner built to fit on the index finger, designed to scan short text (words, phrases, or URLs of limited length) printed on physical paper and transmit it to an attached computer, which converted the physical text into digital text. Download a full description.
This video (no sound) shows ThimbleScan reading a printed URL and successfully opening that webpage in the browser. Note this is a 2 min 30 sec sped-up summary; the actual run took 28 minutes.
(Please fast forward to see the last 10 seconds!) 
Research
Competitive research identified the IRISPen 7, which worked much like a non-wearable version of ThimbleScan, offering Bluetooth connectivity to a variety of devices. For situations requiring a single-line scan, this solution comes very close to the envisioned functionality of ThimbleScan, except that post-scan control is implemented through an app running on the device (ThimbleScan used gesture control). Click on the thumbnail below to visit IRISPen's website.
The Scenarios and Pain Points interviews were one-on-one, conducted in person or remotely via video call. The interview guide used open-ended questions to identify typical scenarios and pain points when dealing with text in the environment that a participant would want to capture electronically, and to understand what participants would want to accomplish with that information after capture. Click on the thumbnail below to download a PDF of the entire Interview Guide.
Interview Analysis: Interviews were recorded and transcribed. I used top-down closed coding to identify relevant categories (source type, content type, frequency, destination, process used, pain points). I then summarized details and grouped them bottom-up to identify themes emerging from the research:
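To illustrate the closed-coding step, the sketch below tallies how often each code appears within a category across coded excerpts. The category names come from the analysis above; the excerpts and code values are hypothetical, for illustration only.

```python
from collections import Counter

# Closed-coding categories from the analysis (source type, content type,
# frequency, destination, process used, pain points).
CATEGORIES = ["source_type", "content_type", "frequency",
              "destination", "process", "pain_point"]

# Hypothetical coded excerpts: each maps a category to the code assigned.
coded_excerpts = [
    {"source_type": "electronic", "content_type": "URL"},
    {"source_type": "paper", "pain_point": "retyping"},
    {"source_type": "electronic", "destination": "notes app"},
]

def tally(excerpts, category):
    """Count how often each code appears within one category."""
    return Counter(e[category] for e in excerpts if category in e)

# e.g. how many scenarios involved electronic vs. paper sources
print(tally(coded_excerpts, "source_type"))
```

Tallies like this make it easy to see which codes dominate a category before grouping them bottom-up into themes.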
Decision: pivot from a wearable scanner to an app 
Because half of the scenarios mentioned electronic sources, I realized a wearable scanner would not address as many scenarios as a smartphone app with picture input. That evidence, together with the finding that 100% of participants carried smartphones every day (but only 43% reported using a wearable device), led me to pivot from a wearable scanner to a smartphone app.
Design
I started with two iterations of pencil sketches, a kind of hybrid wireframe-storyboard:
The more-finished storyboard shown below established the concept for the first iteration design:
This more detailed conversation flow transitioned into construction of the first prototype:
The design target was to follow iOS 10 standards and to be evaluated on an iPhone 6s. Sketch templates supporting iOS 10 standards were retrieved from Apple and other sources and used to construct screen images.
Prototyping and User Evaluation
Design prototypes were built in Sketch and InVision as click-through, medium-fidelity prototypes simulating a smartphone app. During three iterations of design and user evaluation, participants evaluated the prototypes on an iPhone 6s, displayed via the InVision Design Collaboration App for iOS.
Four tasks (two major and two minor) were selected to evaluate the second prototype:
1. Import a picture taken of a (fictitious) flyer (“Rejection Study”), and send the content via text message.
2. [minor] A picture with text was inadvertently imported into the app; remove it from the app.
3. Import a picture taken of a presentation slide (“Portfolio Suggestions”), and send the content via text message.
4. [minor] A picture with text was imported into the app; edit it to remove unwanted text, and save the remaining text.
Each prototype user evaluation was video captured, with video focusing on the participant's hand and audio capturing the participant's voice. Participants were asked to “think aloud” while completing the evaluation. Click on the thumbnail below to download a PDF of the entire User Evaluation Script.
Post-Evaluation, each participant completed three activities. See the thumbnails below, or click the links in the list to download a PDF document:  
1. a System Usability Scale (SUS) questionnaire 
2. a questionnaire ranking four potential new features, in order by perceived relative value
3. a post-evaluation debriefing interview of six questions (view the Interview Guide)
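For readers unfamiliar with the SUS questionnaire above, its scoring follows a standard formula: odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the sum is scaled by 2.5 to yield a 0–100 score. A minimal sketch (not part of the original study materials):

```python
def sus_score(responses):
    """Convert ten 1-5 SUS item responses into a 0-100 score.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The resulting sum (0-40) is multiplied by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A respondent answering "4" to every item scores 50.0
print(sus_score([4] * 10))
```

Scores from each participant can then be averaged across evaluations; a commonly cited benchmark treats an average SUS score around 68 as typical.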
Summarizing the results of 12 User Evaluations:
Results & Retrospective
What was delivered: The final paper can be downloaded from this link. It's in PDF format, 89 pages (with appendix). 
What I learned as a designer: I learned a lot about research, including how challenging it is to recruit and schedule participants, and how difficult and time-consuming it is to analyze the results of interviews and user evaluations.
As a designer, I learned yet again that quick iterations are good, and (especially early in the process) to be aggressive with iterating quickly on feedback from one or two trials. 
Most importantly, I learned that what occurs during a task is important, but not as critical as the transition between tasks. Using the results of one task to inform what is done in the next task has substantial impact on project results.
