How do you see the techniques in the papers you have read relating to the techniques you are using in your projects?
Permanent link to this article: https://www.gillianhayes.com/Inf231F12/r4-user-research-case-studies/
Oct 25
Armando Pensado
October 28, 2012 at 6:26 pm (UTC 0)
The user research techniques in the papers are very similar to the ones we are using for our project; they differ mostly in their limitations.
For instance, the sustainable mobile phone paper conducted a survey and, based on the results, picked people to interview so as to cover a range of different characteristics in the sample. We are also doing a survey with more in-depth interviews, but we are doing ours concurrently since we do not have enough time to conduct the survey and then start interviews. Also, since our survey sample is much smaller, we cannot be as selective about the types of people we want in our interview sample.
The museum visit paper combined a questionnaire, observation, and interviews all on the same day with the same people. While we are also doing a survey, interviews, and an observation study, we are doing them with different groups of people, since it is easier for us—as students—to ask people for only a short period of their time rather than an involved hours-long process. Unlike the museum study, we do not have much to offer as an incentive.
All in all, the techniques in the papers and the ones we are using for our project are similar: surveys and questionnaires gather lots of data (breadth), and interviews then provide more detailed information (depth).
Anshu
October 28, 2012 at 8:36 pm (UTC 0)
Sotto Voce: Just as museum visitors integrate the guidebook and its functions into their activity of viewing objects of interest, we on the Saiyan team are planning to design a portal that will help anyone in the UCI community integrate their various search efforts under one site. Just as the guidebook enriches one-on-one and group conversations, our vision is to enhance the user experience by catering to users' needs under a single roof, giving them options for selecting categories and setting filters and preferences. The study covered diverse populations based on age, gender, relationship status, and group/couple/single visits. With a similar methodology, we conducted our surveys with students and interviews with faculty and staff.
Sustainable Interaction Design for Mobile Phones: The paper presented a study geared towards designing more sustainable mobile interfaces for the future. For this, the authors conducted surveys and semi-structured phone interviews. This is the essence of our project as well. In our effort to collaborate and combine desired features from various sources, we are working on designing a sustainable portal that we anticipate students and faculty will find useful. We foresee this portal's sustainability resting on factors like trust (as it will cater only to UCI members), ease of availability and convenience, and saving cost and time. For this reason we elicited requirements from many UCI members who could be potential users of this site, in the form of online surveys, questionnaire handouts, and one-on-one interviews.
Xinlu Tong
October 28, 2012 at 10:58 pm (UTC 0)
In Huang and Truong’s research, they conducted a 34-question web survey with 79 respondents. We also used a web survey in our project, but our results may fall into two categories, because I posted our survey link on both my Facebook and Renren (Chinese Facebook) pages. The answers therefore come mostly from Chinese and American users. I think this survey can reveal more problems than the survey in the paper, which only involved people in the US and Canada.
Another similarity between Huang and Truong’s survey and ours is that we both used questions without explicit bias. I set many multiple-choice questions in our survey which covered most cases for each question. However, their paper used semi-structured phone interviews based on survey responses, while we chose interviewees who were not necessarily survey respondents. Huang and Truong chose people whose survey answers were interesting or unusual, while we sought out authorities in the relevant domain to get convincing information.
In the other paper, about museum visiting, the authors used a historic house to conduct interviews. Our research, on the contrary, used several places for interviews and observations. This is because our research questions are about language learning, and we need to audit different language classes and participate in some language clubs. In addition, the research in the paper offered participants an incentive, which helps recruit more people and improve the diversity of the sample.
Sreevatsa Sreeraman
October 29, 2012 at 4:13 pm (UTC 0)
Adding to this, we are speaking to language instructors (who are not part of the survey) to understand the problems that could be faced by someone learning a new language. While speaking to people who are learning the language is the most important part, getting a perspective on learners' problems from an expert in teaching the language will expose generalities which can be incorporated into the design. In contrast, Huang and Truong chose to speak only to the users of mobile phones. A method analogous to ours, applied to the Huang and Truong research, would be to speak to mobile phone dealers about phone usage, since they cater to a large number of people buying mobile phones daily. The dealers may have demographic data on the kinds of mobile phones sold and the reasons why people buy new phones.
Regarding the museum visiting system, the paper describes the evaluation of a system that has already been designed. Moreover, the system being evaluated is in the third iteration of its design-evaluation cycle. It would have been easier to relate to the first iteration of their design, since we are at our first iteration. Once we gather requirements and come up with a prototype, we plan to conduct a similar, albeit toned down, evaluation by getting feedback on the prototype from some of the survey participants and the language instructors we have spoken to.
Surendra Bisht
October 29, 2012 at 3:51 am (UTC 0)
The techniques used in our project are quite similar to those in Huang & Truong’s research paper. They used a web-based survey and phone interviews for their user research. Similarly, we have used an online survey to gather user responses, but instead of phone interviews, we are conducting in-person interviews. Our online survey is similar to the one in the research paper in the sense that it portrays itself as a general inquiry and does not reveal its actual intention, in order to avoid any bias in user responses.
I can also relate to the interviewing technique in the paper about the interactive museum guidebook, “Sotto Voce”, since we have used this technique as I mentioned earlier. Although we do not advertise our user study, we recruit users by approaching them and identifying whether they fit our requirements for the user research. The paper also describes an observation technique in which the researchers video-recorded users’ interactions with their companions and the guidebook. We are conducting a similar study by observing people in their workplace and gym and taking notes, although we do not do any audio/video recording as part of our observations or interviews.
Chuxiong Wu
October 29, 2012 at 5:57 am (UTC 0)
In both papers, the authors use research techniques related to the ones our group is using to gather information from stakeholders.
In the sustainable interaction design paper, the authors used a qualitative survey- and interview-based study of mobile phone ownership. The study investigated the reasons why people replace and give up their mobile devices, and also showed how people deal with old phones. The authors used a web survey and then phone interviews with participants selected from the survey. Our group uses the same strategy: we chose survey and questionnaire methods to get a broad, general overview of the needs and practices of current UC Irvine students who are seeking and hosting events. We will use both web-based and paper-based surveys to identify fundamental issues with current techniques.
We set up six questions related to our project. Unlike the phone interviews in the paper, we will interview people in person, since we are more concerned about users' real awareness. We believe face-to-face interviews will narrow the domain and help us find out users' perceived wants and needs.
In the other paper, secondary research is also applied to provide background for the analysis. This method reveals the historical development of guidebooks in museums. Similarly, we will supplement our information with additional materials. This secondary research will help us compare against the survey findings and identify potential problems in how we understand users' awareness.
Timothy Young
October 30, 2012 at 1:08 am (UTC 0)
The secondary research for this project also helps us identify how our design questions can be tweaked. Evaluating how users interact with the materials in our secondary research, and how they receive and perceive information from them, is vital to our project's understanding of how event data should be presented in a straightforward manner. Since our project also deals with event discovery, survey methods are sometimes not definitive in resolving design issues. We are finding that prototyping methods may be necessary so that users can show us what they react positively towards. For the discovery aspect of our application, it is sometimes difficult to know exactly what the user wants, as users themselves are unsure of what they are looking for in discovery. In this case, we do our best by trying to read deeper into our survey responses, and not just take them at face value.
Xinning Gui
October 30, 2012 at 2:34 am (UTC 0)
In “Breaking the Disposable Technology Paradigm: Opportunities for Sustainable Interaction Design for Mobile Phones”, the authors recruited participants by snowball sampling and distributed a brief, simple web survey via Surveymonkey.com. For our project, we also designed a survey on Surveymonkey.com and got responses by snowball sampling, so that we could gain a broad understanding of our stakeholders. Our survey was also designed concisely, like the authors’, to avoid taking too much of participants’ time. All of our questions offered multiple choices as well. In addition, the authors conducted one-on-one interviews based on the survey responses. We are also going to do interviews to dig deep into our stakeholders’ requirements, but the difference is that we are planning to adopt group interviews (also known as “focus groups”) instead of one-on-one interviews, because we want to show our paper prototype to a group of people simultaneously to generate feedback and new ideas through conversation and interaction in a short period.
In “Revisiting the Visit: Understanding How Technology Can Shape the Museum Visit”, the questionnaire participants signed a consent form. We did not provide consent forms, for the following reasons: our research is conducted for a class project, which meets the standards for exemption from the IRB process, and our survey did not involve participants’ demographic information; it was entirely anonymous and voluntary. Additionally, in this paper the researchers let participants try the guidebooks. Although we have not designed a physical product for potential users to try, we will show them our paper prototype. Moreover, the researchers invited the participants into a private space for a group interview. We will also use this kind of focus group method.
Jie
October 29, 2012 at 7:37 am (UTC 0)
In the paper about the museum visit, the authors asked visitors to use their guidebook and then interviewed them about their current and previous experiences with and without it. In our project, we also interviewed several people about their sleep habits and other possible issues related to sleep, although we don’t have any product for people to use. The responses to some questions diverge across interviewees, which makes it difficult to implement one product that satisfies everyone’s needs, and also prompts us to rethink our design question and focus on a more specific area. On the other hand, the varied responses offer us several directions for extending our design ideas.
In the paper about sustainable interaction design for mobile phones, the authors did a web survey and a series of phone interviews with participants selected based on survey responses. Our team also did a web survey (n=72) and a series of face-to-face interviews, but the participants in our interviews and web survey are different people, and we focus on university students based on our design question. In our project, the survey offers us general information on students’ sleep habits, what causes them to sleep late, etc. The in-depth interviews also let us probe individual experiences, which helps us understand the reasons and motivations behind the habits.
To sum up, the user research techniques used in the two papers are quite similar to ours: using a survey to collect a broad set of data, such as what people usually do, and using interviews to probe the reasons behind their activities.
Xiaoyue Xiao
October 29, 2012 at 8:51 pm (UTC 0)
In the paper about revisiting the visit, the authors used questionnaires and interviews to gather data on the benefits of the guidebook used in the museum. In the other paper, they used similar research methods to investigate how people deal with their used phones.
In our project, we designed a questionnaire of 10 questions trying to figure out whether people are interested in our idea, their current habits with similar products, the price they can afford, their basic requirements for the product, etc. From the copies we have collected so far, we have found some weaknesses in our questionnaire that we need to pay attention to next time we design one. The most obvious weakness is that for some questions, we did not consider listing all the possible options for people to choose from.
Then we designed one-on-one interviews with eight people to understand in detail what prospective users need the product to be: the size of the product, their expectations for it, their advice for the design team, the different roles who will use the product, their complaints about the current product, etc.
It seems that the questionnaire provides an overall perspective on the project, reflecting what people care most about in the product and their common desires for the new one. It is a quick way to learn the basic, original requirements in the shortest time. When it comes to the one-on-one interviews, our team found out some of the reasons why people chose specific options during the questionnaire phase. Interviewees also generously gave us some valuable design suggestions, as well as pointing out the unreasonable aspects of our questionnaire.
Jacob
October 30, 2012 at 3:42 am (UTC 0)
The researchers at the museum relied primarily on interviews with their participants, while the phone reuse researchers conducted a survey followed by in-depth phone interviews. Our team has made thorough use of the phone interview, though we haven’t conducted surveys.
Keep in mind that research papers, like the ones we’ve read here, typically focus on the experiment and the follow-up analysis, but don’t talk much about the design processes that created the system being tested. In addition to the interviews that we’ve conducted, our team has done a role-playing exercise and created a flowchart to map out the use of our product. I would imagine that both of those methods would be great for designing a system like Sotto Voce, but I wouldn’t expect to see that in a research paper.
We are currently in the design phase, so we can’t do any follow-up testing right now. So even when there is overlap between our methods and those in these papers, our analysis is associated with designing the prototype, while their analysis evaluates the performance of something that has already been designed.
Jared Young
October 30, 2012 at 4:12 am (UTC 0)
In the paper ‘Opportunities for Sustainable Interaction Design for Mobile Phones’, the authors conducted a 34-question web survey and a series of in-depth, semi-structured phone interviews. Interestingly, the phone interviewees were selected based on the web survey responses, while for the web survey, participants were recruited via snowball sampling. The different sampling methods for each research method are interesting: using the survey to select people to interview can be useful for finding very qualified interviewees. For our project we are conducting online surveys, interviews, and observations. Structured interviews will be conducted, as in the literature; however, the people we select to interview won’t be chosen based on the survey, though interviewees are not excluded from it. Our interview questions are also inspired by our survey questions, and are more in-depth, attempting to ‘probe’ the interviewee for more detailed information about their experiences.
In the paper ‘Revisiting the Visit: Understanding How Technology Can Shape the Museum Visit’, the authors recruited interviewees using signs located inside a visitor center. A clever tactic of using incentives was employed to recruit people: incentives like “the use of technology” or viewing something “off limits” to the general public helped bring people to sign up for the study. In contrast, our group did not think of incentives to lure people into taking our surveys and interviews. We used advertising on the web (via social networking) to recruit survey respondents, and our own personal friends, coworkers, and acquaintances to select people to interview.
Pushkar
October 30, 2012 at 4:13 am (UTC 0)
The techniques that we are using in our project are very similar to the ones used by the authors of the paper ‘Breaking the Disposable Technology Paradigm: Opportunities for Sustainable Interaction Design for Mobile Phones’. In this paper, the authors conduct a web survey of about 34 questions using SurveyMonkey. For our project, we too started off with an online survey. We created a small survey which takes about 5-10 minutes to fill out. It mainly has multiple-choice questions and some questions where participants rate their experience with certain products. Our survey also has an optional open-ended question about the challenges participants have faced when trying to buy or sell used items. The online survey link was posted on Facebook in order to reach a wider audience. In addition to the web survey, we are also handing out printed versions of it. Another way our method is similar to the one in the paper is that we are mostly interviewing participants who took the survey. During the interview we ask them in detail about their experiences with the existing buying/selling methods.
In the other paper, about ‘Sotto Voce’, the authors describe a method where the recruiter selects participants who are completely unfamiliar with both the recruiter and the interviewer. Participants are then asked to fill out a small questionnaire about their demographic details and their museum experiences. After that, they are given a guidebook and asked to visit 3 rooms of the museum mentioned in the guidebook. Once the participants have finished touring the rooms, the interviewer conducts a 10-30 minute interview. The main difference between this method and ours is that in our approach we interview participants about their experiences with products which currently exist, not with a prototype of our product; in the paper, they have participants use the device itself and ask for their opinions. Another difference is that their method uses video cameras to record the participants’ interactions, which we are not planning to do.
Chunzi Zheng
October 30, 2012 at 4:28 am (UTC 0)
In the paper “Revisiting the Visit: Understanding How Technology Can Shape the Museum Visit”, the researchers observed visitors’ activities to analyze users’ habits, and collected data by survey and interview. In the paper about Sustainable Interaction Design for Mobile Phones, the researchers used a web-based survey to learn prospective users’ preferences and conducted semi-structured phone interviews based on the survey responses.
We also used a survey to get to know more about users, but the difference is that we used only a paper-based survey. We came up with a questionnaire of 10 questions. Most of our survey questions are close-ended, so the only thing a respondent needs to do is choose the answer he or she prefers.
Our group also used one-on-one interviews to learn more about prospective users’ ideas about our product. By asking open-ended questions, we got more information about different users’ particular preferences and more details of their requirements. Some interviewees also provided creative ideas that could help us a lot.
Anirudh
October 30, 2012 at 4:36 am (UTC 0)
Sotto Voce:
In this paper the authors try to understand how museum visitors interact with each other. The sample consisted of couples/pairs who visited the museum. The participants in this activity were the recruiter, the interviewer & the visitors. The visitors chosen had no prior interaction with the recruiter or the interviewer. The first stage of their process consisted of the recruiter handing Sotto Voce devices to the pair & explaining the various features of the system, such as eavesdropping and volume control. This allowed the pair to interact with each other in 4 different ways. The duration of this stage was limited to a visit of 3 rooms. The second stage consisted of a discussion/interview with the visitors, using the (audio/video) recording of their conversation, to understand the nature of their visit, their relationship & their interaction.
At a broad level, the process followed by the authors was to conduct prototype usage & semi-structured interviews. This differs from the methods used in our project, as we are using a survey & interviews. Our first stage is completely different & provides a larger sample so that we can understand the requirements. Also, the survey we are conducting spans varied demographics (based on role, department/branch, residency status). In interviews we are using audio recording & following a semi-structured interviewing technique. The difference between Sotto Voce & our project is that in Sotto Voce users experience the proposed system, whereas in our project users provide details about their prior interactions (in buying/selling/rentals).
Breaking the Disposable Technology Paradigm: Opportunities for Sustainable Interaction Design for Mobile Phones –
In this paper the authors use surveys & interviews to understand why mobile phone users replace/recycle/discard their old phones, so that they can present ideas for sustainable mobile phone interfaces. The survey consists of 34 questions and takes 10-15 minutes to answer; it was distributed through SurveyMonkey (an online survey hosting service). In the second phase, the authors conduct semi-structured interviews in which questions are based on users’ survey responses. This helps them analyze why a user chose a particular option in the survey.
In our project we are following a similar process by conducting surveys & semi-structured interviews. Our survey primarily consists of 13 questions about how users (the UCI community) buy/sell/rent used items. We also have 3 questions which will help us understand the demographics. The survey is hosted on SurveyGizmo (an online survey hosting service) & has been distributed via Facebook & paper copies. Based on the responses we have received, we have designed ~10 interview questions which help us understand the process a user follows while buying/selling/renting. We also elicit from the user the challenges/successes in this process. Our stakeholders are faculty, students, alumni & staff, hence we have carried out surveys & interviews (still in progress) across all our stakeholders. From this understanding we aim to derive the requirements for the proposed online portal.
Martin S.
October 30, 2012 at 4:53 am (UTC 0)
In “Opportunities for Sustainable Interaction Design for Mobile Phones”, the researchers employed surveys and interviews in their data collection. Similarly, “Revisiting the Visit” described an interviewing method that asked participants to consider their museum experience with and without the use of a guidebook.
Our group intends to develop a tool to improve the process of lesson planning for teachers, particularly for the first time. Because of the very specific task with the limited participant pool we have outlined, we found surveys would not be ideal, and in situ observation would be nearly impossible. The literature helped to clarify real-world applications, and we simultaneously looked to the IDEO method cards to inform our approach. We supplemented semi-structured interviews with a flow analysis of the lesson planning process, and also performed role-playing exercises in which we attempted to create original lesson plans. The interview process informed our flow analysis, and roleplaying exposed us to difficulties that helped direct our design process.
Chandra Bhavanasi
October 30, 2012 at 5:36 am (UTC 0)
Both the papers provide us with valuable user research techniques.
Our project is in a way similar to what the sustainable mobile phone paper did, though with a few minor differences. We did conduct web surveys as in the paper, but given the constraints of this project and our time, we didn’t have as many as 34 questions, nor did we collect as many as 79 responses. Also, our interview process is not telephonic but direct, one-on-one. We plan to do Skype interviews too if a direct interview is not possible due to an interviewee’s schedule conflicts. Also, the paper picks people from the survey (our surveys, by the way, are anonymous) for telephone interviews, which is not the case in our project.
In the museum paper, they conducted interviews based on the experience of using the guidebook, which is closer to prototype usage, but in our project we didn’t have any sort of prototype. Also, the interviews in the paper are based on users’ experience before and after using the prototype, while our project deals with usage of, and interaction (or lack of it) with, any tools for budget management.
Parul Seth
October 30, 2012 at 5:54 am (UTC 0)
The relation between the techniques used in the papers and our project can be seen by highlighting similarities and differences, classified under the stages of selection, execution, and exploration & evaluation of the user research, as follows:
Selection: The Huang & Truong study used snowball sampling to recruit participants for a brief and simple online survey. The survey was also used as a basis for selecting candidates for the in-depth phone interviews. The Sotto Voce research used opportunity sampling, observing visitors at Filoli and approaching them based on the recruiter's judgment. They also made the population at the historic house aware of the ongoing study with the help of an advertisement. We are using a combination of both sampling approaches, i.e. the friend-of-a-friend approach for the online survey, along with opportunity sampling by targeting our surveys to specific online groups. Also, we are interviewing people who we think will be appropriate for covering the depth of our research; this selection is independent of whether or not a person currently uses a tool similar to ours. We are involving present users of the tool as well as novices.
Execution: Unlike Sotto Voce, we do not have written consent forms for the participants, nor are we giving participants any sort of training before interviewing them; participation in the survey is voluntary and ad hoc. Rather than doing phone interviews with people selected based on survey responses, as in the Huang & Truong paper, we are conducting one-on-one interviews in parallel with the online survey; the answers are being logged in the form of notes and audio recordings.
Exploration & Evaluation: In both studies, the data collected is referenced against data handpicked in the secondary research. This is helpful in validating the findings of the surveys and interviews, i.e. the statistical results and quotes from the participants. The use of affinity diagrams also seems intriguing for clustering and brainstorming, but we have not currently employed them for exploration and evaluation. Rather, we are adopting a similar approach for analyzing, validating and benchmarking our findings by doing a competitive product survey. We see our competitors as our baseline for requirements and a means by which we can cultivate a meaningful, engaging and usable design that is helpful for our stakeholders. We, like the Huang & Truong team, want to be creative while being relevant and grounded. Interestingly, a few of the competitor products are ones that our participants are using currently; we call them our “prime gurus”. Participant experiences with these products are proving very helpful in figuring out the features of foremost importance and the gaps that our design can fill.
Dongzi Chen
October 30, 2012 at 6:51 am (UTC 0)
The research method from “Revisiting the Visit” is hard for us to apply, because to use this kind of research we would have to have a physical prototype complete enough to use. Then we could get feedback from users’ experience to help us improve our device. Because we don’t have a physical prototype, our research approach is more similar to the mobile phone case.
The first similarity is that both of us use a survey as a research method. For our project, we are trying to figure out a way to make apartment doors safer and to make a more powerful smart key. Instead of a web survey, we did a paper survey asking potential customers some basic questions. Our questions also contain two parts: questions focused on real experiences, and hypothetical questions.
Second, we also have similar interviews. The interview questions are based on the survey responses. We conducted a set of semi-structured interviews to probe for people’s personal stories about their current key-using experience and their basic ideas regarding apartment safety. The interview guide included a series of main questions inspired by the survey responses, and each interview also included individual questions based on that participant’s survey answers.
Karen
October 30, 2012 at 6:51 am (UTC 0) Link to this comment
In terms of older readings, we are actually implementing many of the techniques we have read about in our projects, though I would say that our method matches design ethnography and contextual inquiry more than ethnography. In our techniques (e.g., interviewing target users, role-playing…), we are trying to understand what educators do when they design new courses, how they go about designing new courses, why they use existing methods, and what they want and need when they are designing new courses. We would like to understand how the design we create can fit into their lives. Though we role-played as new educators, we are not really participating with them, as ethnographers would. We also already have the clear-cut goal of using this information to create a new design or improve upon existing designs in a specific context.
In terms of this week’s readings, although we did not conduct surveys, I did incorporate some of the elements in the “Tailored Design” reading into the way I approached my interview. For instance, I emphasized my gratitude to my potential interviewees for helping me in advance, and thanked them after they agreed to participate, and after the interview. I also gave them information about what the interview was for and why I wished to interview them. I tried to make it convenient for them to participate by allowing them to communicate with me via phone, rather than in person, and being as flexible as possible with the time of the interview (my interviewees live on the East coast). The first interview I conducted (the other one should be conducted tomorrow) followed the template of an ethnographic interview as outlined in “Interviewing an Informant.” I greeted him, told him why I was interviewing him, and asked friendly questions about how he was to get him comfortable rather than launching directly into questions. Then I explained to him I was ignorant about designing courses so that he would feel like he was making a very valuable contribution. I was directive in our interaction and asked him descriptive questions (“Can you describe to me, step-by-step, the process through which you design a syllabus? How about a lesson plan?”) and structural questions (“What are the different resources that you use in creating a syllabus? How about a lesson plan?”). I would express interest in what he was saying and restate what he said to further emphasize my interest, and to make sure that I was understanding him correctly. He had a lot to say–it was a little difficult to get him to stop talking sometimes and to move onto the next question…but I feel like these methods worked overall. I wonder how the interview would have gone if I did not already know him though; it could be harder to establish rapport.
Jianlin
October 30, 2012 at 7:23 am (UTC 0) Link to this comment
As far as research methods are concerned, these two papers are similar: they both use interviews. Huang also included a survey study in order to capture the opinions of a larger population. However, the purposes of the two studies are different. Grinter was investigating how people would use the guidebook, while Huang was trying to understand people's current perceptions of mobile phone sustainability in order to reflect those ideas in designs and services. At the current stage, we will adopt interviews and surveys to investigate user requirements, which is closer to Huang's study. Huang's paper gave us a good example of how to link general user study questions to new design ideas. I guess they already had some ideas in mind before starting the research; otherwise their research would have been less likely to be approved in the first place. This situation is similar to ours: we have some design ideas in mind, but they need to be adjusted and refined through this kind of user study. On the other hand, I would consider Grinter's study part of a design idea iteration process, because they had some kind of product in hand before the study and used it to probe users' interactions with it and among themselves. Although Grinter didn't provide further analysis of the design implications, their study could serve as a primary user study of a social guidebook, and their findings can be used for future designs. In the following stage of our project, we can probably adopt a similar method.
Dakuo
October 30, 2012 at 7:45 am (UTC 0) Link to this comment
In the museum Sotto Voce paper, the authors used observation to get background information about visitors' needs. Then they combined prototype experience, questionnaires, surveys, and interviews. Compared to our project, we don't have a prototype at this point, but we should definitely have users experience our design and get feedback from them. Another difference is that we don't have incentives for users to participate in our study; most of our research has been conducted through our social connections. Other than that, we used almost all of the other research methods mentioned in this paper, such as observation and interviews.
In Huang and Truong’s paper, they collected data with two components: a 34-question web survey (n=79) and a series of in-depth, semi-structured phone interviews with participants selected based on survey responses (n=10). Our class project basically follows the same routine. Similarly, we expect to conduct about n=4 interviews given the time limitation, and our online survey has received 60 responses so far. Besides, we plan to do another 4 observation studies to compensate for the small number of interviews. Our interviews are also semi-structured, as discussed in the paper. Since all group members will conduct interviews separately, a semi-structured interview maintains a uniform format while leaving room to respond to different situations.
I think our choice of research methods is good because it considers both depth and breadth. We are using an online survey to collect data broadly, and by conducting interviews and observations we can get details of user demands, which ensures the depth of our data.
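As a rough illustration (not from the paper), the “participants selected based on survey responses” step that Huang and Truong describe could be sketched as picking a subset of respondents that covers distinct values of some screening characteristic. The field names here (`id`, `phone_age_years`) are hypothetical, chosen only for the example:

```python
# Hypothetical sketch: pick a diverse interview subset from survey
# respondents, one per distinct value of a screening characteristic.

def select_interviewees(respondents, key, max_n):
    """Pick at most max_n respondents, covering distinct values of `key`."""
    seen = set()
    chosen = []
    for r in respondents:
        if r[key] not in seen:
            seen.add(r[key])
            chosen.append(r)
        if len(chosen) == max_n:
            break
    return chosen

survey = [
    {"id": 1, "phone_age_years": 1},
    {"id": 2, "phone_age_years": 1},
    {"id": 3, "phone_age_years": 5},
    {"id": 4, "phone_age_years": 10},
]
picked = select_interviewees(survey, "phone_age_years", max_n=3)
print([r["id"] for r in picked])  # one respondent per distinct phone age
```

A real study would of course screen on several characteristics at once, but the idea is the same: the survey provides the frame from which a deliberately varied interview sample is drawn.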
Matthew Chan
October 30, 2012 at 8:00 am (UTC 0) Link to this comment
Although surveys would be a nice way to reach a large number of users, we need a more intimate view of how educators construct lesson plans. Our immediate methodologies were role playing (constructing a lesson plan ourselves) and interviews. Flow analysis was also important to highlight the activities and bottlenecks of the entire process.
Matthew Chan
October 30, 2012 at 8:02 am (UTC 0) Link to this comment
(continued)
In Huang and Truong’s paper, they conducted a survey and phone interviews. In Sotto Voce, the methodologies were more diverse, from observations to prototyping to surveys and interviews. This highlights the triangulation method discussed in class, since some methods have weaknesses that can be compensated for by other methods. For instance, interviews may surface pain points users report, but observations may reveal a deeper pain point that users did not realize.
Furthermore, at my group’s rate, we may build a prototype and evaluate and iterate on its design and functionality.
Ramraj
October 30, 2012 at 8:49 am (UTC 0) Link to this comment
The guidebook facilitated four kinds of activity: shared listening, independent use, following, and checking in. The paper also describes strategies for visiting museums. Museums are a place where technology can be used extensively. Audio guidebooks face the challenge that while they deliver content through audio, they can isolate visitors. We can use the concept of a wall in our project: we can arrange all the household utilities on virtual walls, just as Sotto Voce has objects on the walls. We can incorporate shared listening into our project so that users can speak and listen to each other via the smart apartment key. We can use the “following” technique so that all the residents of an apartment can listen to the owner, or listen to the same set of songs. We can use the “checking in” technique so that anyone who wants to know what another person is listening to can find out.
The second paper is about understanding factors such as styling, service contracts, and functionality of mobile phones. It revealed the complexity of the actions and decision-making involved in phone ownership and replacement. Better-designed mobile phone interfaces can reduce replacement and disposal rates. Sustainability is another factor we are going to consider in modeling the smart key, because the key should not need to be replaced within a short span of time. Surveys and interviews are the techniques we will use to gather people's needs and opinions about the smart apartment key.
Styling and design can lead customers to use their mobile phones for a longer time. Better designs and user interfaces, along with advanced technology, help users update their phones instead of replacing them.
Jeffrey
October 30, 2012 at 9:30 am (UTC 0) Link to this comment
Both the Grinter and the Huang papers report uses of user research techniques that are quite similar to the techniques being applied in our project. More specifically, our group has decided to conduct surveys and interviews to further understand which user requirements are crucial to achieving our design goals.
Huang’s paper reported that surveys were conducted to facilitate the process of picking participants for the interviews. This was done to ensure that the participants involved in the research possessed a diverse range of characteristics, which is somewhat similar to the methods being used in our group's project. However, our group has decided to conduct surveys alongside the interviews due to the project's time constraints; therefore, our surveys will not be used to recruit individuals for the interviews. This suits our project anyway, since our survey sample is smaller than the sample used in Huang's research, and a smaller sample limits how selective we can be about the types of people we want in our interviews. Additionally, we can be less selective because our survey has fewer questions than the survey in Huang's research. Instead, we will interview people who we think would provide depth to our research, along with friends of our friends.
In Grinter’s paper, interviews were conducted based on participants' experiences using the provided guidebook; in other words, some sort of product was presented to those being interviewed. Our project, however, does not provide those being interviewed/surveyed with a prototype for feedback. Instead, our group is analyzing and validating our observations and results by doing a competitive product comparison. Such a comparison will allow our group to see which user requirements are necessary to create an engaging, usable, and efficient design that benefits all of our stakeholders. Understanding users' experiences with current products is therefore extremely beneficial, as it provides details on which features are essential for a specific design and which concerns need to be addressed to achieve our design goals.
Ishita Shah
October 30, 2012 at 2:42 pm (UTC 0) Link to this comment
The “Understanding how Technology can shape the Museum Visit” paper argues that it is not enough to design a usable and useful system; people also want to enjoy shareable experiences. The guidebook incorporated shared listening, independent use, following one another, and checking in on each other, and we seek to build our event discovery application along similar lines. We also carried out anonymous surveys and interviews; however, we did not ask our stakeholders to sign any consent forms or share their demographic information.
In the Huang and Truong paper, the authors gathered responses through snowball sampling. Given the time constraints of this project, we carried out quick multiple-choice surveys. We also conducted focus group interviews as opposed to one-on-one interviews. Prototyping methods will turn out to be very useful for our project, so after building a paper prototype we will get feedback from users and do further analysis.
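As a toy illustration (an assumption about how one might simulate it, not anything from the paper), snowball sampling can be sketched as starting from a few seed participants and following their referrals until a target sample size is reached. The names and referral graph below are made up:

```python
# Toy sketch of snowball sampling: start from seed participants and
# follow referrals breadth-first until the target sample size is reached.
from collections import deque

def snowball_sample(seeds, referrals, target_n):
    """referrals: dict mapping a participant to the people they refer."""
    sampled, queue = [], deque(seeds)
    seen = set(seeds)
    while queue and len(sampled) < target_n:
        person = queue.popleft()
        sampled.append(person)
        for friend in referrals.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append(friend)
    return sampled

referrals = {"ana": ["ben", "cam"], "ben": ["dia"], "cam": []}
print(snowball_sample(["ana"], referrals, target_n=3))  # ['ana', 'ben', 'cam']
```

The obvious caveat, which is why surveys often complement it, is that a snowball sample inherits the biases of the seeds' social networks.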
Jinelle D'souza
October 30, 2012 at 3:53 pm (UTC 0) Link to this comment
Our project deals with helping people learn a new language. In the paper “Breaking the Disposable Technology Paradigm,” they spoke only to those who owned mobile phones. I think that with regard to learning a new language, in order to come up with a unique approach, we should also speak to people who do not have a mobile device, or who do not rely on one much, and ask them how they would go about learning a new language. The reason is that there could be a group of people who would find the screen of their mobile device too small for such a hobby.
For their data gathering, they undertook web surveys aimed at collecting a broad set of data. In our project, we will focus on smaller groups, since we need specifics from them, and we prefer the interview method of data gathering as more reliable than surveys because we would like to capture non-verbal cues as well. They did use interviewing, but only after their survey, and they used the survey questions to guide them.
In the paper “Revisiting the Visit,” I stumbled upon an important design constraint, i.e., audio. In learning a new language, pronunciation will definitely be a key factor in the design, so the audio system for a mobile device would be headphones, just like their use for music. They also gathered data through observation of people in the museum. In our project this will not be possible, since we do not know during which leisure part of the day people choose to do this.
Yao
October 30, 2012 at 4:23 pm (UTC 0) Link to this comment
In the mobile reuse paper they had a survey, which differs somewhat from our method. I think our basic idea is the same: no speculative or hypothetical questions, in order to avoid bias. Later they conducted semi-structured interviews based on the survey responses, but in our interviews we used little information from our survey results. One reason is that we ran our survey and interviews concurrently; another is that we have little knowledge of our research question, so we want to use both the survey and interviews as a quick way to understand our stakeholders. They also had good diversity in their survey sample, but we focus more on students, who we think might be our biggest stakeholders. After finishing their interviews they built an affinity diagram, which is really cool and might be very useful, and I hope that in our project we can gather enough information to do a similar analysis.
As for the museum paper, they had a recruiting process and an incentive to participate, and the participants were later asked to visit the museum for the “live” study. However, since our topic touches on privacy issues, we mostly gather user experiences by asking interviewees to tell us their stories. We therefore asked our participants whether we could take some pictures of their rooms, hoping to get more information, and because it is a little hard for us to get privacy-related information, we might conduct a second interview based on our survey and the first interview.
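A first pass at the affinity-diagram analysis mentioned above could be sketched as grouping interview notes under researcher-assigned theme labels and then reviewing each cluster. The notes and theme names here are invented for illustration; real affinity diagramming is iterative and the themes emerge from the data rather than being fixed up front:

```python
# Hedged sketch of a first-pass affinity grouping: cluster interview
# notes under a theme label the researcher assigns to each quote.
from collections import defaultdict

def affinity_groups(notes):
    """Group (theme, quote) pairs into a theme -> list-of-quotes mapping."""
    groups = defaultdict(list)
    for theme, quote in notes:
        groups[theme].append(quote)
    return dict(groups)

notes = [
    ("privacy", "I keep my room locked"),
    ("convenience", "I always forget my key"),
    ("privacy", "I don't want photos of my room shared"),
]
print(affinity_groups(notes))
```

In practice the grouping is done physically with sticky notes on a wall; the value is in the team debating where each note belongs, which a script like this only records after the fact.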
Shih Chieh Lee
October 31, 2012 at 8:49 am (UTC 0) Link to this comment
In the mobile phone paper, they used both a quantitative and a qualitative method to complement each other, combining a web survey and phone interviews. Although we also conducted a web survey and interviews, the interviews presented in the paper were conducted based on the survey responses, which is different from our method. With the limited time of our project, we could only conduct the different methods simultaneously. This limitation leads to iteration: we will do another interview that includes questions inspired by the survey responses and the previous interview answers.
In the Sotto Voce project, they conducted three design-evaluation iterations, which reminds me of our methods. Although the techniques are different, the iteration is inevitable. In our project, we interviewed several people based on our first design questions. When we tried to collect data about people's sleepiness, compared to the techniques mentioned in the paper, it was hard for us to ask about their feelings or experiences while sleeping. Besides, after we analyzed the data, we found that the complexity of actions and activities couldn't be revealed within a few questions. Hence, we also used a video camera to record the participants' activities and tried to explore the users' implicit thoughts and impressions of their experiences.