
Week 8: Research and Ethics

  • Writer: danielclarke1981
  • Apr 2, 2022
  • 7 min read

Updated: Apr 13, 2022

This week's topic for study is Research and Ethics. There was a lot of content to process this week, and since we are in the middle of a rapid ideation session, this post may well be published late.


After watching resources on ethics approval, we were asked to consider different research scenarios. I was surprised that any user participation in research is automatically deemed a medium risk, regardless of the subject matter. In that case, I envisage I will become well acquainted with the ethics review form moving forward. It made me contemplate the implications of user research if I were ever to develop my second ideation concept further. The intended audiences are children and social work professionals. As the application deals with emotions and processing trauma, users may become highly stressed, and this would firmly place it in the high-risk category. I would be concerned that the testing would have a psychological impact on the child participants, and I would need to ensure that both a Subject Matter Expert (SME) and a trained child psychologist were on hand throughout the user testing phase.


We were also asked to watch another video, this time by Erik Geelhoed, who, alongside a career in academic research, worked at Hewlett Packard's research lab and has an impressive list of achievements. The video was an introduction to user and audience research, and I felt slightly short-changed when I couldn't find the follow-up videos Erik alluded to, but I have little doubt this will be revisited in one of the next course modules.


Erik went on to briefly explore qualitative and quantitative methods of user research, and I was asked to consider how audience research could be incorporated into my recent rapid ideation session. To address this, I would have to consider two different user groups - the social worker and the child - and each would require a significantly different approach. Younger children may have difficulty articulating what they are trying to achieve with the app from a usability perspective, especially as the social worker is already actively asking questions as they proceed through it; this additional load on the child could be quite overwhelming. Considering the child is already trying to process emotions brought on by particular situations and articulate them through drawing, it would be impossible to ask them for a cognitive walkthrough without compromising the application's use and disrupting its participatory nature, and the child may not have the vocabulary to communicate their thoughts in any case. Instead, I believe simple observation would be more beneficial. The social worker, however, would be able to articulate much more, and I would explore the use of individual interviews, focus groups, and potentially a user diary. A softer approach with the child may be possible through an ethnographic interview using a process known as Contextual Inquiry (Beyer and Holtzblatt, 1998), which frames the user as the expert and the interviewer as the eager novice. Its distinctive features are:

  • Interacting with the user in their own natural environment rather than a testing suite

  • Keeping the tone of the interview collaborative, alternating between observation and discussion

  • Interpretation - the tester should read between the lines and analyse the process holistically to understand the design implications, while avoiding assumptions based on their own reading of the facts without verification from the user

  • Focus - the interviewer needs to subtly direct the interview to keep it on point while avoiding a rigid set of questionnaire-style questions.

Many of these traits could be presented as simply taking an interest in what the child is doing, particularly with the drawing element of the app. However, I still feel there is a very fine balance between observation and disruption that would need to be carefully navigated. For quantitative methods, I would use questionnaires for the social workers but would minimise their use with the child. Electroencephalography (EEG) and galvanic skin response tests would be distressing for a child and would be counterproductive to understanding any therapeutic benefits of the app.


We were also asked to watch a video by Alcwyn Parker regarding ethics. As suggested, I again looked at my proposed app and laid out the costs and benefits in a relativist table to see the impact the research would have on the child participant as well as its benefits:


Cost to the Individual

  • Could cause the child to relive trauma

  • Could cause embarrassment or humiliation to the child

  • Could frustrate the child

  • Could distress the child by surfacing emotions they are not ready to deal with yet

Benefits

  • Discover new ways to explore the child's history

  • Build rapport with the child

  • Learn more about the children's motor skills with regard to the app

  • Learn more about the way children process the application and adapt it for different VARK styles

  • Could learn more about potential accessibility requirements for the application

  • Could help the child externalise and process trauma

Looking at the ethical dilemmas the video highlights, there would be some grey areas around consent with child participants if the application went out for field testing. The app would be an extension of tools the social worker already uses, so informed consent would ideally come from the parents or guardians, who may be in a highly charged state given the nature of the interaction. This complicates consent for field testing and ethnographic studies in this area: parents may already be highly stressed by the process and, possibly for self-preservation purposes, may be inclined to block the research.


Other dilemmas I considered involved invading participants' privacy, as the tool may be used to probe the child's traumatic history, and this could also affect the child's self-worth and esteem. I would be remiss to offload many of these responsibilities onto the social worker interviewing the child; there would need to be a balance between allowing the social worker to do their job and my need, as the researcher, to manage the research process. The subject matter is highly sensitive and may prove ill-suited to field research with genuinely traumatised individuals. The research may be better suited to controlled conditions, with children using the app simply to test the functionality and usability of the application's UI, without the probing questions that would accompany it in the field. If the research did go ahead, I would also need to ensure that, given the nature of the app, all user participation data was anonymised.
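To make that last point concrete, below is a minimal sketch of what an anonymisation step might look like before any analysis takes place. The field names and the salted-hash scheme are my own assumptions for illustration, not part of any prescribed framework:

```python
import hashlib
import secrets

# A random salt generated once per study and stored separately from the
# data, so pseudonyms cannot be reversed by re-hashing known identifiers.
STUDY_SALT = secrets.token_hex(16)

def pseudonymise(participant_id: str) -> str:
    """Replace a real identifier with an irreversible pseudonym."""
    digest = hashlib.sha256((STUDY_SALT + participant_id).encode())
    return digest.hexdigest()[:12]

def anonymise_record(record: dict) -> dict:
    """Drop direct identifiers and pseudonymise the participant ID."""
    direct_identifiers = {"name", "date_of_birth", "case_number"}
    cleaned = {k: v for k, v in record.items()
               if k not in direct_identifiers and k != "participant_id"}
    cleaned["participant"] = pseudonymise(record["participant_id"])
    return cleaned
```

The key design choice in a sketch like this is that the salt lives apart from the dataset, so even someone holding the anonymised records cannot recover which child a pseudonym refers to.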


In the extended resources for this week, the content referenced the 2012 Facebook Emotional Contagion study which was carried out to establish if emotions were contagious online. For one week, Facebook manipulated the feeds of over half a million users both positively and negatively with little regard for the ethical implications. The findings provided statistically significant evidence of emotional contagion via social networks (Kramer et al., 2014).


While reading about the study (which I found abhorrent), I was struck by an excerpt from a blog post by Tal Yarkoni, director of the Psychoinformatics Lab at UT Austin, in defense of Facebook:

Data scientists and user experience researchers at Facebook, Twitter, Google, etc. routinely run dozens, hundreds or thousands of experiments a day, all of which involve random assignment of users to different conditions. Typically, these manipulations aren’t conducted in order to test basic questions about emotional contagion; they’re conducted with the explicit goal of helping to increase revenue.

(Yarkoni, 2014)

This was a jolt to the system. Personally, I have conducted half a dozen A/B tests using Google Optimize for my current employer, with little regard for the ethical considerations of the practice. As a commercial entity, the goal is to increase revenue by improving conversions. The changes may have been minor - a button colour change or the positioning or wording of a link - but put bluntly, the goal was to manipulate the user for the benefit of the business. In effect, hundreds of visitors to the site have been uninformed participants in experiments. The fact that they gave no consent started to make me feel uneasy. How do you conduct A/B testing ethically, and is there a framework for doing so? Would all A/B testing of a website be considered high risk by the Research Integrity and Ethics Sub-Committee?
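For context on the mechanics, the random assignment at the heart of such a test is trivially simple. Here is a rough sketch of deterministic variant bucketing (the function and variant names are hypothetical; tools like Google Optimize handle this internally, but the principle is the same):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("A", "B")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing the experiment and user IDs together means each visitor
    always sees the same variant, without storing any extra state about
    them - which is also why no consent step is ever triggered.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# e.g. assign_variant("visitor-42", "checkout-button-colour") -> "A" or "B"
```

The unease comes from exactly this frictionlessness: enrolment in the experiment is a side effect of visiting the page.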


Some solace came from an article I read by Raquel Benbunan-Fich regarding the unsavoury Facebook experiments (and those of several other offenders) and how they differ from classic A/B testing:


Unlike typical forms of A/B testing, where two versions of the same website are presented to different users to evaluate interface changes, algorithm modification is a deeper form of testing where changes in program code induce user deception. Thus, we propose to call this new approach C/D experimentation to distinguish it from the surface-level website evaluation associated with A/B testing

(Benbunan-Fich, 2016)


Benbunan-Fich goes on to justify A/B testing over C/D testing on the grounds that the former's objective is to produce better and more accurate results for all users, whereas with the latter, changes are made to falsify and distort the experience of some users in the name of research (Benbunan-Fich, 2016). This certainly went some way towards alleviating my concerns: the experiments I conducted weren't manipulation or deception in this sense, and no dark patterns were employed (Falbe et al., 2017; Mathur et al., 2021). The changes were for the benefit of both the user and the business.
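To illustrate what Benbunan-Fich's "surface-level website evaluation" amounts to in practice, the analysis behind a typical A/B test is just a comparison of two conversion rates. A minimal sketch of a two-proportion z-test follows, with invented counts purely for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int) -> tuple:
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented example: variant B converts 120/1000 visitors vs A's 100/1000.
z, p = two_proportion_z_test(100, 1000, 120, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Nothing in that arithmetic touches the content users see; the ethical weight sits entirely in what the two variants actually do, which is Benbunan-Fich's point.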


There was a lot to process this week, and since we have an entire module based on research, this was a nice taster of things to come. It was interesting to look at my project through the lens of research, and it made me appreciate and contextualise the subject matter more.



References



Benbunan-Fich, R., 2016. The ethics of online research with unsuspecting users: From A/B testing to C/D experimentation. Research Ethics, [online] 13(3-4), pp.200-218. Available at: <https://journals.sagepub.com/doi/full/10.1177/1747016116680664> [Accessed 18 March 2022].


Beyer, H. and Holtzblatt, K., 1998. Contextual Design: Defining Customer-Centered Systems. San Francisco, Calif.: Morgan Kaufmann.


Falbe, T., Andersen, K. and Frederiksen, M., 2017. White Hat UX. Freiburg: Smashing Media AG, p.38.


Kramer, A., Guillory, J. and Hancock, J., 2014. Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, [online] 111(24), pp.8788-8790. Available at: <https://www.pnas.org/doi/epdf/10.1073/pnas.1320040111> [Accessed 18 March 2022].


Mathur, A., Mayer, J. and Kshirsagar, M., 2021. What Makes a Dark Pattern... Dark? CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, [online] (360), pp.1-18. Available at: <https://arxiv.org/pdf/2101.04843.pdf> [Accessed 19 March 2022].


Yarkoni, T., 2014. In defense of Facebook. [online] talyarkoni.org. Available at: <http://www.talyarkoni.org/blog/2014/06/28/in-defense-of-facebook/> [Accessed 18 March 2022].


Cover image by UX Indonesia on Unsplash