Intro:
- Protecting Artificial Team-Mates: More Seems Like Less
- Merritt, Tim, and Kevin McGee. (2012). Protecting Artificial Team-Mates: More Seems Like Less. Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (CHI 2012), 2793-2802.
- Author Biographies:
- Tim Merritt is a PhD student at the NUS Graduate School for Integrative Sciences & Engineering in Singapore. He began his PhD in 2008 and studies under Kevin McGee.
- Kevin McGee is an associate professor at the National University of Singapore, where he teaches in the Department of Communications and New Media. His research focuses on the design, implementation, and study of partner technologies for entertainment.
The goal of the research is to determine how cooperatively human gamers behave toward artificial intelligence teammates in gaming situations. The belief a gamer holds about the identity of a teammate has a tremendous impact on behavior, regardless of the teammate's true identity. Thus, players' behavior and their interpretations of game events must be examined together, since the two can diverge.
The study focused on gamers' behavior in a game that gives players the option of "drawing gunfire" away from a teammate onto themselves. Each participant played two sessions, with an A.I. teammate each time. However, during the second game, researchers told the participants that they were playing with a human teammate. For shorthand, this condition will be referred to as PH, for "presumed human." The game interface looks similar to the figure below:
The measurements of the game involved the actual number of times the player decided to "draw gunfire" and the number of times the player reported drawing gunfire after the gaming session was over. After the experiment, each participant was asked a series of eleven questions to gauge self-evaluation, preexisting stereotypes, perceived pressures, and explanations of observed behaviors. The research concluded that humans were more cooperative with A.I. teammates than with PH teammates. However, this contrasts with the self-reports at the end of the game, in which gamers declared themselves more cooperative with the PH teammates.
Related work not referenced in the paper:
1) "Developing & Validating a Synthetic Teammate" by Dr. Christopher W. Myers
2) "Real-time team-mate AI in Games" by McGee and Abraham
3) "Teammates and Trainers: The Fusion of SAF’s and ITS’s" by Schaafstal, Lyons, and Reynolds
4) "TeamMATTE: Computer Game Environment for Collaborative and Social Interaction" by Thomas and Vlacic
5) "Behavior Modeling in Commercial Games" by Diller
6) "Evolution of Human-Competitive Agents in Modern Computer Games" by Priesterjahn, Krammer, Weimer, and Goebels
7) "Applying Collaborative Intelligence to RoboCup" by Carrera
8) "Approaches to measuring Difficulties in Computer Games" by Costello
9) "The Evolution of Abstract Resource Sharing Dilemmas Computer Games" by Cunningham
10) "Team Based Behaviour in Artificial Intelligence for Real Time Strategy Games" by Burke
The work in most of these papers is novel. Although most are tied to the gaming industry, they can be instructive for real-life human-computer interaction. However, the one shortcoming I would change is the lack of a combination of sophistication and relevance: most of the papers, including this one, offered either sophisticated techniques or relevant, useful findings, but never both. In these papers, the related work sections were thorough and helped direct me to other similar sources on the topic of artificially intelligent teammates in game-type environments. In essence, what sets this paper apart is its evaluation of humans and presumed humans playing on the same team, and of how the humans evaluate that experience.
Evaluation:
In order to evaluate the results, paired-samples t-tests were used to compare the data logged during both sessions. The researchers recorded an objective, quantitative measure: the number of times the human player drew the gunman's fire away from their teammate, which was higher for the A.I. teammate. The questionnaire at the end of the game then helped dissect the subjective aspects of the research. The first question was quantitative but subjective: it measured the share of gamers who thought they had helped the PH teammate more than the A.I. teammate, which was 71%. One reason offered for this was a sense of empathy.
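To make the statistical comparison concrete, below is a minimal sketch of a paired-samples t-test in Python using SciPy. The per-participant counts are invented for illustration; the paper does not publish its raw data here.

```python
# Minimal sketch of a paired-samples t-test comparing the two sessions.
# The counts below are invented for illustration; they are NOT the
# study's actual data.
from scipy import stats

# Number of times each participant "drew gunfire" in each condition
# (each participant appears once in each list, in the same order).
ai_draws = [12, 9, 15, 11, 14, 10, 13, 12]  # known A.I. teammate
ph_draws = [10, 7, 12, 9, 11, 8, 10, 9]     # "presumed human" teammate

# Paired test, because each participant played both conditions.
t_stat, p_value = stats.ttest_rel(ai_draws, ph_draws)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A paired test is the appropriate choice here because each participant contributes a measurement in both conditions, so the within-person differences are what get tested.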
The remaining questions probed the participants' subjective impressions. They were instructed to evaluate what they thought the teammate was "thinking" at the time or what its objectives were. The researchers analyzed these responses qualitatively, since the participants either provided open-ended answers or rated a given question on a scale from one to five. In essence, the researchers did an effective job of utilizing both quantitative and subjective results.
Discussion:
All in all, the paper brought about some startling discoveries about how humans interact with computers and with presumed humans. Although players wanted to appear more loyal to their fellow species, they actually aided the A.I. teammates more than the presumed humans. The authors attribute this altruistic behavior to participants perceiving the A.I. as inferior to themselves, and thus in greater need of help. A variety of other explanations are offered, but this is the most logical.
My take on the research is that the authors touched upon a hidden gem in the collaboration between human and machine. However, their experiment was trivial and extremely primitive; I would not base any firm conclusions on work of such a simplistic nature. I would, though, definitely want the researchers to conduct their experiment at a deeper level, under more realistic conditions. Their evaluation was mostly subjective, but they did a proper job of analyzing the feedback and noting the limitations. In essence, this topic provides some interesting insights and should be examined further.