Living Beyond Limits Emotional Coding Lab – Stanford

Project Description

by Janine Giese-Davis, Ph.D.

Principal Investigator: Dr. David Spiegel
Project Director: Janine Giese-Davis, Ph.D.
Co-Creators: Bita Nouriani, M.F.C.C., Jennifer Boyce, B.A., Sanjay Chakrapani, B.A., Diana Edwards, B.A., Barbara Symons, B.A., Julie Choe, B.A., Casey Alt, B.A., David Weibel, M.B.A., Jane Benson, M.S.W.

History

In 1995, I was awarded a postdoctoral fellowship grant (by the Breast Cancer Research Program of California) to create a technology for studying change over time in the emotional expression of metastatic breast cancer patients as they participate in David Spiegel's supportive-expressive groups. I was also funded by a portion of David Spiegel's MacArthur Foundation Mind-Body Network grant to extend this coding system to investigate the social interaction linked to emotional expression in these groups. My colleagues and I have created a new type of system for coding emotion and behavior in group settings by layering 5 separate levels of coding; we make 5 passes through each videotape to code the full extent of our system. In this way, we will eventually be able to understand not only how emotional expression changes over time in the group, but also how supported each woman is by the group in her emotional expression, and whether particular topics are more likely to speed the therapeutic process.

Technical Aspects

[Photo: Vickie Chang and Barbara Symons in the Emotional Coding Lab]

Hardware/Software: We code through a software/hardware connection (James Long System) linking a VCR with the computer. The software samples the keyboard 60 times per second, making the collection of continuous data possible. We therefore have frame-by-frame coded data, but the task of the coder is merely to strike the key associated with a particular code. Layering of the data is accomplished by merging time-synchronized codes in the multiple layers.
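
To make the layering concrete, here is a minimal sketch, in Python, of how per-frame key samples from separate coding passes could be merged by time. The data structures, layer names, and codes are illustrative assumptions, not the James Long System's actual format.

```python
# Hypothetical sketch: merging time-synchronized coding layers frame by frame.
from dataclasses import dataclass

SAMPLES_PER_SECOND = 60  # the keyboard is sampled 60 times per second

@dataclass
class CodedFrame:
    frame: int   # frame index since the start of the tape
    layer: str   # e.g. "speaker", "listener", "emotion" (assumed labels)
    code: str    # the key the coder was striking during this frame

def merge_layers(*layers: list) -> dict:
    """Merge time-synchronized layers into one record per frame."""
    merged: dict = {}
    for layer in layers:
        for sample in layer:
            merged.setdefault(sample.frame, {})[sample.layer] = sample.code
    return merged

# Frame 120 (two seconds into the tape) carries codes from two separate passes.
speaker_pass = [CodedFrame(120, "speaker", "W3")]       # woman 3 is speaking
emotion_pass = [CodedFrame(120, "emotion", "sadness")]  # her speech coded as sadness
print(merge_layers(speaker_pass, emotion_pass)[120])
# {'speaker': 'W3', 'emotion': 'sadness'}
```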

Sampling:

  1. Rolling Enrollment in the Study over a Period of 7 Years
  2. Dates from the Baseline, 4-month, 8-month, and 12-month Follow-ups Were Used to Time-link the Sampling of Videotapes with the Cortisol and Questionnaire Data
  3. Four Tapes per Follow-up per Woman Were Selected Based on Verified Attendance in Group
  4. All Videotapes in the Current Sample Have Been Coded by at Least Two Coders and Have Passed a Kappa Reliability Criterion Above .60 (see the kappa sketch after this list)
  5. Average Kappa for the Videotapes Included in this Sample = .67 (Range = .595 - 1.0)
  6. 152 Videotape-by-Woman Segments Currently Completed
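
As a rough illustration of that reliability check, the sketch below computes Cohen's kappa between two coders' frame-by-frame codes for the same segment. Only the .60 criterion comes from the procedure above; the emotion labels and example data are hypothetical.

```python
# Hypothetical sketch of the inter-coder reliability check (Cohen's kappa).
from collections import Counter

def cohen_kappa(codes_a: list, codes_b: list) -> float:
    """Chance-corrected agreement between two coders' frame-by-frame codes."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Example frame-by-frame codes from two coders (labels are illustrative).
coder1 = ["sadness", "sadness", "anger", "neutral", "sadness", "neutral"]
coder2 = ["sadness", "sadness", "anger", "sadness", "sadness", "neutral"]
kappa = cohen_kappa(coder1, coder2)
print(f"kappa = {kappa:.2f}; passes the .60 criterion: {kappa > 0.60}")
```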

Coders are blind to hypotheses: I have been particularly careful to guard against biasing results toward our hypotheses by blinding coders in the following ways:

  1. Because we intend to test hypotheses related to change over time, we assigned a random number to each videotape so that coders would be less likely to know the order, date, or even the year when the videotape was made (a sketch of this random assignment follows the list).
  2. Coders are prohibited from knowing any hypotheses while they are active coders.
  3. Upon leaving the lab, coders are debriefed about the specific hypotheses; to date, none has guessed them.
  4. Coders are prohibited from sharing with each other knowledge of a specific group member's death, or from naming specific events discussed on tape.
  5. Coders were unaware with whom their tapes would be compared for kappa.
  6. Tapes were assigned randomly in ways that would make it difficult to establish an order for the sequence of events in the group.
  7. Coders discussed emotional reactions to the viewing of these videotapes as a regular part of weekly lab meetings to reduce secondary post-traumatic stress reactions. During these discussions, coders could talk about their reactions to events on tape, but could not name a particular woman or discuss her disease course. A licensed social worker associated with the lab was available at any time for confidential counseling for secondary stress reactions.
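
A hypothetical sketch of the random-number assignment described in item 1 is given below. The ID format, file name, and record layout are assumptions for illustration, not the lab's actual system; the point is simply that coders see only the random IDs while the key linking IDs to dates stays with the study staff.

```python
# Hypothetical sketch: blinding coders to tape order by assigning random IDs.
import csv
import random

def blind_tapes(tape_labels: list, key_file: str = "master_key.csv") -> dict:
    """Return {original tape label -> random ID}; the key file stays out of coders' reach."""
    random_ids = [f"T{n:04d}" for n in random.sample(range(10_000), len(tape_labels))]
    key = dict(zip(tape_labels, random_ids))
    with open(key_file, "w", newline="") as f:
        csv.writer(f).writerows(sorted(key.items()))
    return key

# Coders receive tapes labeled only with the T#### IDs.
print(blind_tapes(["group2_1996-03-14", "group2_1996-07-11"]))
```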

Coding Emotion in a Group Setting

  1. Code Only When a Person Is Speaking
  2. First Code the Videotape for Who Is Speaking and Who Is Listening (Backchanneling)
  3. Average Time to Code One 1-2 Hour Videotape for Speaker = 4.04 Hours (Range = 2.25 - 10.00 Hours)
  4. Speaker and Listener Coding Are 2 Separate Passes Through the Videotape
  5. Average Time to Code One 1-2 Hour Videotape for Listener = 7.07 Hours (Range = 5.0 - 10.00 Hours)
  6. Emotion Coding Is Completed by Bringing the Speaker Coding into the Edit Window, Then Fast-Forwarding the Tape to Each Place Where the Woman Being Coded Is Speaking (see the sketch after this list)
  7. Coders Then Emotion-Code Those Segments, One Woman at a Time
  8. Average Time to Code One Woman as Speaker per Tape = 1.5 Hours (Range = .25 - 5.5 Hours)
  9. Average Time to Code All "Listeners" for Emotion per Tape = 6.25 Hours (Range = 5.0 - 7.0 Hours)
  10. Average Time to Code All "Listeners" for Emotion per Tape = 3.06 Hours (Range = 2.0 - 5.0 Hours)
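
The sketch below illustrates, with hypothetical data structures, how the speaker-coding layer can be used to locate the places on a tape where one woman is speaking, so that only those segments need to be emotion coded. It is not the James Long System's edit window, just a Python illustration of the workflow.

```python
# Hypothetical sketch: using the speaker layer to find one woman's speaking segments.
FRAMES_PER_SECOND = 60

def speaking_segments(speaker_layer: list, woman: str) -> list:
    """Return (start, end) times in seconds during which `woman` holds the floor."""
    segments, start = [], None
    for frame, code in enumerate(speaker_layer + ["<end>"]):  # sentinel closes an open run
        if code == woman and start is None:
            start = frame
        elif code != woman and start is not None:
            segments.append((start / FRAMES_PER_SECOND, frame / FRAMES_PER_SECOND))
            start = None
    return segments

# One speaker code per frame: who is speaking during the first six seconds of tape.
layer = ["W1"] * 120 + ["W3"] * 180 + ["W1"] * 60
print(speaking_segments(layer, "W1"))  # [(0.0, 2.0), (5.0, 6.0)]
```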

Coders Contributing Valid Data (in order of appearance):

Jennifer Boyce, B.A.
Sanjay Chakrapani, B.A.
Barbara Symons, B.A.
Diana Edwards, B.A.
Julie Choe, B.A.
Kris Kamikawa, B.A.
Casey Alt
Grace Cheng, B.A.
Kelly Evans
Caryn Bernstein
Jennifer Beck, M.A.
Amy Troppman, B.A.
Tiffany Chang
Jocelyn R. Ibanez
Lisa Fleisher
Monique Leroux
Joan Chiao
Dan Chavira, M.A.
Michelle Tsuda
Eunice Chung
Vickie Chang, B.A.
Laurel Hill, M.A.
David Weibel, M.B.A.
Caroline Perry
Elizabeth Myers
Kristina Roth, B.A.
Lindsay Rene Gervacio
Negar Azihi

