Connie Yuan
Email: <yy239@cornell.edu>
Andrea Stevenson Won (principal contact)
Email: <asw248@cornell.edu>
Nicole Tan, <nat34@cornell.edu>, is setting up a team for this project. If you are interested in joining the team, please contact her.
The project explores how virtual reality can be designed and redesigned to support intercultural collaboration. The project is led by two professors in the Department of Communication: Dr. Connie Yuan, who studies organizational communication and intercultural communication, and Dr. Andrea Stevenson Won, who studies embodiment in virtual and mixed reality. Both are also members of the Field of Information Science.
Specifically, this project will modulate the verbal and nonverbal cues available through appearance, gesture, voice, and accent to users interacting as avatars in an immersive virtual reality environment. By modulating cues that may lead users to misunderstand each other, we aim to create a level playing field that helps optimize communication between users from different groups, including groups that differ by gender, culture, or personality trait (for example, introverts vs. extroverts). We are particularly interested in working with groups that are not co-located, so building a networked environment that functions well across international borders, for example between the U.S. and China, would be especially valuable.
The clients are looking for a team to complete the following tasks using consumer virtual reality equipment, which allows for head and hand position and orientation tracking:
1. Build a collaborative virtual environment, including a repertoire of culture-themed backgrounds, so that users can easily change a meeting's cultural setting to suit their personal tastes.
2. Build a repertoire of options that allow users to change their avatars' appearance.
3. Create built-in tools to track and export avatars' behavior in VR (a minimal sketch follows this list).
4. Create the ability to transform participants' movements, such that participants' avatars automatically match each other's gestures or postures (see the C# sketch at the end of this description).
5. Add a module that automatically turns conversations in VR into on-screen captions.
6. Add a module that helps reduce accents for non-native speakers.
Of the specific tasks outlined above, Tasks 1-4 are essential to the success of the project; Tasks 5 and 6 are desirable additions.
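As a concrete illustration of Task 3, the sketch below shows one way behavior tracking and export might be approached in Unity. It is a minimal example under stated assumptions, not a prescribed design: the BehaviorLogger class name, the Inspector-assigned head and hand Transforms, and the CSV format are all illustrative, and a production version would also need to record the remote partner's avatar and buffer writes for longer sessions.

```csharp
using System.IO;
using UnityEngine;

// Minimal sketch of Task 3: sample the tracked head and hand transforms each
// frame and export them as a CSV for later analysis. The Transforms are
// assumed to be assigned in the Inspector from the VR camera rig.
public class BehaviorLogger : MonoBehaviour
{
    public Transform head;       // e.g., the headset camera
    public Transform leftHand;   // e.g., the left controller
    public Transform rightHand;  // e.g., the right controller

    private StreamWriter writer;

    void Start()
    {
        // persistentDataPath is writable on all platforms Unity supports.
        string path = Path.Combine(Application.persistentDataPath, "avatar_log.csv");
        writer = new StreamWriter(path);
        writer.WriteLine("time,node,px,py,pz,rx,ry,rz,rw");
    }

    void Update()
    {
        Log("head", head);
        Log("leftHand", leftHand);
        Log("rightHand", rightHand);
    }

    private void Log(string node, Transform t)
    {
        Vector3 p = t.position;
        Quaternion r = t.rotation;
        writer.WriteLine($"{Time.time},{node},{p.x},{p.y},{p.z},{r.x},{r.y},{r.z},{r.w}");
    }

    void OnDestroy()
    {
        // Flush and close the file when the session ends.
        if (writer != null) writer.Close();
    }
}
```

Sampling in Update ties the log to the rendered frame rate; per-frame position and rotation of the head and hands are generally enough to reconstruct gross posture and gesture after the fact.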
This project is a collaboration using the equipment of the Virtual Embodiment Lab in the Department of Communication. This equipment (for example, the Oculus Rift/Touch or HTC Vive systems) allows for head and hand position and orientation tracking. Generally, the virtual environments in this lab are created using the Unity 3D game engine. Models are purchased or created in 3D modeling programs and imported into the virtual environment. While some functionality already exists in the plug-ins available through SteamVR (for example, the team will not need to create the "cameras" for the headset), the specialized interactions described above will need to be scripted, generally in C#.
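To give a sense of what such C# scripting looks like, here is a minimal sketch of the movement-transformation idea from Task 4: the hand pose rendered on a participant's avatar is blended between the participant's own tracked motion and the partner's, so that the two avatars' gestures gradually converge. The class and field names are illustrative assumptions, and in the real system the partner's pose would arrive over the network rather than from a local Transform.

```csharp
using UnityEngine;

// Minimal sketch of Task 4: nudge a participant's rendered hand toward the
// partner's hand pose so that the two avatars' gestures converge. All field
// names are illustrative; the partner's pose would be networked in practice.
public class GestureBlender : MonoBehaviour
{
    public Transform trackedHand;   // the local participant's real hand
    public Transform partnerHand;   // the remote partner's hand (networked)
    public Transform avatarHand;    // the hand others see on this avatar

    [Range(0f, 1f)]
    public float mimicry = 0.3f;    // 0 = faithful rendering, 1 = full mirroring

    void LateUpdate()
    {
        // Blend position and rotation between the participant's own motion
        // and the partner's, by the chosen mimicry factor.
        avatarHand.position = Vector3.Lerp(trackedHand.position,
                                           partnerHand.position, mimicry);
        avatarHand.rotation = Quaternion.Slerp(trackedHand.rotation,
                                               partnerHand.rotation, mimicry);
    }
}
```

Running the blend in LateUpdate applies it after tracking and animation scripts have updated the raw pose, so the blended result is what other users actually see on the avatar.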