I am a PhD candidate in the Dynamic Graphics Project at the Department of Computer Science, University of Toronto.
My current focus is enhancing the effectiveness of remote meetings by addressing, from a Human-Computer Interaction (HCI) perspective, the frictions caused by remote meeting systems such as Zoom and Microsoft Teams. I developed prototypes such as JollyGesture and CoExplorer (the latter during my internship at Microsoft Research) to help presenters better convey visual non-verbal cues and to optimize screen sharing using generative AI. I have also worked on a self-view augmentation project and a project on common ground, which aims to enable participants in remote meetings to convey visual non-verbal feedback to one another in an easily interpretable way, thereby improving interaction in remote meetings.
CHI 2026 and CSCW 2026 submissions.
LLM-initiated task-space management for meetings.
VR presentation gestures.
Tangible Neural Network learning.
Tangible video lookup controller.
Deep learning epigenomics feature detector.
An exploratory research project on remote meetings for specialised use cases.
Topics: Human-computer interaction and visualisation
An exploratory research project relating to computer-mediated meetings. A complete set of results was obtained and has been published.
Topics: Human-computer interaction, virtual reality, and natural language processing
With Dr Nicolai Marquardt: I proposed the Tangible Artificial Neural Network project and worked on it.
With Prof. Yvonne Rogers and Prof. Anthony Steed: I contributed to five different PhD projects, two of which have been published so far.
Conducted a research project on a rate-based controller for virtual reality environments
Topic: Human-computer interaction
Year | Publications |
---|---|
2026 | 1 CHI 2026 full paper and 1 CSCW 2026 full paper in submission. |
2025 | Designing Mechanisms to Empower Attendees of Remote Meetings to Promote More Effective and Helpful Meetings |
2024 | The CoExplorer Technology Probe: A Generative AI-Powered Adaptive Interface to Support Intentionality in Planning and Running Video Meetings<br>JollyGesture: Exploring Dual-Purpose Gestures in VR Presentations<br>CoExplorer: Generative AI Powered 2D and 3D Adaptive Interfaces to Support Intentionality in Video Meetings |
2023 | Comparing Mixed Reality Agent Representations: Studies in the Lab and in the Wild<br>Lessons Learned Running Distributed and Remote Mixed Reality Experiments |
2022 | Should Chatbots Chat or Probe? Perceptions of Conversational Interfaces That Probe Human Decision-Making |
2021 | LDEncoder: reference deep learning-based feature detector for transfer learning in the field of epigenomics (Poster) |
Kind | Description |
---|---|
Honours and Scholarships | |
Service | |
Invited Presentations | |
Teaching | |
Professional Membership | |
© 2025 Warren Park. All rights reserved.