Space Agent
Ph.D. Dissertation, 2016–2020
"A Room Alive"
Imagine a working space: it understands what you're doing, how you feel, and what you need. It physically reconfigures itself to support your different work activities. It works with you when you're active and rests with you when you're tired. It's your partner, friend, and companion. You may think this is too futuristic; I would say this future is near.

In this dissertation, my team and I developed a novel technology, the "continuum robot surface," as an exemplar of a "Space-making Robot" (a robot that reconfigures and redefines spaces). Shown in the video is a low-fidelity prototype with three motors, a tendon-driven actuation system, and a grid of red dots for trajectory capture using a Kinect. For now, it can achieve five configurations: rest position, strong bend, soft bend, twist, and angled. We ran simulations in Matlab and compared the simulation results with the prototype's performance; a minimal kinematic sketch appears after the citation below. The following paper was submitted to IEEE CASE 2019:
Richa Sirohi, Yixiao Wang, Keith Evan Green, et al. "Design and Characterization of a Novel, Continuum-Robot Surface for the Human Environment." Submitted to IEEE CASE 2019.
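
To give a flavor of the underlying model (the full characterization is in the paper, and our simulations were in Matlab), below is a minimal Python sketch of the constant-curvature approximation commonly used for tendon-driven continuum robots. The function names and the numerical parameters are illustrative assumptions, not values from the paper or the prototype.

    import numpy as np

    def curvature_from_tendon(delta_l, d, L):
        """Constant-curvature approximation: shortening a tendon offset a
        distance d from the backbone by delta_l bends a section of arc
        length L to curvature kappa = delta_l / (d * L)."""
        return delta_l / (d * L)

    def backbone_points(kappa, L, n=50):
        """Sample n points along a planar arc of length L and curvature kappa."""
        s = np.linspace(0.0, L, n)
        if abs(kappa) < 1e-9:  # effectively straight: the "rest" configuration
            return np.column_stack([np.zeros(n), s])
        x = (1.0 - np.cos(kappa * s)) / kappa  # lateral deflection
        z = np.sin(kappa * s) / kappa          # height along the bend
        return np.column_stack([x, z])

    # Illustrative numbers only: a 0.5 m section, tendon offset 20 mm,
    # tendon shortened by 10 mm, giving kappa of about 1.0 per meter.
    kappa = curvature_from_tendon(0.010, 0.020, 0.5)
    print(backbone_points(kappa, 0.5)[-1])  # tip position (x, z)

Under this approximation, varying the commanded curvature is one simple way to distinguish configurations like "strong bend" versus "soft bend"; twist and angled configurations would require combining multiple sections or out-of-plane tendons.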



Communication theory suggests that people tend to interact with interactive artifacts as if they were human. For decades, this understanding has been applied to designing singular, embedded artifacts at a small physical scale. In this project, we extend the same theory and practice to the dimension of space, that is, to designing interactive, physical environments and their components, such as space-making robot surfaces. We constructed a systematic framework for designing "collaborative environments," shaped as well by our own concepts of Direct Mapping, Conveyed Mapping, and Space Agency, striving for interactions between human beings and their physical surroundings that more closely resemble human-human interaction. We presented this design framework at the TEI 2019 conference:
Yixiao Wang and Keith Evan Green. 2019. A Pattern-Based, Design Framework for Designing Collaborative Environments. In Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '19), pp. 595-604. DOI: http://dx.doi.org/10.1145/3294109.3295652
Below are images showing different space typologies supporting different human activities:







Will users perceive space-making robots as agents? Categorically distinct from many robots and far removed from humanoids, space-making robots are not objects in space, not anthropomorphic, not animal-like, and not mobile; instead, they are integral with the physical environment, embedded in or forming walls, ceilings, floors, partitions, vehicle interiors, and building envelopes. Given their distinctiveness, space-making robots offer a novel human-machine interaction. This part of the dissertation investigates whether users perceive space-making robots as agents: artificial social actors characterized by the capacity for intelligence, recognition, and intention. Results of an in-lab experiment with 11 participants and an online, between-group experiment with 120 participants show that people attribute intelligence, intention, recognition, cooperation, collaboration, friendliness, and welcome to space-making robots. A sketch of one way such between-group ratings can be analyzed follows the citation below.
Y. Wang, F. Guimbretiere, and K. E. Green, "Are Space-making Robots, Agents? Investigations on User Perception of an Embedded Robotic Surface," 2020 IEEE 29th International Conference on Robot & Human Interactive Communication (RO-MAN), submitted and under review.
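
The manuscript details the actual statistical procedure; purely as an illustration, the Python sketch below compares treatment and control groups on each agency metric with a Mann-Whitney U test, a rank-based test suited to ordinal Likert ratings. The ratings are randomly generated placeholders, not the study's data, and all names are my own.

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)
    metrics = ["intelligence", "intention", "recognition", "cooperation",
               "collaboration", "friendliness", "welcome"]

    # Placeholder 7-point Likert ratings, 60 participants per group
    # (synthetic values for illustration, not the experiment's results).
    treatment = {m: rng.integers(1, 8, size=60) for m in metrics}
    control = {m: rng.integers(1, 8, size=60) for m in metrics}

    for m in metrics:
        # Mann-Whitney U is rank-based, so no normality assumption is needed.
        u, p = mannwhitneyu(treatment[m], control[m], alternative="two-sided")
        print(f"{m:>13}: U = {u:6.1f}, p = {p:.3f}")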
Below are a diagram of the experiment room setup, videos of the experiment process (treatment group and control group), and the experiment results.








How would users interact with space-making robots? What are users' preferences among different interaction modes when performing different tasks? We proposed six interactions: Button, Voice Control, Human Activity Recognition, Natural Language Processing, GUI Interface, and Touch Sensor. These were presented to potential users through the GIF animations shown below, and the users gave us feedback on their interaction preferences through an online survey conducted on Amazon Mechanical Turk (an online crowdsourcing platform). The survey results were analyzed both qualitatively and quantitatively and will be reported in a future paper; a sketch of one quantitative tally follows.
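
As a small, hedged example of the quantitative side (the actual analysis will appear in the paper), one could tally "most preferred" votes per interaction mode for a given task and test them against a uniform null with a chi-square goodness-of-fit test. The vote counts below are invented for illustration.

    from scipy.stats import chisquare

    modes = ["Button", "Voice Control", "Human Activity Recognition",
             "Natural Language Processing", "GUI Interface", "Touch Sensor"]

    # Invented "most preferred" vote counts for one task, illustration only.
    votes = [18, 25, 31, 12, 16, 18]

    # Chi-square goodness-of-fit against a uniform null: are preferences
    # spread evenly across the six interaction modes?
    stat, p = chisquare(votes)
    print(f"chi2 = {stat:.2f}, p = {p:.4f}")
    for mode, v in sorted(zip(modes, votes), key=lambda t: -t[1]):
        print(f"{mode:>28}: {v} votes")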

