NYU-X Holodeck Research Areas

NSF MRI #1626098

Visual

The recent, rapid, and disruptive emergence of promising consumer-grade VR gaming platforms is a prime example of the need to accommodate rapidly evolving technologies.

Technical parameters optimized across the Holodeck's diverse visual equipment and research include:

• Physiological fidelity and comfort (VR stability, eye strain/fatigue, and latency-induced nausea);
• Resolution and field of view (pixel/aspect ratio);
• Degree-of-freedom augmentation (hybridizing motion capture technology with headset gyroscopes for superior tracking within a space);
• Wireless connectivity (unencumbered physical interactions);
• Occlusion (due to multiple participants, projection vs. headsets);
• Eye-tracking and facial expression recognition;
• Refresh rates (involving hardware, software, caching, and bandwidth parameters; see the frame-budget sketch below).

The goal is to create seamlessly realistic and truly immersive virtual experiences.
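
As a simple illustration of the refresh-rate and latency trade-offs noted in the list above, the sketch below computes the per-frame time budget at common headset refresh rates and compares it against a frequently cited motion-to-photon comfort target of roughly 20 ms. The target value and the refresh rates shown are illustrative assumptions, not instrument specifications.

    # Illustrative frame-budget arithmetic for VR comfort.
    # The 20 ms figure is a commonly cited motion-to-photon target, used here
    # only as an assumed reference point.
    COMFORT_TARGET_MS = 20.0

    def frame_budget_ms(refresh_hz: float) -> float:
        """Time available to render one frame at a given refresh rate."""
        return 1000.0 / refresh_hz

    for hz in (60, 90, 120):
        budget = frame_budget_ms(hz)
        # How many frame times fit inside the assumed comfort target.
        headroom = COMFORT_TARGET_MS / budget
        print(f"{hz:>3} Hz -> {budget:.1f} ms per frame "
              f"({headroom:.1f} frame times within the {COMFORT_TARGET_MS:.0f} ms target)")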


Audio

The audio capabilities of the instrument will use loudspeakers and HRTF-processed headphones capable of reproducing high-quality spatial audio. High-quality headphones (e.g., Sennheiser HD650) will be equipped with Head-Related Transfer Function (HRTF) processing through the Max/MSP software. HRTFs will be personally measured, or user-selected, using ScanIR, the impulse response measurement software developed at NYU.
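
As a minimal sketch of the binaural rendering step (the instrument itself performs HRTF processing in Max/MSP with ScanIR-measured responses), the Python fragment below convolves a mono source with left- and right-ear head-related impulse responses (HRIRs). The HRIR arrays here are placeholders, not measured data.

    import numpy as np
    from scipy.signal import fftconvolve

    # Placeholder HRIRs for a single source direction; in practice these would
    # be personally measured (e.g., with ScanIR) or selected from a database.
    fs = 48000
    hrir_left = np.random.randn(256) * np.hanning(256)
    hrir_right = np.random.randn(256) * np.hanning(256)

    # Mono source signal: one second of a 440 Hz tone.
    t = np.arange(fs) / fs
    mono = 0.5 * np.sin(2 * np.pi * 440 * t)

    # Binaural rendering: convolve the source with each ear's impulse response.
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    binaural = np.stack([left, right], axis=1)  # (samples, 2) for headphone playback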

The instrument will be the first integrated auditory system comprising the most advanced audio reproduction systems, tightly coupled with multimodal capacities across all equipment categories.


Physical & Robotics

Research involves physical and virtual smart objects, manipulatives, and robotic interfaces, including Tangible User Interfaces (TUIs) and malleable interfaces.

NYU Experiential SuperComputing Collaboration capabilities include:
• Simulation tools
• Robotic platforms
• Haptic interfaces (data gloves, Polhemus, etc.)
• Fabrication tools

The goal is to enhance experience by providing and integrating rapid 3D prototyping and modeling with stereoscopic, interactive, physical, and immersive 3D visualizations, both on site and in distributed and virtual environments.


Human Dynamics

Holodeck distributed capabilities include:
• Rich multimodal sensing capabilities (motion capture, wearable and physiological computing)
• High-quality, camera array-based motion capture system to acquire, analyze, classify, and stream spatio-temporal data from a wide variety of kinematic sources
• Large-scale Tactonic Technologies (pressure sensing and imaging floor)
• Brain-Computer Interfaces
• Eye-tracking (wearable/mobile, at a distance, and multi-person)
• Affective and sociometric sensors and algorithms

The rich data sets that result are used to develop animations and other displayable forms, to extrapolate data from nonverbal expressions, and for real-time control within performance and HCI/UX research on body-centric manipulation of rich media.
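
As an illustrative sketch of streaming and analyzing spatio-temporal kinematic data of this kind (not the instrument's actual pipeline), the fragment below derives per-joint speeds from consecutive motion-capture frames and applies a crude moving/still classification. The frame rate, skeleton size, and threshold are assumptions.

    import numpy as np

    FRAME_RATE_HZ = 120.0          # assumed capture rate
    SPEED_THRESHOLD_M_S = 0.05     # assumed threshold separating "moving" from "still"

    def joint_speeds(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
        """Per-joint speed (m/s) from two consecutive frames of shape (n_joints, 3)."""
        dt = 1.0 / FRAME_RATE_HZ
        return np.linalg.norm(curr_frame - prev_frame, axis=1) / dt

    def classify_motion(speeds: np.ndarray) -> str:
        """Crude whole-body label derived from per-joint speeds."""
        return "moving" if speeds.mean() > SPEED_THRESHOLD_M_S else "still"

    # Example: two synthetic frames for a 20-joint skeleton.
    prev = np.zeros((20, 3))
    curr = prev + np.random.normal(scale=0.001, size=prev.shape)
    print(classify_motion(joint_speeds(prev, curr)))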

These distributed facilities can support physical isolation of human subjects, performers, and social actors, and can control and analyze both social and nonverbal cues.


Collaborative Research Areas & Applications

Co-located and Distributed Agents (Social/Robotic), Expert Systems, and Crowdsourcing

Research foci within the NYU Experiential SuperComputing Collaboration include enhancing and investigating social collaboration, and co-located and distributed human-human, human-agent, and/or human-robot interaction. Human-Agent and Human-Robotic Interaction research involves a repertoire of social and embodied agents and personal and service robotic capabilities.

Physical and character interaction with natural spoken dialogue is supported to create and study perceptive and expressive robotic characters that closely model dynamic face-to-face communication. Customizable ethnically, age-, gender-, and culturally diverse embodied virtual agents and social robots are created and controlled by a mature suite of technologies that supports natural spoken dialogue and mirrors rich emotive facial gestures and social characteristics, synchronized with the agents' and robots' visual, physical, and prosodic speech and gesture production.

The equipment capacities integrate to realize rich multimodal environments, with real-time data streams and analysis from sophisticated analytical models of individual and team behaviors, interactions, and creativity.


Cyberlearning (Learning Sciences and Games for Learning)

The NYU Experiential SuperComputing Collaboration project team has a strong record of using advanced sensors, rich data collection, and personalized real-time modeling to advance learning technologies, intelligent tutoring systems, sophisticated learner models, and games for learning (including Mathspring and Gamestar Mechanics), impacting hundreds of thousands of learners.

The NYU Experiential SuperComputing Collaboration augments learning research with capabilities such as multi-modal formative, in situ, and summative assessments to:
• Iteratively determine the zone of proximal development;
• Sense cognitive load and affect;
• Integrate real-time feedback and foster metacognition; and
• Utilize attribution, cognitive aids, stress inoculation, and personalized individual and team support to improve decision-making processes, learning, and creativity.

The NYU Experiential SuperComputing Collaboration offers distributed motion-capture, sound/speech capture, and physiological sensing capabilities to operationalize self-report variables and permit real-time measurement. Potential research variables include General State Variables (Prior Knowledge, Learning Strategies, Goal Orientation, Self-Regulation), General Trait Variables (Spatial Ability, Verbal Ability, Executive Functions), Situation-Specific State Variables (Engagement, Emotion, Cognitive Load, Situational Interest), and Learning Outcomes (Skills, Knowledge, Comprehension, Transfer).
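
One way to organize such variables for real-time logging is a simple schema like the sketch below. The field groupings mirror the categories listed above, but the types, defaults, and record structure are illustrative assumptions rather than the project's actual data model.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class GeneralStateVariables:
        prior_knowledge: float = 0.0
        learning_strategies: str = ""
        goal_orientation: str = ""
        self_regulation: float = 0.0

    @dataclass
    class GeneralTraitVariables:
        spatial_ability: float = 0.0
        verbal_ability: float = 0.0
        executive_functions: float = 0.0

    @dataclass
    class SituationSpecificStateVariables:
        engagement: float = 0.0
        emotion: str = ""
        cognitive_load: float = 0.0
        situational_interest: float = 0.0

    @dataclass
    class LearningOutcomes:
        skills: Dict[str, float] = field(default_factory=dict)
        knowledge: float = 0.0
        comprehension: float = 0.0
        transfer: float = 0.0

    @dataclass
    class LearnerRecord:
        """A single time-stamped observation combining all variable groups."""
        timestamp: float
        state: GeneralStateVariables
        traits: GeneralTraitVariables
        situation: SituationSpecificStateVariables
        outcomes: LearningOutcomes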


Physical Acoustics and Collaboration

The NYU Experiential SuperComputing Collaboration enables research on spatial and 3D sound to create accurate simulations of acoustic spaces and reproductions of sounds as they would be heard in a natural environment, in ways that enhance learning and collaboration.

Projects include:
• Evaluation of immersive sound reproduction technologies
• Environments in which several sound reproduction technologies can be combined to examine application synergies and trade-offs
• Enhanced collaboration of performers at different sites
• Study and development of extended distributed performance technologies, and advancement of novel instruments for musical expression
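
As a minimal illustration of simulating an acoustic space (not the instrument's actual rendering chain), the sketch below builds a first-order image-source impulse response for a rectangular room. The room dimensions, source and listener positions, and reflection coefficient are assumed values.

    import numpy as np

    C = 343.0      # speed of sound (m/s)
    FS = 48000     # sample rate (Hz)
    ROOM = np.array([6.0, 4.0, 3.0])     # assumed room dimensions (m)
    SRC = np.array([2.0, 1.5, 1.2])      # assumed source position (m)
    MIC = np.array([4.0, 2.5, 1.2])      # assumed listener position (m)
    BETA = 0.7                           # assumed wall reflection coefficient

    def first_order_images(src, room):
        """Direct source plus its six first-order reflections in a shoebox room."""
        images = [src.copy()]
        for axis in range(3):
            lo, hi = src.copy(), src.copy()
            lo[axis] = -src[axis]                      # reflection across the near wall
            hi[axis] = 2 * room[axis] - src[axis]      # reflection across the far wall
            images += [lo, hi]
        return images

    ir = np.zeros(FS)  # one-second impulse response
    for i, img in enumerate(first_order_images(SRC, ROOM)):
        dist = np.linalg.norm(img - MIC)
        delay = int(round(dist / C * FS))
        gain = (1.0 if i == 0 else BETA) / dist        # 1/r spreading, one wall bounce
        if delay < ir.size:
            ir[delay] += gain
    # Convolving a dry signal with `ir` yields a crude simulation of the room.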


Urban and Planetary Environments

The NYU Experiential SuperComputing Collaboration tackles challenging urban informatics research questions (e.g., multimodal transportation and congestion, weather prediction, preparedness and resiliency, contingency planning, and security and privacy). It couples expert systems (e.g., IBM Watson and collective intelligence) with embedded assessment to accelerate transdisciplinary learning and discovery, empowering individuals, teams, and multiple stakeholders to discover solutions to difficult challenges that may occur infrequently in the real world.

Other synergies include the development of data modeling and visualization methods to process the complex sensor data captured in the NYU Experiential SuperComputing Collaboration.

As consumer-grade VR, digital manipulatives, and low-cost robotic and fabrication platforms emerge, Holodeck research and infrastructure will revolutionize cyberlearning research. The Holodeck seeks to create transformative findings and experiences impacting formal and informal learning environments (classrooms, museums, homes, etc.), across the entire socioeconomic spectrum.


Scientific Simulation (Modeling, Visualization, and Verisimilitude)

The NYU Experiential SuperComputing Collaboration team integrates novel technologies and immersive environments to advance high-fidelity simulation, applicable to engineering, urban planning, bioengineering, creative expression and education. Capabilities include: assessing individual and team dynamics; skill development using gaming and simulation; and leveraging the expertise of co-located and distributed human and hybrid human agent/robotic teams through telepresence, tele-robotics, and human robot interaction.

The team nurtures collaborative transdisciplinary research including: mixed-reality environments and experiences; scientific and artistic visualizations; haptic input and feedback; and heads-up augmented and virtual reality. Additional research capacity supports novel simulation robots with autonomy and the capacity for social and emotional expression. A unique aspect involves the capacity for collaborative design of physical artifacts and related prototyping of advanced physical simulations, and the iterative assessment and improvement of compelling simulation scenarios involving data, outcomes, and performance measures salient to multiple stakeholders (e.g., in urban planning: planners, designers, and residents; in education: students, parents, teachers, and local, regional, and national administrators).

Studies of Teamwork, Design, and Outcomes

The instrument supports individual and small-group immersive simulation research and assessment of the relative benefits of individual and team training in virtual, hybrid, distributed, and physically co-located simulation environments. Wearable computing and sensing devices (Brain-Computer Interfaces (BCI), heads-up displays, eye-tracking, skin conductance, and sociometric badges) are used to assess and interact with the attitudes, behaviors, and emotional intelligence of individuals, teams, and distributed collective intelligence, both in Flow (optimal experience) and when STUCK! (non-optimal experience).

The collaboration also integrates physical fabrication communities, fostering design-thinking strategies and placing sophisticated design tools in the hands of distributed communities and end-users.


Scientific Modeling

The team and instrument will also improve understanding of our recent experiments and simulations on cooperative fluid dynamical effects in flocks of flyers. The NYU Experiential SuperComputing Collaboration supports rapid prototyping across diverse modeling levels, using symbolic modeling tools that lie above basic computational modules. Examples include:
• Theoretical understanding of the emergence of cooperatively created structures in biology and physics (illustrated by the flocking sketch below);
• "Virtual dissection" of a 3D spindle reconstruction to reveal regions of microtubule alignment and cross-linking;
• Modeling and simulating cellular processes and materials using hierarchies of models;
• Immersive tools for simulation, visualization of results, and the manipulation of parameters to change the way models are calibrated against experiments and against each other.
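
As a purely illustrative sketch of emergent collective motion, the fragment below runs a boids-style update with alignment, cohesion, and separation rules. It omits the fluid-dynamical coupling that the flock-of-flyers research actually concerns, and all parameters are assumptions.

    import numpy as np

    N, DT = 50, 0.05
    R_NEIGHBOR, R_SEP = 1.0, 0.3                 # assumed interaction radii
    W_ALIGN, W_COHERE, W_SEP = 0.05, 0.01, 0.1   # assumed rule weights

    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 5, size=(N, 3))
    vel = rng.normal(scale=0.5, size=(N, 3))

    def step(pos, vel):
        """One boids-style update: align with, move toward, and avoid crowding neighbors."""
        new_vel = vel.copy()
        for i in range(N):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbr = (d < R_NEIGHBOR) & (d > 0)
            if nbr.any():
                new_vel[i] += W_ALIGN * (vel[nbr].mean(axis=0) - vel[i])    # alignment
                new_vel[i] += W_COHERE * (pos[nbr].mean(axis=0) - pos[i])   # cohesion
            close = (d < R_SEP) & (d > 0)
            if close.any():
                new_vel[i] += W_SEP * (pos[i] - pos[close]).sum(axis=0)     # separation
        return pos + new_vel * DT, new_vel

    for _ in range(200):
        pos, vel = step(pos, vel)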