Developed at: MIT Media Lab
Role: Lead interaction design and research
Team: Chin-Yi Cheng (Developer), Hiroshi Ishii (Advisor)

Project Background:

LatticeMorph is a research project investigating gestural interfaces for manipulating 3D digital models. The research focuses on the computational phenomena of rescaling and resizing.

The interactions we perform on-screen allow us to produce a range of gestural motions through small movements of a mouse, creating the illusion that a larger bodily movement or action has taken place. Subtle push/pull actions can mathematically redefine a shape through intuitive, vision-based decision-making. This fluid resizing of objects is a phenomenon specific to the digital realm. The question we ask, therefore, is: how can we experience the digital phenomenon of scalability through the bodily input of a tangible user interface?

For this research we have carried out a series of experiments using existing sensors to derive a node-based system that allows for fluid graphic scaling in 3D space.

The research focuses on easily attainable sensor hardware coupled with custom-designed parametric software. By appropriating cheap, off-the-shelf sensors, we are able to design for an open-source approach to deployment.


Understandably, the role of the digital in this circumstance is a necessary requirement for such an action to be accomplished, as we are well aware of the law of conservation of mass (the mass of a closed system remains constant over time; it cannot change unless matter is added or removed).

Therefore, what is attempted through this research is to provide a framework for users to experience scaling actions specific to 3D modeling through a wireframe tool, or box resizer, acting as a tangible component that mimics the cage-point structure of a 3D model.
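As a rough illustration of the cage-based scaling the box resizer stands in for, the sketch below remaps a model's vertices when the cage's extents change. This is a hypothetical Python sketch for explanation only; the names and structure are assumptions, and the project's actual geometry runs through a Grasshopper definition rather than code like this.

```python
# Illustrative sketch of cage-style resizing: each vertex is expressed in
# normalized coordinates relative to the old bounding cage, then remapped
# into the new cage. All names here are hypothetical.

def resize_in_cage(vertices, old_min, old_max, new_min, new_max):
    """Remap each (x, y, z) vertex from the old cage extents to the new ones."""
    resized = []
    for v in vertices:
        resized.append(tuple(
            new_min[i]
            + (v[i] - old_min[i]) / (old_max[i] - old_min[i])
            * (new_max[i] - new_min[i])
            for i in range(3)
        ))
    return resized

# Stretching a unit cube's cage to twice its length along x:
verts = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.5, 0.5, 0.5)]
print(resize_in_cage(verts, (0, 0, 0), (1, 1, 1), (0, 0, 0), (2, 1, 1)))
```

Pushing or pulling one face of the physical cage corresponds to changing one component of `new_max`, which the mapping then propagates to every vertex inside the cage.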

The system we have developed uses a network of slide potentiometers within a custom-designed, 3D-printed casing. The design works in conjunction with Grasshopper, using the Firefly plugin to communicate with the microcontroller. A network of capacitive touch sensors identifies which section of the tool is being used.
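A minimal sketch of this sensing logic, under stated assumptions: each slide potentiometer returns a 10-bit reading that is mapped to a per-axis scale factor, and the capacitive touch readings gate which axis is currently active. This is hypothetical Python for illustration; the actual pipeline streams the microcontroller's values through Firefly into Grasshopper, and the ranges and function names below are assumptions.

```python
# Hypothetical sketch of the sensor-to-scale mapping. Value ranges
# (10-bit pots, 0.5x-2x scaling) and names are illustrative assumptions.

def pot_to_scale(raw, raw_max=1023, scale_min=0.5, scale_max=2.0):
    """Map a 10-bit slide-potentiometer reading to a scale factor."""
    return scale_min + (raw / raw_max) * (scale_max - scale_min)

def active_scales(pot_readings, touch_states):
    """Per-axis scale factors; axes not being touched stay at 1.0 (unchanged)."""
    return [
        pot_to_scale(raw) if touched else 1.0
        for raw, touched in zip(pot_readings, touch_states)
    ]

# Only the x-axis slider is being touched, pushed to its maximum:
print(active_scales([1023, 512, 0], [True, False, False]))
```

Gating on touch is what lets a single gesture address one section of the tool at a time, mirroring how the capacitive sensors select the active face of the cage.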


Research exploring the natural transition from on-screen actions to tangible interactions: The learnt behavior we establish from screen-based interactions deeply penetrates our cognitive responses. For instance, many people have referred to mentally performing the command-Z action in response to making an error when using pen on paper. This is a cognitive phenomenon whereby a learnt logic that applies in one environment is irrelevant in another, suggesting the multidimensionality of the sphere we inhabit.

This cognitive learning of actions that are impossible within the physical environment is a phenomenon we hope to move beyond, by providing the tools for learnt digital actions to be applicable beyond the screen.




Breakdown of single node components