My final project for Physical Computing is a series of toys that interact with the iPad screen to introduce you to a world of mirrors, illusions and surprise.
Out of the many objects I imagined and sketched, I only had the time to build a rough prototype of two of them, and a fully working demo of one. You can watch it working in the following video:
As you see, it’s sort of a magical box that needs to be placed on top of the iPad to reveal its content in the form of a 3D hologram. You should imagine the final toy as sleek, dark and mysterious, not made of cardboard and duct tape.
I’m very excited about all the possibilities of this project and all the things I want to explore. I really want to push the project forward and, since it’s at quite an early stage, I’m not sure how I should document it, so I’ll try to summarize the parts that seem most important to me.
The iPad is an awesome device, but the way we interact with it isn’t really satisfying in many cases: it’s cold, and we get almost no physical feedback from it. I wanted to use the capabilities of its touchscreen to do something more physical and engaging. My initial idea was to go for something weird and unexpected, but it ended up being more minimalistic.
Mike Knuepfel‘s final thesis was a big reference for me, along with the few projects being developed in this area: especially the interactive toys and cards by the French company Les Editions Volumiques, and some (fake) hologram projects like N-3D or the Holodesk, all based on the old Pepper’s ghost effect.
So I basically wanted to take the concepts of mirrors, holograms and illusions from these projects and make them more interactive: make the iPad aware of their position, react to them and provoke some amusement.
The objects I’m working on are based on simple geometric shapes. I’d like to be able to associate each of these shapes with one concept or behavior, in order to create a context for the toys that could, eventually, interact among themselves when placed on the iPad.
I’m very interested in the mirror as a material, so the aesthetics I’m currently following are minimalistic, mostly black and mirror, although I’m not sure about that. I might try something warmer, like paper, but there are many constraints imposed by the need for conductive materials.
That’s, in fact, one of my main worries. I really want the object to be mysterious, so I need to hide the copper tape somehow. The good news is that it works even if you cover it with paper, but will it still work under something like vinyl? Another option could be building it from metal, but that’s always harder to work with.
I also need to find the right hinge or mechanism for the cap, since I want it to be in two positions only: closed or at 45º. This is necessary for the periscope effect.
The capacitive screen of the iPad can detect up to 11 touch points, which don’t necessarily need to be human, so my objects interact with it by extending your touch through conductive materials. The initial prototype uses copper tape, but any conductive material can potentially be used (the anti-static foam that protects integrated circuits is another popular one).
So I started by making a debugging app to test whether points were properly recognized. As you can see in the pictures at the end of the post, some materials are not working as I expected, like conductive ink, or copper tape when the mirror is too close to the surface (I suspect its coating is slightly conductive).
The screen is also supposed to detect electrical inputs, and it does when testing with a battery, if you apply some pressure. But coin cell batteries didn’t seem powerful enough, and I couldn’t rely on that this time. It’s something I want to keep researching, though, because I’d really like to make the objects independent of human touch.
Graphics and code
Both the debugging app and the demo are built with openFrameworks, a C++ library that makes it simpler to deploy graphics applications to iOS. Recognizing the touch points and drawing shapes on the screen is simple; the two complex tasks the app performs are the ones you don’t notice when using it:
- Shape recognition: I didn’t expect the algorithm that detects the triangles, determining the position and orientation of the object, to be so hard to code, so I actually made a lot of assumptions for the demo: there’s only one object, individual touches (fingers) have no use… But if you want to handle all the possible errors (that is, build a real app) it can be a nightmare.
- Hologram display: Since I was playing with the see-through mirror, I wanted the objects to be 3D and look like holograms. I load some models using the assimp library (which doesn’t compile with the latest version of Xcode, by the way) and display them. This is no problem in my first demo because the angle of the mirror is 45º, but with any other orientation I’d have to distort the image by doing some heavy math. This isn’t a matter of moving the OpenGL camera, as I first thought; it’s rather something like anamorphosis or projective texture mapping. It’s, in fact, the same trick some artists use for sidewalk drawings.
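To give an idea of what the shape recognition involves, here’s a minimal sketch of the brute-force version, not the actual demo code: it assumes the object touches the screen through three conductive feet arranged as an isosceles triangle (the 40/60 px dimensions are made up for illustration), scans every combination of three touch points for matching side lengths, and derives position and orientation from the centroid and the apex.

```cpp
#include <cmath>
#include <cstddef>
#include <optional>
#include <vector>

struct Pt { float x, y; };

static float dist(const Pt& a, const Pt& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Centroid plus heading (radians, pointing from centroid toward the apex).
struct Pose { float cx, cy, angle; };

// Hypothetical marker geometry: an isosceles triangle of conductive feet
// with a 40 px base and two 60 px sides (illustrative numbers only).
// Scan every 3-point combination among the touches and accept the first
// whose side lengths match within a tolerance.
std::optional<Pose> findObject(const std::vector<Pt>& touches) {
    const float base = 40.0f, side = 60.0f, tol = 5.0f;
    auto near = [&](float v, float target) { return std::fabs(v - target) < tol; };
    const std::size_t n = touches.size();
    for (std::size_t i = 0; i < n; ++i)
      for (std::size_t j = i + 1; j < n; ++j)
        for (std::size_t k = j + 1; k < n; ++k) {
            const Pt p[3] = { touches[i], touches[j], touches[k] };
            const float dab = dist(p[0], p[1]);
            const float dbc = dist(p[1], p[2]);
            const float dca = dist(p[2], p[0]);
            // The apex is the point opposite the shorter (base) edge.
            int apex = -1;
            if      (near(dab, base) && near(dbc, side) && near(dca, side)) apex = 2;
            else if (near(dbc, base) && near(dca, side) && near(dab, side)) apex = 0;
            else if (near(dca, base) && near(dab, side) && near(dbc, side)) apex = 1;
            if (apex >= 0) {
                const float cx = (p[0].x + p[1].x + p[2].x) / 3.0f;
                const float cy = (p[0].y + p[1].y + p[2].y) / 3.0f;
                return Pose{ cx, cy, std::atan2(p[apex].y - cy, p[apex].x - cx) };
            }
        }
    return std::nullopt; // no matching triangle among the touches
}
```

Even this toy version hints at the nightmare part: with up to 11 touch points the combinations grow fast, stray fingers can accidentally form a matching triangle, and a real app would also need to track objects across frames.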
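For intuition about why 45º is the easy case: a flat mirror maps each point to its reflection across the mirror plane. This tiny sketch (again, not the demo code) reflects a point across an arbitrary plane; with a 45º mirror the reflection simply swaps two axes, so the screen image folds straight up toward the viewer with no extra distortion, while any other angle mixes the axes and calls for the anamorphic correction mentioned above.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Reflect point p across the plane through `origin` with unit normal `n`:
//   p' = p - 2 * ((p - origin) . n) * n
Vec3 reflectAcrossPlane(const Vec3& p, const Vec3& origin, const Vec3& n) {
    const Vec3 d = { p.x - origin.x, p.y - origin.y, p.z - origin.z };
    const float k = 2.0f * dot(d, n);
    return { p.x - k * n.x, p.y - k * n.y, p.z - k * n.z };
}
```

With the mirror plane tilted 45º between the screen (xy) and the viewer, a point one unit above the screen reflects to a point one unit along y: a pure axis swap, which is exactly the periscope fold the cap at 45º produces.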
My future objectives are improving the recognition algorithm, using the code base to create something more fun or visually interesting and trying new possibilities, like using the built-in camera (if I can get an iPad 2).
And last but not least: I need a name for this project! Help me find it!