BOINC is a platform for volunteer computing and grid computing. At first glance it looks dated and unappealing but, after some digging, one finds oneself really engaged with the myriad of research projects using the system. As a novice user, I felt great admiration for the people who built the platform, but unfortunately that didn’t encourage me to download the software and contribute, given that my computer is already too full of junk.
In my opinion, there are many drawbacks to being such a centralized, huge platform. The projects end up looking too similar and, since all the contribution is passive, it’s harder to get engaged. It ends up feeling like a very niche space.
Some projects, like FoldIt or GPUGrid, certainly did their marketing better and look more modern and appealing, but the overall experience could be greatly improved by spending some money on a redesign of the websites, interfaces and general information architecture. The existing community around the projects is amazing and should probably be the frontline of BOINC.
Zooniverse is the total opposite. It was created in 2007 and, whether or not it has had this website and system from the beginning, its creators obviously did their homework to reach a more mainstream audience.
We just have to compare their two taglines: “Open-source software for volunteer computing and grid computing” (BOINC) vs. “Real Science Online” (Zooniverse).
Or the first menu that appears on each site: “Volunteer – Download · Help · Documentation · Add-ons · Links” vs. “Take part in Science Projects, Experiment in the Laboratory”.
They are based on quite different ideas and needs, so it’s not perfectly fair to compare them, but it’s fair to say that Zooniverse is more attractive to most people. It’s web based, the projects require active participation, most of them have some kind of gamification, and there’s a certain humanitarian aspect to each of them.
On my first visit, I found myself quickly joining Moon Zoo and identifying moon craters, which turned out to be harder than I expected. I really liked that, despite being a modern website, the registration process was so simple it didn’t even require a confirmation email. It was a show of trust in their users that I really appreciated. The only thing I missed, and this applies to BOINC as well, was an up-to-date status of the project. I needed to know how the project was progressing, whether it was nearly finished or had just started… and couldn’t find that clearly stated anywhere.
Take your time. Breathe. Open all your senses to your surroundings. What do you see? What do you hear, smell and feel? What fascinates you and catches your attention? Are you a changed person because you were forced out of your mind and made observations on your daily walk? Document your week to share in class.
After spending many days trying to figure out such an open assignment, I’ve decided to take a step back. Instead of trying to capture the soul of everything I find in the street, I’d rather dissect something I’m already familiar with: my building. Because, well, I know, it’s just a regular apartment building, and it’s not that different from the one I lived in back in Spain, except in… everything. And this actually fascinates me.
It fascinates me that, having lived here for three months, I still find new corners with little details that, despite being completely normal, I would never find in my hometown, and that strike me as the real personality of the building. The signage, the plunger, the paint… but most especially, the figure responsible for most of these details: the Super.
So, in order to document the character of my building, I’ve followed Willie’s traces, trying to create some kind of narrative that can bring a stranger closer to my vision.
Being kind of self-taught in design, I have read many lists of what’s considered good and bad design: from Dieter Rams’ principles to Don Norman’s essays, and of course all the websites that every once in a while decide to enlighten readers with the rules of minimalist web design.
They make a lot of sense and I obviously agree with most of them, including Graham’s points. I enjoy how he keeps comparing every aspect of the world of design with art, science or engineering, somehow stating that, at the end of the day, what works well in one field also works in the others (although that’s probably too loose a claim, in my opinion).
However, there’s something in the article that disturbs me. Graham writes an introduction about taste and beauty only to end up talking about good design. He virtually equates the meanings of good and beautiful, with this as his only argument:
Instead of treating beauty as an airy abstraction, to be either blathered about or avoided depending on how one feels about airy abstractions, let’s try considering it as a practical question: how do you make good stuff?
Don Norman, by contrast, has always considered beauty (or aesthetics, or attractiveness) as just another property of a design, like simplicity, that may or may not contribute to its final objective: to be effective, to be good.
Graham certainly has a point, but I feel his argument relies on too many linguistic games to convince me. Like him, I believe there’s good and bad taste, that there are ways to objectively tell the difference between beautiful and ugly, and also between good and bad design. But I’m still not sure they are all the same thing.
My final project for Physical Computing is a series of toys that interact with the iPad screen to introduce you to a world of mirrors, illusions and surprise.
Out of the many objects I imagined and sketched, I only had the time to build a rough prototype of two of them, and a fully working demo of one. You can watch it working in the following video:
As you can see, it’s sort of a magical box that needs to be placed on top of the iPad to reveal its content in the form of a 3D hologram. You should imagine the final toy sleek, dark and mysterious, not made of cardboard and duct tape.
I’m very excited about all the possibilities of this project and all the things I want to explore. I really want to push the project forward and, since it’s at quite an early stage, I’m not sure how I should document it, so I’ll try to summarize the parts that look most important to me.
The iPad is an awesome device, but the way we interact with it isn’t really satisfying in many cases: it’s cold, and we get almost no physical feedback from it. I wanted to use the capabilities of its touchscreen to do something more physical and engaging. My initial idea was to go for something weird and unexpected, but it ended up being more minimalistic.
Mike Knuepfel’s final thesis was a big reference for me, along with the few projects being developed in this area: especially the interactive toys and cards by the French company Les Editions Volumiques, and some (fake) hologram projects like N-3D or the Holodesk, all based on the old Pepper’s ghost effect.
So I basically wanted to take the concept of mirrors, holograms and illusions from these projects and make them more interactive: let the iPad know their position, react to them and provoke some amusement.
The objects I’m working on are based on simple geometric shapes. I’d like to associate each of these shapes with one concept or behavior, in order to create a context for the toys that could, eventually, interact among themselves when placed on the iPad.
I’m very interested in the mirror as a material, so the aesthetics I’m currently following are minimalistic, mostly black and mirror, although I’m not sure about that. I might try something warmer, like paper, but there are many constraints arising from the need for conductive materials.
That’s, in fact, one of my main worries. I really want the object to be mysterious, so I need to hide the copper tape somehow. The good news is that it works even if you cover it with paper, but will it work with something like vinyl? Another option would be building it from metal, but that’s always harder to work with.
I also need to find the right hinge or mechanism for the cap, since I want it to rest in only two positions: closed or at 45°. This is necessary for the periscope effect.
The capacitive screen of the iPad can detect up to 11 touch points, which don’t necessarily need to come from human fingers, so my objects interact with it by extending your touch through conductive materials. The initial prototype uses copper tape, but any conductive material can potentially be used (the anti-static foam that protects integrated circuits is another popular one).
So I started by making a debugging app to test whether points were properly recognized. As you can see in the pictures at the end of the post, some materials don’t work as I expected, like conductive ink, or copper tape when the mirror is too close to the surface (I suspect its coating is slightly conductive).
The screen is also supposed to detect electrical inputs, and it does when testing with a battery, if you apply some pressure. But coin batteries didn’t seem powerful enough, and I couldn’t rely on them this time. It’s something I want to keep researching, though, because I’d really like to make the objects independent of human touch.
Graphics and code
Both the debugging app and the demo are built with the OpenFrameworks C++ library, which makes it simpler to deploy graphics applications to iOS. Recognizing the touch points and drawing shapes on the screen is simple; the two complex tasks the app performs are the ones you don’t notice when using it:
Shape recognition: I didn’t expect the algorithm that detects the triangles (determining the position and orientation of the object) to be so hard to code, so I actually made a lot of simplifying assumptions for the demo: there’s only one object, individual touches (fingers) have no use… But if you want to handle all the possible errors (that is, build a real app), it can be a nightmare.
Hologram display: Since I was playing with the see-through mirror, I wanted the objects to be 3D and look like holograms. I load some models using the assimp library (which doesn’t compile in the latest version of Xcode, by the way) and display them. This is no problem in my first demo because the angle of the mirror is 45°, but with any other orientation I’d have to distort the image with some heavy math. This isn’t a matter of moving the OpenGL camera, as I first thought; it’s rather something like anamorphosis or projective texture mapping. It’s, in fact, the same thing some artists do on sidewalks.
My future objectives are improving the recognition algorithm, using the code base to create something more fun or visually interesting, and trying new possibilities, like using the built-in camera (if I can get an iPad 2).
And last but not least: I need a name for this project! Help me find it!