✨ Interaction Design
Interaction Design, Srishti, 2017
1
Interaction, as described by Wikipedia, is a kind of action that occurs when two or more objects have an effect upon one another. It can also be thought of as two-way communication. Having said that, what is the need to design interactions? What made the world need Interaction Designers? Such questions, along with interaction design trends aimed at making our lives easier, were discussed and formed the basis of the first week of this course. ‘Softface’ (or software interface) was the predecessor of the term ‘interaction design’, coined in the 1980s by Bill Moggridge and Bill Verplank. Bill Verplank’s video describing the IxD process as a series of ‘how do you… (feel/do/know)?’ questions captures much of what I have come across about IxD. How one feels about an existing technological interface, or even a general system that may be offline, says a lot about what needs to be done with it. Improving an existing system or creating a new one requires enough knowledge and experience that something genuinely useful can be created. What one does with that gathered knowledge, how one maps one’s needs onto a choice of technology, and which features and needs one prioritises also form a crucial phase of designing an interaction. Most importantly, the consequences of the design, and even the slightest hunch felt about it, should be accounted for, or at the very least, taken a stance on. These nudges work on a very cognitive level, giving way to the ‘Human’ aspect of Human-Computer Interaction. The ideals of subjectivity, and of addressing needs that may not be generic but specific to a certain audience, are also accounted for. We looked deeper into technological trends in the world, beginning with mechanical calculators like Babbage’s Difference Engine and moving on to electronic computing devices like ENIAC.
My own research into the functioning of modern computers kept me fascinated at every step, but the fact that every little program or function I use on my phone or laptop is ultimately written in binary code struck me the most. Not that I didn’t know that before, but to understand on a deeper level how vacuum tubes, then transistors and semiconductors, have been used to store data, and have been made smaller and smaller every day to pack in as much data as possible, is truly fascinating. While I constantly heard as a child about the importance of maintaining our electrical devices, not letting them heat up and so on, I never really realised why until now. At a basic level, every chip on the motherboard stores binary data in the form of electrical or magnetic states, each of which could be lost on overheating. Technology can be fascinating to some, but it can also be intimidating to others. And while development in these fields is almost constant, these designs need to blend into a person’s natural ecosystem. For a design to be successful, the gap between an interface and a human’s intervention should be almost invisible. Every development is a step towards a change whose better or worse can only be known once it is out for people to use; each is equally likely to bring drastic changes to the world.
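To make the idea concrete for myself, here is a tiny Python sketch (my own toy example, not something from class) showing how a piece of text really is just a sequence of bits underneath:

```python
# Show how ordinary text is ultimately stored as binary digits.
text = "IxD"
for ch in text:
    code = ord(ch)              # the character's numeric code
    bits = format(code, "08b")  # the same number written as 8 binary digits
    print(ch, code, bits)       # e.g. I 73 01001001
```

The same principle scales all the way up: every program and file is, in the end, patterns like these held in the states of transistors.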
2
“I think, therefore I am.” – Descartes. Cognitive science drives this field a lot more than we can comprehend, and it feels good to be aware of it. The third week began with the importance of citation and references being made clear to us, given the need to give credit where it is due. We went ahead with Information Architecture, which involves organising, structuring and labelling content in an effective and sustainable way. It should help the user navigate the interface and find their desired information with ease. I found this part of the presentation really interesting: Information Architecture should help answer: Am I in the right place? (Where am I?); Do they have what I am looking for? (What have I found?); and What do I do now? (What to expect). This summarises the need for a properly structured interface, because it only takes a few seconds for a user to decide they don’t like what they are seeing, which in turn can cost a company a lot of money. This also took me back to Revisit, where my major concern while talking to the people visiting us was the divide between engineers and designers. The value of allowing the user to build a simple mental model of how a system is organised and how it works is only felt once an idea, no matter how great, isn’t executed well. Proper execution involves labelling, structuring and organising content so that the maximum amount of content is provided with the minimum cognitive burden on the user. It was also discussed that, to accomplish a good design, navigation must be planned and driven by the flow of information, not by the wireframing. Embodied interactions, from what I understood, are those that involve a tangible or visible computing interface that the user can interact with.
‘Skinput’ (a Carnegie Mellon/Microsoft Research project), with its use of the body and the body’s intuitiveness, nudged me to always keep in mind the engagement of the body as a whole while designing an interaction, and not just the brain. The piezoelectric effect, discovered by the Curie brothers, was another fascinating thing to look into. The widespread application of piezo sensors in mics, speakers and headphones made sense once we were told how the effect works: applying pressure deforms the crystal, which generates a voltage. Variations in pressure change the voltage produced, which gives rise to the sensor’s many applications. I feel this has been sufficient to begin with, but we should have programmed the sensor to see its capability before being asked to imagine micro-interactions with it. I think the same about everything: it’s like we begin with a new concept but don’t look into it deeply enough before moving on to the next thing, and this way I’m not sure I grasp everything well enough to put it to good use!
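To sketch the arithmetic behind reading such a sensor, here is a small Python example. It assumes an Arduino-style 10-bit analog reading (0–1023, referenced to 5 V); the function names and the 1 V tap threshold are purely my own illustrative choices, not anything specified in class:

```python
def adc_to_volts(reading, vref=5.0, max_reading=1023):
    """Convert a 10-bit ADC reading (0-1023) into a voltage (0-vref)."""
    return reading * vref / max_reading

def is_tap(reading, threshold_volts=1.0):
    """A firmer press deforms the crystal more, producing a higher voltage."""
    return adc_to_volts(reading) >= threshold_volts

# Simulated readings: a light touch and a firm tap on the piezo.
for reading in (80, 600):
    print(reading, round(adc_to_volts(reading), 2), is_tap(reading))
```

On real hardware the readings would come from something like `analogRead()` instead of being hard-coded, but the mapping from raw reading to voltage, and from voltage to a tap decision, is the same.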
3 (TopCoder Workshop Week)
I was excited for the TopCoder workshop because I was really looking forward to hands-on work for this course; however, my idea of TopCoder changed when we were addressed and I realised that it also involved wireframing, not coding per se. It urged us all to think not just about interaction with the screen and the placement of objects, but also about how the experience can be made easier and more efficient for the user. The RUX competitions focused on both tablet- and mobile-based interactions for a person looking to get their device serviced and to be notified about their product’s inventory. While these were easily doable tasks, an hour didn’t seem like enough time to complete them as well as I could have, because I felt I needed to add a lot of things, and I tried to push it all together into a composition hoping it would make sense, instead of sitting down and planning first. The aspect of following a process was missing, and I could only realise that once the task was over. It was also the first time I had ever worked on a wireframe, so I was still alright with what I could come up with. Looking at the work other people produced, it was interesting to see how differently they could think, conceptualise and execute their ideas. When I saw my own ideas after the allotted hour, they seemed very basic and pre-existing, while the whole time I had thought I was doing something different. That pointed to a constant dilemma: providing enough familiarity for a user to efficiently figure out what is happening and what is possible within the app, while at the same time bringing a newer, better experience for their needs. Figuring out a balance between the two criteria looks like the ultimate task when it comes to designing these interactions and experiences. I liked how everything being talked about was something I could grasp because of the discussions we’ve been having in class over the past month.
It was disappointing, however, to see that I was unable to apply any of it to my wireframes, or even to think of doing so. I really wish we had practised a little in class with familiar assignments, to have a sense of how things can be done, before participating in this workshop; all that I was able to generate was based solely on what I have come across in the devices and applications I have used. Being able to design something requires practice at the other end of the table, not the user’s. Nonetheless, it was a great initiative to make us realise where we stand, what we can or should know how to do, and perhaps even whether our interests lie there at all. We discussed our piezo sensor ideas in class with concept videos and diagrams. My group’s idea is to correct a person’s walk based on how they take steps and which part of their feet bears the most pressure. The data generated can show graphical patterns and can be helpful for someone willing to use it. The help we are trying to provide is to make that data available, in terms of the pressure being applied and which part of the foot strikes first, so that users can see patterns and rectify where they are going wrong. At the very least, users can get an idea of their progress and whether they are improving. We faced criticism when we presented the idea to the class, receiving questions we couldn’t answer. But we were also asked not to answer them unnecessarily, and I connect that with my general studies class ‘Say it with charts’, where we’ve discussed time and again the need for presenting data to the world. Data is considered nothing less than scientific findings: facts that need to be presented to the world just as they are. And I believe we are moving along the same lines.
4
The journey from conceptualising our piezo sensor idea to actually making it work was an interesting process. Our project of providing relevant data about the amount of pressure a person applies when they walk (intended for the physically disabled, children learning to walk, or anyone in need) was chosen to consider the possibility of predicting when a person might recover or start walking properly, and of tracking their progress. As designers, we only take the stance of providing information to users; they are free to put it to use in their own way. When it came to our presentation, we were unable to work with the output properly. We could see our code work, since every tap gave a pressure value as output, but there was no means of recording it. Our initial idea was to record the output values from the Arduino in an Excel spreadsheet; there was a supporting tool for this, PLX-DAQ from Parallax, which we tried, but it didn’t work out because it was incompatible with Windows 10. Our serial monitor would only show values too fast to read, and the plotter would plot all the values on the same graph. After much struggle, we were asked to try getting an output from Processing, and we tried to understand the sample GitHub code for it, but that didn’t work out in our favour either, because Processing requires a whole other level of understanding that we lacked at that point in time. We received feedback from Naveen to have more embodiment in our project, to engage the user in a subconscious way, which led us to think of interactions built around people’s habit of taking their shoes off: some kind of solution that ensures the interaction isn’t superficial but a part of their daily lives, a kind of movement beyond the screens. But presenting the data to a user, predicting patterns and estimating when one may recover needs a computer. The output could also live within the sock or shoe, and that was what we were left to consider.
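In hindsight, since the serial monitor scrolled too fast to record, one low-tech alternative we could have tried is to capture the serial text and convert it into timestamped CSV rows ourselves. This is only a sketch of that idea, assuming one pressure value per line of captured output and a made-up 100 ms sampling interval:

```python
import csv
import io

def serial_to_csv(lines, sample_interval_ms=100):
    """Turn raw serial-monitor lines (one pressure value per line)
    into CSV rows of (time in ms, pressure)."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["time_ms", "pressure"])
    t = 0
    for line in lines:
        line = line.strip()
        if line.isdigit():  # skip blank or garbled lines
            writer.writerow([t, int(line)])
            t += sample_interval_ms
    return out.getvalue()

# Example: a short burst of readings captured from the serial monitor.
print(serial_to_csv(["12", "350", "", "801"]))
```

The resulting CSV opens directly in Excel, which is exactly the spreadsheet workflow we had hoped to get from PLX-DAQ.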
These inputs were very different from our take on the same aspect, and it was very insightful for us to have a different lens through which to design a better solution. If I’m being honest, I expected a lot more from this class in terms of what we planned on doing, what we could accomplish and what we actually learnt out of it all. Every one of us bought proper sensors and programming boards only to try to figure them out ourselves, which isn’t a practice I personally appreciate. I liked the first two weeks, where we were nudged with an introduction to computer peripherals and how they work, and pushed in a direction where our curiosities were hinted at, making looking deeper into those concepts on our own seem less like a task and more like a self-initiated practice: making learning fun, just what Srishti preaches. Only being able to light an LED with an Arduino didn’t lead us anywhere with the possibilities of that small programmable board, and that was something I was really looking forward to exploring in class, something that goes beyond just discussing the fancy technologies that now prevail in the world. The same goes for the sensors we got: a little explanation of each object would really have helped us gain a better understanding of how it basically functions and what is possible with it, and in turn expand our minds beyond what we currently think. I really feel that this class had the potential to teach us more than it did, but that’s just my opinion. At the same time, I realise that my perspective on design, in terms of UI/UX and beyond, and the human-centred approach, has been defined, or at least is in the process of developing into a proper conceptual model, and I presume that this is going to be the foundation of my practice as a human-centred designer; that is what I appreciate about this class. We’ve been exposed to the possibilities of the future of interactions, and that fascinates me every day.