Reality is a perception. Perceptions are not always based on facts and are strongly influenced by illusions. Inquisitiveness is hence indispensable.

Monday, March 29, 2010

Event models and event-driven programming

Traditional web applications have a sequential flow: a form is filled in and submitted to a controller, some magic happens, and an HTML page is shown. Rich GUIs, on the other hand, support an event-driven flow. A button click on a form causes the controller/presenter to read data from the widgets (data binding), then some magic, and finally a view is shown to the user. If the rendering logic is built into the view it is MVC; if an intelligent intermediate helper is present it is MVP. Until the advent of AJAX, web applications could not benefit from the event-driven style of programming. The reason: server-side code could not listen to client-side entities.

What is event-driven code anyway? Let's first understand what is not event-driven. A program is typically a list of chores; this is procedural. Think of the program as a continuous loop that pauses only for inputs. Once an input is given, the program runs and records an internal state (session state). Then comes another input; the program runs again and updates the internal state. Some of these states are shown to the user using conditional logic. That detail is not important; the important point is to note how the program processes inputs in a sequential pattern. In case you are wondering, I made blatant assumptions about the multitasking abilities of the underlying process. In reality, inputs are accepted in a different thread/process and handed over using multi-threading or multi-processing, thus simulating concurrent execution and supporting the multi-user paradigm.

In event-driven code, the program is thought of as a collection of standalone modules, each of which can be triggered independently. A user input triggers only portions of the program, depending on its nature, and state is set in those portions. The user inputs assume no sequential flow: there can be multiple triggers, and no assumptions are made about the order of triggering. A keyboard is a good analogy; to play a tune, the user selects keys and changes the state. Most real-world tasks are event-driven. Unfortunately, in CS the mindset is rather procedural.
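A minimal sketch of this shape (the event names and handlers are my own illustration, not any particular framework): the program is just a registry of independent handlers, and nothing in it assumes an order of triggering.

```typescript
// Each handler is a standalone module; a trigger runs only the portion
// of the program registered for it. No sequential flow is assumed.
type Handler = (payload?: unknown) => void;

const handlers = new Map<string, Handler>();
handlers.set("keyPressed", (key) => console.log(`play note ${key}`));
handlers.set("volumeChanged", (level) => console.log(`volume: ${level}`));

function trigger(event: string, payload?: unknown): void {
  handlers.get(event)?.(payload); // only the matching portion runs
}

// The user "plays the keyboard": any keys, any order, any combination.
trigger("keyPressed", "C#");
trigger("volumeChanged", 7);
trigger("keyPressed", "F");
```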

Do you see any problems with the event-driven approach? Event-driven code relies on underlying sequential code, and this causes unwanted side effects. For example, if the code itself is capable of raising events, or the user triggers multiple events, the order in which events are raised can cause a huge furore; such bugs are nearly impossible to debug because they are difficult to reproduce. I once had code which ran perfectly in debug mode when I was stepping through, and just refused to yield in run mode. Mea culpa. The solution: prevent user inputs when they are undesired (by locking the application), and queue up events if required.
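A sketch of that queueing idea (my own illustration, not library code): events are appended to a queue and drained one at a time, so handlers never interleave even when a handler raises further events while running, and the order becomes deterministic and reproducible.

```typescript
// Serialise events through a queue: a handler that raises new events
// while running only enqueues them; it never runs them re-entrantly.
const queue: Array<() => void> = [];
let draining = false;

function post(event: () => void): void {
  queue.push(event);
  if (draining) return; // already inside the drain loop below
  draining = true;
  while (queue.length > 0) {
    queue.shift()!(); // events run strictly one after another
  }
  draining = false;
}

post(() => {
  console.log("first");
  post(() => console.log("third")); // queued, not run immediately
  console.log("second");
});
// Prints: first, second, third — a reproducible order.
```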

Another problem: the observer-observable pattern creates a mesh. It is nearly impossible to keep track of who is listening to whom, and the whereabouts of an event are cryptic when visualising the big picture. Yet another problem: observers need a handle/reference to the observable, so there is unwanted reference propagation. My solution, which I call a message exchange, addresses this through a bulletin analogy. Observables are observed only by the bulletin, and the bulletin in turn takes the responsibility of broadcasting events globally. Observers listen only to the bulletin. The bulletin is a sort of radio station or telephone exchange.
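A minimal sketch of the bulletin (the class and topic names are mine): observables publish to it and observers subscribe to it by topic, so neither side ever holds a reference to the other.

```typescript
// The bulletin: a central broker. Publishers and subscribers know only
// the bulletin, never each other, so references do not propagate.
type Listener = (payload: unknown) => void;

class Bulletin {
  private topics = new Map<string, Listener[]>();

  subscribe(topic: string, listener: Listener): void {
    const list = this.topics.get(topic) ?? [];
    list.push(listener);
    this.topics.set(topic, list);
  }

  publish(topic: string, payload: unknown): void {
    // One choke point for every event; logging here is what makes the
    // application easy to debug, analyse and test.
    for (const listener of this.topics.get(topic) ?? []) listener(payload);
  }
}

const bulletin = new Bulletin();
bulletin.subscribe("order.placed", (o) => console.log("billing saw", o));
bulletin.subscribe("order.placed", (o) => console.log("shipping saw", o));
bulletin.publish("order.placed", { id: 42 });
```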

Think of this mathematical problem: if there are N houses with telephones and each of them needs a line to call the others, how many lines are required? N(N-1)/2 is the answer; it is O(N^2). For 10 houses it would be 45 lines; for 12 houses, 66. A difference of two houses, and look at the extra lines required. With a message exchange the increase is linear: for 10 houses we need 10 lines, and for 12 houses, 12. The penalty is paid in the real world in terms of dropped calls or busy tones. Under the sufficient assumption that computers are fast, we can get away with this. However, it is always good practice to freeze the application and thus prevent unwanted inputs. One may wonder about concurrency; in a properly designed application the freeze applies to a single user (especially session state). Another advantage of the message exchange is the ability to debug/analyse the application; testability also improves greatly.
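A quick check of those figures:

```typescript
// Point-to-point wiring vs. one line per house to a central exchange.
const direct = (n: number): number => (n * (n - 1)) / 2; // O(N^2)
const viaExchange = (n: number): number => n;            // O(N)

console.log(direct(10), direct(12));           // 45 66
console.log(viaExchange(10), viaExchange(12)); // 10 12
```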

Sunday, March 28, 2010

Patterns for UI: MVC and MVP

To begin with, the most common patterns are MVC and MVP: Model View Controller and Model View Presenter, for the few who may not know. Both of them deal with one major concern: categorising, or boxing, intelligence. The model represents an idea and the actions that can be conducted on it. The controller performs the actions and decides when/where/whether to perform them. The view is the cameraman or the commentator who showcases the whole play. MVC, unfortunately, is interpreted in diverse ways. Presenters, on the other hand, act like intelligent chaps on whom the view can rely. Having a separate intelligent view model turns most MVC architectures into MVP. The view model needs to provide the information required for rendering widgets and hence is a composition of other view models. If the view observes the view model/presenter it is called passive MVP; if, on the other hand, the view model dictates the changes, it is active MVP.

Think of it as an orchestra: the model is the theme (Beethoven/Mozart/Bollywood dance/American Idol etc.). The controller is the composer/director; s/he directs and delegates to the actors. The camera just captures the show. To do that effectively, the view needs to understand what is to be shown. The camera doesn't show the whole act; the rehearsals, the makeup and the merchandise are all skipped. This knowledge is obtained from the controller. The spectators may come to a decision based on the view and trigger an appropriate action. That action needs to be understood by the controller. The dumb view cannot do that, so it has one more trick: a mechanism to convey intent to the controller. Note that the view and controller are mostly aware of each other; this is MVC-1. Another option is to keep the view unaware of the controller and model by making use of the observer pattern (pure MVC). Often, as the view is perceived to be a reflection of the model rather than the controller, the view acts as an observer on the model, so changes made by the controller to the model are immediately shown to the user.

Now for the most important facts about MVC. There is an additional overhead in MVC: the view is not a simple reflection of the model; it is a bells-and-whistles representation, like putting a highlight colour on invalid values or showing a tabular list. Where does this logic go? In traditional MVC, the answer is open to interpretation. Some use another layer of objects, say a view model, that listens on the model and computes the logic required by the dumb view. This can be orchestrated by the controller, where the view model is another controller (pure MVC), or it can be a direct observer on the model, thus acting as a view model (MVC-1). Both can be called variants of the Presentation Model (as both address the grievances of the view).
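A sketch of that view-model layer (the temperature example and all names are made up): the view model listens on the model and precomputes everything the dumb view needs, including presentation logic like the highlight flag.

```typescript
// Model: the idea and its actions. It only notifies; it knows no views.
class TemperatureModel {
  private listeners: Array<() => void> = [];
  private celsius = 20;

  onChange(fn: () => void): void { this.listeners.push(fn); }
  set(value: number): void {
    this.celsius = value;
    this.listeners.forEach((fn) => fn());
  }
  get(): number { return this.celsius; }
}

// View model: observes the model and computes what the dumb view shows,
// including bells and whistles such as the "invalid value" highlight.
class TemperatureViewModel {
  displayText = "";
  highlightInvalid = false;

  constructor(private model: TemperatureModel) {
    model.onChange(() => this.refresh());
    this.refresh();
  }

  private refresh(): void {
    const c = this.model.get();
    this.displayText = `${c.toFixed(1)} deg C`;
    this.highlightInvalid = c < -273.15; // below absolute zero
  }
}

const model = new TemperatureModel();
const vm = new TemperatureViewModel(model);
model.set(-300);
console.log(vm.displayText, vm.highlightInvalid); // "-300.0 deg C" true
```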

MVP is similar but different: Model View Presenter skips the controller part. MVC mostly deals with full-fledged views and doesn't care about widgets; MVP emphasises widgets and view constituents. In the latter, the view is not a single entity but a structure of entities. The rationale for MVP comes from the observation that in UIs there is little need for controller arbitration. There is a new guy called the presenter, who is understood by views and who manipulates the models. How is it different from a controller? They are similar, remember, but the controller doesn't speak of widgets; the controller speaks of forms and the content to be shown. The presenter speaks to widgets. The controller orchestrates the symphony between view and model; the presenter orchestrates only the view and observes the model. Most often you realise that the view and controller are coupled to form a presenter. MVP is what most of us desire, as explicit flow dictation by the controller is too cumbersome (if I am allowed to say so).

An analogy for MVP: think of a refrigerator with an automatic defrost feature. The user puts water into the frost shelf; the fridge senses the frost levels from time to time (observes) and turns the power on/off. In a sense, the fridge is a presenter. If it were MVC, the user would put the water in the freezer and explicitly set the timer (to avoid frost), check the frost level and adjust the cooling level. The interactions are explicit: the fridge doesn't sense changes, the user senses them and in turn becomes the controller.

In your application, you can detect the pattern used (MVC or MVP) by looking at where the intelligence is built into the code. In the case of MVP, the presenters are fully automated and push information to the widgets. In the case of MVC, the rendering intelligence is pushed into the view.
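A sketch of that MVP shape (the login example and interfaces are mine): the view is a dumb bag of widgets, and the presenter pushes rendered values straight into them. A fake view is enough to exercise the presenter, which is where the testability gain comes from.

```typescript
// The view is a structure of widgets with no intelligence of its own.
interface LoginView {
  setError(text: string): void;
  setSubmitEnabled(enabled: boolean): void;
}

class LoginPresenter {
  constructor(private view: LoginView) {}

  // Called by the view to convey intent; all logic lives here.
  onInput(user: string, password: string): void {
    const valid = user.length > 0 && password.length >= 8;
    this.view.setSubmitEnabled(valid);
    this.view.setError(valid ? "" : "Password must be 8+ characters");
  }
}

// A console-backed fake view, enough to test the presenter headlessly.
const fakeView: LoginView = {
  setError: (t) => console.log("error:", t),
  setSubmitEnabled: (e) => console.log("submit enabled:", e),
};
new LoginPresenter(fakeView).onInput("ada", "short");
```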

Coffee cup analogy to code/design aesthetics

Code aesthetics are often understated: good developers copy, great developers don't care, and greater ones steal. Larry Wall once said that easy things should be easy and hard things should be possible. Nothing else in the world gives a better description when it comes to requirement realisation. Most of us have read design patterns and have understood code etiquette. Some great programmers don't; they get away with it owing to their greatness, but the code monster catches up. It is said that if smart code is written, you need to be 200% smarter to debug it. The essence of people-centred development can thus be simplified as writing simple code which other educated peers can easily grasp.

There is often a debate on whether HTML/XML are to be construed as programming languages. To start the discussion: they are not Turing complete; one of them is a markup language and the other a data descriptor. What is Turing complete? In simple words, any language that can simulate conditional branching (if-else) and jumps (the evil JMP), given unbounded memory, is Turing complete. Such languages can in essence read any arbitrary input and act upon it. HTML and XML don't qualify in this regard. What about GUIs? A GUI is just an abstract realisation of the underlying program. Imagine a coffee cup: the coffee is the essence, the cup is the interface. The cup is not coffee; however, it is a coffee cup when it holds coffee. Without coffee it is just another cup. Same with GUIs: the GUI becomes the app, just as the coffee cup is synonymous with the coffee.

An application can do several things; a coffee cup serves only one task: providing access to the coffee. Keep this in mind; most of us forget it and complain about how nasty the application is :(. It is the cup that is spoiling the taste, not the coffee per se. The cup might be leaking or cracked or lacking a handle, and this spoils the whole coffee experience. So the greatness of the coffee cannot be appreciated without a good cup. The cup need not be aesthetic; it is not required to be a masterpiece; it needs to do the one thing that matters the most.

Laws that govern usability and pragmatic usage

Fitts' law


Fitts' law is quite intuitive; even a toddler knows it: you always go for the largest cookie/candy/cake/blah-blah, even when it doesn't fit in the mouth. It might be an evolutionary instinct to pick the healthiest-looking objects! Human vision can broadly perceive two kinds of visual input. Our brains are fine-tuned to focus (a predatory trait), and when we do that our peripheral vision takes a back seat. Then we have peripheral vision, which kicks in autonomically, more like a gag reflex: seeing a car coming, or a ball being passed, from the corner of your eye. Practice can enhance these traits; an archer would focus, whereas a ping-pong player relies on the lateral part.

Reading involves focus, so when someone looks at the screen, s/he immediately feels an urge to know what's around, and any attention-grabbing activity attracts attention. The essence of this rule is simple: try to keep things together, and highlight the action arena by making it big and close to the points in focus. Also, avoid placing unwanted focus-grabbers (remember: lack of annoyances). Areas of application: buttons, labels, images, check boxes, radio buttons and almost every thinkable UI component that supports mouse activity.


Steering law


Fitts' law highlights the point-and-click activity; the steering law extrapolates it to the drag/draw action. Wondering about things that fit into this? Styluses and tablets and pens and laser pointers and touch screens and pinch gestures, WHOA... never thought of this before. In a typical application you tend to use scroll bars, which `were` bad-ass as per this rule (no wonder elderly folks hate UIs). Try to customise the scroll bar and you get even more problems: people who have learnt the UI rarely recognise customised scroll bars. Now you have two problems. Then you may have cross-browser issues (hundreds of problems).

So why are we even discussing this? Some widgets are not that popular and can benefit from the steering law. For example, most image-editing apps have a palette, and it is customary nowadays to enable drag and drop on palette contents (point-click-relax and point-click-relax vs. point-click-drag-relax with focus thrown somewhere in between). Every such widget needs to support a margin of motion; the user should never be constricted to a narrow area.

On a side note: how to improve the scroll bar? Add an on-hover effect which enlarges the scroll bar, making it easier to click.
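A minimal sketch of that idea for a custom scroll bar (the `.scrollbar` element is hypothetical): widening the bar on hover turns it into a bigger Fitts/steering target.

```typescript
// Enlarge a custom scroll bar on hover and shrink it back on leave.
// A wider bar is an easier target to grab and to drag along its track.
const bar = document.querySelector<HTMLElement>(".scrollbar");
if (bar) {
  bar.style.transition = "width 0.15s ease";
  bar.style.width = "6px";
  bar.addEventListener("mouseenter", () => { bar.style.width = "14px"; });
  bar.addEventListener("mouseleave", () => { bar.style.width = "6px"; });
}
```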


The crossing action is rarely supported in UI applications, so we will put it aside for the time being.

Thursday, March 25, 2010

Laws that govern usability

Paraphrased and extracted from Wikipedia:

Fitts' law: the time required to rapidly move to a target area is a function of the distance to the target and the size of the target. Fitts' law is used to model the act of pointing.

Consequence: use big areas and try to keep them close to the preceding action area. If that is not possible, try to increase the distance between conflicting hotspots, to help the preferred areas.


BAD                                  BAD
Content 1: ----------    o           Content 2: ----------           [ ]
           ----------                           ----------

GOOD
Content 3: ---------- [ ]
           ----------
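For the quantitatively inclined, the common Shannon formulation of Fitts' law predicts pointing time from distance and target width; a quick sketch (the constants a and b are device- and user-specific, and the values here are illustrative only):

```typescript
// Fitts' law (Shannon formulation): T = a + b * log2(D / W + 1)
// D = distance to the target, W = target width along the axis of motion.
// a and b are empirically fitted constants; these values are made up.
function fittsTime(distancePx: number, widthPx: number, a = 0.1, b = 0.15): number {
  return a + b * Math.log2(distancePx / widthPx + 1);
}

// Doubling the width of a faraway button measurably cuts pointing time.
console.log(fittsTime(800, 20).toFixed(2)); // smaller target: slower
console.log(fittsTime(800, 40).toFixed(2)); // bigger target: faster
```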


Accot-Zhai steering law: it is easier/faster to navigate through a wider tunnel than a narrower one. It is also easier to navigate through a tunnel with thinner wall(s).

Consequence: for fluid motion, allow more space. That is, don't force the user to perform constrained tasks (perfectly straight drags); provide a margin of action.


Example 1:

----------   Hard   ----------
|        |   --\    |        |
|        |   --/    |        |
----------          ----------

Dragging/moving from left to right is harder in Example 1 above than in Example 2 below.

Example 2:

-----           -----
|   |   Easy    |   |
|   |   ---\    |   |
|   |   ---/    |   |
|   |           |   |
|   |           |   |
-----           -----
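The steering law even has a simple closed form for a straight tunnel; a sketch with illustrative constants (a and b are, again, device- and user-specific):

```typescript
// Accot-Zhai steering law for a straight tunnel: T = a + b * (A / W)
// A = length of the path, W = width of the tunnel. Halving the width
// doubles the difficulty term, which is why cramped drags feel so hard.
function steeringTime(lengthPx: number, widthPx: number, a = 0.1, b = 0.2): number {
  return a + b * (lengthPx / widthPx);
}

console.log(steeringTime(400, 16).toFixed(2)); // narrow tunnel: slow
console.log(steeringTime(400, 64).toFixed(2)); // wide tunnel: fast
```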



Law of crossing: it is easier to cross an object in a perpendicular direction. Crossing is an action which looks like a strike (a drag across the target) and is considered easier to achieve than pointing.

Using the past illustrations, crossing (top to bottom) is harder in Example 2 than in Example 1.

Usability simplified and quantified

In continuation of the long rant, which I felt didn't do justice to aspiring learners, I decided to come up with a more tangible definition. Usability is all about not having annoyances, or minimising them to such a level that people can easily adapt. What are annoyances? Nice question: to quantify a good user-friendly app we need to quantify annoyances. Annoyances in this article are quantified with a margin/window of acceptability; there cannot be an absolute definition.

Annoyances story


The user who sits before any tool intends to convey his intention and get his work done. The ability to convey the intention needs to be of absolute priority. Every step that separates him from the goal is an annoyance. What are the possible intermediaries? Think of all that can be done with the tool, say the computer: typing, clicking, drag-drop, scrolling, context switching, reading... you name it. All these are annoyances. Unfortunately we can't communicate telepathically, so we have to rely on them. Think of a series of actions that take the user closer to the action/intent; the shortest sequence of steps wins the game. This is usability.

So if some of you feel that counting all these alternatives and arriving at a conclusion is usability, you are bang on. Usability has nothing to do with a GUI application as such; a command-line app can also suffer from these pitfalls. For example, take the simple `rm` command on Unix, which deletes a file or files. On systems where `rm` is aliased to `rm -i`, `rm *.c` deletes all the C files in the current directory but prompts for confirmation before each delete. This is an annoyance: a useful one if the deletion is unintentional, but an annoyance in the intentional case. The command has a thoughtful alternative, `rm -f`, which forces the delete action without user interference.

Some common examples of usability in this scenario are the ability to perform direct actions, like the click of a button or a sequence of keystrokes. Toolbars and menu bars are excellent examples of these.

Focus and friends


Another aspect of usability is not to dissuade/distract the user. How do we inadvertently do this? By causing the user to pay attention to lots of unwanted detail. MS Office does this; try looking at the menu/tool options. It is still better than the alternatives. Exposing a great deal of information causes the user to lose focus. Focus is the keyword. Think of it as a sniper's scope: the field of vision is narrowed, and it actually forms a circular picture. However, we rarely see circular interfaces; the iPod is a brilliant exception. Why don't we see them? It was not always the case: if you have seen a grandfather radio with round knobs, you would realise it. Early mechanical devices had them, and most DSLR cameras still have similar dials.

GUI apps are intended to convey textual information for the most part. One can argue about multimedia and graphical representations, but let us generalise a bit. Humans are taught to read text in a linear fashion, horizontally or vertically depending on the cultural and linguistic backdrop. Circles don't use space effectively: if you fill a box with balls rather than cubes, there are empty spaces. This is another reason. The space-efficient shapes are used not to save space but, on the contrary, to emphasise space. Space is a very important aspect because it helps people focus better: which is easier to identify, a needle in a haystack or a needle on a table?

To enable better focus, space is a prerequisite. What else? Well, contrast, and Fitts' law. Imagine Borat in a corporate setting: funny, and he easily catches attention. Imagine him in a beach setting: not as easy to spot. Contrast here implies the ability to stand out of the context. Fitts' law speaks of the same. Yes, I mentioned it earlier; Fitts' law in simple terms: the larger the object and the closer it is to the context, the more easily it gets picked.

No silver bullets


Adherence to the above points doesn't guarantee usable apps, but non-adherence causes annoying apps. Remember our definition of usability: eliminate annoyances. Confusing words in the first line? Well, let me explain: it is important to understand rules so that we know how to break them. Adherence only helps in understanding; non-adherence at times helps improve upon the rules. Let's see an example. Regardless of what I have stated, users like to focus, in fact so much that they become impatient if there is nothing to focus on, like the waiting time for a screen to load. So we need to redirect their focus, causing them to switch to some other, minor annoyance, like an animation, a simple message or a change of colour. These minor details make a lot of difference.

Ghosts that haunt


So, simply put, usability is the ease of focusing, plus the features that aid focusing. Let's see some anti-focus patterns. Small text is a killer: the focal circle contains too much detail. Asking for user interference is another killer; ever heard of UAC in Windows Vista? To be fair, a security-conscious person likes it; I do, and I am a sceptic :). If a distraction comes up and goes away repetitively, it is an annoyance, for example alert sounds or animations begging for attention. Did you know that certain visuals can induce epilepsy in certain demographics? You certainly won't like talking to their lawyers, would you? Another major annoyance is small UI controls that violate Fitts' law. Why in the world are radio buttons/check boxes so small? If you adhere to web standards, you will notice that clicking on the label selects the small UI control; try Yahoo, pretty convenient, isn't it? The problem with UI controls is that the vendor owns them, and it is difficult to have customised UI controls. Note to self: let your grandma try the UI before shipping it.
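That label trick is standard HTML: associating a `<label>` with a control folds the label into the clickable Fitts target. A minimal DOM sketch (the id and text are made up):

```typescript
// Associating a label with a checkbox makes the label part of the target:
// clicking the (large) text toggles the (tiny) control.
const box = document.createElement("input");
box.type = "checkbox";
box.id = "newsletter"; // hypothetical id

const label = document.createElement("label");
label.htmlFor = box.id;
label.textContent = "Subscribe to the newsletter";

document.body.append(box, label);
```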

Finally, device-dependent interfaces have their own faulty assumptions. Some people are happy using the mouse (most Windows users); some others prefer the keyboard (the Vi editor, anyone?). Focus switching in an app needs to consider these two major demographics (at least, if I may add). In the next article we shall see some key components and their strengths and weaknesses.

::The most intuitive interface known to mankind is the nipple; the rest are just learnt::

What makes UI development painful

UI development is a strange area: almost everyone loves it. Almost everyone, in the beginning. Then there are the haters: almost everyone hates code developed by others. Then comes a new demographic: those who hate their own code. Why is it such a tumultuous relationship?

To understand this, we need to understand the relative frame of reference, or the background perspective. Most developers don't work on UI for most of their time. They work on back-end solutions which require little user interaction, like an automation job which takes inputs once and then goes on its way. The results are not very apparent until someone tests them. The defects, once fixed, stay mostly fixed (subjective really, but assuming the presence of a skilled person). There will be new defects, but not the same ones, and people perceive the end result in a boolean fashion (YES it works, or NO).

UI, on the other hand, is an aesthetic, scientific and subjective dilemma.

Aesthetics!

Not the ones you perceive; whilst I agree that visual aesthetics are required, I am not speaking of them. What I mean is design aesthetics: how easy it is for a complete stranger (who happens to be a developer) to appreciate the classiness of the code, and the ease of fixing and tweaking it.

Scientific

Kick me if you want, but usability is a science, a cultural philosophy, a psychological undertaking. How many of us have read about it, let alone understood it? Added to this is the technical complexity: does my solution platform support these features? Do the requirement specs agree with this?

Subjective

This is slightly different from usability, as the usability definition covers the most part. I will relent if someone claims this needs to be grouped under usability, but it is my blog, buddy. I call it subjective because of one question: when was the last time you were absolutely sure that your POV is the same as the world's? Not even mass media (cinema, popular literature, advts.) achieves this.

In the beginning, UI is all green and hunky-dory. I want a button/panel/widget/you name it; I write it and see it immediately. Wow from self, wow from boss, wow from stakeholders. Then comes the ugly monster: "It's all nice, now I need this and want that." OK, a minor setback. Go back, jump into your RAMBO pants to save the world, and achieve it. Not bad, is it? Repeat this 100 times. The testing team, the users, the changing requirements, the neighbourhood kid, grandma next door... God-dammit, stop! Had enough. The world is insane, I quit. Then the boss says: dude, what do I pay you for? Get it done. A big sigh.

Why so many questions? Simple: UI evolves and everyone perceives it. It is not taken for granted like back-end code. So where to start off? Keep the following in mind.

  1. Nothing is perfect, not even the solution architecture being used. Ever heard of leaky abstractions? Go google it.

  2. Design patterns are not meant to be read, but practised.

  3. Learn, and learn a lot; read, and read a lot: what ticks, what doesn't, how the technology works. Question everything. Try to mimic the things you liked.

  4. People complain of minor annoyances because repeated annoyances are irksome. Annoyances force them to learn. A good UI is forgiving and robust; if it misses these, users will complain. Ask any Windows user who moved to a Mac.

  5. Don't try to write smart code or quick fixes; you need to be twice as smart to debug and fix them later. Invest time. If you don't have time now, how will you spare it later on?


I will post about tricks that will save the day in later posts. Keep reading.


About Me

Well, for a start, I don't want to! Yes, I am reclusive; no, I am not secretive. Candid? Yes. Aspergers? No :). My friends call me an enthusiast, my boss calls me a purist, I call myself an explorer. To summarise: just an inquisitive child who didn't learn to take things for granted. For the sake of a living, I work as a S/W engineer. If you don't know what that means, turn back right now.