Mobile Phones: microDesign

So far, all the design-related posts here have talked about design issues that are directly perceptible to us, like shape/colour, layout, or workflow. But today, I am going to discuss a design point that is “minute” in nature, i.e. beyond what our senses can easily detect.

Mobile phones. A device that everybody uses today, not just to talk, but to do absolutely everything, all through the same buttons/touchscreen. So often, you are playing a game, say, vigorously jabbing some menu key, and then a call comes in, and you accidentally disconnect it. Nothing could be more irritating.

So, let us understand in slow motion the events happening in that fraction of a second: your mind is playing the game, and you are pressing the buttons in some fashion according to some calculation in your mind. Then a call comes in. The first to know is the mobile: it flashes the screen, your eyes pick it up, your brain is told, and then the fingers pause. So now, the really crucial question is: when does the menu button’s function switch over from the game to the “Answer” (or, say, “Reject”) key? A well-designed mobile will do this switch only after allowing a calculated delay from the moment the screen first flashes.

Now, this calculated delay, which equals the total response time of the eyes->brain->finger loop, must be on the order of a fraction of a second. But if the mobile does wait that long, consider how much it suddenly adds to the user experience! The user would just ‘know’ somehow that the device is better, but the reason is beyond their ordinary sense of perception.
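To make the idea concrete, here is a minimal sketch of such a grace period in Python. Everything here, from the 0.4-second figure to the class and key names, is my own guess at how it might be done, not any real phone’s firmware:

```python
import time

REACTION_DELAY = 0.4  # assumed eyes->brain->finger time, in seconds

class KeyRouter:
    """Routes key presses either to the game or to the call UI.
    For a short grace period after the incoming-call alert, presses
    are still treated as game input, so a mid-jab press cannot
    accidentally answer or reject the call."""

    def __init__(self):
        self.call_alert_at = None  # when the screen first flashed

    def on_incoming_call(self):
        self.call_alert_at = time.monotonic()

    def route(self, key):
        if self.call_alert_at is None:
            return "game"
        if time.monotonic() - self.call_alert_at < REACTION_DELAY:
            return "game"      # user hasn't had time to react yet
        return "call_ui"       # now the same key means Answer/Reject

router = KeyRouter()
router.on_incoming_call()
print(router.route("menu"))   # -> "game" (pressed too soon to be deliberate)
time.sleep(REACTION_DELAY)
print(router.route("menu"))   # -> "call_ui"
```

The whole design point lives in that one timestamp comparison: a press that arrives before the user could plausibly have reacted is, by definition, not meant for the call.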

(Note that this issue is present even in the very old analog land-line phones: instead of a visual signal, we have a loud bell, and instead of a button press, we pick up the handset.)

An interesting point that this raises: it might very well be the case that someone thought of this many years ago. But in the times of the first phones, it might not have been possible to engineer such minute time delays into systems. Today, with microchips, it is doable. This shows that just having great ideas is not enough: your production technology must keep in step with the requirements of your ideas, which keep growing.

I think the world is full of opportunities for microDesign that can quietly change the way we do things, without our actually feeling any change.


Image Manipulation: From Stack to Nodes

(Meta: I am posting after several weeks; I was away from the Net for some time.)

Those who have used programs like GIMP or Photoshop would know that operations on images are invariably performed in a stack fashion. Seriously, nothing could be more annoying. For example, suppose that I adjust the sharpness and then do many more operations after that; and then the boss comes and says he’d like more sharpness to start with. The only way to revisit the old operator is to empty the stack by undoing, over and over. What a waste.

Contrast this (no pun intended) with what is seen in compositing software, for example Blender’s internal compositor. There, the fundamental unit is called a node, and it represents some operator (e.g. contrast adjustment, or Gaussian blur). Basically, you connect the output of one node as an input to the next one, eventually building an “assembly line” of nodes, each node doing a particular operation on the image. For example, one can get a hazy glow effect by putting the original image through a blur node, then stepping up the brightness levels with another node, and then mixing this with the original.
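To make that pipeline concrete, here is a toy sketch of it in Python. This is a minimal stand-in of my own, not Blender’s actual API: nodes are plain callables whose outputs are cached, which is exactly what lets intermediates be reused.

```python
class Node:
    """One operator in the graph. Its output is cached, so several
    downstream nodes can reuse the same intermediate result."""
    def __init__(self, op, *inputs):
        self.op = op          # function taking input images, returning an image
        self.inputs = inputs  # upstream nodes feeding this one
        self._cache = None

    def output(self):
        if self._cache is None:
            self._cache = self.op(*(n.output() for n in self.inputs))
        return self._cache

# Stand-in "images" are just 2-D lists of grey values (0..255).
def blur(img):
    # crude horizontal box blur, standing in for a Gaussian blur node
    return [[sum(row[max(0, x - 1):x + 2]) // len(row[max(0, x - 1):x + 2])
             for x in range(len(row))] for row in img]

def brighten(img):
    return [[min(255, v + 40) for v in row] for row in img]

def mix(a, b):
    return [[(u + v) // 2 for u, v in zip(ra, rb)] for ra, rb in zip(a, b)]

# The glow graph: the original feeds BOTH the blur branch and the final
# mix -- precisely the reuse a stack-based editor cannot express.
original = Node(lambda: [[0, 64, 128, 192, 255]])
glow = Node(mix, Node(brighten, Node(blur, original)), original)
print(glow.output())
```

The caching in output() is the whole trick: the blurred and brightened intermediates stay around for any new node to tap into.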

The node-based approach has several advantages:
1. Even after you add more nodes, you can still scroll left and edit the previous nodes, or even change their setup entirely. The stack-based approach fails miserably here.

2. One can perform more complex operations, involving “previous” versions of the image. For example, in the node setup shown above, we mixed the changed image with the original one. A stack-based approach cannot allow simultaneous access to the original as well as the manipulated image (unless you took the trouble to save them as separate layers). Just like code reuse, this is an instance of resource reuse: when you apply several operations on an image, you should be able to reuse the intermediates. The stack-based flow effectively discards the intermediates.

3. We often have to do the same long set of operations over and over, for several images. The node-based approach makes this very easy: just duplicate the entire limb of your node setup that corresponds to those operations. Or even better, you can merge those nodes together into a single “node group”. Node groups, in other words, represent an arbitrarily complex series of image operations as a single entity that can be duplicated/relocated with ease (a rough sketch follows this list). Of course, you can always edit the node group itself if you need to.
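Continuing the toy Python graph from above, a node group can be modelled as an ordinary function that wires up the whole sub-graph and hands back its output node. Again, these names are my own invention, not Blender’s; this reuses the Node, blur, brighten, and mix definitions from the earlier sketch:

```python
def glow_group(source):
    # The whole blur -> brighten -> mix sub-graph, packaged as one
    # reusable unit that can be applied to any source node.
    return Node(mix, Node(brighten, Node(blur, source)), source)

# Replay the same chain of operations over several images at once:
images = [[[10, 20, 30, 40]], [[200, 210, 220, 230]]]
results = [glow_group(Node(lambda img=img: img)).output() for img in images]
```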

(Photoshop does have so-called “adjustment layers”; but they are a sham, compared to all this.)

One thing to note, though, is that the requirements of an image manipulation program are slightly different from those of compositing software. Even so, many of the basic operations performed are very similar in nature. I think that the future of image manipulation definitely lies in moving away from stack-based editing to node-based.

Haha

Basically, I am going to laugh at this article. (Disclaimer: I guess that life in general has mocked all of us several times by now, so I hope that this post is taken in the right spirit.)

The general idea put forth is that FIITJEE is not fit for JEE. And then, point by point, the article proceeds to contradict this general idea. For starters, the author has “plenty of examples” to show that FIITJEE percentile is “completely irrelevant” to JEE rank. Apparently, the author has a grip on correlation (never mind the ambiguity in the phrase “completely irrelevant”: does it mean uncorrelated, or independent?). But then, in order to really, really convince us of the same, instead of presenting his data to us, he gives us a list of “methods”, which are the result of solving over 50 mock tests (and a real one too, judging by his rank, I guess). These methods should apparently boost our FIITJEE score much more than our JEE score, implying that the two are “entirely irrelevant”, QED. At least, that is what I think this killer line means:

[PS: MOST OF THESE METHODS DO NOT WORK FOR IIT-JEE, RATHER YOU WONT NEED THEM IN JEE, ‘COZ JEE PAPERS THESE DAYS ARE EASIER AND CAN BE SOLVED WITHOUT THESE METHODS. STILL YOU CAN USE THESE METHODS IN SOME PLACES TO SAVE TIME.]

Clearly, the author has a grip on reality as well. (BTW, an upper-case parenthesized postscript right in the middle of a completely boldfaced article is distracting.)

The first method shows how dimensional analysis could solve a 5-mark (real) JEE question. (Something is wrong with the exam, on the whole.) It is followed by more trivia, like substituting values, substituting functions…
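For readers who haven’t seen the trick: in a multiple-choice setting you can often pick the answer purely by checking dimensions. A generic illustration of my own (not the article’s actual question), for the period of a simple pendulum of length $l$ under gravity $g$:

```latex
% Of the candidate formulas, only \sqrt{l/g} has the dimension of time:
T \propto \sqrt{\frac{l}{g}}, \qquad
\left[\sqrt{\frac{l}{g}}\right]
  = \sqrt{\frac{\mathrm{m}}{\mathrm{m\,s^{-2}}}}
  = \mathrm{s}.
```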

…and then comes a list of deep, moving thoughts: in FIITJEE, there cannot be consecutive questions with both answers A. There simply cannot be. The probability of a FIITJEE answer being B or C is 70%. The probability of the answer to the first question of a passage comprehension in a FIITJEE exam being option A is 10%.

Forget the numbers. What irony that the author, who indicated earlier how this analysis is largely useless to JEE aspirants, proceeds with the same zeal to gather such intricate statistical details for the very same analysis!

But his efforts did pay off: he has proudly named a tactic the Inclusion Exclusion Principle. The reader may be eagerly expecting something on par with the combinatorial theorem of the same name. But in fact, this (new) principle goes further: it delicately tackles the intriguing problem of deciphering the cleverly rearranged terms in the options of certain chemistry questions. I am only worried that the subtle involvement of inclusion-exclusion, and of JEE, in the whole affair might go unnoticed by a casual reader.
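For reference, the combinatorial theorem whose name is being borrowed here says, for finite sets:

```latex
% Inclusion-exclusion for two and for three finite sets:
|A \cup B| = |A| + |B| - |A \cap B|
|A \cup B \cup C| = |A| + |B| + |C|
  - |A \cap B| - |A \cap C| - |B \cap C| + |A \cap B \cap C|
```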

I sincerely hope that the author cherishes the combinatorial namesake with equal joy, if not more.