Tuesday, March 11, 2008

Personalized UI Generation

Working on campus has its benefits.

On Tuesday I attended a talk by Krzysztof Gajos titled "Automatically Generating Personalized Adaptive User Interfaces". It was nice to see some solid research being done in this area, with particular attention to users who have a motor/dexterity impairment. Here are some of my notes, which of course may contain wild inaccuracies.

Krzysztof seems to have applied theory from mathematics and algorithms, particularly "decision-theoretic optimization", to graphical user interfaces. I'm not talking about Fitts's law. Think: AI meets HCI.

For his problem space he considers 3 things to adapt the GUI to:
1. devices
2. preferences
3. abilities

And things to adapt, or "UI building blocks":
A. Layout
B. Widget
C. Structure
D. Size (this last one he added later in his research)

The context seemed to be mouse-based interaction in graphical user interfaces.

Treating GUI design as an optimization problem, he developed quantitative metrics and a cost function, essentially weights applied to widgets. For those with an AI background, his bag of tricks included branch-and-bound search and full constraint propagation. I think of the "cost" function as a "utility" function (common in game decision AI). And perhaps most wonderful to hear him say: "deep down it is a constraint satisfaction problem". Yes! I'm sort of known for saying that a lot of things boil down to a constraint satisfaction problem, and I think some of my colleagues would have a good laugh about this. Anyway, this isn't about me and I've lost track of what this paragraph is for.
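To make that concrete, here is a toy sketch of my own (not Krzysztof's code, and nothing like his real model): pick one widget per UI element so that a weighted cost is minimized while the rendering still fits a screen-space budget, pruning branch-and-bound style. Every widget name, cost, and pixel size below is invented for illustration.

def best_rendering(candidates, max_height):
    """Branch-and-bound over widget assignments.

    candidates: {element: [(widget_name, cost, height_px), ...]}
    Prunes any partial assignment that already exceeds the height budget
    or the cost of the best complete assignment found so far.
    """
    elements = list(candidates)
    best = {"cost": float("inf"), "assignment": None}

    def search(i, cost, height, chosen):
        if height > max_height or cost >= best["cost"]:
            return  # bound: this branch cannot beat the incumbent
        if i == len(elements):
            best["cost"], best["assignment"] = cost, dict(chosen)
            return
        elem = elements[i]
        # try cheaper widgets first so good incumbents appear early
        for name, c, h in sorted(candidates[elem], key=lambda w: w[1]):
            search(i + 1, cost + c, height + h, {**chosen, elem: name})

    search(0, 0.0, 0, {})
    return best


if __name__ == "__main__":
    candidates = {
        "volume":  [("slider", 2.0, 30), ("spin box", 3.5, 20)],
        "channel": [("radio buttons", 1.5, 100), ("combo box", 3.0, 20)],
        "power":   [("toggle", 1.0, 20), ("checkbox", 1.2, 20)],
    }
    print(best_rendering(candidates, max_height=120))

Swap the made-up cost column for numbers measured from a particular person and you get the "personalized" part of the title almost for free.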

He spoke of the geometric concepts that arise from treating each preference dimension as an axis in a multidimensional space, so that each preference of one UI element over another is a hyperplane in that space. Overall, the "shape" of the solution space (the set of weightings that satisfy all the preference criteria) is a polytope. Way cool stuff to think about at a high level.
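My reconstruction of that picture, not taken from his slides: if a rendering r is scored by a weighted sum of factor values, then each stated preference between two renderings pins the weight vector to one side of a hyperplane:

\[
  u(r) = \sum_i w_i \, f_i(r),
  \qquad
  u(a) > u(b) \;\Longleftrightarrow\; \sum_i w_i \bigl( f_i(a) - f_i(b) \bigr) > 0 .
\]

So preferring a over b is the half-space of weight vectors on the positive side of the hyperplane w \cdot (f(a) - f(b)) = 0, and intersecting the half-spaces from every elicited preference leaves a convex polytope of admissible weights.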

He mentioned four elements of UI Design:
a. Perceived effort
b. Cognitive effort
c. Motor effort
d. Aesthetics

In analyzing ability, or motor effort, he optimized for time as opposed to preference:
cost(rendering(UI)) = time
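As a toy illustration of a time-based cost (my numbers and formula, not his model), assume each widget takes a few pointing operations and estimate each one with a simple Fitts-style movement-time expression whose constants would normally be fit per user, which is where the personalization would come from:

import math

def movement_time(distance_px, target_px, a=0.1, b=0.15):
    """Fitts-style movement time in seconds; a and b are placeholder
    constants that would normally be fit from a user's pointing data."""
    return a + b * math.log2(distance_px / target_px + 1)

def widget_time_cost(ops):
    """ops: list of (distance_px, target_px) pointing operations the
    widget needs for a typical selection."""
    return sum(movement_time(d, t) for d, t in ops)

# A combo box: click to open (small target), then click an option.
combo = widget_time_cost([(300, 20), (40, 20)])
# Five radio buttons: one click on a somewhat larger target.
radios = widget_time_cost([(300, 30)])

print(f"combo box ~ {combo:.2f} s, radio buttons ~ {radios:.2f} s")

Even with fake constants this hints at takeaway 4 below: fewer, larger targets beat a widget you have to open first.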

So what did I take away?

1. People without mobility impairments found the time-optimized generated GUI ugly.
2. Everyone performed better using their generated, time-optimized GUI.
3. Krzysztof is someone to watch.
4. The biggest gain in performance came from using a widget or widget set that required less mousery to manipulate, for example a set of five radio buttons instead of a combo box with five options.

Krzysztof really made his algorithms fast, and it was interesting to see a demo of the UI changing dramatically as he changed a constraint such as screen size. The widgets and widget hierarchy changed on the fly; for example, widget groups became tab panels in an auto-generated tab container.

But no spinning angry face. Sorry... inside joke.

Thanks for reading.

1 comment:

Ethan Anderson said...

The only way to do that is to break every window into its components entirely, passing them off to something else, like plasma, and letting it handle all arranging of them.

I'd like to choose how all my windows are arranged myself; statically, but according to specific rule sets, thanks...

Right now I'm still stuck with this 'window' paradigm, and I'm sick of it.
