
Stephann gives keynote at OzCHI 2018

posted Dec 17, 2018, 9:47 AM by Stephann Makri   [ updated Dec 17, 2018, 9:53 AM ]
In December 2018, Stephann gave the opening keynote at the 30th Australian Conference on Human-Computer Interaction (OzCHI 2018). He presented his work on serendipitous information acquisition, using the conference theme 'physical, digital, interactive, human' to frame his talk.

In the keynote, Stephann argued that serendipity researchers may have "fallen into a recursive trap" by suggesting it is possible to 'create opportunities' for serendipity through design without systematising it to the point that it loses meaning. He cautioned that "any attempt at systematising it, whether by recommending, suggesting, or even nudging users, may be trying to capture something that is too elusive."



An excerpt from his keynote follows:

"I am not concerned if some users become habituated to positive outcomes, as at least they are obtaining them. But I fundamentally question the premise that serendipity, as complex and context-dependent as it is, can be adequately modelled with today’s and even tomorrow’s capabilities. Even now that we are beginning to see recommender algorithms being built based on information theory, the tendency has been to try to ‘boil down’ serendipity into constituent components such as novelty and usefulness without attempting to model the user, information or environment context. This can potentially be done through user interest profiling, search history tracking, and perhaps behaviour tracking and analytics.

As the models become more and more sophisticated, perhaps some designers (and designs) will try to convince us that we actually needed ‘serendipity on a plate’ after all. And perhaps some will reject the urge to systematise it altogether and instead opt to design exploratory browse-based interfaces where the only ‘intelligence’ involves identifying patterns and anomalies as prompts to encourage users to make their own meaningful connections. Many may be starting to believe the AI hype again after decades of good old-fashioned disillusionment. But I’m not sure if AI can ever get to the stage where it can guarantee a user has not yet seen a piece of information, has not already had a particular insight, or will find particular information interesting or useful. Future designers must reflect on whether to continue designing without such guarantees, or to shift most or all of the connection-making ‘intelligence’ to the user.

Both approaches run the risk of systems never reaching their maximum serendipity potential. The former may frustrate users by presenting them with information they already know or that turns out not to be useful; the latter may frustrate users by making them do all the work, in the name of providing them with maximum agency."
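
To make the critiqued reduction concrete, the short sketch below (a hypothetical illustration in Python, not taken from the keynote or from any real recommender system) shows what 'boiling down' serendipity into novelty and usefulness might look like in code; every name and value in it is an assumption for illustration only.

# Hypothetical sketch: 'boiling down' serendipity into novelty x usefulness,
# ignoring the user, information and environment context that the keynote
# says such models leave out.

def novelty(item_id: str, seen_items: set) -> float:
    # Crude novelty proxy: 1.0 if the user has never seen the item, else 0.0.
    return 0.0 if item_id in seen_items else 1.0

def usefulness(predicted_relevance: float) -> float:
    # Crude usefulness proxy: clamp the recommender's predicted relevance to [0, 1].
    return max(0.0, min(1.0, predicted_relevance))

def naive_serendipity_score(item_id: str, seen_items: set, predicted_relevance: float) -> float:
    # Combine the two proxies into a single score; nothing contextual is modelled.
    return novelty(item_id, seen_items) * usefulness(predicted_relevance)

# Example: an unseen item with high predicted relevance scores 0.9, yet nothing
# guarantees the user has not already had the insight it might prompt.
print(naive_serendipity_score('paper-42', {'paper-7', 'paper-13'}, 0.9))

As the sketch makes plain, such a score can only approximate what the user already knows or will find useful, which is precisely the guarantee the keynote argues AI cannot provide.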
