Intent-discerning agent for more intuitive visualizations

Abstract

Today’s visual interfaces are capable of representing large, semantically complex datasets; user interaction with subsets of the data is used to make the most of limited display space. However, deciding what data to provide and in what context is computationally and graphically expensive. We have built an autonomous, rule-based intelligent agent that sits beneath a visualization, observes user behavior, determines user intent, and, based on what it has learned, predicts future interest. The agent observes user manipulation and gathers interaction information continuously. For demonstration purposes, the agent sits beneath a visualization of a shallow hierarchical graph. The agent determines user intent through simple computations on the gathered interaction information and passes its decisions back to the visualization, which displays them to the user via an ambient overlay. Future work will allow the agent to direct the interface as to which data to display and in which context, enabling more intuitive human-computer collaboration.
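
The abstract describes an agent that accumulates interaction evidence and applies simple rules to predict nodes of likely future interest. The following Python sketch is purely illustrative of that architecture; the class and event names (IntentAgent, InteractionEvent), the rule weights, and the spreading heuristic are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a rule-based intent agent observing interactions
# on a shallow hierarchical graph. All names and weights are illustrative
# assumptions, not the system described in the paper.

from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class InteractionEvent:
    node_id: str     # node of the hierarchical graph the user touched
    kind: str        # e.g. "hover", "click", "expand"
    dwell_ms: float  # how long the interaction lasted


@dataclass
class IntentAgent:
    """Observes interaction events and predicts nodes of likely future interest."""
    parent: dict    # node_id -> parent node_id (shallow hierarchy)
    children: dict  # node_id -> list of child node_ids
    weights: dict = field(
        default_factory=lambda: {"hover": 1.0, "click": 3.0, "expand": 5.0}
    )
    attention: defaultdict = field(default_factory=lambda: defaultdict(float))

    def observe(self, event: InteractionEvent) -> None:
        """Continuously accumulate evidence of interest from user manipulation."""
        w = self.weights.get(event.kind, 1.0)
        self.attention[event.node_id] += w * (1.0 + event.dwell_ms / 1000.0)

    def predict_interest(self, top_k: int = 3) -> list:
        """Simple rule: interest spreads to children and siblings of attended nodes."""
        scores = defaultdict(float)
        for node, score in self.attention.items():
            for child in self.children.get(node, []):
                scores[child] += 0.5 * score
            p = self.parent.get(node)
            if p is not None:
                for sibling in self.children.get(p, []):
                    if sibling != node:
                        scores[sibling] += 0.25 * score
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        return ranked[:top_k]  # handed back to the visualization for ambient overlay


# Usage on a toy two-level hierarchy: repeated attention to node "a1"
# raises the predicted interest of its sibling "a2".
if __name__ == "__main__":
    children = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
    parent = {c: p for p, cs in children.items() for c in cs}
    agent = IntentAgent(parent=parent, children=children)
    agent.observe(InteractionEvent("a1", "click", 800))
    agent.observe(InteractionEvent("a1", "hover", 1200))
    print(agent.predict_interest())
```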

