
CHI Wednesday notes

Collaboration at Work

Koala: Capture, Share, Automate, Personalize Business Processes on the Web

Greg Little, MIT, USA
Tessa Lau, Allen Cypher, James Lin, Eben Haber, Eser Kandogan, IBM, USA

Koala presents a new paradigm in end-user programming: programming by demonstration, where recorded actions are human-readable and editable; data stores that automatically personalize procedures; and a wiki for sharing procedures.

Sidebar Firefox control (actually a simplified HTML page view of the script wiki)
* The sidebar shows the process model
* Green highlights in the sidebar and in the browser show the current action

4 Key Components
* Sloppy programming
* Programming by demonstration
* Personalization database
* Script wiki (Koalescence)

Sloppy Programming
* Originated with keyword commands
* Much simpler algorithm (very simple)
* Resilient against DOM changes, but not word changes
* Has trouble reaching into dynamically generated DOM like GMail
* No advanced control instructions yet, just "cookbook" sequential instructions. Interesting control structures on the Web deal with things like "If asking for login..." -- the author believes this is simpler than the general case of control structures

Example instruction: "Click lucky button" (Google.com). Koala parses the page and associates words with each page element:
* E.g. ["Images", Link, Click], [Type, Textbox], ["I'm feeling lucky", Button, Click]
* The element with the highest count of common words wins
* No grammar parsing; the claim is it works better without, because the action space of a Web browser is small, with fairly distinct words
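A minimal sketch of that matching step as I understood it (my own reconstruction in Python, not Koala's actual code): score each page element by how many of its associated words appear in the instruction, and pick the highest scorer.

```python
# Sketch of Koala-style "sloppy" instruction matching (my reconstruction,
# not the actual Koala implementation). Each page element carries a bag of
# words: its visible label, its element type, and the action it affords.
def match_element(instruction, elements):
    words = set(instruction.lower().split())

    def score(element):
        # Count how many of the element's words appear in the instruction.
        return len(words & {w.lower() for w in element["words"]})

    return max(elements, key=score)

page = [
    {"id": "images-link", "words": ["Images", "link", "click"]},
    {"id": "search-box", "words": ["type", "textbox"]},
    {"id": "lucky-btn", "words": ["I'm", "feeling", "lucky", "button", "click"]},
]

print(match_element("click lucky button", page)["id"])  # -> lucky-btn
```

This word-overlap scoring is also why the approach survives DOM changes (the label words stay put) but not word changes.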

Controls: play, step, stop, record, Koala icon (opens the script wiki)

Demo
* Title: "Find IBM", [x] private
* Typed in google.com; an instruction for navigating to google.com appeared in the sidebar
* Typed in IBM: enter 'ibm' into the "Google Search" textbox
* Edited the sidebar to change it to 'click the lucky button'

Per-user
* Enter "Tessa Lau" into the "Name:" textbox
* Database of key/value pairs; matches the value "Tessa Lau" against the "Full name" key
* Substitutes: Enter your "Full name" into the "Name:" textbox
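Roughly the substitution mechanism as I understood it (a sketch under my own assumptions, not IBM's code): when a recorded literal matches a value in the user's datastore, generalize it to a reference to the key, so another user running the script gets their own value filled in.

```python
# Sketch of the personalization-database idea (my reconstruction). Recording
# generalizes literals into key references; playback resolves key references
# against the current user's key/value datastore.
def generalize(step, datastore):
    for key, value in datastore.items():
        step = step.replace(f'"{value}"', f'your "{key}"')
    return step

def personalize(step, datastore):
    for key, value in datastore.items():
        step = step.replace(f'your "{key}"', f'"{value}"')
    return step

tessa = {"Full name": "Tessa Lau"}
recorded = 'Enter "Tessa Lau" into the "Name:" textbox'
shared = generalize(recorded, tessa)
print(shared)  # Enter your "Full name" into the "Name:" textbox

someone_else = {"Full name": "A. N. Other"}
print(personalize(shared, someone_else))  # Enter "A. N. Other" into the textbox
```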

Script wiki (Koalescence)
* Open in Sidebar, Run, Edit, Duplicate, Delete, Recommend as Editor's Pick
* Ratings, creator, private button

Understanding Memory Triggers for Task Tracking

A. J. Bernheim Brush, Brian R. Meyers, Desney S. Tan, Mary Czerwinski, Microsoft, USA

Describes an observational field study of status report writing. Presents implications for the design of computer assisted time tracking systems that support personal introspection and status report writing.

How many of you have ever written a status report?

PersonalVibe: stores status in a personal database
* Three tabs: windows, documents, applications
* Columns = days
* 3D boxes where height = time spent (OMG I spend too much time in Outlook)
* After the novelty wore off, people stopped using it to write status reports

Understand task tracking
* Process
* Content: is duration important?
* Effort: how hard is it to write status reports?

Method: observation and structured interviews, 2 companies (Microsoft, other), affinity diagram

  1. Look at explicit triggers recorded (5/8), e.g. written notes, calendar/datebook, Excel, draft email.
  2. Recall triggers using other sources: memory, calendar, email

"If I haven't gone in every day and put in my hours and kind of written when I did that day then it [reporting] will take longer because I have to go back through my calendar" - interview

Triggers, not detail: "sent count for offsite" -- like to-dos, just enough for recall

Content: For others. Not much time tracking, prioritization

Effort: 15-90 minutes per week. Perceived as lightweight.

What StatusWriter did wrong:

  1. No way to create explicit triggers: support ongoing tracking in a lightweight manner, keep the human in the loop.
  2. Emphasized detail rather than triggers: lots of time detail, pie charts. Thinking about augmenting the calendar with triggers like documents worked on. Show triggers with detail on demand.
  3. Separate tool: because perceived effort is low, people don't want to load another tool. Should augment tools people already use.

Conclusion: Explicit and implicit triggers are used when people write their status reports

Tags, Tagging, and Notetaking

Selection-Based Note-Taking Applications

Aaron Bauer, Kenneth R. Koedinger, Carnegie Mellon University, USA

Presents a study comparing the note-taking behaviors and learning outcomes of four different note-taking applications. Describes implications for the design of selection-based note-taking applications.

71-91% of students take notes while reading, 99% in lecture

To what extent does learning benefit from note-taking techniques?

Typing vs. 3 selection-based interaction techniques (e.g. copy/paste)

Livenotes (Kam et al., CHI '05)
* Collaborative note-taking software for students
* Handwrite as well as type
* Slide takes up 3/4 of the page

Text-editing vs. handwriting
* Typing produces wordier notes, but learning is the same
* Typing also produced fewer ideas
* Adding copy-paste functionality: far more verbatim notes, paste dominates
* Copy/paste users forgot more
* Ideas recorded in a wordy fashion; wordier pasting = more forgotten

Wordiness: does copy/paste wordiness reflect reduced attention to detail? -> restrict the size of selections

Wording: is it important that students generate notes in their own words, or is reading a new version of the same idea enough? -> allow students to select alternate wordings, i.e. test whether it matters that students generate the alternate wording themselves

Prototype
* Top 2/3: browser with course material
* Bottom 1/3: embedded JavaScript text editor

Tools

  1. Typing: hypothesized -speed, +generation, +attention, +wording
  2. Unrestricted copy/paste: hypothesized +speed, -generation, -attention, -wording
  3. 1-sentence copy/paste: hypothesized +speed, +attention, -generation, -wording
  4. Select with alternates: after a selection, a popup appears with several wordings; students have to identify the correct alternate wording -- some options were distractors. Hypothesized -speed, -generation, +attention, +wording

Developing restrictions
* Goal: increase attention by reducing skimming
* Sentence boundaries are easy to identify -> require students to select portions of single sentences (see the sketch below)
* Prevent gaming of restrictions/hard stops: inappropriate selections (ones that span ideas) disappear
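One way such a restriction might be enforced (my own illustration; the prototype was an embedded JavaScript editor, and everything below is an assumption): segment the source text into sentences and reject any selection that crosses a sentence boundary.

```python
import re

# Sketch of a one-sentence selection restriction (my illustration, not the
# paper's implementation). A selection is valid only if it falls entirely
# within a single sentence of the source text.
def sentence_spans(text):
    # Naive segmentation: split after ., !, or ? followed by whitespace.
    spans, start = [], 0
    for m in re.finditer(r"[.!?](?:\s+|$)", text):
        spans.append((start, m.end()))
        start = m.end()
    if start < len(text):
        spans.append((start, len(text)))
    return spans

def selection_allowed(text, sel_start, sel_end):
    return any(s <= sel_start and sel_end <= e for s, e in sentence_spans(text))

doc = "Notes help recall. Copy and paste can reduce attention."
print(selection_allowed(doc, 0, 10))   # True: inside the first sentence
print(selection_allowed(doc, 11, 30))  # False: spans the sentence boundary
```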

Developing selection tool
* Previous data used to develop the options and determine likely selections
* No feedback for incorrect selections (turned out not to be an issue)
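The select-with-alternates interaction itself is simple to sketch (again my own illustration on made-up paraphrases, not the study's materials): the popup mixes one faithful paraphrase with distractors, and the student must pick the faithful one.

```python
import random

# Sketch of the select-with-alternates popup (my illustration, not the
# paper's code). The student must pick the one faithful paraphrase from a
# shuffled set that also contains distractors.
def build_popup(correct_paraphrase, distractors, rng=random):
    options = [correct_paraphrase] + list(distractors)
    rng.shuffle(options)
    return options

def check_choice(options, chosen_index, correct_paraphrase):
    return options[chosen_index] == correct_paraphrase

correct = "Note-taking aids later recall."
options = build_popup(correct, ["Note-taking replaces later recall.",
                                "Recall aids later note-taking."])
print(check_choice(options, options.index(correct), correct))  # True
```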

Time: restricted fastest, paste second fastest

Learning: paste and typing outperformed the select and restricted tools, i.e. the novel tools were outperformed by the standard tools
* Fewer key ideas were recorded with the novel tools
* Users were less satisfied with tools that had restrictions (selection most disliked)
* Users could not type when they wanted to

Wordiness and learning: there was a wordiness effect
* Paste recorded significantly more ideas than the other tools, 40% more on average; paste was also the most wordy
* Students performed better with less wordy paste notes

Hypotheses not supported; the note-taking process may have gotten in the way. Students were frustrated with the tools.

"I am good at designing tools that help reduce what students learn"

Elective technology
* Students choose whether and when to take notes, even in a laboratory setting
* Design better interventions: trade off effectiveness against desirability (i.e. allow more wordiness to get more use); user tests must include attitudes

Copy/paste should be allowed, but you may not want to allow typing+copy/paste (even though students want it). Worried about too much note-taking with copy/paste in the long run.

Future: highlighting, recommendations rather than restrictions

Getting Our Head in the Clouds: Toward Evaluation Studies of Tagclouds

A. W. Rivadeneira, University of Maryland, USA
Daniel M. Gruen, Michael J. Muller, David R. Millen, IBM, USA

Two studies explore impact of tagclouds’ textual and positional attributes on task effectiveness, leading toward a paradigm for tagcloud evaluation. Can help researchers and designers to improve social software displays.

Combination of: Font size, color, weight

List/block/spatial layout
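As a concrete aside on the font-size attribute (my own sketch, not from the talk): tag clouds conventionally map each tag's frequency to a font size interpolated within a fixed range.

```python
# Sketch of the common frequency-to-font-size mapping in tag clouds (my
# illustration, not code from the talk). Sizes are linearly interpolated
# between a minimum and maximum point size.
def font_sizes(tag_counts, min_pt=10, max_pt=36):
    lo, hi = min(tag_counts.values()), max(tag_counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {
        tag: round(min_pt + (count - lo) / span * (max_pt - min_pt))
        for tag, count in tag_counts.items()
    }

tags = {"chi2007": 42, "tagging": 17, "visualization": 29, "wiki": 5}
print(font_sizes(tags))
# {'chi2007': 36, 'tagging': 18, 'visualization': 27, 'wiki': 10}
```

Both experiments below vary exactly this kind of size assignment, plus where words land on the display.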

Experiment 1: effect of word location and font size on memory
* Display broken into four quadrants, 3 levels of font size
* No significant difference for words located near the largest word
* Size and quadrant matter (upper left down to lower right)
* Centrality does not matter

Experiment 2: effect of layout and font size on memory and impression formation
* Conditions: tag cloud w/ alphabetical sorting, tag cloud w/ frequency sorting, spatial layout algorithm, column list w/ frequency sorting (scrollbar req'd)
* Largest words had higher recognition rates
* Layout did not matter
* Impression formation: list is slightly better

Future investigation ideas:
* Take advantage of the upper left quadrant by putting smaller words there to adjust for size
* Search tasks

Distributed Interaction

Voyagers and Voyeurs: Supporting Asynchronous Collaborative Information Visualization

Jeffrey Heer, University of California, Berkeley, USA
Fernanda B. Viégas, Martin Wattenberg, IBM, USA

Describes mechanisms for asynchronous collaboration around interactive data visualizations. Includes novel collaboration mechanisms and design considerations for interactive visual media and presents observations of social data analysis processes.

Vizster: 1.5M Friendster profiles visualized, deployed at a kiosk at an SF party
* Groups spent more time in front of the display than individuals
* Friends encouraged each other to unearth relationships, probe boundaries, and challenge the validity of the display ("what? she's not single") -- comparing their own world knowledge against the display
* Informal analysis in the form of social play, driven by story-telling of group histories

Baby Name Voyager (Martin Wattenberg): promo vehicle for his wife's book; surprised at the popularity of the Voyager itself
* Similar activities on blogs as at the Vizster kiosk
* Challenges to find dead names: "Roosevelt has two spikes right about where you'd expect them"
* Baby boomer generation shows a dip in usage of a/e/i/o names

Thesis: visual sensemaking is not only a cognitive exercise but a social activity
* E.g. Jim Gray collaborative search
* Identify conditions for effective collaboration across time + space
* Systems for async collab viz

sense.us
* Collaborative Web viz of demographic data
* Job voyager, birthplace voyager, population pyramid, US census state map, IBM Dogear people, Dogear tags
* Used the job voyager for control studies. Examples: decline of farmers, rise of women, incongruity of male vs. female farmers despite their collaborative role
* Every query has a URL (the URL updates as you type) -- see the sketch after this list
* Comments are not tied to a particular URL but instead to the state of the view (i.e. the visible objects)
* Saved views combined with drag-and-drop links with preview
* Comments can be combined with freeform visual annotations
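The state-in-the-URL idea is concrete enough to sketch (my own illustration in Python; the real sense.us parameter names and encoding are assumptions): serialize the view state into the URL so any view can be bookmarked, linked, and commented on, and recover the state from the URL on load.

```python
from urllib.parse import urlencode, parse_qsl

# Sketch of view-state-in-the-URL linking (my illustration; sense.us's actual
# parameters and encoding are unknown to me). Because the full view state
# round-trips through the URL, every visualization state is shareable.
def state_to_url(base, state):
    return base + "?" + urlencode(sorted(state.items()))

def url_to_state(url):
    return dict(parse_qsl(url.split("?", 1)[1]))

view = {"viz": "job-voyager", "query": "farmer", "sex": "male"}
url = state_to_url("http://senseus.example.org/vis", view)  # hypothetical host
print(url)
assert url_to_state(url) == view  # comments can then key off this state
```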

Social behaviors supported:
* Pointing: annotations
* Collecting and linking related views: bookmark trails, drag-and-drop links
* Awareness: serendipitous discovery, comment listings, profiles
* Do no harm: individual exploration is still primary

Examples and results:
* Military: a timeline completed by another user
* Stock broker: a user sketched a person jumping off a building at a dip
* Collaborative sensemaking: 5 seed comments. Dentists: fluoride, candy, stratification (rise of the dental technician)
* Users posted findings in Dogear
* High co-occurrence of observations, questions, and hypotheses; very few affirmation comments in the initial prototype
* 50% of comments included annotations: pointing and drawing. Annotations 'not always necessary' but 'surprisingly satisfying'. Worries about professional look.
* High usage of the back button (hello Web 2.0), though the dominant navigation was the visualization itself (70.5%)

2 complementary usages:
* Voyagers focused on the data
* Voyeurs focused on the comment listings

Future work:
* Scented widgets (show which options are more popular)
* Enhanced pointing and deictic reference
* Game play and incentives

Other systems: Spotfire, Swivel, Many-Eyes (successor to sense.us), Wikimapia

Power of the few vs. Wisdom of the Crowd: Wikipedia and the Rise of the Bourgeoisie (20 min)

Aniket Kittur, University of California, Los Angeles, USA
Ed H. Chi, Bryan A. Pendleton, Bongwon Suh, PARC, USA
Todd Mytkowicz, University of Colorado, Boulder, USA

In this study we examined how the influence of “elite” vs. “common” users changed over time in Wikipedia. The results suggest that although Wikipedia was driven by the influence of “elite” users early on, more recently there has been a dramatic shift in workload to the “common” user. We also show the same shift in del.icio.us, a very different type of social collaborative knowledge system.

Cathedral and the Bazaar -- there are still some centralized structures even in bazaar models

Wikipedia
* Dec '04: 2.5% of users made 50% of the edits. Jimbo Wales: "Gang of 500"
* Admins have great power to set policies, ban users, resolve disputes
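The 2.5%/50% figure is a simple concentration measure; here is how it could be computed from an edit log (my own sketch on made-up data, not the paper's dataset or code):

```python
from collections import Counter

# Sketch of the "what fraction of users account for half the edits"
# computation (my illustration on made-up data). Sort users by edit count
# and accumulate until 50% of all edits are covered.
def fraction_of_users_for_half_the_edits(edit_log):
    edits_per_user = Counter(edit_log)  # user -> edit count
    total = sum(edits_per_user.values())
    covered, users_needed = 0, 0
    for count in sorted(edits_per_user.values(), reverse=True):
        covered += count
        users_needed += 1
        if covered >= total / 2:
            break
    return users_needed / len(edits_per_user)

log = ["alice"] * 60 + ["bob"] * 20 + ["carol"] * 10 + ["dave"] * 5 + ["eve"] * 5
print(fraction_of_users_for_half_the_edits(log))  # 0.2: one of five users
```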
