CGL Meeting Agenda


Date: August 18th, 2004
Location: DC 1304
Time: 2:30 PM
Chair: Jie Xu


1. Adoption of the Agenda - additions or deletions

2. Coffee Hour

Coffee hour last week:
Sylvain
Coffee hour this week:
???
Coffee hour next week:
???

3. Forthcoming



Date            Location  Chair                     Technical Presentation
August 25th     DC 2303   Mike Watch-your-left-ski  Mike Watch-your-left-ski
September 1st   DC 1304   Daming Yao                Jie Xu
September 8th   DC 1304   Bryan Chan                Edwin Vane
September 15th  DC 1304   Alex Clarke               Daming Yao

4. Technical Presentation

Kevin Moule

Title: COLLADA: A Brief Introduction

Abstract: COLLADA is an interchange format for digital asset exchange. It was showcased at this year's SIGGRAPH. I will briefly discuss the internals of the format, touching on some of its novel and interesting features.
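As background for the talk: COLLADA is an XML-based format, so its documents can be inspected with ordinary XML tooling. The fragment below is a minimal, hypothetical sketch (the element names `COLLADA`, `library_geometries`, and `geometry` appear in the published COLLADA schemas, but this is an illustrative stub, not a valid asset file, and it omits the schema namespace a real document would declare):

```python
# Minimal sketch: walk a COLLADA-style XML document with Python's
# standard library. The document below is a hypothetical stub, not a
# complete or valid COLLADA asset.
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<COLLADA version="1.0">
  <library_geometries>
    <geometry id="box" name="Box"/>
  </library_geometries>
</COLLADA>"""

root = ET.fromstring(doc)

# List every geometry's id and name attributes.
for geom in root.findall(".//geometry"):
    print(geom.get("id"), geom.get("name"))  # prints: box Box
```

Real COLLADA files additionally carry mesh data, materials, and scene graphs inside these library elements; the point here is only that the format is plain XML and thus easy to explore before the presentation.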

5. General Discussion Items

6. Action List

7. Conferences and Special Journal Issues

8. Directors' Meeting

9. Seminars and Events

Monday, 16 August 2004, 10:30AM - Computer Science, DC 1304
Arash Farzan: Cache-Oblivious Searching and Sorting in Multisets
 
Tuesday, 17 August 2004, 2:30PM - Computer Science (Algorithms and Complexity Group), DC 1304
Michael Spriggs: Angles and Lengths in Reconfigurations of Polygons and Polyhedra
 
Wednesday, 18 August 2004, 1:30PM - Computer Science, DC 1304
Bradford Hovinen: Blocked Lanczos-style Algorithms over Small Finite Fields
 
Wednesday, 18 August 2004, 2:00PM - Computer Science (Artificial Intelligence Group), DC 2306C
Michael Bowling (Computer Science Dept., University of Alberta): Convergence and No-Regret in Multiagent Learning

Abstract: Learning in a multiagent system is a challenging problem due to two key factors. First, if other agents are simultaneously learning, then the environment is no longer stationary, undermining convergence guarantees. Second, learning is often susceptible to deception, where the other agents may be able to exploit a learner's particular dynamics; in the worst case, this could result in poorer performance than if the agent was not learning at all. These challenges are identifiable in the two most common evaluation criteria for multiagent learning algorithms: convergence and regret. Algorithms focusing on convergence or regret in isolation are numerous. In this talk, I seek to address both criteria in a single algorithm by introducing GIGA-WoLF, a learning algorithm for normal-form games. The algorithm guarantees at most zero average regret, and converges in many situations of self-play, with both theoretical and empirical evidence. Finally, these results also suggest a third new learning criterion combining convergence and regret, called subzero regret.

10. Lab Cleanup!