Multi-Task Learning Tutorial

Announcing The IJCNN-2015 Multi-Task Learning Tutorial

While the origins of Multi-Task Learning (MTL) can be traced back to Thrun's seminal mid-90's paper "Is learning the n-th thing any easier than learning the first?", it is only recently (the last 2-3 years) that interest in MTL has grown considerably (e.g., see Google Trends: http://goo.gl/ZyG8Rh).

In MTL, a set of conceptually related tasks is co-learned under the hypothesis that doing so will improve per-task performance compared to learning each task independently of the others. This is typically the case when the complexity of each task demands an elaborate model for which not enough data are available to train it robustly in isolation. In this light, MTL can be viewed as a process of information sharing between tasks through related model parameters, which are estimated from data pooled across the tasks. Examples of where MTL has been or could be applied include email spam detection, action recognition and recommender systems, to name a few prominent ones.
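To make the idea of "information sharing through related model parameters" concrete, the following minimal sketch (not taken from the tutorial itself) implements mean-regularized multi-task ridge regression, a classic MTL formulation in which each task's weight vector is pulled toward a shared mean estimated from all tasks. The synthetic data, dimensions and hyper-parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, n_per_task, n_tasks = 5, 30, 3

# Synthetic tasks whose true weights are noisy copies of a common vector.
w_common = rng.normal(size=d)
tasks = []
for _ in range(n_tasks):
    w_true = w_common + 0.1 * rng.normal(size=d)
    X = rng.normal(size=(n_per_task, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_per_task)
    tasks.append((X, y))

def mtl_ridge(tasks, lam=1.0, mu=1.0, n_iter=50):
    """Alternating minimization (illustrative, not the tutorial's algorithm) of
       sum_t ||X_t w_t - y_t||^2 + lam * sum_t ||w_t - w_bar||^2 + mu * ||w_bar||^2
    over the per-task weights w_t and the shared mean w_bar."""
    d = tasks[0][0].shape[1]
    W = np.zeros((len(tasks), d))
    w_bar = np.zeros(d)
    for _ in range(n_iter):
        # Per-task closed-form ridge update given the current shared mean.
        for t, (X, y) in enumerate(tasks):
            A = X.T @ X + lam * np.eye(d)
            W[t] = np.linalg.solve(A, X.T @ y + lam * w_bar)
        # Shared-mean update given the per-task weights.
        w_bar = lam * W.sum(axis=0) / (lam * len(tasks) + mu)
    return W, w_bar

W, w_bar = mtl_ridge(tasks)
print("shared mean estimate:", np.round(w_bar, 2))

With a small per-task sample size, the coupling term lam pushes the per-task estimates toward the pooled solution; setting lam to 0 recovers independent ridge regression for each task.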

The tutorial aspires to address the following MTL-related topics:

The tutorial will be delivered over 2 hours (1 hour per speaker). Attendees should be familiar with basic notions of constrained optimization and the relevant algorithms.

A soft-copy of the tutorial is available here: IJCNN2015tutorial.pdf.