Computational Science Initiative Event

"CSI Seminar: Recurrent Networks and Corresponding Applications"

Presented by Lin Sun, Stanford University

Monday, March 27, 2017, 11:00 am — Seminar Room, Bldg. 725

Currently, the most successful learning models are based on the paradigm of successive learning of representations followed by a decision layer. This is most commonly realized through feedforward multilayer neural networks, such as Convolutional Neural Networks (CNNs), where each layer forms one of these successive representations. However, an alternative that can achieve the same goal is a recurrent, or feedback, network, in which the representation is formed iteratively based on feedback received from the previous iteration's outcome.
In this talk, we will investigate feedback networks in depth, particularly the convolutional LSTM, which exhibits several fundamental advantages over feedforward networks: it enables early predictions at query time, its output conforms to a hierarchical structure in the label space (e.g., a taxonomy), and it provides a new basis for Curriculum Learning. We put forth a working feedback-based learning architecture for the image classification task that performs on par with or better than existing feedforward networks while adding the above advantages. We further investigate its effectiveness on human pose estimation and action recognition problems.
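To make the iterative formation of representations concrete, the sketch below illustrates the general idea rather than the speaker's implementation: a convolutional LSTM cell (written here in PyTorch; the class names, layer sizes, and readout are illustrative assumptions) is applied repeatedly to the same image, and a class prediction is read out after every iteration, which is what permits the early predictions mentioned in the abstract.

    # Minimal feedback-network sketch (illustrative only, not the speaker's code).
    import torch
    import torch.nn as nn

    class ConvLSTMCell(nn.Module):
        def __init__(self, in_ch, hid_ch, k=3):
            super().__init__()
            # One convolution produces the input, forget, output, and candidate gates.
            self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
            self.hid_ch = hid_ch

        def forward(self, x, state):
            h, c = state
            i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            return h, c

    class FeedbackClassifier(nn.Module):
        def __init__(self, in_ch=3, hid_ch=32, n_classes=10, n_iters=4):
            super().__init__()
            self.cell = ConvLSTMCell(in_ch, hid_ch)
            self.readout = nn.Linear(hid_ch, n_classes)
            self.n_iters = n_iters

        def forward(self, x):
            b, _, hgt, wdt = x.shape
            h = x.new_zeros(b, self.cell.hid_ch, hgt, wdt)
            c = torch.zeros_like(h)
            preds = []
            for _ in range(self.n_iters):
                # The same image is re-presented each iteration; the recurrent
                # state carries feedback from the previous iteration's outcome.
                h, c = self.cell(x, (h, c))
                preds.append(self.readout(h.mean(dim=(2, 3))))  # early prediction
            return preds  # one prediction per iteration, refined over time

    logits_per_iteration = FeedbackClassifier()(torch.randn(2, 3, 32, 32))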

Hosted by: Kerstin Kleese van Dam
