a2-dlearn2017

Ann Arbor Deep Learning Event
November 18




Overview

a2-dlearn2017 is an annual event that brings together deep learning enthusiasts, researchers, and practitioners from a variety of backgrounds.

Started two years ago as a collaboration between the Ann Arbor Natural Language Processing and the Machine Learning: Data, Science and Industry meetup groups, a2-dlearn2017 is now supported by the Michigan Institute for Data Science (MIDAS) and local companies, bringing together amazing speakers at a great venue.

The event is free, but tickets are required. Previous years were very popular, with all of our tickets claimed well in advance, so this year we are providing more tickets. Get them soon, as they will go fast!

Program


10:00-10:30
  Check-in
10:30-11:30
  Task Generalization and Planning with Deep Reinforcement Learning
Junhyuk Oh, University of Michigan
Junhyuk Oh is a PhD candidate at the University of Michigan, advised by Professor Honglak Lee and Professor Satinder Singh. His research focuses on deep reinforcement learning problems such as action-conditional prediction, dealing with partial observability, generalization, and planning. His work has been featured in MIT Technology Review and the Daily Mail. He has also interned at DeepMind and Microsoft Research.
Abstract: The ability to generalize over new tasks is an important research direction in order to build scalable reinforcement learning agents. In the first part of this talk, I will discuss how to easily train an agent to generalize to previously unseen tasks in a zero-shot fashion. In the second part of the talk, I will discuss how to build a model of the environment only in terms of rewards and values without needing to predict observations. I will describe how to use such a model to do look-ahead planning and show the advantage of our approach compared to conventional model-based reinforcement learning approaches.
11:30-12:30
  Keynote: What Can Neural Networks Teach Us about Language?
Graham Neubig, Carnegie Mellon University
Graham Neubig is an assistant professor at the Language Technologies Institute of Carnegie Mellon University. His work focuses on natural language processing, specifically multilingual models that work in many different languages, and natural language interfaces that allow humans to communicate with computers in their own language. Much of this work relies on machine learning to create these systems from data, and he is also active in developing methods and algorithms for machine learning over natural language data. He publishes regularly in the top venues in natural language processing, machine learning, and speech, and his work has won best paper awards at EMNLP, EACL, and WNMT. He is also active in developing open-source software, and is the main developer of the DyNet neural network toolkit.
Abstract: Neural networks have led to large improvements in the accuracy of natural language processing systems. These have mainly been based on supervised learning: we create linguistic annotations for a large amount of training data, and train networks to faithfully reproduce these annotations. But what if we didn't tell the neural net about language explicitly, and instead *asked it what it thought* about language without injecting our prior biases? Would the neural network be able to learn from large amounts of data and confirm or discredit our existing linguistic hypotheses? Would we be able to learn linguistic information for lower-resourced languages where this information has not been annotated? In this talk I will discuss methods for unsupervised learning of linguistic information using neural networks that attempt to answer these questions. I will also briefly describe automatic mini-batching, a computational method (implemented in the DyNet neural network toolkit) that greatly speeds up the large-scale experiments with complicated network structures needed for this type of unsupervised learning.
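(For readers unfamiliar with mini-batching, here is a toy NumPy sketch of the underlying idea, not DyNet's actual implementation: applying a shared layer to many examples one at a time repeats small matrix-vector products, while batching fuses the same work into one large matrix-matrix product that hardware executes far more efficiently.)

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))                    # shared layer weights
xs = [rng.normal(size=64) for _ in range(128)]   # 128 separate inputs

# Unbatched: one small matrix-vector product per example
unbatched = [W @ x for x in xs]

# Batched: stack the inputs once and do a single matrix-matrix product,
# which is what automatic mini-batching arranges behind the scenes
batched = np.stack(xs) @ W.T

# Both compute exactly the same outputs
assert np.allclose(np.stack(unbatched), batched)
```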
12:30-1:30
  Lunch (pizza provided)
1:30-2:30
  Keynote: An Overview of Deep Learning Frameworks and an Introduction to PyTorch
Soumith Chintala, Facebook AI Research (FAIR)
Soumith Chintala is a researcher at Facebook AI Research, where he works on deep learning, reinforcement learning, generative image models, agents for video games, and large-scale high-performance deep learning. Prior to joining Facebook in August 2014, he worked at MuseAmi, where he built deep learning models for music and vision targeted at mobile devices. He holds a Masters in CS from NYU, and spent time in Yann LeCun's NYU lab building deep learning models for pedestrian detection, natural image OCR, and depth images, among others.
Abstract: In this talk, you will get an overview of the various types of deep learning frameworks, contrasting declarative frameworks such as TensorFlow with imperative ones such as PyTorch. After this broad overview, you will be introduced to the PyTorch framework in more detail. We will take your perspective as a researcher and a user, formalizing the needs of research workflows (covering data pre-processing and loading, model building, etc.), and then see how the different features of PyTorch map onto these workflows.
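(To make the declarative/imperative distinction concrete before the talk, here is a toy sketch in plain Python; `Node` and the function names are made up for illustration and no actual TensorFlow or PyTorch is involved. Imperative frameworks execute each operation as it is written, so ordinary control flow can inspect intermediate values; declarative frameworks first build a graph of operations and run it later with concrete inputs.)

```python
# Imperative (define-by-run, the PyTorch style): each op runs immediately,
# so ordinary Python control flow can depend on intermediate results
def imperative(x):
    y = x * 2
    if y > 5:          # branching on a value computed just above
        y = y + 1
    return y

# Declarative (define-then-run, the classic TensorFlow style): first
# build a graph of operations, then execute it with concrete inputs
class Node:
    def __init__(self, fn=None, *parents):
        self.fn, self.parents = fn, parents
    def run(self, feed):
        if self.fn is None:              # placeholder: value comes from feed
            return feed[self]
        return self.fn(*(p.run(feed) for p in self.parents))

x = Node()                               # placeholder input
y = Node(lambda v: v * 2, x)             # graph edge; nothing computed yet

assert imperative(4) == 9                # runs eagerly
assert y.run({x: 4}) == 8                # graph evaluated only on .run()
```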
2:30-3:30
  Deep Learning without Deep Pockets
Guido Zarrella, The MITRE Corporation
Guido is a principal research scientist at the MITRE Corporation, a not-for-profit R&D center, where he studies applications of artificial intelligence, deep learning, and natural language processing to solve problems in the public interest. His research interests include transfer learning in neural networks, machine creativity, and neuromorphic computation. Applications of his work have included methods for automated understanding and summarization of social media texts, detection of bot networks and information campaigns online, and machine learning tools for extracting useful insights from diverse data such as recordings of brain activity, student essays, and child-directed speech.
Abstract: Neural networks are powerful function approximators but the training of state-of-the-art systems can require significant investments in hardware and data labeling. What can you do when you're short on money or time? Today's talk will discuss feature learning and transfer learning techniques to help train effective networks that solve problems ranging from natural language understanding to interpretation of brain activity.
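(As a rough feel for the transfer-learning idea in the abstract, here is a toy NumPy sketch, not the speaker's method; `W_frozen` and `features` are made-up stand-ins. The pattern: reuse a frozen "pretrained" feature extractor and fit only a small, cheap head on the limited labeled data you actually have.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen pretrained layers: a fixed nonlinear projection
W_frozen = rng.normal(size=(10, 4))
def features(x):
    return np.tanh(x @ W_frozen)         # never updated during training

# Small labeled task (64 examples, binary labels)
X = rng.normal(size=(64, 10))
y = (X[:, 0] > 0).astype(float)

# Train only a lightweight logistic-regression head on top
w = np.zeros(4)
F = features(X)                           # extract features once
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w)))    # sigmoid predictions
    w -= 0.5 * F.T @ (p - y) / len(y)     # gradient step on the head only

train_acc = (((F @ w) > 0) == y.astype(bool)).mean()
```

Because only the four head weights are trained, this needs far less labeled data and compute than training the whole network from scratch.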
3:30-4:00
  Break
4:00-5:00
  Beyond Objects: Image Understanding as Pixels to Graphs
Alejandro Newell, University of Michigan
Alejandro Newell conducts research in computer vision at the University of Michigan under the supervision of Jia Deng. His research interests include scene understanding and human pose estimation.
Abstract: Tremendous progress has been made teaching computer vision systems to pick out the objects in an image. Each year, these systems can more reliably classify objects and better pinpoint their location in a scene. But a collection of objects is not enough. For full scene understanding, we must understand the connections and relationships between them. Graphs are an effective way of representing these connections, but are difficult to express given standard approaches for training deep convolutional networks. In this presentation, I will discuss how to supervise a network to predict graphs. I will go over the details of our approach including its application to relationship detection and advantages over existing methods.
5:00-6:00
  Toward Weaker Supervision for Semantic Segmentation Based on Deep Convolutional Neural Networks
Seunghoon Hong, University of Michigan
Seunghoon Hong is a postdoctoral fellow in the EECS department at the University of Michigan. He is currently working with Prof. Honglak Lee on topics related to deep learning and its application to computer vision. His research focuses on learning complex visual recognition tasks with minimal human supervision and improving the interpretability of deep neural networks. He received B.S. and Ph.D. degrees from the Department of Computer Science and Engineering at POSTECH, Pohang, Korea, in 2011 and 2017, respectively. He is a recipient of the Microsoft Research Asia Fellowship.
Abstract: Semantic segmentation is one of the fundamental computer vision problems; it aims to assign dense semantic labels to every pixel in an image. Although recent approaches based on deep convolutional neural networks (DCNNs) have achieved substantial improvements over traditional methods, training a DCNN requires a large number of fine-quality segmentation annotations, which makes it difficult to scale up to many semantic categories. In this talk, I will present a series of our research efforts on semantic segmentation that leverage much weaker annotations for training. In the first part of the talk, I'll briefly introduce a semi-supervised approach that trains a model with a small number of pixel-wise class labels and a large number of image-level class labels. In the second part, I'll introduce two variants of weakly-supervised semantic segmentation algorithms and discuss how we train a model for semantic segmentation with only image-level class labels.

Register

The event is free, but registration is required due to limited space.

Details

When: 
November 18
Where: 
220 Chrysler Center for Continuing Engineering Education
2121 Bonisteel Blvd, Ann Arbor, MI

Note to attendees: Please arrive before the start time and get situated so that we can begin on time!

Sponsors


Interactions
Spark Ann Arbor
MIDAS