Final Projects

Titles:
1. Accent detection (Ishita, Gowri)
2. Motor synergies for control (Matthew, Jia, Grady)
3. Language prediction (Larry, Xiaojing, Phan Anh)
4. Random graphs and associative learning (Sam, Samira)
5. Emotion through music (Miles, Robert, Clay)
6. Recognizing events from photos (Unaiza)
7. Modeling balance (Jyothi, Lakshmi, Neha)
8. Rehearsal effects (Mohit, Chris)
9. Rank-order preservation for visual invariance (Marissa, Saurabh, Siva)

Abstracts:
1. Accent Detection

Ishita Chordia, Gowri Nayar

As speech recognition systems become increasingly commonplace, it is important for them to distinguish effectively between accents to enhance communication and understanding. This paper describes a neurally plausible algorithm that models how humans distinguish between accents, and uses a recursive neural net to test the validity of the model. We hypothesize that accent detection relies on the connection between our perception of sounds and our developed modules of mouth movement, since native speakers of different languages develop varied modules.
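
As a rough illustration of the detection side, the sketch below runs a toy recurrent classifier over a sequence of acoustic feature frames and outputs an accent label. The feature dimension, the number of accent classes, and the untrained weights are all assumptions for the sketch, not details from the project.

```python
# Toy sketch (not the authors' code): a recurrent classifier that labels a
# sequence of acoustic feature frames with an accent class. All sizes and
# weights below are placeholders standing in for a trained model.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_accents = 13, 32, 4   # e.g. 13 MFCC-like features

W_in = rng.normal(scale=0.1, size=(n_hidden, n_features))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_accents, n_hidden))

def classify_accent(frames):
    """frames: (T, n_features) array of per-frame acoustic features."""
    h = np.zeros(n_hidden)
    for x in frames:                       # unroll over time
        h = np.tanh(W_in @ x + W_rec @ h)  # recurrent state update
    logits = W_out @ h
    return int(np.argmax(logits))          # predicted accent index

utterance = rng.normal(size=(100, n_features))  # 100 fake frames
print(classify_accent(utterance))
```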

2. A Motor Synergy Approach to Real-Time Control

Matthew Barulic, Jia Tan, Grady Williams

One of the main hypotheses in biological motor control posits the existence of a hierarchical control structure, in which high-level control signals are converted into low-level muscle activation patterns via muscle synergies (or “motor modules”). These synergies are thought to simplify the task of motor control, although the precise mechanisms by which this is accomplished are unclear. Evidence for the muscle synergy model comes from EMG recordings in which the activity of a large number of individual muscles is explained by a small number of spatio-temporal patterns; each such pattern is called a “synergy” or motor module. In our project we investigate how motor synergies can be learned in order to improve real-time control of robotic systems.
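
As a concrete sketch of the synergy idea (our illustration on synthetic data, not the project's pipeline): synergy extraction is commonly framed as non-negative matrix factorization, with rectified EMG ≈ W · H, where the columns of W are the synergies and the rows of H their time-varying activations.

```python
# Minimal sketch (synthetic "EMG", assumed sizes): extracting a small number of
# muscle synergies from rectified EMG with non-negative matrix factorization.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_muscles, n_samples, n_synergies = 16, 500, 4

# Synthetic data: 4 ground-truth synergies mixed together, plus small noise.
true_W = rng.random((n_muscles, n_synergies))
true_H = rng.random((n_synergies, n_samples))
emg = true_W @ true_H + 0.05 * rng.random((n_muscles, n_samples))

model = NMF(n_components=n_synergies, init='nndsvd', max_iter=500, random_state=0)
W = model.fit_transform(emg)   # (muscles x synergies): the motor modules
H = model.components_          # (synergies x time): their activation patterns

reconstruction = W @ H
vaf = 1 - np.sum((emg - reconstruction) ** 2) / np.sum(emg ** 2)
print(f"{n_synergies} synergies explain {vaf:.1%} of the EMG variance")
```

A common heuristic in the synergy literature is to increase the number of components until the variance accounted for plateaus.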

3. The sky is ____: Predicting word class with LSTM neural networks, measured against a human baseline

Larry He, Xiaojing Ji, Phan Anh Nguyen

Children learn language in order of increasing complexity, beginning with short sentences and progressing to complex, compound sentences. We explore whether a similar ordering by complexity influences the learning of LSTM neural networks trained to predict a word's part of speech from only a small number of examples. We compare the model's accuracy with the performance of native as well as non-native speakers.
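
A minimal version of such a model might look like the following PyTorch sketch. The vocabulary size, tag set, and layer dimensions are placeholders, and the complexity-ordering experiment would happen in the training loop, which is omitted here.

```python
# Minimal sketch (hypothetical vocabulary and tag set, not the authors' model):
# an LSTM that reads a context like "the sky is ____" and predicts the part of
# speech of the missing word.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM, N_TAGS = 5000, 64, 128, 12

class POSPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, N_TAGS)

    def forward(self, tokens):            # tokens: (batch, seq_len) word ids
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h[:, -1, :])      # tag logits for the blank word

model = POSPredictor()
context = torch.randint(0, VOCAB_SIZE, (1, 4))  # 4 context words as fake ids
logits = model(context)
print(logits.argmax(dim=-1))                    # predicted POS tag index
```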

4. A Random Graph Model for Associative Learning

Samantha Petti and Samira Samadi

The goal is to create a random graph model that describes the encoding of associations between neural assemblies (sets of neurons that fire in a pattern when a concept is recognized). Recently, Ison et al. showed the existence of neurons that fire when an image of Person A is shown, but not when an image of Place B is shown. However, after an image of Person A in Place B is shown repeatedly, some of these neurons will also fire when Place B alone is displayed. We model this phenomenon and show that association formation yields a neuron graph that exhibits an overrepresentation of the specific motifs found to be overrepresented in Song et al.'s analysis of the connectome graph.
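
The following toy simulation (our illustration, not the paper's actual model) shows the qualitative effect: two assemblies in an Erdős–Rényi random graph, with repeated pairings adding Hebbian edges until stimulating Place B alone recruits part of the Person A assembly. The graph size, edge probability, and firing threshold are arbitrary choices.

```python
# Toy sketch: assemblies A ("Person A") and B ("Place B") in a random directed
# graph. Repeated co-activation adds B->A edges, so B alone starts driving A.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.05
adj = rng.random((n, n)) < p               # Erdos-Renyi directed graph
A = np.arange(0, 30)                       # assembly for Person A
B = np.arange(30, 60)                      # assembly for Place B
threshold = 3                              # inputs needed for a neuron to fire

def fires_given(stimulated):
    active = np.zeros(n, dtype=bool)
    active[stimulated] = True
    inputs = adj[active].sum(axis=0)       # presynaptic drive to each neuron
    return inputs >= threshold

before = fires_given(B)[A].mean()
for _ in range(10):                        # repeated pairing of A and B:
    # Hebbian step: connect a few co-active B -> A pairs.
    pre, post = rng.choice(B, 20), rng.choice(A, 20)
    adj[pre, post] = True
after = fires_given(B)[A].mean()
print(f"fraction of A firing to B alone: {before:.2f} -> {after:.2f}")
```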

5. The Communication of Emotion through Music

Miles Raphael, Robert Schwieterman, Clay Washington

We examine the similarities between language and music and propose that music provides an adequate language for the expression of emotion. Using computational models, we explore how music transmits emotion and how listeners might learn to interpret the emotion intended by a musical piece.

6. Recognizing Events from Photographs Using Concept Attributes

Unaiza Ahsan

The goal of this project is to identify events from images. Humans are quite adept at recognizing what event is taking place by looking at a single image and fixating on key aspects of the image. I propose that these aspects are concepts that have been refined and learned over time through experience: when an image is presented, certain concept detectors in the brain fire, leading to a conclusion such as "this is a graduation." The aim is to discover event-related concepts, train classifiers for them, and test the classifiers on images of complex social events.
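
A minimal sketch of this pipeline follows (invented concept names, synthetic detector scores, and a fake labeling rule, none of which come from the project): each image is represented by a vector of concept-detector scores, and a simple classifier maps that vector to an event label.

```python
# Toy sketch: classify an event from a vector of concept-detector scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

concepts = ["cap_and_gown", "podium", "crowd", "cake", "candles", "balloons"]
events = ["graduation", "birthday"]

rng = np.random.default_rng(0)
# Stand-in for detector outputs on 100 training images (one score per concept).
X_train = rng.random((100, len(concepts)))
# Fake labels correlated with the birthday-ish concepts, for illustration only.
y_train = (X_train[:, 3] + X_train[:, 4] > X_train[:, 0] + X_train[:, 1]).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

# One test image whose detectors fired strongly on graduation-ish concepts.
test_scores = np.array([[0.9, 0.8, 0.7, 0.1, 0.0, 0.2]])
print(events[int(clf.predict(test_scores)[0])])   # expected: "graduation"
```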

7. Computational Model of Balance

Jyothi Narayana, Lakshmi Nair, Neha Raje

We are not trying to solve the age-old problem of balancing here. Rather, we are trying to draw parallels between how we learn to balance a pencil on our fingertips and how a machine can achieve the same feat. Do the answers lie in equations of balance, or is there something more innate to it?
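
One way to make the parallel concrete (our toy model, not the project's): treat the pencil as an inverted pendulum, theta'' = (g/L)·sin(theta) − u, and ask whether a simple feedback law u = Kp·theta + Kd·omega, an "equation of balance," suffices to keep it upright. The pencil length, gains, and initial tilt below are assumptions.

```python
# Toy sketch: a pencil as an inverted pendulum, stabilized by
# proportional-derivative feedback on the tilt angle.
import numpy as np

g, L, dt = 9.81, 0.15, 0.001        # gravity, pencil length (m), time step (s)
Kp, Kd = 120.0, 10.0                # assumed feedback gains (Kp must exceed g/L)

theta, omega = 0.05, 0.0            # initial tilt (rad) and angular velocity
for step in range(5000):            # simulate 5 seconds
    u = Kp * theta + Kd * omega     # corrective acceleration from the "finger"
    alpha = (g / L) * np.sin(theta) - u
    omega += alpha * dt
    theta += omega * dt

print(f"tilt after 5 s: {theta:.4f} rad")   # near zero if control succeeds
```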

8. Modeling Rehearsal Effects in Humans

Chris Stevens, Mohit Agarwal

Our project explores the notion of forgetting within the realm of computational neural networks. In particular, we propose that forgetting is not the loss of information but rather the loss of fidelity as the network's weights are shifted toward a new concept. Forgetting is thus pre-existing knowledge transformed by new inputs. The goal of our project was to model the learning process and demonstrate through repetition how a computational neural network recalls learned concepts, and how learning a new concept affects recall of previously learned ones. We accomplished this by building a recurrent neural network (RNN) that associates a concept vector with the spelling of a word. Each word is first mapped to a vector of related concepts using Google's word2vec embeddings; this vector is then used to train the RNN so that the spelling of the original word can be output, one step at a time, across iterations of the RNN.
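
Although the project uses an RNN over word2vec vectors, the core claim, that forgetting is a fidelity loss caused by weight shift and can be counteracted by rehearsal, can be illustrated with a much smaller model. The sketch below (our simplified stand-in, with random vectors in place of word embeddings) trains a linear associator on concept A, then on concept B, with and without interleaved rehearsal of A.

```python
# Toy sketch: a linear associator learns A, then B. Without rehearsal the
# weights drift toward B and recall of A loses fidelity; interleaving a few
# rehearsals of A during B's training preserves it.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
xA, yA = rng.normal(size=dim), rng.normal(size=dim)   # concept A: input -> target
xB, yB = rng.normal(size=dim), rng.normal(size=dim)   # concept B

def train(W, x, y, lr=0.01, steps=200):
    for _ in range(steps):
        W += lr * np.outer(y - W @ x, x)              # delta-rule update
    return W

def recall_error(W, x, y):
    return float(np.linalg.norm(W @ x - y))

W = train(np.zeros((dim, dim)), xA, yA)               # learn A
W_no_rehearsal = train(W.copy(), xB, yB)              # then learn B only
W_rehearsal = W.copy()
for _ in range(20):                                   # interleave B with A
    W_rehearsal = train(W_rehearsal, xB, yB, steps=10)
    W_rehearsal = train(W_rehearsal, xA, yA, steps=2)

print("recall of A without rehearsal:", recall_error(W_no_rehearsal, xA, yA))
print("recall of A with rehearsal:   ", recall_error(W_rehearsal, xA, yA))
```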

9. Rank Order Preservation for Transformation Invariance

Marissa Connor, Saurabh Kumar, Siva Manivasagam

Transformation-invariant object recognition is a difficult problem because even a slight transformation of an object, such as a rotation, produces a very different pixel representation in each view. Neural network approaches typically address this by providing the classifier with many examples of each class. We instead explore rank order, a property maintained by neurons in the inferior temporal cortex during invariant recognition, as an approach in a computational setting.
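
The key property is easy to state in code: a rank-order code depends only on the ordering of neuron responses, so any transformation that rescales responses monotonically leaves the code intact. The sketch below uses toy responses and a monotonic rescale standing in for a true image transformation; both are our assumptions for illustration.

```python
# Toy sketch: rank-order coding is unchanged by monotonic response rescaling.
import numpy as np

def rank_code(responses):
    """Return the rank of each feature's response (0 = weakest)."""
    return np.argsort(np.argsort(responses))

rng = np.random.default_rng(0)
responses = rng.random(8)             # responses of 8 model neurons to a view
transformed = 0.5 * responses + 0.2   # monotonic change in response strength

print(rank_code(responses))
print(rank_code(transformed))         # identical rank code
print(np.array_equal(rank_code(responses), rank_code(transformed)))  # True
```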
