THE CONFERENCE IS CANCELLED DUE TO THE CORONAVIRUS
MACHINE LEARNING CONFERENCE
Hosted by the Department
of Mathematics & Statistics at Eastern Michigan University
Time and location:
8:30 am - 4:00 pm in room 201, Pray Harrold building
The conference theme for this year is Theoretical and Practical
Aspects of Machine Learning.
Conference Goal: To
bring together faculty and students, to offer the opportunity of showing
their work to others, and to invite them to discussions and prospective
future cooperation. The conference is an excellent platform to share your work in AI with the professional community,
as well as getting informed with the latest AI research topics and technologies.
There is no conference fee this year. The Department of
Mathematics & Statistics covers all costs for the room and breakfast.
If you drive to the conference, you may park either at a meter or in the paid
green areas on the parking map.
In order to register for the conference, interested participants should
copy and paste the following link into a browser: https://forms.gle/E3wooYhJ6hvbv9J79
and then fill in the registration form. The registration deadline is March 10.
List of participants
A list of the conference participants and their affiliations, as of February 20, 2020,
can be found here.
If you have any questions, please contact the conference chair:
Ovidiu Calin, Department of Mathematics & Statistics, Eastern Michigan University.
8:30 - 9:00
Registration and Breakfast, location: entrance hall of Pray Harrold
9:00 - 9:30
Arpan Kusari, Ford Motor Company:
Predicting optimal value functions by interpolating reward functions in scalarized multi-objective reinforcement learning
Abstract: A common approach for defining a reward function for Multi-objective Reinforcement Learning (MORL) problems is the weighted
sum of the multiple objectives. The weights are then treated as design parameters dependent on the expertise (and preference) of the person performing the
learning, with the typical result that a new solution is required for any change in these settings. This presentation investigates the relationship between
the reward function and the optimal value function for MORL; specifically addressing the question of how to approximate the optimal value function well
beyond the set of weights for which the optimization problem was actually solved, thereby avoiding the need to recompute for any particular choice.
We prove that the value function transforms smoothly given a transformation of weights of the reward function (and thus a smooth interpolation in the policy space).
A Gaussian process is used to obtain a smooth interpolation over the reward function weights of the optimal value function for three well-known examples:
GridWorld, Objectworld and Pendulum. The results show that the interpolation can provide very robust values for sample states and action space in discrete
and continuous domain problems. Significant advantages arise from utilizing this interpolation technique in the domain of autonomous vehicles: easy, instant
adaptation of user preferences while driving and true randomization of obstacle vehicle behavior preferences during training.
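As a small illustration of the interpolation idea (not the speaker's code), the sketch below fits a Gaussian process to optimal values solved at a handful of scalarization weights and queries it at unseen weights; the weight grid and the stand-in value function are hypothetical.

```python
import numpy as np

def rbf_kernel(a, b, length=0.2, var=1.0):
    """Squared-exponential kernel between 1-D arrays of reward weights."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Hypothetical: optimal values V*(s0; w) solved exactly at a few weights w.
w_train = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
v_train = np.sin(2 * np.pi * w_train)  # stand-in for the solved optimal values

def gp_predict(w_query, w_train, v_train, noise=1e-6):
    """GP posterior mean of V* at weights never solved for directly."""
    K = rbf_kernel(w_train, w_train) + noise * np.eye(len(w_train))
    k_star = rbf_kernel(w_query, w_train)
    return k_star @ np.linalg.solve(K, v_train)

v_hat = gp_predict(np.array([0.1, 0.6]), w_train, v_train)
```

Because the value function varies smoothly with the weights, the GP mean gives usable value estimates between the solved weight settings without re-running the optimization.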
9:30 - 10:00
Brian Lester, Interactions: How can I trust my model? Confidence in the Confidence of Neural Networks
Abstract: In most settings, one trains a model and then evaluates it on some held out test set; however, there are some domains where
one would rather have no answer than an uncertain one. For example, if you have a human in the loop you might want to send an example to the human to
double check it rather than just using the label produced by the model. Confidence models allow one to make decisions about how much trust to put
into a model's predictions.
This talk outlines the idea of confidence modeling, techniques used to evaluate models that allow for the rejection of examples based on confidences,
the difficulties of getting well calibrated posteriors from Deep Learning models, and a summary of current work using calibration techniques and
auxiliary models to produce high-fidelity confidence scores in the Natural Language Understanding module of a real world Dialogue System.
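A minimal sketch of the rejection idea (the threshold and logits are illustrative, not from the talk): predictions whose softmax confidence falls below a threshold are flagged for human review instead of being used directly.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict_with_rejection(logits, threshold=0.9):
    """Return (labels, confident) per example; unconfident ones go to a human."""
    probs = softmax(np.asarray(logits, dtype=float))
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return labels, conf >= threshold

logits = [[4.0, 0.1, 0.2],   # peaked distribution: keep the model's label
          [1.0, 0.9, 1.1]]   # nearly uniform: route to a human annotator
labels, keep = predict_with_rejection(logits)
```

Raw softmax probabilities are often poorly calibrated, which is exactly why the talk pairs this thresholding with calibration techniques and auxiliary confidence models.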
10:00 - 10:30
Sorin Alexe, QML Alpha, LLC: Evolutionary Machine Learning for Financial Markets
Abstract: Machine Learning (ML) and Artificial Intelligence (AI) have become increasingly important to many areas of Science and Technology.
The increasing interest in using ML to develop quantitative financial strategies raises new challenges, as the data are time series: dynamic, stochastic and noisy.
The proposed approach is to generate a large number of signals and to apply consensus methods to obtain models with increased performance. This requires advanced
model generation techniques and high level of automation. We present a generative algorithm that operates over an algebra of portfolios and applies mathematical and
ML operations to generate expressions. Grammar-like derivations are used to generate hypotheses that are backtested, filtered and analyzed.
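A toy illustration of the consensus step only (all data below are synthetic stand-ins, not the speaker's algebra of portfolios): many noisy generated signals are combined by a per-time-step median vote, which tracks the underlying signal far better than any single candidate.

```python
import numpy as np

# Synthetic stand-in for a target trading signal and 25 noisy generated signals.
rng = np.random.default_rng(1)
target = np.sign(np.sin(np.linspace(0.1, 6.0, 50)))        # +/-1 pattern over time
signals = target + rng.normal(0.0, 1.0, size=(25, 50))     # noisy candidate signals

# Consensus: median across the generated signals, then take the sign.
consensus = np.sign(np.median(signals, axis=0))

single_acc = np.mean(np.sign(signals[0]) == target)    # one noisy signal alone
consensus_acc = np.mean(consensus == target)           # the consensus signal
```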
10:30 - 11:00
Keith Lambert, OMElectric: Graphical models constrain statistical learning approaches
Abstract: AI/ML architectures have made leaps and bounds in classification, recognition and translation domains, but
semantics and overall understanding still elude these architectures. In this talk we shall discuss possible ways to bridge the gap of
semantics and provide more general systems with flexible reasoning and understanding.
11:00 - 11:30
Drew Parmelee, EMU:
A.I. learns to play the Atari game Breakout
Abstract: The purpose of this project is to create an A.I. application that learns to play the Atari
game Breakout. The goal is to make the A.I. application as general as possible, so that the
programmer does not have to hard-code anything; the A.I. learns the rules of the
game from data and the associated rewards.
11:30 - 12:00
Ovidiu Calin, EMU:
Deep Learning Architectures - Book presentation.
Abstract: This book describes how neural networks operate from the mathematical point of view. As a result, neural networks can be
interpreted both as function universal approximators and information processors. The book bridges the gap between ideas and concepts of neural networks,
which are used nowadays at an intuitive level, and the precise modern mathematical language, presenting the best practices of the former and enjoying the
robustness and elegance of the latter. The book can be used in a graduate course in deep learning, with the first few parts being accessible to senior undergraduates. In addition,
the book will be of wide interest to machine learning researchers who are interested in a theoretical understanding of the subject.
12:00 - 13:00 Lunch
Food will be served in the cafeteria located on the same floor.
13:00 - 13:30
Aisha Yousuf, Eaton Corporation: Genetic Programming as a Data Science tool for Model Development
Abstract: This talk will discuss a use case for how Genetic Programming can assist and augment the expert-driven process of developing data-driven
models for real-world applications. In industrial settings, modelers must develop hundreds of models that represent individual properties of parts, components, assets, systems and
meta-systems like power plants. Each of these models is developed with an objective in mind, like estimating the useful remaining life or detecting anomalies. This talk examines the
most basic example of when the experts select a kind of regression modeling approach and develop models from data. We then use that captured domain knowledge from their processes, as
well as the end models, to determine if Genetic Programming can augment, assist and improve their final results. The talk will conclude with comments on Genetic Programming as a data science
tool and its strengths and weaknesses compared to other popular data science methods.
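The talk's setting is industrial regression modeling; as a self-contained toy (hypothetical target and operator set, not Eaton's pipeline), the sketch below evolves random expression trees toward the target y = x² + x, keeping the fittest half of each generation.

```python
import random

# Toy genetic programming: evolve an arithmetic expression fitting y = x*x + x.
OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def random_expr(depth=2):
    """Random expression tree over variable 'x' and the operators in OPS."""
    if depth == 0 or random.random() < 0.3:
        return 'x'
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(expr, xs):
    """Squared error against the hypothetical target y = x*x + x (lower is better)."""
    return sum((evaluate(expr, x) - (x * x + x)) ** 2 for x in xs)

def evolve(generations=200, pop=30, seed=0):
    random.seed(seed)
    xs = [0, 1, 2, 3]
    population = [random_expr() for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=lambda e: fitness(e, xs))
        # Keep the best half; refill with fresh random trees (a crude mutation stand-in).
        population = population[:pop // 2] + [random_expr() for _ in range(pop - pop // 2)]
    return min(population, key=lambda e: fitness(e, xs))

best = evolve()
```

Real GP systems add crossover, mutation, and complexity penalties, but even this crude select-and-refill loop usually recovers the exact expression on such a small search space.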
13:30 - 14:00
Tim Burtell, EMU:
Reinforcement Learning with Matlab
Abstract: Training a reinforcement learning agent in Matlab can be tricky but one can figure it out through trial and error.
I will demonstrate what I have discovered while training a Deep Q Network to traverse a gridworld terrain to reach a desired goal.
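The talk uses Matlab's tooling; as a language-neutral illustration of the same gridworld idea, here is a tabular Q-learning sketch in Python (grid size, rewards and hyperparameters are made up, and a table stands in for the Deep Q Network).

```python
import random

# Hypothetical 4x4 gridworld: start at (0,0), goal at (3,3).
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Apply an action, clamping to the grid; small step cost, +1 at the goal."""
    r, c = state
    dr, dc = ACTIONS[action]
    new = (max(0, min(3, r + dr)), max(0, min(3, c + dc)))
    reward = 1.0 if new == (3, 3) else -0.01
    return new, reward, new == (3, 3)

def train(episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the 16 grid states."""
    random.seed(seed)
    Q = {(r, c): [0.0] * 4 for r in range(4) for c in range(4)}
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            if random.random() < eps:
                a = random.randrange(4)                     # explore
            else:
                a = max(range(4), key=lambda i: Q[s][i])    # exploit
            s2, rew, done = step(s, a)
            Q[s][a] += alpha * (rew + gamma * max(Q[s2]) * (not done) - Q[s][a])
            s = s2
    return Q

Q = train()
```

After training, following the greedy policy from the start state walks directly to the goal; a DQN replaces the table with a neural network but keeps the same update target.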
14:00 - 14:30
Jacob Kendrick and Ujunwa Mgboh, Eastern Michigan University: Machine learning applications for coin sound recognition.
Abstract: The sound of a coin depends on its shape, size and material.
We shall show how a machine learning model involving RNNs can distinguish the sound of a silver coin from those of copper or tin coins.
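As a schematic of the model class mentioned (random weights and made-up feature sizes, not the students' trained network), an Elman-style RNN consumes a sequence of per-frame spectral features from a recording and emits one score per coin class:

```python
import numpy as np

# Minimal Elman RNN forward pass over a sequence of audio feature frames,
# producing scores for three hypothetical classes: {silver, copper, tin}.
rng = np.random.default_rng(0)
H, F, C = 16, 8, 3                      # hidden size, features per frame, classes
W_xh = rng.standard_normal((H, F)) * 0.1   # input -> hidden
W_hh = rng.standard_normal((H, H)) * 0.1   # hidden -> hidden (recurrence)
W_hy = rng.standard_normal((C, H)) * 0.1   # hidden -> class scores

def rnn_classify(frames):
    """frames: (T, F) array of per-frame spectral features."""
    h = np.zeros(H)
    for x in frames:                    # recur over time steps
        h = np.tanh(W_xh @ x + W_hh @ h)
    return W_hy @ h                     # scores from the final hidden state

scores = rnn_classify(rng.standard_normal((40, F)))  # a 40-frame "recording"
```

The recurrence lets the final hidden state summarize how the ring of the coin decays over time, which is the cue that separates the metals.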
14:30 - 15:00
Andrew M Ross, EMU: Ethics, Public Policy, and Machine Learning.
Abstract: I will discuss many issues in the ethical use of machine learning, and how it interacts with public policy. Examples include automated parole
recommendations and car insurance pricing. We will also talk about general characteristics of contexts that raise ethical/public-policy issues in machine learning, and common traps like data
that is biased by existing societal influences. This talk is inspired by and based on the book "Weapons of Math Destruction" by Cathy O'Neil, among other books.
15:00 - 15:30
Alex Polonsky, DAVG: Path Planning For DonkeyCar Using LiDAR.
Abstract: Come learn from the Detroit Autonomous Vehicle Group how to navigate a self-driving robot using a LiDAR. DAVG uses small,
low cost RC cars to develop autonomous driving platforms with low cost sensors. The power is in the brain. During this session you will learn how to use a
sub-hundred-dollar LiDAR to navigate a self-driving RC car (DonkeyCar). Topics covered will include the Robot Operating System (ROS), occupancy grids, RRT* (a path-planning algorithm),
and path-planning optimization.
We plan to have a semi-working demo.
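As a taste of one of the listed topics, the sketch below builds a toy occupancy grid from simulated LiDAR returns (grid size, resolution, and scan values are made up; a real DonkeyCar stack would consume ROS laser-scan messages instead).

```python
import math

# Toy occupancy grid from simulated LiDAR returns (angle in radians,
# range in meters), with the robot at the center of the grid.
SIZE, RES = 21, 0.1          # 21x21 cells, 0.1 m per cell

def occupancy_grid(scan):
    grid = [[0] * SIZE for _ in range(SIZE)]
    cx = cy = SIZE // 2      # robot sits at the grid center
    for angle, dist in scan:
        # Convert the beam endpoint from polar to grid coordinates.
        x = cx + int(round(dist * math.cos(angle) / RES))
        y = cy + int(round(dist * math.sin(angle) / RES))
        if 0 <= x < SIZE and 0 <= y < SIZE:
            grid[y][x] = 1   # mark the obstacle hit by this beam
    return grid

grid = occupancy_grid([(0.0, 0.5), (math.pi / 2, 0.3)])
```

A planner such as RRT* then samples only from the free (zero) cells of grids like this one when growing its tree toward the goal.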
See you again next year!
2019 Machine Learning Conference