Nov 30, 2015
 

While cruising through talks on YouTube, I found one titled “I am a legend: Hacking Hearthstone with machine learning”, an interesting application of machine learning in video games:

 

 

Hearthstone is a card game in which each player has a deck of cards, and the goal is to use those cards to deal damage to your opponent. But I digress; the essence of his talk had two major themes: finding imbalances in cards, and predicting an opponent’s moves. The former involved deriving an overdetermined system and simply applying least squares; the latter involved machine learning. A simple explanation of what his algorithm does is extract sequences of cards played from a rich data set:

Experimenting with various dataset sizes, I started to get consistent results when using over 10,000 game replays. Yet using more than 50,000 games does not improve the results, so I settled for a dataset of 50,000 replays that were played between May and July 2014.
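For the imbalance-finding part, the rough idea is to fit a value to each card from many observed games and flag cards whose fitted value is out of line. Here is a toy sketch of that least-squares step using Armadillo; the variable names and the random stand-in data are mine, not from the talk:

```cpp
#include <armadillo>
#include <iostream>

int main() {
    // Hypothetical data: one row per observed game situation, one column
    // per card, giving far more equations than unknowns.
    arma::mat A(50000, 300, arma::fill::randn); // card-usage features
    arma::vec b(50000, arma::fill::randn);      // observed outcomes/values

    // Least-squares solution of the overdetermined system A v = b;
    // for a non-square A, arma::solve returns the least-squares fit.
    arma::vec v = arma::solve(A, b);

    // A card whose fitted value sits far from the rest is a candidate
    // for being over- or under-powered.
    arma::uword suspect = arma::index_max(arma::abs(v - arma::mean(v)));
    std::cout << "most suspicious card index: " << suspect << "\n";
}
```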

Part of writing a good talk is being able to balance the technical and non-technical aspects of a subject while maintaining the attention of an audience whose backgrounds vary. I think Elie did a decent job of this, though of course I’m biased and would have hoped for a bit more depth in the machine learning.

I was a bit disappointed that the speakers said they were going to release their software and then didn’t, though after they contacted the creators of Hearthstone their reasons were sensible. However, there are parts of their software I think they could still release, such as the front-end web application and the back-end web server; this would allow people to write their own statistical/machine-learning plugins. It is the machine learning portion of the software that gives users an unfair advantage, which is what the creators of Hearthstone were concerned about.

Another extremely interesting application of machine learning to video games is MarI/O, a program that evolves neural networks to pass levels in Mario:

 

 

The author of MarI/O, YouTube personality SethBling (I’m not sure what his real name is), used neural networks evolved with a genetic algorithm. Essentially, the neural network is a mapping from blocks on the screen to the buttons you can press to move your character, and each candidate mapping is scored by the distance travelled with it. If Mario makes it further into the level than under a previous mapping, the fitness goes up and the network evolves. As an aside, neural networks are the same technology that companies like Nvidia use to identify vehicles. Jonathan Cohen, former director of deep learning at Nvidia, describes the methodology in a short demo.
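As a cartoon of that evolutionary loop (MarI/O itself implements the NEAT algorithm as a Lua script; this sketch, with made-up types and a stubbed-out emulator call, just illustrates the fitness-driven selection described above):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Made-up type: a Genome encodes a screen-blocks -> buttons mapping.
struct Genome {
    std::vector<double> weights;
    double fitness = 0.0;
};

// Stand-in for running the emulator with this genome and measuring
// how far Mario got before dying or stalling.
double distance_travelled(const Genome&) { return 0.0; }

// One generation: score everyone, keep the fitter half, and refill
// the rest with mutated copies of the survivors.
void evolve(std::vector<Genome>& population, std::mt19937& rng) {
    for (auto& g : population)
        g.fitness = distance_travelled(g); // fitness = distance travelled

    std::sort(population.begin(), population.end(),
              [](const Genome& a, const Genome& b) { return a.fitness > b.fitness; });

    std::normal_distribution<double> jitter(0.0, 0.1);
    const std::size_t half = population.size() / 2;
    for (std::size_t i = half; i < population.size(); ++i) {
        population[i] = population[i - half];                      // copy a survivor
        for (double& w : population[i].weights) w += jitter(rng);  // mutate it
    }
}
```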

These are only a couple of relatively recent talks on machine learning in video games; there are certainly other applications, and we’ll see more in the future.

Jul 12, 2015
 

I’m happy to announce that the first working release of spgl1++ is now available. It is based on the Matlab implementation by Michael Friedlander and Ewout van den Berg and is meant to be a standalone library that can be included in your C++ projects.

It’s still early in development; as such, it only solves the regular basis pursuit problem with a real measurement matrix A:

\displaystyle \min_{x \in \mathbb{R}^d} \|x\|_1 \quad \text{subject to} \quad Ax = b

Features I’m currently working on, which shouldn’t be far away, include solving the weighted version with denoising and complex measurement matrix \Phi:

\displaystyle \min_{x \in \mathbb{R}^d} \|x\|_{1,w} \quad \text{subject to} \quad \|\Phi x - b\|_2 < \epsilon

I’ll be tweaking it and testing various matrix vector libraries. It currently works with Armadillo, a C++ linear algebra library whose syntax (API) is deliberately similar to Matlab.

Since the goal is to accommodate general matrix vector libraries, you will have to provide spgl1++ with certain template specializations, such as how it performs matrix-vector products, and it should then work with your library of choice. For example, with Armadillo:
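The shape of such a specialization looks something like the following; note that the trait name and signatures here are illustrative, not necessarily the actual spgl1++ interface, so consult the headers for the real one:

```cpp
#include <armadillo>

namespace spgl1 {

// Primary template: specialize this for your matrix/vector library.
// (Hypothetical trait name, for illustration only.)
template <typename Matrix, typename Vector>
struct matrix_ops;

// Specialization for Armadillo's dense types.
template <>
struct matrix_ops<arma::mat, arma::vec> {
    // y = A * x
    static arma::vec multiply(const arma::mat& A, const arma::vec& x) {
        return A * x;
    }
    // y = A^T * x, needed for the gradient/dual steps of the solver
    static arma::vec multiply_transpose(const arma::mat& A, const arma::vec& x) {
        return A.t() * x;
    }
};

} // namespace spgl1
```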

I say “should” since I’ve only really tested it with Armadillo, and I’m sure I’ve accidentally added some dependencies in the code, so I’ll have to work on making it as general as possible. In the coming days/weeks/months I’ll be testing and adding documentation.

Happy sparse reconstruction!

Jul 01, 2015
 

One of my goals when I started my website, just after finishing grad school, was to share little snippets of code I used or developed over the course of my undergraduate and graduate degrees. One thing I did use, and still use frequently, is the Fourier transform. The Fourier transform is used in countless algorithms in seismic imaging because of its relationship to sampling theory, its ability to identify the frequency content of signals, and its role in providing solutions to differential equations.

One day I stumbled upon a neat series of lectures by my undergraduate NSERC supervisor, Dr. Michael Lamoureux, for a Seismic Imaging Summer School in 2006. Hidden in a footnote of these lecture notes is one of my favorite quotes about a mathematical theorem, namely the divergence theorem:

And, as I like to explain to my kids, the Divergence Theorem simply says you can measure how much methane gas a cow produces by either measuring how much is produced in each cubic centimeter of the cow, or simply by measuring how much leaks out its mouth and rear end, and other places on the surface of the cow. – Michael Lamoureux

However, I digress! In these lecture notes Michael describes a variety of theoretical considerations when investigating partial differential equations, and in particular the famous wave equation with initial conditions:

u_{tt} - c^2 \nabla^2 u = 0
u(x,0) = f(x)
u_{t}(x,0) = g(x)

In the third lecture of these notes, the Fourier transform is used to find a solution of the equation. This is also given as an example in Elias Stein’s famous book on harmonic analysis (though I have it written down as page 395, I’m not sure which volume; it’s likely volume 1). In this case, the Fourier transform of the solution can be written as a sum of sines and cosines:

\hat{u}(k,t) = \hat{f}(k)\cos(2\pi c|k|t) + \hat{g}(k)\frac{\sin(2\pi c|k|t)}{2\pi c|k|}

In the future I might include the derivation, though for now this will be sufficient: we simply apply the inverse Fourier transform to obtain a solution. In 2009 or so, I wrote a little Matlab code to demonstrate this solution, with some scaling factors to match Matlab’s Fourier transform implementation. Luckily I’m a bit of a hoarder when it comes to old school work, so I was able to dig it up.
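Rather than the original Matlab script, here is a minimal re-sketch of the same spectral solution in C++ with Armadillo; the function names fft_freqs and wave_solution are mine, and the k = 0 mode is handled via the limit sin(wt)/w → t as w → 0:

```cpp
#include <armadillo>
#include <cmath>

// Frequencies in FFT ordering for n samples with grid spacing dx
// (the analogue of the "W" grid mentioned below).
arma::vec fft_freqs(arma::uword n, double dx) {
    arma::vec k(n);
    for (arma::uword i = 0; i < n; ++i) {
        double idx = (i < (n + 1) / 2) ? double(i) : double(i) - double(n);
        k(i) = idx / (double(n) * dx);
    }
    return k;
}

// u(x, y, t) from initial displacement f and initial velocity g, via
// u_hat(k,t) = f_hat cos(2 pi c |k| t) + g_hat sin(2 pi c |k| t) / (2 pi c |k|)
arma::mat wave_solution(const arma::mat& f, const arma::mat& g,
                        double dx, double c, double t) {
    arma::cx_mat fhat = arma::fft2(f);
    arma::cx_mat ghat = arma::fft2(g);

    arma::vec kx = fft_freqs(f.n_rows, dx);
    arma::vec ky = fft_freqs(f.n_cols, dx);

    arma::mat coswt(f.n_rows, f.n_cols);
    arma::mat sincwt(f.n_rows, f.n_cols);
    for (arma::uword i = 0; i < f.n_rows; ++i) {
        for (arma::uword j = 0; j < f.n_cols; ++j) {
            double w = 2.0 * arma::datum::pi * c
                     * std::sqrt(kx(i) * kx(i) + ky(j) * ky(j));
            coswt(i, j) = std::cos(w * t);
            // sin(w t)/w has the limit t as w -> 0 (the k = 0 mode)
            sincwt(i, j) = (w == 0.0) ? t : std::sin(w * t) / w;
        }
    }

    arma::cx_mat uhat = fhat % arma::conv_to<arma::cx_mat>::from(coswt)
                      + ghat % arma::conv_to<arma::cx_mat>::from(sincwt);
    return arma::real(arma::ifft2(uhat));
}
```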

To be honest, there were a lot of poor programming practices going on in the original. For example, when I wrote it, prior to my introduction to software development, I probably gave myself a nice little pat on the back for putting in those comments. Those who employ clean coding practices see where I’m going with this: every place there was a comment could naturally have been refactored into a function with a descriptive name. Moreover, who am I trying to kid with variable names like “W”! No one would quickly realize that it refers to a grid of frequencies. Though it is academic code, it’s still embarrassing!

Moving forward, we can then use this function to plot the solution for a couple of different initial conditions:
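For example, with a Gaussian bump as the initial displacement and zero initial velocity (again a sketch, reusing the hypothetical wave_solution above; the grid size and bump width are arbitrary choices of mine):

```cpp
#include <armadillo>
#include <cmath>
#include <string>

int main() {
    const arma::uword n = 256;
    const double dx = 1.0 / n, c = 1.0;
    arma::vec x = arma::linspace(0.0, 1.0 - dx, n);

    // Initial displacement: a Gaussian bump centred in the domain;
    // initial velocity: zero.
    arma::mat f(n, n), g(n, n, arma::fill::zeros);
    for (arma::uword i = 0; i < n; ++i)
        for (arma::uword j = 0; j < n; ++j)
            f(i, j) = std::exp(-500.0 * (std::pow(x(i) - 0.5, 2)
                                       + std::pow(x(j) - 0.5, 2)));

    // Save a few snapshots of the wavefield; stitch these into a movie.
    for (double t : {0.0, 0.05, 0.1, 0.2})
        wave_solution(f, g, dx, c, t)
            .save("u_t" + std::to_string(t) + ".csv", arma::csv_ascii);
}
```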

Here are two sample videos:

Notice that we see some funky stuff happening at the boundaries. The mathematical solution given is valid on the entire plane, so the wave should simply travel off our screen. The reflections at the boundary are an artifact of the discrete nature of computational mathematics, and they are typically handled using techniques like absorbing boundary conditions or perfectly matched layers. The easiest, but most computationally inefficient, way of handling them is to use a much larger grid and record only for the amount of time it takes a reflection to reach the region you are recording.
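A sketch of that workaround, reusing the names from the snippets above: embed the initial condition in a grid twice the size, solve there, and view only the central window:

```cpp
// Embed f (n x n) in a 2n x 2n grid of zeros, solve there, and record
// only the central window; the wrap-around reflections stay outside
// the window for short enough times.
arma::mat f_big(2 * n, 2 * n, arma::fill::zeros);
arma::mat g_big(2 * n, 2 * n, arma::fill::zeros);
f_big(n / 2, n / 2, arma::size(f)) = f;

arma::mat u_big = wave_solution(f_big, g_big, dx, c, t);
arma::mat u_view = u_big(n / 2, n / 2, arma::size(f)); // central n x n block
```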

Jan 07, 2015
 

So I’ve decided to take the plunge and have begun making repositories for my projects (old and new). I can’t believe I used to code without version control; it’s insanity! During grad school I had a neat project on Google PageRank from my course on sparse matrix computation, and I’m sharing it on GitHub now. In my quest for cool project names, I’ve dubbed it LRPR (Low Rank PageRank), and it can be found here: https://github.com/bee-rock/LRPR. Though it contains only one type of low rank approximation at the moment, perhaps that will change in the future.

The project report can be found here: The inner-outer solver for Pagerank combined with a subsampling scheme

The project starts out with a short introduction to PageRank, formulated as an eigenvalue problem:

Ax = x.

Abstract:

We investigate the effect a low rank approximation of the transition matrix has on the power method and on an inner-outer iteration for solving the PageRank problem. The purpose of the low rank approximation is twofold: (1) to reduce memory requirements and (2) to decrease computational time. We show an improvement in storage requirements and a decrease in computational time if we discard the time it takes to perform the low rank approximation, though at the sacrifice of accuracy.
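For reference, the baseline the report compares against is the power method applied to this eigenvalue problem. A minimal dense sketch with Armadillo follows; the actual solvers work with the sparse transition matrix, so this just shows the core x ← Ax loop:

```cpp
#include <armadillo>

// Power iteration for the eigenvalue problem A x = x: repeatedly apply
// A and renormalize until the iterates stop changing. Toy dense sketch;
// a real PageRank solver would use a sparse (and damped) matrix.
arma::vec pagerank_power(const arma::mat& A, double tol = 1e-10,
                         arma::uword max_iter = 1000) {
    arma::vec x(A.n_cols, arma::fill::ones);
    x /= arma::norm(x, 1);                   // start from the uniform distribution
    for (arma::uword k = 0; k < max_iter; ++k) {
        arma::vec x_new = A * x;
        x_new /= arma::norm(x_new, 1);       // keep x a probability vector
        if (arma::norm(x_new - x, 1) < tol) return x_new;
        x = x_new;
    }
    return x;
}
```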