GSoC 2017: final report

It has been about four months since Google and the Julia organization kindly accepted my first GSoC proposal, and this exciting program is coming to an end. My project focused on improving the training process of Flux.jl, but since that did not fill three months of full-time work, I also made some enhancements to Knet.jl during the last few weeks.

About Flux

Flux is a Julia ML library under active development. When I started participating in this program, Flux compiled an arbitrary Julia function wrapped in the @net macro into a computation graph, which could then run on the MXNet or TensorFlow backend. However, a few weeks ago it switched to a custom automatic differentiation implementation and runs on the GPU via CUDAnative. The code base changed so much that it reads like a totally different project, but the idea is the same: you just write the forward pass of your model in plain Julia and get the ability to train it on GPUs for free.
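To give a feel for that workflow, here is a minimal sketch (my own, not from my GSoC work): it assumes Flux's layer API (Chain, Dense) and the gradient/params interface from later Flux versions, so the exact names may differ from the code base described above.

```julia
using Flux

# The model is built from ordinary Julia callables; `|> gpu` is a no-op
# on a CPU-only machine and moves the parameters to the GPU otherwise.
model = Chain(Dense(10, 32, relu), Dense(32, 2)) |> gpu

# The loss is just another plain Julia function calling the forward pass.
loss(x, y) = sum(abs2, model(x) .- y)

x = rand(Float32, 10, 16)   # a batch of 16 inputs
y = rand(Float32, 2, 16)    # and their targets

# Differentiate the forward pass with respect to the model parameters.
gs = Flux.gradient(() -> loss(x, y), Flux.params(model))
```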

My main contributions are:

Most of them have already been merged. However, since Flux is switching to a new architecture, some of these changes are now outdated and some are broken; they will need to be rebased once Flux becomes relatively stable again.

About Knet

Knet is another ML library, built from scratch. It contains an AD implementation, separated out as AutoGrad.jl, a KnetArray type that runs on the GPU, and a set of neural network functions. Knet records the computation graph dynamically, so the user can freely run a model that contains branches or loops without unrolling them or resorting to similar tricks.
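Here is a rough illustration of that dynamic style using AutoGrad.jl's exported grad function; the toy function f and its arguments are my own example, not something from the library or my contributions.

```julia
using AutoGrad

# A scalar function whose control flow depends on the runtime argument `n`;
# AutoGrad records the tape anew on each call, so loops and branches just work.
function f(x, n)
    y = 1.0
    for i in 1:n             # loop length decided at run time
        y = y * x
    end
    return n > 2 ? y : -y    # branch decided at run time
end

df = grad(f)   # gradient of f with respect to its first argument
df(2.0, 3)     # => 12.0, since f(x, 3) == x^3
```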

My main contributions are:

Conclusion

This summer I learnt and practised a lot about Julia and neural networks, supported by a stipend from Google. I really want to thank my mentor Mike J Innes, who kept pointing me in the right direction and helping out whenever I got stuck.