Projects
2019
- DLI: "Build Your Own TensorFlow": TensorFlow practicals for the Deep Learning Indaba in Kenya. For the 2019 Deep Learning Indaba (held in Nairobi, Kenya), I was privileged to co-organize the tutorial sessions with Jamie Allingham, with the extensive help and support of Avishkar Boopchand, Stephan Gouws, and Ulrich Paquet of DeepMind. We had more than 50 tutors this year, covering more than 500 students across 2 parallel sessions each day. Our goal this year was to expand the range of material the tutorials covered, to address pain points we saw in previous years, and to further …
- Spieeltjie: experiments with multi-agent RL on zero-sum differentiable games. Spieeltjie is a single-file package for running simple experiments with multi-agent reinforcement learning on symmetric zero-sum games. For more information, see "Open-ended Learning in Symmetric Zero-Sum Games" and "A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning". The name "spieeltjie" comes from the Afrikaans word for "tournament". […] This first set of images shows trajectories when starting from a set of random …
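The kind of trajectory plot described above can be reproduced with a minimal sketch (this is an illustrative assumption about the setup, not Spieeltjie's actual API): two players running simultaneous gradient ascent on the bilinear zero-sum game f(x, y) = x·y, whose dynamics orbit the equilibrium rather than converge to it.

```python
import numpy as np

# A minimal differentiable zero-sum game: player 1 receives payoff
# f(x, y) = x * y, player 2 receives -f(x, y).  Each player ascends
# the gradient of its own payoff simultaneously.  The resulting
# trajectory spirals around the equilibrium (0, 0) instead of
# converging -- the classic pathology these experiments visualize.
def simulate(x0, y0, lr=0.05, steps=500):
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        gx = y          # d/dx of  x*y   (player 1's payoff gradient)
        gy = -x         # d/dy of -x*y   (player 2's payoff gradient)
        x, y = x + lr * gx, y + lr * gy
        traj.append((x, y))
    return np.array(traj)

traj = simulate(1.0, 0.0)
radii = np.linalg.norm(traj, axis=1)  # distance from equilibrium over time
```

With simultaneous updates the distance from the equilibrium never shrinks, which is why random starting points trace closed-looking orbits in the plots.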
- Wasserstein GAN DFL: contributed to the Depth First Learning course on Wasserstein GANs. While tutoring at the 2019 Deep Learning Indaba, I got to know the multi-talented Cinjon Resnik, who is currently doing his PhD with Kyunghyun Cho at NYU. After the Indaba, Cinjon invited me to join an experiment he is running in distributed teaching and learning, called Depth First Learning. One innovation that particularly resonated with me was the effort DFL makes to plot a path through "paper space" as a way to explain a core idea or story. The story we chose as the backbone for …
- Biologically Plausible Backprop: feedback alignment & activity perturbation in PyTorch. SOAP (Second Order Activity Perturbation) is a package for experimenting in PyTorch with a computational-neuroscience phenomenon called feedback alignment. It formed my project for the 3-week IBRO-Simons Computational Neuroscience Summer School in 2019. It relates to an important challenge in reconciling how learning works in artificial neural networks with what we know about how real neurons behave, a topic called biologically plausible back-propagation. […] When a neuron fires, its axon …
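The feedback-alignment idea mentioned above can be sketched in a few lines (a minimal NumPy stand-in on a toy linear regression, assuming the standard Lillicrap et al. formulation, not SOAP's actual code): the backward pass routes the output error through a fixed random matrix B instead of the transpose of the forward weights, yet learning still proceeds because the forward weights come to align with B.

```python
import numpy as np

# Feedback-alignment sketch: a two-layer linear network trained on a
# toy regression task.  The biologically implausible step in backprop
# is using W2.T to carry errors backwards; feedback alignment replaces
# it with a FIXED random matrix B that is never updated.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2

W1 = rng.normal(scale=0.1, size=(n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # forward weights, layer 2
B = rng.normal(scale=0.1, size=(n_hid, n_out))   # fixed random feedback weights

W_true = rng.normal(size=(n_out, n_in))          # target linear map
X_eval = rng.normal(size=(256, n_in))
Y_eval = X_eval @ W_true.T

def eval_mse():
    return float(np.mean((X_eval @ W1.T @ W2.T - Y_eval) ** 2))

loss_before = eval_mse()
lr = 0.02
for _ in range(2000):
    X = rng.normal(size=(32, n_in))
    Y = X @ W_true.T
    H = X @ W1.T                    # forward pass, hidden layer
    E = H @ W2.T - Y                # output error
    H_err = E @ B.T                 # feedback alignment: B, not W2.T
    W2 -= lr * E.T @ H / len(X)
    W1 -= lr * H_err.T @ X / len(X)
loss_after = eval_mse()
```

The loss still falls even though the hidden layer never sees the true gradient, which is the surprise the package is built to explore.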