Document:MSR-AI-Residency-2019

From LQ's wiki
Revision as of 10:45, 29 January 2019 by Changtau2005 (Talk | contribs)

Li Quan Khoo
changtau2005@gmail.com  •  li.khoo.11@ucl.ac.uk
http://lqkhoo.com  •  https://github.com/lqkhoo  •  http://www.linkedin.com/pub/li-quan-khoo/89/a27/8aa


I joined MSR Cambridge as a Bright Minds intern with little idea of what to expect, and I came away with a newfound appreciation of the field. I learned a tremendous amount about research processes from the people and culture at MSR, and became a better practitioner in the process. "There are many good ideas around us," co-researcher and mentor Filip Radlinski said to me during the OneWeek Hackathon. "But the key is choosing the right opportunity, and having the right people and tools to make a difference."

I am excited at the opportunity to be part of MSR again, this time for a much longer term as a resident. Since graduation, I have had the chance to work in different industries and gain insight into domain-specific challenges, for instance in modelling risk, preventing trajectory singularities, and accounting for missing data. Alongside work, I have been pursuing Stanford's Graduate Certificate in AI in order to keep up with the literature over the years.

The project I'm most proud of is Bounding Out-of-Sample Objects (PDF, presentation), a semi-supervised ResNet that addresses the need to bound novel object categories lacking sufficient annotation, much as the quality of machine translation depends on the number of available corpus pairs. Having worked in this domain, I find it inspiring to see how far researchers have taken tasks like localization over the years (e.g. Mask R-CNN), not to mention more difficult ones like caption-to-image generation (ChatPainter, AttnGAN, etc.).

As for my background, thanks to my stint in medical school, I'm familiar with clinical settings, trials, and bioinformatics methods when it comes to healthcare applications. At the other end of the spectrum, when Patrick Stobbs, co-founder of Jukedeck, approached me about a role, he remarked that my knowledge of advanced music theory and HMMs is a rare combination. On the language side, I am a multilingual speaker, but I'm especially interested in Japanese because it challenges traditional methods such as word alignment in machine translation: subjects are often omitted, nuance is expressed differently, the spoken and written registers diverge (as do male and female speech), and social status is encoded through levels of formality and reservation, to name a few features. I find it interesting because thinking in Japanese (or a similarly foreign grammar) often means thinking with a different set of priorities. Finally, on the gaming side, I'm familiar with the efforts by OpenAI, DeepMind, and MSR in the DotA, StarCraft II, and Minecraft scene, as I happen to have played all three!

Right now I'm waist-deep in Stanford's CS234 (Reinforcement Learning), working my way through deep Q-learning to implement a Pong-playing agent and a project proposal. I definitely don't find it easy going, but at the same time, it is really encouraging to think that I could be applying these methods with fellow researchers and practitioners in September, to solve engineering problems that really matter.

I look forward to a favorable response.


Yours faithfully,

Li