Document:MSR-AI-Residency-2019

Revision as of 13:44, 29 January 2019


I joined MSR Cambridge as a Bright Minds intern with little idea of what to expect, and I came away with a newfound appreciation of the field: I learned a tremendous amount about the research process from the people and culture at MSR, and became a better practitioner along the way. "There are many good ideas around us," co-researcher and mentor Filip Radlinski told me during the OneWeek Hackathon. "But the key is choosing the right opportunity, and having the right people and tools to make a difference."

I am excited about the opportunity to be part of MSR again, this time for a much longer term as a resident. Since graduation, I've had the chance to work in several industries and gained insight into their domain-specific challenges: modelling risk, preventing trajectory singularities, accounting for missing data, and so on. Alongside work, I have been pursuing Stanford's Graduate Certificate in AI to keep up with the literature.

The project I'm most proud of is [[Main_Page#Bounding Out-of-Sample Objects (2017)|Bounding Out-of-Sample Objects]] ([[:File:Cs231n-project-pdf.pdf|PDF]], [[:File:Cs231n-2017-poster-vid.mp4|presentation]]), a semi-supervised ResNet that addresses the need to bound novel object categories lacking sufficient annotation, much as the quality of machine translation depends on the number of available corpus pairs. Having worked in this domain, I find it really inspiring to see how far researchers have taken tasks like localization over the years (e.g. Mask R-CNN), not to mention harder ones like caption-to-image generation (ChatPainter, AttnGAN, etc.).

As for my background: thanks to my stint in medical school, I'm familiar with clinical settings, trials, and bioinformatics methods when it comes to healthcare applications. At the other end of the spectrum, when Patrick Stobbs, co-founder of Jukedeck, approached me about a role, he remarked that my understanding of both advanced music theory and HMMs is a rare combination. On the language side, I am a multilingual speaker, but I'm especially interested in Japanese, which has features that challenge traditional methods like word alignment in machine translation: omitted subjects, formality shifts, subtle ways of expressing nuance, differences between spoken and written forms and between male and female speech, and socially encoded status, to name a few. I find this interesting because thinking in Japanese (or a similarly foreign grammar) often means thinking with a different set of priorities that, to some extent, reflects different cultural expectations and norms, which can of course be difficult to translate. Last but not least, I am familiar with the accomplishments of OpenAI, DeepMind, and MSR in the DotA, StarCraft II, and Minecraft scenes, as I happen to have played all three!

Right now I'm waist-deep in Stanford's CS234 (Reinforcement Learning), working my way through deep Q-learning to implement a Pong-playing agent. It's tough going at times, but it is also really encouraging to think that come September, I could be applying these methods alongside fellow researchers and practitioners to solve engineering problems that really matter.
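At the core of deep Q-learning sits the classic Q-learning update; DQN replaces the lookup table with a neural network over pixel observations. A minimal tabular sketch of that update (the two-state toy problem below is an illustrative assumption of mine, not an example from the course):

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, done=False):
    # One Q-learning step: nudge Q[(s, a)] toward the bootstrapped target
    # r + gamma * max_a' Q[(s', a')]; terminal transitions use r alone.
    target = r if done else r + gamma * max(Q[(s_next, b)] for b in (0, 1))
    Q[(s, a)] += alpha * (target - Q[(s, a)])

# Toy problem: from state 0, action 1 ends the episode with reward 1,
# so Q[(0, 1)] should converge toward the true return of 1.0.
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
for _ in range(200):
    q_update(Q, s=0, a=1, r=1.0, s_next=1, done=True)
```

In DQN proper, the same target is computed by a slowly updated copy of the network (the "target network"), and transitions are sampled from a replay buffer rather than applied one by one.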

I look forward to a favorable response.


Yours faithfully,

Li