Research Scientist, Human-Machine Co-adaptive Interfaces

Employer

Facebook

Job Description

Facebook's mission is to give people the power to build community and bring the world closer together. Through our family of apps and services, we're building a different kind of company that connects billions of people around the world, gives them ways to share what matters most to them, and helps bring people closer together. Whether we're creating new products or helping a small business expand its reach, people at Facebook are builders at heart. Our global teams are constantly iterating, solving problems, and working together to empower people around the world to build community and connect in meaningful ways. Together, we can help people build stronger communities - we're just getting started.

At Facebook Reality Labs Research, our goal is to explore, innovate, and design novel interfaces and hardware for virtual, augmented, and mixed reality experiences. We are driving research toward a vision of an always-on augmented reality device that enables high-quality, contextually relevant interactions across a range of complex, dynamic, real-world tasks in natural environments. To achieve this goal, our team draws on methods and knowledge from artificial intelligence, machine learning, computer vision, and human–computer interaction.

We are looking for a skilled and motivated researcher with expertise in sequence modeling, model personalization, domain adaptation, and/or active learning to join our team. The chosen candidate will work with a diverse, highly interdisciplinary team of researchers and engineers and will have access to cutting-edge technology, resources, and testing facilities.

In this position, you will work with an interdisciplinary team of domain experts in embodied artificial intelligence (AI), human–computer interaction, computer vision, cognitive and perceptual science, and sensing and tracking on problems that contribute to creating human-machine co-adaptive interfaces: interfaces that enable easy discoverability and human learning of novel input methods, along with online adaptation of input and action recognition models. The position will involve building models that integrate multimodal data sources, including electromyography (EMG), video, and other biosignals from wrist-wearable inputs and other sensing methods, to produce personalized models for recognizing input commands to future augmented-reality devices. These models will leverage large-scale, real-world datasets and the scale of Facebook's machine-learning infrastructure, and will be deployed into AR/VR prototypes to uncover research questions on the path to the next era of human-centered computing.
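As a purely illustrative sketch of the kind of sequence modeling described above, the snippet below shows a minimal PyTorch classifier that maps windows of multichannel EMG to input-command logits. The architecture, channel count, window length, sampling rate, and number of commands are assumptions made for the example only and do not reflect the team's actual models.

```python
# Illustrative sketch only: a minimal PyTorch sequence classifier for windows of
# multichannel EMG. Channel count, window length, hidden size, and the number of
# input commands are assumed for the example and are not taken from the posting.
import torch
import torch.nn as nn

class EMGCommandClassifier(nn.Module):
    def __init__(self, n_channels=16, hidden_size=128, n_commands=10):
        super().__init__()
        # Temporal convolutions extract short-time features from the raw signal.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # A recurrent layer models the sequence of features over the window.
        self.rnn = nn.GRU(64, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_commands)

    def forward(self, x):
        # x: (batch, channels, time)
        feats = self.encoder(x)            # (batch, 64, time)
        feats = feats.transpose(1, 2)      # (batch, time, 64)
        _, h = self.rnn(feats)             # h: (1, batch, hidden)
        return self.head(h[-1])            # (batch, n_commands) logits

# Example forward pass on a random batch of 2-second windows sampled at 1 kHz.
model = EMGCommandClassifier()
logits = model(torch.randn(8, 16, 2000))
print(logits.shape)  # torch.Size([8, 10])
```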

Responsibilities
  • Formulate and evaluate hypotheses, from ideation through implementation and demonstration of live, online experimental results
  • Design and build new datasets to explore methods for developing new input interfaces on top of novel sensor-system prototypes
  • Explore applied machine learning methods, starting from 0-to-1 baselines on novel problems and datasets and progressing toward modern machine learning methods
  • Leverage advances in established machine learning domains such as speech, online learning, active learning, and reinforcement learning to improve decoding of novel sensor data (a minimal personalization sketch follows this list)
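As a hedged illustration of the online learning and model personalization mentioned above, the sketch below fine-tunes a pretrained classifier (for example, the EMGCommandClassifier sketched earlier) one labeled example at a time as data from a single user arrives. The streaming interface, learning rate, and step budget are assumptions for the example, not a description of the team's actual pipeline.

```python
# Illustrative sketch only: simple online personalization of a pretrained command
# classifier from a stream of labeled windows for one user. The streaming
# interface, learning rate, and step budget are assumptions for the example.
import torch
import torch.nn.functional as F

def personalize_online(model, user_stream, lr=1e-4, max_steps=200):
    """Fine-tune `model` one example at a time as labeled user data arrives.

    user_stream is assumed to yield (window, label) pairs, where `window` is a
    (channels, time) float tensor and `label` is an integer command index.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for step, (window, label) in enumerate(user_stream):
        if step >= max_steps:
            break
        logits = model(window.unsqueeze(0))                # add a batch dimension
        loss = F.cross_entropy(logits, torch.tensor([label]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    model.eval()
    return model
```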
Minimum Qualifications
  • PhD in deep learning, artificial intelligence, machine learning, computer science, computational neuroscience, or a related technical field
  • Demonstrated track record in developing scalable, robust systems for training deep-learning models
  • 3+ years of experience in PyTorch or equivalent framework
  • Interpersonal skills: experience with cross-group and cross-cultural collaboration
  • Must obtain work authorization in country of employment at the time of hire, and maintain ongoing work authorization during employment
Preferred Qualifications
  • Research experience with automatic speech recognition, machine translation, and/or text-to-speech
  • Experience spanning hypothesis formulation, dataset preprocessing, and training and evaluation of new algorithms, through to implementation of reusable Python modules
  • Experience deploying machine-learning/AI models in closed-loop systems (a minimal closed-loop sketch follows this list)
  • Experience with joint hardware-software development and associated rapid prototyping
  • Experience with biosignals, body-machine interfaces, neural analysis, signal processing, or related fields
  • Experience working in a modern software development environment, including: unit testing, source control, and continuous integration
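As an illustration of what closed-loop deployment can look like in practice, the sketch below shows a streaming inference step that buffers incoming sensor samples, runs the model on a sliding window, and dispatches a command only when prediction confidence clears a threshold. The functions read_samples and dispatch_command are hypothetical placeholders, and the loop as a whole is an assumption offered for context, not the team's system.

```python
# Illustrative sketch only: the general shape of a closed-loop inference step, in
# which a deployed model consumes a sliding window of sensor samples and emits a
# command only when prediction confidence clears a threshold. `read_samples` and
# `dispatch_command` are hypothetical placeholders, not real APIs from this team.
import collections
import torch

def run_closed_loop(model, read_samples, dispatch_command,
                    window=2000, hop=100, threshold=0.9):
    """Stream sensor data through `model` and act on confident predictions.

    read_samples(n) is assumed to return n new samples, each a list of
    per-channel values; dispatch_command(i) is assumed to act on command i.
    """
    buffer = collections.deque(maxlen=window)
    model.eval()
    while True:
        buffer.extend(read_samples(hop))                   # append newest samples
        if len(buffer) < window:
            continue                                       # wait for a full window
        # Shape the buffer as (batch=1, channels, time) for the classifier.
        x = torch.tensor(list(buffer), dtype=torch.float32).T.unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=-1)
        confidence, command = probs.max(dim=-1)
        if confidence.item() >= threshold:
            dispatch_command(int(command))                 # act on the decoded input
```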
Facebook is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Facebook is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.