Job Description
Facebook's mission is to give people the power to build community and bring the world closer together. Through our family of apps and services, we're building a different kind of company that connects billions of people around the world, gives them ways to share what matters most to them, and helps bring people closer together. Whether we're creating new products or helping a small business expand its reach, people at Facebook are builders at heart. Our global teams are constantly iterating, solving problems, and working together to empower people around the world to build community and connect in meaningful ways. Together, we can help people build stronger communities - we're just getting started.
At Facebook Reality Labs Research, our goal is to explore, innovate, and design novel interfaces and hardware for virtual, augmented, and mixed reality experiences. We are driving research toward a vision of an always-on augmented reality device that enables high-quality, contextually relevant interactions across a range of complex, dynamic, real-world tasks in natural environments. To achieve this goal, our team draws on methods and knowledge from artificial intelligence, machine learning, computer vision, and human–computer interaction. We are looking for a skilled and motivated researcher with expertise in embodied artificial intelligence (AI), autonomous vehicles, robotics, human–robot interaction, or related fields to join our team. The successful candidate will work with a diverse, highly interdisciplinary team of researchers and engineers and will have access to cutting-edge technology, resources, and testing facilities.
In this position, you will work with an interdisciplinary team of domain experts in embodied artificial intelligence (AI), human–computer interaction, computer vision, cognitive and perceptual science, and sensing and tracking on problems that contribute to creating human-centered contextual interactive systems. The position involves applying concepts from embodied AI, autonomous vehicles, and robotics to define and lead research on end-to-end policies that draw on diverse multimodal sensors and data sources, including dense 3D reconstructions, egocentric video, audio, gaze, and other signals from wrist-wearable inputs, to predict contextually relevant information that will enhance interactions with future augmented-reality devices. These models will leverage large-scale real-world datasets and Facebook's machine-learning infrastructure at scale, and will be deployed into AR/VR prototypes to uncover research questions on the path to the next era of human-centered computing.
Responsibilities
- Develop and execute a cutting-edge research program with interdisciplinary collaborators, aimed at building representations, contextual models, and end-to-end (E2E) policy pipelines from multimodal data sources, including 3D reconstructions
- Develop tasks, data-collection strategies, modeling approaches, and evaluation criteria to deliver on research program objectives
- Work collaboratively with other research scientists to develop novel solutions and models in service of contextualized AI for augmented reality
- Mentor MS/PhD interns and postdocs and collaborate with external academic groups to advance our research goals
Minimum Qualifications
- Experience holding a faculty, industry, or government researcher position
- PhD in computer science, computer vision, machine learning, artificial intelligence, or a related technical field
- Demonstrated track record of defining and leading research in embodied AI, autonomous vehicles, autonomous driving, robotics, or human–robot interaction, including E2E perception-to-action pipelines, multimodal sensor fusion for scene understanding, deep learning for perception, multi-agent simulation, or related areas
- 5+ years of experience with at least one deep-learning software library (e.g., PyTorch, Caffe2, TensorFlow, Keras, Chainer), including formulating, training, and evaluating new algorithms and writing reusable Python modules
- 5+ years of experience developing end-to-end ML pipelines spanning at least three of the following areas: dataset preprocessing, model development and evaluation, software integration, and real-time deployment on embedded systems
- Interpersonal skills, including experience with cross-group and cross-cultural collaboration
- Must obtain work authorization in the country of employment at the time of hire and maintain ongoing work authorization during employment
Preferred Qualifications
- Expertise in sequential decision-making methods, including reinforcement learning, dynamic programming, optimal control, and planning
- Experience in geometric computer vision, including tracking, 3D reconstruction, localization, object detection, and scene understanding
- Experience leading a team of researchers to execute on a complex technical goal in a cross-functional setting
- Proven track record of achieving significant results, as demonstrated by grants, fellowships, or patents, as well as first-authored publications at leading workshops or conferences such as CVPR, NeurIPS, ECCV/ICCV, ICCP, ICML, 3DV, BMVC, or SIGGRAPH
- Familiarity with ideas in representation learning, few-shot learning, and multimodal machine learning
Facebook is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Facebook is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.