Research Scientist, Geopolitics

OpenAI

San Francisco, CA, USA
Posted on Wednesday, June 5, 2024

About the Team

The Policy Research team contributes directly to short-term and long-term analyses that inform OpenAI’s strategies as we pursue responsible research and deployment of our AI systems. We are looking for researchers who are excited to collaborate across the organization's technical, policy, and/or product teams. We consider the team to be built of experts working together to accomplish a unified mission.

About the Role

As a Research Scientist on the Geopolitics team, you will be responsible for identifying groundbreaking questions at the intersection of foundation models and international security/international development.

Some of our past success stories on the Geopolitics team include being the first AI lab to launch red teams and evaluations in CBRN, publishing on confidence-building measures for AI, and exploring the acceleration effects of model deployment on international technology competition. Today, we tackle AI for scientific discovery and global health, examine emergent risks in AI defense applications, and prototype AI tools to bolster treaty verification for international non-proliferation efforts.

We would be especially excited to interview candidates who research AI applications and impacts in the global south, including an emphasis on areas such as climate, supply-chain security, health security, or the use of AI in conflict settings.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:

  • Own and pursue a research agenda aimed at understanding the potential and actual impacts of AI systems, particularly as they relate to human security, geopolitical issues, and development.

  • Collaborate with other team members on experimental design, fieldwork, and the inclusion of stakeholders in the research process.

  • Understand and communicate potential harms and benefits to decision-makers at OpenAI.

  • Advise our technical and applied teams on the AI-impact dimensions of their decisions.

You might thrive in this role if you:

  • Are excited about executing on an AI research agenda that advances human security.

  • Possess a background in qualitative or quantitative research, with a strong understanding of both approaches.

  • Value teamwork and collaboration, preferring a 'co-author' approach to driving impactful research.

  • Are excited about solving complex problems alongside a dedicated team.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.

For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.