AI Safety Research Scientist
Company: Partnership on AI
Location: San Francisco
Posted on: May 24, 2025
Job Description:
The Partnership on AI (PAI) is a nonprofit community of
academic, civil society, industry, and media organizations that come
together to address the most important questions concerning the
future of AI. Established in 2016 by leading AI
researchers representing seven of the world's largest technology
companies (Apple, Amazon, DeepMind, Google, Meta, IBM, and
Microsoft), PAI has evolved into a diverse global community
committed to ensuring AI technologies benefit humanity. We work with
our Partners on the voluntary adoption of best practices and with
governments to advance policy innovation. Here's how we work:
- Convening diverse stakeholders across the world, including
experts and communities most impacted by AI
- Creating influential, accessible resources and recommendations
to shape responsible AI development
- Publishing progress reports that track improvements in Partner
practices and policy developments

ROLE SUMMARY
PAI is seeking a Research Scientist to join our Programs and Research team. This
role focuses on technical AI governance of advanced AI systems -
conducting research to understand how different safety approaches
work in practice and proposing new ways to implement them
effectively. Our Research Scientists play a highly visible role,
collaborating with Partners to shape how advanced AI systems are
developed and deployed. In this role, you'll lead technical research
to assess and enhance the feasibility of governance options being
explored in public policy discussions on AI/AGI safety. This
includes evaluating the effectiveness of different interventions,
developing options to enhance implementation, and proposing
practical mechanisms for oversight and compliance. Working
alongside the Head of AI Safety program, you'll lead research that
informs how we govern increasingly capable AI systems.
- Developing industry guidelines and technical frameworks for
monitoring AI agents, considering tradeoffs between transparency,
privacy, implementation costs, and user trust

You'll collaborate
with diverse stakeholders to advance consensus around governance
approaches that work in practice, helping decision-makers in
industry and government understand when and how to intervene in
advanced AI development. The role can be performed remotely from
anywhere in the US or Canada.

WHAT YOU'LL DO
Technical Governance Research
- Lead research that connects technical analysis with policy
needs, identifying technical challenges underlying AI/AGI safety
discussions
- Propose governance interventions that could span different
layers - from model safety to supply chain considerations to
broader societal resilience measures
- Use a multistakeholder organization's tools - rigorous
analysis, public and private communications, working groups, and
convenings - to gather insights from Partners on AI development
processes to ensure research outputs are practical and
impactful
- Author/co-author research papers, blogs, and op-eds with PAI
staff and Partners, and share insights at leading AI conferences
like NeurIPS, FAccT, and AIES

Project Management and Stakeholder Engagement
- Lead technical research workstreams with high autonomy,
supporting the broader AI safety program strategy
- Build and maintain strong relationships across PAI's internal
teams and Partner community to advance research objectives
- Represent PAI in key external forums, including technical
working groups and research collaborations

Strategic Communication and Impact
- Translate complex technical findings into clear, actionable
recommendations for AI safety institutes, policymakers, industry
partners, and the public
- Support development of outreach strategies to increase adoption
of PAI's AI safety recommendations
ABOUT YOU
- PhD or MA with three or more years of research or practical
experience in a relevant field (e.g., computer science, machine
learning, economics, science and technology studies,
philosophy)
- Strong understanding of technical AI landscape and governance
challenges, including safety considerations for advanced AI
systems
- Demonstrated ability to conduct rigorous technical governance
research while considering broader policy and societal
implications
- Excellent communication skills, with proven ability to
translate complex technical concepts for different audiences
- Track record of building collaborative relationships and
working effectively across diverse stakeholder groups
- Adaptable and comfortable working in a dynamic, mission-driven
organization

THE FOLLOWING COULD BE AN ADVANTAGE
- Experience at frontier AI labs or tech companies (AI safety
experience not required; we welcome those with ML, product, policy
or engineering backgrounds) or government agencies working on
AI-related areas
- Subject matter expertise from relevant areas such as:
- AI system Trust & Safety (e.g., developing monitoring systems,
acceptable use policies, or safety metrics for large language
models)
- Privacy-preserving machine learning and differential
privacy
- Cybersecurity, particularly vulnerability assessment and
incident reporting

QUALITIES THAT ARE IMPORTANT TO US:
- Builds Trust: Able to be transparent and authentic, conveying
trust and communicating openly while involving key stakeholders in
decision-making
- Visionary: Able to take a long-term perspective, conveying belief
in an outcome and displaying the confidence to reach goals
- Inspirational: Able to inspire and motivate others in a positive
manner
- Courageous: Able to seek out opportunities for continuous
improvement, and fearless in intervening in challenging situations
- Decisive: Able to make informed decisions in a timely fashion
- Personal Development: Able to seek opportunities for individual
personal development

ADDITIONAL INFORMATION
- Research has shown that some potential applicants submit an
application only when they feel they have met close to all of the
qualifications for a role; we encourage you to take a leap of faith
with us and submit your application as long as you are passionate
about making a real impact on responsible AI. We are very
interested in hearing from a diverse pool of candidates.
- PAI offers a generous paid leave and benefits package, currently
including: twenty vacation days; three personal reflection days;
sick leave and family leave above industry standards; high-quality
PPO and HMO health insurance plans, many 100% covered by PAI;
dental and vision insurance 100% covered by PAI; up to a 7% 401(k)
match, vested immediately; pre-tax commuter benefits (Clipper via
TriNet); automatic cell phone reimbursement ($75/month); up to
$1,000 in professional development funds annually; $150 per month
to access co-working space; regular team lunches and focused work
days; and opportunities to attend AI-related conferences and events
and to collaborate with our roughly 100 Partners across industry,
academia, and civil society. Please refer to our careers page for
an updated list of benefits.
- Must be eligible to work in the United States or Canada; we are
unable to sponsor visas at this time.
- PAI is headquartered in San Francisco, with a global membership
base and scope. This role is eligible for remote work within the
United States and Canada with no requirement to be located in San
Francisco.

PAI is proud to be an equal opportunity employer. We
celebrate diversity and we are committed to creating an inclusive
environment in all aspects of employment, including recruiting,
hiring, promoting, training, education assistance, social and
recreational programs, compensation, benefits, transfers,
discipline, and all privileges and conditions of employment.
Employment decisions at PAI are based on business needs, job
requirements, and individual qualifications.

PAI will consider for
employment qualified applicants with criminal histories, in a
manner consistent with the San Francisco Fair Chance Ordinance or
similar laws.

The Partnership on AI may become subject to certain
governmental record keeping and reporting requirements for the
administration of civil rights laws and regulations. We also track
diversity in our workforce for the purpose of improving over time.
In order to comply with these goals, the Partnership on AI invites
employees to voluntarily self-identify their gender and
race/ethnicity. Submission of this information is voluntary and
refusal to provide it will not jeopardize or adversely affect
employment or any consideration you may receive for employment or
advancement. The information obtained will be kept confidential.

HOW TO APPLY - All Steps Required
- Resume and/or CV
- Cover Letter reflecting on your motivation for the role and
experiences illustrating fit
- 2-5 page writing sample (please append to your cover letter):
this can be any writing for which you were primarily an author and
does not have to be about any topic related to AI safety

Applications will be reviewed on a rolling basis; early submission
is encouraged.

Compensation Transparency
We are committed to fair and equitable compensation practices. The
anticipated salary range for this role is provided below to offer
transparency and set expectations. Actual compensation will be
determined based on factors such as experience, skills, and
geographic location, and will be discussed during the hiring
process.

The pay range for this role is: 140,000 - 165,000 USD per year
(Remote, United States)