Paper Specifications
Authors may submit one or more papers on AI safety policy. Papers should focus on AI safety risks, including but not limited to:
Misaligned objectives in AI systems
Risks from large language models or generative AI
Societal impacts of automated decision-making
Regulatory gaps in AI governance
We accept two forms of submissions:
Op-eds: Short, focused pieces of approximately 800–1,200 words that present a clear argument or perspective
Long papers: In-depth analyses of approximately 2,500–5,000 words that explore the topic in greater detail
All submissions must include a title, author name, and course or affiliation, and be in .doc or .docx format.
Selection Criteria
Submissions are evaluated based on:
Relevance
Originality
Rigor
Clarity and style
Impact
Review Process
All submissions are evaluated without author information to ensure impartial assessment.
Submissions that demonstrate originality, analytical depth, and policy relevance are shortlisted. All authors will be contacted at the email address provided within two weeks of submission.
If your paper is selected, an editor will work with you to fine-tune your arguments, clarify your reasoning, and incorporate feedback. Authors retain final approval over all changes.
After editorial discussion, the editorial board finalizes selections. Accepted papers are copy-edited and prepared for online publication.