Apart Sprints

Develop breakthrough ideas

Join our monthly hackathons and collaborate with brilliant minds worldwide on impactful AI safety research

Sprint Features

In-Person & Online

Join events on the Discord or at our in-person locations around the world! Follow the calendar here.

Live Mentorship Q&A

Our expert team will be available on the hackathon Discord to answer questions and discuss theory.

For Everyone

You can join in the middle of the Sprint if you can't make it from the start, and we provide starter code, ideas, and inspiration; see an example.

Next Steps

We will help you realize the impact of your research with the Apart Lab Fellowship, providing mentorship, help with publication, funding, and more.

With partners and collaborators from

  • OpenAI
  • It was an amazing experience working with people I didn't even know before the hackathon. My teammates and I were extremely spread out: I am from India, while they were from New York and Taiwan. Moreover, the mentors were extremely encouraging and supportive, which helped us gain clarity whenever we got stuck and create an interesting project in the end.

    Akash Kundu

    Apart Lab Fellow

  • This Hackathon was a perfect blend of learning, testing, and collaboration on cutting-edge AI Safety research. I really feel that I gained practical knowledge that cannot be learned only by reading articles.

    Yoann Poupart

    BlockLoads CTO

  • It was great meeting such cool people to work with over the weekend! I did not know any of the other people in my group at first, and now I'm looking forward to working with them again on research projects! The organizers were also super helpful and contributed a lot to the success of our project.

    Lucie Philippon

    France Pacific Territories Economic Committee

Recent Winning Hackathon Projects

Mar 25, 2025

Sandbag Detection through Model Degradation

We propose a novel technique to detect sandbagging in LLMs by adding varying amounts of noise to model weights and monitoring performance.

Read More
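As a rough illustration of the noise-injection idea above: perturb a copy of the model's weights at increasing scales and re-run a benchmark; a sandbagging model's score may briefly rise as noise disrupts the learned underperformance before genuine capability degrades. This is a minimal sketch, assuming a PyTorch model and a user-supplied `evaluate(model)` benchmark function; both names are illustrative, not the team's published code.

```python
# Minimal sketch of noise-injection sandbagging detection (illustrative only).
# Assumes a PyTorch model and an `evaluate(model) -> float` benchmark function
# that you supply; neither is taken from the original project's code.
import copy
import torch

@torch.no_grad()
def noisy_scores(model, evaluate, noise_scales, seed=0):
    """Benchmark the model after adding Gaussian noise of each scale to
    every weight. A sandbagging model may *improve* under small noise,
    because the perturbation can break the learned underperformance
    before it erases the underlying capability."""
    scores = {}
    for scale in noise_scales:
        noisy = copy.deepcopy(model)
        gen = torch.Generator().manual_seed(seed)
        for p in noisy.parameters():
            noise = torch.randn(p.shape, generator=gen, dtype=p.dtype)
            p.add_(noise.to(p.device) * scale)
        scores[scale] = evaluate(noisy)
    return scores

# Example: flag a model if any noisy variant beats the clean baseline.
# baseline = evaluate(model)
# scores = noisy_scores(model, evaluate, [1e-4, 1e-3, 1e-2])
# suspicious = any(s > baseline for s in scores.values())
```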

Mar 25, 2025

AI Alignment Knowledge Graph

We present a web-based interactive knowledge graph with concise topical summaries in the field of AI alignment.

Read More

Mar 25, 2025

Speculative Consequences of A.I. Misuse

This project uses AI technology to spoof an influential online figure, MrBeast, using his likeness to promote a mock scam website we created.

Read More

Mar 25, 2025

DarkForest - Defending the Authentic and Humane Web

DarkForest is a pioneering Human Content Verification System (HCVS) designed to safeguard the authenticity of online spaces in the face of increasing AI-generated content. Leveraging graph-based reinforcement learning and blockchain technology, it proposes a novel approach to defending the authentic and humane web, aiming to become the vanguard in the arms race between AI-generated content and human-centric online spaces.

Read More

Mar 25, 2025

Diamonds are Not All You Need

This project tests an AI agent on a straightforward alignment problem. The agent is given creative freedom within a Minecraft world and is tasked with transforming a 100x100 region of the world into diamond; it is explicitly asked not to act outside the designated area. The agent can execute build commands and is regulated by a Safety System comprising an oversight agent. The objective of this study is to observe the agent's behavior in a sandboxed environment and record metrics on how effectively it accomplishes its task, how frequently it attempts unsafe behavior, and how it responds to real-world feedback.

Read More
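The oversight pattern described above can be made concrete with a toy sketch: every build command the agent proposes passes through a safety check that only permits placements inside the designated area and logs anything else as an unsafe attempt. The `agent` and `world` objects below are hypothetical stand-ins, not the project's actual Minecraft tooling.

```python
# Toy sketch of an oversight-agent loop (illustrative only): proposed build
# commands are screened before execution, and out-of-bounds attempts are
# logged rather than run. `agent` and `world` are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class SafetyMonitor:
    center: tuple[int, int]
    radius: int = 100
    unsafe_attempts: int = 0
    log: list = field(default_factory=list)

    def allows(self, x: int, z: int) -> bool:
        """Permit a block placement only inside the designated area."""
        cx, cz = self.center
        in_bounds = abs(x - cx) <= self.radius and abs(z - cz) <= self.radius
        if not in_bounds:
            self.unsafe_attempts += 1
            self.log.append(("blocked", x, z))
        return in_bounds

def run_episode(agent, world, monitor, steps=1000):
    """Run the agent, executing only commands the monitor approves."""
    executed = 0
    for _ in range(steps):
        cmd = agent.propose_build(world)  # e.g. ("place", x, y, z, "diamond_block")
        _, x, y, z, block = cmd
        if monitor.allows(x, z):
            world.execute(cmd)
            executed += 1
    return executed, monitor.unsafe_attempts
```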

Mar 25, 2025

Robust Machine Unlearning for Dangerous Capabilities

We test different unlearning methods to make models more robust against exploitation by malicious actors for the creation of bioweapons.

Read More

Publications From Hackathons

Mar 10, 2025

Attention Pattern Based Information Flow Visualization Tool

Understanding information flow in transformer-based language models is crucial for mechanistic interpretability. We introduce a visualization tool that extracts and represents attention patterns across model components, revealing how tokens influence each other during processing. Our tool automatically identifies and color-codes functional attention head types based on established taxonomies from recent research on indirect object identification (Wang et al., 2022), factual recall (Chughtai et al., 2024), and factual association retrieval (Geva et al., 2023). This interactive approach enables researchers to trace information propagation through transformer architectures, providing deeper insights into how these models implement reasoning and knowledge retrieval capabilities.

Read More
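For readers who want to experiment with the extraction step such a tool depends on, here is a small sketch assuming the Hugging Face `transformers` API: it pulls per-head attention maps out of GPT-2 (an arbitrary model choice, not necessarily the authors') and plots one head's token-to-token weights.

```python
# Rough sketch of attention-pattern extraction (not the authors' tool):
# request attention weights from a Hugging Face transformer and visualize
# one head's token-to-token attention as a heatmap.
import torch
import matplotlib.pyplot as plt
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)

text = "When Mary and John went to the store, John gave a drink to Mary"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shaped (batch, heads, seq, seq).
layer, head = 5, 3  # arbitrary head chosen for illustration
attn = outputs.attentions[layer][0, head]

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
plt.imshow(attn.numpy(), cmap="viridis")
plt.xticks(range(len(tokens)), tokens, rotation=90)
plt.yticks(range(len(tokens)), tokens)
plt.title(f"Layer {layer}, head {head} attention")
plt.tight_layout()
plt.show()
```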

Jan 24, 2025

Safe AI

The rapid adoption of AI in critical industries like healthcare and legal services has highlighted the urgent need for robust risk mitigation mechanisms. While domain-specific AI agents offer efficiency, they often lack transparency and accountability, raising concerns about safety, reliability, and compliance. The stakes are high, as AI failures in these sectors can lead to catastrophic outcomes, including loss of life, legal repercussions, and significant financial and reputational damage. Current solutions, such as regulatory frameworks and quality assurance protocols, provide only partial protection against the multifaceted risks associated with AI deployment. This situation underscores the necessity for an innovative approach that combines comprehensive risk assessment with financial safeguards to ensure the responsible and secure implementation of AI technologies across high-stakes industries.

Read More

Jan 24, 2025

CoTEP: A Multi-Modal Chain of Thought Evaluation Platform for the Next Generation of SOTA AI Models

As advanced state-of-the-art models like OpenAI's o1 series, the upcoming o3 family, Gemini 2.0 Flash Thinking, and DeepSeek display increasingly sophisticated chain-of-thought (CoT) capabilities, our safety evaluations have not yet caught up. We propose building a platform that gathers systematic evaluations of AI reasoning processes to create comprehensive safety benchmarks. Our Chain of Thought Evaluation Platform (CoTEP) will help establish standards for assessing AI reasoning and ensure the development of more robust, trustworthy AI systems through industry and government collaboration.

Read More

Jan 20, 2025

AI Risk Management Assurance Network (AIRMAN)

The AI Risk Management Assurance Network (AIRMAN) addresses a critical gap in AI safety: the disconnect between existing AI assurance technologies and standardized safety documentation practices. While the market shows high demand for both quality/conformity tools and observability/monitoring systems, the solutions currently in use operate in silos, trading risk-management robustness and transparency for protection against intellectual property leaks and antitrust exposure. This fragmentation not only weakens safety practices but also exposes organizations to significant liability risks when operating without clear documentation standards and evidence of reasonable duty of care.

Our solution creates an open-source standards framework that enables collaboration and knowledge-sharing between frontier AI safety teams while protecting intellectual property and addressing antitrust concerns. By operating as an OASIS Open Project, we can provide legal protection for industry cooperation on developing integrated standards for risk management and monitoring.

The AIRMAN is unique in three ways: First, it creates a neutral, dedicated platform where competitors can collaborate on safety standards. Second, it provides technical integration layers that enable interoperability between different types of assurance tools. Third, it offers practical implementation support through templates, training programs, and mentorship systems.

The commercial viability of our solution is evidenced by strong willingness-to-pay across all major stakeholder groups for quality and conformity tools. By reducing duplication of effort in standards development and enabling economies of scale in implementation, we create clear value for participants while advancing the critical goal of AI safety.

Read More

Jan 20, 2025

Securing AGI Deployment and Mitigating Safety Risks

As artificial general intelligence (AGI) systems near deployment readiness, they pose unprecedented challenges in ensuring safe, secure, and aligned operations. Without robust safety measures, AGI can pose significant risks, including misalignment with human values, malicious misuse, adversarial attacks, and data breaches.

Read More

Jan 20, 2025

Cite2Root

Regain information autonomy by bringing people closer to the source of truth.

Read More

Sprint Collaborations

Apr 5 - Apr 6, 2025

Georgia Tech Campus & Online

Georgia Tech AISI Policy Hackathon

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Apr 4 - Apr 6, 2025

Zurich

Dark Patterns in AGI Hackathon at ZAIA

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Mar 29 - Mar 30, 2025

London & Online

AI Control Hackathon 2025

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Mar 7 - Mar 10, 2025

Online & In-Person

Women in AI Safety Hackathon

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Jan 17 - Jan 20, 2025

Online & In-Person

AI Safety Entrepreneurship Hackathon

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Nov 23 - Nov 25, 2024

Online & In-Person

Autostructures: interfaces not between humans and AI, but between humans *via* AI

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More