https://github.com/mathewvanh/reasoning_model_experiment
A look inside the DeepSeek-R1 distilled Llama 3.1 8B thinking model.
- Host: GitHub
- URL: https://github.com/mathewvanh/reasoning_model_experiment
- Owner: Mathewvanh
- Created: 2025-02-28T18:15:27.000Z (9 months ago)
- Default Branch: master
- Last Pushed: 2025-03-03T06:39:01.000Z (8 months ago)
- Last Synced: 2025-03-03T07:28:19.392Z (8 months ago)
- Topics: llama3, mechanistic-interpretability
- Language: Jupyter Notebook
- Homepage:
- Size: 15 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Sentinel 2352 Investigation
This repository contains code and analysis investigating whether language models use structured reasoning mechanisms, focusing specifically on DeepSeek R1 Distill LLaMA 3.1 8B.
## Notebooks
- **model_inference_colab.ipynb**:
  - Code for running inference with DeepSeek R1 Distill LLaMA 3.1 8B on the NuminaMath dataset.
  - Contains activation detection and ablation experiments (see the sketch after this list).
- **explore_data.ipynb**:
  - Detailed analysis of activation patterns and ablation results.
  - Generates the key graphs and metrics presented in the application.
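The notebooks themselves are not reproduced here. As a rough sketch of the kind of activation-ablation run described above, the snippet below uses Hugging Face `transformers` with a forward hook that zeroes selected hidden dimensions. The checkpoint ID, layer index, and ablated dimensions are assumptions for illustration, not values taken from the repository.

```python
# Minimal activation-ablation sketch. The checkpoint, layer index (15), and
# the zeroed hidden dimensions are hypothetical choices, not from the notebooks.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

ABLATE_DIMS = [1234, 2048]  # hypothetical hidden dimensions to zero out

def zero_dims_hook(module, inputs, output):
    # Llama decoder layers return a tuple whose first element is the hidden state.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden[..., ABLATE_DIMS] = 0.0  # ablate the chosen dimensions in place
    return output

# Attach the hook to one decoder layer (index chosen arbitrarily).
handle = model.model.layers[15].register_forward_hook(zero_dims_hook)

prompt = "Solve: what is 17 * 23?"  # placeholder; the notebooks use NuminaMath problems
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore unablated behavior
```

Comparing generations with and without the hook attached is the basic pattern for the ablation experiments; the actual notebooks may hook different layers or components.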
## How to Run
1. Run `model_inference_colab.ipynb` to reproduce the inference experiments.
2. Load the generated data directly in `explore_data.ipynb` to analyze and visualize the results (an illustrative plotting sketch follows).
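The format of the generated data is not documented here. As a hedged illustration of the analysis step, this sketch loads a hypothetical `ablation_results.json` (file name and schema are invented) and plots per-layer activation magnitudes before and after ablation.

```python
# Hypothetical analysis sketch: "ablation_results.json" and its fields are
# invented for illustration; the actual notebook data may differ.
import json
import matplotlib.pyplot as plt

with open("ablation_results.json") as f:
    # assumed schema: {"layers": [...], "baseline": [...], "ablated": [...]}
    results = json.load(f)

plt.plot(results["layers"], results["baseline"], label="baseline")
plt.plot(results["layers"], results["ablated"], label="ablated")
plt.xlabel("Decoder layer")
plt.ylabel("Mean activation magnitude")
plt.legend()
plt.title("Activation patterns before and after ablation (illustrative)")
plt.show()
```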