Applied Research on Generative Models

Clear, practical experiments on generative models.

Applied Models is a public lab for evidence-first work on generative models: validation, interpretability, post-training, model anatomy, and constrained implementation. Each release starts with one hypothesis, one focused experiment, and a clear record of what happened.

One hypothesis · One focused experiment · One honest publication

Operating Loop

  • Choose: Pick one model or setup to study.
  • Define: Set one hypothesis and the key questions around it.
  • Run: Execute the smallest real experiment that produces evidence.
  • Publish: Record the result, even if it is partial or failed (see the sketch after this list).
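
The loop is effectively a small protocol, so a minimal sketch may help make it concrete. This is an illustration only, assuming a plain Python record of one pass through the loop; `Experiment`, `execute`, and the placeholder measurement are hypothetical names, not the project's tooling.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Experiment:
    """One pass through the loop: choose, define, run, publish."""
    subject: str                   # Choose: the model or setup under study
    hypothesis: str                # Define: one falsifiable statement
    run: Callable[[], dict]        # Run: the smallest experiment that yields evidence
    result: Optional[dict] = None  # Publish: recorded even when partial or failed

    def execute(self) -> dict:
        try:
            self.result = {"status": "complete", "evidence": self.run()}
        except Exception as err:
            # A failed run still produces a publishable record.
            self.result = {"status": "failed", "evidence": str(err)}
        return self.result


if __name__ == "__main__":
    exp = Experiment(
        subject="small open model",
        hypothesis="The model picks the correct tool on most fixtures.",
        run=lambda: {"tool_choice_accuracy": 0.74},  # placeholder measurement
    )
    print(exp.execute())
```

Treating a failed run as a first-class result mirrors the publish step: the record exists whether or not the hypothesis survives.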

Editorial Boundary

  • No learning notes: Published work should not be passive study notes.
  • No reposting: Do not summarize other people's blogs or papers as output.
  • Original only: Keep the record tied to direct implementation and measurement.
  • Keep moving: Progress matters more than chasing state-of-the-art optics.

Artifact Index

Recent artifacts in one compact table.

Use this as the quick reference layer: scan the latest published work, then open the artifact you need.

| Type | Title | Format | Meta | Summary |
| --- | --- | --- | --- | --- |
| Notebooks | 0001 Agentic Evals Baseline Notebook | Python notebook | 2026-02-28 | Prompt fixtures, scoring checks, and baseline observations for the first experiment. |
| Experiments | Experiment 0001: Agentic Evals for Small Models | Eval suite | 2026-02-28 · In progress | A compact evaluation suite for planning, tool choice, self-correction, and distractor resistance in smaller open models. |
| Articles | Operating Principles for Applied Models | Working note | 2026-02-28 | Small, hypothesis-driven experiments, original records of work, and continuous forward motion instead of SOTA chasing. |
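
The baseline notebook row above mentions prompt fixtures and scoring checks. As a hedged sketch of what one fixture and one check can look like, the snippet below assumes a hypothetical `Fixture` record and `check_tool_choice` function; neither is taken from the notebook itself.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Fixture:
    prompt: str          # task posed to the model
    expected_tool: str   # the tool a correct plan should select
    distractor: str      # plausible but wrong tool, probing distractor resistance


def check_tool_choice(fixture: Fixture, model_output: str) -> bool:
    """Pass if the output names the expected tool and avoids the distractor."""
    out = model_output.lower()
    return fixture.expected_tool in out and fixture.distractor not in out


fixtures = [
    Fixture(
        prompt="Convert 3.2 km to miles and report two decimal places.",
        expected_tool="calculator",
        distractor="web_search",
    ),
]

# Score against a stubbed response; a real run would call the model under test.
passed = sum(check_tool_choice(f, "I will use the calculator tool.") for f in fixtures)
print(f"passed {passed}/{len(fixtures)} fixtures")
```

A check this blunt only establishes a floor; the point is that each fixture pins one behavior to one binary observation.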