15 views · 1 install · Apr 27, 2026
Shared stack plan · /s/kecJZWqb


I'm building an ML pipeline that fetches from Hugging Face and tracks experiments in Weights & Biases

Install with one command
$ npx mcpflix install kecJZWqb

Writes claude_desktop_config.json, prompts for any required API keys, and drops skills into ~/.claude/skills/. Backed up automatically.

What this stack is

You're building an ML experimentation pipeline that pulls models/datasets from Hugging Face and logs all runs, metrics, and artifacts to Weights & Biases for tracking and comparison. This stack connects Claude Code to both platforms via MCP servers, letting you manage the entire workflow—from model selection to experiment logging—through conversational AI without leaving the IDE.

Architecture

[Architecture diagram]

MCP Servers

Hugging Face

mcp-huggingface

Browse models, download datasets, and manage Hugging Face Hub resources directly from Claude Code.

Python Runtime

mcp-python

Execute training scripts, data preprocessing, and experiment code in your local Python environment with virtualenv support.

Install everything in one go

Copy a single setup guide that includes the MCP config and the skills installer script — paste it into a document to keep, or follow it section by section.

Implementation Plan

  1. Install Claude Code — Download from claude.ai/code or use your editor's extension marketplace.

  2. Set up the MCP servers in Claude Code config — Add entries for the mcp-huggingface and mcp-python servers to your .claude/mcp.json.
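The exact launch commands depend on how these two servers are packaged; a minimal sketch of the config shape, assuming both are npx-runnable packages named mcp-huggingface and mcp-python (the installer writes the real values, and the HF_TOKEN value below is a placeholder):

```json
{
  "mcpServers": {
    "huggingface": {
      "command": "npx",
      "args": ["-y", "mcp-huggingface"],
      "env": { "HF_TOKEN": "hf_your_token_here" }
    },
    "python": {
      "command": "npx",
      "args": ["-y", "mcp-python"]
    }
  }
}
```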
  3. Create a Python project with W&B — Initialize a local virtual environment and install the wandb client.
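A typical setup sequence, assuming Python 3 is on your PATH (the library list beyond wandb is a suggestion, not something the stack mandates):

```shell
# Create and activate a virtual environment for the project
python -m venv .venv
source .venv/bin/activate

# Install experiment tracking plus the usual Hugging Face libraries
pip install wandb transformers datasets

# Authenticate with Weights & Biases (prompts for your API key)
wandb login
```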
  4. Configure Hugging Face credentials — Export your HF token in your shell.
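For example (the token value is a placeholder; create one at huggingface.co/settings/tokens — HF_TOKEN is the environment variable the huggingface_hub library reads):

```shell
# Export for the current session
export HF_TOKEN=hf_your_token_here

# Persist across sessions (adjust for your shell's rc file)
echo 'export HF_TOKEN=hf_your_token_here' >> ~/.bashrc
```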
  5. Create a training script template — Ask Claude Code:

"Create a training script that loads a model from Hugging Face, trains it on a dataset, and logs metrics to Weights & Biases. Include checkpointing and artifact logging."

Claude will use the Python MCP to scaffold the script with proper W&B integration.
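The scaffold Claude generates will vary with your task; a minimal sketch of the shape it typically takes, assuming wandb, transformers, and datasets are installed and you are logged in to W&B (model, dataset, and project names below are placeholders):

```python
import wandb
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder names -- swap in your own model, dataset, and W&B project
MODEL = "distilbert-base-uncased"
PROJECT = "my-experiments"

run = wandb.init(project=PROJECT, config={"model": MODEL, "epochs": 1})

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Small subsets keep the first run fast; scale up once the pipeline works
dataset = load_dataset("imdb")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")
dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="checkpoints",   # local checkpoint directory
    report_to="wandb",          # stream training metrics to the W&B run
    num_train_epochs=1,
    save_strategy="epoch",      # checkpoint once per epoch
)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"].select(range(1000)),
                  eval_dataset=dataset["test"].select(range(200)))
trainer.train()
wandb.log(trainer.evaluate())

# Log the final checkpoint as a versioned W&B artifact
trainer.save_model("checkpoints")
artifact = wandb.Artifact("model-checkpoint", type="model")
artifact.add_dir("checkpoints")
run.log_artifact(artifact)
wandb.finish()
```

The key integration points are `report_to="wandb"`, which wires the Trainer's metric callbacks to the active run, and the artifact block at the end, which versions the checkpoint alongside the metrics.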

  6. Run your first experiment — Ask Claude Code:

"Run my training script and show me the W&B run URL when it completes."

Claude will execute via Python MCP, stream logs, and provide the direct link to your experiment dashboard.

  7. Compare experiments — Use Claude Code to query W&B:

"Fetch my last 5 runs from W&B, compare their best metrics, and recommend which model to deploy."

Claude will pull experiment history and summarize results in context.
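The comparison step reduces to ranking run summaries by a metric. A toy sketch of that selection logic (`best_run` and the sample summaries are hypothetical; in practice the summaries would come from `wandb.Api().runs(...)`):

```python
def best_run(runs, metric, maximize=True):
    """Return the run whose summary has the best value for `metric`.

    `runs` is a list of dicts shaped like W&B run summaries;
    runs that never logged the metric are skipped.
    """
    scored = [r for r in runs if metric in r["summary"]]
    if not scored:
        raise ValueError(f"no run reports metric {metric!r}")
    key = lambda r: r["summary"][metric]
    return max(scored, key=key) if maximize else min(scored, key=key)

# Hypothetical summaries for the last three runs
runs = [
    {"name": "run-a", "summary": {"val_accuracy": 0.88}},
    {"name": "run-b", "summary": {"val_accuracy": 0.91}},
    {"name": "run-c", "summary": {"val_loss": 0.35}},  # no accuracy logged
]
print(best_run(runs, "val_accuracy")["name"])  # -> run-b
```

Note that `maximize=False` handles loss-style metrics, and runs that never logged the metric are excluded rather than treated as zero.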

  8. Automate with the Loop skill — Set up recurring experiment runs:

"/loop Run my training pipeline every 6 hours and log results to W&B"

Claude will self-pace the loop and maintain experiment continuity.

Build your own — or save this one

Describe your project and our AI will design a complete stack — architecture diagram, MCP servers, skills, and step-by-step setup. Sign up free to save and share your own.