Model Context Protocol and Vibe Coding
ChatGPT & Benji Asperheim — Wed Oct 15th, 2025

AI coding isn’t magic—it’s plumbing. The difference between dopamine-hit “vibes” and reliable output is whether your assistant can reach real tools and real data. Model Context Protocol (MCP) is the wiring standard that lets agents like Claude, Copilot, or Perplexity talk to your repos, error trackers, docs, and cloud—through consistent, permissioned endpoints instead of sketchy copy-paste. In practice, that means fewer hallucinations, faster fixes, and changes you can audit. Below, I distill Fireship’s take and add a pragmatic setup so you can ship with MCP instead of praying to the prod gods.

Check out Fireship’s video “How to make vibe coding not suck…”.

Model Context Protocol Video Overview

Fireship argues that “vibe coding” only works reliably when you wire your AI assistant into your real tools and data via Model Context Protocol (MCP) servers. MCP makes AI more deterministic by letting the model pull ground truth (docs, project data, APIs) and take bounded actions (e.g., query Sentry, fetch Figma, open a GitHub issue). It won’t cure bad prompts, but it does cut hallucinations and glue work when used well. (Anthropic)
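
To make that concrete, here is a minimal sketch of what an MCP server looks like, using the official TypeScript SDK (@modelcontextprotocol/sdk). The server name, tool name, and stubbed lookup are placeholders of mine, not from the video; the point is that the client only sees a declared, typed, bounded tool rather than open-ended access to your system.

```typescript
// Minimal MCP server sketch (TypeScript SDK). Names and the lookup logic are
// illustrative placeholders; a real server would query Sentry, GitHub, docs, etc.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "docs-lookup", version: "0.1.0" });

// Expose one bounded, read-only action: look up a doc page by slug.
server.tool(
  "get_doc",
  { slug: z.string().describe("Doc page slug, e.g. 'auth/oauth'") },
  async ({ slug }) => {
    const text = `(stub) contents of internal doc page: ${slug}`;
    return { content: [{ type: "text", text }] };
  }
);

// Claude Code (or any MCP client) launches this process and speaks JSON-RPC over stdio.
await server.connect(new StdioServerTransport());
```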

What MCP actually is

Fireship’s video: Core Points

Critical Response to Fireship’s Video

Mostly agree, with caveats:

How to Use MCP Servers Properly

Use one AI IDE (Claude Code is the reference implementation) and 3–5 high-leverage servers:

  1. GitHub (or GitLab) MCP — browse code, open PRs, link issues. (GitHub Docs)
  2. Docs/Knowledge MCP — point at your internal docs or a docs aggregator so answers cite exact versions. (The Verge)
  3. Sentry MCP — pull real errors; ask the model to propose diffs with repro steps. (Community servers exist; hook in read-only first.) (GitHub)
  4. Issue tracker (GitHub/Atlassian/Linear) MCP — triage/fix/close loops from chat. (GitHub)
  5. Cloud MCP (AWS/Cloudflare/Vercel) — optional; start locked to preview/staging projects only. (GitHub)

If you’re using Claude Code, it natively supports connecting to MCP servers (local or remote) and has good docs, including for the recently added remote MCP support. (Claude Docs)

Minimal config sketch (conceptual)

Most tools expose a small config (per server) that declares command, args/env, and allowed operations. Claude Code then lists them as available tools/resources. See the official servers repo for examples and ready-made community servers. (GitHub)
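
As a minimal sketch, assuming a Claude Code-style project config (other clients use an equivalent mcpServers block), a .mcp.json at the repo root might look like this; the server labels, the ./docs path, and the token placeholder are illustrative:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    },
    "internal-docs": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"]
    }
  }
}
```

Sentry, issue-tracker, and cloud servers slot in the same way, as stdio commands or remote URLs depending on the server. Keep real tokens out of version control (use your client’s environment-variable handling or a user-scoped config instead), and start each server with the narrowest permissions it supports.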

Practical guardrails (so vibe coding doesn’t backfire)

What is Perplexity AI?

Perplexity is an “answer engine” that does real-time web retrieval and cites sources. It’s great for fast research and grounded Q&A, and it now plugs into MCP both ways: (1) you can call Perplexity’s API from an MCP server, and (2) you can connect MCP servers to Perplexity so its answers can use your tools and data. That makes it a solid research layer in an MCP-based dev workflow. (Perplexity AI)
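
On direction (1), the call is a standard OpenAI-style chat completion against Perplexity’s API, so an MCP tool that wraps it can stay small. A rough sketch follows; the helper name, model choice, and error handling are my assumptions, not Perplexity’s docs verbatim.

```typescript
// Hypothetical helper an MCP tool handler could call for grounded research answers.
// Assumes Perplexity's OpenAI-compatible chat completions endpoint and a "sonar" model.
async function askPerplexity(question: string): Promise<string> {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sonar",
      messages: [{ role: "user", content: question }],
    }),
  });
  if (!res.ok) throw new Error(`Perplexity API error: ${res.status}`);
  const data = await res.json();
  // The answer text; the response also carries the sources Perplexity cited.
  return data.choices[0].message.content;
}
```

Direction (2) is configuration rather than code: Perplexity exposes a place to register MCP servers as connectors, so its answers can draw on the same tools you wired into your IDE.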

How it ties to MCP

Opinionated Take on Perplexity

Conclusion

MCP isn’t hype; it’s hygiene. By standardizing how models fetch context and execute bounded actions, you swap guesswork for verifiable steps. Start small: one AI IDE (Claude Code/Copilot), three to five MCP servers (GitHub or GitLab, Docs/Knowledge, Sentry, Issues; add Cloud later), and strict guardrails (read-only first, version pinning, audit logs, kill switch). Bring Perplexity in as the research layer—great for current docs, changelogs, and standards—then let MCP-exposed tools handle the code and ops. That combo turns “vibe coding” into instrumented engineering: cited answers, repeatable fixes, safer deploys. (Anthropic)