Learn Programming as an Absolute Beginner (Video Review)
Here’s a blunt, 2025-aware review of Dave Gray’s “Learn Programming as an Absolute Beginner”, judged against today’s AI/LLM reality and the tougher junior job market. Short version: it’s a solid on-ramp, but it underplays the new bar for employability and skips some now-essential skills.
The following is a breakdown of what the video covers, with some thoughts on each point.
Mindset & Scope
- Master fundamentals / ignore the acronym soup (Agree). Still right. In 2025, LLMs erase much of the syntax barrier; what differentiates you is conceptual fluency: how the web works (HTTP, request/response, cookies/auth), the JS event loop, immutability/state, memory & data modeling, and how to debug. Treat “fundamentals” as runtime models + problem decomposition + debugging, not just tags and loops.
- Cost isn’t a barrier (Mostly agree, with caveat). You can start free (CodePen, Replit, MDN Web Docs, freeCodeCamp). However, the minute you aim for employability you need Git/GitHub, a real dev environment (local or cloud dev container), and the ability to package/deploy code. Chromebooks/library PCs are fine to begin, but you’ll eventually need either a modest laptop or a cloud dev setup to learn Docker, tests, and repos properly.
- Age and prior experience don’t matter (Agree, but market reality bites). Motivation wins for learning; hiring is another story. Junior roles are scarcer because LLM-augmented teams need fewer trainees. The counter: build a proof portfolio—deployed projects with users, readable code, tests, and a few issues/PRs merged in public repos. Show you can design, verify, and maintain AI-assisted code, not merely ask a chatbot to write functions.
- Sustained focus (Strong agree). LLMs can become a distraction engine. Use them like a senior pair: paste context, set constraints, request tests, ask for diffs—not walls of code. Turn off notifications; work in 60-90-minute focus blocks; end each block by writing down next steps.
Where to Start (Languages)
- HTML + CSS + JavaScript (Agree, with a tweak). Good on-ramp, but add TypeScript early once you’re comfortable with JS syntax. TS is now table stakes in many teams and makes AI output safer by surfacing type mistakes fast. Sequence I recommend: HTML → CSS (Flexbox/Grid) → JS (DOM, async/await, fetch) → TS → pick one framework later (React/Vue/Angular/Svelte) after you can build vanilla DOM apps.
- Python for data/AI (Agree, with realism). Python is great, but the “AI path” now means: environment management (venv/uv), data handling (pandas), model/LLM APIs, vector search, prompt design, and evaluation. The trap: calling an LLM API feels like progress but skips core CS skills. Anchor it with one real data/LLM project that you deploy and monitor.
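To make the TypeScript point concrete, here is a minimal sketch (all names are illustrative) of the kind of shape mistake the compiler catches before the code ever runs; this is exactly the class of error AI-generated snippets tend to smuggle in:

```typescript
// A hypothetical User shape; the annotation forces callers to pass the
// right structure instead of, say, a bare string an LLM might suggest.
interface User {
  name: string;
  signupDate: string; // ISO date string, e.g. "2025-01-01"
}

function greet(user: User): string {
  return `Welcome, ${user.name}!`;
}

// greet("Ada");  // <- TypeScript rejects this at compile time
console.log(greet({ name: "Ada", signupDate: "2025-01-01" }));
```

In plain JS, `greet("Ada")` would run and print `Welcome, undefined!`; TS turns that into a build error you see immediately.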
Free Learning Resources
- freeCodeCamp (Strong agree). Do the projects and publish them (GitHub + a live URL). Don’t farm solutions from an LLM—ask it to review your code and generate tests instead.
- The Odin Project (Agree, with expectations). Solid, opinionated, long. You’ll learn more if you keep a cadence of shipping: every 1-2 weeks, deploy something that a non-developer can use. Skipping Rails at the beginning is fine; focus on JS/TS first.
- What’s missing. Add MDN Web Docs (authoritative web references), Full Stack Open (modern JS/TS backend + frontend), and a quick pass through Git basics (branching, PRs, code review).
Practice / Projects
- Projects are required (Strong agree). The 2025 market punishes toy to-do apps. Build narrow, useful tools: a PDF annotator; a small analytics dashboard; a browser extension; an LLM-assisted feature inside a CRUD app (e.g., summarization, semantic search). Each project should have:
- A clean README with scope, screenshots, and a link to the live app
- Basic tests (unit + a couple integration)
- Telemetry (even simple logs/metrics)
- A short postmortem: what broke, what you fixed
- In-browser tools (Agree for day 1; outgrow quickly).
- CodePen: fine to learn DOM/CSS. Move to Vite/Parcel locally or StackBlitz to simulate real builds.
- Replit: fine to prototype Python. Also learn local venv/uv and a CLI workflow. Containerization (Docker) is worth learning once you’ve shipped 1-2 projects.
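For the telemetry bullet above, even console logging counts if it is structured. A minimal TypeScript sketch (event names and fields are made up for illustration):

```typescript
// One JSON object per line: easy to grep now, easy to ship to a
// real log service later without changing your call sites.
type LogEvent = {
  ts: string; // ISO timestamp
  event: string;
  data?: Record<string, unknown>;
};

function logEvent(event: string, data?: Record<string, unknown>): LogEvent {
  const entry: LogEvent = { ts: new Date().toISOString(), event, data };
  console.log(JSON.stringify(entry));
  return entry;
}

logEvent("note_saved", { chars: 1200, durationMs: 45 });
```

A project instrumented this way gives you real material for the postmortem section of your README.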
Quick Demos (What he shows)
- CodePen ‘Hello World’. Centering with Flexbox is a good start; also practice Grid and responsive patterns. Extend the demo: fetch JSON from a public API, render a list, handle loading/error states, and write a tiny test (e.g., with Vitest) for your rendering function. That’s closer to real work.
- Replit Python print() (Too shallow for 2025). Fine as a “hello,” but I’d immediately teach: virtual env, a small script that reads a file, and a function with a unit test. If you’re AI-curious, log one call to a local LLM (or a stub) and show how to evaluate outputs, not just marvel at them.
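The “extend the demo” advice above can be sketched in TypeScript. The render function is pure, so it is trivially unit-testable; the state shape and URL are assumptions, not a prescribed API:

```typescript
// A discriminated union makes loading/error/ready states explicit.
type ListState =
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "ready"; items: string[] };

// Pure function: state in, HTML string out. Testable without a browser.
function renderItems(state: ListState): string {
  switch (state.status) {
    case "loading":
      return "<p>Loading...</p>";
    case "error":
      return `<p class="error">${state.message}</p>`;
    case "ready":
      if (state.items.length === 0) return "<p>No items yet.</p>";
      return `<ul>${state.items.map((i) => `<li>${i}</li>`).join("")}</ul>`;
  }
}

// Fetch wrapper: always resolves to a renderable state.
// Not exercised below, since it needs a live endpoint.
async function loadItems(url: string): Promise<ListState> {
  try {
    const res = await fetch(url);
    if (!res.ok) return { status: "error", message: `HTTP ${res.status}` };
    const items: string[] = await res.json();
    return { status: "ready", items };
  } catch {
    return { status: "error", message: "Network error" };
  }
}
```

Splitting “get data” from “render data” is the habit that matters here; it is also what makes the Vitest-style test the video’s demo lacks easy to write.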
Closing Advice
- “The biggest hurdle is starting” (Agree; incomplete). Start—and finish small things. In 2025, “I can code” is meaningless without proof you can ship, review, and maintain. Your weekly loop should be: pick a tiny spec → build → ship → gather feedback → refactor → write a 200-word changelog. Use LLMs to accelerate, but you are accountable for correctness, tests, and design choices.
What the Video Gets Right
- Emphasizes fundamentals, persistence, and accessible on-ramps.
- Recommends credible free curricula.
- Encourages hands-on practice quickly.
What Needs Updating for 2025
- TypeScript early in the JS track.
- Git/GitHub, testing, and deployment as first-class fundamentals, not electives.
- A realistic take on LLM-augmented workflows: use AI for scaffolding and code review, but insist on verification (tests, types, logs).
- A frank note on the junior market: fewer roles, higher expectations. Portfolios must show end-to-end capability.
Bottom line
Great motivational starter and a sane path for absolute beginners. To be employable now, layer in TS, Git, tests, deploys, and an AI-aware workflow—and ship useful projects that real humans actually touch.
High-Level Skills Matter More in 2025 (Automate the Boring Parts)
The problem is that in 2025 the leverage has shifted. LLMs make boilerplate and basic algorithms cheap; differentiation now comes from design, security, deployment, UX, and the ability to turn fuzzy goals into a shippable, reliable system. But you still need enough technical depth to verify and debug what AI produces.
AI can write a lot of the small stuff (loops, boilerplate, basic pages). What makes you valuable now is everything around the code: choosing the right plan, shipping safely, and keeping your app secure and fast and your users happy. Below is a beginner-friendly map of those skills—with concrete “do this” and “show this” steps.
Project Design & Picking Your Tools (the “stack”)
Plain idea: Decide what you’re building, who it’s for, and the simplest tools that get it online.
- Do this: Write a one-page plan: the goal, 3 core features, who uses it, and what “done” means. Pick a simple setup (for web: HTML/CSS/JS or TypeScript + a basic backend).
- Show this: Add a README that explains what it does, how to run it, and why you chose those tools. Include a small diagram (boxes and arrows are fine).
MVP = Minimum Viable Product: the smallest feature set that actually helps a user (and could earn money or save time).
DevOps: Ship Often, Safely
Plain idea: Automate the boring parts so every change gets tested and deployed the same way.
- Do this: Set up CI/CD (Continuous Integration / Continuous Delivery: automatic build + tests + deploy). Use a free pipeline (GitHub Actions). Keep “dev” and “prod” environments.
- Show this: A passing build badge in your repo and a live demo URL. Add a “How I release” section to your README.
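As a sketch, a minimal GitHub Actions workflow for a Node project looks like the following (it lives at .github/workflows/ci.yml; the npm scripts are assumptions about your project, not requirements):

```yaml
# Minimal CI sketch: build and test on every push and pull request.
name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci    # install exact locked dependencies
      - run: npm test  # fail the build if any test fails
```

Once this passes, the green badge in your README is earned, not decorative.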
Security Basics (Day-1 Habits)
Plain idea: Don’t leak data and don’t trust input.
- Do this: Use HTTPS, environment variables for secrets, validate all inputs, escape all outputs, keep dependencies updated, turn on security headers.
- Show this: A short SECURITY.md listing what you did (e.g., input checks, auth rules) and a screenshot or report from a basic scan (e.g., OWASP ZAP) with fixes noted.
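Two of those habits fit in a few lines of TypeScript: escape anything you render into HTML, and read secrets from the environment instead of source code. API_KEY is an illustrative name, not a required convention:

```typescript
// Escape the five characters that matter for HTML injection.
// Ampersand must be replaced first so it doesn't re-escape the others.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Secrets come from the environment (set locally or in your CI/CD
// provider), never from committed files.
const apiKey = process.env.API_KEY ?? "";
if (!apiKey) {
  console.warn("API_KEY not set; external calls disabled");
}

console.log(escapeHtml('<script>alert("hi")</script>'));
```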
SEO & Site Speed
Plain idea: Help people (and Google) find your site, and make it load fast.
- Do this: Add titles/descriptions, clean URLs, sitemap, robots.txt, and structured data (simple JSON-LD). Compress images, cache static files, avoid huge bundles.
- Show this: Lighthouse scores (before/after) and 3 bullet points of what you changed to improve them.
SEO = Search Engine Optimization: making your site easy to discover and understand by search engines.
UX/UI & Accessibility
Plain idea: Make it easy and comfortable to use—for everyone.
- Do this: Sketch the main user flow first. Add clear empty/error/loading states. Ensure keyboard navigation works; check color contrast.
- Show this: A tiny usability note: “We watched 2 people use it. They got stuck on X, so we changed Y.” Include one screenshot or GIF.
Data & APIs (Clean Boundaries)
Plain idea: Keep data organized and your app’s “doors” predictable.
- Do this: Decide your main data tables early (users, orders, posts). Write simple, versioned endpoints (e.g., /api/v1/items).
- Show this: An OpenAPI (Swagger) file or even a markdown table listing your endpoints and fields.
API = Application Programming Interface: predictable URLs another app (or your frontend) can call to get or change data.
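A sketch of what “predictable doors” can look like in practice: routing as a pure function, so endpoints are unit-testable without starting a server. The data and route shapes are illustrative, not a framework API:

```typescript
type ApiResponse = { status: number; body: unknown };

// In-memory stand-in for a database table.
const items = [{ id: 1, name: "First item" }];

// GET handler for the versioned /api/v1/items routes.
function handleGet(path: string): ApiResponse {
  if (path === "/api/v1/items") {
    return { status: 200, body: items };
  }
  const match = path.match(/^\/api\/v1\/items\/(\d+)$/);
  if (match) {
    const item = items.find((i) => i.id === Number(match[1]));
    return item
      ? { status: 200, body: item }
      : { status: 404, body: { error: "Not found" } };
  }
  return { status: 404, body: { error: "Unknown route" } };
}
```

The version prefix means you can ship /api/v2 later without breaking anyone still calling v1.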
Adding AI Without the Buzzwords
Plain idea: Treat the AI call like any other feature: define input/output, test it, and cap cost/time.
- Do this: Start with a single helpful task (e.g., “summarize a note”). Log the prompt and the result. Add a cheap fallback (a friendly non-AI message) if the AI fails or is slow.
- Show this: A short AI.md: what problem it solves, how you test it (a few sample inputs and expected outputs), and limits (timeout, daily call cap).
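The “cap cost/time, add a fallback” idea can be sketched generically. Here callModel is a stub standing in for whatever LLM client you use; the timeout and message are arbitrary choices:

```typescript
const FALLBACK = "Summary unavailable right now; showing the full note instead.";

// Race the real call against a timer; any failure or slowness
// resolves to the friendly non-AI fallback instead of an error page.
function withTimeout<T>(promise: Promise<T>, ms: number, fallback: T): Promise<T> {
  const timer = new Promise<T>((resolve) => setTimeout(() => resolve(fallback), ms));
  return Promise.race([promise, timer]).catch(() => fallback);
}

// Usage with a stubbed model call (replace with a real client).
async function summarize(note: string): Promise<string> {
  const callModel = async (text: string) => `Summary: ${text.slice(0, 20)}...`;
  return withTimeout(callModel(note), 2000, FALLBACK);
}
```

Because the wrapper never rejects, the calling UI only ever has to render a string, which is exactly the kind of contract your AI.md should document.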
Testing & Quality (Trust but Verify)
Plain idea: Small tests catch dumb mistakes before users do.
- Do this: Write 2-3 unit tests (test one function), 1-2 integration tests (function + DB or API), and one end-to-end test (fake a real user click).
- Show this: A test badge, a coverage number, and a one-liner: “We test login, saving a note, and rendering the note list.”
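As a sketch of that minimum: one pure function under unit test, and one function exercising a stand-in store for an integration-style check. Plain assertions are enough to start; the names are illustrative:

```typescript
// Unit-testable pure function: title in, URL-safe slug out.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
}

// Integration-style: saving goes through a store (a Map standing in for a DB).
const store = new Map<string, string>();
function saveNote(title: string, body: string): string {
  const id = slugify(title);
  store.set(id, body);
  return id;
}

// Unit test: one function, no dependencies.
if (slugify("Hello, World!") !== "hello-world") throw new Error("slugify failed");

// Integration test: function + store together.
const id = saveNote("My First Note", "body text");
if (store.get(id) !== "body text") throw new Error("saveNote failed");
console.log("all tests passed");
```

Graduating these checks into a runner like Vitest later is mechanical; the habit of writing them is the hard part.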
Planning & Communication (So Work Doesn’t Drift)
Plain idea: Put the plan in writing so everyone knows the target and the trade-offs.
- Do this: For new features, write a 1-2 page mini-spec: problem, success metric, scope (what’s in/out), risks, and a simple timeline.
- Show this: Link that doc in your repo and a short changelog (“v0.2: added search; fixed slow image load”).
Spec (Specification): a short document that explains what you’ll build and how you’ll know it works.
Product & Metrics (Build What Matters)
Plain idea: Ship small, measure, and adjust.
- Do this: Pick one number tied to value (e.g., % of users who complete signup). Ship one change a week that you believe moves that number.
- Show this: A tiny dashboard screenshot and a sentence: “Signup completion went from 55% → 68% after shortening the form.”
A/B test: show version A to some users and version B to others to see which performs better (only if you have enough traffic).
The Caveat: “Vibe Code” as Little as Possible
AI can speed you up, but you still need baseline technical skills to check its work and fix bugs (Check out our Vibe Coding blog post for more details).
- Minimum tech to learn:
- Git (branch, commit, pull request), reading stack traces, using a debugger.
- HTTP basics (status codes, headers), simple SQL queries, and TypeScript types if you’re in JS.
- How to write a unit test and read logs.
- How to use AI well: Ask for small, reviewable snippets. Always add or update tests. If it touches security or money, slow down and verify manually.
Your LLM Habits Shape Your Thinking (and the Model’s Output)
How you talk to an LLM changes two things at once: your own habits of mind and the tone/quality of what the model gives back. Think of it like a mirror with a small amplifier.
What happens in your head
- Habit loop: Repeating a style (polite, rude, sloppy, precise) trains it. What you practice becomes default.
- Cognitive offloading: If you let the model think for you, your deep processing drops. You remember less and accept shallow answers.
- Framing effect: The way you ask sets your expectations. Vague, rushed prompts lead to vague, rushed thinking.
- Self-talk spillover: Being snappy or contemptuous with a bot often leaks into how you speak to yourself and others. Practice irritation, get better at irritation.
Mistreating LLMs, or speaking rudely to them, trains those socially maladaptive patterns into your own behavior (and the same goes for voice-assistant tech like Apple’s Siri).
What happens to the content
- Prompt priming: The model mirrors your tone. Rude → combative/defensive. Calm + specific → clearer, more useful drafts.
- Quality drag: Hostile prompts push the model into hedging, disclaimers, or one-liners. You get less detail and worse reasoning.
- Bias reinforcement: If you demand hot takes, you’ll get them—then treat them as “the truth.” Your next prompt narrows even more.
Why being “rude” is a bad training plan—for you
- You rehearse low-empathy responses. Rehearsal sticks.
- You shorten your attention span: quick zingers in, quick zingers out.
- You normalize contempt. That shows up later with coworkers, support staff, or family.
Conclusion
The video is a solid spark, but 2025 expectations are higher. If you’re starting from zero, do this:
- Learn HTML/CSS/JS → add TypeScript early.
- Use LLMs as a tool—not a crutch!
- Use Git/GitHub from day one; write a couple of tests for every feature.
- Ship small, useful projects on a real stack (build → test → deploy → observe).
- Add one pragmatic LLM feature per project, and evaluate it (logs, limits, fallback).
- Learn Markdown, and publish clean README.md and SECURITY.md files.
- Keep your git commits small, with a short changelog; track one metric that matters.
Bottom line: fundamentals + proof. Show you can design, verify, and maintain AI-accelerated code—then keep shipping. LLMs reflect and magnify your inputs. Set good norms—clear, respectful, test-driven—and you’ll get better output while keeping your own thinking sharp.