Summary of "Jonathan Blow on ChatGPT and what real programming means"
Technological concepts & product/tool analysis discussed
Rendering / engine optimization (beam wrapping on large levels)
- The speaker discusses a bug where beam segments fail to render when wrapping on very large levels.
- They describe an “AI-proposed” fix: force drawing all eight wrapped offsets whenever a beam segment spans more than half of a level dimension.
- The speaker criticizes this as overkill (a blunt fix), noting potential problems such as:
- Overdraw and extra batches
- Potential “popping” right at the wrap threshold
- They contrast it with a more deterministic approach:
- Compute per-axis wrap crossings
- Draw only the necessary wrapped copies
- Aim for at most four copies (though the speaker argues the AI’s specific claims are incorrect; see the sketch at the end of this subsection)
- Overall evaluation: the proposed hysteresis/threshold-avoidance fixes are described as “trash” / “shitty.” The speaker argues correct rendering should avoid pops rather than paper over the issue with hysteresis.
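A minimal C++ sketch of the deterministic approach described above, not code from the video: it assumes a 2D toroidal level of size level_w × level_h with segment endpoints stored in [0, level_w) × [0, level_h), and every name here (Vec2, WrappedSegment, wrap_segment) is hypothetical. The segment is first unwrapped so it is continuous, then only the wrap copies its seam crossings actually require are emitted: one, two, or four, never all eight neighbor offsets.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

struct WrappedSegment {
    Vec2 a, b;                   // endpoints after unwrapping (b moved to the
                                 // periodic image nearest to a)
    std::vector<Vec2> offsets;   // translations at which to draw the segment
};

// Hypothetical sketch: emit only the wrapped copies the segment's seam
// crossings require -- 1, 2, or 4 copies, never all 8 neighbor offsets.
WrappedSegment wrap_segment(Vec2 a, Vec2 b, float level_w, float level_h)
{
    WrappedSegment out;
    out.a = a;
    out.b = b;

    // Per axis: does the short path between the endpoints cross the wrap seam?
    bool crosses_x = std::fabs(a.x - b.x) > level_w * 0.5f;
    bool crosses_y = std::fabs(a.y - b.y) > level_h * 0.5f;

    // Unwrap b to the periodic image nearest to a, so the segment is drawn as
    // one continuous line instead of stretching the long way across the level.
    if (crosses_x) out.b.x += (a.x > b.x) ? level_w : -level_w;
    if (crosses_y) out.b.y += (a.y > b.y) ? level_h : -level_h;

    // The unwrapped segment now pokes past one edge of the level; drawing it
    // again shifted by a full level size on that axis covers the other edge.
    float dx = crosses_x ? ((out.b.x > level_w) ? -level_w : level_w) : 0.0f;
    float dy = crosses_y ? ((out.b.y > level_h) ? -level_h : level_h) : 0.0f;

    out.offsets.push_back({0.0f, 0.0f});
    if (crosses_x)              out.offsets.push_back({dx, 0.0f});
    if (crosses_y)              out.offsets.push_back({0.0f, dy});
    if (crosses_x && crosses_y) out.offsets.push_back({dx, dy});
    return out;
}
```

This is one reading of “compute per-axis wrap crossings and draw only the necessary wrapped copies”; the exact criterion used in the game may differ.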
Debugging via AI suggestions for game/editor runtime errors (fog volumes)
- The speaker describes an engine/editor error: more than one fog volume is active on the same floor after sky level fast travel.
- They list example causes the AI suggested, including:
- A global fog AABB overlapping the sky zone
- A floor resolver that doesn’t clear the previous floor’s fog
- The sky zone inheriting the same floor ID
- A spawner failing to unregister fog properly
- The error message itself might be incorrect
- The speaker states the AI’s guesses were largely untrue and recommends verification via instrumentation:
- Run a “dump fog volumes” command to log active volumes and find duplicates (a hedged sketch of such a command appears at the end of this subsection)
- AI proposes “safe fixes,” such as:
- Making the fog system exclusive per floor
- A “self-healing” function along the lines of fog_system.enforce_exclusive_for_floor
- Variants such as disabling the other active volumes
- The speaker criticizes the AI’s reliance on hysteresis in floor detection as conceptually nonsensical and harmful—asking: “How does anyone ever get anything done with this?”
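As one illustration of the instrumentation the speaker recommends, a “dump fog volumes” debug command could look roughly like the following C++ sketch. The FogVolume struct, its fields, and dump_fog_volumes are assumptions made for illustration, not the game's actual types; the point is simply to log every active volume grouped by floor and flag floors that have duplicates.

```cpp
#include <cstdio>
#include <map>
#include <vector>

// Hypothetical data layout; none of these names come from the actual engine.
struct FogVolume {
    int         id;
    int         floor_id;    // which floor the volume claims to belong to
    bool        active;
    const char* spawned_by;  // e.g. which zone or spawner registered it
};

// Debug command: log every active fog volume grouped by floor, and flag any
// floor with more than one active volume (the error being investigated).
void dump_fog_volumes(const std::vector<FogVolume>& volumes)
{
    std::map<int, std::vector<const FogVolume*>> by_floor;
    for (const FogVolume& v : volumes)
        if (v.active)
            by_floor[v.floor_id].push_back(&v);

    for (const auto& [floor_id, vols] : by_floor) {
        std::printf("floor %d: %zu active fog volume(s)%s\n",
                    floor_id, vols.size(),
                    vols.size() > 1 ? "  <-- DUPLICATE" : "");
        for (const FogVolume* v : vols)
            std::printf("  volume %d (spawned by %s)\n", v->id, v->spawned_by);
    }
}
```

Run right after the sky-level fast travel, a dump like this would show directly which volume failed to deactivate, instead of guessing among the candidate causes listed above.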
How ChatGPT-like systems work (and why the output is unreliable)
- The speaker argues the system’s core limitation is that it does not truly understand the codebase/game logic:
- Even with more context, it may better mimic code patterns, but it still “bullshits” and generates nonsensical suggestions.
- More context may improve formatting and coherence, but does not create true understanding
- They emphasize a view of programming correctness:
- The solution space is such that most programs are “complete trash”
- You can’t reliably “tune” a near-miss into correctness with small parameter changes (unlike an analog dial)
- Conclusion:
- AI code suggestions can be useful for brainstorming/learning, especially for beginners
- But they are not reliable for producing robust, correct production software
- They liken AI output to search:
- It can provide ideas similarly to what you’d find by searching, with some convenience
- They argue people get misled because neatly formatted, compilable-looking code can appear to be evidence of correctness.
Developer workflow / version control discussion (GitHub bans & liability)
- A side discussion references a company banning someone; the implied context may involve sexual/political content or profile-related issues.
- The debate includes whether hosted platforms avoid giving reasons for bans in order to reduce:
- liability
- legal/PR risk
- The speaker advises:
- Don’t fully entrust your data to hosted services
- Maintain backups
General workflow references
- Mentions the mindset: “ship the core early, aesthetics later.”
- References rapid prototyping concepts, such as:
- Prove the idea
- Use a limited time window
- Avoid polish
- This frames a development approach that contrasts with AI’s unreliable “fixes.”
Key evaluations / takeaways (as stated)
- AI-suggested rendering fixes: described as incorrect, overcomplicated, or unsafe; hysteresis-based threshold behavior is rejected.
- AI-suggested debugging fixes: AI guesses are often wrong; verification requires engine instrumentation (e.g., dumping active fog volumes).
- Programming correctness: framed as needing precision/exactness; AI output doesn’t reliably produce it.
- Best use of AI: learning/brainstorming multiple approaches, not generating robust production code.
Main speakers / sources
- Jonathan Blow (primary speaker)
- References ChatGPT / GPT-like AI (as the tool being critiqued)
- Mentions version control and hosting tools: GitHub, plus Perforce, Mercurial, and SVN
Category
Technology