Summary of "НЕЙРОСЕТИ VS BLENDER 3D / МЫ ПРОИГРАЛИ" ("Neural Networks vs. Blender 3D / We Lost")
Overview / Experiment
The video stages a head-to-head test: modern generative neural networks vs. a traditional Blender 3D pipeline. A “client” brief (created via ChatGPT) requested:
- A Pixar-style girl and a realistic Shiba Inu.
- A pack of environment/prop models.
- A short cartoon of the girl and dog walking through a village at sunset.
- A day→night edit that preserves animations, camera angles and all details.
Two parallel workflows were executed:
- A neural-network-only workflow using generative services to produce models, textures, animations and video.
- A human-driven, traditional 3D pipeline using Blender and standard tools.
Neural-network Workflow — Services, Features, Results
Tools used
- Networks/services (as named in the auto-generated subtitles, so some names may be garbled): VO3 (a new release, likely Veo 3), "neuroncarpi," Tripa (likely Tripo), Mishai (likely Meshy), 3D CSM, and a remesh module.
- ChatGPT was used both to write the strict client brief and to generate multi-angle reference prompts/images.
What they did
- Generated multi-angle reference images and segmented models (a prompt-scripting sketch follows this list).
- Produced meshes and animations (walking cycles, animal animation modules).
- Attempted to render and compose animated scenes.
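The subtitles do not show exactly how this step was automated, but the reference-prompt generation can be scripted; below is a minimal sketch using the OpenAI Python client, where the model name, character brief and view list are illustrative assumptions rather than details from the video.

```python
# Hypothetical sketch: ask ChatGPT for consistent multi-angle reference prompts
# for one character, so each view can be fed to an image/3D generator.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

character_brief = "Pixar-style girl, ~8 years old, red dress, braided hair"  # placeholder
views = ["front view", "side view", "back view", "3/4 view"]

prompts = []
for view in views:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write concise, consistent image-generation prompts. "
                        "Keep character details identical across views."},
            {"role": "user",
             "content": f"Character: {character_brief}. Write one prompt for the {view}."},
        ],
    )
    prompts.append(response.choices[0].message.content.strip())

for p in prompts:
    print(p)
```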
Strengths
- Fast concept and stylized imagery generation.
- Built-in segmentation and automatic part separation; some services include animal animation modules.
- Quickly produces pleasing frames for simple tasks.
Major technical problems and limitations
- Mesh and topology errors: inconsistent geometry, remeshing artifacts (e.g., heads cut off), topology too poor for production use (a cleanup sketch follows this list).
- Inconsistent continuity: animations stop or change between frames; characters can swap appearance or duplicate.
- Lighting and style control is limited — relighting (day→night) while preserving motion and camera is unreliable.
- High randomness: finding one perfect shot can require hundreds of attempts and be costly.
- Not production-grade for complex, multi-shot projects; better suited for short/simple creatives or previsualization.
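The video does not walk through a fix-up recipe, but a typical first-pass cleanup of an AI-generated mesh inside Blender looks roughly like the sketch below; the file path, merge threshold and modifier settings are illustrative assumptions, and proper retopology would still be manual.

```python
# Rough first-pass cleanup of an AI-generated mesh in Blender (run from Blender's
# Python console or as a script). Values are illustrative, not from the video.
import bpy

# Import a generated asset (the glTF importer ships with Blender).
bpy.ops.import_scene.gltf(filepath="/tmp/generated_character.glb")  # hypothetical path
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

# Merge duplicate vertices and fix flipped normals in Edit Mode.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.0005)        # "Merge by Distance"
bpy.ops.mesh.normals_make_consistent(inside=False)   # recalculate normals outward
bpy.ops.object.mode_set(mode='OBJECT')

# Voxel remesh for a watertight, even surface, then decimate for a lighter mesh.
remesh = obj.modifiers.new(name="Remesh", type='REMESH')
remesh.mode = 'VOXEL'
remesh.voxel_size = 0.01

decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
decimate.ratio = 0.25
```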
Summary judgement
Neural tools are impressive as accelerators for concept work and assistance, but they are not a reliable replacement for a skilled artist on complex pipelines that require precise control and fixability.
Blender / Traditional 3D Workflow — Process and Strengths
Pipeline executed manually
- Character creation: blockout → detailed sculpting → retopology/remesh → UV unwrap → texturing (Substance Painter) → hair with Blender particle system → material work (nose, eyes, mouth) → rigging with custom controllers → animation from references.
- Environment/props: modeling (barrel, bucket, well), sculpting and baking displacement/normal maps for stone walls, texture painting, vegetation scattering, river, camera setup and animation.
- Lighting: a manual daytime setup, later relit for night with a lantern prop added, while preserving the animation, camera and finer details (sketched below).
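The relight itself is only shown in the viewport; as a rough illustration, a day-to-night pass like this can be expressed as a small scene-level script. The object names, colors and light values below are assumptions, not taken from the actual project.

```python
# Sketch of a day-to-night relight in Blender: dim and cool the sun, darken the
# world background, and add a warm lantern light. Names and values are guesses.
import bpy

# Turn the existing sun into cool, weak "moonlight".
sun = bpy.data.objects.get("Sun")              # assumed object name
if sun is not None and sun.type == 'LIGHT':
    sun.data.energy = 0.3                      # much dimmer than a daytime sun
    sun.data.color = (0.6, 0.7, 1.0)           # bluish moonlight tint

# Darken the world background (assumes a node-based world with a Background node).
world = bpy.context.scene.world
if world and world.node_tree:
    bg = world.node_tree.nodes.get("Background")
    if bg:
        bg.inputs["Color"].default_value = (0.01, 0.02, 0.05, 1.0)
        bg.inputs["Strength"].default_value = 0.2

# Add a warm point light where the lantern prop sits.
lantern_data = bpy.data.lights.new(name="LanternLight", type='POINT')
lantern_data.energy = 50.0                     # tune to taste
lantern_data.color = (1.0, 0.6, 0.3)           # warm flame-like tint
lantern = bpy.data.objects.new("LanternLight", lantern_data)
lantern.location = (1.0, 0.0, 1.2)             # placeholder position near the characters
bpy.context.scene.collection.objects.link(lantern)
```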
Strengths
- Full control and ability to make precise fixes (e.g., change a single stone without regenerating everything).
- Robust, production-quality topology, textures, rigging and consistent animation across shots.
- Faster, deterministic iterations on client corrections because the scene lives in a single editable project.
Weaknesses
- More time- and labor-intensive than neural generation; requires expertise.
Round-by-Round Experiment Outcomes
Round 1 — character & dog models
- Neural: plausible results fast but with topology/mask/cropping problems.
- Blender: higher-fidelity, production-ready models with realistic hair, detailed mouth and eye work, and stronger rigging/animation.
Round 2 — environment/props
- Neural: handled simple props and background buildings reasonably well, but with mesh issues.
- Blender: produced coherent, textured, modular assets for a full scene.
Round 3 — animated walk-through at sunset
- Neural: attractive footage but failed technical specs (consistent animation, precise actions, camera control).
- Blender: produced the specified sequence with preserved motion and camera control.
Round 4 — day→night edit preserving all details
- Neural: could not reliably change lighting while preserving animations/angles.
- Blender: relit the scene and added a lantern while keeping everything intact.
Overall verdict: a tie in some simple areas, but Blender (human-driven 3D pipeline) wins for production tasks requiring repeatability, precise control, and fixability. Neural nets are powerful assistants but not replacements for professional 3D artists on complex projects.
Technical Recommendations and Practical Notes
- Use neural networks to accelerate ideation, generate references, or produce simple assets/animations; expect significant post-processing or manual correction for production.
- For client work with strict specs and continuity, keep the project inside a 3D package so changes remain deterministic and minimal to implement (see the append sketch after this list).
- Adopt a hybrid approach: learn both neural tools and 3D software. Use generative nets for speed and exploration, and 3D tools for final deliverables.
- Achieving production-ready results purely with generative nets currently requires elaborate, time-consuming workaround workflows (e.g., pre-generating keyframes, frame-by-frame management) that can be costly.
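One way to keep everything inside a 3D package, as recommended above, is to append (or link) each approved asset into a single master .blend file; the sketch below uses Blender's standard append operator, with file paths and object names as placeholders.

```python
# Sketch: append approved prop objects from separate asset files into one master
# scene, so client corrections stay inside a single editable Blender project.
# File paths and object names are placeholders.
import bpy

assets = {
    "/srv/assets/props_barrel.blend": "Barrel",
    "/srv/assets/props_well.blend": "Well",
}

for blendfile, obj_name in assets.items():
    section = "/Object/"  # append from the Object datablock section
    bpy.ops.wm.append(
        filepath=blendfile + section + obj_name,
        directory=blendfile + section,
        filename=obj_name,
    )
```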
Tools, Services and Integrations Mentioned
- Neural nets/services: VO3, Tripa, Mishai, 3D CSM, remesh modules (auto-segmentation, animal animation modules, remesh caveats).
- Generation helpers: ChatGPT for client briefs and prompt generation.
- 3D tools: Blender (modeling, hair particles, rigging, animation), Substance Painter (texturing), standard particle hair and material workflows.
- Compositing/relighting: Luma (possible but messy), After Effects.
- Cloud hardware: Selectel remote desktop — recommended for renting GPU-backed virtual machines to run Blender/After Effects via browser for scalable rendering, team access and backups.
Courses / Tutorials / Guides Advertised
- Character Creation course: Pixar-style girl and realistic dog; 100+ lessons / 100+ hours; full character pipeline.
- Location Creation course: building locations from scratch, texturing in Substance Painter, and a pack of 3D models.
- Cinematics course: shot creation, composition, lighting and camera work.
- Neural Networks + VFX course: combining neural nets with graphics, advanced integration with VFX pipelines.
- Marvelous Designer (clothing): complex outfit simulation and motion.
- EmberGen simulations course: explosions, fire, smoke, magic simulations.
- Tracking course: Blender, Nuke, PFTrack, 3DEqualizer, Mocha Pro, After Effects, 360° workflows.
Course features: homework after each lesson, tutor support, installment payment (3–36 months), international card payments, promo code “Promo15” for 15% discount. The presenter emphasizes learning both neural and 3D workflows and how to combine them.
Sponsor / Practical Tip
Selectel remote desktop is recommended for running Blender and After Effects on low-power devices by renting GPU virtual machines via a browser. It provides scalable resources, team collaboration and cloud backups.
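As a concrete illustration of the rendering side of this tip, Blender can be driven headlessly from the command line on such a VM; the sketch below wraps that call in Python, with the file paths, frame range and engine choice as assumptions.

```python
# Minimal sketch: render an animation headlessly with Blender's CLI on a remote
# GPU machine. Paths and frame range are placeholders.
import subprocess

cmd = [
    "blender",
    "-b", "/srv/projects/village_walk.blend",   # run in background with this file
    "-E", "CYCLES",                             # render engine
    "-o", "/srv/renders/frame_####",            # output path pattern
    "-s", "1", "-e", "250",                     # start / end frames
    "-a",                                       # render the whole animation
]
subprocess.run(cmd, check=True)
```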
Conclusions / Analysis
- Neural networks are impressive and useful, but not a turnkey replacement for skilled 3D artists on complex, multi-shot, continuity-sensitive projects.
- Deterministic control over topology, rigs and scene files remains critical for production—this is where traditional 3D pipelines excel.
- Best practice: master both neural generation tools and traditional 3D production software to leverage the strengths of each approach.
Main Speakers / Sources
- Presenter / 3D artist: Starikov Production (author of the experiment and the Blender work).
- ChatGPT: used as the “client” to generate the technical brief and prompts.
- Neural network services referenced: VO3, Tripa, Mishai, 3D CSM, remesh modules (names from subtitles; some may be auto-transcribed).
- Cloud GPU sponsor/service: Selectel remote desktop.