Summary of "Стать тестировщиком с нуля до 200к / Полный курс по тестированию (QA)" ("Become a Tester from Scratch to 200k / Full QA Course")
Main ideas and lessons conveyed
1) What an “IT mentor” is and how the course is structured
- The course is introduced as part of a mentoring system (“conscious commercialism” / “IT mentors”).
- Mentors are described as financially incentivized to:
- get candidates employed, and
- support them during probation,
- not just provide study materials.
- Course design goals:
- Emphasize topics most relevant for interviews and passing probation.
- Continuously prune unnecessary theory and add market-relevant practice questions.
- Provide “practically applicable knowledge” for tricky interview questions.
- Course structure:
- Multiple extensive chapters, each broken into parts with timecodes.
- Theory is followed immediately by practical tasks.
- Home tasks include detailed analysis and explanations of correct answers (hosted for community members).
- Several speakers contribute chapters.
- Rationale for a public course:
- The value of a mentor is framed as:
- diagnostics (checking readiness for interviews),
- psychological/moral support,
- non-hallucinating, market-relevant Q&A (contrasted with GPT being portrayed as sometimes giving general/hallucinated answers),
- career guidance beyond “teaching basics”.
- The course also promises interview-oriented coverage of:
- testing basics,
- programming basics (at least in the context of QA needs).
2) Core QA concepts: Testing, Quality Control, Quality Assurance
Testing
- Testing (definition): a technical check that software behavior matches expected behavior.
- Purpose:
- identify errors,
- ensure quality before release,
- communicate findings to stakeholders.
- Why testing before release matters:
- Finding errors early reduces cost.
- Helps verify compliance with requirements.
- Gives a user-side view of the product.
- Helps uncover unplanned scenarios.
- Stated testing principles:
- Testing shows the presence of defects, but cannot prove their absence.
- Exhaustive testing is impossible (except in trivial cases).
- Early testing finds defects earlier.
- Defects tend to cluster in a small number of modules (defect clustering).
- Pesticide paradox: repeated tests eventually stop detecting new defects.
- Testing depends on project context (e.g., banking vs news portal).
- “No defects found” ≠ “product is ready”.
- QA pros are paid not only to find bugs, but also to:
- improve quality processes,
- reduce reputational and delivery losses.
Differences among roles/levels
- Bug-related work is distinct from quality/process work.
- Quality Control (QC):
- includes testing plus controlling how fixing is done for discovered errors,
- focuses on the finished product/result.
- Quality Assurance (QA system):
- includes QC plus organizing processes to prevent errors,
- focuses more on process quality than only end-product checking.
- Relationship:
- Testing ⊂ QC ⊂ broader QA system
- “Tester vs developer” framing:
- Both aim at a high-quality working product.
- Developer: implement requirements, but must manage risks.
- Tester: identify risks in advance so the output meets customer needs.
- Example: if release is near, a very small bug may be deferred to the next sprint rather than blocking the release.
- Addressed misconception:
- “Bug hunting only” is not enough; testing should be driven by real risk/impact.
- Example referenced: controversy around Google Gemini imagery not matching cultural/historical facts.
3) Tester involvement across the software lifecycle + daily routine
- Ideal picture: tester participates across all stages (requirements → release → post-release support), but real life may differ (stages can be missing; tasks blur).
- Presented as a guideline, not strict regulation.
Typical daily tasks:
- Review/adjust test-related work; plan tests.
- Analyze new tasks and write/maintain test cases.
- Perform manual and automated testing.
- Analyze results and report/document bugs.
- Discuss issues with the team and internal stakeholders.
- Update documentation and suggest improvements.
4) Development methodologies: Agile vs Waterfall (and Scrum/Kanban)
Development methodology (definition)
An approach that determines:
- the order of task execution,
- implementation methods,
- control,
- cost and deadlines.
Simplified view: a “map” from idea to release.
Agile
- Flexible approach with short cycles (iterations) adapting to change.
- Agile Manifesto values (4):
- People/interactions > processes/tools
- Working product > comprehensive documentation
- Customer collaboration > contract negotiation
- Responding to change > following a fixed plan
- Highway/path metaphor: direction can change at forks.
Agile sub-approaches:
- Scrum
- Work divided into sprints (1–4 weeks).
- Ceremonies/events:
- planning,
- daily stand-up (~15 minutes),
- retro,
- demonstration of sprint output.
- Roles:
- Product Owner,
- Scrum Master,
- Development Team.
- Tester emphasis: helps manage risks/time costs and provides test-related feedback inside ceremonies.
- Kanban
- Continuous flow rather than sprints.
- Core ideas:
- visualize work,
- limit work in progress (WIP).
- Tasks move across statuses on a board (e.g., “to do”, “ready”, etc.).
- Statuses can be tailored; transparency for all team members.
Waterfall
- Strict sequence of stages:
- analysis → design → development → testing → implementation.
- Tester mainly connects during the testing stage.
- Pros:
- clear sequence,
- strong documentation and control.
- Cons:
- hard to change later,
- users see results only at the end,
- risk of building an obsolete product by launch time.
Comparison summary:
- Agile: iteration flexibility + early feedback.
- Waterfall: predictability + control + consistency, but low flexibility.
Interview/practice topics promised as homework:
- What is testing and its goal?
- QA vs QC vs “tester” difference?
- Agile methodologies: what they are and how they differ?
- Scrum and Kanban tester specifics.
- Possible “catch” question: a mention of the “Lean” methodology may appear.
5) Team roles and tester collaboration model
Speakers/lectures describe where QA fits within the team:
- Developers
- Front-end: UI/interactions.
- Back-end: business logic, DBs, APIs.
- Tester interactions:
- clarify how features are implemented,
- understand architecture/data exchange,
- know technical constraints.
- Analysts (requirements shaping)
- Tester clarifies requirements for already built behavior and during feature spec creation.
- Tester finds weaknesses/ambiguities early to avoid major rework.
- Designers (UI/UX)
- Tester clarifies detailed behavior: hover states, step order, hints/errors, popups/warnings.
- Tester ensures “voice of the user” inside the team.
- Product Manager / Project Manager
- Distinction:
- Project manager: process/deadlines/meetings.
- Product manager: goals/strategy/metrics/priorities and business impact of bugs.
- In small teams roles may be combined.
- DevOps / Systems Engineers
- Provide environment support: test servers, CI/CD processes, logs, configurations.
- Without this, the tester can’t test effectively.
- Other testers
- Coordinate responsibilities to avoid duplicated checks.
- Share findings and test approaches.
- Be able to replace each other during vacations/illnesses.
Tester as a “director” analogy
- Not only verifying expected vs actual, but influencing production and coordinating team work.
6) Software testing lifecycle: how it should work (ideal steps)
A structured “testing lifecycle” with stages:
1. Requirements analysis
- Study documentation (technical assignments, user stories, layouts).
- Extract specific requirements from vague statements.
- Find ambiguity/contradictions.
2. Test planning
- Create a test plan including:
- scope/volume,
- approach,
- resources and schedule,
- entry and exit criteria.
- Decide:
- what will be tested vs not tested,
- who will test,
- what devices/browsers,
- success criteria.
- Trade-off example: when time is limited, focus most on critical flows (ordering/payment/delivery) and less on secondary features.
3. Test design
- Create test cases/scenarios based on requirements.
- For complex systems:
- use diagram/state-transition mapping,
- then general checklists,
- then detailed cases.
- Build user scenarios with negative cases and transitions.
4. Preparing the test environment
- Stand setup, software installation, deploy the test build.
- Prepare required test data.
5. Test execution
- Run test cases and document results.
- If a discrepancy occurs:
- localize the issue as specifically as possible,
- create an actionable error report.
6. Analysis of results and reporting
- Produce reports with agreed metrics:
- counts of passed/failed tests,
- defects found and fixed,
- trends (e.g., many integration-related bugs).
- In agile, stages can run in parallel across different tasks.
Key takeaway: Good testing is not “finding many bugs,” but preventing defects and building confidence in release quality.
7) Types of testing (hierarchy and examples)
- Manual vs Automated testing
- Manual: the QA engineer performs checks by hand, aided by tools (e.g., browser dev tools, Postman-style API clients).
- Automated: QA uses programming + testing frameworks to run checks via code.
- Functional vs Non-functional
- Functional: checks that the system does what it should (per requirements).
- Non-functional: checks properties like performance, usability, security.
- Smoke / Sanity / Regression
- Smoke: quick basic check of critical modules before further testing.
- Sanity: after changes, verify affected functionality before broader regression.
- Regression: broad checks after new features or modifications to ensure core business processes aren’t broken; includes dynamic regression based on potential impact.
- Black/Gray/White box
- Black box: test like a user, without internal knowledge.
- Gray box: some knowledge (users + API + DB + code described generally).
- White box: full code access; find issues in logic.
- Levels
- Unit/modular: individual functions/modules (mostly developers).
- Integration: interactions between components/services.
- System: whole system behavior.
- End-to-end (E2E): most important user paths.
- Cross-platform / Cross-browser
- Cross-platform: check across OS/device families.
- Cross-browser: check across browser engines/versions.
- Shift Left vs Shift Right
- Shift Left: earlier testing (requirements stage).
- Shift Right: later/production-like stages (real users or partial rollout).
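The smoke/regression distinction above can be sketched with Python's standard `unittest`; the `apply_discount` function and both suites are hypothetical illustrations, not part of the course:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Toy business function exercised by the suites below."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be within 0..100")
    return round(price * (1 - percent / 100), 2)

class SmokeTests(unittest.TestCase):
    """Smoke level: a quick check of the critical path only."""
    def test_discount_happy_path(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

class RegressionTests(unittest.TestCase):
    """Regression level: broader coverage, including negative cases."""
    def test_zero_and_full_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

In practice teams tag such subsets (smoke vs full regression) so CI can run the fast suite on every commit and the broad one nightly.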
8) Requirements basics (what they are and QA-friendly properties)
- Requirements describe:
- what software should do,
- how it behaves,
- under what conditions and restrictions.
Types:
- Business requirements (why the product is needed)
- User requirements (what users need to accomplish)
- Functional requirements
- Non-functional requirements (performance/security/constraints)
- System/technical requirements (infrastructure, tech stack, integrations)
QA-friendly quality properties:
- Unambiguous (no multiple interpretations)
- Complete (covers full functionality)
- Verifiable/testable
- Consistent (no contradictions)
- Atomic (one function/characteristic per requirement)
- Relevant and updated
- Prioritized
- Feasible (realistic within project constraints)
- Traceable (linked to goals/release items)
Emphasis: the earlier bugs are found in requirements, the cheaper they are to fix.
9) Verification vs Validation
- Verification: “Did we build it right?” (meets specs/requirements)
- Examples: check technical specs correctness; check layout/logic after development.
- Validation: “Did we build the right product?” (useful/suitable for users)
- Example: usability checks in real scenarios.
10) Test documentation: plans, checklists, test cases, bug reports, reports
A) Test plan and variants
- Test plan: goals/scope, entry/exit criteria, risks, resources, schedule.
- Test program and methodology: a more formal document with objectives/objects/methodology/order/assessment criteria; common in regulated contexts.
- Checklist: free-format list of checks (often for regression/smoke-style coverage).
- Advice: avoid repeating the word “check” inside checklist items; each item is already implicitly a check.
B) Test case (formal)
- Test case: steps to test a single functionality/expected outcome.
- Typical fields:
- Unique ID
- Name (matches functionality)
- Preconditions
- Steps
- Expected result
- Status (passed/failed/skipped/blocked)
- Postconditions (cleanup)
- Priority / severity
- Test type
- Test data
- Automation status
- Author
- Why test cases matter:
- more executable detail,
- helpful when teams differ and roles are separate.
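The field list above maps naturally onto a record type. A minimal sketch as a Python dataclass (the class and field names are illustrative, not a real TMS schema):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Fields mirror the list above; defaults mark the pre-run state.
    case_id: str
    name: str
    preconditions: list[str]
    steps: list[str]
    expected_result: str
    status: str = "skipped"          # passed / failed / skipped / blocked
    priority: str = "medium"
    test_type: str = "functional"
    test_data: dict = field(default_factory=dict)
    automated: bool = False
    author: str = ""

tc = TestCase(
    case_id="TC-101",
    name="Login with valid credentials",
    preconditions=["User account exists"],
    steps=["Open login page", "Enter valid credentials", "Press 'Log in'"],
    expected_result="User lands on the dashboard",
)
```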
C) Bug report (“bugreport”)
- Stored in issue trackers (e.g., Jira-like systems).
- Typical required fields:
- Unique bug ID
- Title (“what broke, where, under what conditions”)
- Steps to reproduce
- Actual result
- Expected result
- Attachments (screenshots/screencasts/logs)
- Priority vs Severity
- Environment (OS/browser/app version/device/build)
- Assignee/Status
- Recommendations:
- Avoid vague wording like “wrong/correct”; use measurable specifics.
- Don’t duplicate preconditions as steps if they’re already in preconditions.
- Steps should include only actions performed; observations go in separate fields.
- Don’t pack multiple unrelated bugs into one report.
- Title should allow developers to infer the problem.
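A bug report with the required fields above can be represented as a plain record plus a completeness check; the field names and the `validate_bug_report` helper are hypothetical, chosen to match the list:

```python
REQUIRED_FIELDS = ["id", "title", "steps_to_reproduce", "actual_result",
                   "expected_result", "environment"]

def validate_bug_report(report: dict) -> list[str]:
    """Return the names of required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]

bug = {
    "id": "BUG-42",
    "title": "Checkout: 'Pay' button unresponsive on mobile Safari",
    "steps_to_reproduce": ["Add item to cart", "Open checkout", "Tap 'Pay'"],
    "actual_result": "Nothing happens; no network request is sent",
    "expected_result": "Payment form opens",
    "environment": "iOS 17, Safari, build 2.3.1",
}

missing = validate_bug_report(bug)   # empty list: all required fields filled
```

Note how the title follows the recommended pattern: what broke ('Pay' button), where (checkout), under what conditions (mobile Safari).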
D) Test report (summary)
- Audience-dependent:
- formal reports (e.g., customer-facing acceptance),
- team/lead summaries (include passed/failed counts and links to bugs).
- Can include operational/system counts:
- auto-generated reports from a TMS,
- quick operational statuses shared in chat.
11) Test design techniques (how to reduce test effort while improving coverage)
- Equivalence classes: partition inputs into groups expected to behave similarly; test one representative per class.
- Boundary value analysis: many bugs are near valid/invalid boundaries; test boundary values and adjacent values.
- Pairwise testing: cover intelligent combinations (often pairs) rather than all possibilities.
- State/transition diagrams: model the system as states and transitions; derive test cases from them.
- Error guessing / exploratory testing:
- error guessing: experience-based prediction of where bugs are likely to hide,
- exploratory: covering the product and documenting new understanding simultaneously.
- Decision table: systematically cover combinations of conditions and resulting actions.
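Boundary value analysis and pairwise coverage can be sketched in a few lines of Python. The age range, the factor values, and the naive greedy `pairwise_cover` below are all illustrative assumptions (real pairwise tools use smarter algorithms and produce smaller suites):

```python
from itertools import combinations, product

# Boundary value analysis for an "age 18..60" input: instead of testing
# every value, test the boundaries and their nearest neighbours.
LOW, HIGH = 18, 60
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

def is_valid_age(age: int) -> bool:
    return LOW <= age <= HIGH

def pairwise_cover(factors: dict) -> list:
    """Naive greedy sketch: keep picking combinations until every pair of
    values from two different factors appears in at least one test."""
    keys = list(factors)
    needed = {(a, va, b, vb)
              for a, b in combinations(range(len(keys)), 2)
              for va in factors[keys[a]]
              for vb in factors[keys[b]]}
    tests = []
    for combo in product(*factors.values()):
        covered = {(a, combo[a], b, combo[b])
                   for a, b in combinations(range(len(keys)), 2)}
        if covered & needed:        # combo adds at least one uncovered pair
            tests.append(combo)
            needed -= covered
        if not needed:
            break
    return tests

factors = {"browser": ["chrome", "firefox", "safari"],
           "os": ["windows", "macos"],
           "lang": ["en", "ru"]}
suite = pairwise_cover(factors)   # fewer tests than the full 12 combinations
```

Every pair (browser, os), (browser, lang), (os, lang) still appears in some test, which is the point of the technique: near-exhaustive pair coverage at a fraction of the full cartesian product.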
12) Tools for test management (TMS) and practical workflow demonstration
A case-study using a popular TMS (explicitly named: Qase):
- Create projects (one repository per product/service).
- Create tests (groups of test cases).
- Create test cases (with steps).
- Create test plans (select test cases for a run).
- Execute a test run:
- mark each case status (passed/failed/blocked/skipped),
- generate a report with stats and links.
Benefits:
- transparency for the team,
- history/graphs,
- easy navigation from report → specific test case.
13) Web testing fundamentals: HTTP/HTTPS, browser architecture, client-server, REST, DBs, SQL
HTTP/HTTPS and status codes
- HTTP request parts:
- method, URI, HTTP version, headers, optional body.
- HTTP response status codes by major groups:
- 1xx informational
- 2xx success
- 3xx redirect
- 4xx client errors (e.g., 401 Unauthorized, 403 Forbidden, 404 Not Found)
- 5xx server error (e.g., 500 Internal Server Error)
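Because the first digit determines the group, classifying a status code is a one-line division; a minimal sketch (the function name is illustrative):

```python
def status_class(code: int) -> str:
    """Classify an HTTP status code into the groups listed above."""
    groups = {1: "informational", 2: "success", 3: "redirect",
              4: "client error", 5: "server error"}
    try:
        return groups[code // 100]
    except KeyError:
        raise ValueError(f"not an HTTP status code: {code}") from None

status_class(404)  # "client error"
status_class(500)  # "server error"
```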
What changes with HTTPS
- HTTPS = HTTP + encryption (TLS)
- Certificates (SSL/TLS) provide authenticity and secure session keys.
Browser request flow
- Cache → DNS lookup → TCP (3-way handshake) → TLS → send request → receive HTML → rendering.
- URL structure: domain name points to server(s).
Client-server architecture and types
- Client: interface/UX (frontend).
- Server: business logic/DB.
- Types:
- single-tier (rare),
- two-tier,
- three-tier,
- monolith vs microservices.
Monolith vs microservices
- Monolith
- single integrated unit, often one deployable artifact,
- simpler dependency tracking and often easier E2E testing,
- downside: scaling and failure impact can become system-wide.
- Microservices
- multiple services communicating via APIs/contracts,
- pros: scalability, different stacks, independent updates, better failure isolation,
- cons: orchestration complexity, DevOps infrastructure burden, communication delays, harder testing,
- includes contract testing + integration testing.
REST and API concepts
- REST = architectural style for HTTP-based APIs.
- Principles mentioned:
- client-server,
- statelessness,
- uniform interface and consistent naming/formats,
- layered system,
- cacheability,
- optional “code on demand”.
HTTP methods and properties:
- Idempotency discussed; included methods:
- GET (read, idempotent, cached)
- POST (create, not idempotent; data travels in the request body, avoiding URL length limits)
- PUT (full update, idempotent; may create if absent)
- PATCH (partial update; can be idempotent or not)
- DELETE (delete, idempotent)
- HEAD and OPTIONS
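Idempotency is easiest to see on a toy in-memory "server": repeating an idempotent request (PUT, DELETE) leaves the final state unchanged, while repeating POST creates a new resource each time. Everything below is an illustrative sketch, not a real HTTP server:

```python
store: dict[int, dict] = {}
next_id = 0

def post(data: dict) -> int:
    """Not idempotent: each call allocates a new resource id."""
    global next_id
    next_id += 1
    store[next_id] = data
    return next_id

def put(resource_id: int, data: dict) -> None:
    """Idempotent: repeating the call yields the same final state."""
    store[resource_id] = data

def delete(resource_id: int) -> None:
    """Idempotent: deleting an already-deleted resource changes nothing."""
    store.pop(resource_id, None)

post({"name": "a"})
post({"name": "a"})      # second POST -> a second, duplicate resource
put(1, {"name": "b"})
put(1, {"name": "b"})    # second PUT -> state identical to after the first
```

This is why retrying a failed PUT is safe, while retrying a POST risks duplicates unless the API adds its own deduplication (e.g., idempotency keys).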
Databases and DBMS + SQL basics
- Relational DBs
- tables, rows (tuples), columns (attributes),
- primary key, foreign keys, joins.
- Non-relational DBs (NoSQL)
- document/key-value/graph/column families,
- advantages include horizontal scaling and more flexible schemas.
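The relational terms above (tables, primary key, foreign key, join) fit in a few lines of SQL; a minimal sketch using Python's built-in `sqlite3` with made-up `users`/`orders` tables:

```python
import sqlite3

# Primary key, foreign key, and a JOIN in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id),   -- foreign key to users
        total REAL
    );
    INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders VALUES (10, 1, 99.90), (11, 1, 5.00), (12, 2, 42.00);
""")
rows = conn.execute("""
    SELECT u.name, COUNT(o.id) AS orders_count
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.id ORDER BY u.name
""").fetchall()
# rows == [('Alice', 2), ('Bob', 1)]
```

Queries like this (joining rows via a foreign key, then aggregating) are the bread and butter of checking back-end test data against what the UI shows.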