Summary of "Algorithmic Decision Making - 2"
AI assistant vs AI agent
AI assistant
- Intelligent support tools such as Alexa and chatbots.
- Provide help but do not autonomously decide or act on the user’s behalf.
- Typically have session-limited memory (no persistent user memory across sessions unless explicitly designed).
- Prompt-driven and often rule-based or algorithmic; prompt engineering is important (covered later).
AI agent
- Autonomous systems that design and execute workflows on behalf of users (examples: automated trading systems, autonomous vehicles).
- Characterized by multicomponent autonomy, persistent memory (tracks historical interactions), and decision-making capability.
- Act as substitutes for human agents/decision-makers.
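The contrast above can be sketched in a few lines of Python; the class names and the toy decision rule are invented for illustration, standing in for real LLM-backed components:

```python
# Minimal sketch of the distinction; the classes and the trivial decision
# rule are hypothetical stand-ins for real LLM-backed components.

class Assistant:
    """Answers one prompt at a time; no memory carries across calls."""
    def reply(self, prompt):
        return f"Help with: {prompt}"  # stateless, prompt-driven support

class Agent:
    """Keeps persistent memory and decides on actions on the user's behalf."""
    def __init__(self):
        self.memory = []  # tracks historical interactions across calls

    def act(self, observation):
        self.memory.append(observation)
        # Toy decision rule: act once enough evidence has accumulated.
        if len(self.memory) >= 2:
            return "execute-workflow"
        return "gather-more-information"

agent = Agent()
print(agent.act("price dropped"))   # gather-more-information
print(agent.act("price dropped"))   # execute-workflow
```

The key difference is the `memory` attribute and the autonomous `act` step: the agent decides when to execute a workflow, while the assistant only responds when prompted.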
Five decision-making conditions (literature overview)
The speaker frames these five axes as the main dimensions scholars use to compare machine and human decision-making:
Specificity of decision/search space
- Algorithms generally require well-specified inputs, constraints, and goals; humans can often act on vague or underspecified instructions, a gap that may narrow as AGI advances.
Interpretability / explainability
- Many machine-learning models (especially deep neural nets) are black boxes; human decisions are often more explainable. Explainability (XAI) is a critical managerial issue.
Size of alternative set and bounded rationality
- Algorithms can evaluate far larger solution sets than humans, who are boundedly rational.
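As a toy illustration of the scale gap, assuming a made-up five-feature design task and an invented utility function, an algorithm can exhaustively score every alternative in a combinatorial set that a boundedly rational human would only sample:

```python
import itertools

# Hypothetical product-design task: choose which feature bundle to ship.
features = ["price", "speed", "quality", "support", "warranty"]

# All non-empty bundles: 2^5 - 1 = 31 alternatives here; with 20 features
# the alternative set would already exceed a million options.
alternatives = [combo
                for r in range(1, len(features) + 1)
                for combo in itertools.combinations(features, r)]

def score(bundle):
    # Invented toy utility: each feature adds value, but larger bundles
    # carry increasing cost.
    return len(bundle) * 10 - len(bundle) ** 2

# The algorithm evaluates the entire alternative set exhaustively.
best = max(alternatives, key=score)
print(len(alternatives))  # 31
print(best)
```

A human decision-maker would typically satisfice over a handful of these bundles; the algorithm's advantage grows exponentially with the number of features.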
Decision speed
- Machines make decisions far faster than humans.
Replicability
- Machine decisions are reproducible (same inputs → same outputs); human decisions vary across individuals and over time.
Overall comparison framing
- These axes together are used to weigh when machines should make decisions versus humans.
Choice of predictors — causation vs correlation
Important warning: models may learn correlations that are not causal; using such predictors for decisions can be misleading or harmful.
Examples discussed:
- Facebook / Starbucks (HBR example)
- Users who “liked” Starbucks spent 8% more; this could reflect pre-existing preferences (confounding) rather than a causal effect of the like.
- Running speed vs sneaker color
- A learned correlation between sneaker color and performance was actually an artifact of supplier color codes tied to shoe size; shoe size correlates with speed. This is a local spurious correlation that harms generalization.
Practical implications:
- Require careful causal thinking, experimental controls, and validation before using correlations for managerial decisions.
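The sneaker example can be reproduced with a small simulation; the numbers and the size-band supplier rule below are invented for illustration, but they show how a confounder (shoe size) manufactures a color-speed correlation that disappears once size is held fixed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented numbers mimicking the lecture's sneaker story: shoe size is a
# confounder that drives both the supplier's color code and running speed.
shoe_size = rng.normal(42, 3, n)

# The supplier assigns color purely by size band; color has no causal
# effect on speed anywhere in this simulation.
color_is_red = (shoe_size > 42).astype(float)

# Speed depends on shoe size (plus noise); color never enters the equation.
speed = 20 - 0.2 * shoe_size + rng.normal(0, 0.5, n)

# Naive correlation: red sneakers look slower, purely via the confounder.
print(np.corrcoef(color_is_red, speed)[0, 1])  # clearly negative

# Holding the confounder (nearly) fixed removes the association:
band = (shoe_size > 41.9) & (shoe_size < 42.1)
print(np.corrcoef(color_is_red[band], speed[band])[0, 1])  # near zero
```

A model trained on this data would happily use color as a predictor, and would fail as soon as the supplier changed its color codes, which is the generalization failure the lecture warns about.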
Modeling risks & trade-offs
Overfitting
- Flexible models can learn spurious, local patterns and fail to generalize.
Bias–variance trade-off
- Less flexible models can be biased but may generalize better; highly flexible models reduce bias but risk overfitting.
Data and model quality issues
- Algorithms produce biased or wrong outputs if predictors, data, or model choices are poor. Developers and managers must attend to data quality, feature selection, and model validation.
Points flagged for deeper coverage later
- Prompt engineering (for assistants)
- Interpretability and explainability methods (XAI)
- Causes of wrong predictors and bad data (sources of bias, noise, overfitting)
Referenced / main sources and speakers
- Primary speaker: course lecturer (unnamed) presenting literature and examples
- Cited organizations and works mentioned in the talk:
- IBM (practical insights on AI assistants vs agents)
- California Management Review paper (referred to as “Shaa and others”)
- Harvard Business Review example/study (referred to as “John and others” about Facebook/Starbucks)
- Barocas and colleagues, chapter “When is automated decision-making legitimate?” from Fairness and Machine Learning (Barocas, Hardt, and Narayanan), discussing correlational pitfalls