AI glossary
Created for ‘AI करेल ना बे!’ being presented at BMM 2024, Bay Area.
Artificial Intelligence (AI): A field of computer science focused on creating systems capable
of performing tasks that normally require human intelligence.
Natural Language Processing (NLP): A subfield of AI that focuses on the interaction between
computers and humans through natural language.
Bias: A tendency of an AI system to make decisions based on unfair preferences, often reflecting societal prejudices. Biases can also arise from hallucinations caused by edge cases, unreliable data, or a lack of contextual understanding.
Algorithm: A set of rules or steps a computer follows to perform a task.
Training Data: The data used to teach an AI model how to make decisions. Biases in training data
can lead to biased AI.
Machine Learning: A method by which AI systems learn from data and improve over time without being explicitly programmed.
Model: A mathematical representation of a problem or task that an AI system uses to make decisions.
Ethics in AI: The study of moral principles and how they apply to the development and use of AI technologies.
Fairness: Ensuring that AI systems make decisions impartially, without favoritism or bias.
Transparency: The practice of making the workings and decisions of AI systems understandable to humans.
Accountability: The responsibility of developers and organizations to ensure AI systems are used ethically and fairly.
Human-in-the-Loop: An approach to AI decision-making that involves human oversight and intervention to ensure accurate and fair outcomes.
Cognitive Bias: Patterns of thinking that can lead to errors in judgment, affecting both humans and AI.
Mitigation Strategies: Techniques and methods used to reduce or eliminate bias in AI systems.
Explainability: The ability of an AI system to explain its decisions in a way that humans can understand.
Trust in AI: Building confidence that AI systems will perform reliably and ethically.
Citizen Science: Public participation in scientific research, often involving collaboration with professional
scientists to collect and analyze data, contributing to AI development and validation.
Long-Term Impact: The potential future consequences of AI technologies on society, which can be positive
if managed responsibly.
Key Points
Bias Awareness: Both AI and humans can have biases; it's important to recognize
and address them, especially biases arising from hallucinations due to edge cases,
unreliable data, or lack of understanding.
Ethical Use: Using AI responsibly and ethically ensures fairness and transparency.
Human Oversight: Humans play a crucial role in overseeing AI to prevent and
correct biases.
Citizen Science: Public involvement in scientific research enhances AI development and helps ensure diverse and comprehensive data.
Positive Future: With careful management, AI can lead to beneficial outcomes for society.








