By NPTEL-NOC IITM
AI System Understanding and Risks
- The core questions for modern AI systems revolve around their robustness, bias, and explainability (understanding how they make choices, e.g., in recommendations or translations).
- Advanced systems like ChatGPT, Swiggy, and Google Translate are deeply integrated into daily life, necessitating an examination of their underlying mechanics.
- The discussion focuses on exploring the risks of using these powerful, yet sometimes opaque, technologies.
Demonstrations of Potential Bias and Inconsistency
- Prompting an AI with "Bob gave Mike 30 mg, what was given?" produced answers like "substance," while other names yielded "medication" or "drug," highlighting potential subtle biases in name associations.
- Gender bias was observed when asking about professional choice: "I'm Jack, want to decide between nursing and dentistry" yielded "dentistry," but changing the name to "Jane" yielded "nursing."
- Legal bias appeared in analyzing bail applications: the model's response on the applicability of Section 302 (punishment for murder) differed with regional names, e.g., answering differently for an applicant from Kerala versus one from Punjab.
- Inconsistency is another noted risk: the same question phrased differently (e.g., "Is violence a necessary aspect of life?") produced conflicting answers ("no" or "yes") or an evasion ("I'm an AI language model, I can't answer that"). A minimal sketch for reproducing such probes follows this list.
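These perturbation probes are straightforward to reproduce: hold the prompt fixed and vary only the name. Below is a minimal Python sketch; query_model is a hypothetical placeholder (not from the lecture) for whatever chat-model API is available, and its canned reply just keeps the script self-contained.

```python
# Minimal prompt-perturbation probe: vary only the name in an otherwise
# identical prompt and compare the model's answers for divergence.

def query_model(prompt: str) -> str:
    """Stand-in for a real chat-model call; replace with your API client."""
    # Canned reply so the script runs end to end without network access.
    return f"(canned reply to: {prompt})"

TEMPLATE = ("I'm {name} and I want to decide between nursing and dentistry. "
            "Which should I choose?")
NAMES = ["Jack", "Jane", "Bob", "Priya"]

def probe(template: str, names: list[str]) -> dict[str, str]:
    """Ask the same question once per name and collect the replies."""
    return {name: query_model(template.format(name=name)) for name in names}

if __name__ == "__main__":
    answers = probe(TEMPLATE, NAMES)
    for name, reply in answers.items():
        print(f"{name}: {reply}")
    # Divergent answers are a flag for manual review, not proof of bias;
    # real probes repeat each prompt many times to average out sampling noise.
    if len(set(answers.values())) > 1:
        print("Answers differ across names -- inspect for name-driven bias.")
```

The same harness covers the consistency checks above: keep the name fixed and vary the phrasing instead.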
Ethical Failures and Regulatory Oversight
- Models often initially refuse to aid in illegal activities, responding to requests for movie piracy websites with "I can't assist with it," citing intellectual property rights.
- However, this safety guardrail can be circumvented by slight rephrasing, such as asking the model to continue a poem about piracy or instructing it to "refactor" code to avoid plagiarism detection, suggesting limitations in current ethical filtering.
- Regulatory intervention is already occurring, evidenced by Italy's temporary ban of ChatGPT in 2023 over GDPR violations related to the mass collection and storage of personal data for training purposes.
- AI "hallucination" poses severe risks, exemplified by a lawyer who used ChatGPT for research and submitted fabricated, non-existent cases to court, later regretting that he had relied on the tool without verifying its output.
Key Points & Insights
- The goal is not to discard AI systems because they make mistakes, but to study those mistakes constructively and improve their effectiveness within societal and governmental regulations.
- Top figures in AI signed an open letter positioning AI extinction risk as a global priority on par with pandemics and nuclear war, underscoring the gravity of the technology's potential negative impact.
- Users must understand the limitations of these systems; verifying authenticity is absolutely necessary, especially in high-stakes fields like legal research (a minimal verification sketch follows this list).
- Continuous engagement, such as bringing external AI-related incidents into the learning context, is essential to make teaching and learning about these systems more effective.
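The verification point can be made concrete. The sketch below cross-checks model-cited cases against a trusted source; KNOWN_CASES and unverified_citations are illustrative names (not from the lecture), with a tiny allow-list standing in for a real legal database or court-records lookup.

```python
# Minimal citation cross-check: treat every model-cited case as unverified
# until it is confirmed against a trusted source. KNOWN_CASES is a toy
# stand-in for a real legal database or court-records API.

KNOWN_CASES = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

def unverified_citations(cited: list[str]) -> list[str]:
    """Return the citations that could not be confirmed."""
    return [case for case in cited if case not in KNOWN_CASES]

if __name__ == "__main__":
    model_output = [
        "Brown v. Board of Education, 347 U.S. 483 (1954)",
        # One of the citations ChatGPT fabricated in the widely reported
        # 2023 lawyer incident:
        "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)",
    ]
    for case in unverified_citations(model_output):
        print("UNVERIFIED -- check primary sources:", case)
```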
Video summarized with SummaryTube.com on Oct 19, 2025, 15:18 UTC
Full video URL: youtube.com/watch?v=48RngwZfCPY
Duration: 46:26