February 18, 2024 6:00 PM PST
This meeting focused on Responsible AI, covering privacy, fairness, and the regulatory landscape across several countries. The presentation highlighted notable cases and challenges in the field, along with best practices developers can follow to implement AI responsibly.
Presenter: Ying Ying Liu, Data Scientist, PhD
Key Points
1. Definitions of Responsible AI
- Privacy
- Fairness
2. Regulatory Landscape
- China: Most advanced in AI law development.
- United States: Regulations like HIPAA; more focused on government use of AI.
- Canada: More detailed regulations at both national and provincial levels.
3. Notable Cases
- Royal Free and DeepMind Collaboration: sharing of patient records containing personally identifiable information.
- Clearview AI: Collection of online images used by law enforcement; investigated by Canada in 2020.
- Deepfakes: Examples include manipulated images of celebrities.
4. Canadian Regulations and Guidelines
- Legal authority and consent
- Appropriate purposes
- Necessity and proportionality
- Openness and accountability
- Individual access to data processes
- Limiting collection, use, and disclosure
- Accuracy and safeguards
5. Privacy Techniques
- Differential Privacy:
  - Grouping/aggregation
  - Balancing anonymity vs. utility
  - Adding Laplace noise to raw data, with the amount of noise controlled by epsilon (a larger epsilon means less noise, improving utility but weakening privacy).
  - Used by major companies such as Apple, Google, and Microsoft.
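The Laplace mechanism above can be sketched in a few lines. This is a minimal illustration, not from the talk: it assumes a counting query (sensitivity 1) and samples Laplace noise via inverse-CDF sampling, with the noise scale set to sensitivity / epsilon so that a larger epsilon yields less noise.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the Laplace mechanism adds noise
    with scale 1 / epsilon: larger epsilon -> less noise -> more utility,
    but weaker privacy.
    """
    sensitivity = 1.0
    return true_count + laplace_noise(sensitivity / epsilon)

# Larger epsilon keeps the released count closer to the truth on average.
noisy_loose = private_count(1000, epsilon=10.0)   # small noise, weak privacy
noisy_tight = private_count(1000, epsilon=0.01)   # large noise, strong privacy
```

The same trade-off the presenter described is visible here: with epsilon = 10 the noise scale is 0.1 and the released count is nearly exact, while with epsilon = 0.01 the scale is 100 and individual contributions are well hidden.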
6. Fairness in AI
- Issues of unfair treatment and transparency.
- Historical cases of bias:
- Amazon's AI hiring tool (reported in 2018) discriminated based on gender.
- Racial bias in health scores (2019).
- 2023 AI index report highlighted gender bias in LLMs.
- Legal actions against Facebook for discriminatory ad practices.
7. Approaches to Improve Fairness
- Fair machine learning practices.
- De-biasing data techniques.
- Genetic programming and symbolic regression to find optimal algorithms.
- Objectives of accuracy and fairness in AI models.
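One way to make the fairness objective above concrete is a group-level selection-rate comparison. The metric and the group labels below are illustrative assumptions (the talk did not specify a particular metric); this sketch computes the demographic parity gap, i.e. the spread in positive-prediction rates across groups.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rates across groups; 0.0 means all groups are selected at equal rates.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Example: a model that selects 75% of group "a" but only 25% of group "b".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A gap like this can serve as a second objective alongside accuracy, which is the kind of accuracy-plus-fairness trade-off the search methods above (e.g. genetic programming) can optimize over.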
Summary
- Fairness in AI is still a developing area, while privacy practices are more established.
- Countries may learn from China's advancements in AI regulation.
- Developers should adhere to best practices, ensure transparency, and involve subject matter experts in the process.
Audience Q&A Highlights
- Q: Is unfairness due to structural injustice or data errors?
- A: Standards exist to measure fairness based on gender and ethnicity.
- Q: What aspects of AI are more mature?
- A: Reliability, safety, and security are more technically solvable problems.
- Q: What is genetic programming?
- A: Represents programs as trees, with operators at internal nodes and input data at leaf nodes; useful for tuning algorithms.
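The tree structure described in this answer can be sketched as follows. The tuple encoding and the small operator set are illustrative assumptions, not details from the talk: internal nodes hold operators and leaves hold input variables or constants, and evaluation is a simple recursion over the tree.

```python
import operator

# Internal nodes hold operators; leaves hold input variables or constants.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(node, inputs):
    """Recursively evaluate an expression tree.

    A node is either a leaf -- a variable name (str) or a numeric
    constant -- or a tuple (op, left, right) forming an internal node.
    """
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left, inputs), evaluate(right, inputs))
    if isinstance(node, str):
        return inputs[node]
    return node  # numeric constant leaf

# The tree ("*", ("+", "x", 2), "y") encodes the expression (x + 2) * y.
tree = ("*", ("+", "x", 2), "y")
result = evaluate(tree, {"x": 3, "y": 4})  # (3 + 2) * 4 = 20
```

In genetic programming, candidate trees like this are mutated and recombined, then scored against objectives such as the accuracy and fairness goals mentioned earlier, which is what makes the representation useful for tuning algorithms.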
- Q: What documentation is needed for transparency?
- A: Includes algorithm design, human-in-the-loop processes, and privacy impact assessments.
Conclusion
The meeting underscored the importance of addressing fairness and privacy in AI development, emphasizing the need for collaboration across departments and adherence to regulatory standards.