A Human Rights Approach to AI
1. Proportionality and "Do No Harm"
- AI systems should be used only to the extent necessary to achieve a legitimate aim, and risk assessments should be carried out to identify and prevent the harms that could result from their use.
2. Safety and Security
- AI systems should avoid unwanted harms (safety risks) and be resilient against vulnerabilities to attack (security risks).
3. Right to Privacy and Data Protection
- Privacy must be protected throughout the AI lifecycle, and adequate data protection frameworks should be in place to safeguard people's personal information.
4. Multi-Stakeholder and Adaptive Governance & Collaboration
- The use of AI must respect international law and the laws of individual countries, and governance should be inclusive, with diverse stakeholders participating in decisions about how AI works for everyone.
5. Responsibility and Accountability
- AI systems should be auditable and traceable, with oversight and impact-assessment mechanisms in place to ensure they do not harm people or the environment.
6. Transparency and Explainability
- AI systems should be transparent and explainable: people need to understand how they work and why they produce the outcomes they do. Because full transparency can conflict with other principles such as privacy, safety, and security, the right balance has to be struck between them.
7. Human Oversight and Determination
- People, not machines, should remain in charge. Even when AI supports decision-making, humans retain ultimate responsibility and accountability for the final outcome.
8. Sustainability
- AI technologies should be assessed for their impact on sustainability, including effects such as climate change and pollution, and for whether their benefits reach everyone rather than only some groups.
9. Awareness and Literacy
- Public awareness and understanding of AI and data should be promoted through education, so that people know what AI is, how it works, and how it can be used responsibly. This also includes building digital skills and media and information literacy.
10. Fairness and Non-Discrimination
- AI should be fair and non-discriminatory, treating everyone equally rather than favoring one group over another, and its benefits should be accessible to all.
All information is drawn from https://www.unesco.org/en/artificial-intelligence/recommendation-ethics