Dos and Don’ts: A first step toward AI policies

Key principles to consider when developing AI strategies and policies. Further recommendations on safeguarding freedom of expression in the use of AI for content governance are available separately.

Do

Put the protection and promotion of freedom of expression and other human rights at the center of AI strategies and policies

Do

Recognize that policy challenges like hate speech, violent extremism, propaganda, and disinformation are complex and cannot be solved by automated technologies alone

Do

Conduct thorough due diligence on the potential human rights impacts of AI policies or regulations, throughout the entire lifecycle of an AI system and before enacting any such policies into law

Do

Encourage and fund research into AI systems that foster freedom of expression and media pluralism

Do

Create a transparent, whole-of-society approach for developing evidence-based policies that includes experts from academia, civil society, and the public

Do

Engage internationally to ensure that freedom of expression and media freedom considerations are incorporated into national, regional, and global AI strategies

Do

Implement information and digital literacy initiatives to empower individuals and strengthen democratic resilience

Don't

Don’t use “ethical” or “responsible” AI frameworks as a substitute for human rights-based AI governance frameworks; ethical principles are neither legally binding nor enforceable

Don't

Don’t expect AI technologies to solve deeply entrenched societal problems manifesting online

Don't

Don’t disregard human rights commitments and obligations when developing laws, policies, and regulations that are applicable to the AI sector

Don't

Don’t treat digital literacy as an afterthought, or omit education on the impact of AI on freedom of expression and other human rights

Don't

Don’t assume that more technology is always better, or prioritize efficiency over accuracy and fairness. Don’t ignore bias and error rates, or neglect accountability mechanisms

Don't

Don’t enact laws or regulations based on an overly optimistic view of AI’s future capabilities, or dispense with the need for human review and judgement

Don't

Don’t adopt laws or policies from other jurisdictions without assessing their potential impacts on human rights in your own country