Fly On Wall Street

The EU releases guidelines to encourage ethical AI development

The European Union illustrated on a background

No technology raises ethical concerns (and outright fear) quite like artificial intelligence. And it’s not just individual citizens who are worried. Facebook, Google and Stanford University have invested in AI ethics research centers. Late last year, Canada and France teamed up to create an international panel to discuss AI’s “responsible adoption.” Today, the European Commission released its own guidelines calling for “trustworthy AI.”

According to the EU, AI should adhere to the basic ethical principles of respect for human autonomy, prevention of harm, fairness and explicability. The guidelines translate these principles into seven requirements — human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability — and call particular attention to protecting vulnerable groups, such as children and people with disabilities. They also state that citizens should have full control over their data.

The European Commission recommends using an assessment list when developing or deploying AI, but the guidelines aren't meant to be — or to interfere with — policy or regulation. Instead, they offer a loose framework. This summer, the Commission will work with stakeholders to identify areas where additional guidance might be necessary and to figure out how best to implement and verify its recommendations. In early 2020, the expert group will incorporate feedback from the pilot phase. As we develop the potential to build things like autonomous weapons and fake-news-generating algorithms, it's likely that more governments will take a stand on the ethical concerns AI brings to the table.
