There is widespread awareness among researchers, companies, policy makers, and the public that the use of artificial intelligence (AI) and big data analytics often raises ethical challenges involving justice, fairness, privacy, autonomy, transparency, and accountability. Organizations are increasingly expected to address these issues, yet they must do so in the absence of robust regulatory guidance. Moreover, social and ethical expectations exceed legal standards, and they will continue to do so because the pace of technological innovation and adoption outstrips that of regulatory and policy processes.
In response, many organizations—private companies, nongovernmental organizations, and governmental entities—have adopted AI or data ethics frameworks and principles. These frameworks are meant to demonstrate a commitment to addressing the ethical challenges posed by AI and, crucially, to guide organizational efforts to develop and deploy AI in socially and ethically responsible ways.