There are mounting concerns across the globe about the reliability and safety of artificial intelligence (AI) based systems and their enabling technologies such as machine learning (ML). Social media is awash with worrisome anti-AI posts about autonomous lethal weapons “with a mind of their own”, social media manipulation, invasion of privacy, social oppression and discrimination. Notable figures such as the late renowned physicist Stephen Hawking and tech innovator Elon Musk have argued that the use of AI has dangerous implications, going as far as calling it “more dangerous than nukes” and a potential cause of “the end of the human race”. While admitting that there are reasons to be cautious, Microsoft co-founder Bill Gates plays down such concerns, saying “it’s a solvable problem”. And Gates is not alone in the pro-AI camp: corporations, academia, governments and intergovernmental organizations have come out in defence of AI and its immense potential to solve problems – including those considered impossible for humans.
As a provider of ML-powered regulatory compliance solutions, Tookitaki falls in line with the latter camp. However, we believe that developers of AI solutions should follow certain ethical norms to ensure that their solutions always benefit their subjects. For us, being ethical means doing what is good for all stakeholders of our solutions, including clients, regulators and the communities our clients serve. To be ethical, AI solutions need to be impartial, responsible, explainable, governable and reliable, and they should not make human beings feel alienated, devalued or frustrated. ML models are susceptible to biases and errors introduced by their makers, and the consequences can be disastrous for users. Amazon recently scrapped its AI recruitment model after discovering that the algorithm favoured male candidates over female ones. The root cause: the model was trained on résumés collected over a 10-year period, most of which came from men.
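The Amazon episode shows how skew in the training data flows straight into model behaviour. Below is a minimal sketch of a pre-training data audit; the dataset, column names and thresholds are hypothetical illustrations of the idea, not Amazon’s (or anyone’s) actual pipeline:

```python
import pandas as pd

# Hypothetical hiring data: 90% of historical candidates are male,
# mirroring the kind of skew reported in Amazon's training résumés.
df = pd.DataFrame({
    "gender": ["male"] * 900 + ["female"] * 100,
    "hired":  [1] * 300 + [0] * 600 + [1] * 20 + [0] * 80,
})

# 1. Representation check: is any group severely under-sampled?
share = df["gender"].value_counts(normalize=True)
print(share)                       # male 0.90, female 0.10

# 2. Outcome check: do the historical labels already encode bias?
hire_rate = df.groupby("gender")["hired"].mean()
print(hire_rate)                   # male ~0.33 vs female 0.20

# A simple gate (thresholds are illustrative): refuse to train on
# data that fails the audit, rather than discovering the bias later.
if share.min() < 0.2 or (hire_rate.max() - hire_rate.min()) > 0.1:
    raise ValueError("Training data fails bias audit; rebalance or re-label first.")
```

A model trained on such data will reproduce the skew it sees, which is why audits like this belong before training rather than after deployment.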
There have been attempts across the globe to promote ethical AI and to introduce guidelines encouraging its responsible use. One such initiative is the set of FEAT principles issued by the Monetary Authority of Singapore (MAS). The principles are intended to promote fairness, ethics, accountability and transparency in the use of AI and data analytics in finance, and are directed at firms offering AI-driven financial products and services, to strengthen internal governance around data management and use. “This will foster greater confidence and trust in the use of AI and data analytics, as firms increasingly adopt technology tools and solutions to support business strategies and in risk management,” according to MAS.
There are also the Ethics Guidelines on AI recommended by the EU High-Level Expert Group on Artificial Intelligence: a list of seven requirements that AI systems should meet in order to be trustworthy. They are human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The Organisation for Economic Co-operation and Development (OECD) contributed its Principles on AI, a list of five “values-based principles for the responsible stewardship of trustworthy AI”, and, following the OECD guidelines, the G20 group of countries adopted human-centered AI principles. Tech firms such as Google and SAP, and non-profits such as the Association for Computing Machinery and Amnesty International, have also published guidelines pertaining to ethical AI, while the Institute for Ethical AI and Machine Learning, a UK-based think tank, has put forward eight machine learning principles guiding technologists to develop machine learning systems responsibly. Ethics is an evolving topic for both human beings and machines. As creators of machine learning-enabled solutions, we are waiting for holistic, universally accepted ethical AI norms. What we could deduce after going through the various ethical AI documents are five common principles: explainability, accountability, security and safety, privacy, and unbiasedness.
As a regulatory technology company providing machine learning-powered compliance solutions, Tookitaki believes that business users and regulators should have visibility into machine learning models and their predictions. We have therefore enabled ‘glass-box’ transparency in our solutions – the Anti-Money Laundering Suite and the Reconciliation Suite – giving users explanations in business language through our patent-pending explainability framework. Unbiasedness is another area we are working on: we ensure the accuracy, quality and relevance of the data used to make fair predictions, and we have processes in place to monitor, understand and document potential bias in development and in production. To ensure safety, security and privacy, we build for technological robustness and maintain data and model security processes and infrastructure.
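To make the idea of ‘glass-box’ explanations concrete, here is a minimal sketch using a linear model, where each feature’s contribution to a prediction is simply its coefficient times its value. The feature names and data are hypothetical, and this is not Tookitaki’s patent-pending framework – just the simplest illustration of attribution-based explanations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical AML-style features, for illustration only
feature_names = ["txn_amount_zscore", "txns_past_24h",
                 "new_counterparty", "high_risk_geography"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Synthetic labels loosely driven by the features
y = (X @ np.array([1.2, 0.8, 0.5, 1.5])
     + rng.normal(scale=0.5, size=500) > 1).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x):
    """For a linear model, contribution_i = coefficient_i * value_i.
    Rank the contributions and phrase them as plain-language reasons."""
    contributions = model.coef_[0] * x
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda t: -abs(t[1])):
        verb = "raised" if c > 0 else "lowered"
        print(f"- {name} {verb} the alert score by {abs(c):.2f}")

alert = X[0]
print(f"Alert probability: {model.predict_proba([alert])[0, 1]:.0%}")
explain(alert)
```

Linear coefficients are the simplest case; attribution methods such as SHAP extend the same idea to tree-based and deep models. The point is that every alert ships with a ranked, human-readable list of reasons rather than a bare score.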
Yes, modern machines need to be ethical as we inject different types of human intelligence into them in a phased manner. The AI world clearly cares about ethics, and so do we at Tookitaki. It is a good sign that many governments, intergovernmental bodies, think tanks and corporations agree on certain key guiding norms for ethical AI. However, there are still challenges around underrepresentation and differences of opinion over the means used to ensure ethics. A global consensus is needed, along with conscious efforts to involve underrepresented regions and make ethical AI norms as comprehensive as possible without abandoning basic values.