The proliferation of machine learning technologies this decade has not come without controversy and ethical concern.
Earlier this year, actress Scarlett Johansson was left ‘shocked’ and ‘angered’ after OpenAI, the company behind the revolutionary large language model ChatGPT, launched a chatbot voice ‘eerily similar’ to her own. Johansson had previously turned down an approach by the company to voice its new chatbot, which reads text aloud to users.
While the story of an A-lister’s voice being imitated by AI grabbed the headlines, the real narrative, and the one going unnoticed, is the potential for AI to innovate and charge ahead without consideration of ethical risks. The possibilities of AI development can easily dazzle, while the ethical risks, less exciting but arguably more important, are easily overlooked. A lack of ethical practice in AI threatens to undermine its full potential and to endanger individuals and institutions. In the worst case, such systems will learn unethical behaviour embedded in their training data in ways that are difficult to predict or even to discover.
As such, it is vital that AI models are developed ethically and responsibly, with appropriate considerations and protocols. Specifically, a robust data governance framework should be in place to define data quality standards, processes, and roles, with employees across an organisation aware of the protocols.
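For illustration, such a framework can be backed by automated checks. Below is a minimal Python sketch of the kind of data-quality validation a governance policy might mandate before data reaches a model pipeline; the column names, rules, and thresholds are illustrative assumptions rather than prescribed standards.

```python
# Minimal sketch of automated data-quality checks that a governance
# framework might mandate before data reaches a model pipeline.
# Column names, rules, and thresholds here are illustrative assumptions.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality violations."""
    violations = []

    # Completeness: no column may exceed 1% missing values.
    for col, frac in df.isna().mean().items():
        if frac > 0.01:
            violations.append(f"{col}: {frac:.0%} missing (limit 1%)")

    # Uniqueness: records must not be duplicated.
    duplicates = int(df.duplicated().sum())
    if duplicates:
        violations.append(f"{duplicates} duplicate rows found")

    # Validity: an assumed 'age' field must fall in a plausible range.
    if "age" in df.columns:
        out_of_range = int(((df["age"] < 0) | (df["age"] > 120)).sum())
        if out_of_range:
            violations.append(f"{out_of_range} rows with out-of-range age")

    return violations

sample = pd.DataFrame({"age": [34, 52, -3], "income": [48_000, None, 61_000]})
for violation in run_quality_checks(sample):
    print("VIOLATION:", violation)
```

In practice, checks like these would sit as a gate in the data pipeline, with ownership of each rule assigned under the roles the governance framework defines.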
In addition, adopting sound and legitimate policies toward AI at a societal level could help address lingering concerns. These policies could include:
General Ethical Considerations: Companies should incorporate ethical considerations into their development processes. This means thinking critically about the potential social and psychological impacts of their technologies before bringing them to market.
Avoiding Hubris: Addressing the hubris of assuming that market players can control or mitigate the risks inherent in these advanced technologies. Companies should be more humble and cautious in their approach.
User Consent and Transparency: Ensuring that individuals are fully informed and have given their consent before technologies that mimic or draw on their voice or likeness are deployed. Transparency about how these technologies are developed and their potential impacts can build trust and avoid backlash.
Learning from Criticism: Companies should be open to criticism and learn from public and expert feedback. This can help them to identify potential issues early on and address them proactively.
Focus on Human-Centric Design: Emphasising technologies that enhance human connection and well-being, rather than replacing or diminishing it, can align technological advancements with positive societal outcomes.
In terms of responsible AI model development and integration, there are several factors that could be considered:
Incorporating Ethics into Development: Ethical considerations should be integrated from the very beginning of the model development process. This involves having dedicated ethics teams or committees that can provide oversight and guidance on potential ethical dilemmas.
Diverse Perspectives: Bringing in a diverse range of perspectives can help identify potential ethical issues that might not be immediately apparent. This could include involving ethicists, sociologists, psychologists, and other experts in the development process to provide a broader understanding of the societal impacts.
User-Centered Design: Emphasising user-centered design can ensure that technologies serve the needs and well-being of users. This involves continuous user feedback and iterative design processes that prioritise the user experience and ethical implications.
Transparency and Accountability: Transparency in how technologies are developed and used is critical. Companies should be open about their data sources, methodologies, and the potential impacts of their technologies. Accountability mechanisms should also be in place to address any misuse or unintended consequences.
Regulatory Compliance: Adhering to existing regulations and advocating for new ones where necessary can help ensure that ethical standards are met. Regulatory bodies have a major role to play in overseeing the development and deployment of advanced technologies.
Testing and Validation: Regularly testing AI models and validating their data inputs to confirm that models are free from bias; a minimal sketch of one such check follows this list.
Education and Training: Providing ongoing education and training for developers and engineers on ethical considerations and responsible AI practices can help instill a culture of ethical awareness within tech companies.
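To make the testing and validation point concrete, the sketch below shows one form an input-data check could take on each retraining cycle, failing if any demographic group is severely under-represented in the training set. The column name and the 10% floor are assumptions made for illustration, not recognised standards.

```python
# Minimal sketch of an input-data validation run on each retraining
# cycle: it fails if any demographic group falls below a minimum share
# of the training set. The column name and 10% floor are assumptions.
import pandas as pd

def check_group_representation(df: pd.DataFrame, group_col: str,
                               floor: float = 0.10) -> None:
    """Raise if any group's share of the data is below the floor."""
    shares = df[group_col].value_counts(normalize=True)
    under = shares[shares < floor]
    if not under.empty:
        raise ValueError(f"Under-represented groups: {under.to_dict()}")

training_data = pd.DataFrame({"group": ["a"] * 80 + ["b"] * 15 + ["c"] * 5})
check_group_representation(training_data, "group")  # raises: 'c' is only 5%
```

A passing check of this kind does not guarantee an unbiased model; it only rules out one obvious failure mode, which is why outcome-level testing of the model's decisions also matters.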
Encouragingly, developments in this regard are being made across different jurisdictions in the financial sector. A recent example is Singapore, which is adopting a principles-based approach to AI adoption through its financial regulator, the Monetary Authority of Singapore (MAS), and has established the FEAT framework (fairness, ethics, accountability, and transparency) for financial institutions. The FEAT framework encourages firms to conduct self-assessments to ensure AI systems, particularly in credit scoring, operate transparently and avoid biases against specific demographic groups.
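FEAT is principles-based and does not prescribe particular metrics, but a firm's self-assessment of a credit-scoring model might include an outcome-level measure such as the disparate impact ratio sketched below. Neither the metric nor the roughly 0.8 review threshold is mandated by MAS; both are common conventions used here as assumptions.

```python
# Illustrative outcome-level fairness measure for a credit-scoring
# model: the disparate impact ratio (lowest group approval rate divided
# by the highest). Neither the metric nor the ~0.8 threshold is mandated
# by FEAT; both are assumptions for illustration.
import pandas as pd

def disparate_impact_ratio(approved: pd.Series, group: pd.Series) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    rates = approved.groupby(group).mean()
    return float(rates.min() / rates.max())

decisions = pd.DataFrame({
    "approved": [1, 1, 0, 1, 1, 0, 1, 0, 0, 1],   # 1 = approved
    "group":    ["x"] * 5 + ["y"] * 5,
})
ratio = disparate_impact_ratio(decisions["approved"], decisions["group"])
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 here; flag if below ~0.8
```

A low ratio does not prove discrimination on its own, but under a self-assessment regime it would flag the model for closer review and documentation.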
Guarding against nascent AI technologies that lack ethical safeguards is currently a key challenge for regulators. For the safe application and use of AI, society must put the relevant protections in place, which would undoubtedly improve confidence in AI technologies and help the industry reach its full potential. In the meantime, companies can and should take it upon themselves to integrate ethical considerations and practices into their AI models if they are to avoid the reputational and financial consequences that come with unethical practice.
For further information, please do not hesitate to contact us at london@greyspark.com with any questions or comments you may have. We are always happy to elaborate on the wider implications of these headlines from our unique capital markets consultative perspective.