The world is on the cusp of an artificial intelligence (AI) revolution. Every aspect of society will be transformed, from how we interact with services and people to how we work.
The impact of AI is already being felt: more and more companies use generative AI chatbots to communicate with customers and content-creation models to draft documents. Some 83% of companies say AI is a top priority in their business plans, and the value of the AI industry is projected to grow more than thirteenfold over the next six years. The full potential of AI is almost incomprehensible, and much research is being done on its positive impact on the world. Alongside this optimism, however, is the fear that AI could become a technology beyond control, used for unethical and illegal purposes. To combat this, AI governance and regulation are being rapidly drawn up and applied, and they could change the tech industry forever.
Examples of Unethical AI
The most common unethical uses of AI are spreading misinformation, causing job displacement, and violating personal privacy. Misinformation is a key concern because it can sway public opinion and inflict reputational damage on individuals and companies; and because AI can create and distribute information so quickly, it can be difficult to trace where a piece of content originated and to stop it from spreading. On job displacement, there is a justified fear that AI will replace many jobs at all levels of society and that workers will have little to no protection against this. Already, we are seeing the rise of AI avatars in business marketing: systems that mimic human behavior and interaction, answer customer inquiries in real time, and build rapport by delivering a human-like conversational experience. The privacy concerns around AI are also valid, as tech companies rely on gathering huge amounts of data to train their models. There is very little insight into how this data is collected and processed, and much of it is harvested without consent.
AI Governance and Regulation Across the World
As AI becomes more widely used, there has been a significant push to introduce regulations that would make tech companies legally responsible for ensuring their AI models and systems are ethical. A guide to AI by MongoDB details how AI ethics focuses on the moral and ethical implications of AI tools and technologies, i.e., fairness, privacy, transparency, and accountability. There is currently no global standard for AI governance, although UNESCO has put forward the Global AI Ethics and Governance Observatory, designed to provide a global resource for policymakers, regulators, academics, the private sector, and civil society as they seek solutions to the most pressing challenges posed by AI. In the US, the White House has published a Blueprint for an AI Bill of Rights intended to provide comprehensive standards for AI use, while the European Union has passed the AI Act, a legal framework for regulating AI. Other bodies that have drawn up AI governance and regulation frameworks include the Organization for Economic Co-operation and Development (OECD), the US National Institute of Standards and Technology (NIST), and the Group of Seven (G7).
How Governance and Regulations Could Affect the Tech World
These regulations could force tech companies to be much more open about how they develop and integrate AI systems. There have already been examples of AI companies being sued over unethical practices. One of the biggest ongoing cases is a copyright lawsuit brought by Asian News International (ANI), which accuses OpenAI of unlawfully using copyrighted content to train Large Language Models (LLMs) like ChatGPT. In the 287-page lawsuit, ANI accuses OpenAI of illegally using its data to improve its LLMs' performance and alleges that some AI-generated outputs falsely attributed fabricated information to the news agency. If the lawsuit succeeds, it could force AI companies to be far more transparent about how they collect training data.
Ethical guidelines will also force tech companies to adhere to compliance standards regarding bias. There have been many examples of AI bias in healthcare, online advertising, and image generation, where minority communities have been negatively affected. Clear, enforced governance would push tech companies to build software that implements counterfactual fairness: a criterion under which an algorithmic decision is considered fair if it would have been the same in a counterfactual world where the individual's protected attributes, such as race or gender, were different. Tech companies could legally be required to provide evidence that such fairness checks have been implemented.
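To make the idea concrete, here is a minimal sketch of an attribute-flip check in Python. The `approve_loan` model and the applicant fields are hypothetical, invented for illustration; a genuine counterfactual fairness evaluation also requires a causal model of the data, whereas this simpler test only verifies that a decision does not change when a protected attribute is swapped.

```python
# Simplistic counterfactual "attribute-flip" check (illustrative only).
# Full counterfactual fairness requires reasoning over a causal model of
# the data; here we merely swap a protected attribute and confirm the
# model's decision is unchanged.

def approve_loan(applicant):
    """Hypothetical decision model: uses income and debt only."""
    return applicant["income"] > 2 * applicant["debt"]

def attribute_flip_test(model, applicant, protected_key, alt_value):
    """Return True if the decision is identical when the protected
    attribute is replaced with an alternative value."""
    counterfactual = dict(applicant, **{protected_key: alt_value})
    return model(applicant) == model(counterfactual)

applicant = {"income": 50_000, "debt": 10_000, "gender": "female"}
print(attribute_flip_test(approve_loan, applicant, "gender", "male"))
# The model ignores the protected attribute, so this prints True.
```

A regulator-facing audit would run such checks across many applicants and attribute combinations and log the results as compliance evidence, rather than testing a single case.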
The tech industry, which has so far enjoyed much freedom in developing and implementing AI, is about to experience a huge global shakeup. As more countries introduce their own AI regulations to govern how the technology is used, we can expect to see many tech companies held to account. The question is whether the governance and regulations can keep up with the rapid evolution of AI.