Like the internet before it, the evolution of artificial intelligence (AI) is set to radically transform the way businesses work. However, while it is poised to bring many benefits to companies, customers and society as a whole, it is also a technology that poses significant threats. Indeed, Geoffrey Hinton, the Nobel Laureate known as the ‘Godfather of AI’, even quit his post at Google so that he could speak openly about the technology’s dangers. In this post, we look at the ethical issues surrounding AI and explain why UK businesses must work with the technology in a responsible way.
Ethical issues with AI
The biggest threat from AI is not yet with us. That will come when we develop artificial general intelligence (AGI), where AI models have intelligence equal to or greater than that of humans. While some argue that this could lead to fantastic new discoveries, like cures for diseases or ways to address climate change, others, like the late Professor Stephen Hawking, claimed it ‘could spell the end of the human race.’ Superintelligent computers may be able to improve their own abilities beyond the control of people, giving them free rein to pursue their objectives even if this meant causing harm to humans. The main ethical issues here are whether these models should be developed at all and, if so, how we ensure they remain under control and out of the hands of malicious users.
What can AI do for you? Read: Generative AI: What is it and How Can it Benefit Website Owners?
While today’s models are far less evolved, they still pose significant threats. One of these is that they can be biased and produce unfair outcomes. That bias comes not from the AI systems themselves but from the historical data used to train them: if the data is skewed, the model’s outputs will be too. For example, if a company building an AI recruitment tool trains it on historical data about its existing staff, and those employees are predominantly of one gender, race or background, the tool is likely to favour similar candidates in its recommendations. Not only is this potential grounds for discrimination; it also hampers the company’s search for the best candidates. Similar biases can be found in credit scoring, advertising and countless other AI algorithms. To avoid this, businesses need to identify and remove bias from their training data and regularly test their AI systems’ outputs.
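As a rough illustration of what such a test can look like, the sketch below compares the rate at which a hypothetical screening model shortlists applicants from each group. The data and column names are illustrative assumptions, not a real dataset or tool:

```python
# A minimal bias check on a hypothetical recruitment model's outputs.
# The data and column names are illustrative, not a real dataset.
import pandas as pd

# One row per applicant: the group they belong to and whether the
# model shortlisted them.
results = pd.DataFrame({
    "gender":      ["F", "F", "F", "F", "M", "M", "M", "M"],
    "shortlisted": [0,   1,   0,   0,   1,   1,   0,   1],
})

# Selection rate per group: the share of each group that gets shortlisted.
rates = results.groupby("gender")["shortlisted"].mean()
print(rates)

# A large gap between groups suggests the training data (or the model)
# is skewed and warrants further investigation.
print(f"Selection-rate gap: {rates.max() - rates.min():.2f}")
```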
Another issue with AI is its lack of transparency. Although we understand the principles on which these systems are built, even their developers cannot always explain how they arrive at a particular decision. For many AI applications this is of no consequence, but where decision-making affects people, a lack of transparency makes it difficult for businesses to justify those decisions. Potential examples include AI risk-analysis models that decide mortgage applications, or insurance models that determine whether policies are offered and at what price. Businesses that wish to operate ethically will need to ensure that their AI-driven decisions are transparent and can be explained to customers and regulators.
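One common approach is to prefer models whose decisions can be decomposed. As a minimal sketch (the lending features and data here are hypothetical, not a real underwriting model), a linear model lets you report exactly how much each input pushed a decision one way or the other:

```python
# A minimal sketch of an explainable decision using a linear model.
# The features and data are hypothetical, not a real underwriting model.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt", "years_employed"]
X = np.array([[45, 10, 5], [30, 25, 1], [60, 5, 12], [25, 30, 0.5]])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

applicant = np.array([35.0, 20.0, 2.0])
# For a linear model, each feature's contribution to the decision score
# is simply its coefficient multiplied by its value, so the outcome can
# be broken down and explained to a customer or regulator.
for name, c in zip(features, model.coef_[0] * applicant):
    print(f"{name}: {c:+.2f}")
outcome = model.predict(applicant.reshape(1, -1))[0]
print("decision:", "approve" if outcome else "decline")
```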
Data privacy is one of the biggest issues for businesses using AI. AI models rely on vast amounts of data for their training; however, regulations like the GDPR impose strict rules on how that data can be used. For instance, a company that uses a third-party AI solution may need to transfer its data to a different company or even a different country, potentially exposing it to others who have no right to access it. Self-inflicted breaches such as this can be highly damaging and lead to significant fines from the Information Commissioner’s Office (ICO).
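A basic mitigation, sketched below, is to strip obvious personal data from text before it leaves the business. The redaction patterns are illustrative only; a real deployment would need proper PII detection and a documented lawful basis for any transfer:

```python
# A minimal sketch of redacting obvious personal data before text is
# sent to a third-party AI service. The patterns are illustrative; a
# production system needs a far more thorough PII-detection step.
import re

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)            # email addresses
    text = re.sub(r"(?:\+44\s?|\b0)\d{4}\s?\d{6}\b", "[PHONE]", text)     # UK phone numbers
    return text

prompt = "Customer jane.doe@example.com (07700 900123) reports a late delivery."
print(redact(prompt))  # only the redacted text would be sent externally
```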
There are also issues regarding where AI models source their training data. Today, many news websites and other online publications actively block AI bots to stop models using their copyrighted content for training. In the US, lawsuits have been filed against both Google and OpenAI for scraping internet data.
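In practice, this blocking is usually done with robots.txt directives. The sketch below uses three crawler tokens that were publicly documented at the time of writing; the list changes over time, and compliance is voluntary on the crawler’s part, so treat it as a starting point rather than a guarantee:

```
# robots.txt — ask known AI training crawlers to stay away
User-agent: GPTBot            # OpenAI's web crawler
Disallow: /

User-agent: Google-Extended   # opt-out token for Google AI training
Disallow: /

User-agent: CCBot             # Common Crawl, widely used for AI training
Disallow: /
```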
Want to start an eCommerce store? Read: How Visual AI is Reshaping eCommerce
One of the most talked-about issues with AI is its ability to replace human jobs and alter many others. In many ways, this is no different to the development of machines during the Industrial Revolution. It is undeniable that some roles will no longer be required, while in others, tasks will be made easier, enabling workers to become more productive and companies more efficient. AI chatbots are a typical example of how businesses can streamline their online customer service.
Businesses that deploy these AI tools, however, need to consider the ethical impact of their decisions. While some roles will be lost, job losses can be minimised by retraining staff to work in the new environment and take on different roles. In this way, workforces can be upskilled, which is vital for future success, while the social impact of unemployment is mitigated.
The final question surrounding AI is one of accountability – who is to blame if something goes wrong? If someone dies as a result of an autonomous AI-driven vehicle going astray or a medical AI making a wrong diagnosis, is the fault that of the AI developer, the organisation using the AI or the actual AI model? While this is a grey area for all concerned, businesses will need to put clear accountability mechanisms in place in case problems arise from AI and legal cases are raised.
AI regulation and good practice
At present, the two main pieces of legislation that impact the use of AI in the UK are the GDPR, which affects how data can be used by AI models, and the Equality Act 2010, which prohibits discrimination and so has implications for AI bias. In the future, however, regulation is expected to increase.
As part of its National AI Strategy, the government has introduced a 7-point framework for the ethical use of AI. The 7 points in question are: testing AI models to avoid unintended outcomes; delivering fair services for users; providing clarity over accountability; handling data safely; helping users understand how AI impacts them; ensuring that AI models are future-proof; and complying with regulations and laws.
In terms of good practice, companies should regularly audit their AI models to ensure they are unbiased, fair and transparent. This will involve testing for discriminatory patterns and ensuring that decision-making processes can be explained, as sketched below. Beyond the models themselves, it is important that the teams that build, train and deploy them come from a diverse range of backgrounds, as this helps counter the individual biases of those creating the algorithms.
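Parts of such an audit can be automated. The sketch below applies the ‘four-fifths’ rule of thumb, flagging any group whose selection rate falls below 80% of the best-treated group’s. Note that this is a US-derived heuristic used here purely for illustration, not a threshold defined anywhere in UK law:

```python
# A minimal automated fairness audit using the "four-fifths" rule of
# thumb: flag any group whose selection rate is below 80% of the
# highest group's rate. The rates here are illustrative.
def audit(selection_rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    best = max(selection_rates.values())
    return [g for g, r in selection_rates.items() if r < threshold * best]

rates = {"group_a": 0.40, "group_b": 0.28, "group_c": 0.38}
print("Groups needing investigation:", audit(rates))  # ['group_b'] (0.28 < 0.8 * 0.40)
```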
Conclusion
AI offers enormous potential to businesses; however, it can be as much a force for bad as for good. Companies that develop AI, and those that make use of it, must operate ethically to prevent unwanted issues from arising. To do this, they should adopt best practices, work within regulatory frameworks, be accountable and ensure their models are fair and transparent.
Are you considering deploying AI? Our Cloud Servers are built for critical applications and deliver the security, performance, scalability and uptime AI needs. For more information, visit our Cloud Servers page.