Artificial Intelligence is a brilliant business tool that holds the potential to save countless hours of repetitive labor, freeing people to carry out work of greater value, greater interest, and of higher creative worth. It’s clearly here to stay.
IT trade body CompTIA1 calculates that:
- 91% of leading businesses are investing in AI
- 97% of mobile users use AI-powered voice assistants
- More than 4 billion devices already run AI-powered voice assistants
- 40% of people use an AI-powered voice search function every day
McKinsey reports that AI may contribute an extra $13 trillion to global GDP by 20302, as automation increases productivity and fuels innovation in products and services.
It’s about to get really personal too.
AI changes personal and professional lives
It seems likely that AI will feature much more in the way people organize their private lives as well as their work. Bill Gates posits that individualized digital assistants – known as agents – will learn enough about our lives, personalities, and preferences to provide a service that’s unique for each of us.
“An agent will be able to help you with all your activities if you want it to,” he writes.3
“With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.”
So far so good – yet that’s not the whole story.
Real-world impacts of AI
AI is still limited in what it can do. It is also limited in what it should do.
Crucially, AI can only work inside the data sets to which it has access. What might seem like original work pouring out of an app is nothing more than a distillation of information that already exists in the world’s digital brain bank.
Deeply impressive, yes. But also deeply flawed.
It reflects biases, and they can be amplified with every new iteration.
AI is morally neutral. It has no ethical sense. It plagiarizes. It invents. It hallucinates, proposing sequences of facts and events that have no relation to reality. If it can’t find the real-world example you ask for, it may offer a hypothetical case – with worrying consequences for the careless or lazy user who passes it off as genuine and sends it out into the world.
Five areas of potential bias in AI
This has real-world impacts, bringing a threat of bias and discrimination into an organization’s activity. The risk is already clear in five areas where AI is used:
1. Facial recognition
Some systems have been found to be less accurate at identifying the faces of women and people with darker skin than those of men and lighter-skinned people.
A study by Joy Buolamwini, from MIT Media Lab, and Timnit Gebru, formerly a researcher at Microsoft4, exposed big differences in the accuracy of facial recognition systems based on gender and skin type.
2. Predictive policing
Predictive policing tools have been criticized for perpetuating and even amplifying existing bias in law enforcement data. If historical arrest data is biased, the AI model may inadvertently target specific communities, leading to over-policing and reinforcing stereotypes.
3. Job recruitment
AI-driven hiring tools have faced scrutiny for gender and racial bias. If historical hiring data reflects a skewed workforce, AI may exacerbate that bias by favoring certain groups, leading to discrimination. Even Amazon5, one of the great pioneers and advocates of AI, has admitted its own processes have been affected. In 2018 it was reported that Amazon had developed a machine learning tool to assess CVs. It was trained on CVs submitted over a 10-year period in which most applicants were male. As a result, the system reportedly favored CVs that included male-centric language and penalized those that included terms more commonly found in CVs submitted by women. Amazon abandoned the tool after discovering the bias. A short sketch after this list illustrates how that kind of skew in training data can carry straight through to a model’s decisions.
4. Credit scores
If historical lending data is biased, AI algorithms may result in discriminatory lending practices, affecting certain racial or socioeconomic groups. The National Bureau of Economic Research published a 2018 study, ‘Consumer-Lending Discrimination in the FinTech Era’6, based on research by Stanford University, Microsoft Research, and the University of California, Irvine. It found that applicants from minorities, particularly Black and Latinx customers, were more likely to be charged higher interest rates than other customers.
5. Chatbots and virtual assistants
AI-driven bots and virtual assistants may reflect gender bias in their responses. Some have been criticized for responding inappropriately or reinforcing stereotypes. In 2016 Microsoft launched a chatbot, Tay, to interact on social media. It lasted less than a day before it had to be shut down7, as it absorbed and repeated racist and abusive language from users.
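To make the mechanism behind examples like Amazon’s concrete, here is a minimal, purely illustrative sketch in Python. The data is synthetic and the numbers are invented; the point is only that a model trained on skewed historical decisions can reproduce the skew even when the sensitive attribute is never shown to it, because a correlated proxy (here, a hypothetical CV-wording flag) stands in for it.

```python
# Purely illustrative: synthetic "historical hiring" data in which past decisions
# favored men, plus a CV-wording flag that correlates with gender. Nothing here
# is real data or any company's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

gender = rng.choice([0, 1], size=n, p=[0.8, 0.2])       # 0 = male, 1 = female (skewed applicant pool)
skill = rng.normal(0.0, 1.0, size=n)                    # true qualification signal
male_style_cv = (gender == 0) & (rng.random(n) < 0.7)   # proxy feature: "male-centric" wording

# Historical label: partly skill, partly a bias in favor of male applicants.
hired = skill + 0.8 * (gender == 0) + rng.normal(0.0, 0.5, size=n) > 0.5

# The model never sees gender directly -- only skill and the CV-wording proxy...
X = np.column_stack([skill, male_style_cv.astype(float)])
model = LogisticRegression().fit(X, hired)

# ...yet its predicted hire rates still split along gender lines via the proxy.
pred = model.predict(X)
print("Predicted hire rate, men:  ", round(pred[gender == 0].mean(), 2))
print("Predicted hire rate, women:", round(pred[gender == 1].mean(), 2))
```

Run on data like this, the two printed rates diverge sharply – the model has learned the historical bias, not just the qualification signal.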
None of this means that we should thank AI for its services and pull the plug on a powerful innovation that is already firmly embedded in working practices and processes. We don’t need to consign machine learning to the scrap heap of unwanted or unnecessary inventions – along with Google Glass, the Segway, or Bill Gates’s affectionately maligned proto-digital assistant, Clippy.
It has long since gone beyond the tipping point of acceptance into mainstream life. Amazon Web Services defines generative AI – the creation of new images, media, and graphics – as the fastest-growing trend in AI. ChatGPT – probably the best-known AI platform – reached one million users in just five days. That compares to 75 days for Instagram and 150 days for Spotify.
So even if it were possible to cram the genie back into the bottle, that’s not where we are.
Attensi and partners are already gaining real benefits from the application of AI in five key areas:
1. Translations
Adapting our training quickly and easily for more languages around the world.
2. Generating AI characters
Creating super-realistic avatar-style figures that bring a new level of believability to digital scenarios. A great example is Makayla, a character Attensi has introduced to the world. She is a recognizable personality, created with new animation tools, voice recognition, and synthetic voice generation. You can have a real conversation with Makayla, and she is an exciting prototype that we are building on as we develop future solutions.
3. Faster content
For example: input a PDF and the AI can set up 20 multiple-choice questions based on its content (a minimal sketch of this idea follows the list below).
4. Create dialogues
Generating realistic interactions that make simulations credible and compelling.
5. AI voicing
Making digital characters sound precisely and convincingly human, just like Makayla.
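As a sketch of the “faster content” idea in point 3, the snippet below extracts text from a PDF and asks a general-purpose language model to draft multiple-choice questions from it. It assumes the pypdf library and OpenAI’s Python SDK purely as stand-ins – Attensi’s own pipeline is not public, so the model name, prompt, and file name are illustrative, and a long document would also need to be chunked to fit the model’s context window.

```python
# Illustrative sketch: PDF in, draft multiple-choice questions out.
# pypdf handles text extraction; an LLM API (OpenAI SDK here) drafts the questions.
from pypdf import PdfReader
from openai import OpenAI

def quiz_from_pdf(path: str, n_questions: int = 20) -> str:
    # Pull the raw text out of every page of the PDF.
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Ask the model to turn that text into a quiz. Prompt wording is illustrative.
    prompt = (
        f"Write {n_questions} multiple-choice questions, each with four options "
        f"and the correct answer marked, based strictly on this content:\n\n{text}"
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(quiz_from_pdf("training_material.pdf"))  # hypothetical file
```

A human reviewer would still need to check the generated questions before they reach learners – exactly the “human in the driving seat” point made below.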
Using AI with care
What it does mean is that businesses – and individuals, for that matter – must use AI with care. The human element must remain in the driving seat, and people should bring their ethical sense and intuition to every application of AI. Critically, they must remain the master, not the servant, of this technology.
It all speaks to the importance of careful design and vigilant scrutiny to mitigate bias in AI systems. It requires the promotion of a culture where ethics always comes before speed and convenience. That’s a moral issue, yes. But it is also a business issue – any organization with aspirations for long-term sustainability must guard its reputation zealously, so that people continue to respect it, want to trade with it, and want to work for it.
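What might that vigilant scrutiny look like in practice? One simple, widely used check – sketched below with hypothetical numbers and group labels – is to compare a model’s selection rates across groups and flag any group whose rate falls below roughly 80% of the best-treated group’s (the “four-fifths” rule of thumb). It is a screening heuristic, not a legal test, and real audits go much further.

```python
# Minimal bias-screening sketch: compare selection rates across groups and flag
# large gaps using the four-fifths rule of thumb. All data below is hypothetical.
from collections import defaultdict

def selection_rates(decisions, groups):
    """decisions: 1/0 model outcomes; groups: group label for each decision."""
    totals, selected = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        selected[g] += d
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, groups, threshold=0.8):
    rates = selection_rates(decisions, groups)
    best = max(rates.values())
    # Flag any group selected at less than `threshold` times the best-treated group's rate.
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Hypothetical audit of eight decisions across two groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(decisions, groups))        # {'A': 0.75, 'B': 0.25}
print(disparate_impact_flags(decisions, groups)) # {'B': 0.333...} -- group B is flagged
```

Checks like this only surface a possible problem; deciding what to do about it remains a human, and ethical, judgment.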
Tackling the potential danger of discrimination is essential to ensure fair outcomes, and to establish AI as a lasting and valuable benefit for business and for humanity.
Interested in learning more about AI? Listen to our CEO, Trond Aas, and Muhammad Sajid, Senior Solution Architect at Amazon Web Services, discuss the future of AI and how it’s becoming part of our lives.