Predestination – the idea that all events in your life have been predetermined and that free will is an illusion – has been around for a long time. However, in the age of the algorithm the question of free will has taken on new significance. Today artificial intelligence (AI) and algorithms operate as gatekeepers between brands and consumers and play a huge role in predetermining our choices.
As consumers we already have some awareness of this – we all know that Google’s algorithms decide which search results appear at the top of the page, or that Facebook’s promote certain stories to the top of our newsfeeds. And now these algorithms are entering the physical world with intelligent assistants like Amazon’s Alexa, which can develop biases that favour products or brands a consumer has purchased from before. These algorithms are often extremely convenient for us as consumers, yet it can be a short journey from convenient to creepy. Established players including major banks, retailers and telcos are all having to learn how to navigate the digital world, and one of the biggest lessons is how to harness the power of the algorithm.
The robotic consumer
The first problem for businesses is figuring out how to survive in a world where algorithms are starting to take more and more decisions away from consumers. For example, if I order something through my Alexa, rather than giving me every option on Amazon, Alexa’s algorithms will present me with at best one or two choices – usually the most popular ones or those I’ve shown a preference for through my searches and previous behaviour. This makes it difficult for challenger brands to gain traction, something that negatively affects consumers in the long run. The situation is even worse for brands in a market where one company has become synonymous with the product, such as Kleenex and tissues, or Duracell and batteries.
The example above shows how algorithms are becoming the guardians of the customer experience. Tech giants like Google and Facebook now own the customer interaction and are able, either on purpose or by accident, to exclude certain companies from communicating with their customers. In fact, small changes to their algorithms have a multitude of knock-on effects for companies, from pricing to delivery partners. For example, if one product becomes the default on Amazon, competitors may have to cut costs or offer free one-day delivery to try to compete. To thrive, companies need to understand how algorithms are changing the customer experience dynamic and strategically deploy them to their own advantage. Yet that is easier said than done.
Don’t leave algorithms to their own devices
When deploying algorithms, there is a very fine balance to be struck between improving the customer experience and overstepping the mark. Take KFC’s recent innovation, a machine that suggests what you might like to eat by scanning your face and estimating your age and gender. For example, it might recommend that a male customer in his early 20s would like a chicken burger set meal, a side of wings and a Coke for lunch, while a woman in her 50s might receive a recommendation for a healthy salad and still water. Not only that, but the machine can remember previous customers and tailor these recommendations based on each person’s ordering history. In the context of a restaurant it’s a nice addition that brings a bit of fun to the experience, but it could be read as insulting if a customer doesn’t like the result.
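The tiered logic described above – prefer a returning customer’s own ordering history, and only fall back to a demographic guess – can be sketched in a few lines. Everything here (menu items, customer IDs, the demographic defaults) is invented for illustration, not taken from KFC’s actual system.

```python
# Toy sketch of a tiered recommender: order history first,
# demographic default second. All data is hypothetical.

DEMOGRAPHIC_DEFAULTS = {
    ("male", "20s"): "chicken burger set meal",
    ("female", "50s"): "salad and still water",
}

# Hypothetical store of previous orders, keyed by customer ID.
order_history = {"customer_42": ["zinger burger", "zinger burger", "wings"]}

def recommend(customer_id, gender, age_band):
    history = order_history.get(customer_id)
    if history:
        # A returning customer gets their most frequently ordered item.
        return max(set(history), key=history.count)
    # Otherwise fall back to the demographic guess, with a generic default.
    return DEMOGRAPHIC_DEFAULTS.get((gender, age_band), "popular meal deal")

print(recommend("customer_42", "male", "20s"))    # history-based
print(recommend("customer_99", "female", "50s"))  # demographic fallback
```

The fallback branch is exactly where the “insulting if a customer doesn’t like the result” risk lives: the history branch reflects actual behaviour, while the demographic branch is pure assumption.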
However, context is everything. A badly designed algorithm is a recipe for offending customers. Google suffered a PR nightmare a few years ago when an algorithm began identifying black people as gorillas – and it is not alone; that is just one incident on a long list of algorithmic faux pas.
Nor is a stained reputation the only risk. There are numerous regulatory risks associated with deploying improperly calibrated algorithms. Imagine an insurance company adopting the same approach as KFC and offering its policies on the basis of a scan of someone’s age or gender. It’s not hard to envisage a scenario where the company suddenly faces complaints of discrimination because the algorithm has determined that a particular policy isn’t suitable for anyone over 45 or below a certain income, or because it assumes a woman of a certain age intends to have children and offers related products. Witness the recent furore when it was reported that Admiral was providing different car insurance quotes based on the ethnicity of the customer’s name. Left to themselves, algorithms can easily fall into such traps, and it’s essential that businesses continuously monitor the outcomes of automated processes to ensure such issues are caught before they become a regulatory or reputational liability.
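In practice, continuously monitoring outcomes often starts with something simple: comparing how often an automated process approves (or recommends, or quotes favourably for) different customer groups. The sketch below is a minimal, hypothetical version – the group names, the data, and the 80% threshold (borrowed from the common “four-fifths” rule of thumb for disparate impact) are all assumptions for illustration.

```python
# Minimal outcome-monitoring sketch: flag any group whose approval
# rate falls below a threshold fraction of the best group's rate.
# Groups, data and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def disparity_report(decisions, threshold=0.8):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: {"rate": round(r, 2), "flagged": r < threshold * best}
            for g, r in rates.items()}

# Hypothetical log: 90% approval for under-45s, 50% for over-45s.
decisions = ([("under_45", True)] * 90 + [("under_45", False)] * 10
             + [("over_45", True)] * 50 + [("over_45", False)] * 50)
print(disparity_report(decisions))
```

A report like this run regularly against live decisions is what turns “continuously monitor the outcomes” from a principle into a process: the over-45 group above would be flagged long before the pattern became a headline.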
A predictive sandbox
So what should companies do? For better or worse, algorithms are here to stay, and businesses must work out how best to navigate an algorithm-first world. One of the best ways of doing this is to model changes in a safe environment – a sandbox – and see how the results pan out. As ever, preparation is key. When dealing with algorithms, especially self-improving ones, small changes can have significant consequences. Some of the most common problems stem from incomplete or inaccurate data, or from the programmers’ unconscious biases being encoded into the algorithm. Businesses need to safeguard against these and other pitfalls, and should ensure they are confident of the outcomes of any alterations to their service.
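Sandboxing a change can be as lightweight as replaying historical data through both the current and the proposed version of an algorithm and measuring what shifts before anything goes live. The example below is a toy sketch of that idea – the orders, brands and rules are invented, and a real sandbox would compare far richer metrics.

```python
# Toy sandbox: replay historical orders through the current and a
# proposed recommendation rule, and measure how the outcomes differ.
# All data and rules are hypothetical.

orders = [
    {"query": "batteries", "purchased": "BrandA"},
    {"query": "batteries", "purchased": "BrandB"},
    {"query": "tissues",   "purchased": "BrandC"},
    {"query": "tissues",   "purchased": "BrandC"},
]

def current_rule(query):
    # e.g. always recommend the overall best-seller
    return {"batteries": "BrandA", "tissues": "BrandC"}[query]

def proposed_rule(query):
    # e.g. a tweak that now favours a different brand for batteries
    return {"batteries": "BrandB", "tissues": "BrandC"}[query]

# How many historical recommendations would change?
changed = sum(current_rule(o["query"]) != proposed_rule(o["query"])
              for o in orders)
# How often does each rule match what customers actually bought?
match_cur = sum(current_rule(o["query"]) == o["purchased"] for o in orders)
match_new = sum(proposed_rule(o["query"]) == o["purchased"] for o in orders)

print(f"{changed}/{len(orders)} recommendations change; "
      f"match rate {match_cur}/{len(orders)} -> {match_new}/{len(orders)}")
```

Even this crude comparison surfaces the key question before deployment: the change flips half the recommendations without improving how well they match real purchases, which is exactly the kind of knock-on effect worth catching in a sandbox rather than in production.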
Beyond that, firms need to review how and where human input is still needed. For all that AI programmes and algorithms can enhance the customer experience, they aren’t going to replace the human element. Combining humans with robots, rather than replacing them, will be the key to the successful digital enterprise, and firms must ensure they have the know-how and expertise to do so smoothly and effectively.
The article was originally published on InformationAge and is reposted here by permission.