Perceived Wisdom: Artificial Intelligence


Artificial Intelligence is a subject that everyone finds fascinating, and it often crops up as a potential area of interest within product development teams.

When this happens, I have a habit of saying “there is no such thing as AI”. I’m aware this seems like a weird thing to say; after all, my degree had a focus on AI. So I want to clear up what I mean by that. To me, it’s common sense, but I can see why there is confusion. All the big companies talk about their AIs. All the small companies say their product is powered by AI. So to say there is no such thing sounds like a very controversial statement.

Why is there no such thing as AI? What is wrong with saying “let’s add AI to this”? Why does “powered by AI” make no sense to me?

I don’t say it to be provocative; I say it to make the conversation more useful. What does it mean to say “we are going to add AI”? Within these conversations it’s such an abstract idea that it doesn’t really mean anything. And if it doesn’t mean anything, how can it be a useful topic of discussion? What can we change in the way we talk about potential uses of AI to make it a more productive conversation?

My understanding of AI may well be wrong, or clash with what other people perceive it to be, so I will be clear about how I view it. To me, Artificial Intelligence is an emergent property of a system. You do not directly “add AI”; it results from the way you develop your system, perhaps using tools and techniques such as neural networks or genetic algorithms.

To be even more precise: AI is the emergent property of a sufficiently advanced system where the output is not pre-defined. A deterministic algorithm, where you always get the same output for a given input (1 + 1 = 2 will always be true), doesn’t fall under the scope of AI because you pre-define how that computation is achieved.

You could go even further and say that if the emergent behaviours literally are the Artificial Intelligence we talk about, then AI could never be an implementation detail. If a system is built which is able to output results that were never considered, never programmed in, then that’s interesting.

Think of a Boids simulation: you set up three simple rules (and maybe a few more to make it more unique) and the Boids fly around in groups… the clustering follows from the rules you set, but it isn’t prescribed by them. Then if your Boids act in such a way that something else happens (maybe they somehow end up preferring to fly with others depending on the size of the flock), that’s where your naive, basic Artificial Intelligence exists… it’s the interesting output of a system that you did not define.

So what’s a useful AI? One where the output, the emerging behaviour, is consistent and reliable but not specified within the implementation. At a very naive level, it’s the difference between Boids preferring small groups because you predefine a limit to the group size, and the same outcome arising from an interpretation of the rules (an emergent behaviour) that was not directly set.
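To make that concrete, here is a minimal sketch of the three classic Boids rules (separation, alignment and cohesion) in Python. The neighbour radius, rule weights and Boid count are made-up illustrative values rather than anything from a real implementation. The thing to notice is that nothing in the code mentions a flock or a group, yet grouping is what you get when the rules run together.

```python
import numpy as np

# Minimal 2D Boids step: three local rules, and no "flock" anywhere in
# the code. The grouping behaviour emerges when the rules run together.
# All constants are illustrative, not tuned values from anywhere real.

N = 50
rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(N, 2))
velocities = rng.uniform(-1, 1, size=(N, 2))

def step(positions, velocities, radius=10.0):
    new_velocities = velocities.copy()
    for i in range(N):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbours = (dists > 0) & (dists < radius)
        if not neighbours.any():
            continue
        # Rule 1: separation - steer away from neighbours that are too close.
        too_close = neighbours & (dists < radius / 3)
        separation = -offsets[too_close].sum(axis=0)
        # Rule 2: alignment - match the average heading of nearby Boids.
        alignment = velocities[neighbours].mean(axis=0) - velocities[i]
        # Rule 3: cohesion - drift towards the neighbours' centre of mass.
        cohesion = positions[neighbours].mean(axis=0) - positions[i]
        new_velocities[i] += 0.05 * separation + 0.05 * alignment + 0.01 * cohesion
    return positions + new_velocities, new_velocities

for _ in range(200):
    positions, velocities = step(positions, velocities)
```

Run that for a few hundred steps and the positions cluster into groups that no line of the code ever asked for.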


Let me put it this way: Artificial Intelligence is an emergent property of a system that meets sufficient success criteria.

Okay, so we have an understanding that when we talk about AI we probably mean having some kind of “black box” in which something (be it basic rules or more complex systems) somehow gets interpreted to produce useful results, without being prescriptive about how that happens. It’s the result that we care about, not the implementation. Though in a product development meeting… I promise you, the developers will be concerned about the implementation details!

Now what if we look at the most basic way to program anything: a combination of if statements? If we go from talking about AI emerging from a fuzzy interpretation of rules, it almost feels easy to say that if statements are the complete antithesis of it. With a set of if statements you have complete control of what happens from input through to output. However, maybe it’s more nuanced than that.

Chatbots have been around for a very long time, since ELIZA back in the 1960s. While they are no doubt more complicated these days, at the core a chatbot is a set of responses to a more or less predefined set of inputs. A basic implementation of a chatbot is never going to pass the Turing test, which looks for Artificial Intelligence that is indistinguishable from human intelligence. However, I remember as a kid playing with the ALICE bot (an early-2000s successor to ELIZA) for hours at a time, having fairly realistic conversations. So I think it’s fair to say that, to some extent, a form of AI does emerge from a large enough set of pre-defined inputs and responses (including the ability to do good enough pattern matching), in the same way as it could in more complex systems.
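To show just how little machinery that takes, here is a toy sketch of an ELIZA-style bot in Python. The patterns and responses are entirely invented for this example; the real ELIZA and ALICE rule sets were far larger and more sophisticated, which is exactly the point: scale the rules up far enough and the illusion starts to hold.

```python
import re

# A toy ELIZA-style bot: pattern matching plus canned responses.
# These three rules are invented for illustration; the real bots had
# thousands, and the size of the rule set is what sells the illusion.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Tell me more."

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel stuck on this project"))
# -> Why do you feel stuck on this project?
```

Nothing in there is anything more than if-statement logic with better pattern matching, yet with enough rules that kind of bot held my attention for hours as a kid.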

Likewise, a game AI that is completely scripted but feels like real interaction shouldn’t be laughed at. But equally, you wouldn’t say the developers used artificial intelligence to achieve it; it’s a form of AI that emerges from a sufficient set of scripted events.

On the other hand, neural networks are one of the leading technologies for AI at the moment. A neural network perfectly matches the criteria of a black box: many inputs and many outputs, without a full understanding of what’s happening in the middle. But even then, it’s a stretch to say your product is “powered by AI”, isn’t it? It’s a neural network that is trained well enough to get you good results, somehow.
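As a sketch of that black-box shape, here is a tiny feedforward network in plain numpy. The weights are random stand-ins; in a real product they would be learned from training data, which is precisely the part nobody hand-writes or can easily read back out.

```python
import numpy as np

# The black-box shape of a neural network: inputs in, outputs out, and
# the behaviour lives in weights that no developer wrote by hand.
# These weights are random stand-ins; a real model learns them from data.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # hidden layer -> output layer

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.maximum(0, x @ W1 + b1)  # ReLU activation
    return hidden @ W2 + b2

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))  # two outputs from four inputs
```

Every line of the implementation is ordinary arithmetic; the interesting behaviour only shows up once the weights encode something learned.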


So what’s my point?

In all of these cases, the implementation detail is not the thing that makes it AI. Whether it’s a set of prescribed patterns or a very advanced technology, it’s the output that makes it AI. It’s the emerging behaviour in which someone feels like they are interacting with something clever that can give them an experience, or a set of results, that seems sufficiently successful. It’s like the uncanny valley in computer graphics: The Simpsons works at one end and perfectly human-looking CGI works at the other, but in the middle, in the uncanny valley, it just doesn’t work. The same goes here: no matter which technology you use to achieve it, it’s only “AI” if the results are good enough.

Going back to the beginning: in a product development discussion, the reason saying “there is no such thing as AI” is useful is that it forces us to work out what a good result would be. If everyone agreed that yes, at some point we will add AI… that doesn’t answer the question of what it actually means. It would be impossible to estimate, impossible to figure out how to test, and impossible to even know what the point of it is.

Another simple example of this is satisfaction scores. Most product development teams will have some kind of measure of their success, and user satisfaction is a useful way to do that. Now, it wouldn’t make sense to talk about “adding user satisfaction”, would it? You wouldn’t schedule some time in the roadmap to “do satisfaction”. You might instead say “let’s spend some time working out why that result is low and trying to improve it”. Equally, you might say “we need a way to generate X result without predefining all the possible ways to do it”. Then you’re talking in the realms of AI without using it as a noun, as something that can just be added.

Now, stepping back from all of that… am I just talking semantics? Maybe we are just too far gone. Like the old adage that “the cloud is just someone else’s computer”, maybe when people talk about adding AI to a system we need to just let them. Instead of arguing like I so often do, maybe I just need to hear the words “let’s add AI” as “let’s do something interesting”. If that means creating a basic set of calculations to do something new, or hooking into a neural network to get interesting results… so be it. A discussion needn’t be stopped just because it doesn’t quite make sense, but I do think anything we can do to be more precise about intentions and outcomes is worthwhile. Otherwise work could be planned in, and by the time you get around to doing it, everyone will ask “right… what exactly are we doing?”. It is semantics, but it’s useful semantics.