Does the rift in AI matter to marketing?

Given all the promises we are hearing about artificial intelligence, it may come as a surprise to learn that researchers are deeply divided on how the field should evolve. The gap is between proponents of traditional logic-based AI and proponents of neural network modeling. As computer scientist Michael Wooldridge puts it in a brief summary of the controversy, “Should we model the mind or the brain?”

That question is, of course, not as simple as it sounds (with AI it never is). Fortunately, the difference is not impossible to explain.

Symbolic AI. Artificial intelligence has its historical roots in a thought experiment published by Alan Turing, known as the “Turing Test.” Without going into detail, the test takes success at imitating human intelligence as its benchmark: it models the mind.

Successful intelligence modeling has been the primary focus of AI for decades. “Symbolic Artificial Intelligence” refers to the common assumption that human intelligence can be reduced to logical statements, the kind that can be captured by symbolic logic.

This approach has enabled AI to make great strides in areas of human intelligence that can be captured by clearly defined rules. That, of course, includes mathematical calculation and, famously, chess. The problem is that much of human thinking does not make its rules explicit, even when rules underlie our thought processes. Traditional AI has therefore lagged in pattern recognition, and thus in image comprehension. And try writing down a set of rules for skills such as hitting a baseball or riding a bicycle. We have learned to do these things (well or badly), but not by studying a series of statements describing the actions involved.
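To make the symbolic style concrete, here is a toy sketch (my own illustration, not from the article) of intelligence reduced to explicit if-then rules over symbols; the facts and rules are invented for the example:

```python
# Toy rule-based inference: intelligence as explicit if-then rules over symbols.
# The rules and facts are illustrative inventions, not a real expert system.

rules = [
    ({"has_feathers", "lays_eggs"}, "bird"),
    ({"bird", "cannot_fly"}, "flightless_bird"),
]

def infer(facts):
    """Apply the rules repeatedly until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_feathers", "lays_eggs", "cannot_fly"}))
```

The appeal is transparency: every conclusion can be traced back to an explicit rule. The limitation, as the paragraph above notes, is that skills like riding a bicycle resist being written down this way at all.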

Deep learning. An alternative approach to artificial intelligence is sometimes misdescribed as modeling the networks in the human brain. More accurately, it is inspired by how the brain’s neural networks work. Without going too deep: large artificial networks of ‘nodes’ are trained on huge datasets to recognize statistical relationships in the data, and feedback loops between the layers of nodes create the possibility of self-correction. The large number of node layers is what makes the approach ‘deep’ learning.
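As a rough illustration of the mechanism (my own sketch with made-up sizes and data, nothing like a production system), here is a minimal network of nodes that adjusts its weights through a feedback loop, backpropagation, to reduce its error on a tiny task:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny 2-2-1 network learning XOR -- a toy stand-in for "deep" learning,
# which stacks many more layers and trains on vastly larger datasets.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    return h, sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_loss = loss()
lr = 0.5
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: the feedback loop that lets the network self-correct.
        dy = (y - t) * y * (1 - y)
        for j in range(2):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
        b2 -= lr * dy

final_loss = loss()
print(initial_loss, final_loss)
```

Notice that nothing in the code states a rule about XOR; the behavior emerges from repeated statistical adjustment, which is exactly why such systems are powerful and hard to interpret.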

It was the scale that hindered the development of this approach. Until recently, there was not enough data or computing power to make deep learning practical and easy. But things are changing, which is why we’ve seen rapid improvements in AI image recognition in recent years.

The downside, especially when it comes to understanding language, is that this extremely powerful engine flies blind. It detects vast numbers of correlations in the data it is fed and responds accordingly. It does not understand that data, so mistakes, or, as many have pointed out, biases, become ever more ingrained unless humans step in to fix things.

Simply put, a deep learning system big enough to ingest the entire internet will absorb a lot of nonsense, some of it harmful. This approach, I might add, also leaves a huge carbon footprint.

What are the consequences? Does any of this matter to marketers? For marketers not invested in the project of modeling human intelligence, perhaps not, although it does suggest that the idea of entrusting strategy or business planning to artificial intelligence is still a long way off.

If there is an upside, it is that this gap between symbolic AI and deep learning supports what may be the desirable outcome: AI handling a range of marketing functions, such as campaign optimization, personalization, and data management, while leaving marketers free to be strategic and creative.

Marketers, in other words, need not spend much time worrying that AI will take their jobs. Artificial intelligence is far from ready for that.
