AI metaphors for the future

In tech circles, reasoning by analogy gets a hard time.

It is the pastime of a bygone age. The purview of gentleman scientists with their florid descriptions of cold, hard physics. The carry-on of a Bible-rationaliser devoting years to justifying opaque or contradictory explanations. Or a literary critic, piling unintended, long-winded, hidden meanings on top of a perfectly intelligible novel.

The language tool misapplied, when other tools would work better.

And yet, and yet.

In idea generation, I’ve found reasoning by analogy a useful and interesting companion. Not the place to go for rock-solid conclusions, but a place to explore for insight and potential third- and fourth-order effects.

What follows are musings on the implications of a world with widespread usage of AI and data products.

These are four mental models that tend to get repeated in the media, on the conference circuit or in businesses where AI-fever has struck. I have found each a useful frame of reference for thinking about what a future with widespread AI could look like.

There’s merit to each — though of course, none of these is anywhere near an accurate oracle.

AI as electricity

The first, and in my opinion the most powerful, is the idea of AI as electricity. This metaphor was popularised by Andrew Ng. In a nutshell, it asks us to think about a future 10-30 years out, where AI is as deeply integrated into life as electricity is today.

[Image: The original TensorFlow.]

In other words, a future where AIs are part of everyday life, so commonplace that we take them for granted, and where they enable a multitude of new applications that we have yet to consider.

This is fertile ground for idea-work.

Extended further, the AI-as-electricity trope gives us a glimpse of what the market landscape might look like. While we are in the midst of a stampede of large companies eager to build the infrastructure underpinning AI, it’s likely that this infrastructure will become commoditised and run by a handful of big players. We are already beginning to see this with TensorFlow, AWS and Azure.

AI as cognification

Physical products are the embodiment of intelligence. Intelligence deployed by the designer and manufacturer and hard-coded in the atoms of the product itself.

Physical products are good. But what if they could adapt to their environment, adapt to their use and adapt to us? There are myriad ways in which a little intelligence added to something would make for a better user experience. As the costs of AI come down, that becomes easy to do.

[Image: Hard-coded intelligence.]

At present, the vast majority of AI use cases are heavily clustered in online applications, where the infrastructure exists to drink up the data required to train and deploy the models. As Moore’s Law lends its weight to the Internet of Things and the cost of data collection in the physical world falls, we will see more and more AI out in the physical world.

As Kevin Kelly urges us to do in The Inevitable, viewing the capabilities of AI through this smaller, more focused lens reveals just how disruptive a smattering of smartness could really be.

AI as killer robots. AI as flawed thinkers.

Who doesn’t love a bit of scare-mongering? Nothing like a few flicks of the amygdala to get those fearful juices flowing and drive traffic to a silly article about Skynet.

Conflating AI with Skynet, Terminator, or the Matrix is problematic. Not only is it false (please God, let it be false), but it diverts attention away from two real concerns that AI introduces.

The first is that of AI safety. In a world where superhuman, relatively general intelligences exist, it is not unreasonable to assume that they might be rankled by having to listen to the orders of their monkey-brained mom and dad and decide to go it alone. Nick Bostrom and Max Tegmark spell out such scenarios in detail in Superintelligence and Life 3.0. While you’d be hard-pressed to find a reputable AI researcher who believes this scenario is just around the corner, the potential existential threat it poses, even if unlikely, warrants attention and research.

[Image: Thinking, deciding, generally being a bit biased.]

Critics of this school of thought argue that widespread attention to this potential threat puts at risk the benefits that immediate-term AI can bring us: in healthcare, in education, in fighting climate change. I’m inclined to agree; AI safety should remain a specialised topic.

For the more immediate future, our concern is best directed to the ethics associated with applying AI.

For designers and developers, two topics should concern us: the possible error states of an AI product, and the in-built biases in our data collection and models. These are the questions that, left unanswered, will hamper the spread of useful AI.

I’ve found the metaphor of AI as a flawed decision-maker extremely useful. Thinking of deployed AIs as flawed decision-makers (as all decision-makers are) highlights the risks that any AI product poses. It forces us to think through the certainty of errors occurring and what these errors will mean for our product and its users. It forces us to design for those scenarios and opens up questions of how we might teach our models to be less biased.

— — —

Disclaimer: the author does not accept responsibility for any harm caused to you or others as a result of reasoning by analogy.

— — —

This essay originally appeared as a featured article on Medium: Towards Data Science.
