A model for ChatGPT as a thinking partner for strategy and innovation
“Any sufficiently advanced technology is indistinguishable from magic” - Arthur C. Clarke
ChatGPT has got a lot of buzz (did it generate this itself?! That would be beautiful), so is it here to save us, to replace us, or is it “just another” tool?
Tom reflects on the usefulness of ChatGPT in helping you through the innovation process - where to lean on it and where to avoid it - with some interesting examples from elsewhere. Great tips, measured reflections and good examples, so enjoy!
Ultimately the tension probably sits in whether you think the brain is a processing unit, in which case worry, or something rather different and unique. Tom takes us through the case, hints at the latter, and leaves us hoping! - TOBY
When the first silent movie was shown to an audience in Paris, the life-size moving image of a train pulling into a station towards the camera allegedly caused the audience to stampede out of the theatre, for fear of being hit.
I think we’re living through something similar with ChatGPT.
The first ‘this changes everything’ realisation for ChatGPT came from its copywriting ability - like these instructions for how to remove a peanut butter sandwich from a VCR, in the style of the King James Bible.
How we laughed, then shuddered at the spectre of a machine that seemed unbelievably, hilariously human.
These copywriting tricks are certainly a fun diversion, but I’m more interested in its value as an innovation and strategy tool - how it might challenge, change and improve the process of innovation, and those of us who practise it.
Based on my own experimentation, on observations of how various strategy and creative teams are using it at Imagination, and on the torrent of conversation online, I think we’re seeing some emerging use cases and operating ‘modes’, as well as the limitations - which in turn point to where the future value might lie, and how it might impact businesses in the creative industries.
To explore them, I’d suggest that we need to think of ChatGPT not as a performing monkey, or as a really smart autocomplete as some are arguing, but rather as a thinking tool or even a thinking partner.
Let’s break that down by looking at two thinking styles that underpin the practice of innovation and strategy formation: divergent and convergent thinking.
Using ChatGPT in divergent thinking
In divergent thinking we’re either exploring the landscape of the problem or opportunity, in search of clues and insights, asking ‘what’s going on’ and ‘why’; or we’re going broad and beyond the immediately plausible to develop a wealth of early ideas that explore the breadth of an opportunity and push the boundaries of the known.
In this mode, Generative AI such as ChatGPT can be used as a discovery tool to help to codify the patterns or ‘rules’ of a category, and then work outwards to find related examples.
A great example of this is the thread from Azeem Azhar, in which he co-created a board game with ChatGPT. Notice how at the beginning he feeds it an example and uses it to establish an abstracted set of principles or rules, which can then be fed back into the tool to search for other, similar examples that meet those criteria. I’ve done the same with business models and value propositions.
I see this as a fantastic time-saving device. The answers aren’t perfect, but they get you close enough that they provide excellent raw material to tweak and re-use.
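For teams who want to script this, here’s a minimal sketch of that two-step pattern - abstract the rules from a known example, then feed them back to generate adjacent examples - assuming the OpenAI Python client; the model name and prompts are illustrative only, not anything prescribed by the examples above:

```python
# A minimal sketch of the two-step 'divergent discovery' pattern described above:
# 1) ask the model to abstract the underlying rules from a known example,
# 2) feed those rules back in to generate adjacent examples.
# Assumes the OpenAI Python client and the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name, not a recommendation
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: codify the 'rules' of a category from a concrete example.
example = "Monopoly: players buy, trade and develop properties to bankrupt their opponents."
rules = ask(
    f"Here is a board game: {example}\n"
    "List the abstract design principles that make it work."
)

# Step 2: feed the abstracted rules back in to explore the adjacent possible.
ideas = ask(
    f"Using these design principles:\n{rules}\n"
    "Propose three new board game concepts that follow the same principles "
    "in completely unrelated settings."
)
print(ideas)
```

The point isn’t the specific prompts - it’s that the abstraction step gives you something reusable to run back through the tool.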
Another fairly straightforward but powerful use within the divergent discovery phases is to quickly summarise existing academic papers, arguments, concepts and frameworks - here’s an example. Again, it’s not giving you the answer, but it saves huge amounts of time in the early stages, when the breadth of exploration serves to open up an opportunity space.
In each of these examples, I think you’re seeing the power of generative AI to expand the surface area of the opportunity - opening up what Steven Johnson calls the Adjacent Possible: the idea that creativity and invention happen through the combination of known things in new ways, and therefore the more you know, the greater the span of your adjacent possible, and the further you can get. In this way, AI tools can open up the adjacent possible with considerably less effort than might previously have been required.
In this sense, ChatGPT has thrown into stark relief the limitations of Google as a research tool, where it is becoming increasingly difficult to find quality, relevant content amongst the vast slurry of content marketing that pollutes the results. As a sidenote, it seems likely that the ease with which ChatGPT can generate decent-quality content will cause a tidal wave of content marketing designed to max out the Google ads model. Google in its current form will literally disappear down its own business model.
Using ChatGPT in convergent thinking
Now let’s flip from divergent thinking to convergent, where we’re no longer exploring but making choices, developing strategies and solutions, then validating them.
In this mode, Generative AI can be used to quickly flesh out ideas - in the example from Azeem Azhar, he gave the tool some broad parameters as to what the board game might be, and the tool quickly fleshed it out with data, information and examples, many of which would have taken a lot longer to find using traditional search. Used this way, it’s rather like someone colouring in a picture - the better the original prompt, the richer the picture will be.
This is where the buzz about ChatGPT has been loudest to date - there’s something undeniably magical about giving it a creative brief (the more ludicrous the better) and watching it return a decent (often astounding) result in seconds.
In strategy and innovation, ideas are often the easy part - they’re ten a penny. It’s finding the right idea, matched to a real, human need or behaviour that’s the difficult part.
Generative AI can be a useful thinking tool in helping to overcome bias in the convergent phase. At this point, we’re trying to land on the right strategy or idea, and as humans in this stage we’re prone to confirmation bias - we fall in love with our ideas; once we think something is right, we look for reasons that confirm it. I’ve used ChatGPT as a quick test to give an outside-in view of an idea: ask it to build the idea forward into a story of success that you can look at, give it a kick, see which points feel least plausible, and go out and test or ideate on those ones.
On this note, I think this turns one of the problems with ChatGPT into a useful tool: the fact that it has no concept of meaning and makes no value judgements means you can get it to argue a point or explore an idea (provided you give it enough direction) and it will heroically do its best to convince you. In this sense, it can be a great way of quickly exploring early hunches, developing counterfactuals, flushing out the assumptions that lie inside ideas and acting as a sparring partner - arguing against your first hunch. Having someone give you a rational argument as to why your hunch is NOT true is a surefire way to sharpen your thinking.
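As a sketch of how that sparring partner might be wired up in practice - again assuming the OpenAI Python client, with an illustrative model name and prompt wording of my own:

```python
# A sketch of the 'sparring partner' use: ask the model to argue against your hunch.
# Assumes the OpenAI Python client; the system prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()

def argue_against(hunch: str) -> str:
    """Return the strongest rational case for why the hunch might be wrong."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a sceptical strategist. Given an idea, make the strongest "
                    "rational case against it and list the assumptions it depends on."
                ),
            },
            {"role": "user", "content": hunch},
        ],
    )
    return response.choices[0].message.content

print(argue_against("Our premium customers will happily pay extra for an AI concierge."))
```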
Another important part of the process, especially in the convergent stage, is the ability to reach a shared understanding - whether that’s in working with teams or in pitching ideas back to clients and stakeholders. I’ve found ChatGPT can be a powerful articulation tool: simplifying complex explanations, making things less wordy or more accessible.
I have a hunch that this might prove really useful as a way to explain or summarise ideas within groups of people who are not accustomed to working together, or who are perhaps working asynchronously. In the creative process, early-stage thoughts and ideas are often lost because they are misunderstood or misconstrued, and valuable time is lost through lack of understanding between teams. I’ve used it as a tool to turn reams of turgid consultant-speak into a few sentences of plain English. I wonder whether this tool might be a useful interlocutor between humans, effectively ‘retelling’ an idea or a line of thinking in a way that is less complicated and more understandable? In this form, it could certainly speed up the cycle time of creative teams. This is similar to how Google Chat uses AI to write short summaries of conversations, to save people from having to scan long threads.
Here’s an example that shows both the benefits and the limitations - see how ChatGPT can make a paragraph of Dostoevsky both easily understandable and utterly boring.
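And for the articulation use, a minimal sketch under the same assumptions as the earlier ones (the prompt and the consultant-speak input are purely illustrative):

```python
# A sketch of the 'articulation tool' use: rewrite dense prose in plain English.
# Same assumptions as the earlier sketches: OpenAI Python client, illustrative model name.
from openai import OpenAI

client = OpenAI()

def plain_english(text: str, max_sentences: int = 3) -> str:
    """Rewrite a passage as short, plain-English sentences for a mixed audience."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": (
                f"Rewrite the following in at most {max_sentences} plain-English "
                f"sentences, keeping the key points:\n\n{text}"
            ),
        }],
    )
    return response.choices[0].message.content

print(plain_english(
    "Our strategic imperative is to operationalise cross-functional synergies "
    "across the omnichannel value chain."
))
```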
What is ChatGPT not good for? What human superpowers (and weaknesses) should we double down on?
I believe that the board game example also demonstrates a structural limitation of relying heavily on ChatGPT: it is brilliant at playing in the world of the ‘known unknowns’ - its field of reference is limited to the data that it was trained on, and the quality of the input. OpenAI remains pretty tight-lipped about what data it was trained on, but one can assume that it *at least* carries the biases that are baked into the data publicly available on the internet. 56% of all content on the internet is in English (fun fact: the next most prevalent is Russian, at 5.2%). Without some inventive human input, these tools can only ever conceive of things that resemble other things from the past. Sometimes that’s sufficient, but the truly inventive, creative, original strategies and ideas live outside of our common frames of reference and expectations for how the world should work.
One example of this is how ChatGPT is pretty terrible at metaphors. Even when directly prompted, it usually fails George Orwell’s famous first rule of writing: “Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.”
Metaphors are in themselves small moments of linguistic innovation - a great metaphor is surprising and satisfying: it shines a light on a quality of the object being described that you hadn’t appreciated, it can make you smile or grimace, and it can create ‘aha’ moments of deeper understanding, which makes it extremely useful for communicating abstract ideas. Great metaphors are particularly human.
The reason for this is that ChatGPT is trained on vast amounts of *written* content published on the internet. Shocking fact: 27% of the world’s population has never been on the internet. A real danger of ChatGPT is that it will create an ever-increasing pool of content drawn from an ever-decreasing range of perspectives and life experiences. The human value is in the personal, the incidental, the observed: the snatches of conversation, emotion and experience that are not recorded, categorised and stored in the cloud.
It’s also worth remembering that despite its unnervingly human tone, ChatGPT has precisely zero human insight. A critical part of strategy and innovation is looking at the real world, at human behaviours and asking ‘why?’ to understand the deeper reasons behind people’s actions. In my experience, this usually comes from observation, from unstructured conversations and from empathy - well outside the scope of Generative AI.
It’s also curiously limited by one of its most jaw-dropping capabilities - its speed of response. In the world of creativity and innovation, there is a lot to be said for a bit of friction and some procrastination. There’s a healthy body of research that points to the value of both of these things in enabling us humans to slip out of directed, convergent thinking into more creative, intuitive frames of mind. Sherlock Holmes referred to ‘three pipe problems’, and Darwin famously had his thinking path, where he wore a groove in the ground while working through his ideas for On the Origin of Species.
Might it be more than just a tool..?
It’s obvious but important to recognise that though these tools seem human-like, they are simply cleverly engineered machines: learning, knowledge and articulation as a series of binary switches. That’s not how the brain works, but the way these tools can support us humans in exploring our ideas is extremely powerful.
For that reason, I think that the chat interface of ChatGPT may prove to be a gamechanger for how the internet is used within knowledge industries. Like it or not, over the past decade or so, Google has become one of our primary tools for research. The Google search prompt is beautifully simple, but unidirectional - we put in a search term, get back a set of results that we must evaluate, and then go again. It puts us into a ‘command and demand’ type of mental mode. By contrast, the chat interface and the human-like response from ChatGPT create a back-and-forth that can bump you into a different mental mode - more playful, more experimental, more creative - much like a human thinking partner.
So what does this mean for the practice (and by extension the business model) of innovation and strategy? This is a tool that can shorten cycle times through the ability to find more stuff more quickly, synthesise first versions, and flesh out and articulate ideas. It will certainly increase the speed, and perhaps reduce the cost, of some activities, but may equally increase the premium on more difficult human-centred skills.
But also: garbage in, garbage out. The output is only as good as the input. The innovators, strategists and creatives who thrive will be those who understand both the power and the limitations of ChatGPT and use both to their advantage; who can step out into the messy, unstructured reality of the real world to bring back original observations and insights; who can ask creative questions, spot hidden patterns, find the weak points and creatively address them; who can articulate, inspire and connect with others; and who can, ultimately, dial up the humanity.