9 AI predictions for 2023

Setting the Scene

It was once said that “software is eating the world”. If the last few months are anything to go by, one might believe that AI is now re-writing it, re-painting it, and, generally, generating all kinds of new versions of it.

The launch of ChatGPT, and subsequent growth to 1 million users in less than a week, appears to have been a watershed moment, as the wider public consciousness began to awaken to the current state and future possibilities of large language models (LLMs), generative AI, and other artificial intelligence. There is, though, far more to be considered beyond the hype surrounding one particular product, and the overall mainstreaming of AI. So, looking beyond that, what technological advancement, and what applications, might we expect to see more of in the coming year?

Disclaimer: Predictions are hard (if you’re not a machine)

As Bill Gates put it, “Most people overestimate what they can do in one year and underestimate what they can do in ten years.”

The first easy prediction here has to be that this statement fits the common view of AI. In this context, the short-term overestimation will come from a lack of appreciation for the challenges related to data infrastructure and data quality, as well as the difficulties in handling all the nuances and complexity involved in deploying the tech in products and workflows used by everyday users. The long-term underestimation will come from an inability to fully imagine the compounding effect of advancing technology.

Since our launch in 2020, we at J12 have had a strong focus on the data infrastructure required to build intelligent products, as well as backing teams building various advanced applications of AI. From data quality (Validio), to applications in retail (Dema and Formulate), health (unannounced), energy (Ostrom), DevOps (Rely.io, Codeball), cybersecurity (CYBR.ai), and plenty more, we’ve seen how teams at the forefront of their sectors are utilising data and AI to break new ground.

So, here’s what we expect to see over the coming year, and the areas in which we are particularly excited. In other words, let’s have a go at overestimating the year ahead.

2023 Outlook

Surrounded by Models

ChatGPT, built on top of OpenAI’s GPT-3, is so far the model that has been crowned with the most glory. However, even Sam Altman, CEO of OpenAI, has expressed surprise that no one built ChatGPT before they did:

“We had the model for ChatGPT in the API for 10 months before we made ChatGPT… I sort of thought someone (else) was gonna just build it”.

This ought to serve as a clear reminder that a number of similar products remain unbuilt or unreleased right now, and that OpenAI is far from alone. The suggestion that ChatGPT and OpenAI will prove to be the killers of Google search is somewhat fanciful, even if it does raise the question of how search will evolve, and what that may mean for Google’s business model. Both Google and Meta have been developing similar tech, and have the talent to continue to do so, but OpenAI has so far gained an advantage through its practice of releasing language models for the public to use and provide feedback on.

Google is likely to fully release LaMDA to the public this year, while Meta will likely not be left behind. For OpenAI’s part, the next-generation GPT-4 is expected for release in early 2023, and while it won’t have the widely rumoured 100 trillion parameters (up from GPT-3’s 175 billion), it will represent another leap forward.

The race has just begun.

And it’s not only big tech standing on the start line: several startups, such as Cohere, Character.AI, Adept, and Inflection.AI, have developed similar models that will be primed for release.

Structuring an Unstructured World

Around 80–90% of data generated by organisations is unstructured. Examples include email, presentations, phone recordings, media files, surveillance imagery, and so on. Despite these data types containing a wealth of information, they have typically been difficult to analyse.

Large language models (LLMs) can now address these challenges, making sense of large sets of unstructured data and allowing organisations to uncover actionable intelligence. We can expect to see these capabilities employed across a range of business workflows: improving search within enterprise software, helping governments strengthen national security, and surfacing hidden patterns anywhere that structuring data can support scientific breakthroughs or improve customer experiences.
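
As a rough illustration of what this can look like in practice, here is a minimal sketch that turns a free-text customer email into a structured record by prompting a language model. The `complete` function, the field names, and the example email are all placeholders, standing in for whichever LLM provider and schema an organisation actually uses.

```python
import json

# Placeholder for any LLM completion API; swap in the client your
# organisation actually uses. This is purely an illustrative stub.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM provider here")

EMAIL = """Hi, my order #4821 arrived two weeks late and the screen was cracked.
I'd like a refund. -- Dana"""

PROMPT = f"""Extract the following fields from the customer email below as JSON:
order_id, issue_type, requested_action, sentiment.

Email:
{EMAIL}

JSON:"""

def structure_email(email_prompt: str) -> dict:
    """Turn one unstructured email into a structured, queryable record."""
    raw = complete(email_prompt)
    # e.g. {"order_id": "4821", "issue_type": "damaged item", ...}
    return json.loads(raw)
```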

Applications, Applications, Applications — but Who Wins?

As the generative AI infrastructural and platform layers start to take shape, we are seeing the emergence of the application layer.

Generative AI is drastically lowering barriers to creation — with powerful products for written content, text-to-image, text-to-video, and 3D game creation. Just as drastically, we’re seeing huge productivity gains — with solutions enabling code generation, sales team efficiency, customer service improvements, and the automatic creation of everything from marketing material and ads, to legal documents.

Today, it is difficult to see any systemic moats in generative AI, as applications lack differentiation due to the use of similar models. In the near-term, companies will likely gain positioning as a result of being early to apply these models in different contexts, and as a result of building smooth products to serve them. While business understanding and design can provide some defensibility, these tend not to be the strongest barriers over time.

A number of application companies have been able to grow rapidly, successfully acquiring customers that are quick to try new solutions that may give them an edge, but have struggled with retention and differentiation as new players enter the market. Over the coming year, we will learn more about customers’ appetite to pay for these solutions and truly integrate them into their processes, while we’ll also start to see what kind of moats can be established by the companies that are paving the way. These may be related to data (e.g. the use of proprietary data sets on top of foundation models), related to model performance (e.g. the ability to tailor a model to a specific data set), or they may be more traditional (such as distribution and network effects).

Multimodality

Modality refers to “the way in which something happens or is experienced”, and, so far, the models and applications we interact with carry out single-modality tasks: they respond to purely text input, outputting text (in the case of ChatGPT) or an image (in the case of DALL-E).

Multimodality means the ability to combine the understanding of various data types (text, image, audio, video, numerical data), and to utilise that combined understanding to solve tasks involving any modality. Such a model could take a text input and produce video, or take combined image and audio input to produce text. Just as our knowledge of the world comes from a combination of visual, language, audio, and other sensory cues, combining modalities greatly improves the common sense and contextual understanding of AI.

One promising application is in healthcare, where diagnoses could be made from a multimodal analysis of images, described symptoms, and patient data.
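
To make that example more concrete, the sketch below shows one common way of combining modalities: encode each input (an image, a symptom description, tabular patient data) into a vector and feed the concatenated representation to a single prediction head. The encoders here are random-vector stand-ins with invented dimensions, not references to any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoders: in a real system these would be pretrained models
# (an image encoder, a text encoder, a small network over tabular data).
# Here they simply return random vectors of plausible sizes.
def encode_image(pixels: np.ndarray) -> np.ndarray:
    return rng.normal(size=512)   # e.g. a 512-dim image embedding

def encode_text(symptoms: str) -> np.ndarray:
    return rng.normal(size=768)   # e.g. a 768-dim text embedding

def encode_tabular(record: dict) -> np.ndarray:
    return rng.normal(size=32)    # e.g. a 32-dim embedding of labs and vitals

def multimodal_features(pixels, symptoms, record) -> np.ndarray:
    """Late fusion: concatenate the per-modality embeddings into one vector."""
    return np.concatenate([
        encode_image(pixels),
        encode_text(symptoms),
        encode_tabular(record),
    ])

# A single prediction head (e.g. a classifier) then operates on the fused
# vector, so a diagnosis can draw on visual, language, and numerical evidence.
features = multimodal_features(np.zeros((224, 224, 3)), "persistent cough, fever", {"age": 54})
print(features.shape)  # (1312,)
```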

Sam Altman recently stated that he expects to see a multimodal model in the near future, although he hasn’t gone so far as to say that GPT-4 will be that model. Likely it will not be, but at some point this year we might get more of a look at what multimodal AI is capable of.

Recommendations that you didn’t know you need

While generative AI claims all the headlines, we can expect to see many products continue to be built upon improving algorithmic recommendation systems. In recent years, TikTok has disrupted social media with its algorithmically curated content, while in fashion and commerce, SHEIN’s remarkable rise has in part been driven by its ability to recommend clothing to purchase, and Spotify continues to lead the music streaming world by serving the right music to you at the right time.

As Rex Woodbury at Index Ventures put it, “the major consumer applications of AI will lean heavily into sophisticated recommendations that anticipate your wants and desires before you even know them”.

Categories that have in the last decade been defined by the players that provided the best digital or mobile experiences may be further redefined by the players that can best harness powerful recommendation systems to serve users with the most personalised content. That spans everything from finance and travel to job searching, dating, and all areas of commerce. Anything that has previously involved a personal adviser or coach is up for disruption.
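
For a sense of the basic mechanics behind such systems, the sketch below ranks items for a user by cosine similarity between embedding vectors. Production recommenders layer far more on top (real-time signals, exploration, re-ranking), and the embeddings here are random stand-ins for learned ones.

```python
import numpy as np

def recommend(user_vec: np.ndarray, item_vecs: np.ndarray, k: int = 10) -> np.ndarray:
    """Return indices of the k items whose embeddings are closest to the user embedding."""
    # Cosine similarity between the user vector and every item vector.
    user = user_vec / np.linalg.norm(user_vec)
    items = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    scores = items @ user
    return np.argsort(-scores)[:k]

# Example with random embeddings standing in for learned ones.
rng = np.random.default_rng(0)
user_embedding = rng.normal(size=64)
item_embeddings = rng.normal(size=(1000, 64))
print(recommend(user_embedding, item_embeddings, k=5))
```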

The Rise of Digital Twins

A new breed of true-to-reality digital twins is emerging, as simulations of real-world assets and environments become increasingly complex and useful. Examples include:

  • A so-called industrial revolution via simulation, where everything built in the real world is first simulated in a virtual world (as described by Bob Pette). Simulated factories and workplaces enable the optimisation of manufacturing processes and improvements to worker safety, while virtual cities provide the opportunity to take urban planning to the next level.

  • Digital twins that can, at every scale, simulate environments in which to train highly skilled professionals, whether surgeons rehearsing procedures in a virtual operating room or operators working advanced machinery on a factory floor.

AI Cybersecurity vs AI Cyber Threats

The rise of generative AI — across written content, images, art, video, and voice — means an increase in deepfakes, fraudulent content, scams, and cybersecurity threats.

Phishing scammers now have far greater capability to replicate human communication in text, email, and voice calls in order to breach systems, while generative AI can also be used to synthesise passwords, create lookalike fingerprints to break authentication, or better disguise malware. We expect to see increased cyber awareness, and the emergence of solutions that aim to protect the rights, privacy, and security of individuals and businesses.

Data and AI in the Climate Fight

This sits at the intersection of the biggest threat facing this generation and one of the biggest technological revolutions.

As the energy grid becomes increasingly complex due to the rise of renewables and distributed energy resources, grid operators require better monitoring tools, while utility companies must utilise AI in order to improve demand forecasting and efficiency through load balancing. At the same time, energy data can be used to move businesses and households towards a more sustainable energy mix, and more optimised consumption patterns.
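
As a toy illustration of the demand-forecasting piece, the sketch below fits a simple linear model to hourly consumption using cyclical time-of-day and day-of-week features. Real grid forecasting draws on weather, prices, and far richer models; the synthetic load series here is purely a placeholder.

```python
import numpy as np

def features(hours: np.ndarray) -> np.ndarray:
    """Encode hour-of-day and day-of-week as cyclical features plus an intercept."""
    hod = hours % 24
    dow = (hours // 24) % 7
    return np.column_stack([
        np.sin(2 * np.pi * hod / 24), np.cos(2 * np.pi * hod / 24),
        np.sin(2 * np.pi * dow / 7),  np.cos(2 * np.pi * dow / 7),
        np.ones_like(hod, dtype=float),
    ])

# Fit a linear model on historical hourly load, then forecast the next day.
history_hours = np.arange(24 * 28)                                # four weeks of hourly data
history_load = 50 + 10 * np.sin(2 * np.pi * history_hours / 24)   # stand-in for real meter data
coef, *_ = np.linalg.lstsq(features(history_hours), history_load, rcond=None)

next_day = np.arange(24 * 28, 24 * 29)
forecast = features(next_day) @ coef
print(forecast.round(1))
```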

New Companies Galore

While various macro factors may lead some to feel it’s a less-than-ideal time to take the risk of starting a new company, there are other positive drivers that can outweigh them.

  • Generative AI represents the sort of technological paradigm shift that has the potential to drive transformation on a scale beyond even the rise of mobile and cloud a little more than a decade ago.

  • We now see the emergence of the application layer, as new products begin to touch our everyday lives. This increased visibility of the technical potential, seen through the applications and the opportunity to explore the capabilities of the underlying models, serves to awaken the imagination of creators and builders.

  • With hiring freezes and mass layoffs at big tech companies, as well as stock options that are no longer in the money and therefore no longer lock employees in to vest, an unprecedented amount of entrepreneurial talent will be liberated this year.

  • These push and pull dynamics will together funnel generational talent to a generational opportunity to leverage groundbreaking technology and build category-defining companies.
