
Our emerging views on generative AI

November 30, 2022 was a fascinating day in our office. Many of us went tools down to explore ChatGPT as its web traffic surged. We posted silly poems to Slack channels, sent short stories to friends and family, and gradually found real utility from the tool.

We also reflected on what this step forward meant for our portfolio companies. Will they want to incorporate generative AI into their products? What will the AI-enabled companies in our portfolio - like Aidoc, Deci AI, Exodigo, Neara, Tomorrow.io, Q-CTRL, ROKT, Feedvisor, Partly, retrain.ai, Talon.One, Vi, Voyantis and more to be announced soon - make of this huge new wave?

In the three months since, and across all the geographies we cover, we’ve seen an explosion of hungry entrepreneurs capitalising on the step change in accessibility of these large language models (LLMs).

At Square Peg we often talk about the concept of having a “prepared mind” - an informed perspective. Given the depth of talent founding exciting businesses in the space, we’ve been building this prepared mind on generative AI, and want to share our emerging questions.

In doing so, we really want your thoughts and feedback - it will make our understanding far stronger than thinking in a silo. Get in touch: jethro@spc.vc, lucy@spc.vc, and casey@spc.vc.

At the highest level, we’re asking ourselves four questions:  

  1. Is this a great business in its own right?
  2. Will generative AI unlock a new way to solve a problem that is 10x better than before?
  3. How does the business build a sustainable competitive advantage, particularly where there is heavy reliance on third-party foundational models?
  4. Can the business find a distribution angle over incumbents that can add AI to their existing products?

We explore each of these in more detail below.

ONE | Is this a great business in its own right?

When new technologies generate vast amounts of excitement, there is a risk that teams put the cart before the horse in their eagerness to utilise the technology, and build a product that doesn’t solve a real problem. As happened with the birth of the internet, the surge in IoT innovation, and the recent wave of crypto, we suspect this will be true for some generative AI businesses.

Our starting point is to think through whether a business has great foundations, putting novel use of AI aside. To answer this, we ask ourselves a combination of the questions below to pressure test whether there’s meat behind the solution:

  • Is the team solving a frequent and painful problem, and who are they solving it for? Do these customers have the budget to spend to solve the problem?
  • How will the product change customers’ behaviour, and how accepted will this change be? Who is impacted inside the organisation?
  • Is this a feature or a product? Can the initial wedge lead to a standalone product over time? How will they expand the breadth of use cases or move across different segments of the market?
  • What are the most important factors for successful AI adoption in these domains? How does customers’ IT maturity impact purchasing behaviour?

Aidoc, an Israeli portfolio company, is a great example. It’s not a generative AI business, but its deep learning focus has many parallels. When Aidoc was founded, demand for radiology services was skyrocketing, but the supply of radiologists wasn’t keeping up. Radiologists desperately needed support from their employers. Aidoc started its journey as a decision support tool to make radiologists more accurate, then expanded its product scope to include patient prioritisation tools and other specialties - becoming the platform for making decisions faster and more accurately, all while reducing complications by devoting time to the right patients.

Although the accuracy of Aidoc’s models is paramount, the real magic has been its approach to product. When we first met co-founder and CEO Elad, he went so far as to tell us that it would never be the models themselves that unlocked the massive success Aidoc has experienced. Instead, real distribution would come from an integrated product workflow that let radiologists access the insights these models produced seamlessly, within their existing tooling stack. Aidoc was built in such a way that its users didn’t need to meaningfully change their behaviour.

TWO | Will generative AI unlock a new way to solve the problem that is 10x better than before?

Peter Thiel’s adage that true product success comes in the form of products at least 10x better than their competitors has rung loudly in our ears as the volume of generative AI businesses we’re meeting skyrockets. Changing user behaviour is incredibly hard unless there’s a noticeable step-change benefit.

When we meet businesses at the forefront of leveraging generative AI, we push ourselves to question whether this technology drives a step change in productivity, creativity, or quality. For example, video content creation has traditionally been labour-intensive. But Synthesia can now generate videos from text in minutes. In the field of gaming, as another example, startups are fundamentally changing the way in-game assets are created by fine-tuning a model on sample designs and asking it to produce 10,000 in-game assets with the same style. These are huge improvements on previous workflows.

Beyond the product experience alone, there are unit economics that should push startups to question how much value generative AI is really adding. The benefits delivered need to outweigh the cost of compute associated with making these AI capabilities available to end users. If a problem can be solved through simpler, higher-margin means (e.g. web apps linked to databases, imperative rule-based systems), businesses must decide whether the cost-benefit analysis checks out before adding “generative AI” features. As one researcher we spoke with told us, “SaaS unit economics just won’t work in the case of generative AI - there is massive OPEX to deploy and run these models. That means the value add of generative AI has to be so much higher to justify it.”
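To make the researcher’s point concrete, here is a back-of-envelope sketch in Python. Every figure is a hypothetical assumption rather than real pricing, but it shows how quickly per-request compute cost can compress the gross margins software investors are used to.

# Illustrative only: how per-request inference cost eats into SaaS-style gross margins.
# All figures are hypothetical assumptions, not vendor pricing.
monthly_price_per_seat = 30.00        # what the customer pays per month
requests_per_seat_per_month = 2_000   # generative calls a typical user makes
cost_per_request = 0.01               # blended compute cost per call (model + hosting)

inference_cost = requests_per_seat_per_month * cost_per_request
gross_margin = (monthly_price_per_seat - inference_cost) / monthly_price_per_seat

print(f"Inference cost per seat: ${inference_cost:.2f}")  # $20.00
print(f"Gross margin: {gross_margin:.0%}")                # ~33%, well below typical SaaS

At an 80%+ gross margin, the same product would need either a far higher price point or a far lower cost per call - which is exactly the trade-off the quote above is pointing at.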

THREE | How does the business build a sustainable competitive advantage, particularly where there is heavy reliance on third-party foundational models?

This feels like the million-dollar question bouncing around the ecosystem - a conversation many founding teams and investment firms are debating in real-time. Where does value accrue? Why?

As we stated at the outset of this article, although the AI space has its idiosyncrasies, we believe that many of the fundamentals of building a great tech business still apply. You need to build a great product that solves a real problem. You need a scalable and cost-efficient way to get it into the hands of your customers. As part of this, you need to build a moat over time. For example, we’ve seen a lot of thin UX wrappers of late, built on top of LLMs hosted by third parties like OpenAI. We find it hard to get comfortable with the defensibility of these businesses.

In saying that, we do believe there are specific areas where defensibility will emerge. We discuss two below, conscious this list is not exhaustive.

First, highly domain-specific use cases where proprietary data and fine-tuning are needed to serve that use case. One deep learning PhD researcher told us that “if you have really genuine proprietary data unlike anything on the open internet, you have an advantage.”

There are two caveats to accompany this statement:

  1. Many of the latest models are highly “sample efficient” - they don’t require as much data as previous models. We have heard from many founders that they expect they’ll accumulate a data advantage that’s based on the volume of data, as opposed to acquiring unique data that aligns to a very specific task. We are sceptical of volume-based arguments for a data advantage.
  2. LLMs seem highly capable of completing a wide variety of tasks pretty well. We expect that some founders will look to outperform them on specific tasks by using specialised data. We have two questions in this case:
    1. How truly proprietary is your data? Why can’t others get their hands on it?
    2. Could a generic model get close enough to good enough for most use cases? Does proprietary data allow for either a much better or much more efficient offering than using a pre-built model?

An example might be drug discovery, for which the high-quality biological data required to produce a good model has been extremely difficult to obtain. One company, Absci, has developed an advantage in this space by acquiring this data over the last decade through its work with antibody therapeutics makers.

We also ask ourselves if that proprietary data has depth and durability. This can come about from activating a data flywheel where the more usage you attract, the more data you’re able to collect to fine-tune your models, and the resulting change in the quality of model performance drives even more usage. A recent unannounced investment we made in this space leveraged this data flywheel by giving users a first draft of an artifact, and then tracking the iterations they subsequently made to that draft - sometimes over 1,000 changes! The engagement from end users was phenomenal, which gave us confidence that defensibility could be built over time.
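To make the flywheel mechanic concrete - this is our own illustrative sketch in Python, not the unannounced company’s implementation - the loop is roughly: serve a generated first draft, record each revision the user makes, and keep the final accepted version as a supervised fine-tuning pair.

from dataclasses import dataclass, field

@dataclass
class DraftSession:
    """One user session: a generated first draft plus every edit made to it."""
    prompt: str
    draft: str
    edits: list[str] = field(default_factory=list)

    def record_edit(self, revised_text: str) -> None:
        # Each revision is a signal about what the model got wrong or missed.
        self.edits.append(revised_text)

    def to_training_example(self) -> dict:
        # The final accepted version becomes the target for the next round of fine-tuning.
        final = self.edits[-1] if self.edits else self.draft
        return {"prompt": self.prompt, "completion": final}

# Usage: accumulate examples across sessions, then fine-tune on the growing set.
session = DraftSession(prompt="Draft a product update email about ...", draft="Hi team, ...")
session.record_edit("Hi all, here is this week's product update ...")
training_set = [session.to_training_example()]

The more users engage, the larger and more domain-specific this training set becomes - which is the usage-to-data-to-quality loop described above.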

In saying that, not all “network effects” are created equal - and we love the insights shared by professors Andrei Hagiu and Julian Wright on their blog, Platform Chronicles. They share that data-based network effects tend to be less powerful than traditional network effects for four reasons:  

  1. Buying data is easier than acquiring customers, and that helps overcome the cold-start problem, making it easier for new entrants to catch up.
  2. Data network effects tend to diminish quickly after attracting a small number of customers, largely due to improvements in algorithms. In some cases, just a few large customers provide enough data to reach performance benchmarks.
  3. The benefit of one additional user to a network is less clear to users in the case of data network effects. It’s only ever an indirect benefit, not a direct benefit like having our friends all use WhatsApp, or our colleagues using Monday.com.
  4. Lastly, a company needs to keep putting in work to benefit from data-based effects, whereas traditional network effects businesses can simply keep operating and benefit without innovating further. In the case of LLMs, there’s also a real cost associated with this: it can cost $5-10m to re-train an LLM.

Second, the optimisation of unit economics for inference. A research engineer at a large generative AI startup spoke to us about this nuance: “most training workloads look very similar. Most inference workloads do not. You can create a moat by optimising inference costs where you might be able to get it down to 1/10th of the cost of your competitors by simply exploiting different usage patterns”.

Our portfolio company Deci AI helps companies optimise and deploy their computer vision and NLP applications, in some cases reducing inference costs by 5x. As companies leveraging generative AI reach a certain point in their usage curve, getting their costs down will become an increasingly important priority.  
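As a toy illustration of what “exploiting different usage patterns” can mean in practice - our own sketch, not a description of Deci AI’s product - the snippet below caches responses for repeated prompts so the expensive model call is only paid for once per unique request. The call_model function is a hypothetical stand-in for whichever model is actually being served.

import hashlib

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for an expensive LLM inference call.
    return f"response to: {prompt}"

_cache: dict[str, str] = {}

def cached_generate(prompt: str) -> str:
    # Normalise the prompt so trivially different requests hit the same cache entry.
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # pay for inference only on a cache miss
    return _cache[key]

Real gains typically come from deeper work (batching, quantisation, and the like), but the principle is the same: the shape of your traffic is a lever on cost.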

FOUR | Can the business find a distribution angle over incumbents that can add AI to their existing products?

A startup’s currency is speed. They’re able to grow rapidly and unencumbered, taking market share before incumbents are able to react. Slack, for example, had a three-year head start before Microsoft responded with Teams. Ultimately though, Microsoft’s distribution advantage allowed it to blow past Slack in daily active users within a few years of Teams’ release.

In the case of generative AI, an analogy can be made to the whole category of text editors and content generators that spun up as thin UX wrappers over GPT-3. These businesses will now be severely challenged as Notion, Google Docs, and the like introduce similar functionality as a feature in their products and distribute it en masse to their existing user bases.

This is especially true when it comes to enterprise sales, where a startup will need to jump through multiple procurement hoops and security reviews. The founder of Snorkel AI, which sells programmatic data labelling to massive blue-chip businesses, describes these challenges well: even getting the models into production is a huge lift, as enterprises must weigh up all manner of issues around cost, latency, governance, risk, and bias. In enterprise, the best product doesn’t always win - better distribution and ease of procurement can be key.

However, startups have always been able to find an underserved segment or a clever distribution pathway that has gone unnoticed. While this might be trickier in the generative AI space given the level of focus Big Tech has placed on it, we believe that there will be niche fields that are better addressed by startups that can apply laser focus to fulfilling the needs of a very specific end-user: the biomedical researcher, the lawyer, the safety engineer at construction sites.

We’re incredibly excited to see this space develop, no doubt at a frenetic pace. We hope you share your thoughts with us - particularly contrarian views, given that no one really knows what will happen. Even the people we’ve spoken to who are working at the cutting edge of research are humble about their ability to predict how things will develop and where value will accrue.

If you’re building in this space, come talk to us about what you’re doing!

We’ve also founded a community to discuss AI, where we’re sharing our notes and hosting calls with experts. On Wednesday 8 March at 5:30pm AEDT we’ll be hosting an open discussion and Q&A about this article. Join the community for the link by completing this form.
