9 Exciting Areas in AI
Here are 9 areas of AI that I’m excited about.
Model & Research Layer:
1️⃣ Smaller models – Throughout history, technologies usually get bigger and more complex before they get smaller and simpler (see computers, cell phones, lower-level programming languages, etc.). This will apply to LLMs as well. Until we exhaust the supply of available training tokens, SOTA models will continue to grow bigger. But newer, lightweight models such as Gemma 2B show a lot of promise. These smaller models are well-suited for simpler tasks such as summarization and can also enable edge deployment.
2️⃣ Domain-specific models – Most of the research and funding so far has gone into general-purpose LLMs. While these models are very capable, they can be inefficient from a cost and latency perspective. Smaller, domain-specific models from startups like Orbital Materials and Cradle may lead to more specialized FMs.
3️⃣ Long-term memory – Long-term memory is a necessary ingredient for personalization and agents. Initiatives like OpenAI's recent memory controls for ChatGPT hint at the future. While the current approach to long-term memory combines vector databases, RAG, and larger context windows, the field remains ripe for further research.
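To make that mixture concrete, here is a minimal, self-contained sketch of the vector-store-plus-retrieval pattern underlying most long-term memory systems: memories are stored as embeddings, and the most similar ones are recalled and prepended to the prompt. The `toy_embed` bag-of-words function is a stand-in assumption; a real system would call an embedding model.

```python
import math

class MemoryStore:
    """Toy long-term memory: store text with an embedding, recall by similarity."""

    def __init__(self, embed):
        self.embed = embed      # function: str -> list[float]
        self.memories = []      # list of (text, vector) pairs

    def remember(self, text):
        self.memories.append((text, self.embed(text)))

    def recall(self, query, k=1):
        # Rank stored memories by cosine similarity to the query embedding.
        qv = self.embed(query)
        ranked = sorted(self.memories,
                        key=lambda m: self._cosine(qv, m[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

# Illustrative stand-in for a real embedding model: bag-of-words counts
# over a tiny fixed vocabulary.
VOCAB = ["coffee", "tea", "morning", "evening", "allergy", "peanuts"]

def toy_embed(text):
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

store = MemoryStore(toy_embed)
store.remember("user drinks coffee every morning")
store.remember("user has a peanuts allergy")

# The recalled memory would be prepended to the prompt (the RAG step).
print(store.recall("coffee in the morning"))  # → ['user drinks coffee every morning']
```

Larger context windows attack the same problem from the other direction: instead of retrieving a few relevant memories, the model is simply handed more raw history.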
4️⃣ Emotional AI – Current AI systems have a hard time understanding and conveying human emotions. Startups like Hume AI are building FMs to bridge this empathy gap.
Infrastructure / Tooling Layer:
5️⃣ Agent-to-agent collaboration – Many startups are working on either specific agents (Cognition, Harvey) or general agents (Adept, MultiOn). In a world where many agents work together, there must exist third-party middleware that acts as both an agent connector (“agent APIs”) and an agent evaluator.
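As a purely hypothetical illustration of what such middleware might look like, here is a toy connector that routes tasks to registered agents and gates every result through evaluator checks. The `AgentConnector` class, its method names, and the design are all assumptions for the sketch, not any real framework's API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentConnector:
    """Hypothetical middleware: routes tasks to agents, evaluates results."""
    agents: dict = field(default_factory=dict)      # name -> callable agent
    evaluators: list = field(default_factory=list)  # result -> bool checks

    def register(self, name: str, agent: Callable[[str], str]):
        self.agents[name] = agent

    def add_evaluator(self, check: Callable[[str], bool]):
        self.evaluators.append(check)

    def dispatch(self, name: str, task: str) -> str:
        # Connector role: forward the task to the named agent.
        result = self.agents[name](task)
        # Evaluator role: reject results that fail any quality check.
        if not all(check(result) for check in self.evaluators):
            raise ValueError(f"agent {name!r} failed evaluation")
        return result

connector = AgentConnector()
connector.register("summarizer", lambda task: f"summary of: {task}")
connector.add_evaluator(lambda result: len(result) > 0)  # trivial quality gate

print(connector.dispatch("summarizer", "quarterly report"))
# → summary of: quarterly report
```

The point of the sketch is the separation of roles: agents stay independent, while the middleware owns routing and quality control between them.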
6️⃣ Data center peripherals – Most of the attention today has been focused on silicon. But equally important is the peripheral infrastructure that surrounds the chip. One area that we’re closely monitoring is liquid cooling. As power density approaches 1 kW+ on the latest GPUs, we will need to rethink how cooling is done.
7️⃣ Data tooling – Given that we are still early in the adoption curve, many companies are still figuring out their data strategy as part of their overall generative AI deployment. Startups like unstructured.io, Superlinked, and DatologyAI are helping in this area.
Application layer:
8️⃣ Video generation – OpenAI's Sora showcased the potential of AI in video generation. Industries steeped in video content, from advertising to entertainment, will be amongst the first disrupted by generative AI. Startups like Pika and Creatify AI are on the cutting edge of this space.
9️⃣ Intellectual property – There have been several highly publicized IP lawsuits in generative AI (NYT, Getty). Copyright protection and attribution will only grow in importance as the technology becomes more ubiquitous.