
"All roads lead to Rome." But each road is unique, and poses its own sets of challenges.
Since OpenAI introduced ChatGPT, tech companies large and small have been racing to create the best generative AI. Prominent players like OpenAI and Microsoft use GPT, for example: a proven approach, pioneered by OpenAI, that uses a 'generative pre-trained transformer' to produce human-like text.
But Liquid AI is trying a different approach.
Just as Google uses its own Large Language Models and Anthropic relies on its own proprietary LLMs, Liquid AI has created what it calls "Liquid Foundation Models," or LFMs.
Building its models on a fundamentally new architecture, the company says the AI delivers impressive performance.
It excels at so many things that it is on a par with, or even superior to, some of the best LLMs out there.
Today we introduce Liquid Foundation Models (LFMs) to the world with the first series of our Language LFMs: A 1B, 3B, and a 40B model. pic.twitter.com/0GGL8EaqJZ
— Liquid AI (@LiquidAI_) September 30, 2024
The Boston-based startup was founded by a team of researchers from the Massachusetts Institute of Technology (MIT), including Ramin Hasani, Mathias Lechner, Alexander Amini, and Daniela Rus.
They are recognized as pioneers in the field of “Liquid Neural Networks,” a type of AI model that significantly differs from GPT.
Liquid Neural Networks, or LNNs, are a novel class of AI that departs from traditional neural network architectures: they are designed to be more flexible, adaptive, and efficient than GPT in handling dynamic and complex environments.
Unlike conventional neural networks, which have fixed architectures, LNNs can change their structure and connections over time.
This adaptability allows them to respond more effectively to varying input conditions and tasks.
LNNs are also capable of continuous learning, meaning the AI can update its weights and configurations as new data becomes available. This makes them particularly well-suited for real-time applications where the data stream is constantly changing.
Whereas GPT requires a large number of neurons to perform computing tasks, LNNs can achieve the same performance with fewer, combining those neurons with innovative mathematical formulations that let them do much more with less.
This is possible because LNNs maintain and utilize memory over time, enabling them to process sequences of data more effectively.
As a result, LNNs are more resource-efficient.
These are real advantages, considering that GPT requires more computational resources for training and inference.
Despite having strong performance in text generation and understanding due to extensive training, GPT suffers from its fixed architecture and lack of continuous learning.
Long story short, LNNs use minimal system memory while delivering exceptional computing power.
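To make the idea concrete, here is a minimal sketch of the "liquid time-constant" (LTC) neuron dynamics that Hasani and colleagues published before founding the company. It is not Liquid AI's production code, and the `ltc_step` function, weights, and toy input below are all illustrative assumptions; the point is that the neuron's effective time constant depends on its input, so a small, fixed-size state can adapt to and remember a changing data stream.

```python
import numpy as np

def ltc_step(x, I, W_in, W_rec, b, tau, A, dt=0.01):
    """One Euler step of a liquid time-constant (LTC) cell (illustrative).

    The hidden state x evolves in continuous time, and its effective
    time constant depends on the input. That input-adaptive dynamics
    is what makes the network "liquid" rather than fixed.
    """
    # Input-dependent gate: a nonlinearity over input and recurrent state.
    f = np.tanh(W_in @ I + W_rec @ x + b)
    # LTC dynamics: dx/dt = -(1/tau + f) * x + f * A
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Toy usage: 4 hidden units reading a 2-dimensional signal over time.
rng = np.random.default_rng(0)
n_hidden, n_in = 4, 2
W_in = rng.normal(size=(n_hidden, n_in))
W_rec = rng.normal(size=(n_hidden, n_hidden)) * 0.1
b = np.zeros(n_hidden)
tau = np.ones(n_hidden)   # base time constants
A = np.ones(n_hidden)     # target the gated dynamics pull toward

x = np.zeros(n_hidden)
for t in range(100):
    I = np.array([np.sin(t * 0.1), np.cos(t * 0.1)])  # streaming input
    x = ltc_step(x, I, W_in, W_rec, b, tau, A)
print(x)  # the fixed-size state carries a memory of the whole sequence
```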
According to Liquid AI, its LFMs represent a new generation of AI systems that are designed with both performance and efficiency in mind.
The models are based on principles from dynamical systems, numerical linear algebra, and signal processing, making them well-suited for handling different forms of sequential data, including text, audio, images, video, and signals.
Put to the test, the LFMs don't disappoint.
LFMs are memory efficient. LFMs have a reduced memory footprint compared to transformer architectures. This is particularly true for long inputs, where the KV cache in transformer-based LLMs grows linearly with sequence length. pic.twitter.com/finhVnbtOK
— Liquid AI (@LiquidAI_) September 30, 2024
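The linear growth the tweet describes is easy to see with back-of-envelope arithmetic. The sketch below assumes illustrative dimensions roughly in line with a 7B-class transformer (32 layers, 32 heads, head dimension 128, fp16 values); these are not Liquid AI's figures.

```python
def kv_cache_bytes(seq_len, n_layers=32, n_heads=32, head_dim=128,
                   bytes_per_val=2):  # fp16
    """Approximate KV cache size for one sequence in a vanilla
    transformer: keys + values, per layer, per head, per token."""
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per_val

for seq_len in (1_000, 8_000, 32_000, 128_000):
    gb = kv_cache_bytes(seq_len) / 1e9
    print(f"{seq_len:>7} tokens -> ~{gb:5.1f} GB of KV cache")
# The cost scales linearly with seq_len, while a fixed-size recurrent
# state (as in LNN-style models) does not grow with input length.
```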
On its website, Liquid AI said that it's introducing its first series of generative AI models:
- A dense 1.3B model, ideal for highly resource-constrained environments.
- A dense 3.1B model, optimized for edge deployment.
- A 40.3B Mixture of Experts (MoE) model, designed for tackling more complex tasks.
And here, Liquid AI said that LFMs offer a new best performance/size tradeoff in the 1B, 3B, and 12B (active parameters) categories.
LFM-1B performs well on public benchmarks in the 1B category, making it the new state-of-the-art model at this size. This is the first time a non-GPT architecture significantly outperforms transformer-based models. pic.twitter.com/w9AGaiouxL
— Liquid AI (@LiquidAI_) September 30, 2024
LFM-3B delivers incredible performance for its size. It positions itself as first place among 3B parameter transformers, hybrids, and RNN models, but also outperforms the previous generation of 7B and 13B models. It is also on par with Phi-3.5-mini on multiple benchmarks, while… pic.twitter.com/HmLmaOyWuY
— Liquid AI (@LiquidAI_) September 30, 2024
LFM-40B offers a new balance between model size and output quality. It leverages 12B activated parameters at use. Its performance is comparable to models larger than itself, while its MoE architecture enables higher throughput and deployment on more cost-effective hardware. pic.twitter.com/098682K7MJ
— Liquid AI (@LiquidAI_) September 30, 2024
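Liquid AI hasn't published the internals of its MoE layer, but the "activated parameters" idea the tweet describes can be sketched generically: all expert weights live in memory, yet each token is routed to only a few of them. Everything below (the `moe_forward` function, expert count, dimensions) is a made-up illustration of standard top-k gating, not LFM-40B's actual design.

```python
import numpy as np

def moe_forward(x, experts, gate_W, top_k=2):
    """Generic top-k mixture-of-experts layer (illustrative).

    All experts' weights exist in memory (total parameters), but each
    token is routed to only top_k of them (active parameters). This is
    how a 40B-parameter MoE can run with roughly 12B active per token.
    """
    logits = gate_W @ x                  # router scores, one per expert
    top = np.argsort(logits)[-top_k:]    # indices of the top_k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the chosen experts
    # Only the selected experts perform any computation for this token.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 16, 8
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate_W = rng.normal(size=(n_experts, d))
x = rng.normal(size=d)
y = moe_forward(x, experts, gate_W)  # only 2 of the 8 experts ran
print(y.shape)
```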
Liquid AI aims to develop highly capable and efficient general-purpose models suitable for organizations of all sizes. To achieve this, it focuses on creating LFM-based AI systems that can operate effectively at all scales, from the network edge to enterprise-grade deployments.
"Architecture work cannot happen in a vacuum – our goal is to develop useful models that are competitive with the current best-in-class LLMs. In doing so, we hope to show that model performance isn’t just about scale – it’s also about innovation," the company said.
What Language LFMs are good at today:
General and expert knowledge
Mathematics and logical reasoning
Efficient and effective long-context tasks
A primary language of English, with secondary multilingual capabilities in Spanish, French, German, Chinese, Arabic, Japanese, and…— Liquid AI (@LiquidAI_) September 30, 2024
What Language LFMs are not good at today:
Zero-shot code tasks
Precise numerical calculations
Time-sensitive information
Counting r’s in the word “Strawberry”!
Human preference optimization techniques have not yet been applied to our models, extensively.— Liquid AI (@LiquidAI_) September 30, 2024