Generative AI is all the hype, and the race is on.
These models can be so knowledgeable because they are trained on huge amounts of data taken from books and the web, which gives them the ability to generate responses in a human-like manner. But these AIs can be quite unpredictable, and this is why they're typically given further training by humans to make them less likely to produce offensive, rude, or dangerous outputs.
With rules to follow and guardrails to control the output, generative AI chatbots are more likely to answer questions in ways that seem coherent and plausibly correct.
Despite occasional biases and errors, commercial AI products typically refuse to, for example, offer advice on how to commit crimes, and will refrain from discussing racy material.
With all the hype, and as the race rages on, there is a new kid on the block.
Also powered by a large language model, it is called 'Grok,' a conversational generative AI developed as an initiative by Elon Musk as a direct response to the rise of OpenAI's ChatGPT, which Musk also co-founded.
Grok AI is a generative AI, but with a bit of a twist.
Just like others, it combines depth of knowledge with a personality. It's created to address critical, time-consuming operational tasks such as noise reduction, correlation, root cause analysis, and incident prediction, among others.
But in this case, Grok, which is powered by an engine called 'Grok-1', is designed to be both witty and irreverent.
This is because Grok's approach differs from that of most generative AI solutions.
The company xAI believes in real-time processing of data rather than learning only from historical data. It believes in unsupervised learning and in building sophisticated machine learning models without sophisticated configuration and programming.
Because of this, xAI developed Grok by taking the complexity out of AI and machine learning, allowing users to quickly harness the benefits of the AI with its plug-and-play approach.
Grok, a word that means "to understand" to some tech people, "is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!" reads an announcement on the company’s website.
"It will also answer spicy questions that are rejected by most other AI systems."
The company explained that Grok has been "modeled after The Hitchhiker’s Guide to the Galaxy, so intended to answer almost anything."
xAI developer tool for improving and understanding large AI models https://t.co/Xkmz5yf9bl
— Elon Musk (@elonmusk) November 6, 2023
The project first surfaced in November 2023, when xAI previewed the chatbot to select users.
Initially, xAI described the AI as "a very early beta product – the best we could do with 2 months of training," and that it could "improve rapidly with each passing week."
After all, it's no slouch.
Trained with 33 billion parameters, and with what xAI calls its fundamental advantage, "real-time knowledge of the world via the X platform," the platform formerly known as Twitter, which Musk acquired for $44 billion in 2022, Grok should be able to set itself apart.
In fact, while still in beta testing, early results suggest that Grok outperforms other models on machine learning benchmarks, coming second only in a test against OpenAI's GPT-4.
The thing is, whereas xAI's approach is to make Grok less biased by letting it do its thing through unsupervised learning, most other commercial AI developers refuse to use this approach because of the AI's tendency to generate sexually explicit, violent, or illegal content.
Example of Grok vs typical GPT, where Grok has current information, but other doesn’t pic.twitter.com/hBRXmQ8KFi
— Elon Musk (@elonmusk) November 5, 2023
Unlike xAI, others believe that generative AIs pick up their biases from their training data, and that guardrails are therefore needed.
Without such guardrails, they worry that their AI models could discriminate against users based on characteristics such as race, gender, or age, or worse, help cybercriminals create malware or aid terrorists in building bombs.
This is why, at least initially, an xAI employee suggested that the chatbot would have a toggle between a "regular mode" and a "fun mode."