Meet 'Claude', A Google-Backed, ChatGPT-Challenger Created By Former OpenAI Employees


The generative AI race is on, thanks to OpenAI.

Things were rather dull and peaceful in the AI industry. But when OpenAI announced ChatGPT, tech companies started scrambling for solutions of their own. While some opted to use ChatGPT, others became competitors.

Google, for example, launched Bard, but botched its debut, wiping around $100 billion off the company's market value.

This time, there is a new kid on the block, and its name is Anthropic.

And in response to the popularity of ChatGPT, the company has introduced what it calls 'Claude.'

In a nutshell, it can do everything ChatGPT can, but without the "harmful outputs."

Anthropic is a startup co-founded by ex-OpenAI employees, with investors that include Google, which has invested $300 million in Anthropic for a 10% stake in the startup.

And Claude here is a chatbot, just like ChatGPT.

It can be told to perform a wide range of tasks, including searching across documents, summarizing, writing and coding, and answering questions about particular topics. In many ways, Claude is similar to ChatGPT.

Anthropic said that Claude can also help with use cases such as creative and collaborative writing, and can even take direction on personality, tone and behavior.
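For the beta partners who already have API access, pointing Claude at a task and steering its tone happens entirely through the prompt. Below is a minimal sketch of what such a call might look like; the endpoint, model name, and Human/Assistant prompt format reflect Anthropic's early completions API as publicly documented at the time, and any of them may have changed since.

```python
import os
import requests

# A minimal sketch of calling Claude, assuming Anthropic's early
# /v1/complete endpoint and its Human/Assistant prompt format.
# The model name and request fields reflect the API at launch and
# may differ today.
API_KEY = os.environ["ANTHROPIC_API_KEY"]  # issued to beta partners

# Personality, tone, and the task itself are all expressed in the prompt.
prompt = (
    "\n\nHuman: You are a cheerful, concise assistant. "
    "Summarize the following document in three bullet points:\n"
    "<document text here>"
    "\n\nAssistant:"
)

response = requests.post(
    "https://api.anthropic.com/v1/complete",
    headers={"x-api-key": API_KEY, "content-type": "application/json"},
    json={
        "model": "claude-v1",              # early model identifier
        "prompt": prompt,
        "max_tokens_to_sample": 300,       # cap on the reply length
        "stop_sequences": ["\n\nHuman:"],  # stop before a new human turn
    },
)
print(response.json()["completion"])
```

Note that in this sketch, the "direction on personality" lives in the prompt text itself rather than in any separate configuration setting.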

But according to Anthropic, Claude is "much less likely to produce harmful outputs," "easier to converse with," and "more steerable."

"We think that Claude is the right tool for a wide variety of customers and use cases," said an Anthropic spokesperson. "We’ve been investing in our infrastructure for serving models for several months and are confident we can meet customer demand."

Claude initially launched in late 2022 as a closed beta available to only a limited number of users.

Anthropic tested the technology discreetly with partners like Robin AI, AssemblyAI, Notion, Quora and DuckDuckGo.

And following the success of those tests, Anthropic is ready for a wider rollout.

But what makes Claude a worthy competitor, especially at a time when people are realizing that Bing AI can become emotional, is that it's designed to avoid the pitfalls of ChatGPT and similar AI chatbot systems.

Like ChatGPT, Claude has no access to the internet. It was trained on public web pages up to the spring of 2021.

But here's the thing: whereas modern chatbots are notoriously prone to bias and offensive language, Claude was trained to steer clear of toxic content.

Claude was "trained to avoid sexist, racist and toxic outputs," as well as "to avoid helping a human engage in illegal or unethical activities."

What this means is that Claude should be more controllable, with answers that are more predictable.

Claude should also be less likely to invent facts when asked about topics beyond its understanding or core knowledge.

To make this happen, Anthropic essentially created a "constitutional AI," which aims to provide a "principle-based" approach to aligning AI systems with human intentions. To build Claude, Anthropic started with a list of around 10 principles that, together, formed a sort of "constitution" (hence the name "constitutional AI").

According to Anthropic, it uses the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice) and autonomy (respecting freedom of choice).

As a precursor to Claude, Anthropic used another AI system for self-improvement: it wrote responses to a variety of prompts, then revised those responses in accordance with the constitution.

In other words, Anthropic used reinforcement learning to guide the AI into doing what it's supposed to do.

After exploring the ways the AI could respond to prompts, Anthropic funneled everything into a single model.

And it's this model that was used to train Claude.
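To make that loop concrete, here is an illustrative sketch of how a critique-and-revise pass could work. Everything in it is a hypothetical stand-in: the generate() helper, the paraphrased principles, and the loop structure are assumptions for illustration, not Anthropic's actual pipeline.

```python
# Illustrative sketch of a constitutional AI critique-and-revise loop.
# generate() is a hypothetical stand-in for sampling from a language
# model; the principles paraphrase the concepts named above, not
# Anthropic's actual constitution.

CONSTITUTION = [
    "Prefer the response that is most helpful to the human (beneficence).",
    "Prefer the response least likely to give harmful advice (nonmaleficence).",
    "Prefer the response that respects the human's freedom of choice (autonomy).",
]

def generate(prompt: str) -> str:
    """Hypothetical: sample a completion from the precursor model."""
    raise NotImplementedError

def self_revise(user_prompt: str) -> str:
    """Draft a response, then revise it once per constitutional principle."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        # The model critiques its own draft against one principle...
        critique = generate(
            f"Critique this response against the principle:\n"
            f"Principle: {principle}\nResponse: {response}"
        )
        # ...then rewrites the draft to address its own critique.
        response = generate(
            f"Rewrite the response to address the critique:\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    return response

# The revised (prompt, response) pairs then become training data for the
# single final model -- the reinforcement-learning stage described above.
```

The point of the sketch is that the model supervises itself against written principles, rather than relying on humans to label every harmful output by hand.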

Anthropic believes that, thanks in part to this constitutional AI approach, Claude should be less likely to go rogue and start spitting out racist obscenities, as Microsoft's Tay infamously did.

Read: Beware Of 'Hallucinating' Chat Bots That Can Provide 'Convincing Made-Up Answer'

While the method is new, Anthropic admits that Claude has its own limitations.

For example, Claude is reportedly worse at math and weaker at programming than ChatGPT.

What's more, it still hallucinates.

For example, it can invent a name for a chemical that doesn't exist, and provide a non-existent method for producing weapons-grade uranium.

And like ChatGPT before it, it's also possible for users to circumvent Claude's built-in safety features via clever prompting.

For example, one user in the beta managed to get Claude to describe how to make a recreational drug at home.

In one way or another, Claude is pretty similar to ChatGPT and shares many of its traits.

"The challenge is making models that both never hallucinate but are still useful — you can get into a tough situation where the model figures a good way to never lie is to never say anything at all, so there’s a tradeoff there that we’re working on," the Anthropic spokesperson said. "We’ve also made progress on reducing hallucinations, but there is more to do."

Published: 16/03/2023