Anthropic Introduces 'Claude 2' As A More 'Harmless' And 'Friendly' Chatbot AI

Anthropic Claude 2

The generative AI race is on, and it's getting fiercer.

Thanks to OpenAI's ChatGPT, an industry that rarely generated buzz outside its own circles has become headline news. And as tech companies race to compete, one company in particular wants to stand out by making its AI more ethical.

Anthropic created 'Claude' in response to the popularity of ChatGPT.

The company claims that it can do everything ChatGPT can, but without the "harmful outputs."

This time, it's going a step further, with the introduction of 'Claude 2'.

Read: Meet 'Claude', A Google-Backed, ChatGPT-Challenger Created By Former OpenAI Employees

According to Anthropic in its announcement:

"We are pleased to announce Claude 2, our new model."

"Claude 2 has improved performance, longer responses, and can be accessed via API as well as a new public-facing beta website,"

"We have heard from our users that Claude is easy to converse with, clearly explains its thinking, is less likely to produce harmful outputs, and has a longer memory. We have made improvements from our previous models on coding, math, and reasoning."
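For developers, "accessed via API" means sending HTTP requests to Anthropic's completions endpoint. The sketch below builds (but does not send) such a request using only the Python standard library; the endpoint URL, header names, and `max_tokens_to_sample` field are assumptions based on Anthropic's API documentation at the time, not details from this article:

```python
import json
import os
import urllib.request

# Assumed endpoint for the Claude 2 completions API.
API_URL = "https://api.anthropic.com/v1/complete"

def build_claude_request(prompt: str, api_key: str, max_tokens: int = 300):
    """Build (but do not send) an HTTP request for the Claude 2 completions API."""
    payload = {
        "model": "claude-2",
        # Claude's completions API expects the Human/Assistant turn format.
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )

req = build_claude_request(
    "Summarize this article in one sentence.",
    api_key=os.environ.get("ANTHROPIC_API_KEY", "demo-key"),
)
print(req.full_url)  # https://api.anthropic.com/v1/complete
```

Sending the request (for example with `urllib.request.urlopen(req)`) would require a valid API key from Anthropic.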

Claude 2 is claimed to offer a longer memory, and to be better at solving math, coding, and reasoning questions.

Anthropic claims Claude 2 answered 76.5% of the multiple-choice Bar Exam questions correctly, up from the 73.0% scored by the Claude 1.3 model.

"When compared to college students applying to graduate school, Claude 2 scores above the 90th percentile on the GRE reading and writing exams," Anthropic said.

The company also said that users can now input prompts of up to 100,000 tokens. To put that into context, ChatGPT can only process up to 8,000 tokens. In linguistic terms, roughly 2,000 tokens fed to Claude equals an essay of about 1,500 words. Anthropic originally made the token upgrade in May 2023, noting that Claude can process an entire novel in less than a minute.
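Using the article's own rule of thumb (2,000 tokens ≈ 1,500 words, i.e. roughly 0.75 words per token), a quick back-of-the-envelope conversion looks like this. This is a rough sketch under that assumption, not an exact tokenizer:

```python
# Per the article's rule of thumb: 2,000 tokens ≈ 1,500 words.
WORDS_PER_TOKEN = 1500 / 2000  # ≈ 0.75

def tokens_to_words(tokens: int) -> int:
    """Rough word-count equivalent of a token budget."""
    return round(tokens * WORDS_PER_TOKEN)

print(tokens_to_words(8_000))    # 6000  -- ChatGPT's context, ~6,000 words
print(tokens_to_words(100_000))  # 75000 -- Claude 2's context, novel-length
```

Actual token-to-word ratios vary by language and tokenizer, but the comparison makes the gap concrete: 100,000 tokens is on the order of an entire book.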

For users, that means Claude 2 can now easily handle long-form assignments like letters and stories, while also being able to condense long documents into concise forms.

With Claude 2, Anthropic is picking up its pace, positioning the AI as an upgrade of the same magnitude as OpenAI's introduction of GPT-4.

Just like ChatGPT and other chatbots, Claude 2 is powered by a large language model, which lets it respond to questions and prompts in natural language.

Since the arrival of OpenAI's ChatGPT in late 2022, tech companies have released a flood of generative AI tools to the masses. With prompts, modern chatbots can produce email responses, travel itineraries and even poetry, among other things, though quality varies. These tools, trained on vast amounts of information, are programmed to identify patterns and then generate plausible-sounding answers.

While every new iteration of these chatbots makes them smarter, they're still prone to hallucination: spitting out incorrect answers and sometimes citing sources that don't exist.

This is an issue, especially when generative AI products are rapidly adopted by a variety of people and businesses.

This has also raised concerns over potential problems, including spreading misinformation and deepening bias.

Anthropic said that Claude 2 is "less likely to produce harmful outputs" than its predecessor, because the company's "core research focus has been training Claude models to be helpful, honest, and harmless."

"We have an internal red-teaming evaluation that scores our models on a large representative set of harmful prompts, using an automated test while we also regularly check the results manually," said the announcement.

This is to ensure that Claude 2 is less susceptible to jailbreaks or nefarious uses.

"We've been iterating to improve the underlying safety of Claude 2, so that it is more harmless and harder to prompt to produce offensive or dangerous output," the company said.

To use the AI, users can visit the website, and sign up for an account.

Claude 2 is initially in beta, with some features limited for free users.

While Anthropic presents itself as an ethically driven company making generative AI safe and "steerable," those who wish to use Claude 2 must still agree to the company's terms, acceptable use policy, and privacy policy.

They must also click through a handful of informational pages, including reminders that the chatbot isn't intended to give "legal, financial, and medical advice," and that some conversations may be reviewed to improve the company's safety systems.

Initially, Claude 2 is available to users in the U.S. and the UK.