
Generative AI refers to systems that create content such as text, images, audio, or video using machine learning algorithms, particularly neural networks.
These systems, which gained widespread attention following the introduction of OpenAI's ChatGPT, are trained on vast datasets and generate outputs based on patterns within the data. This allows them to become highly versatile tools, capable of leveraging their massive knowledge base and interpreting human language—both spoken and written—in astonishing ways.
While these AI tools are undoubtedly useful, their widespread adoption has also highlighted their potential unreliability.
And one of the most notable shortcomings of these Large Language Model-powered chatbots is their tendency to produce outdated and inaccurate content.
Google wants to change this.
A Large Language Model, or LLM, is a type of AI model designed to understand, generate, and process human language. These models are based on machine learning, particularly deep learning, and are typically built using neural networks, especially transformers, which excel at handling sequential data like text.
LLMs are trained on massive datasets, often encompassing terabytes of text from books, articles, websites, and other written sources.
Their size is measured in billions or even trillions of parameters, which are the adjustable weights within the neural network. The scale and complexity of these models allow them to grasp the nuances of language and perform a variety of tasks.
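To make the idea of parameters more concrete, here is a minimal sketch, assuming a Python environment with PyTorch installed (an assumption for illustration, not something Google or AP mention), that builds a toy transformer encoder and counts its adjustable weights. A production LLM is assembled from the same ingredients, just at a scale of billions rather than millions.

```python
# A minimal sketch, assuming PyTorch is installed: "parameters" are simply
# the adjustable weights that training tunes inside the neural network.
import torch.nn as nn

# A toy transformer encoder, nowhere near the size of a real LLM.
layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
model = nn.TransformerEncoder(layer, num_layers=4)

total_params = sum(p.numel() for p in model.parameters())
print(f"Toy transformer parameters: {total_params:,}")  # a few million, versus billions in an LLM
```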
The capabilities of LLMs are vast.
They can generate text, such as writing essays, stories, or even code, and provide answers to queries in natural language. They can also condense large amounts of text into concise summaries, translate between languages, and perform sentiment analysis to detect emotions or opinions in text. These models can be applied in countless domains, from education to customer support, and are used to improve efficiency and accuracy in language-related tasks.
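As a rough illustration of what such tasks look like in practice, the sketch below uses the open-source Hugging Face transformers library (an assumption made here for illustration; it is not part of Google's or AP's tooling) to run summarization and sentiment analysis in a couple of lines each.

```python
# A hedged sketch using the Hugging Face `transformers` library (assumed to be
# installed); each pipeline downloads a default pretrained model on first use.
from transformers import pipeline

summarizer = pipeline("summarization")
sentiment = pipeline("sentiment-analysis")

article = (
    "Google announced a partnership with The Associated Press to deliver a feed "
    "of real-time information to the Gemini app, aiming to keep answers current."
)

print(summarizer(article, max_length=30, min_length=10)[0]["summary_text"])
print(sentiment(article)[0])  # e.g. {'label': 'POSITIVE', 'score': 0.99...}
```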
For its part, Google has Gemini, which is able to combine advanced capabilities in text understanding and generation with multimodal functionality, meaning it can process and generate text, images, and other types of content.
But just like pretty much all LLMs out there, Gemini can still hallucinate and generate responses that aren't at all relevant or up-to-date.
This happens because Gemini and others rely on historical data to make predictions and generate content. If the training data is outdated or lacks recent developments, the AI may produce outdated responses.
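The usual remedy, in simplified terms, is to hand the model current information at query time instead of relying solely on what it memorized during training, which is essentially what a real-time news feed makes possible. The sketch below is purely illustrative; fetch_latest_headlines is a hypothetical placeholder, not an actual Google or AP API.

```python
# Illustrative sketch only: grounding a prompt with fresh data at query time.
# fetch_latest_headlines() is a hypothetical placeholder, not a real API.
from datetime import date

def fetch_latest_headlines() -> list[str]:
    # In a real system this would pull from a live feed such as a news wire.
    return [
        "Example headline A from today's wire",
        "Example headline B from today's wire",
    ]

def build_grounded_prompt(question: str) -> str:
    context = "\n".join(f"- {h}" for h in fetch_latest_headlines())
    return (
        f"Today is {date.today()}.\n"
        f"Answer using only the following up-to-date context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("What happened in the news today?"))
```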

To tackle this issue, Google announced that it is partnering with The Associated Press, giving Google access to the news agency's massive amount of human-generated content.
"For years, we’ve worked with The Associated Press (AP) to provide up-to-date and accurate information for features in Google Search. To build on that collaboration, the AP will now deliver a feed of real-time information to help further enhance the usefulness of results displayed in the Gemini app. This will be particularly helpful to our users looking for up-to-date information."
And according to AP Senior Vice President and Chief Revenue Officer Kristin Heitmann:
"We are pleased Google recognizes the value of AP's journalism as well as our commitment to nonpartisan reporting, in the development of its generative AI products."
AP is a leading global news organization renowned for its extensive reach and influence. It distributes its content to over 3,000 U.S. sites and 900 international sites, supporting 23 languages.
With an extensive network that reaches a diverse, global audience, AP is a significant partner for Google: its feed should make Gemini a more capable LLM when it comes to delivering real-time information.