
For years, AI was dull, boring, and barely made ripples outside its own industry.
But since OpenAI introduced ChatGPT, things have changed. Practically every tech company, large and small, has joined an arms race to either partner with or rival generative AI.
In this ever-shifting landscape, competition is fierce and the stakes are high.
To keep distinguishing itself from rivals, OpenAI has to move fast, and it has announced the successor to its powerful GPT-4 multimodal large language model.
The company calls it GPT-4o.
This time, OpenAI is going one step further by allowing users to customize the AI to their liking.

OpenAI has announced fine-tuning for its flagship GPT-4o artificial intelligence large language model, allowing developers to create custom versions of the AI for specific use cases.
At this time, GPT-4o is OpenAI's largest and most complex model, capable of responding in real time to text, audio, and video.
It can also reply to voice inputs so quickly that it's almost like speaking to another human being.
It even gained a cult following, and it initially angered actress Scarlett Johansson, who complained that one of its voices sounded just like the AI assistant she played in the film Her.
After postponing Voice Mode to make adjustments, OpenAI has pressed forward with this fine-tuning ability, which lets users adapt the already pre-trained model to suit a specific task or dataset.
GPT-4o, like pretty much all LLMs, is a pre-trained model that comes with a lot of general knowledge stored in it. The datasets such models learn from often cover a wide variety of subjects, which makes them all-purpose chatbots rather than masters of any one domain.
When fine-tuning, the goal is to adapt the model to a specialized use or knowledge domain.
Giving users the ability to customize the AI to their liking is equivalent to training an employee for a particular job, making them better and more efficient in an expert role.
For example, the model could be fine-tuned to act as a professional tutor for a college-level coding course where students learn Java.
Because the model is trained on a custom dataset that can include specialized knowledge, developers can make the AI an expert tutor for that specific programming language only, and not for other languages, like C++.
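As a rough sketch of what such a custom dataset looks like, the snippet below builds a tiny training file in the chat-format JSONL that OpenAI's fine-tuning endpoint documents (one conversation per line). The Java-tutor examples themselves are hypothetical, invented for illustration:

```python
import json

SYSTEM = ("You are a tutor for a college-level Java course. "
          "Only answer questions about Java.")

# Hypothetical training examples: each line is one conversation with
# system/user/assistant messages, teaching the model both the tutoring
# tone and the "Java only" boundary.
examples = [
    {"messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "How do I declare an array of 10 ints?"},
        {"role": "assistant",
         "content": "Use `int[] numbers = new int[10];` — this allocates "
                    "ten elements, all initialized to 0."},
    ]},
    {"messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Can you help me with C++ templates?"},
        {"role": "assistant",
         "content": "This course covers Java only, so I can't help with "
                    "C++ — but I'm happy to explain Java generics."},
    ]},
]

# Write one JSON object per line (JSONL), the format the API ingests.
with open("java_tutor_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

In practice a training set would contain dozens of such conversations rather than two, but the per-line structure stays the same.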

According to OpenAI's announcement:
"Developers can now fine-tune GPT-4o with custom datasets to get higher performance at a lower cost for their specific use cases. Fine-tuning enables the model to customize structure and tone of responses, or to follow complex domain-specific instructions. Developers can already produce strong results for their applications with as little as a few dozen examples in their training data set."
The introduction of fine-tuning for GPT-4o is a highly anticipated feature, especially for developers who have been eager to adapt the model to their unique requirements.
With it, developers can train GPT-4o on custom datasets to get higher performance at lower cost for specific use cases, or to change the tone or behavior of the model.
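For developers, kicking off a fine-tuning run looks roughly like the sketch below, using the official `openai` Python SDK (`pip install openai`). The snapshot name is the GPT-4o version OpenAI opened for fine-tuning at launch; check the current docs for up-to-date model names, and note that running this requires an `OPENAI_API_KEY` and a prepared JSONL file:

```python
FINE_TUNE_MODEL = "gpt-4o-2024-08-06"  # launch-era snapshot; may change

def start_fine_tune(client, jsonl_path: str) -> str:
    """Upload a JSONL training file and start a fine-tuning job.

    Returns the job id, which can be polled with
    client.fine_tuning.jobs.retrieve(job_id) until it finishes and
    yields a custom model name to use in chat completions.
    """
    # Upload the training data with the "fine-tune" purpose.
    with open(jsonl_path, "rb") as f:
        upload = client.files.create(file=f, purpose="fine-tune")
    # Create the fine-tuning job against the uploaded file.
    job = client.fine_tuning.jobs.create(
        training_file=upload.id,
        model=FINE_TUNE_MODEL,
    )
    return job.id
```

Usage would be along the lines of `start_fine_tune(openai.OpenAI(), "java_tutor_train.jsonl")`; the training tokens consumed by the job are what OpenAI's free-token offer covers.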
OpenAI’s announcement also includes an offer of one million free training tokens per day per organization for GPT-4o until September 23, and two million free training tokens per day over the same period for GPT-4o mini, which is also available for fine-tuning.