




OpenAI just released fine-tuning for ChatGPT: what is it and what does it do?
James Longbottom
5 min read

Let’s cut to the chase: most people are now well aware of what ChatGPT is, and over 100 million have used it.
Over the past few months OpenAI has been releasing updates, including new products and features that add new ways to interact with the large language model (LLM).
Their most recent update, expected this autumn, is built around the new “Turbo” model and the ability to fine-tune and train GPT’s output.
Users typically have to pay $20 a month for access to GPT-4, the updated, more accurate and relevant version of the LLM. However, OpenAI claims that its accuracy and performance can be matched or even outperformed by a fine-tuned version of GPT-3.5 Turbo. This requires uploading training data to retrain the language model for applications at scale.
According to OpenAI, fine-tuning improves ChatGPT’s output through:
- “Improved steerability”: better control over the language or style of the output; for example, the model could respond in other languages or in different registers if trained that way.
- Output formatting: the model can be trained to output more reliably in coding languages or structured formats, e.g. JSON that could then feed into a website or server, making it easier to integrate into other parts of the business.
- Custom tone: if used in marketing, sales or customer experience, the model could be trained to better match the tone of the brand, making its output more recognisable and consistent.
Fine-tuning can also allow users to shorten their prompts, making them more efficient and cheaper to run. Early testers managed to reduce prompt sizes by up to 90%!
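The training data itself is a set of example conversations in the chat format described in OpenAI’s fine-tuning guide: one JSON object per line (JSONL), each containing the system, user, and assistant messages you want the model to imitate. A minimal sketch of a single training example (the brand-tone content here is invented for illustration):

```python
import json

# One fine-tuning example: a complete chat exchange demonstrating the
# desired tone. The actual message content below is invented.
example = {
    "messages": [
        {"role": "system", "content": "You are Acme's friendly support assistant."},
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant", "content": "Thanks for reaching out! Let me check that for you right away."},
    ]
}

# A training file is simply one JSON object like this per line.
line = json.dumps(example)
print(line)
```

A real training file would contain dozens or hundreds of such lines, each one an example of the tone or format you want the fine-tuned model to reproduce.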
As for pricing: training on a file of around 75,000 words, with the data passed over three times, would cost around £2.
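To sanity-check that figure: OpenAI priced GPT-3.5 Turbo training at $0.008 per 1,000 tokens at launch, and a common rule of thumb is roughly 4/3 tokens per English word. A quick back-of-the-envelope estimate (the exchange rate here is an assumption):

```python
# Rough training-cost estimate for fine-tuning GPT-3.5 Turbo.
WORDS = 75_000
TOKENS_PER_WORD = 4 / 3        # rule of thumb, not exact
EPOCHS = 3                     # data passed over three times
PRICE_PER_1K_TOKENS = 0.008    # USD, launch pricing for training
USD_TO_GBP = 0.79              # assumed exchange rate

tokens = WORDS * TOKENS_PER_WORD
cost_usd = tokens / 1000 * EPOCHS * PRICE_PER_1K_TOKENS
cost_gbp = cost_usd * USD_TO_GBP
print(f"~{tokens:,.0f} tokens, ${cost_usd:.2f}, about £{cost_gbp:.2f}")
```

That works out to roughly 100,000 tokens per pass, $2.40 in total, or about £2 at the assumed rate, matching the figure above.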
Furthermore...
This is new and useful, especially as it is built into the OpenAI platform. However, it is also possible to build systems like ChatGPT into a larger architecture that draws on something called a vector database. This allows the LLM to be orchestrated by other planners and systems that add contextual data into the input.

An example of how this could be useful would be for a legal team who need to produce customer-facing documents based on past examples. The data could be added to a vector database, which is then used to feed the LLM, grounding the output in more accurate results in the form of documents that can be used in the business. This set-up moves closer to AI-based agents that can perform complex tasks with complete automation.
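As a toy sketch of the retrieval step at the heart of that architecture: past documents are stored as vectors, the incoming query is compared against them, and the closest match is prepended to the LLM prompt as context. The documents below are invented, and the bag-of-words "embedding" is a stand-in; a real system would use a proper embedding model and vector store.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    # A real system would call an embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Past documents stored in our stand-in "vector database".
documents = [
    "non disclosure agreement template for contractors",
    "customer refund policy letter",
    "employment contract for full time staff",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str) -> str:
    # Return the stored document most similar to the query.
    return max(index, key=lambda pair: cosine(embed(query), pair[1]))[0]

context = retrieve("draft a refund letter for a customer")
prompt = f"Using this past example:\n{context}\n\nDraft the new document."
print(context)
```

The retrieved document then rides along inside the prompt, so the model answers with the business’s own precedents in view rather than from its general training alone.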
Whilst this takes more development work to tailor it to a business’s processes, it has exciting implications for building intelligent automation into companies that often deal with complex, repetitive tasks.
If you want to learn more about how AI can be used to supercharge efficiency in a process, or how it can add value to any part of your business, contact Pro Ai here: info@proai.co.uk.