OpenAI has launched GPT-4o Mini, which is now the company's most cost-effective small AI model. The model is 60 percent cheaper than GPT-3.5 Turbo.
GPT-4o Mini: OpenAI's cost-effective small AI model
According to the company's announcement, one of the major highlights of the new AI model is its cost effectiveness. OpenAI says it is 60 percent cheaper than GPT-3.5 Turbo, which was previously its cheapest small model. In actual numbers, processing one million input tokens costs $0.15 (roughly Rs. 12), and one million output tokens cost $0.60 (roughly Rs. 50).
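To put those rates in perspective, here is a minimal Python sketch that estimates the bill for a single request from its token counts. Only the per-million prices come from OpenAI's announcement; the token figures in the example are hypothetical.

```python
# Rough cost estimate for one GPT-4o mini request, using the published
# rates: $0.15 per 1M input tokens and $0.60 per 1M output tokens.
INPUT_PRICE_PER_M = 0.15
OUTPUT_PRICE_PER_M = 0.60

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in US dollars for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical example: a 2,000-token prompt with a 500-token reply
print(f"${estimate_cost(2_000, 500):.6f}")  # -> $0.000600
```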
The model also offers low latency, which makes it one of the company's most efficient AI models. It currently supports text and vision, and OpenAI says it will support text, image, video, and audio as both input and output in the future.
In a blog post, OpenAI said, “We expect GPT-4o mini will significantly expand the range of applications built with AI by making intelligence much more affordable. GPT-4o mini scores 82% on MMLU and currently outperforms GPT-4 on chat preferences in LMSYS leaderboard. GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots).”
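For readers who want to try the customer-support use case mentioned in the quote, the sketch below shows one way a single chat call to the model might look with the OpenAI Python SDK. This is an illustration, not OpenAI's reference code: it assumes the `openai` package is installed, that an API key is available in the `OPENAI_API_KEY` environment variable, and that "gpt-4o-mini" is the model identifier exposed to your account.

```python
# Minimal sketch of a chat request to GPT-4o mini via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise customer-support assistant."},
        {"role": "user", "content": "My order hasn't arrived yet. What should I do?"},
    ],
)

# Print the assistant's reply text
print(response.choices[0].message.content)
```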
For more information, please keep reading techinnews.