UNLOCK THE POWER OF REASONING LLMs
With the launch of o1-preview and o1-mini, OpenAI is the first to announce models with reasoning capabilities.
Reasoning LLMs generate internal chains of thought (CoT) before producing a response.
They make logical connections between pieces of information and draw conclusions based on those connections.

What is a reasoning LLM good for?
Here are just a few examples of the many potential applications of a reasoning LLM:
- Generating tutorials or guides for complex topics by breaking down the information into smaller, more manageable pieces and providing explanations and examples.
- Developing AI-powered advice systems that can reason about the user’s goals, preferences, and constraints to provide personalized recommendations.
- Creating AI-generated art, music, or literature that is coherent and well-reasoned.
- Assisting in the development of autonomous vehicles by reasoning about the road environment, traffic rules, and potential hazards.
- Helping in the diagnosis and treatment of complex medical conditions by analyzing patient data, medical literature, and expert opinions.
If you want to tackle complex problems that require nuance, you should definitely try prompting a reasoning LLM! I asked o1-preview to give me a marketing strategy for my magazine, and it did a great job!
THE MILLION-DOLLAR QUESTION: CAN LLMs REALLY REASON?

Reasoning LLMs are designed to simulate human-like reasoning, but whether they can actually “reason” in the same way as humans is a subject of ongoing debate and research.
They are trained on vast amounts of text data and are designed to predict the likelihood of a given sequence of words or phrases. They use complex algorithms to process and generate human-like language.
There are limitations to what LLMs can do, for example:
- Lack of common sense: LLMs often lack common sense and real-world experience, which can lead to errors in reasoning.
- Limited domain knowledge: an LLM only knows what was in its training data, so it may lack the depth or breadth of knowledge and understanding that humans take for granted.
- Biases and errors: LLMs can perpetuate biases and errors present in the training data, which can impact the accuracy of their reasoning.
Did you know that you can create your own reasoning LLM? More on that soon!
Check out my short tutorial and get access to LlamaLogic, an open-source project in which a developer gave an LLM reasoning capabilities.