Blog Posts

Follow our stories and unique insights!
Contextual Function-Calling: Reducing Hidden Costs in LLM Function-Calling Systems
Function-calling in LLMs, such as OpenAI's, can drive up token costs because every registered function is included in each prompt, even when it goes unused. Contextual Function-Calling addresses this by dynamically selecting only the functions relevant to the current request, significantly reducing token overhead.
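The selection step can be as simple as ranking registered function descriptions against the user's request. Here is a minimal sketch under assumed details: an OpenAI-style tools parameter, an embedding model for the ranking, and illustrative tool names that are not from the post.

```python
import math
from openai import OpenAI

client = OpenAI()

# Illustrative registry; a real system might hold hundreds of specs.
ALL_TOOLS = [
    {"type": "function", "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of a customer order.",
        "parameters": {"type": "object",
                       "properties": {"order_id": {"type": "string"}},
                       "required": ["order_id"]},
    }},
    {"type": "function", "function": {
        "name": "create_refund",
        "description": "Issue a refund for a returned item.",
        "parameters": {"type": "object",
                       "properties": {"order_id": {"type": "string"}},
                       "required": ["order_id"]},
    }},
]

def embed(text: str) -> list[float]:
    return client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def select_tools(query: str, k: int = 1) -> list[dict]:
    """Keep only the k functions most similar to the user's request.
    In production the description embeddings would be precomputed."""
    q = embed(query)
    ranked = sorted(
        ALL_TOOLS,
        key=lambda t: cosine(q, embed(t["function"]["description"])),
        reverse=True,
    )
    return ranked[:k]

query = "Where is my order 1042?"
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": query}],
    tools=select_tools(query),  # only relevant specs enter the prompt
)
print(response.choices[0].message)
```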
Simplifying Data Extraction with OpenAI JSON Mode and JSON Schemas
This post explores using OpenAI's JSON mode to generate structured output from LLMs for easier application integration. While JSON mode improves results, the author recommends defining Data Transfer Objects (DTOs) and JSON Schemas to enforce more reliable formatting, though occasional inconsistencies still occur.
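As an illustration, here is a minimal sketch of that approach, assuming the OpenAI Python SDK and the jsonschema package; the invoice schema is invented for the example.

```python
import json
from jsonschema import validate, ValidationError
from openai import OpenAI

client = OpenAI()

# DTO-style schema: the contract the application expects.
INVOICE_SCHEMA = {
    "type": "object",
    "properties": {
        "customer": {"type": "string"},
        "total": {"type": "number"},
        "currency": {"type": "string"},
    },
    "required": ["customer", "total", "currency"],
}

response = client.chat.completions.create(
    model="gpt-4o",
    # JSON mode requires the word "JSON" to appear in the prompt.
    messages=[
        {"role": "system",
         "content": "Extract the invoice as JSON matching this schema: "
                    + json.dumps(INVOICE_SCHEMA)},
        {"role": "user", "content": "Acme Corp owes $1,250.00 USD."},
    ],
    response_format={"type": "json_object"},
)

data = json.loads(response.choices[0].message.content)
try:
    validate(instance=data, schema=INVOICE_SCHEMA)  # catch drift early
except ValidationError as err:
    # Occasional inconsistencies still occur, so validate every response.
    print("Schema violation:", err.message)
```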
Why Function-Calling GenAI Must Be Built by AI, Not Manually Coded
Function-calling systems powered by large language models require a dynamic, AI-driven approach due to their non-linear complexity, infinite input variations, and the need to adapt to model updates. Instead of relying on manual coding, capturing and refining AI-generated function sets ensures flexibility and long-term resilience in evolving GenAI applications.
User-Aligned Functions to Improve LLM-to-API Function-Calling Accuracy
Function-calling allows large language models to interact with external systems via APIs, but challenges like terminology mismatches and complex structures can affect accuracy. User-Aligned Functions (UAFs) offer a solution by simplifying these interactions.
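To make the idea concrete, here is a sketch of a User-Aligned Function: a thin wrapper that exposes the vocabulary users actually use while hiding the raw API's field names and nesting. The endpoint and field names are hypothetical, not from the post.

```python
import requests

def get_order_status(order_number: str) -> str:
    """User-aligned: 'order status', not the API's 'fulfillment state'."""
    resp = requests.get(
        "https://api.example.com/v2/fulfillments",
        params={"externalOrderRef": order_number},  # the API's own term
        timeout=10,
    )
    resp.raise_for_status()
    # Flatten the nested payload into the single value the user asked about.
    return resp.json()["fulfillment"]["stateCode"]
```

Only the simplified signature is then registered with the model, so the vocabulary in the prompt matches the user's own terms rather than the API's.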
Charting a New Path: Announcing Early Access of Gentoro, LLM to Enterprise Bridge
After recognizing the transformative potential of generative AI, the Gentoro team shifted their focus to building an AI-integration solution. Now, after two years of effort, they are offering early access to Gentoro, a middleware platform designed to help AI Agents interact seamlessly with enterprise systems while managing complexities like security and privacy.
Function-based RAG: Extending LLMs Beyond Static Knowledge Bases
Retrieval-Augmented Generation (RAG) improves large language models (LLMs) by letting them retrieve and use external data at inference time. Function-based RAG goes further by calling live functions for tasks that need up-to-the-minute information, extending LLMs beyond static knowledge bases.
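Here is a minimal sketch of that loop, assuming the OpenAI Python SDK; the get_stock_price tool and its canned price are stand-ins for a real live-data source.

```python
import json
from openai import OpenAI

client = OpenAI()

def get_stock_price(ticker: str) -> str:
    # Stand-in for a real market-data call.
    return json.dumps({"ticker": ticker, "price": 187.42})

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Latest trading price for a ticker symbol.",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    },
}]

messages = [{"role": "user", "content": "What is NVDA trading at right now?"}]
first = client.chat.completions.create(model="gpt-4o",
                                       messages=messages, tools=TOOLS)
# Assumes the model chose to call the tool; check tool_calls in real code.
call = first.choices[0].message.tool_calls[0]

# Execute the function and feed the live result back to the model.
messages.append(first.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": get_stock_price(**json.loads(call.function.arguments)),
})
final = client.chat.completions.create(model="gpt-4o",
                                       messages=messages, tools=TOOLS)
print(final.choices[0].message.content)
```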
