Integration
HyperChat™ APIs are a drop-in replacement for LLM providers in all popular LLMOps frameworks because they follow the OpenAI API specification. One example is the popular LangChain framework, for which we provide integration examples in both Python and NodeJS below.
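Because the endpoints accept the standard OpenAI chat-completions request shape, any OpenAI-compatible client can target them directly. The stdlib sketch below builds (but does not send) such a request against the base URL from the Python example; the `/chat/completions` path follows the OpenAI convention, and the `"optimization"` field is the HyperChat-specific extra shown in the examples that follow.

```python
import json
import os
import urllib.request

# OpenAI-style chat-completions body, plus HyperChat's "optimization" field.
payload = {
    "model": "hyperchat",
    "messages": [
        {"role": "user", "content": "Will humans be able to live in outer space any time soon?"}
    ],
    "optimization": "premium",  # "premium", "auto", or "fast"
}

# Build the request; the path follows the standard OpenAI endpoint layout.
req = urllib.request.Request(
    "https://api.hyperbee.ai/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('HYPERBEE_API_KEY', '<YOUR_API_KEY>')}",
    },
    method="POST",
)
print(req.full_url)
```

Sending `req` with `urllib.request.urlopen` (given a valid API key) would return a standard chat-completions response; frameworks like LangChain simply wrap this exchange.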
Python Example
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="hyperchat",
    api_key=os.environ["HYPERBEE_API_KEY"],
    base_url="https://api.hyperbee.ai/v1/",
    extra_body={"optimization": "premium"},  # "premium", "auto", or "fast"
)

llm_input = "Will humans be able to live in outer space any time soon?"
messages = [("user", llm_input)]

ai_msg = llm.invoke(messages)
print(ai_msg.content)
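Under the hood, `ai_msg.content` (and `aiMsg.content` in the NodeJS example) corresponds to `choices[0].message.content` in the OpenAI-style response body the API returns. A stdlib sketch of that extraction, using an illustrative response value:

```python
import json

# Minimal OpenAI-style chat-completions response (content is illustrative).
response_text = json.dumps({
    "choices": [
        {"message": {"role": "assistant", "content": "Hello from HyperChat."}}
    ]
})

# What the framework does for you: parse the body and pull out the message.
response = json.loads(response_text)
answer = response["choices"][0]["message"]["content"]
print(answer)  # Hello from HyperChat.
```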
NodeJS Example
This example uses a RAG call rather than a plain chat-completion call (the integration works for both), and assumes that the collection identified by <COLLECTION_NAMESPACE_ID> contains the information needed to answer the question.
import { ChatOpenAI } from "@langchain/openai";

process.env['OPENAI_BASE_URL'] = 'https://api-rag.hyperbee.ai/v1';
process.env['OPENAI_API_KEY'] = '<YOUR_API_KEY>';

const llm = new ChatOpenAI({
  model: "hyperchat",
  modelKwargs: {
    "namespace": "<COLLECTION_NAMESPACE_ID>",
    "optimization": "premium"  // "premium", "auto", or "fast"
  }
});

const aiMsg = await llm.invoke([
  ["user", "Will humans be able to live in outer space any time soon?"],
]);
console.log(aiMsg.content);
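In both SDKs the extra parameters (`extra_body` in Python, `modelKwargs` in NodeJS) are merged into the chat-completions request body, so the RAG call above effectively sends a body like the sketch below (the namespace value is the same placeholder used in the example):

```python
import json

# Request body the RAG example effectively sends: the standard
# chat-completions fields plus HyperChat's "namespace" and "optimization".
request_body = {
    "model": "hyperchat",
    "messages": [
        {"role": "user", "content": "Will humans be able to live in outer space any time soon?"}
    ],
    "namespace": "<COLLECTION_NAMESPACE_ID>",  # placeholder collection ID
    "optimization": "premium",
}
print(json.dumps(request_body, indent=2))
```

Because the extra fields ride along in the same body, switching between a plain chat call and a RAG call is just a matter of adding or removing `"namespace"`.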