Question Answering Chain
A question answering chain takes a list of documents and a question as input, and uses the language model to answer the question based on the content of those documents.
import { OpenAI } from "langchain/llms";
import { loadQAStuffChain } from "langchain/chains";
import { Document } from "langchain/document";

const llm = new OpenAI({});
const chain = loadQAStuffChain(llm);

// The documents to answer the question over.
const docs = [
  new Document({ pageContent: "harrison went to harvard" }),
  new Document({ pageContent: "ankush went to princeton" }),
];

const res = await chain.call({
  input_documents: docs,
  question: "Where did harrison go to college?",
});
console.log({ res });
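The call resolves to an object holding the generated answer, so you can read it directly (assuming the chain's default output key of text):

console.log(res.text);
// e.g. " Harrison went to Harvard."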
By default, the stuff chain concatenates all of the documents into a single prompt, so it will fail once the combined documents exceed the model's context window. If you have many documents, try the map-reduce chain instead: it runs the question against each document individually, then combines those intermediate answers into a final one.
import { OpenAI } from "langchain/llms";
import { loadQAMapReduceChain } from "langchain/chains";
import { Document } from "langchain/document";

// Optionally limit the number of concurrent requests to the language
// model to avoid rate limiting when you have many documents.
const llm = new OpenAI({ concurrency: 10 });
const chain = loadQAMapReduceChain(llm);

const docs = [
  new Document({ pageContent: "harrison went to harvard" }),
  new Document({ pageContent: "ankush went to princeton" }),
];

const res = await chain.call({
  input_documents: docs,
  question: "Where did harrison go to college?",
});
console.log({ res });
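In practice the documents usually come from splitting a longer text into chunks rather than being constructed by hand. Here is a minimal sketch of that pattern using LangChain's RecursiveCharacterTextSplitter; longText is a placeholder for your raw source string, and the chunkSize/chunkOverlap values are illustrative, not recommendations:

import { OpenAI } from "langchain/llms";
import { loadQAMapReduceChain } from "langchain/chains";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const llm = new OpenAI({ concurrency: 10 });
const chain = loadQAMapReduceChain(llm);

// Split a long source text into overlapping chunks; each chunk becomes
// a Document that the map step processes independently.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 100,
});
const docs = await splitter.createDocuments([longText]); // longText: your raw string

const res = await chain.call({
  input_documents: docs,
  question: "Where did harrison go to college?",
});
console.log({ res });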