llama-index RAG: how to display retrieved context?


I am using LlamaIndex to perform retrieval-augmented generation (RAG).

Currently, I can retrieve and answer questions using the following minimal five-line example from the LlamaIndex starter tutorial:

from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)

This returns an answer, but I would like to display the retrieved context (e.g., the document chunks or sources) before the answer.

The desired output would look something like this:

Here's my retrieved context:
[x]
[y]
[z]

And here's my answer:
[answer]

What is the simplest reproducible way to modify the five-line example to achieve this?
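For reference, here is a minimal sketch of one way to do this. It assumes the Response object returned by query_engine.query() exposes the retrieved chunks via its source_nodes attribute, a list of NodeWithScore objects, which holds in llama_index versions matching the import used above:

from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")

# Each source node wraps one retrieved chunk plus its similarity score
print("Here's my retrieved context:")
for node_with_score in response.source_nodes:
    print(node_with_score.node.get_text())

print("And here's my answer:")
print(response)

If only the context is needed, without synthesizing an answer, index.as_retriever().retrieve("...") should return the same list of NodeWithScore objects directly, skipping the LLM call.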
