llama index - Ollama, llama_index, HuggingFaceEmbedding, VectorStoreIndex, Python - getting message "Empty Response"

I have Ollama installed locally, and I am able to run ollama run tinyllama from the command prompt and ask the LLM questions there. But when I run the code below in Python, it does not return an answer; it only prints "Empty Response". Here is the code:

from llama_index.llms.ollama import Ollama
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

documents = SimpleDirectoryReader("data").load_data()
print(f"Loaded {len(documents)} documents.")

# bge-base embedding model from Hugging Face
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")

# LLM served by the local Ollama instance
Settings.llm = Ollama(model="tinyllama:latest", request_timeout=360.0)

index = VectorStoreIndex.from_documents(
    documents,
    embed_model=Settings.embed_model
)

query_engine = index.as_query_engine()
print("Query engine created.")

query = "What is the capital of France?"
print(f"Executing query: {query}")
response = query_engine.query(query) 

print(response)
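
As far as I can tell, llama_index returns the literal string "Empty Response" when response synthesis receives no usable output, so to narrow down where this fails, here is a minimal sketch that checks each stage separately when appended after the script above (assuming the same tinyllama model and the default local Ollama endpoint; the prompt text and the 80-character preview length are arbitrary choices of mine):

from llama_index.llms.ollama import Ollama

# 1. Check the LLM directly, outside the query engine, to see whether
#    the Python client can reach the local Ollama server at all.
llm = Ollama(model="tinyllama:latest", request_timeout=360.0)
completion = llm.complete("Reply with one word: hello")
print(f"Direct completion: {completion.text!r}")

# 2. Check retrieval: inspect which nodes (if any) were retrieved for
#    the query, and with what similarity scores.
response = query_engine.query("What is the capital of France?")
print(f"Retrieved {len(response.source_nodes)} source nodes")
for node_with_score in response.source_nodes:
    preview = node_with_score.node.get_content()[:80]
    print(f"score={node_with_score.score}: {preview!r}")

If step 1 already prints nothing, the problem would be between the Python client and the Ollama server rather than in the index or the embeddings.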
