Trying to call my fine-tuned model using google-genai but it keeps reverting to the base model (gemini-1.5-flash-001-tuning)

I successfully fine-tuned the base model gemini-1.5-flash-001-tuning using the latest version of the google-genai package (version 1.3.0). I am then trying to generate content with my fine-tuned model, but when I check the model response details it always states that the model version is "gemini-1.5-flash-001-tuning" instead of my tuned model name. Additionally, the response content itself does not show the behaviour I expect from my fine-tuned model.

I am doing all this in a Google Colab notebook. Here is my python code:

from google import genai

genai_client = genai.Client()  # picks up GOOGLE_API_KEY from the environment

# confirming the fine-tuned model exists, which it does
# (query_base=False restricts the listing to tuned models)
for model in genai_client.models.list(config={'page_size': 10, 'query_base': False}):
    print(model)


tuned_model = genai_client.models.get(model='tunedModels/my-tuned-model')

print(tuned_model)

# I've tried explicitly passing the name string here as well as 
# tuned_model.name and also tuning_job.tuned_model.model
response = genai_client.models.generate_content(
    model='tunedModels/my-tuned-model',
    contents='Tell me about yourself'
)

print(response.model_dump_json(
    exclude_none=True, indent=4))

The output of response.model_dump_json:

{
    "candidates": [
        {
            "content": {
                "parts": [
                    {
                        "text": "..."
                    }
                ],
                "role": "model"
            },
            "finish_reason": "STOP",
            "index": 0,
            "safety_ratings": [
                {
                    "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                    "probability": "NEGLIGIBLE"
                },
                {
                    "category": "HARM_CATEGORY_HATE_SPEECH",
                    "probability": "NEGLIGIBLE"
                },
                {
                    "category": "HARM_CATEGORY_HARASSMENT",
                    "probability": "NEGLIGIBLE"
                },
                {
                    "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                    "probability": "NEGLIGIBLE"
                }
            ]
        }
    ],
    "model_version": "gemini-1.5-flash-001-tuning",
    "usage_metadata": {
        "candidates_token_count": 118,
        "prompt_token_count": 7,
        "total_token_count": 125
    },
    "automatic_function_calling_history": []
}

I have tried refreshing the client in case of any caching issues, restarted my notebook kernel, etc., but I am not sure why it keeps using the base 1.5 Flash model rather than the fine-tuned one I am specifying.
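To catch the mismatch automatically in my notebook rather than eyeballing the JSON, I added a small check comparing the model id I requested with the model_version the response reports (the helper name is my own, not part of the SDK):

```python
def same_model(requested: str, reported: str) -> bool:
    """Return True if the reported model_version plausibly matches the
    requested model id, ignoring resource prefixes like 'tunedModels/'
    or 'models/'."""
    tail = lambda s: s.rsplit('/', 1)[-1]
    return tail(requested) == tail(reported)

# The mismatch I am seeing:
print(same_model('tunedModels/my-tuned-model', 'gemini-1.5-flash-001-tuning'))  # False
# What I would expect if the tuned model had served the request:
print(same_model('tunedModels/my-tuned-model', 'my-tuned-model'))               # True
```

This only strips the resource-path prefix, so it would report a false match if a tuned model happened to share its short name with a base model; for my purposes that is good enough.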

I have been following the tutorials in the docs, e.g.:

  • /
  • /gemini-api/docs/sdks
  • .html#module-genai.tunings
  • /gemini-api/docs/model-tuning/tutorial?lang=python

Note that I am using the GenAI SDK, not the Vertex AI version of it.
