YOLOv11 Model Converted to TFLite Not Producing Correct Output in TensorFlow

I'm training an ALPR detection model using the dataset from Roboflow ALPR with YOLOv11, converted to TFLite using:

import ultralytics as yolo
!yolo detect export model=/content/runs/detect/yolov11_anpr/weights/best.pt imgsz=640 batch=1 format=tflite

My Current Python Inference Code (Ultralytics YOLO)

Both the .pt and .tflite models work correctly in Ultralytics' inference pipeline:

from PIL import Image
from ultralytics import YOLO

image = Image.open("/content/Screenshot From 2025-03-08 16-37-15.png")
model = YOLO('/content/runs/detect/yolov11_anpr/weights/best_saved_model/best_float32.tflite')
results = model(image)

result = results[0]
result.show()

This successfully detects Persian numbers; the screenshot of the correct Ultralytics detections is omitted here.

Problem

However, direct inference with TensorFlow (without Ultralytics) doesn't produce correct detections. The output data is incorrect or missing entirely.

Questions:

  • Why does inference using Ultralytics YOLO work, but direct TensorFlow inference doesn't?

  • What preprocessing or post-processing steps am I missing for YOLOv11 TFLite inference with TensorFlow?

Any insights or solutions to correctly use the TFLite model directly with TensorFlow would be greatly appreciated!

You can download and test my TFLite model with the link below:
https://drive.google.com/file/d/1p4CaFl9g2gPjGUd68xlr_EQlxz-umTre/view?usp=sharing

asked Mar 10 at 22:54 by farid

1 Answer


For preprocessing, you need to load the image via OpenCV and then perform the following steps:

  • Convert BGR to RGB

  • Resize the image to your model's expected input image size (in my case, 640x640)

  • Normalize the values by dividing them by 255

  • Expand the dimensions to add a batch axis.

Here is the code:

import cv2
import numpy as np

# Load and preprocess image
def preprocess_image(image_path, input_shape):
    image = cv2.imread(image_path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)               # BGR -> RGB
    image = cv2.resize(image, (input_shape[1], input_shape[2]))  # resize to (width, height)
    image = image.astype(np.float32) / 255.0  # Normalize to [0, 1]
    image = np.expand_dims(image, axis=0)     # Add batch dimension
    return image
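
You then feed this preprocessed tensor to TensorFlow's TFLite interpreter; here is a minimal sketch of that step (the model path is the one from the question, the image path is illustrative), which produces the output_data used in the postprocessing below:

import tensorflow as tf

# Load the TFLite model and allocate its tensors
interpreter = tf.lite.Interpreter(
    model_path="/content/runs/detect/yolov11_anpr/weights/best_saved_model/best_float32.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Preprocess with the model's expected input shape, e.g. (1, 640, 640, 3)
input_data = preprocess_image("image.jpg", input_details[0]['shape'])

# Run inference and fetch the raw predictions
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])  # e.g. (1, 6, 8400)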

For postprocessing, you need to inspect the model's output dimensions; in my case, the shape was (1, 6, 8400). Here, 1 is the batch size and 6 = 4 + the number of classes: for each of the 8400 candidate boxes, the first four values are, respectively, the normalized x center, y center, width, and height, and the remaining rows are the per-class confidence scores.

Here is how I am doing it in my project:


image_width, image_height = image_shape  # original image size, for scaling boxes back to pixels

detections = output_data[0]  # Shape: (6, 8400)
xc = detections[0]  # (8400,) - normalized center x
yc = detections[1]  # (8400,) - normalized center y
w = detections[2]   # (8400,) - normalized width
h = detections[3]   # (8400,) - normalized height
confs = detections[4:]  # (num_classes, 8400) - per-class confidence scores


# Apply confidence threshold (e.g., 0.5)
threshold = 0.5
# x_min, y_min, x_max, y_max, class_id, confidence
boxes = []

for class_id, conf in enumerate(confs):
    for i in range(len(conf)):
        if conf[i] > threshold:
            # Convert to pixel coordinates
            x_min = int((xc[i] - (w[i] / 2)) * image_width)
            y_min = int((yc[i] - (h[i] / 2)) * image_height)
            x_max = int((xc[i] + (w[i] / 2)) * image_width)
            y_max = int((yc[i] + (h[i] / 2)) * image_height)

            boxes.append([x_min, y_min, x_max, y_max, class_id, conf[i]])
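
As a side note, the nested Python loop above can be slow over 8400 anchors per class; an equivalent vectorized NumPy sketch, using the same variable names, would be:

# Find all (class, anchor) pairs above the threshold at once
class_ids, anchor_idx = np.where(confs > threshold)

# Convert the surviving boxes to pixel corner coordinates
x1 = ((xc[anchor_idx] - w[anchor_idx] / 2) * image_width).astype(int)
y1 = ((yc[anchor_idx] - h[anchor_idx] / 2) * image_height).astype(int)
x2 = ((xc[anchor_idx] + w[anchor_idx] / 2) * image_width).astype(int)
y2 = ((yc[anchor_idx] + h[anchor_idx] / 2) * image_height).astype(int)

boxes = [[int(a), int(b), int(c), int(d), int(k), float(s)]
         for a, b, c, d, k, s in zip(x1, y1, x2, y2, class_ids,
                                     confs[class_ids, anchor_idx])]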

After you've got your output in the desired format, you may perform NMS (non-maximum suppression) on the detected boxes to get rid of overlapping detections, as sketched below.
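
OpenCV ships a ready-made NMS you can use here; a minimal sketch (the thresholds are illustrative, and note that cv2.dnn.NMSBoxes expects boxes in [x, y, width, height] form):

# Convert [x_min, y_min, x_max, y_max, ...] boxes to [x, y, w, h] for NMSBoxes
nms_input = [[x1, y1, x2 - x1, y2 - y1] for x1, y1, x2, y2, _, _ in boxes]
scores = [float(b[5]) for b in boxes]

# Keep only the box indices that survive non-maximum suppression
keep = cv2.dnn.NMSBoxes(nms_input, scores, score_threshold=0.5, nms_threshold=0.45)
final_boxes = [boxes[i] for i in np.array(keep).reshape(-1)]

With final_boxes in hand, annotate your image using OpenCV: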

# Visualize the final boxes on the image
image = cv2.imread('image.jpg')

# final_boxes holds (x_min, y_min, x_max, y_max, class_id, score) entries
for box in final_boxes:
    x_min, y_min, x_max, y_max, class_id, score = box

    print(x_min, y_min, x_max, y_max, score)
    
    # Draw bounding box
    cv2.rectangle(image, (x_min, y_min), (x_max, y_max), (0, 255, 0), 2)
    
    # Draw score text
    cv2.putText(image, f"{class_id} {score:.2f}", (x_min, y_min - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)    


cv2.imwrite('output.jpg', image)

For the full code, check out my repo:
https://github.com/TanimSk/Ultralytics-ODM
