I'm building a RAG system for a platform whose primary content is videos and slides. My approach is to extract keyframes from the videos using OpenCV-based scene change detection:

```python
import cv2  # prev_image and curr_image are consecutive BGR frames from the video

diff = cv2.absdiff(prev_image, curr_image)
gray_diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
mean_diff = cv2.mean(gray_diff)[0]  # average per-pixel difference, used to flag a scene change
```

For each keyframe, I generate a caption and apply OCR to extract any embedded text.
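For the OCR part of that step, the per-frame text extraction looks roughly like this (a minimal sketch assuming Tesseract via pytesseract; the helper name and the binarization preprocessing are my own choices, and the captioning model is separate and omitted here):

```python
import cv2
import pytesseract  # assumes the Tesseract binary is installed and on PATH

def extract_frame_text(frame_bgr):
    """Run OCR on a keyframe; binarizing first usually helps with slide text."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)
```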
However, a major issue I encounter is that many extracted frames are just shots of the speaker, which are not useful for my application. Since scene change detection works by comparing consecutive frames, it often captures moments where the speaker moves slightly, rather than focusing on slides, graphs, or other informative visuals.
I considered filtering out frames containing faces, but the problem is that many useful frames (e.g., slides, graphs) also contain the speaker’s face. Simply discarding frames with faces would result in losing valuable content.
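For context, the face filter I considered is roughly the following (a minimal sketch using OpenCV's bundled Haar cascade; `contains_face` is a placeholder helper name), which illustrates why it is too coarse: it flags slides that merely include the speaker just as readily as pure speaker shots.

```python
import cv2

# Frontal-face Haar cascade shipped with opencv-python
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def contains_face(frame_bgr):
    """Return True if any face is detected in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```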
How can I refine the keyframe extraction process to prioritize frames containing meaningful content, such as slides, graphs, or images, while filtering out those that primarily show the speaker?