Learning the Python MediaPipe Module

jinyu · 2025-10-24 · python


MediaPipe is a cross-platform, open-source machine learning framework developed by Google that helps developers quickly build applications with real-time multimedia processing. The Python version of MediaPipe gives Python developers a convenient interface to its capabilities, such as human pose detection, hand tracking, and face detection.

Installing the MediaPipe Module on Windows

To use MediaPipe from Python, you first need to install the library. It can be installed with the following command:

pip install mediapipe
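
After installing, you can verify that the package imports correctly and check which version you got:

python -c "import mediapipe as mp; print(mp.__version__)"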

Basic Concepts

Pipeline
MediaPipe processes data streams through pipelines. A pipeline consists of a series of nodes, each responsible for a specific task such as reading data, preprocessing, model inference, or post-processing. Data flows through the nodes in order, and the final node outputs the result.

Node
A node is the basic processing unit in a pipeline: it receives input data, processes it, and emits output. MediaPipe provides many node types, such as image-reading nodes, model-inference nodes, and visualization nodes.

Stream
A stream is the sequence of data passed between nodes. In MediaPipe a stream can carry images, video frames, landmark coordinates, and so on; different node types consume and produce different kinds of streams, as the short sketch below illustrates.
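
In the Python Solutions API, all three concepts are wrapped behind a single solution object: constructing it builds the pipeline, each frame you feed it enters the input stream, and the returned results are the output stream. A minimal sketch of this pattern, assuming a local image file named example.jpg:

import cv2
import mediapipe as mp

# The solution object wraps an entire pipeline: internal nodes handle
# preprocessing, model inference, and post-processing.
with mp.solutions.hands.Hands(static_image_mode=True) as hands:
    image = cv2.imread('example.jpg')  # input stream: a single image
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    # Output stream: landmark lists, or None if nothing was detected
    print(results.multi_hand_landmarks)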

1. Hand Tracking with MediaPipe

import cv2
import mediapipe as mp

# Initialize the MediaPipe hands module
mp_hands = mp.solutions.hands
hands = mp_hands.Hands()
mp_drawing = mp.solutions.drawing_utils
"""
mp_hands.Hands(): creates the hand detector; the default parameters are:

static_image_mode=False: video mode (better suited to real-time detection)

max_num_hands=2: detect at most two hands

min_detection_confidence=0.5: detection confidence threshold

min_tracking_confidence=0.5: tracking confidence threshold
"""
# Open the camera
cap = cv2.VideoCapture(0)

while cap.isOpened():
    success, image = cap.read()
    if not success:
        print("Failed to read from the camera")
        continue

    # Convert the color space
    image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
    image.flags.writeable = False
    results = hands.process(image)
    """
    cv2.flip(image, 1): flips the frame horizontally so it reads like a mirror

    cv2.COLOR_BGR2RGB: converts BGR to RGB (MediaPipe expects RGB input)

    image.flags.writeable = False: marks the array read-only to improve performance

    hands.process(image): runs detection and returns the results
    """

    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

    # If any hands were detected
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            # Draw the hand landmarks and connections
            mp_drawing.draw_landmarks(
                image, hand_landmarks, mp_hands.HAND_CONNECTIONS)
            """
            results.multi_hand_landmarks: landmarks for every detected hand

            Each hand has 21 landmarks (0-20):
            0: wrist   1-4: thumb  5-8: index finger    9-12: middle finger   13-16: ring finger  17-20: little finger

            draw_landmarks(): draws the landmarks and the connections between them
            """
    # Show the frame
    cv2.imshow('Mediapipe Hands', image)
    if cv2.waitKey(5) & 0xFF == 27:
        break
    """
    cv2.imshow(): displays the processed frame

    cv2.waitKey(5): waits 5 ms and polls the keyboard

    27: the ASCII code for ESC; pressing ESC exits the program
    """
# Release resources and close all windows
hands.close()
cap.release()
cv2.destroyAllWindows()
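
Each landmark's x and y values are normalized to [0, 1], so they must be multiplied by the frame width and height to get pixel positions. A small helper sketch (the function name is my own) that could be used inside the loop above:

import cv2
import mediapipe as mp

def fingertip_px(hand_landmarks, frame):
    """Convert the index fingertip's normalized coordinates to pixel coordinates."""
    h, w = frame.shape[:2]
    tip = hand_landmarks.landmark[mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP]
    return int(tip.x * w), int(tip.y * h)

# Usage inside the detection loop above:
#     cx, cy = fingertip_px(hand_landmarks, image)
#     cv2.circle(image, (cx, cy), 8, (255, 0, 0), -1)  # highlight the fingertip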

2. Human Pose Detection with MediaPipe

import cv2
import mediapipe as mp

# Initialize the MediaPipe pose module
mp_pose = mp.solutions.pose
pose = mp_pose.Pose()
mp_drawing = mp.solutions.drawing_utils
"""
mp_pose: the MediaPipe pose-detection module

pose: the pose detector instance, created with default parameters

mp_drawing: utility for drawing detection results onto the image
"""
# Open the camera
cap = cv2.VideoCapture(0)

while cap.isOpened():
    success, image = cap.read()
    if not success:
        print("Failed to read from the camera")
        continue

    # Convert the color space
    image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
    image.flags.writeable = False
    results = pose.process(image)
    """
    cv2.flip(image, 1): flips the frame horizontally for a more natural mirror view

    cv2.COLOR_BGR2RGB: converts BGR to RGB (MediaPipe expects RGB input)

    image.flags.writeable = False: marks the array read-only to improve performance

    pose.process(image): runs pose detection on the frame
    """

    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    """
    Restores write access and converts back from RGB to BGR (OpenCV uses BGR)
    """
    # If a pose was detected
    if results.pose_landmarks:
        # Draw the pose landmarks and connections
        mp_drawing.draw_landmarks(
            image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
    """
    results.pose_landmarks: 33 landmark coordinates when a pose is detected
    draw_landmarks(): draws the landmarks (joint positions) and connections (skeleton)
    """
    # Show the frame
    cv2.imshow('Mediapipe Pose', image)
    if cv2.waitKey(5) & 0xFF == 27:
        break
    """
    cv2.imshow(): displays the processed frame

    cv2.waitKey(5): waits 5 ms and polls the keyboard

    & 0xFF == 27: checks for the ESC key (ASCII 27) and exits the loop if pressed
    """
# Release resources and close all windows
pose.close()
cap.release()
cv2.destroyAllWindows()
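
The 33 landmarks can also be combined into higher-level measurements. As a hedged sketch (not part of the original code), the angle at a joint can be computed from three landmarks, for example shoulder, elbow, and wrist for the elbow angle:

import math

def joint_angle(a, b, c):
    """Angle at b (in degrees) formed by landmarks a-b-c, using normalized x/y."""
    ang = math.degrees(
        math.atan2(c.y - b.y, c.x - b.x) - math.atan2(a.y - b.y, a.x - b.x))
    return abs(ang) if abs(ang) <= 180 else 360 - abs(ang)

# Usage inside the loop, after a successful detection:
#     lm = results.pose_landmarks.landmark
#     angle = joint_angle(lm[mp_pose.PoseLandmark.LEFT_SHOULDER],
#                         lm[mp_pose.PoseLandmark.LEFT_ELBOW],
#                         lm[mp_pose.PoseLandmark.LEFT_WRIST])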

3. Face Detection with MediaPipe

import cv2
import mediapipe as mp

# Initialize the MediaPipe face-detection module
mp_face_detection = mp.solutions.face_detection
face_detection = mp_face_detection.FaceDetection()
mp_drawing = mp.solutions.drawing_utils
"""
mp_face_detection: the face-detection module

face_detection: the face detector instance

mp_drawing: utility for drawing detection results onto the image
"""
# Open the camera
cap = cv2.VideoCapture(0)

while cap.isOpened():
    success, image = cap.read()
    """
    Loop until the camera is closed.

    cap.read(): reads one frame from the camera
    success: boolean, whether the frame was read successfully
    image: the frame data
    """
    if not success:
        print("Failed to read from the camera")
        continue

    # Convert the color space
    image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
    image.flags.writeable = False
    results = face_detection.process(image)
    """
    cv2.flip(image, 1): flips the frame horizontally (mirror view)

    cv2.cvtColor(..., cv2.COLOR_BGR2RGB): converts BGR to RGB (MediaPipe expects RGB)

    image.flags.writeable = False: marks the array read-only to improve performance

    face_detection.process(image): runs face detection on the frame
    """
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

    # If any faces were detected
    if results.detections:
        for detection in results.detections:
            # Draw the face bounding box and key points
            mp_drawing.draw_detection(image, detection)
    """
    Restores write access to the image

    Converts back to BGR (OpenCV's default format)

    If faces were detected (results.detections), iterates over each detection

    mp_drawing.draw_detection(): draws the face bounding box and key points
    """
    # Show the frame
    cv2.imshow('Mediapipe Face Detection', image)
    if cv2.waitKey(5) & 0xFF == 27:
        break
    """
    cv2.imshow(): displays the processed frame

    cv2.waitKey(5): waits 5 ms and polls the keyboard

    & 0xFF == 27: checks for the ESC key (ASCII 27)

    If ESC was pressed, exits the loop
    """

# Release resources and close all windows
face_detection.close()
cap.release()
cv2.destroyAllWindows()
"""
cap.release(): releases the camera

cv2.destroyAllWindows(): closes all OpenCV windows
"""

4. Human Skeleton Detection with MediaPipe

(1) Detection on an image

import cv2
import mediapipe as mp

# mp.solutions.drawing_utils is used for drawing
mp_drawing = mp.solutions.drawing_utils

# Parameters: 1. color, 2. line thickness, 3. point radius
DrawingSpec_point = mp_drawing.DrawingSpec((0, 255, 0), 2, 2)  # green points, thickness 2, radius 2
DrawingSpec_line = mp_drawing.DrawingSpec((0, 0, 255), 2, 2)   # red lines, thickness 2

# mp.solutions.pose is the human skeleton module
mp_pose = mp.solutions.pose

# Parameters: 1. static-image mode, 2. model complexity, 3. smooth landmarks (video only), 4. detection threshold, 5. tracking threshold
pose_mode = mp_pose.Pose(static_image_mode=True)

file = 'img.png'
image = cv2.imread(file)  # read the image
image_height, image_width, _ = image.shape  # get the image dimensions
image1 = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # BGR to RGB

# Process the RGB image
results = pose_mode.process(image1)

'''
The mp_pose.PoseLandmark class defines 33 body landmarks.
'''
if results.pose_landmarks:
    print(
        f'Nose coordinates: ('
        f'{results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE].x * image_width}, '
        f'{results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE].y * image_height})'
    )

# Draw
mp_drawing.draw_landmarks(
    image,                    # target image
    results.pose_landmarks,   # detected pose landmarks
    mp_pose.POSE_CONNECTIONS, # predefined skeleton connections
    DrawingSpec_point,        # landmark drawing style
    DrawingSpec_line)         # connection drawing style

cv2.imshow('image', image)      # show the image
cv2.waitKey(0)                  # wait for a key press
cv2.imwrite('image-pose.jpg', image)  # save the result
pose_mode.close()               # release resources
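
If you are unsure which index corresponds to which body part, the PoseLandmark enum can be listed directly; a quick sketch:

import mediapipe as mp

# Print all 33 pose landmark indices and their names (NOSE, LEFT_SHOULDER, ...)
for lm in mp.solutions.pose.PoseLandmark:
    print(lm.value, lm.name)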

(2) Detection on a video

import cv2
import mediapipe as mp

# mp.solutions.drawing_utils is used for drawing
mp_drawing = mp.solutions.drawing_utils

# Parameters: 1. color, 2. line thickness, 3. point radius
DrawingSpec_point = mp_drawing.DrawingSpec((0, 255, 0), 1, 1)
DrawingSpec_line = mp_drawing.DrawingSpec((0, 0, 255), 1, 1)

# mp.solutions.pose is the human skeleton module
mp_pose = mp.solutions.pose

# Parameters: 1. static-image mode, 2. model complexity, 3. smooth landmarks (video only), 4. detection threshold, 5. tracking threshold
pose_mode = mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5)

cap = cv2.VideoCapture('input.mp4')
while cap.isOpened():
    success, image = cap.read()
    if not success:
        print("End of video or empty frame.")
        break  # for a video file, stop at the last frame instead of looping forever
    image1 = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Process the RGB image
    results = pose_mode.process(image1)

    '''
    The mp_pose.PoseLandmark class defines 33 body landmarks.
    '''

    # Draw
    mp_drawing.draw_landmarks(
        image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS, DrawingSpec_point, DrawingSpec_line)

    cv2.imshow('MediaPipe Pose', image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

pose_mode.close()
cap.release()
cv2.destroyAllWindows()
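
If you want to keep the annotated video rather than just display it, the frames can be written back out with cv2.VideoWriter. A sketch, where the output filename and codec are assumptions:

import cv2

# Set up once, before the loop, using the input video's properties.
cap = cv2.VideoCapture('input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 if the file reports 0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'),
                         fps, (width, height))

# Inside the loop, after draw_landmarks():
#     writer.write(image)
# After the loop:
#     writer.release()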

5. Hand Landmark Detection with MediaPipe

(1) Detection on an image

import cv2
import mediapipe as mp

# mp.solutions.drawing_utils is used for drawing
mp_drawing = mp.solutions.drawing_utils

# Parameters: 1. color, 2. line thickness, 3. point radius
DrawingSpec_point = mp_drawing.DrawingSpec((0, 255, 0), 5, 5)  # green points, thickness 5, radius 5
DrawingSpec_line = mp_drawing.DrawingSpec((0, 0, 255), 5, 5)   # red lines, thickness 5

# mp.solutions.hands is the hand module
mp_hands = mp.solutions.hands

# Parameters: 1. static-image mode, 2. number of hands, 3. detection threshold, 4. tracking threshold
hands_mode = mp_hands.Hands(static_image_mode=True, max_num_hands=2)

file = 'input.jpg'
image = cv2.imread(file)  # read the image
image_height, image_width, _ = image.shape  # get the image dimensions
image1 = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # BGR to RGB

# Process the RGB image
results = hands_mode.process(image1)

print('Handedness:', results.multi_handedness)  # left/right-hand information
if results.multi_hand_landmarks:  # guard: multi_hand_landmarks is None when no hand is found
    for hand_landmarks in results.multi_hand_landmarks:
        # Print all landmark coordinates
        print('hand_landmarks:', hand_landmarks)
        # Print the index fingertip position (normalized coordinates converted to pixels)
        print(
            f'Index finger tip coordinates: ('
            f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x * image_width}, '
            f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y * image_height})'
        )
        mp_drawing.draw_landmarks(
            image,  # target image
            hand_landmarks,  # detected landmarks
            mp_hands.HAND_CONNECTIONS,  # predefined hand connections
            DrawingSpec_point,  # landmark drawing style
            DrawingSpec_line)  # connection drawing style

cv2.imshow('image', image)      # show the image
cv2.waitKey(0)                  # wait for a key press
cv2.imwrite('image-hands.jpg', image)  # save the result
hands_mode.close()              # release resources
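
results.multi_handedness lines up with results.multi_hand_landmarks by index, and each entry carries a label and a confidence score. A short sketch for reading them together:

# Pair each set of landmarks with its handedness classification.
if results.multi_hand_landmarks and results.multi_handedness:
    for landmarks, handed in zip(results.multi_hand_landmarks,
                                 results.multi_handedness):
        label = handed.classification[0].label   # 'Left' or 'Right'
        score = handed.classification[0].score   # confidence in [0, 1]
        print(f'{label} hand ({score:.2f})')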

(2) Detection on a video

import cv2
import mediapipe as mp

# mp.solutions.drawing_utils is used for drawing
mp_drawing = mp.solutions.drawing_utils

# Parameters: 1. color, 2. line thickness, 3. point radius
DrawingSpec_point = mp_drawing.DrawingSpec((0, 255, 0), 1, 1)
DrawingSpec_line = mp_drawing.DrawingSpec((0, 0, 255), 1, 1)

# mp.solutions.hands is the hand module
mp_hands = mp.solutions.hands

# Parameters: 1. static-image mode, 2. number of hands, 3. detection threshold, 4. tracking threshold
hands_mode = mp_hands.Hands(max_num_hands=2)

cap = cv2.VideoCapture('input.mp4')
while cap.isOpened():
    success, image = cap.read()
    if not success:
        print("End of video or empty frame.")
        break  # for a video file, stop at the last frame instead of looping forever
    image1 = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Process the RGB image
    results = hands_mode.process(image1)

    # Draw
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            mp_drawing.draw_landmarks(
                image, hand_landmarks, mp_hands.HAND_CONNECTIONS, DrawingSpec_point, DrawingSpec_line)

    cv2.imshow('MediaPipe Hands', image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

hands_mode.close()
cap.release()
cv2.destroyAllWindows()

6. A Combined Project with MediaPipe and PyQt6

(1) The complete code

import sys
import cv2
import mediapipe as mp
from PyQt6.QtCore import *
from PyQt6.QtWidgets import *
from PyQt6.QtGui import QImage, QPixmap

class MainWindow(QWidget):
    def __init__(self):
        super().__init__()
        main_layout = QVBoxLayout()
        main_layout.addWidget(self.bu())
        main_layout.addWidget(self.out_put())

        self.setLayout(main_layout)
        self.setWindowTitle("cv2 + MediaPipe demo")
        self.resize(1000, 600)

    def bu(self):
        group = QGroupBox("Buttons")
        layout = QHBoxLayout()

        self.b1 = QPushButton("open")
        self.b1.clicked.connect(self.bu_1)
        self.b2 = QPushButton("close")
        self.b2.clicked.connect(self.bu_2)
        self.b2.setDisabled(True)
        self.b3 = QPushButton("face")
        self.b3.clicked.connect(self.bu_3)
        self.b3.setDisabled(True)
        self.b4 = QPushButton("hand")
        self.b4.clicked.connect(self.bu_4)
        self.b4.setDisabled(True)
        self.b5 = QPushButton("body")
        self.b5.clicked.connect(self.bu_5)
        self.b5.setDisabled(True)

        self.b1.setMinimumHeight(100)
        self.b2.setMinimumHeight(100)
        self.b3.setMinimumHeight(100)
        self.b4.setMinimumHeight(100)
        self.b5.setMinimumHeight(100)

        self.b1.setMaximumWidth(100)
        self.b2.setMaximumWidth(100)
        self.b3.setMaximumWidth(100)
        self.b4.setMaximumWidth(100)
        self.b5.setMaximumWidth(100)

        layout.addWidget(self.b1)
        layout.addWidget(self.b2)
        layout.addWidget(self.b3)
        layout.addWidget(self.b4)
        layout.addWidget(self.b5)

        group.setLayout(layout)
        return group

    def bu_1(self):
        self.b1.setDisabled(True)
        self.b2.setDisabled(False)
        self.b3.setDisabled(False)
        self.b4.setDisabled(False)
        self.b5.setDisabled(False)
        self.open = video()
        self.open.start()
        self.open.finished.connect(self.bu_1_off)
        self.open.mess_sign.connect(self.update_output)
        self.open.frame_sign.connect(self.update_video)

    def bu_1_off(self):
        self.b1.setDisabled(False)
        self.b2.setDisabled(True)
        self.b3.setDisabled(True)
        self.b4.setDisabled(True)
        self.b5.setDisabled(True)

    def bu_2(self):
        # Clean up whichever detection thread is currently running
        if hasattr(self, 'open_face') and self.open_face is not None:
            self.open_face.cleanup()
            self.open_face = None
            self.b1.setDisabled(True)
            self.b2.setDisabled(False)
            self.b3.setDisabled(False)
            self.b4.setDisabled(False)
            self.b5.setDisabled(False)
            if hasattr(self, 'open') and self.open is not None:
                self.open.pause_frame_update = False
            
        elif hasattr(self, 'open_hand') and self.open_hand is not None:
            self.open_hand.cleanup()
            self.open_hand = None
            self.b1.setDisabled(True)
            self.b2.setDisabled(False)
            self.b3.setDisabled(False)
            self.b4.setDisabled(False)
            self.b5.setDisabled(False)
            if hasattr(self, 'open') and self.open is not None:
                self.open.pause_frame_update = False

        elif hasattr(self, 'open_body') and self.open_body is not None:
            self.open_body.cleanup()
            self.open_body = None
            self.b1.setDisabled(True)
            self.b2.setDisabled(False)
            self.b3.setDisabled(False)
            self.b4.setDisabled(False)
            self.b5.setDisabled(False)
            if hasattr(self, 'open') and self.open is not None:
                self.open.pause_frame_update = False

        elif hasattr(self, 'open') and self.open is not None:
            self.open.cleanup()
            self.open = None
            self.b1.setDisabled(False)
            self.b2.setDisabled(True)
            self.b3.setDisabled(True)
            self.b4.setDisabled(True)
            self.b5.setDisabled(True)
            self.video_display.clear()


    def bu_3(self):
        self.b1.setDisabled(True)
        self.b2.setDisabled(False)
        self.b3.setDisabled(True)
        self.b4.setDisabled(False)
        self.b5.setDisabled(False)

        if hasattr(self, 'open_hand') and self.open_hand is not None:
            self.open_hand.cleanup()
            self.open_hand = None
        if hasattr(self, 'open_body') and self.open_body is not None:
            self.open_body.cleanup()
            self.open_body = None

        # Create the face-detection thread
        self.open_face = face(self.open)
        self.open_face.start()
        self.open_face.finished.connect(self.bu_3_off)
        self.open_face.mess_sign.connect(self.update_output)
        self.open_face.frame_sign.connect(self.update_video)
        
        # Pause frame emission from the video thread
        if hasattr(self, 'open') and self.open is not None:
            self.open.pause_frame_update = True
            
    def bu_3_off(self):
        if hasattr(self, 'open_body') and self.open_body is None:
            if hasattr(self, 'open_hand') and self.open_hand is None:
                if hasattr(self, 'open_face') and self.open_face is None:
                    if hasattr(self, 'open') and self.open is not None:
                        self.open.pause_frame_update = False

    def bu_4(self):
        self.b1.setDisabled(True)
        self.b2.setDisabled(False)
        self.b3.setDisabled(False)
        self.b4.setDisabled(True)
        self.b5.setDisabled(False)

        if hasattr(self, 'open_face') and self.open_face is not None:
            self.open_face.cleanup()
            self.open_face = None
        if hasattr(self, 'open_body') and self.open_body is not None:
            self.open_body.cleanup()
            self.open_body = None

        # Create the hand-detection thread
        self.open_hand = hand(self.open)
        self.open_hand.start()
        self.open_hand.finished.connect(self.bu_4_off)
        self.open_hand.mess_sign.connect(self.update_output)
        self.open_hand.frame_sign.connect(self.update_video)
        
        # Pause frame emission from the video thread
        if hasattr(self, 'open') and self.open is not None:
            self.open.pause_frame_update = True
            
    def bu_4_off(self):
        # Resume frame emission from the video thread
        if hasattr(self, 'open_body') and self.open_body is None:
            if hasattr(self, 'open_face') and self.open_face is None:
                if hasattr(self, 'open_hand') and self.open_hand is None:
                    if hasattr(self, 'open') and self.open is not None:
                        self.open.pause_frame_update = False

    def bu_5(self):
        self.b1.setDisabled(True)
        self.b2.setDisabled(False)
        self.b3.setDisabled(False)
        self.b4.setDisabled(False)
        self.b5.setDisabled(True)

        # Stop the face thread if it exists
        if hasattr(self, 'open_face') and self.open_face is not None:
            self.open_face.cleanup()
            self.open_face = None
        # Stop the hand thread if it exists
        if hasattr(self, 'open_hand') and self.open_hand is not None:
            self.open_hand.cleanup()
            self.open_hand = None

        self.open_body = body(self.open)
        self.open_body.start()
        self.open_body.finished.connect(self.bu_5_off)
        self.open_body.mess_sign.connect(self.update_output)
        self.open_body.frame_sign.connect(self.update_video)

        # Pause frame emission from the video thread
        if hasattr(self, 'open') and self.open is not None:
            self.open.pause_frame_update = True

    def bu_5_off(self):
        if hasattr(self, 'open_body') and self.open_body is None:
            if hasattr(self, 'open_face') and self.open_face is None:
                if hasattr(self, 'open_hand') and self.open_hand is None:
                    if hasattr(self, 'open') and self.open is not None:
                        self.open.pause_frame_update = False

    def out_put(self):
        group = QGroupBox("Output")
        layout = QHBoxLayout()

        self.out_1 = QTextEdit()
        self.out_1.setReadOnly(True)
        self.out_1.setPlaceholderText("Output will appear here...")
        layout.addWidget(self.out_1)

        self.video_display = QLabel()
        self.video_display.setMinimumSize(640, 480)
        self.video_display.setStyleSheet("background-color: #000000;")
        layout.addWidget(self.video_display)

        group.setLayout(layout)
        return group

    def update_output(self,message):
        self.out_1.append(message)
    
    def update_video(self, frame):
        # Only update the display if some video thread is still active
        if (hasattr(self, 'open') and self.open is not None) or \
           (hasattr(self, 'open_face') and self.open_face is not None) or \
           (hasattr(self, 'open_body') and self.open_body is not None) or \
           (hasattr(self, 'open_hand') and self.open_hand is not None):

            # Convert the OpenCV BGR frame to a QImage
            rgb_image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            h, w, ch = rgb_image.shape
            bytes_per_line = ch * w
            q_image = QImage(rgb_image.data, w, h, bytes_per_line, QImage.Format.Format_RGB888)
            
            # Scale the image to fit the QLabel
            scaled_q_image = q_image.scaled(self.video_display.size(), Qt.AspectRatioMode.KeepAspectRatio, Qt.TransformationMode.SmoothTransformation)
            
            # Display the image
            self.video_display.setPixmap(QPixmap.fromImage(scaled_q_image))


class video(QThread):
    finished = pyqtSignal()
    mess_sign = pyqtSignal(str)
    frame_sign = pyqtSignal(object)
    
    def __init__(self):
        super().__init__()
        self.frame = None
        self.ret = None
        self.cap = None
        self.running = True
        self.pause_frame_update = False  # controls whether frame emission is paused

    def run(self):
        self.cap = cv2.VideoCapture(0)
        if not self.cap.isOpened():
            print("Failed to open the camera")
            self.mess_sign.emit("Failed to open the camera")
            self.finished.emit()
            return

        print("Camera opened, streaming video")
        self.mess_sign.emit("Camera opened, streaming video")

        while self.running and self.cap.isOpened():
            if not self.running:
                break

            # Read the latest frame
            self.ret, self.frame = self.cap.read()
            if not self.ret:
                self.mess_sign.emit("Failed to read a video frame")
                break

            # Emit the frame only when frame emission is not paused
            if not self.pause_frame_update:
                self.frame_sign.emit(self.frame)

        self.finished.emit()

    def cleanup(self):
        self.running = False
        QThread.msleep(50)
        # Release resources
        if self.cap is not None:
            self.cap.release()
        self.mess_sign.emit("Camera released")
        print("Camera released")


class face(QThread):
    finished = pyqtSignal()
    mess_sign = pyqtSignal(str)
    frame_sign = pyqtSignal(object)
    
    def __init__(self, video_instance):
        super().__init__()
        self.running = True
        self.video_instance = video_instance  # keep a reference to the video thread

    def run(self):
        mp_face_detection = mp.solutions.face_detection
        face_detection = mp_face_detection.FaceDetection()
        mp_drawing = mp.solutions.drawing_utils

        print("Face detection started")
        self.mess_sign.emit("Face detection started")

        while self.running and self.video_instance is not None and self.video_instance.running:
            if not self.running:
                break

            # Use the frame from the video thread instead of reading the camera directly
            if hasattr(self.video_instance, 'frame') and self.video_instance.frame is not None:
                # Copy the frame so the original is not modified
                image = self.video_instance.frame.copy()

                # Convert the color space
                rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
                rgb_image.flags.writeable = False
                results = face_detection.process(rgb_image)

                rgb_image.flags.writeable = True
                bgr_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)

                if results.detections:
                    for detection in results.detections:
                        # Draw the face bounding box and key points
                        mp_drawing.draw_detection(bgr_image, detection)

                # Send the frame to the main window for display; emitting outside the
                # if-block keeps the view updating even when no face is detected
                self.frame_sign.emit(bgr_image)

            # Throttle the loop to roughly match the video thread
            QThread.msleep(16)  # ~60 fps

        self.finished.emit()

    def cleanup(self):
        self.running = False
        QThread.msleep(50)
        self.mess_sign.emit("Face detection stopped")
        print("Face detection stopped")


class hand(QThread):
    finished = pyqtSignal()
    mess_sign = pyqtSignal(str)
    frame_sign = pyqtSignal(object)

    def __init__(self, video_instance):
        super().__init__()
        self.running = True
        self.video_instance = video_instance  # keep a reference to the video thread

    def run(self):
        mp_hands = mp.solutions.hands
        hands = mp_hands.Hands()
        mp_drawing = mp.solutions.drawing_utils

        print("Hand detection started")
        self.mess_sign.emit("Hand detection started")

        while self.running and self.video_instance is not None and self.video_instance.running:
            if not self.running:
                break

            # Use the frame from the video thread instead of reading the camera directly
            if hasattr(self.video_instance, 'frame') and self.video_instance.frame is not None:
                # Copy the frame so the original is not modified
                image = self.video_instance.frame.copy()

                # Convert the color space
                image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
                image.flags.writeable = False
                results = hands.process(image)

                image.flags.writeable = True
                image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

                if results.multi_hand_landmarks:
                    for hand_landmarks in results.multi_hand_landmarks:
                        # Draw the hand landmarks and connections
                        mp_drawing.draw_landmarks(
                            image, hand_landmarks, mp_hands.HAND_CONNECTIONS)

                # Send the frame to the main window for display
                self.frame_sign.emit(image)

            # Throttle the loop to roughly match the video thread
            QThread.msleep(16)  # ~60 fps

        self.finished.emit()

    def cleanup(self):
        self.running = False
        QThread.msleep(50)
        self.mess_sign.emit("Hand detection stopped")
        print("Hand detection stopped")


class body(QThread):
    finished = pyqtSignal()
    mess_sign = pyqtSignal(str)
    frame_sign = pyqtSignal(object)

    def __init__(self, video_instance):
        super().__init__()
        self.running = True
        self.video_instance = video_instance  # keep a reference to the video thread

    def run(self):
        mp_pose = mp.solutions.pose
        pose = mp_pose.Pose()
        mp_drawing = mp.solutions.drawing_utils

        print("Pose detection started")
        self.mess_sign.emit("Pose detection started")

        while self.running and self.video_instance is not None and self.video_instance.running:
            if not self.running:
                break

            # Use the frame from the video thread instead of reading the camera directly
            if hasattr(self.video_instance, 'frame') and self.video_instance.frame is not None:
                # Copy the frame so the original is not modified
                image = self.video_instance.frame.copy()
                # Convert the color space
                image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
                image.flags.writeable = False
                results = pose.process(image)

                image.flags.writeable = True
                image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

                if results.pose_landmarks:
                    # Draw the pose landmarks and connections
                    mp_drawing.draw_landmarks(
                        image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
                # Send the frame to the main window for display
                self.frame_sign.emit(image)
            # Throttle the loop to roughly match the video thread
            QThread.msleep(16)  # ~60 fps
        self.finished.emit()

    def cleanup(self):
        self.running = False
        QThread.msleep(50)
        self.mess_sign.emit("Pose detection stopped")
        print("Pose detection stopped")

if __name__ == '__main__':
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    app.exec()
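
To run the project, all three libraries need to be installed first; a likely command (package names as published on PyPI):

pip install PyQt6 opencv-python mediapipe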




