(Feb. 12-18) "Hands" on Experience
- alexprentiss
- Feb 16, 2023
- 1 min read
This week in shop, I used what I had learned about OpenCV and computer vision in general to program hand detection software with a pre-trained model that I found on GitHub. Using the Python libraries TensorFlow, MediaPipe, and OpenCV, I was able to track the positions of key points on the hand. I wanted to use this new software to control a robotic hand, but learning kinematics in less than three months might be difficult. I will most likely take Mr. Christy's advice and try to use the gestures as a controller for a variety of things, including a robotic arm. The main detection loop looks like this:

# Setup before the loop (MediaPipe hand tracker, drawing helper, and webcam)
import cv2
import mediapipe as mp

mpHands = mp.solutions.hands
hands = mpHands.Hands(max_num_hands=1, min_detection_confidence=0.7)
mpDraw = mp.solutions.drawing_utils
cap = cv2.VideoCapture(0)

while True:
    # Read each frame from the webcam
    _, frame = cap.read()
    h, w, c = frame.shape

    # Flip the frame horizontally so it acts like a mirror
    frame = cv2.flip(frame, 1)
    framergb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Get hand landmark prediction
    result = hands.process(framergb)

    className = ''

    # Post-process the result
    if result.multi_hand_landmarks:
        landmarks = []
        for handslms in result.multi_hand_landmarks:
            for lm in handslms.landmark:
                # Convert the normalized landmark coordinates to pixel coordinates
                lmx = int(lm.x * w)
                lmy = int(lm.y * h)
                landmarks.append([lmx, lmy])

            # Draw the landmarks and hand connections on the frame
            mpDraw.draw_landmarks(frame, handslms, mpHands.HAND_CONNECTIONS)
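The snippet above stops at drawing the landmarks. The rest of the loop would feed the landmark list into the pre-trained model and write the predicted gesture name onto the frame, which is what I'd eventually map to controller commands. Here is a rough sketch of that part; I'm assuming the model folder and class-name file from the GitHub project are called mp_hand_gesture and gesture.names (those exact names are a guess) and that they get loaded once before the loop along with NumPy.

            # Classify the gesture from the 21 (x, y) landmark points.
            # Assumes these were loaded before the loop (assumed names from the GitHub project):
            #   import numpy as np
            #   from tensorflow.keras.models import load_model
            #   model = load_model('mp_hand_gesture')
            #   classNames = open('gesture.names').read().split('\n')
            prediction = model.predict(np.array([landmarks]))
            classID = np.argmax(prediction)
            className = classNames[classID]

    # Show the predicted gesture name and display the annotated frame
    cv2.putText(frame, className, (10, 50), cv2.FONT_HERSHEY_SIMPLEX,
                1, (0, 0, 255), 2, cv2.LINE_AA)
    cv2.imshow("Output", frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()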