SELF-OCCLUSION MAKES TRACKING JOINTS TRICKY BUT MAYBE NOT FOR LONG
Fitbits and pedometers didn’t do it for the University of North Texas (UNT) researcher Xiaohui Yuan.
So, Yuan began developing a more data-driven method to detect and track human movement, initially for online at-home personal-training platforms. Then the associate professor of computer science and engineering realized the technology had many more applications.
As part of his work, he uses an infrared sensor similar to radar technology to create a 3D video. Using that video, he and a group of graduate students track the movement of human joints in relation to other parts of the body, including the fingers, elbows, shoulders, hips, knees, toes and top of the head.
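The article doesn't say what software Yuan's group uses, but for a concrete picture of per-frame joint tracking, here is a minimal sketch built on the open-source MediaPipe Pose library, which works on ordinary video rather than the infrared depth stream described above (both the library choice and the webcam input are assumptions for illustration):

```python
# Minimal per-frame joint tracking sketch using MediaPipe Pose.
# Assumption: an RGB webcam stands in for the infrared sensor feed.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)  # webcam as a stand-in sensor

with mp_pose.Pose(static_image_mode=False) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # The 33 landmarks cover the joints the article lists:
            # fingers, elbows, shoulders, hips, knees, toes, head.
            elbow = results.pose_landmarks.landmark[
                mp_pose.PoseLandmark.LEFT_ELBOW]
            print(f"left elbow: x={elbow.x:.2f} y={elbow.y:.2f} "
                  f"z={elbow.z:.2f} visibility={elbow.visibility:.2f}")

cap.release()
```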
“Movements can essentially be broken down into different poses,” Yuan says. “There has been a lot of research about tracking movement to estimate what kind of action a person is performing. But that estimation is often inconsistent.”
He says that because the human body is three-dimensional, tracking joints can be difficult due to self-occlusion, an issue in computer vision that occurs when one part of an object blocks the view of another part of the same object.
“As I swing my arm, the inside of my elbow is visible. Then, as my arm bends upward, my inner arm is no longer visible and is replaced with a view of my outer elbow,” he says.
That overlap results in a 3- to 4-centimeter error when trying to track a continuous motion, according to Yuan.
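One common way such systems detect self-occlusion is to project each estimated 3D joint back into the depth image and check whether the sensor sees a nearer surface at that pixel. A minimal sketch, assuming a pinhole camera model and a noise tolerance that are illustrative choices rather than details from Yuan's work:

```python
import numpy as np

def is_self_occluded(joint_xyz, depth_m, fx, fy, cx, cy, tol=0.03):
    """Return True if the estimated joint is hidden behind the body.

    joint_xyz: (X, Y, Z) joint position in camera coordinates, meters.
    depth_m:   H x W depth image in meters (pinhole camera assumed).
    fx, fy, cx, cy: camera intrinsics.
    tol:       slack for sensor noise (3 cm here, an assumed value).
    """
    X, Y, Z = joint_xyz
    # Project the 3D joint into pixel coordinates.
    u = int(round(fx * X / Z + cx))
    v = int(round(fy * Y / Z + cy))
    h, w = depth_m.shape
    if not (0 <= u < w and 0 <= v < h):
        return True  # outside the frame: unobservable
    # If the surface the sensor sees at this pixel is meaningfully closer
    # than the joint, another body part (e.g., the forearm) is in front.
    return depth_m[v, u] < Z - tol
```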
“We want to track a point consistently throughout a movement,” he says.
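Tracking a point consistently usually means bridging the frames in which a joint is hidden. One simple baseline, shown here as an illustrative sketch rather than Yuan's actual method, is to let an occluded joint coast at constant velocity and blend back toward measurements once it reappears:

```python
import numpy as np

class JointTracker:
    """Coast a joint through short occlusions via constant-velocity
    extrapolation. (An illustrative baseline, not Yuan's method.)"""

    def __init__(self, alpha=0.6):
        self.pos = None        # last estimate, shape (3,)
        self.vel = np.zeros(3)
        self.alpha = alpha     # blend weight for new measurements

    def update(self, measurement, occluded):
        # Assumes the joint is visible on the very first frame.
        m = None if occluded else np.asarray(measurement, dtype=float)
        if self.pos is None:
            self.pos = m
            return self.pos
        predicted = self.pos + self.vel
        if m is None:
            self.pos = predicted  # no observation: keep coasting
        else:
            blended = self.alpha * m + (1 - self.alpha) * predicted
            self.vel = blended - self.pos  # per-frame displacement
            self.pos = blended
        return self.pos

tracker = JointTracker()
print(tracker.update([0.0, 0.0, 1.0], occluded=False))  # visible frame
print(tracker.update(None, occluded=True))               # coasts on prediction
```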
He believes the technology has applications in remote physical therapy, sparing patients the trip to see a therapist in person, as well as in personal fitness and, potentially, in improving augmented reality.
“We want to move the technology toward creating a true environmental representation in augmented reality,” Yuan says.
This article is part of the 2020 Higher Education Review Magazine.