Study concludes that human body data used by modern performance capture systems is “flawed”

A research team at the University of Michigan School of Information and Center for the Study of Complex Systems, led by assistant professor Abigail Z. Jacobs, has conducted an academic study into how motion capture systems work. The team concludes that the assumptions these systems make about whose human bodies are ‘standard’ or ‘representative’ are “stylised and flawed”, rest on an over-reliance on the “healthy male” form, and still influence modern motion capture studies, including those using AI, compounding the problems associated with bias.

“We dug into these so-called gold standards being used for all kinds of studies and designs, and many of them had errors or were focused on a very particular type of body. We want engineers to be aware of how these social aspects become coded into the technical—hidden in mathematical models that seem objective or infrastructural.

“Many researchers don’t have access to advanced motion-capture labs to collect data, so we’re increasingly relying on benchmarks and standards to build new tech. But when these benchmarks don’t include representations of all bodies, especially those people who are likely to be involved in real-world use cases—like elderly people who may fall—these standards can be quite flawed.”

Abigail Jacobs

Foundational studies in the 1930s, 1950s and even the 1970s made extensive use of frozen, often dismembered, male cadavers. Over time, the researchers say, those results have become ‘baked into’ modern performance capture software.

“Thus historical errors often inform the ‘neutral’ basis of our present-day technological systems. This can lead to software and hardware that does not work equally for all populations, experiences, or purposes.

“Since many of these issues are baked into the foundational elements of the system, teams innovating today may not have quick recourse to address bias or error, even if they want to.

“If you’re building an application that uses third-party sensors, and the sensors themselves have a bias in what they detect or do not detect, what is the appropriate recourse?”

Kasia Chmielinski, project lead of the Data Nutrition Project and fellow at Stanford University’s Digital Civil Society Lab

Sources: IEEE Spectrum and Tech Xplore