Pessimism regarding upcoming artificially intelligent personal trainers

Posted on March 31, 2017



AI is trickling into every field. Exercise and personal training are no different. In this series we’ll stick to what’s currently being worked on and purportedly near term. Science fiction such as Ex Machina is not going to be considered. The impetus is:

Artificial Intelligence in Sports on the Example of Weight Training

These researchers put sensors on a leg press machine. They measured traits like speed, force, range of motion, and acceleration, then categorized this information into proper and improper reps. The potential application: when somebody is using the machine, it could give coaching feedback.
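The basic idea can be sketched in a few lines. This is a hypothetical illustration, not the paper’s actual model: the feature names and thresholds below are invented for the sake of showing the shape of the approach (measure kinematic features per rep, then label the rep).

```python
# Hypothetical sketch of rep classification from sensor features.
# Feature names and threshold values are invented for illustration;
# they are NOT the values or model from the cited study.

def classify_rep(rep):
    """Label a rep 'proper' or 'improper' from simple kinematic features."""
    checks = [
        rep["range_of_motion_cm"] >= 30.0,  # reached full extension/flexion
        rep["peak_speed_cm_s"] <= 80.0,     # not jerky or explosive
        rep["duration_s"] >= 1.5,           # controlled tempo
    ]
    return "proper" if all(checks) else "improper"

rep = {"range_of_motion_cm": 34.2, "peak_speed_cm_s": 55.0, "duration_s": 2.1}
print(classify_rep(rep))  # proper
```

A real system would learn these boundaries from labeled data rather than hard-coding them, but the coaching-feedback loop it enables is the same: measure, classify, cue.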

A fair amount of this series will be me thinking out loud. One of the most dangerous types of knowledge is knowing enough to sound like you know what you’re saying, but not enough to actually know what you’re saying. I have no doubt some, if not a lot of my AI knowledge could be in that domain.

Furthermore, with all the hype going on right now, I want to provide a counterweight. And keep in mind I’m a personal trainer. In multiple ways, this is biased.

One could consider How quickly can the brain atrophy? as Part 1. Those in the AI world are now progressively talking about brain augmentation, for example, combating neurodegenerative disease with brain implants, while ignoring what uses the brain most: not passively computing equations, but generating movement. The last thing those with cognitive issues need is to sit on their ass more waiting for a computer to help them. (See: all the research showing physical activity improves memory. Or why canceling recess is counterproductive.) Which gets us into: what gets people to move? Who knows, and will continue to know, best how to do this? A computer, or humans?

Here’s what we’ll be hitting-

AI is neat. People are messy.

Back to our leg press study-

“Since some of the first and last repetitions appeared to be interrupted (e.g. by correcting the feet position just after the initial extension or abandoning the final flexion phase) causing, for instance, “incorrect” time intervals, these sequences were not included in the classification process.”

I understand AI researchers’ complaint “once we achieve something, it’s no longer considered AI.” There is truth there. But there is an extraordinary amount of hype going on right now due to a lack of appreciation for how fitted these accomplishments are.

The above quote is a quintessential example of AI running into problems. Machine learning as currently done is typically based on previous exposure. (If it’s not done this way, then an extraordinary amount of new data is often needed e.g. millions of monitored leg pressing reps.) These exposures are regularly made very clean. Somebody moves their feet because they aren’t set properly or are uncomfortable, or they’re checking out the ass of the person walking by? Hey, let’s ignore that. Now if somebody does that in real life, it could be considered a bad rep. Or the system can’t handle it.
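The clean-data problem can be made concrete with a toy example. Everything here is invented for illustration: the “model” is nothing more than per-feature min/max ranges learned from a handful of tidy reps, so a rep interrupted by a mid-set foot adjustment simply falls outside everything the system has ever seen.

```python
# Hypothetical sketch: a system trained only on clean reps has no concept
# of a messy one. All feature values below are invented for illustration.

clean_reps = [
    {"duration_s": 2.0, "rom_cm": 33.0},
    {"duration_s": 2.3, "rom_cm": 35.0},
    {"duration_s": 1.9, "rom_cm": 32.5},
]

def learn_ranges(reps):
    """'Train' by recording the min/max of each feature across clean reps."""
    keys = reps[0].keys()
    return {k: (min(r[k] for r in reps), max(r[k] for r in reps)) for k in keys}

def in_distribution(rep, ranges):
    """True only if every feature falls inside the learned clean-rep range."""
    return all(lo <= rep[k] <= hi for k, (lo, hi) in ranges.items())

ranges = learn_ranges(clean_reps)

# A lifter pauses mid-rep to fix their feet: long duration, shortened ROM.
messy_rep = {"duration_s": 6.4, "rom_cm": 18.0}
print(in_distribution(messy_rep, ranges))  # False
```

The interrupted rep gets flagged as out of distribution, and the system either calls it a bad rep or can’t handle it at all, which is exactly the scenario the researchers excluded from training.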

This could be the media’s fault as much as the AI evangelists’. Last year it was AI is better than humans at Go, despite the fact the AI couldn’t even move the pieces on the board. This year it’s AI is better than humans at poker, despite the fact the AI couldn’t pick up the cards, could only play one-on-one (most people play poker with more than one other person!), had to play at a predefined pace, and each player’s chip count was reset to 20,000 before each hand was dealt, rather than rising and falling with each win and loss over the course of play.

1) In other words, this thing isn’t playing poker

2) Change any variable in how the game was played and you likely drastically change the outcome.

(More about poker later.)

It’s as if you’re saying AI is better at playing quarterback because of how good it is at Madden, yet it can’t actually throw a football. Nobody would ever get away with that, yet here we are “Computers are better at poker / Go / image recognition / etc.”

This is my biggest concern with AI handling people. There are products coming which might tell a person if they’re hitting the ground too hard when running. What’s too hard though? What if they’re wearing different shoes than were used when the AI learned? Does too hard mean less injury risk, but does that also mean taking away from how hard someone can work out? At some point one needs to accept more injury risk if they’re going to push their limits.

Many of these products want to base workout difficulty on heart rate. What if somebody is on blood pressure medication, which doesn’t let the heart go above a certain pace? So the AI keeps telling the person to push harder? (Cardio machines have been doing this for years!)
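The failure mode is easy to see in code. This is a hypothetical sketch, with illustrative (not clinical) numbers: a coaching loop that only reads heart rate will nag a medicated user forever, because the medication caps their heart rate below whatever target the device computed.

```python
# Hypothetical sketch of the heart-rate trap. Numbers are illustrative,
# not clinical guidance.

def coaching_cue(current_hr, target_hr):
    """A naive coach: only heart rate vs. target, nothing else."""
    return "push harder" if current_hr < target_hr else "hold pace"

target = 150  # bpm, e.g. from an age-based formula a device might use

# On blood pressure medication, heart rate may be capped well below the
# target no matter how hard the person is actually working.
capped_hr = 120
for minute in range(3):
    print(minute, coaching_cue(capped_hr, target))  # "push harder" every time
```

The person is already at their true maximum effort, but the loop has no way to know that, so the cue never changes. Cardio machines with heart-rate programs have had exactly this blind spot for years.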

It’s not that these problems can’t be worked on. It’s that they’re currently not being talked about as problems. Much like 99% of the current fitness market, the majority of the coming wannabe market caters to those in their 20s. The ones who are easiest to deal with, need the least intervention, and can tolerate error the most. It’s not hard to envision headlines like “AI is now better than humans at coaching runners.” Meanwhile coaching runners has fallen into the category of “playing poker,” and the 40 year old runner buys the product and tears their achilles.

We’ve already put out Fitbits and smartwatches which don’t properly assess calorie expenditure and are being sued for inaccurate heart rate monitoring. Hey, you’re 23 years old and it’s off by 15%? Fuck it. You’re 50+ years old with a heart history…

People can suffer setbacks, if not get hurt. We keep talking about revolutions yet we can’t even get Watson to talk to an electronic health record database! What’s one of the largest sources of burnout amongst doctors? Dealing with electronic health records. How great is a panacea if it makes those using the panacea want to quit the profession???

There is a reason the progress of AI has been most pervasive in games. Games have a clear goal- maximize the score. Many human endeavors aren’t so objective or directed. We’re currently seeing this- a great deal of the medical establishment has decided the score to be maximized is lifespan, meanwhile more and more states are approving euthanasia. Living forever isn’t many people’s priority.

If humans’ goal was to live as long as possible, we wouldn’t smoke or drink. Yet we do. Because that shit is fun.
