Pessimism regarding upcoming artificially intelligent personal trainers (part 2)

Posted on April 3, 2017

(Last Updated On: April 3, 2017)

Classification is done by people. Not AI. 

Back to our leg pressing study-

“For the current study, the use of supervised learning methods, mapping input objects to desired output values, appeared to be a suitable modeling technique, considering the inclusion of the measured time series and the experts’ evaluations of the executions. These assessments in respect to pre-defined indicators and specifications were carried out on the basis of video recordings with the help of professional coaches. In particular, the chosen evaluation process was based on the available literature discussed earlier and common recommendations stating that factors like time, velocity, constancy and completeness are significant determinants for the execution and the quality of the movement.”

Rudimentarily- a large portion of the current wave of AI works by telling the program what's right and wrong. So in our leg press example, people do reps a ton of ways. People then label those reps right or wrong, so the AI can learn. The AI then bases further judgments on those initial classifications, deeming new reps right / wrong.
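To make that concrete, here's a minimal sketch of the idea in Python. Everything here is invented for illustration- the features (rep duration, range of motion) and the "good"/"bad" labels stand in for what a human coach would provide; the classifier is a simple nearest-centroid rule, not the method from the study.

```python
# Toy supervised learning: a human coach labels leg-press reps,
# then a nearest-centroid model classifies new reps from those labels.
# All feature values and labels below are made up for illustration.

def centroid(points):
    """Average each feature across a list of feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(rep, centroids):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(rep, centroids[label]))

# Each rep: (duration in seconds, range of motion in degrees).
# The "good"/"bad" labels come from a person, not the machine.
labeled_reps = {
    "good": [(2.0, 90), (2.2, 85), (1.9, 95)],
    "bad":  [(0.8, 40), (1.0, 50), (0.7, 45)],
}
centroids = {label: centroid(reps) for label, reps in labeled_reps.items()}

print(classify((2.1, 88), centroids))  # a slow, full-depth rep -> "good"
print(classify((0.9, 42), centroids))  # a fast, shallow rep -> "bad"
```

The point of the sketch: the machine never decides what "good" means. It only extrapolates from whatever the human labelers said.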

“Medical institutions often struggle to bring all data on to the same platform, said Peter Szolovits, the head of the Clinical Decision-Making Group at the MIT Computer Science and Artificial Intelligence Laboratory. The way medical information is stored and labeled can differ widely, even between departments at the same institution, he said. For instance, “there’s no standard way to record a heart rate, a blood-glucose value or temperature measured at the bedside,” he said. If the way data is stored or labeled changes, often the artificial-intelligence software must be retrained, he said.”

Hospital Stumbles in Bid to Teach a Computer to Treat Cancer: A University of Texas audit shows MD Anderson’s struggles to use IBM Watson in a health-care setting

Your AI is based on someone's interpretation. This is no different than you buying a program from somebody. That program is based on what someone felt was best. And what do we know about people? They can be wrong.

Or hell, the client / patient might just not like the approach, even if it's correct. Look at the exorbitant number of diets out there. Tons of them work. Even if one is, on average, the best approach, we know we can't paint with a broad brush and tell everybody to do that diet. This doesn't change with AI. Nor does AI have 100% predictability. If your model does, it's flawed! (Overfitting.)
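Why does 100% predictability signal a flawed model? A quick illustration, with made-up data: give a "model" labels that are pure coin flips- no real pattern to find- and let it memorize its training examples. It scores perfectly on what it has seen and no better than chance on what it hasn't.

```python
import random

random.seed(0)

# Labels here are coin flips, so there is no real pattern to learn.
train = [((random.random(), random.random()), random.choice(["works", "fails"]))
         for _ in range(50)]
test = [((random.random(), random.random()), random.choice(["works", "fails"]))
        for _ in range(50)]

# A "model" that simply memorizes every training example.
memorized = {features: label for features, label in train}

def accuracy(model, data):
    hits = sum(model.get(features, "works") == label for features, label in data)
    return hits / len(data)

print(accuracy(memorized, train))  # 1.0 -- perfect on data it has seen
print(accuracy(memorized, test))   # roughly chance on data it hasn't
```

Perfect accuracy on known data is easy; it says nothing about the next client who walks in.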

The notion of the AI being omniscient -in this approach at least- is wrong.

My favorite recent example of this: when trying to find obesity genes, you have to classify how much people eat, what qualifies as obese, and how much people weigh, right? So what one group of researchers did was use machine learning to scour the genome, finding associations between genomes and weights. The application being "having genes X, Y, Z => obesity." (Who knows how many genes it could actually be. The fairytale of there being one gene for each trait has long passed.) The hope being we can somehow modify this to => no obesity.

And how did those researchers get people’s weights? BY ASKING THEM.

Newsflash: people lie about their weight.

Artificial intelligence still has human limitations.


Nine part series-

  • AI is neat. People are messy.
    • Are computers really as good as humans in chess / Go / poker?
  • Classification is done by people. Not AI. 
    • And people are fallible. 
  • Liability / Are we ok with machines telling us what to do? / Loneliness
    • It’s not all sunshine and rainbows. 
  • The more expensive the gym is, the less incentive the gym has to keep us there
    • Why the gym hopes you never show up.
  • Improved performance methods aren’t too relevant / Improving performance filtering is a dangerous, superfluous, endeavor
    • Ed Sheeran laughs at predictive analytics. 
  • Why you’re unlikely to find the perfect program yourself, and why AI might not be better than a trainer
  • Machines can tell you what to do, but can they tell you why?
    • Thinking about why other countries have more trust in their healthcare system
  • The electricity bill could be insane
    • Incentives still matter. Voodoo economics. 
    • Is any company that says it’s green, actually?
  • While there are rules, like less metabolic cost, people bend them, and humans as a market are rather hard to predict
    • What happened to Xbox Kinect?
