If an AI is now coaching people, telling them what’s too hard and what’s not, who is responsible when something goes wrong?
Craig Venter, the best-known name in genetics, underwent “the most thorough physical in the world” and likely gave himself prostate cancer. Who’s responsible for that when the inevitable “Oh shit, we didn’t know or think of that” happens?
Does the gym take the heat, or does the equipment manufacturer? We currently have versions of this. Think treadmill programs: the hill program, the floating-through-the-mountains program. But at some point, once Siri has progressed, we’re talking about a different level of communication. Do you get to claim human superiority but not deal with human liability?
-> Uber is currently dealing with this. “We’ll have people use their own cars. If they have a license, that’s enough vetting. We won’t be an employer. Any liability is thus on everybody but us!” One reason Uber has yet to make a single dollar of profit is all their legal issues. Uber has been a hit socially, yet even Johnny Depp thinks “wow, that’s bad” about how much they’ve bombed as a business. We’ll return to this theme later in the series.
One advantage a human has over a machine here: if a relationship has been nurtured, a human is less likely to be sued. I’d bet a lot of the doctors who get sued are those with shitty bedside manner.
-> Or if you’re a CEO with a limited capacity to empathize. Everybody hears about Uber’s issues; you don’t hear a peep about Lyft.
It’s hard to financially ruin someone if you like them. But who has empathy for machines? (Again, in their current iterations. We’re not at I, Robot.) Unemployed law grads smile when they hear “Hey, the algorithm said that was the best approach.”
Say you’re a hospital and you implement AI diagnosing because it is 10% more effective. Alright, you’re 10% less likely to get a wrong diagnosis now. But what if you’re 100% more likely to get sued per wrong diagnosis? Does it make financial sense to have this AI?
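To make that trade-off concrete, here’s a back-of-the-envelope sketch. Every number in it is made up purely for illustration (the error rate, the lawsuit probability per wrong diagnosis, and the cost per suit are all assumptions, not data from anywhere):

```python
# All numbers below are hypothetical, purely for illustration.
error_rate_human = 0.10                  # assumed: 10% of human diagnoses are wrong
error_rate_ai = error_rate_human * 0.90  # the AI is "10% more effective"
sue_prob_human = 0.05                    # assumed: lawsuit chance per wrong human diagnosis
sue_prob_ai = sue_prob_human * 2.0       # "100% more likely to get sued per wrong diagnosis"
cost_per_suit = 1_000_000                # assumed average cost of a suit, in dollars

# Expected legal cost per diagnosis = P(wrong) * P(sued | wrong) * cost per suit
expected_human = error_rate_human * sue_prob_human * cost_per_suit
expected_ai = error_rate_ai * sue_prob_ai * cost_per_suit

print(f"Human: ${expected_human:,.0f} per diagnosis")  # $5,000
print(f"AI:    ${expected_ai:,.0f} per diagnosis")     # $9,000
```

Under these assumed numbers, the AI makes fewer mistakes yet costs 80% more in expected legal exposure. Whether that actually happens hinges on liability numbers nobody has yet.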
Are we ok with machines telling us what to do?
We have to go based on what we have. Currently, that’s home workout videos / games, smart watches, and apps. How well do we respond when these tell us to, say, “push harder”? While an AI might have a good sense of whether you’re pushing yourself, how much do we care when an elliptical tells us it’s time to do more? Do we adhere, or do we go, “Will this thing stop talking to me?? Where the hell is the manual setting?!”
Routine phrases to hear from personal training clients: “I don’t want to let you down.” “I need the accountability.” A sense of responsibility develops. Is that achievable with Alexa, even if she’s on steroids?
-> Is it a coincidence all the voices are female? Or is it because they miss mom’s voice too?
Look around and you can see loneliness is becoming an increasingly appreciated risk factor for death. My guess is it’s getting more attention because of the number of older people ridin’ solo.
There’s also been a lot of research finding that the amount of interaction with Facebook correlates well with depression.
With personal training AI, we are by definition taking people out of the equation, whether that’s more people wearing headphones at the gym rather than saying hi to others, or people avoiding a human personal trainer altogether.
Are we then also increasing depression / loneliness in these people? We need to have social contact somewhere. The gym is one of those somewheres for many.
I’ve been approached to build an exercise app, one where participants can follow the app as they go through their workout. But I don’t like the idea of somebody having yet another part of their day glued to a screen. I’m now seeing younger guys come into the gym and, in between sets, watch a TV show on their phone. Christ, whatever happened to checking out the cardio bunnies??? Or being the everybody-knows-you’re-a-sociopath-due-to-piercing-a-hole-in-the-mirror-with-how-hard-you-examine-yourself guy. Every gym needs at least one of those. No gym needs the puts-headphones-on-for-his-music-while-watching-a-television guy.
(I do make client programs available on a smartphone, but it’s done so that looking down happens only sporadically. Ideally, the person remembers more and more, looking down less and less. It’s just annoying enough that, while they’ll do it, they’ll probably start memorizing the program instead. Personal training the 💪 and the [brain emoji].)
-> Skynet is around the corner, yet there is no brain emoji? Can we get our priorities straight? Luckily, one has been proposed, in a manner more thoughtful than most marriage proposals.
Nine-part series:

1. AI is neat. People are messy.
   - Are computers really as good as humans in chess / Go / poker?
2. Classification is done by people, not AI.
   - And people are fallible.
3. Liability / Are we ok with machines telling us what to do? / Loneliness
4. It’s not all sunshine and rainbows.
   - The more expensive the gym is, the less incentive the gym has to keep us there.
   - Why the gym hopes you never show up.
5. Improved performance methods aren’t too relevant / Improving performance filtering is a dangerous, superfluous endeavor
   - Ed Sheeran laughs at predictive analytics.
6. Why you’re unlikely to find the perfect program yourself, and why AI might not be better than a trainer
7. Machines can tell you what to do, but can they tell you why?
   - Thinking about why other countries have more trust in their healthcare system
8. The electricity bill could be insane.
   - Incentives still matter. Voodoo economics.
   - Is any company that says it’s green, actually?
9. While there are rules, like less metabolic cost, people bend them, and humans as a market are rather hard to predict.
   - What happened to Xbox Kinect?