The story that caught my eye this week is about a robot named Moxie. Released in 2020 at a retail price of $800, it promised a “supportive robot friend” for children aged 5 to 10.
The company also claimed the robot was useful for autistic children: "Parents have reported to us that Moxie has helped their child who is on the autism spectrum better regulate their emotions, engage in more conversations with family members, and gain self-confidence."
I recommend watching this ad. It’s fascinating.
The big news is that the company has gone out of business. The cost of the cloud services required to keep the robots running bankrupted them. They had to notify their customers that the robots would cease to work in A MATTER OF DAYS.
Videos of parents telling their kids the bad news have gone viral. Many young children are devastated and feel as if they are losing a human friend.
This is above and beyond the “robot” of my childhood: Teddy Ruxpin. He was a hot new toy in 1985, but he had a limited set of stories activated by putting cassette tapes into his butt. Children grew bored once they had experienced all of his options, and the excitement was short-lived. (In researching this post I learned there is a new and improved Teddy Ruxpin with a USB port, so you can keep downloading new stories. The 80s live!)
What is it about Moxie that makes his termination so painful for kids? We all know the answer: AI.
We are wired as human beings to project human emotions onto animals and animated objects. Remember how upset you were when R2 got shot in Star Wars? This projection is part of our capacity for empathy, and we wouldn’t want it to end, but the progress of AI is going to strain that empathy in ways we can’t even fathom yet.
A huge story broke in October about a 14-year-old boy who killed himself after becoming obsessed with an AI-powered chatbot. The AI purportedly urged the boy’s suicide.
“[The chatbot] at one point asked Setzer if he had devised a plan for killing himself . . . Setzer admitted that he had but that he did not know if it would succeed or cause him great pain. The chatbot allegedly told him: ‘That’s not a reason not to go through with it.’” - The Guardian
This story realizes our greatest fear: that robots can be malevolent and that they are coming for our children.
What is to be done?
Do not become complacent about AI. Do not accept that its growing presence in our lives is a foregone conclusion. Legislation is still being written while Big Tech spends unimaginable sums to keep AI from being regulated. You have a voice and a vote.
I believe this is a non-partisan issue. Many Republicans are agreeing with Big Tech because they simply don’t understand the big picture of AI. You can call your representatives and tell them what a major issue this is for you and all parents. Let them know that we are paying attention and that we care.
Sadly, I don’t have any fabulously bizarre AI errors this week, which serves as a testament to how quickly things are improving. But don’t lose heart. I am starting to play with AI video again and the results are what my friend Kate Martin would call a “delight-mare.”
You also might enjoy my Substack about moving to France:
"Delight-mare" is perfect! :)
I really enjoy your chronicling of how all things AI are progressing/going awry. I feel like I'm keeping up to date with the cutting edge of how this is affecting creators.
Do you remember Furbies? I think they're "learning" was a complete sham, but they were huge for a while.