Mattel Cuts Aristotle But AI For Kids Already Exists

Mattel may have cancelled their plans for Aristotle, the voice-activated speaker designed for children, but AI for kids was a logical move for the company. Originally, Aristotle was going to be compatible with Alexa, bringing voice control to adults and children alike: toddlers could play games with Aristotle while parents ordered things on Amazon with Alexa. Even though Mattel has decided that Aristotle does not “fully align with Mattel’s new technology strategy,” you can see the appeal of the idea.

Aristotle could look after your baby, comforting them, connecting to the cot to gently rock them back to sleep and playing music to calm them; it could also play with toddlers and young children and even help tweens with homework. Presumably, at some point, Aristotle would then give way to Alexa – a new marker of adulthood for the modern age.

As I said initially, it is logical for a company like Mattel to seek to follow consumers from their first night at home as a baby through to adulthood. Imagine the possibilities for these companies – gaining loyalty would be easy; after all, the device would be a way of life.

It is logical, but is it ethical?

This is the question that has really been the downfall of Aristotle. Because while we are happy to allow a device like the Amazon Echo into our houses for adult use, the privacy of children is still well protected.

Or at least, we think it is. We may still hold on to privacy as an ethical priority, but haven’t we already compromised on our children’s behalf?

Our privacy has been gradually chipped away at for years. It is such a subtle process, and it is so well rewarded, that we have hardly noticed as we have handed over all sorts of data to huge companies like Facebook, Google and Amazon. In fact, even with all the articles we read about the dangers of losing our privacy, how many of us will actually act and remove ourselves totally from these internet giants?

But we haven’t just handed our own data over. How many pictures of your friends’ children appear on your social media every day? I bet that most new parents will post a picture of their newborn – it makes sense as the best way to let remote friends and family know your good news. And what about the changes to your Google and Amazon browsing? Since the outrageous headline “How Target Figured Out a Teenage Girl Was Pregnant Before Her Father Did” was run by Forbes back in 2012, it has been well known that the algorithms of these companies could know all sorts of things about us we didn’t think we were giving away.

[This story has since been revealed to be based on conjecture about what could happen rather than on actual events, but it is still a relevant idea today.]

Clearly, we are okay with handing over data, so what is really happening with the backlash against Aristotle? I suspect that branding is to blame here rather than the product itself. That it is a toy company producing such a device rather than a tech company is also at odds with how the AI industry has persuaded us to accept this development. We have come to trust tech companies with the data they collect; can we trust a new player on the market? Especially when that player is making us aware of how much information we could be giving away?

While other AI voice-controlled devices have been marketed towards adults, with practical advantages such as organising calendars and easy shopping pushed to the forefront of marketing, we should not discount the obvious reality that children can talk to these devices too. Even if these devices are not intended for that use, nor indeed optimised for children, it is reasonable to assume that they could be used this way. But because the marketing is so adult-focused, I doubt many people would consider this a problem.

By branding Aristotle as a type of toy that grows up with your child, Mattel have crossed an invisible line from practical tool that helps to impractical toy that simply accumulates data. The perceived difference between the two has everything to do with how much of our privacy we are willing to give up and the awareness of doing so.

The advantage of telling Alexa which brand of kitchen roll you want is that you get that exact kitchen roll delivered. You will probably also be notified when there is a promotional discount on another similar brand. There are not so many tangible rewards for telling Aristotle what you want to be when you grow up. What you give up does not have an obvious return and so we naturally doubt the motives of the device.

You’d have thought that Mattel might have realised that talking toys weren’t such a good idea. Hello Barbie, another toy produced by Mattel, was a talking doll who encouraged children to chat with her. In fact, Hello Barbie used AI to provide responses and to guide the conversation while connected to wi-fi. Not so different to Aristotle, then.

But add to that the third parties who could use recordings of the child talking to Barbie to improve their software responses and you might wonder how Hello Barbie even made it to the market. Judging by the script Hello Barbie worked to, Mattel could reasonably find out almost anything from the children talking to her: details about family members and friends, interests and activities, even descriptions of how they look… the list is staggering. And that is without the concern that Hello Barbie could be hacked.

The real difference, then, is the sense of the personal and the impersonal. We are comfortable sharing our kitchen roll preferences, but would we be so comfortable if Alexa asked us how our friends are or which hairstyle we’ve gone for today? This kind of conversation doesn’t just chip away at privacy; for an adult, it also veers towards the uncanny. We don’t hold our thoughts on kitchen roll close to our chests, but we certainly consider our friends to be integral to who we are. It is weird to think of sharing this kind of personal information with a toy, and intensely sad to think that a child might find their only confidant in a Barbie.

I began by saying that AI was a logical move for Mattel, and I suspect that Hello Barbie and Aristotle are just the first forays into this unknown territory. It would be easy to say that both were complete flops – Hello Barbie didn’t last very long on the shelves and Aristotle didn’t even make it that far. However, I would argue that these controversial toys are laying the foundations for things to come. Both products created a buzz around Mattel that has brought it back into the limelight.

If these toys do anything, they show us an uncomfortable truth – that we give up so much in order to have smart AI gadgets that talk to us. The total number of Americans using voice-activated assistant devices is expected to reach 35.6 million this year. So isn’t it time that we really think about what we are giving in exchange? We once valued our own privacy the way we still value our children’s.

Alexa doesn’t have Barbie’s fixed smile or unrealistic body shape (yet) and she probably won’t attempt such a bald segue as this: “So we’ve been talking about family, let’s talk about the other people in our lives that mean a lot to us …like our friends!” But Alexa does have all sorts of other information we should bear in mind. And if we feel uncomfortable with that idea for our kids, maybe we should feel a little more uncomfortable about it for ourselves.