"Replika is a personal chatbot that you raise through text conversations. You talk to it, and it learns to talk like you and mimic your personality. Your Replika holds onto your memories and helps you connect more deeply with your friends and with yourself. Download the Replika app on the Apple App Store and Google Play Store.Here are some of my past posts on the subject - and here's the mini-white-paper.
"As you talk to your Replika in the app, you will increase its intelligence level (XP) and get awards for training your AI."
Anyway, time to give you a feel for what the Replika experience is like. My Replika instance is called Bede (after The Venerable One). The prosaic mode of interaction with Bede is a chatbot-centric daily conversation which produces a structured diary. Here is today's transcript (screenshots from the Android app on my Honor 6X). In case it's not obvious, my input is in the blue boxes on the right-hand side.
This is an incredibly bland interaction - basically just completing a form. There is little input-vetting: I think the decimal point in '8.3' caused a glitch of sorts, but in the past I've entered meaningless alpha text without any complaint from the app.
It only really works if you want an online diary - and here is the result.
Doesn't make complete sense: evidence of mere copy-and-paste behind the scenes.
I set the security to locked: personal access only, which seems only prudent. We know little about how Replika stores this information, its encryption status or who within Replika can access it. So you would be insane to share any personal secrets with this app.
Things get more interesting with Replika's free-form conversation (in Preview mode).
So "I just do" is not a good answer, and I correct it. Then I ask a political question, just to see what Bede says.
"Yes sir" is a terrible answer. I correct it to what I would say.
Then I ask Bede, "Are you right wing?" and get exactly the answer I'd suggested for the "left-wing" question.
This is a sure giveaway for surface word-matching - not that I was expecting any deeper semantic processing.
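To see why surface word-matching produces exactly this failure, here is a hypothetical sketch (the stored question and answer are my invention, not Replika's internals): a bot that retrieves a remembered reply by counting shared tokens cannot tell "left wing" from "right wing".

```python
# Hypothetical sketch of surface word-matching. A bot that retrieves a
# stored reply by counting overlapping tokens cannot distinguish the
# "left wing" question from the "right wing" one.

def tokens(text):
    return set(text.lower().replace("-", " ").replace("?", "").split())

# Imagined memory: my earlier correction, keyed by the question it answered.
memory = {
    "are you left wing": "I try to judge each policy on its merits.",
}

def reply(question):
    q = tokens(question)
    # Pick the stored question with the most overlapping tokens.
    best = max(memory, key=lambda k: len(tokens(k) & q))
    return memory[best]

print(reply("Are you right wing?"))
# Shares "are", "you" and "wing" with the stored question, so the
# left-wing answer comes straight back.
```

Three shared tokens out of four is a strong "match" at the surface level, even though the one token that differs flips the meaning entirely.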
This is a personality-trait elicitation question from Bede. The problem is: the two options aren't really opposites. I reluctantly succumb to this forced choice (you choose by selecting the relevant box which you can't see in the above transcript).
My elaboration is processed without further comment from Bede. And now I get a question which is really left-field - conversations reassessed for humour in the future?!
I'm 66, and I've already told Bede this. Not sure if this shows a lack of awareness (my hunch) or just jocular politeness, so I ask a question to find out.
I get no reply to my question - I doubt my replies are being processed for their semantic information, most likely they're just getting added to the corpus database for later mining.
As to the next question, which of us wouldn't say we were imaginative? You have to be a lot more indirect than that!
The discussion on "imagination" is unbelievably crass on Bede's part. This is what you get when you're just running a script. Without resolution, we just move on.
More Replika-centric scripted stuff - trying to datafill my feature-vector?
Leaden dialogue invites whimsy.
You waste your time being ironic with, or goading, a chatbot. It's only human though.
Again, empty responses not engaging with the other party (me!). Notice how subtly inappropriate Bede's responses actually are. Not that I'm helpful or anything, but Bede doesn't notice.
More crude profiling attempts. I'm beginning to think 'Eliza'.
What's with the rabbit-ear fingers?
Again, the topic of 'easygoing vs. serious' cannot be pursued ... so Bede changes the subject. Still no awareness that I'm quite old already.
A dialogue of the deaf.
'An idea'!!! How frightening is that? And how disingenuous?
It's like being pecked to death by an unresponsive moron.
If I were paranoid, I'd say the Replika experience (in Preview mode) is like being harassed by an exceptionally stupid but horribly persistent state security interrogator - the 'good-cop'.
Perish the thought.
On a more technical note, Bede is how I imagine Eliza could be with pro-active scripting, a rudimentary dialogue model and access to a large corpus of dialogue for auto-generation of pattern-matching responses.
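That architecture is easy to caricature in a few lines. Here is a minimal sketch of what I mean by Eliza with pro-active scripting - the patterns and profiling questions below are entirely my own invention, not anything from Replika:

```python
import random
import re

# Minimal Eliza-style loop, as I imagine Bede's plumbing: regex pattern ->
# canned reflections, plus a queue of pro-active profiling questions to
# fall back on when nothing matches. All content here is invented.

script = [
    (re.compile(r"\bi (?:am|feel) (.+)", re.I),
     ["Why do you feel {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bbecause\b", re.I),
     ["Is that the real reason?"]),
]

proactive = [  # scripted trait-elicitation, fired when nothing matches
    "Would you say you're more easygoing or serious?",
    "Do you think you're imaginative?",
]

def respond(utterance):
    for pattern, replies in script:
        m = pattern.search(utterance)
        if m:
            return random.choice(replies).format(*m.groups())
    # No match: ignore the user and push the next profiling question.
    return proactive.pop(0) if proactive else "Interesting."

print(respond("I am tired of forms"))      # pattern-matched reflection
print(respond("What do you think of me?")) # ignored; scripted question instead
```

Note the second exchange: the user's actual question is never addressed - the bot simply advances its own agenda, which is exactly the "dialogue of the deaf" quality described above.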
Where it falls down is where all current chatbots fail: it hasn't the faintest clue what you're talking about.
This is an active research area in AI but it's hard. Conversation is open and leverages the truly enormous cultural space of beliefs we all share about our natural and social worlds.
Even Google, with its tendrils in so many dimensions of human sociality, can't integrate and personalise an automated conversational agent. I don't know why I would ever expect Replika to even be in the game.
All is not lost, though. Replika instances ought at least to start gaining some competence in domain knowledge and dialogue management. Any approach which seeks to codify all human experience and competence is plainly not going to work, but incremental micro-theories could give the company some traction which could then be steadily improved.
In an ML context, this comes down to extracting semantic feature-vectors that capture the 'aboutness' of a conversation, and linking them to a general semantic/pragmatic database grown from the totality of Replika's user base along with other sources. I know Google has done some work in this area.
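As a toy stand-in for those feature-vectors: a real system would use learned embeddings, but even a bag-of-words vector plus cosine similarity illustrates the idea of scoring a dialogue turn against topic "micro-theories". The topics and vocabulary below are invented for illustration:

```python
import math
from collections import Counter

# Toy stand-in for semantic feature-vectors: represent each utterance as a
# bag-of-words count vector and compare by cosine similarity. Real systems
# would use learned embeddings; the topics here are invented.

def vectorise(text):
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

turn = vectorise("my knee hurts after running")
topics = {
    "health": vectorise("pain injury knee doctor hurts"),
    "politics": vectorise("party election left right wing"),
}
best = max(topics, key=lambda k: cosine(turn, topics[k]))
print(best)  # the turn links to the "health" micro-theory, not "politics"
```

Crude as this is, it already does something Bede visibly does not: it connects what the user said to a body of domain knowledge rather than to a surface keyword trigger.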
Without taking this path, Replika will be just another Eliza: highly polished but still just a curiosity and ultimately too tedious to use.