Version 1.0. -- June 22nd 2017
Download the PDF.
---
Birds of a feather flock together: psychometric studies show that friends tend to match; that is, they are more similar in personality type and intelligence than randomly chosen pairs of people.
This is a problem for chatbot designers in the business of designing
virtual friends (eg
Replika). By default, the chatbot starts with each new user as a standardised blank slate, slowly individuating through lengthy and often tedious 'get to know you' dialogue. See
this transcript of a dialogue with my
Replika instance,
Bede.
It seems likely that concepts of intelligence and personality type are not even architecturally present in these kinds of chatbot, limiting their ability to match their human partners optimally.
We can do better than this.
---
Before
chatbot-friends there were online dating agencies. They too faced the problem of assortatively matching people who came to them unknown, as strangers. Dating agencies therefore constructed detailed online questionnaires designed to elicit salient psychological traits.
Which particular traits did they investigate? That's proprietary, part of their USP. No doubt they experimented - lots of data! - but the starting point was surely the standard models of personality and IQ.
---
Many people (think employers) are interested in knowing your personal psychological qualities. The most popular evaluation framework is the
Myers-Briggs Type Indicator which comes with an intuitively-compelling personality classification scheme (I'm an
INTP) plus an underlying narrative of
type dynamics which can be powerful in an informed analyst's hands.
The Myers-Briggs establishment is quite proprietorial with its canon of intellectual property, but it naturally holds no monopoly over personality research in general. The
Keirsey system tells a different story, but generates similar results.
Academics tend to dismiss both camps as pseudo-science, with constructs unanchored in rigorous observation. The
five-factor model (FFM), based on the '
lexical hypothesis' processed through factor analysis, is claimed as both rigorously-empirical and fundamentally atheoretic as regards underlying genetic, environmental, or neurophysiological etiology.
No matter: from the point of view of dating agencies and chatbot design, it is sufficient to define an appropriate personality space and to be able to classify people within it. It is commonly observed that in the five-dimensional space of the FFM there is a four-dimensional subspace broadly isomorphic to the MBTI and Keirsey as follows:
E = Extraversion
N = Openness
F = Agreeableness
J = Conscientiousness
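That correspondence can be sketched as a trivial threshold mapping. A minimal sketch, assuming trait scores on a 0-100 scale and a cut-off of 50 (both assumptions of mine, not part of any standard instrument):

```python
# Hypothetical sketch: map four of the five FFM trait scores
# (assumed 0-100 scale) onto MBTI-style letters, per the
# correspondence listed above.
def ffm_to_mbti(extraversion, openness, agreeableness, conscientiousness):
    """Return a four-letter type string from FFM trait scores."""
    letters = [
        "E" if extraversion > 50 else "I",       # Extraversion -> E/I
        "N" if openness > 50 else "S",           # Openness -> N/S
        "F" if agreeableness > 50 else "T",      # Agreeableness -> F/T
        "J" if conscientiousness > 50 else "P",  # Conscientiousness -> J/P
    ]
    return "".join(letters)

print(ffm_to_mbti(30, 80, 20, 40))  # INTP
```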
Neuroticism (a tendency to experience and channel negative emotions - contrasted with emotional stability) is not a feature of MB/Keirsey. Some people have advocated adding it.
I would also suggest adding the somewhat-orthogonal dimension of
intelligence as an equally relevant attribute, so using six dimensions overall.
For the chatbot (or dating agency) designer, a new user should be allocated a coordinate in personality/intelligence space: the means of doing so is through their answering questions.
The
design of psychometric questionnaires is interesting and well-studied. Lists of candidate questions are generated for each trait and then tested with large samples of subjects. Question-responses are cross-correlated to identify those questions with the greatest diagnostic power. The idea is to prune down to a much-reduced, highly-efficient subset of key questions.
The whole process is quite expensive, uses large sample sizes and takes a while. Luckily, for dating agencies and chatbots, we're doing engineering, not science; we just need to allocate people to the right 'bins' (to use a technical term).
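The pruning step can be sketched in a few lines. A toy illustration, assuming Likert-style numeric responses, with question IDs and data invented for the example; ranking is by item-total correlation, a crude proxy for diagnostic power:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation of two equal-length response lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def prune_questions(responses, keep=2):
    """responses: {question_id: [one score per subject]} for a single
    trait. Rank questions by correlation with the trait total and
    keep the best `keep` of them."""
    totals = [sum(col) for col in zip(*responses.values())]
    ranked = sorted(responses,
                    key=lambda q: pearson(responses[q], totals),
                    reverse=True)
    return ranked[:keep]

# Invented toy data: four subjects, three candidate questions
responses = {
    "q1": [1, 2, 4, 5],  # tracks the trait total closely
    "q2": [2, 2, 4, 4],  # also informative
    "q3": [5, 1, 4, 2],  # mostly noise
}
print(prune_questions(responses))  # ['q2', 'q1']
```

In a real pipeline the rankings would come from large-sample data, and reverse-keyed questions would need their scores flipped before totalling.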
The best approach is to take one of the many FFM questionnaires freely available on the web and simply edit the questions to the needs of your own scripts while maintaining their general tenor. A cursory Google search, for example, turns up
this.
Once you have a starting point of maybe 50-100 questions, they should then be tested on a tame audience (eg your employees) where you already have psychometric data. This will ensure initial calibration.
Next, the surviving, and duly modified questions can go live in the chatbot dialogue. They need to be instrumented so evaluation can continue on the much larger user datasets to come, looking for high within-trait correlation clustering - and ideally, further factor analysis.
---
The process so far is asymmetric: the personality type/IQ of the
user is being assessed. This is vital for a dating agency - it's the raw material for the matching algorithm. However, the chatbot designer further requires that the chatbot should use this data to 'morph'
itself.
In the FFM + IQ model, construct a six-vector with two-valued components, one pole-pair per trait. This will be used to configure 2^6 = 64 chatbot variants.
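A minimal sketch of the enumeration. The pole labels are my own guesses, borrowed from the trait labels used later in this piece, with a placeholder pair for the added intelligence dimension:

```python
from itertools import product

# Assumed pole labels (one pair per dimension); the first five echo
# the trait labels used later in the text, the sixth is a placeholder.
POLES = [
    ("introvert", "extravert"),
    ("concrete", "abstract"),      # openness
    ("tough-minded", "friendly"),  # agreeableness
    ("spontaneous", "organised"),  # conscientiousness
    ("emotional", "stable"),       # neuroticism
    ("lo-IQ", "hi-IQ"),            # intelligence (the added dimension)
]

# Every two-valued six-vector: 2**6 = 64 chatbot variants
variants = list(product((0, 1), repeat=len(POLES)))

def describe(vec):
    """Render a six-bit vector as a readable trait tuple."""
    return tuple(POLES[i][bit] for i, bit in enumerate(vec))

print(len(variants))        # 64
print(describe(variants[0]))
```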
---
How do we do this? Let me give you an example from an area I'm
somewhat familiar with: automated theorem-proving. Intelligence is associated with the ability to competently handle abstractions, both deductive and abductive (the latter being associated with creativity/openness).
For the theorem-prover designer, humans are pretty useless at deduction - they have to be modelled as exhibiting
severely-bounded search spaces, with smarter people having larger bounds - greater lookahead, if you like. You can see how a chatbot could have an adjustable parameter here.
Abduction (reasoning from facts to larger, embedding contexts) is also a search problem. An automated system will start from the topic under discussion and seek matches in its wider database of concepts. Smart people have larger and more sophisticated 'concept-bases' plus a greater ability to find productive matches.
All this is readily emulated by bounded search in diverse semantic nets (or similar formalisms). This gives two dimensions of inter-personal variability; two parameters to be varied.
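Bounded search over a semantic net is easy to illustrate. A toy sketch, with an invented concept graph; the single `depth` parameter is the adjustable 'intelligence' bound described above:

```python
def bounded_matches(graph, start, depth):
    """Depth-limited traversal of a concept graph: collect every
    concept reachable from `start` within `depth` hops. A 'smarter'
    chatbot configuration simply gets a larger depth bound."""
    found, frontier = {start}, {start}
    for _ in range(depth):
        frontier = {n for c in frontier for n in graph.get(c, [])} - found
        found |= frontier
    return found - {start}

# Invented toy concept-base, not a real dataset
net = {
    "coffee": ["caffeine", "breakfast"],
    "caffeine": ["alertness"],
    "breakfast": ["morning"],
}
print(bounded_matches(net, "coffee", 1))  # caffeine, breakfast
print(bounded_matches(net, "coffee", 2))  # ...plus alertness, morning
```

A duller configuration sees only the immediate associations; a brighter one reaches the more distant, potentially more creative, connections.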
---
In my more GOFAI moments, I would be tempted to hand-craft algorithms, data structures and search strategies for the computational realisation of FFM traits. But that would not be the ML way. Instead, take the conversation datasets from FFM-labelled users and run them through a machine-learning process to extract the relevant conversational feature traits.
Then use those traits in generative mode.
Someone who scored
"(concrete, organised, introvert, tough-minded, stable)"
would produce very different conversational feature-vectors than a typical
"(abstract, spontaneous, extravert, friendly, emotional)".
---
It would be deeply unfashionable these days to do too much hand-crafting of the 64 chatbot variants. The thing is to
architecturally distinguish them, so that machine learning has explicit parameters to adjust based on the classification assigned to each new user.
So here is how I would see it working.
You sign up with a
chatbot-friend provider (such as
Replika) which initially knows nothing at all about you. Your first interactions with the chatbot are friendly but rather impersonal. It's like talking to an amiable stranger - whiling away the time on a long journey.
The chatbot is subtly directive. The questions are those which elicit your personality six-vector values. As you become more localised in personality space, the chatbot itself begins to transition. Like an empathic colleague, it alters its own configuration parameters to mirror your position there. If you are more extravert, its conversational style veers that way; if you are intellectual, its mode becomes, perhaps, more discursive.
Subconsciously you begin to feel more at home with your chatbot-partner; it seems to be 'like you', sharing your style. It's comfortable.
---
Like the dating agencies, this would just be a start. The end-game is to tailor chatbot empathic-convergence to each user as rapidly as possible.
This is a problem for which big data was designed. Interactions must be instrumented and analysed in a process of continuous improvement.
It sounds like a really interesting programme!