This shows the strengths and weaknesses of the vanilla Eliza in action.
It's tempting, and even conventional, to scoff at Eliza. Peter Norvig says in Chapter 5.4 of his "Paradigms of Artificial Intelligence Programming":
" In the end, it is the technique that is important - not the program. ELIZA has been "explained away" and should rightfully be moved to the curio shelf. Pattern matching in general remains important technique, and we will see it again in subsequent chapters. The notion of a rule-based translator is also important. The problem of understanding English (and other languages) remains an important part of AI. Clearly, the problem of understanding English is not solved by ELIZA."
But the Eliza engine is a very good place to start.
The inadequacies of pseudo-human interlocutors are more forgivable in an animal. I'm going to craft a rule set for our dear, departed pet using the existing Eliza engine: Shadow v. 1.0.
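To make the idea concrete, here is a minimal sketch of the kind of Eliza-style rule set I have in mind - the rules, responses, and the "%1" placeholder convention are my own illustrative inventions, not the actual Shadow v. 1.0 rules:

```python
import random
import re

# A hypothetical Eliza-style rule set for a pet chatbot.
# Each rule pairs a regex with candidate responses; "%1" is replaced by the
# first captured group, echoing the user's own words back.
RULES = [
    (re.compile(r"\bi miss (.+)", re.I),
     ["Shadow misses %1 too.", "Tell me more about %1."]),
    (re.compile(r"\bwhy (.+)", re.I),
     ["Shadow never asked why %1.", "Does it matter why %1?"]),
    (re.compile(r".*"),                      # catch-all fallback rule
     ["Woof?", "Shadow tilts his head."]),
]

def respond(user_input: str) -> str:
    """Return the first matching rule's response, Eliza-fashion."""
    for pattern, responses in RULES:
        match = pattern.search(user_input)
        if match:
            reply = random.choice(responses)
            if match.groups():
                reply = reply.replace("%1", match.group(1).strip(" .!?"))
            return reply
    return "..."
```

As in the original Eliza, the first rule that matches wins, and the catch-all at the bottom guarantees the dog always has something to say.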
Then it's back to Peter Norvig for his Prolog interpreter in Lisp (Chapter 11). This will give me a unification algorithm and a resolution procedure.
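The heart of that machinery is unification. Here is a sketch of the core algorithm in Python rather than Norvig's Lisp - the "?x" variable convention follows PAIP, but the occurs check is omitted for brevity:

```python
def is_var(x):
    """A logic variable is a string beginning with '?'."""
    return isinstance(x, str) and x.startswith("?")

def unify(x, y, subst=None):
    """Return a substitution dict unifying x with y, or None on failure."""
    if subst is None:
        subst = {}
    if x == y:
        return subst
    if is_var(x):
        return unify_var(x, y, subst)
    if is_var(y):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        # Unify compound terms element by element, threading the substitution.
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None

def unify_var(var, val, subst):
    """Bind var to val, following any existing bindings first."""
    if var in subst:
        return unify(subst[var], val, subst)
    if is_var(val) and val in subst:
        return unify(var, subst[val], subst)
    new = dict(subst)   # occurs check omitted for brevity
    new[var] = val
    return new
```

So unifying ("likes", "?x", "milk") with ("likes", "shadow", "?y") yields the bindings ?x = shadow and ?y = milk. Resolution then sits on top of this, matching goals against the heads of stored rules.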
My proposed architecture is to use the Eliza engine for input-output and have a Prolog-style knowledge representation and inference engine at the back end.
In theory, the knowledge base could be populated by rather tedious Q&A - the kind of thing which gives chatbots a bad name - but it's more pleasant for the user to have an initial registration procedure in which the user simply tells the system about the domain of interest: for example, details of family and friends. Yes, I'm aware of the privacy issues: you ask, in the age of Facebook?
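One way that registration step might work - and this is purely an illustrative sketch, with made-up predicate names - is to translate the user's answers into Prolog-style facts that a simple query mechanism can later retrieve:

```python
# A hypothetical fact base populated during registration.
# Facts are tuples: (predicate, subject, object).
FACTS = []

def assert_fact(fact):
    FACTS.append(fact)

def query(pattern):
    """Yield variable bindings for each stored fact matching the pattern.
    Strings beginning with '?' in the pattern are variables."""
    for fact in FACTS:
        if len(fact) != len(pattern):
            continue
        bindings = {}
        for p, f in zip(pattern, fact):
            if isinstance(p, str) and p.startswith("?"):
                if bindings.get(p, f) != f:
                    break           # same variable bound inconsistently
                bindings[p] = f
            elif p != f:
                break               # constant mismatch
        else:
            yield bindings

# Registration might turn "Alice and Beth are my sisters" into:
assert_fact(("sister", "user", "alice"))
assert_fact(("sister", "user", "beth"))
```

A query like ("sister", "user", "?who") then returns one binding per sister, which the Eliza front end could weave into its replies.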
The final architectural area I've been thinking about is the topic of conversation: the information and issues currently under discussion, which drive the conversation forward. In Eliza this is basically in the hands of the user: there is no chatbot control mechanism.
But we can do better once we can do inference.
Watch this space.