Thirty volunteers each held a typed four-minute conversation with an unknown entity: half of them spoke to humans, while the rest were paired with Cleverbot as their virtual “companion”. The audience could follow all of the conversations on large displays, and the final results were rather stunning – Cleverbot was voted to be 59.3% human, while the humans were rated at a surprisingly low 63.3% – considering that they are as human as you and I.
How does Cleverbot do it? It looks through records of its previous conversations to choose a fitting response to each comment or question, and each online search is performed three times before an answer is selected. The more powerful version that saw action in the test ran 42 searches per reply, which might be rather taxing on the servers if it were deployed online with millions of conversations going on simultaneously.
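The retrieval idea described above can be sketched very roughly: keep a log of past (prompt, reply) pairs, and answer a new message with the reply whose recorded prompt matches it best. This is only an illustration under that assumption – Cleverbot's actual corpus, matching metric, and search procedure are proprietary, and the sample log below is made up.

```python
import re
from collections import Counter

# Hypothetical mini-log of past conversations: (heard, replied) pairs.
# Cleverbot's real database holds millions of such exchanges.
LOG = [
    ("hello there", "hi, how are you?"),
    ("what is your name", "people call me Cleverbot."),
    ("do you like music", "yes, I enjoy all kinds of music."),
]

def score(a: str, b: str) -> int:
    """Count shared words between two utterances (a crude match metric)."""
    wa = Counter(re.findall(r"\w+", a.lower()))
    wb = Counter(re.findall(r"\w+", b.lower()))
    return sum((wa & wb).values())

def reply(user_input: str) -> str:
    """Return the logged reply whose recorded prompt best matches the input."""
    best = max(LOG, key=lambda pair: score(user_input, pair[0]))
    return best[1]

print(reply("hi, what's your name?"))  # → people call me Cleverbot.
```

A larger system would index the log for fast lookup and rank candidates on more than word overlap, but the core loop – search past conversations, pick the best-matching recorded reply – is the same.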
Anyone who wants to have a go with Cleverbot can do so here. As you can tell by the image on the right, my own attempt with Cleverbot did not go down too well.