Things to know before getting your own chatbot
Introduction
A chatbot is a computer program that manages conversations with humans via auditory or textual methods, simulating an intelligent conversation between itself and the user. This paper shows readers the issues, opportunities and consequences of chatbots before they invest in one for their business. For example: would it be cost-effective? How would it impact the business? Will it work for us?
The History of Chatbots
First-generation chatbots imitated human-to-human conversations. Second-generation bots did not need any coding and could converse freely. The third generation included bots with the ability to learn autonomously, i.e. machine learning (Ritter, 2016). Finally, the fourth generation was marked by bots that simulated human emotions.
The first chatbots were ELIZA in 1966 and PARRY in 1972.
ELIZA simulated conversation by exploring patterns in user input and asking questions to determine certain properties of its interlocutor (Weizenbaum, 1966). PARRY was a program that simulated a paranoid patient (Weizenbaum, 1976). However, this simulation relied only on free association and spontaneous output; it could not use previously acquired information. A third program, DOCTOR, was able to answer questions about the diagnosis of psychiatric illness (Snyder et al., 1972). These programs are examples of the first generation of chatbots, which allowed free-form conversation with little support from AI technology.
The second generation emerged in the 1970s.
During this time Joseph Weizenbaum created ELIZA's successor – a program named ELIZA'S CHILD that had more features, including the ability to learn new patterns of behavior (Weizenbaum, 1976). However, this program was not able to engage in genuine conversation; it only imitated ELIZA's methods.
The third generation emerged in the 1980s. These bots were used as an experimental testbed for newly developed AI technology (Ritter, 2016). During this time, heuristic techniques allowed bots to take part in conversations based on rules and guidelines. This generation included chatbots such as PABOT – a bot that focused on weather-related information.
The 1990s marked the beginning of modern chatbots.
They were able to communicate with users via auditory or textual methods and, by this time, could engage in conversations by leveraging the growing collection of natural language processing algorithms. This generation included ELIZA-II – a bot that could handle general topics and mimic human moods.
The second half of the 1990s marked a turning point towards more complex chatbots. During this time, AIs were integrated into home appliances and the first robot pets were created. However, these bots did not recognize speech, had little reasoning capability, and could only perform pre-programmed actions.
The 2000s saw significant improvement in AIs.
They became widely used in web services, mobile devices, games, etc., and included chatbots that could carry out conversations based on information in their databases. Some bots could learn from users' inputs and apply the learned knowledge to future conversations. This generation included an AI named AILA – a bot able to carry out conversations on various topics for 20 minutes.
The 2010s marked the fourth generation of chatbots, which incorporated deep learning, reinforcement learning, neural networks, etc. These bots used natural language processing to detect the user's intent in each sentence and respond accordingly. This generation is marked by bots that understand not only single words or phrases but entire sentences. During this time, chatbots became more human-like with the development of AIs able to execute specific tasks.
The 2010s are also marked by the development of chatbots that can understand emotions.
This ability is based on cognitive psychology theories that explain how humans process emotion-related information. For example, if a user expresses negative feedback by saying "I don't like this", an AI should recognize the statement as negative and adapt its responses to try to turn the experience into a positive one.
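As a concrete illustration, a minimal lexicon-based sentiment check might look like the sketch below. The word lists and scoring here are illustrative assumptions, not a trained emotion model:

# Minimal lexicon-based sentiment sketch (illustrative word lists,
# not a trained model).
NEGATIVE_PHRASES = {"don't like", "hate", "bad", "terrible", "awful"}
POSITIVE_WORDS = {"like", "love", "great", "good", "excellent"}

def detect_sentiment(utterance: str) -> str:
    text = utterance.lower()
    # Check multi-word negative phrases first so "don't like" is not
    # misread as the positive word "like".
    if any(phrase in text for phrase in NEGATIVE_PHRASES):
        return "negative"
    if any(word in text for word in POSITIVE_WORDS):
        return "positive"
    return "neutral"

print(detect_sentiment("I don't like this"))  # prints: negative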
The use of chatbots in modern applications became popular after Facebook Messenger released its chatbot API in April 2016. Various businesses – airlines, e-commerce sites, online shops, etc. – launched their own chatbots to interact with customers via Facebook Messenger.
Most bots use natural language processing (NLP) technology to identify user intent. The bot's responses are then sent back to the user in text or speech form. However, NLP and natural language generation (NLG) technologies alone might not be enough to produce a natural conversation flow between human and chatbot.
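To make the intent-identification step concrete, here is a minimal keyword-based sketch. Production NLP services use trained classifiers; the intent names and keyword lists below are assumptions chosen purely for illustration:

# Toy intent detector: maps keywords to intent labels. Real NLP
# services use trained classifiers; the keyword lists and intent
# names below are illustrative assumptions.
INTENT_KEYWORDS = {
    "check_weather": ["weather", "rain", "forecast"],
    "book_flight": ["flight", "fly", "ticket"],
    "greeting": ["hello", "hi", "hey"],
}

def detect_intent(utterance: str) -> str:
    tokens = utterance.lower().split()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in tokens for keyword in keywords):
            return intent
    return "fallback"  # no intent matched; hand off or ask to rephrase

print(detect_intent("Will it rain tomorrow?"))  # prints: check_weather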
In 2016, Google launched Google Assistant, followed by the Actions on Google platform, in order to increase the number of conversational tasks that chatbots can carry out. Actions on Google gives developers access to an HTTP-based API, which makes it easier for third-party chatbots to integrate with Google Assistant.
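As a rough illustration of how such an HTTP-based integration works, the sketch below implements a schematic fulfillment webhook in Flask. The JSON field names ("queryText", "fulfillmentText") and the /webhook route are simplified placeholders, not the exact Actions on Google schema:

# Schematic HTTP fulfillment endpoint for a conversational platform.
# Field names and route are simplified assumptions for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json(force=True)
    user_text = payload.get("queryText", "")
    # A real handler would run intent detection here (see the sketch above).
    reply = f"You said: {user_text}"
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)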
Chatbots are typically accessed via a messaging app such as Facebook Messenger or WeChat, and their interactions are mostly personalized. In terms of development, chatbots can be divided into two categories: rule-based and machine learning-based AIs. The former interact with users by following a set of rules programmed in advance by a developer: each time a user sends an input message, the rules are applied to find a matching response. In contrast, machine learning-based AIs carry out conversations using more advanced techniques such as deep learning and reinforcement learning.
The developer of a rule-based chatbot has to determine the set of rules before deploying it on a messaging app. This task consists of a number of steps (a minimal code sketch of such rules follows the list), including:
Entering the content of the rules into a computer program or script
Testing and debugging the rules
Deploying the chatbot on an app such as Facebook Messenger
Maintaining and updating the chatbot as new versions are released.
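Here is the minimal sketch promised above: a rule-based bot whose rules each pair a trigger pattern with a canned response. The specific patterns and responses are illustrative assumptions:

import re

# Minimal rule-based chatbot: each rule pairs a trigger pattern with
# a canned response. Patterns and responses are illustrative assumptions.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\bopening hours\b", re.I), "We are open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(price|cost)\b", re.I), "Our plans start at $10 per month."),
]

def respond(message: str) -> str:
    # Apply each rule in order; the first match wins.
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "Sorry, I didn't understand that. Could you rephrase?"

print(respond("Hi, what are your opening hours?"))  # greeting rule matches first

Because the rules are applied in order and the first match wins, rule ordering is itself a design decision that must be maintained alongside the rules – which is exactly why the testing and updating steps above matter.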
Developers use NLP technology to add more features to chatbots. This allows bots to carry out conversations in natural language rather than simply responding to specific commands. Popular NLP frameworks include API.ai (now Google's Dialogflow), IBM's Watson, and Wit.ai, which Facebook acquired.
Machine learning-based chatbots carry out conversations by learning from their interactions with users. This is challenging since conversations are usually unpredictable (Ritter, 2016). However, machine learning-based AIs address this issue by using reinforcement learning (RL) algorithms, which allow bots to learn from their responses in an iterative manner.
Conversational bots that use RL algorithms require huge amounts of data in order to learn from responses, which means developers have to spend a lot of time and effort building and maintaining such bots (Ritter, 2016). It is also difficult for machine learning-based AIs to carry out conversations without any errors.
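As a heavily simplified sketch of this iterative learning loop, the bandit-style example below keeps a value estimate for each candidate response and updates it from a user-feedback reward. Real RL chatbots use far richer state and neural policies; the candidate responses and the reward signal here are illustrative assumptions:

import random

# Epsilon-greedy bandit over candidate responses: a heavily simplified
# stand-in for the RL loop described above.
CANDIDATES = ["How can I help?", "Could you tell me more?", "Let me check that for you."]
values = {c: 0.0 for c in CANDIDATES}  # estimated value of each response
counts = {c: 0 for c in CANDIDATES}    # times each response has been tried
EPSILON = 0.1                          # exploration rate

def choose_response() -> str:
    if random.random() < EPSILON:
        return random.choice(CANDIDATES)    # explore a random response
    return max(CANDIDATES, key=values.get)  # exploit the best estimate so far

def update(response: str, reward: float) -> None:
    # Incremental mean: the bot learns iteratively from each observed reward.
    counts[response] += 1
    values[response] += (reward - values[response]) / counts[response]

# One interaction: pick a reply, observe feedback (e.g. 1.0 if the user
# seemed satisfied, 0.0 otherwise), and update the estimate.
reply = choose_response()
update(reply, reward=1.0)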
The aim of this study is to present the architecture and implementation of a chatbot application that uses NLG technology to generate conversational flows that mimic those produced by human beings. An entity extraction method from prior work serves as the foundation for the main approach employed here; this approach has proven successful at generating flows of similar length and form to those produced by humans.
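To give a flavor of the approach, the sketch below extracts entities from a user utterance and slots them into a response template. The regex patterns and the template are illustrative assumptions, not the exact method from the prior work referenced above:

import re

# Sketch of template-based NLG driven by entity extraction.
# Entity patterns and the response template are illustrative assumptions.
ENTITY_PATTERNS = {
    "city": re.compile(r"\b(?:in|to)\s+([A-Z][a-z]+)"),
    "date": re.compile(r"\b(today|tomorrow|monday|tuesday)\b", re.I),
}

def extract_entities(utterance: str) -> dict:
    entities = {}
    for name, pattern in ENTITY_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            entities[name] = match.group(1)
    return entities

def generate(utterance: str) -> str:
    entities = extract_entities(utterance)
    if "city" in entities and "date" in entities:
        # Slot the extracted entities into a fixed response template.
        return f"Checking the weather in {entities['city']} for {entities['date']}."
    return "Which city and date are you asking about?"

print(generate("What's the weather in Paris tomorrow?"))
# prints: Checking the weather in Paris for tomorrow.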
Conclusion
All areas of research and development in artificial intelligence are progressing at an extremely rapid pace. Chatbots are no different, with the aforementioned Google Assistant, as well as Apple's Siri and Microsoft's Cortana, becoming part of our everyday lives. The large field of natural language generation (NLG) has also seen major advancements, with the machine learning-based approaches described in this paper paving the way for more advanced and widespread use of NLG technology.