Sentiment Analysis / Cursing Detection in hybrid chat?

Do we have a sentiment analysis feature in our hybrid chat?


Sentiment analysis is typically handled by the NLU layer of a chatbot (IBM Watson, Google Dialogflow, spaCy, Rasa, …).

In some cases, the chosen chatbot does not have ready-made models to detect customer- or culture-specific content, such as:

  • Sentiments

  • Cursing / insults / inappropriate language

In bot parlance, these are either intents or entities, but classification is not the primary goal of typical chatbot NLU engines.

A simple and effective method for such classification is a Naive Bayes classifier, for which NLTK provides a ready-made implementation.
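As a minimal sketch of that idea: a bag-of-words Naive Bayes classifier trained with NLTK. The training sentences and labels below are made-up placeholders; in practice they would come from your own labelled chat messages.

```python
# Minimal sketch of a Naive Bayes message classifier with NLTK.
# Training examples and labels are illustrative placeholders only.
from nltk import NaiveBayesClassifier

def features(message):
    # Bag-of-words features: presence of each lowercased token.
    return {word: True for word in message.lower().split()}

train = [
    ("thank you so much", "polite"),
    ("that was very helpful", "polite"),
    ("this is useless garbage", "insulting"),
    ("you are an idiot", "insulting"),
]

classifier = NaiveBayesClassifier.train(
    [(features(text), label) for text, label in train]
)

print(classifier.classify(features("thank you very helpful")))  # -> polite
```

With a realistic amount of training data you would also tokenize more carefully (punctuation, stopwords) before extracting features.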
This post shows how such a classifier can be custom-built and embedded into a Rasa NLU pipeline: https://blog.rasa.com/enhancing-rasa-nlu-with-custom-components/
The component is executed in sequence with all other NLU components used to detect content in messages.
Since the DIET classifier can detect multiple intents in a single message, a separate component might not even be necessary: a message might carry both the intent “polite” and the intent “restaurantbooking” - provided there is sufficient training data for the “politeness” classification.
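To make the multi-intent idea concrete, here is a hedged sketch of what such Rasa training data could look like, assuming Rasa's YAML training-data format and a tokenizer configured to split combined intent labels on `+` (the example intents and sentences are invented):

```yaml
# config.yml (pipeline excerpt) - enable multi-intent splitting
pipeline:
- name: WhitespaceTokenizer
  intent_tokenization_flag: true
  intent_split_symbol: "+"
- name: CountVectorsFeaturizer
- name: DIETClassifier

# nlu.yml - a message labelled with two intents at once
nlu:
- intent: polite+restaurantbooking
  examples: |
    - thanks so much, could you book me a table for two?
```

The trainer then only labels example sentences; DIET learns to emit both labels for messages that combine them.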

Users and trainers would then only need to label messages in the training data, and the bot will be able to detect these classes.

For cursing in particular, there are also ready-made models such as https://www.perspectiveapi.com/#/start - which let you bypass training. This could be plugged into, for example, Rasa as a custom component, or act as a bot in its own right (if that is the only thing you want to detect).
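A rough sketch of what calling the Perspective API looks like, based on its public `comments:analyze` endpoint. The API key is a placeholder you would obtain from Google Cloud, and the network call obviously requires a live key:

```python
# Sketch: score a chat message for toxicity via the Perspective API.
# "YOUR_API_KEY" is a placeholder; the endpoint follows the public docs.
import json
import urllib.request

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(text):
    # Request body: the comment to score and the attributes we want back.
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(text):
    # Network call; only works with a valid API key.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

A custom Rasa component would wrap `toxicity_score` and attach the result to each incoming message.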

What I understood from this: we can train the bot on specific words and phrases, and based on those classifications we can display stats.

Entire phrases are classified as something. This classification (polite/neutral/insulting, happy/neutral/sad, …) can be added as a tag to each message, and then you can run statistics - for example, what percentage of all messages were polite/neutral/insulting, or how many were happy/neutral/sad. You could also classify multiple things at the same time (by using multiple classifiers in sequence).
So this depends on what you want to detect and how you want the statistics. Do you want to just highlight the individual messages that are happy/neutral/sad, or classify the entire conversation? What should be the rule to classify an entire conversation as sad (more than x% of all messages classified as sad, or a certain outcome)?

Understood, thank you for your detailed response. How much time would the implementation take in the current release?

This is a professional service, independent of any release. I would expect about a month (if the bot is deployed). Most of the effort will then be to provide training and classification data, which is typically done by the end user/customer, who determines which sentence expresses which kind of classification.