SentimentPulse: Temporal-Aware Custom Language Models vs. GPT-3.5 for Consumer Sentiment
Published in I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models (NeurIPS 2023 Workshop), 2023
Recommended citation: Lixiang Li, Bharat Bhargava, Alina Nesen, and Nagender Aneja. "SentimentPulse: Temporal-Aware Custom Language Models vs. GPT-3.5 for Consumer Sentiment." In I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models, NeurIPS 2023 Workshop, 2023. https://nips.cc/virtual/2023/76523
(Conference Workshop Poster Publication)
Abstract: Large Language Models are trained on an extremely large corpus of text data to allow better generalization, but this blessing can also become a curse and significantly limit their performance on a subset of tasks. In this work, we argue that LLMs lag notably behind well-tailored, purpose-built models on tasks where the temporal aspect is important for decision making and the answer depends on the timespan of the available training data. We prove our point by comparing two architectures: first, SentimentPulse, a real-time consumer sentiment analysis approach that leverages custom language models and continual learning techniques, and second, GPT-3.5, tested on the same data. Unlike foundation models that lack temporal context, our custom language model is pre-trained on time-stamped data, making it uniquely suited for real-time applications.
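To make the continual-learning idea concrete, here is a minimal sketch of fine-tuning a small transformer sentiment classifier on chronologically ordered, time-stamped slices of consumer text. This is an illustrative assumption, not the authors' implementation: the backbone name (distilbert-base-uncased), the data fields ("text", "label"), and the hyperparameters are placeholders.

```python
# Minimal sketch (assumption, not the paper's code): continual fine-tuning of a
# sentiment classifier on time-ordered data slices, so parameters track the
# most recent sentiment signal rather than a fixed pre-training cutoff.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

BACKBONE = "distilbert-base-uncased"  # placeholder backbone, not the authors' custom LM
tokenizer = AutoTokenizer.from_pretrained(BACKBONE)
model = AutoModelForSequenceClassification.from_pretrained(BACKBONE, num_labels=3)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def continual_update(time_slices):
    """Fine-tune on each time slice in chronological order (e.g. one slice per week)."""
    model.train()
    for examples in time_slices:
        texts = [ex["text"] for ex in examples]
        labels = torch.tensor([ex["label"] for ex in examples])  # 0=neg, 1=neu, 2=pos
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        loss = model(**batch, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Toy usage with time-ordered slices of labeled consumer text:
slices = [
    [{"text": "Prices keep climbing, so we are cutting back again.", "label": 0}],
    [{"text": "Feeling better about spending this month.", "label": 2}],
]
continual_update(slices)
```

A zero-shot GPT-3.5 baseline, by contrast, would be prompted on the same texts without any such temporal updates, which is the comparison the abstract draws.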