Abstract
This work presents an approach for fine-tuning pre-trained language models to perform dialog act classification or sentiment/emotion analysis. We start from a pre-trained language model and fine-tune it on task-specific datasets from the SILICONE benchmark [1] using transfer learning techniques. Our approach improves the model’s ability to capture task-specific nuances while retaining its pre-existing language understanding capabilities. We experiment with several pre-trained models and compare their performance. We also undersample the training data to evaluate the performance gain associated with class balancing. Overall, our findings suggest that fine-tuning pre-trained language models is an effective approach for improving dialog act classification and sentiment analysis.
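As a concrete illustration, the sketch below outlines such a fine-tuning pipeline with the Hugging Face datasets and transformers libraries. It is a minimal sketch, not the exact setup used in this work: the checkpoint (bert-base-uncased), the SILICONE configuration ("dyda_da"), the column names, and the hyperparameters are illustrative assumptions.

```python
# Minimal fine-tuning sketch; checkpoint, SILICONE configuration, column
# names and hyperparameters are illustrative assumptions, not the exact
# setup used in this work.
from collections import Counter

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

raw = load_dataset("silicone", "dyda_da")   # DailyDialog dialog-act subset
raw = raw.rename_column("Label", "labels")  # Trainer expects a "labels" column
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # SILICONE stores each utterance's text in an "Utterance" column.
    return tokenizer(batch["Utterance"], truncation=True)

encoded = raw.map(tokenize, batched=True)

# Optional undersampling: cap every class at the minority-class count to
# measure the performance gain associated with balancing the training data.
train = encoded["train"].shuffle(seed=0)
cap = min(Counter(train["labels"]).values())
seen, keep = Counter(), []
for i, label in enumerate(train["labels"]):
    if seen[label] < cap:
        seen[label] += 1
        keep.append(i)
balanced_train = train.select(keep)

num_labels = len(set(train["labels"]))
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=num_labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3),
    train_dataset=balanced_train,
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```

The undersampling step simply truncates every majority class to the size of the smallest class; a class-weighted loss would be an alternative way to address the same imbalance without discarding data.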
Collaborators
This work was carried out jointly with Théo Lorthios, under the supervision of Pierre Colombo, as part of the Natural Language Processing course at ENSAE Paris.
References
[1] Chapuis, E., Colombo, P., Manica, M., Labeau, M., & Clavel, C. (2020). Hierarchical Pre-training for Sequence Labelling in Spoken Dialog. Findings of EMNLP 2020.