Please use this identifier to cite or link to this item: http://20.198.91.3:8080/jspui/handle/123456789/9068
Title: Audio sentiment analysis using deep learning
Authors: Barat, Anupam
Advisors: Bhattacharjee, Debotosh
Keywords: Sentiment Analysis;Lexicon-Based Techniques;Rule-Based Techniques;Machine Learning
Issue Date: 2023
Publisher: Jadavpur University, Kolkata, West Bengal
Abstract: The proposed research addresses the challenging problem of sentiment analysis in natural audio sources, focusing in particular on speaker-discriminated speech transcripts. The objective is to identify the emotional states that each speaker in a conversation exhibits. The study acknowledges that current methods for sentiment extraction frequently rely on text-based sentiment classifiers, which may fail to capture the subtleties and emotional cues inherent in raw audio. To meet this goal, the study investigates several strategies for speaker discrimination and sentiment analysis. Speaker discrimination identifies and distinguishes the various speakers in an audio conversation; this phase is important because it allows sentiment analysis to be conducted on individual speakers rather than on the conversation as a whole. The research entails training deep learning models on labelled audio datasets that have been manually annotated with speaker identities and corresponding sentiment labels. The models may use neural network architectures suited to processing audio data, such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs). Further methods, including feature extraction, audio modelling, and language modelling, may be investigated to improve the sentiment analysis process. The long-term goal of this project is to create efficient algorithms capable of reliably analysing sentiment in speaker-discriminated speech transcripts, enabling the automatic identification of individual speakers' emotions and contributing to the growing field of audio sentiment analysis. In this proposed research, we perform sentiment analysis on speaker-discriminated speech transcripts to detect the emotions of the individual speakers involved in the conversation, and we analyse different techniques for speaker discrimination and sentiment analysis to find efficient algorithms for this task.
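The abstract describes a pipeline of audio feature extraction followed by a neural classifier (e.g. a CNN) that maps an utterance to a sentiment label. The thesis itself is not reproduced here, so the following is only a minimal NumPy sketch of that idea, not the author's implementation: it frames a raw signal, computes a log-power spectrogram as the extracted feature, and passes it through a tiny randomly initialised 1-D convolution, global average pooling, and softmax over three hypothetical sentiment classes (all layer sizes and the three-class setup are illustrative assumptions).

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Split a 1-D audio signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop : i * hop + frame_len] for i in range(n)])

def log_power_spectrogram(frames, n_fft=512):
    """Windowed FFT power per frame, in log scale -> shape (n_frames, n_fft//2 + 1)."""
    window = np.hanning(frames.shape[1])
    power = np.abs(np.fft.rfft(frames * window, n=n_fft)) ** 2
    return np.log(power + 1e-10)

def conv1d_valid(x, kernel):
    """Valid 1-D convolution over time. x: (T, F), kernel: (K, F, C) -> (T-K+1, C)."""
    T, _ = x.shape
    K, _, C = kernel.shape
    out = np.zeros((T - K + 1, C))
    for t in range(T - K + 1):
        out[t] = np.tensordot(x[t : t + K], kernel, axes=([0, 1], [0, 1]))
    return out

def classify_sentiment(signal, rng, n_classes=3):
    """Toy forward pass: features -> conv -> ReLU -> global pool -> softmax."""
    feats = log_power_spectrogram(frame_signal(signal))
    kernel = rng.standard_normal((5, feats.shape[1], 8)) * 0.01  # untrained weights
    hidden = np.maximum(conv1d_valid(feats, kernel), 0.0)        # ReLU
    pooled = hidden.mean(axis=0)                                 # global average pool
    w = rng.standard_normal((8, n_classes)) * 0.1                # classifier head
    logits = pooled @ w
    exps = np.exp(logits - logits.max())                         # stable softmax
    return exps / exps.sum()

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)        # one second of synthetic 16 kHz audio
probs = classify_sentiment(audio, rng)    # probability per sentiment class
print(probs.shape)
```

In a real system the random weights would of course be trained on the annotated datasets the abstract mentions, and this step would run separately on each speaker's segments after speaker discrimination.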
URI: http://20.198.91.3:8080/jspui/handle/123456789/9068
Appears in Collections:Dissertations

Files in This Item:
File: M.Tech (Computer Science and Engineering) Anupam Barat.pdf
Size: 830.68 kB
Format: Adobe PDF


Items in IR@JU are protected by copyright, with all rights reserved, unless otherwise indicated.