Research on Public Opinion Sentiment Analysis Based on Multi-modal Feature Fusion
DOI: https://doi.org/10.53469/jrse.2025.07(09).07

Keywords: Opinion analysis, Sentiment analysis, Graphic fusion, Multimodal modeling

Abstract
Against the backdrop of rapidly developing Internet technology, social media offers the public diverse channels of expression, and users increasingly post comments that combine text and images. However, most current sentiment analysis methods operate on a single modality, which limits their accuracy. To address this problem, this paper constructs a multimodal sentiment analysis model based on InceptionClip-Bert. First, text sentiment features are extracted with the BERT model, and an improved CLIP model is used to extract image sentiment features; then, cosine similarity is used to measure the correlation between the image and text sentiment tendencies and fuse the two sets of features; finally, public opinion analysis is carried out in terms of word frequency, word clouds, the IP location and identity of information publishers, and t-SNE image clustering. Experimental comparisons show that this method significantly improves the accuracy of sentiment recognition and offers a new approach to multimodal feature fusion for public opinion sentiment analysis.
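As a rough illustration of the fusion step described in the abstract, the sketch below weights the image features by their cosine-similarity agreement with the text features before concatenating the two modalities. This is only a minimal sketch assuming feature vectors have already been extracted (e.g., by BERT and CLIP); the specific weighting scheme and helper names are hypothetical and are not taken from the paper itself.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuse_features(text_feat: np.ndarray, image_feat: np.ndarray) -> np.ndarray:
    """Scale the image features by their agreement with the text features,
    then concatenate the two modalities into a single fused vector."""
    sim = cosine_similarity(text_feat, image_feat)
    # Map similarity from [-1, 1] to a [0, 1] weight for the image modality,
    # so images that contradict the text contribute less to the fused vector.
    w = (sim + 1.0) / 2.0
    return np.concatenate([text_feat, w * image_feat])

# Toy 4-dimensional features standing in for BERT / CLIP outputs.
text_feat = np.array([0.2, 0.8, 0.1, 0.5])
image_feat = np.array([0.3, 0.7, 0.0, 0.6])
fused = fuse_features(text_feat, image_feat)
```

In practice the fused vector would feed a downstream sentiment classifier; the toy vectors here only stand in for real encoder outputs.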
Copyright (c) 2025 Wenxin Fang, Tao Ye, Qinlong Xu

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.