Dear Image-ML members,

This is Marc from Hiroshima City University.

The submission deadline for the 3rd International Workshop on Multimodal Human Understanding for the Web and Social Media (MUWS2024) @ ACM ICMR 2024 has been extended, so I am sending this announcement again. The new submission deadline is one week from now, April 14th (AoE). We would be delighted to receive your submissions.

If you have any questions, please feel free to contact me (Marc). For details, please see the website below or the attached CFP.
https://muws-workshop.github.io

Best regards,

--
Dr. Marc A. Kastner
Assistant Professor
Hiroshima City University, Information Science Department for Systems Engineering, Interface Design Course
https://www.marc-kastner.com/

On 26 Jan 2024, at 11:08, Marc A. Kastner wrote:

> Dear Image-ML members,
>
> This is Marc from Kyoto University.
>
> I would like to announce the 3rd International Workshop on Multimodal
> Human Understanding for the Web and Social Media (MUWS2024) @ ACM ICMR 2024.
>
> For details, please see the website below or the attached CFP.
> https://muws-workshop.github.io
>
> The submission deadline is April 7th. We would be delighted to receive
> your submissions. If you have any questions, please feel free to contact
> me (Marc).
>
> ---
>
> (Apologies for possible cross-posting)
> -----------------------------------------------------------------------------
> CALL FOR PAPERS
> MUWS 2024 - The 3rd International Workshop on Multimodal Human
> Understanding for the Web and Social Media
> co-located with the International Conference on Multimedia Retrieval
> (ICMR) 2024 in Phuket, Thailand.
>
> June 10th, 2024, Phuket, Thailand
> More Info: https://muws-workshop.github.io/
> -----------------------------------------------------------------------------
>
> Aim and Scope
>
> Multimodal human understanding and analysis is an emerging research area
> that cuts across several disciplines, including Computer Vision, Natural
> Language Processing (NLP), Speech Processing, Human-Computer Interaction,
> and Multimedia.
> Several multimodal learning techniques have recently shown the benefit of
> combining multiple modalities in image-text, audio-visual, and video
> representation learning, as well as in various downstream multimodal
> tasks. At their core, these methods focus on modelling the modalities and
> their complex interactions through large amounts of data, different loss
> functions, and deep neural network architectures. However, many Web and
> Social Media applications also need to model the human, including an
> understanding of human behaviour and perception. For this, it becomes
> important to consider interdisciplinary approaches drawing on the social
> sciences, semiotics, and psychology. The core challenges are understanding
> various cross-modal relations, quantifying biases such as social biases,
> and assessing the applicability of models to real-world problems.
> Interdisciplinary theories such as semiotics or Gestalt psychology can
> provide additional insight into perceptual understanding through signs and
> symbols conveyed via multiple modalities. In general, these theories offer
> a compelling view of multimodality and perception that can further expand
> computational research and multimedia applications on the Web and Social
> Media.
>
> The theme of the MUWS workshop, multimodal human understanding, covers
> various interdisciplinary challenges related to social bias analysis,
> multimodal representation learning, detection of human impressions or
> sentiment, hate speech, and sarcasm in multimodal data, multimodal
> rhetoric and semantics, and related topics. The MUWS workshop will be an
> interactive event including keynotes by relevant experts, poster and demo
> sessions, research presentations, and discussion.
>
> Particular areas of interest include, but are not limited to:
>
> - Modeling human impressions in the context of the Web and Social Media
> - Cross-modal and semantic relations
> - Incorporating multi-disciplinary theories such as Semiotics or
>   Gestalt-Theory into multimodal analyses
> - Measuring and analyzing biases such as cultural bias, social bias,
>   multilingual bias, and related topics in the context of the Web and
>   Social Media
> - Multimodal human perception understanding
> - Multimodal sentiment/emotion/sarcasm recognition
> - Multimodal hate speech detection
> - Multimodal misinformation detection
> - Multimodal content understanding and analysis
> - Multimodal rhetoric in online media
>
>
> Submission Instructions
>
> We welcome contributions from 4 pages (short papers) to 8 pages (long
> papers) that address the topics of interest. All submissions must be
> written in English and formatted according to the ACM proceedings style.
> The workshop proceedings will be part of the ICMR proceedings.
>
> Submission Page: https://easychair.org/conferences/?conf=muws24
>
>
> Important Dates
>
> Submission deadline: April 7th, 2024
> Paper notification: April 21st, 2024
> Workshop date: June 10th, 2024
>
>
> Organizing Committee
>
> Marc A. Kastner, Kyoto University, Kyoto, Japan
> Gullal S. Cheema, TIB - Leibniz Information Centre for Science and
> Technology, Hannover, Germany
> Sherzod Hakimov, University of Potsdam, Potsdam, Germany
> Noa Garcia, Osaka University, Osaka, Japan
>
>
> Contact
>
> All questions about the workshop should be emailed to: muws24 (at sign)
> easychair.org
>
>
> --
> Dr. Marc A. Kastner
> Assistant Professor
> Kyoto University, Graduate School of Informatics
> Intelligent Science and Technology Course, Computer Vision Lab
> https://www.marc-kastner.com/
>
> _______________________________________________
> image mailing list
> image@imageforum.org
> http://www.imageforum.org/mailman/listinfo/image