Uncurated image-text datasets: Shedding light on demographic bias
The increasing tendency to collect large and uncurated datasets to train vision-and-language models has raised concerns about fair …
Noa Garcia, Yusuke Hirota, Yankun Wu, Yuta Nakashima

ACT2G: Attention-based Contrastive Learning for Text-to-Gesture Generation
The recent increase in remote work, online meetings, and tele-operation tasks has led people to find that gestures for avatars and communication …
Hitoshi Teshima, Naoki Wake, Diego Thomas, Yuta Nakashima, Hiroshi Kawasaki, Katsushi Ikeuchi

Analyzing Font Style Usage and Contextual Factors in Real Images
There are various font styles in the world; different styles give different impressions and readability. This paper analyzes the …
Naoya Yasukochi, Hideaki Hayashi, Daichi Haraguchi, Seiichi Uchida

CARE-MI: Chinese benchmark for misinformation evaluation in maternity and infant care
The recent advances in NLP have led to a new trend of applying LLMs to real-world scenarios. While the latest LLMs are astonishingly …
Tong Xiang, Liangzhi Li, Wangyue Li, Mingbai Bai, Lu Wei, Bowen Wang, Noa Garcia

Contrastive Losses Are Natural Criteria for Unsupervised Video Summarization
Video summarization aims to select the most informative subset of frames in a video to facilitate efficient video browsing. Unsupervised …
Zongshang Pang, Yuta Nakashima, Mayu Otani, Hajime Nagahara

Enhancing Fake News Detection in Social Media via Label Propagation on Cross-modal Tweet Graph
Fake news detection in social media has become increasingly important due to the rapid proliferation of personal media channels and the …
Wanqing Zhao, Yuta Nakashima, Haiyuan Chen, Noboru Babaguchi

Inverse Rendering of Translucent Objects using Physical and Neural Renderers
In this work, we propose an inverse rendering model that estimates 3D shape, spatially-varying reflectance, homogeneous subsurface …
Chenhao Li, Trung Thanh Ngo, Hajime Nagahara

Acquiring a Dynamic Light Field Through a Single-Shot Coded Image
We propose a method for compressively acquiring a dynamic light field (a 5-D volume) through a single-shot coded image (a 2-D …
Ryoya Mizuno, Keita Takahashi, Michitaka Yoshida, Chihiro Tsutake, Toshiaki Fujii, Hajime Nagahara

AxIoU: An Axiomatically Justified Measure for Video Moment Retrieval
Evaluation measures have a crucial impact on the direction of research. Therefore, it is of utmost importance to develop appropriate …
Riku Togashi, Mayu Otani, Yuta Nakashima, Janne Heikkilä, Esa Rahtu, Tetsuya Sakai

Gender and Racial Bias in Visual Question Answering Datasets
Yusuke Hirota, Yuta Nakashima, Noa Garcia