Authors: Yongjin Yang, Haneul Yoo, Hwaran Lee Paper: https://arxiv.org/abs/2408.06816 Introduction: Large Language Models (LLMs) have shown remarkable capabilities in various tasks, including…
Authors: Ronja Fuchs, Robin Gieseke, Alexander Dockhorn Paper: https://arxiv.org/abs/2408.06818 Introduction: Balancing game difficulty is crucial for creating engaging and enjoyable gaming experiences.…
Authors: Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei ArXiv: https://arxiv.org/abs/2204.08387 Introduction: In the realm of Document AI, self-supervised pre-training techniques have…
Authors: Ruining Li, Chuanxia Zheng, Christian Rupprecht, Andrea Vedaldi ArXiv: http://arxiv.org/abs/2408.04631v1 Abstract: We present Puppet-Master, an interactive video generative model that can serve as a…
Authors: Thao Nguyen, Jeffrey Li, Sewoong Oh, Ludwig Schmidt, Jason Weston, Luke Zettlemoyer, Xian Li ArXiv: http://arxiv.org/abs/2408.04614v1 Abstract: We propose a new method, instruction back-and-forth translation, to construct…
Authors: Xiangyu Zhao, Chengqian Ma Category: Computation and Language, Artificial Intelligence ArXiv: http://arxiv.org/abs/2408.01423v1 Abstract: Large Language Models (LLMs) exhibit remarkable proficiency in addressing a diverse…
1. On Discrete Prompt Optimization for Diffusion Models: This paper introduces the first gradient-based framework for prompt optimization in text-to-image diffusion…
Authors: Richard Ren, Steven Basart, Adam Khoja, Alice Gatti, Long Phan, Xuwang Yin, Mantas Mazeika, Alexander Pan, Gabriel Mukobi, Ryan H. Kim, Stephen Fitz, Dan Hendrycks Category: Machine Learning, Artificial Intelligence, Computation and…
Authors: Atsuyuki Miyai, Jingkang Yang, Jingyang Zhang, Yifei Ming, Yueqian Lin, Qing Yu, Go Irie, Shafiq Joty, Yixuan Li, Hai Li, Ziwei Liu, Toshihiko Yamasaki, Kiyoharu Aizawa Category: Computer Vision and Pattern Recognition, Artificial…
1. Abstract: This paper addresses the challenge of generating tailored long-form responses from large language models (LLMs) in coverage-conditioned (C2) scenarios,…