Author: Olivia Wilson
Authors: Yen-Che Hsiao, Abhishek Dutta
Paper: https://arxiv.org/abs/2408.06458

Introduction
Large Language Models (LLMs) have demonstrated significant capabilities in predicting text sequences based on given inputs. These models can learn new tasks through in-context learning, where they adapt to new tasks from a small set of examples provided during inference. This paper introduces a novel in-context learning algorithm aimed at creating autonomous decision-making agents using a single LLM. The proposed method allows the language agent to self-correct and improve its performance on tasks through iterative trials.

Methods
The core of the proposed method involves using a single LLM to generate thoughts, actions, and…
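The summary above describes an agent that retries a task and folds feedback from failed trials back into its context. A minimal sketch of that kind of loop, with stubs standing in for the LLM and the environment (the paper's actual prompting scheme is not shown in this excerpt, so all names here are illustrative):

```python
# Illustrative sketch of a single-LLM self-correcting agent loop.
# `call_llm` and `evaluate` are stubs, not the paper's implementation.

def call_llm(prompt: str) -> str:
    # Stub: a real agent would query an LLM here. We simulate a model that
    # succeeds once feedback from a failed trial appears in its context.
    return "correct action" if "Feedback:" in prompt else "wrong action"

def evaluate(action: str) -> bool:
    # Stub environment check: did the proposed action solve the task?
    return action == "correct action"

def self_correcting_agent(task: str, max_trials: int = 3) -> tuple[str, int]:
    prompt = f"Task: {task}\n"
    for trial in range(1, max_trials + 1):
        action = call_llm(prompt)
        if evaluate(action):
            return action, trial
        # Append the failed trial so the next attempt can improve in-context.
        prompt += f"Attempt {trial}: {action}\nFeedback: that failed, try again.\n"
    return action, max_trials

action, trials = self_correcting_agent("open the door")
print(action, trials)  # the stub model corrects itself on the second trial
```

The key idea is that no weights are updated: improvement comes purely from accumulating trial feedback in the prompt.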
Authors: Wei Pang, Ruixue Duan, Jinfu Yang, Ning Li
Paper: https://arxiv.org/abs/2408.06725

Introduction
Visual Dialog (VD) is a complex task that involves answering a series of image-related questions based on a multi-round dialog history. Traditional VD methods often treat the entire dialog history as a simple text input, which overlooks the inherent conversational information flows at the round level. To address this limitation, the authors propose the Multi-round Dialogue State Tracking model (MDST). This model leverages dialogue states learned from dialog history to answer questions more accurately. MDST captures each round of dialog history, constructing internal dialogue state representations defined as 2-tuples of vision-language…
Authors: Yongjin Yang, Haneul Yoo, Hwaran Lee
Paper: https://arxiv.org/abs/2408.06816

Introduction
Large Language Models (LLMs) have shown remarkable capabilities in various tasks, including solving mathematical problems, acquiring world knowledge, and summarizing texts. However, they still suffer from producing plausible but incorrect responses, often referred to as hallucinations. To address this, recent research has focused on uncertainty quantification to predict the correctness of responses. This paper investigates previous uncertainty quantification methods under the presence of data uncertainty, which arises from irreducible randomness, unlike model uncertainty, which stems from a lack of knowledge.

Contributions
The paper makes two primary contributions:
1. Proposing a new Multi-Answer…
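A common family of uncertainty quantification methods scores a question by sampling several answers and measuring their disagreement. The paper's exact estimator is not given in this excerpt; as an illustrative baseline, here is the entropy of the empirical distribution over sampled answers:

```python
# Illustrative baseline: uncertainty as the Shannon entropy of the empirical
# distribution over sampled LLM answers. Not the paper's estimator. Note that
# under data uncertainty (genuinely ambiguous questions), even a
# well-calibrated model produces spread-out answers and thus high entropy.
from collections import Counter
import math

def answer_entropy(samples: list[str]) -> float:
    """Shannon entropy (in nats) of the answer distribution."""
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in Counter(samples).values())

print(answer_entropy(["Paris"] * 5))         # all samples agree: no spread
print(answer_entropy(["A", "A", "B", "C"]))  # disagreement: higher entropy
```

Distinguishing whether high entropy reflects model ignorance or irreducible ambiguity in the question is exactly the gap this line of work targets.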
Authors: Ronja Fuchs, Robin Gieseke, Alexander Dockhorn
Paper: https://arxiv.org/abs/2408.06818

Introduction
Balancing game difficulty is crucial for creating engaging and enjoyable gaming experiences. If the difficulty level does not match a player's skill or commitment, it can lead to frustration or boredom, reducing the time players spend on the game. This paper explores a novel approach to balancing game difficulty using machine learning-based agents that adapt to players' current behavior. The proposed framework combines imitation learning and reinforcement learning to create personalized dynamic difficulty adjustment (PDDA) in the context of fighting games.

Background and Related Work on DDA
Dynamic Difficulty Adjustment (DDA) techniques…
Authors: Xiangyu Zhao, Chengqian Ma
Category: Computation and Language, Artificial Intelligence
ArXiv: http://arxiv.org/abs/2408.01423v1

Abstract
Large Language Models (LLMs) exhibit remarkable proficiency in addressing a diverse array of tasks within the Natural Language Processing (NLP) domain, with various prompt design strategies significantly augmenting their capabilities. However, these prompts, while beneficial, each possess inherent limitations. The primary prompt design methodologies are twofold: the first, exemplified by the Chain of Thought (CoT), involves manually crafting prompts specific to individual datasets, hence termed Expert-Designed Prompts (EDPs). Once these prompts are established, they are unalterable, and their effectiveness is capped by the expertise of the human designers. When applied to…
Authors: Richard Ren, Steven Basart, Adam Khoja, Alice Gatti, Long Phan, Xuwang Yin, Mantas Mazeika, Alexander Pan, Gabriel Mukobi, Ryan H. Kim, Stephen Fitz, Dan Hendrycks
Category: Machine Learning, Artificial Intelligence, Computation and Language, Computers and Society
ArXiv: http://arxiv.org/abs/2407.21792v1

Abstract
As artificial intelligence systems grow more powerful, there has been increasing interest in "AI safety" research to address emerging and future risks. However, the field of AI safety remains poorly defined and inconsistently measured, leading to confusion about how researchers can contribute. This lack of clarity is compounded by the unclear relationship between AI safety benchmarks and upstream general capabilities (e.g., general knowledge and reasoning). To address these issues, we conduct a comprehensive meta-analysis of…
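The meta-analysis examines how safety benchmark scores relate to upstream general capabilities across models. A toy illustration of that kind of check, using a Pearson correlation over hypothetical scores (all numbers are made up for illustration):

```python
# Toy illustration: if a "safety" benchmark correlates strongly with general
# capability across models, it may mostly be measuring capability.
# All scores below are hypothetical, not from the paper.
def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

capability = [40.0, 55.0, 63.0, 71.0, 80.0]  # hypothetical capability scores
safety     = [42.0, 50.0, 66.0, 70.0, 78.0]  # hypothetical safety-benchmark scores
r = pearson(capability, safety)
print(round(r, 3))  # a high r suggests the benchmark tracks capability
```

A benchmark whose scores are nearly a linear function of capability adds little independent signal about safety-specific progress.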
1. Abstract
This paper addresses the challenge of generating tailored long-form responses from large language models (LLMs) in coverage-conditioned (C2) scenarios, where users request specific information ranges. The authors propose QTREE, a dataset of 10K hierarchical queries representing diverse perspectives on various topics, and QPLANNER, a 7B language model that generates customized query outlines for C2 queries. By utilizing QTREE and QPLANNER, the paper demonstrates the effectiveness of query outlining in C2 scenarios and verifies the benefits of preference alignment training for generating better outlines and long-form responses.

2. Quick Read
a. Research Methodology
The paper introduces a novel approach to handle C2 queries…
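A hierarchy of queries like QTREE's can be pictured as a tree whose nodes are subqueries; an outline for a coverage-conditioned request then corresponds to selecting the matching branches. A minimal sketch of that idea (the names QueryNode and build_outline are illustrative, not from the paper):

```python
# Illustrative sketch of a hierarchical query tree and coverage-conditioned
# outline selection. Structure and names are assumptions, not the paper's.
from dataclasses import dataclass, field

@dataclass
class QueryNode:
    query: str
    children: list["QueryNode"] = field(default_factory=list)

def build_outline(node: QueryNode, keep) -> list[str]:
    """Depth-first list of queries whose text satisfies the `keep` predicate."""
    outline = [node.query] if keep(node.query) else []
    for child in node.children:
        outline += build_outline(child, keep)
    return outline

tree = QueryNode("climate change", [
    QueryNode("causes of climate change", [QueryNode("greenhouse gases")]),
    QueryNode("effects of climate change"),
])
# A user who asks only about causes gets the matching branch as an outline.
print(build_outline(tree, lambda q: "cause" in q or "gases" in q))
```

In the paper's setting, a learned model (QPLANNER) rather than a hand-written predicate decides which subqueries belong in the outline.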
1. Abstract
This paper introduces CHAIN-OF-KNOWLEDGE (CoK), a framework designed to enhance Large Language Models' (LLMs) knowledge reasoning abilities by integrating knowledge from Knowledge Graphs (KGs). CoK consists of two main components:

2. Rapid Reading
a. Research Methodology
b. Experiment Process
c. Main Advantages

3. Summary
a. Contributions
b. Main Innovations
c. Future Research Directions

View PDF: https://arxiv.org/pdf/2407.00653
Abstract
This paper investigates the capabilities of large language models (LLMs) like ChatGPT in performing machine translation (MT) tasks across a wide range of languages. Using the FLORES-200 benchmark, the authors compare the performance of ChatGPT with traditional MT models like NLLB, as well as commercial systems like Google Translate and the more advanced GPT-4. The results reveal that while ChatGPT demonstrates competitive performance for high-resource languages (HRLs), it consistently falls short for low-resource languages (LRLs), underperforming traditional MT models in 84.1% of the languages evaluated. The study also highlights the limited effectiveness of few-shot prompts for improving LLM MT…