Author: Mia Anderson

Authors: Haozhe Ma, Zhengding Luo, Thanh Vinh Vo, Kuankuan Sima, Tze-Yun Leong
Paper: https://arxiv.org/abs/2408.10858

Introduction

Reinforcement learning (RL) has achieved remarkable success in various domains, including robotics, gaming, autonomous vehicles, signal processing, and large language models. However, environments with sparse and delayed rewards pose significant challenges, as the lack of immediate feedback hinders the agent’s ability to distinguish valuable states, leading to aimless exploration. Reward shaping (RS) has proven effective in addressing this challenge by providing additional dense and informative rewards. In this context, multi-task reinforcement learning (MTRL) is gaining importance due to its ability to share and transfer knowledge across tasks. Integrating RS…
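To make the reward-shaping idea concrete, here is a minimal sketch of classic potential-based shaping (a standard technique in the RS literature, not necessarily the method proposed in this paper): a dense shaping term gamma*Phi(s') - Phi(s) is added to the sparse environment reward. The grid-world potential function below is a hypothetical example.

```python
def shaped_reward(reward, potential_s, potential_s_next, gamma=0.99):
    """Potential-based reward shaping: r' = r + gamma*Phi(s') - Phi(s).

    Preserves the optimal policy while giving dense feedback in
    sparse-reward environments.
    """
    return reward + gamma * potential_s_next - potential_s

def potential(state, goal=(4, 4)):
    # Hypothetical potential: negative Manhattan distance to the goal,
    # so states closer to the goal have higher potential.
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))

# Moving one step toward the goal yields a positive shaping signal
# even though the environment reward is zero.
r = shaped_reward(0.0, potential((0, 0)), potential((0, 1)))
```

Because the shaping term is a difference of potentials, it densifies feedback without changing which policy is optimal.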


Authors: Xiao Wang, Yao Rong, Fuling Wang, Jianing Li, Lin Zhu, Bo Jiang, Yaowei Wang
Paper: https://arxiv.org/abs/2408.10488

Introduction

Sign Language Translation (SLT) is a crucial task in the realm of AI-assisted disability support. Traditional SLT methods rely on visible light videos, which are susceptible to issues such as lighting conditions, rapid hand movements, and privacy concerns. This paper introduces a novel approach using high-definition event streams for SLT, which effectively mitigates these challenges. Event streams offer a high dynamic range and dense temporal signals, making them resilient to low illumination and motion blur. Additionally, their spatial sparsity helps protect the privacy of the individuals being recorded.…


Authors: Ziyou Jiang, Lin Shi, Guowei Yang, Qing Wang
Paper: https://arxiv.org/abs/2408.08619

Introduction

Security patches are crucial for maintaining the stability and robustness of projects in the Open-Source Software (OSS) community. Despite the importance of patching vulnerabilities before they are disclosed, many organizations struggle with this task. Security practitioners typically track vulnerable issue reports (IRs) and analyze the relevant insecure code to generate potential patches. However, the insecure code may not always be explicitly specified, making it difficult to generate patches. PatUntrack is an automated approach designed to generate patch examples from IRs without tracked insecure code, utilizing auto-prompting to optimize Large Language Models…


Authors: Mark Towers, Yali Du, Christopher Freeman, Timothy J. Norman
Paper: https://arxiv.org/abs/2408.08230

Introduction

Reinforcement learning (RL) agents have achieved remarkable success in complex environments, often surpassing human performance. However, a significant challenge remains: explaining the decisions made by these agents. Central to RL agents is the future reward estimator, which predicts the sum of future rewards for a given state. Traditional estimators provide scalar outputs, which obscure the timing and nature of individual future rewards. This paper introduces Temporal Reward Decomposition (TRD), a novel approach that predicts the next N expected rewards, offering deeper insights into agent behavior.

Related Work

Previous research in…
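The core idea above can be illustrated with a small sketch: instead of a scalar value estimate, the agent keeps a vector of the next N expected rewards, which can always be collapsed back into the ordinary discounted value. This is an illustrative simplification of the decomposition concept, not the paper's network architecture; the example reward vector is hypothetical.

```python
def temporal_value(decomposed_rewards, gamma=0.99):
    """Collapse an N-step reward decomposition into a scalar value.

    decomposed_rewards[i] is the expected reward i steps in the future;
    the conventional value estimate is their discounted sum, so the
    decomposition carries strictly more information (when each reward
    arrives) than the scalar it reduces to.
    """
    return sum((gamma ** i) * r for i, r in enumerate(decomposed_rewards))

# A hypothetical sparse-reward agent: no reward for three steps,
# then a single +1. The scalar estimate hides this timing; the
# decomposition makes it explicit.
per_step = [0.0, 0.0, 0.0, 1.0]
v = temporal_value(per_step)  # equals 0.99 ** 3
```

Two states with the same scalar value can have very different decompositions (e.g. reward now vs. reward later), which is exactly the explanatory signal TRD aims to expose.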


Authors: Daniele Rege Cambrin, Eleonora Poeta, Eliana Pastor, Tania Cerquitelli, Elena Baralis, Paolo Garza
Paper: https://arxiv.org/abs/2408.07040

Introduction

In recent years, the integration of remote sensing and deep neural networks has significantly advanced agricultural management, environmental monitoring, and various earth-observation tasks. One critical application is the segmentation of crop fields, which is essential for optimizing agricultural productivity, assessing crop health, and planning sustainable farming practices. Accurate segmentation enables precise calculations of area coverage, assessment of crop types, and monitoring of agronomic factors such as plant health and soil conditions. However, the complexity of deep learning models often makes them difficult to interpret, posing challenges in understanding…
