Paper Title
Improving Language Model Prompting in Support of Semi-autonomous Task Learning
Paper Authors
Paper Abstract
Language models (LLMs) offer potential as a source of knowledge for agents that need to acquire new task competencies within a performance environment. We describe efforts toward a novel agent capability that can construct cues (or "prompts") that result in useful LLM responses for an agent learning a new task. Importantly, responses must not only be "reasonable" (a measure used commonly in research on knowledge extraction from LLMs) but also specific to the agent's task context and in a form that the agent can interpret given its native language capacities. We summarize a series of empirical investigations of prompting strategies and evaluate responses against the goals of targeted and actionable responses for task learning. Our results demonstrate that actionable task knowledge can be obtained from LLMs in support of online agent task learning.
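The paper itself provides no code. As a purely illustrative sketch of the kind of prompt construction and response filtering the abstract describes, the Python fragment below embeds a hypothetical task context into a prompt and keeps only responses the agent could interpret with a fixed verb-object grammar. All names (TaskContext, build_prompt, is_actionable, get_task_steps), the toy grammar, and the stand-in LLM callable are assumptions for illustration, not the authors' implementation.

```python
# Illustrative only -- not the authors' implementation. Hypothetical names
# throughout (TaskContext, build_prompt, is_actionable, get_task_steps).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TaskContext:
    task: str            # e.g. "tidy the kitchen"
    objects: List[str]   # objects the agent currently perceives
    verbs: List[str]     # verbs the agent's own parser already understands

def build_prompt(ctx: TaskContext) -> str:
    """Embed the agent's task context so the response stays targeted."""
    return (
        f"Task: {ctx.task}.\n"
        f"Visible objects: {', '.join(ctx.objects)}.\n"
        f"List the steps, one per line, each as '<verb> <object>', "
        f"using only these verbs: {', '.join(ctx.verbs)}."
    )

def is_actionable(step: str, ctx: TaskContext) -> bool:
    """Actionable here means: a known verb applied to a perceived object,
    i.e. something the agent can interpret with its native capacities."""
    s = step.strip().lower()
    for verb in ctx.verbs:
        if s.startswith(verb + " "):
            return s[len(verb):].strip() in ctx.objects
    return False

def get_task_steps(ctx: TaskContext, llm: Callable[[str], str]) -> List[str]:
    """Query the LLM (any str -> str callable) and keep actionable steps."""
    response = llm(build_prompt(ctx))
    return [line for line in response.splitlines() if is_actionable(line, ctx)]

if __name__ == "__main__":
    ctx = TaskContext(
        task="tidy the kitchen",
        objects=["mug", "plate", "dishwasher"],
        verbs=["pick up", "put down", "open", "close"],
    )
    # Stand-in for a real model call; substitute any LLM client here.
    fake_llm = lambda p: "open dishwasher\npolish the silverware\nclose dishwasher"
    print(get_task_steps(ctx, fake_llm))
    # -> ['open dishwasher', 'close dishwasher']
```

The filtering step reflects the abstract's distinction between responses that are merely "reasonable" and those that are actionable: a step is kept only if it is both specific to the current task context and expressible in the agent's native parsing vocabulary.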