FireAct: Toward Language Agent Fine-tuning

arXiv V1: FIREACT: TOWARD LANGUAGE AGENT FINE-TUNING



@inproceedings{Chen2023FireActTL,
  title={FireAct: Toward Language Agent Fine-tuning},
  author={Baian Chen and Chang Shu and Ehsan Shareghi and Nigel Collier and Karthik Narasimhan and Shunyu Yao},
  year={2023},
  url={https://api.semanticscholar.org/CorpusID:263829338}
}

Summary by 爱可可-爱生活 (Zhihu: https://zhuanlan.zhihu.com/p/660824265)

FireAct: Toward Language Agent Fine-tuning
B Chen, C Shu, E Shareghi, N Collier, K Narasimhan, S Yao [System2 Research & University of Cambridge & Monash University]

Key points:

- Proposes FireAct, a new approach to fine-tuning language models to serve as the backbone of language agents. Most existing language agents rely on few-shot prompting of large LMs, which is limited in performance, efficiency, robustness, and generalization.
- FireAct uses a strong LM such as GPT-4 to generate diverse reasoning trajectories (in ReAct, CoT, and Reflexion formats) across different tasks and methods; these trajectories are then used to fine-tune smaller LMs.
- Experiments on question answering with a Google search tool show that, compared with prompting, fine-tuning brings substantial improvements in exact-match accuracy, inference cost and time, and robustness to misleading tool outputs.
- Fine-tuning also generalizes better to unseen datasets; smaller LMs such as Llama-2 can be fine-tuned to match or exceed larger prompted LMs such as GPT-3.5.
- More diverse fine-tuning data, drawn from multiple methods and tasks, leads to more flexible agents that implicitly adapt their reasoning strategy to task complexity.
- The interactions among base LM, prompting method, task, and data size are complex: GPT-3.5 is the most sample-efficient, but smaller LMs can catch up given more data.
- The work clarifies the multifaceted benefits of fine-tuning LMs for agents and opens up new research directions in this area.

Motivation: study the effect of LM fine-tuning on language agents, addressing the performance, robustness, and cost limitations of existing agents.
Method: propose FireAct, which fine-tunes LMs on agent trajectories from multiple tasks and prompting methods to improve agent performance and robustness.
Advantages: experiments show that fine-tuning significantly improves agent performance and reduces inference time and cost, and that more fine-tuning data further improves agents.
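The data-construction step described above (serializing a ReAct-style trajectory into a fine-tuning example) can be sketched as follows. This is a minimal illustration, not the authors' released code; the field names ("prompt", "completion") and the Thought/Action/Observation line format are assumptions about one plausible serialization.

```python
import json

def trajectory_to_example(question, steps):
    """Serialize a ReAct trajectory as one prompt/completion training pair.

    Each step is a dict with "thought", "action", and optionally
    "observation"; the final step typically ends with finish[answer].
    (Hypothetical schema for illustration.)"""
    lines = []
    for i, step in enumerate(steps, 1):
        lines.append(f"Thought {i}: {step['thought']}")
        lines.append(f"Action {i}: {step['action']}")
        if "observation" in step:
            lines.append(f"Observation {i}: {step['observation']}")
    return {"prompt": f"Question: {question}\n", "completion": "\n".join(lines)}

# Example trajectory (fabricated for illustration only).
example = trajectory_to_example(
    "Where was the author of Walden born?",
    [
        {"thought": "I should search for the author of Walden.",
         "action": "search[Walden]",
         "observation": "Walden is an 1854 book by Henry David Thoreau."},
        {"thought": "Now find where Thoreau was born.",
         "action": "search[Henry David Thoreau]",
         "observation": "Thoreau was born in Concord, Massachusetts."},
        {"thought": "The answer is Concord, Massachusetts.",
         "action": "finish[Concord, Massachusetts]"},
    ],
)
print(json.dumps(example, indent=2))
```

In the paper's setup, many such records (generated by GPT-4 and filtered for correct final answers) would form the fine-tuning corpus for a smaller backbone LM such as Llama2-7B.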

One-sentence summary: the paper studies the effect of LM fine-tuning on language agents, showing that fine-tuning on agent trajectories from multiple tasks and prompting methods improves agent performance and robustness, addressing key limitations of existing language agents.

Recent efforts have augmented language models (LMs) with external tools or environments, leading to the development of language agents that can reason and act. However, most of these agents rely on few-shot prompting techniques with off-the-shelf LMs. In this paper, we investigate and argue for the overlooked direction of fine-tuning LMs to obtain language agents. Using a setup of question answering (QA) with a Google search API, we explore a variety of base LMs, prompting methods, fine-tuning data, and QA tasks, and find language agents are consistently improved after fine-tuning their backbone LMs. For example, fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4 leads to a 77% HotpotQA performance increase. Furthermore, we propose FireAct, a novel approach to fine-tuning LMs with trajectories from multiple tasks and prompting methods, and show having more diverse fine-tuning data can further improve agents. Along with other findings regarding scaling effects, robustness, generalization, efficiency and cost, our work establishes comprehensive benefits of fine-tuning LMs for agents, and provides an initial set of experimental designs, insights, as well as open questions toward language agent fine-tuning.
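The QA-with-search setup in the abstract follows the ReAct pattern: the LM alternates Thought/Action lines with tool Observations until it emits a finish action. A minimal sketch of that loop, with both the LM and the search API stubbed out (the paper uses a real Google search API, and real prompts and stop tokens would differ):

```python
import re

def run_agent(question, lm, search, max_steps=5):
    """ReAct-style loop: append LM output, execute actions, feed back observations."""
    context = f"Question: {question}\n"
    for step in range(1, max_steps + 1):
        output = lm(context)  # expected to end with "Action i: tool[arg]"
        context += output + "\n"
        match = re.search(r"Action \d+: (\w+)\[(.*)\]", output)
        if match is None:
            break  # malformed output: stop without an answer
        act, arg = match.groups()
        if act == "finish":
            return arg  # final answer
        context += f"Observation {step}: {search(arg)}\n"
    return None

# Stubbed LM and search tool, for illustration only.
def fake_lm(context):
    if "Observation" not in context:
        return "Thought 1: Search first.\nAction 1: search[Walden]"
    return "Thought 2: Done.\nAction 2: finish[Concord, Massachusetts]"

def fake_search(query):
    return "Walden is an 1854 book by Henry David Thoreau."

print(run_agent("Where was the author of Walden born?", fake_lm, fake_search))
# → Concord, Massachusetts
```

Under prompting, `lm` is a frozen model conditioned on few-shot examples; under FireAct, the same loop runs but `lm` is a smaller model fine-tuned on such trajectories, which is where the reported accuracy, cost, and robustness gains come from.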