Papers
arxiv:2406.18082

Octo-planner: On-device Language Model for Planner-Action Agents

Published on Jun 26
· Submitted by alexchen4ai on Jun 27
#2 Paper of the day

Abstract

AI agents have become increasingly significant in various domains, enabling autonomous decision-making and problem-solving. To function effectively, these agents require a planning process that determines the best course of action and then executes the planned actions. In this paper, we present an efficient on-device Planner-Action framework that separates planning and action execution into two distinct components: a planner agent based on Phi-3 Mini, a 3.8 billion parameter LLM optimized for edge devices, and an action agent using the Octopus model for function execution. The planner agent first responds to user queries by decomposing tasks into a sequence of sub-steps, which are then executed by the action agent. To optimize performance on resource-constrained devices, we employ model fine-tuning instead of in-context learning, reducing computational costs and energy consumption while improving response times. Our approach involves using GPT-4 to generate diverse planning queries and responses based on available functions, with subsequent validations to ensure data quality. We fine-tune the Phi-3 Mini model on this curated dataset, achieving a 97% success rate in our in-domain test environment. To address multi-domain planning challenges, we developed a multi-LoRA training method that merges weights from LoRAs trained on distinct function subsets. This approach enables flexible handling of complex, multi-domain queries while maintaining computational efficiency on resource-constrained devices. To support further research, we have open-sourced our model weights at https://huggingface.co/NexaAIDev/octopus-planning. For the demo, please refer to https://www.nexa4ai.com/octo-planner.
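The multi-LoRA merging idea described above can be sketched numerically: each LoRA adapter contributes a low-rank update A·B on top of the frozen base weight, and adapters trained on distinct function subsets can be combined into a single delta. This is an illustrative sketch in plain NumPy, not the authors' implementation; the dimensions, the number of adapters, and the simple averaging strategy are all assumptions.

```python
import numpy as np

# Assumed dimensions for illustration: hidden size d, LoRA rank r.
d, r = 64, 8
rng = np.random.default_rng(0)

# Two hypothetical LoRA adapters (A: d x r, B: r x d), each imagined as
# fine-tuned on a different subset of available functions.
loras = [(rng.normal(size=(d, r)), rng.normal(size=(r, d))) for _ in range(2)]

# One simple merging strategy (an assumption, not necessarily the paper's
# exact scheme): average the full-rank deltas A @ B across adapters.
deltas = [A @ B for A, B in loras]
merged_delta = sum(deltas) / len(deltas)

# Apply the merged low-rank update to the frozen base weight W0.
W0 = rng.normal(size=(d, d))
W_merged = W0 + merged_delta
```

Because each delta is rank-r, the merged update stays cheap to store and apply, which is what makes this attractive on resource-constrained edge devices.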

Community

Paper author · Paper submitter

Octo-planner, an innovative open-source planning model with 3.8 billion parameters, represents Nexa AI's advancement in applying large language models (LLMs) for on-device planning. Octo-planner introduces a novel Planner-Action framework that separates planning and action execution. This approach allows it to achieve high accuracy comparable to larger models while significantly improving efficiency for edge devices. Octo-planner also employs a multi-LoRA training method to handle complex, multi-domain queries without compromising performance.
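The Planner-Action separation described above can be summarized as a two-stage loop: the planner decomposes a query into sub-steps, and the action agent executes each one. The sketch below uses hypothetical stand-in functions (`plan` for the fine-tuned Phi-3 Mini planner, `execute` for the Octopus action model); neither is part of the released code.

```python
def plan(query: str) -> list[str]:
    """Planner stage: decompose a user query into sub-steps.

    A canned decomposition stands in for the on-device planner model call.
    """
    return [f"step {i + 1} for: {query}" for i in range(3)]


def execute(step: str) -> str:
    """Action stage: map one sub-step to a function execution.

    A string stands in for the Octopus model's function call.
    """
    return f"executed({step})"


def run_agent(query: str) -> list[str]:
    """Run the full Planner-Action pipeline for one query."""
    return [execute(step) for step in plan(query)]
```

Keeping the two stages behind separate interfaces is what lets each model be fine-tuned (or swapped) independently while the overall loop stays unchanged.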

Paper author

Great work and thanks for linking all the models to the paper! It would be fantastic to build the demo on the hub and link it to the paper as well. : )

Paper author

great work team!


Models citing this paper 2


Collections including this paper 11