
How to do multi-turn?

#2
by HuanzhiMao - opened

Hi,
Thanks for the great model.
I'm wondering if the Hammer models support multi-turn (e.g., user -> assistant -> tool result -> assistant)? If so, how should I format the query messages?

MadeAgents org

Hi, our training is based on the xLAM data, so there is no specific requirement for a multi-turn dialogue format. We directly feed multi-turn queries (e.g., user -> assistant -> tool result -> assistant) into the model for function calling. If an assistant turn in the history contains a function call, it is represented in the same function-calling format that our model outputs.
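
For reference, that function-calling output format is a JSON list of calls wrapped in triple backticks, the same format that appears in the tool_calls field of the example further down (values here are only illustrative):

```
[{"name": "get_weather", "arguments": {"location": "New York", "unit": "celsius"}}]
```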

In this issue on the xLAM repo, they mention using the xLAM client for multi-turn queries. I'm wondering whether that also applies to the Hammer models?

MadeAgents org

The xLAM client does not apply to our Hammer model because, in the current version, we did not incorporate "\n[BEGIN OF HISTORY STEPS]\n{history_string}\n[END OF HISTORY STEPS]\n" during training. However, our model can use the conversation history directly as the query input to complete the corresponding function calls, as shown in the example below. We will update the multi-turn template as soon as possible and enhance the model's capabilities in this area.
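
For completeness, the example assumes the tokenizer and model have already been loaded with transformers; the checkpoint id below is only a placeholder, so substitute the Hammer checkpoint you are actually using:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint id -- replace with the Hammer model you are using.
model_name = "MadeAgents/Hammer2.0-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")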

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather like in New York?"},
    # The assistant's earlier turn carries its function call in the model's output format.
    {"role": "assistant", "content": "To get the weather information for New York, I'll need to use the get_weather function.", "tool_calls": '```\n[{"name": "get_weather", "arguments": {"location": "New York", "unit": "celsius"} }]\n```'},
    # The tool result is fed back as an "observation" turn.
    {"role": "observation", "content": '{"temperature": 30, "description": "Partly cloudy"}'},
    {"role": "user", "content": "Now, search for the weather in San Francisco."}
]



# Example tool definitions (optional)
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city and state, e.g. San Francisco, New York"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"], "description": "The unit of temperature to return"}
            },
            "required": ["location"]
        }
    },
    {
        "name": "search",
        "description": "Search for information on the internet",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query, e.g. 'latest news on AI'"}
            },
            "required": ["query"]
        }
    },
    {
        "name": "respond",
        "description": "When you are ready to respond, use this function. This function allows the assistant to formulate and deliver appropriate replies based on the input message and the context of the conversation. Generate a concise response for simple questions, and a more detailed response for complex questions.",
        "parameters": {
            "type": "object",
            "properties": {
                "message": {"type": "string", "description": "The content of the message to respond to."}
            },
            "required": ["message"]
        }
    }
]

def build_conversation_history_prompt(conversation_history):
    history_string = ''
    for step_data in conversation_history:
        # Assistant turns that made a function call are rendered with that call;
        # all other turns are rendered with their plain content.
        if "tool_calls" in step_data:
            history_string += f'{step_data["role"]}: {step_data["tool_calls"]}\n'
        else:
            history_string += f'{step_data["role"]}: {step_data["content"]}\n'
    return history_string

# TASK_INSTRUCTION, FORMAT_INSTRUCTION, convert_to_format_tool and build_prompt
# are the helper definitions from the model card's usage example.
query = build_conversation_history_prompt(messages)
format_tools = convert_to_format_tool(tools)
content = build_prompt(TASK_INSTRUCTION, FORMAT_INSTRUCTION, format_tools, query)
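
For the messages above, build_conversation_history_prompt produces a plain role-prefixed transcript along these lines:

system: You are a helpful assistant.
user: What's the weather like in New York?
assistant: ```
[{"name": "get_weather", "arguments": {"location": "New York", "unit": "celsius"} }]
```
observation: {"temperature": 30, "description": "Partly cloudy"}
user: Now, search for the weather in San Francisco.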

# Wrap the fully built prompt as a single user turn for the chat template.
messages = [
    {'role': 'user', 'content': content}
]

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)


outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
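
If everything is set up correctly, the decoded completion should be another triple-backtick-wrapped list of function calls for the latest user turn, e.g. a get_weather or search call targeting San Francisco (illustrative, not a captured output):

```
[{"name": "get_weather", "arguments": {"location": "San Francisco"}}]
```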
