Yes, so there is a single ‘instructions’ field for natural-language input (see screenshot). Shouldn’t we instead return the tool metadata via JSON-RPC in a way that lets the agent, when fetching the available tools, see which parameters it needs to call each tool with? Or, if this ‘instructions’-based approach is deliberate, I’m curious about the considerations behind it (e.g. are you expecting on-par accuracy as LLMs improve, so that specifying the params is no longer necessary and the latency trade-offs disappear)?
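For illustration, this is the kind of structured tool definition I have in mind (the tool name and fields here are hypothetical, loosely following the common JSON-RPC tool-listing pattern where each tool carries a JSON Schema for its inputs):

```json
{
  "tools": [
    {
      "name": "create_invoice",
      "description": "Create an invoice for a customer",
      "inputSchema": {
        "type": "object",
        "properties": {
          "customer_id": { "type": "string", "description": "ID of the customer" },
          "amount":      { "type": "number", "description": "Invoice total" },
          "currency":    { "type": "string", "enum": ["EUR", "USD"] }
        },
        "required": ["customer_id", "amount"]
      }
    }
  ]
}
```

With a schema like this the agent can construct a typed call directly, instead of packing everything into a single free-text `instructions` string and relying on the server side to parse it.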