awesome
My assumption is that to enable an LLM to use an action, we would go through the configuration process using Connect components and connect-react, somehow export the saved component configuration from connect-react, save that in a database, and load it at invocation/execution time when the LLM decides to call it
with some way (handled on our end) of marking which fields the LLM should supply
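A minimal sketch of what that saved record might look like. This is purely illustrative: the field names and the `SavedActionConfig` shape are assumptions, not connect-react's actual export format.

```typescript
// Hypothetical shape for a saved component configuration record.
// Names here are illustrative, not connect-react's actual export format.
type SavedActionConfig = {
  componentKey: string; // e.g. "slack-send-message"
  configuredProps: Record<string, unknown>; // values fixed at configure time
  llmFields: string[]; // prop names the LLM should supply at invocation time
};

const example: SavedActionConfig = {
  componentKey: "slack-send-message",
  configuredProps: { channelId: "C0123456" }, // user picked the channel in the UI
  llmFields: ["text"], // the message body is LLM-generated
};
```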
Will the configuration be UI / form-driven by the user? I would imagine some of the config might be handled by the LLM?
but e.g. picking the channel to send to will be user-configured
for better quality & precision you want to hide as much of the complexity from the LLM as possible at runtime; e.g. if you're always sending to the same channel, there's no reason to make it choose the channel manually
would be cool if options were specified as a JSON Schema btw, since that's the standard for tool calling / structured generation in LLMs
e.g. in the screenshot above, the channelType (per Slack's API docs) has to be im, mpim, channel, or group, but these specific values aren't enumerated in a way that would give the LLM enough information to generate them
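To illustrate the point about enumeration: a sketch of a tool-calling parameter schema for this action. The overall schema shape is hypothetical; only the four channelType values come from the conversation above.

```typescript
// Sketch of a tool-calling parameter schema. Enumerating the allowed
// channelType values lets the model generate only valid options instead
// of guessing free-form strings.
const sendMessageSchema = {
  type: "object",
  properties: {
    channelType: {
      type: "string",
      enum: ["im", "mpim", "channel", "group"], // per Slack's API docs
      description: "Slack conversation type",
    },
    text: { type: "string", description: "Message body" },
  },
  required: ["channelType", "text"],
} as const;
```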
Yea I hear you, that’s been a big ask. There is some complexity on our end around dynamic props, which means that the schema may change based on the input value of a given field
makes sense
the thing you’d have to do, I think, is just choose all dynamic props ahead of time
or use an iterative loop, but then you have to do iterative structured generation
Yea, for any of those fields where there are dynamic options available, you’ll want to either call the /configure API yourself or expose that as a tool to your LLM
yeah I think what we will end up doing is just requiring remote options to be specified ahead of time
trying to think about how to make this as intuitive as possible for the end-user
like in some cases maybe they do want the LLM to specify a dynamic option, and in other cases maybe they only want a specific one; so we will need a way for the end-user to indicate which fields should be LLM-generated, not sure if it’s possible to do that through connect-react
On the demo app, check out the propNames arg on the left side: you can define which props to add to the UI form, so you could theoretically only show what you want, and if there are other prop inputs you want to define on the user’s behalf, you can just include those in the payload even if you omit them from the UI.
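That merge step at invocation time could look something like this. The helper name and shapes are hypothetical, not connect-react APIs; the idea is just that user-pinned props override whatever the model generates.

```typescript
// Hypothetical helper: merge props the user pinned at configure time with
// values the LLM generated at invocation time. Pinned values are spread
// last, so the model can't override user choices like the target channel.
function buildPayload(
  pinned: Record<string, unknown>,
  llmGenerated: Record<string, unknown>,
): Record<string, unknown> {
  return { ...llmGenerated, ...pinned };
}

const payload = buildPayload(
  { channelId: "C0123456" }, // user-configured in the form
  { channelId: "not-this-one", text: "Hi!" }, // LLM output at runtime
);
// payload.channelId stays "C0123456"; payload.text is "Hi!"
```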
right that makes sense
e.g. for slack, have them pick the channel to send the message in, and then always have the message generated by the LLM