Seems like I can’t do that:
Oh, sorry, I wasn’t sure if you were on the Basic or Advanced plan
I can think of two options:
- Using data store keys as the counter: count the total number of keys, generating a unique key per event (e.g. a UUID). See the sketch after this list.
  a. Run a timer workflow that gets the number of keys in the data store, then deletes the keys
  b. Make sure it doesn’t go over 500 in a given time frame
- Using an external analytics tool or database like Supabase to send events and aggregate the numbers
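To illustrate the first option, here's a minimal sketch of the event-recording step, assuming Pipedream's data store prop API (`set`/`keys`/`delete`); the prop name `data` is my placeholder:

```typescript
import { randomUUID } from "crypto";

// Event-recording step: one write per event, each under a fresh random key.
// Appending new keys avoids the read-modify-write race that a single shared
// counter key hits when two executions run concurrently.
export default defineComponent({
  props: {
    data: { type: "data_store" },
  },
  async run() {
    await this.data.set(randomUUID(), Date.now());
  },
});
```

And the timer workflow (steps a and b above) would count and reset the window:

```typescript
// Timer-triggered step: check the window's total, then reset it.
export default defineComponent({
  props: {
    data: { type: "data_store" },
  },
  async run() {
    const keys = await this.data.keys();
    if (keys.length >= 500) {
      // ... alert or throttle downstream workflows here ...
    }
    // Only the keys we counted are deleted, so events recorded while this
    // step runs roll over into the next window instead of being lost.
    await Promise.all(keys.map((k) => this.data.delete(k)));
  },
});
```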
ok, will look at those options
On the pricing front, I hope you can see how I look at this.
Basically, this says to pay $588 in order to limit execution to one thread, to work around a race condition where there’s no real alternative.
This doesn’t mean I’m getting any more value out of the tool; it’s just a roadblock to my using it successfully.
The Basic plan wouldn’t really give me the things I want.
This is always how it goes when I consider a paid plan with Pipedream, and it’s never compelling.
For the data store race condition, I agree with you that it should be an out-of-the-box feature.
So there shouldn’t be any need for this workaround.
And since the data store actually uses Redis under the hood… it should be fairly simple to support this.
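For reference, atomic counting is a native Redis primitive (INCR), so a data-store-level counter wouldn't need the key-per-event workaround at all. A minimal sketch of the idea, assuming a direct Redis connection through the `ioredis` npm package (which the data store doesn't currently expose):

```typescript
import Redis from "ioredis";

// Hypothetical direct connection; Pipedream's data store doesn't expose
// the underlying Redis. This just shows the primitive involved.
const redis = new Redis(process.env.REDIS_URL!);

// INCR is atomic on the Redis server, so concurrent workflow executions
// can never clobber each other the way concurrent get-then-set calls can.
async function countEvent(windowSeconds = 3600): Promise<number> {
  // One counter key per time window, e.g. "events:478121" for this hour.
  const key = `events:${Math.floor(Date.now() / 1000 / windowSeconds)}`;
  const count = await redis.incr(key);
  if (count === 1) {
    await redis.expire(key, windowSeconds * 2); // auto-expire old windows
  }
  return count;
}
```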
And to go from 10KB to 100KB costs $350/year, or $588/year to go to 1MB.
You could always use an external data store. Something like DynamoDB is almost free at low volumes.
And there are also some turnkey cloud DB offerings like Redis Cloud and others.
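As one concrete example of what the external route buys you: DynamoDB supports server-side atomic increments via the `ADD` update action, which sidesteps the race condition entirely. A minimal sketch using the AWS SDK v3 (the `event-counters` table name and region are hypothetical):

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, UpdateCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(
  new DynamoDBClient({ region: "us-east-1" }),
);

// ADD is applied atomically on the server, so any number of concurrent
// workflow executions can bump the same counter safely.
async function countEvent(windowId: string): Promise<number> {
  const res = await client.send(
    new UpdateCommand({
      TableName: "event-counters",
      Key: { pk: windowId },
      UpdateExpression: "ADD #n :one",
      ExpressionAttributeNames: { "#n": "count" },
      ExpressionAttributeValues: { ":one": 1 },
      ReturnValues: "UPDATED_NEW",
    }),
  );
  return res.Attributes?.count as number;
}
```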
Yep, exactly, thanks Marco. Data stores aren’t meant to store large amounts of data; they’re mainly for sharing small data between workflows.
File stores are currently unlimited and are intended for storing static data, if you need more storage.