Data Store goes AWOL: too much data?

I have one rather large record in a data store that has suddenly become unreadable. All reads of that record now fail with:

Error: unexpected end of file
    at BrotliDecoder.zlibOnError (zlib.js:187:17)

(This was tested using the “get_record_or_create” step)

When I open the data store in the Pipedream website, the website shows a 404 error, but the dev tools reveal a request to https://api.pipedream.com/graphql?operationName=myStore&variables=... which returns an HTTP 500. I suspect that the value of that record has gotten too big, as I am regularly appending data to it.
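
For reference, the append pattern looks roughly like this when expressed as a Node.js code step (a simplified sketch, not my exact workflow; the data_store prop name and the "records" key are placeholders):

    export default defineComponent({
      props: {
        data: { type: "data_store" },
      },
      async run({ steps }) {
        // Each run reads the whole record, appends one entry, and writes
        // everything back, so the stored value grows without bound.
        const existing = (await this.data.get("records")) ?? [];
        existing.push(steps.trigger.event);
        await this.data.set("records", existing);
      },
    });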

For me, this means the data in that record is currently lost. There is no way for me to access it. I had to create a new data store and start from scratch. If the issue is indeed due to record size, the new data store will inevitably run into the same problem at some point.

Hi @ew

First off, welcome to the Pipedream community. Happy to have you!

That’s correct: Data Store storage is not an infinite resource.

We’re working on providing better guardrails and clearer limit expectations. For now, think of it as a simple key-value store; it shouldn’t be used for storing large amounts of important data.

It’s more of a cache and less of a database.
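
If you do want to keep appending, one way to stay within that model is to cap the record so it can’t grow without bound. A minimal sketch, inside a code step’s run() with a data_store prop named data (the key name, cap, and newEntry are placeholders):

    // Keep only the newest entries so the stored value stays bounded.
    const MAX_ENTRIES = 500;
    const existing = (await this.data.get("records")) ?? [];
    existing.push(newEntry); // newEntry: whatever you're appending
    await this.data.set("records", existing.slice(-MAX_ENTRIES));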

Hi @pierce, thank you for the quick response!

I understand; I suppose I shouldn’t have used data stores for our use case in the first place. I also understand that they’re still in preview.

However, the data store feature has been around for a while and is featured prominently on your website. If it really is not ready for production use and will remain in this state, perhaps you should consider advertising it less prominently or adding some documentation to explain what should be used instead.

If we do need to set up our own external data store to avoid sudden irrecoverable data loss, the value of Pipedream would be greatly diminished for us. At that point, we could probably just as well set up our own service in Google Cloud Platform.

Hi @ew

Thanks for the feedback, it’s always appreciated.

The newest version of Data Stores will have clearer guardrails on usage limits, but this also means that once the storage limit is hit, any new incoming data will be rejected immediately.

Unfortunately, Data Stores do not and will not have infinite data storage, but the v1.0 general release will include clear ceilings on this.
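
In the meantime, it’s worth treating writes as fallible. A defensive sketch (this assumes a rejected write surfaces as a thrown error from the write call; the exact failure mode isn’t finalized, and the names are placeholders):

    try {
      await this.data.set("records", updated);
    } catch (err) {
      // The write was rejected (e.g., the storage limit was hit);
      // handle it here rather than losing the update silently.
      console.error("Data store write rejected:", err.message);
    }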

Got it; I understand that data storage cannot be infinite. A clear error message, with no data being lost or left in an inaccessible state, would make this issue much more pleasant for us to resolve!