
# Amazon S3

Amazon S3 — the Simple Storage Service — is a common place to dump data for long-term storage on AWS. Pipedream supports delivery to S3 as a first-class Destination.

# Using $send.s3 in workflows

You can send data to an S3 Destination in Node.js code steps using `$send.s3()`.

`$send.s3()` takes the following parameters:

```javascript
$send.s3({
  bucket: "your-bucket-here",
  prefix: "your-prefix/",
  payload: event.body,
});
```

As with any `$send` function, you can use `$send.s3()` conditionally, within a loop, or anywhere you'd normally call a function.
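
For example, here's a minimal sketch of conditional delivery from a workflow code step. The `level` field on the incoming event is an assumption for illustration; adjust it to your event's shape:

```javascript
// Deliver only error events to S3; everything else is skipped.
// event.body.level is a hypothetical field, not part of Pipedream's event schema.
if (event.body.level === "error") {
  $send.s3({
    bucket: "your-bucket-here",
    prefix: "errors/",
    payload: event.body,
  });
}
```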

# Using $.send.s3 in component actions

If you're authoring a component action, you can deliver data to an S3 Destination using `$.send.s3`.

`$.send.s3` functions the same as `$send.s3` in workflow code steps:

```javascript
async run({ $ }) {
  $.send.s3({
    bucket: "your-bucket-here",
    prefix: "your-prefix/",
    payload: event.body,
  });
}
```
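
For context, here's a minimal sketch of how that `run` method might sit inside a complete action component. The component `key`, `name`, and the `payload` prop are illustrative names, not requirements:

```javascript
export default {
  key: "my-s3-delivery", // hypothetical component key
  name: "Deliver to S3",
  version: "0.0.1",
  type: "action",
  props: {
    // Hypothetical prop: the data to deliver, passed in from a previous step
    payload: { type: "object", label: "Payload" },
  },
  async run({ $ }) {
    $.send.s3({
      bucket: "your-bucket-here",
      prefix: "your-prefix/",
      payload: this.payload,
    });
  },
};
```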

# S3 Bucket Policy

In order for us to deliver objects to your S3 bucket, you'll need to modify its bucket policy to allow Pipedream to upload objects.

Replace `[your bucket name]` with the name of your bucket in the `Resource` ARNs near the bottom of the policy.

```json
{
  "Version": "2012-10-17",
  "Id": "allow-pipedream-limited-access",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::203863770927:role/Pipedream"
      },
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::[your bucket name]",
        "arn:aws:s3:::[your bucket name]/*"
      ]
    }
  ]
}
```

This bucket policy provides the minimum set of permissions necessary for Pipedream to deliver objects to your bucket. We use the Multipart Upload API to upload objects, and require the relevant permissions.
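
If you manage your bucket's configuration in code, here's a minimal sketch of applying this policy with the AWS SDK for JavaScript (v3). The `policy.json` file name and the region are assumptions:

```javascript
import { readFileSync } from "fs";
import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // region is an assumption

// policy.json holds the bucket policy above, with [your bucket name] filled in
await s3.send(new PutBucketPolicyCommand({
  Bucket: "your-bucket-here",
  Policy: readFileSync("policy.json", "utf8"),
}));
```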

# S3 Destination delivery

S3 Destination delivery is handled asynchronously, separate from the execution of a workflow. Moreover, events sent to an S3 bucket are batched and delivered once a minute. For example, if you sent 30 events to an S3 Destination within a particular minute, we would collect all 30 events, delimit them with newlines, and write them to a single S3 object.
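
For instance, an object containing two payloads delivered in the same minute might look like this (the payload contents here are purely illustrative):

```
{"name":"event 1","ts":"2019-05-25T16:14:02Z"}
{"name":"event 2","ts":"2019-05-25T16:14:45Z"}
```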

In some cases, delivery will take longer than a minute. You can always review how many Destinations we've delivered a given event to by examining the Dest column in the Inspector.

# S3 object format

We upload objects using the following naming format:

```
[PREFIX]/YYYY/MM/DD/HH/YYYY-MM-DD-HH-MM-SS-IDENTIFIER.gz
```

That is, we write objects under your prefix, within folders for the current date and hour, and repeat the date and time in the object name itself, so that it's easy to tell when an object was uploaded from its name alone.

For example, if I were writing data to a prefix of `test/`, I might see an object in S3 at this path:

```
test/2019/05/25/16/2019-05-25-16-14-58-8f25b54462bf6eeac3ee8bde512b6c59654c454356e808167a01c43ebe4ee919.gz
```

As noted above, a given object contains all payloads delivered to an S3 Destination within a specific minute. Multiple events within a given object are newline-delimited.
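
To read these objects back, here's a hedged sketch using the AWS SDK for JavaScript (v3) and Node's built-in `zlib`. It assumes your payloads are JSON (whether that holds depends on what you passed to `$send.s3()`), and it reuses the example key above:

```javascript
import { gunzipSync } from "zlib";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // region is an assumption

const { Body } = await s3.send(new GetObjectCommand({
  Bucket: "your-bucket-here",
  Key: "test/2019/05/25/16/2019-05-25-16-14-58-8f25b54462bf6eeac3ee8bde512b6c59654c454356e808167a01c43ebe4ee919.gz",
}));

// Gunzip the object body, then split on newlines: one payload per line
const text = gunzipSync(Buffer.from(await Body.transformToByteArray())).toString("utf8");
const payloads = text.trim().split("\n").map(JSON.parse); // assumes JSON payloads
console.log(`${payloads.length} events in this object`);
```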

# Limiting S3 Uploads by IP

S3 provides a mechanism to restrict operations to specific IP addresses. If you'd like to apply that filter, uploads via `$send.s3()` will come from one of the following IP addresses (see the policy sketch after this list):

```
3.208.254.105
3.212.246.173
3.223.179.131
3.227.157.189
3.232.105.55
3.234.187.126
18.235.13.182
34.225.84.31
52.2.233.8
52.23.40.208
52.202.86.9
52.207.145.190
54.86.100.50
54.88.18.81
54.161.28.250
107.22.76.172
```
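
For reference, here's a hedged sketch of how the `s3:PutObject` statement from the policy above might be constrained with an `aws:SourceIp` condition covering these addresses. Treat it as a starting point, not a definitive policy:

```json
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::203863770927:role/Pipedream"
  },
  "Action": ["s3:PutObject"],
  "Resource": "arn:aws:s3:::[your bucket name]/*",
  "Condition": {
    "IpAddress": {
      "aws:SourceIp": [
        "3.208.254.105/32",
        "3.212.246.173/32",
        "3.223.179.131/32",
        "3.227.157.189/32",
        "3.232.105.55/32",
        "3.234.187.126/32",
        "18.235.13.182/32",
        "34.225.84.31/32",
        "52.2.233.8/32",
        "52.23.40.208/32",
        "52.202.86.9/32",
        "52.207.145.190/32",
        "54.86.100.50/32",
        "54.88.18.81/32",
        "54.161.28.250/32",
        "107.22.76.172/32"
      ]
    }
  }
}
```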

This list may change over time. If you've previously whitelisted these IP addresses and are having trouble uploading S3 objects, please check to ensure this list matches your firewall rules.
