AWS S3 Integration

Set up this integration to copy generated PDFs into an S3 bucket in your own AWS account.

By default, this integration applies to all templates, submissions, and combined submissions in your account. If you specify one or more API Token IDs, the integration only applies to submissions created with those API tokens. This lets you set up multiple S3 buckets for different workflows, or leave some workflows out of AWS entirely.

You can configure what type of submissions will be uploaded to your S3 bucket:

  • Only Live (default)
  • Only Test
  • Both Live and Test

You can also configure different path templates for live and test submissions.

Create an S3 Bucket

  • Sign in to your AWS account
  • Visit the S3 service
  • Click "Create bucket"
  • Choose a bucket name and select the correct AWS region
  • Click "Next", then configure your bucket options and permission
  • Click "Create bucket"

Create An IAM Policy With Limited Permissions

  • Visit the IAM service
  • Click "Policies"
  • Click "Create policy"
  • Under "Service", choose "S3"
  • Under "Actions", expand the "Write" section
    • Select the PutObject checkbox. (DocSpring does not need any other permissions)
  • Under "Resources", choose "Specific". Then click "Add ARN".
    • Paste your bucket name into "Bucket name"
    • Check the "Any" checkbox for "Object name"
    • Click "Add"
  • Click "Review policy"
  • Set a name for the new policy, e.g. "DocSpringS3Uploads"
  • Set a description, e.g. "Allows the DocSpring service to upload PDFs to our S3 bucket"
  • Click "Create policy"

Create An IAM User With Limited Permissions

  • Visit the IAM service
  • Click "Users"
  • Click "Add user"
  • Configure the username, e.g. "docspring-s3-uploads"
  • Under "Access type", select "Programmatic access"
  • Click "Next: Permissions"
  • Click the "Attach existing policies directly" option.
  • Find the policy you just created (use the Search box)
  • Select this policy by clicking the checkbox
  • Click "Next: Tags"
  • Skip the Tags section. Click "Next: Review"
  • Click "Create user"
  • Click "Show" under the "Secret access key".
  • Copy the Access key ID and Secret access key, and save these for later.
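
These IAM steps can also be scripted with the AWS CLI if you prefer (a sketch; the user and policy names match the examples above, and ACCOUNT_ID is a placeholder for your AWS account ID):

aws iam create-user --user-name docspring-s3-uploads
aws iam attach-user-policy \
  --user-name docspring-s3-uploads \
  --policy-arn arn:aws:iam::ACCOUNT_ID:policy/DocSpringS3Uploads
# Prints the Access key ID and Secret access key - save these for the next step
aws iam create-access-key --user-name docspring-s3-uploads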

Create the AWS S3 Integration in DocSpring

  • Visit the Account Integrations page
  • Click the "Create Integration" button at the top right
  • Select "AWS S3" in the "Service" dropdown
  • Paste your "Access key ID" into the "AWS Access Key ID" field
  • Paste your "Secret access key" into the "AWS Secret Access Key" field
  • Select the correct AWS Region from the dropdown list
  • Enter your S3 bucket name
  • Configure the "Path Template for Submissions"
    • Example: {{ template_id }}/{{ submission_id }}.pdf will upload your PDF to: tpl_eGc5CmFbPnCCmerqsx/sub_Gbxesk7Xf52Pq3KgT9.pdf
    • This path template uses the Liquid syntax, which is similar to Handlebars or Mustache templates.
    • You can use any values from the metadata object. (See the sketch after these steps.)
      • Access values with {{ metadata.<key> }}. For example: {{ metadata.pdf_filename }}
      • For example, to use the user_id from your metadata: {{ metadata.user_id }}
      • All invalid characters are replaced with an underscore.
    • Available variables:
      • account_id (Your DocSpring Account ID) - Example: acc_X3gQR5GN6tS6tcYgJs
      • template_id - Example: tpl_eGc5CmFbPnCCmerqsx
      • template_name
        • All invalid characters are replaced with an underscore. (See metadata above.)
      • submission_id - Example: sub_Gbxesk7Xf52Pq3KgT9
      • timestamp (Time when the submission was processed) - Example: 20180509094531
      • date - Example: 20180509
      • year - Example: 2018
      • month (Not zero-padded) - Example: 5
      • day (Not zero-padded) - Example: 9
  • Configure the "Path Template for Combined PDFs"
    • Leave this blank if you will not be combining any PDFs
    • Example: merged_pdfs/{{ combined_submission_id }}.pdf will upload your PDF to: merged_pdfs/com_Zbetd3ayK4EK3J4Hf4.pdf
    • You can use any values from the metadata object.
      • Access values with {{ metadata.<key> }}. For example: {{ metadata.pdf_filename }}
      • For example, to use the user_id from your metadata: {{ metadata.user_id }}
      • All invalid characters are replaced with an underscore.
    • Available variables:
      • account_id (Your DocSpring Account ID) - Example: acc_X3gQR5GN6tS6tcYgJs
      • combined_submission_id - Example: com_Zbetd3ayK4EK3J4Hf4
      • timestamp (Time when the combined submission was processed) - Example: 20181029094531
      • date - Example: 20181029
      • year - Example: 2018
      • month - Example: 10
      • day - Example: 29
  • Click "Create"

Now that you've created an AWS S3 integration, we will upload any generated PDFs to your S3 bucket. You can test the integration by generating a new live PDF. (By default, test PDFs are skipped.)

When you view a submission or combined submission in the web UI, you will see the S3 upload status in the Actions section at the bottom of the page.

FAQ

How can I configure the S3 integration to only upload certain PDFs?

You can use the "Submission Type" and "API Token IDs" fields to configure which PDFs will be uploaded to your S3 bucket.

For example, if you set "Submission Type" to "live", then DocSpring will only upload live PDFs (without watermarks) to S3, and test PDFs will be skipped. The other options are "test" and "all" (for both test and live PDFs).

You can also provide a comma-separated list of API token IDs in the "API Token IDs" field. If you provide one or more API token IDs, then DocSpring will only upload submissions (or combined submissions) that were created using one of those API tokens. You could use this feature to set up separate S3 buckets for different environments (e.g. dev, staging, qa, production), with a separate API token for each environment. This would allow you to send your generated PDFs to the correct S3 bucket for each environment while sharing the same templates in a single DocSpring account.

Does DocSpring still keep a copy of the PDF?

Yes. This AWS S3 integration is just a one-way file upload; DocSpring continues to store your template PDFs and generated PDFs. We serve our own copy of the generated PDF when you request a download URL, and we also use our own copies when merging PDFs into a "combined submission".

Does DocSpring delete the PDF from my S3 bucket when a submission expires?

No. DocSpring only deletes its own copy of the PDF when a submission expires. We will never delete a PDF from your custom S3 bucket.

How can I tell when the PDF has been uploaded to my custom S3 bucket?

Be aware that the submission's state will change to processed as soon as our copy of the PDF is ready, but it might take a few more seconds before the PDF is uploaded to your custom S3 bucket. The S3 upload happens after the initial processing is completed.

If you need to know when the PDF is available in your own S3 bucket, you can check the actions array in the API response. This array will be empty before the submission is processed. As soon as the submission is processed, it will contain an entry for the aws_s3_upload action. This action's state will be pending until the file has been uploaded to your S3 bucket, and then it will change to processed.

For example, here's how you could check whether the PDF has been uploaded to your own S3 bucket (in JavaScript):

// Returns true once the aws_s3_upload action has finished for this submission.
// `submission` is a submission object returned by the DocSpring API.
const pdfHasBeenUploadedToS3Bucket = (submission) => {
  if (!submission.actions || submission.actions.length === 0) return false
  const action = submission.actions.find(
    (a) => a.action_type === 'aws_s3_upload'
  )
  return Boolean(action) && action.state === 'processed'
}

This code assumes that you only have a single aws_s3_upload action. Note that it is possible to configure multiple AWS integrations, so you can store the PDF in multiple buckets.
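
If you're polling the API, you could combine that check with a loop like the following (a rough sketch, assuming the "get submission" endpoint and HTTP Basic authentication with your API token ID and secret, as described in the DocSpring API docs):

// Poll the API until the aws_s3_upload action has finished (or give up after ~30 seconds)
const waitForS3Upload = async (submissionId) => {
  for (let attempt = 0; attempt < 30; attempt++) {
    const response = await fetch(
      `https://api.docspring.com/api/v1/submissions/${submissionId}`,
      {
        headers: {
          Authorization:
            'Basic ' + Buffer.from('API_TOKEN_ID:API_TOKEN_SECRET').toString('base64'),
        },
      }
    )
    const submission = await response.json()
    if (pdfHasBeenUploadedToS3Bucket(submission)) return submission
    await new Promise((resolve) => setTimeout(resolve, 1000)) // wait 1 second between checks
  }
  throw new Error('Timed out waiting for the S3 upload to finish')
}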

Alternatively, you could set up an AWS S3 event notification that sends a webhook to your server as soon as the PDF has been uploaded to your S3 bucket. This way, you wouldn't need to do any polling.

What if my "path template" generates a duplicate key?

If a path template generates a duplicate key, any existing files will be overwritten with the new file. To protect against this case, you should enable "Versioning" for your S3 bucket. This means that you will always be able to restore an original file in case it is accidentally overwritten with a duplicate key. Your path template should also use at least one variable that is guaranteed to be unique, such as submission_id.

Note: Please don't rely on the timestamp variable to provide unique filenames, because multiple PDFs can be processed in the same second.
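
For example, a path template such as {{ year }}/{{ month }}/{{ day }}/{{ submission_id }}.pdf keeps your uploads organized by date while still producing a unique key for every submission.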
