How to upload large files to AWS S3 in an ASP.NET web service using the Multipart Upload API?

Setu Kumar Basak
Apr 6, 2019 · 3 min read


Last week, I faced the challenge of uploading large files (around 1.5 GB) to AWS S3 from an ASP.NET web service. In this post, I will explain how I approached the problem.

The requirement was to upload a large file to S3 and return the S3 file URL to the front-end. On the front-end, the large file is divided into chunks (byte ranges of the file), and the chunks are sent to the WCF service one at a time, each with its chunk number, as sketched below.
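To make this flow concrete, here is a minimal, hypothetical sketch of what such a chunk-upload contract could look like. The type and member names (IFileUploadService, UploadChunk, ChunkUploadResult, PartETagInfo) are illustrative placeholders, not the actual contract from my project.

```csharp
// Hypothetical WCF contract for receiving one chunk at a time from the front-end.
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class PartETagInfo
{
    [DataMember] public int PartNumber { get; set; }   // chunk number of an uploaded part
    [DataMember] public string ETag { get; set; }       // ETag returned by S3 for that part
}

[DataContract]
public class ChunkUploadResult
{
    [DataMember] public string UploadId { get; set; }   // S3 multipart upload id
    [DataMember] public int PartNumber { get; set; }    // chunk number just uploaded
    [DataMember] public string ETag { get; set; }        // ETag of the uploaded chunk
    [DataMember] public string FileUrl { get; set; }     // set only after the last chunk
}

[ServiceContract]
public interface IFileUploadService
{
    // Called once per chunk; the first call has no upload id yet, and later calls
    // carry the upload id plus the ETags of all previously uploaded chunks.
    [OperationContract]
    ChunkUploadResult UploadChunk(string fileName, int chunkNumber, bool isLastChunk,
                                  byte[] chunkBytes, string uploadId,
                                  List<PartETagInfo> previousETags);
}
```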

I followed the approach described in Upload a File to an S3 Bucket Using the AWS SDK for .NET (Low-Level API) to upload the large file to the S3 bucket.

Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.

Using multipart upload provides the following advantages:

  • Improved throughput — You can upload parts in parallel to improve throughput.
  • Quick recovery from any network issues — Smaller part size minimizes the impact of restarting a failed upload due to a network error.
  • Pause and resume object uploads — You can upload object parts over time. Once you initiate a multipart upload there is no expiry; you must explicitly complete or abort the multipart upload.
  • Begin an upload before you know the final object size — You can upload an object as you are creating it.

Below, I will explain the code I wrote, step by step.

Step-1: In the first step, if this is the first chunk, we initiate the multipart upload request with the bucket name and the file name as the key. Once initiated, we get a unique upload ID. We send this upload ID back to the front-end in the response, because we will need it when uploading the remaining chunks of the file.
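A minimal sketch of this step, assuming the AWS SDK for .NET low-level API and placeholder bucket/key values, might look like this:

```csharp
using Amazon.S3;
using Amazon.S3.Model;

public static string InitiateUpload(IAmazonS3 s3Client, string bucketName, string key)
{
    var initiateRequest = new InitiateMultipartUploadRequest
    {
        BucketName = bucketName,   // e.g. "my-upload-bucket" (placeholder)
        Key = key                  // the file name used as the S3 object key
    };

    // S3 returns a unique upload id that every subsequent part upload
    // and the final completion call must reference.
    InitiateMultipartUploadResponse initiateResponse =
        s3Client.InitiateMultipartUpload(initiateRequest);

    return initiateResponse.UploadId;
}
```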

Step-2: In this step, we upload the chunk. We create an UploadPartRequest and upload the chunk. Here, PartNumber is the chunk number. We wrap the bytes in a MemoryStream and provide that stream as InputStream. PartSize is the stream length. IsLastPart indicates whether this is the last chunk or not.
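Roughly, assuming the chunk bytes arrive from the front-end as a byte array, this step could be sketched as follows (the helper name and parameters are mine, for illustration):

```csharp
using System.IO;
using Amazon.S3;
using Amazon.S3.Model;

public static PartETag UploadChunk(IAmazonS3 s3Client, string bucketName, string key,
                                   string uploadId, int chunkNumber,
                                   byte[] chunkBytes, bool isLastChunk)
{
    using (var chunkStream = new MemoryStream(chunkBytes))
    {
        var uploadPartRequest = new UploadPartRequest
        {
            BucketName = bucketName,
            Key = key,
            UploadId = uploadId,
            PartNumber = chunkNumber,        // 1-based chunk number
            PartSize = chunkStream.Length,   // size of this chunk in bytes
            InputStream = chunkStream,       // the chunk bytes wrapped in a MemoryStream
            IsLastPart = isLastChunk         // whether this is the last chunk
        };

        UploadPartResponse uploadPartResponse = s3Client.UploadPart(uploadPartRequest);

        // The ETag (together with the part number) is needed later to complete the upload.
        return new PartETag(uploadPartResponse.PartNumber, uploadPartResponse.ETag);
    }
}
```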

Step-3: Each uploaded chunk returns an ETag, and we need the ETags of all the chunks to complete the full upload. So, if the chunk is not the last one, we send its part number and ETag back to the front-end in the response, so that the next chunk upload request carries the ETags of all previous chunks.

But if this is the last chunk, we complete the multipart upload with all the ETags and the upload ID. Upon completion of the multipart upload, we get the S3 file URL and return it to the front-end.
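Completing the upload could be sketched like this, again with illustrative helper names; CompleteMultipartUploadResponse.Location gives the URL of the assembled object:

```csharp
using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;

public static string CompleteUpload(IAmazonS3 s3Client, string bucketName, string key,
                                    string uploadId, List<PartETag> allETags)
{
    var completeRequest = new CompleteMultipartUploadRequest
    {
        BucketName = bucketName,
        Key = key,
        UploadId = uploadId
    };
    completeRequest.AddPartETags(allETags);   // part numbers + ETags of every chunk

    CompleteMultipartUploadResponse completeResponse =
        s3Client.CompleteMultipartUpload(completeRequest);

    // The URL of the assembled S3 object, returned to the front-end.
    return completeResponse.Location;
}
```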

Here, we also return a thumbnail URL for images; in the next article, I will discuss how we generate the thumbnail from the image using AWS Lambda.
