How To Encrypt and Upload Large Files to Amazon S3 in Laravel

Last week I wrote an article called How to Encrypt Large Files in Laravel, explaining how to encrypt big files on the local filesystem. Today I want to show you how to upload and encrypt large files to Amazon S3, using Laravel and the FileVault package.

First, I'll explain a couple of concepts and methods that we will be using, and then we'll write the code.

Streaming Files to Amazon S3

Laravel already ships with all the tools needed to upload a file to Amazon S3. If you don't know how to do that already, have a look at the putFile and putFileAs functions on the Storage facade. With either of these two functions, Laravel will automatically manage streaming a file to a storage location, such as Amazon S3. All you need to do is something like this:

Storage::disk('s3')->putFile('photos', new File('/path/to/photo'));
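
The related putFileAs function works the same way but lets you choose the stored filename yourself. A minimal sketch (the path and name here are just placeholders):

// putFileAs($path, $file, $name) stores the file under the given name
Storage::disk('s3')->putFileAs('photos', new File('/path/to/photo'), 'photo.jpg');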

Streaming a file to S3 may take a long time, depending on the network speed. Even though the putFile and putFileAs functions stream the file in segments and won't consume a lot of memory, this is still a task that may end up taking a long time to complete, causing timeouts. That's why it's recommended to use queued jobs for this operation.

Using Queued Jobs

Queues allow you to defer the processing of a time-consuming task. Deferring these time-consuming tasks drastically speeds up web requests to your application.

We will use two separate queued jobs: one to encrypt the file and another to upload the encrypted file to Amazon S3.

In Laravel, you can chain queued jobs so that they run in sequence. This way, we can start uploading the file to S3 immediately after the file has been encrypted.
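
As a quick illustration of the chaining API (the job names here are placeholders, not the jobs we create below), the first job runs, and the chained job is dispatched only if it completes successfully:

// SecondJob runs only after FirstJob finishes without failing
FirstJob::withChain([
    new SecondJob,
])->dispatch();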

Let's Start Coding

In this tutorial, we will build the encrypt and upload functionalities to S3, on top of the app created in our previous tutorial. If you haven't already seen my previous piece, here it is.

As a quick recap, we have built a simple app where users can log in and upload files that will be encrypted as soon as the upload finishes.

Configure Amazon S3

First, you will need to configure S3 on the Amazon side and create a bucket where we will store the encrypted files. This tutorial does a great job of explaining how to create a bucket, add the proper policies, associate an IAM user with it, and add the AWS variables to your .env file.
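
For reference, Laravel's built-in s3 disk configuration reads these variables from your .env file (the values below are placeholders):

AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=your-bucket-name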

As per the Laravel docs, we also need to install the Flysystem adapter package via Composer:

composer require league/flysystem-aws-s3-v3

We also need to install an additional package for a cached adapter, which is strongly recommended for performance:

composer require league/flysystem-cached-adapter
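
To actually enable caching, the s3 disk in config/filesystems.php accepts a cache option. A sketch based on the Laravel docs of this era (the store, expire, and prefix values are examples, not requirements):

's3' => [
    'driver' => 's3',
    // ... key, secret, region, and bucket settings ...
    'cache' => [
        'store' => 'memcached',   // any configured cache store
        'expire' => 600,          // seconds before cached metadata expires
        'prefix' => 'cache-prefix',
    ],
],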

Creating Queueable Jobs

Next, let's create the two queueable jobs that we'll use for encryption and uploading to S3:

php artisan make:job EncryptFile
php artisan make:job MoveFileToS3

This will create two files in app/Jobs: EncryptFile.php and MoveFileToS3.php. Each job takes a parameter in its constructor representing the filename, and we add the functionality of encrypting and uploading to S3 in the handle method. This is what the two jobs look like:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use SoareCostin\FileVault\Facades\FileVault;

class EncryptFile implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $filename;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($filename)
    {
        $this->filename = $filename;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        FileVault::encrypt($this->filename);
    }
}

<?php

namespace App\Jobs;

use Exception;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Http\File;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Storage;

class MoveFileToS3 implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $filename;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($filename)
    {
        $this->filename = $filename . '.enc';
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        // Upload the encrypted file to S3
        $result = Storage::disk('s3')->putFileAs(
            '/',
            new File(storage_path('app/' . $this->filename)),
            $this->filename
        );

        // Force collection of any existing garbage cycles.
        // If we don't do this, in some cases the file remains locked.
        gc_collect_cycles();

        if ($result == false) {
            throw new Exception("Couldn't upload file to S3");
        }

        // Delete the file from the local filesystem
        if (!Storage::disk('local')->delete($this->filename)) {
            throw new Exception('File could not be deleted from the local filesystem');
        }
    }
}

As you can see, the EncryptFile job is simple: we are just using the FileVault package to encrypt a file and save it in the same directory, with the same name and the .enc extension. It's exactly what we were doing before, in the HomeController's store method.

For the MoveFileToS3 job, we are first using the Laravel putFileAs method, which will automatically stream our file to S3, following the same directory convention as we had on the local filesystem.

We are then calling the PHP gc_collect_cycles function in order to force collection of any existing garbage cycles. In some cases, if we don't run this function, the file will remain locked and we won't be able to delete it in the next step.

Finally, we are deleting the file from the local filesystem, throwing exceptions if the upload or the delete processes fail.

Updating the Controller

Now let's update the HomeController.php file to match the new functionality.

Instead of encrypting the file inline in the store method using the FileVault package, we dispatch the newly created queued jobs, chained together:

EncryptFile::withChain([
    new MoveFileToS3($filename),
])->dispatch($filename);

Next, in the index method, we send both the local files and the S3 files of a user to the view, so we can display the files that are in the process of being encrypted and streamed to S3 together with the files that are already encrypted and stored in S3:

$localFiles = Storage::files('files/' . auth()->user()->id);
$s3Files = Storage::disk('s3')->files('files/' . auth()->user()->id);

return view('home', compact('localFiles', 's3Files'));

We also update our downloadFile method, specifying that we want to download and stream the file from S3 instead of the local filesystem. We just chain a disk('s3') call to both the Storage and FileVault facades.

This is what the HomeController.php file looks like:

<?php

namespace App\Http\Controllers;

use App\Jobs\EncryptFile;
use App\Jobs\MoveFileToS3;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;
use SoareCostin\FileVault\Facades\FileVault;

class HomeController extends Controller
{
    /**
     * Create a new controller instance.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware('auth');
    }

    /**
     * Show the application dashboard.
     *
     * @return \Illuminate\Contracts\Support\Renderable
     */
    public function index()
    {
        $localFiles = Storage::files('files/' . auth()->user()->id);
        $s3Files = Storage::disk('s3')->files('files/' . auth()->user()->id);

        return view('home', compact('localFiles', 's3Files'));
    }

    /**
     * Store a user-uploaded file
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
        if ($request->hasFile('userFile') && $request->file('userFile')->isValid()) {
            $filename = Storage::putFile('files/' . auth()->user()->id, $request->file('userFile'));

            // Check if we have a valid file uploaded
            if ($filename) {
                EncryptFile::withChain([
                    new MoveFileToS3($filename),
                ])->dispatch($filename);
            }
        }

        return redirect()->route('home')->with('message', 'Upload complete');
    }

    /**
     * Download a file
     *
     * @param  string  $filename
     * @return \Illuminate\Http\Response
     */
    public function downloadFile($filename)
    {
        // Basic validation to check if the file exists and is in the user directory
        if (!Storage::disk('s3')->has('files/' . auth()->user()->id . '/' . $filename)) {
            abort(404);
        }

        return response()->streamDownload(function () use ($filename) {
            FileVault::disk('s3')->streamDecrypt('files/' . auth()->user()->id . '/' . $filename);
        }, Str::replaceLast('.enc', '', $filename));
    }
}

Updating the View

The last thing we need to do is update the home.blade.php view file, so that we can display not only the user files that have been encrypted and stored in S3 but also the files that are being encrypted and uploaded to S3 at that moment.

Note: You can make this step much more engaging by using JavaScript to show a spinning icon for the files that are being encrypted and streamed to S3, and refreshing the table once the files have been uploaded. Because we want to keep this tutorial strictly to the point of deferring the encryption and S3 upload to a separate process, we'll stick to a basic solution that requires a manual refresh in order to see any updates to the queued jobs' status. (A small optional sketch after the view code below shows a no-JavaScript alternative.)

<h4>Your files</h4>
<ul class="list-group">
    @forelse ($s3Files as $file)
        <li class="list-group-item">
            <a href="{{ route('downloadFile', basename($file)) }}">
                {{ basename($file) }}
            </a>
        </li>
    @empty
        <li class="list-group-item">You have no files</li>
    @endforelse
</ul>

@if (!empty($localFiles))
    <hr />
    <h4>Uploading and encrypting...</h4>
    <ul class="list-group">
        @foreach ($localFiles as $file)
            <li class="list-group-item">
                {{ basename($file) }}
            </li>
        @endforeach
    </ul>
@endif
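
If you want something slightly nicer than a manual refresh without writing any JavaScript, one low-effort option (my addition, not part of the original app) is to have the page reload itself while local files are still pending:

@if (!empty($localFiles))
    {{-- Hypothetical convenience: reload the page every 5 seconds while jobs are still running --}}
    <meta http-equiv="refresh" content="5">
@endif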

Queue Configuration

If you haven't made any changes to the queue configuration, you are most likely using the synchronous driver (sync) that is set by default in Laravel. This driver executes jobs immediately and is designed specifically for local use. However, we want to see how deferring our two queued jobs will work in production, so we will configure the queues to work with the database driver.

In order to use the database queue driver, you will need a database table to hold the jobs. To generate a migration that creates this table, run the queue:table Artisan command. Once the migration has been created, you may migrate your database using the migrate command:

php artisan queue:table
php artisan migrate

The final step is updating the QUEUE_CONNECTION variable in your .env file to use the database driver:

QUEUE_CONNECTION=database

Running the Queue Worker

Next, we need to run the queue worker. Laravel includes a queue worker that will process new jobs as they are pushed onto the queue. You may run the worker using the queue:work Artisan command. You can specify the maximum number of times a job should be attempted using the --tries switch on the queue:work command:

php artisan queue:work --tries=3

Time to Test

We're now ready to test our changes. Once you upload a file, you should see that the file is immediately displayed in the "Uploading and encrypting..." section.

If you switch to the terminal where you started the queue worker, you should see that the jobs run in sequence. Once both jobs are completed, the file should be found in S3 and no longer in the local filesystem.

Refreshing the user dashboard after the jobs have finished should display the file in the "Your files" section, with a link to stream-download it from S3.

That's it!

You can find the entire Laravel app in this GitHub repo and the changes made above in this commit.

Source: https://betterprogramming.pub/how-to-encrypt-upload-large-files-to-amazon-s3-in-laravel-af88324a9aa
