I'm getting an intermittent crash while uploading a file to S3 using the Laravel file storage system. The crash is not reproducible in my local/dev environment, and in production it happens only sporadically. The files still end up on S3. The issue occurs for any file type (pdf, png, jpg), and file sizes are usually 1 MB to 3 MB.
Aws\Exception\CouldNotCreateChecksumException
A sha256 checksum could not be calculated for the provided upload body, because it was not seekable. To prevent this error you can either 1) include the ContentMD5 or ContentSHA256 parameters with your request, 2) use a seekable stream for the body, or 3) wrap the non-seekable stream in a GuzzleHttp\Psr7\CachingStream object. You should be careful though and remember that the CachingStream utilizes PHP temp streams. This means that the stream will be temporarily stored on the local disk.
Crashed in non-app: /vendor/aws/aws-sdk-php/src/Signature/SignatureV4.php in Aws\Signature\SignatureV4::getPayload
/app/Http/Controllers/ApiController.php in App\Http\Controllers\ApiController::__invoke at line 432
$filename = $request->file('file')->getClientOriginalName();
$user_file_id = $request->input('file_id');
$path = Storage::putFileAs(
    'fileo',
    $request->file('file'),
    $user_file_id
);
return $path;
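For reference, my understanding of the CachingStream wrapping suggested as option 3 in the error message is something like the following untested sketch, which talks to the S3 client directly; the bucket name and region are placeholders:
use Aws\S3\S3Client;
use GuzzleHttp\Psr7\CachingStream;
use GuzzleHttp\Psr7\Utils;

$client = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1', // placeholder region
]);

// Wrap the upload body so the SDK can rewind it; CachingStream buffers
// a non-seekable stream in a php://temp stream for that purpose.
$body = new CachingStream(
    Utils::streamFor(fopen($request->file('file')->getRealPath(), 'rb'))
);

$client->putObject([
    'Bucket' => 'my-bucket', // placeholder bucket
    'Key'    => 'fileo/' . $user_file_id,
    'Body'   => $body,
]);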
I had the same error message, but files were not being saved to S3 - so it may be a different problem.
I followed the StackOverflow answer suggesting updating php.ini to increase the upload limits, and the error stopped.
I had the same issue with Laravel and MinIO object storage. The problem came from my /etc/php.ini configuration; I had messed up some values. Just make sure that you did not change these, or if you did, make sure they are correct:
upload_max_filesize = 1024M
max_file_uploads = 25 ; for example
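To double-check which values the running PHP actually picked up (PHP-FPM and the CLI can read different ini files), a quick script like this can help:
<?php
// Print the effective limits as the current PHP process sees them.
foreach (['upload_max_filesize', 'post_max_size', 'max_file_uploads', 'memory_limit'] as $key) {
    echo $key, ' = ', ini_get($key), PHP_EOL;
}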
Related
I'm trying to upload a file of around 244 KB via a POST request with PHP, but it causes a 403 Forbidden error. When I upload a small file (e.g. 3 B) it works fine.
I've made sure my php.ini is configured to support bigger files, as verified in the cPanel screenshot (not reproduced here).
My PHP script is the following:
<?php
$filename = $_GET['filename'];
$fileData = file_get_contents('php://input');
file_put_contents($filename, $fileData);
Error handling and such removed for clarity.
Adding a user agent to the POST request does not help.
What could be causing larger files to be rejected?
When I upload an image to compress it using Intervention, it sometimes returns a 500 Internal Server Error.
The image size is less than 1 MB.
This error usually occurs after I run php artisan serve and hit the compress API for the first time.
public function compressPhoto(Request $request)
{
    $photo = $request->photo;
    $file = Image::make($photo);

    return 'success';
}
In a Laravel application, it is also possible to pass an uploaded file directly to the make() method. You must install the package properly before using it: Intervention Image.
Image::make(Input::file('photo'))->save('foo.jpg');
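Applied to the compressPhoto() method from the question, a minimal sketch could look like this (this assumes Intervention Image v2 with its Laravel facade; the destination directory and the quality value of 60 are arbitrary choices, and the directory must already exist):
use Illuminate\Http\Request;
use Intervention\Image\Facades\Image;

public function compressPhoto(Request $request)
{
    $photo = $request->file('photo');

    // Re-encode the upload at a lower JPEG quality to shrink it.
    Image::make($photo)
        ->save(storage_path('app/compressed/' . $photo->hashName()), 60);

    return 'success';
}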
If your file size is more than 2 MB, then you have to increase upload_max_filesize in your C:\xampp\php\php.ini file.
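For example (the values below are just illustrations; post_max_size should be at least as large as upload_max_filesize, and Apache needs a restart after editing php.ini):
upload_max_filesize = 10M
post_max_size = 12M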
I know this question might no longer be relevant, but one possible solution is to increase the memory limit in your script:
ini_set('memory_limit','512M');
or set
memory_limit = 512M
in your php.ini file.
While implementing a feature for uploading files of (potentially) unlimited size using chunked uploads and Web Workers, I stumbled upon a rather strange problem:
Whenever I attempt to write to a file bigger than roughly 128-134 MB using fwrite(), an internal server error is raised and script execution stops. The problem can be reduced to this (hopefully self-explanatory) test case:
$readHandle = fopen("smallFile", "r"); // ~ 2 MB
$writeHandle = fopen("bigFile", "a"); // ~ 134 MB
// First possible way of writing data to the file:
// If the file size of bigFile is at approx. 134 MB, this
// will result in an HTTP 500 Error.
while (!feof($readHandle)) {
    fwrite($writeHandle, fread($readHandle, 1024 * 1024));
}
// Second way of reproducing the problem:
// Here, the data is just NOT written to the destination
// file, but the script itself doesn't crash.
// stream_copy_to_stream($readHandle, $writeHandle);
fclose($readHandle);
fclose($writeHandle);
When using stream_copy_to_stream, the script doesn't crash, but the data is just not written to the destination file.
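(For completeness: the simplified test case above ignores return values; checking them would look roughly like this, reusing the same handles as in the snippet.)
// Report how many bytes were actually copied, plus the last PHP error, if the copy came up short.
$written = stream_copy_to_stream($readHandle, $writeHandle);
if ($written === false || $written < filesize("smallFile")) {
    error_log('Short write: ' . var_export($written, true)
        . ' / last error: ' . var_export(error_get_last(), true));
}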
Having contacted the support team of my (shared) server host, I got the answer that this limit had something to do with the PHP configuration variables post_max_size and upload_max_size. However, the configured values (96 MB for both) do not match the measured maximum file size (~134 MB) at which files are still writable, and the problem does not occur when I apply the same values to my local test server.
Also, I could not find any information about a potential correlation between PHP_MEMORY_LIMIT (the hosting plan I am using states 512 MB) and the maximum writable file size of 128-134 MB (of which 512 MB is a multiple).
Does anybody know if
the said configuration values really correspond to the problem at all?
there is any other way of continuing to append data to such a file?
PS: This SO thread might be based on the same problem, but here the question(s) are different.
Some files have to be uploaded to the system, mostly JPG files of around 5-10 MB.
However, users usually have very slow upload speeds, so the request exceeds max_execution_time most of the time.
I don't have permission to modify max_execution_time.
Is there anything I can do on this case?
Try setting it using ini_set():
$max_execution_time = 1000; // or whatever value you need
ini_set('max_execution_time', $max_execution_time);
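If the host blocks ini_set() for this directive, set_time_limit() (where it is not disabled) is the equivalent call:
set_time_limit(0); // 0 means no execution time limit; the timer restarts from this call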
I have some code that copies a file to a temporary location where it is later included in a zip file.
I already have the source files stored in a local cache directory, and I have also stored the SHA1 hash of the original files. The files in question are .png images, ranging from a few KB to around 500 KB.
My problem is that at high server loads, the copy intermittently fails. Upon examining my logs, I see that even though a healthy file exists in the source location, the destination contains a file with zero bytes.
So, to try and figure out what was going on and to increase reliability, I implemented a SHA1 check of the destination file, so that if it fails, I can retry the copy using the shell.
99.9% of the time, the files copy with no issue. Occasionally, the first copy fails but the second attempt succeeds. In a small number of cases (around 1 in 2,500, and always at high server load), both copies fail. In nearly all of these cases the SHA1 of the destination file is da39a3ee5e6b4b0d3255bfef95601890afd80709, which is consistent with an empty file.
On every occasion, the script continues and the created zip includes an empty image file. There is nothing in the Nginx, PHP or PHP-FPM error logs that indicates any problem. The script will copy the same file successfully when retried.
My stack is Debian Squeeze with the .deb PHP 5.4/PHP 5.4 FPM packages and Nginx 1.2.6 on an Amazon EBS backed AMI. The file system is XFS and I am not using APC or other caching. The problem is consistent and replicable at server loads >500 hits per second.
I cannot find any documentation of known issues that would explain this behaviour. Can anyone provide any insight into what may be causing this issue, or provide suggestions on how I can more reliably copy an image file to a temporary location for zipping?
For reference, here is an extract of the code used to copy / recopy the files.
$copy = copy($cacheFile, $outputFile);

if ($copy && file_exists($outputFile) && sha1_file($outputFile) !== $storedHash) {
    // Custom function to log debug messages
    dbug(array($cacheFile, sha1_file($cacheFile),
        $storedHash, $outputFile,
        file_exists($outputFile), filesize($outputFile)),
        'Corrupt Image File Copy from Cache 1 (native)');

    // Try with exec
    exec("cp " . $cacheFile . " " . $outputFile);

    if (file_exists($outputFile) && sha1_file($outputFile) !== $storedHash) {
        dbug(array($cacheFile, sha1_file($cacheFile),
            $storedHash, $outputFile,
            file_exists($outputFile), filesize($outputFile)),
            'Corrupt Image File Copy from Cache 2 (shell exec)');
    }
}