I'm trying to support cancelling file uploads, and I would like to know the best practice for determining whether an upload is still cancellable. How can you tell when the file has finished uploading, as opposed to the server generating/returning a response? I understand this is possible by tracking upload progress in HTML5, but since I have to support IE9, I am running out of ideas.
The end result is that if you attempt to cancel an upload that is nearly complete and issue the abort request, you end up aborting only the response, while the file is happily sitting on the server.
I am using jQuery to submit the request and am cancelling via the abort() method. The browser console shows that the request was successfully aborted.
Am I missing something trivial?
PHP can now track upload progress; see "Session Upload Progress" in the manual.
Use this feature to write a short script, checkUpload.php, and poll it via AJAX to return the status to your IE9 page. The session entry it reads looks like this (a sketch of the script itself follows the example below):
<?php
$_SESSION["upload_progress_123"] = array(
    "start_time" => 1234567890,      // The request time
    "content_length" => 57343257,    // POST content length
    "bytes_processed" => 453489,     // Amount of bytes received and processed
    "done" => false,                 // true when the POST handler has finished, successfully or not
    "files" => array(
        0 => array(
            "field_name" => "file1", // Name of the <input/> field
            // The following 3 elements equal those in $_FILES
            "name" => "foo.avi",
            "tmp_name" => "/tmp/phpxxxxxx",
            "error" => 0,
            "done" => true,                // true when the POST handler has finished handling this file
            "start_time" => 1234567890,    // When this file started to be processed
            "bytes_processed" => 57343250, // Number of bytes received and processed for this file
        ),
        // Another file, not finished uploading, in the same request
        1 => array(
            "field_name" => "file2",
            "name" => "bar.avi",
            "tmp_name" => NULL,
            "error" => 0,
            "done" => false,
            "start_time" => 1234567899,
            "bytes_processed" => 54554,
        ),
    )
);
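A minimal sketch of checkUpload.php, assuming the default session.upload_progress ini settings and that the upload form posts a hidden PHP_SESSION_UPLOAD_PROGRESS field with the value "123" (that value, like the field name, is just the one from the manual's example):
<?php
// checkUpload.php -- polled via AJAX from the IE9 page
session_start();

// Key format: session.upload_progress.prefix + the value of the
// PHP_SESSION_UPLOAD_PROGRESS form field ("123" in this sketch)
$key = ini_get('session.upload_progress.prefix') . '123';

header('Content-Type: application/json');

if (isset($_SESSION[$key])) {
    $progress = $_SESSION[$key];
    echo json_encode(array(
        'done'    => $progress['done'],
        'total'   => $progress['content_length'],
        'current' => $progress['bytes_processed'],
    ));
} else {
    // No upload in progress (or it already finished and was cleaned up)
    echo json_encode(array('done' => true));
}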
PHP can also cancel an in-progress upload. From that same manual page:
It is also possible to cancel the currently in-progress file upload, by setting the
$_SESSION[$key]["cancel_upload"] key to TRUE. When uploading multiple files in the same
request, this will only cancel the currently in-progress file upload, and pending file
uploads, but will not remove successfully completed uploads. When an upload is cancelled like
this, the error key in $_FILES array will be set to UPLOAD_ERR_EXTENSION.
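Applied to the example above, a cancel endpoint can be as small as this (again assuming the "123" progress key from the earlier sketch):
<?php
// cancelUpload.php -- called when the user clicks "cancel"
session_start();

$key = ini_get('session.upload_progress.prefix') . '123';

if (isset($_SESSION[$key])) {
    // PHP aborts the in-progress (and pending) file uploads when it sees this flag
    $_SESSION[$key]['cancel_upload'] = true;
}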
The only thing not covered in the PHP documentation is how to get the total file size. There is a very good review of this process here: "PHP Master | Tracking Upload Progress".
Related
I want to upload large files to S3 through the REST API, using a PHP server to create signed URLs. In the PHP S3 client there's a command, "listParts", which returns an array of multipart uploads that I pass to "completeMultipartUpload" to join all the parts. Every part of the code works fine except "listParts": it sometimes returns null, so "completeMultipartUpload" does not execute. I can't figure out why it returns null only sometimes. Here, "sometimes" means: when I upload a file, it works well; when I refresh the webpage and re-upload the same file, it returns null; then I refresh and upload again, and it works.
$partsModel = $client->listParts(array(
    'Bucket'   => bucket(),
    'Key'      => $_REQUEST['sendBackData']['key'],
    'UploadId' => $_REQUEST['sendBackData']['uploadId'],
));
I am using S3 direct uploads along with a database to store the URLs of the files (and other data, like who uploaded them).
To allow direct upload to S3, I'm creating a presigned URL like this:
$s3 = App::make('aws')->createClient('s3', [
    'credentials' => [
        'key'    => 'AAAAAAAAAAAAAAA',
        'secret' => 'YYYYYYYYYYYYYYYYYYYY',
    ]
]);

$command = $s3->getCommand('PutObject', [
    '@use_accelerate_endpoint' => true,
    'Bucket' => 'remdev-experimental',
    'Key' => "newest newest.txt",
    'Metadata' => array(
        'foo' => "test",
    )
]);

return response()->json(((string) $s3->createPresignedRequest($command, '+1 minutes')->getUri()));
Now, after the client has finished uploading the file, I want my server to know about it, so I will require the client to send a request notifying me that the upload is finished. For this, I think the simplest (and also secure) way is to have the client send back the signed URL it was given.
Is there a way to parse the URL?
I am interested in getting the object key, and more importantly, I want to verify that the URL has not been tampered with (meaning the signature in the URL should match the rest of its contents). How can I do this with the PHP SDK?
The signed URL is just the file's URL with the signature information in the query string. A signed request for bucket remdev-experimental and file abc.txt looks like https://s3.amazonaws.com/remdev-experimental/abc.txt?X-Amz-Date=date&X-Amz-Expires=60&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Signature=signature&X-Amz-Credential=SOMEID/20160703/us-east-1/s3/aws4_request&X-Amz-SignedHeaders=Host&x-amz-security-token=some-long-token, so all you need to do is take the URL's path (/remdev-experimental/abc.txt) and keep everything after the second slash.
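For example, a minimal sketch using parse_url (the URL here is the illustrative one from above, truncated):
// Extract bucket and key from a path-style presigned URL
$url  = 'https://s3.amazonaws.com/remdev-experimental/abc.txt?X-Amz-Signature=...';
$path = parse_url($url, PHP_URL_PATH);               // "/remdev-experimental/abc.txt"
list($bucket, $key) = explode('/', ltrim($path, '/'), 2);

echo $bucket;             // "remdev-experimental"
echo rawurldecode($key);  // "abc.txt" (decode in case the key contains encoded characters)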
Also, you should be aware that you can have S3 redirect the browser to a URL using success_action_redirect in an HTTP POST policy.
Lastly, you can have S3 trigger a notification to your server (via SQS, SNS, or Lambda) whenever a file is uploaded.
After playing around a bit and uploading some small test files, I wanted to upload a bigger file, around 200 MB, but I always get a timeout exception. I then tried to upload a 30 MB file and the same thing happens.
I think the timeout is 30 seconds. Is it possible to tell the Glacier client to wait until the upload is done?
This is the code I use:
$glacier->uploadArchive(array(
    'vaultName' => $vaultName,
    'archiveDescription' => $desc,
    'body' => $body
));
I have tested with other files and the same thing happens. Then I tried with a small 4 MB file and the operation succeeded, so I thought of splitting the file and uploading the parts one by one, but then again, around the third part, a timeout exception comes up.
I also tried a multipart upload with the following code:
$glacier = GlacierClient::factory(array(
    'key'    => 'key',
    'secret' => 'secret',
    'region' => Region::US_WEST_2
));

$multiupload = $glacier->initiateMultipartUpload(array(
    'vaultName' => 'vaultName',
    'partSize'  => '4194304'
));

// An array for the suffixes of the tar file
foreach ($suffixes as $suffix) {
    $contents = file_get_contents('file.tar.gz' . $suffix);
    $glacier->uploadMultipartPart(array(
        'vaultName' => 'vaultName',
        'uploadId'  => $multiupload->get('uploadId'),
        'body'      => $contents
    ));
}

$result = $glacier->completeMultipartUpload(array(
    'vaultName' => 'vaultName',
    'uploadId'  => $multiupload->get('uploadId'),
));

echo $result->get('archiveId');
It's missing the range parameter. I don't think I fully understand how this multipart upload works, but I suspect I'll get the same timeout exception anyway. So my question is, as I said before:
Is it possible to tell the Glacier client to wait until the upload is done?
The timeout sounds like a script timeout, as Jimzie said.
As for using the Glacier client, you should check out this blog post from the official AWS PHP Developer Blog, which shows how to do multipart uploads to Glacier using the UploadPartGenerator object. If you are doing the part uploads in different requests/processes, keep in mind that the UploadPartGenerator class can be serialized.
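Roughly, the approach from that post looks like this (a condensed sketch against SDK v2, with $glacier, $vaultName, and $filename assumed to be set up as in the question; see the post itself for the exact API):
<?php
use Aws\Glacier\Model\MultipartUpload\UploadPartGenerator;

// Compute ranges and tree-hash checksums for every part up front
$partSize = 4 * 1024 * 1024; // 4 MB
$archive  = fopen($filename, 'r');
$parts    = UploadPartGenerator::factory($archive, $partSize);

$uploadId = $glacier->initiateMultipartUpload(array(
    'vaultName' => $vaultName,
    'partSize'  => $partSize,
))->get('uploadId');

// Upload each part with the range/checksum data from the generator
foreach ($parts as $part) {
    fseek($archive, $part->getOffset());
    $glacier->uploadMultipartPart(array(
        'vaultName'     => $vaultName,
        'uploadId'      => $uploadId,
        'body'          => fread($archive, $part->getSize()),
        'range'         => $part->getFormattedRange(),
        'checksum'      => $part->getChecksum(),
        'contentSHA256' => $part->getContentHash(),
    ));
}

// The generator also aggregates the data needed to complete the upload
$result = $glacier->completeMultipartUpload(array(
    'vaultName'   => $vaultName,
    'uploadId'    => $uploadId,
    'archiveSize' => $parts->getArchiveSize(),
    'checksum'    => $parts->getRootChecksum(),
));
fclose($archive);

echo $result->get('archiveId');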
This sounds suspiciously like a script timeout. Try
set_time_limit(120);
just inside the foreach loop. This will give you a two-minute PHP sanity timer for each of your multipart files.
I've written a WordPress plugin that interacts with Salesforce via the REST API. It successfully gets an instance URL and an authentication token using a username/password.
I'm able to submit queries and create objects using wp_remote_post with GET and POST respectively.
However, when creating objects, the object is successfully created in the Salesforce instance, yet I get the following in response to my POST:
{"message":"HTTP Method 'POST' not allowed. Allowed are HEAD,GET,PATCH,DELETE","errorCode":"METHOD_NOT_ALLOWED"}
Using the same JSON body content from these requests, I am able to submit and create via the Salesforce Workbench with no problems at all. I get a proper response that looks like this:
{
  "id" : "003E000000OubjkIAB",
  "success" : true,
  "errors" : [ ]
}
Is there something in the headers that I'm sending that Salesforce only partially disagrees with? Here are the other arguments that get sent as a result of using wp_remote_post: http://codex.wordpress.org/HTTP_API#Other_Arguments
Here's the PHP code that's calling it:
$connInfo['access_token'] = get_transient('npsf_access_token');
$connInfo['instance_url'] = get_transient('npsf_instance_url');

$url = $connInfo['instance_url'] . $service;

$sfResponse = wp_remote_post($url, array(
    'method'      => $method,
    'timeout'     => 5,
    'redirection' => 5,
    'httpversion' => 1.0,
    'blocking'    => true,
    'headers'     => array(
        "Authorization" => "OAuth " . $connInfo['access_token'],
        "Content-type"  => "application/json"
    ),
    'body'        => $content,
    'cookies'     => array()
));
The $content is being encoded via json_encode before it gets to this point.
Update:
It is specific to one of the extra cURL options being sent by the WP_Http_Curl class. I haven't yet narrowed down which one Salesforce has a problem with.
The solution is to disable redirection in the request. You have it set to 5 (the default); it needs to be 0 for this to work.
The initial request works but Salesforce sends a location header as a part of the response (the URL of the newly created object). WordPress thinks it is being told that the URL moved and that it should try again at this new URL. The response you're seeing is the result of that second request to the actual object you just created. That URL doesn't accept POST requests apparently.
It's a bit odd for Salesforce to be sending such a header, but there's also some discussion going on on the WordPress side that WordPress shouldn't follow location headers for non-301/302 responses which would solve this.
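In other words, the call from the question only needs one argument changed:
$sfResponse = wp_remote_post($url, array(
    'method'      => $method,
    'timeout'     => 5,
    'redirection' => 0, // don't follow the Location header Salesforce returns
    'httpversion' => 1.0,
    'blocking'    => true,
    'headers'     => array(
        "Authorization" => "OAuth " . $connInfo['access_token'],
        "Content-type"  => "application/json"
    ),
    'body'        => $content,
    'cookies'     => array()
));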
Thanks for posting this, by the way. Your update made me start debugging WP_Http_Curl, which made me realize it was actually making a second HTTP request.
I'm using cURL to post data to another server. Between each post, I use a function to fetch the hidden fields like "__VIEWSTATE". It worked like a charm before, but they updated their website, so I rewrote my code to use the new field names. On the last step, however, I get the error:
"Validation of viewstate MAC failed."
If I do the same steps in a web browser, it works as it should. I used an add-on to capture the post data the browser was sending and compared it with what my script sends, and it looks the same.
My knowledge of ASP.NET is minimal, and all the info I can find here about this error recommends changes on the ASP.NET server. So I hope someone here can guide me to find out why the browser has a 100% success rate while cURL has a 0% success rate on that page, even though the same functions worked 100% with cURL on the previous pages.
Post data the browser was sending:
__EVENTTARGET=
__EVENTARGUMENT=
__VIEWSTATE=%2FwEPDwUKLTk2MDAxNjU3MA9kFgJmD2QWAgIDD2QWDgIFD2QWAgIBDw8WAh4EVGV4dAUfRsO2cmV0YWdzZ3J1cHBlbiBpIEfDtnRlYm9yZyBBQmRkAgcPDxYEHwAFH0bDtnJldGFnc2dydXBwZW4gaSBHw7Z0ZWJvcmcgQUIeC05hdmlnYXRlVXJsBR1%2BL0NsaWVudENhcmQuYXNweD9DbGllbnRJRD05OGRkAgkPDxYCHgdWaXNpYmxlZ2RkAgsPDxYEHwAFI0JZR0cgJiBFTkVSR0lTRVJWSUNFIFPDlkRFUlTDllJOIEFCHwEFNH4vQ3VzdG9tZXJPdmVydmlldy5hc3B4P0NsaWVudElEPTk4JkN1c3RvbWVySUQ9MjY0NDBkZAINDw8WAh8CZ2RkAg8PDxYCHwAFE1JlZGlnZXJhIGFudsOkbmRhcmVkZAIVDw8WAh8CaGQWAgIDDxBkZBYBZmQYAgUeX19Db250cm9sc1JlcXVpcmVQb3N0QmFja0tleV9fFg0FFmN0bDAwJGJvZHkkY2hrSXNBY3RpdmUFHmN0bDAwJGJvZHkkY2hrSGFzU3VwZXJVc2VyUGVybQUfY3RsMDAkYm9keSRjaGtIYXNTdGF0aXN0aWNzUGVybQUkY3RsMDAkYm9keSRjaGtIYXNBbm51YWxSZXBvcnRTZXJ2aWNlBTBjdGwwMCRib2R5JGNoa0hhc0NvcnBvcmF0aW9uQ2hhcnRlclJlcG9ydFNlcnZpY2UFN2N0bDAwJGJvZHkkY2hrSGFzQ2VydGlmaWNhdGVPZlJlZ2lzdHJhdGlvblJlcG9ydFNlcnZpY2UFH2N0bDAwJGJvZHkkY2hrSGFzTW9uaXRvclNlcnZpY2UFK2N0bDAwJGJvZHkkY2hrSGFzRGlnaXRhbFNwYXJya2F0YWxvZ1NlcnZpY2UFJmN0bDAwJGJvZHkkY2hrSGFzUGVyc29ua29udHJvbGxTZXJ2aWNlBSVjdGwwMCRib2R5JGNoa0hhc0NvbXBhbnlSZXBvcnRTZXJ2aWNlBSRjdGwwMCRib2R5JGNoa0hhc1BlcnNvblJlcG9ydFNlcnZpY2UFHWN0bDAwJGJvZHkkY2J4UmVwb3J0c0NvbXBhbnkzBRxjdGwwMCRib2R5JGNieFJlcG9ydHNQZXJzb24zBRBjdGwwMCRtbHRDb250ZW50Dw9kZmR8z6SDM7weB%2BgWrg%2B8u3EnNPkQGA%3D%3D
__EVENTVALIDATION=%2FwEWFwKGsKOJCgK70ZWTDQLr%2BJWFDQKo1a2oCwKplfT%2BCgLRieqTAwKt6qHvAQK9rKu9AgKh%2F5ODDQKqtpTtDQLvv7CxBALa4vDGBQKCuafwDwKP1ZOjBgKsqdXxCgL6hbmQBwK%2BjaGZDQL%2FqY7cBALml%2FqcBgLYg53pDwL108DhBQLfzPnCAQLBr6dM9cK5UIsGFZ5ocJchTM8CHTFigfk%3D
ctl00%24body%24cmdSave=Spara
ctl00%24body%24txtName=BYGG+%26+ENERGISERVICE+S%C3%96DERT%C3%96RN+AB
ctl00%24body%24txtUserName=5566960836
ctl00%24body%24txtEmail=anonym%40telia.se
ctl00%24body%24txtDepartment=
ctl00%24body%24chkIsActive=on
ctl00%24body%24chkHasStatisticsPerm=on
ctl00%24body%24txtLoginName=5566960836
ctl00%24body%24txtPassword=stackoverflow
ctl00%24body%24chkHasAnnualReportService=on
ctl00%24body%24chkHasCorporationCharterReportService=on
ctl00%24body%24chkHasCertificateOfRegistrationReportService=on
ctl00%24body%24chkHasMonitorService=on
ctl00%24body%24chkHasDigitalSparrkatalogService=on
ctl00%24body%24chkHasPersonkontrollService=on
ctl00%24body%24chkHasCompanyReportService=on
ctl00%24body%24chkHasPersonReportService=on
ctl00%24body%24cbxReportsCompany3=on
ctl00%24body%24cbxReportsPerson3=on
ctl00%24body%24hidNewUser=1
Post data my script is sending:
Array
(
[__EVENTTARGET] =>
[__EVENTARGUMENT] =>
[__VIEWSTATE] => /wEPDwUKLTk2MDAxNjU3MA9kFgJmD2QWAgIDD2QWDgIFD2QWAgIBDw8WAh4EVGV4dAUfRsO2cmV0YWdzZ3J1cHBlbiBpIEfDtnRlYm9yZyBBQmRkAgcPDxYEHwAFH0bDtnJldGFnc2dydXBwZW4gaSBHw7Z0ZWJvcmcgQUIeC05hdmlnYXRlVXJsBR1+L0NsaWVudENhcmQuYXNweD9DbGllbnRJRD05OGRkAgkPDxYCHgdWaXNpYmxlZ2RkAgsPDxYEHwAFI0JZR0cgJiBFTkVSR0lTRVJWSUNFIFPDlkRFUlTDllJOIEFCHwEFNH4vQ3VzdG9tZXJPdmVydmlldy5hc3B4P0NsaWVudElEPTk4JkN1c3RvbWVySUQ9MjY0NDBkZAINDw8WAh8CZ2RkAg8PDxYCHwAFE1JlZGlnZXJhIGFudsOkbmRhcmVkZAIVDw8WAh8CaGQWAgIDDxBkZBYBZmQYAgUeX19Db250cm9sc1JlcXVpcmVQb3N0QmFja0tleV9fFg0FFmN0bDAwJGJvZHkkY2hrSXNBY3RpdmUFHmN0bDAwJGJvZHkkY2hrSGFzU3VwZXJVc2VyUGVybQUfY3RsMDAkYm9keSRjaGtIYXNTdGF0aXN0aWNzUGVybQUkY3RsMDAkYm9keSRjaGtIYXNBbm51YWxSZXBvcnRTZXJ2aWNlBTBjdGwwMCRib2R5JGNoa0hhc0NvcnBvcmF0aW9uQ2hhcnRlclJlcG9ydFNlcnZpY2UFN2N0bDAwJGJvZHkkY2hrSGFzQ2VydGlmaWNhdGVPZlJlZ2lzdHJhdGlvblJlcG9ydFNlcnZpY2UFH2N0bDAwJGJvZHkkY2hrSGFzTW9uaXRvclNlcnZpY2UFK2N0bDAwJGJvZHkkY2hrSGFzRGlnaXRhbFNwYXJya2F0YWxvZ1NlcnZpY2UFJmN0bDAwJGJvZHkkY2hrSGFzUGVyc29ua29udHJvbGxTZXJ2aWNlBSVjdGwwMCRib2R5JGNoa0hhc0NvbXBhbnlSZXBvcnRTZXJ2aWNlBSRjdGwwMCRib2R5JGNoa0hhc1BlcnNvblJlcG9ydFNlcnZpY2UFHWN0bDAwJGJvZHkkY2J4UmVwb3J0c0NvbXBhbnkzBRxjdGwwMCRib2R5JGNieFJlcG9ydHNQZXJzb24zBRBjdGwwMCRtbHRDb250ZW50Dw9kZmR8z6SDM7weB+gWrg+8u3EnNPkQGA==
[__EVENTVALIDATION] => /wEWFwKGsKOJCgK70ZWTDQLr+JWFDQKo1a2oCwKplfT+CgLRieqTAwKt6qHvAQK9rKu9AgKh/5ODDQKqtpTtDQLvv7CxBALa4vDGBQKCuafwDwKP1ZOjBgKsqdXxCgL6hbmQBwK+jaGZDQL/qY7cBALml/qcBgLYg53pDwL108DhBQLfzPnCAQLBr6dM9cK5UIsGFZ5ocJchTM8CHTFigfk=
[ctl00$body$hidNewUser] => 1
[ctl00$body$cmdSave] => Spara
[ctl00$body$txtName] => BYGG & ENERGISERVICE SÖDERTÖRN AB
[ctl00$body$txtUserName] => 5566960836
[ctl00$body$txtEmail] => anonym#telia.se
[ctl00$body$txtDepartment] =>
[ctl00$body$chkIsActive] => 1
[ctl00$body$chkHasStatisticsPerm] => 1
[ctl00$body$txtLoginName] => 5566960836
[ctl00$body$txtPassword] => stackoverflow
[ctl00$body$chkHasAnnualReportService] => 1
[ctl00$body$chkHasCorporationCharterReportService] => 1
[ctl00$body$chkHasCertificateOfRegistrationReportService] => 1
[ctl00$body$chkHasMonitorService] => 1
[ctl00$body$chkHasDigitalSparrkatalogService] => 1
[ctl00$body$chkHasPersonkontrollService] => 1
[ctl00$body$chkHasCompanyReportService] => 1
[ctl00$body$chkHasPersonReportService] => 1
[ctl00$body$cbxReportsCompany3] => 1
[ctl00$body$cbxReportsPerson3] => 1
)
The question:
What client-side differences can trigger the "Validation of viewstate MAC failed" error?
(Note: the post data above has been manipulated in two ways: I replaced the password with "stackoverflow" and replaced the user part of the email address with "anonym".)
Check to see whether some JavaScript is changing the values before they're posted, and to be on the safe side, set the referrer page too.
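Setting the referrer with cURL is a one-liner; a minimal sketch, assuming $ch is the handle used for the post and $previousUrl is the page the form was fetched from (both names illustrative):
// Send a Referer header so the request looks like it came from the form page
curl_setopt($ch, CURLOPT_REFERER, $previousUrl);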
It turned out I was using the wrong URL: I was sending the right post data from the start, just to the wrong place.
So simple, and yet so hard to find when you're looking in the wrong place.