I've been tasked with converting a payment gateway for Magento from ASP to PHP. The ASP version belongs to an outdated site and a new one is required. Looking at the ASP code, I can do most of it easily, but there is one thing that gets me.
Magento uses Varien_Http_Adapter_Curl to send XML to the payment gateway, and while I understand how the .asp file gets the XML data (using Request.InputStream), I can't seem to duplicate that in a PHP file. This is as close as I can get:
function Page_Load()
{
    $cUrl = curl_init();
    $dump = curl_exec($cUrl);
    $file = fopen("Gateway.txt", "w");
    echo fwrite($file, var_export($dump, true));
    fclose($file);
    var_dump($dump);
}
Page_Load();
I changed the payment gateway URL in the Magento backend to my PHP file and went through the checkout; it creates the .txt file, but all it contains is false.
So how do I receive the output from Magento in my Page_Load function? At the moment I'm just writing it to a file to confirm that I am getting a response.
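For reference, the closest PHP counterpart to ASP's Request.InputStream is the php://input stream; a minimal sketch (untested against the gateway, file name reused from the snippet above):
function Page_Load()
{
    // read the raw request body that Magento POSTs to this script
    $xml = file_get_contents('php://input');
    // append it to a file just to confirm something arrived
    file_put_contents('Gateway.txt', var_export($xml, true), FILE_APPEND);
    var_dump($xml);
}
Page_Load();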
UPDATE: I have changed the code to this:
function Page_Load()
{
    $cUrl = curl_init();
    curl_setopt($cUrl, CURLOPT_URL, "http://my.site.local/Current-Build/gateway/Gateway.php");
    curl_setopt($cUrl, CURLOPT_RETURNTRANSFER, true);
    $dump = curl_exec($cUrl);
    if (!$dump)
    {
        $dump = curl_error($cUrl);
    }
    $file = fopen("Gateway.txt", "a");
    echo fwrite($file, var_export($dump, true));
    fclose($file);
    curl_close($cUrl);
    var_dump($dump);
}
Page_Load();
The site is on a WAMP server that has been put online so anyone in my development team can access it, and the script lives at the same path that is specified in Magento under the payment method's gateway URL (which used to point at the old .aspx file).
When I then go through the checkout, Magento gives me the normal error because it's not getting a response, but my output file isn't even being created. When I try to go back to Magento it just sits there loading, and I have to restart WAMP Server a couple of times because it sits on the orange logo for ages.
Until today, I was able to read daily exchange rates from the Turkish Central Bank XML URL. However, today customers informed us that they cannot make online transactions on our website. Then I checked and realized that I cannot read the contents of the XML file. I am using PHP for that. Here is the code:
$tcmb = simplexml_load_file("https://www.tcmb.gov.tr/kurlar/today.xml");
if (false === $tcmb) {
    echo "Failed loading XML\n";
    foreach (libxml_get_errors() as $error) {
        echo "\t", $error->message;
    }
}
else {
    $euro_satis = $tcmb->Currency[3]->BanknoteSelling;
}
This returned a "failed to load external entity" error.
Then I tried cURL, to check whether the file is blocked from external access:
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL,"https://www.tcmb.gov.tr/kurlar/today.xml");
curl_setopt($ch,CURLOPT_RETURNTRANSFER,true);
$output = curl_exec($ch);
curl_close($ch);
$tcmb = simplexml_load_string($output);
$euro_satis = $tcmb->Currency[3]->BanknoteSelling;
This returned null.
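To narrow this down on the production server, it may help to inspect the cURL error and HTTP status code, and to try sending a User-Agent header (some servers reject requests without one); a sketch with the same URL:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://www.tcmb.gov.tr/kurlar/today.xml");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// arbitrary User-Agent, just to rule out filtering of "bare" requests
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; rate-reader)');
$output = curl_exec($ch);
if ($output === false) {
    // TLS handshake failures, DNS problems or refused connections show up here
    echo 'cURL error: ' . curl_error($ch) . "\n";
}
echo 'HTTP status: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
curl_close($ch);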
Then I tried both pieces of code locally with XAMPP (Apache server) and both worked fine.
I also tried to download the XML file from the URL. Again, it worked locally but not on the production server. Everything was working fine until today.
What could be wrong?
We recently started our first TYPO3 10 project and are currently struggling with a custom import script that moves data to Algolia. Basically, everything works fine, but there is an issue with FAL images, specifically when they need to be processed.
From the logs I could find something called DeferredBackendImageProcessor, but the docs do not mention it, or I am not looking for the right thing; I'm not sure.
Apparently, images within the backend environment are not just processed anymore. There is something called "processingUrl" which has to be called once for the image to be processed.
I tried calling that URL with cURL, but it does not work. The thing is, when I open that "processingUrl" in a browser it has no effect - but if I open the link in a browser where I am logged into the TYPO3 backend, then the image is processed.
I'm kind of lost here, as I need the images to be processed within the import script that runs via the scheduler from the backend (manual, not via cron).
This is the function where the problem occurs; the cURL part has no effect here, sadly:
protected function processImage($image, $imageProcessingConfiguration)
{
    if ($image) {
        $scalingOptions = array(
            'width' => 170
        );
        $result = $this->contentObject->getImgResource('fileadmin/'.$image, $scalingOptions);
        if (isset($result[3]) && $result[3]) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $result[3]);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $output = curl_exec($ch);
            curl_close($ch);
            return '/fileadmin'.$result['processedFile']->getIdentifier();
        }
    }
    return '';
}
$result[3] is the processing URL. Example of the URL:
domain.com/typo3/index.php?route=%2Fimage%2Fprocess&token=6cbf8275c13623a0d90f15165b9ea1672fe5ad74&id=141
So my question is, how can I process the image from that import script?
I am not sure if there is a more elegant solution but you could disable the deferred processing during your jobs:
$processorConfiguration = $GLOBALS['TYPO3_CONF_VARS']['SYS']['fal']['processors'];
unset($GLOBALS['TYPO3_CONF_VARS']['SYS']['fal']['processors']['DeferredBackendImageProcessor']);
// ... run the image processing here; the LocalImageProcessor will be used
$GLOBALS['TYPO3_CONF_VARS']['SYS']['fal']['processors'] = $processorConfiguration; // restore afterwards
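For example, in the import script this could wrap the existing processImage() call (a sketch; it assumes the original configuration is restored once the images have been generated):
// temporarily fall back to the LocalImageProcessor for this import run
$processorConfiguration = $GLOBALS['TYPO3_CONF_VARS']['SYS']['fal']['processors'];
unset($GLOBALS['TYPO3_CONF_VARS']['SYS']['fal']['processors']['DeferredBackendImageProcessor']);

$processedPath = $this->processImage($image, $imageProcessingConfiguration);

// put the original processor configuration back
$GLOBALS['TYPO3_CONF_VARS']['SYS']['fal']['processors'] = $processorConfiguration;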
References:
https://github.com/TYPO3/TYPO3.CMS/blob/10.4/typo3/sysext/core/Classes/Resource/Processing/ProcessorRegistry.php
https://github.com/TYPO3/TYPO3.CMS/blob/10.4/typo3/sysext/core/Configuration/DefaultConfiguration.php#L284
I am trying to integrate Amazon CloudSearch into SilverStripe. What I want is that, when pages are published, a cURL request sends the data about the page as a JSON string to the search cloud.
I am using http://docs.aws.amazon.com/cloudsearch/latest/developerguide/uploading-data.html#uploading-data-api as a reference.
Every time I try to upload it returns me a 403. I have allowed the IP address in the access policies for the search domain as well.
I am using this as a code reference: https://github.com/markwilson/AwsCloudSearchPhp
I think the problem is that AWS is not authenticating correctly. How do I authenticate this correctly?
If you are getting the following error
403 Forbidden, Request forbidden by administrative rules.
and if you are sure you have the appropriate rules in effect, I would check the API URL you are using. Make sure you are using the correct endpoint. If you are doing a batch upload, the endpoint should look like the one below:
your-search-doc-endpoint/2013-01-01/documents/batch
Notice the 2013-01-01 segment: it is a required part of the URL and is the API version you will be using. You cannot do the following, even though it might seem to make sense:
your-search-doc-endpoint/documents/batch <- Won't work
To search you would need to hit the following api
your-search-endpoint/2013-01-01/search?your-search-params
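Put as a PHP sketch, constructing the batch URL would look something like this (the endpoint value is just a placeholder):
// document endpoint as shown in the CloudSearch console (placeholder)
$docEndpoint = 'doc-yourdomain-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com';
// the API version segment (2013-01-01) is mandatory
$batchUrl = 'https://' . $docEndpoint . '/2013-01-01/documents/batch';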
After much searching and trial and error, I was able to put together a small code block, assembled from pieces of code found all over, to upload a "file" to AWS CloudSearch using cURL and PHP.
The single most important thing is to make sure your data is prepared correctly to be sent in JSON format.
Note: for CloudSearch you're not uploading a file, you're posting a stream of JSON data. That is why many of us have problems uploading the data.
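For reference, the batch that eventually gets POSTed is just a JSON array of add/delete operations, roughly like this (field names are only examples):
[
  {"type": "add", "id": "1", "fields": {"title": "First page", "content": "Some text"}},
  {"type": "add", "id": "2", "fields": {"title": "Second page", "content": "More text"}},
  {"type": "delete", "id": "3"}
]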
In my case I simply wanted to upload data to my search engine on CloudSearch. It seems simple, and it is, but example code for it is hard to find: most people tell you to go to the documentation, which usually has examples, but for the AWS CLI. The PHP SDK is a learning curve of its own; instead of keeping things simple you end up doing twenty steps for one task, and on top of that you're required to pull in libraries that are just wrappers around native PHP functions, which sometimes makes things more complicated instead of simpler.
So, back to how I did it: first I pull the data from my database as an array, serialize it, and save it to a file.
$rows = $database_data;
foreach ($rows as $key => $row) {
    $data['type'] = 'add';
    $data['id'] = $row->id;
    $data['fields']['title'] = $row->title;
    $data['fields']['content'] = $row->content;
    $data2[] = $data;
}
// now save your data to a file and make sure
// to serialize() it
$fp = fopen($path_to_file, 'w');
flock($fp, LOCK_EX);
fwrite($fp, serialize($data2));
flock($fp, LOCK_UN);
fclose($fp);
Now that the data is saved, we can play with it:
$aws_doc_endpoint = '{Your AWS CloudSearch Document Endpoint URL}';
// Let's read the data
$data = file_get_contents($path_to_file);
// Now let's unserialize() it and encode it in JSON format
$data = json_encode(unserialize($data));
// finally let's use cURL
$ch = curl_init($aws_doc_endpoint);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
// both headers must go in a single array; a second CURLOPT_HTTPHEADER
// call would overwrite the first one
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Content-Type: application/json',
    'Content-Length: ' . strlen($data)
));
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$response = curl_exec($ch);
curl_close($ch);
$response = json_decode($response);
if ($response->status == 'success') {
    return TRUE;
}
return FALSE;
And like I said, there is nothing to it. Most answers I encountered were "use Guzzle, it's really easy"; well, yes it is, but for a simple task like this you don't need it.
Aside from that, if you still get an error, make sure to check the following:
Well formatted JSON data.
Make sure you have access to the endpoint.
Well I hope someone finds this code helpful.
To diagnose whether it's an access policy issue, have you tried a policy that allows all access to the upload? Something like the following opens it up to everything:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "cloudsearch:*"
    }
  ]
}
I noticed that if you just go to the document upload endpoint in a browser (mine looks like "doc-YOURDOMAIN-RANDOMID.REGION.cloudsearch.amazonaws.com") you'll get the 403 "Request forbidden by administrative rules" error, even with open access, so as #dminer said you'll need to make sure you're posting to the correct full url.
Have you considered using a PHP SDK? Like http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-cloudsearchdomain.html. It should take care of making correct requests, in which case you could rule out transport errors.
This never worked for me, so I used the CloudSearch terminal to upload files and PHP cURL to search them.
Try adding "cloudsearch:document" to CloudSearch's access policy under Actions
I had previously asked a question, and got the answer, but I think I've run into another problem.
The php script I'm using does this:
1 - transfers a file to my server from my backup server
2 - when it's done transferring, it sends some post data to it using cURL, which creates a zip file
3 - when that's done, the result is echoed and, depending on what the result is, it either transfers the file or does nothing.
My problem is this:
When the file is small enough (under 500MB) it creates it and transfers it back, no problem. When it's larger, it times out; the zip still finishes being created on the remote server, but because the request timed out it never gets transferred.
I'm running this from a command line on the backup server. I have this in the php script:
set_time_limit(0); // ignore php timeout
ignore_user_abort(true); // keep on going even if user pulls the plug
while (ob_get_level()) ob_end_clean(); // remove output buffers
But it still times out when I run sudo php backup.php.
Is using cURL making it time out like a browser would on the other end where the zip is being made? I think the problem is that the response isn't being echoed back.
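As far as I know, cURL's own CURLOPT_TIMEOUT defaults to 0 (no limit), so a 30-second cutoff is more likely coming from the remote web server or PHP; a sketch to make the client side explicit and surface the actual error (URL is a placeholder):
$ch = curl_init('http://domain.com/backup-zip.php'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30); // time allowed to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 0);         // 0 = no overall limit on the transfer itself
$response = curl_exec($ch);
if ($response === false) {
    // distinguishes a client-side timeout from the server dropping the connection
    echo 'cURL error: ' . curl_error($ch) . "\n";
}
curl_close($ch);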
Edits:
(#symcbean)
I'm not seeing anything, which is why I'm struggling. When I run it from the browser, I see the loading thing in the address bar. After about 30 seconds it just stops. When I do it from the command line, same deal. 30 seconds and it just stops. This only happens when large zips need to be created.
It's being invoked via a file. The file loads a class and sends the connection information to it; the class contacts the server to make the zip, transfers the zip back, does some stuff to it, then transfers it to S3 for archiving.
It logs into the remote server and uploads a file with cURL. Upon a valid response, it curls again with the location of that file as a URL (I'll always know what it is), which fires up the PHP file I just transferred over. The zip ALWAYS gets created, no problem, even up to 22GB; it just sometimes takes a long time, of course. After that it waits for a response of "created". Waiting for that response is where it dies.
So the zip always gets created, but the waiting time is what "I think" is making it die.
Second Edit:
I tried this from the command line:
$ftp_connect = ftp_connect('domain.com');
$ftp_login = ftp_login($ftp_connect, 'user', 'pass');
ftp_pasv($ftp_connect, true);
$upload = ftp_put($ftp_connect, 'filelist.php', 'filelist.php', FTP_ASCII);
$get_remote = 'filelist.php';
$post_data = array (
    'last_bu' => '0'
);
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'domain.com/'.$get_remote);
curl_setopt($ch, CURLOPT_HEADER, 0);
// return the response instead of printing it directly
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// adding the post variables to the request
curl_setopt($ch, CURLOPT_POSTFIELDS, $post_data);
// echo the following to get the response
$response = curl_exec($ch);
curl_close($ch);
echo $response;
and got this:
<HTML>
<HEAD>
<TITLE>500 Internal Server Error</TITLE>
</HEAD><BODY>
<H1>Internal Server Error</H1>
The server encountered an internal error or
misconfiguration and was unable to complete
your request.<P>
Please contact the server administrator to inform of the time the error occurred
and of anything you might have done that may have
caused the error.<P>
More information about this error may be available
in the server error log.<P>
<HR>
<ADDRESS>
Web Server at domain.com
</ADDRESS>
</BODY>
</HTML>
Again, the error log is blank and the zip still gets created, but because of the timeout at around 650MB of creation I can't get the response.
The problem is in the server code that generates the file to be returned.
Check the PHP error log.
It may be timing out for a few reasons, but the log should tell you why.
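If nothing shows up there, it may be worth forcing logging on in the remote zip-creating script itself (paths are placeholders):
// at the top of the remote script
error_reporting(E_ALL);
ini_set('display_errors', '0');
ini_set('log_errors', '1');
ini_set('error_log', '/path/to/php-error.log'); // placeholder path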
I fixed it, guys. Thank you so much to everyone who helped me; it pointed me in the right direction.
In the end, the problem was on the remote server. What was happening was that the cURL connection was timing out, so the result I needed was never sent back.
What I did to fix it was add a function to my class that (again using cURL) checks the HTTP status code of the zip file I know is being created. When it finishes, the result is returned locally; if it's not finished yet, the function sleeps for a few seconds and checks again.
private function watchDog()
{
    $curl = curl_init($this->host.'/'.$this->grab_file);
    // don't fetch the actual page, we only want to check whether it exists yet
    curl_setopt($curl, CURLOPT_NOBODY, true);
    // do the request
    $result = curl_exec($curl);
    // if the request did not fail, check the response code
    if ($result !== false) {
        $statusCode = curl_getinfo($curl, CURLINFO_HTTP_CODE);
        curl_close($curl);
        if ($statusCode == 404) {
            // zip not ready yet - wait and check again
            sleep(7);
            return $this->watchDog();
        }
        return 'zip created';
    }
    curl_close($curl);
}
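A rough sketch of how such a poller can be wired in (the surrounding method names here are hypothetical):
// trigger the remote zip creation first (hypothetical method), then poll
// with watchDog() until the file exists before transferring it back
$this->triggerRemoteZip();
if ($this->watchDog() === 'zip created') {
    $this->transferZipBack(); // hypothetical method
}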
OK, firstly, I must be missing something, so I apologise for what may turn out to be a newb question...
I have a complicated bit of code that is just not working, so I am putting it out here for any pointers. I can't share too much as it is proprietary, so here goes.
I have three tiers: user, server, appliance. The server and appliance are PHP enabled; the client is either IE or Chrome - the behavior is the same.
The user tier sends data from an HTML 5 form to the server, which in turn logs it in a database, and can send to the appliance - all OK here.
Due to the appliance not being HTTPS enabled, I am trying to set up a trigger/response model. This means sending an abbreviated message, or key (as a GUID), to the appliance, and then the appliance calling back to the server for an XML message to process. The callback is done using a file_get_contents() call.
All the parts seem to be working: the server response retrieves the XML and the client picks up the XML headers correctly - however, when the appliance performs the call, the response is empty.
$result = file_get_contents($DestURL);
// If I call the value in $DestURL in a browser address
// box - it all works.
// If I echo the $result, it is empty, and then nothing
// executes, except the last line.
if (strlen($result) == 0) {
    // ==> this is not executing <==
    $msg = "Failed to open the <a href='" . htmlspecialchars($DestURL) . "'> URL<a>: " . htmlspecialchars($DestURL);
    $result = "Error";
}
// ==> this is not executing <==
if ($result == "Error")
{
    /*
     * need to send an error message
     */
}
else
{
    $result = PrintMessage($my_address, $result);
}
// ==> This is executing <==
echo "all Finished";
?>
Any ideas from anyone greatly appreciated.
The Server Web service reads like this:
<?php
header("Content-type: text/xml");
// a bunch of items getting the data from the database
$result = mysqli_query($con, $sql);
$row = mysqli_fetch_array($result);
echo $row['message_XML'];
?>
I still have no real explanation for why this is happening; however, a related post helped a huge amount: PHP ini file_get_contents external url. Thanks to both responses.
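That post points at the PHP ini side of file_get_contents(), presumably the allow_url_fopen setting, which file_get_contents() needs in order to open http(s) URLs at all. A quick check on the appliance (sketch):
// file_get_contents() can only fetch URLs when allow_url_fopen is enabled
var_dump(ini_get('allow_url_fopen'));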
I've changed the GET from using file_get_contents() to a cURL call. Problem solved.
Here is the code:
function get_message($URL)
{
    /*
     * This code has been lifted from stackoverflow: URL to the article
     */
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_URL, $URL);
    // Can also link in security bits here.
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
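Used in place of the original call, roughly (a sketch):
$result = get_message($DestURL);
if ($result === false || strlen($result) == 0) {
    // same error handling path as before
    $result = "Error";
}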