How can I debug a cURL request in PHP?

Inside a Slave site I have a script that performs a cURL request against a server of mine (the Master).
Locally, I have both sites installed, and I'd like to debug what happens on the Master when the Slave tries to connect to it.
Ideally, the perfect solution would be to attach my own request to PHPStorm debugger, so I can actually see what's going on.
I tried to start the debug, but then PHPStorm attaches to the calling script, and not the receiving site.
Do you have any suggestions on how I can actually debug it, without having to rely on the good old var_dump();die();?

Well, at the end of the day, PHPStorm relies on a cookie to attach to the incoming request.
By default, this cookie has the following value: XDEBUG_SESSION=PHPSTORM.
This means that you simply have to add the following line to your code:
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Cookie: XDEBUG_SESSION=PHPSTORM"));
and PHPStorm will "see" the incoming request, allowing you to debug it.
Additional Tip
The previous trick works everywhere!
If you are trying to debug a cURL request from command line, once again you simply have to pass the cookie param and PHPStorm will attach to the request:
--cookie "XDEBUG_SESSION=PHPSTORM"

Related

Set up one instance of ChromeDriver for Panther, rather than creating one for each request

I am using Symfony Panther for web scraping (not testing) in a PHP project that does not use Symfony; I installed it via Composer. Each time I need to scrape a link submitted by a user, I start a new Chrome browser.
$client = Symfony\Component\Panther\Client::createChromeClient('/usr/bin/chromedriver');
$client->request('GET', $url);
$crawler = $client->waitFor('body');
Starting a new Chrome browser for each submitted $url is slow and resource intensive, so I want to keep the Chrome Client running on port 9515 and then each user's $url request can connect to that one same instance. Based on some user comments on Github, it sounds like a reasonable method:
Spin up a Chrome instance on the Linux server, running on port 9515
Make each url request connect to that instance.
I placed the first line (the one with createChromeClient) in a PHP script run by a cron job, but it never starts the Chrome client, and I get no errors either. Any ideas on how to achieve this?
"Not showing" the browser (Chrome here) is the default way, as it is faster. It is called "headless"
To show it, you have to specify "PANTHER_NO_HEADLESS" :
PANTHER_NO_HEADLESS=1 vendor/bin/phpunit -c phpunit.xml
You can also check whether Chrome is running with (or look at any system log):
ps aux | grep chrome

PHP get_headers fails on https://www.ticketmaster.com

I have code which validates whether a URL is OK, starting with get_headers() with a variety of user agents and then trying CURL. If I try:
get_headers('https://www.ticketmaster.com/');
This hangs for a long time and then returns false.
The equivalent CURL call behaves similarly.
If I open a Mac terminal window and try:
wget https://www.ticketmaster.com
It says it's connected to www.ticketmaster.com, but hangs while awaiting the response and eventually times out.
The URL works fine in a browser (obviously), and it's also considered OK by SSL Checker.
Any ideas on what Ticketmaster is doing differently, and how to check for it?

Is it possible to obtain POST data in NodeJS without creating a server?

I want to replace the PHP script that currently handles receiving and processing data from my webpage (LAN only) by a NodeJS file. Currently, the JSON data is being sent in JS with an XMLHttpRequest:
var xhttp = new XMLHttpRequest();
var url = "/server.php";
xhttp.open("POST", url, true);
xhttp.setRequestHeader("Content-Type", "application/json");
xhttp.onreadystatechange = function () {
...
};
xhttp.send(content);
Obviously, server.php is the file I'm looking to replace. In this file, I receive the data like this:
$stringHttp = file_get_contents("php://input");
I have searched far and wide on how to do something like this in NodeJS, but everything I find uses this basic layout:
http.createServer((request, response) => {
...
}).listen(8091);
Now, since my webpage is hosted by Apache, it's probably not possible to create this server on the same port. At least, that's what I'm getting from the error message I get when I try to run the NodeJS file:
events.js:183
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE :::8091
at Object._errnoException (util.js:992:11)
at _exceptionWithHostPort (util.js:1014:20)
at Server.setupListenHandle [as _listen2] (net.js:1355:14)
at listenInCluster (net.js:1396:12)
at Server.listen (net.js:1480:7)
at Object.<anonymous> (/var/www/apache/testNode.js:15:4)
at Module._compile (module.js:652:30)
at Object.Module._extensions..js (module.js:663:10)
at Module.load (module.js:565:32)
at tryModuleLoad (module.js:505:12)
So basically, I'm looking for a NodeJS replacement of file_get_contents("php://input").
Hence my question: Can you obtain POST data in NodeJS without creating a server?
No, it isn't.
In your PHP version, you have a server. It is Apache and it makes use of (for example) mod_php to execute the PHP.
If you are executing your program with Node.js, then you need some way to get the HTTP request to the program. That involves running a server.
it's probably not possible to create this server on the same port
No. You'd need to run it on a different port. (And then either post to it directly or configure Apache to act as a proxy in front of it).
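As a hedged sketch of the proxy option, the Apache side could look roughly like this (mod_proxy and mod_proxy_http enabled; the /api/ path and port 3000 are made-up values for illustration):

```apache
# Forward everything under /api/ to a Node.js server listening on port 3000,
# while Apache keeps serving the rest of the site (and PHP) as before.
ProxyPass        "/api/" "http://127.0.0.1:3000/"
ProxyPassReverse "/api/" "http://127.0.0.1:3000/"
```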
Well, if you really just want to run a Node.js script one-and-done to get a result, then you could keep a shell of your PHP script and have it run node with your script. You could either pass the Node script the required data on the command line, or send it via stdin and have the Node script read it from there.
Your node script would run and create the desired result and write it to stdout. The PHP script would then grab the stdout data and forward it as the http response.
Nobody is going to describe this as a super optimal way to do things: the HTTP request is sent to Apache, which fires up the PHP interpreter to run your PHP script, which in turn fires up Node.js to run your Node script. But if it's a one-off thing just for this one use, and there's some compelling reason that you use PHP elsewhere and need Node.js for this one thing, then you could make it work.
It may even be possible to create a custom path for this one script that Apache could be configured to detect and then run your node script directly (like it does for PHP) without the PHP middleman. I don't know Apache well enough to advise exactly how to do that, but there are some references on doing it. Again, not optimal, but it could be made to work.
The best performance solution would be to actually create a node.js server on another port and have either Apache or some other proxy detect certain requests (usually based on the path) that you want redirected to your node.js server rather than sent through to PHP or post directly to the other port from the client (you'd have to enable CORS in your node.js server to allow that to work).

How to debug curl on IIS?

How to inspect CURL requests?
My PHP scripts are hosted on IIS and I want to find some debugging tool for CURL.
Could you suggest something Fiddler-style?
(Or maybe there is a way to use Fiddler itself; I failed to do so because if I make my cURL tunnel through the proxy 127.0.0.1 it sends CONNECT requests instead of GET.)
Wireshark does not work for HTTPS, only for HTTP.
Can you change your cURL script to use HTTP?
Use curl -v for verbose mode.
From man curl
-v/--verbose
Makes the fetching more verbose/talkative. Mostly useful for debugging. A line starting with '>' means "header data" sent by curl, '<' means "header data" received by curl that is hidden in normal cases, and a line starting with '*' means additional info provided by curl.
Note that if you only want HTTP headers in the output, -i/--include might be the option you're looking for.
If you think this option still doesn't give you enough details, consider using --trace or --trace-ascii instead.
This option overrides previous uses of --trace-ascii or --trace.

How can I figure out why cURL is hanging and unresponsive?

I am trying to track down an issue with a cURL call in PHP. It works fine in our test environment, but not in our production environment. When I try to execute the cURL function, it just hangs and never ever responds. I have tried making a cURL connection from the command line and the same thing happens.
I'm wondering if cURL logs what is happening somewhere, because I can't figure out what is happening during the time the command is churning and churning. Does anyone know if there is a log that tracks what is happening there?
I think it is connectivity issues, but our IT guy insists I should be able to access it without a problem. Any ideas? I'm running CentOS and PHP 5.1.
Updates: Using verbose mode, I've gotten an error 28 "Connect() Timed Out". I tried extending the timeout to 100 seconds, and limiting the max-redirs to 5, no change. I tried pinging the box, and also got a timeout. So I'm going to present this back to IT and see if they will look at it again. Thanks for all the help, hopefully I'll be back in a half-hour with news that it was their problem.
Update 2: Turns out my box was resolving the server name with the external IP address. When IT gave me the internal IP address and I replaced it in the cURL call, everything worked great. Thanks for all the help everybody.
In your PHP, you can set the CURLOPT_VERBOSE option:
curl_setopt($curl, CURLOPT_VERBOSE, TRUE);
This then logs to STDERR, or to the file specified using CURLOPT_STDERR (which takes a file pointer):
curl_setopt($curl, CURLOPT_STDERR, $fp);
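Putting those two options together, a minimal sketch (the target URL is a throwaway placeholder chosen to fail fast, and the log path is arbitrary):

```php
<?php
// Capture cURL's verbose trace in a file instead of STDERR.
$fp = fopen('/tmp/curl_debug.log', 'w');

$curl = curl_init('http://127.0.0.1:1/');          // placeholder URL that fails quickly
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 2);     // don't hang while experimenting
curl_setopt($curl, CURLOPT_VERBOSE, true);
curl_setopt($curl, CURLOPT_STDERR, $fp);

curl_exec($curl);                                  // the trace is written even on failure
curl_close($curl);
fclose($fp);
// /tmp/curl_debug.log now holds the connection trace ("* Trying ..." lines, etc.)
```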
From the command line, you can use the following switches:
--verbose to report more info to the command line
--trace <file> or --trace-ascii <file> to trace to a file
You can use --trace-time to prepend time stamps to verbose/file outputs
You can also use curl_getinfo() to get information about your specific transfer.
http://in.php.net/manual/en/function.curl-getinfo.php
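As a sketch, the per-phase timings reported by curl_getinfo() can show where a transfer is stuck (DNS lookup vs. connect vs. transfer); the URL below is just a placeholder that fails fast:

```php
<?php
// Break a transfer down into phases to localize a hang.
$curl = curl_init('http://127.0.0.1:1/');          // placeholder URL
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 2);
curl_exec($curl);

$info = curl_getinfo($curl);
printf(
    "dns: %.3fs  connect: %.3fs  total: %.3fs  errno: %d (%s)\n",
    $info['namelookup_time'],
    $info['connect_time'],
    $info['total_time'],
    curl_errno($curl),
    curl_error($curl)
);
curl_close($curl);
```

A connect_time stuck at zero while total_time climbs toward the timeout points at a connectivity problem, which matches the "Connect() Timed Out" (error 28) in the question's update.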
Have you tried setting CURLOPT_MAXREDIRS? I've found that sometimes there will be an 'infinite' redirect loop for some websites that a normal browser user doesn't see.
If at all possible, try sudo-ing as the user PHP runs under (possibly the one Apache runs under).
The curl problem could have various causes that require user input, for example an untrusted certificate that is stored in the trusted-certificate cache of the root user, but not in that of the PHP user. In that case, the command would be waiting for an input that never comes.
Update: This applies only if you run curl externally using exec - maybe it doesn't apply.
