I'm using the HttpClient component with Symfony 5.2:
$response = $myApi->request($method, $url, $options);
When a request fails, I'd like to get detailed info about the original request, i.e. the request headers.
$response->getInfo() does not return them (only the response headers).
My $options don't always contain everything I need, because some options can come from the client config.
I need to log this somewhere in production. I saw maintainers working on injecting a logger, but didn't find more info about it.
After a quick look at the code, I can see that a logger can be set, but it seems to log only the method and URI.
How can I get request info like the headers or params/body?
GitHub issue opened about this
$response->getInfo('debug') contains the request headers once they have been received by the client.
dump($response->getInfo('debug'));
* Found bundle for host myapi: 0x7fe4c3f81580 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host myapi
* Connected to myapi (xxx.xxx.xxx.xxx) port xxxx (#0)
> POST /my/uri/ HTTP/1.1
Host: myapi:xxxx
Content-Type: application/json
Accept: */*
Authorization: Bearer e5vn9569-n76v9nd-v6n978-dv6n98
User-Agent: Symfony HttpClient/Curl
Accept-Encoding: gzip
Content-Length: 202
* upload completely sent off: 202 out of 202 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 8
< ETag: W/"8-M1u4Sc28uxk+zyXJvSTJaEkyIGw"
< Date: Tue, 06 Apr 2021 07:36:10 GMT
< Connection: keep-alive
<
* Connection #0 to host myapi left intact
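Since the 'debug' info is a raw curl-style transcript like the one above, you can pull the outgoing request headers out of it with a little string parsing. This is only a sketch, and it assumes the transcript format shown here (request line prefixed with "> ", bare header lines following until a blank line or an annotated "*" / "<" line):

```php
<?php
// Extract the outgoing request line and headers from the transcript
// returned by $response->getInfo('debug'). The "> " prefix marks the
// request line; the header lines follow until a blank line or a line
// annotated with "*" or "<".
function extractRequestHeaders(string $debug): array
{
    $headers = [];
    $inRequest = false;
    foreach (preg_split('/\r?\n/', $debug) as $line) {
        if (strpos($line, '> ') === 0) {
            $inRequest = true;
            $headers[] = substr($line, 2);   // e.g. "POST /my/uri/ HTTP/1.1"
            continue;
        }
        if ($inRequest) {
            if ($line === '' || $line[0] === '*' || $line[0] === '<') {
                $inRequest = false;          // end of the request block
                continue;
            }
            $headers[] = $line;              // e.g. "Authorization: Bearer ..."
        }
    }
    return $headers;
}

// Usage:
//   $requestHeaders = extractRequestHeaders($response->getInfo('debug'));
```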
Also, TraceableHttpClient seems to be designed for detailed debugging.
Here's a quick sample of one of the options that I posted in my comment:
public function __construct(HttpClientInterface $client, LoggerInterface $logger)
{
    $this->client = $client;

    if ($this->client instanceof LoggerAwareInterface) {
        $this->client->setLogger($logger);
    }
}
You could also defer the setLogger() call to the point of use.
I would like my server to return a header with a custom message. Using the header() function, I can generate the appropriate headers but the message always reverts to some standard string, not the text I provide.
For example, if I put this in my server code
header($_SERVER['SERVER_PROTOCOL'] . ' 501 test error', true, 501);
I always see 501 Not Implemented in my client. For clients, I've used Postman and also my Xamarin.Forms client app. With the latter, I stopped it in the debugger to look at the text returned from httpClient.GetAsync().
I've also tried having only the first parameter
header($_SERVER['SERVER_PROTOCOL'] . ' 501 test error');
but I get the same results.
Here's another try. I returned this:
header($_SERVER['SERVER_PROTOCOL'] . ' Status: 501 test error', true, 501);
But curl on a command line shows this:
HTTP/1.1 200 OK
Connection: Keep-Alive
X-Powered-By: PHP/5.6.40
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Date: Thu, 24 Dec 2020 17:20:54 GMT
Server: LiteSpeed
Alt-Svc: quic=":443"; ma=2592000; v="43,46", h3-Q043=":443"; ma=2592000, h3-Q046=":443"; ma=2592000, h3-Q050=":443"; ma=2592000, h3-25=":443"; ma=2592000, h3-27=":443"; ma=2592000
And, if I take out "Status: ", I get this:
HTTP/1.1 501 Not Implemented
Connection: Keep-Alive
X-Powered-By: PHP/5.6.40
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Date: Thu, 24 Dec 2020 17:28:09 GMT
Server: LiteSpeed
Alt-Svc: quic=":443"; ma=2592000; v="43,46", h3-Q043=":443"; ma=2592000, h3-Q046=":443"; ma=2592000, h3-Q050=":443"; ma=2592000, h3-25=":443"; ma=2592000, h3-27=":443"; ma=2592000
header("HTTP/1.1 …") is a workaround for CGI setups. It's not an HTTP header as such; if sent that way, it's transformed and cleaned up by PHP-FPM in most cases: https://github.com/php/php-src/blob/97d2dd0f90b328e771b60634cc377fd20eececbc/sapi/fpm/fpm/fpm_main.c#L307
This is how you set a Status: header:
header("Status: 429 Begone!");
Now, if your webserver (LiteSpeed) strips out custom messages, then that's that; there's nothing PHP can do about it. You'll have to find a server config workaround instead (e.g. Header add with some if= condition for Apache).
In short, give SERVER_PROTOCOL a rest unless your SAPI binding requires it. Upgrading PHP is an option if you run into trouble otherwise. Failing that, you'll have to live with the standardized status message.
After doing some reading, I believe the right way to provide a custom message for an error is to send it in the body, not the header.
So, for example, to provide a custom message "missing weight=x parameter", you can use this code:
http_response_code(400);
print json_encode(array('error'   => 400,
                        'message' => 'missing weight=x parameter'));
Then, in your client, you parse this json string from the result body.
This might also happen if you use HTTP/2, which no longer carries a reason phrase alongside the status code.
A while ago I implemented a SOAP connection with the Dutch company Vecozo to retrieve insurance information for Dutch citizens. Now Vecozo has announced a migration of their servers to an Azure environment, so I need to change the URLs. My SoapClient is constructed with the WSDL they supplied years back, but they did not supply a new WSDL file for the new Azure server. They just indicated that the old URL needs to be changed to https://api.vecozo.nl/cov/vz801802/v1/soap11
To implement this I added the location and uri option parameters, and I also added __setLocation(). However, since the connection was working both before and after this change, how do I know it actually goes to the new Azure server (they run both in parallel until January 2021)? Did I implement the URL change correctly? Any suggestions on how to do this otherwise?
$soap_options = array(
    'local_cert'   => $certFile,
    'trace'        => 1,
    'location'     => "https://api.vecozo.nl/cov/vz801802/v1/soap11",
    'uri'          => "https://api.vecozo.nl/cov/vz801802/v1/soap11",
    'soap_version' => SOAP_1_1,
);
$client = new SoapClient($wsdl_incoming_flat, $soap_options);
// New URL as of 2021
$client->__setLocation('https://api.vecozo.nl/cov/vz801802/v1/soap11');
Examining my SOAP request headers, before the change it was:
POST /v1/VZ801802.svc/soap11 HTTP/1.1
Host: covwebservice.vecozo.nl
Connection: Keep-Alive
User-Agent: PHP-SOAP/5.4.16
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://schemas.vecozo.nl/VZ801802/v1/Controleer"
Content-Length: 897
After the location change it became:
POST /cov/vz801802/v1/soap11 HTTP/1.1
Host: api.vecozo.nl
Connection: Keep-Alive
User-Agent: PHP-SOAP/5.4.16
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://schemas.vecozo.nl/VZ801802/v1/Controleer"
Content-Length: 897
I think this is what Vecozo means by changing the URL: the Host and the POST path changed.
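To confirm at runtime which host is actually being hit: with 'trace' => 1 (already set in the options above), SoapClient records the headers it last sent, available via __getLastRequestHeaders(). A small sketch of a helper that pulls the Host: header out of that transcript (the method name in the usage comment is hypothetical):

```php
<?php
// Pull the Host: header out of the raw request-header transcript that
// SoapClient::__getLastRequestHeaders() returns when 'trace' => 1.
function hostFromRequestHeaders(string $rawHeaders): ?string
{
    foreach (preg_split('/\r?\n/', $rawHeaders) as $line) {
        if (stripos($line, 'Host:') === 0) {
            return trim(substr($line, 5));
        }
    }
    return null;
}

// Usage after a real call (hypothetical operation name):
//   $client->Controleer($params);
//   $host = hostFromRequestHeaders($client->__getLastRequestHeaders());
//   // Expect "api.vecozo.nl" once the new location is in effect.
```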
I'm developing a front end to a PHP based application that runs a custom management tool. My goal is to use Vue.js to build out the front end and specifically use Axios to make the API calls.
I'm using a Lightsail server with a public port forward.
I use Visual Studio Code with the Remote Development tools installed, connected via SSH, to develop on that server rather than my machine.
Vue CLI 3 to set up my project and choosing the default packages of Babel and Linting.
Axios is installed via npm.
Our PHP environment originally allowed me to pass params like user ID, API key, and a query (e.g. user=x, method=get, apikey=x), and I could easily consume the data, output it in a v-for, and everything worked pretty well. But it was not a well-designed API structure, so we moved away from passing params; I did not like passing the API key in the URL, and I don't like the idea of having to pass a SQL query to get the data. So my colleague adjusted the API, and now we have a URL like https://tunnel.host.com/api/sites/read.php. We'll also have PHP files for the rest of the CRUD operations later, but first I need to get past my current problem.
My research immediately led me to the issue of CORS and I've spent many hours reading about that topic and ultimately feel like it is the issue preventing me from passing the necessary headers to the server and getting access.
I thought for a while that installing the CORS npm package would help, but that appears to only be for fixing the issue in a locally hosted server environment (like using Express as the server in a dev environment).
After reading the Mozilla docs regarding CORS, I wonder if I need to send preflight headers in an OPTIONS HTTP request.
So far I have tried:
Adding a vue.config.js file with dev server options (I'll include the code below)
Using Postman to construct the headers and pass a GET request, which works just fine
Attempting to use the headers key in an Axios config object (code below)
My colleague who runs the PHP side of things assures me that all of the CORS headers in the files are correct.
I only have one component being loaded into App.vue called AxiosTest.
I've edited this post to update my findings.
When I send the headers as a const, the request is made as GET:
const config = {
  headers: {
    "content-type": "application/vnd.api+json",
    "Cache-Control": "no-cache",
    "x-api-key": "9xxxxxxxxxxxxxxxxxxxxxx9"
  }
}

axios.get(`https://tunnel.xxxxx.com/api/headers.php?`, { config })
  .then(response => {
    this.results = response;
  })
  .catch(error => {
    // eslint-disable-next-line
    console.log(error)
  })
And the response headers:
HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Mon, 08 Jul 2019 19:22:55 GMT
Content-Type: application/json; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Access-Control-Allow-Origin: http://54.x.x.155:8080
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 0
and the request headers:
Host: tunnel.xxxxxx.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:67.0) Gecko/20100101 Firefox/67.0
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: http://54.x.x.155:8080/
Origin: http://54.x.x.155:8080
DNT: 1
Connection: keep-alive
Cache-Control: max-age=0
However, if I keep the headers object inline inside the axios.get call, the request is sent as OPTIONS:
axios.get(`https://tunnel.xxxx.com/api/headers.php?`, {
  headers: {
    "content-type": "application/vnd.api+json",
    "Cache-Control": "no-cache",
    "x-api-key": "9xxxxxxxxxxxxxxxxxxxxxx9"
  }
})
  .then(response => {
    this.results = response;
  })
  .catch(error => {
    // eslint-disable-next-line
    console.log(error)
  })
Response
HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Mon, 08 Jul 2019 19:22:55 GMT
Content-Type: application/json; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Access-Control-Allow-Origin: http://54.x.x.155:8080
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 0
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: {cache-control,x-api-key}
Request
Host: tunnel.xxxxx.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:67.0) Gecko/20100101 Firefox/67.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Access-Control-Request-Method: GET
Access-Control-Request-Headers: cache-control,x-api-key
Referer: http://54.x.x.155:8080/
Origin: http://54.x.x.155:8080
DNT: 1
Connection: keep-alive
Cache-Control: max-age=0
vue.config.js
module.exports = {
  devServer: {
    public: '54.x.x.x:8080',
    proxy: 'https://tunnel.xxxxxxx.com/'
  }
}
In the most successful test so far, I still receive "Invalid CORS origin or invalid API key".
Look at the documentation for axios:
axios.get(url[, config])
The get method takes two arguments but you are passing it three.
Thus the data you are trying to pass is being treated as the configuration (and ignored, because none of its values are valid config options), while the real config data (including the request headers object) is never seen by Axios.
The query string needs to be part of the URL. Axios won't use the second argument to generate it for you.
const data = {
  foo: "bar"
};

axios.get(`https://example.com/api/headers.php?${new URLSearchParams(data)}`, {
  headers: {
    "Cache-Control": "no-cache",
    "content-type": "application/vnd.api+json",
    "x-api-key": "9xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx9"
  }
}).then(response => {
  console.log(response);
}).catch(error => {
  console.log(error)
})
I had some problems with the formatting of my Axios call, and found that referencing the headers as a const or var in the call did not work, while putting the headers object directly into the call did work.
On the PHP side, there were some extra curly brackets on the Access-Control-Allow-Headers directive, which were causing the header response to be bracketed.
All in all, it was troubleshooting through the Network tab in the browser dev tools that helped us find the fix.
Here is the function that I'm calling at the top of the PHP scripts to allow the requester 100% full CORS access.
function InitCors() {
    if (isset($_SERVER["HTTP_ORIGIN"])) {
        header("Access-Control-Allow-Origin: {$_SERVER["HTTP_ORIGIN"]}");
        header("Access-Control-Allow-Credentials: true");
        header("Access-Control-Max-Age: 0");
    }
    if ($_SERVER["REQUEST_METHOD"] == "OPTIONS") {
        if (isset($_SERVER["HTTP_ACCESS_CONTROL_REQUEST_METHOD"])) {
            header("Access-Control-Allow-Methods: GET, POST, OPTIONS");
        }
        if (isset($_SERVER["HTTP_ACCESS_CONTROL_REQUEST_HEADERS"])) {
            // No literal curly brackets around the value (that was the bug).
            header("Access-Control-Allow-Headers: " . $_SERVER["HTTP_ACCESS_CONTROL_REQUEST_HEADERS"]);
        }
    }
    header("Content-Type: application/json; charset=UTF-8");
}
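One detail worth noting: InitCors() sends the preflight headers, but the rest of the script still runs for OPTIONS requests. A common pattern (not shown above, so treat this as an assumption about the intended flow) is to stop after answering the preflight, so the endpoint's real work only runs for the actual GET/POST:

```php
<?php
// Sketch: detect a CORS preflight so the endpoint can exit early.
function isPreflight(string $method): bool
{
    return strtoupper($method) === 'OPTIONS';
}

// Usage at the top of an endpoint:
//   InitCors();
//   if (isPreflight($_SERVER['REQUEST_METHOD'])) {
//       exit; // preflight answered by InitCors(); nothing else to do
//   }
```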
I also have another test script that echoes back all of the $_SERVER array values, and I never see any of the custom headers when the Axios front end calls it. However, if you call things with curl from the command line (adding the headers), they most definitely do appear in the resulting output.
<?php
//---------------------------------------------------------------------------------------------------
include("../dashboard/subs.php");
//---------------------------------------------------------------------------------------------------
if (! IS_SSL()) {
    echo("{\"message\":\"API requires all traffic to be SSL encrypted.\"}\n");
    exit;
}
//---------------------------------------------------------------------------------------------------
InitCors();
//---------------------------------------------------------------------------------------------------
$JSON = "{\"data\":[";
$JSON .= json_encode($_SERVER);
$JSON .= "]}";
echo("$JSON\n");
//---------------------------------------------------------------------------------------------------
?>
I have a weird problem with the new Twitter API. I followed the very good answer from this question to create a search on Twitter and used TwitterAPIExchange.php from here.
Everything works fine as long as I am calling it directly from my server with cURL. But in the live environment I have to use a proxy with Basic Authentication.
All I've done is add the proxy authentication to the performRequest function:
if (defined('WP_PROXY_HOST') && defined('WP_PROXY_PORT') && defined('WP_PROXY_USERNAME') && defined('WP_PROXY_PASSWORD')) {
    $options[CURLOPT_HTTPPROXYTUNNEL] = 1;
    $options[CURLOPT_PROXYAUTH]       = CURLAUTH_BASIC;
    $options[CURLOPT_PROXY]           = WP_PROXY_HOST . ':' . WP_PROXY_PORT;
    $options[CURLOPT_PROXYPORT]       = WP_PROXY_PORT;
    $options[CURLOPT_PROXYUSERPWD]    = WP_PROXY_USERNAME . ':' . WP_PROXY_PASSWORD;
}
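For reference, the same wiring can be sketched as a standalone helper. The WP_PROXY_* constant names come from the snippet above; the host, port, and credential values in the usage comment are hypothetical:

```php
<?php
// Build the curl option array for tunneling HTTPS through an
// authenticated proxy (same options as in the snippet above).
function proxyCurlOptions(string $host, int $port, string $user, string $pass): array
{
    return [
        CURLOPT_HTTPPROXYTUNNEL => 1,              // CONNECT tunnel, needed for HTTPS
        CURLOPT_PROXYAUTH       => CURLAUTH_BASIC,
        CURLOPT_PROXY           => $host . ':' . $port,
        CURLOPT_PROXYPORT       => $port,
        CURLOPT_PROXYUSERPWD    => $user . ':' . $pass,
    ];
}

// Usage (hypothetical proxy details):
//   $ch = curl_init('https://api.twitter.com/1.1/search/tweets.json?q=php');
//   curl_setopt_array($ch, proxyCurlOptions('proxy.example', 8080, 'user', 'secret'));
//   $result = curl_exec($ch);
```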
Without the proxy I get a JSON response. But with the proxy I get:
HTTP/1.1 200 Connection established
HTTP/1.1 400 Bad Request
content-type: application/json; charset=utf-8
date: Fri, 20 Dec 2013 09:22:59 UTC
server: tfe
strict-transport-security: max-age=631138519
content-length: 61
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Set-Cookie: guest_id=v1%3A138753137985809686; Domain=.twitter.com; Path=/; Expires=Sun, 20-Dec-2015 09:22:59 UTC
Age: 0

{"errors":[{"message":"Bad Authentication data","code":215}]}
I've tried to simulate a proxy in my local environment with Charles Proxy, and it worked.
I'm assuming the proxy is either not sending the authentication header, or is changing the data somehow.
Does anybody have a clue?
EDIT:
Using the HTTP API works, but HTTPS fails. I've tried setting CURLOPT_SSL_VERIFYPEER and CURLOPT_SSL_VERIFYHOST to FALSE, but Twitter's SSL certificate is valid, so this is not recommended.
Is the proxy response cached, or is the date in the proxy response old because you performed the API call on December 20th?
If it is cached, maybe your proxy is returning a cached reply from an earlier invalid request?
I wrote a PHP web application which uses authentication and sessions (no cookies though). All works fine for the users in their browsers. At this point, though, I need to add functionality which will perform a task automatically; users don't need to see anything and can't interact with this process. So I wrote my new PHP script, import.php, which works in my browser. I set up a new cron job to call 'php import.php'. It doesn't work.

I started Googling, and it seems maybe I need to be using cURL and possibly cookies, but I'm not certain. Basically, import.php needs to authenticate and then access functions in a separate file, funcs.php, in the same directory on the local server. So I added cURL to import.php and reran it from the command line; I see the following:
[me#myserver]/var/www/html/webapp% php ./import.php
* About to connect() to myserver.internal.corp port 443 (#0)
* Trying 192.168.111.114... * connected
* Connected to myserver.internal.corp (192.168.111.114) port 443 (#0)
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* Remote Certificate has expired.
* SSL certificate verify ok.
* SSL connection using SSL_RSA_WITH_3DES_EDE_CBC_SHA
* Server certificate:
* subject: CN=dept,O=Corp,L=Some City,ST=AK,C=US
* start date: Jan 11 16:48:38 2012 GMT
* expire date: Feb 10 16:48:38 2012 GMT
* common name: myserver
* issuer: CN=dept,O=Corp,L=Some City,ST=AK,C=US
> POST /webapp/import.php HTTP/1.1
Host: myserver.internal.corp
Accept: */*
Content-Length: 356
Expect: 100-continue
Content-Type: multipart/form-data; boundary=----------------------------2c5ad35fd319
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< Date: Thu, 27 Dec 2012 22:09:00 GMT
< Server: Apache/2.4.2 (Unix) OpenSSL/0.9.8g PHP/5.4.3
< X-Powered-By: PHP/5.4.3
* Added cookie webapp="tzht62223b95pww7bfyf2gl4h1" for domain myserver.internal.corp, path /, expire 0
< Set-Cookie: webapp=tzht62223b95pww7bfyf2gl4h1; path=/
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< Content-Length: 344
< Content-Type: text/html
<
* Connection #0 to host myserver.internal.corp left intact
* Closing connection #0
I'm not sure what I'm supposed to do after I authenticate via cURL. Or is there an alternate way to authenticate where I don't use cURL? Currently all pages in the web app take action (or not) based on $_SESSION and $_POST value checks. If cURL is the only way, do I need cookies? And if I need cookies, once I send one back to the server, what do I need to do to process it?
Basically, import.php checks for and reads files from the same directory. If there are files present when the cron job runs, it parses them and inserts the data into the DB. Again, everything works in the browser, just not the import from the command line.
Having never done this before (or much PHP for that matter), I'm completely stumped.
Thanks for your help.
I've solved my problems with this one.
shell_exec('nohup php '.realpath(dirname(__FILE__)).'/yourscript.php > /dev/null &');
You can set this up to run every x minutes, and it will run in the background without delaying the user.
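If you do need to drive the authenticated pages over HTTP from the CLI, the usual ingredient is a cURL cookie jar shared between the login request and the import request, so the PHP session survives between the two calls. A minimal sketch; the URLs and form field names in the usage comments are hypothetical:

```php
<?php
// Sketch: curl options that read and write one shared cookie jar, so a
// PHP session cookie set by a login request is replayed on later calls.
function cookieJarOptions(string $jar): array
{
    return [
        CURLOPT_COOKIEJAR      => $jar,  // write cookies here on close
        CURLOPT_COOKIEFILE     => $jar,  // ...and read them back from here
        CURLOPT_RETURNTRANSFER => true,
    ];
}

// Usage (hypothetical URLs and field names):
//   $jar = tempnam(sys_get_temp_dir(), 'cookies');
//   // 1) Log in; the session cookie lands in the jar.
//   $ch = curl_init('https://myserver.internal.corp/webapp/login.php');
//   curl_setopt_array($ch, cookieJarOptions($jar)
//       + [CURLOPT_POST => true, CURLOPT_POSTFIELDS => ['user' => 'cronuser', 'pass' => 'secret']]);
//   curl_exec($ch); curl_close($ch);
//   // 2) Call import.php with the same jar so the $_SESSION checks pass.
//   $ch = curl_init('https://myserver.internal.corp/webapp/import.php');
//   curl_setopt_array($ch, cookieJarOptions($jar));
//   $body = curl_exec($ch); curl_close($ch);
```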
This is highly unlikely to help anybody, but the requirements for this project changed, so I ended up creating a PHP-based REST API and rewriting this import script in Python to integrate with some other tools being developed. All works as needed. In Python...
import cookielib
import getopt
import os
import sys
import urllib
import urllib2
import MultipartPostHandler
I shouldn't need to provide any more details; anybody versed enough in Python should get the drift. The script reads a file and submits it to my PHP API.