My problem is that the request URI shown on the PHP-FPM status page does not include the request host. Multiple domains are served from this machine, so a bare path is useless when I don't know which domain it belongs to.
All domains hand their PHP requests to the same server, so I know the server hostname, but not the HTTP Host header behind each request URI on the status page, which is what I would like to know.
There are over 500 domains, so checking them one by one is not an option either.
The current output I get is:
$ curl -4k http://localhost/status?full
pool: www
process manager: static
start time: 14/May/2022:22:33:26 +0000
start since: 165698
accepted conn: 2002323
listen queue: 0
max listen queue: 0
listen queue len: 0
idle processes: 8
active processes: 8
total processes: 16
max active processes: 16
max children reached: 0
slow requests: 1823
************************
pid: 3108690
state: Running
start time: 16/May/2022:20:18:46 +0000
start since: 978
requests: 877
request duration: 27936
request method: GET
request URI: /path/to/request
content length: 0
user: -
script: /path/to/script.php
last request cpu: 0.00
last request memory: 0
Is it possible to show the full request host somewhere? If so, is there any guidance I can use to do so? Thank you in advance!
For the record, this is PHP-FPM (fpm-fcgi) behind an NGINX web server, PHP version 7.4.28.
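(The status page itself only reports what FPM tracks per worker, so the host may simply not be exposed there. One workaround, sketched under the assumption that the pool uses the stock www.conf directives, is to enable the pool's access log and include the Host header via the %e placeholder; the path and format string below are illustrative:)
; in the pool config (www.conf) -- illustrative values
access.log = /var/log/php-fpm/$pool.access.log
; %{HTTP_HOST}e logs the request's Host header next to method, URI and status
access.format = "%R - %u %t \"%m %r%Q%q\" %s host=%{HTTP_HOST}e"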
Related
I'm having server trouble: the website goes down and up repeatedly. After checking server processes, this message constantly comes up, e.g.
Jan 3 23:30:05 website kernel: CPU: 10 PID: 22345 Comm: php-fpm Tainted: G W 4.4.0-77-generic #89~10.03.1-Ubuntu
What does the "Tainted" in this php-fpm kernel message mean?
I have an interesting issue and I am not sure what the root cause is. I have a server with two virtual hosts, A and B, listening on ports 81 and 80 respectively. I have written a simple PHP script on A which looks like this:
<?php
echo "from A server\n";
And another simple PHP code on B:
<?php
echo "B server:\n";
// create curl resource
$ch = curl_init();
// set url
curl_setopt($ch, CURLOPT_URL, "localhost:81/a.php");
//return the transfer as a string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// $output contains the output string
$output = curl_exec($ch);
// close curl resource to free up system resources
curl_close($ch);
echo $output;
When making concurrent requests using ab, I get the following results:
ab -n 10 -c 5 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.10.173 (be patient).....done
Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80
Document Path: /b.php
Document Length: 26 bytes
Concurrency Level: 5
Time taken for tests: 2.680 seconds
Complete requests: 10
Failed requests: 0
Total transferred: 1720 bytes
HTML transferred: 260 bytes
Requests per second: 3.73 [#/sec] (mean)
Time per request: 1340.197 [ms] (mean)
Time per request: 268.039 [ms] (mean, across all concurrent requests)
Transfer rate: 0.63 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 2 1339 1408.8 2676 2676
Waiting: 2 1339 1408.6 2676 2676
Total: 3 1340 1408.8 2676 2677
Percentage of the requests served within a certain time (ms)
50% 2676
66% 2676
75% 2676
80% 2676
90% 2677
95% 2677
98% 2677
99% 2677
100% 2677 (longest request)
But making 1000 requests with concurrency level 1 is extremely fast:
$ ab -n 1000 -c 1 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.10.173 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80
Document Path: /b.php
Document Length: 26 bytes
Concurrency Level: 1
Time taken for tests: 1.659 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 172000 bytes
HTML transferred: 26000 bytes
Requests per second: 602.86 [#/sec] (mean)
Time per request: 1.659 [ms] (mean)
Time per request: 1.659 [ms] (mean, across all concurrent requests)
Transfer rate: 101.26 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 1 1 10.3 1 201
Waiting: 1 1 10.3 1 201
Total: 1 2 10.3 1 201
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 1
95% 1
98% 1
99% 2
100% 201 (longest request)
Can anyone explain why this happens? I really want to know the root cause. Is it an issue with curl? It doesn't feel like a network bottleneck or an open-file limit, since the concurrency is only 5. By the way, I also tried the same thing with guzzlehttp, and the result is the same. I run ab on my laptop, and the server is on the same local network. It certainly has nothing to do with network bandwidth either, because the requests between hosts A and B go over localhost.
I have modified the code so that testing is more flexible:
<?php
require 'vendor/autoload.php';
use GuzzleHttp\Client;
$opt = 1;
$url = 'http://localhost:81/a.php';
switch ($opt) {
    case 1:
        // create curl resource
        $ch = curl_init();
        // set url
        curl_setopt($ch, CURLOPT_URL, $url);
        // return the transfer as a string
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        // $output contains the output string
        $output = curl_exec($ch);
        curl_close($ch);
        echo $output;
        break;
    case 2:
        $client = new Client();
        $response = $client->request('GET', $url);
        echo $response->getBody();
        break;
    case 3:
        echo file_get_contents($url);
        break;
    default:
        echo "no opt";
}
echo "app server:\n";
I tried file_get_contents as well, but there is no obvious difference when switching to it. At concurrency 1 all methods are fine, but they all start degrading as concurrency increases.
I think I found something related to this issue, so I posted another question, "concurrent curl could not resolve host". This might be the root cause, but I don't have an answer yet.
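(To see where the time goes in each sub-request, curl's own timing counters can help. This is a small diagnostic sketch using the standard curl_getinfo() fields against the same URL as above:)
<?php
// Break the sub-request's elapsed time into name lookup, connect and total.
$ch = curl_init('http://localhost:81/a.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$info = curl_getinfo($ch);
curl_close($ch);
printf("namelookup: %.4fs connect: %.4fs total: %.4fs\n",
    $info['namelookup_time'], $info['connect_time'], $info['total_time']);
(If namelookup_time dominates under load, that points at name resolution rather than at the web server.)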
After much trial and error, I think this is definitely related to name resolution. Here is a PHP script that can run at concurrency level 500:
<?php
require 'vendor/autoload.php';
use GuzzleHttp\Client;
$opt = 1;
$url = 'http://localhost:81/a.php';
switch ($opt) {
    case 1:
        // create curl resource
        $ch = curl_init();
        // set url
        curl_setopt($ch, CURLOPT_URL, $url);
        // return the transfer as a string
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_PROXY, 'localhost');
        // $output contains the output string
        $output = curl_exec($ch);
        curl_close($ch);
        echo $output;
        break;
    case 2:
        $client = new Client();
        $response = $client->request('GET', $url, ['proxy' => 'localhost']);
        echo $response->getBody();
        break;
    case 3:
        echo file_get_contents($url);
        break;
    default:
        echo "no opt";
}
echo "app server:\n";
The lines that matter are curl_setopt($ch, CURLOPT_PROXY, 'localhost'); and $response = $client->request('GET', $url, ['proxy' => 'localhost']);. They tell curl to use localhost as a proxy.
And here is the result of the ab test:
ab -n 1000 -c 500 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.10.173 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80
Document Path: /b.php
Document Length: 182 bytes
Concurrency Level: 500
Time taken for tests: 0.251 seconds
Complete requests: 1000
Failed requests: 184
(Connect: 0, Receive: 0, Length: 184, Exceptions: 0)
Non-2xx responses: 816
Total transferred: 308960 bytes
HTML transferred: 150720 bytes
Requests per second: 3985.59 [#/sec] (mean)
Time per request: 125.452 [ms] (mean)
Time per request: 0.251 [ms] (mean, across all concurrent requests)
Transfer rate: 1202.53 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 6 4.9 5 14
Processing: 9 38 42.8 22 212
Waiting: 8 38 42.9 22 212
Total: 11 44 44.4 31 214
Percentage of the requests served within a certain time (ms)
50% 31
66% 37
75% 37
80% 38
90% 122
95% 135
98% 207
99% 211
100% 214 (longest request)
But why does name resolution still fail at concurrency level 5 when localhost is not used as a proxy?
The virtual host configuration is very simple and clean, and almost everything is at its default. I do not use iptables on this server, nor have I configured anything special.
server {
    listen 81 default_server;
    listen [::]:81 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}
I found something interesting: if you run another ab test within about 3 seconds of the first one, the second test finishes very quickly.
Without using localhost as a proxy:
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 2.8 seconds to finish.
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.008 seconds only.
Using localhost as a proxy:
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds.
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds.
I still think this means the issue is name resolution. But why?
Assumption: nginx is not listening on localhost:81.
I tried adding listen 127.0.0.1:81; to nginx, and it had no effect.
I realize I made a mistake with the curl proxy; that approach does not actually work. I will update the other details later.
Solved. It was not related to the proxy, or to name resolution at all; the root cause is pm.start_servers in php-fpm's www.conf.
OK, after so many days of trying to solve this issue, I finally found out why, and it is not name resolution. I can't believe it took so many days to track down the root cause: the value of pm.start_servers in php-fpm's www.conf. Initially I had set pm.start_servers to 3, which is why the ab test against localhost always got worse above concurrency level 3: each request to b.php ties up one FPM worker while its sub-request to a.php needs a second worker from the same pool, so with only a few workers warm the pool stalls until FPM spawns more or requests time out. php-cli has no such limit on the number of PHP processes, which is why it always performed well. After increasing pm.start_servers to 5, the ab result is as fast as with php-cli. If this is the reason your php-fpm is slow, you should also think about tuning pm.min_spare_servers, pm.max_spare_servers, pm.max_children and related settings.
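(For reference, a minimal sketch of the relevant pool directives in www.conf; the values below are illustrative, and the right numbers depend on how many localhost sub-requests a single page fans out to:)
pm = dynamic
; keep enough workers warm that a request and its localhost sub-request
; can run simultaneously at the expected concurrency
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_children = 20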
Problem: I have set up a MediaWiki with file caching enabled, but when I migrate the file cache to another MediaWiki, the cache is bypassed.
Background: I have set up MediaWiki 1.26.2 with apache2 as the front-end web server and MariaDB as the database, populated with the Danish Wikipedia.
I have enabled the file cache in LocalSettings.php to improve performance:
# Enable file caching.
$wgUseFileCache = true;
$wgFileCacheDirectory = "/tmp/wikicache";
$wgShowIPinHeader = false;
# Enable sidebar caching.
$wgEnableSidebarCache = true;
# Enable page compression.
$wgUseGzip = true;
# Disable pageview counters.
$wgDisableCounters = true;
# Enable miser mode.
$wgMiserMode = true;
Goal: Migrate the file cache, which is located under /tmp/wikicache, to another MediaWiki server. This does not seem to work, as the cache is skipped.
Use case: node server hosts MediaWiki, onto which I have migrated (copied) the file cache from another MediaWiki server, along with the same LocalSettings.php.
Here is a cached page:
root@server:~# find /tmp/ -name DNA*
/tmp/wikicache/3/39/DNA.html.gz
On another node, client, I use the Apache benchmark tool ab to measure the connection time when requesting that page. TL;DR: only 10% of the requests succeed, with a time of ~20 sec, which is roughly the time needed to query the database and retrieve the whole page.
root@client:~# ab -n 100 -c 10 http://172.16.100.3/wiki/index.php/DNA
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.16.100.3 (be patient).....done
Server Software: Apache/2.4.7
Server Hostname: 172.16.100.3
Server Port: 80
Document Path: /wiki/index.php/DNA
Document Length: 1184182 bytes
Concurrency Level: 10
Time taken for tests: 27.744 seconds
Complete requests: 100
Failed requests: 90
(Connect: 0, Receive: 0, Length: 90, Exceptions: 0)
Total transferred: 118456568 bytes
HTML transferred: 118417968 bytes
Requests per second: 3.60 [#/sec] (mean)
Time per request: 2774.370 [ms] (mean)
Time per request: 277.437 [ms] (mean, across all concurrent requests)
Transfer rate: 4169.60 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 123 2743 7837.1 145 27743
Waiting: 118 2735 7835.6 137 27723
Total: 123 2743 7837.2 145 27744
Percentage of the requests served within a certain time (ms)
50% 145
66% 165
75% 168
80% 170
90% 24788
95% 26741
98% 27625
99% 27744
100% 27744 (longest request)
If I subsequently request the same page again, it is served in ~0.15 seconds. I observe the same performance even if I flush MySQL's query cache with RESET QUERY CACHE:
root@client:~# ab -n 100 -c 10 http://172.16.100.3/wiki/index.php/DNA
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.16.100.3 (be patient).....done
Server Software: Apache/2.4.7
Server Hostname: 172.16.100.3
Server Port: 80
Document Path: /wiki/index.php/DNA
Document Length: 1184179 bytes
Concurrency Level: 10
Time taken for tests: 1.564 seconds
Complete requests: 100
Failed requests: 41
(Connect: 0, Receive: 0, Length: 41, Exceptions: 0)
Total transferred: 118456541 bytes
HTML transferred: 118417941 bytes
Requests per second: 63.93 [#/sec] (mean)
Time per request: 156.414 [ms] (mean)
Time per request: 15.641 [ms] (mean, across all concurrent requests)
Transfer rate: 73957.62 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 129 150 18.8 140 189
Waiting: 120 140 18.0 130 171
Total: 129 150 18.8 141 189
Percentage of the requests served within a certain time (ms)
50% 141
66% 165
75% 169
80% 170
90% 175
95% 181
98% 188
99% 189
100% 189 (longest request)
So, why isn't the file cache working when I migrate it to another MediaWiki server?
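(One thing worth checking, offered as an assumption rather than a confirmed diagnosis: as far as I know, MediaWiki's HTML file cache validates an entry by comparing the cache file's modification time against the page's touched timestamp and $wgCacheEpoch, so copied cache files that look older than what the destination database expects will be treated as stale. A quick way to test that theory is to freshen the copied files' mtimes:)
root@server:~# find /tmp/wikicache -type f -exec touch {} +
(If the entries are served afterwards, the mismatch was timestamp-based; if not, the problem lies elsewhere.)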
I'm using Google's latest Windows App Engine PHP SDK, v1.9.38, to run some long-running scripts on the local dev server, and for some reason they're timing out at 30 seconds. The error is e.g. "Fatal error: The request was aborted because it exceeded the maximum execution time. in [my script path]\timertest.php on line 8"
The timeout is supposed to be 60 seconds for automatic scaling! I'm not sure what I'm missing here... I'm doing various file processing in one script, but I then wrote a test script to see if that failed at 30 secs too, and it did. The script is:
<?php
$a = 1;
do
{
    syslog(LOG_INFO, $a.' Sleeping for 10 secs...\n');
    sleep(10);
    $a++;
}
while ($a < 8);
?>
Output is:
INFO: 1 Sleeping for 10 secs...\n
INFO: 2 Sleeping for 10 secs...\n
INFO: 3 Sleeping for 10 secs...\n
ERROR:root:php failure (255) with:
stdout:
X-Powered-By: PHP/5.5.26
Content-type: text/html
<br />
<b>Fatal error</b>: The request was aborted because it exceeded the maximum execution time. in <b>[my script path]\timertest.php</b> on line <b>8</b><br />
INFO 2016-06-02 20:52:56,693 module.py:788] default: "GET /testing/timertest.php HTTP/1.1" 500 195
I was thinking it was a config error somewhere, but I'm not sure what or where. My app.yaml is very standard:
application: ak2016-1
version: 1
runtime: php55
api_version: 1
handlers:
# Serve php scripts.
- url: /(.+\.php)$
  script: \1
  login: admin
and php.ini too:
google_app_engine.disable_readonly_filesystem = 1
upload_max_filesize = 8M
display_errors = "1"
display_startup_errors = "1"
As I say, this is an issue with the local dev SDK server only; I'm not bothered about the live online side, as the files I'm processing are local (and need to remain so).
Thanks for any suggestions etc!
I deployed the sample app from the Request Timer documentation and was not able to duplicate your issue. My requests all time out after ~60 seconds:
$ time curl https://<project-id>.appspot.com/timeout.php
Got timeout! Cleaning up...
real 1m0.127s
user 0m0.021s
sys 0m0.010s
I then copied your code, app.yaml, and php.ini to see if I could duplicate that, and received the following in my syslogs:
INFO: 1 Sleeping for 10 secs...\n
INFO: 2 Sleeping for 10 secs...\n
INFO: 3 Sleeping for 10 secs...\n
INFO: 4 Sleeping for 10 secs...\n
INFO: 5 Sleeping for 10 secs...\n
INFO: 6 Sleeping for 10 secs...\n
INFO: PHP Fatal error: The request was aborted because it exceeded the maximum execution time. in /base/data/home/apps/.../timeout2.php on line 9
However, if you continue to have issues with requests timing out after 30 seconds, I would suggest moving the offending code into task queues. I hope this helps!
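(For illustration, a minimal sketch of pushing the long-running work into a task, assuming the PHP SDK's PushTask API; the /worker path and its payload are hypothetical and would need a matching handler in app.yaml:)
<?php
require_once 'google/appengine/api/taskqueue/PushTask.php';
use google\appengine\api\taskqueue\PushTask;

// Enqueue the slow file processing instead of running it inline;
// the task executes in the background with its own deadline.
$task = new PushTask('/worker', ['file' => 'somefile.dat']);
$taskName = $task->add();
echo 'queued task: ' . $taskName;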
I have an RFID reader (http://avea.cc/web08s.html), which sends requests to an IIS server using the GET method. Both the reader and the server are on the same network, connected through a router.
The reader sends a couple of parameters as key/value pairs, and the ASP.NET page sends the response using a couple of tags (http://avea.cc/spec/web08s-sp01.pdf).
I see a constant 4-second delay before the reader gets the response, and 4 seconds under the "time-taken" column in the IIS logs.
I have enabled failed request tracing and see "0ms" as the processing time.
There is not much code in the ASP.NET page beyond sending hard-coded "GRNT=01" tags.
I tried sending the same tags from a PHP page and a Classic ASP page on the same IIS server, but I still see the same 4-second delay.
Below are the headers posted by reader:
------------- HeaderParameters --------------
Connection : close
User-Agent : webreader (http://avea.cc)
Then I installed Ubuntu on the same Windows server using Hyper-V and pointed the reader at this new server. I got the response in a fraction of a second, using the same PHP file I had used on the IIS server. After that, I tried a couple of servers, and I always got the delay on IIS and never on non-IIS servers.
No other roles are running on the IIS server; it's a fresh installation with only the Web role.
The delay is the same on Windows 7, Windows 8, Windows 2008 R2 and Windows 2012.
I do not see any delay when I send the same request through a browser or Fiddler.
Below is the log from Microsoft Network monitor:
19 12:08:40 AM 9/18/2012 70.2321659 System 78.70.27.161 192.168.1.101 TCP TCP:Flags=......S., SrcPort=65269, DstPort=HTTP(80), PayloadLen=0, Seq=1498152, Ack=1023812214, Win=32768 ( ) = 32768 {TCP:12, IPv4:11}
20 12:08:40 AM 9/18/2012 70.2341408 System 192.168.1.101 78.70.27.161 TCP TCP:Flags=...A..S., SrcPort=HTTP(80), DstPort=65269, PayloadLen=0, Seq=1500184337, Ack=1498153, Win=8192 ( Scale factor not supported ) = 8192 {TCP:12, IPv4:11}
21 12:08:40 AM 9/18/2012 70.3629428 System 78.70.27.161 192.168.1.101 HTTP HTTP:Request, GET /avea.asp, Query:cmd=PU&sid=00000100&deviceid=5988&mac=00:13:00:00:17:64&id=192.168.1.100&type=m&mode=MF2&rev=2&sw=O&ver=1.23 {HTTP:13, TCP:12, IPv4:11}
22 12:08:40 AM 9/18/2012 70.3636151 System 192.168.1.101 78.70.27.161 HTTP HTTP:Response, HTTP/1.1, Status: Ok, URL: /avea.asp {HTTP:13, TCP:12, IPv4:11}
23 12:08:40 AM 9/18/2012 70.7728540 System 192.168.1.101 78.70.27.161 TCP TCP:[ReTransmit #22]Flags=...AP..F, SrcPort=HTTP(80), DstPort=65269, PayloadLen=251, Seq=1500184338 - 1500184590, Ack=1498347, Win=65070 (scale factor 0x0) = 65070 {TCP:12, IPv4:11}
24 12:08:41 AM 9/18/2012 71.5462335 System 192.168.1.101 78.70.27.161 TCP TCP:[ReTransmit #22]Flags=...AP..F, SrcPort=HTTP(80), DstPort=65269, PayloadLen=251, Seq=1500184338 - 1500184590, Ack=1498347, Win=65070 (scale factor 0x0) = 65070 {TCP:12, IPv4:11}
25 12:08:43 AM 9/18/2012 73.1113701 System 192.168.1.101 78.70.27.161 TCP TCP:[ReTransmit #22]Flags=...AP..F, SrcPort=HTTP(80), DstPort=65269, PayloadLen=251, Seq=1500184338 - 1500184590, Ack=1498347, Win=65070 (scale factor 0x0) = 65070 {TCP:12, IPv4:11}
26 12:08:43 AM 9/18/2012 73.2449081 System 78.70.27.161 192.168.1.101 HTTP HTTP:Request, GET /avea.asp, Query:cmd=PU&sid=00000100&deviceid=5988&mac=00:13:00:00:17:64&id=192.168.1.100&type=m&mode=MF2&rev=2&sw=O&ver=1.23 {HTTP:13, TCP:12, IPv4:11}
27 12:08:43 AM 9/18/2012 73.4495140 System 192.168.1.101 78.70.27.161 TCP TCP:Flags=...A...., SrcPort=HTTP(80), DstPort=65269, PayloadLen=0, Seq=1500184590, Ack=1498541, Win=64876 (scale factor 0x0) = 64876 {TCP:12, IPv4:11}
28 12:08:44 AM 9/18/2012 74.6766982 System 192.168.1.101 78.70.27.161 TCP TCP:[ReTransmit #22]Flags=...AP..F, SrcPort=HTTP(80), DstPort=65269, PayloadLen=251, Seq=1500184338 - 1500184590, Ack=1498541, Win=64876 (scale factor 0x0) = 64876 {TCP:12, IPv4:11}
29 12:08:44 AM 9/18/2012 74.7931629 System 78.70.27.161 192.168.1.101 TCP TCP:Flags=...A...F, SrcPort=65269, DstPort=HTTP(80), PayloadLen=0, Seq=1498541, Ack=1500184590, Win=32768 (scale factor 0x0) = 32768 {TCP:12, IPv4:11}
30 12:08:44 AM 9/18/2012 74.7931982 System 192.168.1.101 78.70.27.161 TCP TCP:Flags=...A...., SrcPort=HTTP(80), DstPort=65269, PayloadLen=0, Seq=1500184590, Ack=1498542, Win=64876 (scale factor 0x0) = 64876 {TCP:12, IPv4:11}
Below is the corresponding entry in IIS Logs:
#Software: Microsoft Internet Information Services 7.5
#Version: 1.0
#Date: 2012-09-18 12:00:28
#Fields: date time cs-method cs-uri-stem cs-uri-query c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2012-09-18 12:08:44 GET /avea.asp cmd=PU&sid=00000100&deviceid=5988&mac=00:13:00:00:17:64&id=192.168.1.100&type=m&mode=MF2&rev=2&sw=O&ver=1.23 78.70.27.161 HTTP/1.0 webreader+(http://avea.cc) - - 200 0 0 4421
On the same Windows box, if I use Apache+PHP, there is no delay.
On the same network, with Ubuntu+PHP, there is no delay.
Please let me know if you have any suggestions to reduce the delay.
Thank you for your time!
Maybe this helps somebody: if it happens in a LAN, add both server names to the hosts file.
That did the trick in my case; response time in the LAN dropped from 4.5 sec to 50 ms.
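(For illustration, with hypothetical names since the real ones aren't given: entries like these in C:\Windows\System32\drivers\etc\hosts on the IIS box spare the server any DNS/reverse-DNS wait for the reader:)
192.168.1.100   reader-device   # hypothetical name for the RFID reader
192.168.1.101   iis-server      # hypothetical name for the IIS host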