PHP - while loop loading

I have a while loop that takes IPs and passwords from a text file and logs in to some servers that I rent, using HTTP Auth.
<?php
$username = 'admin';

function login($server, $login)
{
    $options = array(
        CURLOPT_URL => $server,
        CURLOPT_HEADER => 1,
        CURLOPT_RETURNTRANSFER => 1,
        CURLOPT_USERAGENT => "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 GTB5",
        // Each header must be its own array element, not one multi-line string.
        CURLOPT_HTTPHEADER => array(
            "Host: {$server}",
            "User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:28.0) Gecko/20100101 Firefox/28.0",
            "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
            "Accept-Language: en-US,en;q=0.5",
            "Accept-Encoding: gzip, deflate",
            "Connection: keep-alive",
            "Authorization: Basic {$login}",
        ),
    );
    $ch = curl_init();
    curl_setopt_array($ch, $options);
    $result = curl_exec($ch);
    $http_status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    if ($http_status == 200) {
        // do something
        echo "Completed";
    } else {
        echo "Something went wrong";
    }
}

$file = fopen('myServers.txt', 'r');
while (!feof($file)) {
    // explode() takes the delimiter first, then the string.
    $m = explode(':', trim(fgets($file)));
    $password = $m[0];
    $server = $m[1];
    $login = base64_encode("{$username}:{$password}");
    login($server, $login);
}
fclose($file);
?>
The script works fine. However, when I load the page on my localhost, it takes forever to load and then prints everything at once when it's done with the entire file.
I want it to print "Something went wrong" or "Completed" on each iteration; I don't want it to wait for the entire file to go through the loop.

You're probably going to want to take a look at PHP output flushing, which pushes content to the browser before the script continues generating more page content. Note that, as I recall, you need to call both ob_flush() and flush() together in order to reliably flush content to the browser.
http://us3.php.net/flush
[Edit]
Example: You might try changing your echo statements to something resembling the below:
echo "Completed";
ob_flush();
flush();

Whether you can do what you want to do depends on the web server being used, and how it's configured, with regards to output buffering.
A good place to start reading would be the documentation for PHP's flush function.
A call to flush is intended to push output to the end user, but sometimes the web server implements its own output buffering, which defeats the effect.
From the flush documentation:
Several servers, especially on Win32, will still buffer the output from your script until it terminates before transmitting the results to the browser.
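Putting the two answers together, a minimal sketch of flushing on each iteration might look like this. The server list is a placeholder, and whether output actually reaches the browser incrementally still depends on the web server's own buffering, as noted above:

```php
<?php
// Flush after each iteration so results appear as they are produced,
// rather than all at once when the script finishes.
// Caveat: the web server (e.g. proxy buffering, compression modules)
// can still buffer output beyond PHP's control.

// Ask PHP to flush implicitly after each output call.
ob_implicit_flush(true);

$servers = ['203.0.113.1', '203.0.113.2']; // placeholder list

foreach ($servers as $server) {
    // ... attempt the login here ...
    echo "Checked {$server}\n";

    // Flush PHP's output buffer (if one is active), then the write buffer.
    if (ob_get_level() > 0) {
        ob_flush();
    }
    flush();
}
```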

Related

Why would a PHP cURL request work on localhost but not on server (getting 403 forbidden)? [duplicate]

I am trying to make a site scraper. I made it on my local machine and it works fine there. When I execute the same code on my server, it shows a 403 Forbidden error.
I am using the PHP Simple HTML DOM Parser. The error I get on the server is this:
Warning: file_get_contents(http://example.com/viewProperty.html?id=7715888) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.1 403 Forbidden in /home/scraping/simple_html_dom.php on line 40
The line of code triggering it is:
$url="http://www.example.com/viewProperty.html?id=".$id;
$html=file_get_html($url);
I have checked the php.ini on the server and allow_url_fopen is On. A possible solution could be to use cURL, but I need to know where I am going wrong.
I know it's quite an old thread, but I thought I'd share some ideas.
Most likely, if you don't get any content when accessing a webpage, the site doesn't want you to be able to get the content. So how does it tell that a script, rather than a human, is trying to access the webpage? Generally, by the User-Agent header in the HTTP request sent to the server.
So, to make the website think that the script accessing the page is also a human, you must change the User-Agent header in the request. Most web servers will likely allow your request if you set the User-Agent header to a value used by some common web browser.
A list of common user agents used by browsers:
Chrome: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36
Firefox: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0
etc...
$context = stream_context_create(
    array(
        "http" => array(
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
        )
    )
);
echo file_get_contents("https://www.google.com", false, $context);
This piece of code fakes the user agent and sends the request to https://www.google.com. (Note that file_get_contents needs the full URL including the scheme; a bare "www.google.com" would be treated as a local file path.)
References:
stream_context_create
Cheers!
This is not a problem with your script, but with the resource you are requesting. The web server is returning the "forbidden" status code.
It could be that it blocks PHP scripts to prevent scraping, or your IP if you have made too many requests.
You should probably talk to the administrator of the remote server.
Add this after you include the simple_html_dom.php
ini_set('user_agent', 'My-Application/2.5');
Alternatively, you can change it like this in the parser class, from line 35 onwards:
function curl_get_contents($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

function file_get_html()
{
    $dom = new simple_html_dom;
    $args = func_get_args();
    $dom->load(call_user_func_array('curl_get_contents', $args), true);
    return $dom;
}
Have you tried another site?
It seems the remote server has some kind of blocking. It may be by user agent; if so, you can try using cURL to simulate a web browser's user agent, like this:
$url="http://www.example.com/viewProperty.html?id=".$id;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch,CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
$html = curl_exec($ch);
curl_close($ch);
Write this in simple_html_dom.php; for me it worked:
function curl_get_contents($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
    // Execute the request once and return the body.
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
function file_get_html($url, $use_include_path = false, $context = null, $offset = -1, $maxLen = -1, $lowercase = true, $forceTagsClosed = true, $target_charset = DEFAULT_TARGET_CHARSET, $stripRN = true, $defaultBRText = DEFAULT_BR_TEXT, $defaultSpanText = DEFAULT_SPAN_TEXT)
{
    $dom = new simple_html_dom;
    $args = func_get_args();
    $dom->load(call_user_func_array('curl_get_contents', $args), true);
    return $dom;
    //$dom = new simple_html_dom(null, $lowercase, $forceTagsClosed, $target_charset, $stripRN, $defaultBRText, $defaultSpanText);
}
I realize this is an old question, but...
I was just setting up my local sandbox on Linux with PHP 7 and ran across this. When running scripts from the terminal, PHP uses the CLI's php.ini. I found that the "user_agent" option was commented out there. I uncommented it, added a Mozilla user agent, and now it works.
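As a quick sanity check of what the CLI SAPI is actually using, something like this can be run from the terminal (a plain-PHP sketch assuming nothing beyond a standard install; the Firefox string is just an example value):

```php
<?php
// Show which php.ini the current SAPI loaded, and what user_agent
// it will send for file_get_contents()-style HTTP requests.
echo 'Loaded php.ini: ', (php_ini_loaded_file() ?: '(none)'), "\n";
echo 'user_agent:     ', (ini_get('user_agent') ?: '(not set)'), "\n";

// The setting can also be overridden at runtime, for this script only:
ini_set('user_agent', 'Mozilla/5.0 (X11; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0');
echo 'user_agent now: ', ini_get('user_agent'), "\n";
```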
Did you check the permissions on the file? I set 777 on my file (on localhost, obviously) and that fixed the problem.
You may also need some additional information in the context to make the website believe that the request comes from a human. What I did was open the website in a browser and copy any extra information that was sent in the HTTP request.
$context = stream_context_create(
    array(
        "http" => array(
            'method' => "GET",
            // Each header must be a single line ending in \r\n; a string
            // literal with embedded line breaks would corrupt the request.
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36\r\n" .
                "accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3\r\n" .
                "accept-language: es-ES,es;q=0.9,en;q=0.8,it;q=0.7\r\n" .
                "accept-encoding: gzip, deflate, br\r\n"
        )
    )
);
In my case, the server was rejecting the HTTP 1.0 protocol via its .htaccess configuration. It seems file_get_contents was using HTTP 1.0.
Use the code below.
If you use file_get_contents:
$context = stream_context_create(
    array(
        "http" => array(
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
        )
    ));
=========
If you use cURL (note that CURLOPT_USERAGENT takes the bare value, without the "User-Agent:" prefix):
curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36');

PHP file_get_contents returns with a 400 Error

My problem is pretty straightforward, but I cannot for the life of me figure out what is wrong. I've done something similar with another API, but this just hates me.
Basically, I'm trying to get information from https://owapi.net/api/v3/u/Xvs-1176/blob and use the JSON result to get basic information on the user. But whenever I try to use file_get_contents, it just returns
Warning: file_get_contents(https://owapi.net/api/v3/u/Xvs-1176/blob): failed to open stream: HTTP request failed! HTTP/1.1 400 BAD REQUEST in Z:\DevProjects\Client Work\Overwatch Boost\dashboard.php on line
So I don't know what's wrong, exactly. My code can be seen here:
$apiBaseURL = "https://owapi.net/api/v3/u";
$apiUserInfo = $gUsername;
$apiFullURL = $apiBaseURL.'/'.$apiUserInfo.'/blob';
$apiGetFile = file_get_contents($apiFullURL);
Any help would be largely appreciated. Thank you!
You need to set a user agent for file_get_contents, like this, and you can check it with this code. Refer to the documentation on setting a user agent for file_get_contents.
<?php
$options = array('http' => array('user_agent' => 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:53.0) Gecko/20100101 Firefox/53.0'));
$context = stream_context_create($options);
$response = file_get_contents('https://owapi.net/api/v3/u/Xvs-1176/blob', false, $context);
print_r($response);
That's what the page is sending: "Hi! To prevent abuse of this service, it is required that you customize your user agent".
You can customize it using cURL like this:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://owapi.net/api/v3/u/Xvs-1176/blob");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
$output = curl_exec($ch);
$output = json_decode($output);
if (curl_getinfo($ch, CURLINFO_HTTP_CODE) !== 200) {
    var_dump($output);
}
curl_close($ch);
If you do curl -v https://owapi.net/api/v3/u/Xvs-1176/blob you will get a response and you will see what headers cURL includes by default. Namely:
> Host: owapi.net
> User-Agent: curl/7.47.0
> Accept: */*
So then the question is, which one does owapi care about? Well, you can stop cURL from sending the default headers like so:
curl -H "Accept:" -H "User-Agent:" -H "Host:" https://owapi.net/api/v3/u/Xvs-1176/blob
... and you will indeed get a 400 response. Experimentally, here's what you get back if you leave off the "Host" or "User-Agent" headers:
{"_request": {"api_ver": 3, "route": "/api/v3/u/Xvs-1176/blob"}, "error": 400, "msg": "Hi! To prevent abuse of this service, it is required that you customize your user agent."}
You actually don't need the "Accept" header, as it turns out. See the PHP docs on how to send headers along with file_get_contents.

cURL: POST request gets treated as GET

I made a POST request, printed out the outgoing header information, and noticed that it gets treated as GET. What is the reason for this behaviour?
HEADER OUT DATA:
GET /inx/aeGDrYQ HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
Accept: */*
Cookie: PHPSESSID=t762fd0nbi12p3hrgb9sgx9k20; ____ri=4485; safemode=1; session=eyJpdiI6Im1HQzlNR1JhMTNDc0JRelYyRVwveUp6N0JxZG56Z2p5K094eSs3YU5HQ3dzPSIsInZhbHVlIjoiVXBPYzN4TVNReURhVnMxQlZ1TndLZ0dYUjltbUVEcW11bkJJMDdMRVZoZ0hHMjRXZ2p6azlcL1FWXC93NnZWN3oreDcxQms3aGlcL3l0MG1vTjd1V21FcmVCVzFnQjVuMUY5dHBWeUlTbU9NSjJcL1d5TlwvTW11ZWp1eHpNd3d4eFZTamV6aThsNldkdlN3aFo0XC9sTnVnU0tXVDRKbWVBU25VU0hJaDREQ1J5M2xDXC9zRUc5OXhWMWJWWG9jYndhczYyZW4xMkUxb3BoU3FmQmMrNVdzM3RqQmgzeHY1NVJ5RXRTNGZOdmQ4dTRCbmRtWVZBN210QVVEVk1BNTFPc1NQcFU3bnd4NEpKbnRaTFliRWNzbkZaXC9YWUF1Nld1ekZSbjVGRXBuZzNoRlBNND0iLCJtYWMiOiI4OWEwNmMyZGVkYjFiYTlmNDY0MDE5MTQwNzE1YzNhYWJjYTA5YjJ3MWMyZjgwMTViN2MyYmI0OWUyNmMwNjM0In0%3D; toastMsg=2; ts1=11e2bb0a86bfb9669c36Xcc407e1e3b3decefcce
REST OF THE CODE:
$ch = curl_init('https://example.com/login');
$postData = [
'name' => $name,
'pass' => $pass
];
$postDataStr = http_build_query($postData);
# Append some fields to the CURL options array to make a POST request. I left out headers, since
# they don't change and added return_transfer for echoing end results
$options[CURLOPT_POST] = 1;
$options[CURLOPT_POSTFIELDS] = $postDataStr;
$options[CURLOPT_HEADER]=1;
$options[CURLOPT_COOKIEJAR]=$cookie;
$options[CURLOPT_USERAGENT]= $useragent;
$options[CURLOPT_FOLLOWLOCATION] = true;
$options[CURLOPT_RETURNTRANSFER] = true;
$options[CURLINFO_HEADER_OUT] = true;
curl_setopt_array($ch, $options);
# Execute
$response = curl_exec($ch);
// echo $response;
$request = curl_getinfo($ch, CURLINFO_HEADER_OUT);
echo "Request sent: $request<br>";
You're only showing one request, and I suspect it is the second request where the first was a POST and the GET you see here is the one done after a redirect has been followed.
curl may switch to a GET when following a redirect, depending on which 30x code is in the response; the behaviour is guided by the HTTP/1.1 spec (RFC 7230 and friends).
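If the POST needs to survive the redirect, cURL can be told to re-POST instead of downgrading to GET. A minimal sketch, with the login URL and credentials as placeholders (not a drop-in fix for the code above):

```php
<?php
// POST through redirects without cURL switching the method to GET.
$ch = curl_init('https://example.com/login'); // placeholder URL

curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query(['name' => 'user', 'pass' => 'secret']),
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 5,
    CURLOPT_TIMEOUT        => 10,
    // Keep POSTing (with the body) across 301/302/303 redirects
    // instead of letting cURL fall back to GET.
    CURLOPT_POSTREDIR      => CURL_REDIR_POST_ALL,
]);

$response = curl_exec($ch);
curl_close($ch);
```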

Bad Request. Connecting to sites via curl on host and system

I have this cURL code in PHP.
curl_setopt($ch, CURLOPT_URL, trim("http://stackoverflow.com/questions/tagged/java"));
curl_setopt($ch, CURLOPT_PORT, 80); //ignore explicit setting of port 80
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_ENCODING, "");
curl_setopt($ch, CURLOPT_HTTPHEADER, $v);
curl_setopt($ch, CURLOPT_VERBOSE, true);
The contents of CURLOPT_HTTPHEADER are:
Proxy-Connection: Close
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1017.2 Safari/535.19
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: __qca=blabla
Connection: Close
Each of these is an individual item in the array $v.
When I upload the file to my host and run the code, what I get is:
400 Bad request
Your browser sent an invalid request.
But when I run it on my system using command line PHP, what I get is
< HTTP/1.1 200 OK
< Vary: Accept-Encoding
< Cache-Control: private
< Content-Type: text/html; charset=utf-8
< Content-Encoding: gzip
< Date: Sat, 03 Mar 2012 21:50:17 GMT
< Connection: close
< Set-Cookie: buncha cokkies; path=/; HttpOnly
< Content-Length: 22151
<
* Closing connection #0
It doesn't only happen on Stack Overflow; it also happens on 4shared, but it works on Google and others.
Thanks for any help.
This is more a comment than an answer: from your question it's not clear what specifically triggers the 400 error, nor what its source is.
Is that output from your server? Is it feedback (the cURL response) that your script outputs?
To make debugging easier, here is a slightly different style of configuration you might be interested in when using the cURL extension. There is a nice function called curl_setopt_array which allows you to set multiple options at once; it returns false if one of the options fails. It lets you configure your request completely up front, so you can more easily swap in a second (debug) configuration:
$curlDefault = array(
    CURLOPT_PORT => 80, // ignore explicit setting of port 80
    CURLOPT_RETURNTRANSFER => TRUE,
    CURLOPT_FOLLOWLOCATION => TRUE,
    CURLOPT_ENCODING => '',
    CURLOPT_HTTPHEADER => array(
        'Proxy-Connection: Close',
        'User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1017.2 Safari/535.19',
        'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Encoding: gzip,deflate,sdch',
        'Accept-Language: en-US,en;q=0.8',
        'Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3',
        'Cookie: __qca=blabla',
        'Connection: Close',
    ),
    CURLOPT_VERBOSE => TRUE, // TRUE to output verbose information. Writes output to STDERR, or the file specified using CURLOPT_STDERR.
);

$url = "http://stackoverflow.com/questions/tagged/java";
$handle = curl_init($url);
curl_setopt_array($handle, $curlDefault);
$html = curl_exec($handle);
curl_close($handle);
This might help you to improve the code and to debug things.
Furthermore, you're using the CURLOPT_VERBOSE option. By default this writes the verbose information to STDERR, so you can't capture it. Instead, you can redirect it into the output as well, to see better what's going on:
...
    CURLOPT_VERBOSE => TRUE, // TRUE to output verbose information. Writes output to STDERR, or the file specified using CURLOPT_STDERR.
    CURLOPT_STDERR => $verbose = fopen('php://temp', 'rw+'),
);

$url = "http://stackoverflow.com/questions/tagged/java";
$handle = curl_init($url);
curl_setopt_array($handle, $curlDefault);
$html = curl_exec($handle);
$urlEndpoint = curl_getinfo($handle, CURLINFO_EFFECTIVE_URL);
rewind($verbose);
echo "Verbose information:\n<pre>", htmlspecialchars(stream_get_contents($verbose)), "</pre>\n";
curl_close($handle);
Which gives sort of the following output:
Verbose information:
* About to connect() to stackoverflow.com port 80 (#0)
* Trying 64.34.119.12...
* connected
* Connected to stackoverflow.com (64.34.119.12) port 80 (#0)
> GET /questions/tagged/java HTTP/1.1
Host: stackoverflow.com
Proxy-Connection: Close
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1017.2 Safari/535.19
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: __qca=blabla
Connection: Close
< HTTP/1.1 200 OK
< Cache-Control: private
< Content-Type: text/html; charset=utf-8
< Content-Encoding: gzip
< Vary: Accept-Encoding
< Date: Mon, 05 Mar 2012 17:33:11 GMT
< Connection: close
< Content-Length: 10537
<
* Closing connection #0
This should give you the information needed to track things down, if they are request/cURL related. You can then easily change parameters and see whether it makes a difference. Also compare the cURL version installed locally with the one on the server. To obtain it, use curl_version:
$curlVersion = curl_version();
echo $curlVersion['version']; // e.g. 7.24.0
Hope this helps you to track things down.
According to http://php.net/manual/en/function.curl-setopt.php,
try setting CURLOPT_ENCODING to "gzip".
Also, I'd try to avoid as many hand-written header lines as possible; for example, use CURLOPT_COOKIE instead of a Cookie: __qca=blabla header line, and CURLOPT_USERAGENT for the user agent.
EDIT: note that CURLOPT_HTTPHEADER expects a plain array of "Name: value" strings, not a key => value array. Use that format together with the options above and you'll be fine (for details, read the manual :P).
Hope that helps.
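As an illustration of that suggestion, the hand-written Cookie and User-Agent header lines from the question could become dedicated options. The values are the placeholders from the question; this is a sketch, not a verified fix for the 400:

```php
<?php
$ch = curl_init('https://stackoverflow.com/questions/tagged/java');

curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_CONNECTTIMEOUT => 5,
    CURLOPT_TIMEOUT        => 10,
    // Let cURL request and decode gzip itself instead of a raw
    // Accept-Encoding header line.
    CURLOPT_ENCODING       => 'gzip',
    // Dedicated options instead of hand-written header lines:
    CURLOPT_USERAGENT      => 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1017.2 Safari/535.19',
    CURLOPT_COOKIE         => '__qca=blabla',
]);

$html = curl_exec($ch);
curl_close($ch);
```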
This worked for me:
curl_setopt($ch, CURLOPT_VERBOSE, true);
$verbose = fopen('php://temp', 'w+');
curl_setopt($ch, CURLOPT_STDERR, $verbose);
$response = curl_exec($ch);
rewind($verbose);
$verboseLog = stream_get_contents($verbose);
echo "Verbose information:\n<pre>", htmlspecialchars($verboseLog), "</pre>\n";

Curl 400 error when using UserAgent

Why am I sometimes getting this error?
**Bad Request**
Your browser sent a request that this server could not understand.
Apache Server at control.digitalcoding.com Port 80
When
$UserAgent = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11";
everything works fine, but not with
Opera/7.52 (Windows NT 5.1; U) [en]
Mozilla/5.0 (Windows; U; Windows NT 5.1; rv:1.7.3) Gecko/20041001 Firefox/0.10.1
Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1
for example. What is the problem?
HtmlReciever.php
<?php
if (empty($_GET["Link"])) {
    echo "empty";
    die;
}

$LinkToFetch = urldecode($_GET["Link"]);
$UserAgent = urldecode($_GET["UserAgent"]);

function iscurlinstalled()
{
    return in_array('curl', get_loaded_extensions());
}

// If cURL is installed
if (iscurlinstalled() == true) {
    $ch = curl_init($LinkToFetch);
    curl_setopt($ch, CURLOPT_USERAGENT, $UserAgent);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
    $HtmlCode = curl_exec($ch);
    curl_close($ch);
} else {
    $HtmlCode = file_get_contents($LinkToFetch);
}

echo $HtmlCode;
?>
I should say that I'm running RecieverHtml.php from another .php file with a GET request like this:
http://127.0.0.1/reciever/RecieverHtml.php?Link=http%3A%2F%2Fwww.digitalcoding.com%2Ftools%2Fdetect-browser-settings.html&UserAgent=Mozilla%2F5.0+%28Windows+NT+6.1%3B+rv%3A10.0.1%29+Gecko%2F20100101+Firefox%2F10.0.1%0D%0A
This depends on the server your request is sent to. If the server checks the user agent and allows only requests that match a limited/incomplete/outdated list of common browser user agents, the server might return a generic 400 status code.
If you don't have control over the server and want your script to work, use the user agent that works and forget about the others. The user agent you provide with your request is "wrong" anyway, as it is not Chrome doing the actual request but your server running your PHP script.
EDIT:
You can also pass the user agent of the browser that requests your PHP script by using the following code:
curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
Just keep in mind that the value might be empty or exotic (like Lynx/2.8.8dev.3 libwww-FM/2.14 SSL-MM/1.4.1) and be rejected by the server. (The client's user agent lives in $_SERVER, not $_REQUEST.)
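A small sketch of that idea with a fallback for an empty or missing value; the helper name client_user_agent is made up for illustration:

```php
<?php
// Pick the requesting browser's user agent when present; otherwise
// fall back to a fixed, known-good string.
function client_user_agent(array $server, string $fallback): string
{
    $ua = trim($server['HTTP_USER_AGENT'] ?? '');
    return $ua !== '' ? $ua : $fallback;
}

$userAgent = client_user_agent(
    $_SERVER,
    'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'
);

// Later: curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
```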
