I've been searching for an answer here and on Google but can't seem to find a definitive answer. This is my first post on Stack Overflow, but I have been a regular visitor for some years and have learned a lot from this site, so thanks!
My problem is with Expedia's API regarding Hotel information: http://developer.ean.com/docs/read/hotel_info/examples/XML_Default_Content
My problematic URL is:
http://api.ean.com/ean‑services/rs/hotel/v3/info?cid=55505&minorRev=1&apiKey=9kxdnz8ngbf7gmwkzm4qkgjw&customerSessionId=0ABAA850-419E-A913-D072-4A24A390607C&customerUserAgent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:19.0) Gecko/20100101 Firefox/19.0&customerIpAddress=2.50.181.154&locale=en_UScurrencyCode=USD&xml=1175480
I'm getting a "596 Service Not Found" error.
I apologize if this has been asked before, and many thanks for the help!!!
The example in EAN's Developer Hub for making a Hotel Info request has since been fixed.
The problem: the example URL contained a non-ASCII hyphen in the path, which is why the service could not be found:
http://api.ean.com/ean‑services/rs/hotel/v3/info?
The solution: use a plain ASCII hyphen:
http://api.ean.com/ean-services/rs/hotel/v3/info?
You also didn't enter the xml parameter correctly. It should have been like this:
http://api.ean.com/ean-services/rs/hotel/v3/info?cid=55505&minorRev=1&apiKey=9kxdnz8ngbf7gmwkzm4qkgjw&customerUserAgent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:19.0) Gecko/20100101 Firefox/19.0&customerIpAddress=2.50.181.154&locale=en_US&currencyCode=USD&xml=<HotelInformationRequest>
<hotelId>122212</hotelId>
<options>0</options>
</HotelInformationRequest>
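Note that when the xml parameter is passed in the URL it has to be percent-encoded. A quick PHP sketch, reusing the hotelId from the example above:

<?php
// Percent-encode the XML payload before appending it to the query string.
$xml = '<HotelInformationRequest><hotelId>122212</hotelId><options>0</options></HotelInformationRequest>';
$query = '&xml=' . urlencode($xml);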
You could just forget about using xml and use the REST format:
http://api.ean.com/ean-services/rs/hotel/v3/info?cid=55505&minorRev=1&apiKey=9kxdnz8ngbf7gmwkzm4qkgjw&customerUserAgent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:19.0) Gecko/20100101 Firefox/19.0&customerIpAddress=2.50.181.154&locale=en_US&currencyCode=USD&hotelId=122212&options=0
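If you go the REST route, here is a minimal PHP sketch of building and sending the request. The cid, apiKey and hotelId are the ones from your question; http_build_query percent-encodes every value, including the spaces in the user agent string:

<?php
// Sketch: build and send the Hotel Info request in REST format.
$params = array(
    'cid'               => '55505',
    'minorRev'          => '1',
    'apiKey'            => '9kxdnz8ngbf7gmwkzm4qkgjw',
    'customerUserAgent' => $_SERVER['HTTP_USER_AGENT'],
    'customerIpAddress' => $_SERVER['REMOTE_ADDR'],
    'locale'            => 'en_US',
    'currencyCode'      => 'USD',
    'hotelId'           => '122212',
    'options'           => '0',
);

// http_build_query handles all the URL encoding for you.
$url = 'http://api.ean.com/ean-services/rs/hotel/v3/info?' . http_build_query($params);

// file_get_contents() needs allow_url_fopen enabled; cURL works just as well.
$response = file_get_contents($url);
echo $response;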
By the way, I just found an SDK for Android from Expedia Affiliate Network: https://github.com/ExpediaInc/ean-android
You can try this; it's working well:
http://api.ean.com/ean-services/rs/hotel/v3/info?cid=55505&minorRev=1&apiKey=9kxdnz8ngbf7gmwkzm4qkgjw&customerSessionId=&customerUserAgent=Mozilla/5.0%20%28Macintosh;%20Intel%20Mac%20OS%20X%2010.8;%20rv:19.0%29%20Gecko/20100101%20Firefox/19.0&customerIpAddress=&locale=en_US&currencyCode=USD
You can add this xml parameter to the URL:
&xml=<HotelInfo><hotelId>407696</hotelId><city></city><options>DEFAULT</options></HotelInfo>
Over the last few days I have noticed that my WordPress website has been running quite slowly, so I decided to investigate. After checking my database I saw that a table responsible for tracking 404 errors was over 1 GB in size. At that point it was evident I was being targeted by bots.
After checking my access log I could see a pattern of sorts: the bot seems to land on a legitimate page which lists my categories, then move into a category page, and from there request seemingly random page numbers, many of which are non-existent pages, which is what causes the issue.
Example:
/watch-online/ - Landing Page
/category/evolution/page/7 - 404
/category/evolution/page/1
/category/evolution/page/3
/category/evolution/page/5 - 404
/category/evolution/page/8 - 404
/category/evolution/page/4 - 404
/category/evolution/page/2
/category/evolution/page/6 - 404
/category/evolution/page/9 - 404
/category/evolution/page/10 - 404
This is the actual order of requests, and they all happen within a second. At that point the IP gets blocked for throwing too many 404s, but this seems to have no effect due to the sheer number of bots all doing the same thing.
The category also changes with each bot, so they are all hitting random categories and generating 404 pages.
At the moment there are 2,037 unique IPs which have thrown similar 404s in the last 24 hours.
I also use Cloudflare and have manually blocked many IPs from ever reaching my box, but the attack is relentless and they seem to keep coming from new IPs. Here is a list of some of the offending IPs:
77.101.138.202
81.149.196.188
109.255.127.90
75.19.16.214
47.187.231.144
70.190.53.222
62.251.17.234
184.155.42.206
74.138.227.150
98.184.129.57
151.224.41.144
94.29.229.186
64.231.243.218
109.160.110.135
222.127.118.145
92.22.14.143
92.14.176.174
50.48.216.145
58.179.196.182
Other than automatically blocking IPs for too many 404 errors I can think of no other real solution, and even that is quite ineffective due to the sheer number of IPs.
Any suggestions on how to deal with this would be greatly appreciated, as there appears to be no end to the attack and my website's performance really is taking a hit.
Some user agents include:
Mozilla/5.0 (Windows NT 6.3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.86 Safari/537.36
Mozilla/5.0 (Windows NT 6.2; rv:26.0) Gecko/20100101 Firefox/26.0
Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 7.0; WOW64; Trident/6.0)
Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:22.0) Gecko/20100101 Firefox/22.0
Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
If it's your personal website, you can try Cloudflare, which is free and can also provide protection against DDoS attacks. Maybe you can give it a try.
Okay so after much searching, experimentation and head banging I have finally mitigated the attack.
The solution was to install the Apache module mod_evasive; see:
https://www.digitalocean.com/community/tutorials/how-to-protect-against-dos-and-ddos-with-mod_evasive-for-apache-on-centos-7
So, for any other poor soul that gets slammed as severely as I did: have a look at that and get your thresholds finely tuned. It is a simple, cheap and very effective means of drastically reducing the impact of any attack similar to the one I suffered.
My server is still getting bombarded by bots but this really does limit their damage.
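For reference, here is a minimal mod_evasive configuration sketch. The threshold values below are illustrative starting points rather than the ones I settled on, and the module file is mod_evasive24.c on Apache 2.4 (older builds use mod_evasive20.c); tune the numbers against your own traffic:

<IfModule mod_evasive24.c>
    # Size of the hash table that tracks per-IP activity
    DOSHashTableSize    3097
    # Block an IP that requests the same page more than 5 times in 1 second
    DOSPageCount        5
    DOSPageInterval     1
    # Block an IP that makes more than 50 requests site-wide in 1 second
    DOSSiteCount        50
    DOSSiteInterval     1
    # How long (in seconds) an offending IP stays blocked
    DOSBlockingPeriod   60
    DOSLogDir           "/var/log/mod_evasive"
</IfModule>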
I have the most recent version of the serbanghita Mobile-Detect library.
However, I have an Asus PadFone 2 tablet/phone device which is not being handled correctly by it:
Mozilla/5.0 (Linux; U; Android 4.1.1; en-gb; PadFone 2 Build/JRO03L) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Safari/534.30
So I would like to manually update it for this device.
Stijn's answer is correct. I look at the issues, do some research, and then update the library once all the unit tests pass.
If you don't know how to add an issue, just send me feedback via http://demo.mobiledetect.net
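In the meantime, a possible workaround sketch (not the library's official extension mechanism) is to wrap the check with Mobile_Detect's match() helper and treat the PadFone 2 as a tablet yourself; the "PadFone" regex is taken from the user agent quoted above and is an assumption, not an official rule:

<?php
require_once 'Mobile_Detect.php';

$detect = new Mobile_Detect;

// match() runs a custom regex against the current user agent.
// Special-case the PadFone until the library's own rules catch it.
$isPadFone = $detect->match('PadFone');

if ($detect->isTablet() || $isPadFone) {
    echo 'Tablet layout';
} elseif ($detect->isMobile()) {
    echo 'Phone layout';
} else {
    echo 'Desktop layout';
}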
This question already has answers here: Determine Browser's Version (4 answers). Closed 9 years ago.
Could you please tell me how to implement an "Update Your Browser" notification in PHP? I'm developing a web app built entirely with HTML5, so I want to show a notification to users whose browsers are not up to date, so that they can update them.
Waiting for your reply.
Thanks
You can check the browser by using the user agent string that PHP exposes:
// The user agent is a full string, so check for a substring rather than an exact match.
if (strpos($_SERVER['HTTP_USER_AGENT'], 'Firefox') !== false) {
    echo 'Please update your browser.';
}
A better way to do it would be by checking the version of their browser. To do this, first, use get_browser:
$users_browser = get_browser(null, true);
Then, do the same thing as above, but use the version element:
if ($users_browser['version'] == '1.0.4') {
    echo 'Please update your browser.';
}
This would take some time, and testing, on your part, to find which browsers and versions to target. Then, you could double-check:
if (strpos($_SERVER['HTTP_USER_AGENT'], 'Firefox') !== false) {
    if ($users_browser['version'] <= 0.9) {
        echo 'Please update your browser.';
    }
}
This would display the error to anyone using Firefox version 0.9 or earlier.
I hope this helps.
As others have noted, using JavaScript to check for the features you need in the browser is best. But if you must do it on the server, your PHP can check the $_SERVER['HTTP_USER_AGENT'] string for details. For example:
Here is the request from my Mac:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/536.30.1 (KHTML, like Gecko) Version/6.0.5 Safari/536.30.1
And here from my Windows server:
Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
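Here is a rough sketch of what that server-side check could look like, assuming you only care about a few browser families; the regexes and minimum versions below are illustrative, not authoritative:

<?php
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

// Assumed minimum major versions per browser family (adjust to your needs).
$minVersions = array(
    'Firefox' => 20,
    'Chrome'  => 25,
    'MSIE'    => 10,
);

foreach ($minVersions as $browser => $minVersion) {
    // Matches e.g. "Firefox/19.0", "Chrome/50.0.2661.86" or "MSIE 9.0".
    if (preg_match('/' . $browser . '[\/ ](\d+)/', $ua, $m)) {
        if ((int) $m[1] < $minVersion) {
            echo 'Please update your browser.';
        }
        break;
    }
}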
Using PHP, is it possible to detect the exact OS even if the browser's user agent value has been altered?
Here is the case:
If someone overrides Firefox's user agent value using "general.useragent.override":
Actual: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:12.0) Gecko/20100101 Firefox/12.0
Override: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.3 Safari/534.53.10
The $_SERVER['HTTP_USER_AGENT'] value will be totally fake; it's not even useful for detecting the correct operating system.
Is there any Php solution in this situation?
Thanks.
No, it is not possible. The only information you have is that supplied by the User-agent header, and if a user wants to send false information there is nothing you can do to detect it.
You can still use JavaScript to find the screen size, but not the OS. This is how:
<script type="text/javascript">
document.write(screen.width+'x'+screen.height);
</script>
But this can be changed by the client anyway, since it all happens on the client side. On iOS there is one way to verify the device, by setting up a mobile device management profile, but that's a lot of work for the client, so only do that if you have to.
In most cases you cannot verify that the values haven't been modified.
When a search engine visits a webpage, what do the get_browser() function and $_SERVER['HTTP_USER_AGENT'] return?
Also, what other evidence does PHP offer that a search engine is crawling a webpage?
The get_browser() function attempts to determine the browser's capabilities (as an array), but don't count on it too much because of non-standard user agents; for a serious app, build your own detection instead.
$_SERVER['HTTP_USER_AGENT'] is a long string "describing" the user's browser, and it can be passed as the optional first parameter to get_browser(). A tip: use this string to identify the user's browser rather than relying on get_browser() itself, and be prepared for the user agent to be missing altogether. An example of this string:
Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/418 (KHTML, like Gecko) Safari/417.9.3
A search engine robot (spider, crawler) that follows the rules will visit your pages according to the information stored in robots.txt, which should exist in your site's root.
Without a robots.txt a spider can crawl the whole site, as long as it finds links inside your pages; if you have the file, you can use it to tell the spider what it may crawl. Note: these rules are honoured only by "good" spiders, not the bad ones.
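If you want your PHP code to notice that a visitor is (probably) a crawler, a rough sketch is to look for well-known bot tokens in the user agent; the token list below comes from the crawler user agents quoted further down, is not complete, and a bot can always lie:

<?php
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

// Tokens taken from the crawler user agents listed below.
$botTokens = array('Googlebot', 'bingbot', 'msnbot', 'Slurp');

$isCrawler = false;
foreach ($botTokens as $token) {
    if (stripos($ua, $token) !== false) {
        $isCrawler = true;
        break;
    }
}

echo $isCrawler ? 'Crawler detected' : 'Regular visitor (or unknown)';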
get_browser() and $_SERVER['HTTP_USER_AGENT'] will give you the crawler's user agent; it should look like this:
Google :
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_1 like Mac OS X; en-us) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8B117 Safari/6531.22.7 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)
Googlebot-Image/1.0
Bing :
Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534+ (KHTML, like Gecko) BingPreview/1.0b
msnbot/2.0b (+http://search.msn.com/msnbot.htm)
msnbot-media/1.1 (+http://search.msn.com/msnbot.htm)
Yahoo :
Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)
-> To fully control (and limit) crawling, don't rely on robots.txt; use .htaccess or httpd.conf rules instead. (Even "good" crawlers ignore your robots.txt disallow rules half of the time.)
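A minimal .htaccess sketch of that approach, using Apache 2.2-style access directives; the "BadBot" string is a placeholder for whatever user agent token you want to keep out:

# Flag any request whose User-Agent contains the placeholder token "BadBot"
SetEnvIfNoCase User-Agent "BadBot" block_bot

<Limit GET POST HEAD>
    Order Allow,Deny
    Allow from all
    Deny from env=block_bot
</Limit>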