Load cross-browser site with mobile emulator in iframe using PHP

I'm trying to display a cross-browser site in an iframe to emulate a mobile environment. The iframe is only 320px wide, so mobile sites that rely on responsive CSS render as expected. But websites that use other techniques to detect mobile devices do not load correctly, and I would like to catch them all. My main problem is the origin of the sites: they differ, and different URLs are loaded on specific actions. I'm not developing a full emulator; I just need to load these URLs to check whether they are currently fully responsive.
I saw this site:
http://php-drops.blogspot.se/2013/07/mobile-emulator-with-php.html
But I cannot get the hang of it. How can I load the true responsive site in my iframe? I assume that when the server detects a mobile environment it redirects to a different site, like m.site.com. If there is a dedicated mobile site that it redirects to, how can I get that URL?

Got it working, this is what I did:
$ch = curl_init();
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_VERBOSE, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);  // include response headers in the output
curl_setopt($ch, CURLOPT_NOBODY, true);  // HEAD-style request: we only need the redirect chain
// Spoof a mobile user agent so device-detection scripts redirect us to the mobile site
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9A334 Safari/7534.48.3');
curl_setopt($ch, CURLOPT_URL, htmlspecialchars_decode($url));
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects to the final (mobile) URL
curl_exec($ch);
$info = curl_getinfo($ch);
curl_close($ch);
return $info['url'];  // the URL we actually ended up on
And I got the mobile URL back. So if somewebsite.com redirects to m.somewebsite.com (or anywhere else), I can load the correct layout within the iframe :)
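For reuse, the same idea can be wrapped in a small function. This is a sketch: the name resolve_final_url is my own, and I use CURLINFO_EFFECTIVE_URL instead of reading the full info array.

```php
<?php
// Resolve the final URL a site redirects to when visited with a mobile
// user agent. Sketch; resolve_final_url is a made-up helper name.
function resolve_final_url(string $url): ?string
{
    $ch = curl_init();
    curl_setopt_array($ch, [
        CURLOPT_URL            => $url,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_NOBODY         => true,   // headers only, no body download
        CURLOPT_FOLLOWLOCATION => true,   // follow the whole redirect chain
        CURLOPT_TIMEOUT        => 5,
        CURLOPT_USERAGENT      => 'Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) '
                                . 'AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 '
                                . 'Mobile/9A334 Safari/7534.48.3',
    ]);
    curl_exec($ch);
    $final = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL); // where we ended up
    $errno = curl_errno($ch);
    curl_close($ch);
    return $errno === 0 ? $final : null;
}
```

The returned URL is then what you feed into the iframe's src attribute.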

Related

Failed to call the browser based request using PHP Curl

If we call the same URL in the browser, it auto-downloads the CSV file. I want the same behaviour using PHP cURL, saving the CSV file to the same folder, but it gives me an empty result every time. Can you please guide me on what is wrong in the code below?
$url="https://www.centrano.com/catalog_download.php?email=info#sporttema.com&password=dHB3L1FpTEg1c2pLZ29SUkdnUWcwWTFqN2RIamQx";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
$agent= "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36";
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_VERBOSE, true);
curl_setopt($ch, CURLOPT_USERAGENT, $agent);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, false);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 100000); // connection timeout, in seconds
$output = curl_exec($ch);
print_r($output);
curl_close($ch);
Add the following two CURL options to make it work:
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_COOKIEJAR, '/tmp/cookies.txt');
The page redirects several times on the same host and requires the session cookies to remain present (hence storing them in a cookie jar).
Also: change your centrano.com account password IMMEDIATELY! Even though it helped solve this, it's generally not a good idea to make it public.
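Putting the two options together with the rest of the question's settings, a corrected download might look like the sketch below. The helper name download_to_file and the temp-file cookie jar path are my own choices, and I enable CURLOPT_RETURNTRANSFER so the body can be written to disk rather than echoed.

```php
<?php
// Fetch a URL, following redirects and keeping session cookies across
// them, then save the body to a local file. Sketch; download_to_file
// is a made-up helper name.
function download_to_file(string $url, string $dest): bool
{
    $jar = tempnam(sys_get_temp_dir(), 'cookies'); // cookie jar for the session
    $ch  = curl_init();
    curl_setopt_array($ch, [
        CURLOPT_URL            => $url,
        CURLOPT_RETURNTRANSFER => true,  // hand the body back instead of printing it
        CURLOPT_FOLLOWLOCATION => true,  // follow the redirect chain
        CURLOPT_COOKIEJAR      => $jar,  // persist cookies between redirects
        CURLOPT_COOKIEFILE     => $jar,
        CURLOPT_CONNECTTIMEOUT => 15,
        CURLOPT_USERAGENT      => 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) '
                                . 'AppleWebKit/537.36 (KHTML, like Gecko) '
                                . 'Chrome/79.0.3945.117 Safari/537.36',
    ]);
    $body = curl_exec($ch);
    $ok   = ($body !== false) && curl_errno($ch) === 0;
    curl_close($ch);
    return $ok && file_put_contents($dest, $body) !== false;
}
```

Call it as download_to_file($url, __DIR__ . '/catalog.csv') to save the CSV next to the script.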

How can I get HTML data from a site that uses Cloudflare?

First of all, sorry for my bad English.
I'm trying to get the HTML code from https://www.uasd.edu.do/, but when I try to fetch it with the PHP function file_get_contents() or with cURL, it simply doesn't work.
With file_get_contents() I get a 403 HTTP error. With cURL, I get a CAPTCHA page whose challenge never actually appears.
I tried sending cookies with cURL and setting a user agent, but I'm still stuck at the same point. I also tried to find the real IP address of the site, without success. Please help me, I'd really appreciate it!
The code:
$curl = curl_init();
if (!$curl) {
    die("Is not working");
}
curl_setopt($curl, CURLOPT_URL, "https://uasd.edu.do/");
curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0');
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_FAILONERROR, true);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 10);
curl_setopt($curl, CURLOPT_TIMEOUT, 50);
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
$html = curl_exec($curl);
echo $html;
curl_close($curl);
The output:
Please enable cookies.
One more step: please complete the security check to access www.uasd.edu.do.
Why do I have to complete a CAPTCHA? Completing the CAPTCHA proves you are a human and gives you temporary access to the web property.
What can I do to prevent this in the future? If you are on a personal connection, like at home, you can run an anti-virus scan on your device to make sure it is not infected with malware. If you are at an office or shared network, you can ask the network administrator to run a scan across the network looking for misconfigured or infected devices.
Cloudflare Ray ID: 4fcbf50d18679f88 • Your IP: ... • Performance & security by Cloudflare
Note: The "please enable cookies" appear using and not using cookies.

PHP cURL returns different results than the same URL in a browser

I am using PHP Curl with this code:
curl_setopt($ch, CURLOPT_URL, 'https://www.segundamano.mx/anuncios/ciudad-de-mexico/alvaro-obregon/florida/renta-inmuebles/departamentos?precio=0-10000');
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookies);
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookies);
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
//curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1; rv:22.0) Gecko/20100101 Firefox/22.0");
$uagent = 'Mozilla/5.0 (Windows NT 6.1; rv:22.0) Firefox/22.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/36.0.1985.125 Chrome/36.0.1985.125 Safari/537.36';
curl_setopt($ch, CURLOPT_USERAGENT, $uagent);
curl_setopt($ch, CURLOPT_REFERER, 'http://www.google.com');
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
My question is: why does PHP cURL give a different result than opening the same URL in a browser?
PHP cURL returns a large body that contains this line:
In Spanish: "No encontramos resultados para tu búsqueda..."
In English: "There are no results for your search..."
What is happening with this URL? How can I cURL this URL from code and get the real results, as in the browser?
Help me please!
Thanks!
The link you have mentioned is a single-page application: a site that interacts with the user by dynamically rewriting the current page rather than loading entire new pages from the server.
This particular website is built with Vue.js.
Please find the below links for more details.
https://en.wikipedia.org/wiki/Single-page_application
https://vuejs.org/
Because JavaScript is the root of all evil: the website fetches the search results you want via AJAX after the page has successfully loaded. Just open the "Network" tab of your browser's inspection tool and watch the requests flying around.
Fun part: the website does have a (seemingly authorized) API that it talks to; maybe you can try that? https://webapi.segundamano.mx/nga/api/v1.1/public
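If you try that API route, a minimal JSON fetch helper might look like this. A sketch: fetch_json is my own name, and whether the endpoint accepts unauthenticated requests is an unverified assumption.

```php
<?php
// Fetch a URL and decode the body as JSON. Sketch; whether the
// segundamano API answers anonymous requests is untested here.
function fetch_json(string $url): ?array
{
    $ch = curl_init();
    curl_setopt_array($ch, [
        CURLOPT_URL            => $url,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_TIMEOUT        => 10,
        CURLOPT_HTTPHEADER     => ['Accept: application/json'],
    ]);
    $body = curl_exec($ch);
    curl_close($ch);
    if ($body === false) {
        return null; // transport error
    }
    $data = json_decode($body, true); // decode to associative arrays
    return is_array($data) ? $data : null;
}
```

Inspecting the "Network" tab first will show you which query parameters the site itself sends to the API.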

Using cURL and PHP Simple HTML DOM Parser gives me a 500 Internal Server Error

I am using PHP Simple HTML DOM Parser; you can read more about it here: http://simplehtmldom.sourceforge.net/
I am also using cURL, because the web address http://www.sportsdirect.com does not load with the normal SimpleHTMLDom examples.
So here is the code I use:
<?php
include_once('../simple_html_dom.php');
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'http://www.sportsdirect.com/');
curl_setopt($curl, CURLOPT_HEADER, 0);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 10);
$str = curl_exec($curl);
curl_close($curl);
$html= str_get_html($str);
echo $html->plaintext;
?>
When I try to run the script, it gives me: 500 Internal Server Error
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, webmaster#superweb.bg and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
This script just does not work for this web address: when I load another website, like mandmdirect.com, it works fine! Where is my mistake, and how can I make this work?
Try this for the cURL fetch; it works for me in this case. This is a standard set of cURL options and settings I use that works well:
include_once('simple_html_dom.php');
$url = "http://www.sportsdirect.com";
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($curl, CURLOPT_SSLVERSION, 3); // forces SSLv3, which is obsolete; modern servers reject it, so this line is usually better omitted
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 10);
curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
$str = curl_exec($curl);
curl_close($curl);
$html = str_get_html($str);
echo $html->plaintext;
I believe the issue with your original cURL settings was the missing user agent. Try the same script with the CURLOPT_USERAGENT line commented out to see what I mean.
Many servers have firewall settings that reject cURL requests made without a proper user agent. The user agent set here is a fairly generic Firefox one, so feel free to experiment with something else.
Try setting a Host header in the request. It's possible that the target domain is on a shared server, and without a Host header, the server doesn't know what to do.
curl_setopt($curl, CURLOPT_HTTPHEADER, array('Host: www.sportsdirect.com'));

checking US-only website status using curl

[Problem]
There is a website that works for US citizens only (it shows info "A" to US visitors and info "B" to non-US visitors). I need to constantly monitor this webpage for changes to the "A" info, and an email should be sent when something changes. How do I do it? The problem is that I live in Europe.
[Already accomplished]
I have a Linux server, a daemon, and a cURL PHP script that already accomplishes this task. It works great for all non-US-only websites.
[Question]
One way to solve the problem might be to rent a US server, but that is not acceptable and would cost a lot. Another way might be to use a US VPN on my server, but for various reasons I won't do that. Is there a way to run cURL through a proxy, maybe? Any ideas?
Current code is the following:
function getrequest($url_site /*, $post_data */) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url_site);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3');
    curl_setopt($ch, CURLOPT_FAILONERROR, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_COOKIEJAR, COOKIE_FILE); // cookie management
    curl_setopt($ch, CURLOPT_COOKIEFILE, COOKIE_FILE);
    $result = curl_exec($ch); // run the whole process
    curl_close($ch);
    return $result;
}
and
$sleep_time = 1;
$login_wp_url = "http://www.mysite.com";
set_time_limit(60*10);
$result = getrequest($login_wp_url);
How do I grab contents from a US-only website?
P.S. To get the idea of what I mean, try visiting Hulu from a European country.
P.P.S. It's not Hulu, and it's not homework.
Many cloud service providers, e.g. Heroku and Amazon, offer their smallest instances for free. You could simply set one up, make sure you are provisioned on a US-located server, and run your script there.
Another possibility would be to use a (free) proxy for these requests. Here is a list of free US proxy servers: http://www.xroxy.com/proxy-country-US.htm
curl_setopt($ch, CURLOPT_PROXY, "http://160.76.xxx.xxx:8080");
curl_setopt($ch, CURLOPT_PROXYPORT, 8080);
curl_setopt($ch, CURLOPT_PROXYUSERPWD, "xxx:xxx");
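Folding those proxy options into a request helper might look like the sketch below. The proxy address and credentials are placeholders to be replaced with a real US proxy; I have made the proxy optional so the function still works without one, and dropped the POST flag since no post data is being sent.

```php
<?php
// Simple GET helper with an optional HTTP proxy. Sketch; the proxy
// address and "user:pass" credentials are placeholders.
function getrequest_via_proxy(string $url_site, ?string $proxy = null, ?string $proxy_auth = null)
{
    $ch = curl_init();
    curl_setopt_array($ch, [
        CURLOPT_URL            => $url_site,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_USERAGENT      => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; '
                                . 'rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3',
    ]);
    if ($proxy !== null) {
        curl_setopt($ch, CURLOPT_PROXY, $proxy);  // e.g. "http://160.76.xxx.xxx:8080"
        if ($proxy_auth !== null) {
            curl_setopt($ch, CURLOPT_PROXYUSERPWD, $proxy_auth); // "user:pass"
        }
    }
    $result = curl_exec($ch); // false on failure, body string on success
    curl_close($ch);
    return $result;
}
```

With a US proxy passed in, the response should be the "A" version of the page, which you can then diff against the previous fetch to trigger the email.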
