I'm currently trying to figure out how I can rebuild the cache on my site.
I have a cache plugin that works perfectly, but I need my cron script to "simulate" a real request so the cache gets rebuilt (the plugin does not have this function built in).
I have a while loop that gets all the URLs, and with fopen AND file_get_contents I have been able to generate a cache, BUT it does not contain everything (it can't be used as a cache).
So basically, I need a function/method that "actually loads the URL" but can still be run from a cron script.
Can someone help me out here? Do I need to make a raw HTTP request instead, etc.? I'm lost.
Note: if I open the website in my browser, the cache is generated and is correct.
With fopen or file_get_contents, the script hits the site, but it does not generate a valid cache! :-)
Could something like this work:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.mywebsite.com/");
curl_setopt($ch, CURLOPT_HEADER, true);
// Note: CURLOPT_NOBODY turns this into a HEAD request, so the page body is
// never downloaded, which may not be enough to make the plugin build the cache
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$data = curl_exec($ch);
curl_close($ch);
echo $data; // don't echo, it's a cron script
?>
You can use the ob_ functions to cache when your pages are accessed, rather than having to crawl them. A fleshed-out version of the idea (the if-condition from the pseudo-code is now real):
$cached_file_path = 'some path for cached file';
$max_age = 24 * 60 * 60; // regenerate daily; could also be every 3 hours, or whatever

// filemtime() tells us when the cache file was last written, so we can
// tell whether it is missing or stale
if (!file_exists($cached_file_path) || time() - filemtime($cached_file_path) > $max_age)
{
    ob_start(); // starts recording the output in a buffer

    // TODO: do all your database reads and echos here

    $contents = ob_get_contents(); // gets all the output for you to save
    file_put_contents($cached_file_path, $contents); // save contents to the cache file

    ob_end_flush(); // returns the content to the client
    // this means we can cache the file and send it to the client at the same
    // time, so this doesn't have to be run separately
    exit;
}
else
{
    // cache is current, so just serve the cached file
    include($cached_file_path);
    exit;
}
I have tried different approaches...
Well, if I put the direct link into my cron job manager, it does refresh the cache.
There must be some way I can make my script emulate that behavior?
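If the cron manager's plain request works, then a full GET that actually downloads the page body should work from your own script too. A minimal sketch, assuming the plugin only needs an ordinary browser-like request (the URL list and User-Agent string here are placeholders):
<?php
// Hypothetical URL list; in practice this comes from your existing while loop
$urls = array("http://www.mywebsite.com/", "http://www.mywebsite.com/about/");

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // fetch the full body, not just headers
    // Some cache plugins ignore requests that don't look like a real browser,
    // so send a browser-like User-Agent (an assumption; adjust as needed)
    curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (compatible; CacheWarmer/1.0)");
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow any redirects
    curl_exec($ch); // discard the result; the point is the side effect on the cache
    curl_close($ch);
}
?>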
From a PHP page, I have to do a GET to another PHP file.
I don't care to wait for the response of the GET, or to know whether it was successful or not.
The called file could also take 5-6 seconds to finish, so I don't know how to handle the GET timeout given the above.
The code is this:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://mywebsite/myfile.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, false);
curl_setopt($ch, CURLOPT_TIMEOUT, 1); // caps the whole transfer at 1 second, but curl_exec() still blocks for up to that long
$content = trim(curl_exec($ch));
curl_close($ch);
For the first part (where you don't need to wait for the response), you can start a new background process, and below that write the code that redirects to another page.
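A common way to fire off the request without waiting (a sketch, assuming the target is reachable over plain HTTP on port 80) is to open the socket yourself, send the request headers, and close the connection without reading the response:
<?php
// Fire-and-forget GET: send the request and return immediately, without
// waiting for myfile.php to finish its 5-6 seconds of work.
function fire_and_forget_get($host, $path)
{
    $fp = @fsockopen($host, 80, $errno, $errstr, 1); // 1-second connect timeout
    if (!$fp) {
        return false; // we don't care about failures, but surface them anyway
    }
    $request  = "GET $path HTTP/1.1\r\n";
    $request .= "Host: $host\r\n";
    $request .= "Connection: Close\r\n\r\n";
    fwrite($fp, $request);
    fclose($fp); // close without reading; the server keeps processing
    return true;
}

fire_and_forget_get('mywebsite', '/myfile.php');
?>
Note that the called script may need ignore_user_abort(true) to keep running after the client disconnects.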
Yeah, you definitely shouldn't be creating a file on the server in response to a GET request. Even as a side-effect, it's less than ideal; as the main purpose of the request, it just doesn't make sense.
If you were doing this as a POST, you'd still have the same issue to work with, however. In that case, if the action can't be guaranteed to happen quickly enough to be acceptable in the context of HTTP, you'll need to hive it off somewhere else. E.g. make your HTTP request send a message to some other system which then works in parallel whilst the HTTP response is free to be sent back immediately.
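As a sketch of that hand-off (the spool directory and job format here are purely hypothetical), the HTTP handler can just record the work and respond immediately, leaving a cron-driven worker to do the slow part:
<?php
// In the HTTP handler: write a job file and return right away.
$job = array('action' => 'generate_file', 'requested_at' => time());
file_put_contents(
    '/var/spool/myapp/' . uniqid('job_', true) . '.json',
    json_encode($job)
);
// ...send the HTTP response immediately; a separate cron worker scans
// /var/spool/myapp/ and performs the slow work in parallel.
?>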
I have a file called "receive_data.php" on my server that receives POST data from Excel VBA constantly throughout the day. The file inserts the data into a database as a log of which tools and reports are being used throughout the day at the business I work for.
I have now created a report that is generated onscreen when php file "show_data.php" is viewed.
When show_data.php is viewed, ideally I would like to 'ping' "receive_data.php" with similar values as below:
$_POST['code'] = 1;
$_POST['r_id'] = 24;
The company I work for uses very old browsers, therefore using something like AngularJS is not an option as it can be unreliable in anything older than IE9.
I could include "receive_data.php" within the PHP file, but I'd still need some way to hand the values in as POST variables.
I could modify "receive_data.php" to accept the variables some other way, however...
Ultimately, I do not want to modify "receive_data.php" in any way, if at all possible.
If this is possible, then great! If not, I will have to look at modifying the file, but given the intensive business needs, editing it is the worse option for us.
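One way to do exactly what the question sketches, without touching receive_data.php (assuming it reads $_POST directly rather than the raw request body), is to fake the superglobal and include the script in-process:
<?php
// Hypothetical in-process alternative: populate $_POST by hand, then include
// the untouched script. Only works if receive_data.php reads $_POST directly.
$_POST['code'] = 1;
$_POST['r_id'] = 24;
include 'receive_data.php';
?>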
You can try something like this:
if ($curl = curl_init()) {
    curl_setopt($curl, CURLOPT_URL, '<path_to_your_script>/receive_data.php');
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_POST, true);
    curl_setopt($curl, CURLOPT_POSTFIELDS, "code=1&r_id=24"); // the values from the question
    $out = curl_exec($curl);
    echo $out;
    curl_close($curl);
}
I want to be able to go to mydoma.in/tunnel.php?file=http://otherdoma.in/music.mp3 and have the data from http://otherdoma.in/music.mp3 streamed to the client.
I tried doing this via header();, but it redirects instead of "tunnelling" the data.
How can I do this?
Use cURL for streaming:
<?php
$url = $_GET["file"]; // careful: an unvalidated URL lets anyone proxy arbitrary content through your server
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, 0); // don't echo the response headers into the output
curl_setopt($ch, CURLOPT_BUFFERSIZE, 256); // small buffer, so output is passed along as it arrives
curl_exec($ch); // without CURLOPT_RETURNTRANSFER, cURL writes the body straight to the client
curl_close($ch);
?>
If the files are small, you might be able to use file_get_contents(). Otherwise, you should probably use cURL: fetch the URL from the GET variable "file", save it to a local temporary location with PHP, and then use header() to redirect the client to the local file. Deleting the temporary file is the only real issue, as there isn't a good way to determine when the client has finished downloading it. You could sleep or delay the file removal, but you may find it's a better option to use a cron job to clean up all of the temporary files later.
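A sketch of that approach (assuming a tmp/ directory under the web root that is served at /tmp/; both are placeholders):
<?php
// Download the remote file to a web-accessible temp location, then redirect
// the client to it. Cleanup of old files is left to a cron job, as noted above.
$url = $_GET["file"];
$local_name = 'tmp/' . md5($url) . '.mp3';

$fp = fopen($local_name, 'wb');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp); // stream the download straight into the file
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
curl_close($ch);
fclose($fp);

header('Location: /' . $local_name); // the client now fetches the local copy
exit;
?>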
Have your PHP script pull the remote content:
$data = file_get_contents($remote_url);
And then just spit it out:
echo $data;
Or simply:
echo file_get_contents($remote_url);
You might have to add some headers to indicate the content type.
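For example (assuming an MP3, as in the question; adjust the MIME type to whatever you are actually proxying):
<?php
$data = file_get_contents($remote_url);
header('Content-Type: audio/mpeg');         // tell the client what it is getting
header('Content-Length: ' . strlen($data)); // optional, lets players show progress
echo $data;
?>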
Alternatively, you could configure a proxy with something like nginx -- this will allow you to rewrite particular URLs to a remote site and then serve them as local, no coding required.
I am attempting to use cURL to connect to a page like this: https://clients.mindbodyonline.com/asp/home.asp?studioid=851 with the following code:
<?php
$curl_handle = curl_init();
curl_setopt($curl_handle, CURLOPT_URL, 'https://clients.mindbodyonline.com/asp/home.asp?studioid=851');
//curl_setopt($curl_handle, CURLOPT_CONNECTTIMEOUT, 2);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl_handle, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
curl_setopt($curl_handle, CURLOPT_COOKIEJAR, '/tmp/cookies.txt');
curl_setopt($curl_handle, CURLOPT_COOKIEFILE, '/tmp/cookies.txt');
//curl_setopt($curl_handle, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($curl_handle, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($curl_handle, CURLOPT_FOLLOWLOCATION, 1);
//curl_setopt($curl_handle, CURLOPT_HEADER, 1);
//curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
//curl_setopt($curl_handle, CURLOPT_POST, 1);
$buffer = curl_exec($curl_handle);
curl_close($curl_handle);

if (empty($buffer))
{
    print "Sorry, the booking system appears to be unavailable at this time.<p>";
}
else
{
    print $buffer;
}
?>
I've fiddled with the settings, and the only three responses I get are:
Nothing is loaded and the error message is called
A redirect to /asp/home... locally
Returns '1' and that's all
Thanks for your time!
1. Nothing is loaded: Your code works fine, but they're ignoring you. It's possibly some anti-hammering functionality on their end. You can always change your code to sleep for a while and retry a few times.
2. A redirect to /asp/home... locally: Your code is working, but the page they return contains JavaScript that your browser then executes, redirecting you to a local page that doesn't exist. To see the returned code without running it, do:
print htmlspecialchars($buffer);
3. Returns '1' and that's all: That happens if you don't use
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, true);
Uncomment it.
Unfortunately, even then you won't be able to read this particular site as-is, because the page is drawn by JavaScript, which cURL can't run.
Perhaps the cookies you are using are invalid (ie, the session has expired).
If you load that page without cookies, you get a POST form. Perhaps you should send that POST data first, get the cookies and then use those cookies for the rest of the session.
Also, you need to set CURLOPT_RETURNTRANSFER to 1 in order to use $buffer like that. If not, cURL will output the page it gets directly, which probably explains #2.
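A sketch of that two-step flow (the form field names are hypothetical; inspect the actual POST form for the real ones):
<?php
$jar = '/tmp/cookies.txt';

// Step 1: submit the form so the site hands out session cookies.
$ch = curl_init('https://clients.mindbodyonline.com/asp/home.asp?studioid=851');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEJAR, $jar); // cookies get written here
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, 'field1=value1&field2=value2'); // hypothetical fields
curl_exec($ch);
curl_close($ch);

// Step 2: reuse those cookies for the rest of the session.
$ch = curl_init('https://clients.mindbodyonline.com/asp/home.asp?studioid=851');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEFILE, $jar); // cookies get read from here
$buffer = curl_exec($ch);
curl_close($ch);
?>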
I have a PHP script on a server that generates XML data on the fly, say with Content-Disposition: attachment or with a simple echo; it doesn't matter. I'll name this file www.something.com/myOwnScript.php
On another server, in another PHP script, I want to be able to get this output as a string (using the path www.something.com/myOwnScript.php, to avoid "saving the file to disk") and then manipulate the XML data that the script generates.
Is this possible without using web services?
Are there security implications?
Thanks
Simple answer, yes:
$output = file_get_contents('http://www.something.com/myOwnScript.php');
echo '<pre>';
print_r($output);
echo '</pre>';
If you want more control over how you request the data (spoof headers, send post fields etc.) you should look into cURL.
If you're on a shared host, you might find that you cannot use file_get_contents on remote URLs. This is mainly because it is controlled by the same permission (allow_url_fopen) that allows you to include remote files. Anyway...
If you're stuck in that situation, you might be able to use cURL:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://example.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the page instead of printing it
$output = curl_exec($ch);
curl_close($ch);
?>
It is more code, but it's still simple. You have the added benefit of being able to post data, set headers, cookies... anything you could do with a highly configurable browser. This makes it useful when people attempt to block bots.
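For instance, a request dressed up with browser-like headers (all of the header values here are just illustrative):
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://example.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"); // illustrative UA
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Accept: text/html,application/xhtml+xml',
    'Accept-Language: en-US,en;q=0.9',
));
curl_setopt($ch, CURLOPT_COOKIE, 'name=value'); // send a cookie, if needed
$output = curl_exec($ch);
curl_close($ch);
?>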