I have a simple question. I'd like to write a PHP function that checks the database rows and, if rows were affected by the last-run query, executes an internal PHP file. The catch is that I want it to check the rows and the timestamp at the same time, so that if the timestamp is different and the row count is different, it executes the PHP file.
The file in question is a SQL database backup, so I need it to execute only if there was a change in the database and the timestamp is older than 43200 seconds (half a day). This would back up the database if there was activity on the site (one activity would back up once, two activities would back up twice, and anything more than that would be ignored); if not, it would do nothing. I hope I'm explaining it right.
A cron job is out of the question, since this depends on the database changes, not just the time.
The code I'm using is like this (without checking the database rows), and it is only reached when a customer accesses the shopping cart checkout or account page:
<?php
$dbbackuplog = '/path/to/backuptime.log';
if (file_exists($dbbackuplog)) {
    $lastRun = file_get_contents($dbbackuplog);
    if (time() - $lastRun >= 43200) {
        // It's been more than 12 hours, so run the backup
        $cron = file_get_contents('/file.php');
        // Update backuptime.log with the current time
        file_put_contents($dbbackuplog, time());
    }
}
?>
I appreciate any input or suggestions.
First of all, you cannot run anything with file_get_contents. That function simply reads the bare contents of the file you ask for and under no circumstances will it run any code. If you want to run the code, you want include or require instead.
Second, your idea about not just triggering but also fully executing backups while a customer is performing an action is, well, I'm not going to pull any punches, terrible. There's a reason why people use cron for backups (actually, more than one reason) and you should follow that example. That's not to say that you are not allowed to affect the behavior of the cron script based on dynamic factors, but rather that the act of taking a backup should always be performed behind the scenes.
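That said, here is a minimal sketch of the mechanics asked about: include instead of file_get_contents, with a row count checked alongside the timestamp. The $pdo connection, the orders table, and the "time|count" log format are my assumptions, not part of the original code.

<?php
// Run the backup only if 12+ hours have passed AND the row count changed.
$dbbackuplog = '/path/to/backuptime.log';

$lastRun = 0;
$lastCount = -1;
if (file_exists($dbbackuplog)) {
    $parts = explode('|', file_get_contents($dbbackuplog));
    $lastRun = (int) $parts[0];
    $lastCount = isset($parts[1]) ? (int) $parts[1] : -1;
}

// Count rows in whichever table reflects site activity (assumed: orders).
$count = (int) $pdo->query('SELECT COUNT(*) FROM orders')->fetchColumn();

if (time() - $lastRun >= 43200 && $count !== $lastCount) {
    include '/file.php'; // actually executes the backup script
    file_put_contents($dbbackuplog, time() . '|' . $count);
}
?>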
Hello guys, I need advice with this situation:
For example, I have a free classified posting website where a user can post classified ads.
An ad would be listed on the website for a maximum of 30 days; then, on the 31st day, it would automatically be deleted from the database, along with its images on the server. The question is:
$db_ad_tbl('id', 'user_id', 'title', 'description', 'timestamp');
What is the right approach for doing this?
Can anyone suggest tutorials/links that cover the same situation?
Another approach that does not require cron is to use MySQL events. If you can come up with the correct query, you can set it as a recurring event. phpMyAdmin 4.0.x supports event handling from its interface.
See http://dev.mysql.com/doc/refman/5.5/en/events.html.
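As a sketch, the event could be created once through PDO; the table and column names below come from the question, and the connection details are placeholders. Note that the MySQL event scheduler must be enabled (SET GLOBAL event_scheduler = ON), and an event can only remove the database rows; deleting the image files from disk still needs a script.

<?php
// One-time setup: a daily event that purges ads older than 30 days.
$pdo = new PDO('mysql:host=localhost;dbname=classifieds', 'user', 'pass');

$pdo->exec("
    CREATE EVENT IF NOT EXISTS purge_expired_ads
    ON SCHEDULE EVERY 1 DAY
    DO
        DELETE FROM db_ad_tbl
        WHERE `timestamp` < NOW() - INTERVAL 30 DAY
");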
As Barmar has noted, you should add a cron job for this task. You can write a simple PHP script and then add it to your crontab with something like:
1 0 * * * php -f /path/to/file/clean.php
This means that the PHP file will be executed every day at one minute past midnight.
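A sketch of what clean.php might contain, assuming the table layout from the question and image files named after the ad id (both assumptions):

<?php
// clean.php -- delete ads older than 30 days, plus their images.
$pdo = new PDO('mysql:host=localhost;dbname=classifieds', 'user', 'pass');

$ids = $pdo->query(
    "SELECT id FROM db_ad_tbl WHERE `timestamp` < NOW() - INTERVAL 30 DAY"
)->fetchAll(PDO::FETCH_COLUMN);

foreach ($ids as $id) {
    // Remove the ad's images first, then the row itself.
    foreach (glob("/path/to/images/{$id}_*") as $file) {
        unlink($file);
    }
    $pdo->prepare("DELETE FROM db_ad_tbl WHERE id = ?")->execute([$id]);
}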
Just a few notes:
the file should not be in your web folder
you might want to do some tests and report errors by email (such as being unable to connect to the DB)
if you build more of these, you should keep a list of them somewhere in case you switch servers (or the server dies)
if you use a config file (e.g. to store your DB connection details), you should make sure it is accessible to the user the cron job runs as.
Most hosting platforms allow crontab editing and run the jobs as the same user the web server runs as, so it should not be a problem.
There is really no other good solution to this than creating a cron job, unless you check the timestamp every time you fetch the data from the database. You can then delete any rows that are past their expiry (DELETE FROM my_table WHERE `timestamp` < [expiry cutoff]). This is of course risky, since you will have to include the timestamp check every time you query, and you risk storing everything forever if no expired resource is ever requested from the database.
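A sketch of that check-on-read approach, reusing the question's table name (the cutoff arithmetic is an assumption based on the 30-day requirement):

<?php
// Purge expired rows whenever the data is fetched, instead of via cron.
$pdo->exec(
    "DELETE FROM db_ad_tbl WHERE `timestamp` < NOW() - INTERVAL 30 DAY"
);
// ...then run the normal SELECT for the page.
$ads = $pdo->query("SELECT * FROM db_ad_tbl")->fetchAll();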
OK, I know the title doesn't really tell you what my problem is, but I'll try now.
I am developing a game. People can subscribe their animals for a race. That race starts at a specific time, and ALL users can subscribe to it. The calculation of which animal is first, second, etc. happens in a PHP file that is executed every 2 minutes for about 1 hour, so there are 30 calculations. But of course this code is not connected to the logged-in user. The logged-in user can click the LIVE button to see the current result.
Example: there is a race at 17:00 later today. 15 animals are subscribed, from 4 players, and they can all check how their animals are doing.
I do not want someone to post the full code, but I want to know how I should let PHP code run for about 1 hour (execute code, sleep 2 minutes, new calculation, sleep 2 minutes, and so on) on my server, so that it is not connected to the user.
I thought about cron jobs, but I believe that is really not the solution for this.
Thank you for reading :p
Two approaches:
You use an algorithm which will always come to the same conclusion, regardless of when it is run and who runs it. You just define the starting parameters, then at any time you can calculate the result (or the intermediate result at any point in time between start and finish) when needed. So any user can at any time visit your site and the algorithm will calculate the current standings on the fly from some fixed starting condition.
Alternatively, you keep all data in a central data store and actually update the data at certain intervals; any user can request the current standings at any time, and the latest data from the datastore will be used. You will still need an algorithm with traits of the one described above, since you're presumably not running the simulation in real time: every x seconds, you run your calculations again, working out what is supposed to have changed since the last time you ran them.
In essence, any algorithm you use needs this approach. Even a "realtime" program simply keeps looping, changing values little by little from their previous state. The interval between these changes can be arbitrarily stretched out, to the point where you calculate nothing until it becomes necessary. In the meantime, you just store all the data you need in a database.
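A sketch of the first approach: the standings are a pure function of the starting parameters plus elapsed time, so any page request can recompute them with no background process. The per-animal seed and base speed fields are assumptions about what you store at subscription time.

<?php
// Deterministic standings: same inputs always give the same race.
function currentStandings(array $animals, int $raceStart, int $now): array
{
    // The race is 30 ticks of 120 seconds each (one hour total).
    $ticks = min(30, max(0, intdiv($now - $raceStart, 120)));

    $distances = [];
    foreach ($animals as $id => $animal) {
        mt_srand($animal['seed']); // fixed seed => reproducible "randomness"
        $distance = 0;
        for ($t = 0; $t < $ticks; $t++) {
            $distance += $animal['speed'] + mt_rand(-10, 10);
        }
        $distances[$id] = $distance;
    }
    arsort($distances); // leader first
    return $distances;
}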
Cron jobs are the right way, I think. Check this out if you are not so comfortable with the algorithm: How To: PHP with Cron job. Maybe you will have to use several cron jobs.
I am developing a website with a database where people can insert data (votes). I want to keep a counter in the header, like "x votes have been cast". But it is possible that there will be a lot of traffic on the website soon. Now I can do it with the query
SELECT COUNT(*) FROM `tblvotes`
and then display the number in the header, but then every time a user changes page, it will redo the query. So I am thinking maybe it is better to run the query once every 30 seconds (much less load on the MySQL server), but then I need to save its output somewhere (this shouldn't be so hard; I can write it to a text file?). But how can I let my website automatically run the query every 30 seconds and put the number in the file? I have no SSH access to the server, so I can't crontab it.
If there is something you don't understand, feel free to ask!
Simplest approach: write the result into a local text file. On every request, check whether the file's modification time is more than 30 seconds in the past; if so, update the file. To update, you should lock the file. While the file is being updated, other users who hit the 30-second condition should only read the currently existing file, to avoid race conditions.
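A sketch of that, with a non-blocking lock so concurrent requests fall back to the stale value ($pdo and the cache path are assumptions):

<?php
$cacheFile = '/path/to/votecount.txt';

if (!file_exists($cacheFile) || time() - filemtime($cacheFile) > 30) {
    $fp = fopen($cacheFile, 'c+');
    if (flock($fp, LOCK_EX | LOCK_NB)) {
        // We won the lock: refresh the cache.
        $count = $pdo->query('SELECT COUNT(*) FROM tblvotes')->fetchColumn();
        ftruncate($fp, 0);
        rewind($fp);
        fwrite($fp, $count);
        flock($fp, LOCK_UN);
    }
    // Requests that didn't get the lock just read the existing file.
    fclose($fp);
}
$votes = (int) file_get_contents($cacheFile);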
Hope that helps,
Stefan
Cron jobs can only run once a minute, at their fastest.
I think there is a better solution to this. You should make an aggregate table in which the statistical information is stored.
With a trigger on the votes table, you can do 'something' every time the table receives an INSERT statement.
The aggregate table will then store the most accurate information, which you then can query to display the count.
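One-time setup for that idea, run through PDO as a sketch (the vote_stats table and trigger name are assumptions):

<?php
// Seed the aggregate table with the current count, then keep it in
// sync with a trigger on every INSERT.
$pdo->exec("CREATE TABLE IF NOT EXISTS vote_stats (total INT NOT NULL)");
$pdo->exec("INSERT INTO vote_stats (total) SELECT COUNT(*) FROM tblvotes");
$pdo->exec("
    CREATE TRIGGER votes_after_insert AFTER INSERT ON tblvotes
    FOR EACH ROW UPDATE vote_stats SET total = total + 1
");
// The header then only needs: SELECT total FROM vote_stats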
A better solution would be to use some cache mechanism (e.g. APC) instead of files, if your server allows it.
If you can, you may want to look into using memcached. It allows you to set an expiry time for any data you add to it.
When you first run the query, write the result to memcached, keyed by the md5 of the query text. On subsequent requests, look for the data in memcached; if it has expired, redo the SQL query and write the result back to memcached.
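A sketch using the pecl Memcached extension (server address, TTL, and $pdo are assumptions):

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$sql = 'SELECT COUNT(*) FROM tblvotes';
$key = md5($sql); // cache key derived from the query text

$votes = $mc->get($key);
if ($votes === false) {
    // Cache miss or expired: hit the database and store for 30 seconds.
    $votes = (int) $pdo->query($sql)->fetchColumn();
    $mc->set($key, $votes, 30);
}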
Okay, so the first part of your question is basically about caching the result of the total votes to be included in the header of your page. It's a very good idea; here is one way to implement it...
If you cannot enable a crontab (even without SSH access you might be able to set this up using your hosting's control panel), you might be able to get away with using an external third-party cron job service. (Google has many results for this.)
Every time your cron job runs, you can create/update a file that simply contains some PHP arrays:
// Build a small PHP file that defines the cached data as an array.
// $arrayName holds the variable name to define; $yourData is the data.
$fileOutput = "<"."?php\n\n";
$fileOutput .= '$'.$arrayName.' = ';
$fileOutput .= var_export($yourData, true);
$fileOutput .= ";\n\n?".">";

// _path_to_cache_file is a placeholder constant for your cache path.
$handle = fopen(_path_to_cache_file, 'w+');
fwrite($handle, $fileOutput);
fclose($handle);
That will give you a PHP file that you can simply include() into your header markup and then you'll have access to the $yourData variable under the name $arrayName.
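Reading it back is then just an include; a sketch assuming $arrayName was 'voteData' when the file was written:

<?php
include '/path/to/cache_file.php'; // defines $voteData
echo $voteData['count'];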
I'm pulling in search query results using cURL, then iterating through a database to load additional queries, then storing the results back in the database. I'm running into hassles with PHP's maximum execution time, and I've tried setting that to a higher amount, which I think isn't working on my host, using this:
ini_set('max_execution_time', 600);
in the file that is run by cron, so it only changes the max time for the importing process.
The question is: would it be more efficient to store the result of each cURL connection in the database and have a secondary function that pulls the database results and sorts them into the relevant tables, running that secondary function every 10 minutes hypothetically, OR is it more efficient to pull the file and insert the sorted records in one go?
You can always find out whether your host allows you to modify max_execution_time by calling ini_get('max_execution_time') right after your call to ini_set().
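For example:

<?php
ini_set('max_execution_time', 600);
if ((int) ini_get('max_execution_time') !== 600) {
    // The host ignored the override; work in smaller batches instead.
    error_log('max_execution_time override not applied');
}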
Instead of storing the results in the database, I would put them into a directory. Name the files using microtime(true) (which makes it easy to pull the most or least recently written file). Then have a separate script that checks whether there are files in the directory and, if so, processes one of them. Have the scripts run on a 1-minute interval.
I will note that there is a possible race condition on processing a file if it takes more than one minute; however, even if it does take longer, a collision is unlikely to ever occur.
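A sketch of both halves of that idea (the queue directory path is an assumption):

<?php
// Producer: the cURL fetcher writes each raw result to its own file.
file_put_contents('/path/to/queue/' . microtime(true) . '.dat', $curlResult);

// Consumer: run every minute; process the oldest file, if any.
$files = glob('/path/to/queue/*.dat');
sort($files); // microtime-based names sort oldest-first
if ($files) {
    $raw = file_get_contents($files[0]);
    // ...parse $raw and insert the sorted records into the tables...
    unlink($files[0]);
}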
I am looking to make a script that uses the timestamp of the last time it ran as a parameter to retrieve results that have been updated since that time. We were thinking of creating a database table, having the script update it, and then retrieving the date from there, but I was looking for any other approach someone might suggest.
Using a database table to store the last run time is probably the easiest approach, especially if you already have that infrastructure in place. A nice thing about this method is that you can write the run time right before the script terminates, in case it runs for a long time and you do not want it to start up again too soon.
Alternatively, you could either write a timestamp to a file (which has its own set of issues) or attempt to fish it out of a log file (for example, the web access log, if the script is being run that way), but both of those seem harder.
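A sketch of the table-based approach (the script_state table and column names are assumptions):

<?php
// Fetch everything updated since the last recorded run.
$since = $pdo->query('SELECT last_run FROM script_state')->fetchColumn();

$stmt = $pdo->prepare('SELECT * FROM results WHERE updated_at > ?');
$stmt->execute([$since]);
$rows = $stmt->fetchAll();

// ...process $rows...

// Record the run time only after the work finishes, as noted above.
$pdo->prepare('UPDATE script_state SET last_run = NOW()')->execute();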
This might work: http://us.php.net/manual/en/function.fileatime.php (pass it $_SERVER['SCRIPT_FILENAME'])
Your best bet would be to store your last run time. You could do this in a database if you need historical information, or you could just have a file that stores it.
Depending on how you run the script, you may be able to see it in your logs, but storing it yourself will be easier.