SimplePie CRON job in Windows XAMPP - php

I am attempting to set up a CRON job on XAMPP in Windows but I am having some trouble. I have the timing set up so that it should run every 5 minutes, and this part works because I see my command prompt pop up every five minutes.
This is the code for the CRON.BAT file that runs. Both locations are correct for their respective files.
C:\xampp\php\php.exe C:\xampp\htdocs\codeigniter214\update_simplepie_cache.php
This is my update_simplepie_cache.php file. I'm pretty sure this is the part that's failing, because the MySQL database isn't updating even though the feeds have new items in them. I tried to follow the SimplePie instructions, but it hasn't worked so far.
<?php
$this->load->library('rss');
$feed = $this->rss;
$cache_location = 'mysql://root@127.0.0.1:3306/news_test'; // change to your cache location
$feed->set_feed_url('http://www.theverge.com/rss/frontpage', 'http://gigaom.com/tag/rss-feeds/feed/');
$feed->set_cache_location($cache_location);
$feed->set_cache_duration(9999999); // force cache to update immediately
$feed->set_timeout(5); // optional, if you have a lot of feeds a low timeout may be necessary
$feed->init();
?>
Can anyone see what I'm missing here? Thank you.
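For comparison, SimplePie can also be driven without the CodeIgniter wrapper, which matters here because a script run from the command line has no controller context and therefore no $this. Below is a minimal sketch of a standalone cron script; the require path is an assumption and should point at wherever the SimplePie library file actually lives in your install.
<?php
// Hypothetical standalone version of update_simplepie_cache.php for the cron job.
// The require path below is an assumption; adjust it for your setup.
require_once 'C:/xampp/htdocs/codeigniter214/application/libraries/simplepie.inc';

$feed = new SimplePie();
$feed->set_feed_url(array(
    'http://www.theverge.com/rss/frontpage',
    'http://gigaom.com/tag/rss-feeds/feed/',
));
// Cache location format is mysql://user:password@host:port/database
$feed->set_cache_location('mysql://root@127.0.0.1:3306/news_test');
$feed->set_cache_duration(9999999);
$feed->set_timeout(5);
$feed->init();
?>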

Related

PHP filemtime() not working - cache problem?

I am calling filemtime() from a PHP file executed by POST from a JavaScript/HTML app. It returns the same time stamp for a separate test HTML file every two seconds even when I edit the test file with a text editor and I can see its DTM change in the local file system.
If I reload the entire app (Ctrl+F5), the timestamp reported stays the same. At times (once after 4 hours) the time stamp changes, but I don't know what makes this happen.
The PHP part of my code looks like this:
// clear PHP's stat cache for this file before asking for its mtime
clearstatcache(true, $FileArg);
$R = filemtime($FileArg);
if ($R === false) {
    echo "error: file not found";
} else {
    echo $R;
}
This code is called by synchronous Ajax, given only its PHP filename, using setInterval every 2 seconds.
Windows 10 Home, Apache 2.4.33 running locally for HTTP access, PHP 7.0.30.
ADDED:
The behavior is the same in Firefox, Chrome, Opera, and Edge.
The results are being cached: http://php.net/manual/en/function.filemtime.php
Note: The results of this function are cached. See clearstatcache() for more details.
It almost sounds like Windows is doing some write caching...
stat() on the other hand has an additional note:
Note:
Note that time resolution may differ from one file system to another.
It may be worth checking the stat() output.
Edit:
Maybe it's a bug, or Windows not playing nice, but you could also do a shell_exec() with the Windows command that shows the file's DTM.
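A hedged sketch of that check: dir /T:W prints the last-written timestamp, and $FileArg is the same path the question's code already receives.
// Ask Windows directly for the file's last-write time, bypassing PHP's stat cache.
$out = shell_exec('dir /T:W ' . escapeshellarg($FileArg));
echo $out;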
News: it turned out to be an ordinary bug in my app. I copied my Ajax call and forgot to edit it to apply to the test file, so it applied to one of my app files instead, and the DTM only got updated when I edited that app file (FTAdjust.js).
When I specify the correct test file, the DTM updates just fine each time I edit it in another process.
It can sometimes be hard to find one's own bug even when it stares one in the face! I kept looking everywhere else but where the mistake was.
Is there a way to delete a thread from Stack Overflow, since it is irrelevant to others?

Backup MySQL Database to Dropbox

In the past I have received a lot of help from the SO community, so once I figured this out, I thought here's my opportunity to give back a little. Hopefully it helps someone.
The issue I was faced with: my core site is built on WordPress, with a separate database for an e-commerce section of the site, and I wanted to back up the entire site (all files, both databases, etc.) to Dropbox on a daily basis.
After a lengthy search, I couldn't find anything that did exactly what I was looking for.
Disclaimer: You don't need to be running WordPress or an e-commerce site for this to work. It will work on any MySQL database(s) and requires PHP.
I came across the WordPress Backup to Dropbox plugin, which got me about 90% there. The plugin allowed me to back up all the files on the site plus it does a WordPress database backup at a frequency you schedule.
The problem is that the plugin only does a backup of the WordPress database, but not my e-commerce database.
I also found a MySQL backup to Dropbox tutorial (credit where it's due), which some of the code below is based on. It is a great tutorial, but I wanted to back up and delete the backup at different times; the tutorial backed up and deleted everything at the same time.
The solution I came up with is not specific to WordPress or an e-commerce site. Anyone who has a MySQL database and can run PHP should be able to benefit from this, perhaps with a few tweaks to my answer, and still accomplish the end result.
To store a backup of the e-commerce database, I created a folder in my site's root directory (/temp - call it whatever you want). Then I had to actually create the database backup. Open up a text editor and create a file called backup_dropbox.php.
backup_dropbox.php
<?php
// location of your /temp directory relative to this file. In my case this file is in the same directory.
$tempDir = "";
// username for e-commerce MySQL DB
$user = "ecom_user";
// password for e-commerce MySQL DB (single quotes so the $$ is not interpolated as a variable)
$password = 'ecomDBpa$$word';
// e-commerce DB name to backup
$dbName = "ecom_db_name";
// e-commerce DB hostname
$dbHost = "localhost";
// e-commerce backup file prefix
$dbPrefix = "db_ecom";
// create backup sql file
$sqlFile = $tempDir.$dbPrefix.".sql";
$createBackup = "mysqldump -h ".$dbHost." -u ".$user." --password='".$password."' ".$dbName." > ".$sqlFile;
exec($createBackup);
//to backup multiple databases, copy all of the above code for each DB, rename the variables to something unique, and set their values to whatever is appropriate for the different databases.
?>
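As an alternative to copy-pasting the block above for each database (per the comment in the script), a small loop can dump several databases in one pass. This is only a sketch; the hostname, credentials, and database names below are placeholders.
<?php
// Hypothetical multi-database variant of backup_dropbox.php.
$tempDir = "";
$dbHost = "localhost";
$user = "backup_user";
$password = 'backupPa$$word'; // single quotes so $$ is not interpolated
// database name => backup file prefix
$databases = array("ecom_db_name" => "db_ecom", "wp_db_name" => "db_wp");

foreach ($databases as $dbName => $dbPrefix) {
    $sqlFile = $tempDir.$dbPrefix.".sql";
    $cmd = "mysqldump -h ".escapeshellarg($dbHost)
         . " -u ".escapeshellarg($user)
         . " --password=".escapeshellarg($password)
         . " ".escapeshellarg($dbName)
         . " > ".escapeshellarg($sqlFile);
    exec($cmd);
}
?>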
Now this script should create a backup of the database "ecom_db_name" whenever it is run. To get it to run on a scheduled interval (I want it to run just a couple of minutes before my WordPress backup starts at 7am), you can either use WP-Cron (if your site gets enough traffic to reliably trigger it to run at the right time) or schedule a cron job.
I am no expert on cron jobs and these types of commands, so there may be a better way. I have used this on two different sites and run them two different ways. Play around with what works best for you.
The first way is on a directory that is not password protected, the second is for a password protected directory. (Replace username and Password with your username and password, and obviously set example.com/temp/backup_dropbox.php to wherever the file resides on your server).
Cron Job to run backup_dropbox.php 5 minutes before WP backup
55 6 * * * php /home/webhostusername/public_html/temp/backup_dropbox.php
OR
55 6 * * * wget -q -O /dev/null http://username:Password@example.com/temp/backup_dropbox.php
Now the cron job is set up to run backup_dropbox.php and create my database backup every day at 6:55am. The WordPress to Dropbox backup that starts at 7am usually takes about 5-6 minutes, but could take a little longer.
I want to delete my .sql backup files after they have successfully been backed up to Dropbox so they are not sitting out there forever for someone to somehow open/download the database file.
Fire up the text editor again, and create another file called clr_bkup.php.
clr_bkup.php
<?php
$tmpDir = "";
//delete the database backup file
unlink($tmpDir.'db_ecom.sql');
// if you had multiple DB backup files to remove just copy the line above for each backup, and replace 'db_ecom.sql' with your DB backup file name
?>
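If there are several backup files to clean up, a hypothetical variant can remove every .sql file in the temp directory instead of listing each one by name:
<?php
// Remove every .sql backup in the temp directory.
$tmpDir = "";
foreach (glob($tmpDir."*.sql") as $backupFile) {
    unlink($backupFile);
}
?>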
Since the WordPress backup takes a few minutes to finish up, I want to run a cron job to execute clr_bkup.php at 10 past 7, which should give it enough time. Again, the first cron job below is for an unprotected directory, and the second for a password protected directory.
Cron Job to run clr_bkup.php 10 minutes after WP backup starts
10 7 * * * php /home/webhostusername/public_html/temp/clr_bkup.php
OR
10 7 * * * wget -q -O /dev/null http://username:Password@example.com/temp/clr_bkup.php
Sequence of events
To help wrap your head around what's going on, here's the timeline:
6:55am: Cron Job is scheduled to run backup_dropbox.php, which creates a backup file of my database.
7:00am: WordPress Backup to Dropbox runs, and backs up all files that have changed since the last backup, which includes my 5 minute old, newly created database backup.
7:10am: By now the Dropbox backup has finished up, so the Cron Job is scheduled to run clr_bkup.php, which removes the backup file from the server.
Variables, Notes, and Misc. Info
Timing
The first thing that hung me up was getting the timing right. For simplicity, I used the times in the example above as if everything was happening in the same time zone. In reality, my web host's server is in the US West Coast, while my WordPress timezone is set to the US East Coast (a 3 hour difference). My actual cron jobs are set to run 3 hours earlier (server time) than what is displayed above. This will be different for everyone. The best bet is to know the time difference up front.
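If you are not sure what time the server thinks it is, a quick throwaway check from PHP makes the cron offset easier to work out (this snippet is not part of the backup scripts):
<?php
// Print the server's configured timezone and current local time.
echo date_default_timezone_get()."\n";
echo date('Y-m-d H:i T')."\n";
?>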
Run Backup with a Time Check
In the directory that is not password protected, I wanted to keep the backup_dropbox.php script from running at any other time of the day than 6:55am (by visiting it in a browser at 10am, for example). I included a time check at the beginning of the backup_dropbox.php file, which checks whether it is 6:55am and, if not, refuses to execute the rest of the code. I modified backup_dropbox.php to:
<?php
$now = time();
// use 24-hour format ('H:i') so that 6:55pm does not also match '06:55'
$hm = date('H:i', $now);
if ($hm != '06:55') {
    echo "error message";
} else {
    // DB BACKUP code from above goes here
}
?>
I suppose you could also add this to the clr_bkup.php file to only let it delete the backup files at 7:10am, but I didn't really see the need since the only time clr_bkup.php will do anything is between 6:55-7:10am anyhow. Up to you though if you decide to go that route.
Not on WordPress?
There are a number of free and paid services that will backup your website either to Dropbox or another similar service like Google Drive, Amazon S3, Box, etc., or some will store the files on their servers for a fee.
Backup Machine, Codeguard, Dropmysite, Backup Box, or Mover to name a few.
Want Redundant Offsite Backups?
There are plenty of services that will allow you to automatically create remote redundant backups on any of the cloud storage sites listed above.
For example if you backup your site to Dropbox, you can use a service called If This Then That (IFTTT) to automatically add files uploaded to a particular Dropbox folder to Google Drive. That way should Dropbox ever have an issue with their servers, you'll also have a Google Drive backup. Backup Box listed above could also do something like this.
Hope this helps
There may be a better way of doing all of this. I was in a pinch and needed to figure something out that works reliably, which this does. If there are any improvements that can be made, please share in the comments.
I think this post explains a solution which can help you:
http://ericsilva.org/2012/07/05/backup-mysql-database-to-dropbox/

set_time_limit not working on heroku

I am using PHP with heroku. I keep on getting a request timeout error due to some database insertions and queries.
I added this line to all my php files in order to avoid this error:
set_time_limit(0);
However, I am still getting this error. Does Heroku ignore this command?
I did a simple check to see if the time limit is being changed:
echo 'TIME : '.ini_get('max_execution_time');
set_time_limit(0);
echo 'TIME : '.ini_get('max_execution_time');
It is being changed from 30 (default value) to 0. Despite the change, I am still getting the error.
Also, I would like to add that the php file is being called by ajax.
Furthermore, as far as I know, PHP is not set to safe mode, so there is no reason why the command should be ignored.
Heroku suggests using a background job, and as far as I can tell, it forces you to if the task takes more than 30 seconds. Has anybody managed without using a background job?
Update: Tried using:
ini_set('max_execution_time', 0);
It still does not work.
If you have to go over the 30s request timeout on Heroku, you'll need to use a background job - there is no way around that (Heroku will just kill the request if it takes longer than 30 seconds). Heroku has some documentation on this.
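One common shape for such a background job is a queue table plus a worker process started outside the web request (on Heroku, typically a separate worker dyno). The sketch below is only illustrative; the jobs table, its columns, and the PDO credentials are assumptions.
<?php
// Sketch of a worker script run outside the web request, so the 30-second
// router timeout never applies to it. The web-facing script would only
// INSERT a row into the hypothetical jobs table and return immediately.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

while (true) {
    $job = $pdo->query("SELECT id, payload FROM jobs WHERE status = 'pending' LIMIT 1")->fetch(PDO::FETCH_ASSOC);
    if ($job) {
        // ... perform the slow database insertions and queries here ...
        $pdo->prepare("UPDATE jobs SET status = 'done' WHERE id = ?")
            ->execute(array($job['id']));
    } else {
        sleep(5); // nothing queued; poll again shortly
    }
}
?>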

Getting PHP processing to happen in the background

I am working on a PHP website, and I am moving it over to a new server. The new server does not support cron jobs. To compensate, I have devised a system using time comparisons, database tables, and more to run my code instead.
What I am having a problem with is this bit of code:
if ($lasttime < $pretime)
{
    $newtime = strtotime("now");
    queryMysql("UPDATE time SET time=".$newtime." WHERE time=".$lasttime);
    include_once 'grabber/grabber.php';
}
Specifically, it's the include_once 'grabber/grabber.php'; which is causing the problem. When the timer comes round and this code runs, it gets to the include and then the code stops, with no error provided, so the include fails. I have tried changing it to an exec(), but to be honest I don't completely understand how exec() works or whether it is the correct thing to do. This is how I used it:
if ($lasttime < $pretime)
{
    $newtime = strtotime("now");
    queryMysql("UPDATE time SET time=".$newtime." WHERE time=".$lasttime);
    $grabber = $base."grabber/grabber.php";
    exec($grabber);
}
This does not stop the code and seems to run, but it doesn't actually work: if grabber/grabber.php runs correctly, I get a confirmation email via the PHP mail() function.
If anyone could help me solve this or shed some light that would be brilliant.
Thanks.
This is most probably an issue with the file location or permissions. There should be some kind of error; either the code doesn't actually stop and you aren't checking that properly, or there is an issue with the code in grabber.php itself. Add some debugging lines: print the filename so you can check for errors in the path/name; add error_reporting(E_ALL); ini_set('display_errors', true); somewhere above the include_once line; make sure the file is where you're trying to open it from, taking relative paths into account; and make sure you have permission to run the file.
exec() is not what you need in this case, at least not in the way that you're trying to use it.
If that doesn't help, give some more information about how you run the scripts that you've shown, what's in the grabber.php file, what errors you get, etc.
(Assuming your server is *nix) If you want to use exec() you need to place a hashbang at the top of the script that points to the PHP executable and give it execute permissions.
Or (this is probably the better/more portable approach), change
$grabber = $base."grabber/grabber.php";
exec($grabber);
to
$grabber = "php ".$base."grabber/grabber.php";
exec($grabber);
...as if you were running it from a terminal.
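If you go the exec() route, it also helps to capture the command's output and exit code so failures are actually visible. A sketch, reusing the question's $base variable:
// Run the script through the PHP CLI and collect everything it prints,
// including errors, by redirecting stderr to stdout.
$cmd = "php ".escapeshellarg($base."grabber/grabber.php")." 2>&1";
exec($cmd, $output, $exitCode);
echo "exit code: ".$exitCode."\n";
echo implode("\n", $output);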
However, I doubt this will solve the problem - I think the answer is more likely to be one of these things:
A parse error in grabber.php. Keep in mind that there are slight syntax differences between major PHP versions - if your PHP version is different on your old/new hosts, this may be the problem.
A call to a function that was defined on your old host but not on your new host, because of a difference in PHP version or installed extensions
grabber.php was corrupted during the move between servers
Try it with the include_once, but do ini_set('display_errors', 1); error_reporting(-1); to make sure you actually see any errors. How are you calling your main script? How will you see the errors? Edit the question with this info, plus any code from grabber.php you think may be relevant, and I will expand this answer.

session_start hangs

For the past few hours our server has been hanging every time session_start() is called.
For testing purposes I created a script which looks like this:
<?php
session_start();
?>
Calling it from the console hangs and it can't even be stopped with Ctrl-C; only kill -9 works. The same happens when calling it via Apache. /var/lib/php/session/ stays empty, but permissions are absolutely fine: www can write and also has read permissions on all parent folders.
According to the admins there were no changes made on the server, and there is no special code registered for sessions. The server is CentOS 4 or 5, and yesterday everything was working perfectly. We rebooted the server and updated PHP, but nothing changed.
I've run out of ideas; any suggestions?
UPDATE
We solved this problem by moving the project to another server, so while the problem still exists on one server there is no immediate need for a solution anymore.
I will keep the question open in case someone has an idea for others having a similar problem in the future, though.
There are many possible reasons for that; here are a few of them:
A. The session file could be opened exclusively.
When the file lock is not released properly for whatever reason, it causes session_start() to hang indefinitely on any future script execution.
Workaround: use session_set_save_handler() and make sure the write function uses fopen($file, 'w') instead of fopen($file, 'x') (a sketch of such a handler follows after point C below).
B. Never point the session entropy file at /dev/random, whether in php.ini or via ini_set() as below; reads from /dev/random can block, and this will cause session_start() to hang:
<?php
ini_set("session.entropy_file", "/dev/random");
ini_set("session.entropy_length", "512");
?>
C. session_start() needs a directory to write to.
You can get Apache plus PHP running in a normal user account. Apache will then of course have to listen on a port other than 80 (for instance, 8080).
Be sure to do the following things:
- create a temporary directory PREFIX/tmp
- put php.ini in PREFIX/lib
- edit php.ini and set session.save_path to the directory you just created
Otherwise, your scripts will seem to 'hang' on session_start().
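Referring back to workaround A, here is a minimal sketch of a file-based save handler whose write callback opens the session file with 'w' rather than an exclusive mode. The save path is an assumption; adjust it for your setup.
<?php
$savePath = '/var/lib/php/session';

session_set_save_handler(
    function ($path, $name) { return true; },              // open
    function () { return true; },                          // close
    function ($id) use ($savePath) {                       // read
        $file = $savePath.'/sess_'.$id;
        return is_file($file) ? (string) file_get_contents($file) : '';
    },
    function ($id, $data) use ($savePath) {                 // write: 'w', not 'x'
        $fp = fopen($savePath.'/sess_'.$id, 'w');
        if ($fp === false) {
            return false;
        }
        fwrite($fp, $data);
        fclose($fp);
        return true;
    },
    function ($id) use ($savePath) {                        // destroy
        @unlink($savePath.'/sess_'.$id);
        return true;
    },
    function ($maxlifetime) { return true; }                // gc (left as a no-op here)
);
session_start();
?>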
If this helps:
In my scenario, session_start() was hanging at the same time I was using the XDebug debugger within PHPStorm, the IDE, on Windows. I found that there was a clear cause: Whenever I killed the debug session from within PHPStorm, the next time I tried to run a debug session, session_start() would hang.
The solution, if this is your scenario, is to make sure to restart Apache every time you kill an XDebug session within your IDE.
I had a weird issue with this myself.
I am using CentOS 5.5x64, PHP 5.2.10-1. A clean ANSI file in the root with nothing other than session_start() was hanging. The session was being written to disk and no errors were being thrown. It just hung.
I tried everything suggested by Thariama, and checked PHP compile settings etc.
My Fix:
yum reinstall php; /etc/init.d/httpd restart
Hope this helps someone.
To everyone complaining about the 30 seconds of downtime being unacceptable, this was an inexplicable issue on a brand new, clean OS install, NOT a running production machine. This solution should NOT be used in a production environment.
I faced the same problem on two machines: a Mac mini running XAMPP and a Windows 10 XAMPP install, both on PHP 7.x.
On both, session_start() hung forever because the session file was locked for reading and writing.
I added code so that PHP reads the session file and immediately releases the lock when done:
<?php
session_start([
'read_and_close' => true,
]);
?>
or
<?php
//For PHP 5.x
session_start();
session_write_close();
?>
After this PHP unlock session file => Problems solve
The problem: -
I've experienced (and fixed) the problem where file-based sessions hang the request, and database-based sessions get out of sync by storing out-of-date session data (like storing each session save in the wrong order).
This is caused by any subsequent request that loads a session (simultaneous requests), like Ajax, a video embed where the video file is delivered via a PHP script, a dynamic resource file (like a script or CSS) delivered via a PHP script, etc.
With file-based sessions, file locking prevents concurrent session writes, causing a deadlock between the simultaneous request threads.
With database-based sessions, the last request thread to complete becomes the most recent save, so for example a video delivery script will complete long after the page request and overwrite the since-updated session with old session data.
The fix: -
If your Ajax or resource delivery script doesn't need to use sessions, the easiest fix is to remove session usage from it.
Otherwise you'd best make yourself a coffee and do the following: -
Write or employ a session handler (if not already doing so) as per http://www.php.net//manual/en/class.sessionhandler.php (many other examples available via google search).
In your session handler's write() function, prepend the code below (a full handler sketch follows at the end of this answer) ...
// processes may declare their session as read only ...
if (!empty($_SESSION['no_session_write'])) {
    unset($_SESSION['no_session_write']);
    return true;
}
In your Ajax or resource delivery PHP script, add the code below (after the session is started) ...
$_SESSION['no_session_write'] = true;
I realise this seems like a lot of stuffing around for what should be a tiny fix, but unfortunately if you need to have simultaneous requests each loading a session then it is required.
NOTE: if your Ajax or resource delivery script does actually need to write/save data, then you need to do it somewhere other than in the session, like the database.
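To make the above concrete, here is a sketch of how that write() check can be wrapped around PHP's built-in handler via the SessionHandler class linked above; the class name is made up purely for illustration.
<?php
class ReadOnlyAwareSessionHandler extends SessionHandler
{
    public function write($id, $data)
    {
        // processes may declare their session as read only ...
        if (!empty($_SESSION['no_session_write'])) {
            unset($_SESSION['no_session_write']);
            return true;
        }
        return parent::write($id, $data);
    }
}

session_set_save_handler(new ReadOnlyAwareSessionHandler(), true);
session_start();

// In the ajax/resource delivery script only:
// $_SESSION['no_session_write'] = true;
?>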
Just put session_write_close(); before session_start();
as below:
<?php
session_write_close();
session_start();
.....
?>
I don't know why, but changing this value in /etc/php/7.4/apache2/php.ini worked for me:
;session.save_path = "/var/lib/php/sessions"
session.save_path = "/tmp"
To throw another answer into the mix for those going bananas, I had session_start() dying only in particular cases and scripts. The reason my session was dying was that I was storing a lot of data in it after a particularly intensive script, and the call to session_start() was exhausting the 'memory_limit' setting in php.ini.
After increasing 'memory_limit', those session_start() calls no longer killed my script.
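A quick way to test that theory is to raise the limit just before the session is started; the 256M value below is only an illustrative guess.
<?php
// Raise the per-request memory ceiling before the large session is unserialized.
ini_set('memory_limit', '256M');
session_start();
?>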
For me, the problem seemed to originate from SELinux. The needed command was chcon -R -t httpd_sys_content_t [www directory] to give access to the right directory.
See https://askubuntu.com/questions/451922/apache-access-denied-because-search-permissions-are-missing
If you use pgAdmin 4 this can happen as well.
If you have File > Preferences > SQL Editor > Options > "Auto Commit" disabled, and you just ran a query using the query tool but didn't manually commit, then session_start() will freeze.
Enable auto commit, or manually commit, or just close pgAdmin, and it will no longer freeze.
In my case it seems it was the NFS share that was locking the session; after restarting the NFS server and enabling only one web client node, the sessions worked normally.
Yet another few cents that might help someone. In my case I was storing complex data with several different class objects in $_SESSION, and session_start() couldn't handle the unserialization because not every class was loaded at the time session_start() ran. The solution in my case was to serialize/JSON-encode the data before saving it into $_SESSION and to reverse the process after getting the data back out of the session.
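A minimal sketch of that approach; the Invoice class and its fields are invented purely for illustration.
<?php
// Store a plain JSON string in the session instead of a live object graph,
// so session_start() never has to unserialize classes that are not loaded yet.
class Invoice
{
    public $id;
    public $total;
    public function __construct($id, $total)
    {
        $this->id = $id;
        $this->total = $total;
    }
}

session_start();

// Writing: flatten the object to scalars before it goes into $_SESSION.
$invoice = new Invoice(17, 249.99);
$_SESSION['invoice_json'] = json_encode(array('id' => $invoice->id, 'total' => $invoice->total));

// Reading (typically in a later request): rebuild the object from the JSON.
$data = json_decode($_SESSION['invoice_json'], true);
$restored = new Invoice($data['id'], $data['total']);
?>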
