I'm facing a challenge here. My Windows 10 PC needs to run around the clock, with a few programs running on it. However, as is common with Windows, it hangs, freezes, or blue-screens once in a while, at random. And since I'm not in front of it all the time, it can sit stuck for a long while before I notice and have to hard-restart it manually.
To overcome this problem, I'm thinking of something like this:
A program (probably a .bat file) runs on the PC and sends a ping (or some message) to a web service running remotely, every 10 minutes or so.
A PHP script (the web service) running on my host server (I own hosting space for my website) listens for this particular ping (or message) and waits.
If this web service doesn't receive the ping (or message) when expected, it simply sends out an email notifying me.
So whenever Windows hangs or freezes, the .bat file stops sending as well, triggering the notification from the web service within the next 10 minutes.
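For illustration, the sender side could be as small as this Python sketch (instead of a .bat file); the URL here is a placeholder for wherever the PHP page would live:

```python
import time
import urllib.request

# Hypothetical endpoint; replace with your own hosting URL.
HEARTBEAT_URL = "https://example.com/heartbeat.php"
INTERVAL_SECONDS = 10 * 60  # ping every 10 minutes

def send_heartbeat(url: str) -> bool:
    """Request the heartbeat page; return True if the server answered 200."""
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            return resp.status == 200
    except OSError:
        return False  # network error: nothing more we can do from this side

if __name__ == "__main__":
    while True:
        send_heartbeat(HEARTBEAT_URL)
        time.sleep(INTERVAL_SECONDS)
```

Running this via Windows Task Scheduler every 10 minutes (instead of the `while` loop) would work just as well and survives reboots.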
This is just an idea; frankly, I still don't know how to actually achieve it technically, or whether it's truly feasible. I'm also not sure if I'm missing something crucial, such as server load.
I would greatly appreciate any help with the idea and, if possible, pointers to the script I could put on the server. I'm also not sure how to set it up to listen continuously.
Can someone please help here?
What about this?
Have your web service on your host consist of one page, one database, and one cron job.
The database has one table with one record that holds a time.
The cron job checks the database table every 10 minutes, and if the time in the record is in the past, it sends you an email.
The page, when requested, simply updates the record to the current time + 10 minutes. Have your Windows machine request this page every 10 minutes.
So essentially, the cron job is ready to send you an email, but it never can because the PC is always requesting a page to reset the time - until it can't.
Alright, so here's how I finally achieved the whole idea, as suggested by Warren above.
Created a simple MySQL table on my hosting server, with just two fields: id and next_time.
Created a simple PHP page, which inserts/updates the current time + 10 minutes into the above table.
Created a Python script that checks whether the time stored in the table is earlier than the current time. If so, it sends me an email.
Scheduled this Python script as a cron job running every 10 minutes.
Thus, when the PC hangs for more than 10 minutes, the script lets me know.
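The checking side boils down to a single comparison. A sketch of the idea, using SQLite in place of MySQL so it's self-contained (the table layout matches the description above; the mail call is left as a stub):

```python
import sqlite3
import time

def is_overdue(conn: sqlite3.Connection) -> bool:
    """True if the stored deadline has passed, i.e. no heartbeat arrived in time."""
    row = conn.execute("SELECT next_time FROM heartbeat WHERE id = 1").fetchone()
    return row is not None and row[0] < time.time()

def record_heartbeat(conn, grace_seconds: int = 600) -> None:
    """What the PHP page does: push the deadline 10 minutes into the future."""
    conn.execute(
        "INSERT INTO heartbeat (id, next_time) VALUES (1, ?) "
        "ON CONFLICT(id) DO UPDATE SET next_time = excluded.next_time",
        (time.time() + grace_seconds,),
    )

def check_and_alert(conn) -> bool:
    """Cron entry point: returns True when an alert email would be sent."""
    if is_overdue(conn):
        # send_mail("PC is down!")  # hypothetical mail call
        return True
    return False
```

The cron job simply runs `check_and_alert` every 10 minutes; as long as the PC keeps hitting the PHP page, the deadline keeps moving and no mail goes out.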
Thanks a lot for the help in coming up with this plan. Hope this helps someone else thinking of a similar thing to do.
Improvement: I moved the above code to my local Raspberry Pi web server, to remove the dependency on the remote hosting server.
Next step: I'm planning to let the Python script on the Raspberry Pi control a relay that toggles the reset switch of the PC when the above event happens. So not only will I know when Windows blue-screens, it will also be restarted on its own.
Well, as a next step, I simplified the solution to the original requirement some more.
No more PHP now. Just one Python script, and a small hardware improvement.
As I'm still learning new ways with this Raspberry Pi, I've now connected the RPi to the PC via an Ethernet cable as a peer-to-peer connection.
Enabled ping responses in Windows, as per this link.
Then wrote another Python script that simply pings the Windows PC (using a static IP on the Ethernet adapter).
If the ping fails, it sends the email, as the earlier script did.
As before, set this new script up as the cron job running every 10 minutes, replacing the earlier script.
So if Windows hangs, I assume the ping will fail too, and an email will be sent out.
Thus now the web-server and database are both eliminated from the equation.
Still waiting for the Relay module to arrive, so I can implement the next step of automatic hard reboot.
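The ping check itself is a thin wrapper around the system ping command. A sketch, assuming a Linux-style ping on the Pi (the PC's address and the mail step are placeholders):

```python
import subprocess

def ping_cmd(host: str) -> list[str]:
    """One echo request, 2-second timeout (Linux ping flags)."""
    return ["ping", "-c", "1", "-W", "2", host]

def host_is_up(host: str) -> bool:
    """True if the host answered a single ping."""
    result = subprocess.run(ping_cmd(host), capture_output=True)
    return result.returncode == 0

if __name__ == "__main__":
    # 192.168.0.2 stands in for the PC's static Ethernet address.
    if not host_is_up("192.168.0.2"):
        pass  # send the alert email here, as in the earlier script
```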
All,
I have a rather vexing problem with my Amazon Elastic Beanstalk worker combined with SQS, which is supposed to provide cron-like job scheduling, all running PHP.
The scenario: I need a PHP script to be executed regularly in the background, and it might eventually run for hours. I saw this nice introduction, which seems to cover exactly my scenario (AWS Worker Environments, see the Periodic Tasks part).
So I read quite a few how-tos, set up an EB worker with SQS (which is actually done automatically during creation of the worker), and provided the cron config (cron.yaml) in my deployment package.
The cron script is properly recognized. The sqs daemon starts, messages are put into the queue and trigger my PHP script exactly on schedule. The script runs and everything works fine.
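For reference, a worker-tier cron.yaml has this shape (the task name and URL below are examples; the URL is the path the sqs daemon POSTs to on each scheduled run):

```yaml
version: 1
cron:
 - name: "periodic-task"      # example task name
   url: "/periodic"           # example path POSTed to your application
   schedule: "*/5 * * * *"    # standard cron syntax: every 5 minutes
```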
The configuration of the queue looks like this:
[screenshot: SQS configuration]
However, after some processing time (the script is still busy, and no, it is not the next scheduled run) a second message is opened and another instance of the same script is executed, then another, and another, at exactly 5-minute intervals.
I suspect the message is somehow not removed from the queue (although I made sure the script sends status 200 back), which results in a new delivery if the script runs for too long.
Is there a way to prevent these extra deliveries? Can I tell the queue or the sqs daemon not to re-deliver in-flight messages? Do I have to delete the message in my code, although the tutorial states it should happen automatically?
I would like to just trigger the script, remove the message from queue and let the script run. No fancy fallback / retry mechanisms please :-)
I've spent many hours trying to find something on the internet, without success. Any help is appreciated.
Thanks
a second message is opened and another instance of the same script is executed, and another, and another... in exactly 5 minutes intervals.
I doubt it is a second message. I believe it is the same message.
If you don't respond 200 OK before the Inactivity Timeout expires, then the message goes back to the queue, and yes, you'll receive it again, because the system assumes you've crashed, and you would want to see it again. That's part of the design.
There's an X-Aws-Sqsd-Receive-Count request header you're receiving that tells you approximately how many times the current message has been delivered. The X-Aws-Sqsd-Msgid request header identifies the unique message.
If you can't ensure that the script will finish before the timeout, then this is not likely an appropriate use case for this service. It sounds like the service is working correctly.
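If you do stay on this service, the X-Aws-Sqsd-Receive-Count header at least lets you detect re-deliveries. A sketch of the guard, in Python for brevity (the original application is PHP; the header names are as documented above, the handler itself is hypothetical):

```python
def should_process(headers: dict) -> bool:
    """Skip a message that sqsd has already delivered before.

    Only safe if your work is idempotent, or if you track completed
    message IDs (X-Aws-Sqsd-Msgid) yourself.
    """
    count = int(headers.get("X-Aws-Sqsd-Receive-Count", "1"))
    return count == 1

# First delivery: process. Re-delivery after a visibility timeout: skip.
```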
I know this doesn't directly answer your question regarding configuration, but I ran into a similar issue: my queue configuration is set exactly like yours, and in my Elastic Beanstalk setup I set the Visibility Timeout to 1800 seconds (half an hour) and Max Retries to 2.
If a job runs for more than a minute, it gets run again and then thrown into the dead-letter queue, even though a 200 OK is returned by the application every time.
After a few hours, I realized that it was the Nginx server that was timing out; checking the Nginx error log yielded that insight. I don't know why Elastic Beanstalk puts a web server in front of the application in this scenario, but if all else fails, check whether EB has spawned one in front of yours.
Look at the Worker Environments documentation for details on the values you can configure. You can configure several different timeout values as well as "Max retries"; setting it to 1 will prevent re-sends. However, your dead-letter queue will then fill up with messages that were actually processed successfully, so that might not be your best option.
Recently I started experiencing performance issues with my online application hosted on Bluehost.
I have an online form that takes a company name, with an onKeyUp event handler tied to that field. Every time you type a character into the field, it sends a request to the server, which runs multiple MySQL queries to get the data. The MySQL queries altogether take about 1-2 seconds. But since a request is sent after every character typed, this easily overloads the server.
My solution to this problem was to cancel the previous XHR request before sending a new one. And it seemed to work fine for me (for about a year), until today. I'm not sure if Bluehost changed any configuration on the server (I have a VPS) or any PHP/Apache settings, but right now my application is very slow due to the number of users I have.
I would understand a gradual decrease in performance caused by database growth, but this happened suddenly over the weekend, and speeds dropped roughly tenfold: a request that used to take about 1-2 seconds now takes 10-16 seconds.
I connected to the server via SSH and ran a stress test, sending lots of queries to see what the process monitor (top) would show. As I expected, a PHP process was created for every new request and put in a queue for processing. This queueing apparently accounted for most of the wait time.
Now I'm confused: is it possible that before the (hypothetical) changes on the server, every XHR abort actually caused the PHP process to quit, reducing the extra load on the server and therefore making it faster? And that for some reason this no longer works?
I have WAMP installed on Windows 7 as my test environment, and when I export the same database and run the stress test locally, it's fast, just like the server used to be. But on Windows I don't have a process monitor as handy as top, so I can't see whether PHP processes are actually created and killed accordingly.
Not sure how to do the troubleshooting at this point.
I'm having trouble understanding how the Windows Task Scheduler works. I would like to open a SOAP connection every hour and have it do its thing.
I came across a few sites explaining how to do it.
1) http://www.redolivedesign.com/utah-web-designers-blog/2007/11/17/how-to-run-a-php-or-asp-file-on-a-schedule-with-windows-xmlhttp-object-and-scheduled-tasks/
2) http://amitdhamu.com/blog/automating-php-using-task-scheduler/
My questions are:
1) Which link should I lean toward?
2) My SOAP connection file is on my server. Under 'Start a Program' in Task Scheduler, how would I add my FTP script here? Or does it have to be on the local machine?
One usually does not "open a SOAP connection" every once in a while. The SOAP server runs all the time, waiting for requests, so if you want to grab some data from the server, you'd probably "send a SOAP request every hour" or "make a SOAP call" as a client.
If the server should only answer requests in a time window every hour, the server would still be running all the time, but the internal code would deny responses beyond "business hours" of that service. Which would be an unusual setup.
So I think you really need to run a client script, and that would be easier. First make sure that, when run manually, the script does all the things you need. It must be able to run where you need the result.
And then the question is: How to start it automatically, and which operating system is installed on that machine?
Your first link sets up something that sends an HTTP request to a machine. This works if you have a server, the server has the script, and you have no way to set up a cron job there, but need the data on that server.
Your second link sets up the script execution on the machine itself. So the script is on that machine, and the data ends up being there, too.
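If you go the second route (running a client on a schedule), the scheduled program can be quite small. A sketch assuming a hypothetical endpoint, sending the SOAP envelope as a plain HTTP POST:

```python
import urllib.request

def soap_envelope(body_xml: str) -> bytes:
    """Wrap a body fragment in a minimal SOAP 1.1 envelope."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f"<soap:Body>{body_xml}</soap:Body>"
        "</soap:Envelope>"
    ).encode("utf-8")

def call_service(url: str, body_xml: str) -> bytes:
    """POST the envelope and return the raw response body."""
    req = urllib.request.Request(
        url,
        data=soap_envelope(body_xml),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.read()

# Task Scheduler would then run "python soap_client.py" every hour.
```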
And I cannot answer your second question, because I don't know what your need really is.
I started learning programming about a month ago. I already knew HTML and CSS, so I thought I should learn PHP. I learned a lot of it from tutorials and books, and now I'm making MySQL-based websites for practice.
I always used to play browser-based strategy games like Travian when I was a kid, and I've been thinking about how those sites work. I didn't have any problem until I realized that the game actually keeps working after you close the browser. For example, you log in to your account, start a construction, and log off. Even after you close the browser, the game knows that in x amount of time it needs to update the data for that specific building.
Can someone tell me how that works? Is it something in PHP, MySQL, or some other programming language? Even just telling me what to search for online would be enough.
Despite being someone who loves tackling steep learning curves, I would advise against trying to jump into something that requires background processes until you have a bit more programming experience.
But either way, here's what you need to know:
Normal PHP Process
The way that PHP normally works is the following:
User types a url into the browser and hits enter (or just clicks on a link)
Request is sent to a bunch of servers and magically finds its way to the right web server (beyond scope of this answer)
Server program like Apache or IIS listening on port 80 grabs the request
Apache sees that there's a .php extension on the requested page
Apache looks up if any processors have been assigned to .php and finds php.exe
The requested page is fed into php.exe
php.exe handles the request, runs everything in the script, and produces a result
The result is then sent back to the user
Once the response has been sent, the PHP process that handled the request exits
So the problem when you want something running in the background is that PHP is, in most cases, accessed through the web server, and hence usually requires a browser (and a user making requests through the browser). And since each PHP process ends with its request, you need a way to run PHP scripts without a browser.
Luckily, PHP can also be run outside the web server, as a normal process on the server. But then the problem is that you have to access the server. You probably don't want your users to SSH into your server to run scripts manually (and I assume you don't want to do it manually on their behalf every single time either). Hence you have the option of creating cron jobs that automatically execute a command at a specific frequency, as if you had typed it yourself on your server's command line. Another option is to manually start a script, once, that doesn't shut down unless your server shuts down.
Triggering a Script based on Time:
Cron is the task scheduler on *nix systems; Windows has the Windows Task Scheduler. What you can do is set up a cron job to run a specific PHP file at a specific frequency, and execute all the "background" tasks you need from within there.
One way of doing this would be to have a mysql table containing things that need to be executed along with when they need to be executed. The script then queries the table based on time to retrieve which tasks need to be executed, executes them, and then marks them executed (or just deletes them) in the mysql table.
This is a basic form of process queuing.
Building a Queue Server
This is a lot more advanced, but here's a tutorial for creating a script that will queue processes in the background without the need for any external database: Building a Queue Server in PHP.
Let me know if this makes sense or if you have any questions :)
PHP is a server-side language. Any time anybody accesses a PHP program on the server, it runs, irrespective of who the client is.
So, imagine a program that holds a counter. It stores this in a database. Every time updatecounter.php is called, the counter gets updated by one.
You browse to updatecounter.php, and it tells you that the counter is now at 34.
Next time you browse to updatecounter.php it tells you that the counter is at 53.
It's gone up by 18 more counts than you were expecting.
This is because updatecounter.php was being run without your intervention. It was being run by other people.
Now, if you looked at updatecounter.php, you might see code like this:
require_once("my_code.php");
$counterValue = increment_counter_value();
echo "New Counter Value = ".$counterValue;
Notice that the main core of the program is stored in a separate program than the program that you are calling.
Also, notice that instead of calling increment_counter_value, you could call anything. So every time somebody browsed to updatecounter.php, or whatever your game would be called, the internal game mechanics could be run. You could for instance, have an hourly stat management routine which would check each time it was called if it had been run in the last hour, and if it hadn't it would perform all the stats.
Now, what if nobody else is playing your game? If that happens, the hourly stat management wouldn't get called, and your game world would die. So what you would need to do is create another program whose sole function is to run your stats. You would then schedule that program on the server to run at an hourly interval. You do this using something called a cron job. You will probably find that your host already has this facility built in. I won't go into any more detail about task scheduling, as without knowing your environment it's impossible to give the correct answer. But basically, you would need to schedule a PHP program to run on the server to perform the hourly maintenance.
Here's a tutorial on CRON jobs:
http://net.tutsplus.com/tutorials/other/scheduling-tasks-with-cron-jobs/
I haven't used it myself but I've had no problems with other stuff on tutsplus so you should be ok.
This is not only PHP. Browser-based games are a combination of PHP, MySQL, JavaScript, and HTML; a lot of technologies are used for this kind of work. When you do something in the browser, let's say adding a building, an AJAX request is sent to the server so the server updates the database (it can't wait until logout, because then other users wouldn't know your status in a multiplayer game).
I am working on an art/programming project that involves using a lab of 30 iMacs. I want to synchronize them in a way that will allow me to execute a script on each of them at the very same time.
The final product is in Flash Player, but if I am able to synchronize any type of data signal through a web page, I'd be able to run the script at the same time. So far my attempts have all had fatal flaws.
The network I'm using is somewhat limited. I don't have admin privileges, but I don't think it really matters. I log into my user account on all 30 iMacs and run the page or script so I can run my wares.
My first attempts involved running Flash Player directly.
At first I tried using the system time and had the script run every two minutes. This wasn't reliable because, even though the time in my user account is synced, there is discrepancy between iMacs; even a quarter of a second is too much.
My next try involved having one Mac act as the host, writing a variable to a text file. All other 29 Flash players checked for changes in this file multiple times a second. This didn't work: it would work with 3 or 4 computers but then became flaky. The strain on the server was too great, and Flash is just unreliable. I figured I'd try local shared objects, but that wasn't reliable. I tried having the host computer write to 30 files and having each Mac read only one, but that didn't work either. I tried LocalConnection, but it is not made for more than two computers.
My next try involved running a PHP server-time script on my web server and having the 30 computers repeatedly check the time from that script. I don't think my hosting plan supports this, because the server would just stop working after a few seconds; too many requests or something.
Although I haven't had success with a remote server, it would probably be more reliable given another, cleverer method.
I do have one kludge solution as a last straw (you might laugh): I would take an audio wire, buy 29 audio splitters, and plug them all in. Then I would run Flash Player locally and have it execute when it hears a sound. I've done this before; all you have to do is touch the other end of the wire, and the finger static is enough to set it off.
What can I do now? I've been working on this project on and off for a year and just want to get it going. If I can get a web page synchronized on 30 computers in a lab, I could just pass data to Flash and it would likely work. I'm more confident with a remote server, but if I can do it using the local Mac network, that would be great.
OK, here is how I approached my problem, using a socket connection between Flash and PHP. Basically, first you set up a client script to be installed on all 30 iMac 'client' machines; let's assume all machines are on a private network. When these clients are activated, they connect to a PHP server via a socket. The PHP server script has an IP and a port that these clients connect to; it handles the client connection pool, message routing, and so on, and the server runs at all times. The socket connection allows server-client interaction by sending messages back and forth, and these messages can trigger things to do. You should read up more on socket connections and server-client interaction. This is just a little summary of how I got my project done.
Simple tutorial on socket/server client connection using php and flash
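The broadcast pattern described above is a small amount of code. Here is a minimal Python sketch of the server side (the original used PHP and Flash's socket support; the port and trigger message are placeholders):

```python
import socket
import threading

class BroadcastServer:
    """Accepts clients and pushes the same message to all of them at once."""

    def __init__(self, host="0.0.0.0", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind((host, port))
        self.sock.listen()
        self.port = self.sock.getsockname()[1]
        self.clients = []
        self.lock = threading.Lock()
        threading.Thread(target=self._accept_loop, daemon=True).start()

    def _accept_loop(self):
        # Keep accepting clients and add them to the pool.
        while True:
            conn, _addr = self.sock.accept()
            with self.lock:
                self.clients.append(conn)

    def broadcast(self, message: bytes):
        """Send the trigger to every connected client at (nearly) the same time."""
        with self.lock:
            for conn in self.clients:
                conn.sendall(message)

# Each iMac connects with a plain client socket and blocks on recv();
# when the trigger (e.g. b"GO\n") arrives, it starts the Flash piece.
```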