I am working on an art/programming project that involves using a lab of 30 iMacs. I want to synchronize them in a way that will allow me to execute a script on each of them at the very same time.
The final product is in Flash Player, but if I am able to synchronize any type of data signal through a web page, I'd be able to run the script at the same time. So far my attempts have all had fatal flaws.
The network I'm using is somewhat limited. I don't have admin privileges, but I don't think that really matters. I log into my user account on all 30 iMacs and run the page or script so I can run my wares.
My first attempts involved running Flash Player directly.
At first I tried using the system time and had the script run every two minutes. This wasn't reliable: even though the time in my user account is synced, there is a discrepancy between iMacs. Even a quarter of a second is too much.
My next try involved having one Mac act as the host, writing a variable to a text file while the other 29 Flash Players checked that file for changes multiple times a second. This didn't work: it was fine with 3 or 4 computers but then got flaky. The strain on the server was too great, and Flash is just unreliable. I figured I'd try local shared objects, but those weren't reliable either. I tried having the host computer write to 30 files with each Mac reading only one, but that didn't work. I tried LocalConnection, but it is not made for more than two computers.
My next try involved running a PHP server-time script on my web server and having the 30 computers poll it for the time many times a second. I don't think my hosting plan supports this, because the server would just stop responding after a few seconds. Too many requests, or something.
Although I haven't had success with a remote server yet, it would probably be more reliable with another, cleverer method.
I do have one kludge solution as a last resort (you might laugh): I would take an audio wire, buy 29 audio splitters, and plug them all in. Then I would run Flash Player locally and have it execute when it hears a sound. I've done this before; all you have to do is touch the other end of the wire and the finger static is enough to set it off.
What can I do now? I've been working on this project on and off for a year and just want to get it going. If I can get a web page synchronized on 30 computers in a lab, I can just pass data to Flash and it will likely work. I'm more confident with a remote server, but if I can do it using the local Mac network, that would be great.
OK, here is how I approached my problem, using a socket connection between Flash and PHP. First you set up a client script that is installed on all 30 iMac 'client' machines; let's assume all machines are on a private network. When these clients are activated, they connect over a socket to a PHP server. The PHP server script listens on an IP and port that the clients connect to, handles the client connection pool, message routing, and so on, and runs at all times. The socket connection allows server-client interaction by sending messages back and forth, and these messages can trigger things to do. You should read up more on socket connections and server-client interaction. This is just a short summary of how I got my project done.
Simple tutorial on socket/server client connection using php and flash
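To make this concrete, here is a minimal sketch of what the PHP side of such a broadcast server can look like. This is not the original poster's code: the port (9999) and the "GO" trigger message are assumptions, and the Flash clients are assumed to connect with XMLSocket (which terminates messages with a null byte).

<?php
// Minimal select()-based broadcast server. Port 9999 and the "GO"
// message are assumptions; adjust for your LAN.
$server = stream_socket_server('tcp://0.0.0.0:9999', $errno, $errstr);
if (!$server) {
    die("Could not start server: $errstr ($errno)\n");
}
$clients = [];

while (true) {
    // Watch the listening socket plus every connected client.
    $read = $clients;
    $read[] = $server;
    $write = $except = null;
    if (stream_select($read, $write, $except, 1) < 1) {
        continue;
    }
    foreach ($read as $sock) {
        if ($sock === $server) {
            // A new Flash client is joining the pool.
            $clients[] = stream_socket_accept($server);
            continue;
        }
        $msg = fread($sock, 1024);
        if ($msg === false || $msg === '') {
            // Client disconnected; drop it from the pool.
            foreach (array_keys($clients, $sock, true) as $k) {
                unset($clients[$k]);
            }
            fclose($sock);
        } elseif (trim($msg, "\0\r\n") === 'GO') {
            // One machine (the "host") sends GO; relay it to everyone.
            // XMLSocket messages are null-terminated.
            foreach ($clients as $c) {
                fwrite($c, "GO\0");
            }
        }
    }
}

One machine acts as the host and sends GO; every connected movie receives it within LAN latency (a few milliseconds), far tighter than the quarter second that broke the system-clock approach. Keep in mind that Flash Player normally also requests a socket policy file before it allows the connection, which the server has to answer as well.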
I'm facing a challenge here. My Windows 10 PC needs to run all the time, with some programs running on it. However, as is common with Windows, it hangs/freezes/BSODs once in a while, at random. And since I'm not in front of it all the time, sometimes I won't know that it's stuck for a long while, until I check it and have to manually hard-restart it.
To overcome this problem I'm thinking of an idea like this:
Some program (probably a .bat file) can be set to run on the PC that sends a ping (or some message) to a webservice running remotely, every 10 mins or so.
A PHP script (the webservice) running on my host server (I own hosting space for my website) can listen for this particular ping (or message), and wait.
If this webservice doesn't receive the ping (or msg) when expected, it simply sends out an email notifying me.
So whenever Windows hangs/freezes, that .bat file would stop sending as well, triggering the notification from the webservice within the next 10 mins.
This is an idea, but frankly I still don't know how to actually achieve it technically, and whether it's truly feasible. Also, I'm not sure if I'm missing something crucial in terms of server load, etc.
Would greatly appreciate any help with the idea, and if possible pointers to the script that I can put on the server. Also, I'm not sure how to set it up to listen continuously.
Can someone please help here?
What about this?
Have your web service on your host consist of one page, one database, and one cron job.
The database has one table with one record that holds a time.
The cron job checks the database record every 10 minutes, and if the time in the record is in the past, the cron job sends you an email.
The page, when requested, simply updates the record to be the current time + 10 minutes. Have your Windows machine request this page every 10 minutes.
So essentially, the cron job is ready to send you an email, but it never can because the PC is always requesting a page to reset the time - until it can't.
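A minimal sketch of the two PHP pieces, under stated assumptions: a table named heartbeat with columns id and next_time, placeholder database credentials, and a placeholder email address.

<?php
// ---- heartbeat.php -- the page the Windows box requests every 10 minutes.
$db = new PDO('mysql:host=localhost;dbname=watchdog', 'user', 'pass');
// Push the deadline 10 minutes into the future.
$db->exec("UPDATE heartbeat SET next_time = NOW() + INTERVAL 10 MINUTE WHERE id = 1");
echo 'ok';

<?php
// ---- check.php -- run by cron:  */10 * * * * php /path/to/check.php
$db = new PDO('mysql:host=localhost;dbname=watchdog', 'user', 'pass');
$deadline = $db->query("SELECT next_time FROM heartbeat WHERE id = 1")->fetchColumn();
if (strtotime($deadline) < time()) {
    // The deadline passed without a heartbeat: the PC is presumably stuck.
    mail('you@example.com', 'PC watchdog', 'No heartbeat; the PC may be frozen.');
}

On the Windows side, the "program" can be as small as a Task Scheduler job that requests the page every 10 minutes, e.g. with curl or a one-line PowerShell Invoke-WebRequest.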
Alright, so here's how I finally achieved this whole idea, as suggested by @Warren above.
Created a simple DB table in MySQL on my hosting server, with just 2 fields: id and next_time.
Created a simple PHP page, which inserts/updates the current time + 10 mins into the above table.
Created a Python script that checks this table; if the stored time is < the current time, it sends a mail to me.
Scheduled this Python script as a cron job to run every 10 mins.
Thus, when the PC hangs for more than 10 mins, the script lets me know.
Thanks a lot for the help in coming up with this plan. Hope this helps someone else thinking of doing something similar.
Improvisation: I moved the above code to my local Raspberry Pi web server, to remove the dependency on the remote hosting server.
Next step: I'm planning to let the Python script on the Raspberry Pi control a relay, which would toggle the reset switch of the PC when the above event happens. So not only would I know when Windows BSODs, it would also be restarted on its own.
Well, as a next step, I simplified the solution for the original requirement some more.
No more PHP now. Just one Python script, and a small hardware improvement.
As I'm still learning new ways with this Raspberry Pi, I have now connected the RPi to the PC via an Ethernet cable as a peer-to-peer connection.
Enabled ping response from Windows, as per this link.
Then I wrote another Python script to simply ping the Windows PC (with a static IP on the Ethernet adapter).
If the ping fails, it sends the email, as in the earlier script.
As before, I set this new script up as the cron job running every 10 mins, replacing the earlier script.
So if Windows hangs, I assume the ping would fail too, and thus an email would be sent out.
Thus now the web-server and database are both eliminated from the equation.
Still waiting for the Relay module to arrive, so I can implement the next step of automatic hard reboot.
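The author's script is Python; purely as an illustration, the same check sketched in PHP (to match the rest of this thread) looks like this. The IP and email address are placeholders.

<?php
// Watchdog sketch: ping the PC once; mail if it does not answer.
$ip = '192.168.137.2'; // assumed static IP of the PC's Ethernet adapter
exec('ping -c 1 -W 2 ' . escapeshellarg($ip), $output, $status);
if ($status !== 0) {
    // Ping failed: the PC is presumably frozen (or the cable is out).
    mail('you@example.com', 'PC watchdog', "No ping reply from $ip; the PC may be frozen.");
}
// crontab: */10 * * * * php /home/pi/watchdog.php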
Recently I started experiencing performance issues with my online application hosted on Bluehost.
I have an online form that takes a company name, with an onKeyUp event handler tied to that field. Every time you type a character into the field, it sends a request to the server, which runs multiple MySQL queries to get the data. The MySQL queries all together take about 1-2 seconds. But since requests are sent after every character typed, it easily overloads the server.
The solution to this problem was to cancel the previous XHR request before sending a new one. And it seemed to work fine for me (for about a year) until today. I'm not sure if Bluehost changed any configuration on the server (I have a VPS), or any PHP/Apache settings, but right now my application is very slow due to the number of users I have.
And I would understand a gradual decrease in performance caused by database growth, but this happened suddenly over the weekend, and speeds went down roughly tenfold: a usual request that took about 1-2 seconds before now takes 10-16 seconds.
I connected to the server via SSH and ran a stress test, sending lots of queries to see what the process monitor (top) would show. As I expected, for every new request a PHP process was created and put in a queue for processing. This queueing, apparently, accounted for most of the wait time.
Now I'm confused: is it possible that before (hypothetical changes on the server), every XHR abort actually caused the PHP process to quit, reducing the load on the server and therefore making it faster? And now, for some reason, this doesn't work anymore?
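For reference, PHP only notices a client abort when it tries to send output to that client. A small sketch of the mechanics (expensive_query() is a hypothetical stand-in for the MySQL work):

<?php
// Hypothetical stand-in for one chunk of the real MySQL work.
function expensive_query() { usleep(200000); }

// With the default setting, PHP terminates the script as soon as it
// detects the client is gone. But it only checks while sending output.
ignore_user_abort(false);

for ($i = 0; $i < 10; $i++) {
    expensive_query();
    echo ' '; // attempt to write to the client...
    flush();  // ...this is the moment an aborted XHR gets noticed
}
// A script that produces no output until the very end never notices the
// abort, and occupies a PHP process for its full runtime regardless.

So whether an aborted XHR actually frees the PHP process depends on output buffering and the server setup (e.g., mod_php vs FastCGI), which is exactly the kind of thing a host-side change could alter.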
I have WAMP installed on Windows 7 as my test environment, and when I export the same database and run the stress test locally, it is fast, just like it used to be on the server. But on Windows I don't have a process monitor as handy as top, so I cannot see whether PHP processes are actually created and killed accordingly.
I'm not sure how to troubleshoot from this point.
So I'm building an application that monitors Raspberry Pi based devices on a network. The devices run a program that provides a statistical array about the device's performance, which we need to log; you can access this array via a socket connection to the device on a port. The network currently has 100 of these devices but will soon grow to several hundred devices on a single network.
Currently the application approaches this by deploying a script via ssh2_scp to each of the devices. The application then runs through the list of local IPs, using stream_context_create and file_get_contents to ping the monitoring file on each remote device. The monitoring file gets the stat array from the local machine and then POSTs the data back to the app, which stores it in the DB.
This is not really ideal at the moment. It takes around 1:45 min to cycle through the IPs, check them (in a cheat fashion, using a counter $i++ and a while loop to cycle through a range of numbers rather than getting all the IPs from the database, which it will need to do when more IPs and locations are added), retrieve the results, and insert them into the DB. With the cron job set to run the ping script every 2 mins, this will exceed the 2-minute gap as the number of devices increases and start building a backlog of data. On top of that, there isn't really any way to check whether the stream context actually retrieves any data, or to check separately whether a device is operating correctly from the data it submits back.
The server the application sits on is a massive beast, so computational power on that side is not a problem. But on the Raspberry Pi devices I'd prefer not to run any web server (at most, maybe PHP's built-in web server), both for security and because their primary purpose is not to be web servers.
I've been looking at running PHP daemon services from the command line, and wondering if it's better to run the monitoring application as a daemon that establishes socket connections directly to the machines to send and retrieve data. If I went down this road, how would I approach it? Would I create a daemon script for the monitored devices that listens on a port and returns the stat array through it, and then have the monitoring application's daemon establish connections to each device and feed the data in?
Any advice on the best/most efficient way of doing this is highly appreciated.
you can access this array via a socket connection to the device on a port.
I would simply create a script that accepts a range of hosts as a parameter, then hits this port with netcat. The script iterates through the specified hosts and dumps the results into the database. This removes any need for execution on the Raspberry Pi side. For scalability, simply run several copies of the script, each with a different range of devices, simultaneously on your beastly server.
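A rough PHP CLI version of that idea, with fsockopen standing in for netcat; the subnet, port (5000), credentials, and table layout are all assumptions:

<?php
// poll_devices.php -- read the stats port on a range of devices and log it.
$db = new PDO('mysql:host=localhost;dbname=monitor', 'user', 'pass');
$insert = $db->prepare("INSERT INTO stats (ip, payload, polled_at) VALUES (?, ?, NOW())");

// e.g.  php poll_devices.php 1 100
[$from, $to] = [(int)$argv[1], (int)$argv[2]];
for ($i = $from; $i <= $to; $i++) {
    $ip = "192.168.1.$i";
    $sock = @fsockopen($ip, 5000, $errno, $errstr, 2); // 2 s connect timeout
    if (!$sock) {
        // A failed connect doubles as a health check: flag the device.
        $insert->execute([$ip, 'UNREACHABLE']);
        continue;
    }
    stream_set_timeout($sock, 2);
    $payload = stream_get_contents($sock); // the device's stat array
    fclose($sock);
    $insert->execute([$ip, $payload]);
}

Run several in parallel, e.g. php poll_devices.php 1 100 alongside php poll_devices.php 101 200, to keep the full cycle inside the cron window as the fleet grows.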
I'm trying to have 100 Android devices display a text string based on a server parameter. When the text on the server changes from "Hello World" to "Everything Changed", I want all 100 Android devices to update simultaneously, ideally instantly, as soon as the change happens.
It runs on an isolated LAN so C2DM isn't feasible, and polling every second seems rather traffic-heavy (especially if there are 1000 devices later). Are there any recommendations on how to move from polling to pushing, or at least on making this scalable?
I've been considering just keeping the connection open, with the server returning content only when it changes, but I'm worried about timeout issues and PHP's capacity for handling hundreds of concurrent connections... Any pointers or advice to try?
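For reference, that "keep it open until it changes" idea is long polling. A minimal PHP sketch of it, assuming for simplicity that the current text lives in a file called display_text.txt:

<?php
// longpoll.php -- hold the request open until the text changes.
set_time_limit(0);
$current  = $_GET['have'] ?? '';   // the string the device already shows
$deadline = time() + 25;           // answer before typical timeouts hit
while (time() < $deadline) {
    $text = trim(file_get_contents('display_text.txt'));
    if ($text !== $current) {      // changed: answer immediately
        echo $text;
        exit;
    }
    usleep(250000);                // re-check four times a second
}
echo $current;                     // no change: the client reconnects and retries

Note that each held connection ties up one PHP worker for its whole duration, which is exactly the scaling concern raised above.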
You should not poll. If you are on a private network and the number of devices is limited, you are better off keeping TCP sockets open and sending data from the server to the clients over those open sockets.
But you must understand what you are doing, so read the following:
1) To understand how many connections can be opened, and whether that is enough for your needs:
How many socket connections possible?
2) If you have a lot of devices, I mean more than a thousand, you may hit limits on the server side. To avoid failing, you must read about async I/O:
http://en.wikipedia.org/wiki/Asynchronous_I/O and other resources found on the net,
and async I/O in PHP: Can PHP asynchronously use sockets?
I'm trying to index many hundreds of web pages.
In Short
Calling a PHP script using a cron job
Getting some (only around 15) of the least recently updated URLs
Querying these URLs using cURL
The Problem
In development everything went fine. But when I started to index much more than a few test pages, cURL refused to work after some runs. It does not get any data from the remote server.
Error messages
These are the errors cURL printed out (not all at once, of course):
couldn't connect to host
Operation timed out after 60000 milliseconds with 0 bytes received
I'm working on a V-Server and tried to connect to the remote server using Firefox and wget as well. Also nothing. But when connecting to that remote server from my local machine, everything works fine.
After waiting some hours, it works again for some runs.
To me it seems like a problem on the remote server, or some DDoS protection or something like that. What do you guys think?
You should be using proxies when you send out that many requests, as your IP can be blocked by the site's DDoS protection or similar setups.
Here are some things to note (what I used for scraping data from websites; see the sketch after this list):
1. Use proxies.
2. Use random user agents.
3. Random referers.
4. Random delays between cron runs.
5. Random delays between requests.
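A sketch of points 1-3 and 5 applied to a single cURL fetch; the user-agent and referer lists are placeholders you would fill with real values:

<?php
// One fetch with a randomized user agent, referer, and delay.
$agents   = ['Mozilla/5.0 (placeholder agent A)', 'Mozilla/5.0 (placeholder agent B)'];
$referers = ['https://www.google.com/', 'https://www.bing.com/'];

function fetch($url, array $agents, array $referers) {
    sleep(rand(2, 10)); // 5. random delay between requests
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_USERAGENT      => $agents[array_rand($agents)],     // 2.
        CURLOPT_REFERER        => $referers[array_rand($referers)], // 3.
        CURLOPT_TIMEOUT        => 60,
        // CURLOPT_PROXY       => 'http://user:pass@proxyhost:port', // 1.
    ]);
    $body = curl_exec($ch); // false on failure
    curl_close($ch);
    return $body;
}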
What I would do is make the script run forever and add a sleep between requests.
ignore_user_abort(1); // keep running even after the visitor who triggered it disconnects
set_time_limit(0);    // lift PHP's maximum execution time
Just trigger it by visiting the URL for a second, and it will run forever.
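Fleshed out slightly (index_next_batch() is a hypothetical stand-in for the existing fetch-15-URLs logic):

<?php
ignore_user_abort(1); // keep running after the triggering visitor disconnects
set_time_limit(0);    // no execution time limit

while (true) {
    index_next_batch();   // hypothetical: fetch and index the next ~15 URLs
    sleep(rand(60, 180)); // random pause so the traffic looks less bot-like
}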
How often is the script run? It really could be triggering some DoS-like protection. I would recommend implementing some random delay between requests to make them appear more "natural".