I am working on a PHP product application that will be deployed on several client servers (say 500+). My client's requirement is that when a new feature is released in this product, we need to push all the changes (files & DB) to the clients' servers using FTP/SFTP. I am a bit concerned about transferring files via FTP to 500+ servers at a time, and I am not sure how to handle this kind of situation. I have come up with some ideas, like:
When the user (product admin) clicks update, we send an AJAX request which updates the first 10 servers and returns the count of remaining servers. Based on the AJAX response, we send another request for the next 10, and so on.
OR
Create a cron job which runs every minute, checks whether any update is active, and updates the first 10 active servers. Once it completes the transfer for a server, it changes that server's status to 0.
I just want to know: is there any other method to do this kind of task?
Add the whole codebase to a version control system like Git and push all the existing files to the new repository. Then write a cron job that automatically pulls the repository onto the server, and install that cron job on every server.
In the future, if you want to add a new feature, just pull the repository, add the feature, and push the code again; it will be pulled automatically by the cron job on every server where you installed it.
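For illustration, assuming the application lives in /var/www/app and tracks the master branch of a remote named origin (all of these are assumptions, not fixed choices), the cron job on each client server could invoke a small PHP wrapper like this:

<?php
// update.php - run by cron, e.g. */5 * * * * php /var/www/app/update.php
// Pulls the latest code from the central repository.

$appDir = '/var/www/app';
$output = [];
$status = 0;

// -C lets git run against the application directory regardless of the cron working directory.
exec('git -C ' . escapeshellarg($appDir) . ' pull origin master 2>&1', $output, $status);

if ($status !== 0) {
    // Log failures so a broken pull on one client server gets noticed.
    error_log("Auto-update failed:\n" . implode("\n", $output));
}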
First, I would like to provide some suggestions and insight into your suggested methods:
In both methods, you'll have to keep a list of all the servers where your application has been deployed and keep track of whether the update has been applied to each one. That can become difficult if in the future you want to scale from 500+ to, say, 50,000+.
You are also not considering the case where a target server might not be functioning at the time you send the update request.
Instead of sending the update from your end to the target servers, I suggest you achieve the same thing in the opposite direction. Since, as you said, you are developing an entire PHP application to be deployed on the client servers, I suggest you build an update module into it. Each target server can send a request to your servers at a designated time to check whether an update is available. If there is, I suggest the following two ways to proceed:
You send an update list providing the names and paths of the files to be updated, along with any DB changes, which can then be processed on the client side accordingly.
You just send a response saying an update is available; a separate process then launches on the client server which downloads all the files and DB changes from your server.
For keeping updates consistent across servers, you can implement a token system, or rely on the timestamp at which the update happened.
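A minimal sketch of such an update check running on the client server, assuming a hypothetical vendor endpoint https://updates.example.com/check that accepts the currently installed version and returns a JSON update list (the URL, parameter names, and response format are all assumptions):

<?php
// check_update.php - run on each client server, e.g. via cron at a designated time.

$currentVersion = trim(file_get_contents('/var/www/app/VERSION'));

$response = file_get_contents(
    'https://updates.example.com/check?version=' . urlencode($currentVersion)
);
$update = json_decode($response, true);

if (!empty($update['available'])) {
    foreach ($update['files'] as $file) {
        // Download each changed file from the vendor server to its local path.
        copy($file['url'], '/var/www/app/' . $file['path']);
    }
    // DB changes could be shipped as SQL statements in the same response and
    // applied here with the application's existing PDO connection.
    file_put_contents('/var/www/app/VERSION', $update['version']);
}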
I'm developing a web app using Laravel (a PHP framework). The app is going to be used by about 30 of my co-workers on their Windows laptops.
My co-workers interview people on a regular basis. They will use the web app to add a new profile to a database once they interview somebody for the first time and they will append notes to these profiles on subsequent visits. Profiles and notes are stored using MySQL, but since I'm using Laravel, I could easily switch to another database.
Sometimes, my co-workers have to interview people when they're offline. They might visit a group of interviewees, add a few profiles and add some notes to existing ones during a session without any internet access.
How should I approach this?
1. With a local web server on every laptop. I've seen applications ship with some kind of installer including a LAMP stack, but I can't find any documentation on this.
2. I could install the app and something like XAMPP on every laptop myself. That would be possible, but in the future more people might use the app and not all of them might be located nearby.
3. I could use Service Workers, maybe in connection with a library such as UpUp. This seems to be the most elegant approach.
I'd like to give option (3) a try, but my app is database driven and I'm not sure whether I could realize this approach:
Would it be possible to write all the (relevant) data from the DB to - let's say - a JSON file which could be accessed instead of the DB when in offline mode? We don't have to handle much data (less than 100 small data records should be available during an interview session).
When my co-workers add profiles or notes in offline mode, is there any "web service" way to later insert the data they entered into the DB?
Thanks
Pida
I would think of it as building the app in "two parts".
First, the front end uses AJAX calls to the back end (which is nothing but a REST API). If there isn't any network connection, store the data in the browser using local storage.
When the user later has a network connection, you send the data sitting in local storage to the back end and clear the local storage.
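On the back-end side (the part that stays PHP/Laravel), the sync endpoint could be as simple as the following sketch. The route name, payload shape, and the Profile/Note models are assumptions for illustration, not part of your existing app:

<?php
// routes/api.php - hypothetical endpoint receiving data that was queued offline.

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
use App\Models\Profile;
use App\Models\Note;

Route::post('/sync', function (Request $request) {
    // The client sends everything it queued while offline in one batch.
    foreach ($request->input('profiles', []) as $profile) {
        Profile::create($profile);
    }
    foreach ($request->input('notes', []) as $note) {
        Note::create($note);
    }

    return response()->json(['status' => 'ok']);
});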
If you add web servers on the laptops, the databases and data will only live on each local laptop and will not be synced.
You can build what you describe using service workers to cache your site's static content to make it available offline, and a specific fetch handler in the service worker to detect a failed PUT or POST and queue the data in IndexedDB. You'd then periodically check IndexedDB for any queued data when your web app is loaded, and attempt to resend it.
I've described this approach in more detail at https://developers.google.com/web/showcase/case-study/service-workers-iowa#updates-to-users-schedules
That article assumes the use of the sw-precache library for caching your site's static assets, and the sw-toolbox library to provide runtime fetch handlers that check for failed business-logic requests. It also uses a promise-based IndexedDB wrapper called simpleDB, although I'd probably go with the more recent idb library nowadays.
I started with the PHP quick start and created a new PHP file that fetches information from a website and sends it to Glass. How would I send this information at 8 AM every morning?
That depends on your hosting platform.
You can set up a cron job (a scheduled entry in a crontab file) to do this if you're hosting on a typical Apache/Linux server. If you're hosting your PHP files on Windows Azure, you can use the Scheduler to call your file on whatever schedule you want.
Then, to send the data to Glass, you would just have the PHP file make a call to the API with the completed data. This way, if you need to change when you're sending the data, you don't have to change your code, just the scheduler settings.
There are two things you need to do, broadly speaking:
You need to store the user's auth token and refresh token (they are usually sent in an object or array and should be stored that way). The quickstart takes actions for a user who is hitting your website, so you're fetching the tokens at that moment. You need to save them after the first time the user visits your site so you can use the authorization information later when you want to send information to them.
You need some way to schedule a job to run at 8am every morning, trigger the job at that time, and for that job to trigger your web server to take the appropriate action. This is a problem outside the Mirror API and is usually done with something like cron on UNIX or Task Scheduler on Windows. There are several PHP packages which appear to do the same sort of thing.
Update: You can store the credentials using whatever data store you're familiar with. PHP has modules that work with MySQL, for example, but this isn't your only option.
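As a rough sketch of that (the table name, column names, and the cron line are illustrative assumptions), you could persist the tokens with PDO/MySQL like this; the cron-triggered script would then load each row, refresh the token if it has expired, and make the Mirror API call for that user:

<?php
// Hypothetical token storage backing the cron-driven 8 AM job.
// Crontab entry on the web server: 0 8 * * * php /path/to/send_cards.php

$pdo = new PDO('mysql:host=localhost;dbname=glassapp', 'user', 'pass');

// Called once, right after the quickstart's OAuth flow hands you the tokens.
function saveCredentials(PDO $pdo, string $userId, string $tokenJson): void
{
    $stmt = $pdo->prepare(
        'REPLACE INTO credentials (user_id, token_json) VALUES (?, ?)'
    );
    $stmt->execute([$userId, $tokenJson]);
}

// Called by send_cards.php at 8 AM for every stored user.
function loadAllCredentials(PDO $pdo): array
{
    return $pdo->query('SELECT user_id, token_json FROM credentials')
               ->fetchAll(PDO::FETCH_ASSOC);
}

// For each row: set the stored token on a Google_Client, refresh it if expired,
// and insert the timeline card via the Mirror API client (omitted here).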
Yes, you need to create and send a card for each user (i.e., each auth code). Each card belongs to one user only, so a user can delete their copy of the card without affecting another user's card.
I am designing an app that will have to periodically check a web address for updates.
The updates will be in MySQL database tables.
I know the ideal way to do this is to create a service that is constantly running and checks for an update periodically (let's say every 10 seconds).
Below are some questions that are unclear to me as I start my quest to accomplish this task.
Do I need to manually check for an update every time from the client to the server (that is, take a value on the client side, send it to the server side, and do a head-to-head comparison), or can the PHP/MySQL server send a notification that an update took place?
I came across an RSS feature in several posts on SO; however, those tend to be news applications, and mine is nothing like that. Does it help to use RSS feeds here?
I'm planning on doing the check every 10 seconds. This means that from the moment the app is installed on the device (and as long as there is internet connectivity), I will keep polling the server. This consumes bandwidth and RAM/CPU on the client side. How do big apps like Viber and WhatsApp manage this?
Just thinking out loud: to avoid that hassle, I was thinking that with each update on the server, I could send a notification to the user with a certain code and handle the logic inside onReceive; if the code is related to an update, (1) don't show the notification to the user, and (2) run the server update check.
Ideas?
The trouble is that traditional PHP/HTML systems are client-centric (the client has to initiate all communication).
You might want to look into websockets or Node.js. These technologies allow you to create a persistent connection between the server and its clients, which in turn allows you to 'push' data to the client when appropriate, without the client having to ask for it every X minutes.
Node.js Chat Example
Node.js info
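If you'd rather stay in PHP than add Node.js, the Ratchet library (a swapped-in alternative, not something the question mentions) offers the same persistent-connection model. A minimal push-capable sketch, with class name and port as arbitrary choices:

<?php
// Requires: composer require cboden/ratchet
use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

class UpdatePusher implements MessageComponentInterface
{
    protected \SplObjectStorage $clients;

    public function __construct()
    {
        $this->clients = new \SplObjectStorage();
    }

    public function onOpen(ConnectionInterface $conn)
    {
        $this->clients->attach($conn);   // remember each connected client
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        // Not needed for pure server push; clients don't have to poll.
    }

    public function onClose(ConnectionInterface $conn)
    {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }

    // Invoked when an update occurs; in a real setup this would be wired into
    // the event loop (a periodic timer, a queue, etc.).
    public function broadcastUpdate(string $payload): void
    {
        foreach ($this->clients as $client) {
            $client->send($payload);
        }
    }
}

// Run the websocket server on port 8080 (the port is an arbitrary choice).
IoServer::factory(new HttpServer(new WsServer(new UpdatePusher())), 8080)->run();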
Anyone who works with cached client systems knows that sometimes you have to update both server and client files. So far I've managed to partially solve the problem by making one call every time the software is opened, asking PHP what version the server is on. I compare the result to the version Flex is on, and voilà. The problem is, whenever I need to make an emergency update during business hours, it's impossible to know how many clients already have the Flex app open.
So to sum up: I solved the cache problem by checking the version at start-up time; if your browser cached an old build, the version won't match the server's.
The only solution I can think of for the "already opened app" problem is to put a gateway between the PHP services and the Flex calls, where I would pass the Flex version and compare it inside the gateway before the service is actually called, although I don't like this solution.
Any ideas?
Thanks.
You can download this application from the Adobe website: http://labs.adobe.com/technologies/airlaunchpad/ It will allow you to build a new test app; you need to select the "auto update" property in the menu. That will generate all the necessary files for you, both for server and client.
The end result is a server-based XML file, plus setup in each client app to check on a recurring basis whether the XML file offers a newer version of the application and, if so, automatically download and install it. You can change the "check for update" frequency to your liking in the source code; by default it is tied to the application open event.
This recurring check also looks for updates while the app is open, so it should solve your problem.
Okay, in my head this is somewhat complicated and I hope I can explain it. If anything is unclear please comment, so I can refine the question.
I want to handle user file uploads to a 3rd server.
So we have
the User
the website (the server the website runs on)
the storage server (which receives the file)
The flow should be like:
The website requests an upload URL from the storage cloud's gateway that points directly to the final storage server (something like http://serverXY.mystorage.com/upload.php). Along with the request, a "target path" (website-specific and globally unique) and a redirect URL are sent.
The website generates an upload form with the storage server's upload URL as its target; the user selects a file and clicks the submit button. The storage server handles the POST request, saves the file to a temporary location (which is '/tmp-directory/'.sha1(target-path-fromabove)), and redirects the user back to the redirect URL specified by the website. The "target path" is also passed along (see the sketch after this list).
I do not want any "ghost files" to remain if the user cancels the process or the connection gets interrupted! Entries in the website's database that have not been correctly processed in the storage cloud and end up broken must also be avoided. That's the reason for this and the next step.
These are the critical steps:
The website now writes an entry to its own database and issues a RESTful request to the storage API (signed; the website has to authenticate with a secret token) that:
copies the file from its temporary location on the storage server to its final location (this should be fast because it's only a rename)
the same REST request also inserts a database row into the storage network's database, with the website's ID as the owner
All files in the tmp directory on the storage server that are older than 24 hours get deleted automatically.
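Under these assumptions, the storage server's upload handler could be sketched like this (the field names target_path and redirect_url are illustrative, not fixed):

<?php
// upload.php on the storage server - handles the user's POST and redirects back.

$targetPath  = $_POST['target_path'];   // website-specific, globally unique
$redirectUrl = $_POST['redirect_url'];  // where to send the user afterwards

// Save the upload under a name derived from the target path, as described above.
$tmpFile = '/tmp-directory/' . sha1($targetPath);
move_uploaded_file($_FILES['file']['tmp_name'], $tmpFile);

// Send the user back to the website, passing the target path along.
header('Location: ' . $redirectUrl . '?target_path=' . urlencode($targetPath));
exit;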
If the user closes the browser window or the connection gets interrupted, the program flow on the server gets aborted too, right?
Only destructors and registered shutdown functions are executed, correct?
Can I somehow make this part of the code "critical", so that once the server enters it, it executes it to the end regardless of whether the user aborts the page load?
(Of course I am aware that a server crash or an error may interrupt at any time, but my concerns are about the regular flow now)
One idea of mine was to have a flag and a timestamp in the website's database that mark the file as "completed", and to check in a cron job for old incomplete files and delete them from the storage cloud and then from the website's database, but I would really like to avoid this extra field and procedure.
I want the storage api to be very generic and use it in many other future projects.
I had a look at Google Storage for Developers and Amazon S3.
They have the same problem, and even worse. In Amazon S3 you can "sign" your POST request, so the file gets uploaded by the user under your authority, is saved and stored directly, and you have to pay for it.
If the connection gets interrupted and the user never gets back to your website, you don't even know about it.
So you have to store all the upload URLs you sign, check them in a cron job, and delete everything that hasn't "reached its destination".
Any ideas or best practices for that problem?
If I'm reading this correctly, you're performing the critical operations in the script that is called when the storage service redirects the user back to your website.
I see two options for ensuring that the critical steps are performed in their entirety:
Ensure that PHP is ignoring connection status and is running scripts through to completion using ignore_user_abort() (see the sketch after this list).
Trigger some back-end process that performs the critical operations separately from the user-facing scripts. This could be as simple as dropping a job into the at queue if you're using a *NIX server (man at for more details) or as complex as having a dedicated queue management daemon, much like the one LrdCasimir suggested.
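For option 1, the sketch is short; commitUpload() below is a hypothetical placeholder for whatever code issues the signed REST request to the storage API:

<?php
// Option 1: keep running even if the user disconnects mid-request.
ignore_user_abort(true);   // don't stop when the client aborts the connection
set_time_limit(0);         // don't let the script time out mid-operation

// ... write the entry to the website's own database ...

// Hypothetical helper that sends the signed REST request telling the storage
// server to move the file out of its temporary location.
commitUpload($targetPath, $secretToken);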
Problems like this that I've faced have all had pretty time-consuming processes associated with their operation, so I've always gone with Option 2 to provide prompt responses to the browser and to free up the web server. Option 1 is easy to implement, but Option 2 is ultimately more fault-tolerant, as updates stay in the queue until they can be successfully communicated to the storage server.
The connection handling page in the PHP manual provides a lot of good insights into what happens during the HTTP connection.
I'm not certain I'd call this a "best practice", but here are a few ideas on a general approach to this kind of problem. One, of course, is to let the REST request to the storage server happen asynchronously, either via a daemonized process that listens for incoming requests (by watching a file for changes, a socket, shared memory, a database, or whatever you think is best for IPC in your environment) or via a very frequently running cron job that picks up and delivers the files. The benefit is that you can deliver a quick message to the user who uploaded the file, while the background process can retry if there's a connectivity issue with the REST service. You could even go as far as having some AJAX polling in place so the user gets a nice JS message when the REST process completes.
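A stripped-down sketch of the cron-job variant, assuming pending commits are queued as rows in the website's database (the table, column names, and API endpoint are made up for illustration):

<?php
// deliver_uploads.php - run every minute from cron; delivers queued commits.
$pdo = new PDO('mysql:host=localhost;dbname=website', 'user', 'pass');

$pending = $pdo->query(
    'SELECT id, target_path FROM pending_uploads WHERE delivered = 0'
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($pending as $row) {
    // Send the signed REST request to the storage API.
    $ch = curl_init('https://api.mystorage.com/commit');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => ['target_path' => $row['target_path'],
                                   'token'       => 'SECRET_TOKEN'],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($ch);
    $ok = curl_getinfo($ch, CURLINFO_HTTP_CODE) === 200;
    curl_close($ch);

    if ($ok) {
        // Mark as delivered only once the storage server confirms the move.
        $stmt = $pdo->prepare('UPDATE pending_uploads SET delivered = 1 WHERE id = ?');
        $stmt->execute([$row['id']]);
    }
    // If not ok, the row stays queued and the next cron run retries it.
}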