My use case is, e.g., to send three FB and two GA/Firebase events from server-side code to the corresponding tracking endpoints. But that significantly increases the loading time of each page view.
I tried to implement it as a loop as described here, and tried wp_remote_post's 'blocking' argument in combination with 'timeout' => 0.01, but all of these still increase the page load significantly. Adding the redundant 'method' argument to wp_remote_post as mentioned here helped a bit, which is very confusing to me.
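Roughly what one of these calls looks like (simplified sketch; the endpoint URL and the payload below are just placeholders):

// Simplified sketch of the non-blocking attempt (placeholder endpoint and payload).
$args = array(
    'method'   => 'POST',   // the "redundant" method argument mentioned above
    'timeout'  => 0.01,     // give up waiting for a response almost immediately
    'blocking' => false,    // don't wait for the response at all
    'body'     => array( 'event' => 'Purchase', 'value' => 10 ), // placeholder payload
);
wp_remote_post( 'https://example.com/track', $args ); // placeholder tracking endpoint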
Without the POST requests the page loads in 70 ms; with them it goes up to 600 ms. I couldn't see much difference in load time from the blocking argument.
I also tried wp_schedule_single_event( time(), 'trigger_async_schedule_hook' );, but this didn't work in my local Docker setup, presumably due to missing cron capabilities.
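For completeness, a minimal sketch of that cron attempt (only the hook name is from my actual code; the callback name and its body are placeholders):

// WP-Cron sketch: defer the tracking calls to a later request (callback body assumed).
add_action( 'trigger_async_schedule_hook', 'send_tracking_events_async' );
function send_tracking_events_async() {
    // here the FB / GA requests could be sent with normal, blocking wp_remote_post() calls
}
wp_schedule_single_event( time(), 'trigger_async_schedule_hook' );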
Since PHP's execution is tied to the HTTP request, this doesn't seem really possible at first glance?! I'm used to Node, where you can send requests and await them in another context, i.e. after the page has been delivered.
What is the way to go here? I thought about an external queue outside of PHP execution, but that seems like overkill. Other WP async ideas would also be very helpful.
PS: I searched the whole internet but couldn't find an exact solution.
Cheers
I'm making 3 subsequent AJAX requests to a PHP file.
$.post("page/1");
$.post("page/2");
$.post("page/3");
They all seem to wait for one another to finish before firing. I can watch the activity in Network Console, and see that they really do wait for each other. I can't figure out how to get them to fire all at once.
I am using PHP sessions, which I saw on here might be the issue. But I added session_write_close() as suggested and that doesn't help. I don't think PHP is the issue, because in that case the browser would at least have made the 3 AJAX calls and they would then queue up wrongly on the server. But that's not the case; they all wait for each other, one after another.
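For reference, this is roughly the pattern that was suggested (simplified sketch; the $_SESSION read is just illustrative):

// page.php sketch: read what you need, then release the session lock before slow work.
session_start();
$user = isset( $_SESSION['user'] ) ? $_SESSION['user'] : null; // illustrative read
session_write_close();   // other requests from the same session can now run in parallel
// ... long-running work continues here without holding the session lock ...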
I've checked jQuery's $.ajaxSettings and async is set to true. I'm using jQuery 1.7.2 from the CDN.
Any ideas on where to look on how to get these to fire all at once?
Thanks!
By default, a browser only allows a limited number of concurrent HTTP requests; 4 is a common default.
For XMLHttpRequests it's sometimes 6 (e.g. Firefox), but it varies.
The limits appear to be applied per domain.
See this related question.
So here's the lowdown:
The client I'm developing for is on HostGator, which has limited their max_execution_time to 30 seconds, and it cannot be overridden (I've tried, and confirmed via their support and wiki that it can't).
What I have the code doing is take an uploaded file and...
loop through the XML
get all feed download links within the file
download each XML file
individually loop through the parsed XML array of each file and insert each item's information into the database based on where it came from (i.e. the filename)
Now, is there any way I can queue this somehow, or possibly split the workload across multiple files? I know the code works flawlessly and checks to see if each item exists before inserting it, but I'm stuck on getting around the execution time limit.
Any suggestions are appreciated, let me know if you have any questions!
The time limit is only in effect when executing PHP scripts through a web server; if you execute the script from the CLI or as a background process, it should work fine.
Note that executing an external script is somewhat dangerous if you are not careful enough, but it's a valid option.
Check the following resources:
Process Control Extensions
And specifically:
pcntl-exec
pcntl-fork
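For example, a minimal pcntl_fork() sketch of that idea (do_feed_import() is a made-up placeholder, and pcntl is generally only available when PHP runs from the CLI):

// Fork a child process to do the import so the parent can return quickly (CLI only).
$pid = pcntl_fork();
if ( $pid === -1 ) {
    exit( "fork failed\n" );
} elseif ( $pid === 0 ) {
    // child: download the feeds and insert the items, free of the web server's time limit
    do_feed_import();   // placeholder for the existing import code
    exit( 0 );
}
// parent: continues (or exits) immediately; pcntl_waitpid( $pid, $status ) could reap the child later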
Did you know you can trick the max_execution_time by registering a shutdown handler? Within that code you can run for another 30 seconds ;-)
Okay, now for something more useful.
You can add a small queue table to your database to keep track of where you are in case the script dies mid-way (a rough sketch follows after these steps).
After getting all the download links, you add those to the table
Then you download one file and process it; when you're done, you check it off (delete it from) the queue
Upon each run you check if there's still work left in the queue
For this to work you need to request that URL a few times; perhaps use JavaScript to keep reloading until the work is done?
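A rough sketch of that per-request, one-item-at-a-time loop (table name, column names and the import_feed() helper are all made up):

// Process exactly one queued feed per request; JS on the page keeps re-requesting until empty.
$pdo = new PDO( 'mysql:host=localhost;dbname=app', 'user', 'pass' );   // placeholder credentials
$row = $pdo->query( 'SELECT id, url FROM feed_queue ORDER BY id LIMIT 1' )->fetch( PDO::FETCH_ASSOC );
if ( $row ) {
    import_feed( $row['url'] );   // placeholder: download the XML and insert its items
    $pdo->prepare( 'DELETE FROM feed_queue WHERE id = ?' )->execute( array( $row['id'] ) );
    echo 'more';                  // tell the page to request again
} else {
    echo 'done';                  // queue is empty, the JS can stop reloading
}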
I am in such a situation myself. My approach is similar to Jack's:
accept that the execution time limit will simply be there
design the application to cope with a sudden exit (look into register_shutdown_function; a small sketch follows after this list)
identify all time-demanding parts of the process
continuously save the progress of the process
modify your components so that they are able to start from an arbitrary point, e.g. a position in an XML file, or to continue downloading your to-be-fetched list of XML links
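A small sketch of that "save progress and survive the cut-off" idea (save_progress(), load_progress() and the item iterator are all hypothetical):

// Resume from the last saved position; persist progress so a sudden exit loses little work.
$position = load_progress();                       // hypothetical: read the last position from DB/file
register_shutdown_function( function () use ( &$position ) {
    save_progress( $position );                    // hypothetical: runs even if the time limit kills us
} );
foreach ( get_remaining_items( $position ) as $i => $item ) {   // hypothetical iterator
    process_item( $item );                         // hypothetical per-item work
    $position = $i;
}
save_progress( $position );                        // normal completion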
For the task I made two modules: Import, for the actual processing, and TaskManagement, for dealing with these tasks.
For invoking the TaskManager I use CRON; whether that's enough depends on what your web hosting offers. There's also WebCron.
The advantage of Jack's JavaScript method is that it only adds requests when needed. If there are no tasks to be executed, the script runtime will be very short, and even that cost is perhaps overstated*. The downsides are that it requires the user to wait the whole time, not close the tab/browser, have JS support, etc.
*) Likely much less demanding than one click from one user at such a moment
Then, of course, look into performance improvements: caching, skipping what's not needed or hasn't changed, etc.
I'm working on a simple PHP application, using CouchDB and PHP-on-Couch to access some views, and it's working great. My next step is to introduce Ajax to update the frontend with data from the database.
I understand you can use the _changes notifications to detect any changes made to the database easily enough. So it's a matter of index.html monitoring for changes (through long polling), which then calls loadView.php to update the page content.
Firstly, I hope the above is the correct method of going about it...
Secondly, when browsing to index.html, the page seems to never fully load (the page-load bar never completes). When a change is made, Firebug shows the results as expected, but not any subsequent changes. At that point, the page seems to have stopped its infinite loading.
So far, I'm using jQuery to make the Ajax call...
$.getJSON('http://localhost:5984/db?callback=?', function(db) {
    console.log(db.update_seq);
    $.getJSON('http://localhost:5984/db/_changes?since=' + db.update_seq + '&feed=continuous&callback=?', function(changes) {
        console.log(changes);
    });
});
Any ideas what could be happening here?
I believe the answer is simple enough.
A longpoll query is AJAX, guaranteed to respond only once, like fetching HTML or an image. It may take a little while to respond while it waits for a change; or it may reply immediately if changes have already happened.
A continuous query is COMET. It will never "finish" the HTTP reply, it will keep the connection open forever (except for errors, crashes, etc). Every time a change happens, zoom, Couch sends it to you.
So in other words, try changing feed=continuous to feed=longpoll and see if that solves it.
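And if you ever proxy the changes feed through PHP (for example via something like your loadView.php; that's purely an assumption on my part), a server-side longpoll request is the same idea, roughly:

// Longpoll sketch: block until the next change after $since, answer once, then return.
$since = isset( $_GET['since'] ) ? (int) $_GET['since'] : 0;
$json  = file_get_contents( 'http://localhost:5984/db/_changes?feed=longpoll&since=' . $since );
header( 'Content-Type: application/json' );
echo $json;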
For background, I suggest the CouchDB Definitive Guide on change notifications and of course the excellent Couchbase Single Server changes API documentation.
I have a website that's about 10-12 pages strong and uses jQuery/JavaScript throughout. Since not all scripts are necessary on each and every page, I'm currently using a switch statement to output only the needed JS on any given page, so as to reduce the number of requests.
My question is: how efficient is that, performance-wise? If it is not, is there any other way to selectively load only the JS needed on a page?
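For illustration, this is the kind of thing I mean by the switch approach ($page, the page keys, and the file names are all placeholders):

// Output only the script tags this page actually needs (all names are placeholders).
switch ( $page ) {
    case 'gallery':
        echo '<script src="/js/jquery.js"></script><script src="/js/gallery.js"></script>';
        break;
    case 'contact':
        echo '<script src="/js/jquery.js"></script><script src="/js/validate.js"></script>';
        break;
    default:
        echo '<script src="/js/jquery.js"></script>';
}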
This may not be necessary at all.
Bear in mind that if your caching is properly set up, embedding a JavaScript file will take time only on the first load - every subsequent request will come from the cache.
Unless you have big exceptions (like, a specific page using a huge JS library), I would consider embedding everything at all times, maybe using minification so everything is in one small file.
I don't see any performance issues with the method you are using, though. After all, it's about deciding whether to output a line of code or not. Use whichever method is most readable and maintainable in the long term.
Since you're using JS already, you can go with a completely JS-based solution, for example yepnope instead of PHP. I don't know the structure of your website, how you determine which page needs what, or at what point something is included (on load, or after some remote call has finished delivering data). However, if you use $.ajax extensively, you could also use yepnope to pull in additional JS once $.ajax is done with what it was supposed to do.
You can safely assume the JavaScript is properly cached on the client side.
Assuming you also serve a minified file, and given the size of your website, I'd say the performance impact is negligible.
It is much better to place ALL your JavaScript in a single separate ".js" file and reference this file in your pages.
The reason is that the browser will cache this file efficiently and it will only be downloaded once per session (or less!).
The only downside is you need to "refresh" a couple of times if you change your script.
So, after tinkering a bit, I decided to give LABjs a try. It does work well, and my code is much less bloated as a result. No noticeable increase in performance given the size of my site, but the code is much, much more maintainable now.
Funny thing is, I had a Facebook Like button in my header. After analyzing the requests in Firebug I decided to remove it, and gained an astounding 2 seconds on the page's loading time. Holy crap is this damn thing inefficient...
Thanks for the answers, all!
I have a while loop that constructs a URL for an SMS API.
This loop will eventually be sending hundreds of messages, thus hundreds of URLs.
How would I go about doing this?
I know you can use header('Location: ...') to change the location of the browser, but that isn't going to work, as the PHP page needs to remain running.
Hope this is clear.
Thank you
You have a few options:
file_get_contents, as Trevor noted
curl_* - use the cURL functions to make the requests
fsock* - handle the connection at a lower level, making and managing the socket connection yourself
All will probably work just fine and you should pick one depending on your overall needs.
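As a rough cURL sketch of the loop (the gateway URL, its parameters, and the numbers are placeholders):

// Request each SMS URL from the server side; reuse one cURL handle for all of them.
$numbers = array( '15551234567', '15557654321' );            // placeholder recipients
$ch = curl_init();
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
curl_setopt( $ch, CURLOPT_TIMEOUT, 10 );
foreach ( $numbers as $to ) {
    $url = 'https://sms.example.com/send?to=' . urlencode( $to ) . '&msg=' . urlencode( 'Hello' );
    curl_setopt( $ch, CURLOPT_URL, $url );
    $response = curl_exec( $ch );                            // check/log $response per the gateway's API
}
curl_close( $ch );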
After you construct each $url, use file_get_contents($url)
If it is just a case of getting the error "Maximum execution time exceeded" while constructing all these URLs, then add set_time_limit(10); after each URL generation to give your script an extra 10 seconds to generate the next URL.
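A small sketch of that idea, combined with the file_get_contents suggestion above (build_sms_url() is a made-up placeholder):

// Each call to set_time_limit() restarts the timeout counter, so every iteration gets fresh headroom.
foreach ( $numbers as $to ) {
    set_time_limit( 10 );                          // 10 more seconds from this point on
    $url = build_sms_url( $to );                   // placeholder for the existing URL construction
    file_get_contents( $url );                     // fire the request to the SMS API
}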
I'm not quite sure what you are actually asking in this question - do you want the user to visit the URLs (if so, does the end user's web browser support JavaScript?), just to be shown the URLs, the URLs to be generated and stored, or the PHP script to fetch each URL (and do you care about the user seeing the result)? If you clarify the question, the community may be able to provide you with a perfect answer!
Applying a huge amount of guesswork, I infer from your post that you need to dynamically create a URL, and that invoking that URL causes an SMS message to be sent.
If this is the case, then you should not be trying to invoke the URL from the client but from the server side, using the URL wrappers or cURL.
You should also consider running the loop in a separate process and reporting back to the browser using (e.g.) AJAX.
Have a google for spawning long running processes in PHP - but be warned there is a lot of bad advice on the topic published out there.
C.