I have been programming for more than 15 years, and yesterday I was thinking about starting to use Laravel. But when I installed it (a fresh installation), its default "welcome" route had a 140ms response time from the server (just the server response, not loading anything else from the HTML). So: one route, no database, nothing.
I previously created my own framework, which had a 47ms response time while connecting to the database, executing a few queries, and working with templates (I created my own templating system). Both were tested on the same server.
Because of that I am a bit worried about the response time: will it hold up with a lot of users on the website?
I am trying to use Laravel only because I see a lot of job opportunities asking for Laravel developers.
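(Aside: an untuned PHP install penalizes a big framework the most, so OPcache is presumably the first thing to check before blaming Laravel. A minimal php.ini sketch, values illustrative, assuming the OPcache extension is available:)

; php.ini
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
opcache.validate_timestamps=0 ; production only: code changes then require a reload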
Evening All,
I'm at my absolute wits' end and hoping someone may be able to save me! I am in the process of migrating a number of PHP applications into Azure. I am using:
A Linux-based App Service running PHP 7.4 (2 vCPUs, 8 GB RAM) at a cost of £94 a month.
Azure Database for MySQL 8.0 (2 vCPUs) at £114 a month.
My PHP apps run well, with decent load times of under 1 second per page. WordPress performance, however, is awful: I am going from a 1 second page load to around 10 seconds, particularly on the back end. I have read all of the Azure guides and have implemented the following obvious points:
Both the App Service and the MySQL install are in the same data center
App Service is set to 'Always On'
Connection Redirection is set to Preferred and tested as working
The same app runs fine on a very basic £10 or so a month shared hosting package. I have also tried the same setup in Amazon Web Services today and page load is back to a second or so.
In the Chrome console, the delay is in TTFB (time to first byte). I have disabled all the plugins and none stands out as making a huge difference; each adds a second or so to the page load, suggesting a consistent issue whenever a page requires a number of database calls.
What is going on with Azure and the awful WordPress performance?! Is there anything else I can investigate or try? I'm really keen to stay with Azure but can't cope with the huge increase in cost for a performance hit.
The issue turned out to be the way the file system runs in the App Service. It is NOT an issue with the database. The App Service architecture is just too slow at present with file reads/writes, of which WordPress does a lot. I investigated the various file cache options but none improved things enough.
I ended up setting up a fairly basic, and significantly cheaper, virtual machine running against the same database, and performance is hugely improved.
Not a great answer, but App Services are not up to WordPress at present!
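If you want to confirm the file-system bottleneck yourself, a rough probe like this (purely illustrative), dropped into any PHP page on the App Service, makes the latency visible:

<?php
// Time many small file reads - roughly what WordPress does on every
// request as it loads plugin and theme files.
$start = microtime(true);
for ($i = 0; $i < 1000; $i++) {
    file_get_contents(__FILE__);
}
printf("1000 small reads: %.1f ms\n", (microtime(true) - $start) * 1000);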
The comments below are correct. The "problem" is the database. You can either move MySQL to a Virtual Machine (which will give you better performance) or try cache plugins such as WP Super Cache, as well as decreasing the number of requests.
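For example, with WP Super Cache installed, a couple of wp-config.php constants help (illustrative; FS_METHOD in particular avoids slow file-access paths on App Service):

// wp-config.php
define('WP_CACHE', true);      // let WP Super Cache serve cached pages
define('FS_METHOD', 'direct'); // use direct PHP file access for updates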
You can find a full list of tips in the following link:
https://azure.microsoft.com/en-us/blog/10-ways-to-speed-up-your-wordpress-site-on-azure-websites/
PS: ignore the date, it's still relevant
I'm not a back-end developer and I don't know much about devops, but I will try.
I've been working with Docker containers as a front-end developer on many applications, mostly PHP-based: Laravel, Phalcon, Symfony, you name it.
One thing that has been bugging me for years now is the performance of the local environment, especially when it comes to (any?) databases. It doesn't matter whether I'm using a remote MySQL database on a server or set up a MySQL container on localhost: requests are always super slow compared to production; in the worst cases reloading some pages takes tens of seconds, if not minutes.
I'm wondering whether it is possible for a simple front-end dev to set up a mocked MySQL database somehow, like an object that pretends to be my MySQL database with all the structure and data, but much faster? 99% of the time I do not want to update records in my databases anyway, so something read-only would be totally sufficient. Of course, I want this to work like a normal DB, meaning it can receive and send data.
I won't lie, I had the same problem with almost every single startup I worked on (except... WordPress :P), not only in PHP but also in other languages. I'm using a top-tier MacBook, so while its performance is still far from a real web server's, I think it should be sufficient.
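To illustrate what I'm imagining - something like an in-memory SQLite database loaded from a fixture, which speaks SQL through PDO much like MySQL does (a sketch only; a real MySQL dump would need its syntax adjusted, and 'fixture.sql' / 'products' are placeholder names):

<?php
// Hypothetical read-mostly stand-in: all data lives in RAM for the session.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec(file_get_contents('fixture.sql')); // SQLite-compatible dump
$rows = $db->query('SELECT * FROM products LIMIT 5')->fetchAll(PDO::FETCH_ASSOC);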
Thanks for any hints :)
I was going to port a custom webstore application to Laravel, so that it would be new, shiny, and a pleasure to support. After spending a couple of hours setting up the framework and starting to port the main layout, I decided to test how it performs. I installed a copy of my original app alongside the part ported to Laravel, where I set up a Section model with a translation from dimsav/laravel-translatable and output the main navigation with Section::all() of about 30 sections.
I was shocked when I saw the result of a simple observation of the Network tab in Firebug:
For my Laravel setup it takes roughly 360ms to render just the above. I can only imagine what the response time will be once the whole page is ported...
For the original app it takes ~30ms to serve the whole homepage with the same navigation, popular products, submenus, footer navigation, checking cart contents etc.
Both on the same virtual server and even using the same database, no caching in either.
I profiled the code to look for issues, but all I found is that:
6.4% (32ms) is spent on autoloading - more than the whole request takes in the other version,
12.5% (63ms) on service registration,
11.1% (56ms) on configuration loading,
44.5% on endless Pipeline calls.
Just to confirm: an empty Laravel app responds in about 17ms.
Am I missing something here? I imagined there would be some performance degradation when moving to the framework, but (assuming the response time grows proportionally for the complete setup) ~20x seems crazy. Are these times normal for Laravel, and is there any big win from using caching (Redis, for instance) or other optimization techniques? I wonder if there actually are any, except for caching?
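For instance, I assume the caching people talk about is something like this (a sketch; on recent Laravel versions the TTL argument is in seconds, on older ones it was minutes):

use Illuminate\Support\Facades\Cache;

// Cache the navigation query that the profile flagged.
$sections = Cache::remember('nav.sections', 3600, function () {
    return Section::all();
});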
On a $5 VPS instance from Digital Ocean with deployed apps I see response times:
512 MB memory
1 core processor
20 GB SSD disk
Lumen 60-120 ms
Laravel 100-300 ms
These particular apps do no caching. The 100-150 ms response time seems to be the base response time for Laravel with a view that has already been compiled. Obviously, having slow queries will increase that.
Adding Redis or Memcached will dramatically decrease your response time compared to going to the DB, in most cases getting pretty close to Laravel's base response time.
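Switching the cache and session drivers to Redis is just configuration, assuming a Redis server is running and the predis or phpredis client is installed:

# /.env
CACHE_DRIVER=redis
SESSION_DRIVER=redis
REDIS_HOST=127.0.0.1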
You should make sure to set APP_DEBUG to false in your .env file:
# /.env
APP_ENV=production
APP_DEBUG=false
Run php artisan optimize
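On recent Laravel versions you can also build the individual caches explicitly (each needs re-running after the corresponding files change):

php artisan config:cache
php artisan route:cache
php artisan view:cache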
I've built an AngularJS application over the last several months that uses a MySQL database for its data. The data is fetched by Angular making calls to PHP, and PHP returns JSON strings, etc.
The issue is that once this application is running inside node-webkit, none of the PHP works, so all of the content areas are empty. I assume (though documentation on this issue is nonexistent, so I have no confirmation) this happens because node-webkit is a client-side application framework and therefore won't run server-side languages like PHP. Is there a way to extend node-webkit to run PHP and other server-side languages?
I have done my best to find an answer to this question before posting, but documentation for this is nonexistent, and all of the information I have found about node-webkit talks about installing Node on your server, installing npm packages for MySQL, and having Angular make calls to Node. That defeats the purpose of the application entirely, as it is designed so that the exe/deb/rpm/dmg can run and you can set up a database with any cloud database provider and be ready to go. Not ideal if you have to buy a VPS just to run this one thing.
I have to assume this is possible in some way. I refuse to believe that everyone with an NW.js application hard-codes all their data.
Thanks in advance
I know of four methods to accomplish this. Some of these you would prefer not to do, but I am going to offer them in the hope that they help you or someone else.
Look for an npm package that can do this for you. You should be able to implement this functionality within Node.js itself - https://www.npmjs.com/search?q=mysql
You can host your PHP remotely. Using node-remote you can give this server the appropriate access to your NW.js project.
You can code a RESTful PHP application that your JavaScript can pass off information to.
You can use my boilerplate code to run PHP within a NW.js project. It fires up an express.js web server internally to accomplish this, but the server is restricted to the local machine and does not accept outside connections - https://github.com/baconface/php-webkit
1 and 4 both carry a risk in your case: your project can be reverse-engineered to reveal the source code, and the connection information can be retrieved rather easily. So these should only be used in an application on trusted machines; 2 and 3 are the ideal solutions.
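As a sketch of option 3, the PHP side can be a tiny endpoint that returns JSON for your Angular code to fetch over HTTP (host, credentials, and table names here are placeholders):

<?php
// cards.php - hypothetical REST-style endpoint the NW.js app calls
header('Content-Type: application/json');
$pdo = new PDO('mysql:host=your-db-host;dbname=yourdb', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$rows = $pdo->query('SELECT id, name FROM cards')->fetchAll(PDO::FETCH_ASSOC);
echo json_encode($rows);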
I've been working on a PHP project where I'm trying to create a cards game.
That obviously needs to update in real time, so, having almost finished the underlying server logic, I went for the naive/obvious solution for fetching data from the server - heartbeats, i.e. periodic AJAX requests - and was thrilled to see the page working that way.
The misery began when I started thinking there could be a less "stressful" way; that's when I found a couple of conversations here (and on other websites) about "Comet" and "AJAX push" or "server push", which I've since read about intensively.
I found a demo on zeitoun.net which was very simple and ridiculously easy to get working on my localhost.
As I was writing this question I went through the "Similar Questions" panel, and to be honest it's very confusing which option to go with.
Which would you recommend, knowing that I want to make sure the website can serve up to 2000 users, and that I'm using PHP on Apache?
Keep using the current method: periodic client AJAX requests (I've refined the server response so it actually returns nothing most of the time unless a change needs to be sent, but I'm still worried about the number of hits per second the server will receive).
Go for the "too good to be true" long-polling solution at zeitoun.net (a minimal sketch of that approach follows this list).
Use APE, which would require me to switch my operating system to Linux (which I'm willing to do if it turns out to be a promising solution).
Take a deeper look into https://stackoverflow.com/questions/4262543/what-are-good-resources-for-learning-html-5-websockets and go for HTML5 WebSockets instead (regardless of browser support and the fallbacks used).
None of the above?
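For reference, my understanding of the long-polling approach from option 2 is a loop like this (a sketch only; the file-based state and 25-second deadline are illustrative, and note that every held request occupies an Apache worker, which is exactly what worries me at 2000 users):

<?php
// poll.php - hold the request until the game state changes or we time out
$since = isset($_GET['since']) ? (int) $_GET['since'] : 0;
$deadline = time() + 25; // stay under typical Apache/PHP timeouts
while (time() < $deadline) {
    clearstatcache();
    $latest = @filemtime('game_state.json'); // illustrative: state kept in a file
    if ($latest !== false && $latest > $since) {
        header('Content-Type: application/json');
        echo json_encode(['ts' => $latest, 'state' => json_decode(file_get_contents('game_state.json'))]);
        exit;
    }
    usleep(250000); // check four times a second
}
header('Content-Type: application/json');
echo json_encode(['ts' => $since, 'state' => null]); // timed out; client re-polls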