I want to know: if 1000 users are concurrently using a website built with Laravel 5 and also querying the database regularly, how does Laravel 5 perform?
I know it would be slow, but will it be so slow that it becomes unbearable?
Note that I am also going to use AJAX a lot.
And let's assume I am using the DigitalOcean cloud service with the following configuration:
2GB memory
2 vCPU
40GB SSD
I don't expect completely accurate figures, as that is impossible, but please at least give some indication of whether I should go with Laravel and still get reasonable performance.
Please also suggest some tools with which I can check the speed of my Laravel 5 application, see how it performs under actual load, and otherwise test speed and performance.
And it would be great if someone has real experience of using Laravel, especially Laravel 5.
And what about Lumen: does that really make an application faster than Laravel, and by how much?
In short, yes. At least newer versions of Laravel are capable (Laravel 7.*).
That being said, this is really a three-part conundrum.
1. Laravel (PHP)
High Concurrency Architecture And Laravel Performance Tuning
Honestly, I wouldn't be able to provide half the details that this amazing article provides. He's got everything in there, from the definition of concurrency all the way to pre-optimization vs. post-optimization timings.
2. Reading, Writing, & Partitioning Persisted Data (Databases)
MySQL vs. MongoDB
I'd be curious whether the real concern is PHP's Laravel, or more a database read/write speed bottleneck. Non-relational databases are an incredible technology that benefits big data much more than traditional relational databases.
Non-relational databases (Mongo) have a much faster read speed than MySQL (something like 60% faster, if I'm remembering correctly)
Non-relational databases (Mongo) do have a slower write speed, but this is usually not an inhibitor to the user experience
Unlike relational databases (MySQL), MongoDB can truly be partitioned, spread out across multiple servers.
MongoDB has collections of documents; collections are pretty much synonymous with tables, and documents are pretty much synonymous with rows.
The difference is that MongoDB has a very JSON-like feel to it (collections of documents, where each document looks like a JSON object).
The huge difference, and benefit, is that each document (AKA row) does not have to have the same keys. When using MongoDB on a Fortune 500 project, my mentor and lead at the time, Logan, had a phenomenal quote.
"Mongo Don't Care"
This means that you can shape the data the way you want to retrieve it, so not only is your read speed faster, you're usually also not slowed down by having to retrieve data from multiple tables.
Here's a package, personally tested and loved, for setting up MongoDB within Laravel:
Jenssegers ~ MongoDB In Laravel
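For reference, a minimal sketch of what a model looks like with that package (class and namespace names vary across package versions, so treat this as illustrative, not definitive):

<?php

use Jenssegers\Mongodb\Eloquent\Model;

class Article extends Model
{
    // Use the 'mongodb' connection defined in config/database.php.
    protected $connection = 'mongodb';

    // Collections are roughly what tables are in MySQL.
    protected $collection = 'articles';
}

// Usage then looks like ordinary Eloquent, e.g.:
// $recent = Article::where('tags', 'php')->orderBy('created_at', 'desc')->take(5)->get();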
If you are concerned about immense amounts of users and transferring data, MongoDB may be what you're looking for. With that, let's move on to the 3rd, and most important point.
3. Serverless Architecture (AKA Horizontal Scaling)
AWS, Google Cloud, Microsoft Azure, etc... I'm sure you've heard of The Cloud.
This, ultimately, is what you're looking for if you're having concurrency issues and want to stay within the realm of Laravel.
It's a whole new world of incredible tools one can hammer away at; they're awesome. It's also a whole new, quite large, world of tools and thought to learn.
First, let's dive into a few serverless concepts.
Infrastructure as Code Terraform
"Use Infrastructure as Code to provision and manage any cloud, infrastructure, or service"
Horizontal Scaling Example via The Cloud
"Create a Laravel application. It's a single application, monolithic. Then you dive Cloud. You discover Terraform. Ahaha, first you use terraform to define how many instances of your app will run at once. You decide you want 8 instances of your application. Next, you of course define a Load Balancer. The Load Balancer simply balances the traffic load between your 8 application instances. Each application is connected to the same database, ultimately sharing the same data source. You're simply spreading out the traffic across multiples instances of the same application."
We can of course top that, very simplified answer of cloud, and dive into lambdas, the what Not to do's of serverless, setting up your internal virtual cloud network...
Or...we can thank the Laravel team in advance for simplifying Serverless Architecture
Yep, Laravel Simplified Serverless (Shout out Laravel team)
Serverless Via Laravel Vapor
Laravel Vapor Opening Paragraph
"Laravel Vapor is an auto-scaling, serverless deployment platform for Laravel, powered by AWS Lambda. Manage your Laravel infrastructure on Vapor and fall in love with the scalability and simplicity of serverless."
Coming to a close, let's summarize.
Original Concern
Ability to handle a certain amount of traffic in a set amount of time
Potential Bottlenecks with potential solutions
Laravel & PHP
High Concurrency Architecture And Laravel Performance Tuning
Database & Persisting/Retrieving Data Efficiently
Jenssegers ~ MongoDB In Laravel
Serverless Architecture For Horizontal Scaling
Serverless Via Laravel Vapor
I'll try to answer this based on my experience as a software developer. To be honest, I would definitely ask for an upgrade once it hits 1000 concurrent users, because I won't take the risk of server failure or data loss.
But let's break down how to engineer this.
It could handle those users if the data fetched is not complex and there are not many operations in the Laravel code. If the data just passes through from the database with almost no modification, it'll be fast.
The data fetched by the users is not unique. Let's say you have a news site without user personalization; the news that the users fetch will mostly be the same. If you cache that data in memory (Redis, which I recommend) or at the web server (Nginx; this should be avoided), your Laravel application will run fast enough.
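For illustration, a minimal sketch of that idea using Laravel's cache facade backed by Redis; the key, TTL, and News model are assumptions, and note the TTL argument changed from minutes to seconds in Laravel 5.8:

<?php

use Illuminate\Support\Facades\Cache;
use App\News; // hypothetical model

// Serve the same cached result to every visitor; only hit the database
// when the entry is missing or expired.
$latestNews = Cache::remember('news:latest', 5, function () {
    // TTL is 5 minutes on Laravel <= 5.7, 5 seconds from 5.8; adjust accordingly.
    return News::latest()->take(20)->get();
});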
Querying the database directly is faster than using the Laravel ORM. You might consider it if needed, but I myself will always try to use the ORM, because it helps keep the code readable and secure.
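A hedged side-by-side of the two approaches (table and column names are illustrative); note that the raw variant still uses parameter bindings, so it remains safe against SQL injection:

<?php

use Illuminate\Support\Facades\DB;
use App\User; // hypothetical model

// Raw query: returns plain stdClass rows, skipping model hydration.
$rows = DB::select('select id, name from users where active = ?', [1]);

// Eloquent ORM: returns a collection of User models; more readable,
// but with some hydration overhead per row.
$users = User::where('active', 1)->get(['id', 'name']);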
Splitting the database, web server, CDN, and cache server onto separate machines obviously makes it easier to monitor server usage.
Try to upgrade to the latest version. I used to work at a company that was using version 5, and its performance was not really good.
Use opcode caching to cache compiled PHP code. I myself have never used this.
Split the app into a backend and a frontend, and use state management in the frontend app to reduce data requests to the server.
Now let's answer your questions.
Are there any tools for checking performance? You can check Laravel Debugbar; it provides simple performance reports. I myself encourage you to write a test for each feature you create. You can generate a report from those tests to find which features take a long time to finish.
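As a rough sketch of that idea, assuming Laravel 5.4+ HTTP test style (the route and the 500 ms threshold are arbitrary assumptions; for real load testing, a dedicated tool such as ab, JMeter, or k6 is the right choice):

<?php

use Tests\TestCase;

class NewsPerformanceTest extends TestCase
{
    public function testNewsIndexRespondsQuickly()
    {
        $start = microtime(true);

        $response = $this->get('/news'); // hypothetical route

        $elapsedMs = (microtime(true) - $start) * 1000;

        $response->assertStatus(200);
        // Crude per-feature timing; includes framework boot, so treat as a trend.
        $this->assertLessThan(500, $elapsedMs, "News index took {$elapsedMs} ms");
    }
}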
Is Lumen faster than Laravel? Yes, Lumen is faster because it disables some of Laravel's features. But please be aware that Taylor seems to have stopped developing Lumen; you should consider this for the future.
If performance is your main concern, you might not choose Laravel for development, for two reasons:
There is a delay between servers while opening connections. Whenever a user makes a request, they open a connection to the server; the server then opens connections to the cache server, database server, SMTP server, and probably other 3rd parties as well. This is the real bottleneck in Laravel. You can use keep-alive connections to the database and cache server to reduce the connection delay, but you won't find that in Laravel by default, because it disposes of the connection whenever the request finishes (see the config sketch after this list).
PHP is not a compiled, statically typed language, and compiled languages are mostly faster.
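On the connection-delay point above: Laravel does let you pass PDO options through config/database.php, so persistent connections are possible even if they are not the default. A hedged sketch; whether this helps depends on your driver and setup, so benchmark it:

<?php

// config/database.php (excerpt)
'mysql' => [
    'driver'   => 'mysql',
    'host'     => env('DB_HOST', '127.0.0.1'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    // Ask PHP to keep the underlying connection alive across requests
    // (per worker). Interactions with transactions can be subtle.
    'options'  => [
        PDO::ATTR_PERSISTENT => true,
    ],
],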
In the end, you might not be able to use Laravel with those server resources unless you're creating a really simple application.
A question like this needs an answer with real numbers:
Luckily, this guy has already done it under conditions similar to the ones you want to try, using Laravel Forge.
With this PHP config:
opcache.enable=1
opcache.memory_consumption=512
opcache.interned_strings_buffer=64
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0
opcache.save_comments=1
opcache.fast_shutdown=1
The results:
Without Sessions:
Laravel: 609.03 requests per second
With Sessions:
Laravel: 521.64 requests per second
So, answering your question:
With that amount of memory you would be in trouble handling 1000 users making requests; use 4 GB of memory and you will be in better shape.
I don't think you can do that with Laravel.
I tried benchmarking Laravel on an 8-core CPU, 8 GB RAM and a 120 GB HDD, and I only got 200-400 requests per second.
Related
I've just started using Yii and managed to finish my first app. Unfortunately, launch day is close and I want this app to be super fast. So far, the only way of speeding it up I've come across is standard caching. What other ways are there to speed up my app?
First of all, read Performance Tuning in the official guide. Additionally:
Check HTTP caching.
Update your PHP. Each major version gives you a good boost.
Use Redis (or at least the database) for sessions (default PHP sessions use files and are blocking).
Consider using nginx instead of (or together with) Apache. It serves content much better.
Consider using a CDN.
Tweak your database.
These are all general things that are relatively easy to do. If it's not acceptable afterwards, do not assume. Profile.
1. Following best practices
In this recipe, we will see how to configure Yii for best performance and will review some additional principles of building responsive applications. These principles are both general and Yii-related, so we will be able to apply some of them even without using Yii.
Getting ready
Install APC (http://www.php.net/manual/en/apc.installation.php)
Generate a fresh Yii application using yiic webapp
2. Speeding up session handling
Native session handling in PHP is fine in most cases. There are at least two possible reasons why you might want to change the way sessions are handled:
When using multiple servers, you need to have a common session storage for both servers
Default PHP sessions use files, so the maximum performance possible is limited by disk I/O
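For example, in Yii 1.x both concerns can be addressed by swapping the session component. A hedged sketch using core Yii 1.x classes (the memcache host/port are assumptions):

<?php

// protected/config/main.php (excerpt)
return array(
    'components' => array(
        // A shared cache backend that all web servers can reach.
        'cache' => array(
            'class' => 'CMemCache',
            'servers' => array(
                array('host' => '127.0.0.1', 'port' => 11211),
            ),
        ),
        // Store sessions in that cache instead of local files.
        'session' => array(
            'class' => 'CCacheHttpSession',
            'cacheID' => 'cache',
        ),
    ),
);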
3. Using cache dependencies and chains
Yii supports many cache backends, but what really makes Yii cache flexible is the dependency and dependency chaining support. There are situations when you cannot just simply cache data for an hour because the information cached can be changed at any time.
In this recipe, we will see how to cache a whole page and still always get fresh data when it is updated. The page will be dashboard-type and will show the five latest articles added and a total calculated for an account. Note that an operation cannot be edited once it has been added, but an article can.
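A hedged Yii 1.x sketch of that recipe (the cache key, SQL, and Article model are illustrative assumptions); the cached data is reused until the dependency's query result changes, so edits show up immediately:

<?php

$articles = Yii::app()->cache->get('dashboard.articles');

if ($articles === false) {
    $articles = Article::model()->findAll(array(
        'order' => 'created_at DESC',
        'limit' => 5,
    ));

    Yii::app()->cache->set(
        'dashboard.articles',
        $articles,
        3600, // fall back to a 1-hour expiry
        // Invalidated as soon as any article's updated_at changes.
        new CDbCacheDependency('SELECT MAX(updated_at) FROM article')
    );
}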
4. Profiling an application with Yii
If all of the best practices for deploying a Yii application are applied and you still do not have the performance you want, then most probably, there are some bottlenecks with the application itself. The main principle while dealing with these bottlenecks is that you should never assume anything and always test and profile the code before trying to optimize it.
If most of your app is cacheable, you should try a proxy like Varnish.
Go for general PHP/MySQL performance tuning:
1) Memcached
Memcached is an open source distributed memory object caching system; it helps you speed up dynamic web applications by reducing database server load (see the sketch after this list).
2) MySQL performance tuning
3) Web server performance tuning for PHP
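As a sketch of the first point, the classic cache-aside pattern with PHP's Memcached extension (host, port, key, TTL, and schema are illustrative assumptions):

<?php

$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$key  = 'user:42:profile';
$data = $cache->get($key);

if ($data === false) {
    // Cache miss: load from MySQL, then keep it for 10 minutes.
    $pdo  = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret');
    $stmt = $pdo->prepare('SELECT * FROM users WHERE id = ?');
    $stmt->execute(array(42));
    $data = $stmt->fetch(PDO::FETCH_ASSOC);

    $cache->set($key, $data, 600);
}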
I've recently built a reasonably complex database application for a university, handling almost 200 tables. Some tables (e.g. Publications) can hold 30 or more fields and store 10 one-to-one FK relations and up to 2 or 3 many-to-many FK relations (using cross-reference tables). I use integer IDs throughout, and normalisation has been key every step of the way. AJAX is minimal and most pages are standard CRUD forms/processes.
I used Symfony 1.4, and Doctrine ORM 1.2, MySQL, PHP.
While the benefits to development time and ease of maintenance have been enormous (in using an MVC and ORM), we've been having problems with speed: when we have more than a few users logged in and active at any one time, the application slows considerably (up to 20 seconds to save or edit a record).
We're currently engaged in discussions with our SysAdmins, but they say that we should have more than enough power. With 6 or more users engaged in activity, we end up with queues on 4 CPUs in a virtual server environment while memory usage is low (no leaks).
Of course, we're considering multi-threading our MySQL application (if this will help), refining our code (though much of it is generated by the MVC), and refining our cache usage (this could be better, though the majority of the screens are user-login specific and dynamic). We've installed APC and extra memory, de-fragmented our database, tried unsetting all recordsets (though I understand this is now automatic within the ORM), instigated manual garbage collection...
But the question I'm asking is whether MySQL, PHP, and the Symfony MVC were actually a poor choice for developing an application of this size in the first place. If so, what do people normally use/recommend for a web-based database interface application of this kind of size/complexity?
I don't have any experience with Symfony or Doctrine. However, there are certainly larger sites than yours that are built on these projects. DailyMotion for instance uses both, and it serves a heck of a lot more than a few simultaneous users.
Likewise, PHP and MySQL are used on sites as large as Wikipedia, so scalability should not be a problem. (BTW, MySQL should be multithreaded by default—one thread per connection.)
Of course, reasonable performance is relative. If you're trying to run a site serving 200 concurrent requests off a beige box sitting in a closet, then you may need to optimize your code a lot more than a Fortune 500 company with their own data centers.
The obvious thing to do is actually profile your application and see where the bottleneck is. Profiling MySQL queries is pretty simple, and there are also assorted PHP profiling tools like Xdebug. From there you can figure out whether you should switch frameworks, ORMs (or ditch ORM altogether), databases, or refactor some code, or just invest in more processing power.
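For example, Xdebug 2 (current at the time) can dump a cachegrind profile per request for inspection in KCachegrind or WinCacheGrind; the php.ini settings below use the Xdebug 2 names (Xdebug 3 later replaced them with xdebug.mode=profile):

xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp/profiles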
The best way to speed up complex database operations is to not call them :).
So analyze which parts of your application you can cache.
Symfony has a pretty good caching system (even better in Symfony2) that can be used at a fairly granular level.
On the database side, another way would be to use views or nested sets to store aggregated data.
Try to find parts where this is appropriate.
OK, I have a small messaging site for my client. Well, it's more like a post-comment system (created in PHP). Now my client wants a system that can comment on an existing comment, plus some features like liking and tagging. Another thing: the existing system is heavily used by my client in his company, as they use it like a Skype chat (which makes it write-read intensive). My client wants to use open source software as much as possible, so I used MySQL Community Edition.
Enough about my story... I did a week of research on NoSQL databases and found them a good fit for my requirements, as my client keeps wanting to add features (which means adding more and more columns and tables over time). These are the NoSQL database systems that caught my eye (though if you can suggest another NoSQL database system, that's fine):
MongoDB
CouchDB
Redis
Now my question is: which of the three is good for my situation? I have also read some bad things about these 3 NoSQL databases:
MongoDB is crappy in its 2.x versions
CouchDB is slow (my client doesn't want slow)
Redis is memory-based, so it only writes to disk at certain intervals; if the system crashes in the middle of an interval, the data since the last write is lost
I want to hear some opinions about this and any advice that can help me cope with my upcoming situation.
MongoDB is a popular solution to this, and my personal preference. The great thing about Mongo (besides being schemaless) is that you can have nested/embedded documents. So for example, you can have a comment which has an array of sub-comments which each have their own arrays of sub-comments. I don't know of any other datastore that has that feature. It's also fast.
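To make that concrete, here's a hedged sketch of such a document inserted via the mongodb/mongodb PHP library (database, collection, and field names are illustrative assumptions):

<?php

require 'vendor/autoload.php';

$client = new MongoDB\Client('mongodb://127.0.0.1:27017');

// One post document embeds its whole comment thread, so a single read
// returns everything; no joins across tables.
$client->chat->posts->insertOne([
    'author'   => 'alice',
    'body'     => 'First post!',
    'likes'    => 0,
    'tags'     => ['announcement'],
    'comments' => [
        [
            'author'  => 'bob',
            'body'    => 'Nice!',
            'replies' => [
                ['author' => 'alice', 'body' => 'Thanks!', 'replies' => []],
            ],
        ],
    ],
]);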
CouchDB has some nice features, but Mongo is so similar and much better.
Redis is very different from the other two. It's used mostly as an alternative to memcached. So it's primarily used for temporary data. Although it has some nice pubsub features built in. A lot of people use both MongoDB and Redis, but for different things.
This is my first question here, and it is about optimizing a specific website.
A few months ago, we launched [site] for one of our clients, which is a kind of community website.
Everything works great, but now the website is getting bigger and it shows some slowness when pages are loading.
The server specs:
PHP 5.2.1 (I think we need to upgrade to 5.3 to make use of the new garbage collector)
Apache 2.2
Quad-core Xeon processor @ 2.8 GHz and 4 GB DDR3 RAM
XCache 1.3 (we added this a few months ago)
MySQL 5.1 (we are using InnoDB as the engine)
CodeIgniter framework
Here is what we did so far and what we intend to do further:
Besides XCache, we don't really use a caching mechanism, because most of the content is served live; besides this, we didn't want to optimize prematurely, because we didn't know what to expect in terms of traffic flow.
On the other hand, we have installed memcached and we want to implement a caching system based on it.
Regarding the database structure, we have reached 3NF with most of our tables. Yes, we have some slow queries (which we plan to optimize), but I think that is because the tables that produce them are the ones for blog comments (~44,408 rows), user log tracking (~725,837 rows), user comments (~698,964 rows), etc., which are quite big tables. The entire database is 697.4 MB in size for now.
Also, here are some stats for January 2011:
Monthly unique visitors: 127,124
Monthly unique views: 4,829,252
Monthly unique visits: 242,708
Daily average:
Unique new visitors: 7,533
Unique new views: 179,680
Just let me know if you need more details.
Any advice is highly appreciated.
Thank you.
When it comes to performance issues, there is no golden rule or labelled sticky note telling you up front that the problem is database-related. What I would suggest is performance profiling; there are many free and paid tools on the Internet that let you do so.
First, start off with the web server layer and make sure everything is configured correctly and optimized as much as possible.
Then move on to the next layer (which I assume is your database). From a layman's perspective, whenever someone mentions InnoDB/MySQL, we assume indexes have been created to optimize search operations. Index usage is quite important, because you don't want to index the wrong thing and make matters worse. My advice here is to get a DBA-equivalent person to troubleshoot in a staging environment.
Another trick you could look at is the content itself, from web page content to database data: show/keep data only where it is needed, do not store unnecessary information in the database, and use a smart layout on the web page. Shaving off a second or two can make a big difference in usability and response time.
It is very hard to go into detail here unless we have in-depth information about your application, its architecture, and your environment, but the above are some common directions people take when troubleshooting such incidents.
Good luck!
This site has excellent resources http://www.websiteoptimization.com/
The books that are mentioned are excellent. There are just too many techniques to list here and we do not know what you have tried so far.
Sorry for the delay, guys; I have been very busy tracking down the issue, and I found it.
Well, the problem was mostly because of Apache. I had an access log of almost 300 GB which was parsed at midnight to generate Webalizer stats. While this was happening, the website was very, very slow. I disabled Webalizer for the domain, cleared the logs, and what do you know, it is very fast again, no matter what hour you access it.
I now have only a few slow queries left, which I intend to fix today.
I also updated to CI 2.0 Reactor, as suggested, and started using the memcached driver.
Who would have known that Apache logs could be so problematic...
Based on the stats, I don't think you are hitting load problems... on a hunch, I would look to the database first. Database partitioning might be a good place to start.
But you should really do some profiling of your application first. How much time is spent in the application versus database. Are there application methods that are using lots of time and just need some tweaking? Are database queries not written efficiently? Do you need more or better database indices?
Everything looks pretty good. If upgrading CodeIgniter is an option, the new CodeIgniter 2.0 (Reactor) adds support for memcache (a new Cache driver with file system, APC, and memcache support). Granted, you're already using XCache, but these new additions may be worth looking at.
When cache objects weren't enough for our multi-domain platform that saw huge traffic, we went the route of throwing more hardware at it: RAM, servers, databases. Then we moved to database clustering to handle forecasted heavy load on single accounts. And now we're switching from Apache to nginx... It's a never-ending battle, but what worked for us was being smart about what we cached, increasing server memory, and then distributing the load across servers.
Cache as many database calls as you can. In my CI application I have a settings table that rarely changes, so I cache all calls made to it, as I am constantly querying the settings table (see the sketch after this list).
Cache your views and even your controllers as well. I tend to cache as much as I can in my CI applications and then refresh the cache when a file changes.
Only autoload important libraries, models, and helpers. I've seen people autoload up to 10 libraries, and on top of that a few helpers and then a model. You only really need to autoload the database and session libraries, if you are using them.
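A hedged CodeIgniter 2.x sketch of the first two tips (cache keys, TTLs, and the model name are illustrative assumptions):

<?php

class Settings_model extends CI_Model {

    public function get_settings()
    {
        // Memcached first, falling back to file-based caching.
        $this->load->driver('cache', array('adapter' => 'memcached', 'backup' => 'file'));

        $settings = $this->cache->get('site_settings');

        if ($settings === FALSE) {
            $settings = $this->db->get('settings')->result();
            $this->cache->save('site_settings', $settings, 3600); // keep for 1 hour
        }

        return $settings;
    }
}

// And in a controller, cache the rendered page output for 10 minutes:
// $this->output->cache(10);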
Regarding point number 3: are you autoloading many things in your config/autoload.php file by any chance? It might help speed things up to load only what you need in your controllers, as you need it, with the exception of course of the session and database libraries.
I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users; I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well.
The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM.
I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load.
We are currently using two HP DL380s with two 4-core processors each, which are able to handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with more high-end hardware?
I am particularly curious about:
how many servers are needed and how powerful they must be (number of processors/cores, size of RAM)
what network equipment should be used (what kind of switches, network cards)
any other hardware, like particular disk storage solutions, that is needed
Another thing is how to put everything together, that is, what the optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).
Once you get past the point where a couple of physical machines aren't giving you the peak load you need, you probably want to start virtualising.
EC2 is probably the most flexible solution at the moment for the LAMP stack. You can set up their VMs as if they were physical machines, cluster them, spin them up as you need more compute-time, switch them off during off-peak times, create machine images so it's easy to system test...
There are various solutions available for load-balancing and automated spin-up.
If you can make your app fit, you can get use out of their non-relational database engine as well. At very high loads, relational databases (and MySQL in particular) don't scale effectively. The peak load of SimpleDB, BigTable and similar non-relational databases can scale almost linearly as you add hardware.
Moving away from a relational database is a huge step though, I can't say I've ever needed to do it myself.
I'm not so sure about hardware, but from a software point-of-view:
With an efficient data layer that will cache objects and collections returned from the database then I'd say a standard master-slave configuration would work fine. Route all writes to a beefy master and all reads to slaves, adding more slaves as required.
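A framework-agnostic sketch of that read/write routing (connection setup is omitted and replication-lag handling is deliberately left out):

<?php

class RoutedDb
{
    private $master;
    private $slaves;

    public function __construct(PDO $master, array $slaves)
    {
        $this->master = $master;
        $this->slaves = $slaves;
    }

    // All writes go to the beefy master.
    public function write($sql, array $params = array())
    {
        $stmt = $this->master->prepare($sql);
        $stmt->execute($params);
        return $stmt->rowCount();
    }

    // Reads are spread across the slaves; add more slaves as load grows.
    public function read($sql, array $params = array())
    {
        $slave = $this->slaves[array_rand($this->slaves)];
        $stmt  = $slave->prepare($sql);
        $stmt->execute($params);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }
}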
Cache data as objects returned from your data-mapper/ORM, not as HTML, and use Memcached as your caching layer. If you update an object, write to the DB and update it in Memcached; it's best to use the IdentityMap pattern for this. You'll probably need quite a few Memcached instances, although you could get away with running them on your web servers.
We could never get MySQL clustering to work properly.
Be careful with the SQL queries you write and you should be fine.
Piotr, have you tried asking this question on moodle.org yet? There are a couple of similarly scoped installations whose staff members currently answer there.
Also, depending on what your timeframe for deployment is, you might want to check out the moodle 2.0 line rather than the moodle 1.9 line, it looks like there are a bunch of good fixes for some of the issues with moodle's architecture in that version.
Also: memcached rocks for this. PHP acceleration rocks for this. Server Fault is probably the better Stack Exchange site for this question, though.