I'm trying to do some development where I will be implementing a web cache.
I'm using CodeIgniter 4 for this, and it has a built-in cache library.
However, there is also third-party cache software like Redis.
Based on my research, both serve the same purpose.
So what is the need for Redis instead of my framework's cache?
While they may serve the same purpose, adding Redis to your project lets you offload the caching to a different server, reducing the load on your app server.
It mostly depends on your setup and the expected load:
If it's a simple project without much traffic or many queries, you can keep using CodeIgniter's caching.
If you expect lots of traffic or tons of SQL/NoSQL queries, it's best to offload the caching to a dedicated Redis server/service to keep things running smoothly. This adds some complexity to the project, of course.
If you are interested in reading more points of view, this post has some good points on Redis and when to use it or not: https://stackoverflow.com/a/3967052/9442192
So I switched from using Laravel's eager loading to checking the cache first before hitting the DB. I've got it to the point where it's doing only 8 queries to the database per request versus the 100+ queries it was doing before.
However, the response time is now slower than it was before. Per this Stack Overflow question, "Laravel Caching with Redis is very slow", it seems that using the cache facade is a lot slower than calling Redis/Memcached directly.
Why is the cache facade slower than calling the cache directly? Should I switch to using Redis/Memcached directly? What can I do to improve the performance of my application while still using the cache facade, if possible?
Leveraging caching as much as possible and minimizing the number of queries to the DB is a requirement for the project I'm working on.
Another thing to note is that I'm using both Memcached and Redis as cache drivers: some things stored in Memcached are shared between the services I'm writing and the old services, while items exclusive to the new services can go in Redis instead.
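For context, here is a hedged sketch of the two approaches being compared (the key name is made up, and the 'cache' connection name depends on config/database.php; the facade adds key prefixing and value serialization on top of the raw client, which is part of the overhead in question):

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Redis;

// Through the cache facade: resolves the store, applies the configured
// key prefix, and serializes/unserializes values. Convenient, but each
// layer adds a little overhead.
$profile = Cache::store('redis')->get('user:42:profile');

// Direct call: raw command against the Redis connection, with no
// prefixing or serialization done for you.
$raw = Redis::connection('cache')->get('user:42:profile');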
I wanted to know: if 1000 users are concurrently using a website built with Laravel 5 and also querying the database regularly, how does Laravel 5 perform?
I know it would be slow, but would it be so slow as to be unbearable?
Note that I am also going to use AJAX a lot.
And let's assume I am using the DigitalOcean cloud service with the following configuration:
2GB memory
2 vCPU
40GB SSD
I don't expect completely exact figures, as that is impossible, but please provide at least some detail on whether I should go with Laravel and still get reasonable performance.
Please also suggest some tools I can use to check the speed of my Laravel 5 application, to see how it performs under actual load, and to test speed and performance in general.
And it would be great to hear from someone with real experience of using Laravel, especially Laravel 5.
And what about Lumen: does it really make an application faster than Laravel, and by how much?
In short, yes. At least newer versions of Laravel are capable (Laravel 7.*).
That being said, this is really a three-part conundrum.
1. Laravel (PHP)
High Concurrency Architecture And Laravel Performance Tuning
Honestly, I couldn't provide half the details this amazing article provides. It has everything in there, from the definition of concurrency all the way to pre-optimization vs. post-optimization timings.
2. Reading, Writing, & Partitioning Persisted Data (Databases)
MySQL vs. MongoDB
I'd be curious whether the real concern is PHP's Laravel or more of a database read/write speed bottleneck. Non-relational databases are an incredible technology that benefits big data much more than traditional relational databases.
Non-relational databases (Mongo) have a much faster read speed than MySQL (something like 60% faster, if I'm remembering correctly).
Non-relational databases (Mongo) do have a slower write speed, but this usually isn't an inhibitor to the user experience.
Unlike relational databases (MySQL), MongoDB can truly be partitioned, spread out across multiple servers.
MongoDB has collections of documents; collections are pretty synonymous with tables, and documents are pretty synonymous with rows.
The difference is that MongoDB has a very JSON-like feel to it (collections of documents, where each document looks like a JSON object).
The huge difference, and benefit, is that each document (AKA row) does not have to have the same keys. When using MongoDB on a Fortune 500 project, my mentor and lead at the time, Logan, had a phenomenal quote.
"Mongo Don't Care"
This means you can shape the data the way you want to retrieve it, so not only is your read speed faster, you're usually also not slowed down by having to retrieve data from multiple tables.
Here's a package, personally tested and loved, for setting up MongoDB within Laravel:
Jenssegers ~ MongoDB In Laravel
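As a taste of the package, a minimal sketch (the model, fields, and data are made up; the base class shown is from older jenssegers/laravel-mongodb releases and may be named differently in current versions):

use Jenssegers\Mongodb\Eloquent\Model;

class Article extends Model
{
    protected $connection = 'mongodb';  // connection defined in config/database.php
    protected $collection = 'articles'; // collections ~ tables, documents ~ rows
    protected $guarded = [];            // allow mass assignment for this sketch
}

// "Mongo Don't Care": documents in the same collection
// do not need to share the same keys.
Article::create(['title' => 'Hello', 'tags' => ['php', 'laravel']]);
Article::create(['title' => 'World', 'author' => ['name' => 'Logan']]);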
If you are concerned about immense numbers of users and data transfer, MongoDB may be what you're looking for. With that, let's move on to the third, and most important, point.
3. Serverless Architecture (Aka Horizontal scaling)
AWS, Google Cloud, Microsoft Azure, etc... I'm sure you've heard of The Cloud.
This, ultimately, is what you're looking for if you're having concurrency issues and want to stay within the realm of Laravel.
It's a whole new world of incredible tools one can hammer away at; they're awesome. It's also a whole new, quite large, world of tools and thinking to learn.
First, let's dive into a few serverless concepts.
Infrastructure as Code Terraform
"Use Infrastructure as Code to provision and manage any cloud, infrastructure, or service"
Horizontal Scaling Example via The Cloud
"Create a Laravel application. It's a single application, monolithic. Then you dive Cloud. You discover Terraform. Ahaha, first you use terraform to define how many instances of your app will run at once. You decide you want 8 instances of your application. Next, you of course define a Load Balancer. The Load Balancer simply balances the traffic load between your 8 application instances. Each application is connected to the same database, ultimately sharing the same data source. You're simply spreading out the traffic across multiples instances of the same application."
We can of course top that very simplified picture of the cloud and dive into lambdas, the what-not-to-dos of serverless, setting up your internal virtual cloud network...
Or...we can thank the Laravel team in advance for simplifying Serverless Architecture
Yep, Laravel Simplified Serverless (Shout out Laravel team)
Serverless Via Laravel Vapor
Laravel Vapor Opening Paragraph
"Laravel Vapor is an auto-scaling, serverless deployment platform for Laravel, powered by AWS Lambda. Manage your Laravel infrastructure on Vapor and fall in love with the scalability and simplicity of serverless."
Coming to a close, let's summarize.
Original Concern
Ability to handle a certain amount of traffic in a set amount of time
Potential Bottlenecks with potential solutions
Laravel & PHP
High Concurrency Architecture And Laravel Performance Tuning
Database & Persisting/Retrieving Data Efficiently
Jenssegers ~ MongoDB In Laravel
Serverless Architecture For Horizontal Scaling
Serverless Via Laravel Vapor
I'll try to answer this based on my experience as a software developer. To be honest, I would definitely ask for an upgrade once it hits 1000 concurrent users, because I won't risk server failure or data loss.
But let's break down how to engineer this.
It can handle those users if the data fetched is not complex and there aren't many operations in the Laravel code. If data just passes through from the database with almost no modification, it'll be fast.
It helps if the data fetched by users is not unique. Say you have a news site without user personalization: the news that users fetch will mostly be the same. If you cache that data in memory (Redis, which I recommend) or at the web server (Nginx, which should be avoided), your Laravel application will run fast enough; see the sketch after this list.
Querying the database directly is faster than using Laravel's ORM. You might consider it if needed, but I myself will always try to use the ORM, because it helps keep code readable and secure.
Splitting the database, web server, CDN, and cache server onto separate machines obviously makes it easier to monitor server usage.
Try to upgrade to the latest version. I used to work at a company that was on version 5, and its performance is not really good.
Opcode caching caches the compiled PHP code. I myself have never used it.
Split the app into a backend and a frontend, and use state management in the frontend app to reduce data requests to the server.
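Here is a minimal sketch of the news-site idea from the list above, using Laravel's cache facade (the News model and key are made up; note that the TTL argument is seconds in recent Laravel versions but minutes in 5.x):

use App\Models\News;
use Illuminate\Support\Facades\Cache;

// Every visitor sees the same headlines, so one cached copy serves
// everyone; only the first request every 5 minutes hits the database.
$headlines = Cache::remember('news:front-page', 300, function () {
    return News::latest()->take(10)->get();
});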
Now let's answer your questions:
Are there any tools for checking performance? You can check out Laravel Debugbar; it provides simple performance reports. I also encourage you to write a test for each feature you create; you can generate a report from those unit tests to find which features take the longest to finish.
Is Lumen faster than Laravel? Yes, Lumen is faster, because it disables some of Laravel's features. But please be aware that Taylor seems set to stop developing Lumen; you should consider this for the future.
If performance is your main concern, you might not choose Laravel for development, for two reasons:
There is a delay between servers while opening connections. Whenever a user makes a request, they open a connection to the server; the server in turn opens connections to the cache server, the database server, the SMTP server, and possibly other third parties. This is the real bottleneck in Laravel. You can use keep-alive connections to the database and cache server to reduce the connection delay, but you won't find that in Laravel, because it disposes of its connections whenever the request finishes. (A sketch of a persistent connection follows below.)
PHP is an interpreted, dynamically typed language, and compiled languages are mostly faster.
In the end, you might not be able to use Laravel with those server resources unless you're building a really simple application.
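To make the keep-alive point above concrete, here is a sketch of a persistent database connection in plain PDO, outside Laravel's connection handling (the DSN and credentials are placeholders):

// A persistent connection is kept open by the PHP process and reused
// across requests instead of being opened and torn down every time.
$pdo = new PDO(
    'mysql:host=127.0.0.1;dbname=app',   // placeholder DSN
    'user',                              // placeholder credentials
    'secret',
    [PDO::ATTR_PERSISTENT => true]       // keep the connection alive between requests
);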
A question like this needs an answer with real numbers.
Luckily, this guy has already done it, in conditions similar to what you want to try, with Laravel Forge.
With this PHP config:
opcache.enable=1
opcache.memory_consumption=512
opcache.interned_strings_buffer=64
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0
opcache.save_comments=1
opcache.fast_shutdown=1
the results:
Without Sessions:
Laravel: 609.03 requests per second
With Sessions:
Laravel: 521.64 requests per second
So, answering your question: with that amount of memory you would be in trouble getting 1000 users making requests; use 4GB of memory and you will be in better shape.
I don't think you can do that with Laravel.
I tried benchmarking Laravel on an 8-core CPU, 8 GB of RAM, and a 120GB HDD, and I only got 200-400 requests per second.
I'm using Cache_Lite for HTML and array caching in my project. I've found that Cache_Lite can lead to high system IO, maybe because its performance is not good.
I'm asking: is there any stable PHP HTML/page cache to use?
I already have APC installed for opcode caching and Memcached installed for common data/array caching.
I've had the exact same problem with Cache_Lite, as the library doesn't properly implement file locks.
I solved it with a new library that is a drop-in replacement for Cache_Lite.
https://github.com/mpapec/simple-cache/blob/master/example_clite1.php
https://github.com/mpapec/simple-cache/blob/master/example_clite2.php
https://github.com/mpapec/simple-cache/blob/master/example_clite3.php
Just to mention that the library lacks some features I didn't find useful anyway, like cache cleaning and in-memory caching (the _memoryCaching property, which is false by default and marked as "beta quality" in the original library).
The algorithm used for file locking follows this diagram:
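For reference, here is a minimal sketch of the shared/exclusive locking pattern a file cache needs (generic PHP flock usage, not the library's actual code; the path and payload are made up):

$cacheFile = '/tmp/cache/page.html';  // made-up path
$data = ['title' => 'Home'];          // made-up payload

// Writer: exclusive lock so readers never see a half-written file.
$fp = fopen($cacheFile, 'c');         // create if missing, don't truncate yet
if (flock($fp, LOCK_EX)) {
    ftruncate($fp, 0);
    fwrite($fp, serialize($data));
    fflush($fp);
    flock($fp, LOCK_UN);
}
fclose($fp);

// Reader: shared lock lets many readers in, but waits while a write runs.
$fp = fopen($cacheFile, 'r');
if (flock($fp, LOCK_SH)) {
    $data = unserialize(stream_get_contents($fp));
    flock($fp, LOCK_UN);
}
fclose($fp);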
Without more information it is hard to know whether you are currently experiencing an IO problem or are likely to experience one in the future. (If your site is not getting much traffic, or you are using an SSD, you are unlikely to have a problem.)
Cache_Lite appears to be a file-based caching system. This may lead to IO problems if your site experiences heavy load, has lots of concurrent users, is hosted on a shared server, or has other programs heavily using the filesystem.
An alternative to Cache_Lite is memcache, a key/value store that keeps data in memory. It may not be suitable if you are storing large amounts of data or your server has no spare RAM, since it stores all of its information in memory. The benefit is that memory is much faster than reading files from disk, although if you are only accessing a small amount of data, or the same data repeatedly, disk/OS caching makes this less of an issue.
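A minimal sketch of the memcache approach, assuming PHP's Memcached extension and a local server (the key and the page-building function are made up):

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);   // assumed local memcached instance

$key  = 'page:front';
$html = $mc->get($key);
if ($html === false) {                // cache miss: rebuild and store
    $html = render_front_page();      // hypothetical page generator
    $mc->set($key, $html, 300);       // expire after 5 minutes
}
echo $html;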
I would suggest checking whether your system is currently experiencing any IO issues before worrying about IO performance (unless you plan on getting slashdotted or something).
You could install a tool like Munin (http://munin-monitoring.org/) and monitor your system to see if IO is a problem or is becoming one. Once installed, check the CPU graph and look at the iowait data.
EDIT: Just saw the comment above. Depending on your needs, reverse proxies are another great tool; check out https://www.varnish-cache.org/. At work we use a combination of the two (Memcached and Varnish). We have one machine serving over 900,000 page views per month, with both static and dynamic content.
If you're talking about https://pear.php.net/package/Cache_Lite, then I can tell you a story. We used it once, but it proved to be unreliable for websites with lots of requests.
We then switched to Zend_Cache (ZF1) in combination with Memcached. It can be used as a standalone component.
However, you have to tune it a bit in order to use tags. There are a few implementations out there that get the job done: https://github.com/bigwhoop/taggable-zend-memcached-backend
I've just started using Yii and managed to finish my first app. Unfortunately, launch day is close and I want this app to be super fast. So far, the only way of speeding it up I've come across is standard caching. What other ways are there to speed up my app?
First of all, read Performance Tuning in the official guide. Additionally:
Check HTTP caching.
Update your PHP. Each major version gives you a good boost.
Use Redis (or at least the database) for sessions; default PHP sessions use files and are blocking. (See the sketch after this list.)
Consider using nginx instead of (or together with) Apache. It serves content much better.
Consider using CDN.
Tweak your database.
These are all general things that are relatively easy to do. If it's not acceptable afterwards, do not assume. Profile.
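For the sessions point above, here is a hedged sketch of switching PHP's session handler to Redis; this assumes the phpredis extension, which ships a session handler (the same settings can go in php.ini instead):

// Store sessions in Redis so they are shared between servers and
// no longer block on file I/O (requires the phpredis extension).
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://127.0.0.1:6379');
session_start();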
1. Following best practices
In this recipe, we will see how to configure Yii for best performance and look at some additional principles for building responsive applications. These principles are both general and Yii-related, so we will be able to apply some of them even without using Yii.
Getting ready
Install APC (http://www.php.net/manual/en/apc.installation.php)
Generate a fresh Yii application using yiic webapp
2. Speeding up session handling
Native session handling in PHP is fine in most cases. There are at least two possible reasons why you will want to change the way sessions are handled:
When using multiple servers, you need to have a common session storage for both servers
Default PHP sessions use files, so the maximum performance possible is limited by disk I/O
3. Using cache dependencies and chains
Yii supports many cache backends, but what really makes Yii's cache flexible is its support for dependencies and dependency chaining. There are situations where you cannot simply cache data for an hour, because the cached information can change at any time.
In this recipe, we will see how to cache a whole page and still always get fresh data when it is updated. The page will be dashboard-type, showing the five latest articles added and a total calculated for an account. Note that an operation cannot be edited once it has been added, but an article can.
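As a sketch of the dependency idea (Yii 1.x API, to match the yiic-based recipe; the SQL, model, and cache key are made up):

// Cache the dashboard articles for an hour, but invalidate the entry
// automatically as soon as any article changes.
$dependency = new CDbCacheDependency('SELECT MAX(update_time) FROM article');
$articles = Yii::app()->cache->get('dashboard.articles');
if ($articles === false) {
    $articles = Article::model()->findAll(array('order' => 'id DESC', 'limit' => 5));
    Yii::app()->cache->set('dashboard.articles', $articles, 3600, $dependency);
}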
4. Profiling an application with Yii
If all of the best practices for deploying a Yii application are applied and you still do not have the performance you want, then most probably there are some bottlenecks in the application itself. The main principle when dealing with these bottlenecks is that you should never assume anything; always test and profile the code before trying to optimize it.
If most of your app is cacheable, you should try a proxy like Varnish.
Go for general PHP/MySQL performance tuning:
1) Memcached
Memcached is an open source distributed memory object caching system; it helps you speed up dynamic web applications by reducing database server load.
2) MySQL performance tuning
3) Web server performance tuning for PHP
I have a high-traffic website and I need to make sure my site is fast enough to display my pages to everyone rapidly.
I searched Google for articles about speed and optimization, and here's what I found:
Caching the page in memory:
This is very fast, but if I need to change the content of the page I have to remove it from the cache and then re-save the file to disk.
Saving the page to disk:
This is very easy to maintain, but every time the page is accessed I have to read from the disk.
Which method should I go with?
Jan & idm are right, but here's how to:
Caching (pages or content) is crucial for performance. The fewer calls you make to the database or the filesystem, the better, whether your content is static or dynamic.
You can use a PHP accelerator if you need to run dynamic content.
My recommendation is to use the Alternative PHP Cache (APC).
Here's some benchmark:
What is the best PHP accelerator to use?
PHP Accelerators : APC vs Zend vs XCache with Zend Framework
Lighttpd – PHP Acceleration Benchmarks
For caching content and even whole pages, you can use Memcached or Redis.
Memcached:
Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.
Redis
Redis is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.
Both are very good tools for caching content or variables.
Here's some benchmark and you can choose which one you prefer:
Redis vs Memcached
Redis vs Memcached
Redis VS Memcached (slightly better bench)
On Redis, Memcached, Speed, Benchmarks and The Toilet
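To show how little code either takes, here is a minimal Redis sketch, assuming the phpredis extension and a local server (the key and fragment are made up; the Memcached API looks almost identical):

$redis = new Redis();                  // phpredis extension class
$redis->connect('127.0.0.1', 6379);

$sidebarHtml = '<ul>...</ul>';         // made-up fragment to cache
$redis->setex('fragment:sidebar', 60, $sidebarHtml);  // keep for 60 seconds
$cached = $redis->get('fragment:sidebar');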
You can also install Varnish, nginx, or G-WAN.
Varnish:
Varnish is an HTTP accelerator designed for content-heavy dynamic web sites. In contrast to other HTTP accelerators, such as Squid, which began life as a client-side cache, or Apache, which is primarily an origin server, Varnish was designed from the ground up as an HTTP accelerator.
nginx
nginx (pronounced "engine-x") is a lightweight, high-performance web server/reverse proxy and e-mail (IMAP/POP3) proxy, licensed under a BSD-like license. It runs on Unix, Linux, BSD variants, Mac OS X, Solaris, and Microsoft Windows.
G-WAN
G-WAN is a web server with ANSI C scripts and a key-value store which outperforms all other solutions.
Here's some benchmark and you can choose which one you prefer:
Serving static files: a comparison between Apache, Nginx, Varnish and G-WAN
Web Server Performance Benchmarks
Nginx+Varnish compared to Nginx
Apache, Varnish, nginx and lighttpd
G-WAN vs Nginx
You have a good idea, and it's close to what I do myself. If I have a page that is 100% static, I'll save an HTML version of it and serve that to the user instead of generating the content again every time. This saves MySQL queries and, in some cases, several IO operations. Every time I make a change, my administration interface simply removes the HTML file and recreates it.
This method has proven to be around 100x faster on my server.
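Here is a hedged sketch of that pattern (the path and the page-generating function are made up):

$cacheFile = __DIR__ . '/cache/about.html';

if (is_file($cacheFile)) {
    readfile($cacheFile);             // serve the prebuilt HTML, no DB work at all
} else {
    $html = render_page('about');     // hypothetical generator that hits MySQL etc.
    file_put_contents($cacheFile, $html, LOCK_EX);
    echo $html;
}

// On content change, the admin interface just deletes the file:
// unlink($cacheFile);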
The big question with website performance is "do you serve static pages, or do you serve dynamic pages?".
Static pages
The best way to speed up static pages is to cache them outside your website. If you can afford to, serve them from a CDN (Akamai, Cotendo, Level3); in that case, the traffic never hits your site. There are several ways to control the cache, from fixed durations to the standard HTTP cache directives.
Even if you can't serve your HTML from a CDN, storing your images, JavaScript, and other static assets on a CDN can speed up your site; you could use a cloud service like Amazon for this.
If you can't afford a CDN for your HTML, you could use your own caching proxy layer, as Book Of Zeus suggests. I've had good results with Varnish. Ideally, you'd run your caching proxy on its own hardware, but you can run it on your existing servers.
Dynamic pages
Dynamic pages are harder to cache, so you need to concentrate on making the pages themselves as efficient as possible. This basically means hunting down the bottleneck; in most systems the bottleneck is the database (but by no means always).
If you're confident your bottleneck is the database, there are several caching options: you can cache "snippets" of HTML, or you can cache database queries. Using an accelerator helps with this; I wouldn't invent one from scratch. This probably means re-architecting (parts of) your application.
You have to profile your site first.
Instead of making wild guesses, determine the specific bottleneck(s) and then solve those specific problems.
Caching is not a silver bullet, nor a synonym for optimization.
Sometimes caching is not applicable (for ads, for example); sometimes it will help nothing at all, as the reason for the site's slowness may be in some unrelated spot.
Your site may be running out of memory, in which case memory caching will make things worse.
I can't believe someone runs a high-traffic site and says not a word about prior profiling. How can you run it knowing nothing of its internals: CPU load, memory load, disk I/O, and such?
I can add:
Cache everything you can
Minimize number of includes
Use accelerator
Please investigate what makes your site slow. Don't forget about YSlow and similar tools; they can help you a lot.
Besides, if you have heavy calculations you could write a PHP extension for them, but I don't think that is your case.