Building a Custom CodeIgniter Application (Seeking Advice): NoSQL? - php

Recently I've worked with my developer to build a CodeIgniter application to track user clicks on some ad campaigns using PHP/MySQL. In the past I used other paid tools to accomplish this, but they lacked the customization of building your own, which is what led me down this path. Currently everything is set up on a VPS (paying about $70/month through KnownHost), so I think it should be good enough for what I'm trying to accomplish.
That being said, after building the initial tool and finally putting it to the test, it's super slow (usually resulting in a 404 error or a stuck query). I did some research and discovered a few of my limitations. Any time I try to gather click data on 5,000+ clicks, it's super slow (not loading 90% of the time).
1) Everything is built on MySQL, a relational database (vs. NoSQL). Accessing the data this way seems to be resulting in very slow queries. MongoDB looks like a valid non-relational alternative that also looks affordable.
2) I'm not using a cloud database system (i.e. Amazon DynamoDB or Amazon RDS).
Next steps?
The next thing I'm going to test is loading the application on AWS Elastic Beanstalk to see if this improves the speed. If not, I will be going back to the drawing board and finding a plan of attack to optimize the page load dilemma I'm encountering.
Question:
1) Of the points above, which do you think is going to result in loading the data faster? Obviously, some of it comes down to the programmer's ability to minimize and write efficient code ;)
2) If this were you, which of these areas would you attack first (if any)? And if none, what would you recommend?
Thanks in advance, everyone. I appreciate your advice.

Related

Can Laravel 5 handle 1000 concurrent users without lagging badly?

I wanted to know: if 1000 users are concurrently using a website built with Laravel 5 and also querying the database regularly, how does Laravel 5 perform?
I know it would be slow, but would it be so slow that it is unbearable?
Note that I am also going to use AJAX a lot.
And let's assume I am using the DigitalOcean cloud service with the following configuration:
2GB memory
2 vCPU
40GB SSD
I don't expect completely real figures, as that is impossible, but at least provide some details on whether I should go with Laravel and still get reasonable performance.
Please also provide some tools through which I can check the speed of my Laravel 5 application, how it will perform under actual load, and other tools to test speed and performance.
It would be great if someone has real experience of using Laravel, especially Laravel 5.
And what about Lumen? Does that really make the application faster than Laravel, and by how much?
In short, yes. At least newer versions of Laravel are capable (Laravel 7.*).
That being said, this is really a three part conundrum.
1. Laravel (PHP)
High Concurrency Architecture And Laravel Performance Tuning
Honestly, I wouldn't be able to provide half the details that this amazing article provides. He's got everything in there, from the definition of concurrency all the way to pre-optimization vs. post-optimization times.
2. Reading, Writing, & Partitioning Persisted Data (Databases)
MySQL vs. MongoDB
I'd be curious whether the real concern is PHP's Laravel, or more of a database read/write speed bottleneck. Non-relational databases are an incredible technology that benefits big data much more than traditional relational databases.
Non-relational databases (Mongo) have a much faster read speed than MySQL (something like 60% faster, if I'm remembering correctly).
Non-relational databases (Mongo) do have a slower write speed, but this usually is not an inhibitor to the user experience.
Unlike relational databases (MySQL), MongoDB can truly be partitioned, spread out across multiple servers.
MongoDB has collections of documents; collections are roughly synonymous with tables, and documents with rows.
The difference being, MongoDB has a very JSON-like feel to it (collections of documents, where each document looks like a JSON object).
The huge difference, and benefit, is that each document (AKA row) does not have to have the same keys. When using MongoDB on a Fortune 500 project, my mentor and lead at the time, Logan, had a phenomenal quote.
"Mongo Don't Care"
This means that you can shape the data the way you want to retrieve it, so not only is your read speed faster, you're usually not being slowed down by having to retrieve data from multiple tables.
Here's a package, personally tested and loved, to set up MongoDB within Laravel
Jessengers ~ MongoDB In Laravel
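For a rough idea of what that looks like in practice, here is a minimal sketch of a Mongo-backed model (the base-class namespace below is the one the package has used in recent versions, so double-check it against the package docs; the Click model and its fields are invented for illustration):

<?php
use Jenssegers\Mongodb\Eloquent\Model;

class Click extends Model
{
    protected $connection = 'mongodb';  // connection defined in config/database.php
    protected $collection = 'clicks';   // a collection instead of a table
    protected $guarded = [];            // open mass assignment, for this sketch only
}

// It then behaves like any other Eloquent model:
Click::create(['campaign_id' => 42, 'ip' => '203.0.113.7', 'clicked_at' => now()]);
$total = Click::where('campaign_id', 42)->count();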
If you are concerned about immense amounts of users and transferring data, MongoDB may be what you're looking for. With that, let's move on to the 3rd, and most important point.
3. Serverless Architecture (Aka Horizontal scaling)
AWS, Google Cloud, Microsoft Azure, etc... I'm sure you've heard of The Cloud.
This, ultimately, is what you're looking for if you're having concurrency issues and want to stay within the realm of Laravel.
It's a whole new world of incredible tools one can hammer away at -- they're awesome. It's also a whole new, quite large, world of tools and concepts to learn.
First, let's dive into a few serverless concepts.
Infrastructure as Code Terraform
"Use Infrastructure as Code to provision and manage any cloud, infrastructure, or service"
Horizontal Scaling Example via The Cloud
"Create a Laravel application. It's a single application, monolithic. Then you dive Cloud. You discover Terraform. Ahaha, first you use terraform to define how many instances of your app will run at once. You decide you want 8 instances of your application. Next, you of course define a Load Balancer. The Load Balancer simply balances the traffic load between your 8 application instances. Each application is connected to the same database, ultimately sharing the same data source. You're simply spreading out the traffic across multiples instances of the same application."
We can of course top that, very simplified answer of cloud, and dive into lambdas, the what Not to do's of serverless, setting up your internal virtual cloud network...
Or...we can thank the Laravel team in advance for simplifying Serverless Architecture
Yep, Laravel Simplified Serverless (Shout out Laravel team)
Serverless Via Laravel Vapor
Laravel Vapor Opening Paragraph
"Laravel Vapor is an auto-scaling, serverless deployment platform for Laravel, powered by AWS Lambda. Manage your Laravel infrastructure on Vapor and fall in love with the scalability and simplicity of serverless."
Coming to a close, let's summarize.
Original Concern
Ability to handle a certain amount of traffic in a set amount of time
Potential Bottlenecks with potential solutions
Laravel & Php
High Concurrency Architecture And Laravel Performance Tuning
Database & Persisting/Retrieving Data Efficiently
Jessengers ~ MongoDB In Laravel
Serverless Architecture For Horizontal Scaling
Serverless Via Laravel Vapor
I'll try to answer this based on my experience as a software developer. To be honest, I would definitely ask for an upgrade once it hits 1000 concurrent users, because I won't take a risk with server failure or data failure.
But let's break down how to engineer this.
It could handle those users if the data fetched is not complex and there are not many operations in the Laravel code. If it's just passing data through from the database with almost no modification, it'll be fast.
If the data fetched by the users is not unique -- say you have a news site without user personalization, where the news users fetch is mostly the same -- and you cache that data in memory (Redis, which I recommend) or at the web server (Nginx, which should be avoided), your Laravel program will run fast enough.
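As a minimal sketch of that idea (the key, model and TTL are invented; note that older Laravel 5 releases treat the cache TTL as minutes, while newer releases treat it as seconds):

<?php
use Illuminate\Support\Facades\Cache;

// Cache the front-page news so concurrent users hit Redis
// instead of all running the same database query.
$news = Cache::remember('front_page_news', 600, function () {
    return \App\News::latest()->take(20)->get();
});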
Querying the database directly is faster than using Laravel's ORM. You might consider it where needed, but I myself will always try to use the ORM, because it helps keep the code readable and secure.
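To illustrate the trade-off (table and model names are hypothetical):

<?php
use Illuminate\Support\Facades\DB;

// Raw query: less overhead, but you must write and bind the SQL yourself.
$articles = DB::select('SELECT id, title FROM articles WHERE published = ? LIMIT 20', [1]);

// Eloquent ORM: more readable and still safe, at the cost of model hydration overhead.
$articles = \App\Article::where('published', 1)->limit(20)->get(['id', 'title']);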
Splitting the database, web server, CDN, and cache server onto separate machines obviously makes it easier to monitor server usage.
Try to upgrade to the latest version. I used to work with a company that was using version 5, and it's not really good performance-wise.
Opcode caching: cache the compiled PHP bytecode. I myself have never used this.
Split the app into a backend and a frontend, and use state management in the frontend app to reduce data requests to the server.
Now let's answer your questions.
Are there any tools for checking performance? You can try Laravel Debugbar; this tool provides simple performance reports. I myself encourage you to write a test for each of the features you create. You can build a report from those tests to find which feature is taking time to finish.
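One simple way to do that, sketched below with an invented route and an arbitrary time budget, is to wrap a normal feature test in a timing assertion:

<?php
use Tests\TestCase;

class NewsFeedTimingTest extends TestCase
{
    public function test_news_feed_is_fast_enough()
    {
        $start = microtime(true);

        $response = $this->get('/news');            // hypothetical route

        $elapsedMs = (microtime(true) - $start) * 1000;

        $response->assertStatus(200);
        // Flag the feature if it takes longer than the (arbitrary) 500 ms budget.
        $this->assertLessThan(500, $elapsedMs, "News feed took {$elapsedMs} ms");
    }
}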
Is Lumen faster than Laravel? Yes, Lumen is faster because it disables some of Laravel's features. But please be aware that Taylor seems likely to stop developing Lumen; you should consider this for the future.
If performance is a big concern, you might not choose Laravel for development, for a couple of reasons:
There is a delay while opening connections to other servers. Whenever a user makes a request, they open a connection to your server, and the server in turn opens connections to the cache server, database server, SMTP server, and probably other third parties as well. This is the real bottleneck with Laravel. You can use keep-alive (persistent) connections to the database and cache server to reduce connection delay, but you won't find that in stock Laravel, because it disposes of its connections whenever the request finishes (see the config sketch below).
PHP is an interpreted, dynamically typed language, and compiled languages are mostly faster.
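If you do want to experiment with persistent database connections anyway, PDO exposes a driver option that can be set from config/database.php; a sketch of that (it is not real connection pooling, and whether it helps depends on your PHP-FPM setup):

<?php
// config/database.php (excerpt)
'mysql' => [
    'driver'   => 'mysql',
    'host'     => env('DB_HOST', '127.0.0.1'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'options'  => [
        // Reuse the underlying connection between requests served by the same PHP-FPM worker.
        PDO::ATTR_PERSISTENT => true,
    ],
],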
In the end, you might not be able to use Laravel with those server resources unless you're creating a really simple application.
A question like this needs an answer with real numbers.
Luckily, this guy has already done it, in conditions similar to what you want to try, and with Laravel Forge.
With this PHP config:
opcache.enable=1
opcache.memory_consumption=512
opcache.interned_strings_buffer=64
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0
opcache.save_comments=1
opcache.fast_shutdown=1
The results:
Without Sessions:
Laravel: 609.03 requests per second
With Sessions:
Laravel: 521.64 requests per second
So, answering your question:
With that amount of memory you would be in trouble getting 1000 users making requests; use 4 GB of memory and you will be in better shape.
I don't think you can do that with Laravel.
I tried benchmarking Laravel with an 8-core CPU, 8 GB RAM and a 120 GB HDD, and I only got 200-400 requests per second.

Mongo, Express vs MySQL

I'm currently considering what to use for building a piece of software. The system needs to handle complexity like:
- User Management (e.g. Trainer login / Client login)
- Different dashboards (depending on user profile)
- Workout Builder (a trainer must be able to create workout programs, send them to a client by email, and attach them in the system so the client can see the workout program)
- Diet Plans (much like the above)
- Workout Library
- Booking/Calendar (a client should be able to book a trainer)
- Training Logs etc...
As you can see, there would be a lot of relations/bindings and personalization (dashboards) etc... I think you get the idea :) - However, I'm a frontend developer. I do have PHP and MySQL experience (though from a long time ago). So the question is... is it possible to build this system completely with e.g. Angular, Express, Mongo and Node, or would I have to depend on a database system like MySQL and use e.g. PHP for the system?
Thanks in advance for any answers :)
In my opinion, if your hands-on experience with PHP and MySQL is good enough, you should go ahead and deploy your application with PHP and MySQL, with MongoDB as an additional database.
I understand that the MEAN stack can power your complete app, but the development time would be longer. What I have felt while using MongoDB over petabytes of data is that MongoDB is amazingly good at storing complex data in a flat architecture at massive size. But just like all databases, even MongoDB has certain constraints.
You should go ahead with MySQL for your usual login credentials and minor activities, and use MongoDB for storing diet plans and workout libraries, because that gives you the flexibility of a varying document structure and high availability. Over time you will find MongoDB easier to work with than MySQL.
Using the MEAN stack is great. But now I prefer to use a mixed architecture of MySQL, MongoDB, and Postgres. If you are going to use a framework, it will probably have ACL built in or available as an add-on, and that could help you with building user permissions and roles.
Also, if you are using MongoDB, make sure you code according to the storage engine, MMAPv1 or WiredTiger; I had to do major recoding because of the storage engine changes. Just a heads up!
Yes, it is possible to build on a pure JavaScript stack like MEAN: MongoDB, Angular, Express, Node.js.
Everything that MySQL does, MongoDB can do also. The question is only one of proper database design and performance for specific use cases.
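To give a feel for what "proper database design" means on the Mongo side, a workout program can live in one nested document instead of several joined tables. A sketch using the official mongodb/mongodb Composer library, with invented database, collection and field names:

<?php
require 'vendor/autoload.php';

$client = new MongoDB\Client('mongodb://127.0.0.1:27017');

// One document holds the program, its exercises and the assigned client,
// so a dashboard can be rendered from a single read.
$client->fitness->workout_programs->insertOne([
    'trainer_id' => 17,
    'client_id'  => 42,
    'title'      => '8-week strength block',
    'exercises'  => [
        ['name' => 'Squat',    'sets' => 5, 'reps' => 5],
        ['name' => 'Deadlift', 'sets' => 3, 'reps' => 5],
    ],
    'created_at' => new MongoDB\BSON\UTCDateTime(),
]);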

How to increase the execution speed and reduce execution time of a social website like 'Facebook' developed in PHPFox?

I have an already developed and running social website like 'Facebook'.
This website has been developed using PHPFox v3.0.7 (a social networking platform written in PHP).
The website functions are working well; no issues with them.
The main, major and serious issue I'm facing with the website is slow execution speed. Any kind of operation takes too much time, and users have to wait all that time. This really irritates users and is affecting the performance of the website.
So I did research on 'Facebook', the world's largest social networking website, developed in PHP. If 'Facebook' can execute at rapid speed in spite of heavy user load and continuous operations, why can't my site?
First, the site is developed using a framework called 'PHPFox', so the entire database design, caching and all other things are managed by the framework itself. I can't change the framework's internals, but I ultimately want to increase the execution speed of my website.
So how should I achieve it? If you have any best-in-class solution, please provide me guidance for it.
Any kind of help would be highly appreciated.
Please feel free to ask me any of the queries you have regarding the issue I'm facing.
Thanks.
There are several ways you can increase the speed:
Using a CDN
Enabling caching
Using a load-balancing server
Enabling compression
Enabling bootstrap
Optimizing the database and DB driver
You can use the following server speedup methods:
Enabling mod_deflate
Enabling memcached
I have faced the same problem earlier when working on PHPFox, and most of these things can be handled from the admin control panel.
By using memcached, I resolved the speed issue. You can follow the simple steps mentioned in the PHPFox knowledge base and I hope you will find a solution: how-to-enable-memcached
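Outside of PHPFox's own admin setting, plain memcached usage in PHP follows the usual read-through pattern; a small sketch (server address, key and the helper function are only examples):

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$profile = $mc->get('user_profile_42');
if ($profile === false) {
    // Cache miss: load from the database, then keep the result for 5 minutes.
    $profile = load_profile_from_db(42);   // hypothetical helper
    $mc->set('user_profile_42', $profile, 300);
}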
First of all, Facebook uses HHVM (HipHop Virtual Machine), which is an execution engine for PHP: their PHP code is first transformed into intermediate HipHop bytecode (HHBC), which is then dynamically translated into x86-64 machine code, optimized and natively executed.
Thus, every time you log in to Facebook, you are running that highly efficient machine code.
I assume your site does not have HHVM, so here are a few tools & tips to boost PHP-run websites:
Memcached
Cassandra
Optimizing the DB driver
Using mysqli instead of the old mysql extension (see the sketch after this list)
Using a server with better configuration.
Using minified versions of jQuery & Bootstrap (i.e. bootstrap.min.js & jquery.min.js).
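On the mysqli point, the switch mostly means replacing the old mysql_* calls with mysqli and prepared statements; a minimal sketch (credentials and table are placeholders, and get_result() assumes the mysqlnd driver is installed):

<?php
// Old style, deprecated and removed in PHP 7:
//   $link = mysql_connect('localhost', 'user', 'pass');
//   $res  = mysql_query("SELECT * FROM users WHERE id = " . $_GET['id']);

// mysqli with a prepared statement:
$db   = new mysqli('localhost', 'user', 'pass', 'mysite');
$stmt = $db->prepare('SELECT id, name FROM users WHERE id = ?');
$stmt->bind_param('i', $_GET['id']);
$stmt->execute();
$user = $stmt->get_result()->fetch_assoc();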
Hi. In your main question you are asking for advice on increasing the performance of your PHPFox application, whereas in the separate note section you are looking for a solution for sending push notifications.
I will answer regarding 'increasing the performance of your website'.
As you must be aware, performance tuning/optimization is a bigger task. It has various stages, as listed by '#Myself Malay'. Some of them can be done quickly and others will take time, so you need to know them and prioritize them. For example: reducing the number of requests for JS and CSS files, by serving them from a CDN, can be accomplished quickly. Serving your resources (HTML, CSS, JS) in compressed form (gzip) can also be done quickly.
Optimizing your queries will be time-consuming. Enabling a cache should be quick. If you want to set up load balancers, that will take some time.
Here are some reference links regarding PHPFox which will help you:
http://www.ipragmatech.com/phpfox3-performance-xcache-apc-memcached.html#.VXkzXt93M_M
How to optimize your database tables in PHPFox
Speedup PHPFox V3
PHPFox - Optimization Settings
Note: some of the tweaks may require modification of core files. Be careful while doing so, as such changes make auto-updating very difficult. Always take a code and DB backup before modifying core files, and document the changes you have made (which files, where, and why), which will help you re-apply them when you update your code to a new version.
Below are some links which will help you take performance optimization to the next level:
Books for Building Scalable Web Applications? (DB Performance/Tuning, Networking, General Performance, etc.)
Scalability and Performance of Web Applications, Approaches?
My personal favorite book is Scaling PHP Applications by Steve Corona. This book is one of the best references available for PHP developers.
Even though I am not able to give a direct solution to your problem, I hope this helps to a certain extent. All the best :-)
First of all, stop comparing your PHPFox site with Facebook. Facebook has thousands of programmers working day and night to keep the performance up. It is a horrible truth that PHP is a slow server-side scripting language. As your site grows, its performance goes down. Fortunately, there are some tools available that can be used to increase the overall performance of your website. Some of them are Memcached, Cassandra, Varnish etc. You can use CDNs to serve data faster. Choosing a good hosting server also helps make your site ready for heavy traffic.
Facebook uses over 100 unique technologies to keep the site going at a respectable speed. HipHop for PHP is a technology that was developed by the company itself and was used to transform PHP scripts into compiled C++ so they execute faster. HipHop made PHP code execute 30-50% faster. HHVM is the successor to HipHop for PHP and is available for free, but it typically runs behind nginx. Think again: moving from Apache to nginx is not a good decision if you are just a beginner.

How to optimize this website. Need general advice

This is my first question here, regarding optimization of a specific website.
A few months ago we launched [site] for one of our clients; it is a kind of community website.
Everything works great, but now the website is getting bigger and it shows some slowness when pages are loading.
The server specs:
PHP 5.2.1 (I think we need to upgrade to 5.3 to make use of the new garbage collector)
Apache 2.2
Quad-core Xeon processor @ 2.8 GHz and 4 GB DDR3 RAM
XCACHE 1.3 (we added this a few months ago)
MySQL 5.1 (we are using InnoDB as the engine)
CodeIgniter framework
Here is what we have done so far and what we intend to do further:
Besides XCache, we don't really use a caching mechanism because most of the content is served live, and we didn't want to optimize prematurely because we didn't know what to expect as far as traffic flow.
On the other hand, we have installed memcached and we want to implement a cache system based on memcached.
Regarding the database structure, we have reached 3NF with most of our tables, and yes, we have some slow queries (which we plan to optimize), but I think that is because the tables that produce slow queries are the ones for blog comments (~44,408 rows), user logs tracking (~725,837 rows), user comments (~698,964 rows) etc., which are quite big tables. The entire database is 697.4 MB in size for now.
Also, here are some stats for January 2011:
Monthly unique visitors: 127,124
Monthly unique views: 4,829,252
Monthly unique visits: 242,708
Daily averages:
Unique new visitors: 7,533
Unique new views: 179,680
Just let me know if you need more details.
Any advice is highly appreciated.
Thank you.
When it comes to performance issues, there is no golden rule or ready-made label that says the problem is the database. What I can suggest is to do performance profiling; there are many free and paid tools on the Internet that let you do so.
First, start with the web server layer: make sure everything there is done correctly and optimized as far as possible.
Then move on to the next layer (which I assume is your database). Normally, whenever someone mentions InnoDB MySQL, we assume indexes have been created to speed up search operations. Index usage is also quite important, because you don't want to index the wrong thing and make things worse. My advice here is to get a DBA-equivalent person to troubleshoot using a staging environment.
Another trick you could possibly look at is the content, from web page contents to database data: make sure you show and keep data only where it is needed, don't store unnecessary information in the database, and use a smart layout on the web page. Cutting a second or two might make a big difference in terms of usability and response time.
It is very hard to go into detail here unless we have in-depth information about your application, its architecture and your environment, but the above are some commonly used directions people take to troubleshoot such incidents.
Good luck!
This site has excellent resources http://www.websiteoptimization.com/
The books mentioned there are excellent. There are just too many techniques to list here, and we do not know what you have tried so far.
Sorry for the delay guys, I have been very busy tracking down the issue, and I found it.
Well, the problem was mostly Apache: I had an access log of almost 300 GB which was parsed at midnight to generate Webalizer stats. While this was happening, the website was very, very slow. I disabled Webalizer for the domain, cleared the logs, and what do you know, it is very fast again, no matter what hour you access it.
I now have only a few slow queries left, which I intend to fix today.
I also updated to CI 2.0 Reactor as suggested and started to use the memcached driver.
Who would have known that Apache logs could be so problematic...
Based on the stats, I don't think you are hitting load problems... on a hunch, I would look to the database first. Database partitioning might be a good place to start.
But you should really do some profiling of your application first. How much time is spent in the application versus the database? Are there application methods that are using lots of time and just need some tweaking? Are database queries not written efficiently? Do you need more or better database indices?
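On the index question, EXPLAIN usually shows quickly whether the big comment tables are being scanned; a sketch of what to check from a CodeIgniter model (table and column names are guesses based on the description above):

<?php
// See how MySQL executes the slow query.
$plan = $this->db->query(
    'EXPLAIN SELECT * FROM user_comments WHERE post_id = ? ORDER BY created_at DESC LIMIT 20',
    array(42)
)->result_array();

// If the plan shows a full table scan, a composite index on the filter and sort columns usually helps:
$this->db->query('ALTER TABLE user_comments ADD INDEX idx_post_created (post_id, created_at)');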
Everything looks pretty good. If upgrading CodeIgniter is an option, the new CodeIgniter 2.0 (Reactor) adds support for memcache (a new Cache driver with file system, APC and memcache support). Granted you're already using XCache, these new additions may still be worth looking at.
When cache objects weren't enough for our multi-domain platform that saw huge traffic, we went the route of throwing more hardware at it: RAM, servers, databases. Then we moved to database clustering to handle forecasted heavy load on single accounts. And now we're switching from Apache to nginx... It's a never-ending battle, but what worked for us was being smart about what we cached, increasing server memory, and then distributing the load across servers...
Cache as many database calls as you can. In my CI application I have a settings table that rarely changes, so I cache all calls made to it, as I am constantly querying the settings table (see the sketch after these points).
Cache your views and even your controllers as well. I tend to cache as much as I can in my CI applications and then refresh the cache when a file changes.
Only autoload important libraries, models and helpers. I've seen people autoload up to 10 libraries, and on top of that a few helpers and a model. You only really need to autoload the database and session libraries if you are using them.
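With CodeIgniter 2's cache driver, the settings-table caching from the first point looks roughly like this (adapter choice and key names are just examples):

<?php
// Load the cache driver once (e.g. in the controller constructor),
// falling back to file caching when APC is not available.
$this->load->driver('cache', array('adapter' => 'apc', 'backup' => 'file'));

if ( ! $settings = $this->cache->get('site_settings'))
{
    $settings = $this->db->get('settings')->result();      // the rarely-changing table
    $this->cache->save('site_settings', $settings, 3600);  // keep it for an hour
}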
Regarding point number 3: are you autoloading many things in your config/autoload.php file, by any chance? It might help speed things up to only load what you need in your controllers, as you need it, with the exception of course of the session and database libraries.

What are best practices for self-updating PHP+MySQL applications?

It is pretty standard practice now for desktop applications to be self-updating. On the Mac, every non-Apple program that uses Sparkle is, in my book, an instant win. For Windows developers, this has already been discussed at length. I have not yet found information on self-updating web applications, and I hope you can help.
I am building a web application that is meant to be installed like Wordpress or Drupal: unzip it in a directory, hit some install page, and it's ready to go. In order to have broad server compatibility, I've been asked to use PHP and MySQL -- is that *AMP? In any event, it has to be broadly cross-platform. For context, this is basically a unified web messaging application for small businesses. It's not another CMS platform; think webmail.
I want to know about self-updating web applications. First of all, (1) is this a bad idea? As of WordPress 2.7 the automatic update is a single button, which seems easy, and yet I can imagine so many ways this could go terribly, terribly wrong. Also, isn't the idea that the web files are writable by the web process a security hole?
(2) Is it worth the development time? There are probably millions of WP installs in the world, so it's probably worth the time it took the WP team to make it easy, saving millions of man-hours worldwide. I can only imagine a few thousand installs of my software, so is building self-upgrade worth the time investment, or can I assume that users sophisticated enough to download and install web software in the first place can follow an upgrade checklist?
If it's not a security disaster or waste of time, then (3) I'm looking for suggestions from anyone who has done it before. Do you keep a version table in your database? How do you manage DB upgrades? What method do you use for rolling back a partial upgrade in the context of a self-updating web application? Did using an ORM layer make it easier or harder? Do you keep a delta of version changes or do you just blow out the whole thing every time?
I appreciate your thoughts on this.
Frankly, it really does depend on your userbase. There are tons of PHP applications that don't automatically upgrade themselves. Their users are either technical enough to handle the upgrade process, or just don't upgrade.
I propose two steps:
1) Seriously ask yourself what your users are likely to really need. Will self-updating provide enough of a boost to adoption to justify the additional work? If you're confident the answer is yes, just do it.
Since you're asking here, I'd guess that you don't know yet. In that case, I propose step 2:
2) Release version 1.0 without the feature. Wait for user feedback. Your users may immediately cry for a simpler upgrade process, in which case you should prioritize it. Alternately, you may find that your users are much more concerned with some other feature.
Guessing at what your users want without asking them is a good way to waste a lot of development time on things people don't actually need.
I've been thinking about this lately with regard to database schema changes. At the moment I'm digging into WordPress to see how they've handled database changes between revisions. Here's what I've found so far:
$wp_db_version is loaded from wp-includes/version.php. This variable corresponds to a Subversion revision number, and is updated when wp-admin/includes/schema.php is changed. (Possibly through a hook? I'm not sure.) When wp-admin/admin.php is loaded, the WordPress option named db_version is read from the database. If this number is not equal to $wp_db_version, wp-admin/upgrade.php is loaded.
wp-admin/includes/upgrade.php includes a function called dbDelta(). dbDelta() scans $wp_queries (a string of SQL queries that will create the most recent database schema from scratch) and compares it to the schema in the database, altering the tables as necessary so that the schema is brought up-to-date.
upgrade.php then runs a function called upgrade_all(), which runs specific upgrade_NNN() functions if $wp_db_version is less than the target values (e.g. upgrade_250(), the WordPress 2.5.0 upgrade, will be run if the database version is less than 7499). Each of these functions runs its own data migration and population procedures, some of which are also called during the initial database setup script. Nicely cuts down on duplicate code.
So, that's one way to do it.
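Stripped of the WordPress specifics, the same pattern can be sketched generically; everything below (option name, step numbers, SQL) is invented for illustration, not WordPress's code:

<?php
function maybe_upgrade_database(PDO $db)
{
    // Version currently installed in the database (the code's target version
    // is simply the highest key in $steps below).
    $installed = (int) $db->query(
        "SELECT value FROM options WHERE name = 'db_version'"
    )->fetchColumn();

    $steps = [
        11 => function (PDO $db) { $db->exec('ALTER TABLE messages ADD COLUMN read_at DATETIME NULL'); },
        12 => function (PDO $db) { $db->exec('CREATE INDEX idx_messages_user ON messages (user_id)'); },
    ];

    // Each pending step runs exactly once, in order, and the stored version advances after it.
    foreach ($steps as $version => $step) {
        if ($installed < $version) {
            $step($db);
            $db->prepare("UPDATE options SET value = ? WHERE name = 'db_version'")
               ->execute([$version]);
            $installed = $version;
        }
    }
}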
Yes, it would be a security issue if PHP went and overwrote its files from some place on the internet with no warning. There's no guarantee that the server is connecting correctly to your update server (it might download code crafted by someone else if DNS poisoning occurred), giving someone else access to your clients' data. Therefore digital signing would be important.
The user could control updates by setting permissions on the web directory so that PHP only has read access to the files - this procedure could simply be documented with your program.
One question remains (I really don't know the answer to): can PHP overwrite files if it's currently using them (e.g. if the update.php file itself needed to be updated)? Worth testing.
I suppose you've already ruled this out, but you could host it as a service. (Think wordpress.com)
I'd suggest that you package your application with PEAR and set up a channel. Your users can then upgrade the application through a standard interface (pear). It's not entirely automatic (unless the users have some kind of automation running on top of PEAR), but it's standard, so any sysadmin can maintain it.
I think your best option is an update-checking mechanism that alerts the administrator when there are updates.
As you mention, there are a number of potential security problems. Due to those alone, I would suggest not doing this. Instead, try creating a fairly smart upgrading script.
Just my 2 cents: I'd consider an automatically self-updating application within my CMS a security hole, so if you decide to code this feature, you should consider implementing different levels of this behavior:
Automatically update
Check for updates and notify
Disable
