I have a small messaging site for my client. Well, it's really more of a post-comment system (written in PHP). Now my client wants a system where users can comment on an existing comment, plus some features like liking and tagging. Another thing: the existing system is heavily used in my client's company, where they use it like a Skype chat (which makes it read-write intensive). My client also wants to use open source software as much as possible, which is why I used MySQL Community Edition.
Enough about my story... I spent a week researching NoSQL databases and found them a good fit for my requirements, since my client keeps wanting new features (which means adding columns and tables from time to time). These are the NoSQL database systems that caught my eye (if you can suggest another NoSQL database, that's fine too):
MongoDB
CouchDB
Redis
Now my question is: which of the three is a good fit for my situation? I've also read some bad things about these three:
MongoDB was unreliable in its 2.x versions
CouchDB is slow (my client doesn't want slow)
Redis is memory-based and only writes to disk at certain intervals, so if the system crashes in the middle of an interval, the data since the last save is lost
I'd like some opinions on this, and any advice that could help me cope with my upcoming situation.
MongoDB is a popular solution to this, and my personal preference. The great thing about Mongo (besides being schemaless) is that you can have nested/embedded documents. So for example, you can have a comment which has an array of sub-comments which each have their own arrays of sub-comments. I don't know of any other datastore that has that feature. It's also fast.
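To make the embedded shape concrete, here is a sketch of what such a post document could look like, as plain JavaScript data (field names like `comments` and `likes` are illustrative, not a fixed schema):

```javascript
// Illustrative sketch of an embedded-comment document.
// Field names (author, body, comments, likes) are invented for this example.
const post = {
  _id: 1,
  author: "alice",
  body: "First post",
  likes: ["bob", "carol"],          // liking = pushing a user id onto an array
  comments: [
    {
      author: "bob",
      body: "Nice post",
      comments: [                   // replies nest inside their parent comment
        { author: "carol", body: "Agreed", comments: [] }
      ]
    }
  ]
};

// Counting every comment, however deeply nested:
function countComments(node) {
  return node.comments.reduce((sum, c) => sum + 1 + countComments(c), 0);
}

console.log(countComments(post)); // 2
```

Storing replies inside their parent comment means a single read fetches the whole thread; the trade-off is that very deep or very large threads can run into MongoDB's per-document size limit.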
CouchDB has some nice features, but Mongo is so similar and much better.
Redis is very different from the other two. It's used mostly as an alternative to memcached. So it's primarily used for temporary data. Although it has some nice pubsub features built in. A lot of people use both MongoDB and Redis, but for different things.
Related
I want to know: if 1000 users are concurrently using a website built with Laravel 5 and regularly querying the database, how does Laravel 5 perform?
I know it would be slower, but would it be so slow as to be unbearable?
Note that I am also going to use AJAX a lot.
And let's assume I am using the DigitalOcean cloud service with the following configuration:
2GB memory
2 vCPU
40GB SSD
I don't expect exact figures, since that's impossible to give, but please provide at least some idea of whether Laravel will give me acceptable performance.
Please also suggest some tools I can use to check the speed of my Laravel 5 application and see how it performs under actual load.
It would be great to hear from someone with real experience using Laravel, especially Laravel 5.
And what about Lumen? Does it really make an application faster than Laravel, and by how much?
In short, yes. At least newer versions of Laravel are capable (Laravel 7.*).
That being said, this is really a three part conundrum.
1. Laravel (Php)
High Concurrency Architecture And Laravel Performance Tuning
Honestly, I wouldn't be able to provide half the details this amazing article provides. It has everything in there, from the definition of concurrency all the way to pre-optimization vs. post-optimization timings.
2. Reading, Writing, & Partitioning Persisted Data (Databases)
MySQL vs. MongoDB
I'd be curious whether the real concern is PHP's Laravel, or more of a database read/write speed bottleneck. Non-relational databases are an incredible technology that benefits big data much more than traditional relational databases.
Non-relational databases (Mongo) have a much faster read speed than MySQL (something like 60% faster, if I'm remembering correctly)
Non-relational databases (Mongo) do have a slower write speed, but this is usually not an inhibitor to the user experience
Unlike relational databases (MySQL), MongoDB can be truly partitioned, spread out across multiple servers
MongoDB has collections of documents; collections are roughly analogous to tables, and documents to rows.
The difference is that MongoDB has a very JSON-like feel to it (collections of documents, where each document looks like a JSON object).
The huge difference, and benefit, is that each document (AKA row) does not have to have the same keys. When using MongoDB on a Fortune 500 project, my mentor and lead at the time, Logan, had a phenomenal quote:
"Mongo Don't Care"
This means that you can shape the data the way you want to retrieve it, so not only is your read speed faster, you're usually not slowed down by having to retrieve data from multiple tables.
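As an illustration of shaping data the way you retrieve it (all names here are invented for the example), a pre-shaped document can replace what would otherwise be a multi-table join:

```javascript
// Hedged sketch: one pre-shaped document replacing a multi-table join.
// In MySQL this data might span users, orders and order_items tables;
// stored this way, it can be read back in a single query.
const userDashboard = {
  _id: "user-42",
  name: "Dana",
  recentOrders: [
    { orderId: "o-1", items: [{ sku: "A", qty: 2 }], total: 19.98 },
    { orderId: "o-2", items: [{ sku: "B", qty: 1 }], total: 5.49 }
  ]
};

// Everything needed to render the dashboard is already in one place:
const totalSpent = userDashboard.recentOrders
  .reduce((sum, o) => sum + o.total, 0);

console.log(totalSpent.toFixed(2)); // "25.47"
```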
Here's a package, personally tested and loved, for setting up MongoDB within Laravel:
Jenssegers ~ MongoDB in Laravel
If you are concerned about immense amounts of users and transferring data, MongoDB may be what you're looking for. With that, let's move on to the 3rd, and most important point.
3. Serverless Architecture (Aka Horizontal scaling)
AWS, Google Cloud, Microsoft Azure, etc. I'm sure you've heard of The Cloud.
This, ultimately, is what you're looking for if you're having concurrency issues and want to stay within the realm of Laravel.
It's a whole new world of incredible tools one can hammer away at, and they're awesome. It's also a whole new, quite large, world of tools and concepts to learn.
First, let's dive into a few serverless concepts.
Infrastructure as Code Terraform
"Use Infrastructure as Code to provision and manage any cloud, infrastructure, or service"
Horizontal Scaling Example via The Cloud
"Create a Laravel application. It's a single application, monolithic. Then you dive into the Cloud. You discover Terraform. Aha: first you use Terraform to define how many instances of your app will run at once. You decide you want 8 instances of your application. Next, you of course define a load balancer. The load balancer simply balances the traffic load between your 8 application instances. Each application instance is connected to the same database, ultimately sharing the same data source. You're simply spreading the traffic across multiple instances of the same application."
We could of course top that very simplified picture of the cloud and dive into lambdas, the what-not-to-dos of serverless, setting up your internal virtual cloud network...
Or...we can thank the Laravel team in advance for simplifying Serverless Architecture
Yep, Laravel simplified serverless (shout out to the Laravel team).
Serverless Via Laravel Vapor
Laravel Vapor Opening Paragraph
"Laravel Vapor is an auto-scaling, serverless deployment platform for Laravel, powered by AWS Lambda. Manage your Laravel infrastructure on Vapor and fall in love with the scalability and simplicity of serverless."
Coming to a close, let's summarize.
Original Concern
Ability to handle a certain amount of traffic in a set amount of time
Potential bottlenecks, with potential solutions:
- Laravel & PHP: High Concurrency Architecture And Laravel Performance Tuning
- Database & persisting/retrieving data efficiently: Jenssegers ~ MongoDB in Laravel
- Serverless architecture for horizontal scaling: Serverless via Laravel Vapor
I'll try to answer this based on my experience as a software developer. To be honest, I would definitely ask for an upgrade once it hits 1000 concurrent users, because I won't take risks with server failure or data loss.
But let's break down how to engineer this.
It could handle those users if the data being fetched is not complex and there are not many operations in the Laravel code. If the data just passes through from the database with almost no modification, it will be fast.
It helps if the data fetched by the users is not unique. Say you have a news site without user personalization: the news the users fetch will mostly be the same for everyone. If you cache that data in memory (Redis, which I recommend) or at the web server (Nginx, which I'd avoid), your Laravel application will run fast enough.
Querying the database directly is faster than using Laravel's ORM. You might consider it where needed, but I always try to use the ORM myself, because it helps keep the code readable and secure.
Splitting the database, web server, CDN, and cache server onto separate machines obviously makes it easier to monitor server usage.
Try to upgrade to the latest version. I used to work at a company that was on version 5, and its performance was not really good.
Use opcode caching to cache compiled PHP code (I have never used this myself).
Split the app into a backend and a frontend, and use state management in the frontend app to reduce data requests to the server.
Now let's answer your questions.
Are there any tools for checking performance? You can check Laravel Debugbar, which provides simple performance reports. I'd also encourage you to write a test for each feature you create; you can generate a report from those unit tests to find which features take the longest to finish.
Is Lumen faster than Laravel? Yes, Lumen is faster because it disables some Laravel features. But please be aware that Taylor seems to have stopped developing Lumen; you should consider this for the future.
If performance is your main concern, you might not choose Laravel for development at all:
There is a delay opening every connection. Whenever a user makes a request, a connection to the server is opened, and the server in turn opens connections to the cache server, database server, SMTP server, and possibly other third parties as well. This is the real bottleneck in Laravel. You can use keep-alive (persistent) connections to the database and cache server to reduce connection delay, but you won't find that in Laravel, because it disposes of the connection whenever the request finishes.
PHP is a dynamically typed, interpreted language, and compiled languages are mostly faster.
In the end, you might not be able to use Laravel with those server resources unless you're building a really simple application.
A question like this needs an answer with real numbers.
Luckily, someone has already benchmarked this under conditions similar to what you want to try, using Laravel Forge.
With this PHP configuration:
opcache.enable=1                    ; turn the opcode cache on
opcache.memory_consumption=512      ; MB of shared memory for cached bytecode
opcache.interned_strings_buffer=64  ; MB reserved for interned strings
opcache.max_accelerated_files=20000 ; max number of scripts that can be cached
opcache.validate_timestamps=0       ; never re-check source files for changes
opcache.save_comments=1             ; keep docblocks (some libraries need them)
opcache.fast_shutdown=1             ; faster request shutdown (removed in PHP 7.2)
the results:
Without Sessions:
Laravel: 609.03 requests per second
With Sessions:
Laravel: 521.64 requests per second
so answering your question:
With that amount of memory you would be in trouble handling 1000 users making requests; use 4 GB of memory and you will be in better shape.
I don't think you can do that with Laravel.
I tried benchmarking Laravel with an 8-core CPU, 8 GB of RAM and a 120 GB HDD, and I only got 200-400 requests per second.
I'm currently considering what to use for building a piece of software. The system needs to handle complexity like:
- User management (e.g. trainer login, client login)
- Different dashboards (depending on user profile)
- Workout builder (a trainer must be able to create workout programs, send them by email, and attach them so the client can see the program in the system)
- Diet plans (much like the above)
- Workout library
- Booking/calendar (a client should be able to book a trainer)
- Training logs, etc.
As you can see, there would be a lot of relations/bindings and personalization (dashboards), etc. I think you get the idea :) However, I'm a frontend developer. I do have PHP and MySQL experience, though from a long time ago. So the question is: is it possible to build this system completely with, say, Angular, Express, Mongo and Node? Or would I have to depend on a database system like MySQL and use something like PHP for the system?
Thanks in advance for any answers :)
In my opinion, if your hands-on experience with PHP and MySQL is good enough, you should go ahead and deploy your application with PHP and MySQL, with MongoDB as an additional database.
I understand that the MEAN stack can power your complete app, but the development time would be longer. What I have found while using MongoDB over petabytes of data is that it is amazingly good at storing complex data in a flat architecture at massive scale. But just like all databases, even MongoDB has certain constraints.
You could go ahead with MySQL for your usual login credentials and minor activities, and use MongoDB for storing diet plans and workout libraries, because it gives you the flexibility of a varying document structure plus high availability. Over time you will find MongoDB easier to work with than MySQL.
Using the MEAN stack is great, but these days I prefer a mixed architecture of MySQL, MongoDB, and Postgres. If you use a framework, it will probably have ACL support built in or available as an add-on, which can help you build user permissions and roles.
Also, if you are using MongoDB, make sure you code for the storage engine you will run on (MMAPv1 or WiredTiger); I once had to do a major recode because of a storage engine change. Just a heads up!
Yes, it is possible to build this on a pure JavaScript stack like MEAN: MongoDB, Express, Angular, Node.js.
Everything that MySQL does, MongoDB can do as well. The question is only one of proper database design and performance for your specific use cases.
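As a hedged sketch of what such a design could look like for the booking feature described in the question (collection and field names are invented, not a prescribed schema), you can mix referencing and embedding:

```javascript
// Hedged sketch of mixing references and embedding for a booking feature.
// All names here are illustrative, not a prescribed schema.
const booking = {
  _id: "booking-1",
  trainerId: "trainer-7",     // reference: looked up in a trainers collection
  clientId: "client-12",      // reference: looked up in a clients collection
  slot: "2015-06-01T10:00",
  program: {                  // embedded: varies per booking, read together
    title: "Beginner strength",
    exercises: [
      { name: "Squat", sets: 3, reps: 5 },
      { name: "Bench press", sets: 3, reps: 5 }
    ]
  }
};

// The embedded program is available without any join:
const totalSets = booking.program.exercises
  .reduce((sum, e) => sum + e.sets, 0);
console.log(totalSets); // 6
```

The rule of thumb sketched here: embed data that is read together and varies in shape (the workout program), reference entities that live on their own (trainers, clients).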
Hi all, I am a student working on a project at a hospital. We designed an application where a patient can book an appointment with a doctor, similar to this application (apphp.com/php-medical-appointment/examples/sample2/index.php). Our application uses PHP and MySQL and runs on the microcms framework. Now what we are trying to do is integrate this application with MedTrak (http://www.intersystems.com/trakcare/), which uses Caché DB (InterSystems Caché, a post-relational database).
We have written our application against MySQL, so is there any way we can push data from our application to their DB, and get data from their DB?
So far we have considered these methods:
ODBC: will it work, given that we would have to rewrite our data access against ODBC?
Help
Enterprise Integration Patterns is a great book and I highly recommend it. However, I would add that even if you had top-notch messaging middleware available to keep the two applications as loosely coupled as possible, at some point you will have to read from or write to the Caché database, and you will probably need both.
Also, a sophisticated approach to integration may or may not be feasible in a student project. Perhaps it would be sufficient to have most of your code talk to an abstract communication layer that encapsulates the exact integration. You could start with whatever is simplest to implement, but have a story about how it could be changed later. Even this is probably hard enough for a student project, since the interface really should assume the communication is asynchronous.
In any case, at some point the rubber will meet the road and you will have to read and/or write to the Caché database. At that point, ODBC is available, and it sounds like it would be a good choice for you. There are other ways to connect to Caché, but ODBC is widely used (and therefore probably more reliable) and doesn't require you to learn Caché ObjectScript, which would be a lot of extra work in your situation.
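The abstraction layer suggested above could be sketched like this (all class and method names are invented for illustration); the in-memory backend would later be swapped for one that talks ODBC to Caché behind the same narrow interface:

```javascript
// Hedged sketch of an abstract communication layer: the app talks to a
// narrow interface, and the backend behind it can be swapped later.
// All names here are invented for illustration.
class PatientRecordGateway {
  constructor(backend) { this.backend = backend; }
  saveAppointment(appt) { return this.backend.save(appt); }
  findAppointments(patientId) { return this.backend.find(patientId); }
}

// Start with the simplest backend (in-memory); later replace it with one
// that speaks ODBC to the Cache database behind the same two methods.
class InMemoryBackend {
  constructor() { this.rows = []; }
  save(appt) { this.rows.push(appt); return appt; }
  find(patientId) { return this.rows.filter(r => r.patientId === patientId); }
}

const gateway = new PatientRecordGateway(new InMemoryBackend());
gateway.saveAppointment({ patientId: "p1", doctor: "Dr. A" });
console.log(gateway.findAppointments("p1").length); // 1
```

The point of the design is that only the backend class knows how the integration works, so switching from in-memory to ODBC (or to a message queue) touches one class, not the whole application.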
There are many ways to achieve this; the best way to learn about them is to read "Enterprise Integration Patterns".
I wouldn't recommend writing directly to each other's databases. It is a fragile way of gluing apps together, because a change in one schema requires you to change the other app at exactly the same time. You also have to deal with exotic failure modes: one database may be down for backups, which means you can't write the changes from the other database to it.
Read the book for alternatives!
I am going to develop a social and professional networking website using PHP (the Zend or Yii framework). We are targeting over 5000 requests per minute. I have experience developing advanced websites using MVC frameworks.
But this is the first time I am going to develop something with scalability in mind, so I would really appreciate it if someone could tell me about the technologies I should be looking at.
I have read about memcached and APC. Which one should I look at? Also, should I use a single MySQL server or a master/slave combination (and if the latter, why and how)?
Thanks !
You'll probably want to architect your site to use, at minimum, a master/slave replication setup. You don't necessarily need to set up replicating MySQL boxes to begin with, but you want to design your application so that database reads use a different connection than writes (even if, in the beginning, both connections point to the same database server).
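A minimal sketch of that read/write split, with stub connections standing in for real database handles (names are illustrative):

```javascript
// Hedged sketch of routing reads and writes through separate connections,
// even if both point at the same server to begin with. Names invented.
function makeDb(readConn, writeConn) {
  return {
    query(sql, params) {            // reads go to the (future) slave
      return readConn.run(sql, params);
    },
    execute(sql, params) {          // writes always go to the master
      return writeConn.run(sql, params);
    }
  };
}

// Stub connections just record where each statement was sent:
const sent = [];
const conn = name => ({ run: sql => { sent.push([name, sql]); } });

const db = makeDb(conn("slave"), conn("master"));
db.query("SELECT * FROM posts");
db.execute("INSERT INTO posts VALUES (1)");
console.log(sent.map(([where]) => where)); // [ 'slave', 'master' ]
```

Because all call sites already distinguish `query` from `execute`, moving to real replication later is a configuration change rather than an application rewrite.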
You'll also want to think very carefully about what your caching strategy is going to be. I'd be looking at memcache, though with Zend_Cache you could use a file-based cache early on, and swap in memcache if/when you need it. In addition to record caching, you also want to think about (partial) page-level caching, and what kind of strategies you want to plan/implement there.
You'll also want to plan carefully how you'll handle the storage and retrieval of user-generated media. You'll want to be able to easily move that stuff off the main server onto a dedicated box to serve static content, or some kind of CDN (content distribution network).
Also, think about how you're going to handle session management, and make sure you don't do anything that will prevent you from using a non-file-based session storage ((dedicated) database, or memcache) in the future.
If you think carefully, and abstract data storage/retrieval, you'll be heading in a good direction.
Memcached is a distributed caching system, whereas APC is non-distributed and mainly an opcode cache.
If (and only if) your website has to live on multiple web servers (load balancing), you have to use memcached for distributed caching. If not, just stick with APC and its cache.
About MySQL database, I would advise a gridhosting which can autoscale according to requirements.
Depending on the requirements of your site, it's most likely the database will be your bottleneck.
MVC frameworks tend to sacrifice performance for ease of coding, especially in the case of ORM. Don't rely on the ORM; instead, benchmark different ways of querying the database and see which suits. You want to minimise the number of database queries: fetch a chunk of data at once instead of doing multiple small queries.
If you find that your PHP code is a bottleneck (profile it before optimizing), you might find Facebook's HipHop useful.
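The "one chunked query instead of many small ones" advice can be sketched like this, with a stub query runner standing in for a real database (names are illustrative):

```javascript
// Hedged sketch of batching queries. The runner is a stub that just
// records what SQL would be issued; names are invented for illustration.
const executed = [];
function run(sql) { executed.push(sql); return []; }

// N+1 style: one query per id (the pattern to avoid)
function fetchEach(ids) {
  ids.forEach(id => run(`SELECT * FROM users WHERE id = ${id}`));
}

// Chunked: one query for the whole batch
function fetchBatch(ids) {
  run(`SELECT * FROM users WHERE id IN (${ids.join(",")})`);
}

fetchEach([1, 2, 3]);   // issues 3 queries
fetchBatch([1, 2, 3]);  // issues 1 query
console.log(executed.length); // 4
```

(In real code the ids would be bound as parameters rather than interpolated into the SQL string; the stub skips that to keep the round-trip count visible.)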
Should I use Cassandra for a 100,000-user project? In MySQL 5 I have full-text search and table partitioning. I'm building a Q&A system like SO with CodeIgniter, moving from vBulletin to a new system. In the old vBulletin system I had 100,000 users, with a total post count of around 80,000. Over the next 3 or 4 years, I expect both users and posts to keep growing. So, should I use Cassandra instead of MySQL 5?
If I use Cassandra, I need to change from Grid-Service to Dedicated-Virtual hosting at Media Temple, because Cassandra is not offered as part of the hosting platform; I would need a VPS or DV server. If I use MySQL, hosting is not a problem, but then what about performance and search speed?
By the way, what database is Stack Overflow using?
From the information you provided, I would suggest sticking with MySQL.
Just as a side note, Facebook was using MySQL at first, and moved to Cassandra only after it was storing over 7 terabytes of inbox data for over 100 million users.
Source: Lakshman, Malik: Cassandra - A Decentralized Structured Storage System.
Wikipedia also handles hundreds of Gigabytes of text data in MySQL.
You say 100,000 users - but how many concurrent users?
Cassandra is not built in hosting system
Using a hosted service on a single server suggests a very small-scale operation, and you're obviously limited by your budget. There's certainly no advantage to running Cassandra on a single server node.
In MySQL 5 I have full-text search
Which is not a very scalable solution; you should definitely think about using a denormalized search index (which I believe you'd have to build if you were migrating to Cassandra anyway).
Given that you can comfortably scale the MySQL solution to multiple databases using replication before you even think about a fully clustered solution, and you obviously don't have the budget to do your own hosting, migrating to Cassandra seems like massive overkill.
I would NOT recommend using Cassandra in your case, for the following reasons:
Cassandra requires a good understanding of the application you're building. It is much harder to make changes and to run complex queries against data stored in Cassandra. SQL is more flexible and easier to maintain. Cassandra is good when you need to store huge amounts of data and you know exactly how that data will be accessed and sorted.
MySQL works fine for millions of rows if the indexes are built properly.
If you hit bottlenecks with MySQL in the future, you can look at exactly what your problems are and scale those parts with Cassandra. I mean that you should be able to combine both approaches, SQL and NoSQL, in the same project.
With regard to MySQL's full-text index, I can say that it's useless: it works too poorly to be used in high-load projects. Look at sphinxsearch.com, which is a great implementation of full-text search for SQL databases.
But if you expect your system to grow fast and eventually serve millions of users, you should consider Cassandra from the beginning.