I'm currently using Amazon SimpleDB, but judging by the cost, it seems it will be more expensive than I can afford.
I have one m1.small Amazon EC2 instance, which runs the front-end web server very well. In my single SDB domain I have four attributes (two of which I can delete, since they hold data I rarely need now) plus the item name. I perform only getAttribute queries (no selects). Basically, the item name is what I use to find data.
Around 20 reads and 8 writes per second occur on it. The box usage is terribly high, which pushes my costs up.
Which would be the best database choice to host on a t1.micro instance (since it's the only cheap, low-end 64-bit instance, and the other 64-bit instances are far too expensive)?
Redis, MongoDB, CouchDB, or something else? And would it even be possible to host a database server that can sustain the load mentioned above on such a small instance?
I have migrated some of my databases from SimpleDB to MongoDB for other reasons.
However, I wanted to continue with the hosted model for the database. So instead of using Amazon SimpleDB I am now using a combination of MongoHQ (mongohq.com) and MongoLab (mongolab.com).
They also have a free tier, not based on traffic but on the size of your database. You will need to analyze the costs based on the amount of data you will be dealing with.
It seems to me that if you are only using two attributes, you should be fine with the free tier for a while (MongoLab.com has a 250 MB limit for the free tier).
Since both of those hosted services can run in Amazon EC2, they are close to your front end; you will not incur bandwidth costs because everything stays inside AWS, and performance benefits from the high-speed internal AWS network.
In terms of performance, I think 20 reads and 8 writes per second is not a big deal, and the server will handle all the cycles needed to support your app.
You can batch all your writes and use unacknowledged writes (the mode that returns no response from the server) to make the writes much faster.
For your reads, make sure you index your collection correctly and it should run fine.
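The batch-and-unacknowledged-write idea can be sketched as follows. This is a minimal illustration; the chunking helper is plain Python, and the commented pymongo calls assume a client is already set up, with hypothetical collection and field names:

```python
def chunked(docs, batch_size=1000):
    """Yield successive slices of docs so each insert is one round trip."""
    for i in range(0, len(docs), batch_size):
        yield docs[i:i + batch_size]

# With pymongo (assumed setup; "mydb" / "items" / "item_name" are
# hypothetical names), an unacknowledged bulk insert would look
# roughly like this:
#
#   from pymongo import MongoClient
#   from pymongo.write_concern import WriteConcern
#   coll = MongoClient().mydb.get_collection(
#       "items", write_concern=WriteConcern(w=0))  # w=0: no server ack
#   for batch in chunked(docs):
#       coll.insert_many(batch)
#
# The read path benefits from an index on the lookup key:
#   coll.create_index("item_name")
```

The trade-off of `w=0` is that failed writes go unnoticed, which is acceptable for high-volume, low-value data but not for anything you cannot afford to lose.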
There is an application built on Laravel, and it should be able to handle a load of 1000 requests per second.
I have done the below tasks:
1- Composer autoload has been dumped
2- Query results are cached
3- All views have been minified
What else should I consider?
(The app runs in a Docker container.)
How are you measuring whether you reach the TPS target? I would first get a baseline in order to know if you're far off, and based on that start looking into which part of your application stack is the bottleneck (this includes the web server, the database server, and other services used). Tools available for this are JMeter and Apache Bench.
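As a rough illustration, a baseline run could look like the commands below. These are command fragments, not a recipe: the URL, request counts, and file names are placeholders.

```shell
# Apache Bench: 10,000 requests, 100 concurrent, against a staging URL
ab -n 10000 -c 100 https://staging.example.com/

# JMeter in non-GUI mode, with a previously prepared test plan
jmeter -n -t load-plan.jmx -l results.jtl
```

Run the baseline against an environment that matches production as closely as possible; numbers from a laptop or an unloaded dev box will mislead you.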
In order to reach the 1000 TPS you'll need to tweak the web server to allow for this type of load. How to approach this depends on the web server used, so it is difficult to give specifics.
With regards to your DB server, there are tools available to benchmark it as well, such as pgBadger (Postgres), or you can analyze the slow-query log.
Ultimately you would also want to be on one of the latest PHP versions, as there are significant performance gains in every new release. The latest released PHP version at the time of writing is 7.4.
In my opinion these tweaks will yield a greater performance gain than tweaking the PHP code (assuming there is no misuse of PHP). But this of course depends on the specifics of your application.
Optionally you should also be able to scale horizontally (as opposed to vertically), increasing total TPS by the per-server TPS with every application server you add.
Tips to Improve Laravel Performance
Config caching.
Route caching.
Remove unused services.
Classmap optimization.
Optimize the Composer autoload.
Limit use of plugins.
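Most of these map to a few Artisan and Composer commands, sketched below; the flags are standard, but verify them against your Laravel version:

```shell
php artisan config:cache   # combine config files into one cached file
php artisan route:cache    # cache the compiled route table
php artisan view:cache     # precompile Blade templates

# classmap / autoload optimization, skipping dev dependencies
composer install --optimize-autoloader --no-dev
```

Remember to re-run the cache commands on every deploy, since the caches are built from the code and config at the time they are generated.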
I am planning to install Moodle on Amazon EC2 with ELB. The approach I am thinking of is a couple of Moodle instances and a couple of DB instances: each Moodle instance points to the DB instances through a load balancer, and the DBs sync automatically.
Please advise: will it work?
I don't think there is an option in AWS to have multiple DB instances synchronized with each other while all of them accept both reads and writes. It seems they can only have read replicas; see here: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html and https://aws.amazon.com/rds/mysql/ (the Replication section).
Also, it would be a big overhead to synchronize a high number of DB instances. Would this be synchronous? In that case it would carry a performance penalty.
The best option would be to have a number of Moodle instances behind a LB, all of them pointing to the same DB. I do not think the bottleneck will be the DB. If you also tune the DB and add performant storage (SSDs; see the link above for details), everything should be OK.
I want to implement a chat system on my website where users can interact with each other in rooms. This is my first time implementing a chat system.
While searching, I found that phpFreeChat is a good option, but going through its introduction I saw that it doesn't use a DB at all. So I am wondering how good its performance is, and how flexible it is compared to a DB-based approach.
Can anyone who has used it give a viewpoint on whether I should go for phpFreeChat, so that I can then start learning more about it? The website has huge traffic of around 3 million visits per month.
Any pull based chat system (in which the clients will have to actively contact the server to ask for updates) is hugely resource intensive. Every client will make a request every so many seconds; multiply that by the number of clients and you're very soon DDoSing your own server.
A proper system should be push based, in which every client has a persistent connection to the server and the server is able to push messages to all relevant parties in realtime. This is perfectly possible using web sockets or long poll as fallback. A pub/sub protocol like WAMP is perfect for this use, as are more specialised protocols like XMPP.
Writing to a file or database is entirely unnecessary and would only be a secondary feature for the purpose of data persistence. The server just needs to be a message broker, storage is not required.
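The broker-only model can be sketched as an in-memory pub/sub: rooms map to subscriber callbacks, and publishing relays the message without touching storage. This is a minimal single-process illustration (in a real deployment each callback would be a websocket send):

```python
from collections import defaultdict


class ChatBroker:
    """Minimal in-memory pub/sub: rooms map to subscriber callbacks.

    Nothing is written to disk; the broker only relays messages,
    mirroring the 'server is just a message broker' point above.
    """

    def __init__(self):
        self._rooms = defaultdict(list)

    def subscribe(self, room, callback):
        """Register a callback (e.g. a websocket send) for a room."""
        self._rooms[room].append(callback)

    def publish(self, room, message):
        """Push the message to every subscriber of the room."""
        for callback in self._rooms[room]:
            callback(message)
```

Persistence (chat history) can then be added as just another subscriber that writes to a database, keeping the hot delivery path storage-free.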
It depends on what you need. My first chat application was also file-based, and it was (and still is) pretty quick, but customizing it and adding new functions is a pain. If you only need a quick chat without complex features, go for file-based. If you need user rights and other complex things, go for a database-based system.
I need to design a mobile application which requires a lot of database queries; "a lot" means a peak value can be 1 million per second. I don't know which database or which backend to use. On the client side I will be using PhoneGap for Android and iOS, and I will also need a web interface for PC.
My doubts are the following. I am planning to host the system online and use Google Cloud Messaging to push data to users.
Can online hosting handle this much traffic?
I am planning to use PHP as the backend. Or Python?
The software does not need a lot of computation, but it does issue a lot of queries.
And which database system should I use: MySQL or Google Cloud SQL?
Also tell me about using Hadoop or other technologies like load balancers.
I may be totally wrong about the question itself.
Thank you very much in advance.
From what I understand, if you want to store unstructured data and retrieve it really fast, you should look at the NoSQL segment for storage and do a POC with a few of the available solutions on the market. I would suggest trying the Aerospike NoSQL DB, which has a track record of easily doing 1 million TPS on a single machine.
Google App Engine could be the answer. It can be programmed in Python or PHP (or Java), and it easily supports scaling up to millions of requests per second and scaling down to just a few instances to save resources (and your money).
It uses its own NoSQL DB; however, there is also the possibility of using an SQL-based backend (not recommended).
I am creating a website and am expecting fairly normal usage. For now I am setting up the system with 1 Apache server and 2 DB servers. I want any DB operation to be reflected in both DB servers so that I can keep one server as a backup. Now how do I do it?
The ways I can think of are:
Perform the same operations in both DBs from PHP. This seems like a terrible idea.
Update one DB and sync both DB servers periodically. This seems better.
Is there any better way to achieve this? How is it done in enterprises?
If you're using MySQL, there is quite powerful built-in replication.
Check out the docs
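A minimal primary/replica configuration is sketched below. The database name is hypothetical; the option names are the classic MySQL binlog-replication settings, so check them against your MySQL version:

```ini
# primary server, my.cnf
[mysqld]
server-id    = 1
log_bin      = mysql-bin
binlog_do_db = app_db        ; hypothetical database name

# replica server, my.cnf
[mysqld]
server-id = 2
relay_log = mysql-relay-bin
read_only = 1                ; replica serves reads / acts as backup
```

After restarting both servers, you point the replica at the primary (classically via `CHANGE MASTER TO ...` followed by `START SLAVE`); the MySQL replication documentation walks through the full procedure, including creating the replication user and taking an initial snapshot.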
Making a backup each time a new operation happens is a terrible idea. No modern (nor old) application works this way. Even Windows System Restore makes backups at scheduled times, not on each operation.
I'd suggest you make an SQL dump script and schedule a cron job which runs it once or twice a day. If you really need the data on the backup server immediately (assuming that if one of the DB servers crashes, your app must continue working immediately with the backup server), you can make an import script that runs once the dump finishes.
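The dump-and-schedule idea can be sketched as a single crontab entry; the user, password, database name, and paths here are all hypothetical placeholders:

```
# twice a day, at 03:00 and 15:00 (note: % must be escaped in crontab)
0 3,15 * * * mysqldump -u backup_user -p'secret' app_db > /backups/app_db_$(date +\%F_\%H).sql
```

In practice you would keep the credentials in a `~/.my.cnf` rather than on the command line, and rotate or compress old dump files so the backup directory doesn't grow without bound.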
If you are not in that special case (needing another DB server up as soon as the first one shuts down), you can just store the dumped SQL files on the machine and not load them into a real database until they are needed.