I want to create a website with Laravel based on a database (MySQL or MongoDB) that has almost 500,000 records. The main concern so far is that I want to update the database records daily with cron jobs. What is the best database solution to use in order to keep those updates running smoothly? The database has only a few tables, and there are not really any relationships between the records. Can you advise me which database to choose, MySQL or MongoDB? Is it possible to host this website on shared hosting, or do I need to move to a dedicated server? As I said, there are 500,000 records, and around 5% of them will be added (new), deleted (trash), or updated (existing) daily.
MySQL or MongoDB: we can't answer without knowing how the data are structured and what usage you are planning. If you require relations and cross-references between records, or if data consistency is vital to your application, then go with MySQL. Otherwise, if the data are not related and speed matters more than guaranteed consistency, MongoDB is the choice.
Yes, you can host it on a shared hosting service.
Processing 25,000 records (5% of 500,000) over a whole day requires little computational power; it should not be a big problem.
I just want to advise you to keep future growth of your application in mind: are you completely sure that the records will always be around 500,000 and will not increase to millions or even more?
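Whichever engine you pick, run the daily job in small batches so no single statement touches a large part of the table. Here is a minimal Laravel sketch, assuming a hypothetical records table keyed by an indexed external_id column; fetchDailyChanges() is a placeholder for however you pull the day's changed rows:

<?php
// app/Console/Commands/SyncRecords.php -- a sketch, not a drop-in
// implementation. The `records` table and fetchDailyChanges() are
// assumptions for illustration only.

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class SyncRecords extends Command
{
    protected $signature = 'records:sync';
    protected $description = 'Apply daily adds/updates/deletes in small batches';

    public function handle()
    {
        foreach (array_chunk($this->fetchDailyChanges(), 500) as $batch) {
            DB::transaction(function () use ($batch) {
                foreach ($batch as $row) {
                    // Upsert: insert new rows, update existing ones.
                    DB::table('records')->updateOrInsert(
                        ['external_id' => $row['external_id']],
                        ['payload' => $row['payload'], 'updated_at' => now()]
                    );
                }
            });
        }
        return 0;
    }

    private function fetchDailyChanges(): array
    {
        // Placeholder: read the day's ~25,000 changes from an API,
        // a CSV drop, or wherever they come from.
        return [];
    }
}

Registered with $schedule->command('records:sync')->daily(); in App\Console\Kernel, this runs through Laravel's scheduler, so a single crontab entry covers it.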
I am currently developing an application in which users vote for an image, and a message "You are the Nth voter" is displayed. Suppose 100 or even 1,000 users vote for a particular image within a span of 2-3 seconds. How do I make sure that each of those users is shown the correct value of N, with N being incremented correctly in the database?
I am using MySQL and PHP.
In general, relational databases implement the ACID properties (read up more about them on Wikipedia or in some other source).
These guarantee that transactions do not interfere with each other and that the database remains consistent. So if a bunch of users vote at around the same time that a bunch of queries run, each query will see a view of the data that is consistent as of the time it runs. Of course, the results might change over time.
I should also add that enforcing the ACID properties adds overhead. Not all databases are 100% ACID compliant all the time, so this also depends on your database setup (and, in the case of MySQL, on the storage engine). In general, though, you don't have to worry about values being "incremented correctly" in the database if the code is properly written.
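As a concrete illustration, one common MySQL pattern makes the increment and the read of "your" value effectively atomic, so concurrent voters each see a distinct N. This is just a sketch, assuming a hypothetical images table with an integer vote_count column:

<?php
// A sketch, assuming a hypothetical `images` table with an integer
// `vote_count` column. LAST_INSERT_ID(expr) remembers the value
// per connection, so the UPDATE and the read of "this user's" count
// cannot interleave with other voters.

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

function castVote(PDO $pdo, int $imageId): int
{
    // Atomic read-modify-write: the row is locked for the duration
    // of this single statement, so every caller gets a unique value.
    $stmt = $pdo->prepare(
        'UPDATE images
            SET vote_count = LAST_INSERT_ID(vote_count + 1)
          WHERE id = :id'
    );
    $stmt->execute(['id' => $imageId]);

    // Returns the value stored by LAST_INSERT_ID() on this connection.
    return (int) $pdo->query('SELECT LAST_INSERT_ID()')->fetchColumn();
}

$n = castVote($pdo, 42);
echo "You are voter number {$n}\n";

The key point is that LAST_INSERT_ID(expr) is tracked per connection, so two users voting in the same instant still get different numbers.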
I have a large MySQL database on my server; it is around 100 GB and contains many tables. I want to transfer this database to another server.
I want to insert around 1,000 records at a time, and I'm planning to do this using a PHP script.
My plan is to create a lookup table that defines the table names, then insert the data by checking the largest insert id, taking the next 1,000 records, and inserting those.
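Roughly, this is the idea (table and column names below are just placeholders):

<?php
// Rough sketch of the chunked-copy idea: walk each table by primary
// key and copy 1,000 rows per round trip. Table/column names are
// placeholders; the destination table is assumed to start empty.

$src = new PDO('mysql:host=old-server;dbname=app', 'user', 'secret');
$dst = new PDO('mysql:host=new-server;dbname=app', 'user', 'secret');

$lastId = 0;
do {
    $stmt = $src->prepare(
        'SELECT * FROM orders WHERE id > :last ORDER BY id LIMIT 1000'
    );
    $stmt->execute(['last' => $lastId]);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    if ($rows) {
        // Build one multi-row INSERT for the whole batch.
        $cols  = array_keys($rows[0]);
        $place = '(' . rtrim(str_repeat('?,', count($cols)), ',') . ')';
        $sql   = 'INSERT INTO orders (' . implode(',', $cols) . ') VALUES '
               . rtrim(str_repeat($place . ',', count($rows)), ',');
        $dst->prepare($sql)
            ->execute(array_merge(...array_map('array_values', $rows)));

        $lastId = end($rows)['id'];
    }
} while (count($rows) === 1000);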
How good is this approach? Please suggest a better solution or code if you have done this before.
I guess this is a production database, so while you move such a large amount of data there will be transactions modifying its contents.
It is therefore much better to use an existing tool; in this case I would recommend MySQL replication.
I have used replication to migrate SQL Server databases, as it is one of the recommended ways to do so, and MySQL replication works in an equivalent way.
Once the database is fully replicated, point your application at the new database and stop replication.
This way no information should be lost.
I would suggest installing phpMyAdmin on your server and using it to export your database from the current server and import it into the new one. I believe this would be a much cleaner and more efficient approach.
You can download phpMyAdmin here: https://www.phpmyadmin.net/
I am planning to create a PHP/MySQL inventory system for a large firm (at least 1,000 branches).
There will be a centralized server and database kept in one place, where all the branches can insert and retrieve data. There will be at least 2,000 sales bills and 100 purchase bills per branch (at least 1 GB of data from a branch).
My doubt is: is it technically feasible for me to use PHP and MySQL (on Apache) for this project, given that the data will be vast? Do I need to change the front end to JSP and the back end to some other database? I don't know much about PHP and the MySQL database; anyone who has been through this scenario already could help me.
I suspect this question will not stay open for long, as it is way too generic, but for what it's worth: yes, it is feasible; I did a similar project before.
- You would have to be careful with your data schema structure, and would need to tune the MySQL server quite a bit, but this is true for any database.
- You also might want to employ local servers and replication to the central server.
- Your reporting server should be separate, since its workload should not affect main data performance.
These are some thoughts that come to mind.
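To make the last two points concrete: the application can send heavy reporting queries to a replica while all writes go to the central server. A minimal sketch, with hypothetical hostnames and table names:

<?php
// A sketch of read/write splitting with hypothetical hosts. Writes
// (bills, stock movements) go to the central primary; heavy
// reporting queries go to a replica so they never slow down sales.

class Db
{
    private static ?PDO $primary = null;
    private static ?PDO $reports = null;

    public static function primary(): PDO
    {
        return self::$primary ??= new PDO(
            'mysql:host=central-db;dbname=inventory', 'app', 'secret'
        );
    }

    public static function reports(): PDO
    {
        return self::$reports ??= new PDO(
            'mysql:host=report-replica;dbname=inventory', 'app', 'secret'
        );
    }
}

// A branch posts a sales bill to the primary...
$stmt = Db::primary()->prepare(
    'INSERT INTO sales_bills (branch_id, total) VALUES (?, ?)'
);
$stmt->execute([17, 1299.50]);

// ...while month-end reporting reads from the replica.
$report = Db::reports()->query(
    'SELECT branch_id, SUM(total) FROM sales_bills GROUP BY branch_id'
)->fetchAll();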
While making a website like Facebook, we can follow two approaches to database design, so please suggest the better one. My concerns are data security and backup management.
Approach 1
Design one table that will hold all the personal data, and some other tables that will hold other keys, such as images. The problem comes when there are 10 lakh (one million) entries in the table: is it still possible to take a backup? Some hosting companies won't allow you to do so.
Approach 2
When a user signs up, assign a separate table to that user; that way a user will reach ten thousand rows in 5 years or so (just an assumption). But that means a million tables in the database if a million users sign up, and that again is a problem, I believe.
Please suggest a better way if anyone can.
Sites the size of Facebook have unique challenges specific to their setups. Facebook, Twitter, Google, etc. all maintain their own forks of database engines and often even write their own, and they'll be using different databases for different purposes. Very little of what they do is going to be applicable to anything you build.
Approach #1 is by far the better. With proper indexes and a good database design, MySQL can support billions of rows. It cannot as easily support millions of tables.
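To make "proper indexes" concrete for approach #1, here is a sketch of a single shared table with a composite index per user; all names are illustrative:

<?php
// A sketch of approach #1: one shared table for everyone, indexed
// by user. Table and column names are illustrative only. With the
// composite index, fetching one user's rows stays fast even at
// millions of total rows; no per-user tables are needed.

$pdo = new PDO('mysql:host=localhost;dbname=social', 'app', 'secret');

$pdo->exec('
    CREATE TABLE IF NOT EXISTS user_images (
        id       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        user_id  BIGINT UNSIGNED NOT NULL,
        path     VARCHAR(255)    NOT NULL,
        created  DATETIME        NOT NULL,
        PRIMARY KEY (id),
        KEY idx_user_created (user_id, created)
    ) ENGINE=InnoDB
');

// One user's images come back via the composite index, regardless
// of how many other users share the table.
$stmt = $pdo->prepare(
    'SELECT path FROM user_images
      WHERE user_id = ? ORDER BY created DESC LIMIT 20'
);
$stmt->execute([12345]);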
I am developing a PHP application using CodeIgniter. I am planning to split the single MySQL database into multiple SQLite databases: one database (MySQL/PostgreSQL/SQLite) that handles authentication, and one SQLite database per user that holds information related to that user. I do not use any joins, and there will be more reads than writes.
Is it a good idea to split the database into multiple SQLite databases for speed? Also, will it cause problems when scaling to multiple servers? I can use redirection depending on the user to point to the right server.
Edit:
Decided to use MariaDB as my server for all users.
By splitting the data into multiple SQLite databases you will gain, instead of speed, a major headache and time sink. Don't do this unless you know you have to, and can prove it with hard numbers, not hypothetical scenarios.
The advice above applies if the system you're building has some value (will be used commercially, etc.). Of course, if this is just a toy/training project, you're welcome to do whatever you like, and learn from it.