Is this at all possible? Or would it be necessary to perform individual queries against each DB and then process the results in code, a temporary table, or the like?
If you have two different servers, you cannot couple them in a way that lets the data be joined directly.
You need to execute the queries separately on the servers and process the data afterwards. It is possible, though, to link a SQL Server instance to other systems (via linked servers).
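A minimal sketch of that query-and-merge approach, in PHP with PDO (hosts, credentials, tables and columns are all invented for illustration):

```php
<?php
// Connect to each server separately.
$a = new PDO('mysql:host=server-a;dbname=db1', 'user', 'pass');
$b = new PDO('mysql:host=server-b;dbname=db2', 'user', 'pass');

// Pull the rows from each server...
$orders = $a->query('SELECT id, sku FROM orders')->fetchAll(PDO::FETCH_ASSOC);
$stock  = $b->query('SELECT sku, qty FROM stock')->fetchAll(PDO::FETCH_ASSOC);

// ...then "join" them in application code, keyed on the shared column.
$qtyBySku = array_column($stock, 'qty', 'sku');
foreach ($orders as $row) {
    $row['qty'] = $qtyBySku[$row['sku']] ?? null;
    // process the combined row here
}
```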
So far we have one server with 2 databases and a MySQL user that can access both of them. For example:
`select * from magento.maintable, erp.maintable`
Now our ERP is very slow and we want to move its database to another server, but we have hundreds (almost a thousand) of SQL queries that access both databases within the same query, for example:
```sql
insert into magento.table
select * from erp.maintable
```
or
`select * from erp.maintable inner join magento.table...`
and so on.
How can I make everything keep working without changing these queries, but with the databases on different servers?
To access the databases I have created a class for each database, and through an object I run the queries, inserts, updates and deletes, like this:
```php
public function exec($query, $result_array = true)
{
    $this->data->connect();                               // open the connection
    $result = $this->data->query($query, $result_array);  // run the raw SQL
    $this->data->disconnect();                            // close it again
    return $result;
}
```
All help is welcome. The point is to find an optimal way to do this without having to manually change 1000 SQL queries written by another programmer.
To access more than one database server in a single query, you either have to use the FEDERATED storage engine or use replication to copy the ERP data from the other server onto the original one.
The FEDERATED engine is likely to cause additional performance problems, and replication requires some work to set up.
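A minimal sketch of the FEDERATED approach (server, credentials and columns are placeholders; the engine is disabled by default and must be enabled, e.g. with `federated` in my.cnf):

```sql
-- On the magento server: a local "proxy" table whose rows actually live
-- on the remote ERP server. The column list must match the remote table.
CREATE TABLE erp_maintable (
    id  INT NOT NULL,
    sku VARCHAR(64),
    PRIMARY KEY (id)
)
ENGINE=FEDERATED
CONNECTION='mysql://erp_user:erp_pass@erp-server:3306/erp/maintable';

-- Existing cross-database joins can then point at the proxy table:
SELECT * FROM erp_maintable INNER JOIN magento.maintable USING (id);
```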
If the sole reason for the new server is ERP performance, you might want to find out why the ERP is slow and try to solve that instead (optimize, or move both databases together to a more powerful server, etc.). When both databases are on the same server, the query optimizer can combine them and make efficient use of indexes.
I was experiencing problems with duplicate queries slowing down the rendering of an HTML table (very similar SELECT queries inside a while loop), so I created some simple caching functions in PHP:
`check_cache()`
`write_cache()`
`return_cache()`
These functions prevent the server from asking the database for anything, which sped things up quite a lot!
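A minimal sketch of what those functions might have looked like (the function names come from the question; the file-based implementation is my assumption):

```php
<?php
// Hypothetical reconstruction: cache query results as serialized files,
// keyed by a hash of the SQL text.
function cache_path($query) {
    return sys_get_temp_dir() . '/qc_' . md5($query) . '.cache';
}

function check_cache($query, $maxAge = 300) {
    $file = cache_path($query);
    return is_file($file) && (time() - filemtime($file)) < $maxAge;
}

function write_cache($query, $rows) {
    file_put_contents(cache_path($query), serialize($rows));
}

function return_cache($query) {
    return unserialize(file_get_contents(cache_path($query)));
}

// Usage: only hit MySQL on a cache miss ($pdo is an existing connection).
$sql = 'SELECT id, name FROM products';
if (!check_cache($sql)) {
    write_cache($sql, $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC));
}
$rows = return_cache($sql);
```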
Later I read that MySQL caches SELECT statements:
The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again.
Why does this increase performance if MySQL is already doing this?
Possible issues
1) If your application updates tables frequently, the query cache will be constantly purged and you won't get any benefit from it.
2) The query cache is not supported for partitioned tables.
3) The query cache does not work in an environment where multiple mysqld servers update the same MyISAM tables.
4) The SELECT statements must be byte-for-byte identical to hit the cache.
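On MySQL versions that still ship the query cache (it was removed in MySQL 8.0), you can check whether the cache is actually being hit:

```sql
-- Is the cache enabled, and how big is it?
SHOW VARIABLES LIKE 'query_cache%';

-- Hits vs. inserts vs. prunes; many Qcache_lowmem_prunes or a high
-- Qcache_not_cached count suggest the cache is not helping.
SHOW STATUS LIKE 'Qcache%';
```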
I'm developing a project where I need to retrieve HUGE amounts of data from an MsSQL database and process that data. The retrieval comes from 4 tables, 2 of them with 800-1000 rows, but the other two with 55000-65000 rows each.
The execution time wasn't tolerable, so I started to rewrite the code, but I'm quite inexperienced with PHP and MsSQL. At the moment I run PHP on localhost:8000, starting the server with "php -S localhost:8000".
I think this is one of my problems: a poor server for a huge amount of data. I thought about XAMPP, but I need a server where I can install the MsSQL drivers without problems so I can use those functions.
I cannot change MsSQL for MySQL or make other changes like that; the company wants it that way...
Can you give me some advice on how to improve the performance? Any server I can use to improve the PHP execution? Thank you very much in advance.
The PHP execution should be the least of your concerns; if it is a bottleneck, you are most likely going about things the wrong way. All the PHP should be doing is running the SQL query against the database. If you are not using PDO, consider it: http://php.net/manual/en/book.pdo.php
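For example, a minimal connection using Microsoft's pdo_sqlsrv driver (server name, credentials, table and columns are placeholders):

```php
<?php
// Requires the pdo_sqlsrv extension (Microsoft Drivers for PHP for SQL Server).
$pdo = new PDO('sqlsrv:Server=localhost;Database=mydb', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Let the database do the filtering; fetch only the columns you need.
$stmt = $pdo->prepare('SELECT TOP 100 id, name FROM big_table WHERE name LIKE ?');
$stmt->execute(['%foo%']);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```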
First look to the way your SQL query is structured, and how it can be optimised. If in complete doubt, you could try posting the query here. Be aware that if you can't post a single SQL query that encapsulates your problem you're probably approaching the problem from the wrong angle.
I am assuming from your post that you do not have the option to alter the database schema, but if you do, that would be the second course of action.
Try to do as much of the data processing in SQL Server as possible. Don't do joins or other data processing in PHP that could be done in the RDBMS.
I've seen PHP code that retrieved data from multiple tables and matched rows based on several conditions in a loop. That is exactly the kind of misuse I mean.
Also, try to handle data in sets in SQL (be it MS* or My*) and avoid row-by-row processing where possible; the optimizer will produce a much more efficient plan.
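To illustrate the difference (table and column names are invented for this sketch; $pdo is an existing connection):

```php
<?php
// Anti-pattern: fetch both tables whole and match rows in nested PHP loops.
// Set-based version instead: one query, one pass, and the indexes get used.
$sql = 'SELECT o.id, o.total, c.name
        FROM orders o
        INNER JOIN customers c ON c.id = o.customer_id
        WHERE o.created_at >= ?';
$stmt = $pdo->prepare($sql);
$stmt->execute(['2015-01-01']);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```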
This is a small database, really. My advice:
- Use paging for the tables and fetch the data in portions (see the sketch below)
- Add indexes to the tables
- Try to find a more powerful server. Hosting companies often put thousands of users' databases on one database server, and the speed is very slow. I suffered from this and finally bought a dedicated server.
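A minimal sketch of the paging idea against SQL Server 2012+ (table, columns and page size are placeholders; $pdo is an existing pdo_sqlsrv connection):

```php
<?php
// Fetch page $page of $pageSize rows; ORDER BY is required for OFFSET/FETCH.
$page = 0;
$pageSize = 5000;
$stmt = $pdo->prepare(
    'SELECT id, payload FROM big_table
     ORDER BY id
     OFFSET ? ROWS FETCH NEXT ? ROWS ONLY'
);
$stmt->execute([$page * $pageSize, $pageSize]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```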
We have an iOS app which must download a large amount of user data from a remote server (in JSON format) and then insert this data into the local SQLite database. Because there is so much data, the insertion process takes more than 5 mins to complete, which is unacceptable. The process must be less than 30 seconds.
We have identified a potential solution: get the remote server to store the user's data in an SQLite database (on the remote machine). This database is compressed and then downloaded by the app. Therefore, the app will not have to conduct any data insertion, making the process much faster.
Our remote server is running PHP/MySQL.
My question:
What is the fastest and most efficient way to create the SQLite database on the remote server?
Is it possible to output a MySQL query directly into an SQLite table?
Is it possible to create a temporary MySQL database and then convert it to SQLite format?
Or do we have to take the MySQL query output and insert each record into the SQLite database?
Any suggestions would be greatly appreciated.
I think it's better to have a look at why the insert process is taking 5 minutes.
If you don't do it properly in SQLite, every insert statement will be executed in a separate transaction. This is known to be very slow. It's much better to do all the inserts in one single SQLite transaction. That should make the insert process really fast, even if you are talking about a lot of records.
In pseudo code, you will need to do the following:

```
SQLite.exec('begin transaction');
for (item in dataToInsert) {
    SQLite.exec('insert into table values ( ... )');
}
SQLite.exec('end transaction');
```
The same applies by the way if you want to create the SQLite database from PHP.
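For example, a minimal PHP sketch using PDO's SQLite driver (file path, table and columns are placeholders, and $dataToInsert is assumed to hold the decoded JSON):

```php
<?php
// One transaction around all inserts; reusing the prepared statement
// also avoids re-parsing the SQL for every row.
$db = new PDO('sqlite:/path/to/user_data.sqlite');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->beginTransaction();
$stmt = $db->prepare('INSERT INTO items (id, name) VALUES (?, ?)');
foreach ($dataToInsert as $item) {
    $stmt->execute([$item['id'], $item['name']]);
}
$db->commit();
```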
You can read a lot about this here: Improve INSERT-per-second performance of SQLite?
I'm currently coding a PHP website. However, one of the requirements is to have it connect to two databases concurrently and perform the same action on both, so that if one fails, the other can continue. Is it possible for this to work? The server runs PHP with MySQL.
You should look into replication: instead of duplicating every write in application code, write to one MySQL server and let it replicate the changes to the second one.
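A minimal sketch of a classic MySQL replication setup (host, credentials and log coordinates are placeholders; both servers also need a unique server-id, and the source needs binary logging enabled in my.cnf):

```sql
-- On the source server: create an account the replica may connect as.
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_pass';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the replica: point it at the source and start replicating.
CHANGE MASTER TO
    MASTER_HOST = 'source-host',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_pass',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;
```

That way the application keeps writing to a single server, and MySQL itself keeps the second copy in sync.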