I'm working on a university project at the moment and they've informed me that they won't support mysqli in their hosting environment.
While this seems a backwards decision to me, it's what I'm dealing with. Is there a way to handle transactions in MySQL using the old mysql library?
We have a volatile process involving student data that really needs transaction handling. The process was written by students years ago and is naturally prone to errors. All of it was written with mysql functions until I converted them to mysqli. We don't have time or budget to refactor the whole thing, so I just need a way to get transactions working again. Thanks!
You do not want to go back to the old mysql extension; it is being removed from PHP for a reason.
Look into PDO: http://php.net/manual/en/book.pdo.php
You will soon find yourself completely falling in love with it. It exceeds both mysql and mysqli by far, and your university will be happy to see it coming in.
Transactions in MySQL are handled by means of simple SQL commands like START TRANSACTION, COMMIT, and ROLLBACK. You are allowed to issue these commands from the client using whatever API you like.
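For example, a minimal sketch using the old mysql_* functions (the table and values are made up; this assumes an open connection and InnoDB tables):

mysql_query('START TRANSACTION') or die(mysql_error());

// both updates succeed together or not at all (hypothetical table)
$ok = mysql_query("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
   && mysql_query("UPDATE accounts SET balance = balance + 100 WHERE id = 2");

if ($ok) {
    mysql_query('COMMIT');
} else {
    mysql_query('ROLLBACK');
}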
Related
Can I somehow detect how many MySQL queries are performed via old mysql connections and how many via mysqli connections?
My day-to-day work is recoding, refactoring, and supporting old projects. I migrate from procedural to OOP style and from mysql to mysqli. I need to know how many old calls are left from the previous coders (the code is messy and crazy as usual, so counting them directly in the code is impractical).
Thanks!
mysqli has mysqli_get_connection_stats(): http://php.net/manual/en/mysqli.get-connection-stats.php
and for any driver you can run the SHOW STATUS LIKE 'Com\_%'; query.
I see no point in such numbers though, for the task provided.
You can override both functions with override_function() and log whatever you need.
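A rough sketch of that approach, assuming the APD PECL extension (which provides override_function() and rename_function()); the log path and the "orig" name are my own examples, so treat this as a starting point rather than a drop-in solution:

// APD saves the original function under __overridden__ after an override
override_function('mysql_query', '$query, $link = null', '
    error_log("mysql: " . $query . "\n", 3, "/tmp/query_audit.log");
    return $link === null ? mysql_query_orig($query) : mysql_query_orig($query, $link);
');
rename_function('__overridden__', 'mysql_query_orig');

Do the same for mysqli_query() with a "mysqli:" prefix, then count the log lines per prefix.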
I'm developing a project where I need to retrieve HUGE amounts of data from an MsSQL database and process that data. The data retrieval comes from 4 tables, 2 of them with 800-1000 rows, but the other two with 55000-65000 rows each.
The execution time wasn't tolerable, so I started to rewrite the code, but I'm quite inexperienced with PHP and MsSQL. At the moment I run PHP on localhost:8000, starting the server with "php -S localhost:8000".
I think that this is one of my problems: a poor server for a huge amount of data. I thought about XAMPP, but I need a server where I can install the MsSQL drivers without problems.
I cannot change MsSQL for MySQL or make other changes like that, the company wants it that way...
Can you give me some advice about how to improve the performance? Any server that I can use to improve the PHP execution? Thank you very much in advance.
The PHP execution should be the least of your concerns. If it is, most likely you are going about things in the wrong way. All the PHP should be doing is running the SQL query against the database. If you are not using PDO, consider it: http://php.net/manual/en/book.pdo.php
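For instance, a minimal sketch with the pdo_sqlsrv driver (the server, database, credentials, and table are all placeholders):

// placeholders throughout; requires the pdo_sqlsrv extension
$pdo = new PDO('sqlsrv:Server=myserver;Database=mydb', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('SELECT id, name FROM big_table WHERE modified >= ?');
$stmt->execute(array('2015-01-01'));
foreach ($stmt as $row) {
    // process one row at a time instead of loading everything into memory
}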
First look to the way your SQL query is structured, and how it can be optimised. If in complete doubt, you could try posting the query here. Be aware that if you can't post a single SQL query that encapsulates your problem you're probably approaching the problem from the wrong angle.
I am assuming from your post that you do not have recourse to alter the database schema, but if you do, that would be the second course of action.
Try to do as much of the data processing in SQL Server as possible. Don't do joins or other data processing in PHP that could be done in the RDBMS.
I've seen PHP code that retrieved data from multiple tables and matched rows based on several conditions. That is exactly the kind of misuse I mean.
Also try to handle data in sets in SQL (be it MS* or My*) and avoid, if possible, row-by-row processing; the optimizer will produce a much more efficient plan.
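As an illustration with made-up tables, the PHP-side matching loop usually collapses into a single join that the server can optimize as one set operation:

-- hypothetical tables; let the server do the matching
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created >= '2015-01-01';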
This is a small database, really. My advice:
- Use paging for the tables and get the data in portions (see the sketch after this list)
- Use indexes on the tables
- Try to find a more powerful server. Hosting companies often put thousands of users' databases on one database server, and the speed is very slow. I suffered from this and finally bought a dedicated server.
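A sketch of the paging idea in SQL Server, with made-up table and column names; an index on the ORDER BY column keeps each page cheap:

-- hypothetical table; OFFSET/FETCH needs SQL Server 2012+
CREATE INDEX ix_measurements_id ON measurements (id);

SELECT id, sensor, value
FROM measurements
ORDER BY id
OFFSET 50000 ROWS FETCH NEXT 1000 ROWS ONLY;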
I just converted from the mysql to the mysqli API in PHP...
I noticed that the mysqli calls require the $connection resource parameter to be passed, which I found quite annoying lately as I tried to port my scripts to this new API... Is there a way I can configure PHP to use the latest connection resource automatically instead of having to declare it every single time I make these calls? Kinda like the behaviour of the old mysql API...
I do hope there is a switch or something for this.
Nope, there is no [sane] way.
But that's not the problem.
The point is that switching to mysqli mechanically makes no sense.
The only reason for moving from mysql to mysqli is prepared statements.
If you aren't going to use them, and if you want to stick with raw API methods that require repeating the same useless 4-6 lines of code for every query, without using any abstraction (which would reduce that to 1-2 lines), then there is no point in switching drivers. Keep on with old mysql. PHP 5.5 is not out yet and 5.4 is still rarely available on shared hostings, which means you have 5-6 years ahead for your old code to run with no problems.
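For context, this is roughly what a single query costs you in raw mysqli prepared-statement style (the table and column names are made up):

// hypothetical table; the repeated boilerplate is exactly the point above
$stmt = mysqli_prepare($link, 'SELECT name FROM users WHERE id = ?');
mysqli_stmt_bind_param($stmt, 'i', $id);
mysqli_stmt_execute($stmt);
mysqli_stmt_bind_result($stmt, $name);
mysqli_stmt_fetch($stmt);
mysqli_stmt_close($stmt);

An abstraction layer wraps those six lines into one or two calls, which is where the real win is.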
I'm sorry, this is a very general question but I will try to narrow it down.
I'm new to this whole transaction thing in MySQL/PHP, but it seems pretty simple. I'm just using mysql, not mysqli or PDO. I have a script that seems to be rolling back some queries but not others. This is uncharted territory for me, so I have no idea what is going on.
I start the transaction with mysql_query('START TRANSACTION;'), which I understand disables autocommit at the same time. Then I have a lot of complex code, and whenever I do a query it is something like mysql_query($sql) or $error = "Oh noes!";. Periodically I call a function error_check(), which checks whether $error is non-empty; if it is, I do mysql_query('ROLLBACK;') and die($error). Later on in the code I have mysql_query('COMMIT;'). But if I run two queries and then purposely throw an error (I mean just set $error to something), it looks like the first query rolls back but the second one doesn't.
What could be going wrong? Are there some gotchas with transactions I don't know about? I don't have a good understanding of how these transactions start and stop, especially when you mix PHP into it...
EDIT:
My example was overly simplified; I actually have at least two transactions doing INSERT, UPDATE or DELETE on separate tables. But before I execute each of those statements, I back up the affected rows in corresponding "history" tables to allow undoing. It looks like the manipulation of the main tables gets rolled back, but the entries in the history tables remain.
EDIT2:
Doh! As I finished typing the previous edit it dawned on me...there must be something wrong with those particular tables...for some reason they were all set as MyISAM.
Note to self: Make sure all the tables use transaction-supporting engines. Dummy.
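For anyone hitting the same thing: MyISAM silently ignores transactions, so statements against MyISAM tables commit immediately and cannot be rolled back. A quick check and fix (the database and table names are placeholders):

-- list each table's engine (placeholder schema name)
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'my_database';

-- convert an offending table (placeholder table name)
ALTER TABLE history_table ENGINE = InnoDB;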
I'd recommend using the mysqli or PDO functions rather than mysql, as they offer some worthwhile improvements—especially the use of prepared statements.
Without seeing your code, it is difficult to determine where the problem lies. Given that you say your code is complex, it is likely that the problem lies with your code rather than MySQL transactions.
Have you tried creating some standalone test scripts? Perhaps you could isolate the SQL statements from your application and create a simple script that runs them in series. If that works, you have narrowed down the source of the problem. You can echo the SQL statements from your application to get the running order.
You could also try testing the same sequence of SQL statements from the MySQL client, or through PHPMyAdmin.
Are your history tables in the same database?
MySQL transactions are easy to handle with the mysqli API. I have been using transactions; all I do is deactivate autocommit and run my SQL statements.
$mysqli->autocommit(FALSE);
SELECT, INSERT, and DELETE are all supported. As long as I'm using the same mysqli handle to issue these statements, they are within the transaction wrapper. Nobody outside (not using the same mysqli handle) will see any data that you write or delete with INSERT/DELETE as long as the transaction is still open, so it's critical that every SQL statement is fired with that handle. Once the transaction is committed, the data is made available to other db connections.
$mysqli->commit();
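Put together, a minimal sketch of that pattern (the connection details and tables are placeholders):

// placeholders throughout
$mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');
$mysqli->autocommit(FALSE);

$ok = $mysqli->query("INSERT INTO audit (msg) VALUES ('step 1')")
   && $mysqli->query("UPDATE counters SET n = n + 1 WHERE id = 1");

if ($ok) {
    $mysqli->commit();
} else {
    $mysqli->rollback();
}
$mysqli->autocommit(TRUE); // restore the default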
Is there any way to make a PDO object throw an error if a query takes too long? I have tried PDO::ATTR_TIMEOUT to no effect.
I'd like a way to have a query throw an error if it runs for longer than a certain amount of time. This is not something that I can do in the database, i.e., no maintenance jobs running on the db or anything.
I'm not sure what you mean by "This is not something that I can do in the database", but I would suggest that you have the person administering the database set up an Oracle profile to limit this on the database side. There are parameters such as CPU_PER_CALL and LOGICAL_READS_PER_CALL that can cap queries. The profile can be applied only to specific users if desired.
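If your DBA is willing, a sketch of such a profile (the limit values are arbitrary examples, and RESOURCE_LIMIT must be enabled for them to be enforced):

-- arbitrary example limits
CREATE PROFILE capped_queries LIMIT
  CPU_PER_CALL 3000              -- in hundredths of a second
  LOGICAL_READS_PER_CALL 100000; -- in database blocks

ALTER USER app_user PROFILE capped_queries;
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;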
I'm not sure if you can do this in Oracle but I'm going to say it's not possible to do this within PHP since PHP is issuing the query to Oracle to be run and then is waiting for Oracle's response back. It may be possible to modify the PDO extension to support this, but you would need to modify the extension code (the actual C code) as there probably isn't any way to do this in just PHP.