I am developing an app for owncloud and am using the Owncloud API, not the App Framework.
In this environment I can start SQL transactions via \OCP\DB::beginTransaction(); and I can commit the transaction via \OCP\DB::commit();.
But I can't find a way to roll back a transaction. I googled all day and searched through the Owncloud core files, but couldn't find a way to do it.
Does anyone know how to do this? Right now I can just leave the transaction uncommitted in my ajax requests, because they have only one transaction. But in other scripts I have to do multiple transactions one after another which are independent of one another, and I have to manually delete all my inserted rows in case anything goes wrong, which is not very nice.
Edit 2014/07/30:
I have found out that the OC_DB_StatementWrapper class, which is returned by Owncloud's \OCP\DB::prepare, does not provide a method to do this. However, it passes all unknown calls to the underlying \Doctrine\DBAL\Driver\Statement object. This class is described here: Doctrine.DBAL.Statement
It has a private $_conn (an instance of \Doctrine\DBAL\Connection) which has a rollback method to roll back a transaction. However, $_conn is private, so I cannot access it.
I have finally found a solution myself. To those interested in how it works, here is the solution:
$conn = \OC::$server->getDatabaseConnection();
$conn->rollBack();
This will roll back the transaction previously opened via \OCP\DB::beginTransaction(). To start a new transaction, just call \OCP\DB::beginTransaction() again; it works like a charm.
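For anyone who wants the full pattern in one place, here is a minimal stand-alone sketch using plain PDO (SQLite in-memory, purely for illustration); in Owncloud the begin/rollback calls would be \OCP\DB::beginTransaction() and $conn->rollBack() as described above:

```php
<?php
// Stand-alone sketch of the begin/rollback pattern with plain PDO.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)');

$pdo->beginTransaction();
try {
    $pdo->exec("INSERT INTO items (name) VALUES ('first')");
    // ... more work; simulate a failure partway through:
    throw new RuntimeException('something went wrong');
    // $pdo->commit(); // would run here on success
} catch (Exception $e) {
    $pdo->rollBack(); // undo everything since beginTransaction()
}

$count = (int) $pdo->query('SELECT COUNT(*) FROM items')->fetchColumn();
// $count is 0: the insert was rolled back
```

After the rollback you can simply begin a new transaction on the same connection, exactly as described above.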
Related
I am currently working on a project that uses Yii and stumbled across something that made me scratch my head. I started a db transaction using Yii (which just calls PDO::beginTransaction), did some database stuff, and at the end stored a flash message for the user and did a redirect. I forgot, though, to commit the transaction, so nothing got stored in my database. What caught my attention is that my flash message also did not appear. Doing a commit or rollback makes the flash message appear just fine.
Basically, I noticed that I could not store any session related data and have it stick after a redirect if I started a transaction and didn't commit/rollback. I normally don't leave transactions hanging so I never noticed this behavior before.
So is there a relationship between the 2 that would prevent Sessions from working properly?
The session is written to the database at the end of the request. If you do an explicit rollback, the session still gets written to the db outside of the transaction. If you don't, the rollback happens implicitly AFTER the session-saving queries are run.
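A minimal stand-alone illustration of that ordering, using plain PDO with SQLite rather than Yii itself: once you roll back explicitly, later writes run in autocommit mode and persist, which is where a db-backed session save lands:

```php
<?php
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE flash (id INTEGER PRIMARY KEY, message TEXT)');

$pdo->beginTransaction();
$pdo->exec("INSERT INTO flash (message) VALUES ('inside transaction')");
$pdo->rollBack(); // discard the pending work explicitly

// Anything after the rollback is back in autocommit mode; this is
// where a database-backed session write would land and survive.
$pdo->exec("INSERT INTO flash (message) VALUES ('session write')");

$messages = $pdo->query('SELECT message FROM flash')->fetchAll(PDO::FETCH_COLUMN);
// $messages contains only 'session write'
```

If you never roll back, the implicit rollback at disconnect discards the session write along with everything else, which matches the behavior you saw.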
Note: Now that I know where the issue comes from, I modified the question. It now contains only the information needed.
I'm new to the Laravel PHP framework.
I have a very small app working on my computer. It is connected to a MySQL database and has a User model. I use the Auth class to log in and out.
Everything works fine, but when I am logged in, loading a page takes about a second which is very slow. When I'm not logged in, it's a matter of milliseconds.
By using the built-in profiler, I noticed two problems. First, like I said, loading a page takes a bit more than 1000 milliseconds. Second, the framework runs one SQL query every time I load a page while I'm logged in. The query searches for a user with a certain id (my id); I guess it is there to get information about the logged-in user. But isn't there supposed to be some sort of cache? Will this be a problem if my website has to handle many requests per second?
I realized that using Auth::check() in the view is what is causing the issue. I have around 4 Auth::check() calls in my Blade view. When I have none, it goes fast. If I have one, it is slow. Then, no matter how many I have, it doesn't get much slower. It's as if the Auth class' initialization takes too much time or something like that. I guess that explains why it only happens when I'm logged in.
I dived into Laravel's code and I found out that when Auth::check() is called for the first time, the Auth class needs to "activate" my Session by retrieving the user's info from the database. That explains the query being executed every page request. But since the profiler says that the query doesn't even take a millisecond to execute, I still don't know why it slows down the app.
New information: even when I'm not sending a query to the database, the simple act of connecting to it takes almost a second. This is the reason it is slow. I think I'm getting really close to solving the issue.
Any idea so far?
Thanks in advance.
Notes
The fact that Auth::check() is in the view doesn't change anything.
Using another method like Auth::guest() doesn't solve the issue.
New: Connecting to the database is what is slow.
I finally found a way to fix this.
While reading posts on various forums about XAMPP, MySQL, and PHP, I read somewhere that it is preferable to use 127.0.0.1 because localhost requires an extra DNS lookup.
In the database configuration file, I simply changed localhost to 127.0.0.1.
Now everything is fast.
I find this really strange. Using localhost in the configuration file used to make the database connection take more than a second!
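For reference, the change boils down to one line (this excerpt assumes the classic app/config/database.php layout; adjust the path and surrounding keys for your Laravel version):

```php
<?php
// app/config/database.php (excerpt, assumed layout):
// 'host' changed from 'localhost' to '127.0.0.1' to skip the name lookup.
$connections = [
    'mysql' => [
        'driver'   => 'mysql',
        'host'     => '127.0.0.1', // was 'localhost'
        'database' => 'myapp',
        'username' => 'root',
        'password' => '',
    ],
];
```

Using the IP address connects straight over TCP without resolving the name first (and avoids any IPv6/IPv4 fallback dance the resolver might do).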
I do not agree with Hammo's example. Having any user information other than their ID within the session is a security risk, which is why most frameworks take this route. Is there anything else being run when the user is logged in, apart from the query for their record? It's definitely not that that's slowing your application down.
I have to create a process call on a db field(s) being a certain status. I have heard you can execute a cURL call with a db trigger but Google is not being kind enough to return anything I can use.
So I guess my question is three parts:
Can this be done?
Reference?
Alternative Solution?
Workflow:
A db field is updated with a status; I need to kick off a script/request/process that runs the next step in my workflow (this is a PHP script), which will pull the record from the db, process another step, then update the db with the results.
You shouldn't use triggers for that: a trigger blocks the transaction, which will make your database very slow. You'd also need to install an untrusted language into Postgres (PL/sh, PL/Perl, PL/Python, or another).
There are 2 better solutions for this problem:
have a process which connects to database and LISTENs for NOTIFY events generated by your trigger — this will work instantly;
periodically check for new data using, for example, a cron script - this would work with a delay.
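The second (polling) option can be sketched like this: plain PDO with an in-memory SQLite table standing in for your Postgres table, and the table/column names made up for illustration. The real script would run from cron against your actual database:

```php
<?php
// Cron-run worker sketch: grab rows that reached the trigger status,
// run the PHP workflow step, and write the result back.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT, result TEXT)');
$pdo->exec("INSERT INTO jobs (status) VALUES ('ready'), ('pending')");

$rows = $pdo->query("SELECT id FROM jobs WHERE status = 'ready'")
            ->fetchAll(PDO::FETCH_COLUMN);
$update = $pdo->prepare("UPDATE jobs SET status = 'done', result = ? WHERE id = ?");
foreach ($rows as $id) {
    $result = 'processed';            // your actual processing step goes here
    $update->execute([$result, $id]);
}

$done = (int) $pdo->query("SELECT COUNT(*) FROM jobs WHERE status = 'done'")->fetchColumn();
// only the 'ready' row was picked up; 'pending' is untouched
```

The LISTEN/NOTIFY route avoids the polling delay but needs a long-running listener process; the cron route is simpler and usually good enough.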
If you can call a shell script,
http://plsh.projects.postgresql.org/
you can call a curl.
But I get a creepy feeling about the approach...
If the remote server goes offline, data inconsistency??
Alternative:
I wouldn't put business logic in triggers, only customized constraints or denormalisation.
Do what you need to do with either middle-tier, or stored procedures.
Regards,
//t
I think what you're looking for is a trigger in Postgres that will run the necessary script. Triggers are explained in the documentation, and the syntax for adding a new trigger is explained here. The trigger type you're looking for appears to be an AFTER UPDATE trigger. As far as I know, the script you run will have to check whether the field has the required status, as Postgres will always run the trigger.
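As a rough illustration of that status check (using SQLite via PDO here so it's self-contained; Postgres trigger syntax differs, and there a WHEN clause on the trigger or an IF in the trigger function does the same job):

```php
<?php
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)');
$pdo->exec('CREATE TABLE outbox (job_id INTEGER)');

// AFTER UPDATE trigger; the WHEN clause is the
// "is it the required status?" check.
$pdo->exec("
    CREATE TRIGGER jobs_done AFTER UPDATE ON jobs
    WHEN NEW.status = 'done'
    BEGIN
        INSERT INTO outbox (job_id) VALUES (NEW.id);
    END
");

$pdo->exec("INSERT INTO jobs (status) VALUES ('new')");
$pdo->exec("UPDATE jobs SET status = 'working' WHERE id = 1"); // trigger body skipped
$pdo->exec("UPDATE jobs SET status = 'done' WHERE id = 1");    // trigger body fires

$queued = (int) $pdo->query('SELECT COUNT(*) FROM outbox')->fetchColumn();
// one row queued, from the update that set status to 'done'
```

Writing to a queue table like this, rather than calling out to cURL from the trigger, keeps the trigger fast and lets a separate process do the slow work.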
If I have a database.php class (singleton) which reads and writes information for users of my web application, what happens when simultaneous requests for the same database function is called?
Is it possible that the database class will return the wrong information to other users accessing the same function at the same time?
What other similar problems could occur?
what happens when simultaneous requests for the same database function is called? Is it possible that the database class will return the wrong information to other users accessing the same function at the same time?
Absolutely not.
Each PHP request is handled entirely in its own process space. There is no threading, no application-server connection pool, no shared memory, nothing funky like that. Nothing is shared unless you've gone out of your way to do so (like caching things in APC/memcached).
Every time the application starts, your Singleton will get created. When the request ends, so does the script. When the script exits, all of the variables, including your Singleton, go away with it.
What other similar problems could occur?
Unless you are using transactions (and if you're using MySQL, using a transaction-safe table type like InnoDB), it is possible that users could see partial updates. For example, let's say that you need to perform an update to three tables to update one set of data properly. After the first update has completed but before the other two have completed, it's possible for another process to come along and request data from the three tables and grab the now inconsistent data. This is one form of race condition.
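A minimal sketch of closing that window with a transaction (plain PDO with SQLite here; on MySQL the tables would need to be a transaction-safe type like InnoDB, as noted):

```php
<?php
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE a (v INTEGER)');
$pdo->exec('CREATE TABLE b (v INTEGER)');
$pdo->exec('CREATE TABLE c (v INTEGER)');

$pdo->beginTransaction();
try {
    $pdo->exec('INSERT INTO a (v) VALUES (1)');
    $pdo->exec('INSERT INTO b (v) VALUES (1)');
    $pdo->exec('INSERT INTO c (v) VALUES (1)');
    $pdo->commit(); // other connections see all three writes at once...
} catch (Exception $e) {
    $pdo->rollBack(); // ...or none of them; no partial update is ever visible
}

$total = (int) $pdo->query(
    'SELECT (SELECT COUNT(*) FROM a) + (SELECT COUNT(*) FROM b) + (SELECT COUNT(*) FROM c)'
)->fetchColumn();
// all three rows are present after the commit
```

With the three inserts inside one transaction, a concurrent reader sees either the old state or the new state, never the in-between.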
PHP provides a mechanism to register a shutdown function:
register_shutdown_function('shutdown_func');
The problem is that in the recent versions of PHP, this function is still executed DURING the request.
I have a platform (in Zend Framework, if that matters) where any piece of code throughout the request can register an entry to be logged into the database. Rather than have tons of individual insert statements throughout the request, slowing the page down, I queue them up to be inserted at the end of the request. I would like to do this after the HTTP response has been sent to the user, so the time spent logging or doing other cleanup tasks doesn't affect the user's perceived load time of the page.
Is there a built in method in PHP to do this? Or do I need to configure some kind of shared memory space scheme with an external process and signal that process to do the logging?
If you're really concerned about the insert times of MySQL, you're probably addressing the symptoms and not the cause.
For instance, if your PHP/Apache process is executing after the user gets their HTML, your PHP/Apache process is still locked into that request. Since it's busy, if another request comes along, Apache has to fork another thread, dedicate more memory to it, open additional database connections, etc.
If you're running into performance problems, you need to remove heavy lifting from your PHP/Apache execution. If you have a lot of cleanup tasks going on, you're burning precious Apache processes.
I would consider logging your entries to a file and have a crontab load these into your database out of band. If you have other heavy duty tasks, use a queuing/job processing system to do the work out of band.
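A sketch of that file-then-cron approach (the file path and entry format here are made up for illustration): buffer entries in memory during the request and append them in one write, ideally from a shutdown function:

```php
<?php
// Buffer log entries during the request, then flush them to a file in a
// single append; a crontab job loads the file into the database out of band.
$logBuffer = [];
$logBuffer[] = "2012-01-01 00:00:00\tuser logged in";
$logBuffer[] = "2012-01-01 00:00:01\tprofile viewed";

$logFile = sys_get_temp_dir() . '/app-log-' . getmypid() . '.txt';

// In the real platform this would be registered with
// register_shutdown_function() so the request pays for one append only.
function flush_log(array $buffer, $file)
{
    file_put_contents($file, implode("\n", $buffer) . "\n", FILE_APPEND | LOCK_EX);
}
flush_log($logBuffer, $logFile);

$written = file($logFile, FILE_IGNORE_NEW_LINES);
// both buffered entries are now in the file
```

LOCK_EX matters here because several Apache processes may append to the same file concurrently; the cron loader can then rename the file before importing so new entries go to a fresh one.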
Aside from register_shutdown_function(), there aren't built-in methods for determining when a script has exited. However, the Zend Framework has several hooks for running code at specific points in the dispatch process.
For your requirements, the most relevant would be the action controller's post-dispatch hook, which takes place just after an action has been dispatched, and the dispatchLoopShutdown event for the controller plugin broker.
You should read the manual to determine which will be a better fit for you.
EDIT: I guess I didn't understand completely. From those hooks you could fork the current process in some form or another.
PHP has several ways to fork processes as you can read in the manual under program execution. I would suggest going over the pcntl extension - read this blog post to see an example of forking child processes to run in the background.
The comments on this random guy's blog sound similar to what you want. If that header trick doesn't work, one of the comments on that blog suggests exec()ing to a separate PHP script to run in the background, if your Web host's configuration allows such a thing.
This might be a little hackish for your taste, but it would be an effective and simple workaround.
You could create a "queue" managed by your DB of choice, and first, store the request in your queue table in the database, then output an iframe that leads to a script that will trigger the instructions that match up with the queue_id in your database.
ex:
<?php
mysql_query("INSERT INTO queue (instructions) VALUES ('something would go here')");
echo '<iframe src="/yourapp/execute_queue?id=' . mysql_insert_id() . '"></iframe>';
?>
and the frame would do something like
ex:
<?php
$result = mysql_query('SELECT instructions FROM queue WHERE id = ' . (int) $_GET['id']); // cast to int to avoid SQL injection
// From here, simply execute some instruction based on the "instructions" field, then delete the instruction from the database.
?>
Like I said, I can see how you could consider this hackish, but the frame loads independently of its parent page, so it will achieve what you want without running another app in the background.
Hope this at least points you in the right direction.
Cheers!