SQL Server stored procedure aborting when called from PHP

I have spent hours trying to get to the bottom of this and it is driving me la la.
I have a simple stored procedure that just inserts 10,000 rows into a table. This is done because it takes long enough to prove the point (about 12 seconds).
create table lala (id int not null identity(1,1), txt varchar(100))
go
CREATE PROCEDURE lalatest
AS
BEGIN
DECLARE @x int;
SET @x = 10000;
WHILE @x > 0
BEGIN
INSERT INTO lala (txt) VALUES ('blah ' + CAST(@x AS varchar(10)));
SET @x = @x - 1;
END;
END;
When I run the stored procedure from within SQL Server Management Studio, 10,000 rows are inserted into the table.
When I run it from PHP, it just aborts after about 150 rows (the number of rows varies, suggesting a timeout issue) with no error messages or any indication that it has finished early.
The remote query timeout setting is set to the default of 600 seconds, so it's not that.
The php:
$sql = "exec lalatest";
sqlsrv_query($cn, $sql);
I have tried specifying a timeout value (which should be indefinite by default anyway) on the sqlsrv_query line and it made no difference.
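For reference, a timeout can be passed like this (a sketch; the QueryTimeout option is in seconds, and the error check just prints whatever the driver reports):
$sql = "exec lalatest";
$stmt = sqlsrv_query($cn, $sql, array(), array("QueryTimeout" => 600));
if ($stmt === false) {
    // Surface any driver-level errors instead of failing silently
    die(print_r(sqlsrv_errors(), true));
}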
I'm using php 5.6.7
and Microsoft SQL Server 2012 (SP3) (KB3072779) - 11.0.6020.0 (X64)
Oct 20 2015 15:36:27
Standard Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1) (Hypervisor)
I have all errors and warnings turned on
Anyone got any ideas as to what I need to do to make this work?

Would you believe it, I have made it work. I added SET NOCOUNT ON as the first line in the stored procedure and now it works from php as well.
Looks like having the rows affected info throws the php sqlsrv driver a wobbly. Strange how it only does it after a certain time (about a second), perhaps something to do with when the message buffer is flushed or something. Yes, I'm guessing, but this has solved the problem, so I'm happy.
CREATE PROCEDURE lalatest
AS
BEGIN
SET NOCOUNT ON; -- must be added so that it will run from php
DECLARE @x int;
SET @x = 10000;
WHILE @x > 0
BEGIN
INSERT INTO lala (txt) VALUES ('blah ' + CAST(@x AS varchar(10)));
SET @x = @x - 1;
END;
END;
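An alternative that is sometimes suggested, if the procedure itself can't be changed, is to leave the row-count messages on but have PHP consume every result the batch produces, using sqlsrv_next_result(); a minimal sketch, reusing the $cn connection from above:
$stmt = sqlsrv_query($cn, "exec lalatest");
if ($stmt === false) {
    die(print_r(sqlsrv_errors(), true));
}
// Step through each "rows affected" result so the driver doesn't abandon
// the batch early; sqlsrv_next_result() returns null once all results are read
while (sqlsrv_next_result($stmt) === true) {
    // nothing useful in the intermediate results
}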

Related

Laravel 5.7.13 - How to execute Oracle PL/SQL Declare-Begin-End statement

I connect to an Oracle DB with the "Oracle SQL Developer" tool, and I can successfully run statements like this (for example):
DECLARE
p0_ VARCHAR2(32000) := 'Charlie';
p1_ FLOAT := 35;
p2_ VARCHAR2(32000) := 'Spain';
...
BEGIN
Users.Validate(p0_,p1_,p2_ ....);
END;
Also, I can successfully run simple queries like Select * from .... The server responds in 4-5 seconds.
My problem:
I have a Laravel 5.7.13 project with an Oracle connection (yajra/laravel-oci8 module) and it works OK when I run simple queries like Select ...:
DB::select("Select ...");
But I cannot run the DECLARE ... BEGIN ... END statement:
DB::select("DECLARE
p0_ VARCHAR2(32000) := 'Charlie';
p1_ FLOAT := 35;
p2_ VARCHAR2(32000) := 'Spain';
...
BEGIN
Users.Validate(p0_,p1_,p2_ ....);
END;");
I also tried with the DB::select(DB::raw("DECLARE ...")) statement, and DB::statement(...), but neither works.
I am using XAMPP and Apache, and when I try to run these statements, Apache does not respond: there are no errors, it simply looks like it is trying to execute the statement but never finishes, even though I have a timeout of 10 seconds (max_execution_time=10 in php.ini, and also forced in my PHP code with ini_set('max_execution_time', 10)).
How can I execute this kind of statement from Laravel?
Could it be a user permissions problem? (I have configured the same user and connection in Laravel and in the "Oracle SQL Developer" tool.)
Thanks!
As far as I'm aware, it is not possible to execute a PL/SQL block directly from Laravel.
So the best (only?) option would be to first create a stored procedure directly in the database, using SQL Developer or another Oracle client, and then to invoke it from Laravel, passing it the relevant arguments.
Here is an example based on your use case:
1) Create the stored procedure (NB: procedure parameters are declared without a length; the 4000-byte limit applies to VARCHAR2 table columns, while PL/SQL variables can go up to 32767):
CREATE OR REPLACE PROCEDURE myproc
(
p0 IN VARCHAR2,
p1 IN FLOAT,
p2 IN VARCHAR2
)
AS
BEGIN
Users.Validate(p0,p1,p2);
END;
2) Call the stored procedure from Laravel:
DB::statement("exec myproc('Charlie', 35, 'Spain')");
If you need to return something from the procedure, use DB::select instead of DB::statement.
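If the plain exec string is rejected by the driver (exec is really a SQL*Plus-style shorthand), another pattern often used with oci8-based drivers is to wrap the call in a minimal anonymous block and bind the values; this is only a sketch, and the placeholder names are illustrative:
// Sketch: invoke the procedure through a tiny BEGIN ... END; wrapper,
// binding the values rather than inlining them (names p0/p1/p2 are arbitrary)
DB::statement('BEGIN myproc(:p0, :p1, :p2); END;', [
    'p0' => 'Charlie',
    'p1' => 35,
    'p2' => 'Spain',
]);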

MariaDB Avoid Deadlocks

My original error was
Error No: 1213 - Deadlock found when trying to get lock; try restarting transaction
Okay, so I wrote a loop with max retries and a wait in between to try and get through the deadlocks.
$Try = 0;
while (!$Result = $dbs->query($MySQL)) {
$Try++;
if ($Try === MYSQL_MAX_RETRIES)
HandleMySQLError($dbs->error, $MySQL, false, $Test, $Trace);
else
sleep(MYSQL_RETRY_WAIT);
}
But now I'm constantly getting some of the original error still, and a new error
Got error 35 "Resource deadlock avoided" during COMMIT
But I can't really seem to find out what this means or how to fix it?
EDIT
I left out a ton of information when I first wrote this, but the server is a RedHat 7 AWS EC2 (well, 3 of them) in a Galera & MariaDB cluster.
The query I am running is a call to a stored procedure
call `getchatmessages`('<ChatID>', '<UserID>', from_unixtime('<Some Timestamp>'));
And the stored procedure is as follows
CREATE DEFINER=`root`@`%` PROCEDURE `getchatmessages`(IN `__ChatID` CHAR(36), IN `__UserID` CHAR(36), IN `__Timestamp` TIMESTAMP(6))
BEGIN
DECLARE `__NewChatMessages` TINYINT(1) DEFAULT 0;
DECLARE `__i` INT(11) DEFAULT 0;
DECLARE `__Interval` INT(11) DEFAULT 100; -- ms
DECLARE `__Timeout` INT(11) DEFAULT 15000; -- ms
while `__NewChatMessages`=0 and `__i`<`__Timeout`/`__Interval` do
select 1 into `__NewChatMessages` from `chatmessages` where `ChatID`=`__ChatID` and `DateTimeAdded`>ifnull(`__Timestamp`,0) limit 1;
update `chatusers` set `DateTimeRead`=now(6) where `ChatID`=`__ChatID` and `UserID`=`__UserID`;
do sleep(`__Interval`/1000);
set `__i`=`__i`+1;
end while;
select `chatmessages`.`Body`, `chatmessages`.`ChatID`, `chatmessages`.`UserID`,
`chatmessages`.`ChatMessageID`, `chatmessages`.`DateTimeAdded`, UNIX_TIMESTAMP(`chatmessages`.`DateTimeAdded`) `Timestamp`, `users`.`FirstName`,
`users`.`LastName`
from `chatmessages`
join `users` using (`UserID`)
where `chatmessages`.`ChatID`=`__ChatID`
and `chatmessages`.`DateTimeAdded`>ifnull(`__Timestamp`,0)
order by `chatmessages`.`DateTimeAdded` desc
limit 100;
END
Deadlock in Galera Cluster (MariaDB Galera Cluster, 3 nodes) is not a typical deadlock, but a way of communicating the multi-master conflicts:
http://galeracluster.com/documentation-webpages/dealingwithmultimasterconflicts.html
The easiest way to avoid deadlocks is to write to one node at a time, i.e. configure HAProxy to write to one node only. In your case you would run the stored procedure on Node1 (it does not matter which node, but always the same one, sort of "sticky sessions").
More information here: https://severalnines.com/blog/avoiding-deadlocks-galera-set-haproxy-single-node-writes-and-multi-node-reads
Is this proc being called inside a transaction? If so, I would argue strongly against its design. You have a loop with a sleep hanging onto the transaction.
Instead, have the UPDATE be a transaction by itself.
This may virtually eliminate the deadlocks. However you should still deal with deadlocks, as discussed by other answer(s).
Edit: Since there are no BEGINs, and autocommit=ON, the OP is already following this advice. Alas.
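On the client side, it can also help to retry only genuine deadlock failures (error 1213, or the Galera "error 35 ... during COMMIT" variant) and to give up on everything else; a rough sketch against mysqli, reusing the question's own helper and constants:
$Try = 0;
do {
    $Result = $dbs->query($MySQL);
    if ($Result !== false)
        break; // success

    // Retry only deadlock-style failures; anything else is a real error
    $isDeadlock = ($dbs->errno === 1213) || (stripos($dbs->error, 'deadlock') !== false);
    $Try++;
    if (!$isDeadlock || $Try === MYSQL_MAX_RETRIES) {
        HandleMySQLError($dbs->error, $MySQL, false, $Test, $Trace);
        break;
    }
    sleep(MYSQL_RETRY_WAIT);
} while (true);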

ODBC, PHP and SQL Variables

I have a fairly large, complex chunk of SQL that creates a report for a particular page. This SQL has several variable declarations at the top, which I suspect is giving me problems.
When I get the ODBC result, I have to use odbc_next_result() to get past the empty result sets that seem to be returned with the variables. That seems to be no problem. When I finally get to the "real" result, odbc_num_rows() tells me it has over 12 thousand rows, when in actuality it has 6.
Here is an example of what I'm doing, to give you an idea, without going into details on the class definitions:
$report = $db->execute(Reports::create_sql('sales-report-full'));
while(odbc_num_rows($report) <= 1) {
odbc_next_result($report);
}
echo odbc_num_rows($report);
The SQL looks something like this:
DECLARE @actualYear int = YEAR(GETDATE());
DECLARE @curYear int = YEAR(GETDATE());
IF MONTH(GETDATE()) = 1
SELECT @curYear = @curYear - 1;
DECLARE @lastYear int = @curYear-1;
DECLARE @actualLastYear int = @actualYear-1;
DECLARE @tomorrow datetime = DATEADD(dd, 1, GETDATE());
SELECT * FROM really_big_query
Generally speaking it's always a good idea to start every stored procedure, or batch of commands to be executed, with the set nocount on instruction - which tells SQL Server to suppress sending "rows affected" messages to a client application over ODBC (or ADO, etc.). These messages affect performance and cause empty record sets to be created in the result set - which causes additional effort for application developers.
I've also used ODBC drivers that actually error if you forget to suppress these messages - so it has become instinctive for me to type set nocount on as soon as I start writing any new stored procedure.
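If the report SQL itself is awkward to edit, the instruction can also be prepended when the batch is built; a minimal sketch, assuming the Reports::create_sql() helper from the question returns the raw SQL string and $db->execute() is the question's own wrapper:
// Prefix the generated batch so the DECLARE/SELECT statements stop emitting
// rows-affected result sets before the real report query
$sql = "SET NOCOUNT ON;\n" . Reports::create_sql('sales-report-full');
$report = $db->execute($sql);
// The first result should now be the report itself, with no empty sets to skip
echo odbc_num_rows($report);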
There are various question/answers relating to this subject, for example What are the advantages and disadvantages of turning NOCOUNT off in SQL Server queries? and SET NOCOUNT ON usage which cover many other aspects of this command.

PHP and SQLSRV isn't inserting all the rows it should

I'm utilizing PHP 5.5 and connecting to an MS SQL database. Connection works as expected. I'm attempting to do a mass insert (around 2000 rows). Rather than using PHP to run 2000 INSERT statements, I'm sending a query like so:
DECLARE @count INT
SET @count = 0
WHILE(@count < 2000)
BEGIN
INSERT INTO Table (Field1, Field2)
VALUES ('data1', 'data2')
SET @count = (@count + 1)
END
When I plug this query directly into SQL Server Management Studio, it inserts 2000 rows as expected. When I run the query via PHP like so:
$connectInfo = array ("UID"=>'user',"PWD"=>'#pass',"Database"=>"database", "ReturnDatesAsStrings"=>true);
$link = sqlsrv_connect("dataserver",$connectInfo);
$query = "DECLARE #count INT
SET #count = 0
WHILE(#count < 2000)
BEGIN
INSERT INTO License (Field1, Field2)
VALUES('data1','data2')
SET #count = (#count + 1)
END";
if(!$foo = sqlsrv_query($link,$query)) {
die(print_r(sqlsrv_errors(), true));
}else{
die("hooray beer time!");
}
This only inserts 524 rows in the table, not the expected 2000 (and sometimes it's 309 rows, X rows, it's erratic). It doesn't error out, and I get "hooray beer time!" after every script run. By default, the sqlsrv_query() function has no time limit, I thought. Has anyone run into anything like this before? It has me baffled.
Ahh, it seems that after each insert in the WHILE loop, SQL Server returns a little row of text that says (1) row affected (or something to that tune). Turning this off via:
SET NOCOUNT ON
has remedied this issue. Thanks all!
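For reference, the batch from the question with the fix in place starts roughly like this (License, Field1 and Field2 are the question's own names):
$query = "SET NOCOUNT ON;
DECLARE @count INT
SET @count = 0
WHILE(@count < 2000)
BEGIN
INSERT INTO License (Field1, Field2)
VALUES('data1','data2')
SET @count = (@count + 1)
END";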

Stored Procedure fails when performing Update

I'm using the SQL Server Driver for PHP to connect to an SQL Server 2008 Express. Right now, I'm trying to replace all SELECT, UPDATE and INSERT statements with stored procedures. This is working fine for SPs that just contain a SELECT statement. But now I tried to do one with an update, and I keep getting the error message "Executing SQL directly; no cursor.". I can call the SP fine from Management Studio with the same parameter values.
Any ideas?
Cheers
Alex
EDIT: here's one update procedure. The funny part is, the procedure is actually executed fine and updates the data like it's supposed to. But it still returns an error, resulting in an exception.
First, the PHP code that fails:
if (! $this->Result = sqlsrv_query($this->Conn, $strQuery, $arrParameters, array("Scrollable"=>SQLSRV_CURSOR_STATIC)))
{
$this->sendErrorMail($strQuery, $arrParameters);
throw new Exception(4001);
}
SQL
USE [testsite]
GO
/****** Object: StoredProcedure [dbo].[Items_countDownload] Script Date: 09/09/2010 18:03:28 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: Alexis Hildebrandt
-- Create date: 2010-09-09
-- Description: Increases the download count by 1
-- =============================================
ALTER PROCEDURE [dbo].[Items_countDownload]
@Id INT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @DownloadCount INT = 0, @MaxCount INT = 0, @Id2 INT = 0
DECLARE itemCursor CURSOR SCROLL
FOR
SELECT Id, Downloads
FROM Items
WHERE Id = @Id
OR SKU IN
(
SELECT SKU FROM Items WHERE Id = @Id
)
FOR UPDATE OF Downloads
OPEN itemCursor
FETCH NEXT FROM itemCursor
INTO @Id, @DownloadCount;
-- Find the largest Download count across all versions of the item
WHILE @@FETCH_STATUS = 0
BEGIN
IF @MaxCount < @DownloadCount
SET @MaxCount = @DownloadCount;
FETCH NEXT FROM itemCursor
INTO @Id, @DownloadCount;
END
-- Increase the download count by one for all versions
FETCH FIRST FROM itemCursor
INTO @Id, @DownloadCount;
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE Items
SET Downloads = @MaxCount + 1
WHERE CURRENT OF itemCursor
FETCH NEXT FROM itemCursor
INTO @Id, @DownloadCount;
END
CLOSE itemCursor;
DEALLOCATE itemCursor;
END
look into the permissions: Executing SQL directly; no cursor
Try removing "Scrollable"=>SQLSRV_CURSOR_STATIC option from the call. The procedure you're calling doesn't return any open cursors to PHP code.
