unixODBC PHP Update statement error - php

I'm using Ubuntu + PHP + unixODBC + mdbtools to work with an .mdb file.
Everything (connecting and SELECT queries) works fine, but INSERT and UPDATE statements do not.
My code is something like this:
$mdbConnection = new \PDO("odbc:mdbdriver", $user, $password, array('dbname' => $FileName));
$SelectResult = $mdbConnection->query("Select * from Zone");
$UpdateResult = $mdbConnection->query("Update Zone Set ShahrCode = 99");
$SelectResult returns the correct result, but the second call throws an error that causes Apache to segfault.
I tested it with the isql command. Running the SELECT statement succeeds, but the UPDATE does not.
#isql mdbdriver
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL>Update Zone Set ShahrCode = 99
Error at Line : syntax error near Update
syntax error near Update
Got no result for 'Update Zone Set ShahrCode = 99' command
[08001][unixODBC]Couldn't parse SQL
[ISQL]ERROR: Could not SQLExecute
Or
SQL> Update [Zone] Set ShahrCode = 99
Error at Line : syntax error near Update
syntax error near Update
Got no result for 'Update [Zone] Set ShahrCode = 99' command
[ISQL]ERROR: Could not SQLExecute
How should I fix this error?
Thanks all.

Personally, I wouldn't spend a lot of time trying to get PHP + mdb_tools + unixODBC to work together reliably. I have tried on several occasions and have been quite unsuccessful despite my best efforts.
My recommendations would be:
1. If maintaining your data in an Access .mdb file is a firm requirement then one must assume that Windows machines are involved in the project. In that case I would suggest that you run your PHP code on a Windows machine and use COM_DOTNET to manipulate the Access database (via Windows ODBC using ADODB.Connection and related objects).
2. If running your PHP code on Linux is a firm requirement then there is a strong case for moving your data from the Access .mdb into some other database that works better with PHP. (MySQL would be one of the more common choices.)
3. If both 1. and 2. are firm requirements then perhaps the best option might be to move the .mdb file to a Windows machine and use ODBTP to manipulate the .mdb file from PHP code running on the Linux machine.

At last I found a solution:
mdbtools cannot write to .mdb files yet. From the project's documentation:
MDB Tools currently has read-only support for Access 97 (Jet 3) and
Access 2000/2002 (Jet 4) formats. Write support is currently being
worked on and the first cut is expected to be included in the 0.6
release.
Our solution was a small compiled Java application:
Create a simple Java project using the Jackcess library.
Add CLI parameter handling to the Java application and do whatever you need with the .mdb file.
You can even pass the .mdb file path as a CLI parameter.
Compile the Java project.
In PHP you can then use:
exec('cd path/to/javaproject; java -cp . YourJavaProject "mdbfilepath" "insert|update|or select"', $output);
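For illustration, a minimal PHP wrapper around such a tool might look like the sketch below. The class name YourJavaProject, the project path, and the argument convention are assumptions carried over from the exec() line above and must match whatever you actually build:

<?php
// Hypothetical wrapper around a compiled Jackcess-based CLI tool.
// The class name and argument order are assumptions; adjust them
// to match your own Java project.
function runMdbCommand($mdbFile, $sql)
{
    $cmd = 'cd /path/to/javaproject && java -cp . YourJavaProject '
         . escapeshellarg($mdbFile) . ' '
         . escapeshellarg($sql);
    exec($cmd, $output, $exitCode);
    if ($exitCode !== 0) {
        throw new RuntimeException("MDB helper failed: " . implode("\n", $output));
    }
    return $output; // one array element per line printed by the Java tool
}

// Usage:
$rows = runMdbCommand('/data/db.mdb', 'UPDATE Zone SET ShahrCode = 99');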


phpmyadmin import & overwrite table

This has been annoying me for weeks and I can't find a proper solution.
I'm running a VPS with:
CentOS 7 (and aaPanel, which has no relevance)
PHP 7.4
MySQL 5.7
phpMyAdmin 5.0
I've gone into phpMyAdmin, exported a table, and updated 5000 rows, and now I want to import the data and overwrite the old rows in the same table.
The 'browse' and import route is not an option (the VPS throws a 503 error / has too little RAM to load it), so I've tried to do it from the SQL tab:
LOAD DATA LOCAL INFILE '/database/links.csv'
INTO TABLE links
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
which fails with 'permission denied'.
Yes, I've added
[MySQLi]
mysqli.allow_local_infile = On
to my.cnf (and even tried adding it to php.ini), restarted Apache, and also tried removing LOCAL (saw that on Stack Overflow too), all to no avail.
Does anyone have an updated version, or know of a solid solution to this annoying, but should-be-easy, problem?
EDIT
Using the root user fixes the 'permission denied' issue... but:
LOAD DATA INFILE
gives the error
#1290 - The MySQL server is running with the --secure-file-priv option so it cannot execute this statement
--secure-file-priv has been removed from my.cnf and Apache restarted, yet the error still appears.
LOAD DATA LOCAL INFILE
gives the error
#2000 - LOAD DATA LOCAL INFILE is forbidden, check mysqli.allow_local_infile
mysqli.allow_local_infile = On is still in my.cnf.
The file has full permissions (777) and I've tried changing the owner (www/root/mysql).
Carefully read the difference between the LOCAL and non-LOCAL versions of the LOAD DATA command: https://dev.mysql.com/doc/refman/5.7/en/load-data.html#load-data-local
For the non-LOCAL version of the command you need to check the state of the secure_file_priv config variable:
mysql> select @@secure_file_priv;
+-----------------------+
| @@secure_file_priv    |
+-----------------------+
| /var/lib/mysql-files/ |
+-----------------------+
The file must be located there.
For the LOCAL version of the command you must check that the client has permission to read the file. In the case of phpMyAdmin, PHP is the client.
In both cases, double-check that the MySQL server or PHP has enough permissions to enter the directory and read the file. The easiest way is to log in as the system user for mysql or php and simply try to read the file:
sudo -u mysql_or_php_user /bin/bash
or
su -s /bin/bash mysql_or_php_user
then
head /database/links.csv
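If you end up loading the file from PHP instead of phpMyAdmin, here is a minimal mysqli sketch (credentials and path are placeholders) that enables LOCAL INFILE explicitly on the client side; the server must still allow it:

<?php
// Minimal sketch with placeholder credentials and file path.
$mysqli = mysqli_init();
// Enable LOCAL INFILE for this client connection explicitly;
// the server-side local_infile setting must also be ON.
mysqli_options($mysqli, MYSQLI_OPT_LOCAL_INFILE, true);
mysqli_real_connect($mysqli, 'localhost', 'user', 'password', 'mydb');

$mysqli->query("LOAD DATA LOCAL INFILE '/database/links.csv'
    INTO TABLE links
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'");

if ($mysqli->error) {
    echo $mysqli->error; // e.g. permission or local_infile errors
}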

PDOException (2006) SQLSTATE[HY000] [2006] MySQL server has gone away [duplicate]

I get this error when I try to source a large SQL file (a big INSERT query).
mysql> source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 3
Current database: *** NONE ***
Nothing in the table is updated. I've tried deleting and undeleting the table/database, as well as restarting MySQL. None of these things resolve the problem.
Here is my max-packet size:
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
Here is the file size:
$ ls -s file.sql
79512 file.sql
When I try the other method...
$ ./mysql -u root -p my_db < file.sql
Enter password:
ERROR 2006 (HY000) at line 1: MySQL server has gone away
max_allowed_packet=64M
Adding this line to the my.cnf file solved my problem.
This is useful when the columns have large values, which cause the issue; you can find the explanation here.
On Windows this file is located at: C:\ProgramData\MySQL\MySQL Server 5.6
On Linux (Ubuntu): /etc/mysql
You can increase the max allowed packet size:
SET GLOBAL max_allowed_packet=1073741824;
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
The global update and the my.cnf settings didn't work for me for some reason. Passing the max_allowed_packet value directly to the client worked here:
mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql
In general the error:
Error: 2006 (CR_SERVER_GONE_ERROR) - MySQL server has gone away
means that the client couldn't send a question to the server.
mysql import
In your specific case, while importing the database file via mysql, this most likely means that some of the queries in the SQL file are too large to import and couldn't be executed on the server, so the client fails on the first error that occurs.
So you have the following possibilities:
Add the force option (-f) for mysql to proceed and execute the rest of the queries.
This is useful if the database has some large queries related to cache which aren't relevant anyway.
Increase max_allowed_packet and wait_timeout in your server config (e.g. ~/.my.cnf).
Dump the database using the --skip-extended-insert option to break down the large queries, then import it again.
Try applying the --max-allowed-packet option for mysql.
Common reasons
In general this error could mean several things, such as:
A query to the server is incorrect or too large.
Solution: Increase the max_allowed_packet variable.
Make sure the variable is under the [mysqld] section, not [mysql].
Don't be afraid to use large numbers for testing (like 1G).
Don't forget to restart the MySQL/MariaDB server.
Double-check that the value was set properly by:
mysql -sve "SELECT @@max_allowed_packet" # or:
mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
(A PHP version of this check appears after this list.)
You got a timeout from the TCP/IP connection on the client side.
Solution: Increase wait_timeout variable.
You tried to run a query after the connection to the server has been closed.
Solution: A logic error in the application should be corrected.
Host name lookups failed (e.g. DNS server issue), or server has been started with --skip-networking option.
Another possibility is that your firewall blocks the MySQL port (e.g. 3306 by default).
The running thread has been killed, so try again.
You have encountered a bug where the server died while executing the query.
A client running on a different host does not have the necessary privileges to connect.
And many more, so learn more at: B.5.2.9 MySQL server has gone away.
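For completeness, a minimal PHP sketch (placeholder DSN and credentials) that performs the max_allowed_packet/wait_timeout check mentioned above from application code:

<?php
// Minimal sketch with placeholder DSN and credentials.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');

// If max_allowed_packet is small (e.g. the old 1MB default),
// large INSERTs from a dump will fail mid-import.
$row = $pdo->query("SELECT @@max_allowed_packet AS map, @@wait_timeout AS wt")
           ->fetch(PDO::FETCH_ASSOC);

printf("max_allowed_packet = %d bytes, wait_timeout = %d s\n",
       $row['map'], $row['wt']);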
Debugging
Here are a few expert-level debugging ideas:
Check the logs, e.g.
sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
Test your connection via mysql, telnet or ping functions (e.g. mysql_ping in PHP); a short PHP sketch follows this list.
Use tcpdump to sniff the MySQL communication (won't work for socket connection), e.g.:
sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
On Linux, use strace. On BSD/Mac use dtrace/dtruss, e.g.
sudo dtruss -a -fn mysqld 2>&1
See: Getting started with DTracing MySQL
Learn more how to debug MySQL server or client at: 26.5 Debugging and Porting MySQL.
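As an illustration of the ping idea above, a minimal mysqli liveness check (placeholder credentials) could look like:

<?php
// Minimal liveness check with placeholder credentials.
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');

sleep(5); // simulate an idle period between queries

// mysqli::ping() reports whether the server connection is still alive.
if (!$mysqli->ping()) {
    // A dead connection typically surfaces as error 2006 here.
    echo 'Connection lost: ' . $mysqli->error . "\n";
}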
For reference, check the source code in sql-common/client.c file responsible for throwing the CR_SERVER_GONE_ERROR error for the client command.
MYSQL_TRACE(SEND_COMMAND, mysql, (command, header_length, arg_length, header, arg));
if (net_write_command(net, (uchar) command, header, header_length,
                      arg, arg_length))
{
  set_mysql_error(mysql, CR_SERVER_GONE_ERROR, unknown_sqlstate);
  goto end;
}
I solved the error ERROR 2006 (HY000) at line 97: MySQL server has gone away and successfully migrated a >5GB sql file by performing these two steps in order:
Created /etc/my.cnf as others have recommended, with the following contents:
[mysql]
connect_timeout = 43200
max_allowed_packet = 2048M
net_buffer_length = 512M
debug-info = TRUE
Appended the flags --force --wait --reconnect to the command (i.e. mysql -u root -p -h localhost my_db < file.sql --verbose --force --wait --reconnect).
Important Note: It was necessary to perform both steps, because if I didn't bother making the changes to /etc/my.cnf file as well as appending those flags, some of the tables were missing after the import.
System used: OSX El Capitan 10.11.5; mysql Ver 14.14 Distrib 5.5.51 for osx10.8 (i386)
Just in case, to check variables you can use
$> mysqladmin variables -u user -p
This will display the current variables, in this case max_allowed_packet, and as someone said in another answer you can set it temporarily with
mysql> SET GLOBAL max_allowed_packet=1072731894
In my case the cnf file was not taken into account and I don't know why, so the SET GLOBAL code really helped.
You can also log into the database as root (or with the SUPER privilege) and run
set global max_allowed_packet=64*1024*1024;
This doesn't require a MySQL restart. Note that you should still fix your my.cnf file as outlined in other solutions:
[mysqld]
max_allowed_packet=64M
And confirm the change after you've restarted MySQL:
show variables like 'max_allowed_packet';
You can use the command-line as well, but that may require updating the start/stop scripts which may not survive system updates and patches.
As requested, I'm adding my own answer here. Glad to see it works!
The solution is to increase the values of the wait_timeout and connect_timeout parameters in your options file, under the [mysqld] tag.
I had to recover a 400MB mysql backup and this worked for me (the values I've used below are a bit exaggerated, but you get the point):
[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
max_allowed_packet = 1024M
interactive_timeout = 1000000
net_buffer_length = 200M
net_read_timeout = 1000000
set GLOBAL delayed_insert_timeout=100000
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
Add the line
max_allowed_packet=500M
then restart the MySQL service once you are done.
A couple of things could be happening here:
Your INSERT is running long and the client is disconnecting. When it reconnects it's not selecting a database, hence the error. One option here is to run your batch file from the command line and select the database in the arguments, like so:
$ mysql db_name < source.sql
Another is to run your command via PHP or some other language. After each long-running statement, you can close and re-open the connection, ensuring that you're connected at the start of each query; a rough sketch follows.
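A rough PHP sketch of that idea (placeholder credentials; note that splitting a dump on ';' is naive and breaks on semicolons inside string literals, so this only illustrates the reconnect pattern):

<?php
// Rough sketch with placeholder credentials. The statement splitting
// below is deliberately naive and only illustrates reconnecting.
$statements = array_filter(array_map('trim',
    explode(';', file_get_contents('source.sql'))));

foreach ($statements as $sql) {
    // Fresh connection per statement, so a server-side timeout on one
    // long-running statement cannot poison the following ones.
    $pdo = new PDO('mysql:host=localhost;dbname=db_name', 'user', 'password');
    $pdo->exec($sql);
    $pdo = null; // close the connection explicitly
}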
If you are on Mac and installed mysql through brew like me, the following worked.
cp $(brew --prefix mysql)/support-files/my-default.cnf /usr/local/etc/my.cnf
Source: For homebrew mysql installs, where's my.cnf?
Add max_allowed_packet=1073741824 to /usr/local/etc/my.cnf, then:
mysql.server restart
I had the same problem in XAMPP.
Method 1: I changed max_allowed_packet in the D:\xampp\mysql\bin\my.ini file as below:
max_allowed_packet=500M
Finally, restart the MySQL service and you're done.
Method 2:
The easier way if you are using XAMPP: open the XAMPP control panel and click on the config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click stop on the MySQL service, then click start again. Wait for a few minutes.
Then try to run your MySQL query again. Hopefully it will work.
I encountered this error when using MySQL Cluster. I do not know whether this question is about cluster usage or not, but the error is exactly the same, so I'll give my solution here.
I was getting this error because the data nodes had suddenly crashed. When the nodes crash, you can still get a correct result using the command:
ndb_mgm -e 'ALL REPORT MEMORYUSAGE'
and mysqld also keeps working correctly, so at first I could not understand what was wrong. About 5 minutes later, the ndb_mgm result showed no data nodes working, and then I realized the problem. So: try restarting all the data nodes; then the MySQL server comes back and everything is OK.
One thing was weird to me, though: after I lost the MySQL server for some queries, when I used a command like show tables, I could still get a return like 33 rows in set (5.57 sec), but no table info was displayed.
This error message also occurs when you created the SCHEMA with a different COLLATION than the one which is used in the dump. So, if the dump contains
CREATE TABLE `mytab` (
..
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
you should also reflect this in the SCHEMA collation:
CREATE SCHEMA myschema COLLATE utf8_unicode_ci;
I had been using utf8mb4_general_ci in the schema, because my script came from a fresh MySQL 8 installation; loading the dump into an old 5.7 server crashed and drove me nearly crazy.
So, maybe this helps you save some frustrating hours... :-)
(MacOS 10.3, mysql 5.7)
Add max_allowed_packet=64M to [mysqld]
[mysqld]
max_allowed_packet=64M
Restart the MySQL server.
If it's reconnecting and getting connection ID 2, the server has almost definitely just crashed.
Contact the server admin and get them to diagnose the problem. No non-malicious SQL should crash the server, and the output of mysqldump certainly should not.
It is probably the case that the server admin has made some big operational error such as assigning buffer sizes of greater than the architecture's address-space limits, or more than virtual memory capacity. The MySQL error-log will probably have some relevant information; they will be monitoring this if they are competent anyway.
This is a rarer issue, but I have seen it when someone copied the entire /var/lib/mysql directory as a way of migrating their DB to another server. It doesn't work because the database was running and using log files; it can also fail if there are logs in /var/log/mysql. The solution is to copy the /var/log/mysql files as well.
For amazon RDS (it's my case), you can change the max_allowed_packet parameter value to any numeric value in bytes that makes sense for the biggest data in any insert you may have (e.g.: if you have some 50mb blob values in your insert, set the max_allowed_packet to 64M = 67108864), in a new or existing parameter-group. Then apply that parameter-group to your MySQL instance (may require rebooting the instance).
For Drupal 8 users looking for a solution to a DB import failure:
At the end of the SQL dump file there can be commands inserting data into the "webprofiler" table. That is, I guess, some debug log table and is not really important for the site to work, so all of it can be removed. I deleted all those inserts, including LOCK TABLES and UNLOCK TABLES (and everything in between); it's at the very bottom of the SQL file. The issue is described here:
https://www.drupal.org/project/devel/issues/2723437
But there is no solution for it besides truncating that table.
BTW, I tried all the solutions from the answers above and nothing else helped.
I tried all of the above solutions and they all failed.
I ended up using -h 127.0.0.1 instead of the default socket var/run/mysqld/mysqld.sock.
If you have tried all these solutions, especially increasing max_allowed_packet up to the maximum supported value of 1GB, and you are still seeing these errors, it might be that your server literally does not have enough free RAM available...
The solution: upgrade your server to more RAM and try again.
Note: I'm surprised this simple solution has not been mentioned after 8+ years of discussion on this thread; sometimes we developers tend to overthink things.
Eliminating the errors which triggered Warnings was the final solution for me. I also changed the max_allowed_packet which helped with smaller files with errors. Eliminating the errors also sped up the process incredibly.
If none of these answers solves the problem for you, I solved it by removing the tables and creating them again automatically, in this way:
When creating the backup, first back up the structure, and be sure to add:
DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT
CREATE PROCEDURE / FUNCTION / EVENT
IF NOT EXISTS
AUTO_INCREMENT
Then restore this backup into your DB: it will drop and recreate the tables you need.
Then back up just the data, restore it the same way, and it will work.
How about using the mysql client like this:
mysql -h <hostname> -u username -p <databasename> < file.sql

Using Cassandra PDO Driver on Windows

Is there any way to get Cassandra PDO working on Windows with WAMP?
This is for development purposes; I don't want to install Linux and change the whole environment.
https://code.google.com/a/apache-extras.org/p/cassandra-pdo/
I'm using Windows 7 (64 Bit), Wamp 2.5, PHP 5.5.
OK, here's what I found out:
1) It's totally possible
2) The docs that appear in the first google search results are a bit obsolete
Start by downloading the latest Datastax Community Cassandra here:
http://planetcassandra.org/cassandra/
Install and set it up properly. In fact, most of the configuration is done by the installer; you just have to edit the apache-cassandra/conf/cassandra.yaml file to find all the paths to /var/lib... and change them into something like d:/cassandra/... That includes "commitlog", "data", and "saved_caches". Restart the Cassandra service and examine the logs. Mine showed no problems. The OpsCenter at ...:8888/opscenter/index.html was working fine, showing one node online.
Now, the PHP part.
There's a sneaky thing called Thrift. From what I've learned today (I first heard about Cassandra and Thrift yesterday), it's a way to describe a binary protocol for connecting to some online service, in this case Cassandra. It will basically generate PHP files that provide all the connectivity you need from PHP itself (no extensions needed).
You will need:
1) The Thrift PHP libs
2) The .exe Thrift compiler
Both can be downloaded here:
https://thrift.apache.org/download
Then use the following command to compile PHP files that will act as a "driver" to connect your PHP applications to Cassandra:
thrift --gen php D:\DataStaxCommunity\apache-cassandra\interface\cassandra.thrift
Put the result in some PHP include_path folder.
Also, find the PHP Thrift libs (in the libs archive from the same download page) and put those in a folder accessible to your script (e.g. include_path or the project folder).
Refer to this page:
thrift.apache.org/lib/php
I guess that should help!
I had the same problem as you, but when I tried this method, it worked correctly for me.
Reference link
Here is a code example, very easy to understand:
<?php
require_once 'Cassandra/Cassandra.php';
$o_cassandra = new Cassandra();
$s_server_host = '127.0.0.1'; // Localhost
$i_server_port = 9042;
$s_server_username = ''; // We don't use username
$s_server_password = ''; // We don't use password
$s_server_keyspace = 'cassandra_tests';
$o_cassandra->connect($s_server_host, $s_server_username, $s_server_password, $s_server_keyspace, $i_server_port);
$s_cql = "CREATE TABLE carles_test_table (s_thekey text, s_column1 text, s_column2 text,PRIMARY KEY (s_thekey));";
$st_results = $o_cassandra->query($s_cql);
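Assuming the same third-party Cassandra.php wrapper, follow-up statements go through the same query() call, e.g.:

// Follow-up usage, assuming the same third-party Cassandra.php wrapper.
$s_cql = "INSERT INTO carles_test_table (s_thekey, s_column1, s_column2)
          VALUES ('key1', 'value1', 'value2');";
$st_results = $o_cassandra->query($s_cql);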

Oracle reproducing timeouts IDLE_TIME/CONNECT_TIME

I need some help with Oracle settings to reproduce some issues we're having. To clarify, I'm not an Oracle expert at all; I have no experience with it.
I've managed to install Oracle XE (because it's the easiest and smallest) and got our software running on it.
Now, reports say that connections in some production setups time out (long-running scripts/programs) and are not reconnected (without even throwing exceptions).
So that's what I'm trying to reproduce.
After some browsing around on the internet I found that running these queries should limit my connection & idle time to 1 minute:
ALTER PROFILE DEFAULT LIMIT IDLE_TIME 1;
ALTER PROFILE DEFAULT LIMIT CONNECT_TIME 1;
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
Results in:
SQL> select * from user_resource_limits where resource_name in ('IDLE_TIME','CONNECT_TIME');
RESOURCE_NAME                    LIMIT
-------------------------------- ----------------------------------------
IDLE_TIME                        1
CONNECT_TIME                     1
After that I made a simple PHP script, test.php; it runs a query, sleeps, and runs a new query:
require_once('our software');
$account1 = findAccount('email1@example.com');
sleep(100);
$account2 = findAccount('email2@example.com');
Isn't this supposed to time out?
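For reference, a bare oci8 equivalent of that test (placeholder credentials, no wrapper involved) with explicit error checks would be:

<?php
// Bare oci8 reproduction with placeholder credentials, checking
// errors explicitly so a dropped connection becomes visible.
$conn = oci_connect('user', 'password', 'localhost/XE');

$stmt = oci_parse($conn, 'SELECT 1 FROM dual');
oci_execute($stmt);

sleep(100); // longer than the 60-second IDLE_TIME limit

$stmt = oci_parse($conn, 'SELECT 1 FROM dual');
if (!@oci_execute($stmt)) {
    // Expect something like ORA-02396 (exceeded maximum idle time)
    // once the profile limit is enforced.
    var_dump(oci_error($stmt));
}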
Some extra details about what software I'm running:
Centos 5
Oracle XE
php 5.3.5
using oci8(not pdo)

PHP ODBC Extension - Data Source Name Too Long

I've been trying to install the ODBC extension for PHP onto one of our servers (Red Hat), and I think I got it installed correctly, but now when I try to test the connection I get an error message about the data source name being too long... It sounds like a simple thing to fix, but I can't work out how, or where.
Basically these are the settings I've got at the moment:
# odbcinst -j
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /root/.odbc.ini
SQLULEN Size.......: 4
SQLSETPOSIROW......: 2
I've got my MySQL driver defined in the odbcinst.ini as such:
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/libmyodbc5.so
Setup = /usr/lib/libodbcmyS.so
FileUsage = 1
I've double checked and the Driver & Setup paths are correct and pointing to the correct files.
So now I'm trying to add a System Data Source by editing the odbc.ini file. I've tried it in various different formats, following examples from different sites, e.g.
http://developer.mindtouch.com/en/kb/Using_the_ODBC_extension_on_Linux#Install_unixODBC
http://dev.mysql.com/doc/refman/5.0/en/connector-odbc-configuration-dsn-unix.html
As you can see, I've commented some of them out and tried different ones:
;[mytest]
;driver = MySQL
;Database = moodle
;Server = localhost
;Socket = /var/lib/mysql/mysql.sock
;[mynew]
;Description = MySQL
;Driver = MySQL
;SERVER = localhost
;USER = root
;PASSWORD =
;PORT = 3306
;DATABASE = moodle
[Default]
Driver = /usr/lib/libmyodbc5.so
Description = Connector/ODBC 5 Driver DSN
SERVER = localhost
PORT =
USER = root
Password =
Database = moodle
SOCKET =
However, whenever I run
isql -v
to see if there are any problems, I always get:
[IM010][unixODBC][Driver Manager]Data source name too long
[ISQL]ERROR: Could not SQLConnect
My googling around the error only ever seems to turn up results for people using a connection string within something like ASP, nothing about how to get it working on the server in this way...
Could anyone offer me any advice/help?
If you need more information, let me know.
Thank you!
When launching isql you have to specify the data source name as defined in the man page.
SYNOPSIS
isql DSN [UID [PWD]] [options]
OPTIONS
DSN Name of the data source you want to connect to.
Given your configuration in /etc/odbc.ini, you would launch isql -v Default to test your connection. In the configuration file, the data source name is the one you've defined between brackets.
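Once isql connects, the same DSN works from the PHP ODBC extension; a minimal sketch (the table name is a placeholder) would be:

<?php
// Connect using the DSN name defined between brackets in /etc/odbc.ini,
// here "Default", with the user/password from that configuration.
$conn = odbc_connect('Default', 'root', '');
if (!$conn) {
    die('ODBC connection failed: ' . odbc_errormsg());
}

// Placeholder query; replace with a real table from the moodle database.
$result = odbc_exec($conn, 'SELECT * FROM some_table');
while ($row = odbc_fetch_array($result)) {
    print_r($row);
}
odbc_close($conn);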
