I currently have two ODBC connections set up on my web server: one that connects to our enterprise QAD database, and another that connects to the custom database we use to extend it. In this particular example I have my employee records in the QAD database, and an employee number in another table in the custom database.
Is there any way to set up a cross join between the two ODBC connections in PHP, so that I don't have to loop through the results of the first query and send several follow-up queries based on the returned results just to tie my records together in a PHP array?
The best I've been able to come up with is to build an IN clause from my first query against our custom database, send the second query to the QAD database, and then do an array merge in PHP. However, this is an extremely slow process compared to a normal SQL join.
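For reference, a loop-free version of that approach (one IN clause, then a keyed merge instead of one query per employee) might look something like this in PHP. All DSN, table, and column names here are made up for illustration:

```php
<?php
// Sketch only: 'CustomDSN', 'QadDSN' and all table/column names are
// hypothetical placeholders, not taken from the original setup.
$custom = odbc_connect('CustomDSN', 'user', 'pass');
$qad    = odbc_connect('QadDSN', 'user', 'pass');

// 1) Pull the employee numbers (and extension data) from the custom DB.
$emp = array();
$res = odbc_exec($custom, "SELECT emp_nbr, custom_field FROM pub.custom_emp");
while ($row = odbc_fetch_array($res)) {
    $emp[$row['emp_nbr']] = $row;   // key the array by employee number
}

// 2) One IN clause instead of one query per employee.
$in  = "'" . implode("','", array_keys($emp)) . "'";
$res = odbc_exec($qad,
    "SELECT emp_nbr, emp_name FROM pub.emp_mstr WHERE emp_nbr IN ($in)");

// 3) Merge on the shared key in PHP.
while ($row = odbc_fetch_array($res)) {
    $emp[$row['emp_nbr']] += $row;  // array union keeps existing keys
}
?>
```

Keying both result sets on the employee number at least avoids a nested loop, but as noted it's still much slower than a real SQL join.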
Not sure if you've already found a solution, but there is a Progress article on how to do this:
Quick Guide To Setting Up MultiDatabase ODBC Connectivity
I had a similar requirement - I wanted to create a join between a table in the primary QAD database and a custom table in our custom database. I have tested this and it works well, although my setup is slightly different: I needed to connect to QAD from Microsoft SSRS to create reports against the QAD data, because some of the reports I needed were beyond what the standard QAD report designer could handle.
I have tested this on Progress 10.1C (this method is only supported on 10.1B and later).
So the steps I took were:
Create the oesql.properties config file as per the article, referencing both the primary and the custom database.
Create the ODBC System DSN on the client machine (in my case a Windows Server 2008 R2 machine running SQL Server 2008 R2 with SSRS) with the additional database references as per the article.
Create a Linked Server in SQL Server via the ODBC DSN.
Create a view which uses the OpenQuery syntax to extract data from QAD (in my case this was created in the ReportServer database) via the linked server.
Create a standard T-SQL query using the view from the previous step as the data source. This was ultimately the data source for my SSRS report.
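The linked-server step above might look something like this in T-SQL, assuming a System DSN called QADDSN has already been created - all names and credentials here are placeholders, not taken from the original setup:

```sql
-- Create a linked server over the ODBC System DSN (names are placeholders).
-- MSDASQL is the Microsoft OLE DB Provider for ODBC.
EXEC master.dbo.sp_addlinkedserver
     @server     = N'LinkedServerName',
     @srvproduct = N'Progress',
     @provider   = N'MSDASQL',
     @datasrc    = N'QADDSN';      -- the System DSN from the previous step

-- Map local logins to the credentials the DSN should use.
EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname  = N'LinkedServerName',
     @useself     = N'False',
     @rmtuser     = N'qaduser',
     @rmtpassword = N'qadpass';
```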
I believe it is important that the bitness (32/64-bit) of the OS/database and of the ODBC drivers match, but I haven't confirmed this yet.
Whilst my requirement is different to yours, ultimately it's the QAD server config and ODBC setup that are key. As long as your PHP client can do something equivalent to the OpenQuery command, you may get this working. I don't have any experience with PHP, so can't help you there.
It seems a bit convoluted but works very well, and in a lot of cases it actually outperforms querying the data through QAD browses!
Hope this helps.
Edit:
Here's a sample of an OpenQuery command - you can see that the table joins work in the normal way but just require an additional qualifier in the table reference.
CREATE VIEW [dbo].[vQADData] AS SELECT * FROM OPENQUERY(LinkedServerName,
'
SELECT custTable.item_date AS DESP_DATE, so_mstr.so_site AS SITE, so_mstr.so_po AS PO_NO, so_mstr.so_inv_nbr AS INV_NO,
ad_mstr.ad_name AS ADNAME, ad_mstr.ad_city AS ADCITY, ad_mstr.ad_state AS ADSTATE
FROM customdbname.pub.customtable custTable
INNER JOIN pub.so_mstr ON so_mstr.so_nbr = custTable.so_nbr
INNER JOIN pub.ad_mstr ON ad_mstr.ad_addr = so_mstr.so_ship
INNER JOIN pub.sod_det ON sod_det.sod_nbr = custTable.so_nbr
WHERE so_mstr.so_site = ''SiteName'' AND so_mstr.so_shipvia = ''SHIPPER'' AND custTable.item_date IS NULL
')
Then just access the view using normal SQL syntax.
SELECT * FROM vQADData
Thanks to Tiran for his suggested solution. For those people that are trying to reference multiple tables via SQL server as Tiran was doing, I have additional input.
I'm trying to pull data from multiple sources (Progress) with the same table structure at the same time and insert it into our data warehouse (SQL Server) - essentially a union of several identically structured tables in different databases. Tiran's solution started me down that path, but linking the Progress databases was a cumbersome process that required me to find a Progress DBA with 2-3 days of free time (his estimate) to put it together. When I spoke with people at Progress directly, they also pointed out that if I created a view with a union on the Progress side, it would extract the data from each source in the view sequentially, not simultaneously. However, this led me to another discovery that looks like it will solve our needs and skips dealing with linked tables on the Progress side entirely.
Here's an example with three sources and the same tables (this should work for cross-source joins of different tables as well). All names here are provided just for clarity.
Source 1 - Table_A
Source 2 - Table_A
Source 3 - Table_A
Create an ODBC connection to Source 1 named source1.
Create an ODBC connection to Source 2 named source2.
Create an ODBC connection to Source 3 named source3.
(Note: you typically want to set the connection to Read Uncommitted.)
In SQL Server, create linked Server connections to each Source.
ls_source1
ls_source2
ls_source3
In the SQL Server database from which you need to reference the Progress databases, create a view joining the three linked server connections together with a union. Each linked server reference needs to use OPENQUERY. This example, using SELECT * from each linked server source, presumes that all columns are named and structured the same in each source.
CREATE VIEW table_name_v AS
SELECT *
FROM
(
    SELECT *
    FROM OPENQUERY(ls_source1, 'SELECT * FROM source1.dbo.Table_A')
    UNION
    SELECT *
    FROM OPENQUERY(ls_source2, 'SELECT * FROM source2.dbo.Table_A')
    UNION
    SELECT *
    FROM OPENQUERY(ls_source3, 'SELECT * FROM source3.dbo.Table_A')
) x
With the view created, you can now query all three tables in the different Progress sources at the same time. No extra setup on the Progress side is necessary.
There is an important caveat that I'm currently working on a workaround for. If you are on a 64-bit machine using 64-bit SQL Server, you need a 64-bit driver to connect to the Progress database via the linked server option. My needs require both the 32-bit and 64-bit drivers on the same machine, and I have run into issues with that, as apparently they don't play nicely together. I was eventually able to install both (there was a glitch on Progress' website that was supposed to send me a link for the 64-bit ODBC driver, but I got someone there to direct me to the correct place to retrieve it). The average person should not need both drivers and can just use the 64-bit one. As an alternative workaround, if I can't get both drivers co-existing on the same machine, I've found and confirmed that the company Connx provides a driver that acts as a 64-bit/32-bit bridge, which resolves that issue for me. Ideally, though, no third-party software will be necessary.
A new issue has cropped up now, unfortunately, as the linked servers that I set up and was using are no longer functioning properly. Two steps forward, one step back...
Just thought I'd share my findings as I'm sure there are others looking.
Short Answer: You can't JOIN tables between two connections.
Scenarios: (all of them in one single connection)
By default, in most databases, you can join tables in different schemas by prefixing the table with the schema name, like this:
(...) FROM defaultDB.TableA INNER JOIN extensionDB.TableA ON ({Condition}) (...)
Depending on your database (I don't know Progress deeply), you may not be able to join tables that belong to schemas on different servers.
Joining tables in different database products (e.g. Progress x MySQL) is even more complicated. I've heard of Oracle Gateway, a proprietary solution that (I'm not really sure) could achieve this last scenario.
In summary:
If your situation does not fit the first scenario (which is the most obvious approach), I guess the shortest path would be profiling your code and optimizing possible performance bottlenecks. Adapting your code for parallel processing could be a bolder improvement.
I'm trying to develop an Android app that provides information on a chosen topic. All the information is stored in a MySQL database with one table for each topic. What I want to achieve is that when the user chooses a topic, the corresponding table is downloaded to SQLite so that it can be used offline. Also, any changes to that particular table in MySQL should be synced to the SQLite db automatically the next time the phone connects to the Internet.
I have understood how to achieve the connection using PHP and HTTP requests. What I want to know is the best logic to sync any entries in a particular table in the MySQL database to the one in SQLite. I read about using various sync services but I don't understand how to use them. All my tables have exactly the same schema, so is there an efficient way to achieve the sync?
I have a decent knowledge in SQL but I'm kinda new to Android.
I was wondering whether anyone knows if it is possible in Zend to run one query that connects to two different databases on two different servers and combines the results into one result set?
You can federate one database into the other via a wrapper. You reference tables via nicknames, and then you execute a query normally, as if both tables were in the same database (joining, sorting, etc.).
Federation is free between DB2 databases or with Informix (because it is from IBM). If you want to federate another data source (Oracle, Excel, flat files), you have to buy that capability separately.
With federation, you do not need to do the join at the application level; it happens at the database level.
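As a rough sketch of what DB2 federation looks like on the SQL side - all object names, versions, and credentials below are illustrative, not taken from any real setup:

```sql
-- Federation sketch (DB2); every name and option value here is invented.
CREATE WRAPPER DRDA;                               -- DRDA wrapper ships with DB2

CREATE SERVER REMOTE_DB TYPE DB2/UDB VERSION '11.5'
       WRAPPER DRDA OPTIONS (DBNAME 'REMOTEDB');

CREATE USER MAPPING FOR local_user SERVER REMOTE_DB
       OPTIONS (REMOTE_AUTHID 'remoteuser', REMOTE_PASSWORD 'secret');

-- The nickname makes the remote table usable like a local one.
CREATE NICKNAME remote_customers FOR REMOTE_DB.APPSCHEMA.CUSTOMERS;

-- Now a plain join works across both databases.
SELECT o.order_id, c.cust_name
FROM   local_schema.orders o
JOIN   remote_customers c ON c.cust_id = o.cust_id;
```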
Thanks for taking an interest in this post, basically I'm looking for some advice on working with data from different database implementations within PHP, or if PHP isn't suitable for these tasks, any recommendations regarding other approaches.
The task I would like to accomplish can be illustrated with the following example. I have a MySQL database where I store demographic information regarding users organised by user_id on 'server A', this runs to about 200,000 rows. On server B, I have users usage data stored by user_id and event_id in a Vertica database that runs to about 300,000,000 rows.
I would like to find a way to join these datasets so I can produce summarised output consisting of aggregated user events from the Vertica database, grouped by data contained in the MySQL database such as age and location, through a join on the user_id field.
I realise that this could be accomplished by creating a copy of either of these tables on the other server but I'm curious if this can be achieved without it.
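To make the question concrete, the application-level join I have in mind would look roughly like this (all table and column names invented for illustration): aggregate in Vertica first, so only one grouped row per user crosses the wire, then join to the MySQL demographics in a PHP array keyed on user_id.

```php
<?php
// Rough sketch (PDO, invented schema): aggregate in Vertica, join in PHP.
$mysql   = new PDO('mysql:host=serverA;dbname=demo', 'user', 'pass');
$vertica = new PDO('odbc:VerticaDSN', 'user', 'pass');

// 1) Demographics keyed by user_id (~200,000 rows fits in memory).
$users = array();
foreach ($mysql->query("SELECT user_id, age, location FROM users") as $row) {
    $users[$row['user_id']] = $row;
}

// 2) Let Vertica do the heavy aggregation over the 300M-row table.
$sql = "SELECT user_id, COUNT(*) AS events FROM usage_data GROUP BY user_id";

// 3) Hash-join in PHP and re-aggregate by age/location.
$summary = array();
foreach ($vertica->query($sql) as $row) {
    if (!isset($users[$row['user_id']])) { continue; }
    $u   = $users[$row['user_id']];
    $key = $u['age'] . '|' . $u['location'];
    $summary[$key] = (isset($summary[$key]) ? $summary[$key] : 0)
                   + $row['events'];
}
?>
```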
My questions are:
Can PHP do operations like this? if so a link to an example would be really welcome.
Do you need to load the data into arrays and join there? Can you join arrays in PHP like tables in a database? Can PHP handle large arrays like this?
Are there any other approaches that I should be considering instead?
Thanks in advance for any help,
James
I suggest using Talend! It is an open-source ETL tool that has both MySQL and Vertica connectors built in.
You can aggregate data sets from any RDBMS as long as Talend has access to them, and then dump the results wherever you need.
Give it a try.
I have undertaken a small project which already involved an existing database. The application was written in PHP and the database was MySQL.
I am rewriting the application, yet I still need to maintain the database's structure as well as its data. I have received an SQL dump file, but when I try running it in SQL Server Management Studio I receive many errors. I wanted to know what workaround there is to convert the SQL script in the phpMyAdmin dump file to T-SQL.
Any Ideas?
phpMyAdmin is a front-end for MySQL databases. Dumping databases can be done in various formats, including SQL script code, but I guess your problem is that you are using SQL Server, and T-SQL is different from MySQL.
EDIT: I see the original poster was aware of that (there was no MySQL tag on the post). My suggestion would be to re-dump the database in CSV format (for example) and import it via BULK INSERT; for example, for a single table:
CREATE TABLE MySQLData [...]
BULK
INSERT MySQLData
FROM 'c:\mysqldata.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
This should work fine if the database isn't too large and has only a few tables.
You have more problems than making a script run, by the way: mapping the data types is definitely not easy.
Here is an article about migrating MySQL -> SQL Server via the DTS Import/Export wizard, which may well be a good way to go if your database is large (and you still have access to it, i.e. you don't only have the dump).
The syntax between T-SQL and MySQL is not a million miles apart; you could probably rewrite it through trial and error and a series of find-and-replace operations.
A better option would probably be to install MySQL and MySQL Connector, and restore the database using the dump file.
You could then create a Linked Server on the SQL server and do a series of queries like the following:
SELECT *
INTO SQLTableName
FROM OPENQUERY
(LinkedServerName, 'SELECT * FROM MySqlTableName')
MySQL's mysqldump utility can produce somewhat compatible dumps for other systems. For instance, use --compatible=mssql. This option does not guarantee compatibility with other servers, but might prevent most errors, leaving less for you to manually alter.
I'm in the process of trying to migrate my ASP.NET site to Django. All my current data is tied up in MS SQL Server on a shared host that (helpfully) doesn't support Python... I could use my existing code to export all the data, but I've also stopped using Windows. The site is mainly compiled VB.NET, so I'd need to install Windows and Visual Studio and then figure out what I'm trying to do... I don't like that plan.
Rather than go through that headache, I'd like to use PHP to export the entire database (or a table at a time) to JSON. simplejson in Python will make it ludicrously easy to import, so it seems like a plan.
So far, so good. In PHP I've managed to connect to the SQL Server and perform simple queries, but I need a list of tables so I know what data I need to copy. I want all the tables because there are legacy tables from when I rewrote the site about three years ago, and I'd like to keep that data somewhere...
So first thing: Does anybody know the SQL query for listing all tables?
I've tried mssql_query('sp_tables'); but this returns a strange list:
mydatabasename
mydatabasename
dbo
dbo
syscolumns
syscolumns
SYSTEM TABLE
SYSTEM TABLE
Secondly: In your opinion, would I be better off writing each table dump to its own .json file or should I keep it all together?
Thirdly: Am I going about this the wrong way?
SELECT table_name FROM information_schema.tables
For any given table you can get the metadata with:
sp_help tablename
You can query with:
SHOW TABLES;
(You need to select a DB before this.)
You can try this:
SELECT * FROM information_schema.tables