Anyone have experience integrating with MYOB? - php

Looking to integrate a web application with MYOB. There's not much in terms of documentation out there. I've found a couple of companies that provide middleware, but nothing promising. Just thought I'd see if anyone else out there has had experience with this and might be able to save me a bit of time.
Cheers

It depends on what you want to do with the MYOB database.
Retrieving information is fairly easy; there is a MYOB ODBC connector you can install.
Write access requires you to be a MYOB developer partner, which costs a lot more per year.
I did a job synchronizing data from a MYOB database to MySQL, so all updates in the MYOB database are reflected in MySQL, but not the other way around.
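In case it helps, a one-way sync along those lines can be done with plain PDO. This is only a rough sketch: it assumes the MYOB ODBC driver is registered as a DSN called MYOB, and the table and column names are made up, so check them against your own company file.

    <?php
    // One-way MYOB -> MySQL mirror (sketch). The DSN, credentials, table and
    // column names below are placeholders -- adjust to your own setup.
    $myob  = new PDO('odbc:MYOB', 'Administrator', '');
    $mysql = new PDO('mysql:host=localhost;dbname=mirror', 'user', 'pass');

    $rows = $myob->query('SELECT CustomerID, Name, CurrentBalance FROM Customers');

    $upsert = $mysql->prepare(
        'INSERT INTO customers (myob_id, name, balance) VALUES (?, ?, ?)
         ON DUPLICATE KEY UPDATE name = VALUES(name), balance = VALUES(balance)'
    );

    foreach ($rows as $row) {
        $upsert->execute([$row['CustomerID'], $row['Name'], $row['CurrentBalance']]);
    }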

You may want to wait...
MYOB is overhauling their '90s-style application (NetBIOS, etc.), moving it to the .NET Framework with an API (they said it would be done Q2 2009; that has slipped to Q2 2010).
Their SQL syntax is non-compliant and supports only INSERT and SELECT, yes, that's right, only two statements... (the idea is that if you screw up a query, say to an invoice, you have to write an INSERT into the credit table to correct it).
Unfortunately, read/write functionality doesn't come with the MYOB licence. If I remember correctly it was AUD $250 one-off for read and AUD $800/yr for write permissions. Most companies can afford it, as it cuts down on labor.
The ODBC driver is awkward to use because it launches the MYOB application and leaves it running in the background (you may want to create your own REST service to query instead; a rough sketch is below).
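By way of illustration, such a wrapper can be very small. Everything here is hypothetical (DSN name, resource names, queries), and you would want authentication in front of it before using it for real:

    <?php
    // myob_api.php -- minimal read-only wrapper around the MYOB ODBC driver,
    // so only this one machine has to keep the MYOB application running.
    header('Content-Type: application/json');

    // Whitelisted queries; the table names are placeholders.
    $allowed = [
        'customers' => 'SELECT * FROM Customers',
        'invoices'  => 'SELECT * FROM Sales',
    ];

    $resource = $_GET['resource'] ?? '';
    if (!isset($allowed[$resource])) {
        http_response_code(404);
        echo json_encode(['error' => 'unknown resource']);
        exit;
    }

    $myob = new PDO('odbc:MYOB', 'Administrator', '');
    echo json_encode($myob->query($allowed[$resource])->fetchAll(PDO::FETCH_ASSOC));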

Related

Application logs in database or file

I want to build a detailed logger for my application. Because it can get very complex and has to save a lot of different things, I wonder where it is best to save the logs: in a database (and if a database, which kind of database is better for this kind of operation) or in a file (and if a file, what format: text, CSV, JSON, XML). My first thought was of course a file, because with a database I see a lot of problems, but I also want to be able to display those logs, and that is easier with a database.
I am building a log for HIPAA compliance, and here is my rough implementation (not finished yet).
File VS. DB
I use a database table to store the last 3 months of data. Every night a cron will run and push the older data (data past 3 months) off into compressed files. I haven't written this script yet but it should not be difficult. That way the last 3 months can be searched, filtered, etc. But the database won't be overwhelmed with log entries.
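For what it's worth, the nightly archive job could be as small as the sketch below. The table name, date column and file path are assumptions (my actual table layout is described further down), and the dump format is simply one JSON object per line.

    <?php
    // archive_logs.php -- nightly cron sketch: dump log rows older than 3 months
    // to a gzip file, then delete them from the table.
    $db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
                  [PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC]);
    $cutoff = date('Y-m-d', strtotime('-3 months'));

    $old = $db->prepare('SELECT * FROM audit_log WHERE `Date` < ?');
    $old->execute([$cutoff]);

    $file = gzopen('/var/log/app/audit-before-' . $cutoff . '.jsonl.gz', 'w');
    foreach ($old as $row) {
        gzwrite($file, json_encode($row) . "\n");
    }
    gzclose($file);

    $db->prepare('DELETE FROM audit_log WHERE `Date` < ?')->execute([$cutoff]);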
Database Preference
I am using MSSQL because I don't have a choice. I usually prefer MySQL though, as it has better paging optimization. If you are doing more than a very minimal amount of searching and filtering, or if you are concerned about performance, you may want to consider an Apache Solr middleman. I'm not a DB expert, so I can't give you much more than that.
Table Structure
My table has 5 columns: Date, Operation (create, update, delete), Object (patient, appointment, doctor), ObjectID, and Diff (a serialized array of before-and-after values; changed values only, no empty or unchanged values, for the sake of saving space).
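In MySQL terms (I'm on MSSQL, so treat this as a sketch rather than my actual DDL), that table and a small helper that writes an entry might look like:

    <?php
    $db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    $db->exec('CREATE TABLE IF NOT EXISTS audit_log (
        `Date`    DATETIME    NOT NULL,
        Operation VARCHAR(10) NOT NULL,   -- create / update / delete
        Object    VARCHAR(30) NOT NULL,   -- patient / appointment / doctor
        ObjectID  INT         NOT NULL,
        Diff      TEXT        NOT NULL,   -- serialized changed values only
        INDEX (Object, ObjectID),
        INDEX (`Date`)
    )');

    function log_change(PDO $db, $op, $object, $id, array $before, array $after) {
        // Keep only the fields that actually changed, to save space.
        $diff = [];
        foreach ($after as $field => $value) {
            if (!array_key_exists($field, $before) || $before[$field] !== $value) {
                $diff[$field] = ['old' => $before[$field] ?? null, 'new' => $value];
            }
        }
        $db->prepare('INSERT INTO audit_log VALUES (NOW(), ?, ?, ?, ?)')
           ->execute([$op, $object, $id, serialize($diff)]);
    }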
Summary
The most important piece to consider is: do you need people to be able to access and filter/search the data regularly? If yes, consider a database for the recent history or the most important data.
If not, a file is probably a better option.
My hybrid solution is also worth considering. I'll be pushing the files off to an Amazon file server so they don't take up my web server's space.
You can create a detailed and complex logger using an existing library like log4php. It is fully tested, performs well compared to something you design yourself, and it will also save development time. I have personally used a few such libraries from PHP and .NET for complex logging needs in financial and medical domain projects.
If you need to do this from PHP, I would suggest using this:
https://logging.apache.org/log4php/
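A minimal example of wiring it up (file paths and appender settings here are only illustrative; see the documentation linked above for the full set of options):

    <?php
    require_once 'log4php/Logger.php';   // adjust to wherever log4php is installed

    Logger::configure([
        'appenders' => [
            'default' => [
                'class'  => 'LoggerAppenderRollingFile',
                'layout' => ['class'  => 'LoggerLayoutPattern',
                             'params' => ['conversionPattern' => '%date %-5level %message%n']],
                'params' => ['file' => 'app.log', 'maxFileSize' => '5MB', 'maxBackupIndex' => 10],
            ],
        ],
        'rootLogger' => ['appenders' => ['default']],
    ]);

    $log = Logger::getLogger('billing');
    $log->info('Invoice created');
    $log->error('Payment gateway timeout');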
I think the right answer is actually: Neither.
Neither a file nor a DB gives you proper search and filtering, and you need both when looking at logs. I deal with logs all day long (see http://sematext.com/logsene to see why), and I'd tackle this as follows:
log to file (a shipper-friendly sketch is after this list)
use a lightweight log shipper (e.g. Logagent or Filebeat)
index logs into either your own Elasticsearch cluster (if you don't mind managing and learning) or one of the Cloud log management services (if you don't want to deal with Elasticsearch management, scaling, etc. -- Logsene, Loggly, Logentries...)
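To make the "log to file" step shipper-friendly, write one JSON object per line; Logagent or Filebeat can then tail the file and forward it. A rough sketch, with the path and field names being assumptions:

    <?php
    // Append one JSON object per line -- easy for log shippers to tail and parse.
    function ship_log($level, $message, array $context = []) {
        $entry = [
            '@timestamp' => gmdate('c'),
            'level'      => $level,
            'message'    => $message,
        ] + $context;
        file_put_contents('/var/log/app/app.json.log',
                          json_encode($entry) . "\n",
                          FILE_APPEND | LOCK_EX);
    }

    ship_log('info', 'user logged in', ['user_id' => 42]);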

Online and offline synchronization

I am working on a project that has to work both online and offline because the Internet connection is unstable. I have come up with a possible solution: create two similar databases, one online and one offline, and sync the two. My question is: is this a good method, or are there better options?
I have researched the subject online but haven't come across anything substantive. One useful link I found was on database replication, but I want the offline version to detect an Internet connection and sync accordingly.
Can you help me find solutions or clues to solve my problem?
I'd suggest you have online storage for syncing and a local database (browser IndexedDB, SQLite in a program, or something similar). Log all your changes in your local database, but keep a record of which data was entered after the last sync.
When you have a connection, you sync all new data with the online storage at set intervals (like once every 5 minutes, or a constant stream if you have the bandwidth/CPU capacity).
When the user logs in from a "fresh" location, the online database pushes all data to the client, which fills the local database with the data and then resumes normal syncing.
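As a rough illustration of that flow, the server-side sync endpoint could look something like this (the table, column names and JSON format are assumptions, and there is no conflict handling):

    <?php
    // sync.php -- the client POSTs its queued changes plus the timestamp of its
    // last sync; the server applies them and returns everything newer than that.
    $db   = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $body = json_decode(file_get_contents('php://input'), true);

    // 1. Apply the client's queued changes (client_uuid must be a UNIQUE key).
    $insert = $db->prepare(
        'INSERT INTO items (client_uuid, payload, updated_at) VALUES (?, ?, NOW())
         ON DUPLICATE KEY UPDATE payload = VALUES(payload), updated_at = NOW()'
    );
    foreach ($body['changes'] as $change) {
        $insert->execute([$change['uuid'], json_encode($change['data'])]);
    }

    // 2. Send back everything changed on the server since the client's last sync.
    $newer = $db->prepare('SELECT client_uuid, payload, updated_at FROM items WHERE updated_at > ?');
    $newer->execute([$body['last_sync']]);

    header('Content-Type: application/json');
    echo json_encode(['server_time' => gmdate('c'),
                      'changes'     => $newer->fetchAll(PDO::FETCH_ASSOC)]);

Having the client generate UUIDs for its records, rather than relying on auto-increment ids, also sidesteps the conflicting-key problem described in the next answer.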
Plan A: Primary-Primary replication (formerly called Master-Master). You do need to be careful with PRIMARY KEYs and UNIQUE keys. While the "other" machine is offline, you could write conflicting values to a table. Later, when they try to sync up, replication will freeze, requiring manual intervention. (Not a pretty sight.)
Plan B: Write changes to some storage other than the db. This suffers the same drawbacks as Plan A, plus there is a bunch of coding on your part to implement it.
Plan C: Galera cluster with 3 nodes. When all 3 nodes are up, all can take writes. If one node goes down, or network problems make it seem offline to the other two, it will automatically become read-only. After things get fixed, the sync is done automatically.
Plan D: Only write to a reliable Primary; let the other be a read-only Replica. (But this violates your requirement about an "unstable Internet".)
None of these perfectly fits the requirements. Plan A seems to be the only one that has a chance. Let's look at that.
If you have any UNIQUE key in any table and you might insert new rows into it, the problem exists. Even something as innocuous as a 'normalization table' wherein you insert a name and get back an id for use in other tables has the problem. You might do that on both servers with the same name and get different ids. Now you have a mess that is virtually impossible to fix.
Not sure if it's outside the scope of the project, but you can try these:
https://pouchdb.com/
https://couchdb.apache.org/
" PouchDB is an open-source JavaScript database inspired by Apache CouchDB that is designed to run well within the browser.
PouchDB was created to help web developers build applications that work as well offline as they do online.
It enables applications to store data locally while offline, then synchronize it with CouchDB and compatible servers when the application is back online, keeping the user's data in sync no matter where they next login. "

Practicality of multiple databases per client vs one database

I'm going to try to make this as brief as possible while covering all points - I work as a PHP/MySQL developer currently. I have a mobile app idea with a friend and we're going to start developing it.
I'm not saying it's going to be fantastic, but if it catches on, we're going to have a LOT of data.
For example, we'd have "clients," for lack of a better term, who would have anywhere from 100 to 250,000 "products" listed. Assuming the best, we could have hundreds of clients.
The client would edit data through a web interface, the mobile interface would just make calls to the web server and return JSON (probably).
I'm a lowly CMS-developing kind of guy, so I'm not sure how to handle this. My question is more or less about performance; the most I've ever seen in a MySQL table was 340k rows, and it was already sort of slow (granted, it wasn't the best server either).
I just can't fathom a table with 40 million rows (and potential to continually grow) running well.
My plan was to have a "core" database that holds the name of the "real" database: when a user comes in and tries to access a client's data, the app would go to the core database and figure out which database to get the information from.
I'm not concerned with data separation or data security (it's not private information).
Yes, it's possible, and my company does it. I'm certainly not going to say it's smart, though. We have a SaaS marketing automation system. Some clients' databases have 1 million+ records. We deal with a second "common" database that has a "fulfillment" table tracking emails, letters, phone calls, etc. with over 4 million records, plus numerous other very large shared tables. With proper indexing, optimizing, maintaining a separate DB-only server, and possibly clustering (which we don't yet have to do) you can handle a LOT of data... in many cases, those who think it can only handle a few hundred thousand records work on a competing product for a living. If you still doubt whether it's valid, consider that per MySQL's clustering metrics, an 8-server cluster can handle 2.5 million updates PER SECOND. Not too shabby at all.
The problem with using two databases is juggling multiple connections. Is it tough? No, not really. You create different objects and reference your connection classes based on which database you want. In our case, we hit the main database's company class to deduce the client DB name and then build the second connection based on that. But when you're juggling those connections back and forth, you can run into errors that require extra debugging. It's not just "Is my query valid?" but "Am I actually getting the correct database connection?" In our case, a dropped session can cause all sorts of PDO errors to fire because the system can no longer keep track of which client database to access. Plus, from a maintainability standpoint, it's a scary process trying to push table structure updates to 100 different live databases. Yes, it can be automated. But one slip-up and you've knocked a LOT of people down and made a ton of extra work for yourself. Now, calculate the extra development and testing required to juggle connections and push updates... that will be your measure of whether it's worthwhile.
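For illustration, the lookup-then-connect step is roughly this (the core table and column names are made up; our real company class does more than this):

    <?php
    // Look up which client database to use in the "core" database, then open a
    // second PDO connection to it.
    function client_connection(PDO $core, $clientId) {
        $stmt = $core->prepare('SELECT db_name FROM clients WHERE id = ?');
        $stmt->execute([$clientId]);
        $dbName = $stmt->fetchColumn();

        if ($dbName === false) {
            throw new RuntimeException("Unknown client $clientId");
        }
        return new PDO("mysql:host=db-server;dbname=$dbName", 'app', 'secret',
                       [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
    }

    $core   = new PDO('mysql:host=db-server;dbname=core', 'app', 'secret');
    $client = client_connection($core, 42);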
My recommendation? Find a host that allows you to put two machines on the same local network. We chose Linode, but who you use is irrelevant. Start out with your dedicated database server, plan ahead to do clustering when it's necessary. Keep all your content in one DB, index and optimize religiously. Finally, find a REALLY good DB guy and treat him well. With that much data, a great DBA would be a must.

Calling from an Sybase Database ODBC

I am writing a scheduling program for my company and I wanted to pull information from our Management Information System to supplement the schedule. The MIS has information on all of the jobs we need to run including due dates, piececounts, operations, estimated run times and other valuable information for a scheduler. I talked to support for the software and they basically stonewalled me. They kept avoiding my questions.
When I forced the issue by having the CEO call them, they gave up that the database was a Sybase database and that it was ODBC compliant. Then they sent me a 500-page document of the data mappings of the database, but no explanation. Looking through it, I can tell a lot of it is just general settings for the software, and I believe I found the tables that store the job information. But I have no idea what the fields in the tables are.
I connected over ODBC successfully in a Python interpreter shell. I did a SELECT * FROM the table and got a ton of information back, but I don't know what I selected. Is there any way to see what fields I'm collecting information from?
So basically I am asking if there is a way to know what information I drew from a table without knowing the field names.
Thanks
If I were the CEO, my first thought would be to buy scheduling software before I'd ask an individual or team in my company to write such a thing. It's a difficult but important problem. Why would you want to develop, debug, and maintain such a thing? It's been solved. I'd rather just use an existing solution. Just saying.
"I am asking if there is a way to know what information I drew from a table without knowing the field names."
The field names and types are the easy part. You can ask Sybase to describe the table (sp_help tablename, for example), and it'll give you all the column names and types.
But it won't have any meta-data that gives you business context for what they mean. You'll have to go back to that MIS group, domain experts, or know the process well yourself to figure that out.
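You can also get the raw column names and types straight from the ODBC result set itself. The question mentions Python, but the same idea from PHP's PDO ODBC driver looks like this (the DSN and table name are placeholders):

    <?php
    $db   = new PDO('odbc:MIS_DSN', 'user', 'pass');
    $stmt = $db->query('SELECT * FROM job_table');

    // The column names are simply the keys of an associative fetch...
    $first = $stmt->fetch(PDO::FETCH_ASSOC);
    print_r(array_keys($first));

    // ...and, where the driver supports it, type/length info is available too.
    for ($i = 0; $i < $stmt->columnCount(); $i++) {
        print_r($stmt->getColumnMeta($i));   // name, native_type, len, ...
    }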

Basic Mysql and PHP accounting application

I am not a programmer, but I have been tasked with making a basic accounting app for a small business.
I was looking at pbooks, but I am not sure this is customizable enough for my needs.
I need to be able to count, each day, how many food items are sold, how many drink items, and how many guests there are, and then tie orders of food and drinks to guests when it is a guest who purchases them.
Is pbooks customizable enough to do this? Using the live demo, you do not seem to be able to generate reports just for date ranges or for certain customers; perhaps there is a better bookkeeping solution?
Otherwise, I think I have enough MySQL know-how to do this, and the PHP code should mostly just be a matter of getting the queries right.
Right?
Additionally, can anyone recommend a live demonstration of such a system?
I have not been able to find any live demos where I can demonstrate how you can generate reports for a given time period, or show the total sales for a guest or such.
I need to demonstrate this to show why it is a much better solution than an ever-expanding mess of an Excel workbook.
Hire a programmer
Seriously. It will cost less and be done faster and correctly. I would agree that just about anything is better than an Excel spreadsheet, but a beginner's rendition of an accounting application is not one of them.
"Otherwise, I think I have enough MySQL know-how to do this, and the PHP code should mostly just be a matter of getting the queries right. Right?"
Wrong
The PHP code will be much more complex than simply "getting the queries right."
"Additionally, can anyone recommend a live demonstration of such a system? I have not been able to find any live demos where I can demonstrate how you can generate reports for a given time period, or show the total sales for a guest or such."
If you can't install free, open-source, community-backed software on your own, then you should not be tasked with this job.
Again, my suggestion would be to either hire a programmer who knows exactly what they are doing, or seek support from the communities of the projects you are interested in. This is not a discussion forum.
There's phpBMS, or online services like FreeAgent or FreshBooks, which are good options if you need to use the package for IRS returns, because they're kept up to date with the latest tax rules.
http://www.nolapro.com/
Try it; it's free web-based accounting software.
http://bambooinvoice.org/
I've seen this employed with reasonable degrees of success for smaller-scope accounting (keeping records, emailing invoices, generating PDFs). It's built on CodeIgniter, so if you're familiar with that framework's approach to MVC, it is also reasonably extensible.
You need a POS system, not an accounting system
I just came across Akaunting, which is still being continually updated as of 2020. I haven't tested it yet.
I also ran into Simple Accounting System, but it looks rather simple and disorganized from looking at the files on SourceForge. I haven't actually tried it, and it hasn't had updates since early 2015.
