I have a client who has a feed of leads with Name, IP, Address, and Opt-In Time/Date, and I want them to be able to post the data to my hosted SQL database. If you are familiar with lead generation you will get what I'm trying to do.
I'd also like to know whether it's possible to write a script and place it on my server so that when someone posts a CSV file to it, the script automatically posts the data in the CSV to the SQL server.
Is this possible? And are there any tutorials, reference manuals, sources, etc. I can use to accomplish this?
The answer to your question is Yes.
You can go about this two ways:
Write an API for your database which is consumed by those wishing to search/write/query it. To do this, you can use any language that you are comfortable with. Note that PHP, XML and Python are not interchangeable things: XML is a format specification, describing what the data should look like while it is being transported between two systems, so you can use any programming language that provides XML libraries to write your code. In addition to XML, JSON has emerged as the more popular transport format, especially for mobile and web applications.
The second option is to use a service like Apigee, Google Cloud Endpoints or Mashery, which will automate a lot of this process for you. Each requires its own amount of effort (with Google Cloud Endpoints perhaps requiring the most). For example, Apigee will automatically create an API for you as long as you can provide it access to your data source.
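To make the first option concrete for the lead-feed use case, here is a minimal sketch in Python (used here purely for illustration; any language with a CSV library and a database driver works the same way). SQLite stands in for the hosted SQL database, and the table and column names are invented:

```python
import csv
import io
import sqlite3  # stand-in for your hosted SQL database

def import_leads(csv_text, conn):
    """Parse a posted CSV of leads and insert the rows with bound parameters."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS leads "
        "(name TEXT, ip TEXT, address TEXT, optin_time TEXT)"
    )
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = [(r["name"], r["ip"], r["address"], r["optin_time"]) for r in reader]
    conn.executemany("INSERT INTO leads VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
posted = "name,ip,address,optin_time\nJane Doe,203.0.113.9,1 Main St,2013-05-01 09:30\n"
print(import_leads(posted, conn))  # prints 1
```

The parameterized `executemany` matters here: since the CSV comes from an outside party, never splice its values directly into SQL strings.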
So the Google Maps API tutorial Creating a Store Locator with PHP, MySQL & Google Maps takes the output from the MySQL database and processes it into XML using PHP (excuse my amateur terminology).
My simple question is: is it absolutely necessary to output the MySQL data as XML?
Is there no other way whereby the data can be grabbed from MySQL and converted into variables for use in JS functions elsewhere? With the example, anybody is able to run a simple query and see all my data. It doesn't seem very secure, and in that case why not just simply echo the MySQL query results?
While it is not strictly required to convert the data into XML, this standard format is well suited to represent structured data. But you may very well decide to use JSON for example, or any home-brewed format of your liking. Do not reinvent the wheel though.
The second issue you are raising is not related to the output format; rather, it is a problem of access control.
As far as I can tell from a very quick glance at the tutorial, indeed anyone could connect to your server and extract data.
In order to limit access to the data (i.e. limit access to the script phpsqlsearch_genxml.php), you could, for example, verify the presence of a session cookie in the AJAX request. Implement such a check at the very beginning of the script, and generate an error if the user is not logged in.
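A sketch of that check (Python for brevity; in PHP you would do the equivalent against $_SESSION at the top of phpsqlsearch_genxml.php — the session store and cookie name below are my own invention):

```python
# Hedged sketch of the access-control idea: the endpoint serves data only
# when the request carries a session cookie matching a logged-in session.
VALID_SESSIONS = {"abc123"}  # normally kept server-side (e.g. PHP's $_SESSION)

def handle_request(cookies):
    session_id = cookies.get("SESSIONID")
    if session_id not in VALID_SESSIONS:
        return (403, "Forbidden: not logged in")
    return (200, "<markers>...</markers>")  # the XML the tutorial generates

print(handle_request({}))                       # (403, 'Forbidden: not logged in')
print(handle_request({"SESSIONID": "abc123"}))  # (200, '<markers>...</markers>')
```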
OK, I found a few questions on how to get data from a MySQL database into an iOS app, but I am just asking a few best-practice questions here. I know these could all be separate questions, but I am hoping they can be answered in a way that relates them to each other.
Am I correct to understand that to get data into an iOS app, I need to first generate a JSON file, have that stored on a server, and then have the app download this file?
If the previous answer is NO, does that mean I can pull in data on the fly?
Lastly, I have seen PHP examples that create JSON files, but iOS is in Objective-C. Does this mean I need to load a UIWebView to be able to load the PHP page that generates the file?
What I have:
I have a MySQL database. It was set up through phpMyAdmin, so I am not familiar enough with the creation process of the database yet; I will look into that.
I can also export a JSON file from phpMyAdmin, but that is no good to me in an iOS app.
I also have the parsing from a JSON file into an iOS app sorted, but I want to be able to do this on the fly instead of creating potentially hundreds of files.
I hope someone can help me here :-)
I am not necessarily asking for code, but I'd be mad to ignore it :-)
The problem is that there are not any iOS libraries for directly connecting to a MySQL server; and you really wouldn't want to do that, anyway. So, you need an intermediary server capable of sending data in a format your iOS application can understand. Note, this does not mean the data has to be JSON formatted. But it is very easy to use JSON as the format for your data. Most languages have native support for generating JSON from its native object format(s).
Once you have a server capable of sending data in your preferred format, you need to write some way for your iOS application to retrieve it. You do not have to use a UIWebView for this. As mentioned, the NSURLConnection framework is very easy to use to make such a request. However, there are a lot of other factors to consider when making network requests and others have already done most of the work for you. I like using the AFNetworking framework in conjunction with JSONKit. AFNetworking makes asynchronous calls to remote web services very easy, and JSONKit is nicer than NSJSONSerialization in my opinion.
What I do to retrieve data from MySQL to my iOS app is:
1. Create a PHP file on your server and prepare it to read GET parameters (the iOS app will send its data in the query string).
2. Send a request from your iOS app to your PHP file, like: "www.yourdomain.com/data.php?name=..."
3. Process the information in your PHP file and echo the JSON output.
4. In connectionDidFinishLoading:, convert the NSData to an array using NSJSONSerialization.
5. Do whatever you like with the output information.
That's just the way I do it. I'm not familiar with other approaches.
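The steps above can be sketched end to end. Python stands in here for both the PHP endpoint and the iOS-side parsing, and the domain, script name, and fields are the made-up ones from the steps:

```python
import json
from urllib.parse import urlparse, parse_qs

# Fake "database" standing in for the MySQL table the PHP script would query.
PEOPLE = {"alice": {"name": "alice", "age": 30}}

def data_endpoint(url):
    """Steps 1-3: read the GET parameter and echo the JSON output."""
    params = parse_qs(urlparse(url).query)
    name = params["name"][0]
    return json.dumps(PEOPLE.get(name, {}))

# Steps 4-5: the app converts the response body back into objects.
body = data_endpoint("http://www.yourdomain.com/data.php?name=alice")
print(json.loads(body)["age"])  # 30
```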
PHP (and any other server side language) can take the data from the MySQL database and output it to any client as JSON, on the fly. No need to save the JSON to disk beforehand. Of course, from the client's point of view, there really is no fundamental difference (except the data will always be the latest representation of what's in the database).
You also don't have to use a UIWebView. There are a number of ways to make an HTTP request using Objective-C, but you'll likely want to look at something along the lines of NSURLConnection's sendSynchronousRequest:returningResponse:error: method (I prefer using sync methods inside an async block, but that's not the only way). You can find many tutorials on how to do similar things, as well as higher-level libraries that simplify the process.
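That pattern (a blocking request run inside an async block, so the UI thread is never held up) can be sketched like this in Python; the canned response string stands in for a real network call and the URL is made up:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Blocking ("synchronous") request; in the app this would be
    # sendSynchronousRequest:returningResponse:error: (or urlopen here).
    return '{"status": "ok"}'  # canned response so the sketch runs offline

def fetch_async(url, callback):
    # The synchronous call runs on a background thread -- the same idea
    # as dispatching a sync request inside an async block.
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fetch, url)
        callback(json.loads(future.result()))

fetch_async("http://example.com/data.php", lambda data: print(data["status"]))  # ok
```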
I've decided to try out some DB connections for my Android application. However, I need some advice regarding structure.
I came to understand that in order to fetch data from a DB with Android, I need to use PHP scripts. I'm a total novice in this area, and to me that sounds a bit awkward. Therefore I figured I could create a Java server which the application connects to; among other things, this server also fetches data from the DB and returns it.
From a performance perspective, what's best: letting the application itself fetch data from the DB, or connecting to the server which fetches it for you? How about security? To be clear, I will have a Java server anyhow to take care of other things.
Sorry about the Vista paint skills.
Thanks for any input!
Like others have said, it doesn't matter whether you use PHP or Java. But you're on the right track: most developers create a web service on their HTTP server and the app talks to that, usually in the form of JSON or XML strings over HTTP. A popular choice is the REST architecture, which essentially says that the things you access on the service are resources, and you structure your service around them.
As for the PHP vs. Java question, it's really up to you, so pick whichever you can set up faster and are more familiar with. I will say that Java has a productivity advantage in Android's case, because you can create plain old Java objects as your models for results and share that code between the server and your Android client. You get this because you can use something like the Gson library, which serializes and deserializes objects to and from JSON. You can also use Google App Engine for hosting your Java code.
Well, this problem is largely independent of Android programming, I would say.
Assuming that you are returning the extracted data to your device in JSON format, the entire performance question is essentially restricted to the performance of Java or PHP in retrieving data from the database, converting it to JSON, and sending it to the client.
And as far as simple operations are concerned, the efficiency won't matter much in either case; it is just a matter of preference on the developer's part.
I read some nice articles about how to connect to a remote MySQL database via Android.
Found some really interesting links here and here.
So the common way of getting data seems to be using some kind of web service (an interface, in this case a PHP script) which queries the DB and renders the result in JSON (or XML) format. Then it's possible to parse this output with the Android JSONObject implementation. So far so good.
Receiving data from the database and showing it in an Android ListView was done in a matter of minutes.
But what is the best practice for writing (inserting) data into tables?
Should a web service be used here too, or rather a direct MySQL connection?
What is the best method to push data to a web service (for example, to insert a new entity into a database), and which format should be used?
In this case I do not use any HTML forms or anything to post the parameters. So how do I post these parameters to the PHP script (from within the Android app)?
Of course this operation should be secure as well. Implementing a data-manipulation mechanism is a bit more risky (in order to keep the DB consistent).
I think many apps use some kind of DB to synchronize data (e.g. high scores).
So there should be a best practice for that.
I would recommend keeping anything database-specific hidden behind a web service.
If you build a dependency on MySQL into your application and later find that you need to change databases, the entire installed base has to be cut over. Think about the logistics of accomplishing that for a few minutes and you'll start to realize it's a nightmare.
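On the "how to post these parameters" part of the question: from the app you just send a form-encoded POST body to the web service; no HTML form is needed. A minimal sketch (Python stdlib for illustration; the URL and field names are made up):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Build a form-encoded POST exactly as an HTML form would submit it.
params = {"score": 4200, "player": "alice"}
body = urlencode(params).encode("ascii")
req = Request(
    "http://www.yourdomain.com/highscores.php",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(req.get_method())  # POST -- urllib switches to POST when data is set
print(body.decode())     # score=4200&player=alice
```

On Android the equivalent is writing the same encoded body to an HttpURLConnection opened for output; the server-side script reads the fields as if a form had been submitted.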
Premiumsoft's Navicat for MySQL comes with an HTTP tunnel (a PHP script) you might be able to use. It basically provides a way to do anything to a MySQL database over HTTP.
I'd just make sure there are no licensing issues if you plan to distribute your app.
For example, I need to grab from http://gmail.com/ the amount of free storage:
Over <span id=quota>2757.272164</span> megabytes (and counting) of free storage.
And then store those numbers in a MySQL database.
The number, as you can see, is dynamically changing.
Is there a way I can set up a server-side script that will grab that number every time it changes and save it to the database?
Thanks.
Since Gmail doesn't provide any API to get this information, it sounds like you want to do some web scraping.
Web scraping (also called Web harvesting or Web data extraction) is a computer software technique of extracting information from websites.
There are numerous ways of doing this, as mentioned in the Wikipedia article linked before:
Human copy-and-paste: Sometimes even the best Web-scraping technology cannot replace a human's manual examination and copy-and-paste, and sometimes this may be the only workable solution when the websites for scraping explicitly set up barriers to prevent machine automation.
Text grepping and regular expression matching: A simple yet powerful approach to extract information from Web pages can be based on the UNIX grep command or the regular expression matching facilities of programming languages (for instance Perl or Python).
HTTP programming: Static and dynamic Web pages can be retrieved by posting HTTP requests to the remote Web server using socket programming.
DOM parsing: By embedding a full-fledged Web browser, such as the Internet Explorer or Mozilla browser control, programs can retrieve the dynamic content generated by client-side scripts. These browser controls also parse Web pages into a DOM tree, based on which programs can retrieve parts of the pages.
HTML parsers: Some semi-structured data query languages, such as the XML query language (XQL) and the hypertext query language (HTQL), can be used to parse HTML pages and to retrieve and transform Web content.
Web-scraping software: There are many Web-scraping software packages available that can be used to customize Web-scraping solutions. They may provide a Web recording interface that removes the need to manually write scraping code, scripting functions that can be used to extract and transform Web content, and database interfaces that can store the scraped data in local databases.
Semantic annotation recognizing: Web pages may embrace metadata or semantic markups/annotations which can be used to locate specific data snippets. If the annotations are embedded in the pages, as Microformats do, this technique can be viewed as a special case of DOM parsing. In another case, the annotations, organized into a semantic layer, are stored and managed separately from the Web pages, so Web scrapers can retrieve the data schema and instructions from this layer before scraping the pages.
And before I continue, please keep in mind the legal implications of all this. I don't know whether it's compliant with Gmail's terms, and I would recommend checking them before moving forward. You might also end up being blacklisted or encounter other issues like this.
All that being said, I'd say that in your case you need some kind of spider and DOM parser to log into Gmail and find the data you want. The choice of tool will depend on your technology stack.
As a Ruby dev, I like using Mechanize and Nokogiri. In PHP you could take a look at solutions like Sphider.
Initially I thought it was not possible, thinking that the number was initialized by JavaScript.
But if you switch off JavaScript, the number is still there in the span tag, and a JavaScript function probably increases it at a regular interval.
So you can use curl, fopen, etc. to read the contents from the URL, and then parse the contents looking for this value to store it in the database. And set up a cron job to do it on a regular basis.
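The parsing step can be sketched like so (Python for illustration; a canned page string replaces the real fetch, and the regex assumes exactly the markup shown in the question):

```python
import re

# Pull the number out of the quota span. In the cron job, `page` would be
# the body fetched with curl/fopen/urlopen instead of this canned string.
page = 'Over <span id=quota>2757.272164</span> megabytes (and counting) of free storage.'

match = re.search(r'<span id=quota>([\d.]+)</span>', page)
if match:
    quota = float(match.group(1))
    print(quota)  # 2757.272164 -- ready to INSERT into the database
```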
There are many references on how to do this. Including SO. If you get stuck then just open another question.
Warning: Google has ways of finding out if its apps are being scraped, and it will block your IP for a certain period of time. Read the Google small print. It's happened to me.
One way I can see you doing this (which may not be the most efficient) is to use PHP and YQL (from Yahoo!). With YQL, you can specify the web page (www.gmail.com) and the XPath to get the value inside the span tag. It's essentially web scraping, but YQL provides a nice way to do it in maybe 4-5 lines of code.
You can wrap this whole thing inside a function that gets called every x seconds, or whatever time period you are looking for.
Leaving aside the legality issues in this particular case, I would suggest the following:
When you find yourself attacking something impossible, stop and think about where the impossibility comes from, and whether you chose the correct approach.
Do you really think that anyone in their right mind would issue a new HTTP connection, or even worse hold an open Comet connection, just to see whether the common storage has grown? For an anonymous user? Just look for a function that computes the value from some initial value and the current time.
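The idea can be sketched like this; the base value, growth rate, and reference time below are made up for illustration, not Gmail's actual numbers:

```python
import time

# The displayed number is not fetched from anywhere: it is computed from a
# base value plus a growth rate times the elapsed time since a reference
# instant. All three constants here are illustrative.
BASE_MB = 2757.0          # quota at the reference instant
RATE_MB_PER_SEC = 0.0001  # how fast the counter grows
EPOCH = 1234567890        # reference instant (Unix time)

def current_quota(now=None):
    if now is None:
        now = time.time()
    return BASE_MB + RATE_MB_PER_SEC * (now - EPOCH)

print(current_quota(EPOCH))          # 2757.0
print(current_quota(EPOCH + 10000))  # 2758.0
```

So rather than scraping the page repeatedly, you can recover the constants once from the page's JavaScript and compute the number yourself whenever you need it.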