I have close to 120 fields in my JSON objects, across a whole bunch of various versions of this sample JSON object.
I really don't have time to map every field, nor do I care if the field title is called Array 0 or some other default.
I just want the data stuffed into a database, maintaining its structure in an indexable format.
JSON is a perfectly valid data structure, why not use it?
I did find this -> https://github.com/adamwulf/json-to-mysql and hypothetically it would do the job.
But the software is broken (no error, but nothing ever gets populated).
Other alternatives such as https://doctrine-couchdb.readthedocs.org/en/latest/reference/introduction.html involve mapping. I don't have time to map 120 fields, and nothing should require mapping - just use the existing structure.
What's an easy way to stuff large JSON objects into MySQL, auto-building the structure from an existing format?
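(For reference, one mapping-free possibility is MySQL's native JSON column type. The sketch below is just that - a sketch, not the json-to-mysql tool above; it assumes MySQL 5.7+, a PDO connection, hypothetical documents/payload names, and $rawJson standing in for the incoming object.)

// Store each JSON object whole in a JSON column; fields stay queryable.
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8mb4', 'user', 'pass');

$pdo->exec('CREATE TABLE IF NOT EXISTS documents (
    id      INT AUTO_INCREMENT PRIMARY KEY,
    payload JSON NOT NULL
)');

// $rawJson is the incoming JSON string, stored as-is with no per-field mapping.
$stmt = $pdo->prepare('INSERT INTO documents (payload) VALUES (:payload)');
$stmt->execute(array(':payload' => $rawJson));

// Individual fields can still be filtered on, e.g.:
// SELECT * FROM documents WHERE JSON_EXTRACT(payload, '$.someField') = 'someValue';

If particular fields later need indexes, MySQL also allows generated columns over JSON paths, but that is beyond this sketch.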
Related
I have an associative array in php consisting of about 4k elements.
Product Id and Product Name
a sample row:
'434353', 'TeaCups'
So no big data. In fact the whole PHP array file is about 80 KB.
This is static data, so I won't be changing or deleting any of it.
Considering the size of the array and the number of elements in it, would it be better to access the data from the array, or should I create a database instead?
The data might be read about 20k times a day.
PS: Each time the data is read, I will be fetching exactly one element.
If this is static data, I recommend storing it as a JSON file that you can access from PHP using fopen().
However, if the data becomes much bigger, say 200 MB or even 2 GB, then unless you have a supercomputer you should use a database and query from there.
Note that databases are usually only worthwhile when you have a lot of information, or more than you can comfortably process from a plain JSON file.
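For example, a minimal sketch of the file-based lookup (the products.json file name and the lookup key are assumptions, the file is assumed to map product IDs to names, and it uses file_get_contents() rather than fopen() for brevity):

// Load the whole file once and decode it into an associative array
$products = json_decode(file_get_contents('products.json'), true);

// Fetch exactly one element by product ID, as described in the question
$name = isset($products['434353']) ? $products['434353'] : null;   // "TeaCups"

For 80 KB of static data read one element at a time, this keeps everything in a single flat file with no mapping or schema.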
In our environment we use MS SQL with stored procedures. Those procedures return our results as XML, and we then access the data as we need to.
I'm introducing some charts into our tools; the charts are 3rd party and require specific formats in order to operate, and I am running into an issue.
In this screenshot you can see what the structure should look like, which I can get to work with the plugin just fine. The issue is with how SimpleXML handles single result sets.
As you can see in the image below, with one result item it is no longer formatted as an array. The problem is that the plugin expects to find the data in the format shown in the first example, but when there is only one value it isn't stored as an array.
As you can see from this image, the dataset for escalationTypes is in the array format, whereas the one below it, submittedByDepartment, is not.
I am trying to find out if there is something I can do to fix the root of this problem, with SimpleXML. Is this a common issue found with SimpleXML with a workaround?
UPDATE
Here is a sample of the XML Object I am working with: http://pastebin.com/uPh0m3qX
I'm not 100% clear what your required structure is, but in general, it's not a good idea to jump straight from XML to JSON or vice versa. Both can represent data in various ways, and what you really want is to extract data from one and turn it into the other.
Note that this is what SimpleXML is designed to help you with - it never contains any arrays, but it gives you an API which helps you extract the data you need.
If I understand correctly, you want to build an array from each of the dataset elements, and put those into your JSON, so you'd want something like this:
$json = array();
foreach ( $xml->children() as $item_name => $item ) {
    foreach ( $item->dataset as $dataset ) {
        // Collect the <dataset> attributes for this item
        $json[$item_name]['dataset'] = (array)$dataset->attributes();
    }
}
Note that neither of those loops will behave differently if there is only one item to loop over. SimpleXML decides whether to behave like an array or an object based on how you use it, not based on what the XML looks like.
Note that while you can build a more general XML to array (or XML to JSON) function, getting it to always give the desired output from any input will probably take more time, and lead to harder-to-debug code, than writing specific code like the above.
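For completeness, here is a sketch of how that might look end to end. $xmlString and the element names are assumptions based on the structure described above; each attribute is cast to a string, and every dataset is appended to an array so a single result keeps the same shape as multiple results:

$xml  = simplexml_load_string($xmlString);
$json = array();

foreach ($xml->children() as $item_name => $item) {
    foreach ($item->dataset as $dataset) {
        $row = array();
        // Copy each attribute into a plain string-keyed array
        foreach ($dataset->attributes() as $attr_name => $attr_value) {
            $row[$attr_name] = (string) $attr_value;
        }
        $json[$item_name]['dataset'][] = $row;
    }
}

echo json_encode($json);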
You can cast the object to an array by adding (array) before the variable: (array)$wasobject
I'm quite new to programming so I apologize if the answer to my question is obvious:
I need to pass data between MySQL and an iOS app. I'm using PHP as the go-between. The query result that I get via PHP I'm just passing to my app as comma-separated/newline-separated text (a comma for a new column, a newline for a new row of data).
I keep reading about JSON, and I've read a Stack Overflow question on why JSON and its associated links to try to figure out why I would convert my PHP output into JSON format and then deserialize it on my app side. I keep reading how JSON is very lightweight etc., but when I look at it, it seems like I would end up sending much more data.
ex. if I'm sending some vehicle data:
JSON for 2 vehicles:
[{"type":"car","wheels":4,"wings":"no"},{"type":"plane","wheels":24,"wings":"yes"}]
Same info in CSV:
car,4,no
plane,24,yes
Of course there are no headers in the CSV, but I know the info will come as type,wheels,wings; sending those keys again and again, I would think the total number of bytes sent would be a lot more.
My questions are:
1. Would sending the CSV be faster than the JSON string (I think the answer is yes, but would like to hear from the Pros)
2. Given that it is faster and I know the order the data is coming in, is there any reason I should still choose JSON over CSV (some form of robustness of the data as JSON vs. CSV or something else)?
Would sending the CSV be faster than the JSON string (I think the answer is yes, but would like to hear from the Pros)
Given that particular data structure: Yes, but it is unlikely to be significantly faster. Especially if you use gzip compression at the HTTP level.
If profiling showed that the transfer times were the cause of a significant slow down (unlikely!), then you could always send arrays of data instead of objects.
Given that it is faster and I know the order the data is coming in, is there any reason I should still choose JSON over CSV (some form of robustness of the data as JSON vs. CSV or something else)?
JSON is properly standardised. CSV isn't (there are some common conventions and it can mostly be decoded reliably, but edge cases can be problematic).
JSON encoders and decoders are widely available and highly compatible with each other.
A JSON based format can be, to some extent, self-documenting which makes maintenance of the code that deals with it easier.
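To make the "arrays of data instead of objects" point concrete, here is a rough sketch using the vehicle example from the question (expected output shown in the comments):

$vehicles = array(
    array('type' => 'car',   'wheels' => 4,  'wings' => 'no'),
    array('type' => 'plane', 'wheels' => 24, 'wings' => 'yes'),
);

// Objects: the keys are repeated for every row
echo json_encode($vehicles);
// [{"type":"car","wheels":4,"wings":"no"},{"type":"plane","wheels":24,"wings":"yes"}]

// Arrays: position carries the meaning, closer to CSV in size but still standard JSON
echo json_encode(array_map('array_values', $vehicles));
// [["car",4,"no"],["plane",24,"yes"]]

Gzip at the HTTP level will shrink the repeated keys in the first form considerably anyway, which is why the difference rarely matters in practice.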
I have a 5 level multidimensional array. The number of keys in the array fluctuates but I need to store it in a database so I can access it with PHP later on. Are there any easy ways to do this?
My idea was to convert the array into a single string using several different delimiters like #* and %* and then using a series of explode() to convert the data back into an array when I need it.
I haven't written any code at this point because I'm hoping there will be a better way to do this. But I do have a potential solution which I tried to outline below:
Here's an overview of my array:
n=button number
i=item number
btn[n][0] = button name
btn[n][1] = button desc
btn[n][2] = success or not (Y or N)
btn[n][3] = array containing item info
btn[n][3][i][0] = item input type (Default/Preset/UserTxt/UserDD)
btn[n][3][i][1] = array containing item value - if more than one index then display as drop down
Here's a run-down of the delimiters I was going to use:
#*Button Title //button title
&*val1=*usr1234 //items and values
&*val2=*FROM_USER(_TEXT_$*name:) //if an item's value contains "FROM_USER" then extract the data between the parentheses
&*val3=*FROM_USER(_TEXT_$*Time:) //if the datatype contains _TEXT_ then explode AGAIN by $* and just display a textfield with the title
&*val4=*FROM_USER($*name1#*value1$*name2#*value2) //else explode AGAIN by $* for a list of name value pairs which represent a drop box - name2#*value2
//sample string - a single button
#*Button Title%*val1=*usr1234&*val2=*FROM_USER(_TEXT_$*name:)&*val3=*FROM_USER(_TEXT_$*date:)&*val4=*FROM_USER($*name1#*value1$*name2#*value2)
In summary, I am seeking some ideas of how to store a multidimensional array in a single database table.
What you want is a data serialization method. Don't invent your own, there are plenty already out there. The most obvious candidates are JSON (json_encode) or the PHP specific serialize. XML is also an option, especially if your database may support it natively to some degree.
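For instance, a minimal sketch of the JSON route (the $pdo connection, the button_data table, and its data column are assumptions):

// Flatten the whole nested $btn structure into one string
$encoded = json_encode($btn);

$stmt = $pdo->prepare('INSERT INTO button_data (data) VALUES (:data)');
$stmt->execute(array(':data' => $encoded));

// Later: read it back and restore the original array structure
$row = $pdo->query('SELECT data FROM button_data WHERE id = 1')->fetchColumn();
$btn = json_decode($row, true);   // true => associative arrays, like the original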
Have a look at serialize or json_encode
The best choice for you is json_encode.
It has a couple of advantages over serialize for storing data in the DB: it produces a smaller string, and if you ever have to modify the data manually in the DB, serialize is awkward, because the format stores the length of every serialized value, so after any edit you also have to recount and update those length prefixes.
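A quick illustration of that difference, using a trivial array (the resulting strings are shown in the comments):

$data = array('name' => 'TeaCups', 'id' => 434353);

echo serialize($data);
// a:2:{s:4:"name";s:7:"TeaCups";s:2:"id";i:434353;}   <- lengths baked into the format

echo json_encode($data);
// {"name":"TeaCups","id":434353}                      <- shorter, and safe to edit by hand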
SQL (whether MySQL or any other variant) does not support array data types.
The way you are supposed to deal with this kind of data in SQL is to store it across multiple tables.
So in this example, you'd have one table that contains buttonID, buttonName, buttonSuccess, etc fields, and another table that contains buttonInputType and buttonInputValue fields, as well as buttonID to link back to the parent table.
That would be the recommended "relational" way of doing things. The point of doing it this way is that it makes it easier to query the data back out of the DB when the time comes.
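A rough sketch of that two-table layout (table and column names are guesses based on the button/item structure in the question, run through an assumed $pdo connection):

$pdo->exec("CREATE TABLE buttons (
    buttonID      INT AUTO_INCREMENT PRIMARY KEY,
    buttonName    VARCHAR(255),
    buttonDesc    TEXT,
    buttonSuccess CHAR(1)            -- 'Y' or 'N'
)");

$pdo->exec("CREATE TABLE buttonItems (
    buttonItemID     INT AUTO_INCREMENT PRIMARY KEY,
    buttonID         INT NOT NULL,
    buttonInputType  VARCHAR(20),    -- Default / Preset / UserTxt / UserDD
    buttonInputValue TEXT,
    FOREIGN KEY (buttonID) REFERENCES buttons (buttonID)
)");

Querying then becomes a simple JOIN on buttonID instead of decoding a blob.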
There are other options though.
One option would be to use MySQL's ENUM feature. Since you've got a fixed set of values available for the input type, you could use an ENUM field for it, which could save you from needing an extra table for that.
Another option, of course, is what everyone else has suggested, and simply serialise the data using json_encode() or similar, and store it all in a big text field.
If the data is going to be used as a simple block of data, without any need to ever run a query to examine parts of it, then this can sometimes be the simplest solution. It's not something a database expert would want to see, but from a pragmatic angle, if it does the job then feel free to use it.
However, it's important to be aware of the limitations. By using a serialised solution, you're basically saying "this data doesn't need to be managed in any way at all, so I can't be bothered to do proper database design for it." And that's fine, as long as you don't need to manage it or search for values within it. If you do, you need to think harder about your DB design, and be wary of taking the 'easy' option.
O.K., so I'm pretty clever - I've made a library of WP themes that are all set up to my needs, so when I put out a new site I can create the new blog in a few minutes.
As a placeholder for the new domain, I replace everything in the SQL file that has the old domain in it with [token].
Everything was working fine right up to the point where I made one with a child theme that apparently serializes data before entering it into the database. I end up with stuff like this:
Theme','a:8:{s:12:\"header_image\";s:92:\"http://[token]wp-content/uploads/2011/05/494-Caring-for-fruit-trees-PLR.jpg\";s:16:\"background_image
So I dug into serialization, and it turns out that the s:92 bit, for example, is the number of characters in that value. Since I'm replacing [token], that length changes and the data breaks.
Up till now I made all my changes to the SQL file and didn't edit the database other than to populate it with that file - but I'm at a loss as to how to deal with the data in the serialized array.
Any ideas?
The easiest way would be to grab that data and use the unserialize() function, like:
$arr = unserialize($data);
Then edit the data that way. When you're done, re-serialize it with serialize() and store it back. You may have to do a print_r() on the unserialized data to see how it's structured and what you need to edit.
If you do the changes directly from the serialized data, you'll have to get the length of the current substring, make the change, then get the new length and splice that back into the serialized data, which is way more complicated than it needs to be.
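A minimal sketch of that approach, assuming $value holds one serialized option string from the dump and [token] is being swapped for a hypothetical http://newdomain.example/ domain:

$data = unserialize($value);

// Walk the nested structure and swap the placeholder wherever it appears
array_walk_recursive($data, function (&$item) {
    if (is_string($item)) {
        $item = str_replace('[token]', 'http://newdomain.example/', $item);
    }
});

// Re-serializing recalculates all of the s:NN length prefixes for you
$fixed = serialize($data);

Doing the replacement after unserializing is what sidesteps the s:NN length problem entirely.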