I am trying to pull a report in PHP for active listings.
I've made progress; however, I can't understand how this works, and there's nothing out there that explains it.
For example, in the Samples provided with the PHP library, I see quite a few XML files. When you run the RequestReportResponse sample, does that generate the XML file, or does the XML file tell RequestReportResponse what to do based on values and functions?
I am asking because, with the MWS Scratchpad, I select all the necessary fields and submit, then refresh the Amazon Reports page of my Seller Central section, and it shows a pending report.
I'm just asking how the XML content affects the report or how the report can affect the XML.
The answer to your question comes in two parts.
Part 1 - Calling the Amazon API
Most MWS requests do not require any file (be it plain text or XML) to be sent to Amazon. For example, all parameters needed to send a RequestReport can (and must) be sent as regular request parameters. I'm not sure what Amazon would do if you did submit a file along with it, as I've never tried. But then again... why would you?
One of the calls that does require a file to be sent is the SubmitFeed call, where that file is the actual feed to be submitted. Whether Amazon expects plain text or XML depends on the type of feed you're submitting.
Part 2 - Handling Amazon's API responses
When you get information back from Amazon's API, it is usually in XML format (there are a few calls that may return plain text instead). You will need to decode this data to get your information out.
To make it a bit clearer, I'll outline a typical process for you:
The process of getting all your listings from Amazon:
1. Do a RequestReport call to Amazon. No XML is attached.
2. Decode the XML that you get back (it is a RequestReportResponse). If all went well, you'll get a ReportRequestId as part of the response, and Amazon will start processing your request.
Amazon may need a few minutes to actually create the report; in cases of very complex or large requests, or during high-activity hours, it may actually take up to an hour or more. So we need to find out when the request we made is actually done.
3. Poke the Amazon API with a GetReportRequestList call, asking for the status of your request with ReportRequestIdList.Id.1={YourRequestIdHere}. This also does not need an XML attachment.
4. Decode the XML that you get back (it is a GetReportRequestListResponse).
5. If its ReportProcessingStatus is not _DONE_, wait at least 45 seconds, then repeat from step 3. If the report is actually done, you'll see a valid GeneratedReportId in the response. If it is missing, you'll need to do an extra GetReportList call to find its ID.
6. Call GetReport to finally fetch your report with ReportId={YourGeneratedReportIdHere}.
7. Decode whatever you get back. Depending on the type of report you requested, the response may be XML or plain text.
This process is explained in detail (and with a pretty flow chart) in Amazon Marketplace Web Service Reports API Section Reference (Version 2009-01-01)
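Putting those steps together, here is a rough PHP sketch of the whole loop. To keep it short, mwsCall() is a hypothetical helper that sends a signed request (with your credentials, timestamp, and signature) to the MWS Reports endpoint and returns the response body; in practice the MWS PHP client library handles that part. Note that no XML is ever sent, only plain request parameters.

<?php
// firstTag() pulls the text of the first element with a given name out
// of a response; DOM's getElementsByTagName ignores the MWS namespace.
function firstTag($xml, $tag) {
    $doc = new DOMDocument();
    $doc->loadXML($xml);
    $node = $doc->getElementsByTagName($tag)->item(0);
    return $node ? $node->nodeValue : '';
}

// Steps 1+2: request an active listings report; the RequestReportResponse
// contains the ReportRequestId we poll with.
$response  = mwsCall(array('Action'     => 'RequestReport',
                           'ReportType' => '_GET_MERCHANT_LISTINGS_DATA_'));
$requestId = firstTag($response, 'ReportRequestId');

// Steps 3-5: poll GetReportRequestList until the report is _DONE_.
do {
    sleep(45);
    $response = mwsCall(array('Action'                   => 'GetReportRequestList',
                              'ReportRequestIdList.Id.1' => $requestId));
} while (firstTag($response, 'ReportProcessingStatus') !== '_DONE_');

// Steps 6+7: fetch the report itself. Listing reports come back as
// tab-delimited plain text, not XML.
$report = mwsCall(array('Action'   => 'GetReport',
                        'ReportId' => firstTag($response, 'GeneratedReportId')));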
To finally answer your question with respect to getting active listings from Amazon MWS:
None of the three calls requires you to send XML to Amazon. The data you receive from Amazon will be in XML format (with the possible exception of step 6, if you requested a plain-text report).
Related
I've searched Stack Overflow and haven't found anything quite like what I'm having trouble with. I have a .NET app that pulls data from Microsoft Exchange Server and feeds it into a PHP app via a curl request. I can read the e-mail just fine, and I can retrieve attachment data just fine (or so I think). The problem is that the attachment data doesn't seem usable. It comes in the following format:
{"<string #1>":
{"Content":"<very, very long string>",
"ContentId":null,
"ContentLocation":null,
"ContentType":null,
"Id":"<string #1 repeated>",
"Name":"Picture (Device Independent Bitmap)",
"Filename:"null
}
}
This is just one example. I have a few that are JPEGs, PDFs, etc. ContentId, ContentLocation, ContentType, and Filename are always null, probably due to the ancient version of Exchange being run. Content is a string that can easily be several hundred thousand characters long. I've tried displaying it in an image tag, and I've tried base64_decode-ing it, but it only ever comes up with garbage. Online decoders have only come up with garbage as well. What data do I need to use to display this image? Or is there a specific set of manipulations I need to do on the string to get it to be human-readable?
I'd be happy to include the string, but as I said: it's a very lengthy string.
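One way to start diagnosing this is to look at the decoded bytes directly. The sketch below assumes the JSON payload shown above; the BMP-header trick at the end is an assumption based on the attachment's Name field, not a guarantee:

<?php
// Sketch: decode the attachment JSON shown above and inspect the bytes.
$data       = json_decode($json, true);   // $json holds the payload above
$attachment = reset($data);               // the single entry, keyed by ID
$raw        = base64_decode($attachment['Content'], true); // strict mode

// Check the magic bytes to see what the data really is:
// ffd8ff = JPEG, 89504e47 = PNG, 25504446 = PDF ("%PDF")
echo bin2hex(substr($raw, 0, 4)), "\n";

// A "Device Independent Bitmap" is often raw DIB data without the
// 14-byte BMP file header; prepending one can make it viewable.
// The 54-byte pixel offset assumes a 40-byte BITMAPINFOHEADER and no
// palette (an assumption, not a guarantee).
$bmp = pack('A2VvvV', 'BM', 14 + strlen($raw), 0, 0, 54) . $raw;
file_put_contents('attachment.bmp', $bmp);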
I have a question. I work with several web monitoring tools. If you know PRTG, you can build a URL that returns the status of the various sensors and their alarm messages, and then process this information in different graphical web pages. Now I have been asked to do the same, but with Nagios. I do not understand how to build the URL for it; if anyone has worked with this, I would appreciate the help.
Example URL with PRTG:
https://10.213.8.25/api/table.json?content=sensors&output=json&columns=status,message&filter_status=4&filter_objid=9336&filter_objid=9495&filter_objid=9496
Return:
{"prtg-version":":","treesize":000,"sensors":[{"objid":1001.....}]}
You can get JSON starting with Nagios Core version 4.0.7.
Just browse to http://<address_of_your_nagios_server>/nagios/jsonquery.html
and you'll find a JSON Query Generator page that can help you build your query URL, execute it, and get JSON results. After executing the query, the generated URL is shown on the right-hand side of the page, with the results of the query below it. You can paste the generated URL into a browser or call it from your application to get raw JSON.
More information on this feature can be found here: https://labs.nagios.com/2014/06/19/exploring-the-new-json-cgis-in-nagios-core-4-0-7-part-1/
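Once you have a query URL, fetching and decoding it from PHP is straightforward. A minimal sketch, with illustrative credentials and query parameters (Nagios CGIs normally sit behind HTTP basic auth, and your CGI path may differ):

<?php
// Sketch: fetch status data from Nagios Core's JSON CGIs (4.0.7+).
// Build the real query string with the jsonquery.html generator page.
$url = 'http://nagios.example.com/nagios/cgi-bin/statusjson.cgi'
     . '?query=servicelist';

// Illustrative basic-auth credentials.
$ctx = stream_context_create(array('http' => array(
    'header' => 'Authorization: Basic ' . base64_encode('nagiosadmin:password'),
)));

$response = json_decode(file_get_contents($url, false, $ctx), true);

// The status information lives under the "data" key; inspect it first
// to see the exact structure your query returns.
print_r($response['data']);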
I'm running a simple piece of PHP code, like so:
echo file_get_contents( 'http://example.com/service?params' );
When I run the code on my local machine (at work) or from my shared hosting account, or if I simply open the URL in my browser, I get the following result:
{"instances":[{"timestamp":"2014-02-28 18:03:39.0","ids":[{"id":"525125875"}],"cId":179,"cInstanceId":9264183220}]}
However, when I run the exact same code on either of two different web servers at my workplace, I get the following slightly different result:
{"instances":[{"timestamp":"2014-02-28 18:03:39.0","ids":[{"id":"632572147"}],"cId":179,"cInstanceId":4302001980}]}
Notice how a couple of the numbers are different, and that's all. Unfortunately, these different numbers are the wrong numbers. The result should be identical to the first one.
The server I'm making the call to is external to my workplace.
I've tried altering the file_get_contents call to include headers and masquerade as a browser, but nothing seems to give a different result (well, other than an error due to an accidentally malformed request). I can't use cURL because it's not installed on the servers where this code needs to be deployed.
Any clue what could be causing the differing results? Perhaps something in the request headers? Although I'm not sure why something in the headers would cause the service to return different data.
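For reference, adding headers to file_get_contents goes through a stream context; a minimal sketch, with illustrative header values:

<?php
// Sketch: file_get_contents with custom request headers via a stream
// context. Header values here are illustrative.
$ctx = stream_context_create(array('http' => array(
    'method' => 'GET',
    'header' => "User-Agent: Mozilla/5.0\r\n" .
                "Accept: application/json\r\n",
)));

echo file_get_contents('http://example.com/service?params', false, $ctx);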
Thanks.
Edit:
The service URL I'm testing with is:
http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/CygnetLastNInstancesServlet?lastN=1&cygnetId=179&endTimestamp=2014-02-28+21%3A35%3A48
The response it gives is a bit different from what I posted above; I simplified and shortened the responses in my SO post to make them easier to read, but the essential information given, and the differences, are still the same.
I give the service a timestamp, the number of images created prior to that timestamp that I want to fetch, and a 'cygnetId', which defines what sort of data I want the images to show (solar wind velocity, radiation belt intensity, etc.).
The service then echoes back some of the information I gave it, as well as URL segments for the images I requested.
With the returned data, I can build the URL for an image.
Here's the URL for an image built from a "correct" response:
http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/StreamByDataIdServlet?allDataId=525125875
Here's the URL for an image built from a "wrong" response:
http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/StreamByDataIdServlet?allDataId=632572147
If you click the links above, you'll see that the "wrong" URL does not open an image--the response is blank.
I am using PHP and AJAX requests to get the output of a program that is always running and print it on a webpage at 5-second intervals. Sometimes this log file can get up to 2 MB in size. It doesn't seem practical for the AJAX request to fetch the whole contents of this file every 5 seconds if the user has already received the full contents at least once. The request just needs to get whatever contents the user hasn't gotten in a previous request.
Problem is, I have no clue on where to begin to find what contents the user hasn't received. Any hints, tips, or suggestions?
Edit: The output from the program starts off with a time (HH:MM:SS AM/PM); everything after that has no pattern. The log file may span several days, so there might not be just one "02:00:00 PM" in the file, for example. I didn't write the program that is being logged, so there isn't a way for me to modify the format in which it prints its output.
I think using a HEAD request might get you started along the right path.
check this thread:
HTTP HEAD Request in Javascript/Ajax?
if you're using jQuery, it's a simple type change in the ajax call:
$.ajax({url: "some url", type: "HEAD",
        success: function (data, status, xhr) { console.log(xhr.getResponseHeader("Content-Length")); }});
Personally, I would check the file size and date modified against the previous response, and fetch the new data only if it has been updated. I'm not sure if you can fetch only parts of a file via AJAX, but I'm sure this can be accomplished via PHP pretty easily. Possibly this thread may help:
How to read only 5 last line of the text file in PHP?
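For the "fetch only part of a file via PHP" half, one common scheme is to have the client send back the byte offset it has already received; a sketch (the file path and parameter name are made up):

<?php
// tail.php -- sketch: return only the bytes the client hasn't seen yet.
// The client sends back the offset from the previous response.
$file   = '/var/log/program.log';          // illustrative path
$offset = isset($_GET['offset']) ? (int)$_GET['offset'] : 0;

clearstatcache();
$size = filesize($file);
if ($offset > $size) {
    $offset = 0;                           // log was truncated; start over
}

$fh = fopen($file, 'rb');
fseek($fh, $offset);
$new = stream_get_contents($fh);
fclose($fh);

header('Content-Type: application/json');
echo json_encode(array('offset' => $size, 'data' => $new));

The AJAX side stores the returned offset and sends it with the next 5-second poll, so each response only carries the new lines.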
It depends on how your program is made and how it prints your data, but you can use timestamps to reduce the amount of data. If you have some kind of IDs, you should probably use them instead of timestamps.
Real-world problem: I'm generating a page dynamically. This page is an XML document which the user retrieves (via curl, file_get_contents, or whatever can be done with server-side scripting).
Once the user makes the request, he starts waiting while I retrieve a large set of data from the DB and build an XML document with it (using the PHP DOM objects). Once I'm done, I fire print $document->saveXML(). It takes about 8 minutes to create this 40-megabyte document; then, as it is ready, I serve the page/document. Now I have a user with a 60-second connection timeout: he says I need to send at least one octet every 60 seconds. How can I achieve such a thing?
Since it's useless to post 23987452 lines of code because nobody is going to read them, I'll explain the script which serves this page in very-pseudo-pseudo-code:
grab all the data from the db: an enormous set of rows
create a domdocument element
loop through each row and add a node element to the domdocument to contain a piece of data
call the $dom->saveXML() to get the document as a string
print the string so the user retrieve an xml document
1) I can't send real data early, since it is an XML document and it has to begin with "<?xml..." so as not to mess up the parser.
2) The user can't do anything about the firewall/server config.
3) I can't deal with "buy a more powerful server"
4) I tried using ob_start() at the top of the script, and then header("Transfer-Encoding: chunked"); ob_flush(); at the beginning of each loop iteration, but nothing: nothing comes through before the 8 minutes are up.
Help me guys!!
I would
Generate a random value
Start the XML generating script as a background process (see e.g. here)
Make the generating script write the XML into a file with the random value as the name when the script is done
Frequently poll for the existence of that file, e.g. using Ajax requests every 10 seconds, until it's there. Then fetch the XML from the file.
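A sketch of that flow in PHP (script names, paths, and the nohup launch are assumptions):

<?php
// start.php -- sketch: kick off generation in the background and hand
// the client a token to poll with. Names and paths are illustrative.
$token = md5(uniqid('', true));

// Backgrounding with & and redirecting output lets exec() return
// immediately instead of waiting the full 8 minutes.
exec('nohup php generate_xml.php ' . escapeshellarg($token)
   . ' > /dev/null 2>&1 &');

echo $token;   // the client polls status.php?token=... with this

<?php
// status.php -- sketch: serve the XML once generate_xml.php has written it.
$token = preg_replace('/[^a-f0-9]/', '', $_GET['token']);  // sanitize
$path  = '/tmp/export-' . $token . '.xml';

if (is_file($path)) {
    header('Content-Type: text/xml');
    readfile($path);
} else {
    http_response_code(202);   // not ready yet; poll again later
    echo 'pending';
}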
You can send padding and still have it be valid XML. Trivial examples include whitespace in a lot of places, or comments. Once you've sent the XML declaration, you could start a comment and keep sending padding:
<?xml version="1.0">
<!-- this comment to prevent timeouts:
30
60
90
⋮
or whatever; the exact data doesn't matter, of course.
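A sketch of how that padding could be wired into the generating loop (fetch_rows() and the padding interval are made up; note that saveXML($node) serializes just the subtree, without a second XML declaration):

<?php
// Sketch: stream comment padding while the document is being built.
header('Content-Type: text/xml');
while (ob_get_level() > 0) { ob_end_flush(); }  // drop buffering layers

echo "<?xml version=\"1.0\"?>\n<!-- keeping the connection alive:\n";
flush();

$dom  = new DOMDocument('1.0');
$root = $dom->appendChild($dom->createElement('rows'));

$i = 0;
foreach (fetch_rows() as $row) {        // fetch_rows() is hypothetical
    $root->appendChild($dom->createElement('row', $row['data']));
    if (++$i % 1000 === 0) {            // pad every 1000 rows
        echo $i, "\n";
        flush();
    }
}

echo "-->\n";
// saveXML($node) omits the XML declaration we already sent by hand.
echo $dom->saveXML($dom->documentElement);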
That's the easy solution. The better solution is to make that generation run in the background and, e.g., use AJAX to poll the server every 10s to check if it's done. Or implement an alternate notification method (e.g., email a URL when the document is ready).
If this isn't a browser accessing it, you may want a trivially simple API: have one request to start generating the document, and another to fetch it. The one to fetch it may return "not ready yet" as, e.g., an HTTP status code 500, 503, or 504; the requesting script should then retry later. (For example, with curl, the --retry option will do this.)
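On the client side, that API only needs a couple of lines; a sketch with hypothetical endpoint names:

<?php
// Sketch: two-endpoint client. start.php and fetch.php are hypothetical.
$base  = 'http://example.com/export';
$jobId = trim(file_get_contents($base . '/start.php'));

do {
    sleep(10);
    // ignore_errors lets us read the body even on a 5xx "not ready" reply;
    // file_get_contents fills $http_response_header with the status line.
    $ctx  = stream_context_create(array('http' => array('ignore_errors' => true)));
    $body = file_get_contents($base . '/fetch.php?job=' . $jobId, false, $ctx);
} while (strpos($http_response_header[0], '200') === false);

file_put_contents('report.xml', $body);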