I have an XSD schema of about 10K lines. Validating my XML of about 500 lines against it takes 5 seconds. I receive the XML dynamically via POST from an external server on every click the user makes on my homepage. The validation takes 5+ seconds, which is far too long for every click.
PHP Example:
$doc = new DOMDocument();
$doc->load('file.xml'); //100 to 500 lines
$doc->schemaValidate('schema.xsd'); //schema.xsd 10 000 lines
Do you have any idea how I can validate the XML against the XSD faster?
Some things to check:
Is the schema a local file, or are you fetching it over the network (e.g. via http: or file: to a mounted volume)?
Can you cache your schema? Many schema validation engines let you load the schema once and cache an internal representation, then run multiple validations against it (see the sketch after this list).
What does your schema look like? 5 seconds for a 10K schema seems pretty slow.
What XML schema validator are you using?
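For the first two points, here is a minimal sketch of keeping a local copy of a remotely hosted schema; the URL, cache path, and one-hour refresh interval are assumptions for illustration. Note that PHP's DOMDocument::schemaValidate() still recompiles the schema on every call, so this only removes the download cost, not the parse cost:
// Hypothetical remote schema; cache it locally so each request skips the download.
$remoteXsd = 'https://example.com/schema.xsd';
$localXsd  = sys_get_temp_dir() . '/schema.xsd';

// Refresh the local copy at most once an hour.
if (!file_exists($localXsd) || filemtime($localXsd) < time() - 3600) {
    copy($remoteXsd, $localXsd);
}

$doc = new DOMDocument();
$doc->load('file.xml');
$isValid = $doc->schemaValidate($localXsd); // validate against the local copy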
You could create a subset of the XSD, which contains only the parts you need for your site. Validate against the full schema only after the final submit.
Use a different XML library, and/or do your remote operation in the background and have the web page read the latest cached result.
Related
I'm about to implement a REST server (in ASP.NET, although I think that's irrelevant here) where a request is made and the result is returned. However, this result is an .XLSX file that could be a million rows.
If I'm generating a million-row spreadsheet, it's going to take about 10 minutes. An HTTP request will time out. So what's the best way to handle this delay in the result?
Second, what's the best way to return a very large file as the REST result?
Update: The most common use case is the REST server is an Azure cloud service web worker (basically IIS on Azure). The client is a PHP web app running on a different server in a different location. The PHP web app needs to send up a report template (generally 25K) and the data which can be a connection string to a SQL database, or... could be a 500M XML file. So that is the request, an XML file containing the template and datasource(s).
The response is a file - PDF, DOCX, XLSX, PPTX, or HTML. That can be a BLOB inside an XML file or it can be the file itself. In the case of an error, it must return XML with the error information. The big issue is that it can take 10 minutes to generate this file even if everything goes right. When it's a 1 million row spreadsheet, it takes time to pull down all that data and populate the created XLSX file. The second issue is that this is then a really large file.
So even if everything is perfect, there's a big delay and a large response.
I see two options:
Write the file to the response stream during its generation (from the client side this looks like downloading a large file);
Start the file-generation task on the server side and return a task ID immediately. Add API methods that allow the client to retrieve the task status, cancel it, or get the results once the task has completed (see the sketch below).
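A rough sketch of the second option from the client (PHP) side; the endpoint paths, JSON fields, and polling interval are purely illustrative, not part of any real API:
$base = 'https://reports.example.com';

// 1. Start the generation task and get a task ID back immediately.
$ch = curl_init("$base/reports");
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => file_get_contents('request.xml'),
    CURLOPT_HTTPHEADER     => ['Content-Type: application/xml'],
    CURLOPT_RETURNTRANSFER => true,
]);
$task = json_decode(curl_exec($ch), true);
curl_close($ch);

// 2. Poll the task status until it is finished (or failed / cancelled).
do {
    sleep(15);
    $status = json_decode(file_get_contents("$base/reports/{$task['id']}"), true);
} while ($status['state'] === 'running');

// 3. Stream the large result straight to disk instead of into memory.
if ($status['state'] === 'done') {
    $out = fopen('report.xlsx', 'wb');
    $ch  = curl_init("$base/reports/{$task['id']}/result");
    curl_setopt($ch, CURLOPT_FILE, $out); // write the response body to the file handle
    curl_exec($ch);
    curl_close($ch);
    fclose($out);
}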
Interesting question.
I sure hope you have a stable connection. Anyway, on the client side, in this case PHP, set the timeouts to very high values. In PHP:
set_time_limit(3600*10);
curl_setopt($curlh,CURLOPT_TIMEOUT,3600*10);
So, I searched some here, but couldn't find anything good, apologies if my search-fu is insufficient...
So, what I have today is that my users upload a CSV text file using a form to my PHP script, and then I import that file into a database after validating every line in it. The text file can be up to about 70,000 lines long, and each line contains 24 fields of values. Dealing with that amount of data is obviously not a problem in itself. Every line needs to be validated, plus I check the DB for duplicates (according to a dynamic key generated from the data) to determine whether the data should be inserted or updated.
Right, but my clients are now requesting an automatic API for this, so they don't have to manually create and upload a text file. Sure, but how would I do it?
If I were to use a REST server, memory would run out pretty quickly if one request contained XML for 70k posts to be inserted, so that's pretty much out of the question.
So, how should I do it? I have thought about three options; please help me decide, or add more options to the list.
One post per request. Not all clients have 70k posts, but an update to the DB could result in the API handling 70k requests in a short period, and it would probably be daily either way.
X posts per request. Set a limit on the number of posts that the API deals with per request, say 100 at a time. This means 700 requests.
The API requires the client script to upload a CSV file ready to import using the current routine. This seems "fragile" and not very modern.
Any other ideas?
If you read up on SAX processing http://en.wikipedia.org/wiki/Simple_API_for_XML and HTTP chunked encoding http://en.wikipedia.org/wiki/Chunked_transfer_encoding, you will see that it should be feasible to parse the XML document while it is being sent.
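As a rough sketch of that idea in PHP (assuming each record arrives as a <post> element in the request body; the element names are illustrative, not part of any real API), the expat-based SAX parser can consume the body in chunks, so memory stays flat no matter how many posts are sent:
$parser  = xml_parser_create();
$current = '';
$record  = [];

// Note: with default case folding, element names arrive in upper case.
xml_set_element_handler(
    $parser,
    function ($p, $name, $attrs) use (&$current) { $current = $name; },
    function ($p, $name) use (&$record) {
        if ($name === 'POST') {
            // validate + insert/update this single record here, then drop it
            $record = [];
        }
    }
);
xml_set_character_data_handler($parser, function ($p, $data) use (&$current, &$record) {
    $record[$current] = ($record[$current] ?? '') . $data;
});

// Read the request body in small chunks instead of loading it all at once.
$in = fopen('php://input', 'rb');
while (!feof($in)) {
    xml_parse($parser, fread($in, 8192), feof($in));
}
fclose($in);
xml_parser_free($parser);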
I have now solved this by imposing a limit of 100 posts per request, and I am using REST through PHP to handle the data. Uploading 36,000 posts takes about two minutes with all the validation.
First of all, don't use XML for this! Use JSON; it is faster than XML.
On my project I use an import from XLS. The file is very large, but the script works fine; the client just has to create the files with the same structure for the import.
I recently wrote a PHP plugin to interface with my phpBB installation which will take my users' Steam IDs, convert them into the community ids that Steam uses on their website, grab the xml file for that community id, get the value of avatarFull (which contains the link to the full avatar), download it via curl, resize it, and set it as the user's new avatar.
In effect it is syncing my forum's avatars with Steam's avatars (Steam is a gaming community/platform and I run a gaming clan). My issue is that whenever I read the value from the XML file, it takes around a second for each user, as it loads the entire XML file before searching for the variable, and this causes the entire script to take a very long time to complete.
Ideally I want to have my script run several times a day to check each avatarFull value from Steam and check to see if it has changed (and download the file if it has), but it currently takes just too long for me to tie up everything to wait on it.
Is there any way to have the server serve up just the xml value that I am looking for without loading the entire thing?
Here is how I am calling the value currently:
$xml = @simplexml_load_file("http://steamcommunity.com/profiles/".$steamid."?xml=1");
$avatarlink = $xml->avatarFull;
And here is an example xml file: XML file
The file isn't big. Parsing it doesn't take much time. Your second is mostly wasted on network communication.
Since there is no way around this, you must implement a cache. Schedule a script that will run on your server every hour or so, looking for changes. This script will take a lot of time - at least a second for every user; several seconds if the picture has to be downloaded.
When it has the latest picture, it will store it in some predefined location on your server. The scripts that serve your webpage will use this location instead of communicating with Steam. That way they will work instantly, and the pictures will be at most 1 hour out-of-date.
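A minimal sketch of such a scheduled script, assuming a list of Steam IDs from the forum database and a writable avatar directory (all names and paths here are illustrative):
$steamIds  = [/* load your users' Steam IDs from the forum DB */];
$avatarDir = __DIR__ . '/avatars';

foreach ($steamIds as $steamId) {
    $xml = @simplexml_load_file("http://steamcommunity.com/profiles/{$steamId}?xml=1");
    if ($xml === false || !isset($xml->avatarFull)) {
        continue; // profile unavailable right now, keep the old picture
    }

    $remote  = (string) $xml->avatarFull;
    $local   = "{$avatarDir}/{$steamId}.jpg";
    $urlFile = "{$avatarDir}/{$steamId}.url"; // remembers which URL was last downloaded

    // Only download when the picture is missing or the remote URL has changed.
    if (!file_exists($local) || @file_get_contents($urlFile) !== $remote) {
        file_put_contents($local, file_get_contents($remote));
        file_put_contents($urlFile, $remote);
        // resize the image and update the forum avatar here
    }
}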
Added: Here's an idea to complement this: have your visitors perform AJAX requests to Steam and check via JavaScript whether the picture has changed. Do this only for pictures that they're actually viewing. If it has changed, you can immediately replace the outdated picture in their browser, and you can also notify your server, which can then download the updated picture immediately. Perhaps you won't even need to schedule anything yourself.
You have to read the whole stream to get to the data you need, but it doesn't have to be kept in memory.
If I were doing this with Java, I'd use a SAX parser instead of a DOM parser. I could handle the few values I was interested in and not keep a large DOM in memory. See if there's something equivalent for you with PHP.
SimpleXml is a DOM parser. It will load and parse the entire document into memory before you can work with it. If you do not want that, use XMLReader which will allow you to process the XML while you are reading it from a stream, e.g. you could exit processing once the avatar was fetched.
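For example, a short sketch of the XMLReader approach (reusing the $steamid variable from the question), which stops reading as soon as avatarFull has been seen:
$reader = new XMLReader();
$reader->open("http://steamcommunity.com/profiles/".$steamid."?xml=1");

$avatarlink = null;
while ($reader->read()) {
    if ($reader->nodeType === XMLReader::ELEMENT && $reader->name === 'avatarFull') {
        $avatarlink = $reader->readString(); // text/CDATA content of the element
        break;                               // stop here, skip the rest of the document
    }
}
$reader->close();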
But as other people have already pointed out elsewhere on this page, with a file as small as the one shown, this is more likely a network-latency issue than an XML issue.
Also see Best XML Parser for PHP
That file looks small enough; it shouldn't take that long to parse. It probably takes that long because of some sort of network problem combined with the slowness of parsing.
If the network is your issue then no amount of trickery will help you :(.
If it isn't the network, then you could try a regex match on the input. That will probably be marginally faster.
Try this expression:
/<avatarFull><!\[CDATA\[(.*?)\]\]><\/avatarFull>/
and read the link from the first group match.
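For instance, assuming $response holds the raw profile XML fetched with cURL or file_get_contents, something like this would pull the link out:
if (preg_match('/<avatarFull><!\[CDATA\[(.*?)\]\]><\/avatarFull>/', $response, $m)) {
    $avatarlink = $m[1]; // first capture group = the avatar URL
}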
You could try the SAX way of parsing (http://php.net/manual/en/book.xml.php), but as I said, since the file is small I doubt it will really make a difference.
You can take advantage of caching the results of simplexml_load_file() somewhere like Memcached or the filesystem. Here is a typical workflow (a sketch follows the list below):
- check if XML file was processed during last N seconds
- return processing results on success
- on failure get results from simplexml
- process them
- resize images
- store results in cache
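A rough sketch of that workflow with a plain filesystem cache (the 300-second TTL, the file naming, and the fields kept are all just examples):
function get_cached_profile($steamid, $ttl = 300)
{
    $cacheFile = sys_get_temp_dir() . "/steam_{$steamid}.json";

    // 1. check if the XML was processed during the last N seconds
    if (file_exists($cacheFile) && filemtime($cacheFile) > time() - $ttl) {
        // 2. return the cached processing results
        return json_decode(file_get_contents($cacheFile), true);
    }

    // 3. cache miss: get the results from SimpleXML
    $xml = @simplexml_load_file("http://steamcommunity.com/profiles/{$steamid}?xml=1");
    if ($xml === false) {
        return null;
    }

    // 4. process them (download/resize images, etc.) and keep only what you need
    $result = ['avatarFull' => (string) $xml->avatarFull];

    // 5. store the results in the cache for the next request
    file_put_contents($cacheFile, json_encode($result));

    return $result;
}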
tl;dr: I want to load an XML file once and reuse it over and over again.
I have a bit of JavaScript that makes an AJAX request to a PHP page that collects and parses some XML and returns it for display (say there are 4,000 nodes and the PHP paginates the results into chunks of 100, so you would have 40 "pages" of data). If someone clicks on one of those other pages (besides the one that initially loads), then another request is made, the PHP loads that big XML file, grabs that subset of indexes (like records 200-299) and returns them for display. My question is, is there a way to load that XML file only once and just reuse it over and over?
The process on each ajax request is:
- load the xml file (simplexml_load_file())
- parse out the bits needed (with xpath)
- use LimitIterator to grab the specific set of indexes I need
- return that set
Whereas what I'd like to happen when someone requests a different paginated result is:
- use LimitIterator on the data I loaded in the previous request (reparse if needed)
- return that set
It seems (it is, right?) that hitting the XML file every time is a huge waste. How would I go about grabbing it and persisting it so that different pagination requests don't have to reload the file every time?
Just have your server do the reading and parsing of the paginated file based on the user's input and feedback, meaning it can be cached on the server much more quickly than it would take the client to download and cache the entire XML document. Use PHP, Perl, ASP or what have you to paginate the data prior to displaying it to the user.
I believe the closest thing you are going to get is Memcached.
Although, I wouldn't worry about it, especially if it is a local file. include-like operations are fairly cheap.
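A small sketch of the Memcached idea, assuming the memcached PECL extension, a local memcached server, 100 records per page, and a //record XPath that matches your document (all of these are assumptions about your setup):
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$page = (int) ($_GET['page'] ?? 0);
$key  = "xml_page_{$page}";

$rows = $mc->get($key);
if ($rows === false) {
    // Cache miss: load and parse the big file once, keep just this page's rows
    // as plain strings (SimpleXMLElement objects don't serialize well).
    $xml   = simplexml_load_file('/path/to/big.xml');
    $nodes = $xml->xpath('//record');
    $rows  = array_map('strval', array_slice($nodes, $page * 100, 100));
    $mc->set($key, $rows, 600); // cache the page for 10 minutes
}

echo json_encode($rows);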
To the question "hitting the XML file every time is a huge waste", the answer is yes, if you have to parse that big XML file every time. As I understand it, you want to save the chunk the user is interested in so that you don't have to do that every time. How about a very simple file cache? No extension required, fast, simple to use and maintain. Something like this:
function echo_results($start)
{
    // IMPORTANT: make sure that $start is a valid number
    $cache_file = '/path/to/cache/' . $start . '.xml';
    $source     = '/path/to/source.xml';

    // Serve the cached chunk only if it exists and is not older than the source file
    if (file_exists($cache_file)
        && filemtime($cache_file) >= filemtime($source))
    {
        readfile($cache_file);
        return;
    }

    $xml = get_the_results_chunk($start);
    file_put_contents($cache_file, $xml);
    echo $xml;
}
As an added bonus, you use the source file's last modification time so that you automatically ignore cached chunks that are older than their source.
You can even save it compressed and serve it as-is if the client supports gzip compression (IOW, 99% of browsers out there) or decompress it on-the-fly otherwise.
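For the compression idea, a possible sketch: store the chunk gzipped once when writing the cache, then either send the compressed bytes as-is or inflate them on the fly (the .gz naming is just a convention chosen here):
// When writing the cache: file_put_contents($cache_file . '.gz', gzencode($xml));
$gzFile = $cache_file . '.gz';
header('Content-Type: text/xml');

if (strpos($_SERVER['HTTP_ACCEPT_ENCODING'] ?? '', 'gzip') !== false) {
    header('Content-Encoding: gzip');
    readfile($gzFile);                      // serve the compressed bytes untouched
} else {
    readfile('compress.zlib://' . $gzFile); // decompress on the fly for old clients
}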
Could you load it into $_SESSION data? or would that blow out memory due to the size of the chunk?
I have a large XML file (600mb+) and am developing a PHP application which needs to query this file.
My initial approach was to extract all the data from the file and insert it into a MySQL database - then query it that way. The only issue with this was that it was still slow, plus the XML data gets updated regularly - meaning I need to download, parse and insert data from the XML file into the database every time the XML file is updated.
Is it actually possible to query a 600mb file? (for example, searching for records where TITLE="something here"?) Is it possible to get it to do this in a reasonable amount of time?
Ideally I would like to do this in PHP, though I could also use JavaScript.
Any help and suggestions appreciated :)
Constructing an XML DOM for a 600+ MB document is definitely a way to fail. What you need is a SAX-based (streaming) API. SAX, though, does not usually allow XPath to be used, but you can emulate it with imperative code, as in the sketch below.
As for the file being updated, is it possible to retrieve only the differences somehow? That would massively speed up subsequent processing.
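A sketch of what that imperative emulation could look like with PHP's XMLReader, assuming the records look like <record><TITLE>...</TITLE>...</record> (adjust the element names to your actual document):
$reader = new XMLReader();
$reader->open('/path/to/big.xml');
$doc = new DOMDocument();

$matches = [];
// Skip ahead to the first <record>, then walk the stream record by record;
// only one record is ever held in memory at a time.
while ($reader->read() && $reader->name !== 'record');
while ($reader->name === 'record') {
    $record = simplexml_import_dom($doc->importNode($reader->expand(), true));
    if ((string) $record->TITLE === 'something here') {
        $matches[] = $record->asXML();
    }
    $reader->next('record'); // jump straight to the next <record> sibling
}
$reader->close();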