I am trying to generate an RSS feed from a MySQL database I already have. Can I use PHP in the XML file that is sent to the user, so that the content is generated upon request? Or should I use cron on the PHP file and generate an XML file? Or should I trigger the PHP file that generates the XML whenever the content used in the RSS is submitted? What do you think is the best practice?
All three approaches are technically possible. However, I would not use cron, because it delays the update of your XML files after the database content has changed.
You can easily embed PHP code in your XML files; you just have to make sure that the files are interpreted as PHP on the server side, either by renaming them with a *.php extension or by changing the server directives in the .htaccess file.
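For instance, with PHP running as an Apache module, a directive like the following in .htaccess should make Apache run .xml files through PHP; the exact handler name is an assumption here and varies by setup (PHP-FPM installations need a different mechanism):

    AddHandler application/x-httpd-php .xml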
But I think that the best practice here is to generate new XML files upon updating the database contents. I guess that the XML files are viewed more often than the database content changes, so this approach reduces the server load.
Use a cron job to automate a PHP script that builds the XML file. You can even automate the mailing part in your PHP as well.
The third method you mentioned. I don't understand how cron can be used here if the data comes in with users' requests. The first method cannot be implemented.
Set the Content-Type header to text/xml and have your PHP script generate XML just as it would generate any other content. You may want to consider using caching, though, so you don't overwhelm the server by accident.
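As a rough illustration, a script along these lines could serve the feed straight from the database on every request; the "posts" table, its columns and the connection details are all made up here:

    <?php
    // rss.php - minimal sketch; table, columns and credentials are hypothetical.
    header('Content-Type: text/xml; charset=utf-8');

    $db = new mysqli('localhost', 'user', 'pass', 'blog');
    $result = $db->query(
        'SELECT title, link, body, created_at FROM posts ORDER BY created_at DESC LIMIT 20'
    );

    echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
    echo '<rss version="2.0"><channel>' . "\n";
    echo '<title>My feed</title><link>http://example.com/</link><description>Latest posts</description>' . "\n";

    while ($row = $result->fetch_assoc()) {
        printf(
            "<item><title>%s</title><link>%s</link><description>%s</description><pubDate>%s</pubDate></item>\n",
            htmlspecialchars($row['title']),
            htmlspecialchars($row['link']),
            htmlspecialchars($row['body']),
            date(DATE_RSS, strtotime($row['created_at']))
        );
    }

    echo '</channel></rss>';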
I am trying to manage caching on a heavily used webpage written in PHP. I have marked some cacheable sections of PHP code, which I want to execute only during pre-caching, when the administrator makes changes in the CMS. For this, I use this method:
I have a file (for example "index-source.php") with some marked areas of PHP code, which are interpretable on their own. When the admin changes some settings, these marked parts are executed and replaced with their result (for example, MySQL queries that read menu items from the DB are replaced with the generated HTML menu). The resulting file is saved as a new "index.php", which still has some PHP code that can't be optimized by caching.
Now to my problem
If we assume that this server is heavily loaded, meaning there are, for example, 100 requests per second, each of which requires the file index.php in PHP. If I use file_put_contents() to overwrite this index.php with the new pre-cached version, is there any risk that some requests will be interrupted because of a locked or not fully overwritten file? Basically I want to somehow update my PHP file and ensure that PHP will include either the complete old or the complete new version of that file, or wait a few milliseconds until the file is overwritten. I don't want PHP to fail the require or load a partially overwritten file.
Is that possible? Thanks
file_put_contents is not what you want.
Have a look at this project, and dive into the source to get a feel for what challenges you may have to face as well as the solution chosen.
https://github.com/PHPSocialNetwork/phpfastcache
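For what it's worth, a common approach to this particular problem is to write the new contents to a temporary file in the same directory and then rename() it over index.php; on POSIX filesystems rename() is atomic, so a concurrent require will see either the complete old file or the complete new one. A minimal sketch, assuming $newSource already holds the pre-cached PHP:

    // Atomically replace index.php with freshly pre-cached content.
    $target = __DIR__ . '/index.php';
    $tmp    = tempnam(__DIR__, 'idx');    // same directory, so same filesystem
    file_put_contents($tmp, $newSource);  // $newSource holds the pre-cached PHP
    chmod($tmp, 0644);                    // tempnam() creates the file as 0600
    rename($tmp, $target);                // atomic swap on POSIX filesystems

Note that with OPcache enabled, the old version may still be served until the cache revalidates the file.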
I've spent a couple of days thinking about the best way to generate a PDF whose layout end users can customize themselves. The PDF output needs to be saved on the server, or sent back to the PHP file so the PHP file can save it, and the PHP file needs to know that it went OK.
I thought the best way to do this was to use XML, XSLT and Apache Cocoon. But I'm not sure if this is possible or if it's a good idea since I can't find any information of people doing anything similar. It cannot be an uncommon problem.
The idea came when I read about Cocoon converting XML through XSLT to PDF:
http://cocoon.apache.org/2.1/howto/howto-html-pdf-publishing.html
and being able to take in variables:
http://old.nabble.com/how-to-access-post-parameters-from-sitemap-td31478752.html
This is what I had in mind:
A PHP file gets called by a user; the PHP file generates a source XML file with a specific name.
The PHP file then makes a request to Cocoon (on the same web server) to apply the user-defined XSLT to the XML file. A parameter will be needed here so Cocoon knows which XSLT to apply.
Cocoon's response is received by the PHP file and saved as a PDF on the server, so it can later be mailed (a rough sketch of this follows below).
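In PHP, steps 2 and 3 might look roughly like this; the Cocoon URL, the parameter names and the paths are all assumptions and would have to match your actual sitemap:

    // Sketch: ask Cocoon to transform the generated XML with a user-specific XSLT
    // and store the resulting PDF. URL, parameters and paths are hypothetical.
    $cocoonUrl = 'http://localhost:8888/pdf/order?source=order-123.xml&xslt=user-42.xsl';
    $pdf = file_get_contents($cocoonUrl);   // requires allow_url_fopen, or use cURL
    if ($pdf === false) {
        die('Cocoon request failed');
    }
    file_put_contents('/var/pdf/order-123.pdf', $pdf);
    // later: attach /var/pdf/order-123.pdf to the confirmation mail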
Will this work at all? Is there a better way to handle this?
The core problem is that the users need to be able to customize the layout on the PDFs themselves, and I need the server to save the PDF and to mail it later on. The users will use it for order confirmations, invoices, etc. And I wouldn't like to hard code the layout for each user.
I've had some good results in the past by setting up JasperReports Server and creating reports using iReport Designer. They're both available in F/OSS ("community") editions, though you can pay for support and value-adds if you need those things.
This was a good solution for us, since we could access it via the Java API for our Java system, and via SOAP for our PHP system. The GUI designer made tweaking reports very easy for non-technical business staff too.
I use webkithtml2pdf to generate my PDFs. Just create a document with HTML and CSS for printing like you usually would, then run it through the converter.
It works great for generating things like invoices. You can use SVG for logos and illustrations, and they will look great in print since they are vector based. Even rounded corners with dotted outlines work perfectly.
A minor gotcha is that the input HTML must have an .htm or .html file name suffix, so you can't use the default tempfile functions directly.
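To work around that, you can rename the temp file before converting. A rough sketch; the "webkithtml2pdf input output" invocation is an assumption here, so check the converter's actual usage:

    // Render an HTML invoice to PDF via the external converter.
    // The command-line form below is assumed, not verified.
    $tmp  = tempnam(sys_get_temp_dir(), 'inv');
    $html = $tmp . '.html';                 // the converter wants an .htm/.html suffix
    rename($tmp, $html);
    file_put_contents($html, $invoiceHtml); // $invoiceHtml holds the print markup
    $pdf = sys_get_temp_dir() . '/invoice.pdf';
    shell_exec('webkithtml2pdf ' . escapeshellarg($html) . ' ' . escapeshellarg($pdf));
    unlink($html);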
Hi,
I download a large number of files for data mining. I used to use PHP for this purpose but I am finding it to be too slow. Also, I just want a small part of each web page. I want to achieve two things:
Curl should be able to utilize all my download bandwidth
Is there any way to download only the part of the web page where my data resides?
I am not confined to PHP. If curl works better in the terminal, I would use that.
Yes, you can download only a part of the page by using the CURLOPT_RANGE option, and you can also provide a write callback function that simply returns an error when you've received "enough" data and you want to stop and move on.
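For illustration, something along these lines in PHP; the URL and the 16 KB cut-off are arbitrary, and CURLOPT_RANGE only helps if the server actually honours Range requests:

    // Fetch at most ~16 KB of a page, aborting the transfer once we have enough.
    $ch = curl_init('http://example.com/bigpage.html');   // hypothetical URL
    curl_setopt($ch, CURLOPT_RANGE, '0-16383');           // only works if the server supports ranges
    $data = '';
    curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) use (&$data) {
        $data .= $chunk;
        if (strlen($data) >= 16384) {
            return -1;                // returning a length mismatch aborts the transfer
        }
        return strlen($chunk);        // keep going
    });
    curl_exec($ch);                   // returns false after the deliberate abort; that's expected
    curl_close($ch);
    // $data now holds the first part of the page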
Are you downloading HTML? Your comment leads me to believe that you are. If that's the case, simply load up the HTML with PHP Simple HTML DOM and get only the part that you want. Although I find it hard to believe that grabbing just the HTML is slowing you down. Are you downloading any files or media as well?
Link : http://simplehtmldom.sourceforge.net/
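A minimal sketch with that library; the URL and the "#content" selector are made up:

    // Parse the fetched page and keep only the element we care about.
    include 'simple_html_dom.php';
    $html = file_get_html('http://example.com/page.html');  // hypothetical URL
    $part = $html->find('#content', 0);                     // hypothetical selector
    if ($part) {
        echo $part->plaintext;
    }
    $html->clear();   // free the parser's memory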
There is no way to download only part of a page. When you request a URL, the server response is what it is.
Utilize more of your bandwidth by using cURL's ability to make multiple connections at once.
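With the curl_multi API, that could look roughly like this; the URL list is a placeholder:

    // Download several pages in parallel instead of one after another.
    $urls = ['http://example.com/a', 'http://example.com/b'];  // hypothetical URLs
    $mh = curl_multi_init();
    $handles = [];
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }
    do {
        curl_multi_exec($mh, $running);   // drive all transfers
        curl_multi_select($mh);           // wait for activity instead of busy-looping
    } while ($running > 0);
    foreach ($handles as $ch) {
        $body = curl_multi_getcontent($ch);
        // ... process $body ...
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);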
I've been tasked with maintaining a PHP website that has a function which automatically generates RTF files and provides a link to download them. Each time, the previously generated file is overwritten by the new one.
However, it seems that upon attempting to download the generated file, the browser will sometimes retrieve a cached version which is normally different from the latest version (so you get the same document as last time rather than the one you requested).
I managed to work around this by giving each generated file a unique name based on the current timestamp but this generates lots of clutter in the directory which will need to be cleaned out periodically. Ideally I would like to tag this file such that the browser won't cache it and will get the latest version every time. How might I achieve this?
In addition to the possibility of adding a random GET string to the URL (often the easiest way), it would also be possible to solve this by sending the right headers.
Because you are generating static files, this would require a setting in a .htaccess file. It would have to look like this:
<FilesMatch "\.(rtf)$">
Header set Cache-Control "no-store"
</FilesMatch>
Easiest way? Instead of linking to http://yoursite.com/file.rtf, link to http://yoursite.com/file.rtf?<?=time()?>. This will append a query string parameter which will vary each time the client requests it, so it won't be cached.
You could append the current time value to the URL of the file you serve:
.../file.rtf?15072010141000
That way you don't have to generate unique names, but you can ensure future requests are not cached.
Although the simple solution of using a no-cache header as suggested by Pekka will work, you will lose the potential benefit of caching if the same file is downloaded several times.
If the RTF file is big and you would like your users to enjoy the benefit of caching when the file has not actually changed, you might want to check this answer:
How to use HTTP cache headers with PHP
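Roughly, the idea is to serve the file through a PHP script and answer conditional requests with 304 when nothing has changed. A minimal sketch, with a made-up file path:

    <?php
    // download.php - revalidation via an ETag; the path is hypothetical.
    $path = '/var/www/generated/report.rtf';
    $etag = md5_file($path);

    header('Cache-Control: private, must-revalidate');
    header('ETag: "' . $etag . '"');

    if (isset($_SERVER['HTTP_IF_NONE_MATCH']) &&
        trim($_SERVER['HTTP_IF_NONE_MATCH'], '" ') === $etag) {
        http_response_code(304);   // unchanged: let the browser reuse its copy
        exit;
    }

    header('Content-Type: application/rtf');
    header('Content-Length: ' . filesize($path));
    readfile($path);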
I am just starting out with Haxe development, and wanted to give the PHP side a go, but am already a little confused.
What is the best way to save some form data to XML files in a folder on a server with Haxe compiled to PHP?
Well, you can do it in two ways.
Make the website form in Haxe, which includes:
making a proper .htaccess file for the project on the server,
writing a Main class (that the .htaccess will point to) which will take a request,
and either return the form HTML document or take the data from the form...
then put that data into XML format,
and finally put that data into a file.
Here are the API classes you should have a look at:
File: methods for writing to a file
Web: class that will get the request data and fire up the proper class and function (getURI, getMethod, getParams)
Template: class for generating simple HTML (very simple)
Depending on the complexity of the XML, you may want to use a specialized class.
And the second way is almost the same, but you only compile to one file.
And in your HTML form, you point your action link to the PHP file that came out of the compilation...