I'm coding an API and got stuck on the UPDATE part of things. From what I've read about REST, the update operation should be exposed via HTTP PUT.
OK, but PUT gives me just a stream of data. At least in PHP, decoding that data is my responsibility. So how do I mix string data and file uploads in a PUT request? I know I can do it with POST, but I'm trying to do it the RESTful way.
Should I use multipart/form-data, and is that portable for PUT (I mean, is it easy to send this kind of request from different languages)? I'm trying to figure out the proper way to do this. Again, if I use multipart/form-data, the parsing is my responsibility, so there could be bugs or performance degradation. Can you suggest a parser if multipart/... is the way to do what I'm asking?
Thanks
The general rule of PUT is that it is idempotent.
Calling PUT /user/{userId}/files/foo.txt twice ends up in the same state; with the second call you would simply overwrite foo.txt. You are 'setting' things.
Calling POST /user/{userId}/files twice would end up with two different files. You are 'adding' things.
Therefore I would use PUT if you want to write to a dedicated target. What kind of files do you want to upload? E.g. if it is a picture upload I would use POST (where you would get the target URL in the response). If you are designing a kind of file storage for a user I would use PUT, because most likely users want to write (set) to a certain location (like you would on an ordinary file system).
Maybe you have more details/requirements for a concrete case?
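For the PHP side of the original question: the raw PUT body arrives on the php://input stream, so a minimal "set this file" handler could look like the sketch below. The storage directory and URL scheme here are assumptions for illustration, not part of any framework.

<?php
// Minimal sketch of a PUT handler: read the raw request body from
// php://input and write it to the filename taken from the URL.
$targetDir = '/var/app/storage/user-files';   // assumed location

if ($_SERVER['REQUEST_METHOD'] === 'PUT') {
    // basename() strips directory components, so a client cannot
    // escape the storage directory with "../" tricks.
    $filename = basename(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));

    $in  = fopen('php://input', 'rb');            // raw PUT body
    $out = fopen("$targetDir/$filename", 'wb');
    stream_copy_to_stream($in, $out);             // stream, don't buffer it all
    fclose($in);
    fclose($out);

    http_response_code(204);                      // No Content: resource was set
}

Calling this twice with the same body leaves the server in the same state, which is exactly the idempotency described above.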
What kind of data are you attempting to PUT? Remember that PUT is a directed publishing method. The client sends data to the server and essentially says "PUT this file into /home/sites/.../myfile.txt".
Useful for when you're publishing data to a site and are creating a new page. Not so useful if it's a standard file upload form ("Upload an avatar image here!"). You don't want to allow potentially malicious users to specify where an uploaded file should go.
That's when you use POST, which translates into "here's a file, it's called myfile.txt, do what you want with it".
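To illustrate that, a POST upload handler decides the destination itself and ignores anything the client says about paths; a minimal sketch (the "upload" field name, the directory, and the JSON response shape are assumptions):

<?php
// Sketch of a POST upload handler that never trusts client-supplied paths.
$uploadDir = '/var/app/uploads';   // assumed location

if (isset($_FILES['upload']) && $_FILES['upload']['error'] === UPLOAD_ERR_OK) {
    // Generate our own filename; in production you would also
    // whitelist allowed extensions here.
    $ext  = pathinfo($_FILES['upload']['name'], PATHINFO_EXTENSION);
    $dest = $uploadDir . '/' . bin2hex(random_bytes(16)) . '.' . $ext;

    move_uploaded_file($_FILES['upload']['tmp_name'], $dest);

    // Tell the client where the server decided to put the file.
    echo json_encode(['url' => '/uploads/' . basename($dest)]);
}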
I am concerned about the safety of fetching content from an unknown URL in PHP.
We will basically use cURL to fetch HTML content from a user-provided URL and look for Open Graph meta tags, to show the links as content cards.
Because the URL is provided by the user, I am worried about the possibility of fetching malicious code in the process.
I have another question: does curl_exec actually download the full file to the server? If so, is it possible for viruses or malware to be downloaded when using cURL?
Using cURL is similar to using fopen() and fread() to fetch content from a file. Safe or not depends on what you're doing with the fetched content.
From your description, your server works as a kind of intermediary that extracts specific subcontent from the fetched HTML. Even if the fetched content contains malicious code, your server never executes it, so no harm will come to your server. Additionally, because your server only extracts specific subcontent (Open Graph meta tags, as you say), everything else in the fetched content is ignored, which means your users are automatically protected. Thus, in my opinion, there is no need to worry.
Of course, this relies on the assumption that the content extraction process is sound; someone should take a look at it and confirm that.
Does curl_exec actually download the full file to the server?
It depends on what you mean by "full file". If you mean "the entire HTML content", then yes. If you mean "including all the CSS and JS files that the fetched HTML may refer to", then no.
Is it possible that viruses or malware be downloaded when using cURL?
The answer is yes. The fetched HTML content may contain malicious code; however, if you don't execute it, no harm will come to you. Again, I'm assuming that your content extraction process is sound.
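For what it's worth, here is a minimal sketch of such an extraction step using PHP's DOMDocument. Because only the og: attributes are copied out, scripts in the fetched page are simply discarded. The function name is mine, not from any library.

<?php
// Sketch: pull only the Open Graph meta tags out of fetched HTML.
// Everything else in the document, including scripts, is ignored.
function extractOpenGraph(string $html): array
{
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);   // real-world HTML is rarely valid
    $doc->loadHTML($html);
    libxml_clear_errors();

    $tags = [];
    foreach ($doc->getElementsByTagName('meta') as $meta) {
        $property = $meta->getAttribute('property');
        if (strpos($property, 'og:') === 0) {
            $tags[$property] = $meta->getAttribute('content');
        }
    }
    return $tags;
}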
The short answer is that file_get_contents is safe for retrieving data, and so is cURL. It is up to you what you do with that data.
A few guidelines:
1. Never run eval on that data.
2. Don't save it to the database without filtering it.
3. You don't even need file_get_contents or cURL here.
Use: get_meta_tags
array get_meta_tags ( string $filename [, bool $use_include_path = false ] )
// Example
$tags = get_meta_tags('http://www.example.com/');
You will have all the meta tags parsed and filtered into an array. (One caveat: get_meta_tags only returns <meta> tags that have a name attribute, so Open Graph tags, which use the property attribute, may not show up.)
You can use httpclient.class instead of file_get_contents or cURL, because it connects to the page through a socket. After downloading the data you can extract the meta data using preg_match.
Expanding on the answer made by Ray Radin.
Tips on precautionary measures
He is correct that if you use a sound process to search the fetched resource, there should be no problem fetching whatever URL is provided. Some examples are:
Don't store the file in a public-facing directory on your web server; otherwise you expose yourself to it being executed.
Don't store it in a database; this might lead to a second-order SQL injection attack.
In general, don't store anything from the resource you are requesting; if you have to, use a specific whitelist of what you are searching for.
Check the header information
Even though there is no foolproof way of validating what a given URL points to, there are ways you can make your life easier and prevent some potential issues.
For example, a URL might point to a large binary, a large image file, or something similar.
Make a HEAD request first to get the header information, then look at the Content-Type and Content-Length headers to see whether the content is a plain-text HTML file.
You should not fully trust these, however, since they can be spoofed. Doing this will nonetheless make sure that even non-malicious content won't crash your script. Requesting image files is presumably something you don't want users to do.
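A minimal sketch of that HEAD check with PHP's cURL extension (the 1 MB cap is an arbitrary assumption):

<?php
// Sketch: issue a HEAD request and inspect the headers before committing
// to a full download. Headers can be spoofed, so treat this as a sanity
// check, not a security boundary.
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);           // HEAD instead of GET
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);

$type   = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
$length = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
curl_close($ch);

// Only fetch the body if it claims to be HTML of a sane size.
$looksOk = strpos((string) $type, 'text/html') === 0
        && $length > 0 && $length < 1024 * 1024;  // 1 MB cap (assumed)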
Guzzle
I recommend using Guzzle to do your requests since, in my opinion, it provides some functionality that should make this easier.
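For example, the same HEAD check is only a few lines with Guzzle; a sketch assuming a recent Guzzle version installed via Composer:

<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

// Sketch: HEAD request with Guzzle, then inspect the response headers.
$client   = new Client(['timeout' => 5]);
$response = $client->head($url);

$type    = $response->getHeaderLine('Content-Type');
$length  = (int) $response->getHeaderLine('Content-Length');
$looksOk = strpos($type, 'text/html') === 0 && $length < 1024 * 1024;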
It is safe, but you will need to do a proper data check before using it, as you should with any data input anyway.
I'm trying to parse data from http://skytech.si/
I looked around a bit and found out that the site uses http://skytech.si/skytechsys/data.php?c=tabela to show data. When I open this file in my browser I get nothing. Is the file protected so it can run only from the server side, or something?
Is there any way to get data from it? If I could get HTML data (perhaps in a table?) I would probably know how to parse it.
If not, would it be still possible to parse website and how?
I had a look at the requests made:
http://skytech.si/skytechsys/?c=graf&l=bf0b3c12e9b2c2d65bd5ae8925886b57
http://skytech.si/skytechsys/?c=tabela
Forbidden
You don't have permission to access /skytechsys/ on this server.
This website doesn't allow 'outside' GET requests. You could try fetching the data via file_get_contents, but I don't think you will be able to get specific data tables (aside from those on the home page) because of the AJAX requests that need to be made. I believe data.php is the controller that handles data, and it is not exposed via an API.
When you open this URL in your browser you send a GET request. The data at this address is returned only after sending a POST request with params as follows: c=tabela, l=undefined, x=undefined. Analyze the headers next time, and look at the Network log if you are using Chrome/Chromium.
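From PHP you would then replicate that POST instead of a plain GET; a minimal cURL sketch (the parameter values are copied from the Network log above and may well change between sessions):

<?php
// Sketch: replicate the browser's POST to the data endpoint.
$ch = curl_init('http://skytech.si/skytechsys/data.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query([
    'c' => 'tabela',
    'l' => 'undefined',
    'x' => 'undefined',
]));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$html = curl_exec($ch);
curl_close($ch);
// $html should now contain the table markup to parse.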
If that website does not expose an API, it is not recommended to parse the data, as their HTML structure is prone to change.
See:
http://php.net/manual/en/function.file-get-contents.php
And then you can interpret it with an HTML-parsing engine or with a regular expression (not recommended).
Hello, I was wondering: what is the best way to upload a video to a website? Should I do it through a page with GET, and if so, how is the file uploaded over HTTP? I am a little confused as to how this would work. I am trying to upload files from iPhone and Android devices, so I cannot use a form to do this (at least I don't think so). Is there a way to upload my file over HTTP, or what is the most convenient way? Thank you.
The most pressing issue here is that the HTTP specification requires that GET requests be both safe and idempotent. Uploading video will likely be neither of these.
Section 9.1.1 Safe Methods in RFC 2616:
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
So no, bandwidth has nothing to do with it. HTTP itself says you shouldn't be uploading any sort of file by way of the GET method.
GET does not allow for enough bandwidth for a video. Use POST or PUT instead.
The official standard (RFC 2616) states
The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI.
So, uploading a video would not come under 'retrieving information'.
POST should be used, with multipart/form-data encoding, like this:
<form action="process.php" method="post" enctype="multipart/form-data">
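The enctype="multipart/form-data" attribute is what makes the browser send the file as a multipart body, and native iPhone/Android apps can construct exactly the same multipart POST programmatically, so no HTML form is needed on those clients. A fuller sketch of both sides (the "video" field name and upload directory are assumptions):

<form action="process.php" method="post" enctype="multipart/form-data">
  <input type="file" name="video">
  <input type="submit" value="Upload">
</form>

<?php
// process.php - sketch of the receiving side of a multipart POST upload.
$uploadDir = __DIR__ . '/uploads';   // assumed location

if (isset($_FILES['video']) && $_FILES['video']['error'] === UPLOAD_ERR_OK) {
    // Use a server-generated name; never trust the client's filename as a path.
    $dest = $uploadDir . '/' . uniqid('video_', true) . '.mp4';
    if (move_uploaded_file($_FILES['video']['tmp_name'], $dest)) {
        echo 'Upload stored.';
    }
}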
I understand that within the same folder I can use the include() function for an external PHP file, but now I would like to call a function in another PHP file which is located at another URL.
For example, my live website (liveexample.com/table.php) has a drop-down list and a table, but no data.
My other PHP file (dataexample.com/data.php) is connected to the database and handles extracting the data. But it is on another server.
I need [dataexample.com/data.php] to deliver its data to [liveexample.com/table.php] and let the loop draw the table with that data on the [liveexample.com/table.php] page.
Does anyone have an idea how to design this kind of server-to-server data delivery using a PHP function call?
Or is there any better solution to deliver my data between two different servers, such as turning the data record set into an array and sending it to [liveexample.com/table.php]?
Please give me some advice. Much appreciated!
I think a SOAP web service would be perfect for attaining what you want, but if possible just copy over the same code you have on the separate server.
If you make [dataexample.com/data.php] output your data as XML, then you can use it as a web service. What that means is, you can take that XML output (by sending a request to the data URL) and then parse it to load the data. This way, you can use that service any way you want: one way would be the way you described; other examples would be via AJAX, or Flash, etc.
So here are a few topics worth looking into:
using PHP for web services: http://wso2.org/library/3032
parsing XML data: http://www.w3schools.com/php/php_xml_simplexml.asp
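To make the XML route concrete, here is a minimal two-file sketch. The table, columns, and element names are invented for illustration, and simplexml_load_file only fetches remote URLs when allow_url_fopen is enabled.

<?php
// dataexample.com/data.php - sketch: expose database rows as XML.
header('Content-Type: application/xml; charset=utf-8');

$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$xml = new SimpleXMLElement('<rows/>');
foreach ($pdo->query('SELECT id, name, price FROM products') as $row) {
    $node = $xml->addChild('row');
    $node->addChild('id', $row['id']);
    $node->addChild('name', htmlspecialchars($row['name']));
    $node->addChild('price', $row['price']);
}
echo $xml->asXML();

<?php
// liveexample.com/table.php - sketch: fetch the XML and draw the table.
$rows = simplexml_load_file('http://dataexample.com/data.php');
echo '<table>';
foreach ($rows->row as $row) {
    echo '<tr><td>' . htmlspecialchars((string) $row->name)
       . '</td><td>' . htmlspecialchars((string) $row->price) . '</td></tr>';
}
echo '</table>';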
I hope this gives you a pretty good idea of how to achieve what you want to accomplish, because there are a few options you can go by. Like Cristopher said, SOAP is one of them.
Have a great day.
So, currently I'm organizing my blog based on filename: to create a post, I enter the name of the file. Rather than storing the posts in the database, I store them in PHP files. Each time I create a post, a new row is created in the table with the filename and a unique ID. To reference the post (e.g. for comments), I get the name of the current file, then search the entries table for a matching filename; the post ID of the comment matches the ID of that post.
Obviously this isn't the standard way of organizing a blog, but I do it this way for a few reasons:
Clean URLs (even cleaner than mod_rewrite can provide, from what I've read)
I always have a hard copy of the post on my machine
Easier to remember the URL of a specific post (partly a consequence of clean URLs)
Now I know that the standard way would be storing each post in the database. I know how to do this, but clean URLs are the main problem. So now to my questions:
Is there anything WRONG with the way I'm doing it now, or could any problems arise from it in the future?
Can the same level of clean URLs that I get now be achieved with mod_rewrite? If so, links are appreciated.
I will be hosting this on a web host. Do only certain web hosts provide access to the files necessary for mod_rewrite, or is it generally standard on all web hosts?
Thanks so much guys!
P.S. To be clear, I don't plan on using a blogging engine.
As cletus said, this is similar to Movable Type. There's nothing inherently wrong with storing your data in files.
One thing that comes to mind is: how much are you storing in the files? Just the post content, or does each PHP file contain a copy of the entire design of the page as opposed to using a base template? How difficult would it be to change the design later on? This may or may not be a problem.
What exactly are you looking for in terms of clean URLs? Rewrite rules are quite powerful and flexible. By using mod_rewrite in conjunction with a main PHP file that answers all requests, you can pretty much have any URL format you want, including user-friendly URLs without obscure ID numbers or even file extensions.
Edit:
Here is how it would work with mod_rewrite and a main PHP file that processes requests:
The web server passes all requests (e.g., /my-post-title) to, say, index.php
index.php parses the request path ("my-post-title")
It looks up "my-post-title" in the database's "slug" or "friendly name" column (whatever you want to call it) and locates the appropriate row that way
It retrieves the post from the database
It applies a template to the post data
It returns the completed page to the client
This is essentially how systems like Drupal and WordPress work.
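For reference, a minimal sketch of that setup on Apache; the posts table and slug column are assumptions.

# .htaccess - send everything that isn't a real file or directory to index.php
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php [QSA,L]

<?php
// index.php - sketch of the front controller described above.
$slug = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');

$pdo  = new PDO('mysql:host=localhost;dbname=blog', 'user', 'pass');
$stmt = $pdo->prepare('SELECT title, body FROM posts WHERE slug = ?');
$stmt->execute([$slug]);
$post = $stmt->fetch(PDO::FETCH_ASSOC);

if ($post === false) {
    http_response_code(404);
    exit('Not found');
}
// Apply whatever template you like to $post['title'] and $post['body'].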
Also, regarding how Movable Type works, it's been a while since I've used it so I might be wrong, but I believe it stores all posts in the database. When you hit the publish button, it generates plain HTML files by pulling post data from the database and inserting it into a template. This is incredibly efficient when your site is under heavy load - there are no scripts running when a visitor opens up your website, and the server can keep up with heavy visitation when it only needs to serve up static files.
So obviously you've got a lot of options when figuring out how your solution should work. The one you proposed sounds fine, though you might want to give careful consideration to how you'll maintain a large number of posts in individual files, particularly if you want to change the design of the entire site later on. You might want to consider a templating engine like Smarty, and just store post data (no layout tags) in your individual files, for instance. Or just use some basic include() statements in your post files to suck in headers, footers, nav menus, etc.
What you're describing is kind of like how Movable Type works. The issues you'll need to cover are:
Syndication: RSS/Atom;
Sitemap: for Google;
Commenting; and
Tagging and filtering content.
It's not unreasonable not to use a database. If I were to do that, I'd use a templating engine like Smarty, which does a better job of caching the results than PHP does out of the box.