I wrote a program that reads a binary file into RAM and then sends it to my server in an HTTP request. It uses the PUT method, and the binary file is in the body.
Now how can I tell my server to receive the file and save it in a folder?
If possible, I'd like to do this without any additional libraries that I would need to download (unless using one is more efficient).
I know there are some similar threads to this one, but they were either about receiving text, about doing it with libraries, or there simply was no sufficient answer.
I'd also like to know whether it would be more efficient or smarter to use the POST method (or any other) instead of PUT.
You can get at the data by opening a stream to php://input, like so:
$datastr = fopen('php://input', 'rb');
if ($fp = fopen('outputfile.bin', 'wb')) {
    while (!feof($datastr)) {
        fwrite($fp, fread($datastr, 4096));
    }
    fclose($fp);
}
fclose($datastr);
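For testing, a PUT request with a binary body can also be sent from PHP itself with the cURL extension. This is only a hedged sketch: the URL and the file name below are placeholders, not anything from your setup.
<?php
// Sketch: send a local binary file as the body of a PUT request.
// 'http://example.com/receive.php' and 'upload.bin' are placeholders.
$ch = curl_init('http://example.com/receive.php');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
curl_setopt($ch, CURLOPT_POSTFIELDS, file_get_contents('upload.bin'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);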
Whether to use POST or anything else depends on what is happening with the data, and whether you care about being RESTful or such. See other questions/answers about idempotency.
The advantage I would see with using POST is that it's more commonly used (most submission forms where you upload a file use it), and it therefore has more support from within PHP and HTML.
I am using BIRT 4.5 and PHP/MySQL.
I am able to run BIRT reports with PHP. I have enabled Tomcat and copied 'birt-runtime-4_5_0/WebViewerExample' to tomcat/webapps and renamed it to birt.
So I can run the BIRT viewer with PHP:
<?php
$fname = "report/test.rptdesign&__showtitle=false";
$dest = "http://localhost:8081/birt/frameset?__report=";
$dest .= $fname;
header("Location: $dest" );
?>
The above code is working fine, but the report connection string is already saved in the test.rptdesign file.
I want to remove the DB login credentials from the test.rptdesign file and assign them when the report is opened with PHP.
I have tried report parameters, but all the parameters are displayed in the browser address bar.
Is there any secure way to do this? This is very important when we need to change the database location. It is very hard to change the data source of each and every .rptdesign file.
Thank You,
Supun
I don't believe using report parameters to handle a database connection is the right way. In addition to the address-bar problem you mentioned, it will cause unexpected issues: for example, you won't be able to use this database to feed the dataset of another report parameter.
With Tomcat the best approach is to externalize the database connection in a connection pool: easy, robust, and reports might run significantly faster.
Alternatively the datasource can be externalized in a BIRT library (.rptlibrary) and shared across all report-designs: thus only the library needs to be updated when the database location is changing.
I agree with Dominique that sending the database parameters via the query is most likely an inappropriate solution - and you've not given any explanation of whether this is a requirement of the system.
But it is quite trivial to proxy the request via PHP and decorate the URL with the required parameters, something like...
<?php
$_GET['__showtitle'] = isset($_GET['__showtitle']) ? $_GET['__showtitle'] : 'false';
$_GET['__report']=$fname; // NB this should be NULL in your code!
$_GET['dbuser']='a_db_user';
$_GET['passwd']='s3cr3t';
$qry=http_build_query($_GET);
$url="http://localhost:8081/birt/frameset?" . $qry;
// if it's simply returning HTML, then just....
$fin=fopen($url, 'r');
while ($l=fgets($fin)) {
print $l;
}
exit;
If the returned content contains relative links then you'll need to rewrite the output stream. If the content type is unusual or you want to project other headers (e.g. for caching) to the browser, then you'll need to use cURL, capture the headers and relay them.
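For example, here's a hedged sketch of relaying the upstream Content-Type header with PHP's cURL extension; treat it as illustrative rather than production code.
<?php
// Sketch: proxy the BIRT response and forward its Content-Type header.
$qry = http_build_query($_GET);
$url = "http://localhost:8081/birt/frameset?" . $qry;

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Capture response headers so interesting ones can be relayed.
$contentType = null;
curl_setopt($ch, CURLOPT_HEADERFUNCTION, function ($ch, $header) use (&$contentType) {
    if (stripos($header, 'Content-Type:') === 0) {
        $contentType = trim(substr($header, strlen('Content-Type:')));
    }
    return strlen($header); // cURL requires the consumed length to be returned
});
$body = curl_exec($ch);
curl_close($ch);

if ($contentType !== null) {
    header('Content-Type: ' . $contentType);
}
echo $body;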
I have a Raspberry Pi running a LAMP stack, with an Arduino and a camera hooked up. The end goal is that when my Arduino takes a photo, it writes the image to a PHP endpoint, which then emails it.
Right now, I'm trying to get the image to get placed in the right place.
Here's my php snippet:
<?php
print_r($_FILES);
move_uploaded_file($_FILES["file"]["tmp_name"], "/var/www/images/mypic.jpg");
?>
My python code is doing:
import requests
r = requests.get('https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png')
r2 = requests.post('http://192.168.1.100/accept_image.php', data = r.content)
I realize the image is going to get overwritten. That's not a problem. I can always add a timestamp later etc etc.
However, this gives me an error code. I'm a beginner at PHP and use Python mainly for scientific computing, so I'm not sure if I'm passing the picture correctly. I know that the IP is correct, as I can connect to it and it's all on the same network.
I have looked at this Python script send image to PHP but am still getting stuck.
EDIT:
Upon further debugging:
print_r($_POST);
returns an empty array. Not sure why?
To have a file accessible to PHP in $_FILES, you must use HTML-form style encoding of files (multipart/form-data). This is different from a standard POST request that includes the content directly in the request body. This method is explained in http://php.net/manual/en/features.file-upload.post-method.php - the key here is:
PHP is capable of receiving file uploads from any RFC-1867 compliant browser.
What you're trying to do is not sending it the RFC 1867 way, but as a plain old POST request.
So you have two options:
Send the data from Python using multipart/form-data encoding. It shouldn't be too hard but requires some work on the Python side.
Just grab the data on the PHP side not from $_FILES but by directly reading it from the POST body, like so:
$data = file_get_contents('php://input');
file_put_contents("/var/www/images/mypic.jpg", $data);
It's more of a memory hog on the PHP side, and it means you need to do some validation that you actually got the data (a rough sketch follows below), but it is quite a bit simpler.
To clarify, in PHP $_POST is only populated when the Content-type request header is multipart/form-data or application/x-www-form-urlencoded (and of course the data is encoded in the proper way).
If you get a POST request with anything else in the body, you can read it directly by reading from the php://input stream, and you're responsible for handling / decoding / validating it.
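A hedged sketch of option 2 with that basic validation added; the target path is the one from the question, and the checks are only illustrative.
<?php
// Sketch: read the raw POST body and validate it before writing to disk.
$data = file_get_contents('php://input');

if ($data === false || strlen($data) === 0) {
    http_response_code(400);
    exit('No data received');
}

// getimagesizefromstring() returns false if the body is not a valid image.
if (getimagesizefromstring($data) === false) {
    http_response_code(400);
    exit('Body is not an image');
}

file_put_contents('/var/www/images/mypic.jpg', $data);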
I'm currently developing a RESTful JSON API in PHP. I want to send a PUT request to items/:id to update a record. The data will be transferred as application/json.
I want to call the API with
curl -H "Content-Type: application/json" -X PUT -d '{"example" : "data"}' "http://localhost/items/someid"
On the server side, I'm not able to retrieve the request body. I tried
file_get_contents("php://input");
but this returns an empty string. Also a fopen()/fread() combination doesn't work.
When calling via POST, everything works great; I can read the JSON perfectly on the server side. But then the API isn't RESTful anymore. Does anyone have a solution for this? Is there another way to send and receive JSON?
btw, I'm developing the API with the Slim Framework.
php://input is only readable once for PUT requests:
Note: A stream opened with php://input can only be read once; the stream does not support seek operations. However, depending on the SAPI implementation, it may be possible to open another php://input stream and restart reading. This is only possible if the request body data has been saved. Typically, this is the case for POST requests, but not other request methods, such as PUT or PROPFIND.
http://php.net/manual/en/wrappers.php.php
The Slim framework already reads the data upon request. Take the data from the Request object, into which it has been read.
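A minimal sketch of that, assuming Slim 2.x, where the raw body is exposed by the Request object:
<?php
// Sketch, assuming Slim 2.x: read the raw JSON body from the Request object.
$app = new \Slim\Slim();

$app->put('/items/:id', function ($id) use ($app) {
    $body = $app->request()->getBody(); // raw request body as a string
    $data = json_decode($body, true);
    // ... update the record identified by $id using $data ...
    echo json_encode(array('id' => $id, 'example' => $data['example']));
});

$app->run();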
On the server side, I'm not able to retrieve the request body. I tried file_get_contents("php://input");
You can only read php://input once per request: the stream is consumed as you read it, so if you call file_get_contents('php://input') twice, the second call will return an empty string. Slim's request object contains the values you need, so:
<?php
$app = new Slim();
$app->put('/items/someid', function () use ($app) {
    echo $app->request()->put('example'); // should display "data".
});
The example from the PHP manual uses fopen to access php://input in read mode. Have you tried doing it that way instead?
EDIT: The manual page for php:// says some stuff that seems to suggest that PUT data might not be available in some cases!
Note: A stream opened with php://input can only be read once; the stream does not support seek operations. However, depending on the SAPI implementation, it may be possible to open another php://input stream and restart reading. This is only possible if the request body data has been saved. Typically, this is the case for POST requests, but not other request methods, such as PUT or PROPFIND.
I don't know where this will leave you regarding PUT processing. One page seems to say it's possible; the other seems to imply that it won't work under the wrong set of circumstances.
I was reading the Slim framework documentation the other day and it said that some browsers have problems with PUT and DELETE.
Excerpt:
Unfortunately, modern browsers do not provide native support for PUT requests. To work around this limitation, ensure your HTML form’s method is “post”, then add a method override parameter to your HTML form like this:
<form action="/books/1" method="post">
... other form fields here...
<input type="hidden" name="_METHOD" value="PUT"/>
<input type="submit" value="Update Book"/>
</form>
Source: http://www.slimframework.com/documentation/stable
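Outside Slim, a plain PHP script could honor the same override parameter with something like the sketch below (Slim itself handles this for you automatically):
<?php
// Sketch: treat a POST carrying a _METHOD override as the intended verb.
$method = $_SERVER['REQUEST_METHOD'];
if ($method === 'POST' && isset($_POST['_METHOD'])) {
    $method = strtoupper($_POST['_METHOD']);
}

if ($method === 'PUT') {
    // handle the update here
}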
From a tutorial I read on SitePoint, I learned that I could load JS files through PHP (it was actually mentioned in a comment). The code for this was in this form:
<script src="js.php?script1=jquery.js&scipt2=main.js" />
The purpose of using PHP was to reduce the number of HTTP requests for JS files. But from the markup above, it seems to me that there are still going to be the same number of requests as if I had written two tags for the JS files (I could be wrong, that's why I'm asking).
The question is how is the PHP code supposed to be written and what is/are the advantage(s) of this approach over the 'normal' method?
The original poster presumably meant that
<script src="js.php?script1=jquery.js&scipt2=main.js" />
Will cause less http requests than
<script src="jquery.js" />
<script src="main.js" />
That is because js.php will read all the script names from the GET parameters and then print them out as a single file. This means that there's only one roundtrip to the server to get all scripts.
js.php would probably be implemented like this:
<?php
$script1 = $_GET['script1'];
$script2 = $_GET['script2'];
echo file_get_contents($script1); // Load the content of jquery.js and print it to browser
echo file_get_contents($script2); // Load the content of main.js and print it to browser
Note that this may not be an optimal solution if only a small number of scripts is required, since a web browser can already load a few scripts in parallel from the same domain; the benefit appears once there are more scripts than the browser will fetch in parallel.
You will need to implement caching to avoid loading and concatenating all your scripts on every request. Loading and combining all scripts on every request will consume a lot of CPU.
IMO, the best way to do this is to combine and minify all script files into a big one before deploying your website, and then reference that file. This way, the client just makes one roundtrip to the server, and the server does not have any extra load upon each request.
Please note that the PHP solution provided is by no means a good approach, it's just a simple demonstration of the procedure.
The main advantage of this approach is that there is only a single request between the browser and server.
Once the server receives the request, the PHP script combines the javascript files and spits the results out.
Building a PHP script that simply combines JS files is not at all difficult. You simply include the JS files and send the appropriate content-type header.
Where it gets more difficult is when you want to worry about caching.
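A minimal sketch of such a combining script, without caching; the whitelist and file names below are assumptions, not part of the original question.
<?php
// Sketch: combine whitelisted JS files into one response.
$allowed = array('jquery.js', 'main.js'); // whitelist is an assumption
header('Content-Type: application/javascript');
foreach ($_GET as $param => $file) {
    if (strpos($param, 'script') === 0 && in_array($file, $allowed, true)) {
        readfile($file);
        echo ";\n"; // guard against files that lack a trailing semicolon
    }
}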
I recommend you check out minify.
<script src="js.php?script1=jquery.js&scipt2=main.js" />
That's:
invalid (ampersands have to be encoded)
hard to expand (using script[]= would make PHP treat it as an array you can loop over)
not HTML compatible (always use <script></script>, never <script />)
The purpose of using PHP was to reduce the number of HTTP requests for JS files. But from the markup above, it seems to me that there are still going to be the same number of requests as if I had written two tags for the JS files (I could be wrong, that's why I'm asking).
You're wrong. The browser makes a single request. The server makes a single response. It just digs around in multiple files to construct it.
The question is how is the PHP code supposed to be written
The steps are listed in this answer
and what is/are the advantage(s) of this approach over the 'normal' method?
You get a single request and response, so you avoid the overhead of making multiple HTTP requests.
You lose the benefits of the generally sane cache control headers that servers send for static files, so you have to set up suitable headers in your script.
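For instance, a combining script might emit headers along these lines (the values are only illustrative):
<?php
// Sketch: cache-related headers for the combined script output.
header('Content-Type: application/javascript');
header('Cache-Control: public, max-age=86400'); // let the browser cache it for one day
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', filemtime('main.js')) . ' GMT');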
You can do it like this.
The concept is quite easy, but you can make it a bit more advanced.
Step 1: merging the file
<?php
// $pathto should point at the directory that holds your JS files
$scripts = $_GET['script'];
$contents = "";
foreach ($scripts as $script)
{
    // validate the $script here to prevent inclusion of arbitrary files
    $contents .= file_get_contents($pathto . "/" . $script);
}
// post processing here
// eg. jsmin, google closure, etc.
echo $contents;
?>
usage:
<script src="js.php?script[]=jquery.js&script[]=otherfile.js" type="text/javascript"></script>
Step 2: caching
<?php
function cacheScripts($scriptsArray, $outputdir)
{
    $pathto = "/path/to/js/src"; // directory that holds the original JS files
    $filename = sha1(join("-", $scriptsArray)) . ".js";
    $path = $outputdir . "/" . $filename;
    if (file_exists($path))
    {
        return $filename;
    }
    $contents = "";
    foreach ($scriptsArray as $script)
    {
        // validate the $script here to prevent inclusion of arbitrary files
        $contents .= file_get_contents($pathto . "/" . $script);
    }
    // post processing here
    // eg. jsmin, google closure, etc.
    file_put_contents($path, $contents);
    return $filename;
}
?>
<script src="/js/<?php echo cacheScripts(array('jquery.js', 'myscript.js'),"/path/to/js/dir"); ?>" type="text/javascript"></script>
This makes it a bit more advanced. Please note, this is semi-pseudo code to explain the concepts. In practice you will need to do more error checking and you need to do some cache invalidation.
To do this in a more managed and automated way, there's Assetic (if you can use PHP 5.3):
https://github.com/kriswallsmith/assetic
(Which more or less does this, but much better)
Assetic
Documentation
https://github.com/kriswallsmith/assetic/blob/master/README.md
The workflow will be something along the lines of this:
use Assetic\Asset\AssetCollection;
use Assetic\Asset\FileAsset;
use Assetic\Asset\GlobAsset;
$js = new AssetCollection(array(
    new GlobAsset('/path/to/js/*'),
    new FileAsset('/path/to/another.js'),
));
// the code is merged when the asset is dumped
echo $js->dump();
There is a lot of support for many formats:
js
css
lots of minifiers and optimizers (CSS, JS, PNG, etc.)
Support for sass, http://sass-lang.com/
Explaining everything is a bit outside the scope of this question. But feel free to open a new question!
PHP will simply concatenate the two script files and send a single script with the contents of both files, so you will only make one request to the server.
Using this method, there will still be the same number of disk I/O requests as if you had not used the PHP method. However, in the case of a web application, disk I/O on the server is rarely the bottleneck; the network is. What this allows you to do is reduce the overhead associated with requesting the files from the server over the network via HTTP (i.e. reduce the number of messages sent over the network). The PHP script outputs the concatenation of all of the requested files, so you get all of your scripts in one HTTP request operation rather than several.
Looking at the parameters being passed to js.php, it can load two JavaScript files (or any number, for that matter) in one request. It would just look at its parameters (script1, script2, scriptN) and load them all in one go, as opposed to loading them one by one with a normal script tag.
The PHP file could also do other things, like minifying before outputting, although it's probably not a good idea to minify every request on the fly.
The way the PHP code would be written is that it would look at the script parameters and just load the files from a given directory. However, it's important to note that you should check the file type and/or location before loading. You don't want to give people a backdoor through which they can read all the files on your server.
I currently have two PHP files (header and footer) on my server that are used as a template and are retrieved by another server, which wraps the template files around its software.
Is it possible to display different content based on their URL in my template files in PHP? If so, how?
I don't know if this matters, but the other server uses ColdFusion and not PHP.
The PHP file could check a parameter in the URL, like template.php?url=stackoverflow, so in the PHP file you could check:
if ($_GET['url'] == 'stackoverflow') {
    echo "Stack Overflow template";
} else if ($_GET['url'] == 'lol') {
    echo "Another template";
} else {
    echo "error";
}
Edit:
Now the server getting the content just needs to add that parameter to the URL, and it gets the template it wants. You could set a default template in case no parameter is specified.
Would it be possible for their URL to contain a GET variable, like www.theirwebsite.com/?chrome=red? Then your file could read that and serve different themes based on the variable's value.
Not quite sure if I'm understanding you, but you can certainly display different content based on a URL.
$remote_content = file_get_contents($someurl);
switch ($someurl) {
    case 'www.google.com':
        display_google_content();
        break;
    case 'www.microsoft.com':
        throw new Exception('BSOD');
        break;
    default:
        display_standard_content();
}
There are two obvious possibilities for how the remote server is attaching your code that spring to mind. The first is using JavaScript to instruct the client to go out and get your content, then write it to the appropriate locations. This should be rather obvious when looking at the HTML source code generated by their application.
The more likely scenario, in my opinion, is that they use CFHTTP to retrieve the content and inject it directly. CFHTTP mimics a browser call -- it's a standard HTTP 1.1 request. It's not going to contain a reference to the URL requested on their server. Unless you can convince them to add identifying information to the request, all you'll be able to tell on your server is that the request came from CF (by examining the remote agent).
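If knowing that the request came from their ColdFusion server is enough for your purposes, a hedged sketch of branching on the remote agent follows; the exact User-Agent string CFHTTP sends is an assumption here, so check your access logs first.
<?php
// Sketch: branch on the User-Agent header.
// The 'ColdFusion' substring is an assumption; verify it against your logs.
$agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

if (stripos($agent, 'ColdFusion') !== false) {
    echo "Template variant for the ColdFusion server";
} else {
    echo "Default template";
}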