Communicating with a client-side serial port through the web - PHP

I'm having an issue with my PHP website (it uses an API, which is why it has to be PHP).
The website runs on a Raspberry Pi B+ which is connected to a thermal printer through a serial port; I used a Python script to test the printer.
Now my question is: is it possible to send data through the web to make the Raspberry Pi print something? That is, can I send an instruction like "write to the port '/dev/ttyxxx'" from the client side?
Thanks for your help

If you mean: "I have a PHP application that needs to access the server's serial port":
It is possible for PHP to access the serial port on the server (in this case, your Raspberry Pi); PHP treats it like a normal file.
From the PHP fopen manual page:
<?php
// Set timeout to 500 ms
$timeout = microtime(true) + 0.5;
// Set device control options (see the man page for stty)
exec("/bin/stty -F /dev/ttyS0 19200 sane raw cs8 hupcl cread clocal -echo -onlcr");
// Open the serial port
$fp = fopen("/dev/ttyS0", "c+");
if (!$fp) die("Can't open device");
// Set blocking mode for writing
stream_set_blocking($fp, 1);
fwrite($fp, "foo\n");
// Set non-blocking mode for reading
stream_set_blocking($fp, 0);
$line = "";
do {
    // Try to read one character from the device
    $c = fgetc($fp);
    // Wait for data to arrive
    if ($c === false) {
        usleep(50000);
        continue;
    }
    $line .= $c;
} while ($c != "\n" && microtime(true) < $timeout);
echo "Response: $line";
?>
If you mean: "I have a website that somehow needs to send something to the client's serial port":
Then the only solution is a browser app.
There's the Chrome Serial API, which Chrome apps can use.
Video Example

Several solutions come to mind; basically you would want your PHP page to parse the data and create a trusted output that can be printed (e.g. a PDF file, if your printer supports this).
Your next task is getting this trusted output to the printer. Again, several solutions exist:
Have your PHP script execute a system command, e.g. cat output.pdf > /dev/ttyxxx (it is clear here that I do not know the details of printing from Unix). Notice that the command does not depend on user input at all, i.e. you want to reduce the risk of injection attacks and the like. This requires that the output.pdf you created is trustworthy.
Have a cron job look for output files and send them to the printer. The same considerations as above apply. This might be better, as it avoids bottlenecks when multiple PHP sessions try to print at the same time.
Build a small framework around the above that can report back if errors occur etc. But still, basically option 1 + magic.
All in all, split the process into two steps. Step 1 accepts the input, parses it, checks for erroneous/malevolent input, and creates the output the printer needs. This can be done in a protected environment which, if compromised, does not expose the system (at least not more than PHP usually does).
Step 2 then takes care of sending the output to the hardware, whether via a bash script, an executable, or Python; a sketch of both steps follows.
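A minimal sketch of the two-step split, in PHP (the spool directory, file names, and the printer's device path are all assumptions, not a tested setup):
<?php
// Step 1 (web side): validate the input and queue a job file in a spool directory.
$spool = '/var/spool/webprint';
$text  = isset($_POST['text']) ? $_POST['text'] : '';
// Accept only printable ASCII so nothing hostile reaches the printer
if (!preg_match('/^[\x20-\x7E\r\n]*$/', $text)) die('Invalid input');
file_put_contents($spool . '/' . uniqid('job', true) . '.txt', $text);
?>
<?php
// Step 2 (run from cron): send each queued job to the printer, then remove it.
$spool = '/var/spool/webprint';
foreach (glob($spool . '/job*.txt') as $job) {
    $fp = fopen('/dev/ttyAMA0', 'w'); // the printer's serial device (assumed)
    if ($fp) {
        fwrite($fp, file_get_contents($job));
        fclose($fp);
        unlink($job);
    }
}
?>
Since the cron script never looks at the web request at all, a compromised step 1 can at worst queue garbage text, not execute commands.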

Related

Prevent PHP script from timing out while processing thousands of records

I have a PHP script that checks whether an email address is still valid with SMTP HELO; we mark that address as valid in a separate CSV and then send the newest offer we have (if the user requested that, of course). Addresses are taken from a txt file where they are stored line after line.
So the flow of the script is: open the txt file, grab all lines and place them in an array, iterate through each record in the array and send SMTP HELO, mark it as valid/invalid in a separate CSV, and send an email to the valid ones.
We often have 2,000+ records in each source txt file. Unfortunately, I have never gotten past the 400th record, as CloudFlare or nginx gives me a timeout.
I have tried the following setup inside my PHP script:
set_time_limit(0); // ignore php timeout
ignore_user_abort(true); // keep going even if the user pulls the plug
while(ob_get_level()) ob_end_clean(); // remove output buffers
ob_implicit_flush(true); // output stuff directly
and various "hacky" async approaches, but the result is always the same.
I thought about "slicing" my input data into safe-sized chunks and processing one file after another with a page reload in between, but I have no idea whether that approach is worth pursuing or whether I should look for something else.
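The slicing idea can work: process a fixed-size batch per request and have the page reload itself with an offset until the list is exhausted. A rough sketch (the offset parameter, the batch size, and the checkEmail() helper are hypothetical):
<?php
$batchSize = 100;
$offset    = isset($_GET['offset']) ? (int)$_GET['offset'] : 0;
$addresses = file('emails.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach (array_slice($addresses, $offset, $batchSize) as $address) {
    checkEmail($address); // the SMTP HELO check plus CSV bookkeeping
}

if ($offset + $batchSize < count($addresses)) {
    // Hand off to the next batch before the proxy timeout hits
    header('Location: ?offset=' . ($offset + $batchSize));
}
?>
That said, running the script from the command line (php script.php) or from cron sidesteps the problem entirely, since the CloudFlare and nginx timeouts only apply to HTTP requests.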

How to use WebIOPi in an existing website

I'm trying to use WebIOPi but I'm quite lost in getting it to work with my project.
Background:
I'm using a Raspberry Pi B+ running Wheezy. I'm working on a web-based application that will only be accessed locally. I have a bunch of PHP files in /var/www that run on Apache. Now I need to get my coin acceptor to work with the project. The coin acceptor (http://www.adafruit.com/products/787) sends single pulses (I only need one coin). I first tried the coin acceptor with a Python script using interrupts, and it works fine.
GPIO.setup(PIN_COIN_INTERRUPT,GPIO.IN)
GPIO.add_event_detect(PIN_COIN_INTERRUPT,GPIO.FALLING,callback=coinEventHandler)
But now I need to be able to capture those pulses and show them on a PHP page, updating the amount for every coin inserted. I've been studying WebIOPi for hours but I can only find info on reading a pin's status, not listening for interrupts. Can anybody point me in the right direction?
Any help would be greatly appreciated. Thank you!
So, you seem to have two problems:
1. how do I, on the server, detect a new coin event?
2. how do I then push this to the client browser?
I don't know WebIOPi at all, so I can't say there's no way to solve both with it, but as an alternative:
For part 1: you have a Python program which you said works; I would suggest running it as a background service and having it do something simple, like writing the latest coin count to a file:
import RPi.GPIO as GPIO

def coinEventHandler(*arguments):
    try:
        f = open("coin.txt", "rt")
        cnt = int(f.read())
        f.close()
    except (IOError, ValueError):  # file doesn't exist or doesn't contain an int
        cnt = 0
    f = open("coin.txt", "wt")
    f.write(str(cnt + 1))  # record the coin that triggered this event
    f.close()

GPIO.setup(PIN_COIN_INTERRUPT, GPIO.IN)
GPIO.add_event_detect(PIN_COIN_INTERRUPT, GPIO.FALLING, callback=coinEventHandler)
For part 2:
1. Create a page which returns the value of "coin.txt"
2. Use Ajax (e.g. jQuery) to poll for this value from your client page.
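The page for step 1 can be tiny; a sketch in PHP (assuming the Python service writes coin.txt somewhere the web server can read):
<?php
// coin.php - return the current coin count for the polling client
header('Content-Type: text/plain');
$file = '/var/www/coin.txt'; // path written by the Python service (assumed)
echo is_readable($file) ? (int)file_get_contents($file) : 0;
?>
The client page then requests coin.php every second or two and updates the displayed amount whenever the value changes.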

file_get_contents returns different results when called from different servers

I'm running a simple piece of php code, like so:
echo file_get_contents( 'http://example.com/service?params' );
When I run the code on my local machine (at work) or from my shared hosting account, or if I simply open the URL in my browser, I get the following result:
{"instances":[{"timestamp":"2014-02-28 18:03:39.0","ids":[{"id":"525125875"}],"cId":179,"cInstanceId":9264183220}]}
However, when I run the exact same code on either of two different web servers at my workplace, I get the following slightly different result:
{"instances":[{"timestamp":"2014-02-28 18:03:39.0","ids":[{"id":"632572147"}],"cId":179,"cInstanceId":4302001980}]}
Notice how a couple of the numbers are different, and that's all. Unfortunately, these different numbers are the wrong numbers. The result should be identical to the first one.
The server I'm making the call to is external to my workplace.
I've tried altering the file_get_contents call to include headers and masquerade as a browser, but nothing seems to give a different result (well, other than an error due to an accidentally malformed request). I can't use cURL because it's not installed on the servers where this code needs to be deployed.
Any clue what could be causing the differing results? Perhaps something in the request headers? Although I'm not sure why something in the headers would cause the service to return different data.
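For the record, this is the sort of variation I tried - attaching browser-like headers through a stream context (the exact header values here are just examples):
<?php
// Fetch the service with browser-like request headers
$context = stream_context_create(array(
    'http' => array(
        'method' => 'GET',
        'header' => "User-Agent: Mozilla/5.0\r\n" .
                    "Accept: application/json\r\n",
    ),
));
echo file_get_contents('http://example.com/service?params', false, $context);
?>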
thanks.
(edit)
The service URL I'm testing with is:
http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/CygnetLastNInstancesServlet?lastN=1&cygnetId=179&endTimestamp=2014-02-28+21%3A35%3A48
The response it gives is a bit different than what I posted above; I simplified and shortened the responses in my SO post to make it easier to read--but the essential information given, and the differences, are still the same.
I give the service a timestamp, the number of images I want to fetch which were created prior to that timestamp, and a 'cygnetId', which defines what sort of data I want the images to show (solar wind velocity, radiation belt intensity, etc).
The service then echoes back some of the information I gave it, as well as URL segments for the images I requested.
With the returned data, I can build the URL for an image.
Here's the URL for an image built from a "correct" response:
http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/StreamByDataIdServlet?allDataId=525125875
Here's the URL for an image built from a "wrong" response:
http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/StreamByDataIdServlet?allDataId=632572147
If you click the links above, you'll see that the "wrong" URL does not open an image--the response is blank.

PHP URL - Is this a viable hack?

Following on from this question, I realised you can only use $_POST when using a form... d'oh.
Using jQuery or cURL when there's no form still wouldn't address the problem that I need to post a long string in the URL.
Problem
I need to send data to my database from a desktop app, so I figured the best way is to use the following URL format and append the data to the end, so it becomes:
www.mysite.com/myscript.php?testdata=somedata,moredata,123,xyz,etc,etc,thisgetslong
With my previous script, I was using $_GET to read the [testdata=] string, and my web host told me $_GET can only read 512 chars, so that was the problem.
Hack
Using the script below, I'm now able to write thousands of characters; my question is: is this viable, or is there a better way?
<?php
include("connect.php"); // Connect to the database
// Hack - read the URL directly and search the string for the data I need
$actual_link = "http://$_SERVER[HTTP_HOST]$_SERVER[REQUEST_URI]";
$findme = '=';
$pos = strpos($actual_link, $findme) + 1; // find the start of the data to write
$data = mysql_real_escape_string(substr($actual_link, $pos)); // grab data from the url, escaped for SQL
$result = mysql_query("INSERT INTO test (testdata) VALUES ('$data')");
// Check the result
if ($result) { echo $data; }
else { echo "Error " . mysql_error(); }
mysql_close();
?>
Edit:
Replaced the image with PHP code.
I've learned how not to ask a question - don't use the word "hack", as it ruffles people's feathers, and don't post code as an image.
I just don't get how to pass a long string to a formless PHP page, and whilst I appreciate people's responses, the answers about cURL don't make sense to me. From this page, it's not clear how you'd pass a string from a .NET app, for example. I clearly need to do lots of research, and I apologise for my asinine question(s).
The URL has a practical limit of roughly 2000 characters, so you should not be passing thousands of characters through it. The query portion of the URL is only meant for a relatively short set of parameters.
Instead, you can build up a request body to send via cURL/jQuery/etc. for POSTing. This is how a browser submits form data, and you should probably do the same; see the sketch below.
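A minimal sketch of the cURL version (the URL and field name mirror your question; $longString stands in for the data):
<?php
// POST the long string in the request body instead of the URL
$longString = 'somedata,moredata,123,xyz,...'; // however long it needs to be
$ch = curl_init('http://www.mysite.com/myscript.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('testdata' => $longString));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
?>
On the receiving side, $_POST['testdata'] holds the full string, with no 512-character limit. A .NET client would do the same thing with its own HTTP classes: put the data in the POST body, not in the URL.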
In your scenario, there are two important elements that you need to examine.
First, what is the client that is performing the http operation? I can't tell from your text if the client is going to be a browser, or an application. The client is whatever you have in your solution that is going to be invoking a GET or POST operation.
This is important. When you read about query string length limitations online, it's usually within the context of someone using a browser with a long URL. There is no standard across browsers for maximum URL length. But if you think about it in practical fashion, you'd never want to share an immensely large URL by posting it somewhere or sending it in an e-mail; having to do the cut-and-paste into a client browser would frustrate someone pretty quickly. On the other hand, if the client is an application, then it's just two machines exchanging data and there's really no human factor involved.
The second point to examine is the web server. The web server implementation may pose limitations on URL length, or maybe not. Again, there is no standard.
In any event, if you use a GET operation, your constraint will be the minimum of what both your client AND server allow (i.e. if both have no limit, you have no limit; if either has a limit of 200 bytes, your limit is 200 bytes; if one has a 200 byte limit and the other has a 400 byte limit, your limit is 200 bytes)
Taking a look back, you mentioned "desktop app" but have failed to tell us what language you're developing in, and what operating system. It matters -- that's your CLIENT.
Best of luck.

Send empty packets to keep the connection from timing out

Real-world problem: I'm generating a page dynamically. The page is an XML document which is retrieved by the user (via curl, file_get_contents, or whatever server-side scripting allows).
Once the user makes the request, he starts waiting while I retrieve a large set of data from the db and build an XML document from it (using the PHP DOM objects). Once done, I fire "print $document->saveXML()". It takes about 8 minutes to create this 40-megabyte document; once it is ready, I serve it. Now I have a user with a 60-second connection timeout: he says I need to send at least one octet every 60 seconds. How can I achieve such a thing?
Since it's useless to post 23987452 lines of code that nobody is going to read, I'll explain the script which serves this page in very-pseudo-pseudo-code:
grab all the data from the db: an enormous set of rows
create a DOMDocument
loop through each row and add a node to the DOMDocument to hold that piece of data
call $dom->saveXML() to get the document as a string
print the string so the user retrieves an XML document
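In PHP, that flow looks roughly like this (the table, column names, and $db handle are made up; the point is the shape of the loop):
<?php
// Build the whole document in memory, then send it all at the end
$dom  = new DOMDocument('1.0', 'UTF-8');
$root = $dom->appendChild($dom->createElement('rows'));

foreach ($db->query('SELECT id, payload FROM huge_table') as $row) {
    $node = $dom->createElement('row', htmlspecialchars($row['payload']));
    $node->setAttribute('id', $row['id']);
    $root->appendChild($node);
}

print $dom->saveXML(); // nothing reaches the client until this line runs
?>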
1) I can't send real data early, since it is an XML document and has to begin with "<?xml..." so as not to mess up the parser.
2) The user can't change the firewall/server config.
3) "Buy a more powerful server" is not an option.
4) I tried an ob_start() at the top of the script and then, at the beginning of each loop iteration, header("Transfer-Encoding: chunked"); ob_flush(); - but nothing: nothing comes through before the 8 minutes are up.
Help me, guys!
I would:
1. Generate a random value.
2. Start the XML-generating script as a background process (see e.g. here).
3. Make the generating script write the XML into a file named after the random value once it is done.
4. Frequently poll for the existence of that file, e.g. using Ajax requests every 10 seconds, until it's there; then fetch the XML from it. A sketch follows.
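A rough sketch of the kick-off and polling endpoints (the script names, token scheme, and output directory are all assumptions):
<?php
// start.php - launch the generator in the background and hand back a token
$token = md5(uniqid(mt_rand(), true));
exec("php generate_xml.php $token > /dev/null 2>&1 &"); // returns immediately
echo $token;
?>
<?php
// poll.php - called repeatedly by the client until the file shows up
$token = preg_replace('/[^a-f0-9]/', '', $_GET['token']); // sanitize the token
$file  = "/tmp/xml-$token.xml";
if (is_file($file)) {
    header('Content-Type: text/xml');
    readfile($file);
} else {
    header('HTTP/1.0 202 Accepted'); // not ready yet, poll again later
}
?>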
You can send padding and still have it be valid XML. Trivial examples include whitespace in a lot of places, or comments. Once you've sent the XML declaration, you could start a comment and keep sending padding:
<?xml version="1.0"?>
<!-- this comment to prevent timeouts:
30
60
90
⋮
or whatever, the exact data doesn't matter of course.
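A sketch of that trick in PHP (fetchRowsSomehow() and buildNode() are hypothetical stand-ins for your db loop):
<?php
header('Content-Type: text/xml');
// Get the declaration and an open comment out the door immediately
echo '<?xml version="1.0"?>' . "\n<!-- keeping the connection alive:\n";
flush();

$body = '<rows>';
foreach (fetchRowsSomehow() as $i => $row) {
    $body .= buildNode($row);
    if ($i % 1000 == 0) { // emit padding well inside the 60-second window
        echo ".\n";
        flush();
    }
}
$body .= '</rows>';

echo "-->\n" . $body; // close the comment, then send the real document
?>
Note that output buffering or gzip compression at the web-server level can still hold these bytes back, which may be why your ob_flush() attempt showed nothing.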
That's the easy solution. The better solution is to make the generation run in the background and, e.g., use AJAX to poll the server every 10 seconds to check if it's done. Or implement an alternate notification method (e.g., email a URL when the document is ready).
If it isn't a browser doing the accessing, you may want a trivially simple API: one request to start generating the document, and another to fetch it. The fetch may return "not ready yet" as, e.g., HTTP status code 500, 503, or 504; the requesting script should then retry later. (For example, with curl, the --retry option will do this.)
