Unit testing a PHP file with no method defined

I have a PHP file that only connects to a DB and displays the result set in an HTML table. How do we create a unit test script for such a PHP file?
Thanks a lot in advance.

If you did not define any functions in your code (and don't want to), this is not an ideal scenario. You can still write a PHPUnit test that starts PHP's built-in web server and performs HTTP requests (using Guzzle, perhaps?). Then, ensure that what's being generated is consistent with the records of your database.
However, it may not really be necessary to write tests for this functionality alone, since it can easily be tested by hand and you risk writing a lot of duplicate code between your script and the test(s).
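A minimal sketch of that approach, assuming the script under test is served from a public/ directory as report.php and that PHPUnit and Guzzle are installed via Composer (every name here is illustrative, not from the question):

<?php
use PHPUnit\Framework\TestCase;
use GuzzleHttp\Client;

class ReportPageTest extends TestCase
{
    private static $server;

    public static function setUpBeforeClass(): void
    {
        // Start PHP's built-in web server in the background for the test run.
        self::$server = proc_open('php -S 127.0.0.1:8765 -t public', [], $pipes);
        usleep(250000); // give the server a moment to boot
    }

    public static function tearDownAfterClass(): void
    {
        proc_terminate(self::$server);
    }

    public function testTableReflectsDatabaseRecords(): void
    {
        $client = new Client(['base_uri' => 'http://127.0.0.1:8765']);
        $html = (string) $client->get('/report.php')->getBody();

        // Assert against a row you know exists in the test database.
        $this->assertStringContainsString('<table', $html);
        $this->assertStringContainsString('Alice', $html);
    }
}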

Related

Make a Linux user able to include PHP code but not able to view it?

User A has some PHP library files. User B needs access to the library. Is it possible permission-wise to make user B able to include the PHP file but not able to view the source code?
User A library entry file is lib.php.
User B uses lib.php in his start.php like this:
include 'path/to/lib.php';
However user B won't be able to view the content of lib.php or any other class files thereof.
Is this possible?
You're trying to find a way to do something that can't be done properly; at best it could be done in a hackish, decidedly dirty way.
You really should consider writing an API for your application that contains all your logic. Then you could handle everything else with user permissions and so on, perfectly clean and state of the art.
Nobody but the API devs can look into the code, but everyone can use it according to their user permissions.
Every other approach is just too hard to manage and will cause more problems than simply writing an API. It's worth the time.
Basically what you ask is not possible. The PHP interpreter needs to be able to read the file in order to include it, and if the PHP process can read it then your untrusted user can write some code that would read it in and dump it back out.
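For instance, nothing stops user B from putting something like this in start.php (the path is the placeholder from the question):

<?php
// If the PHP process can include lib.php, it can read it too:
echo '<pre>' . htmlspecialchars(file_get_contents('path/to/lib.php')) . '</pre>';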
A few options you have are:
1) Use an API. This would allow you to keep your code secret, as you'd only expose the API. It might take a few days' work to implement, though (or might not even be possible - impossible to say without knowing what you are doing), so it's probably not suitable in your situation.
2) Obfuscate your code. There are a number of PHP code obfuscators out there. It wouldn't stop prying eyes completely, but it might be enough for your purposes.
3) Create a stub include file. If what your library includes isn't all critical to the running of the code, you could create a cut-down stub library for your client to code against, then replace it with the real thing when they're done.
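As a rough illustration of option 3, assuming the real library exposes a function such as lib_process() (a hypothetical name), the stub keeps the signatures but none of the logic:

<?php
// stub-lib.php -- same public interface as the real lib.php, with dummy
// implementations for the client to develop against.
function lib_process(array $input): array
{
    // Real logic withheld; return canned data of the right shape.
    return ['status' => 'ok', 'result' => []];
}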

A little php script (logical help needed)

I am a .NET developer developing an application for a company. For that I need to write a little PHP script to meet my needs.
My app needs to check some information from the internet that changes randomly almost every second. I am thinking of making a PHP script that can give the app the information it needs. My idea is to use a simple text file instead of a MySQL database (though I am free to use a MySQL db as well), and then make two PHP pages, for example writer.php and reader.php.
The job of writer.php is very simple: it saves the submitted data to the text file I want to use as a db.
reader.php will read the text file and then show it as plain text, and on every read it will also empty the text file. This file will be read by my app.
Work done.
Now the logical questions.
reader.php will be read by 40 clients at the same time. Will there be any conflicts?
Will this method be faster than a MySQL db?
Is this method more resource-consuming than a MySQL db?
You will have to lock the file for I/O for the duration of the write (PHP's flock() function). This may slow things down a bit when there are more clients at the same time: while the file is locked by one user, everyone else has to wait. The other problem that may appear when writing a lot of data is that the write queue may grow without bound when there are many write requests.
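A minimal sketch of the two scripts using flock(); the file name data.txt is illustrative:

<?php
// writer.php -- append the submitted data under an exclusive lock.
$fp = fopen('data.txt', 'a');
if (flock($fp, LOCK_EX)) {
    fwrite($fp, $_POST['data'] . "\n");
    flock($fp, LOCK_UN);
}
fclose($fp);

<?php
// reader.php -- read everything, then truncate. An exclusive lock is needed
// here too, because the read also empties the file.
$contents = '';
$fp = fopen('data.txt', 'c+');
if (flock($fp, LOCK_EX)) {
    $contents = stream_get_contents($fp);
    ftruncate($fp, 0); // empty the file after reading
    flock($fp, LOCK_UN);
}
fclose($fp);
echo $contents;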
MySQL seems the better idea, as it caches both write and read requests, and it is implemented to avoid simultaneous-access conflicts.

JavaScript back and forth with running program

Problem:
I'm trying to see if I can have a back and forth between a program running on the server-side and JavaScript running on the client-side. All the outputs from the program are sent to JavaScript to be displayed to the user, and all the inputs from the user are sent from JavaScript to the program.
Having JavaScript receive the output and send the input is easily done with AJAX. The problem is that I do not know how to access an already running program on the server.
Attempt:
I tried to use PHP, but ran into some hurdles I couldn't leap over. Now, I can execute a program with PHP without any issue using proc_open. I can hook into the stdin and stdout streams, and I can get output from the program and send it input as well. But I can do this only once.
If the same PHP script is executed(?) again, I end up running the program again. So all I ever get out of multiple executions is whatever the program writes to stdout first, multiple times.
Right now, I use proc_open in the script which is supposed to only take care of input and output because I do not know how to access the stdout and stdin streams of an already running program. The way I see it, I need to maintain the state of my program in execution over multiple executions of the same PHP script; maintain the resource returned by proc_open and the pipes hooked into the stdin and stdout streams.
$_SESSION does NOT work. I cannot use it to maintain resources.
Is there a way to have such a back and forth with a program? Any help is really appreciated.
This sounds like a job for websockets.
Try something like http://socketo.me/ or http://code.google.com/p/phpwebsocket/
I've always used Node for this type of thing, but from the above two links and a few others, it looks like there are options for PHP as well.
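A rough sketch of the Ratchet (socketo.me) route, assuming Ratchet is installed via Composer; the class name, command, and port are all illustrative. The long-running program is started once with proc_open when the websocket server boots, so every client message goes to the same process:

<?php
use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

class ProgramBridge implements MessageComponentInterface
{
    private $proc;
    private $pipes;

    public function __construct()
    {
        // Start the server-side program once; it stays alive between messages.
        $this->proc = proc_open('./my-program', [
            0 => ['pipe', 'r'], // child stdin
            1 => ['pipe', 'w'], // child stdout
        ], $this->pipes);
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        fwrite($this->pipes[0], $msg . "\n"); // user input -> program
        $from->send(fgets($this->pipes[1]));  // program output -> user
    }

    public function onOpen(ConnectionInterface $conn) {}
    public function onClose(ConnectionInterface $conn) {}
    public function onError(ConnectionInterface $conn, \Exception $e) { $conn->close(); }
}

IoServer::factory(new HttpServer(new WsServer(new ProgramBridge())), 8080)->run();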
There may be a more efficient way to do it, but you could get the program to write its output to a text file and read the contents of that text file in with PHP. That way you'd have access to the full stream of data from the running program. There are issues with managing the size of the file and handling requests from multiple clients, but it's a simple approach that might be good enough for your needs.
You are running the same program again because that's the way PHP works. In your case the client makes an HTTP request and runs the script; a second request will run the script again. I'm not sure continuous interaction is possible, so I would suggest making your script able to handle discrete transactions.
In order to tie together different steps of the same "interaction", you will have to save data about the previous ones in a database. Basically, you need to give a unique hash to every client to identify them in your script; it will then know who is making the request and will be able to tell consecutive requests from one user apart from requests from different users.
If your script is heavy and runs for a long time, consider making two scripts - one heavy and one for interaction (AJAX will query the second one). In this case, the second script will feed data into the database and the heavy script will simply fetch it from there.
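A sketch of the light interaction script in that setup, with hypothetical table and column names; the long-running heavy script would poll the table for new rows:

<?php
// interact.php -- queried by AJAX; stores each client's input in the database
// keyed by that client's unique hash, for the heavy script to pick up.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO pending_input (client_hash, input) VALUES (?, ?)');
$stmt->execute([$_POST['client'], $_POST['input']]);
echo 'queued';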

Run scripts from database

Is it possible for a PHP script to insert a block of code into a database, and then some kind of daemon sees that it was put in, and then runs the code? Kind of like a cron job, or a job queue.
Technically you can do it. The code should be stored in a TEXT column and then you could evaluate it with eval: http://es.php.net/manual/en/function.eval.php
This is not a very good idea for several reasons: code stored in a database is not under version control, and the security implications are pretty severe. Most job queue systems store a "job type" and "job parameters" instead of executable code. The job type could be the name of a file to include, or the name of a class to instantiate in an OOP setting. Then you would call a specific function defined there, passing in the parameters.
The job parameters can be any PHP data structure if you use the serialize function to turn it into a string first. http://es.php.net/manual/en/function.serialize.php
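A sketch of that pattern; the jobs table, the jobs/ file layout, and the run_job() function are all hypothetical:

<?php
// enqueue.php -- store a job type plus serialized parameters, not code.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO jobs (job_type, params) VALUES (?, ?)');
$stmt->execute(['send_mail', serialize(['to' => 'user@example.com'])]);

<?php
// worker.php -- the daemon maps the job type to real code and runs it.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$job = $pdo->query('SELECT * FROM jobs ORDER BY id LIMIT 1')->fetch();
if ($job) {
    require 'jobs/' . basename($job['job_type']) . '.php'; // defines run_job()
    run_job(unserialize($job['params']));
    $pdo->prepare('DELETE FROM jobs WHERE id = ?')->execute([$job['id']]);
}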

Security implications of writing files using PHP

I'm currently trying to create a CMS using PHP, purely in the interest of education. I want the administrators to be able to create content, which will be parsed and saved on the server storage as pure HTML to avoid the overhead that executing PHP scripts would incur. Unfortunately, I could only think of a few ways of doing so:
Setting write permission on every directory where the CMS might want to write a file. This sounds like quite a bad idea.
Setting write permissions on a single cached directory. A PHP script could then include or fopen/fread/echo the content from a file in the cached directory at request-time. This could perhaps be carried out in a MediaWiki-esque fashion: something like index.php?page=xyz could read and echo content from cached/xyz.html at runtime. However, I'll need to ensure the sanity of $_GET['page'] to prevent nasty variations like index.php?page=http://www.bad-site.org/malicious-script.js.
I'm personally not too thrilled by the second idea, but the first one sounds very insecure. Could someone please suggest a good way of getting this done?
EDIT: I'm not in favour of fetching data from the database on every request. The only time I would want to fetch data from the database would be when the content is being cached. Secondly, I do not have access to memcached or any PHP accelerator.
Since you're building a CMS, you'll have to accept that if the user wants to do evil things to visitors, they very likely can. That's true regardless of where you store your content.
If the public site is all static content, there's nothing wrong with letting the CMS write the files directly. However, you'll want to configure the web server to not execute anything in any directory writable by the CMS.
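For example, with Apache and mod_php, a hedged sketch of that lockdown for a CMS-writable cached/ directory (paths are illustrative):

# Do not execute anything in the directory the CMS writes to.
<Directory "/var/www/site/cached">
    php_admin_flag engine off
    RemoveHandler .php .phtml
</Directory>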
Even though you don't want to hit the database every time, you can set up a cache to minimize database reads. Zend_Cache works very nicely for this, and can be used quite effectively as a stand-alone component.
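A sketch of the Zend_Cache pattern, assuming Zend Framework 1 is on the include path; buildPageFromDatabase() is a hypothetical helper:

<?php
$cache = Zend_Cache::factory(
    'Core', 'File',
    array('lifetime' => 3600, 'automatic_serialization' => true),
    array('cache_dir' => '/tmp/cache/')
);

if (($page = $cache->load('page_xyz')) === false) {
    $page = buildPageFromDatabase(); // hypothetical: hit the DB only on a miss
    $cache->save($page, 'page_xyz');
}
echo $page;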
You should put your pages in a database and retrieve them using parameterized SQL queries.
I'd go with the second option, but modify it so the files are retrieved using mod_rewrite rather than a custom PHP function.
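A hedged .htaccess sketch of that: the regex doubles as the sanity check on the page name, and requests fall through to index.php only when no cached file exists (directory names are illustrative):

RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}/cached/$1.html -f
RewriteRule ^page/([a-z0-9-]+)$ cached/$1.html [L]
RewriteRule ^page/([a-z0-9-]+)$ index.php?page=$1 [L]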
