I'm building a small application to collect data. PHP is used for the data collection and PostgreSQL for the data storage. PostgreSQL is bundled with the application, so I have full control over it. The collection PHP is triggered by an external entity, and I have no control over the PHP interpreter that will run the code.
Is there a way to load php_pgsql.dll at run-time?
I know this has been asked already, for example here and here, and my best source of information was here. If I understand correctly, there is no way to do it if I'm not root on the system (because dl() was removed).
I could bundle PHP with my application the same way I bundled PostgreSQL (to have control over it and not need to ask someone to install, configure and maintain it), BUT my PHP files are triggered by an external application, so I have no control over the PHP interpreter/environment that is used.
Is there a way to start, from PHP code (let's call it systemPHP), the same PHP code but in a different PHP environment (a myPHP environment that I control and where the DLL is loaded)?
For example, if systemPHP starts collect.php, the pseudocode of collect.php would be:
if <this is myPHP> {    # How to detect it?
    <execute the data collection code>
} else {
    <start collect.php in myPHP, transferring all the data to it>
    # e.g. if started by Apache, also headers, session information etc.
    <send the result from myPHP back via systemPHP>
}
How to achieve this PHP 'tunnel'?
Thanks for any help or hints. I know the best option would be to be root, or at least to have an intelligent admin, however this is not the case :-(
Currently I'm working around this by executing the database tasks via the shell and reading the response back in PHP, but it works only intermittently, and I believe there is a better way of doing this (not to mention the speed and resource usage).
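To make the idea concrete, here is a rough sketch of what I imagine collect.php could look like; the interpreter path, the environment marker and the pg_connect() details are my assumptions, not working code:

define('MYPHP_BIN', 'C:\\myapp\\php\\php.exe'); // bundled interpreter (hypothetical path)

if (getenv('RUNNING_IN_MYPHP') === '1') {
    // We are inside myPHP: php_pgsql is loaded through myPHP's own php.ini.
    $request = json_decode(stream_get_contents(STDIN), true); // data forwarded by systemPHP
    $db = pg_connect('host=localhost dbname=collect user=collector'); // hypothetical credentials
    // ... execute the data collection code, echo the result to stdout ...
} else {
    // We are inside systemPHP: re-run this script under myPHP and relay its output.
    putenv('RUNNING_IN_MYPHP=1'); // the child process inherits this marker
    $cmd = escapeshellarg(MYPHP_BIN) . ' ' . escapeshellarg(__FILE__);
    $proc = proc_open($cmd, array(0 => array('pipe', 'r'), 1 => array('pipe', 'w')), $pipes);
    fwrite($pipes[0], json_encode(array('get' => $_GET, 'post' => $_POST, 'cookie' => $_COOKIE)));
    fclose($pipes[0]);
    echo stream_get_contents($pipes[1]); // send myPHP's result back through systemPHP
    fclose($pipes[1]);
    proc_close($proc);
}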
Have you looked into using a message queue system? Write to the queue, then have a PHP script that already has php_pgsql.dll loaded check for new messages in the queue and process them.
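A minimal sketch of that idea, using a spool directory as the queue (paths, table and credentials are made up for illustration):

// Producer -- runs under the uncontrolled systemPHP, needs no PostgreSQL extension:
$msg = json_encode($_POST);
file_put_contents('/var/spool/collect/' . uniqid('', true) . '.json', $msg, LOCK_EX);

// Consumer -- runs under your own PHP that has php_pgsql.dll loaded:
$db = pg_connect('host=localhost dbname=collect user=collector');
foreach (glob('/var/spool/collect/*.json') as $file) {
    $payload = file_get_contents($file);
    pg_query_params($db, 'INSERT INTO raw_data (payload) VALUES ($1)', array($payload));
    unlink($file); // message processed, remove it from the queue
}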
Before this question gets closed: I know the setup above is possible. I just want clarification on some things.
I just started learning Aurelia because I want to convert one of my projects into a web app. My project is built with HTML + CSS + JavaScript (jQuery) + PHP (MySQL).
I haven't used any sort of framework before.
In the guide, they mention a few ways to set up a web server. I used the HTTP server with Node. Now this is where I need some help understanding a few things.
I don't want to use Node.js. I want to use PHP on the server. Will that work, and how?
When using an Apache server, I know any PHP page is sent to the interpreter, which renders the final HTML. I use XAMPP, and its Apache comes bundled with PHP. Does the HTTP server used by Node come with PHP? Is this even a sensible question?
Now, I know Aurelia is purely front end. If it is used to make single-page applications, it uses Ajax. So I made the following assumption:
Using Aurelia, the user accesses the root page of the app, which the web server sends. After that, Aurelia makes various Ajax requests to the server, which will use my PHP files to do the database query work.
Is that right, or am I missing something? And can I just use XAMPP (Apache) to host my app instead of the server from Node?
Aurelia is a front-end framework: after you export it to any server, it does not rely on any back-end software at all. This means that, with the help of the http-client / fetch-client API, you can just call out to your PHP script.
I have an example in my github:
https://github.com/rjpvroegop/randyvroegop.nl-made-with-aurelia
Here I use the http-client to post data to my PHP script, which has a very simple email functionality.
You can see the action inside my view-model in src/pages/contact/index.js.
You can see the PHP script in src/assets/components/contactengine.php.
These work the way they should. Note: you have to change your gulp build if you want your PHP served the way I serve mine, from the dist folder after gulp-watch or gulp-export.
Beyond that, you can use any back-end functionality you like, as long as it returns the proper data. This PHP script does that. If you download my distribution to test this, you can simply do the following:
gulp export from your terminal in the root folder
copy everything from the export folder to your PHP webserver.
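For the PHP side, the endpoint Aurelia calls can be a plain script that returns JSON; a minimal sketch (file name, table and credentials are made up):

// api.php -- answers Aurelia's Ajax/fetch requests with JSON
header('Content-Type: application/json');

$pdo = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'password');
$stmt = $pdo->query('SELECT id, title FROM items');

// Aurelia's http-client receives this as the response body and parses it
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));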
I'm creating a website that requires a file to be generated and stored on the server periodically (an XML feed for iTunes). The page is generated using ExpressionEngine. I discovered that the website's current server has a very restricted cPanel and doesn't have access to cron.
So I'm considering two options: find an alternative way to access the cron jobs (if they are available), or find an alternative way to create regularly scheduled tasks.
Regarding the first option, how would I go about determining if a server has cron available? I'm not sure how useful this would be anyway since I don't think the server allows shell access (it's a very basic setup for people who aren't tech savvy).
Regarding the second option, a friend mentioned to me that the functionality of cronjobs can just be done in PHP. How would I go about this?
Or am I perhaps overthinking this? The page in ExpressionEngine that outputs the XML file is domain.com/itunes/itunes_feed. This just has some EE tags that output the relevant XML, and the resulting page is in .xml format. Is it enough to just submit the above URL to iTunes, or does it have to be a URL to an actual pre-existing file on the server?
Option 1
Simply contact your hosts and ask whether they support cron jobs, and if so, how to set them up.
Option 2
I only just set up my own set of cron jobs yesterday.
Create a PHP file that runs the code you want
Set up an account on https://www.easycron.com/
Upload your PHP file to easycron
Set the times at which you would like your PHP code to run
Simple as that! Does that make sense?
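For reference, the PHP file from step 1 could be as small as this sketch: it fetches the ExpressionEngine page and stores the output as a static file (the URL and target path are placeholders):

// generate_feed.php -- the file easycron requests on schedule
$xml = file_get_contents('http://domain.com/itunes/itunes_feed');
if ($xml !== false) {
    file_put_contents(__DIR__ . '/itunes.xml', $xml, LOCK_EX);
    echo "feed updated\n";
} else {
    header('HTTP/1.1 500 Internal Server Error'); // let easycron log the failure
    echo "could not fetch feed\n";
}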
I'm considering the idea of a browser-based PHP IDE and am curious about the possibility of emulating the command line through the browser, but I'm not familiar enough with developing tools for the CLI to know if it's something that could be done easily or at all. I'd like to do some more investigation, but so far haven't been able to find very many resources on it.
From a high level, my first instinct is to set up a text input which would feed commands to a PHP script via AJAX and return any output onto the page. I'm just not familiar enough with the CLI to know how to interface with it in that context.
I don't need actual code, though that would be useful too, but I'm looking for more of which functions, classes or APIs I should investigate further. Ideally, I would prefer something baked into PHP (assume PHP 5.3) and not a third-party library. How would you tackle this? Are there any resources or projects I should know about?
Edit: The use case for this would be a localhost or development server, not a public facing site.
Call this function through an RPC or a direct POST from JavaScript; it should do things in this order:
Write the PHP code to a file (with a random name) in a folder (with a random name), where it will sit alone, execute, and then be deleted at the end of execution.
The current PHP process will not run the code in that file. Instead, it has to have exec permissions (safe_mode off) and launch a separate, locked-down interpreter for it: exec('php -c /path/to/security_tight/php.ini /path/to/the/temporary/script.php') (see php -?)
Catch any output and send it back to the browser. You are protected from any weird errors. Instead of exec I recommend popen, so you can kill the process and manually control the timeout while waiting for it to finish (in case you kill that process, you can easily send an error back to the browser);
You need lax/normal security (same as the entire IDE backend) for the normal PHP process which runs when called through the browser.
You need strict and paranoid security for the php.ini and php process which runs the temporary script (go ahead and even separate it on another machine which has no network/internet access and has its state reverted to factory every hour just to be sure).
Don't use eval(); it is not suitable for this scenario. An attacker can jump out into your application and use your current permissions and variable state against you.
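A sketch of those steps together (the php.ini path matches the one above; everything else is illustrative):

// Runs inside the normally-secured IDE backend.
$dir = sys_get_temp_dir() . '/sandbox_' . md5(uniqid('', true));
mkdir($dir, 0700);
$file = $dir . '/script.php';
file_put_contents($file, $_POST['code']); // the code submitted from the browser

// Execute it with the locked-down php.ini; popen lets us stop waiting on timeout.
$cmd = 'php -c /path/to/security_tight/php.ini ' . escapeshellarg($file) . ' 2>&1';
$proc = popen($cmd, 'r');
$output = '';
while (!feof($proc)) {
    $output .= fread($proc, 8192);
}
pclose($proc);

unlink($file);  // the script is deleted at the end of execution
rmdir($dir);
echo htmlspecialchars($output); // back to the browser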
The basic version would be:
your script outputs a form with a line input
the form action points to your script
the script takes the input from the form and passes it to eval()
any output from eval() is passed back to the browser
the form is output again
The problem is that defined functions and variables are lost between requests.
What you could do is append each line that is entered to your session. Let's say:
session_start();
$inputline = $_GET['line'];
$_SESSION['script'] .= $inputline . PHP_EOL;
eval($_SESSION['script']);
This way, on each request the full accumulated PHP script is executed (and of course you will get the full output).
Another option would be to create some kind of daemon (basically an instance of a php -a call) that runs on the server in the background, gets your input from the browser, and passes the output back.
You could connect this daemon to two FIFO devices (one for the input and one for the output) and communicate via simple fopen calls, as in the sketch below.
For each user that is using your script, a new daemon process has to be spawned.
Needless to say, it is important to secure your script against abuse.
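A sketch of the web-facing half of that FIFO conversation (paths are made up; the daemon on the other side reads from one FIFO and writes to the other):

$in  = '/tmp/php_repl_in.fifo';   // daemon reads commands from here
$out = '/tmp/php_repl_out.fifo';  // daemon writes results here
if (!file_exists($in))  { posix_mkfifo($in, 0600); }
if (!file_exists($out)) { posix_mkfifo($out, 0600); }

// Send one line of code to the daemon...
$w = fopen($in, 'w');
fwrite($w, $_GET['line'] . PHP_EOL);
fclose($w);

// ...and read back its output.
$r = fopen($out, 'r');
echo fgets($r);
fclose($r);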
Recently I read about php.js, a PHP interpreter written in JavaScript, so you could write and execute PHP code using only your browser. I'm not sure if this is what you need in the end, but it sounds interesting.
We've tested some products at my university for SSH access to our lab servers and used some of the web SSH tools - they basically do exactly what you want. The Shell In A Box project may be bound to any interpreter you like and may be used with an interactive PHP interpreter, if desired (on the demo page, they used a BASIC interpreter). The project may serve as a basis for a true PHP IDE. These have the advantage of being able to interact with any console-based editor as well (e.g. vi, emacs or nano), as well as being able to issue administrative commands (e.g. creating folders, changing ownerships or ACLs, or restarting a service).
Mozilla also has a full-featured web-based IDE called Bespin, which is also highly extensible and configurable.
As you stated that the page is not for the public, you of course have to protect it with authentication and SSL to prevent session hijacking.
I have a situation where I have lots of system configurations/logs from which I have to generate a quick review of the system that is useful for troubleshooting.
At first I'd like to build a kind of web interface (most probably a PHP site) that gives me a rough snapshot of the system configuration using the available information from the support logs. The support logs reside on mirrored servers (call them the log servers), and the server on which I'll be hosting the site (call it the web server) will have to use ssh/sftp to access them.
My rough sketch:
The PHP script on the web server will make some kind of connection to the log server and go to the support logs location.
It will then trigger a Perl script on the log server, which will collect the relevant pieces from all the config/log files into some useful XML (there will be multiple of those).
Somehow these XML files are transferred to the web server, and PHP will use them to create the HTML.
I'm very new to PHP and would like to know if this is feasible, or if there's any alternative/better way of doing this?
It would be great if someone could provide more details for the same.
Thanks in advance.
EDIT:
Sorry, I forgot to mention that the logs aren't the ones generated on a live machine. I'm dealing with sustenance activities for a NAS storage device, and there will be plenty of support logs coming from different end customers which folks from my team would like to look at.
Security is not a big concern here (I'm OK with using plain-text authentication to the log servers), as these servers can be accessed only through the company's VPN.
Yes, PHP can process XML. A simple way is to use SimpleXML: http://php.net/manual/en/book.simplexml.php
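A minimal sketch of reading such a file (the file and element names are invented):

$xml = simplexml_load_file('snapshot.xml');
foreach ($xml->disk as $disk) {
    // attributes via array syntax, child elements via object syntax
    echo $disk['name'] . ': ' . $disk->usage . "\n";
}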
While you can do this using something like Expect (I think there is something for PHP too), I would recommend doing this in two separate steps:
A script, running via cron, retrieves data from the servers and stores it locally
The PHP script reads from the local stored data only, in order to generate reports.
This way, you have these benefits:
You don't have to worry about how to make your PHP script connect via SSH to the servers
You avoid the security risks related to allowing your webserver user to log in to other servers (a high risk in case your script gets hacked)
In case of slow or absent connectivity to the servers, a long time to retrieve the logs, etc., your PHP script will still be able to quickly show the data -- maybe along with an error message explaining what went wrong during the latest update
In any case, your PHP script will terminate much more quickly, since it only has to retrieve data from local storage.
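A sketch of that split (all paths are illustrative): a cron job such as rsync -az logserver:/var/support_logs/ /var/cache/support_logs/ keeps a local copy fresh, and the PHP page only ever touches the cache:

// report.php -- reads only the locally cached files, never the remote servers
$files = glob('/var/cache/support_logs/*.xml');
if (!$files) {
    echo 'No cached logs found -- the retrieval job may have failed.';
}
foreach ($files as $file) {
    $xml = simplexml_load_file($file);
    // ... render the interesting parts as HTML ...
}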
Update: ssh client via php
OK, from your latest comment I understand that what you need is more of a "front-end browser" to display the files than a report generation tool or similar; in this case you can use Expect (as I stated before) to connect to the remote machines.
There is a PECL extension for PHP providing expect functionality. Have a look at the PHP Expect manual and in particular at the usage examples, showing how to use it to make SSH connections.
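Roughly along the lines of the manual's examples, using the extension's expect:// stream wrapper (host, command and password handling here are placeholders):

define('LOGIN_PASSWORD', 1); // case label for the match below

ini_set('expect.loguser', 'Off');
$stream = fopen('expect://ssh admin@logserver cat /var/log/support.log', 'r');

$cases = array(
    array('password:', LOGIN_PASSWORD), // pattern, case value
);
switch (expect_expectl($stream, $cases)) {
    case LOGIN_PASSWORD:
        fwrite($stream, "secret\n"); // send the password
        break;
}

while (($line = fgets($stream)) !== false) {
    echo $line; // stream the remote file back
}
fclose($stream);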
Alternate way: taking files from NFS/SAMBA share
Another way, avoiding to use SSH, is to browse files on the remote machines via locally-mounted share.
This is especially useful in case the interesting files are already shared by a NAS, while I wouldn't recommend it if that would mean sharing the whole root filesystem or huge parts of it.
Hey folks, this question can't be too complicated. Please suggest a way to at least figure out the ultimate root cause of the problem.
I'm currently writing an application which controls Excel through COM: the app creates a COM-based Excel instance, opens some XLS files and reads their contents.
Scenario I
On Windows 7, I start Apache and MySQL using xampp-control with system administrator rights. All works as expected. The PHP-based controller script interacts with Excel as expected.
Scenario II
A problem appears if I start Apache and MySQL as 'background jobs'. Here is how:
I created two jobs using the Windows 7 Task Scheduler. One runs apache_start.bat, the other runs mysql_start.bat.
Both tasks run as SYSTEM with elevated privileges when Windows 7 boots.
Apache and MySQL work as expected. Specifically, Apache serves HTTP requests from clients, and PHP is able to talk to MySQL.
When I call the PHP controller, which calls and interacts with Excel using COM, I receive an error.
The error message comes from Excel [not from COM itself] and reads like this:
Excel can't read the specified Excel file
Excel failed to save the file due to an ill-named worksheet
Interestingly, during the first run of the PHP-based controller script it takes a few seconds to render the error message; each subsequent run renders the error message immediately.
The Windows system logs didn't show a single problem report entry.
Note, that the PHP program and the Apache instance didn't change - except the way Apache was started.
At least the PHP controller script is perfectly able to read the file system, since it obtains the paths to the XLS files through scandir() of a certain directory.
Concurrency issues can't be the cause of the problem. A single instance of the specific PHP controller interacts with Excel.
Question
Could someone explain why this happens, or provide ways to isolate the ultimate cause of the problem (e.g. by means of a PowerShell 2 script)?
UPDATE-1 :: 2011-11-29
As proposed, I switched the Task Scheduler job from SYSTEM to a conventional user. Works. Apache and MySQL get started and process requests.
Unfortunately, the situation regarding Excel didn't change a bit. Still, I see the error.
As assumed earlier, the Excel COM server starts. I'm able to change various settings (e.g. suppress dialogs) without a problem through the COM instance.
The problem happens while calling this:
$excelComObject->Workbooks->Open( 'PathToXLSFile' );
UPDATE-2 :: 2011-11-30
Added the accounts USER, GUEST and EVERYONE with the READ right to the access control list of the XLS file. No change.
Modified the app in such a way that the PHP part creates a copy of the XLS file as a temporary file and moves the contents of the original file into it, just to ensure that the problem isn't caused by odd file/path names.
Still, the problem persists.
UPDATE-2 :: 2011-12-05
I'm going to drive the Excel COM server's methods in such a way that Excel creates a blank file and saves it to /tmp. Let's see whether Excel isn't even able to read this file.
Go into the Task Scheduler and let everything run as a local user. This will probably require that you enter a password, so create one if you don't have one already.
Excel is a user-level application that shouldn't run as SYSTEM. I'm sure there are ways around it, but you should simply let everything run at the correct level.
Having Apache run on the user level isn't a problem.
Try creating the following directories:
C:\Windows\SysWOW64\Config\Systemprofile\Desktop
C:\Windows\System32\Config\Systemprofile\Desktop
it worked for me :-)
http://social.msdn.microsoft.com/Forums/en-US/innovateonoffice/thread/b81a3c4e-62db-488b-af06-44421818ef91
In the past (read: pre-Vista), services had an option called "Allow service to interact with desktop", which allowed services to spawn windows etc. Starting with Vista, this is no longer allowed.
I suspect Excel is failing because it can't function under this restriction. Thus, any attempt to run it as a service in your Win7 installation will fail.
You can use Windows XP and allow desktop interaction for your Apache process, which I don't really recommend for obvious reasons.
Another approach I would take is to create a PHP script that runs as a regular process and listens on a socket in an infinite loop. Your PHP script that runs under Apache would communicate with the secondary script through a local socket and have the secondary script spawn Excel.
This may sound complicated, but in fact it's not a lot of code, and it fixes a problem you will soon have anyway: you should only have one instance of Excel running, or you may run into problems. The secondary script could queue requests, handing them to Excel one by one and then taking the next in the queue.
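A sketch of that secondary script (the port, the one-line protocol and the Excel calls are illustrative, not a finished design):

// excel_worker.php -- started as a regular user process, owns the single Excel instance
$server = stream_socket_server('tcp://127.0.0.1:9100', $errno, $errstr);
if (!$server) { die("$errstr ($errno)\n"); }

$excel = new COM('Excel.Application'); // one long-lived instance for all requests
$excel->Visible = false;

while ($conn = stream_socket_accept($server, -1)) {
    $path = trim(fgets($conn));   // the Apache-side script sends an XLS path per line
    $workbook = $excel->Workbooks->Open($path);
    $value = $workbook->Worksheets(1)->Cells(1, 1)->Value; // example: read one cell
    $workbook->Close(false);      // close without saving
    fwrite($conn, $value . "\n"); // reply to the Apache-side script
    fclose($conn);
}

The Apache-side script would then just stream_socket_client() to 127.0.0.1:9100, write the path and fgets() the answer; requests are serialized automatically because the worker accepts one connection at a time.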