Can PHP know details of the entire PHP environment right now?

Using sys_getloadavg() we can get the server load, and using memory_get_usage() we get the memory assigned to the current script.
However:
Is something like the following program possible using pure PHP code (no shell, no bash)?
<?php
function get_all_php_processes_running_just_now() {
    // ...
    // ... get the memory of ALL PHP processes
    return $processes; // one entry per process
}
and then obtain something like:
total scripts running: 35
users running processes: 6
processes running for more than 5 minutes: 2
global memory assigned to all PHP processes: 8 GB
etc...
Is it possible to obtain that information from an "admin.php" script?

As far as I know there is no built-in function that pieces together that data. However, the functions you refer to (sys_getloadavg, memory_get_usage) are essentially wrappers around the /proc filesystem (on Linux, anyway; I don't think many of them have Windows counterparts).
The ordinary filesystem functions you use to read files can also read the /proc filesystem, which in turn contains all the information you might want.
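As a rough illustration, here is a minimal sketch (Linux only) that walks /proc, keeps every process whose command line mentions "php", and sums the resident memory reported in /proc/<pid>/status. The "php" match and the summary fields are assumptions; you will only see processes your user is allowed to inspect, and measuring run time would additionally require parsing /proc/<pid>/stat.
<?php
// Minimal sketch, Linux only: enumerate processes whose command line
// mentions "php" and sum their resident memory (VmRSS) from /proc.
function get_all_php_processes_running_just_now() {
    $procs = array();
    foreach (glob('/proc/[0-9]*') as $dir) {
        $cmdline = @file_get_contents($dir . '/cmdline');
        if ($cmdline === false || strpos($cmdline, 'php') === false) {
            continue; // not a PHP process, or not readable by this user
        }
        $status = @file_get_contents($dir . '/status');
        $rssKb  = 0;
        $uid    = null;
        if ($status !== false) {
            if (preg_match('/^VmRSS:\s+(\d+)\s+kB/m', $status, $m)) {
                $rssKb = (int) $m[1];
            }
            if (preg_match('/^Uid:\s+(\d+)/m', $status, $m)) {
                $uid = (int) $m[1];
            }
        }
        $procs[] = array(
            'pid'     => (int) basename($dir),
            'cmdline' => str_replace("\0", ' ', $cmdline),
            'rss_kb'  => $rssKb,
            'uid'     => $uid,
        );
    }
    return $procs;
}

$procs = get_all_php_processes_running_just_now();
printf("total scripts running: %d\n", count($procs));
printf("users running processes: %d\n", count(array_unique(array_column($procs, 'uid'))));
printf("memory assigned to all PHP processes: %.1f MB\n",
    array_sum(array_column($procs, 'rss_kb')) / 1024);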

Related

PHP / C++: shm_open() error when sharing memory

I've been all over the internet looking for an answer to my problem. Here is the setup: I am running embedded Linux (created with Yocto), which runs the Lighttpd web server with PHP 5. In my C++ code I have the following:
shared = shm_open(SHARED_FILE_NAME, O_RDWR | O_CREAT | O_TRUNC, 0666);
ftruncate(shared, FILE_SIZE);
map = mmap(...);
// shm_unlink() isn't called until my C++ thread ends.
Everything works well: I do not get any errors, and other C++ processes and threads are also able to access the shared memory and map it without any problems (I have one writer thread, and all other threads and processes only read from the memory). The memory is used as a ring buffer where the writing thread updates data very quickly. The problems start when trying to access that same memory from PHP. In PHP I do (I only need read access):
<?php
$shm_key = ftok("/dev/shm/shared_file.shm", 'c');
$shm_id = shmop_open($shm_key, "a", 0, 0);
...
?>
The value returned by ftok() is not -1, which means it did not fail. However, PHP's shmop_open() call does fail with:
Warning: shmop_open(): unable to attach or create shared memory segment in /www/pages/shared.php on line 9
I've changed the permission of the file with chmod 777 /dev/shm/shared.shm just to rule out any file permission issues. Also when I run ipcs -m I do not get any listings for shared memory segments, yet my C++ code is running just fine. I've also looked for SELinux and tried entering setenforce 0 but I get a response of -sh: setenforce: command not found so I figure this isn't an issue. I've also tried running wget <local ip address>/shared.php to see if running locally would return the correct data but when looking at the file which was returned it had the same error messages.
I am looking to be able to have a web page on my embedded system read this shared memory and stream back chunks of binary to feed a graph when a request comes in (not interested in web sockets at the time). I am able to get named pipes to work across PHP and C++ just fine but I need shared memory for this application and the shared memory access seems to be troublesome. Any help is appreciated.
I'm developing PHP functions that need to use shared memory created from C. Like your code, my C functions use shm_open, mmap, etc., and I expected to use PHP's ftok() and shmop_open() to access the C side's shared memory, but those PHP functions don't work.
The two areas are not compatible. I found the different properties of the two mechanisms documented here: http://menehune.opt.wfu.edu/Kokua/More_SGI/007-2478-008/sgi_html/ch03.html
C (with shm_open, mmap, like the Straton source code) uses "POSIX shared memory".
PHP (with the shmop_* functions) uses "System V shared memory".
I suggest you try Sync (http://php.net/manual/en/book.sync.php): you need the PECL sync extension.
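If you can change the C++ side to use System V shared memory (ftok()/shmget()/shmat()) instead of POSIX shm_open(), PHP's shmop extension can attach to the same segment. A minimal sketch of the PHP side, assuming both programs derive the key from the same existing file path and project id (the path here is a made-up example):
<?php
// Sketch of the PHP side, assuming the C++ program created a System V
// segment with shmget() using a key from ftok("/tmp/shm_key_file", 'c').
$key = ftok('/tmp/shm_key_file', 'c');   // must match the C++ side exactly
if ($key === -1) {
    die("ftok() failed - does the key file exist?\n");
}

$shm = @shmop_open($key, 'a', 0, 0);     // 'a' = attach read-only to an existing segment
if ($shm === false) {
    die("shmop_open() failed - is the segment created and readable?\n");
}

$size = shmop_size($shm);                // size of the existing segment
$data = shmop_read($shm, 0, $size);      // read the whole segment
shmop_close($shm);

echo "read " . strlen($data) . " bytes from shared memory\n";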

PHP shared memory on windows server

I want to store a small array (approx. 1 MB) under a massive access load.
I don't want to use a database.
I don't need to secure that data.
I will change this array only once per minute, but there are thousands of reads of that data each minute.
Currently I'm using a PHP shared memory segment (shmop), which works fine, but it often causes a PHP exception that results in an Apache restart.
Is there a way to do the same memory operations on Windows with some other PHP module?
Is there a better way to handle this data for fast access?
I don't care about security or reliability of the stored data.
How I work with the shared memory variable ($shm):
visitor calls $shm.
if ($shm exists, and is not older than 1 minute): return $shm;
else: create new $shm, return $shm;
visitor2 calls $shm.
if ($shm exists, and is not older than 1 minute): return $shm;
else: create new $shm, return $shm;
etc ...
Apache 2.4.23 (Win64) PHP 5.5.3
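A minimal sketch of the read-mostly pattern described above, using APCu as the shared per-server store instead of shmop (APCu also runs on Windows); this assumes the APCu extension is installed, and build_array() is a placeholder for however the data is actually generated:
<?php
// Sketch: rebuild the array at most once per minute, serve it from APCu otherwise.
function get_shared_array() {
    $data = apcu_fetch('shared_array', $found);
    if ($found) {
        return $data;                       // still fresh, serve from cache
    }
    $data = build_array();                  // regenerate (placeholder)
    apcu_store('shared_array', $data, 60);  // cache for 60 seconds
    return $data;
}

function build_array() {
    // placeholder: whatever produces your ~1 MB array
    return array('generated_at' => time());
}

$array = get_shared_array();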

Using PHP APC cache in CLI mode using dumpfiles

I've recently started using the APC cache on our servers. One of the most important parts of our product is a CLI (cron/scheduled) process whose performance is critical. Typically the batch job consists of running some 16-32 processes in parallel for about an hour (they "restart" every few minutes).
By default, using APC cache in CLI is a waste of time due to the opcode cache not being retained between individual calls. But APC also contains apc_bin_dumpfile() and apc_load_dumpfile() functions.
I was thinking these two functions might be used to make APC efficient in CLI mode by having everything compiled sometime outside the batch job, stored in a single dump file, and having the individual processes load that dump file.
Does anybody have any experience with such a scenario or can you give good reasons why it will or will not work? If any significant gains could reasonably be had, either in memory use or performance? What pitfalls are lurking in the shadows?
Disclaimer: As awesome as APC is when it works in CLI (and it is awesome), it can be equally frustrating. Use it with a healthy load of patience: be thorough, step away from the problem if you're spinning, and keep in mind you are working with a cache, which is why it seems like it's doing nothing; it is actually doing nothing. Delete the dump file and start with just the basics; if that doesn't work, forget it and try a new machine or a new OS. If it is working, make a copy, then expand functionality piece by piece. There are loads of things that won't work, so if it is working, commit or make a copy, add another piece and test again, and for a sanity check re-run the copies that were working before. Clichés or not: if at first you don't succeed, try, try again, but you can't keep doing the same thing expecting new results.
Ready? This is what you've been waiting for:
Enable APC for CLI
apc.enable_cli=1
it is not ideal to create, populate and destroy the APC cache on every CLI request
- previous answer by unknown poster since removed.
You're absolutely right, that sucks; let's fix it, shall we?
If you try to use APC under the CLI and it is not enabled, you will get warnings,
something like:
PHP Warning: apc_bin_loadfile(): APC is not enabled,
apc_bin_loadfile not available.
PHP Warning: apc_bin_dumpfile(): APC is not enabled,
apc_bin_dumpfile not available.
Warning: I suggest you don't enable CLI in php.ini; it is not worth the frustration. You are going to forget you did it and have numerous other headaches with other scripts. Trust me, it's not worth it; use a launcher script instead (see below).
apc_bin_loadfile and apc_bin_dumpfile in CLI
As per the comment by mightye, we need to disable apc.stat or you will get warnings,
something like:
PHP Warning: apc_bin_dumpfile(): Excluding some files from apc_bin_dump[file].
Cached files must be included using full path with apc.stat=0.
launcher script - php-apc.sh
We will use this script to launch our APC-enabled scripts (e.g. ./php-apc.sh apc-cli.php) instead of changing the settings in php.ini directly.
#!/bin/sh
php -d apc.enable_cli=1 -d apc.stat=0 "$1"
Ready for the basic functionality? Sure you are =)
basic APC persisted - apc-cli.php
<?php
/** check if the dump file exists - you don't want to use file_exists */
if (false !== $dump_file = stream_resolve_include_path('apc.dump')) {
    /** so where were we, let's have a look-see shall we */
    if (false !== apc_bin_loadfile($dump_file)) {
        /** fetch what was stored last run, just for fun */
        if (false !== $value = apc_fetch('my.awesome.apc.store')) {
            echo "$value from apc\n";
        }
    }
}
/** store what gets fetched the next run, just for fun */
apc_store('my.awesome.apc.store', 'awesome in cli');
/** what a schlep, let's not do that all over again shall we */
apc_bin_dumpfile(array(), null, 'apc.dump');
Notice: Why not use file_exists? Because file_exists == stat, and we want to reap the reward that is apc.stat=0. So: work within the include path; use absolute and not relative paths, as returned by stream_resolve_include_path(); avoid include_once and require_once, use the non-*_once counterparts; and check your stat usage when not using APC (muchos important senor) with the help of a StreamWrapper that echoes calls to the url_stat method. Oops: Fatal scope over-run error! Aborting notice thread; see url_stat.
message: Error caused by StreamWrapper outside the scope of this discussion.
The smoke test
Using the launcher execute the basic script
./php-apc.sh apc-cli.php
A whole bunch of nothing happened; that's what we want, right? Why else would you use a cache? If it did output anything, then it didn't work, sorry.
There should be a dump file called apc.dump; see if you can find it. If you can't find it, then it didn't work, sorry.
Good, we have the dump file and there were no errors; let's run it again.
./php-apc.sh apc-cli.php
What you want to see:
awesome in cli from apc
Success! =)
There are few things in PHP as satisfying as a working APC implementation.
nJoy!
I would definitely not use it in the CLI, as when you restart the script it's almost as if APC was never running in the first place!
The better way to use APC is to have it running on the web server itself all the time; that way, with it being active, it will actually do what it's supposed to do!
I tried it with curl and APC, and it works. Use this command in the CLI:
curl --data "param1=value2" http://testsite.com/test.php
It will post the data to test.php, and you write your code in that script.

How to view PHP or Apache error log online in a browser?

Is there a way to view the PHP error logs or Apache error logs in a web browser?
I find it inconvenient to SSH into multiple servers and run a "tail" command to follow the error logs. Is there some tool (preferably open source) that shows me the error logs online (streaming or non-streaming)?
Thanks
A simple piece of PHP code to read the log and print it:
<?php
exec('tail /var/log/apache2/error.log', $error_logs);
foreach ($error_logs as $error_log) {
    echo "<br />" . $error_log;
}
?>
You can embed the $error_log PHP variable in HTML as per your requirements. The best part is that the tail command only loads the latest errors, which won't put too much load on your server.
You can change the tail options to get the output you want.
Ex. tail -n 100 myfile.txt // it will give the last 100 lines
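If you want the number of lines to be adjustable from the browser, here is a small sketch along the same lines; the 'lines' query parameter and the log path are made-up examples, and anything user-supplied is clamped or escaped before it reaches the shell:
<?php
// Sketch: show the last N lines of the Apache error log in the browser.
$lines = isset($_GET['lines']) ? (int) $_GET['lines'] : 50;
$lines = max(1, min($lines, 1000));   // clamp to a sane range
$log   = '/var/log/apache2/error.log';

exec('tail -n ' . (int) $lines . ' ' . escapeshellarg($log), $output);

header('Content-Type: text/plain; charset=utf-8');
echo implode("\n", $output);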
See "What commercial and open source competitors are there to Splunk?", and I would recommend https://github.com/tobi/clarity
Simple and easy tool.
Since everyone is suggesting clarity, I would also like to mention tailon. I wrote tailon as a more modern and secure alternative to clarity. It's still in its early stages of development, but the functionality you need is there. You may also use wtee, if you're only interested in following a single log file.
You could make a script that reads the error logs from apache2:
$apache_errorlog = file_get_contents('/var/log/apache2/error.log');
If that's not working, try to get it with the PHP functions exec or shell_exec and the command 'cat /var/log/apache2/error.log'.
EDIT: If you have multiple servers (I guess with web servers on them), you can create a script on each machine; when you make a request to that script (over an authenticated connection), you get the logs from that server.
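A rough sketch of that idea, assuming each server exposes a small log endpoint protected by a shared secret token; the URLs, token, and endpoint name are made-up examples:
<?php
// On each web server: logs.php - returns the tail of the error log if the token matches.
// if ($_GET['token'] !== 'replace-with-a-long-random-secret') { http_response_code(403); exit; }
// exec('tail -n 200 /var/log/apache2/error.log', $out);
// echo implode("\n", $out);

// On the machine you browse from: aggregate the logs of several servers.
$servers = array(
    'web1' => 'https://web1.example.com/logs.php?token=replace-with-a-long-random-secret',
    'web2' => 'https://web2.example.com/logs.php?token=replace-with-a-long-random-secret',
);

header('Content-Type: text/plain; charset=utf-8');
foreach ($servers as $name => $url) {
    echo "==== $name ====\n";
    $log = @file_get_contents($url);
    echo ($log === false) ? "(could not fetch logs)\n\n" : $log . "\n\n";
}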
I recommend LogHappens: https://loghappens.com. It allows you to view the error log in a web browser.
LogHappens supports various web server log formats; it comes with parsers for Apache and CakePHP, and you can write your own.
You can find it here: https://github.com/qijianjun/logHappens
It's open source and free. I forked it and did some work to make it work better in a dev environment or a public environment, namely:
Support for a token for security: one can't access the site without the token set in config.php
Support for IP whitelists for security and privacy
Support for configuring the interval between AJAX requests
Support for loading static files locally (for a local dev environment)
I've found this solution https://code.google.com/p/php-tail/
It's working perfectly. I only needed to change the file size check, because I was getting an error at first.
56 if($maxLength > $this->maxSizeToLoad) {
57 $maxLength = $this->maxSizeToLoad;
58 // return json_encode(array("size" => $fsize, "data" => array("ERROR: PHPTail attempted to load more (".round(($maxLength / 1048576), 2)."MB) then the maximum size (".round(($this->maxSizeToLoad / 1048576), 2) ."MB) of bytes into memory. You should lower the defaultUpdateTime to prevent this from happening. ")));
59 }
And I've added a default size, but it's not needed:
125 lastSize = <?php echo filesize($this->log) || 1000; ?>;
I know this question is a bit old, but (along with the lack of good choices) it gave me the idea to create this tiny (open source) web app. https://github.com/ToX82/logHappens. It can be used online, but I'd use an .htpasswd as a basic login system. I hope it helps.

performance of passthru("cat file")

I'm using passthru("cat filepath") in my download script. My concern is that it might use a lot of server resources.
What is the difference between directly link a file in a public directory and download a file using passthru("cat filepath") in php?
What is the difference between directly link a file in a public directory and download a file using passthru("cat filepath") in php?
The difference is that linking directly to a file does not invoke PHP, while running a PHP script which in turn runs cat causes, well, both PHP and cat to be invoked. This will take up a moderate amount of extra memory, but won't cause server load under most circumstances.
I was using readfile(), but this function can't be used for files larger than 2 GB.
You might want to find a better solution than passing all of the file contents through PHP, in that case. Look into X-Sendfile support in your web server software of choice.
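For example, with Apache's mod_xsendfile (or nginx's X-Accel-Redirect mechanism), PHP only emits headers and the web server streams the file itself. A rough sketch, assuming mod_xsendfile is installed and XSendFilePath allows the directory; the path is a made-up example:
<?php
// Sketch, assuming Apache with mod_xsendfile enabled. PHP never reads the file contents.
$filepath = '/data/downloads/big-file.iso';

if (!is_file($filepath)) {
    http_response_code(404);
    exit('Not found');
}

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($filepath) . '"');
header('X-Sendfile: ' . $filepath);   // mod_xsendfile serves the file
// For nginx you would instead send an internal location:
// header('X-Accel-Redirect: /protected/big-file.iso');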
Don't use passthru() for that: you're opening yourself up to command injection, and the performance is terrible. readfile() exists for exactly this:
readfile($filepath);
There is a small overhead when passing a file through PHP compared to a direct link, but we are usually talking milliseconds. However, the browser will not be able to request a 206 Partial Content response when using readfile() unless you code support for it yourself or use something like PEAR::HTTP_Download.
EDIT: It seems you are using passthru() because readfile() apparently doesn't handle files over 2 GB properly (I never had that problem with readfile(); in fact I just tested it with a 7.2 GB file and it worked fine). In that case, at least escape your parameters:
function readfile_ext($filepath) {
    if (!file_exists($filepath)) {
        return false;
    }
    passthru('cat ' . escapeshellarg($filepath));
    return true;
}
Instead of passthru('cat filepath'), use the PHP native readfile('filepath'), which has better performance.
Both methods will be slower than simply directly linking to the file though, since PHP has a certain overhead.
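For completeness, a minimal readfile()-based download sketch; the path is a made-up example, and output buffering is disabled so a large file is streamed rather than pulled into memory:
<?php
// Minimal download sketch using readfile().
$filepath = '/data/downloads/report.pdf';

if (!is_file($filepath)) {
    http_response_code(404);
    exit('Not found');
}

// Turn off output buffering so readfile() streams instead of buffering in memory.
while (ob_get_level() > 0) {
    ob_end_clean();
}

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($filepath) . '"');
header('Content-Length: ' . filesize($filepath));
readfile($filepath);
exit;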
