I've been all over the internet looking for an answer to my problem. Here is the setup: I am running embedded Linux (created with Yocto), which runs the Lighttpd web server with PHP5. In my C++ code I have the following:
shared = shm_open(SHARED_FILE_NAME, O_RDWR | O_CREAT | O_TRUNC, 0666);
ftruncate(shared, FILE_SIZE);
map = mmap(...);
// shm_unlink() isn't called until my C++ thread ends.
Everything works well: I do not get any errors, and other C++ processes and threads are also able to access and map the shared memory without any problems (I have one writer thread; all other threads and processes only read the memory). The memory is used as a ring buffer, where the writing thread updates the data very quickly. The problems start when trying to access that same memory from PHP. In PHP I do (I only need read access):
<?php
$shm_key = ftok("/dev/shm/shared_file.shm", 'c');
$shm_id = shmop_open($shm_key, "a", 0, 0);
...
?>
The value returned by ftok() is not -1, which means it did not fail. However, I do get a failure on PHP's shmop_open() call, which reads:
Warning: shmop_open(): unable to attach or create shared memory segment in /www/pages/shared.php on line 9
I've changed the permissions of the file with chmod 777 /dev/shm/shared.shm just to rule out any file permission issues. Also, when I run ipcs -m I do not get any listings for shared memory segments, yet my C++ code is running just fine. I've also checked for SELinux and tried entering setenforce 0, but I get -sh: setenforce: command not found, so I figure that isn't the issue. Finally, I tried running wget <local ip address>/shared.php to see if a local request would return the correct data, but the returned file contained the same error messages.
I am looking to have a web page on my embedded system read this shared memory and stream back chunks of binary to feed a graph when a request comes in (I'm not interested in WebSockets at this time). I am able to get named pipes to work across PHP and C++ just fine, but I need shared memory for this application, and the shared memory access seems to be troublesome. Any help is appreciated.
I'm developing PHP functions that need to use C shared memory. Like your code, my C functions use shm_open, mmap, etc., and I assumed I could use PHP's ftok() and shmop_open() to access the C code's shared memory, but those PHP functions don't work.
The two areas are not compatible. I found the differing properties of the two mechanisms documented here: http://menehune.opt.wfu.edu/Kokua/More_SGI/007-2478-008/sgi_html/ch03.html:
C (with shm_open, mmap, like the Straton source code) uses “POSIX Shared Memory”
PHP (with the shmop_* functions) uses “System V Shared Memory”
I suggest you try Sync (http://php.net/manual/en/book.sync.php); you need the PECL sync extension.
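If installing a PECL extension on the embedded image is not practical, one possible workaround (just a sketch, and it assumes the writer exposes a plain byte buffer whose layout the reader knows, and that occasional torn reads are acceptable) relies on the fact that on Linux, POSIX shared memory objects created with shm_open() appear as files under /dev/shm, so PHP can read the object with its ordinary file functions:
<?php
// Sketch only: read the C++ program's POSIX shared memory object by opening
// its backing file under /dev/shm with normal PHP file functions. The object
// name comes from the question; there is no locking here, so a fast writer
// can produce torn reads.
$path = '/dev/shm/shared_file.shm';
$fp = fopen($path, 'rb');
if ($fp === false) {
    die("unable to open $path\n");
}
$data = fread($fp, filesize($path));   // snapshot of the whole segment
fclose($fp);
// ... decode $data according to the ring-buffer layout used by the writer
This gives a fresh copy of the buffer on every request, which fits the "read a chunk when a request comes in" use case, but it is not a live mapping.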
Using sys_getloadavg() we can get the server load, and using memory_get_usage() we can get the memory assigned to the current script.
However: is something similar to this program possible using pure PHP code (not shell, not bash):
<?php
function get_ALL_process_PHP_running_just_now() {
    // ...
    // ... get the memory of ALL PHP processes
    return array_process_number();
}
then obtain something similar to:
total scripts running: 35
users running processes: 6
processes running for more than 5 minutes: 2
global memory assigned to all PHP processes: 8GB
etc...
Is it possible to obtain that info with an "admin.php" script?
As far as I know there is no built-in function that pieces together that data. However, the functions you refer to (sys_getloadavg, memory_get_usage) are just wrappers around the /proc filesystem (on Linux anyway; I don't think many of them have Windows counterparts).
The ordinary filesystem functions, which you use to read files, can be used to read the /proc filesystem, which in turn contains all the information you might want.
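To make that concrete, here is a minimal sketch (Linux only; the function name and the chosen output are made up for illustration) that walks /proc with ordinary filesystem functions, keeps the processes whose command line mentions php, and sums their resident memory (VmRSS):
<?php
// Sketch: enumerate PHP processes by reading the /proc filesystem directly.
function get_php_processes()   // hypothetical helper name
{
    $procs = array();
    foreach (glob('/proc/[0-9]*', GLOB_ONLYDIR) as $dir) {
        $cmdline = @file_get_contents($dir . '/cmdline');
        if ($cmdline === false || strpos($cmdline, 'php') === false) {
            continue;
        }
        $rss_kb = 0;
        $status = @file($dir . '/status');
        if ($status !== false) {
            foreach ($status as $line) {
                if (strncmp($line, 'VmRSS:', 6) === 0) {
                    $rss_kb = (int) trim(substr($line, 6));   // value is in kB
                    break;
                }
            }
        }
        $procs[] = array(
            'pid'     => (int) basename($dir),
            'cmdline' => str_replace("\0", ' ', $cmdline),
            'rss_kb'  => $rss_kb,
        );
    }
    return $procs;
}

$procs    = get_php_processes();
$total_kb = 0;
foreach ($procs as $p) {
    $total_kb += $p['rss_kb'];
}
printf("total PHP processes: %d\n", count($procs));
printf("global memory assigned to all PHP processes: %.1f MB\n", $total_kb / 1024);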
I've recently started using the APC cache on our servers. One of the most important parts of our product is a CLI (cron/scheduled) process, whose performance is critical. Typically the batch job consists of running some 16-32 processes in parallel for about an hour (they "restart" every few minutes).
By default, using the APC cache in CLI is a waste of time, because the opcode cache is not retained between individual invocations. But APC also contains the apc_bin_dumpfile() and apc_bin_loadfile() functions.
I was thinking these two functions might be used to make APC efficient in CLI mode by having everything compiled sometime outside the batch job, stored in a single dump file, and having the individual processes load the dump file.
Does anybody have any experience with such a scenario, or can you give good reasons why it will or will not work? Could any significant gains reasonably be had, either in memory use or performance? What pitfalls are lurking in the shadows?
Disclaimer: As awesome as APC is when it works in CLI (and it is awesome), it can be equally frustrating. Use it with a healthy load of patience. Be thorough, and step away from the problem if you're spinning your wheels. Keep in mind you are working with a cache; that is why it seems like it's doing nothing, because it is actually doing nothing. Delete the dump file and start with just the basics; if that doesn't work, forget it and try a new machine or a new OS. If it is working, make a copy, then expand the functionality piece by piece (there are loads of things that won't work); each time it is still working, commit or make a copy, add another piece and test again, and for a sanity check re-check the copies that were working before. Clichés or not: if at first you don't succeed, try, try again, but you can't keep doing the same thing and expect new results.
Ready? This is what you've been waiting for:
Enable APC for CLI
apc.enable_cli=1
it is not ideal to create, populate and destroy the APC cache on every CLI request
- previous answer by unknown poster since removed.
You're absolutely right, that sucks; let's fix it, shall we?
If you try to use APC under CLI and it is not enabled, you will get warnings,
something like:
PHP Warning: apc_bin_loadfile(): APC is not enabled,
apc_bin_loadfile not available.
PHP Warning: apc_bin_dumpfile(): APC is not enabled,
apc_bin_dumpfile not available.
Warning: I suggest you don't enable CLI in php.ini; it is not worth the frustration. You are going to forget you did it and have numerous other headaches with other scripts. Trust me, it's not worth it; use a launcher script instead (see below).
apc_bin_loadfile and apc_bin_dumpfile in CLI
As per the comment by mightye, we need to disable apc.stat or you will get warnings,
something like:
PHP Warning: apc_bin_dumpfile(): Excluding some files from apc_bin_dump[file].
Cached files must be included using full path with apc.stat=0.
launcher script - php-apc.sh
We will use this script to launch our APC-enabled scripts (e.g. ./php-apc.sh apc-cli.php) instead of changing the properties in php.ini directly.
#!/bin/sh
php -d apc.enable_cli=1 -d apc.stat=0 "$1"
Ready for the basic functionality? Sure you are =)
basic APC persisted - apc-cli.php
<?php
/** check if the dump file exists; you don't want to use file_exists */
if (false !== ($dump_file = stream_resolve_include_path('apc.dump'))) {
    /** so where were we, let's have a look-see shall we */
    if (false !== apc_bin_loadfile($dump_file)) {
        /** fetch what was stored last run, just for fun */
        if (false !== ($value = apc_fetch('my.awesome.apc.store'))) {
            echo "$value from apc\n";
        }
    }
}
/** store what gets fetched the next run, just for fun */
apc_store('my.awesome.apc.store', 'awesome in cli');
/** what a schlep, let's not do that all over again shall we */
apc_bin_dumpfile(array(), null, 'apc.dump');
Notice: Why not use file_exists? Because file_exists == stat, and we want to reap the reward that is apc.stat=0. So: work within the include path; use absolute rather than relative paths, as returned by stream_resolve_include_path(); avoid include_once and require_once in favour of their non-*_once counterparts; and check your stat usage when not using APC (muchos important, señor) with the help of a StreamWrapper that echoes calls to its url_stat method. Oops: Fatal scope over-run error! Aborting notice thread; see url_stat.
message: Error caused by StreamWrapper, outside the scope of this discussion.
The smoke test
Using the launcher, execute the basic script:
./php-apc.sh apc-cli.php
A whole bunch of nothing happened? That's what we want, right? Why else would you want to use a cache? If it did output anything, then it didn't work, sorry.
There should be a dump file called apc.dump; see if you can find it. If you can't find it, then it didn't work, sorry.
Good, we have the dump file and there were no errors; let's run it again.
./php-apc.sh apc-cli.php
What you want to see:
awesome in cli from apc
Success! =)
There are few things in PHP as satisfying as a working APC implementation.
nJoy!
I would definitely not use it in the CLI, as when you restart the process it's almost as if APC was never running in the first place!
The better way of using APC is to have it running on the web server itself all the time; that way, since it stays active, it will actually do what it's supposed to do!
I tried it with curl and APC; it works.
Use this command in the CLI:
curl --data "param1=value2" http://testsite.com/test.php
It will POST the data to test.php, where you write the code to handle it.
Is there a way to view the PHP error logs or Apache error logs in a web browser?
I find it inconvenient to ssh into multiple servers and run a "tail" command to follow the error logs. Is there some tool (preferably open source) that shows me the error logs online (streaming or non-streaming)?
Thanks
A simple PHP script to read the log and print it:
<?php
exec('tail /var/log/apache2/error.log', $error_logs);
foreach($error_logs as $error_log) {
echo "<br />".$error_log;
}
?>
You can embed the $error_logs PHP variable in HTML as per your requirements. The best part is that tail only loads the latest errors, which won't put too much load on your server.
You can change the tail options to get the output you want.
Ex. tail -n 100 myfile.txt // it will print the last 100 lines
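For instance, a slightly fuller sketch of the same idea (the log path and line count are placeholders), passing the line count to tail and HTML-escaping the log lines before printing:
<?php
// Sketch: same approach as above, with a configurable line count and the
// log lines escaped so raw log content cannot inject HTML into the page.
$lines = 100;                              // how many lines to show
$log   = '/var/log/apache2/error.log';     // placeholder path
exec('tail -n ' . (int) $lines . ' ' . escapeshellarg($log), $error_logs);
foreach ($error_logs as $error_log) {
    echo htmlspecialchars($error_log) . "<br />";
}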
See "What commercial and open source competitors are there to Splunk?", and I would recommend https://github.com/tobi/clarity.
It is a simple and easy tool.
Since everyone is suggesting clarity, I would also like to mention tailon. I wrote tailon as a more modern and secure alternative to clarity. It's still in its early stages of development, but the functionality you need is there. You may also use wtee, if you're only interested in following a single log file.
You could make a script that reads the error logs from apache2:
$apache_errorlog = file_get_contents('/var/log/apache2/error.log');
If that's not working, try getting it with the PHP functions exec or shell_exec and the command 'cat /var/log/apache2/error.log'.
EDIT: If you have multiple servers (I guess with web servers on them), you can create such a script on each machine; when you make a request to that script (over an authenticated connection) you get the logs from that server.
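A rough sketch of that multi-server idea (the file name logs.php, the token value, and the log path are all placeholders I made up, not something from the original answer): each server exposes the tail of its own error log behind a shared-secret token, and a central page fetches each one.
<?php
// logs.php - sketch: expose the tail of this server's error log, but only to
// requests that carry the shared-secret token.
$secret = 'change-me';                                    // placeholder token
if (!isset($_GET['token']) || $_GET['token'] !== $secret) {
    header('HTTP/1.0 403 Forbidden');
    exit;
}
exec('tail -n 200 /var/log/apache2/error.log', $lines);   // placeholder path
header('Content-Type: text/plain');
echo implode("\n", $lines);
The central page can then pull each server with something like file_get_contents('http://server1/logs.php?token=change-me') and print the results side by side.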
I recommend LogHappens: https://loghappens.com. It allows you to view the error log in a web browser.
LogHappens supports various web server log formats; it comes with parsers for Apache and CakePHP, and you can write your own.
You can find it here: https://github.com/qijianjun/logHappens
It's open source and free. I forked it and did some work to make it work better in a dev environment or a public environment, namely:
Support for a token for security: one can't access the site without the token set in config.php
Support for IP whitelists for security and privacy
Support for configuring the interval between AJAX requests
Support for loading static files locally (for a local dev environment)
I've found this solution: https://code.google.com/p/php-tail/
It's working perfectly. I only needed to change the file size handling, because I was getting an error at first.
56 if($maxLength > $this->maxSizeToLoad) {
57 $maxLength = $this->maxSizeToLoad;
58 // return json_encode(array("size" => $fsize, "data" => array("ERROR: PHPTail attempted to load more (".round(($maxLength / 1048576), 2)."MB) then the maximum size (".round(($this->maxSizeToLoad / 1048576), 2) ."MB) of bytes into memory. You should lower the defaultUpdateTime to prevent this from happening. ")));
59 }
And I've added a default size, but it's not needed:
125 lastSize = <?php echo filesize($this->log) || 1000; ?>;
I know this question is a bit old, but (along with the lack of good choices) it gave me the idea to create this tiny (open source) web app. https://github.com/ToX82/logHappens. It can be used online, but I'd use an .htpasswd as a basic login system. I hope it helps.
I am having trouble uploading files to S3 from on one of our servers. We use S3 to store our backups and all of our servers are running Ubuntu 8.04 with PHP 5.2.4 and libcurl 7.18.0. Whenever I try to upload a file Amazon returns a RequestTimeout error. I know there is a bug in our current version of libcurl preventing uploads of over 200MB. For that reason we split our backups into smaller files.
We have servers hosted on Amazon's EC2 and servers hosted on customer's "private clouds" (a VMWare ESX box behind their company firewall). The specific server that I am having trouble with is hosted on a customer's private cloud.
We use the Amazon S3 PHP Class from http://undesigned.org.za/2007/10/22/amazon-s3-php-class. I have tried 200MB, 100MB and 50MB files, all with the same results. We use the following to upload the files:
$s3 = new S3($access_key, $secret_key, false);
$success = $s3->putObjectFile($local_path, $bucket_name,
$remote_name, S3::ACL_PRIVATE);
I have tried setting curl_setopt($curl, CURLOPT_NOPROGRESS, false); to view the progress bar while it uploads the file. The first time I ran it with this option set it worked. However, every subsequent time it has failed. It seems to upload the file at around 3Mb/s for 5-10 seconds then drops to 0. After 20 seconds sitting at 0, Amazon returns the "RequestTimeout - Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed." error.
I have tried updating the S3 class to the latest version from GitHub but it made no difference. I also found the Amazon S3 Stream Wrapper class and gave that a try using the following code:
include 'gs3.php';
define('S3_KEY', 'ACCESSKEYGOESHERE');
define('S3_PRIVATE','SECRETKEYGOESHERE');
$local = fopen('/path/to/backup_id.tar.gz.0000', 'r');
$remote = fopen('s3://bucket-name/customer/backup_id.tar.gz.0000', 'w+r');
$count = 0;
while (!feof($local))
{
$result = fwrite($remote, fread($local, (1024 * 1024)));
if ($result === false)
{
fwrite(STDOUT, $count++.': Unable to write!'."\n");
}
else
{
fwrite(STDOUT, $count++.': Wrote '.$result.' bytes'."\n");
}
}
fclose($local);
fclose($remote);
This code reads the file one MB at a time in order to stream it to S3. For a 50MB file, I get "1: Wrote 1048576 bytes" 49 times (the first number changes each time of course) but on the last iteration of the loop I get an error that says "Notice: fputs(): send of 8192 bytes failed with errno=11 Resource temporarily unavailable in /path/to/http.php on line 230".
My first thought was that this is a networking issue. We called up the customer and explained the issue and asked them to take a look at their firewall to see if they were dropping anything. According to their network administrator the traffic is flowing just fine.
I am at a loss as to what I can do next. I have been running the backups manually and using SCP to transfer them to another machine and upload them. This is obviously not ideal and any help would be greatly appreciated.
Update - 06/23/2011
I have tried many of the options below, but they all gave the same result. I have found that even trying to scp a file from the server in question to another server stalls immediately and eventually times out. However, I can use scp to download that same file from another machine. This makes me even more convinced that this is a networking issue on the client's end; any further suggestions would be greatly appreciated.
This problem exists because you are trying to upload the same file again. Example:
$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
$s3->putObjectFile('file.jpg','bucket-name','newname-file.jpg');
To fix it, just copy the file and give it a new name, then upload it normally.
Example:
$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
now rename file.jpg to newname-file.jpg
$s3->putObjectFile('newname-file.jpg','bucket-name','newname-file.jpg');
I solved this problem another way. My bug was that the filesize() function returned an invalid cached size value, so just use clearstatcache().
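In other words, a minimal sketch reusing the S3 class from the question: clear the stat cache before handing the file over, so filesize() reports the real, current size.
<?php
// Sketch: drop PHP's cached stat data before the upload starts.
clearstatcache();
$s3 = new S3($access_key, $secret_key, false);
$success = $s3->putObjectFile($local_path, $bucket_name, $remote_name, S3::ACL_PRIVATE);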
I have experienced this exact same issue several times.
I have many scripts right now which are uploading files to S3 constantly.
The best solution that I can offer is to use the Zend libraries (either the stream wrapper or direct S3 API).
http://framework.zend.com/manual/en/zend.service.amazon.s3.html
Since the latest release of Zend framework, I haven't seen any issues with timeouts. But, if you find that you are still having problems, a simple tweak will do the trick.
Simply open the file Zend/Http/Client.php and modify the 'timeout' value in the $config array. At the time of writing it was on line 114. Before the latest release I was running with 120 seconds, but now things are running smoothly with a 10-second timeout.
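If you would rather not patch the library file, it should also be possible to push the timeout in from your own code. This is a sketch from memory of the ZF1 API (the static getHttpClient(), Zend_Http_Client::setConfig(), and putFile()), so treat it as an assumption and double-check it against your framework version:
<?php
// Assumption: ZF1 service classes share a Zend_Http_Client reachable via the
// static getHttpClient(), and setConfig() accepts a 'timeout' key. If either
// detail differs in your version, fall back to editing Zend/Http/Client.php
// as described above.
require_once 'Zend/Service/Amazon/S3.php';
Zend_Service_Amazon_S3::getHttpClient()->setConfig(array('timeout' => 120));
$s3 = new Zend_Service_Amazon_S3($accessKey, $secretKey);
$s3->putFile('/path/to/backup.tar.gz', 'bucket-name/backup.tar.gz');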
Hope this helps!
There are quite a few solutions available. I had this exact problem, but I didn't want to write code and figure out the problem myself.
Initially I was searching for a way to mount the S3 bucket on the Linux machine, and found something interesting:
s3fs - http://code.google.com/p/s3fs/wiki/InstallationNotes
- this did work for me. It uses a FUSE filesystem plus rsync to sync the files with S3. It keeps a copy of all filenames in the local system and makes them look like local files/folders.
This saved us a bunch of time, with no headache of writing code to transfer the files.
Then, while trying to see if there were other options, I found a command-line script that can help you manage an S3 account.
s3cmd - http://s3tools.org/s3cmd - this looks pretty clear.
[UPDATE]
Found one more CLI tool - s3sync
s3sync - https://forums.aws.amazon.com/thread.jspa?threadID=11975&start=0&tstart=0 - found in the Amazon AWS community.
I don't see a big difference between them; if you are not worried about disk space, I would choose s3fs over s3cmd. Having the files on disk makes you feel more comfortable, and you can actually see them there.
Hope it helps.
You should take a look at the AWS PHP SDK. This is the AWS PHP library formerly known as Tarzan and CloudFusion.
http://aws.amazon.com/sdkforphp/
The S3 class included with this is rock solid. We use it to upload multi GB files all of the time.
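For reference, an upload with that SDK (the 1.x, CloudFusion-derived API) looked roughly like the sketch below; the option names are written from memory, so verify them against the SDK documentation for your version.
<?php
// Rough sketch of an upload with the old AWS SDK for PHP 1.x; option names
// are from memory, not verified against a specific SDK release.
require_once 'sdk.class.php';
$s3 = new AmazonS3();                 // credentials come from the SDK config file
$response = $s3->create_object('bucket-name', 'backups/backup.tar.gz', array(
    'fileUpload' => '/path/to/backup.tar.gz',
    'acl'        => AmazonS3::ACL_PRIVATE,
));
var_dump($response->isOK());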
A very strange thing is happening. I am running a script on a new server (it works on my current server and laptop).
The strange thing is that I only get it to (sort of) work when I increase the memory limit to 1024M (!). It is extracting a large zip file and going through the files, so I thought that was normal. But instead of the script terminating or ending with errors, I get an error from my browser:
The server at www.localhost.com is
taking too long to respond.
Localhost.com? The web server is just localhost:9090, and I can see Apache is still running. Maybe Apache crashes momentarily and the browser can't find the server? But there is nothing about Apache crashing in the log files.
This isn't a server issue; I think it's more to do with my PHP script and memory usage, so no need to move it to Server Fault.
What could be the problem? How can I narrow down the cause? I am at a loss here!
The server is a Windows server running Apache 2.2 with PHP version 5.3.2. My laptop and the other working server are running PHP versions 5.3.0 and 5.3.1.
Thanks all for any help
Ensure that:
ini_set('display_errors','On');
ini_set('error_reporting',E_ALL);
ini_set('max_execution_time', 180);
ini_set('memory_limit', '1024M');
I'd pop this in the top of the script and see what comes out. It should show you errors and the like.
The other thing: have you checked fopen and the path of the file it's loading?
Abs said:
check that the files being zipped up can be zipped by PHP (permissions, especially on a Windows OS with multiple users)
I kept getting this problem too, and none of these sites really helped until I started looking at the same thing for people using Internet Explorer. The way I fixed it was to open up the system hosts file, located at C:\Windows\System32\drivers\etc\hosts, and then uncomment the line that mentions ::1, which is needed for IPv6. After that it worked fine.
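For reference, once the ::1 line is uncommented, the relevant entries in C:\Windows\System32\drivers\etc\hosts typically look like this:
127.0.0.1       localhost
::1             localhost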
Somehow your system's munged up and isn't treating localhost as the local 127.0.0.1 address. Is your hosts file properly configured? This is most likely why you're getting the "too long to respond" error:
marc@panic:~$ host www.localhost.com
www.localhost.com has address 64.99.64.32
marc@panic:~$ wget www.localhost.com
--2010-08-03 22:41:05-- http://www.localhost.com/
Resolving www.localhost.com... 64.99.64.32
Connecting to www.localhost.com|64.99.64.32|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.
www.localhost.com is a fully valid hostname as far as the DNS system is concerned.
I am not a PHP guru by any means, but are you writing the extracted files to a temporary local storage location that is within the scope of the application? Because if you are not, then I think what is happening is that the application is holding the zip file and the extracted files in memory and then attempting to read them. So if it is a large zip and/or the extracted files are large, that would introduce a huge amount of overhead on top of the overhead introduced by your read and processing actions.
So if you are not doing this already, I would extract the files and write them to disk in their own folder, dispose of the zip file at that point, and then iterate over the files in the newly created directory and perform whatever actions you need on them.
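A minimal sketch of that approach (the paths and the processing callback are placeholders): extract the archive to its own directory with ZipArchive, release the zip, then walk the extracted files one at a time.
<?php
// Sketch: extract to disk first, free the archive, then process the files
// one by one instead of holding everything in memory at once.
$zipPath = '/path/to/upload.zip';                       // placeholder
$workDir = sys_get_temp_dir() . '/extracted_' . uniqid();
mkdir($workDir);

$zip = new ZipArchive();
if ($zip->open($zipPath) !== true) {
    die("unable to open $zipPath\n");
}
$zip->extractTo($workDir);
$zip->close();                                          // dispose of the zip here

foreach (new DirectoryIterator($workDir) as $file) {
    if ($file->isFile()) {
        process_file($file->getPathname());             // hypothetical handler
    }
}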