how to obtain a stack trace when WAMP server crashes? - php

Sometimes my WAMP server crashes. I get the following error.
HTTP has encountered exception and needs to close.
Unreferenced Memory.
szAppName : httpd.exe szAppVer : 2.2.11.0 szModName : php5ts.dll
szModVer : 5.3.0.0 offset : 0000c309
C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp
C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt
My question is: how do I obtain a stack trace to debug this problem?
Should I use a Windows debugger, or is there some setting in the WAMP server configuration I should enable?

You can use Debug Diag.
Select the "Crash" rule in the "Select Rule Type" dialog that pops up when you start Debug Diag.
Also take a look at Tess Ferrandez's blog entry Debugging Native memory leaks with Debug Diag 1.1. (Though it's not exactly about what you want, it's never wrong to read that blog ;-))
Debug Symbols contain information that "glues" the executable and the code together. Microsoft's format for debug symbols is called "program database" and they are usually stored in files having the extension .pdb.
Right now you only get "the assembly instruction at php5ts!zend_mm_shutdown+f69". The application called into a function zend_mm_shutdown that is exported by the php5ts.dll, so the debugger "knows" about this function regardless of whether there are debug symbols or not. But it doesn't know about the source code that led the compiler to build the machine instruction at zend_mm_shutdown+f69. The debug symbols contain such information, so a debugger can show you the source code and the context.
You can create debug symbols for both the debug and the release build (for the latter they are usually less accurate). But I haven't found a debug pack for the wamp builds of php.
For the php.net/win32 build you can download the debug packs for their release builds from http://windows.php.net/download/. Or you can download the source code and create a debug build yourself. But you can't mix the wamp executable with the php.net debug packs (i.e. you wouldn't use the wamp executables/dlls for this).
And maybe seeing the source code can give you a hint about what's going wrong. But somehow I doubt that. The mm in zend_mm_shutdown probably stands for "memory management". It probably just releases some buckets of memory and some of its data structures are wrong at this point. That could be some other code overwriting data of the zend memory management. Could be an edge case that is handled wrong (something that has been freed but not removed from the list/data structure). The bad thing is the underlying problem could be anywhere ...far, far away from the code that is finally causing the access violation. And if zend_mm_shutdown really is some low-level memory management there's probably not much information left about what changed the data structure (and why).
I'd rather try another php build first and see if the problem occurs again. It shouldn't be too hard to replace the wamp files with the php.net build. It might be as easy as replacing the php folder in your wamp installation and then checking whether you have to copy some files to the apache binary folder, too.
But make a copy/backup of the complete wamp folder first ....just in case ;-)

Please see the logs below from the Debug Diagnostics tool with the php 5.3.0 debug pack.
Is it a problem with the PDO library that is used to access MySQL?
The crash is very intermittent. Please reply.
Thread 61 - System ID 2760
Entry point msvcrt!_endthreadex+3a
Create time 3/31/2010 5:25:46 PM
Time spent in user mode 0 Days 0:0:16.593
Time spent in kernel mode 0 Days 0:0:0.453
Function Arg 1 Arg 2 Arg 3 Source
php5ts!_zend_mm_free_int+139 0288a878 00020004 008eff46
php5ts!_efree+36 0110ff48 02d18868 0090e442
php5ts!_zval_ptr_dtor+66 024ffa88 02d18770 02d18848
php5ts!zend_std_write_property+1f2 02d18848 0110ff48 02d18868
php5ts!pdo_stmt_construct+7d 02d1b968 02d18848 0110eb90
php5ts!zim_PDO_prepare+428 0110eb90 02d18848 00000000
php5ts!zend_do_fcall_common_helper_SPEC+946 024ffbf8 028894c8 024ffe74
php5ts!execute+29e 02d40070 02889400 00000000
php5ts!zend_execute_scripts+f6 00000008 028894c8 00000000
php5ts!php_execute_script+22d 024ffe74 028894c8 00000005
php5apache2_2!php_handler+5d0 03249f98 008238b8 03249f98
libhttpd!ap_run_handler+21 03249f98 03249f98 03249f98
libhttpd!ap_invoke_handler+ae 00000000 029e9fd8 024fff38
libhttpd!ap_die+29e 03249f98 00000000 00788168
libhttpd!ap_get_request_note+1c9c 029e9fd8 029e9fd8 029e9fd8
libhttpd!ap_run_process_connection+21 029e9fd8 00775050 024fff80
libhttpd!ap_process_connection+33 029e9fd8 027de3f0 00ed0000
libhttpd!ap_regkey_value_remove+c7c 029e9fd0 00ed0000 00f10000
msvcrt!_endthreadex+a9 011d3088 00ed0000 00f10000
kernel32!BaseThreadStart+37 77c3a341 011d3088 00000000

Related

PHP / C++: shm_open() error when sharing memory

I've been all over the internet looking for an answer to my problem. Here is the setup: I am running embedded Linux (created with Yocto), which is running the Lighttpd web server with PHP5. In my C++ code I have the following:
shared = shm_open(SHARED_FILE_NAME, O_RDWR | O_CREAT | O_TRUNC, 0666);
ftruncate(shared, FILE_SIZE);
map = mmap(...);
// shm_unlink() isn't called until my C++ thread ends.
Everything works well: I do not get any errors, and other C++ processes and threads are also able to access the shared memory and map it without any problems (I have one writer thread, and all other threads and processes only do reads on the memory). The memory is used as a ring buffer where the writing thread is updating data very quickly. The problems start to occur when trying to access that same memory in PHP. In PHP I do (I need read only):
<?php
$shm_key = ftok("/dev/shm/shared_file.shm", 'c');
$shm_id = shm_open($shm_key, "a", 0, 0);
...
?>
When looking at the value from ftok(), it returns something other than -1, which means it did not fail. I do get a failure on PHP's shm_open() call, which reads:
Warning: shmop_open(): unable to attach or create shared memory segment in /www/pages/shared.php on line 9
I've changed the permissions of the file with chmod 777 /dev/shm/shared.shm just to rule out any file permission issues. Also, when I run ipcs -m I do not get any listings for shared memory segments, yet my C++ code is running just fine. I've also looked for SELinux and tried entering setenforce 0, but I get a response of -sh: setenforce: command not found, so I figure this isn't an issue. I've also tried running wget <local ip address>/shared.php to see if running locally would return the correct data, but the file which was returned had the same error messages.
I am looking to be able to have a web page on my embedded system read this shared memory and stream back chunks of binary to feed a graph when a request comes in (not interested in web sockets at the time). I am able to get named pipes to work across PHP and C++ just fine but I need shared memory for this application and the shared memory access seems to be troublesome. Any help is appreciated.
I'm developing PHP functions that need to use C shared memory. Like your code, my C functions use shm_open, mmap, etc., and I tried to use PHP's ftok() and shmop_open() to access the C shared memory, but those PHP functions don't work.
The two areas are not compatible. I found the different properties of the two areas documented here: http://menehune.opt.wfu.edu/Kokua/More_SGI/007-2478-008/sgi_html/ch03.html:
C (with shm_open, mmap, like the Straton source code) uses "POSIX Shared Memory"
PHP (with the shmop_* functions) uses "System V Shared Memory"
I suggest you try Sync (http://php.net/manual/en/book.sync.php): you need the PECL sync extension.
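If all you need from PHP is read-only access, one other workaround (my own suggestion, not part of the answer above) is to read the POSIX shared memory object directly as a file, since on Linux it is exposed under /dev/shm. A minimal sketch, assuming the object is the /dev/shm/shared_file.shm from the question:
<?php
// Read-only peek at the POSIX shm object created by the C++ writer.
// The path is taken from the question; adjust it to your object name.
$path = '/dev/shm/shared_file.shm';

$data = file_get_contents($path);
if ($data === false) {
    die("Could not read $path\n");
}
echo "Read " . strlen($data) . " bytes from shared memory\n";
?>
This only gives you an unsynchronized snapshot of the buffer; for coordinated access you would still want something like the Sync extension or an explicit lock.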

General error: 5 database is locked in PDO using sqlite [duplicate]

When I enter this query:
sqlite> DELETE FROM mails WHERE (id = 71);
SQLite returns this error:
SQL error: database is locked
How do I unlock the database so this query will work?
On Windows you can try this program http://www.nirsoft.net/utils/opened_files_view.html to find out which process is handling the db file. Try closing that program to unlock the database.
On Linux and macOS you can do something similar; for example, if your locked file is development.db:
$ fuser development.db
This command will show what process is locking the file:
> development.db: 5430
Just kill the process...
kill -9 5430
...And your database will be unlocked.
I caused my sqlite db to become locked by crashing an app during a write. Here is how I fixed it:
echo ".dump" | sqlite old.db | sqlite new.db
Taken from: http://random.kakaopor.hu/how-to-repair-an-sqlite-database
The SQLite wiki DatabaseIsLocked page offers an explanation of this error message. It states, in part, that the source of contention is internal (to the process emitting the error). What this page doesn't explain is how SQLite decides that something in your process holds a lock and what conditions could lead to a false positive.
This error code occurs when you try to do two incompatible things with a database at the same time from the same database connection.
Changes related to file locking were introduced in version 3 and may be useful to future readers; they are documented here: File Locking And Concurrency In SQLite Version 3
If you want to remove a "database is locked" error then follow these steps:
Copy your database file to some other location.
Replace the database with the copied database. This will dereference all processes which were accessing your database file.
Deleting the -journal file sounds like a terrible idea. It's there to allow sqlite to roll back the database to a consistent state after a crash. If you delete it while the database is in an inconsistent state, then you're left with a corrupted database. Citing a page from the sqlite site:
If a crash or power loss does occur and a hot journal is left on the disk, it is essential that the original database file and the hot journal remain on disk with their original names until the database file is opened by another SQLite process and rolled back. [...]
We suspect that a common failure mode for SQLite recovery happens like this: A power failure occurs. After power is restored, a well-meaning user or system administrator begins looking around on the disk for damage. They see their database file named "important.data". This file is perhaps familiar to them. But after the crash, there is also a hot journal named "important.data-journal". The user then deletes the hot journal, thinking that they are helping to cleanup the system. We know of no way to prevent this other than user education.
The rollback is supposed to happen automatically the next time the database is opened, but it will fail if the process can't lock the database. As others have said, one possible reason for this is that another process currently has it open. Another possibility is a stale NFS lock, if the database is on an NFS volume. In that case, a workaround is to replace the database file with a fresh copy that isn't locked on the NFS server (mv database.db original.db; cp original.db database.db). Note that the sqlite FAQ recommends caution regarding concurrent access to databases on NFS volumes, because of buggy implementations of NFS file locking.
I can't explain why deleting a -journal file would let you lock a database that you couldn't before. Is that reproducible?
By the way, the presence of a -journal file doesn't necessarily mean that there was a crash or that there are changes to be rolled back. Sqlite has a few different journal modes, and in PERSIST or TRUNCATE modes it leaves the -journal file in place always, and changes the contents to indicate whether or not there are partial transactions to roll back.
The SQLite db files are just files, so the first step would be to make sure the file isn't read-only. The other thing to do is to make sure that you don't have some sort of GUI SQLite DB viewer with the DB open. You could have the DB open in another shell, or your code may have the DB open. Typically you would see this if a different thread, or an application such as SQLite Database Browser, has the DB open for writing.
My lock was caused by the system crashing and not by a hanging process. To resolve this, I simply renamed the file then copied it back to its original name and location.
Using a Linux shell that would be:
mv mydata.db temp.db
cp temp.db mydata.db
If a process has a lock on an SQLite DB and crashes, the DB stays locked permanently. That's the problem. It's not that some other process has a lock.
I had this problem just now, using an SQLite database on a remote server, stored on an NFS mount. SQLite was unable to obtain a lock after the remote shell session I used had crashed while the database was open.
The recipes for recovery suggested above did not work for me (including the idea to first move and then copy the database back). But after copying it to a non-NFS system, the database became usable and no data appears to have been lost.
Some functions, like INDEX'ing, can take a very long time - and it locks the whole database while it runs. In instances like that, it might not even use the journal file!
So the best/only way to check if your database is locked because a process is ACTIVELY writing to it (and thus you should leave it the hell alone until it's completed its operation) is to md5 (or md5sum on some systems) the file twice.
If you get a different checksum, the database is being written, and you really really REALLY don't want to kill -9 that process because you can easily end up with a corrupt table/database if you do.
I'll reiterate, because it's important - the solution is NOT to find the locking program and kill it - it's to find if the database has a write lock for a good reason, and go from there. Sometimes the correct solution is just a coffee break.
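As a rough sketch of that check (my own illustration, not from the answer itself), hash the file twice a short interval apart and compare:
<?php
// Detect whether something is actively writing to the SQLite file.
// 'mydata.db' is a placeholder path; use your own database file.
$db = 'mydata.db';

$first = md5_file($db);
sleep(1);
$second = md5_file($db);

if ($first !== $second) {
    echo "The file changed between checks - a process is actively writing to it.\n";
} else {
    echo "No change observed in this interval (the lock may still be held for a good reason).\n";
}
?>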
The only way to create this locked-but-not-being-written-to situation is if your program runs BEGIN EXCLUSIVE, because it wanted to do some table alterations or something, then for whatever reason never sends an END afterwards, and the process never terminates. All three conditions being met is highly unlikely in any properly-written code, and as such 99 times out of 100 when someone wants to kill -9 their locking process, the locking process is actually locking your database for a good reason. Programmers don't typically add the BEGIN EXCLUSIVE condition unless they really need to, because it prevents concurrency and increases user complaints. SQLite itself only adds it when it really needs to (like when indexing).
Finally, the 'locked' status does not exist INSIDE the file as several answers have stated - it resides in the Operating System's kernel. The process which ran BEGIN EXCLUSIVE has requested from the OS a lock be placed on the file. Even if your exclusive process has crashed, your OS will be able to figure out if it should maintain the file lock or not!! It is not possible to end up with a database which is locked but no process is actively locking it!!
When it comes to seeing which process is locking the file, it's typically better to use lsof rather than fuser (this is a good demonstration of why: https://unix.stackexchange.com/questions/94316/fuser-vs-lsof-to-check-files-in-use). Alternatively if you have DTrace (OSX) you can use iosnoop on the file.
I added "Pooling=true" to connection string and it worked.
This error can be thrown if the file is in a remote folder, like a shared folder. I changed the database to a local directory and it worked perfectly.
I found the documentation of the various states of locking in SQLite to be very helpful. Michael, if you can perform reads but can't perform writes to the database, that means that a process has gotten a RESERVED lock on your database but hasn't executed the write yet. If you're using SQLite3, there's a new lock called PENDING where no more processes are allowed to connect but existing connections can still perform reads, so if this is the issue you should look at that instead.
I had such a problem within an app which accessed SQLite from two connections: one was read-only and the second was for writing and reading. It looks like the read-only connection was blocking writes from the second connection. It turns out that you have to finalize or, at least, reset prepared statements IMMEDIATELY after use. While a prepared statement stays open, it keeps the database blocked for writing.
DON'T FORGET TO CALL:
sqlite3_reset(xxx);
or
sqlite3_finalize(xxx);
I just had something similar happen to me - my web application was able to read from the database, but could not perform any inserts or updates. A reboot of Apache solved the issue at least temporarily.
It'd be nice, however, to be able to track down the root cause.
The lsof command on my Linux environment helped me figure out that a hanging process was keeping the file open.
I killed the process and the problem was solved.
This link solved the problem: When Sqlite gives : Database locked error
It solved my problem and may be useful to you.
And you can use begin transaction and end transaction so the database doesn't get locked in the future.
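A minimal PDO sketch of that advice (my own illustration; it assumes a local SQLite file, here called example.db, and reuses the DELETE from the question):
<?php
// Wrap related writes in one transaction so the lock is taken briefly
// and always released, even when a statement fails.
$pdo = new PDO('sqlite:example.db');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    $pdo->exec("DELETE FROM mails WHERE (id = 71)");
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();   // release the lock if the statement fails
    throw $e;
}
?>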
It could be the database's internal problem...
For me it manifested after I tried to browse the database with "SQLite Manager"...
So, if you can't find another process connected to the database and you just can't fix it,
try this radical solution:
Export your tables (you can use "SQLite Manager" on Firefox)
If the migration altered your database schema, delete the last failed migration
Rename your "database.sqlite" file
Execute "rake db:migrate" to make a new working database
Give the database the right permissions for importing the tables
Import your backed-up tables
Write the new migration
Execute it with "rake db:migrate"
In my experience, this error is caused by having opened multiple connections.
e.g.:
1 or more sqlitebrowser (GUI)
1 or more electron thread
rails thread
I am not sure about the details of how SQLite3 handles multiple threads/requests, but when I closed sqlitebrowser and the Electron thread, Rails ran well and didn't block anymore.
I ran into this same problem on Mac OS X 10.5.7 running Python scripts from a terminal session. Even though I had stopped the scripts and the terminal window was sitting at the command prompt, it would give this error the next time it ran. The solution was to close the terminal window and then open it up again. Doesn't make sense to me, but it worked.
I just had the same error.
After 5 minutes of googling I found that I hadn't closed one shell which was using the db.
Just close it and try again ;)
I had the same problem. Apparently the rollback function seems to overwrite the db file with the journal which is the same as the db file but without the most recent change. I've implemented this in my code below and it's been working fine since then, whereas before my code would just get stuck in the loop as the database stayed locked.
Hope this helps
My Python code:
##############
#### Defs ####
##############
import time
import sqlite3 as sqlite  # assuming the standard sqlite3 module; the original answer's import was not shown

def conn_exec( connection , cursor , cmd_str ):
    done = False
    try_count = 0.0
    while not done:
        try:
            cursor.execute( cmd_str )
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception, error:
            if try_count % 60.0 == 0.0:  # print error every minute
                print "\t", "Error executing command", cmd_str
                print "Message:", error
            if try_count % 120.0 == 0.0:  # if we have waited for 2 minutes, roll back
                print "Forcing Unlock"
                connection.rollback()
            time.sleep(0.05)
            try_count += 0.05

def conn_commit( connection ):
    done = False
    try_count = 0.0
    while not done:
        try:
            connection.commit()
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception, error:
            if try_count % 60.0 == 0.0:  # print error every minute
                print "\t", "Error committing the transaction"
                print "Message:", error
            if try_count % 120.0 == 0.0:  # if we have waited for 2 minutes, roll back
                print "Forcing Unlock"
                connection.rollback()
            time.sleep(0.05)
            try_count += 0.05

##################
#### Run Code ####
##################
db_path = 'example.db'  # placeholder; point this at your SQLite database file
connection = sqlite.connect( db_path )
cursor = connection.cursor()

# Create tables if database does not exist
conn_exec( connection , cursor , '''CREATE TABLE IF NOT EXISTS fix (path TEXT PRIMARY KEY);''')
conn_exec( connection , cursor , '''CREATE TABLE IF NOT EXISTS tx (path TEXT PRIMARY KEY);''')
conn_exec( connection , cursor , '''CREATE TABLE IF NOT EXISTS completed (fix DATE, tx DATE);''')
conn_commit( connection )
One common reason for getting this exception is when you are trying to do a write operation while still holding resources for a read operation. For example, if you SELECT from a table, and then try to UPDATE something you've selected without closing your ResultSet first.
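In PDO terms (my own translation of that point, since the answer speaks of JDBC's ResultSet), that means finishing or closing the read cursor before writing on the same SQLite database:
<?php
// example.db is a placeholder path. Fetch what you need, close the cursor,
// then perform the write; an open, half-read result set can hold a read lock.
$pdo  = new PDO('sqlite:example.db');
$stmt = $pdo->query("SELECT id FROM mails WHERE id = 71");
$row  = $stmt->fetch(PDO::FETCH_ASSOC);
$stmt->closeCursor();                          // release the pending result set

$pdo->exec("DELETE FROM mails WHERE id = 71"); // now the write can proceed
?>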
I was having "database is locked" errors in a multi-threaded application as well, which appears to be the SQLITE_BUSY result code, and I solved it with setting sqlite3_busy_timeout to something suitably long like 30000.
(On a side-note, how odd that on a 7 year old question nobody found this out already! SQLite really is a peculiar and amazing project...)
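If you are going through PDO, as in the question, the equivalent knob (to the best of my knowledge; worth verifying against the PDO_SQLITE docs) is PDO::ATTR_TIMEOUT, which the SQLite driver treats as a busy timeout in seconds:
<?php
// Give SQLite time to retry instead of failing immediately with
// "database is locked". example.db is a placeholder path.
$pdo = new PDO('sqlite:example.db');
$pdo->setAttribute(PDO::ATTR_TIMEOUT, 30);   // retry for up to ~30 seconds on a busy database
?>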
Before going down the reboot option, it is worthwhile to see if you can find the user of the sqlite database.
On Linux, one can employ fuser to this end:
$ fuser database.db
$ fuser database.db-journal
In my case I got the following response:
philip 3556 4700 0 10:24 pts/3 00:00:01 /usr/bin/python manage.py shell
Which showed that I had another Python program with pid 3556 (manage.py) using the database.
An old question with a lot of answers; here are the steps I recently followed after reading the answers above, but in my case the problem was due to cifs resource sharing. This case hasn't been reported previously, so I hope it helps someone.
Check no connections are left open in your java code.
Check no other processes are using your SQLite db file with lsof.
Check that the user owning your running JVM process has r/w permissions on the file.
Try to force the lock mode on the connection opening with
final SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(false);
config.setLockingMode(LockingMode.NORMAL);
connection = DriverManager.getConnection(url, config.toProperties());
If you're using your SQLite db file over an NFS shared folder, check this point of the SQLite FAQ, and review your mount configuration options to make sure you're avoiding locks, as described here:
//myserver /mymount cifs username=*****,password=*****,iocharset=utf8,sec=ntlm,file,nolock,file_mode=0700,dir_mode=0700,uid=0500,gid=0500 0 0
I got this error in a scenario a little different from the ones described here.
The SQLite database rested on an NFS filesystem shared by 3 servers. On 2 of the servers I was able to run queries on the database successfully; on the third one, though, I was getting the "database is locked" message.
The thing with this 3rd machine was that it had no space left on /var. Every time I tried to run a query in ANY SQLite database located in this filesystem I got the "database is locked" message, and also this error in the logs:
Aug 8 10:33:38 server01 kernel: lockd: cannot monitor 172.22.84.87
And this one also:
Aug 8 10:33:38 server01 rpc.statd[7430]: Failed to insert: writing /var/lib/nfs/statd/sm/other.server.name.com: No space left on device
Aug 8 10:33:38 server01 rpc.statd[7430]: STAT_FAIL to server01 for SM_MON of 172.22.84.87
After the space situation was handled everything got back to normal.
If you're trying to unlock the Chrome database to view it with SQLite, then just shut down Chrome.
Windows
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Web Data
or
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Chrome Web Data
Mac
~/Library/Application Support/Google/Chrome/Default/Web Data
From your previous comments you said a -journal file was present.
This could mean that you have opened an (EXCLUSIVE?) transaction and have not yet committed the data. Did your program or some other process leave the -journal behind?
Restarting the sqlite process will look at the journal file and clean up any uncommitted actions and remove the -journal file.
As Seun Osewa has said, sometimes a zombie process will sit in the terminal with a lock acquired, even if you don't think it possible. Your script runs, crashes, and you go back to the prompt, but there's a zombie process spawned somewhere by a library call, and that process has the lock.
Closing the terminal you were in (on OSX) might work. Rebooting will work. You could look for "python" processes (for example) that are not doing anything, and kill them.

How to configure logrotate with php logs

I'm running php5 FPM with APC as an opcode and application cache. As is usual, I am logging php errors into a file.
Since that is becoming quite large, I tried to configure logrotate. It works, but after rotation, php continues to log to the existing logfile, even when it is renamed. This results in scripts.log being a 0B file, and scripts.log.1 continuing to grow further.
I think (haven't tried) that running php5-fpm reload in postrotate could resolve this, but that would clear my APC cache each time.
Does anybody know how to get this working properly?
I found that "copytruncate" option to logrotate ensures that the inode doesn't change. Basically what is [sic!] was looking for.
This is probably what you're looking for. Taken from: How does logrotate work? - Linuxquestions.org.
As written in my comment, you need to prevent PHP from writing into the same (renamed) file. Copying a file normally creates a new one, and truncating is part of that option's name as well, so I would assume the copytruncate option is an easy solution (from the manpage):
copytruncate
Truncate the original log file in place after creating a copy,
instead of moving the old log file and optionally creating a new
one. It can be used when some program cannot be told to close
its logfile and thus might continue writing (appending) to the
previous log file forever. Note that there is a very small time
slice between copying the file and truncating it, so some logging
data might be lost. When this option is used, the create option
will have no effect, as the old log file stays in place.
See Also:
Why we should use create and copytruncate together?
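For reference, a copytruncate-based stanza might look like the following sketch (the log path is a placeholder; point it at whatever file your php error_log directive writes to):
/var/log/php_errors.log {
    rotate 12
    weekly
    missingok
    notifempty
    compress
    delaycompress
    copytruncate
}
Because the file is copied and truncated in place, PHP keeps writing to the same inode, so no php5-fpm reload (and therefore no APC cache flush) is needed.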
Another solution I found on a server of mine is to tell PHP to reopen the logs. I think nginx has this feature too, which makes me think it must be quite commonplace. Here is my configuration:
/var/log/php5-fpm.log {
    rotate 12
    weekly
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        invoke-rc.d php5-fpm reopen-logs > /dev/null
    endscript
}

Using PHP APC cache in CLI mode using dumpfiles

I've recently started using APC cache on our servers. One of the most important parts of our product is a CLI (Cron/scheduled) process, whose performance is critical. Typically the batchjob consists of running some 16-32 processes in parallel for about an hour (they "restart" every few minutes).
By default, using APC cache in CLI is a waste of time due to the opcode cache not being retained between individual calls. But APC also contains apc_bin_dumpfile() and apc_load_dumpfile() functions.
I was thinking these two functions might be used to make APC efficient in CLI mode by having it all compiled sometime outside the batchjob, stored in a single dumpfile, and having the individual processes load the dumpfile.
Does anybody have any experience with such a scenario, or can you give good reasons why it will or will not work? Could any significant gains reasonably be had, either in memory use or performance? What pitfalls are lurking in the shadows?
Disclaimer: As awesome as APC is when it works in CLI (and it is awesome), it can be equally frustrating. Use it with a healthy load of patience, be thorough, and step away from the problem if you're spinning. Keep in mind you are working with a cache; that is why it seems like it's doing nothing, because it is actually doing nothing. Delete the dump file and start with just the basics; if that doesn't work, forget it and try a new machine or a new OS. If it is working, make a copy; piece by piece, expand functionality (there are loads of things that won't work); if it is still working, commit or make a copy, add another piece and test again, and for a sanity check recheck the copies that were working before. Clichés or not: if at first you don't succeed, try, try again; you can't keep doing the same thing expecting new results.
Ready? This is what you've been waiting for:
Enable apc for cli
apc.enable_cli=1
it is not ideal to create, populate and destroy the APC cache on every CLI request
- previous answer by unknown poster since removed.
You're absolutely right, that sucks; let's fix it, shall we?
If you try and use APC under CLI and it is not enabled you will get warnings.
something like:
PHP Warning: apc_bin_loadfile(): APC is not enabled,
apc_bin_loadfile not available.
PHP Warning: apc_bin_dumpfile(): APC is not enabled,
apc_bin_dumpfile not available.
Warning: I suggest you don't enable cli in php.ini; it is not worth the frustration. You are going to forget you did it and have numerous other headaches with other scripts; trust me, it's not worth it. Use a launcher script instead (see below).
apc_bin_loadfile and apc_bin_dumpfile in cli
As per the comment by mightye, we need to disable apc.stat or you will get warnings
something like:
PHP Warning: apc_bin_dumpfile(): Excluding some files from apc_bin_dump[file].
Cached files must be included using full path with apc.stat=0.
launcher script - php-apc.sh
We will use this script to launch our apc enabled scripts (ex. ./php-apc.sh apc-cli.php) instead of changing the properties in php.ini directly.
#!/bin/sh
php -d apc.enable_cli=1 -d apc.stat=0 "$1"
Ready for the basic functionality? Sure you are =)
basic APC persisted - apc-cli.php
<?php
/** check if dump file exists, you don't want to use file_exists */
if (false !== $dump_file = stream_resolve_include_path('apc.dump'))
/** so where were we lets have a look see shall we */
if (false !== apc_bin_loadfile($dump_file))
/** fetch what was stored last run just for fun */
if (false !== $value = apc_fetch('my.awesome.apc.store'))
echo "$value from apc\n";
/** store what gets fetched the next run just for fun */
apc_store('my.awesome.apc.store', 'awesome in cli');
/** what a shlep lets not do that all over again shall we */
apc_bin_dumpfile(array(),null,'apc.dump');
Notice: Why not use file_exists? Because file_exists == stat, you see, and we want to reap the reward that is apc.stat=0. So: work within the include path; use absolute and not relative paths, as returned by stream_resolve_include_path(); avoid include_once and require_once, use the non-*_once counterparts; and check your stat usage when not using APC (muchos important, senor) with the help of a StreamWrapper that echoes calls to the url_stat method. Oops: Fatal scope over-run error! aborting notice thread. see url_stat
message: Error caused by StreamWrapper outside the scope of this discussion.
The smoke test
Using the launcher execute the basic script
./php-apc.sh apc-cli.php
A whole bunch of nothing happened; that's what we want, right? Why else do you want to use cache? If it did output anything, then it didn't work, sorry.
There should be a dump file called apc.dump; see if you can find it. If you can't find it, then it didn't work, sorry.
Good, we have the dump file and there were no errors; let's run it again.
./php-apc.sh apc-cli.php
What you want to see:
awesome in cli from apc
Success! =)
There are few things in PHP as satisfying as a working APC implementation.
nJoy!
I would definitely not use it in the CLI as when you restart it, it's almost as if it was never running in the first place!
The better way of using APC is to have it running on the webserver itself all the time; that way, with it being active, it will actually do what it's supposed to do!
I tried with curl and APC. It works.
Use this command in the CLI:
curl --data "param1=value2" http://testsite.com/test.php
It will post data to test.php, and you write the code in it.

How to view PHP or Apache error log online in a browser?

Is there a way to view the PHP error logs or Apache error logs in a web browser?
I find it inconvenient to ssh into multiple servers and run a "tail" command to follow the error logs. Is there some tool (preferably open source) that shows me the error logs online (streaming or non-streaming)?
Thanks
A simple piece of PHP code to read the log and print it:
<?php
exec('tail /var/log/apache2/error.log', $error_logs);
foreach($error_logs as $error_log) {
echo "<br />".$error_log;
}
?>
You can embed the $error_log PHP variable in HTML as per your requirement. The best part is that the tail command loads only the latest errors, which won't put too much load on your server.
You can change the tail arguments to get the output you want:
Ex. tail myfile.txt -n 100 // it will give the last 100 lines
See What commercial and open source competitors are there to Splunk? and I would recommend https://github.com/tobi/clarity
Simple and easy tool.
Since everyone is suggesting clarity, I would also like to mention tailon. I wrote tailon as a more modern and secure alternative to clarity. It's still in its early stages of development, but the functionality you need is there. You may also use wtee, if you're only interested in following a single log file.
You could make a script that reads the error logs from apache2:
$apache_errorlog = file_get_contents('/var/log/apache2/error.log');
If it's not working, try to get it with the PHP functions exec or shell_exec and the command 'cat /var/log/apache2/error.log'.
EDIT: If you have multiple servers (I guess with webservers on them), you can create a file on each machine; when you make a request to that script (over a hashed connection), you get the logs from that server.
I recommend LogHappens: https://loghappens.com, it allows you to view the error log in web, and this is what it looks like:
LogHappens supports various web server log formats; it comes with parsers for Apache and CakePHP, and you can write your own.
You can find it here: https://github.com/qijianjun/logHappens
It's open source and free. I forked it and did some work to make it work better in a dev env or in a public env. That is:
Support a token for security: one can't access the site without the token in config.php
Support IP whitelists for security and privacy
Support configuring the interval between AJAX requests
Support loading static files from local (for a local dev env)
I've found this solution https://code.google.com/p/php-tail/
It's working perfectly. I only needed to change the filesize, because I was getting an error first.
if($maxLength > $this->maxSizeToLoad) {
    $maxLength = $this->maxSizeToLoad;
    // return json_encode(array("size" => $fsize, "data" => array("ERROR: PHPTail attempted to load more (".round(($maxLength / 1048576), 2)."MB) then the maximum size (".round(($this->maxSizeToLoad / 1048576), 2) ."MB) of bytes into memory. You should lower the defaultUpdateTime to prevent this from happening. ")));
}
And I've added a default size, but it's not needed:
lastSize = <?php echo filesize($this->log) ?: 1000; ?>;
I know this question is a bit old, but (along with the lack of good choices) it gave me the idea to create this tiny (open source) web app. https://github.com/ToX82/logHappens. It can be used online, but I'd use an .htpasswd as a basic login system. I hope it helps.
