How does Zend_Log prevent race conditions while writing a log message? - php

I have looked a little into Zend_Log, the logging module of Zend Framework, but I didn't see it use flock() to prevent a race condition when multiple PHP scripts write to the same file.
As far as I know, a web application based on Zend Framework treats each request as an individual PHP process, so state can't be shared between those processes; keeping the file writes synchronized therefore seems necessary.
Does anyone know the reason why?

Let me answer my own question. After checking some documentation: on UNIX-like systems, a file opened with O_APPEND is thread- and process-safe to write to. Windows is an exception; perhaps PHP adds some wrapper in its Windows interpreter.
If the open file is a PIPE or FIFO, it behaves differently from a regular file: if the write exceeds PIPE_BUF bytes, it is not atomic.
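The behaviour described above can be sketched in PHP; this is my own illustration, not Zend_Log's actual writer:

```php
<?php
// Sketch of an append-only log write (my own example, not Zend_Log code).
// FILE_APPEND opens the target with O_APPEND, so on UNIX-like systems each
// single write lands atomically at the end of the file even when several
// PHP processes log concurrently. LOCK_EX adds an advisory flock() as a
// safety net for platforms (e.g. Windows) where O_APPEND alone is not enough.
function appendLog(string $file, string $message): void
{
    file_put_contents($file, $message . PHP_EOL, FILE_APPEND | LOCK_EX);
}

$log = sys_get_temp_dir() . '/zend_log_demo.log';
@unlink($log);                      // start from a clean file
appendLog($log, 'first message');
appendLog($log, 'second message');
```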
Understanding concurrent file writes from multiple processes
O_APPEND document

Related

Do many files in a single directory cause longer loading time under Apache?

Even if a few duplicate questions seem to exist, I think this one is unique. I'm not asking whether there are any limits, only about performance drawbacks in the context of Apache, or UNIX file systems in general.
Let's say I request a file from an Apache server:
http://example.com/media/example.jpg
Does it matter how many files there are in the same directory "media"?
The reason I'm asking is that my PHP application generates images on the fly.
Once created, it places the image at the same location the PHP script would otherwise be triggered for by mod_rewrite. If the file exists, Apache skips the whole PHP execution and serves the static image directly. A kind of gateway cache, if you want to call it that.
Apache has basically two things to do:
Check if the file exists
Serve the file or forward the request to PHP
So far I have about 25,000 files totalling about 8 GB in this single directory. I expect it to grow at least tenfold over the next few years.
While I don't face any issues managing these files, I have a slight feeling that requesting them via HTTP keeps getting slower. So I wondered whether that is really what happens, or whether it's just my subjective impression.
Most file systems based on the Berkeley FFS degrade in performance with large numbers of files in one directory because of the multiple levels of indirection.
I don't know about other file systems like HFS or NTFS, but I suspect they may well suffer from the same issue.
I once had to deal with a similar issue and ended up using a map for the files.
I think it was something like md5 of myfilename-00001, yielding (for example) e5948ba174d28e80886a48336dcdf4a4, which I then stored as a file named e5/94/8ba174d28e80886a48336dcdf4a4. A map file then mapped 'myfilename-00001' to 'e5/94/8ba174d28e80886a48336dcdf4a4'. This not-quite-elegant solution worked for my purposes, and it only took a little bit of code.
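A minimal sketch of that mapping (the function name is mine): hash the logical name, then split the hex digest into directory levels so no single directory accumulates too many entries.

```php
<?php
// Turn a logical file name into a sharded path like e5/94/8ba1...,
// keeping each directory small. The two-level split mirrors the scheme
// described above.
function shardedPath(string $name): string
{
    $hash = md5($name);
    return substr($hash, 0, 2) . '/' . substr($hash, 2, 2) . '/' . substr($hash, 4);
}
```

A map file (or database table) then records each name-to-sharded-path pair for reverse lookups.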

What's an easy way to detect a file has changed and update it automatically?

What's an easy way to detect a file has changed and update it automatically?
For example if I have a js/css file that I just uploaded, I'd like the server to detect I uploaded new js/css files and minify them automatically right then and there.
EDIT: I've tried minifying at run time and found it isn't efficient: the file was minified on every request, which was an overhead in itself, and it was actually faster to deliver the unminified file.
Ideally, the file should be minified within a few seconds of upload. Instead of a constantly polling system, is there an event-based system out there that I could look into?
EDIT: I used mikhailov's answer and added the following to the incron file:
/var/www/laravel/public/js/main.js IN_MODIFY yui-compressor -o /var/www/laravel/public/js/main.min.js /var/www/laravel/public/js/main.js
Inotify is a recommended pattern for getting notified of file system events (file created, modified or deleted). Wikipedia says:
Inotify (inode notify) is a Linux kernel subsystem that acts to extend filesystems to notice changes to the filesystem, and report those changes to applications.
See a similar use case: how to get notified of files being copied over rsync.
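For completeness, the same event-driven idea can be used from PHP directly via the PECL inotify extension (an assumption: ext-inotify must be installed; the paths and the yui-compressor command come from the question above):

```php
<?php
// Sketch of an inotify-based watcher (requires the PECL inotify extension).
function minifyCommand(string $src, string $dst): string
{
    return sprintf('yui-compressor -o %s %s', escapeshellarg($dst), escapeshellarg($src));
}

function watchAndMinify(string $src, string $dst): void
{
    $fd = inotify_init();
    inotify_add_watch($fd, $src, IN_MODIFY);   // fire on every modification
    while ($events = inotify_read($fd)) {      // blocks until an event arrives
        shell_exec(minifyCommand($src, $dst));
    }
}

// Usage (a long-running process, e.g. under supervisord):
// watchAndMinify('/var/www/laravel/public/js/main.js',
//                '/var/www/laravel/public/js/main.min.js');
```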

Uploading new version of a core file on running website without maintenance mode

Let's say I have a website running on PHP using the Kernel pattern, and 1,000 requests per second access the Kernel.php file. I want to upload a new version of that file without turning on maintenance mode. Is it safe to do that? Can I just upload the new file, so that at some point requests will be handled by the new one?
Kernel.php is error free for sure
the file is included by require_once() in index.php
forget about maintenance mode in this case, please
I was told to add some information about why I even considered this approach.
We are trying to develop a system that can update any part of a webpage driven by our engine. The Kernel is just an example: if this file can be modified without maintenance mode then, in your opinion, any less important file could be as well.
Sometimes the update is so simple that turning on maintenance mode is like halting a military invasion of a country because one of the privates (soldiers) sneezed.
Since we are talking about blowing things up and inter-process communication: none of us would risk uploading core files to a running website without freezing requests for a few seconds, but what about template files? It's of course a rhetorical question, but now I think you fully understand the situation.
First let me say that this is probably not a very good idea.
Are you running on a Linux server? If so, renaming files is an atomic operation, and the best way to accomplish this is to upload the new file under a different name, then rename it over the old file.
If not, renaming over the old file is probably still a better approach than uploading it in place, since some requests will arrive while the file is being written, and those would cause errors.
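A minimal sketch of that rename-based swap (the function name is mine), assuming the temporary file is created on the same filesystem as the target, since rename(2) is only atomic within one filesystem:

```php
<?php
// Write the new version next to the target, then rename() it into place.
// On Linux the rename is atomic: every require_once('Kernel.php') sees
// either the complete old file or the complete new file, never a torn one.
function atomicReplace(string $target, string $newContents): void
{
    $tmp = $target . '.new.' . getmypid();   // same directory => same filesystem
    file_put_contents($tmp, $newContents);
    rename($tmp, $target);                   // the atomic swap
}
```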
Turn on PHP opcode caching for your web server, and set the interval to 5 minutes or more.
You can now copy files overtop of running PHP code, and the next time the interval expires the server will check for modifications and recompile the opcode. You'll have to wait a few minutes before you notice the change, because the server will continue to use the cached code until it expires.
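If you are on PHP 5.5+ with the bundled OPcache, the interval described above maps to these ini directives (on PHP 5.3 with APC, the `apc.stat` setting plays a similar role):

```ini
; php.ini - recheck file timestamps at most every 300 s (5 min)
opcache.enable=1
opcache.validate_timestamps=1
opcache.revalidate_freq=300
```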
I can't say what would happen if you break dependencies among PHP files, or if the server updates one file but holds different cached copies of the others.
For the most reliable method, use a feature of your web server that lets you hot-swap the document root of a host. Install a complete new copy of all your PHP code into a new directory, then hot-swap the host to the new location. No requests should be interrupted.

Possible to write an Apache file handler in PHP?

I wonder if and how it is possible to write a custom "file handler" (parsing a file and rendering it with bonuses) for Apache 2 in PHP? The files are text files and can be large, so I'm not thinking of loading them entirely into memory, but of processing them line by line.
I'm comfortable with Java and other languages but still rookie in PHP; I chose PHP because it's light and especially deployable on every Apache-capable machine (even small NAS), and, well, I like PHP.
Thank you for your hints.
It's not possible to write an Apache file handler in PHP itself.
However, you can use the rewrite engine to redirect the files you want to handle to a PHP script that then does the job.
The original file name can be obtained from the server variables.
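A sketch of that setup (the file names and rule pattern are my own examples). Apache would rewrite matching requests to a PHP script, e.g. `RewriteRule ^(.+\.log)$ /handler.php [L]`, and the script streams the file line by line so large files never sit in memory whole:

```php
<?php
// handler.php - stream a text file line by line, transforming each line.
// Returns the number of lines rendered.
function renderFile(string $path): int
{
    $count = 0;
    $fh = fopen($path, 'r');
    while (($line = fgets($fh)) !== false) {
        echo htmlspecialchars($line);   // per-line "bonus" processing goes here
        $count++;
    }
    fclose($fh);
    return $count;
}

// In the real handler the requested path would come from the rewrite:
// renderFile($_SERVER['DOCUMENT_ROOT'] . parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));
```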

PHP script: How big is too big?

I'm developing a webapp in PHP, and the core library is 94kb in size at this point. While I think I'm safe for now, how big is too big? Is there a point where the script's size becomes an issue, and if so can this be ameliorated by splitting the script into multiple libraries?
I'm using PHP 5.3 and Ubuntu 10.04 32bit in my server environment, if that makes any difference.
I've googled the issue, and everything I can find pertains to PHP upload size only.
Thanks!
Edit: To clarify, the 94kb file is a single file that contains all my data access and business logic, and a small amount of UI code that I have yet to extract to its own file.
Do you mean you have one file that is 94 KB in size, or that your whole library is 94 KB?
Either way, as long as you aren't piling everything into one file and you organize your library into different files, your file sizes should remain manageable.
If a single PHP file is approaching a few hundred KB, you should ask why it is getting so big and refactor the code so that everything is logically organized.
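One common way to keep those split-out files manageable is an autoloader, so each class file is parsed only when first used; a minimal sketch (the lib/ layout and helper name are my own):

```php
<?php
// Map a class name to a file path under a base directory, then register
// an autoloader so each class file is loaded on first use.
function classToFile(string $base, string $class): string
{
    return $base . '/' . str_replace('\\', '/', $class) . '.php';
}

spl_autoload_register(function (string $class): void {
    $file = classToFile(__DIR__ . '/lib', $class);
    if (is_file($file)) {
        require $file;
    }
});
```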
I've used PHP applications that probably included several megabytes worth of code; the main thing, if you have big programs, is to use a code-caching tool such as APC on your production server. That caches the compiled (byte code) PHP so the server doesn't have to parse every file on every page request, and it will dramatically speed up your code.
