Store the number of views of an image - php

I have a folder on my web server containing lots of PNG images. I would like to store in a database the number of views each image gets. I do not really know how to achieve this. The only approach I can think of is using a .htaccess file to rewrite URLs in that folder to a PHP script that would serve those images and also record the visit in MySQL. But maybe there is a better way than serving all the images through a PHP script. I am looking for the simplest possible way to do this. Also, the current URLs should not change.

There isn't a single "good" way to do this, but I can give you a few ideas.
(Recommended) Make a proxy PHP script which takes the ID (could be the file name) of the picture, increments the counter in the DB, and then redirects to the original image.
This is a very clean way to achieve the expected behaviour. The disadvantage is that it changes the current file names, so you have to use the following scheme:
CURRENT_URL -> (mod_rewrite) -> PHP PROXY SCRIPT -> NEW_URL
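A minimal sketch of such a proxy, assuming a hypothetical image_views table with (filename, views) columns; the DB credentials and paths are illustrative:

<?php
// count.php - increments the view counter, then redirects to the real file.
// Table, columns and credentials are assumptions for illustration.
$file = basename($_GET['f'] ?? '');
if (!preg_match('/^[\w-]+\.png$/', $file)) {
    http_response_code(404);
    exit;
}
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$pdo->prepare('INSERT INTO image_views (filename, views) VALUES (?, 1)
               ON DUPLICATE KEY UPDATE views = views + 1')
    ->execute([$file]);
// Redirect to the NEW_URL, a path the rewrite rule does not cover,
// so Apache serves the bytes itself and no redirect loop occurs.
header('Location: /images-raw/' . $file, true, 302);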
Alternatively, redirect all image URLs to a PHP proxy script which will directly serve the image (set the appropriate headers and output the content of the file).
This is a 100% transparent way to do it, but the obvious disadvantage is that every image is processed by PHP, so on a heavily loaded server this could become a problem.
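A sketch of this transparent variant, with the same hypothetical table; here PHP outputs the file itself so the visible URL never changes:

<?php
// serve.php - counts the view, then streams the file to the client.
$file = basename($_GET['f'] ?? '');
$path = __DIR__ . '/images/' . $file;
if (!preg_match('/^[\w-]+\.png$/', $file) || !is_file($path)) {
    http_response_code(404);
    exit;
}
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$pdo->prepare('UPDATE image_views SET views = views + 1 WHERE filename = ?')
    ->execute([$file]);
header('Content-Type: image/png');
header('Content-Length: ' . filesize($path));
readfile($path);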
If you have access to the web server's logs (access.log for Apache) you can process the file, say, once a day and update the counters in the DB. This works very well when an approximation is good enough and your server is heavily loaded, because you can parse the logs on a different machine.

Apache already keeps a log file of all requests (/var/log/apache2/access.log). This file contains the URL requested, time of request etc.
A possible approach could be:
Create a script which parses the access log and updates the database (see the sketch below)
Configure a cronjob which invokes the script periodically
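A rough sketch of such a parser, assuming Apache's default log format, an /images/ URL prefix, and the same hypothetical image_views table as above; pointing it at the rotated log from a daily cronjob avoids counting the same hits twice:

<?php
// parse_log.php - tallies PNG hits from an access log, then bulk-updates
// the counters. Log path, log format and table are assumptions.
$counts = [];
$log = fopen('/var/log/apache2/access.log.1', 'r'); // yesterday's rotated log
while (($line = fgets($log)) !== false) {
    if (preg_match('#"GET /images/([\w-]+\.png)[ ?]#', $line, $m)) {
        $counts[$m[1]] = ($counts[$m[1]] ?? 0) + 1;
    }
}
fclose($log);
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO image_views (filename, views) VALUES (?, ?)
                       ON DUPLICATE KEY UPDATE views = views + VALUES(views)');
foreach ($counts as $file => $n) {
    $stmt->execute([$file, $n]);
}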


Dynamic picture loading with PHP and proxy cache?

I am a developer of Wave Framework, a lightweight framework that includes a number of features that make it easier to deploy APIs and serve resources dynamically.
One of those features is on-demand image editing. For example, on my server I have this file:
http://www.waher.net/w/resources/images/logo.png
But in my HTML, I load my image from a URL like this:
http://www.waher.net/w/resources/images/160x160&logo.png
This '160x160&logo.png' file does not actually exist; the only file that exists is 'logo.png'. Every HTTP request is routed to PHP, and parameters in the file URL are parsed in order to apply additional functionality, such as picture resolution.
Why is this useful? If my system has a large number of user avatars and my design changes, I can simply change the avatar picture URLs and everything works as expected. I never have to regenerate all the avatars of all my users, especially those of users that no longer exist in my system and would just waste resources.
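This is not Wave Framework's actual code, but the mechanism described above can be sketched roughly like this, with a catch-all rewrite sending every request to one router script; the paths and the GD-based resize are illustrative:

<?php
// router.php - every request is rewritten here; a filename such as
// "160x160&logo.png" is split into a target size and a real file.
$request = basename($_SERVER['REQUEST_URI']);
if (preg_match('/^(\d+)x(\d+)&(.+\.png)$/', $request, $m)) {
    $w = (int)$m[1];
    $h = (int)$m[2];
    $src = imagecreatefrompng(__DIR__ . '/resources/images/' . basename($m[3]));
    $dst = imagecreatetruecolor($w, $h);
    imagecopyresampled($dst, $src, 0, 0, 0, 0, $w, $h, imagesx($src), imagesy($src));
    header('Content-Type: image/png');
    imagepng($dst); // a real implementation would also cache the result
}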
But here's my problem: if I want to use Nginx to serve static files on my server, my system does not work. This is because Nginx will attempt to load the static files itself and returns a 404 Not Found if the picture is not found. I assume that the same is true of Apache and Squid.
One of my clients specifically requested that they wish to serve images and resources through Nginx instead, but they would still like the dynamic images for the ease of development and design.
Is it possible to tell Nginx or Squid to send the request to PHP if the image file itself is not found, get the 'dynamic' image from PHP, and then send it to the user through Nginx? And at the same time always serve it from the Nginx cache on any subsequent request for the same file?
I want to have the flexibility of dynamically loaded image files, but also have the speed of Nginx when serving image files. Is something like this possible? Do I need to set specific file headers in PHP that allow for this? (I already set cache and expire headers).
Thank you!

PHP File Upload in Sharded Server Configuration

We use multiple servers to handle incoming web requests which are load-balanced in a round-robin fashion. I've run into an issue that I'm not sure how to solve.
Using AJAX (qqFileUploader), I am uploading a file. By default it goes into the /tmp folder, which is fine. The problem is that when I try to retrieve that file, the retrieval request gets handled by the next server in line, which does not have the file I uploaded. If I keep repeating the request over and over, it will eventually reach the original server where the file was stored (via round-robin load balancing), and then I can open it. Obviously that is not a good solution.
Here is essentially the code: http://jsfiddle.net/Ap27Z/. I removed some of it for brevity. You will see that the uploader object makes a call to a PHP file to do the file upload, and after the upload is complete, another AJAX call is made to a script that processes the .csv file. This is where the process gets lost in the round robin.
I read a few questions here on SO relating to uploading files to memory and it seems that it is essentially not currently feasible. Is there another option I can use to upload a file and handle it all within the same request?
The classic solution to this type of problem is to use sticky sessions on your load balancer. That may not be a good solution for you, as it would mean modifying your whole setup to fix a small problem.
I would suggest allocating each machine its own subdomain, e.g. the upload goes to www.example.com, and each server also answers on an additional subdomain (www1.example.com, www2.example.com) which always points directly to that server rather than through the round-robin DNS.
As part of the success result, you could pass back the name of the specific server that handled the upload, rather than the load-balanced name, and then all subsequent Ajax calls that reference the uploaded data use that server-specific domain name rather than the generic load-balanced one.
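A sketch of what the upload handler might send back under that scheme; the hostname-to-subdomain mapping is an assumption:

<?php
// upload.php - store the file locally, then tell the client which
// server handled it so follow-up Ajax calls can bypass the balancer.
$name = basename($_FILES['file']['name']);
move_uploaded_file($_FILES['file']['tmp_name'], '/tmp/' . $name);
header('Content-Type: application/json');
echo json_encode([
    'success' => true,
    'file'    => $name,
    // e.g. "www2", assuming each machine's hostname matches its subdomain
    'server'  => gethostname() . '.example.com',
]);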
"Is there another option I can use to upload a file and handle it all within the same request?"
Sure, why not? The code that handles the POSTing of the data can do whatever you want it to do.
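For instance, a single handler could accept the upload and process the CSV before responding, so no second request ever has to locate the file; field names here are illustrative:

<?php
// upload_and_process.php - upload and CSV processing in one request,
// so round-robin routing never matters.
$rows = 0;
if (($fh = fopen($_FILES['file']['tmp_name'], 'r')) !== false) {
    while (($row = fgetcsv($fh)) !== false) {
        // ...handle each CSV row here (validate, insert into DB, etc.)...
        $rows++;
    }
    fclose($fh);
}
header('Content-Type: application/json');
echo json_encode(['success' => true, 'rows' => $rows]);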
There are (at least) 2 solutions to your problem:
You change the load-balancing.
There are several load-balancing proxies out there which support session affinity, a.k.a. "sticky sessions". That means a user always gets the same server within a session.
Two programs that can act in this way are HAProxy (related question here on SO) and nginx with a custom module (tutorial here).
You change the files' location.
The other choice would be to change the location of your stored files to some place that all of your servers can access via the same location. This could be, for example, an NFS mount or a database (with the files stored as BLOBs). This way, it doesn't matter which server processes the request, as all of them have access to the file.
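With shared storage in place the upload handler barely changes. A minimal sketch, assuming an NFS mount at /mnt/shared that every server sees:

<?php
// upload.php - move the upload onto the shared mount instead of the
// local /tmp, so any server can serve the follow-up request.
$target = '/mnt/shared/uploads/' . basename($_FILES['file']['name']);
if (move_uploaded_file($_FILES['file']['tmp_name'], $target)) {
    echo json_encode(['success' => true, 'path' => $target]);
} else {
    http_response_code(500);
    echo json_encode(['success' => false]);
}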

Routing .htaccess to GitHub

I was wondering if there was a way to basically host a site on your server so you can run PHP, but have the actual code hosted on GitHub. In other words...
If a HTTP request went to:
http://mysite.com/docs.html
It'd request and pull in the content (via file_get_contents() or something):
https://raw.github.com/OscarGodson/Core.js/master/docs.html
Or, if they went to:
http://mysite.com/somedir/another/core.js
It'd pull down:
https://raw.github.com/OscarGodson/Core.js/master/somedir/another/core.js
I know GitHub has their own DNS servers, but I'd rather host it on mine so I can run server-side code. What would the .htaccess code look like for this?
This is beyond the capabilities of .htaccess files, if the requirement is to run the PHP embedded in the HTML stored on github.com at the server on yourserver.com simply by a configuration line like a redirect in the .htaccess file.
A .htaccess file is typically used to provide directives to the Apache web server. These directives can indicate, for example, access permissions, popup password protection, linkages between URLs and the server's file system, handlers for certain types of files when fetched by the server before delivery to the browser, and redirects from one URL to another URL.
An .htaccess file can issue redirects for http://mysite.com/somedir/another/core.js to https://raw.github.com..., but then the browser will be pointed to raw.github.com, not mysite.com. Tricks can be done with frames to make this redirection less transparent to the human at the browser, but these don't change the fact that the data comes from github.com without ever going through the server at mysite.com.
In particular, PHP tags embedded in the HTML on github.com are never received by mysite.com's server and therefore will not run. Probably not what you want. Unless some big changes have occurred in Apache, .htaccess files will not set up that workflow. It might be possible for an expert to write an Apache module to do it, but I am not sure.
What you can do is put a cron job on mysite.com that runs git pull from github.com every few minutes. Perhaps that is what you want to do instead?
If the server can run PHP code, you can do this.
Basically, in the .htaccess file you use a RewriteRule to send all paths to a PHP script on your server. For example, a request for /somedir/anotherdir/core.js becomes /my-script.php/somedir/anotherdir/core.js. This is how a lot of app frameworks operate. When my-script.php runs, the "real" path is in the PATH_INFO variable.
From that point the script could then fetch the file from GitHub. If it was HTML or JavaScript or an image, it could just pass it along to the client. (To do things properly, though, you'll want to pass along all the right headers, too, like ETag and Last-Modified and then also check those files, so that caching works properly and you don't spend a lot of time transferring files that don't need to be transferred again and again. Otherwise your site will be really slow.)
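A bare-bones sketch of such a script, assuming a rule like RewriteRule ^(.*)$ my-script.php/$1 [L] in the .htaccess; it skips the caching headers discussed above, which a real version would need:

<?php
// my-script.php - fetch the requested path from GitHub and pass it through.
// The repo URL follows the question; no caching or ETag handling here.
$path = ltrim($_SERVER['PATH_INFO'] ?? '', '/');
$url  = 'https://raw.github.com/OscarGodson/Core.js/master/' . $path;

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
]);
$body = curl_exec($ch);
$code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
$type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);

http_response_code($code);
if ($type) {
    header('Content-Type: ' . $type);
}
echo $body;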
If the file is a PHP file, you could download it locally, then include it into the script in order to execute it. In this case, though, you need to make sure that every PHP file is self-contained, because you don't know which files have been fetched from GitHub yet, so if one file includes another you need to make sure the files dependent on the first file are downloaded, too. And the files dependent on those files, also.
So, in short, the .htaccess part of this is really simple, it's just a single RewriteRule. The complexity is in the PHP script that fetches files from GitHub. And if you just do the simplest thing possible, your site might not work, or it will work but really painfully slowly. And if you do a ton of genius level work on that script, you could make it run OK.
Now, what is the goal here? To save yourself the trouble of logging into the server and typing git pull to update the server files? I hope I've convinced you that trying to fetch files on demand from GitHub will be even more trouble than that.

How can I upload an image from source URL to some destination URL?

Folks
I have an image at some server (SOURCE)
i.e. http://stagging-school-images.s3.amazonaws.com/2274928daf974332ed4e69fddc7a342e.jpg
Now I want to upload it to somewhere else (DESTINATION)
i.e. example.mysite.com/receiveImage.php
Currently, I first copy the image from the source to my local server and then upload it to the destination.
It works perfectly, but takes too much time since it copies the image and then uploads it...
I want to make it simpler and more optimized by directly uploading the image from the source URL to the destination URL.
Is there a way to handle this?
I am using php/cURL to handle my current functionality.
Any help would be very much appreciated.
Cheers !!
If example.mysite.com/receiveImage.php is your own service, then you may
pass the SOURCE URL to your PHP script as a GET or POST parameter
in the PHP script, use the file_get_contents() function to obtain the image by URL, and save it to your storage (a sketch follows below)
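A minimal sketch of that receiving script; the 'src' parameter name and the storage path are illustrative:

<?php
// receiveImage.php - fetch the image from the given source URL and
// store it locally.
$url = $_GET['src'] ?? '';
if (!filter_var($url, FILTER_VALIDATE_URL)) {
    http_response_code(400);
    exit('Invalid URL');
}
$data = file_get_contents($url);
if ($data === false) {
    http_response_code(502);
    exit('Could not fetch image');
}
file_put_contents(__DIR__ . '/uploads/' . basename(parse_url($url, PHP_URL_PATH)), $data);
echo 'OK';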
If it is not your own service, this is impossible by means of HTTP.
However, there are some ways to increase the upload speed a little:
If files are huge, you may use two threads: one for downloading (which stores downloaded data in a buffer) and one for uploading (which takes available data from the buffer and uploads it to the site). As far as I know, this can't be done easily with PHP, because multi-threading is not supported.
If there are many files, you may use multiple threads/processes which download and upload simultaneously.
By the way, these measures do not eliminate the double traffic through your intermediate service.
One of the services may have a form somewhere that will allow you to specify a URL to receive from/send to, but there is no generic HTTP mechanism for doing so.
The only way to transfer the image directly from source to destination is to initiate the transfer from either the source or the destination. You can't magically beam the data between these two locations without them talking to each other directly. If you can SSH into your mysite.com server, you could download the image directly from there. You could also write a script that runs on mysite.com and directly downloads the image from the source.
If that's not possible, the best alternative may be to play around with fread/fwrite instead of cURL. This should allow you to read a little bit from the source and directly upload that bit to the destination, so download and upload can work in parallel. For huge files this should make a real difference; for small files on a decent connection it probably won't.
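Here is a hedged sketch of that idea using cURL's read callback, which pulls chunks from the source stream on demand so the upload runs while the download is still in progress; it assumes the destination accepts a raw POST body:

<?php
// Stream from source to destination without buffering the whole file.
$src = fopen('http://stagging-school-images.s3.amazonaws.com/2274928daf974332ed4e69fddc7a342e.jpg', 'rb');

$ch = curl_init('http://example.mysite.com/receiveImage.php');
curl_setopt_array($ch, [
    CURLOPT_UPLOAD         => true,            // send a streamed request body
    CURLOPT_CUSTOMREQUEST  => 'POST',          // ...as a POST, not a PUT
    CURLOPT_READFUNCTION   => function ($ch, $fd, $len) use ($src) {
        return (string) fread($src, $len);     // next chunk; '' signals EOF
    },
    CURLOPT_RETURNTRANSFER => true,
]);
curl_exec($ch);
curl_close($ch);
fclose($src);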
Create two text fields, one for the URL and the other for the filename.
In PHP, use:
copy($url, $uploadDir.'/'.$fileName);
where $uploadDir is the path to your file directory ;)

How to create virtual directories in PHP?

I want to create an application in PHP implementing a virtual directory feature.
Example: http://mydomain.com/user001 will display the contents of the URL http://mydomain.com/index.php?user=user001. How can I do that?
Note:
I am using Apache server.
The traditional way to do it is mod_rewrite.
Please read this friendly article regarding mod_rewrite.
Next, find a simple way in PHP to parse the $_SERVER['REQUEST_URI'] variable.
After doing that, you have the name of the directory and you can get its data from the DB.
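For example, with a catch-all rewrite to index.php in place, something like this recovers the virtual directory name (the user lookup is illustrative):

<?php
// index.php - pull the virtual directory name out of the request path.
$path = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
if (preg_match('/^user\d+$/', $path)) {
    // e.g. http://mydomain.com/user001 gives $path = "user001"
    $user = $path;
    // ...fetch this user's data from the DB and render the page...
} else {
    http_response_code(404);
}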
Intercept the HTTP request using the 'REQUEST_URI' element of $_SERVER. This returns (I believe) only the requested page, not the entire URI/URL - more info here. Once you've grabbed the page request, substitute the address of the actual file that's needed. For example, the user-friendly www.somewebsite.com/page01 becomes a request for the clunkier www.somewebsite.com?page01.php. This method won't create a virtual directory as such, but should work okay. I have used a similar method on my own IT website, where each page is loaded via index.php, allowing that file to keep a log of visitors in real time (the site has Webalizer, which runs a day or so in arrears).
Rewriting the filename might work, although it's not to my personal taste. Using PHP to effect a URI/URL-swap would likely carry the benefit of reduced server demand, due to requiring less disk read/write time than filename rewrites.
I hope that helps.
