I have an image that I send to an affiliate for advertising.
So, how can I find out from my server the number of times that image has been downloaded?
Does the server log keep track of an image's download count?
---- Addition ----
Thanks for the reply. A few more questions:
Because I want to do ad rotation, track IP addresses, etc., I think I should do it by making a dynamic page (PHP) that returns the proper image, right?
In this case, is there any way I can send that information to Google Analytics from the server? I know I can do it in JavaScript, but since the PHP script should just return the image file, what should I do? :)
This can be done irrespective of your web server or language/platform.
Assuming the file is physically stored in a certain directory:
Write a program that somehow gets to know which file has to be downloaded, e.g. through GET/POST parameters; there can be even more ways.
Then point to that particular file physically:
fopen that file
read through it byte by byte
print the bytes
fclose
store/increment/update the download counter in a database/flat file
In the database you may keep the record as md5checksum -> downloadCounter.
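A minimal PHP sketch of those steps; the files/ directory, the SQLite database, and the table/column names are illustrative, not from the post:

<?php
// Minimal sketch: serve a file and count downloads keyed by md5 checksum.
$name = basename($_GET['file'] ?? '');      // sanitize: strip any path parts
$path = __DIR__ . '/files/' . $name;

if ($name === '' || !is_file($path)) {
    http_response_code(404);
    exit('File not found');
}

// Store/increment/update the download counter.
$db = new PDO('sqlite:' . __DIR__ . '/counters.db');
$db->exec('CREATE TABLE IF NOT EXISTS downloads
           (md5_checksum TEXT PRIMARY KEY, download_counter INTEGER)');
$md5 = md5_file($path);
$db->prepare('INSERT OR IGNORE INTO downloads VALUES (?, 0)')->execute([$md5]);
$db->prepare('UPDATE downloads SET download_counter = download_counter + 1
              WHERE md5_checksum = ?')->execute([$md5]);

// Read through the file and print it (readfile does the byte loop for us).
header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));
readfile($path);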
It depends on the server and how you serve the image.
1) Static image (i.e. the URL points to an actual image file): most servers (e.g. Apache) record every URL served (including the GET request for the image) in an access log. There is a host of solutions for slicing and dicing web server access logs (especially Apache's) and obtaining all sorts of statistics, including access counts.
2) Another approach, for fancier stuff, is to serve the image by linking to a dynamic page which does some sort of computation (from a simple counter increment to some fancy statistics collection) and responds with an HTTP redirect to the real image.
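A minimal sketch of approach 2, assuming a hits.log flat file and an id query parameter (both illustrative):

<?php
// Count the hit in a flat file, then redirect to the real image.
$id = basename($_GET['id'] ?? 'unknown');
file_put_contents(__DIR__ . '/hits.log', $id . ' ' . date('c') . "\n", FILE_APPEND);
header('Location: /images/' . $id . '.png', true, 302);
exit;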
Use Galvanize, a PHP class for Google Analytics that allows you to make a trackPageView call (for a virtual page representing your download, like the file's URL) from PHP.
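A hedged sketch inferred from that description; Galvanize's actual constructor and method signatures may differ, so check its own documentation:

<?php
// Assumed usage, based only on the answer's description of Galvanize.
require 'galvanize.php';
$ga = new Galvanize('UA-XXXXXXX-1');        // your GA property ID
$ga->trackPageView('/downloads/file.mp3');  // virtual page for the download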
The HTTP log should have a GET entry for every time that image was accessed.
You should be able to configure your server to log each download. Then, you can just count the number of times the image appears in the log file.
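For example, a quick PHP sketch that counts requests for one image in an Apache access log (the log path and image path are illustrative):

<?php
// Count GET requests for one image in the access log.
$count = 0;
$log = fopen('/var/log/apache2/access.log', 'r');
while (($line = fgets($log)) !== false) {
    if (strpos($line, 'GET /images/banner.png') !== false) {
        $count++;
    }
}
fclose($log);
echo "banner.png was requested $count times\n";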
For example, if a user is accessing an image file on my server, is it possible to execute another PHP file at the same time?
I want to record the IP, HTTP_USER_AGENT and HTTP_ACCEPT_LANGUAGE of the users who have accessed this image file. The difficult point is how to execute (or, say, "trigger") that PHP recording file at the moment the user accesses the image. I know I can read the content of the image into PHP first, then output it as an image and record the user's information meanwhile, but this method may use a great deal of memory when the image file is very large. For this reason, is it possible to let the user access the image file directly but still trigger another PHP file at the same time?
If you just want to log $remote_addr, $http_user_agent and $http_accept_language, you don't really have to use PHP at all: the whole procedure can be accomplished with a custom log format more elaborate than the combined one, in both nginx and Apache.
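For instance, a sketch of such a custom log format in nginx (the format name and paths are illustrative; log_format belongs in the http context):

log_format imagelog '$remote_addr "$http_user_agent" "$http_accept_language"';

server {
    location /images/ {
        access_log /var/log/nginx/images.log imagelog;
    }
}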
Additionally, you could utilise the functionality of ngx_http_addition_module to perform a subrequest before or after each request, where you would do your logging and supply nothing in return (so as not to actually corrupt the images).
Alternatively, you could have all requests go to PHP and then use X-Accel-Redirect with the value of the request, having nginx serve the image without further involvement of your PHP script, hence alleviating your memory-use concerns.
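A sketch of that X-Accel-Redirect approach; the /protected/ prefix must be declared internal in the nginx config, and all names here are illustrative:

<?php
// Log the request, then hand the actual file transfer back to nginx.
$file = basename($_GET['file'] ?? '');
error_log(sprintf('%s "%s" "%s" %s',
    $_SERVER['REMOTE_ADDR'],
    $_SERVER['HTTP_USER_AGENT'] ?? '-',
    $_SERVER['HTTP_ACCEPT_LANGUAGE'] ?? '-',
    $file));
header('X-Accel-Redirect: /protected/' . $file);
exit;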
Reference Docs:
http://nginx.org/r/access_log
http://nginx.org/r/log_format
http://nginx.org/r/add_before_body
http://nginx.org/r/add_after_body
http://nginx.org/r/fastcgi_ignore_headers
http://nginx.org/r/internal
You have two solutions:
tell Apache to route every image request to your index.php (or whatever): rewrite all requests to yourscript.php, process them, and route to the correct file/controller (see the sketch after this list). Example: https://stackoverflow.com/a/9453207/4196542
don't give out the real image URL, and pass through a PHP controller like "/getImage.php?image=foo.png". Record the IP and all the data you want, then serve the image.
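As an illustration of the first option, a hedged .htaccess sketch; the script name and extension list are placeholders:

RewriteEngine On
# Send every image request through one PHP controller.
RewriteRule ^images/(.+\.(?:png|jpe?g|gif))$ getImage.php?image=$1 [L,QSA]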
Hello everyone. I was wondering if there's a way to have random text generated to replace my file names (and of course still have the links work)?
I've noticed that people like to give files (mostly video) very random characters as file names instead of relevant ones. Are the names generated as each user views the page? Or is it really a file with a name of random characters? I can't imagine this being efficient if you have large amounts of files.
I'm currently using the script from http://css-tricks.com/snippets/php/generate-expiring-amazon-s3-link/
This allows me to not share my files publicly, since users cannot access the files directly. I hope that makes sense; let me know what you think.
Thank you
As of today, you could probably use S3 Website Redirects to accomplish this.
Simply create a new file with a random name and the correct HTTP header set, and have it redirect back to the real file. This uses an HTTP 301 redirect, so loading it in a web browser or requesting it with cURL or wget will resolve to the real location.
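For instance, a hedged sketch with the AWS SDK for PHP (bucket, keys, and region are illustrative); the WebsiteRedirectLocation parameter maps to the x-amz-website-redirect-location header that S3 website hosting uses for the 301:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

// An empty object under a random alias that 301-redirects to the real file.
// Note the redirect is only honored via the S3 website endpoint.
$s3->putObject([
    'Bucket'                  => 'my-bucket',
    'Key'                     => 'dl/8f3kz9qw2p.mp3',
    'Body'                    => '',
    'WebsiteRedirectLocation' => '/real/path/file.mp3',
]);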
I'm looking for a way to send a user a regular file (mp3s or pictures) while keeping count of how many times the file was accessed, without going through an HTML/PHP page.
For example, the user will point his browser to bla.com/file.mp3 and start downloading it, while a server-side script will do something like saving data to a database.
Any idea where should I get started?
Thanks!
You will need to go through a PHP script. What you could do is rewrite the extensions you want to track, preferably at the folder level, to a PHP script which then does the calculations you need and serves the file to the user.
For example:
If you want to track the /downloads/ folder, you would create a rewrite on your web server that rewrites all, or just specific, extensions to a PHP file we'll call proxy.php for this example.
An example URI would be proxy.php?file=file.mp3. The proxy.php script sanitizes the file parameter, checks whether the user has permission to download (if applicable), checks whether the file exists, serves the file to the client, and performs any operations needed on the backend, like database updates.
Do you mean that you don't want your users to be presented with a specific page that interrupts their flow? If so, you can still use a PHP page, following the steps below. (I'm not up to date with PHP, so it'll be pseudo-code, but you'll get the idea.)
Provide links to your file as (for example) http://example.com/trackedDownloader.php?id=someUniqueIdentifier
In the trackedDownloader.php file, determine the real location on the server that relates to the unique id (e.g. 12345 could map to /uploadedFiles/AnExample.mp3)
Set an appropriate content type header in your output.
Log the request to your database.
Return the contents of the file directly as the page output.
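A minimal PHP realization of those steps; the id-to-path map, log location, and content type are illustrative:

<?php
// Map the unique id to the real location on the server.
$map = ['12345' => '/uploadedFiles/AnExample.mp3'];

$id = $_GET['id'] ?? '';
if (!isset($map[$id])) {
    http_response_code(404);
    exit;
}
$path = $map[$id];

// Log the request (a flat file stands in for the database here).
file_put_contents('/var/log/downloads.log',
    date('c') . " $id {$_SERVER['REMOTE_ADDR']}\n", FILE_APPEND);

// Appropriate content type, then the file contents as the page output.
header('Content-Type: audio/mpeg');
header('Content-Length: ' . filesize($path));
readfile($path);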
You would need to scan the log files. Regardless, you would most likely want to store the counters in a database.
There is a great solution for serving static files from PHP: mod_xsendfile.
https://tn123.org/mod_xsendfile/
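A minimal sketch of that approach, assuming mod_xsendfile is installed and XSendFile On is set (the file path is illustrative). PHP does the bookkeeping and Apache streams the file, so large files never pass through PHP's memory:

<?php
// Do your logging/counting here, then delegate the transfer to Apache.
header('X-Sendfile: /var/files/file.mp3');
header('Content-Type: audio/mpeg');
header('Content-Disposition: attachment; filename="file.mp3"');
exit;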
I have a web page that lists thousands of links to image files. Currently this is handled with a very large HTML file that is manually edited with the name of each image and links to the image file. The images aren't well managed, so many of the links are broken or the names are wrong.
Here is an example of one line of the thousands of lines in the HTML file:
<h4>XL Green Shirt</h4>
<h5>SKU 158f15 </h5>
[TIFF]
[JPEG]
[PNG]
<br />
I have the product information for the images in a database, so my solution was to write a PHP page that iterates through each of the product numbers in the database, checks whether a file exists with the same id, and then displays the appropriate link and information.
I did this with the PHP function file_exists(), since the product id is the same as the file name, and it worked fine on my local machine. The problem is that all the images are hosted on Amazon S3, so running this function thousands of times against S3 always causes the request to time out. I've tried similar PHP functions, as well as pinging the URL and testing for a 200 or 404 response; everything times out.
Is there a solution that can check for the existence of a file at a remote URL while consuming few resources? Or is there a more novel way to attack this problem?
I think you would be better served by making sure the file exists when you place the record in the database, rather than trying to check for the existence of thousands of files on each and every page load.
That being said, an alternative solution would be to use s3fs with a local storage cache directory in which to check for the file's existence. This would be much faster than checking your S3 storage directly. s3fs also provides a convenient way to write new files into S3 storage.
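A hedged sketch of the validate-on-insert idea using the AWS SDK for PHP (the bucket name, SKU, and format list are illustrative). doesObjectExist issues a single HEAD request per key, so nothing is downloaded:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

// At insert time, record which formats actually exist for the product,
// so page loads never have to ask S3 at all.
$sku = '158f15';
$formats = array_filter(['tiff', 'jpeg', 'png'],
    fn(string $ext) => $s3->doesObjectExist('my-image-bucket', "$sku.$ext"));
// Store $formats alongside the product row in the database.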
This is a theoretical question.
Twitter keeps user profile images at URLs like the following:
https://twimg0-a.akamaihd.net/profile_images/2044921128/finals_normal.png
It's impossible to imagine that they have a server which contains 2044921128 directories, for example. Maybe this URL is created using mod_rewrite?
So how do you store an extremely large number of user images?
How would you complete this scheme:
The user chooses an image, and a PHP script uploads it to become his profile picture.
The PHP script renames it, sets the PATH where the image is stored, moves it, and finally adds the path to the database for further use.
So what should the PATH look like?
Nothing says that Akamai (which stores the pictures for Twitter, based on your URL) actually stores the files in a directory structure. It's entirely possible that they are stored in memory (backed by, say, a directory structure), in a database (SQL/NoSQL), or in any other storage mechanism that Akamai finds efficient.
You can route all requests for a URL that starts with
https://yourService.com/profile_images/
to a PHP script of your choice, which then parses the rest of the URL to determine which image is being requested and stores/retrieves it in whatever storage mechanism you want (perhaps a database), based on the parsed URL.
Here's a short blog post that shows one method of doing that using mod_rewrite:
http://www.phpaddiction.com/tags/axial/url-routing-with-php-part-one/
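A sketch of that routing idea, assuming every /profile_images/... request is rewritten to this script (the storage layout is illustrative):

<?php
// Parse e.g. /profile_images/2044921128/finals_normal.png
$uri = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
if (!preg_match('#^/profile_images/(\d+)/([\w.]+)$#', $uri, $m)) {
    http_response_code(404);
    exit;
}
[, $userId, $file] = $m;

// Fetch the image by ($userId, $file) from whatever backend you chose;
// a plain directory tree stands in for it here.
$path = "/var/images/$userId/$file";
if (!is_file($path)) {
    http_response_code(404);
    exit;
}
header('Content-Type: image/png');
readfile($path);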
Most operating systems discourage having more than about 1024 directories/files within a single directory, since anything above that slowly makes scanning and locating specific resources slower, so I think it is safe to assume that Akamai does not have 2044921128 directories within profile_images!
Either it is a special unique identifier generated within profile_images, or it is one of the numerous ways in which URL routing can be used to locate a resource. In any case, I do not think it corresponds to a number of directories.
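One common answer to "what should the PATH look like" is to hash the user id and split the hash into short prefix directories, so that no single directory ever grows too large. A sketch (the layout is illustrative):

<?php
// e.g. user 2044921128 => /profile_images/3a/7f/2044921128_normal.png
// (the two prefix directories depend on the hash, not the id itself,
// so files spread evenly across at most 256 * 256 directories)
function profileImagePath(int $userId, string $variant = 'normal'): string
{
    $h = md5((string) $userId);
    return sprintf('/profile_images/%s/%s/%d_%s.png',
        substr($h, 0, 2), substr($h, 2, 2), $userId, $variant);
}

echo profileImagePath(2044921128);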