Currently I've got a web app that retrieves the URL of an MP3 on an external server, but to conserve data I'd like to check first that the page my server is retrieving is actually a redirect, not the actual content (so I can grab the URL of the MP3 and NOT the MP3 file itself).
The external PHP script requires that JSON data be POSTed to it, which makes it hard to have the client do the request itself.
The problem is that although the external PHP script usually redirects me to a standard URL to GET from, sometimes it returns the actual MP3 itself in the body, using up my bandwidth rather than the user's.
What would be the best way to fix this so that I don't waste my bandwidth?
Thanks.
The best solution would be to use the HTTP verb HEAD.
From RFC 2616:
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.
However, the question is, does the remote server support HEAD?
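If it does, here is a minimal sketch of how the check could look with cURL (the URL is a placeholder; CURLOPT_NOBODY turns the request into a HEAD):

<?php
// Sketch: HEAD request to see whether the remote script answers with a redirect.
$ch = curl_init('https://example.com/external-script.php'); // placeholder URL
curl_setopt($ch, CURLOPT_NOBODY, true);          // send HEAD instead of GET
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // don't echo anything
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false); // we want to inspect the redirect ourselves
curl_exec($ch);
$status   = curl_getinfo($ch, CURLINFO_HTTP_CODE);
$location = curl_getinfo($ch, CURLINFO_REDIRECT_URL); // empty string if there is no redirect
curl_close($ch);

if ($status >= 300 && $status < 400 && $location !== '') {
    // It's a redirect: hand $location (the MP3 URL) to the client.
} else {
    // No redirect (or HEAD unsupported): fall back to another strategy.
}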
Is there a way to distinguish between a browser loading a web page that requests a resource (e.g., <script src="https://www.example.com/requested_script.js"></script>) and a resource being requested directly (no referer) by typing its URL into the URL bar, using either PHP or .htaccess?
I'm trying to secure a JavaScript file from being R & D'ed (Ripped-Off and Duplicated) by directing the server to serve a bogus (i.e., fake) JavaScript file if the would-be thief tries to view the file contents/source-code directly, while at the same time, serving the real JavaScript file to the browser as a legitimate resource.
Maybe add a hash at the end of the redirects, with only one being the genuine one.
Something like:
<script src="https://www.example.com/requested_script.js?frX543FVd34fgf"></script>
To make it more difficult to read, use an obfuscator, for example:
https://closure-compiler.appspot.com/home
It will also reduce the size of the file.
But if your main concern is that you're running "proprietary" operations in the JavaScript, you'd better move those to the server side and just send the client the final calculations/values; this way no one sees how you obtain them.
Any other info will unfortunately ALWAYS be visible to the client, because that's the way the browser reads it.
This is going to sound like an odd request.
I have a PHP script pulling an MP3 stream from SoundCloud and repeating the stream with the correct headers to allow Winamp to play the file. But Winamp only shows the local URL the script is running from. Before anyone asks, I am injecting ID3v1 tags into the file before echoing it.
Is there any way to provide Winamp with the metadata from PHP?
Just to clarify, you are effectively proxying an MP3 file from SoundCloud, and you want to embed metadata into it?
Winamp will pick up ID3 tags in an HTTP-served MP3 file. However, if you are using ID3v1, those tags don't exist until the very end of the file. If you want the file to be identified without having to download the whole file, you must use ID3v2 which are typically located at the beginning of the file. (I actually recommend using both ID3v1 and ID3v2 for broader player compatibility, but almost everything supports ID3v2, so it is your choice.)
Now, there is another method but if you use this method the metadata won't be saved in the file when downloaded. You can use SHOUTcast-style metadata. Basically, Winamp and other clients (like VLC) send a request header, Icy-MetaData: 1. This tells the server that it supports SHOUTcast-style metadata. In your server response, you would insert metadata every 8KB or so. Basically, you want the reverse of what I have detailed here: https://stackoverflow.com/a/4914538/362536
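A rough sketch of that reverse direction, assuming the client sent Icy-MetaData: 1 and with $source and $title below as placeholders (the byte accounting must be exact or players will desynchronize):

<?php
$source  = fopen('https://example.com/stream.mp3', 'rb'); // placeholder stream
$title   = 'Artist - Track';                               // placeholder title
$metaint = 8192;                                           // audio bytes between metadata blocks

header('Content-Type: audio/mpeg');
header('icy-metaint: ' . $metaint); // only send this if the client asked for metadata

// One metadata block: "StreamTitle='...';" NUL-padded to a multiple of 16 bytes,
// preceded by a single length byte equal to (padded length / 16).
$meta   = "StreamTitle='" . $title . "';";
$padded = str_pad($meta, (int) ceil(strlen($meta) / 16) * 16, "\0");
$block  = chr(intdiv(strlen($padded), 16)) . $padded;

$buffer = '';
while (!feof($source)) {
    $buffer .= fread($source, $metaint - strlen($buffer));
    if (strlen($buffer) === $metaint) {  // emit metadata exactly every $metaint bytes
        echo $buffer, $block;
        $buffer = '';
        flush();
    }
}
echo $buffer;                            // trailing audio shorter than one interval
fclose($source);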
In the end, simply adding ID3v2 tags will solve your problem in the best way, but I wanted to mention the alternative option in case you needed it for something else.
Is it possible to save all resources downloaded from an HTTP request with PHP?
For example: using curl, wget or something similar to get all the files necessary to load the page in a browser, instead of only getting the HTML content of the page.
I don't want to extract all the links and then download each one with a separate curl call. I would like a way to do it only once. I assume it's possible, since in a browser I also only enter one URL to get all the resources.
Edit:
The point here is to simulate browser behavior. How can I save an entire page with PHP? If it must be done in several steps, what logic should I follow?
I have huge problems getting all the files from a page even after extracting the links, since I find it very hard to store session data and reuse it for further curl requests.
You can use Ajax for that, save the resources as objects, and show them when needed. Does that work for you?
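For the session/cookie part of the question, a minimal sketch of reusing one cookie jar across curl requests (URLs and the jar path are placeholders; the link extraction here is deliberately naive):

<?php
$jar = tempnam(sys_get_temp_dir(), 'cookies');     // shared cookie jar

function fetch($url, $jar) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_COOKIEJAR      => $jar,            // write cookies here
        CURLOPT_COOKIEFILE     => $jar,            // and read them back on the next call
    ]);
    $body = curl_exec($ch);
    curl_close($ch);
    return $body;
}

$html = fetch('https://example.com/page.html', $jar); // placeholder URL

// Naive extraction of src/href attributes; resolve relative URLs before fetching.
preg_match_all('/(?:src|href)="([^"]+)"/i', $html, $matches);
foreach ($matches[1] as $resource) {
    $contents = fetch($resource, $jar);            // same session as the first request
    // file_put_contents(basename(parse_url($resource, PHP_URL_PATH)), $contents);
}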
Hi,
I download a large number of files for data mining. I used to use PHP for this, but I am finding it too slow. Also, I just want a small part of each web page. I want to achieve two things:
Curl should be able to utilize all my download bandwidth.
Is there any way to download only the part of the web page where my data resides?
I am not confined to PHP. If curl works better in the terminal, I would use that.
Yes, you can download only a part of the page by using the CURLOPT_RANGE option, and you can also provide a write callback function that simply returns an error when you've received "enough" data and you want to stop and move on.
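A minimal sketch of both ideas together (the URL and limits are placeholders):

<?php
$maxBytes = 64 * 1024;   // stop after roughly 64 KB
$received = '';

$ch = curl_init('https://example.com/big-page.html');       // placeholder URL
curl_setopt($ch, CURLOPT_RANGE, '0-65535');                  // honoured only if the server supports ranges
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $data) use (&$received, $maxBytes) {
    $received .= $data;
    if (strlen($received) >= $maxBytes) {
        return -1;       // returning fewer bytes than given makes cURL abort the transfer
    }
    return strlen($data);
});
curl_exec($ch);          // returns false (CURLE_WRITE_ERROR) if we aborted on purpose
curl_close($ch);

// $received now holds at most ~64 KB of the page.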
Are you downloading HTML? Your comment leads me to believe that you are. If that's the case, simply load up the HTML with Simple HTML DOM and get only the part that you want. Although, I find it hard to believe that grabbing just the HTML is slowing you down. Are you downloading any files or media as well?
Link: http://simplehtmldom.sourceforge.net/
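Rough usage sketch (the selector is a placeholder for wherever your data lives):

<?php
include 'simple_html_dom.php';

$html = file_get_html('https://example.com/page.html');     // placeholder URL

foreach ($html->find('div.data-i-want') as $element) {      // placeholder selector
    echo $element->plaintext, "\n";
}

$html->clear();   // free memory; the library keeps the whole DOM otherwise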
There is no way to download only part of a page. When you request a URL, the server response is what it is.
Utilize more of your bandwidth by using cURL's ability to make multiple connections at once.
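A sketch with curl_multi (the URLs are placeholders):

<?php
$urls = [
    'https://example.com/file1.html',
    'https://example.com/file2.html',
    'https://example.com/file3.html',
];

$mh      = curl_multi_init();
$handles = [];
$results = [];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Run all transfers in parallel until every handle has finished.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);   // wait for activity instead of busy-looping
} while ($running > 0);

foreach ($handles as $url => $ch) {
    $results[$url] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);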
How is it possible to download multiple files in one HTTP request?
What I mean is something like when you have multiple attachments and you select the ones you want to download, then press download, and they are automatically downloaded without you having to click on each one manually.
I'm using PHP for server-side scripting.
It is possible to send a multipart response in HTTP:
In general, HTTP treats a multipart message-body no differently than any other media type: strictly as payload. […] an HTTP user agent SHOULD follow the same or similar behavior as a MIME user agent would upon receipt of a multipart type.
[…] If an application receives an unrecognized multipart subtype, the application MUST treat it as being equivalent to "multipart/mixed".
But since Firefox is the only browser that I know of that supports such multipart responses (apart from multipart/byteranges), you should use some archive file format for this purpose.
That is practically not usable due to poor browser support. You can pack them into a tar or zip file on the server side and serve the archive file instead, though.
I think it's not possible since each HTTP request has only one URI.
You can zip the files with PHP on the server side and request the archive, or return it from within your script by setting the appropriate headers; see the ZipArchive class.
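A minimal sketch of that approach (the file names are placeholders):

<?php
$files   = ['report.pdf', 'data.csv', 'image.png'];          // placeholder file list
$zipPath = tempnam(sys_get_temp_dir(), 'dl');

$zip = new ZipArchive();
$zip->open($zipPath, ZipArchive::CREATE | ZipArchive::OVERWRITE); // start a fresh archive
foreach ($files as $file) {
    $zip->addFile($file, basename($file));
}
$zip->close();

header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="download.zip"');
header('Content-Length: ' . filesize($zipPath));
readfile($zipPath);
unlink($zipPath);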
Or you create a special client that can parse a self-specified message format (a Flash app, a plugin), but if your client is simply a browser you'll get one response with a fixed Content-Length from the server.
Dunno about one HTTP request, but if you want them all at once you could loop through them, changing the header('Location:') for each one and redirecting to the immediate download script. Though that would be redundant and ugly; I think the best way would be to zip them all. There are instructions on how to do so in the PHP documentation.
I'm using multiple invisible iframes to trigger multiple downloads at once. There will be more than one HTTP request, but the effect will be what you described.
<iframe style="display: none;" src="file1.zip"></iframe>
<iframe style="display: none;" src="file2.zip"></iframe>
<iframe style="display: none;" src="file3.zip"></iframe>
Most browsers will ask the user if they want to allow multiple downloads (this is to prevent websites from spamming the file system). Also, you have to make sure each file will actually be downloaded, and not just displayed inside the invisible iframe (e.g. by using header('Content-Disposition: attachment'); inside the PHP script serving the file).
In my solution I use JavaScript to change the src after each file is downloaded, so the files load sequentially and my server has to handle fewer requests at once. But that solution requires AJAX and is much more complicated.
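For completeness, a minimal sketch of the script each iframe points at, forcing the download with the attachment header mentioned above ($path is a placeholder):

<?php
$path = '/path/to/file1.zip';   // placeholder path

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($path) . '"');
header('Content-Length: ' . filesize($path));
readfile($path);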
One easy way is to encode the files in Base64 and send them in a JSON response. Though that would make the response something like 4/3 of the original size, so definitely don't use it on large files.
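A tiny sketch of that idea (the file names are placeholders):

<?php
$files   = ['a.pdf', 'b.pdf'];   // placeholder files
$payload = [];
foreach ($files as $file) {
    $payload[basename($file)] = base64_encode(file_get_contents($file));
}
header('Content-Type: application/json');
echo json_encode($payload);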
Maybe you can use JS to open multiple popups, each loading a download URL. I think that's the only way.