How to read .xhprof files? - php

I have .xhprof files generated from XHProf and UProfiler.
I tried using the SugarCRM XHProf Viewer and the UProfiler viewer, but neither of them reads .xhprof files.
Do I have to do any conversion to read these reports?

I am maintaining XHProf since it is not maintained by Facebook anymore.
As part of the project, there is an xhprof_html sub-directory. If you can reach its index.php directly from a URL (you may have to create a separate VirtualHost and/or put it somewhere under your document root), it should show you a list of the profiles that have been generated (by default they are written to the temporary directory (/tmp?)).
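For completeness, here is a rough sketch of how a run typically ends up where xhprof_html looks for it; the include paths and the "myapp" namespace are just examples and depend on your installation:

xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);

run_my_code();   // hypothetical placeholder for the code being profiled

$data = xhprof_disable();

// Adjust these paths to wherever XHProf is installed.
include_once '/path/to/xhprof/xhprof_lib/utils/xhprof_lib.php';
include_once '/path/to/xhprof/xhprof_lib/utils/xhprof_runs.php';

// Writes a file like <run_id>.myapp.xhprof into xhprof.output_dir
// (the temporary directory by default), which is the directory that
// xhprof_html/index.php lists runs from.
$runs   = new XHProfRuns_Default();
$run_id = $runs->save_run($data, 'myapp');

echo "xhprof_html/index.php?run={$run_id}&source=myapp\n";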

Related

Laravel Public vs Storage directories for audio files

I am a little bit confused about in which directory I should put my .mp3 files in Laravel 5.
public/, my first choice, as my MP3s can be considered media (like the images that are already stored there)
storage/app, which according to the docs is kind of recommended for this purpose.
I don't really care about being able to get a URL for these audio files, since I serve them as a stream (to somehow prevent downloads, lol).
Any advice? Thanks.
Guess it just depends on whether you want direct public access or not.
public/ is simpler. Anything in there you can link to directly, just like your JS, CSS, or image resources.
storage/app is obviously more secure; there's no way to access it directly. That's what I would use (actually I think I'd do storage/app/audio to be specific) so I have more control over how the files are accessed.
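If you go the storage/app route, here is a minimal sketch of streaming such a file through a route; the audio subfolder and route name are just examples, not Laravel conventions:

// In your routes file -- assumes the MP3s live in storage/app/audio.
Route::get('audio/{filename}', function ($filename) {
    $path = storage_path('app/audio/' . basename($filename)); // basename() blocks path traversal

    if (!file_exists($path)) {
        abort(404);
    }

    // Stream the file instead of handing out a public URL.
    return response()->stream(function () use ($path) {
        readfile($path);
    }, 200, [
        'Content-Type'   => 'audio/mpeg',
        'Content-Length' => filesize($path),
    ]);
});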

How can I parse a .unity3d file on the server side with PHP?

I have my own .unity3d files on my server for users to download to update our game, and I'm trying to create a web interface to add some additional data linked to these .unity3d files. This interface has to be able to show images from assets inside the .unity3d files. These files contain .png assets and information about sprites. How can I parse a .unity3d file to show images from it in my web interface?
.unity3d files are a proprietary package format used for distribution, and thus are not meant to be edited.
If you need to build your web interface, it will be easier to do so using the original files.
Having said that, there are tools that decompile .unity3d files; see this answer on the official Unity Q&A site.

PHP Large Files Zipping with folders that don't exist in the file-system

I'm creating some kind of file-sharing application. In this application you should be able to upload and download files and structure them.
I've opted not to keep manipulating the file system. Instead, I upload the files into folders based on timestamps, store the important information in the database, and structure them there. So there are no REAL subfolders, just relations in the database that organize these files into folders.
Now I want to let users download folders (including subfolders) as a zip, but I want to recreate the folder structure (that the user sees in the front-end) inside the zip archive that will be downloaded. I've managed to do this using the ZipArchive class of PHP: http://php.net/manual/en/class.ziparchive.php
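For reference, the relevant ZipArchive detail is that the second argument of addFile() sets the path inside the archive, so the folders only have to exist in the zip, not on disk (the paths below are made-up examples):

$zip = new ZipArchive();
$zip->open('/tmp/export.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);

// The "local name" arguments create the virtual folder structure inside the archive.
$zip->addEmptyDir('My Folder/Sub');
$zip->addFile('/data/uploads/1432817400/abc123', 'My Folder/Sub/report.pdf');

$zip->close();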
But there's one big issue with this: it uses a lot of memory and CPU when compressing big files, and the system must be able to handle large files (>1GB). I can't possibly allow PHP to use >1GB of memory?!
Now I've found a Stack Overflow answer about zipping large files using less memory in PHP:
https://stackoverflow.com/a/4357904/383731
This seems to use the Unix zip command, but is it possible to create folders inside these zips without them existing in the file system?
create a folder in your temp directory
add subfolders and links to real files
zip it
send it
kill temp files
The part about the links is important. If you just use links, you do not actually have to copy the file contents. All you do is give the zip tool a hint about where it can find the contents you want to include in the archive.
EDIT:
Both symbolic links and hard links work; the link above explains how to do it with symlinks.
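A minimal sketch of those steps, assuming a Unix-like host with the zip command available; $items maps the virtual paths the user sees to the real timestamped files on disk (both sides are made-up examples):

$items = [
    'My Folder/song.mp3'       => '/data/uploads/1432817400/abc123',
    'My Folder/Sub/report.pdf' => '/data/uploads/1432817401/def456',
];

$tmpDir  = sys_get_temp_dir() . '/export_' . uniqid();
$zipFile = $tmpDir . '.zip';

foreach ($items as $virtualPath => $realPath) {
    $target = $tmpDir . '/' . $virtualPath;
    if (!is_dir(dirname($target))) {
        mkdir(dirname($target), 0700, true);   // recreate the virtual folder tree
    }
    symlink($realPath, $target);               // link instead of copying the contents
}

// zip -r follows symlinks by default, so the archive contains the real data
// while PHP itself never loads the files into memory.
exec(sprintf('cd %s && zip -rq %s .', escapeshellarg($tmpDir), escapeshellarg($zipFile)));

// Stream the archive to the client, then clean up the temp files.
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="download.zip"');
header('Content-Length: ' . filesize($zipFile));
readfile($zipFile);

exec(sprintf('rm -rf %s %s', escapeshellarg($tmpDir), escapeshellarg($zipFile)));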

PHP Apache - How do I create a secure working directory, invisible to web

I have created a PHP script that takes a large HTML file and, using DOMDocument, chops it up into smaller files. To save on script memory and to avoid using a DB, I've done this sequentially and saved them as hundreds of HTML files. My question is: how do I make sure these files are not visible to the outside world, but still retain the ability to use them as resources for future processing (piecing together various files and displaying them on a page)?
I'm using Amazon EC2 - Centos 6/Apache.
Thank you!
Put them in a directory which isn't a subdirectory of your web root directory (i.e. the publicly exposed directory).
Another possible approach (if you are using Apache) is to use an .htaccess file with Deny from all in that directory.
By far the best approach is to store them outside the document root (perhaps one level above it).
Otherwise, at some future point, your settings, .htaccess file, httpd.conf, or other elements may change and reveal the directory contents.
Storing them outside the docroot means they will never become visible.
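For example, with the chunks stored in a directory that Apache never serves (the path below is just an example), PHP can still read them when assembling a page:

// Lives outside the document root, e.g. next to /var/www/html rather than inside it.
$storageDir = '/var/www/private/html_chunks';

$chunk = isset($_GET['chunk']) ? $_GET['chunk'] : 'part-001.html';
$path  = $storageDir . '/' . basename($chunk);   // basename() blocks path traversal

if (is_file($path)) {
    readfile($path);    // piece the stored fragment into the rendered page
} else {
    http_response_code(404);
}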

How to find files in website directory?

I'm creating a web crawler. I'm going to give it a URL and it will scan through the directory and subdirectories for .html files. I've been looking at two alternatives:
scandir($url). This works on local files but not on HTTP sites. Is this because of file permissions? I'm guessing it shouldn't work, since it would be dangerous for everyone to have access to your website files.
Searching for links and following them. I can do file_get_contents on the index file, find links, and then follow them to their .html files.
Does either of these work, or is there a third alternative?
The only way to look for .html files is to parse through the content returned by the server. Unless, by some small chance, they have enabled directory browsing on the server (usually one of the first things disabled), you don't have access to browse directory listings, only the content they are prepared to show you and let you use.
You would have to start at http://www.mysite.com and work onwards, scanning for links to .html files. And what about ASP/PHP or other files which then return HTML content?
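A rough sketch of that link-following approach; the start URL is a placeholder, and a real crawler would need proper relative-URL resolution:

$start   = 'http://www.mysite.com/';
$queue   = array($start);
$visited = array();

while ($queue) {
    $url = array_shift($queue);
    if (isset($visited[$url])) {
        continue;
    }
    $visited[$url] = true;

    $html = @file_get_contents($url);
    if ($html === false) {
        continue;
    }

    $doc = new DOMDocument();
    @$doc->loadHTML($html);                       // suppress warnings from sloppy markup

    foreach ($doc->getElementsByTagName('a') as $a) {
        $href = $a->getAttribute('href');
        if (strpos($href, '/') === 0) {           // resolve absolute-path links only
            $href = rtrim($start, '/') . $href;
        }
        if (strpos($href, $start) === 0 && substr($href, -5) === '.html') {
            $queue[] = $href;
        }
    }
}

print_r(array_keys($visited));                    // every page reached, .html files included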
Have you considered using wget? It can crawl a website and download only files with a particular extension.
