JSON files to own domain - PHP

I don't know if this is the right place, but I'm using an API (Fortnite, to be more precise), and the JSON files contain image URLs, for example www.apiwebsite.com/fortniteimage1.png. Is it possible to serve that image from my own URL automatically, like media.myurl.com/fortniteimage1.png?

You need to download the images to your own server. I recommend using curl for that; just look at the docs, there are a lot of examples there.
After downloading them, they must be in a directory that is served publicly. If you are using a framework, that is usually the "public" directory, where other assets (JS, CSS, images) are also located.
That way, the images will be in your domain and will be served from there, like:
https://my-crazy-domain.net/images/fortnite/person-avatar.jpg
I think the question would be a better fit if it were phrased more like "How can I do this in PHP?"
Anyway, I hope you achieve what you need.
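
To make that concrete, here is a minimal sketch of the curl approach; the source URL and the public images directory are placeholders for your own setup:

<?php
// Minimal sketch: mirror one remote image into a publicly served directory.
// Both the source URL and the target directory are placeholders.
$sourceUrl = 'https://www.apiwebsite.com/fortniteimage1.png';
$targetDir = __DIR__ . '/public/images/fortnite';
$targetFile = $targetDir . '/' . basename(parse_url($sourceUrl, PHP_URL_PATH));

if (!is_dir($targetDir)) {
    mkdir($targetDir, 0755, true);
}

$ch = curl_init($sourceUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$imageData = curl_exec($ch);

if ($imageData !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) === 200) {
    file_put_contents($targetFile, $imageData);
}
curl_close($ch);

Once the file is written under public/, you can point media.myurl.com at that directory and the image is served from your own domain.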

Related

Laravel Public vs Storage directories for audio files

I am a little bit confused about which directory I should put my .mp3 files in, in Laravel 5.
public/, my first choice, as my MP3s can be considered media (like the images that are already stored there)
storage/app, since according to the docs this directory is kind of recommended for this purpose.
I don't really need a URL for these audio files since I serve them as a stream (to somehow prevent downloads, lol).
Any advice? Thanks.
Guess it just depends on whether you want direct public access or not.
public/ is simpler. Anything in there you can link to directly, just like your JS, CSS, or image resources.
storage/app is obviously more secure; there's no way to access it directly. That's what I would use (actually I think I'd use storage/app/audio, to be specific) so I have more control over how the files are accessed.
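
If you do keep the files in storage/app and only serve them as a stream, a rough sketch of a route for that could look like the following. The route, the audio subdirectory and the MIME type are assumptions, and response()->file() needs Laravel 5.2 or newer:

Route::get('/audio/{filename}', function ($filename) {
    // basename() keeps requests from escaping storage/app/audio
    $path = storage_path('app/audio/' . basename($filename));

    if (!file_exists($path)) {
        abort(404);
    }

    // Sends the file with proper headers without exposing a public path
    return response()->file($path, ['Content-Type' => 'audio/mpeg']);
});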

php, own little file manager instead of FTP. Good idea?

I'm planning to add a file manager (a very basic one) because I've never used FTP functions, and it looks easier (the FTP connection is lost when the script is done). I would simply use POST requests (or what should I use?) instead of FTP functions. Is it a good idea? Does anyone know of restrictions?
As far as I can see, FTP functions are only there to send and receive files.
What you need to do is add a dynamic form where you can select multiple files and upload them to a specific directory of your choice.
You will need to get all available directories and the files in them, probably with some kind of recursive function. A more optimal way is to get only the directories/files of the current folder and, when a folder is clicked, fetch its files/folders, as in the sketch below.
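
A rough sketch of that per-folder listing, assuming a hypothetical files/ base directory and a dir query parameter:

<?php
$baseDir = __DIR__ . '/files';
$requested = isset($_GET['dir']) ? $_GET['dir'] : '';

// realpath() plus the prefix check keeps the listing inside $baseDir
$path = realpath($baseDir . '/' . $requested);
if ($path === false || strpos($path, realpath($baseDir)) !== 0) {
    http_response_code(400);
    exit('Invalid directory');
}

foreach (scandir($path) as $entry) {
    if ($entry === '.' || $entry === '..') {
        continue;
    }
    $type = is_dir($path . '/' . $entry) ? 'folder' : 'file';
    echo $type . ': ' . htmlspecialchars($entry) . "\n";
}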
Can it be done - sure. Is it a good idea - no. People will be able to upload malicious files, and we are not talking about images here: PHP scripts, shell scripts, executable viruses and so on...
If you are doing this only for yourself, I suggest using an FTP client for sending and receiving files.
I wouldn't recommend it, but if you do go ahead, it's probably best to use a third-party tool rather than writing your own:
PHP File Manager
PHPfileNavigator2
FileManager
...
Keep in mind that both PHP and your web server can put certain restrictions on the size of files that you can transfer; it is of course possible to change these in the configuration files.
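
If uploads silently fail, a quick way to check those limits from PHP itself (these directive names are the standard php.ini ones):

<?php
// The effective upload ceiling is the smaller of these two php.ini values;
// the web server (e.g. client_max_body_size in nginx) may cap it further.
echo 'upload_max_filesize: ' . ini_get('upload_max_filesize') . "\n";
echo 'post_max_size: ' . ini_get('post_max_size') . "\n";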

How to find files in website directory?

I'm creating a web crawler. I'm going to give it a URL, and it will scan through the directory and subdirectories for .html files. I've been looking at two alternatives:
scandir($url). This works on local directories but not on HTTP sites. Is this because of file permissions? I'm guessing it shouldn't work, since it would be dangerous for everyone to have access to your website files.
Searching for links and following them. I can do file_get_contents on the index file, find links and then follow them to their .html files.
Do either of these two work, or is there a third alternative?
The only way to look for HTML files is to parse through the content returned by the server. Unless, by some small chance, directory browsing has been enabled on the server (which is usually one of the first things disabled), you don't have access to browse directory listings, only the content they are prepared to show you and let you use.
You would have to start at http://www.mysite.com and work onwards, scanning for links to HTML files. And what if they have ASP/PHP or other files which then return HTML content?
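
For the second alternative, a minimal sketch of fetching a page and pulling out links with DOMDocument; the starting URL is a placeholder, and a real crawler would also resolve relative URLs and remember which pages it has already visited:

<?php
$startUrl = 'http://www.mysite.com/';
$html = file_get_contents($startUrl);
if ($html === false) {
    exit('Could not fetch page');
}

$doc = new DOMDocument();
libxml_use_internal_errors(true);   // tolerate real-world, slightly broken HTML
$doc->loadHTML($html);

foreach ($doc->getElementsByTagName('a') as $anchor) {
    $href = $anchor->getAttribute('href');
    // keep only links that look like .html pages
    if (substr($href, -5) === '.html') {
        echo $href . "\n";
    }
}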
Have you considered using wget? It can crawl a website and download only files with a particular extension.

online editors for various file types, doc, dwg, image

I have a media archive with various file types on my server. If a user wants to update a file, he has to download it to his local machine, edit it with his desktop software (Word, AutoCAD, etc.), then re-upload it. It's kind of a pain for my users. Has anybody run into this problem and solved it in the past? I'm aware of using Samba, but that's not exactly what I want. Are there any tools out there that can help me edit files that are on a server? I'm thinking of something like Google Docs: your file actually sits somewhere in Googleland, but you can access and edit it in your browser. The files I would want to edit are:
.doc
.dwg (AutoCAD)
.jpg (is there a good image editing client out there?)
My language of choice is PHP, but I can do anything really if I have to.
The bottom line is I have a doc, for example, on my server in some directory. I want the user to edit the content of that doc and have it replaced in place, with as little hassle for the user as possible.
https://www.autocadws.com/ (CAD)
https://chrome.google.com/webstore/detail/dcjeclnkejmbepoibfnamioojinoopln (CAD)
https://docs.google.com (documents)
http://www.aviary.com/ (Images)
Online image editors are the easiest and most plentiful to find. You can roll your own half-baked version or use one of the APIs, like Aviary's.
Google Docs is very nice for the business side of things (Excel, email, Word documents with multiple people working at a time).

Save full webpage

I've bumped into a problem while working on a project. I want to "crawl" certain websites of interest and save them as a "full web page", including styles and images, in order to build a mirror of them. It has happened to me several times that I bookmarked a website in order to read it later, and a few days afterwards the website was down because it got hacked and the owner didn't have a backup of the database.
Of course, I can read the files with PHP very easily with fopen("http://website.com", "r") or fsockopen(), but the main target is to save the full web pages so that, in case a site goes down, it can still be available to others, like a "programming time machine" :)
Is there a way to do this without reading and saving each and every link on the page?
Objective-C solutions are also welcome since I'm trying to figure out more of it also.
Thanks!
You actually need to parse the HTML and all CSS files that are referenced, which is NOT easy. However, a fast way to do it is to use an external tool like wget. After installing wget, you could run the following from the command line:
wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://example.com/mypage.html
This will download mypage.html and all linked CSS files, images, and the images referenced inside the CSS.
After installing wget on your system, you can use PHP's system() function to control wget programmatically.
NOTE: You need at least wget 1.12 to properly save images that are referenced through CSS files.
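
To drive that same command from PHP with system(), a rough sketch (using the example URL from above):

<?php
$url = 'http://example.com/mypage.html';
$cmd = 'wget --no-parent --timestamping --convert-links --page-requisites'
     . ' --no-directories --no-host-directories -erobots=off '
     . escapeshellarg($url);

system($cmd, $exitCode);   // $exitCode is 0 when wget succeeds

if ($exitCode !== 0) {
    echo "wget failed with exit code $exitCode\n";
}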
Is there a way to do this without reading and saving each and every link on the page?
Short answer: No.
Longer answer: if you want to save every page in a website, you're going to have to read every page in a website with something on some level.
It's probably worth looking into the Linux app wget, which may do something like what you want.
One word of warning - sites often have links out to other sites, which have links to other sites and so on. Make sure you put some kind of "stop if different domain" condition in your spider!
If you prefer an Objective-C solution, you could use the WebArchive class from WebKit.
It provides a public API that allows you to store whole web pages as a .webarchive file (like Safari does when you save a webpage).
Some nice features of the webarchive format:
completely self-contained (incl. CSS, scripts, images)
QuickLook support
Easy to decompose
Whatever app is going to do the work (your code, or code that you find) is going to have to do exactly that: download a page, parse it for references to external resources and links to other pages, and then download all of that stuff. That's how the web works.
But rather than doing the heavy lifting yourself, why not check out curl and wget? They're standard on most Unix-like OSes, and do pretty much exactly what you want. For that matter, your browser probably does, too, at least on a single page basis (though it'd also be harder to schedule that).
I'm not sure if you need a programming solution to 'crawl websites' or personally need to save websites for offline viewing, but if it's the latter, there's a great app for Windows, Teleport Pro, and SiteCrawler for Mac.
You can use IDM (Internet Download Manager) for downloading full webpages; there's also HTTrack.
