How can I change the site url in an image source url to a new site url, something like this:
Original Image: http://domain.com/theme/wp-content/uploads/2014/12/image.jpg
After Replace: http://new-domain.com/new-theme/wp-content/uploads/2014/12/image.jpg
How can I do this using PHP?
I made it work using str_replace:
http://php.net/str_replace
$old_src = 'http://domain.com/theme/wp-content/uploads/2014/12/image.jpg';
$old_url = 'http://domain.com/theme/';
$new_url = 'http://new-domain.com/new-theme/';
$new_src = str_replace($old_url, $new_url, $old_src);
echo $new_src . '<br/>';
The easiest way to make this change is at a database level.
I do this kind of thing all the time when I deploy a site and I need to change the staging domain (staging.test.com) to the live domain (test.com). WordPress generally uses the WP_SITEURL set in wp-config.php to determine asset paths, but this is not the case for images.
I use the Search and Replace for WordPress databases script and it's pretty solid; just replace:
http://domain.com/theme/
with
http://new-domain.com/new-theme/
and you should be good to go...
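If you only need to touch plain-text columns (post content, for instance), a raw REPLACE() query can also do it. Here's a rough sketch, with made-up credentials and the default wp_ table prefix assumed; note that serialized values (widgets, some options) break under a plain replace, which is exactly why the script above is the safer route:
<?php
// Hypothetical connection details - adjust for your install.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'wordpress');

$old = 'http://domain.com/theme/';
$new = 'http://new-domain.com/new-theme/';

// REPLACE() is fine for plain-text columns such as post_content.
$stmt = $db->prepare('UPDATE wp_posts SET post_content = REPLACE(post_content, ?, ?)');
$stmt->bind_param('ss', $old, $new);
$stmt->execute();
?>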
Alternatively, if you're not moving a site then you can change specific instances in the database - for this I'd recommend using a tool like Navicat (for OSX) as this can perform search/replace operations on specific tables.
I hope I can manage to explain this problem...
I have a TYPO3 extension that handles shared content.
In this shared content, we have links (page-ids) that are defined and converted into something like /en/clients/contact, using
$cObj = t3lib_div::makeInstance('tslib_cObj');
$href = $cObj->getTypoLink_URL($linkValue); // $linkValue is an integer (e.g. 153)
This works fine - until I change the language on the page. Then, the last used URL kinda «sticks» and the language indicator isn't present in the URL anymore.
Means:
call the German page -> works
change to English -> works
change back to German -> the English link is presented.
So the above link turns out like clients/contact (the leading slash is gone as well).
Oddly enough, I have a local installation of the same page where the problem doesn't occur. It's just on the page that's online.
I tried to find differences in the configuration, but there aren't any.
The only difference I could find so far is that I use TYPO3 v4.5.35 for the local installation and v4.7.17 for the online installation.
Any ideas???
This was very odd... but, I found a solution.
Instead of using $cObj->getTypoLink_URL($linkValue); I'm using this:
$configurations['additionalParams'] = "&L=".(int)t3lib_div::_GP('L');
$configurations['returnLast'] = 'url'; // get it as URL
$configurations['parameter'] = $linkValue;
$href = $cObj->typolink(NULL, $configurations);
It seems that when I created the $cObj, the L parameter got lost somewhere along the way. By adding it manually, the link works as expected.
I'm currently building a simple MVC framework and I've hit a bit of a road block in terms of breaking the URL down on a localhost but also having it work on a live production server as well.
So basically, my localhost URL is:
localhost/project/public/controller/action
The live version would be:
www.example.com/controller/action
My initial thought was to just use $_SERVER['REQUEST_URI'] which will work perfectly on a live server but on my localhost it returns:
/project/public/controller/action
What I need is:
controller/action
I've had a search around and the only answer I could find was to set up a virtual host which I don't really want to do - this code will be shared between people who may or may not know how to set that up so I want to avoid it if possible.
EDIT: For the record - this is the answer I found - How to get the same $_SERVER['REQUEST_URI'] on both localhost and live server
I also can't remove /project/public/ because this folder structure won't always be the same.
So I basically need to get the path up until the public/ part but I can't even use that because the public folder may be called something else.
I know this must be possible because frameworks such as Laravel do it but even looking at the source for that - I can't quite figure it out.
Thanks for any help.
EDIT: Possible Answer
It's odd how often you have a brainwave as soon as you post something...
I've had the thought that I can just run basename(__DIR__) at my entry point, which will give me the folder's name regardless of what it is. I can then use that to remove everything before (and including) the first instance of that folder.
I'll try this out but if there are more elegant solutions out there, I'd still like to hear them.
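For reference, a minimal sketch of that brainwave (the folder names and entry-point location below are assumptions based on the structure described above, not a definitive implementation):
<?php
// index.php inside the public folder (whatever it happens to be called).
$publicDir = basename(__DIR__); // e.g. "public"

// Work on the path only, ignoring any query string.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

// Strip everything up to and including the first "/<publicDir>/" occurrence.
$marker = '/' . $publicDir . '/';
$pos = strpos($path, $marker);

if ($pos !== false) {
    // localhost/project/public/controller/action -> controller/action
    $route = substr($path, $pos + strlen($marker));
} else {
    // live server: /controller/action -> controller/action
    $route = ltrim($path, '/');
}
?>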
I also can't remove /project/public/ because this folder structure won't always be the same.
But I assume you'll always have controller and action parts? If so, then do this:
$uriParts = explode('/', $_SERVER['REQUEST_URI']);
$count = count($uriParts);
$controller = isset($uriParts[$count - 2]) ? $uriParts[$count - 2] : null;
$action = isset($uriParts[$count - 1]) ? $uriParts[$count - 1] : null;
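One caveat: REQUEST_URI includes the query string, so /controller/action?id=1 would leave ?id=1 glued to the action. A small tweak to the same idea (stripping the query string with parse_url first) avoids that:
// Drop any query string so it doesn't end up glued to the action part.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$uriParts = explode('/', trim($path, '/'));
$count = count($uriParts);
$controller = isset($uriParts[$count - 2]) ? $uriParts[$count - 2] : null;
$action = isset($uriParts[$count - 1]) ? $uriParts[$count - 1] : null;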
Try this to get ending string from your REQUEST_URI which isn't part of the server path:
substr($_SERVER['REQUEST_URI'], strlen(dirname($_SERVER['SCRIPT_NAME'])));
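A quick worked example of what that yields, assuming the entry script sits at /project/public/index.php locally and at the document root on the live server:
// Local:
//   SCRIPT_NAME = /project/public/index.php  -> dirname() = /project/public
//   REQUEST_URI = /project/public/controller/action
//   result      = /controller/action
// Live:
//   SCRIPT_NAME = /index.php                 -> dirname() = /
//   REQUEST_URI = /controller/action
//   result      = controller/action
// An extra ltrim($result, '/') keeps the leading slash consistent in both cases.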
I'm working on converting a website. It involved standardizing the directory structure of images and media files. I'm parsing path information from various tags, standardizing them, checking to see if the media exists in the new standardized location, and putting it there if it doesn't. I'm using string manipulation to do so.
This is a little open-ended, but is there a class, tool, or concept out there I can use to save myself some headaches? For instance, I'm running into problems where, say, a page in a subdirectory (website.com/subdir/dir/page.php) has relative image paths (../images/image.png), or other kinds of things like this. It's not like there's one overarching problem, just a lot of little things that add up.
When I think I've got my script covering most cases, then I get errors like Could not find file at export/standardized_folder/proper_image_folderimage.png where it should be export/standardized_folder/proper_image_folder/image.png. It's kind of driving me mad, doing string parsing and checks to make sure that directory separators are in the proper places.
I feel like I'm putting too much work into making a one-off import script very robust. Perhaps someone's already untangled this mess in a re-useable way, one which I can take advantage of?
Post Script: So here's a more in-depth scoop. I write my script to parse one "type" of page and pull content from all pages of that kind. Then I turn my script to parse another type of page, get all kinds of errors, and learn that all my assumptions about how paths are referenced must be thrown out the window. Wash, rinse, repeat.
So I'm looking at doing some major re-factoring of my script, throwing out all assumptions, and checking, re-checking, and double-checking path information. Since I'm really trying to build a robust path building script, hopefully I can avoid re-inventing the wheel. Is there a wheel out there?
If your problems are rooted in resolving a document's relative links to absolute ones (which should be half the job of mapping the linked image paths onto the file-system), I normally use Net_URL2 from PEAR. It's a simple class that just does the job.
To install, as root just call
# pear install channel://pear.php.net/Net_URL2-0.3.1
Even though it's a beta package, it's really stable.
A little example: let's say there is an array with all the image srcs in question and a base URL for the document:
require_once('Net/URL2.php');
$baseUrl = 'http://www.example.com/test/images.html';
$docSrcs = array(...);
$baseUrl = new Net_URL2($baseUrl);
foreach($docSrcs as $href)
{
$url = $baseUrl->resolve($href);
echo ' * ', $href, ' -> ', $url->getURL(), "\n";
// or
echo " $href -> $url\n"; # Net_URL2 supports string context
}
This will convert any relative links into absolute ones based on your base URL. The base URL is, first of all, the document's address. The document can override it by specifying another one with the base element. So you could look that up with the HTML parser you're already using (as well as the src and href values).
Net_URL2 reflects the current RFC 3986 to do the URL resolving.
Another thing that might be handy for your URL handling is the getNormalizedURL function. It removes some potential error cases, like needless dot segments, which is useful if you need to compare one URL with another and, naturally, for mapping the URL to a path:
foreach($docSrcs as $href)
{
$url = $baseUrl->resolve($href);
$url = $url->getNormalizedURL();
echo " $href -> $url\n";
}
Now that you can resolve all URLs to absolute ones and get them normalized, you can decide whether or not they belong to your site. As long as the URL is still a Net_URL2 instance, you can use one of its many methods to do that:
$host = strtolower($url->getHost());
if (in_array($host, array('example.com', 'www.example.com')))
{
    # URL is on my server, process it further
}
What's left is the concrete path to the file in the URL:
$path = $url->getPath();
That path, considering you're comparing against a UNIX file-system, should be easy to prefix with a concrete base directory:
$filesystemImagePath = '/var/www/site-new/images';
$newPath = $filesystemImagePath . $path;
if (is_file($newPath))
{
# new image already exists.
}
If you have trouble combining the base directory with the image path: the image path will always start with a slash.
Hope this helps.
Truepath() to the rescue!
No, you shouldn't use realpath() (see why).
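The gist of that approach, as a rough sketch (this is not the exact function from the linked answer), is to normalise the path purely as a string, so it also works for files that don't exist yet, which realpath() can't do:
<?php
// Collapse "." and ".." segments and duplicate slashes without touching
// the filesystem - unlike realpath(), the target doesn't have to exist.
function truepath_sketch($path)
{
    $path = str_replace('\\', '/', $path);
    $absolute = (substr($path, 0, 1) === '/');

    $parts = array();
    foreach (explode('/', $path) as $segment) {
        if ($segment === '' || $segment === '.') {
            continue;          // skip empty and "." segments
        }
        if ($segment === '..') {
            array_pop($parts); // step up one directory
        } else {
            $parts[] = $segment;
        }
    }

    return ($absolute ? '/' : '') . implode('/', $parts);
}

// truepath_sketch('/subdir/dir/../images//image.png')
// -> '/subdir/images/image.png'
?>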
I want to create a web directory site, and I need screenshots of the listed sites. How can I get a site screenshot quickly using PHP?
I tried IECapt, webscreencapture and khtml2png, but they are all slow, and they all capture screenshots one URL at a time.
Does IECapt depend on the IE browser? If it does, why can't it open many IE tabs so they work at the same time?
Can anyone recommend an online PHP screenshot tool that meets the above requirements? Thank you.
Your requirements are unrealistic. Your best bet is to integrate with WebKit through something like CutyCapt, which doesn't run an actual browser, just the WebKit rendering engine. You shouldn't have any concurrency issues, but it isn't going to be fantastic.
These external services are developing fast. Take a look at:
http://immediatenet.com/thumbnail_api.html
It renders thumbnails extremely fast and caches them, like the other similar services.
Probably the easiest way is to use an external service. There used to be Alexa Site Thumbnail but it has been discontinued, so you must look for alternatives. For example http://www.pageglimpse.com/ seems to be one.
I have tried CutyCapt. I made 3 copies of CutyCapt.exe and renamed them, but it still captures the screenshots one by one, not running the 3 processes at the same time.
<?php
set_time_limit(0);

$url1 = 'http://www.google.co.uk';
$out1 = '1.jpg';
$path1 = 'CutyCapt1.exe';
$cmd1 = "$path1 -u=$url1 -o=$out1";
system($cmd1);

$url2 = 'http://www.google.com';
$out2 = '2.jpg';
$path2 = 'CutyCapt2.exe';
$cmd2 = "$path2 -u=$url2 -o=$out2";
system($cmd2);

$url3 = 'http://www.google.co.jp';
$out3 = '3.jpg';
$path3 = 'CutyCapt3.exe';
$cmd3 = "$path3 -u=$url3 -o=$out3";
system($cmd3);
?>
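For what it's worth, system() waits for each command to finish, which is why the three captures run strictly one after another no matter how many copies of CutyCapt.exe exist. A rough sketch of firing the same commands off without waiting (the start /B idiom is a common Windows workaround; on Linux, backgrounding with & and discarding output does the same; the paths and flags are simply taken from the snippet above):
<?php
// Launch a shell command without waiting for it to finish.
function launch_background($cmd)
{
    if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') {
        // Windows: "start /B" returns immediately.
        pclose(popen('start /B ' . $cmd, 'r'));
    } else {
        // Linux/Unix: discard output and background the process.
        exec($cmd . ' > /dev/null 2>&1 &');
    }
}

// Same commands as above, now started side by side.
launch_background('CutyCapt1.exe -u=http://www.google.co.uk -o=1.jpg');
launch_background('CutyCapt2.exe -u=http://www.google.com -o=2.jpg');
launch_background('CutyCapt3.exe -u=http://www.google.co.jp -o=3.jpg');
?>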
I do not think thumbnail service sites like pageglimpse.com install many browsers on their web servers. What technology do they use?
I have been working on a content management system (nakid) and one of my toughest challenges is the file navigation. I want to make sure the file paths and settings work on local and remote servers. Right now my setup is pretty much something like this:
first.php (used by all pages):
//Set paths to nakid root
$core['dir_cur'] = dirname(__FILE__);
$core['dir_root'] = $_SERVER['DOCUMENT_ROOT'];
//Detect current nakid directory
$get_dirnakid_1 = str_replace("\\","/",dirname(__FILE__));//If on local
$get_dirnakid_2 = str_replace("/includes/php","",$get_dirnakid_1);
$get_dirnakid_3 = str_replace($_SERVER['DOCUMENT_ROOT'],"",$get_dirnakid_2);
//remove first "/"
if(substr($get_dirnakid_3, 0,1) == "/"){
$get_dirnakid_3 = substr($get_dirnakid_3, 1);
}
//Set some default vars
$core['dir_nakid_path'] = $get_dirnakid_3;
$core['dir_nakid'] = $core['dir_root']."/".$core['dir_nakid_path'];//We need to get system() for this real value - below
The reason I did it this way is that I want the directory this program sits in to be able to live anywhere on the server, e.g. /nakid, /cms, /admin/cms.
I'm positive I am doing something the wrong way or that there is a simpler way to take care of all this.
If it helps to get a closer look at the code and how everything is being used I have it all up at nakid.org
EDIT: Just realized what I have at nakid.org is a little different than my newly posted code, but the same idea still applies to what I am attempting to do.
By and large, it looks okay to me.
You might want to give the variables more descriptive names (e.g. nakid_root_dir, nakid_relative_webroot and so on).
Remember when converting \ to / in path names: Whenever you match another directory name to one of those settings, you need to str_replace("\\","/"...) in those too.
I don't understand what you aim at with $get_dirnakid_2, though. Why will you screw up my path if I install your application in a directory that happens to be named /etc/includes/php/nakid?
Anyway, you should make those settings user-overridable as well. Sometimes the user may want different settings from what you get from DOCUMENT_ROOT and friends.
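A rough sketch of such an override (the file name and array keys here are made up for illustration; nakid may already do something similar):
<?php
// Auto-detected defaults, same idea as in first.php above.
$core['dir_root']       = $_SERVER['DOCUMENT_ROOT'];
$core['dir_nakid_path'] = 'nakid'; // whatever the detection code produced

// ...but let a user-supplied config file win if it exists.
$userConfig = dirname(__FILE__) . '/config.local.php'; // hypothetical name
if (is_file($userConfig)) {
    include $userConfig; // may overwrite any $core[...] entry
}
?>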
I don't fully understand what you're trying to get, but maybe getcwd() is what you're looking for:
http://www.php.net/manual/en/function.getcwd.php