I use Uploadifive to upload big files. That's working fine, except for Firefox on Android.
When selecting a file to upload from Google Drive, that file is first downloaded to the tablet and then uploaded with Uploadifive.
But Firefox renames the file: it prepends tmp_ plus some digits and appends more digits to the filename.
So if test-1.mp3 is my filename, I get tmp_20950-test-1-5487457458.mp3.
I don't think I can stop Firefox from doing this renaming, but I can rename the file in my script.
So far I can remove the 'tmp_', but not the numbers.
There could be 4 or 5 digits at the start, and 8 or 10 digits at the end.
if (preg_match('/tmp_/', $destination_file)) {
    // strips the tmp_ prefix, but the surrounding digits remain
    $destination_file = str_replace('tmp_', '', $destination_file);
}
So I'm looking for a way to strip those numbers. The difficulty is not knowing how many digits to remove. The only 'fixed' element is the dash '-' before and after the numbers. Maybe I can use that in my expression? But I don't know how.
You can try something like this pattern to get the filename:
/^tmp_\d+-(.*?)-\d+\.(.*?)$/
The first capture group will be the filename and the second the extension.
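For example, a minimal sketch (assuming Firefox always produces the tmp_<digits>-<name>-<digits>.<ext> pattern shown above):

<?php
$destination_file = 'tmp_20950-test-1-5487457458.mp3';

if (preg_match('/^tmp_\d+-(.*?)-\d+\.(.*?)$/', $destination_file, $m)) {
    // $m[1] is the original base name, $m[2] the extension
    $destination_file = $m[1] . '.' . $m[2];
}

echo $destination_file; // test-1.mp3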
Scenario:
I have a PHP file that is used by a zip code lookup form. It contains arrays of five-digit zip codes, running anywhere from 500 to 1400 zip codes each. So far it works, but I get PHP sniffer warnings in my code editor (Brackets) that I'm exceeding the 120-character line limit.
Question:
Will this stop my PHP from running in certain browsers?
Do I have to insert a line break every 120 characters just to keep the line length in compliance?
It appears I need to place these long lists in a database and load them into the array rather than hard-coding them all in the PHP.
I am a front-end designer, so there is a lot to learn.
<?php
$zip = $_GET['zip']; // the zip submitted by the lookup form

// Region 01 - PersonOne Name Zips
$loc01 = array (59001,59002,59003,59004,59006);

// Region 02 - PersonTwo Name Zips
$loc02 = array ("00001","00002","00003","00004","00006");
// The arrays above could each include 2000 zips

// Region 01 - PersonOne Name Zips
if (in_array($zip, $loc01)) {
    header("Location: https://company.com/personone");
}

// Region 02 - PersonTwo Name Zips
if (in_array($zip, $loc02)) {
    header("Location: https://company.com/persontwo");
}
Question: Will this stop my PHP from running in certain browsers?
No, PHP runs entirely on the server. Browsers have nothing to do with PHP -- browsers are clients. Languages like HTML, CSS and (most) JavaScript are browser languages, but PHP is only server-side.
Do I have to go to every 120 characters and do a return just to keep the line length in compliance?
No, but I would highly suggest using a database to store tons of records like this. It's exactly what databases are for. Alternatively you could put them in a file and simply read the file in with PHP's file_get_contents function.
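For instance, a rough sketch of the file approach (the filename zips_region01.txt and the one-zip-per-line layout are just assumptions for illustration):

<?php
// Load zip codes from a plain-text file, one five-digit zip per line,
// instead of hard-coding them in the script.
$loc01 = file('zips_region01.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

$zip = isset($_GET['zip']) ? trim($_GET['zip']) : '';
if (in_array($zip, $loc01, true)) {
    header("Location: https://company.com/personone");
    exit;
}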
I will try to:
Add each array into a MySQL database record.
Create a PHP script that fetches each array and applies it to the respective location.
This will eliminate the bloated lines of array numbers in the PHP.
BTW, I also need to define these as 5-digit numeric strings, as many of the zips start with one or two zeros which are ignored by the POST match (a sketch of one way to handle that follows below).
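A minimal sketch of that idea, keeping the zips as strings and zero-padding the submitted value (the padding step is an assumption about how the leading zeros get lost):

<?php
// Keep zips as strings so leading zeros survive, and zero-pad the
// submitted value before comparing.
$zip = isset($_GET['zip']) ? trim($_GET['zip']) : '';
$zip = str_pad($zip, 5, '0', STR_PAD_LEFT); // "123" becomes "00123"

$loc02 = array("00001", "00002", "00003", "00004", "00006");
if (in_array($zip, $loc02, true)) {
    header("Location: https://company.com/persontwo");
    exit;
}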
Thanks everyone for the input.
Well, I have a very strange problem generating 2D barcodes (PDF417) with PHP using TCPDF. This is my small code:
<?php
require_once ("tcpdf/tcpdf_barcodes_2d.php");
$type = "PDF417";
$code="123456789012";
$barcodeobj = new TCPDF2DBarcode($code, $type);
$barcodeobj->getBarcodePNG();
?>
This code works well and generates the barcode. But when I change the code line to
$code="1234567890123";
it does not generate any output. I tried several strings and found out that every time I use a string with more than 12 consecutive digits, I get no output. It does not depend on the position of the digits.
For example:
$code="abcdefghijklmnopqrstuvwxyz123456789012abcdefghijklmnopqrstuvwxyz";
works fine, but
$code="abcdefghijklmnopqrstuvwxyz1234567890123abcdefghijklmnopqrstuvwxyz";
fails.
I use TCPDF 6.0.037 and also tried downloading it from another source. I even tried version 6.0.020 - no change.
Server is openSuSE 12.2 64-bit, PHP 5.3.15.
Edit:
It's getting really strange: I tried another barcode generator - and I get the same error. That one provides an online demo. When I fill in 1234567890123 online, I get the appropriate barcode, but on my own server the same string does not work.
"123456-7890123" works
"1234567890123" does not work
"123456789012" works
"12e34567890123" works
"123456789012sometext123456789012" works
"123456789012sometext1234567890123" does not work
Every string with more than 12 digits in a row does not work - no matter how long the string is.
You see what I mean by "strange"?
Any help would be highly appreciated.
I too had this problem. We are using PDF417 & QR Code barcodes. I have not found a great solution for this, but I have found a solution that works for now. If anyone has a better solution, please advise.
Problem:
Our barcodes store phone numbers, and sometimes they are 13 digits or longer. A 13-digit phone number was causing the barcode not to print correctly.
Solution:
Since the barcode would not print because of this, we just add a space every 10 digits. This keeps the barcode intact, and our software strips the spaces when reading the phone number back in, so we should be A-OK!
Example Phone Number:
123456789012345 (15 digits)
PHP Code to run on phone number:
$phone = chunk_split($phone, 10, ' ');
Example Phone Number after split:
1234567890 12345
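Roughly, the whole workaround looks like this (the 10-digit chunk size is just what works for us; the require path mirrors the question above):

<?php
require_once ("tcpdf/tcpdf_barcodes_2d.php");

// Insert a space every 10 digits so no run of 13+ consecutive digits remains.
$phone   = "123456789012345";
$encoded = trim(chunk_split($phone, 10, ' '));   // "1234567890 12345"

$barcodeobj = new TCPDF2DBarcode($encoded, "PDF417");
$barcodeobj->getBarcodePNG();

// When the barcode is scanned back, the spaces are simply stripped out again:
$original = str_replace(' ', '', $encoded);      // "123456789012345"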
The libraries for these barcodes (Google/TCPDF) don't like runs of more than 12 digits, so it's definitely strange, but maybe I'm missing something easy to see.
Thanks and hope this helps someone.
Currently I need to merge 50+ PDF files into one PDF. I am using PDFtk, following the guide from: http://www.johnboy.com/blog/merge-multiple-pdf-files-with-php
But it is not working. I have verified the following:
I have tried the command to merge 2 PDFs from my PHP and it works.
I have echoed the final command, copied it, pasted it into the command prompt, and run it manually; all 50 PDFs are successfully merged.
Thus exec() in my PHP and the command to merge 50 PDFs are both correct, but it does not work when done together in PHP. I have also set set_time_limit(0) to prevent any timeout, but it still does not work.
Any idea what's wrong?
You can try to find out yourself whether exec() has a command-length limit:
print exec(str_repeat(' ', 5000) . 'whoami');
I think the limit is 8192, at least on my system, because it fails with strings larger than 10K but still works with strings shorter than 7K.
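A rough sketch of probing it more precisely (the 'echo ok' probe command and the step size are just assumptions):

<?php
// Pad a trivial command with spaces until exec() stops returning its output,
// to estimate the maximum command length on this system.
for ($len = 1000; $len <= 200000; $len += 1000) {
    $out = exec(str_repeat(' ', $len) . 'echo ok');
    if ($out !== 'ok') {
        echo "exec() stopped working at around {$len} characters\n";
        break;
    }
}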
I am not sure if there is a length restriction on how long a single command can be, but I am pretty sure you can split it across multiple lines with "\" just to check if that's the problem. Again, I don't think it is... Is there any error output when you run the full command with PHP and exec()? Also try system() instead of exec().
PDFtk versions prior to 1.45 are limited to merging 26 files because they use "handles":
/* Collate scanned pages sample */
pdftk A=even.pdf B=odd.pdf shuffle A B output collated.pdf
As you can see, "A" and "B" are "handles", but each must be a single upper-case letter, so only A-Z can be used. If you reach that limit, maybe your script outputs an error like
Error: Handle can only be a single, upper-case letter
But in 1.45 this limitation was removed; changelog extract:
You can now use multi-character input handles. Prior versions were
limited to a single character, imposing an arbitrary limitation on
the number of input PDFs when using handles. Handles still must be all
upper-case ASCII.
Maybe you only need to update your lib ;)
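If updating isn't an option, a rough sketch of building the merge command without handles at all (the paths are hypothetical):

<?php
// List the input files directly instead of assigning A=, B=, ... handles,
// so the 26-handle limit of pre-1.45 PDFtk never comes into play.
$files = glob('/path/to/pdfs/*.pdf');
$cmd = 'pdftk ' . implode(' ', array_map('escapeshellarg', $files))
     . ' cat output ' . escapeshellarg('/path/to/merged.pdf');
exec($cmd . ' 2>&1', $output, $status);
if ($status !== 0) {
    echo implode("\n", $output); // surface any PDFtk error message
}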
If I pass a local filename, the file is properly copied; however, if you leave the local filename out, you are supposed to receive the content of the file as the return value.
Example code:
$stat = $sftp->get('xmlfile.cml','xmlfile.xml');
print "$stat";
(This works fine)
$xmlcontent = $sftp->get('cp1301080801_status.xml');
print "Content of file = $xmlcontent<>";
*(This prints what looks more like the stat of the file instead of the content. It starts with the date (which is the modified timestamp of the file), followed by some numbers and the name of the web server repeated about 10 times with a number after it that increases each time - like maybe a port number or byte offset.)*
It would make things easier if I didn't have to fopen the local file after the transfer. Anyone have an idea what is going on here?
Can you post a copy of the logs? Here's an example of how to get them:
http://phpseclib.sourceforge.net/ssh/examples.html#logging
Note the define() and the $ssh->getLog() stuff.
As for the specific problem you're having... what does print "$stat" do? It should print "1".
Also, fwiw, you're fetching two different files in your example. My best guess, atm, is that you think you're opening the same file and expecting the content to be the same, when in fact they should be different, and that what you're getting from both of the $sftp->get() calls is, in fact, correct.
The logs will tell us for sure.
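Roughly like this, following the linked example (host and credentials are placeholders):

<?php
include 'Net/SFTP.php';
// Turn on complex packet logging, as in the linked example.
define('NET_SSH2_LOGGING', NET_SSH2_LOG_COMPLEX);

$sftp = new Net_SFTP('example.com');
if (!$sftp->login('username', 'password')) {
    exit('Login failed');
}

// With no local filename, get() should return the file's content as a string.
$content = $sftp->get('cp1301080801_status.xml');

echo $sftp->getLog(); // dump the packet log for inspection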
Here is the protocol:
1) I generate a text file online with PHP containing alphanumeric characters. Then I download it and note its size (from Properties menu).
2) I open the text file with Notepad++, cut all the content into a new text file, then save the new file (with the same name).
3) To my astonishment, even though both files have the exact same text content, their sizes aren't the same!
--TEST 1--
Downloaded file: 1529 KB
New copy file: 1594 KB
--TEST 2--
Downloaded file: 52 KB
New copy file: 54 KB
So what? Why am I posting this here? Because the file in question is available to my users for download on my website, and they can use it to replace a file in a game's save. However, the game reacts to the new file by rejecting it, whilst the copied one (with the above protocol) works fine.
The only difference I see between both files is their size (slight difference as shown above) - but the content and the name is the same. Any idea why there is that size difference?
This is most likely newlines that are converted between Unix (1 byte) and Windows (2 bytes).
As mentioned in the comments, it could also be encoding, but Notepad++ is pretty good at encoding. It's also unlikely to account for the difference.
You need to convert the "\r\n" to "\n" to get the smaller filesize. Here's a page I just found with a few options: http://darklaunch.com/2009/05/06/php-normalize-newlines-line-endings-crlf-cr-lf-unix-windows-mac
Another thing to watch for is a trailing newline, which is not very obvious. Again, strip it out before doing your comparison.
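For example, a small sketch of normalizing the line endings before serving the file (the filenames are placeholders):

<?php
// Convert Windows (\r\n) and old Mac (\r) line endings to Unix (\n)
// and drop a trailing newline before comparing or serving the file.
$content = file_get_contents('downloaded.txt');
$content = str_replace(array("\r\n", "\r"), "\n", $content);
$content = rtrim($content, "\n");
file_put_contents('normalized.txt', $content);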
Are your client and server different platforms, say Linux and Windows? In that case there is a difference in the way newline characters are stored, which can cause the size difference.
Another reason could be the character encoding used, but that is a little less likely.