So I've been trying to see if the plugin I've installed for Moodle (moodle-tool_objectfs)
is working correctly. It seems like the cron job is working perfectly, but whenever I upload any file (e.g. sample.png), it is still not stored in S3. When I check the logs in task_log it says
Execute scheduled task: Object file system upload task (tool_objectfs\task\push_objects_to_storage)
... started 10:51:02. Current memory use 32.1MB.
No candidate objects found.
... used 1 dbqueries
... used 0.0039408206939697 seconds
Scheduled task complete: Object file system upload task (tool_objectfs\task\push_objects_to_storage)
Here is my config for the plugin.
Why is it saying
No candidate objects found.
Did I miss anything, or is my setup wrong?
Never mind. I've fixed it by setting the Min. Age to 0 seconds:
Minimum age that an object must exist on the local filedir before it will be considered for transfer
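For reference, the same change can be made from a tiny CLI script instead of the admin UI. This is only a sketch: the config key name 'minimumage' is an assumption based on the setting label above, so verify it against the plugin's settings before relying on it.

<?php
// Sketch only: set the objectfs "Min. Age" to 0 via set_config().
// The key name 'minimumage' is assumed from the setting label, not verified.
define('CLI_SCRIPT', true);
require(__DIR__ . '/config.php'); // assumes the script sits in the Moodle dirroot

set_config('minimumage', 0, 'tool_objectfs');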
I will put everything in bullet points to keep the question from getting lengthy.
- I have 800,000+ rows of CSV
- Upload it through CakePHP 3's $this->Form->upload() - WORKING
- Save the file in webroot for now - WORKING
- Open the file using box/spout - WORKING
- Loop through each row and save each row to the database - PARTIALLY WORKING
So everything seems to be working fine, except that after about 5 minutes and ~200,000 records saved in the database it returns a 503 Service Unavailable error. I am able to save all 800,000 rows on my localhost, but this error appears on the live site, which is hosted on GoDaddy.
Are there any settings I can change to prevent this error from happening? Maybe increase a timeout for this specific error? (I have set_time_limit(0) and ini_set('memory_limit','-1') just to make it work. I have even set max_execution_time to a bigger number in the server's php.ini.)
I'm not really sure what I can do to fix this. Any suggestion would be appreciated, thanks!
If this is a one-time job, then you can split the CSV data into multiple CSVs. For example, at 100,000 rows per CSV you would need to upload 8 to 10 CSVs to save all the records in the live database.
If it is dynamic from the user's end and you don't know how many rows the CSV will contain, you should limit the number of rows.
Or you can save the CSV file on the server (maybe in webroot), tell the user to wait, and save the CSV records to the database through a Cake Shell. A Cake Shell does not run over HTTP, so you don't have any execution time limit once you use set_time_limit(0) and ini_set('memory_limit','-1').
A Cake Shell is written in much the same way as code in a controller and model; see the sketch below.
Details: https://book.cakephp.org/3/en/console-and-shells/shells.html
We use Cake Shells for larger tasks, like creating reports for 200,000 users in one shot.
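For what it's worth, here is a rough sketch of such a shell; the class, table, and column names are made up for illustration, and fgetcsv() stands in for box/spout:

<?php
// src/Shell/CsvImportShell.php -- sketch only; adapt names to your app.
namespace App\Shell;

use Cake\Console\Shell;

class CsvImportShell extends Shell
{
    public function main($path = null)
    {
        $this->loadModel('Records'); // hypothetical table class

        $handle = fopen($path, 'r');
        $batch = [];
        while (($row = fgetcsv($handle)) !== false) {
            $batch[] = ['col_a' => $row[0], 'col_b' => $row[1]];
            if (count($batch) === 1000) {
                // save in chunks so memory stays flat
                $this->Records->saveMany($this->Records->newEntities($batch));
                $batch = [];
            }
        }
        if ($batch) {
            $this->Records->saveMany($this->Records->newEntities($batch));
        }
        fclose($handle);
        $this->out('Import finished.');
    }
}

You would then run it with bin/cake csv_import /path/to/file.csv, outside of any HTTP request.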
I am calling filemtime() from a PHP file executed by POST from a JavaScript/HTML app. It returns the same timestamp for a separate test HTML file every two seconds, even when I edit the test file with a text editor and can see its DTM change in the local file system.
If I reload the entire app (Ctrl+F5), the reported timestamp stays the same. At times (once after 4 hours) the timestamp changes, but I don't know what makes this happen.
The PHP part of my code looks like this:
clearstatcache(true, $FileArg);
$R = filemtime($FileArg);
if ($R === false)
    echo "error: file not found";
else
    echo $R;
This code is called by synchronous Ajax, given only its PHP filename, using setInterval every 2 seconds.
Windows 10 Home, Apache 2.4.33 running locally for HTTP access, PHP 7.0.30.
ADDED:
The behavior is the same in Firefox, Chrome, Opera, and Edge.
The results are being cached: http://php.net/manual/en/function.filemtime.php
Note: The results of this function are cached. See clearstatcache() for more details.
It almost sounds like Windows is doing some write caching...
stat(), on the other hand, has an additional note:
Note: Time resolution may differ from one file system to another.
Maybe worth checking the stat() output.
Edit: Maybe it's a bug, or Windows not playing nice, but you could also do a shell_exec with the Windows command that shows the DTM.
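Something along these lines (only a sketch; dir /T:W prints the last-write time on Windows):

clearstatcache(true, $FileArg);
$mtime = filemtime($FileArg);                                // what PHP reports
$stat  = stat($FileArg);                                     // mtime is $stat['mtime']
$dir   = shell_exec('dir /T:W ' . escapeshellarg($FileArg)); // what Windows reports
echo "filemtime: $mtime\n";
echo "stat mtime: " . $stat['mtime'] . "\n";
echo $dir;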
News: it turns out to be an ordinary bug in my app. I copied my Ajax call and forgot to edit it to apply to the test file, so it applied to one of my app files instead, and the DTM only got updated when I edited that app file (FTAdjust.js).
When I specify the correct test file, the DTM updates just fine each time I edit it in another process.
It can sometimes be hard to find one's own bug even when it is staring one in the face! I kept looking everywhere but where the mistake was.
Is there a way to delete a thread from Stack Overflow, since it is irrelevant to others?
I have an issue with Magento Custom Cache.
I have an Observer method, launched by cron, in which I write a value to the cache:
Mage::app()->saveCache($visitorsCount, 'cached_google_analytics_visitors_count', [], $twoDaysInSeconds);
The value is successfully saved and I'm able to extract it from the cache at this point. And the files
mage---4ae_CACHED_GOOGLE_ANALYTICS_VISITORS_COUNT
and
mage---internal-metadatas---4ae_CACHED_GOOGLE_ANALYTICS_VISITORS_COUNT
are there too.
Now it's time to extract the value from the cache in my block, so I do it this way:
$visitorsCount = Mage::app()->loadCache('cached_google_analytics_visitors_count');
But it returns false. I've investigated it, and the reason is that there is no CACHED_GOOGLE_ANALYTICS_VISITORS_COUNT entry in metadatasArray in the Zend_Cache_Backend_File class, even though the metadata file exists.
What's more, metadatasArray does contain this value while I'm writing the value to the cache.
Hoping for your help.
Regards, Nikolay
I've found the reason for the error:
cron was running as a different user than the web server, so the PHP process didn't have permission to read the file with the metadata. I've launched cron as the www-data user and it works correctly now.
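For anyone hitting the same thing, the fix amounts to running the Magento cron under the web-server user; the paths below are placeholders for a typical Magento 1 install:

# edit the www-data crontab:
#   crontab -u www-data -e
# and schedule Magento's cron there, e.g.
*/5 * * * * /usr/bin/php /var/www/magento/cron.php
# alternatively, keep the existing cron user and fix ownership of the cache dir:
#   chown -R www-data:www-data /var/www/magento/var/cache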
I'm playing with MongoDB and I'm trying to import .csv files into the DB, and I'm getting a strange error. In the process of uploading, the script just ends for no reason, and when I try to run it again nothing happens; the only solution is to restart Apache. I have already set an unlimited timeout in php.ini. Here is the script:
$dir = "tokens/";
$fileNames = array_diff( scandir("data/"), array(".", "..") );
foreach($fileNames as $filename)
if(file_exists($dir.$filename))
exec("d:\mongodb\bin\mongoimport.exe -d import -c ".$filename." -f Date,Open,Next,Amount,Type --type csv --file ".$dir.$filename."");
I have around 7,000 .csv files and it manages to insert only about 200 before the script ends.
Can anyone help? I would appreciate any help.
You are missing back-end infrastructure. It is just insane to try to load 7,000 files into a database as part of a web request that is supposed to be short-lived and is expected, by some of the software components as well as the end user, to last only a few seconds or maybe a minute.
Instead, create a back-end service with command and control for this procedure. In the web app, write each file name to be processed to a database table, or even a plain text file on the server, and then tell the end user that their request has been queued and will be processed within the next NN minutes. Then have a cron job that runs every 5 minutes (or even every minute) that looks in the right place for work to do, can create reports of success or failure, and/or sends an email to tell the original requester that it is done.
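A rough sketch of that worker (table name, columns, DSN, and paths are all illustrative; the mongoimport call mirrors the one in the question):

<?php
// queue_worker.php -- run from cron, not from the web server (sketch only).
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// pick up a batch of pending files recorded by the web request
$rows = $pdo->query("SELECT id, filename FROM import_queue WHERE status = 'pending' LIMIT 50");
foreach ($rows as $row) {
    $file = 'tokens/' . basename($row['filename']);
    exec('d:\mongodb\bin\mongoimport.exe -d import -c tokens -f Date,Open,Next,Amount,Type --type csv --file ' . escapeshellarg($file), $output, $exitCode);

    // record success or failure so you can report back to the requester
    $stmt = $pdo->prepare("UPDATE import_queue SET status = ? WHERE id = ?");
    $stmt->execute([$exitCode === 0 ? 'done' : 'failed', $row['id']]);
}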
If this is intended as an import script and you are set on using PHP, it would be preferable to at least use the PHP CLI environment instead of performing this task through a web server. As it stands, it appears the CSV files are located on the server itself, so I see no reason to get HTTP involved. This would avoid an issue where the web request terminates and abruptly aborts the import process.
For processing the CSV, I'd start by looking at fgetcsv or str_getcsv. The mongoimport command really does very little in the way of validation and sanitization. Parsing the CSV yourself will allow you to skip records that are missing fields, provide default values where necessary, or take other appropriate action. As you iterate through records, you can collect documents to insert in an array and then pass the results on to MongoCollection::batchInsert() in batches. The driver will take care of splitting up large batches into chunks to actually send over the wire in 16MB messages (MongoDB's document size limit, which also applies to wire protocol communication).
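As a sketch of that approach with the legacy mongo extension (the file path, collection name, and field mapping are illustrative and mirror the mongoimport flags in the question):

<?php
// CLI sketch only: parse one CSV yourself and insert in batches.
$mongo      = new MongoClient();
$collection = $mongo->selectCollection('import', 'tokens');

$handle = fopen('tokens/somefile.csv', 'r');
$batch  = [];
while (($row = fgetcsv($handle)) !== false) {
    if (count($row) < 5) {
        continue; // skip records with missing fields
    }
    $batch[] = [
        'Date'   => $row[0],
        'Open'   => $row[1],
        'Next'   => $row[2],
        'Amount' => $row[3],
        'Type'   => $row[4],
    ];
    if (count($batch) === 1000) {
        $collection->batchInsert($batch); // driver splits this into 16MB wire messages
        $batch = [];
    }
}
if ($batch) {
    $collection->batchInsert($batch);
}
fclose($handle);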
I am having trouble uploading files to S3 from one of our servers. We use S3 to store our backups, and all of our servers are running Ubuntu 8.04 with PHP 5.2.4 and libcurl 7.18.0. Whenever I try to upload a file, Amazon returns a RequestTimeout error. I know there is a bug in our current version of libcurl preventing uploads of over 200MB; for that reason we split our backups into smaller files.
We have servers hosted on Amazon's EC2 and servers hosted on customer's "private clouds" (a VMWare ESX box behind their company firewall). The specific server that I am having trouble with is hosted on a customer's private cloud.
We use the Amazon S3 PHP Class from http://undesigned.org.za/2007/10/22/amazon-s3-php-class. I have tried 200MB, 100MB and 50MB files, all with the same results. We use the following to upload the files:
$s3 = new S3($access_key, $secret_key, false);
$success = $s3->putObjectFile($local_path, $bucket_name, $remote_name, S3::ACL_PRIVATE);
I have tried setting curl_setopt($curl, CURLOPT_NOPROGRESS, false); to view the progress bar while it uploads the file. The first time I ran it with this option set it worked. However, every subsequent time it has failed. It seems to upload the file at around 3Mb/s for 5-10 seconds then drops to 0. After 20 seconds sitting at 0, Amazon returns the "RequestTimeout - Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed." error.
I have tried updating the S3 class to the latest version from GitHub but it made no difference. I also found the Amazon S3 Stream Wrapper class and gave that a try using the following code:
include 'gs3.php';
define('S3_KEY', 'ACCESSKEYGOESHERE');
define('S3_PRIVATE', 'SECRETKEYGOESHERE');

$local  = fopen('/path/to/backup_id.tar.gz.0000', 'r');
$remote = fopen('s3://bucket-name/customer/backup_id.tar.gz.0000', 'w+r');

$count = 0;
while (!feof($local)) {
    $result = fwrite($remote, fread($local, (1024 * 1024)));
    if ($result === false) {
        fwrite(STDOUT, $count++ . ': Unable to write!' . "\n");
    } else {
        fwrite(STDOUT, $count++ . ': Wrote ' . $result . ' bytes' . "\n");
    }
}
fclose($local);
fclose($remote);
This code reads the file 1MB at a time in order to stream it to S3. For a 50MB file, I get "1: Wrote 1048576 bytes" 49 times (the first number changes each time, of course), but on the last iteration of the loop I get an error that says "Notice: fputs(): send of 8192 bytes failed with errno=11 Resource temporarily unavailable in /path/to/http.php on line 230".
My first thought was that this is a networking issue. We called up the customer, explained the issue, and asked them to take a look at their firewall to see if they were dropping anything. According to their network administrator the traffic is flowing just fine.
I am at a loss as to what I can do next. I have been running the backups manually and using SCP to transfer them to another machine and upload them from there. This is obviously not ideal, and any help would be greatly appreciated.
Update - 06/23/2011
I have tried many of the options below, but they all produced the same result. I have found that even trying to scp a file from the server in question to another server stalls immediately and eventually times out. However, I can use scp to download that same file from another machine. This makes me even more convinced that this is a networking issue on the client's end; any further suggestions would be greatly appreciated.
This problem exists because you are trying to upload the same file again. Example:
$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
$s3->putObjectFile('file.jpg','bucket-name','newname-file.jpg');
To fix it, just copy the file, give it a new name, and then upload it normally.
Example:
$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
// now rename file.jpg to newname-file.jpg
$s3->putObjectFile('newname-file.jpg','bucket-name','newname-file.jpg');
I solved this problem another way. My bug was that the filesize() function returned an invalid cached size value, so just use clearstatcache().
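In code, that fix amounts to something like this (sketch; $local_path is the file being uploaded):

clearstatcache(); // drop the cached stat info so filesize() is current
$size = filesize($local_path);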
I have experienced this exact same issue several times.
I have many scripts right now which are uploading files to S3 constantly.
The best solution that I can offer is to use the Zend libraries (either the stream wrapper or direct S3 API).
http://framework.zend.com/manual/en/zend.service.amazon.s3.html
Since the latest release of Zend Framework, I haven't seen any issues with timeouts. But if you find that you are still having problems, a simple tweak will do the trick.
Simply open the file Zend/Http/Client.php and modify the 'timeout' value in the $config array. At the time of writing it is on line 114. Before the latest release I was running with 120 seconds, but now things run smoothly with a 10-second timeout.
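If you would rather not edit the library file, I believe the same option can be passed to the HTTP client at runtime; treat the exact method and option names below as assumptions to verify against the ZF1 docs:

require_once 'Zend/Service/Amazon/S3.php';

// raise the timeout on the HTTP client Zend_Service_Amazon_S3 uses (sketch only)
Zend_Service_Amazon_S3::getHttpClient()->setConfig(array('timeout' => 120));

$s3 = new Zend_Service_Amazon_S3($accessKey, $secretKey);
$s3->putFile('/path/to/backup_id.tar.gz.0000', 'bucket-name/customer/backup_id.tar.gz.0000');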
Hope this helps!
There are quite a few solutions available. I had this exact problem, but I didn't want to write code and figure out the problem.
Initially I was searching for a way to mount an S3 bucket on the Linux machine, and found something interesting:
s3fs - http://code.google.com/p/s3fs/wiki/InstallationNotes - this did work for me. It uses a FUSE file system + rsync to sync the files with S3. It keeps a copy of all filenames in the local system and makes them look like files/folders.
This saves a bunch of our time + no headache of writing code for transferring the files.
Then, when I was trying to see if there are other options, I found a command-line script that can help you manage an S3 account:
s3cmd - http://s3tools.org/s3cmd - this looks pretty clear.
[UPDATE]
Found one more CLI tool - s3sync:
s3sync - https://forums.aws.amazon.com/thread.jspa?threadID=11975&start=0&tstart=0 - found in the Amazon AWS forums.
I don't see a big difference between the two; if you are not worried about disk space, I would choose s3fs over s3cmd. A disk makes you feel more comfortable + you can see the files on the disk.
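For reference, typical usage looks roughly like this (bucket name, mount point, and paths are placeholders):

# mount the bucket with s3fs and copy backups like ordinary files
s3fs bucket-name /mnt/s3-backups -o passwd_file=/etc/passwd-s3fs
cp /path/to/backup_id.tar.gz.0000 /mnt/s3-backups/customer/

# or push a single file with s3cmd instead
s3cmd put /path/to/backup_id.tar.gz.0000 s3://bucket-name/customer/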
Hope it helps.
You should take a look at the AWS SDK for PHP. This is the AWS PHP library formerly known as Tarzan and CloudFusion.
http://aws.amazon.com/sdkforphp/
The S3 class included with it is rock solid. We use it to upload multi-GB files all the time.
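Roughly like this with that SDK; the class and method names (AmazonS3, create_object, the 'fileUpload' option) are from memory of the 1.x docs, so double-check them before use:

require_once 'sdk.class.php';

$s3 = new AmazonS3(); // credentials come from the SDK's config file
$response = $s3->create_object('bucket-name', 'customer/backup_id.tar.gz.0000', array(
    'fileUpload' => '/path/to/backup_id.tar.gz.0000',
    'acl'        => AmazonS3::ACL_PRIVATE,
));
var_dump($response->isOK());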