A client needs to send a heartbeat ("beat") to a server every time the page loads.
The same file will be loaded on different computers via Resilio Sync (similar to Google Drive), not from a server!
I wanted to read a local txt file outside the synced folder so as to get a per-machine parameter (playerid) and beat accordingly.
I read around and discovered that this is not possible for security reasons (opening a local file from the browser).
Can you think of something else?
function beat(playerid) {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.open("GET", "http://192.168.1.105/ds2/controllers/latir.php?playerid=" + playerid, true);
    xmlhttp.send();
}
Solution
This is a Digital Signage Project.
I will be running a bash script on #reboot which will read a PlayerId and a VersionID stored in a file and make a curl call inside a while(true) loop every 20 seconds (sleep 20). That provides the sense of Online/Offline: if the last poll was more than 20 seconds ago, the player is offline. The curl call will also compare the VersionID against the latest version of the set of files and rsync if needed.
The HTML will be local due to an Offline First approach.
Then I have two possibilities:
- The Ajax call from the local file will be possible thanks to Chromium, which has a flag called --disable-web-security.
- The bash script can also check whether Chromium needs to update and update the Chromium instance if needed.
Related
I'm building a web app that retrieves dynamically generated content through puppeteer. I have set up (Apache + PHP) Docker containers: one for the p5js project that generates an SVG based on a (large, 2 MB) JSON file, and one container with PHP that retrieves that SVG. The containers run behind an Nginx config (Nginx for routing, Apache for quicker PHP handling). I'm using the cheapest CentOS server available on DigitalOcean, so upgrading would definitely help.
I don't want the JavaScript in the p5js project to be exposed to the public, so I thought a Node.js solution would be best in this scenario.
The PHP page does a shell_exec("node pup.js"). It basically runs in approximately 1-3 seconds, which is perfect.
The problem is when I try to test a multi-user scenario and open 5 tabs to run this PHP page: the load time climbs to 10+ seconds, which is a killer for my app.
So the question would be how to set up this architecture (php calling a node command) for a multi user environment.
===
I've tried several frameworks like x-ray, nightmare, jsdom, cheerio, axios, zombie, phantom just trying to replace puppeteer. Some of the frameworks returned nothing, some just didn't work out for me. I think I just need a headless browser solution, to be able to execute the p5js. Eventually puppeteer gets the job done, only not in a multi-user environment (I think due to my current php shell_exec puppeteer architecture).
Maybe my shell_exec workflow was the bottleneck, so I ended up building a simple node example.js which waits 5 seconds before finishing (not using puppeteer), and I ran this with several tabs simultaneously; it works like a charm. All tabs load in about 5-6 seconds.
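The test script isn't shown here, but it amounts to something along these lines:
// example.js: a stand-in workload that just waits ~5 seconds and exits, no puppeteer involved
setTimeout(() => {
    console.log('done after 5 seconds');
}, 5000);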
I've also tried pm2 to test whether my node command was the bottleneck. I did some testing on the command line with no major results, and I couldn't get PHP to run a pm2 command, so I dropped this test.
I've tried setting PuPHPeteer up, but couldn't get it to run.
At some point I thought it had something to do with multiple puppeteer browsers being launched, but I've read that this should be no problem.
The PHP looks like:
<?php
$puppeteer_command = "node /var/www/pup.js >&1";
$result = shell_exec($puppeteer_command);
echo $result;
?>
My puppeteer code:
const puppeteer = require('puppeteer');

let url = "http://the-other-dockercontainer/";
let time = Date.now();

let scrape = async () => {
    const browser = await puppeteer.launch({
        args: ['--no-sandbox']
    });
    const page = await browser.newPage();
    await page.goto(url);
    await page.waitForSelector('svg', { timeout: 5000 });
    let svgImage = await page.$('svg');
    await svgImage.screenshot({
        path: `${time}.png`,
        omitBackground: true,
    });
    await browser.close();
    return time;
}

scrape().then((value) => {
    console.log(value); // Success!
});
I was thinking about building the entire app in Node.js if that is the best solution, but I've put so many hours into this PHP infrastructure that I'd really like to get some advice :)
Since I have full control over the source and destination sites, one brainfart would be to have node serve an HTTP endpoint which accepts a JSON file and returns the SVG based on a local p5js site, but I don't know (yet) if this would be any different.
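A rough sketch of that idea (the port and the renderSvg wrapper are made up for illustration): a resident node service has no node/puppeteer start-up cost per request, and PHP would POST the JSON to it (e.g. with curl) instead of calling shell_exec.
// render-service.js (hypothetical): stays running and answers HTTP requests
const http = require('http');

http.createServer((req, res) => {
    let body = '';
    req.on('data', chunk => { body += chunk; });
    req.on('end', async () => {
        try {
            const params = JSON.parse(body || '{}');   // the (large) JSON input
            const svg = await renderSvg(params);       // hypothetical wrapper around the puppeteer code above
            res.writeHead(200, { 'Content-Type': 'image/svg+xml' });
            res.end(svg);
        } catch (err) {
            res.writeHead(500);
            res.end(String(err));
        }
    });
}).listen(3000);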
UPDATE
So thanks to some comments, I've tried a new approach: not using p5js, but native Processing code (Java). I've exported the Processing code to a Linux 64-bit application and created this little Node.js example:
var exec = require('child_process').exec;
var cmd = '/var/www/application.linux64/minimal';

exec(cmd, processing);

// Callback for the command line process
function processing(error, stdout, stderr) {
    // I could do some error checking here
    console.log(stdout);
}
When I call this node example.js from a shell_exec in PHP, I get this:
The first call takes about 2 seconds, but when I hit a lot of refreshes the time again builds up by many seconds. So clearly my understanding of multithreading is not that good, or am I missing something crucial in my testing?
I have a setup similar to yours. The biggest problem is your droplet. You should use at minimum the cheapest CPU-optimized droplet ($40 per month from memory). The cheapest generic droplets on DO are in a shared environment (noisy neighbours create performance fluctuations). You can easily test this out by making a snapshot of your server and cloning it to a new droplet.
Next, as someone else suggested, reduce the cold starts. On my server a cold start adds around 2 extra seconds. I take 10 screenshots before opening a new browser. Anything more than that and you may run into issues with memory.
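A minimal sketch of that pattern, assuming a long-running node process (rather than one shell_exec per request) so the browser is launched once and reused:
const puppeteer = require('puppeteer');

// launch once when the process starts and reuse the instance for every request
const browserPromise = puppeteer.launch({ args: ['--no-sandbox'] });

async function screenshotSvg(url, outPath) {
    const browser = await browserPromise;      // no cold start after the first call
    const page = await browser.newPage();
    try {
        await page.goto(url);
        await page.waitForSelector('svg', { timeout: 5000 });
        const svg = await page.$('svg');
        await svg.screenshot({ path: outPath, omitBackground: true });
    } finally {
        await page.close();                    // close the page, keep the browser alive
    }
}
After some number of screenshots (10 in my case) you can close and relaunch the browser to keep memory in check.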
If you are trying to use puppeteer with PHP, make sure you have Composer installed, then, inside the project folder, open a terminal window and type composer require nesk/puphpeteer.
Also install nesk/rialto, then require the autoloader and everything should work. One thing when working with puppeteer this way: where the JavaScript examples assign to a const, use a PHP $ variable, skip the await keyword, and replace "." with ->.
<?php
require 'vendor/autoload.php';

use Nesk\Puphpeteer\Puppeteer;
use Nesk\Rialto\Data\JsFunction;

$puppeteer = new Puppeteer;
$browser = $puppeteer->launch([
    'headless' => false,
    'args' => ['--proxy-server=000.00.0.0:80'], // optional proxy
]);
$bot = $browser->newPage();
$bot->goto('https://google.com'); // goes to google.com
$bot->waitForTimeout(3000); // waits
$bot->type('input[name="q"]', '"cellphonemega"'); // searches
$bot->keyboard->press('Enter'); // presses enter
$bot->waitForTimeout(8000); // waits
$bot->click('h3'); // clicks the first result heading
$bot->waitForTimeout(8000); // waits while the site loads
$data = $bot->evaluate(JsFunction::createWithBody('return document.documentElement.innerHTML')); // grabs the page HTML
$urlPath = $bot->url(); // current URL
$bot->screenshot(['path' => 'screenshot.png']); // TAKES SCREENSHOT!
$browser->close(); // shuts down
I am calling filemtime() from a PHP file executed by POST from a JavaScript/HTML app. It returns the same timestamp for a separate test HTML file every two seconds, even when I edit the test file with a text editor and can see its DTM (date/time modified) change in the local file system.
If I reload the entire app (Ctrl+F5), the timestamp reported stays the same. At times (once after 4 hours) the time stamp changes, but I don't know what makes this happen.
The PHP part of my code looks like this:
clearstatcache(true, $FileArg);
$R = filemtime($FileArg);
if ($R === false)
    echo "error: file not found";
else
    echo $R;
This code is called by synchronous Ajax, given only its PHP filename, using setInterval every 2 seconds.
Windows 10 Home, Apache 2.4.33 running locally for HTTP access, PHP 7.0.30.
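For context, the polling looks roughly like this (a sketch; the script name and element id are placeholders, not the app's real ones):
// poll the PHP script (which knows which file to check) every 2 seconds
setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "filemtime.php", false); // synchronous, as described above
    xhr.send();
    document.getElementById("mtime").textContent = xhr.responseText;
}, 2000);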
ADDED:
The behavior is the same in Firefox, Chrome, Opera, and Edge.
The results are being cached: http://php.net/manual/en/function.filemtime.php
Note: The results of this function are cached. See clearstatcache() for more details.
It almost sounds like Windows is doing some write caching...
stat() on the other hand has an additional note:
Note that time resolution may differ from one file system to another.
Maybe it's worth checking the stat() output.
edit
Maybe it's a bug, or Windows not playing nice, but you could also do a shell_exec with the Windows command that shows the DTM.
News: it turned out to be an ordinary bug in my app. I had copied my Ajax call and forgotten to edit it to point at the test file, so it applied to one of my app files instead, and the DTM only got updated when I edited that app file (FTAdjust.js).
When I specify the correct test file, the DTM updates just fine each time I edit it in another process.
It can sometimes be hard to find one's own bug even when it stares one in the face! I kept looking everywhere else but where the mistake was.
Is there a way to delete a thread from Stack Overflow, since it is irrelevant to others?
For security reasons, there is a certain file on my web server I want to be able to monitor access to. Every time it is accessed, I want to have an entry added to a MySQL log table. This way, I can actively respond to security breaches from within the web application.
The Apache HTTP Server provides logging capabilities.
The server access log records all requests processed by the server. The location and content of the access log are controlled by the CustomLog directive. The LogFormat directive can be used to simplify the selection of the contents of the logs. This section describes how to configure the server to record information in the access log.
The CustomLog directive can be used to write the log to a file. If you need to store the entries in a MySQL table, run a cron job to import the file into the database.
Further information on logs is here:
http://httpd.apache.org/docs/1.3/logs.html#accesslog
It's been removed as of PHP 7, but for anyone else who finds this post, there are a number of options within the FAM (now PECL) extension. This function http://php.net/manual/en/function.fam-monitor-file.php seems to describe what is needed here.
Additionally, you can access a lot of detail about the file's status with http://php.net/manual/en/function.stat.php. Put this within a cron or sleep-driven script and you can then see when it has changed.
The file may be accessed from three points:
Direct filesystem access
A call to a URL like www.example.com/importantfile.jpg (Apache serves the file)
A call to some PHP script on your server, www.example.com/readfile.php?name=important.jpg, which reads the file
If you are concerned only about case 2, then check Rishi Dua's solution.
But if you want more than that, you should write a script with a fileatime() call and then add it to cron to run every minute, for example.
The pseudocode for it:
<?php
$previous_access_time = get_previous_access_time(); // get the last access time you remembered in a db, text file or whatever
$current_access_time = fileatime('path/to/very_important_file.jpg');

if ($previous_access_time != $current_access_time) {
    log_access_to_db();
    save_new_access_time(); // update the new last access time
}
This solution however has some problems.
The first is that you can only get the access time, not the user id or IP of whoever accessed the file.
The second is that, as the manual says, some Unix systems do not update the access time, so the solution would fail.
If you are seriously concerned about security, then I think you have to look at an audit utility like this
I am having trouble uploading files to S3 from on one of our servers. We use S3 to store our backups and all of our servers are running Ubuntu 8.04 with PHP 5.2.4 and libcurl 7.18.0. Whenever I try to upload a file Amazon returns a RequestTimeout error. I know there is a bug in our current version of libcurl preventing uploads of over 200MB. For that reason we split our backups into smaller files.
We have servers hosted on Amazon's EC2 and servers hosted on customer's "private clouds" (a VMWare ESX box behind their company firewall). The specific server that I am having trouble with is hosted on a customer's private cloud.
We use the Amazon S3 PHP Class from http://undesigned.org.za/2007/10/22/amazon-s3-php-class. I have tried 200MB, 100MB and 50MB files, all with the same results. We use the following to upload the files:
$s3 = new S3($access_key, $secret_key, false);
$success = $s3->putObjectFile($local_path, $bucket_name,
    $remote_name, S3::ACL_PRIVATE);
I have tried setting curl_setopt($curl, CURLOPT_NOPROGRESS, false); to view the progress bar while it uploads the file. The first time I ran it with this option set it worked. However, every subsequent time it has failed. It seems to upload the file at around 3Mb/s for 5-10 seconds then drops to 0. After 20 seconds sitting at 0, Amazon returns the "RequestTimeout - Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed." error.
I have tried updating the S3 class to the latest version from GitHub but it made no difference. I also found the Amazon S3 Stream Wrapper class and gave that a try using the following code:
include 'gs3.php';
define('S3_KEY', 'ACCESSKEYGOESHERE');
define('S3_PRIVATE','SECRETKEYGOESHERE');
$local = fopen('/path/to/backup_id.tar.gz.0000', 'r');
$remote = fopen('s3://bucket-name/customer/backup_id.tar.gz.0000', 'w+r');
$count = 0;
while (!feof($local)) {
    $result = fwrite($remote, fread($local, (1024 * 1024)));
    if ($result === false) {
        fwrite(STDOUT, $count++.': Unable to write!'."\n");
    } else {
        fwrite(STDOUT, $count++.': Wrote '.$result.' bytes'."\n");
    }
}
fclose($local);
fclose($remote);
This code reads the file one MB at a time in order to stream it to S3. For a 50MB file, I get "1: Wrote 1048576 bytes" 49 times (the first number changes each time of course) but on the last iteration of the loop I get an error that says "Notice: fputs(): send of 8192 bytes failed with errno=11 Resource temporarily unavailable in /path/to/http.php on line 230".
My first thought was that this is a networking issue. We called up the customer and explained the issue and asked them to take a look at their firewall to see if they were dropping anything. According to their network administrator the traffic is flowing just fine.
I am at a loss as to what I can do next. I have been running the backups manually and using SCP to transfer them to another machine and upload them. This is obviously not ideal and any help would be greatly appreciated.
Update - 06/23/2011
I have tried many of the options below but they all produced the same result. I have found that even trying to scp a file from the server in question to another server stalls immediately and eventually times out. However, I can use scp to download that same file from another machine. This makes me even more convinced that this is a networking issue on the client's end; any further suggestions would be greatly appreciated.
This problem exists because you are trying to upload the same file again. Example:
$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
$s3->putObjectFile('file.jpg','bucket-name','newname-file.jpg');
To fix it, just copy the file and give it a new name, then upload it normally.
Example:
$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
now rename file.jpg to newname-file.jpg
$s3->putObjectFile('newname-file.jpg','bucket-name','newname-file.jpg');
I solved this problem in another way. My bug was that the filesize() function was returning a stale cached size value, so just use clearstatcache().
I have experienced this exact same issue several times.
I have many scripts right now which are uploading files to S3 constantly.
The best solution that I can offer is to use the Zend libraries (either the stream wrapper or direct S3 API).
http://framework.zend.com/manual/en/zend.service.amazon.s3.html
Since the latest release of Zend framework, I haven't seen any issues with timeouts. But, if you find that you are still having problems, a simple tweak will do the trick.
Simply open the file Zend/Http/Client.php and modify the 'timeout' value in the $config array. At the time of writing this, it was on line 114. Before the latest release I was running at 120 seconds, but now things are running smoothly with a 10-second timeout.
Hope this helps!
There are quite a few solutions available. I had this exact problem, but I didn't want to write code and debug the problem.
Initially I was searching for a way to mount the S3 bucket on the Linux machine, and found something interesting:
s3fs - http://code.google.com/p/s3fs/wiki/InstallationNotes
- this did work for me. It uses a FUSE file system + rsync to sync the files in S3. It keeps a copy of all filenames in the local system and makes them look like files/folders.
This saves a bunch of our time, plus there is no headache of writing code for transferring the files.
Then, while checking whether there were other options, I found a script which works from the CLI and can help you manage your S3 account.
s3cmd - http://s3tools.org/s3cmd - this looks pretty clear.
[UPDATE]
Found one more CLI tool - s3sync
s3sync - https://forums.aws.amazon.com/thread.jspa?threadID=11975&start=0&tstart=0 - found in the Amazon AWS community.
I don't see much difference between the two; if you are not worried about disk space, then I would choose s3fs over s3cmd. A disk makes you feel more comfortable, plus you can see the files on the disk.
Hope it helps.
You should take a look at the AWS PHP SDK. This is the AWS PHP library formerly known as Tarzan and CloudFusion.
http://aws.amazon.com/sdkforphp/
The S3 class included with this is rock solid. We use it to upload multi GB files all of the time.
I am trying to get my head round node.js...
I am very happy with my LAMP setup as it currently fulfils my requirements. However, I want to add some real-time features to my PHP app, such as showing all users currently logged into my site, and possibly chat features.
I don't want to replace my PHP backend, but I do want scalable real-time solutions.
1. Can I throw node.js into the mix to serve my needs without rebuilding the whole application server-side script?
2. How best could node.js serve my 'chat' and 'currently logged in' features?
Great to hear your views!
W.
I suggest you use Socket.io alongside node.js. Download and install the libs from http://socket.io/. You can run it alongside your Apache server with no problems.
First create a node server:
var http = require('http')
  , url = require('url')
  , fs = require('fs')
  , io = require('../') // path to your socket.io lib
  , server;

server = http.createServer(function(req, res){
    var path = url.parse(req.url).pathname;
    // serve any plain HTTP requests here if needed
});
server.listen(8084); // this could be almost any free port number

io = io.listen(server); // attach socket.io to the http server
Second, run your server from the command line using:
node /path/to/your/server.js
Third, connect to the socket using client side js:
var socket = new io.Socket(null, {port: 8084, rememberTransport: false});
socket.connect();
You will have to include the socket.io lib on the client side as well.
Send data from client side to the node server using:
socket.send({data:data});
Your server.js should also have functions for processing requests:
io.on('connection', function(client){
    // action when a client connects
    client.on('message', function(message){
        // action when the client sends a message
    });
    client.on('disconnect', function(){
        // action when the client disconnects
    });
});
There are two main ways to send data from the server to the client:
client.send({ data: data});//sends it back to the client making the request
and
client.broadcast({ data: data});//sends it to every client connected to the server
I suspect the chat as well as the logged in listing would work via Ajax.
The chat part would be pretty easy to program in Node.js. Use one of the MySQL modules for Node to connect to your existing database and query login information and such, then do all the actual chatting via Node.js. I recommend you check out Socket.io, since it makes browser/Node.js communication really trivial; this should let you focus on the actual chat logic.
Also, you could take a look at the "official" chat demo of Node.js, for some inspiration.
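A rough sketch of that idea, assuming the node mysql module and the Socket.io style shown in the answer above (io being the listener; table, column and credential names are made up):
var mysql = require('mysql');

// connection to the existing LAMP database (credentials are placeholders)
var db = mysql.createConnection({
    host: 'localhost',
    user: 'webapp',
    password: 'secret',
    database: 'myapp'
});

io.on('connection', function (client) {
    client.on('message', function (msg) {
        // look the sender up in the existing users table
        db.query('SELECT username FROM users WHERE id = ?', [msg.userId], function (err, rows) {
            if (err || !rows.length) return;
            // relay the chat line to everyone else who is connected
            client.broadcast({ from: rows[0].username, text: msg.text });
        });
    });
});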
As far as the currently-online part goes, this is never easy to implement, since all you can do is display something along the lines of "5 users active in the last X minutes".
Of course you could easily add some Ajax that queries the chat server and displays the user list from that on the homepage.
Or you go completely crazy and establish a Socket.io connection for every visitor and monitor it that way, although it's questionable whether this is worth the effort.
What about using a socket file, just like Pedro did with nginx?
http://nodetuts.com/tutorials/25-nginx-and-nodejs.html
You can run PHP from Node.js using node-php: https://github.com/mkschreder/siteboot_php
I'm running a wss (secure websocket) server alongside my LAMP setup.
Node.js can easily run alongside any other web server (Apache) you want. In KitCarrau's example, he lets node run on port 8084 - that's the port it's running on and listening to, not 80 or 443 etc. (those are usually taken by Apache anyway). But you can still use that same port to also serve http/https (in my case it just states some config and general info that the service is up).
Starting node from the console isn't the best way (when working remotely, node stops when you close the console).
I recommend taking a look at Running node as service
It's easy to track the log in real time (log with console.log("hello"); in your application) with:
tail -f /var/.../SocketServer.log
Sample script (node-server.conf):
author ....
description "node.js server"
# used to be: start on startup
# until we found some mounts weren't ready yet while booting:
start on started mountall
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
# Max open files is 1024 by default. A bit few.
limit nofile 32768 32768
script
# Not sure why $HOME is needed, but we found that it is:
export HOME="/root"
exec node /var/.../SocketServer.js >> /var/www/node/.../SocketServer.log 2>&1
end script
post-start script
# Optionally put a script here that will notify you node has (re)started
# /root/bin/hoptoad.sh "node.js has started!"
echo "\n*********\nServer started\n$(date)\n*********" >> /var/.../SocketServer.log
end script