I have been tasked with running tests against a Drupal 8 website.
I have a Linux box.
I have successfully configured Behat + Mink.
My tests run OK when I'm using the default Goutte driver in behat.yml, but when I add the @javascript tag so a scenario runs with selenium2, it takes far too long (up to 25 minutes for a login test).
So I read the docs to see if I did something wrong, but I can't understand how it works.
I have installed Selenium for Python 3, and I can do a very simple get-and-assert on a webpage; it is supposed to use Firefox in headless mode, and it runs reasonably fast (less than a minute), so I don't know what could be wrong in my PHP setup, which uses Composer.
The question is: do I need the Selenium Server all the tutorials talk about? (Those tutorials are dated.) The Selenium docs say that Selenium Server is optional and that I only need it when doing non-remote. What would that be? Does non-remote mean it is not meant to be run on a dedicated server? I only need to run my tests on the machine hosting the app.
Also, why could it be taking so long to run a simple test? What logs can I look at?
You need the Selenium server plus a driver for the specific browser when you are running on your local PC.
You need a Selenium server running, so either start one (local/non-remote) or point to a machine that already runs one (remote), for example when using services like BrowserStack or Sauce Labs; a behat.yml sketch follows below.
The @javascript tag is there so Behat knows to start a driver with JavaScript enabled.
If a simple login test takes that long, you are doing something wrong; maybe you have some fixed waits, or wait conditions that are never true and run until they time out.
Run Behat with the -vvv flag to increase the verbosity of the logs.
Debug step by step and see where the issue is; try on your local PC first.
You should look for Behat tutorials; different frameworks handle things differently: some need only the driver, some need both the driver and the Selenium server, and some need neither, because they ship scripts that download and start the server automatically.
Also read up on best practices if you are new to automation.
Another related question is this one.
For starting Selenium, check this.
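For anyone landing here, a minimal behat.yml sketch wiring a selenium2 javascript session next to the default Goutte one (the base_url, host/port and browser values are assumptions to adapt):

default:
  extensions:
    Behat\MinkExtension:
      base_url: 'http://localhost'                    # assumed site URL
      default_session: goutte
      javascript_session: selenium2
      sessions:
        goutte:
          goutte: ~
        selenium2:
          selenium2:
            wd_host: 'http://127.0.0.1:4444/wd/hub'   # where the Selenium server listens
            browser: firefox

With a layout like this, the standalone server is started separately (for example java -jar selenium-server-standalone-<version>.jar) before running Behat.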
I'm tasked with maintaining several web apps, all of them using the LAMP stack. Some of them run on PHP 5.6, some of them on PHP 7.0, some use WordPress, some use Symfony... Ideally, I'd like to set up at home testing/development environments that are as identical as possible to the production ones.
I've been investigating Docker (warning: total novice here!) to see if it suits my needs. I'll be working on Windows and Mac, and I'd like to have several LAMP environments on my machine, each with its own version of PHP/MySQL/etc., isolated from each other and all of them running in the same VM (because otherwise I might as well just use what I'm familiar with and set up different VMs). Can Docker do this?
(Sorry if this sounds like a silly question: reading about Docker, my impression was that the container philosophy allows you to do precisely what I described without wasting resources the way VMs do, and yet I haven't found any guides about running more than one LAMP environment at the same time.)
Php Docker Stack
A PHP Docker stack to run PHP apps in production and development, using Docker Compose services to run any PHP version, databases, cache, queues, logs and much more...
From now on, there is no need to keep messing around with the operating system to have a full development stack ready to build our awesome PHP apps.
It can be included in each PHP project via Composer:
https://packagist.org/packages/exadra37-docker/php-docker-stack
Or if you prefer you can clone it directly from here.
It comes with some default images for each service, but everything is configurable via .env, so we can pass any Docker image we want for any of the supported services in the Docker Compose file (see the sketch after the service list).
Php Docker Stack Services:
Http - Nginx, Apache, etc.
Php - PHP-FPM.
Database - Percona, MariaDB, MySQL, etc.
Cache - Redis, Memcached, etc.
Logs - Logstash -> Elasticsearch <- Kibana.
Queue - Beanstalkd, RabbitMQ, ActiveMQ, Apache Kafka, etc.
Cron Jobs - Just to schedule cron jobs.
Dev CLI - Access to the container shell.
Database CLI - Like the awesome mycli prompt for MySQL.
Cache CLI - Like the redis-cli.
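As an illustration of that .env-driven configuration, a compose file wired this way looks roughly like the sketch below; the variable names are hypothetical, not necessarily the package's actual keys:

# docker-compose.yml (excerpt): image names read from .env, with fallbacks
version: "3"
services:
  http:
    image: "${HTTP_IMAGE:-nginx:stable}"
    ports:
      - "80:80"
  php:
    image: "${PHP_IMAGE:-php:7.2-fpm}"
  database:
    image: "${DATABASE_IMAGE:-mysql:5.7}"

# .env: swap any service image without touching the compose file
HTTP_IMAGE=nginx:1.15
PHP_IMAGE=php:7.1-fpm
DATABASE_IMAGE=percona:5.7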
I was using it daily at my old job for development.
I am the author of it, and I have some local enhancements that need to be polished and merged upstream.
Feel free to try it and reach out to me with any doubts or feedback.
Okay, after a lot of time, I thought I should share the solution I found and that I'm currently using: devilbox. It's awesome, and once you get your head around it, it's incredibly powerful, flexible and customisable.
I set up Beanstalkd with Laravel on my local environment a month back for testing purposes. I required it via Composer, and the note I left myself for turning the queue on was "php artisan queue:work --queue=beanstalkd --tries=3". It was working great!
However, I restarted my computer for the first time since I got it running, and I have now confirmed the queue isn't running (not a surprise); I just need to get it started again. Running the command I posted above in my terminal just sits idle, which definitely wasn't happening before, and it definitely doesn't turn Beanstalkd on.
My best guess is that I'm missing a step I don't remember doing, but I can't seem to find anything that works while googling for the solution. I've been tinkering for hours now over what I know is a really simple fix.
Thanks in advance.
That command will run the workers - but unless the server is also running, there is nothing for it to connect to.
How the server gets started depends on how you have it set up. One common way on a Linux-like system would be /etc/init.d/beanstalkd start. It can be set up to auto-start on boot, but again, that depends on which OS you are using, how you have it installed, and what systems you normally use.
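For example, on a Debian/Ubuntu-style box the sequence might look like this (assuming beanstalkd was installed as a system package; adapt to your init system):

sudo /etc/init.d/beanstalkd start        # classic SysV init, as mentioned above
# or, on systemd machines:
sudo systemctl start beanstalkd
sudo systemctl enable beanstalkd         # optional: come back automatically after a reboot

# with the server up, the worker has something to connect to:
php artisan queue:work --queue=beanstalkd --tries=3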
So I am fairly new to PHP, websockets, and server management in general, but I have been tasked with designing a web app and have determined that the best way to implement it is with websockets. So I found Ratchet and began trying to get it to work. I have a Linux box with Apache already set up, and I created a new directory in the webroot and began Ratchet's tutorial. However, no matter what I did, I could not get even the in-line telnet part working.
I have a composer.phar, I have their script within my composer.json, and I followed their instructions for installing Ratchet. However, when I run the php chat_server.php command, it doesn't seem to do anything. The tutorial states that it should take control of the console, and as I saw in a video tutorial, it does.
So my ultimate question is: what is causing this not to run properly? Is it a bad installation (did I mess up installing somewhere, or just not install something that is required but was not explicitly stated)? All the code is identical to what is documented here: http://socketo.me/docs/hello-world
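For reference, the bootstrap from that tutorial boils down to something like this sketch (Chat being the MessageComponentInterface implementation the tutorial has you write; 8080 is the tutorial's port):

<?php
// chat_server.php: minimal Ratchet bootstrap in the spirit of the Hello World doc
require __DIR__ . '/vendor/autoload.php'; // adjust to where Composer put vendor/

use Ratchet\Server\IoServer;
use MyApp\Chat; // the tutorial's MessageComponentInterface implementation

// IoServer::factory() binds the socket; run() blocks, which is why the
// tutorial says the script "takes control" of the console.
$server = IoServer::factory(new Chat(), 8080);
$server->run();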
Some questions to check when Ratchet is not functioning while setting it up (condensed into commands after this list):
Is the PHP process (php chat_server.php) still running?
Are you telnetting from another CLI?
Can you check whether the port the chat server should be running on is actually allocated? (netstat -pln)
Is the zmq extension loaded in PHP? (Probably your problem)
If the above don't reveal anything, check whether a firewall is blocking the connection internally.
I would suggest starting from a clean machine and executing all the setup steps again.
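Condensing those checks into commands (assuming the tutorial's default port 8080; run each in its own terminal):

php chat_server.php                 # terminal 1: should block and print nothing
netstat -pln | grep 8080            # terminal 2: confirm something is LISTENing on the port
telnet 127.0.0.1 8080               # terminal 3: the tutorial's in-line telnet test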
I've successfully deployed the Laravel application to Heroku.
It works online.
But when I try to run "heroku local" I get:
vendor/bin/heroku-php-apache2: No such file or directory
Which makes sense, since looking into "vendor/bin", the only thing listed is:
psysh -> ../psy/psysh/bin/psysh
So, where's my heroku-php-apache2, and how do I fix this?
You should have these lines in your composer.json:
"require-dev": {
"heroku/heroku-buildpack-php": "*"
}
Be sure to run composer update after you add them.
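A quick way to apply this (composer require --dev writes the same require-dev entry for you):

composer require heroku/heroku-buildpack-php --dev
heroku local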
After extensive research, trial and error, and talking to the Heroku support team, I found out that, although Slow Loris's answer was part of the process, the following answer was given to me by Heroku's support:
To cut a long story short, heroku local is not officially supported for PHP applications. The reason is that unlike all the other languages we support on the platform, PHP has no web servers written in userland. Instead, we use PHP-FPM together with Apache or Nginx, and the boot scripts (vendor/bin/heroku-(php|hhvm)-(apache2|nginx)) dynamically inject the correct configuration for port binding and the FastCGI comms sockets.
This works with vanilla PHP and Apache builds, provided that:
1) the current user has all the correct permissions (in your case, /var/log/apache2/ isn't writable);
2) the correct proxy modules are loaded in the main httpd.conf;
3) the main httpd.conf doesn't bind to a port at all, or at least not to one under 1024 (which are reserved for superusers).
The main config also needs to be handled by each user on their own, because sometimes, the modules to be loaded are in libexec/, sometimes in lib/apache2/modules/, and so forth. Just too many variations; otherwise, we could ship a full Apache config to users and the experience would be much better.
But the problems don't end there. FPM does not work at all on Windows, and on most Linux systems, httpd is not a command that works; instead, apache2ctl handles starting and stopping, and thus, running a server in the foreground is not possible. In the end, there are simply too many possible permutations in system configs that make it impossible to ensure every user has a great experience.
It's simply the current reality in PHP land. Ruby, Python, Node, Java all have web servers that are written in each respective language, and you don't need external servers. Which also makes it possible to stream file uploads, handle web socket upgrades, and so forth. Maybe with PHP 7 we'll see something like that emerge soon (in PHP 5 it's simply not feasible at all, because a fatal error kills the engine, so your web server would be gone too).
I know this question is a little dated, but I recently deployed a Heroku app for the first time and was unable to get heroku local to work for me. I'm on the current branch of Laravel, which is 5.8, and I'm on Windows 10 using VS Code. I searched all over trying to rectify this issue and could not get it to work no matter what.
I did come up with a solution that lets me work on this locally with only a few lines in the terminal.
In VS Code I used the Git Bash terminal. Once in your Heroku project folder, run composer require laravel/homestead --dev. Once that completes, install Homestead with vendor/bin/homestead make, and then simply run vagrant up; your app will be accessible through localhost:8000.
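The whole sequence from Git Bash (per-project Homestead, as in the docs linked below; on Windows the make script may need the php prefix):

composer require laravel/homestead --dev    # pull Homestead into the project
php vendor/bin/homestead make               # generate Homestead.yaml and Vagrantfile
vagrant up                                  # boot the VM; app served on localhost:8000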
Docs - https://laravel.com/docs/5.8/homestead
Hope this helps someone!
Has anyone been able to get Xinc to run correctly under OpenBSD's chrooted default Apache? I'd like to keep our development server fully chrooted, just like our production server, so that we can be sure our code runs fine chrooted.
Have you posted the issue on the Xinc bug tracker? Xinc itself should run fine, as it runs both as a daemon and as a web app. As you alluded to, the issue may be that the daemon is not running in a chrooted environment whereas the web interface is, leading to one side or the other not finding the files.
@dragonmantank
In Xinc's case, I hope you used PEAR to install it.
pear list-files xinc/Xinc
This should do it and show you where your Xinc install put its files. So even though Xinc is "just" one big PHP script, it's still spread out into rc scripts and all those other things which are necessary to make an application run. I'm sure you don't need to add all the paths listed there, but probably some of them, in order to make it run.
Aside from Xinc itself, I think it also needs PHPUnit and a bunch of other PEAR libs to run, so what I'd propose is this:
pear config-get php_dir
Then you need to add that path (as Henrik suggested) to the chroot environment.
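A sketch of what that could look like on OpenBSD, where the Apache chroot lives under /var/www (paths are illustrative):

PEARDIR=$(pear config-get php_dir)          # e.g. /usr/local/share/pear
mkdir -p /var/www${PEARDIR}
cp -Rp ${PEARDIR}/. /var/www${PEARDIR}      # mirror the PEAR tree into the jail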
Having never used Xinc myself, I can only hint at how I usually go about chrooting apps.
First step would be to gather information on everything the app needs to run; this I usually accomplish by running systrace(1) and ldd(1) to find out what is needed to run the software.
Go through the output of
systrace -A -d. <app>
ldd <app>
and make sure that everything the app touches and needs (quite a lot of apps touch stuff they don't actually need) is available in the chroot environment. You might need to tweak configs and environment variables a bit. Also, if there is an option to have the app log to syslog, I usually do that and create a syslog socket (see the -a option of syslogd(8)) in order to reduce the number of places the app needs write access to.
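On OpenBSD, that syslog socket is one flag plus a directory inside the jail (assuming the default /var/www chroot):

# /etc/rc.conf.local: have syslogd listen on an extra socket inside the jail
syslogd_flags="-a /var/www/dev/log"

# the socket's directory must exist before syslogd starts
mkdir -p /var/www/dev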
What I just described is a generic way to make just about any program run in a chroot environment (however, if you need to import half the userland and some suid commands, you might want to just skip the chroot :). For apps running under Apache (I'm sure you're aware that OpenBSD's httpd(8) is slightly different) you also have the option (once the program has started; any dynamic libraries still need to be present in the jail) of using Apache to access the files, which lets httpd.conf import resources into the chroot environment without actually copying them.
Also useful (if slightly outdated) is this link, outlining some gotchas in chrooted PHP on OpenBSD.
First step would be to gather information on everything the app needs to run; this I usually accomplish by running systrace(1) and ldd(1) to find out what is needed to run the software.
I'll give this a try. The big issue I've found with Xinc is that while it is a PHP application, it wants to know application installation paths (yet it still spreads stuff into other folders) and runs some PHP scripts in daemon mode (those scripts being the hardest to get running). So, for example, I told it to install to /var/www/xinc and then made a symlink:
/var/www/var/www/xinc -> /var/www/xinc
and it partially worked. I got the GUI to come up, but it refused to recognize any projects that I had set up. I think the biggest problem is that part of it is running inside the chroot and the other half is running outside of it.
If all else fails I'm just going to have to build something, since we program inside chrooted environments because our production is chrooted. We've run into issues where we code outside of a chroot and then have to backtrack to find what we need to make it work inside the chroot.