I followed this tutorial for installing HHVM:
https://github.com/facebook/hiphop-php/wiki/Building-and-installing-HHVM-on-Ubuntu-13.04
But I can't figure out how to run it. I've gone to the hphp/hhvm directory and run this:
root#hhvm-ubuntu:~/dev/hiphop-php/hphp/hhvm# ls
CMakeFiles CMakeLists.txt hhvm main.cpp process_init.cpp
cmake_install.cmake global_variables.cpp link_hphp.sh Makefile process_init.h
The problem is that each time I run it, the server crashes. The server is also slow with HHVM installed; it's a 1 GB instance on Rackspace. But how am I supposed to run HipHop after compiling it from source?
You just run hphp/hhvm/hhvm some_file.php if you want it on the command line, or hphp/hhvm/hhvm -m server /some/document_root/ for a server. Look on the wiki for more configuration information.
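For example, from the build root (the -v overrides in the second command are a sketch based on HHVM's Server.* configuration options; adjust the port and paths to taste):

# run a single script on the command line
hphp/hhvm/hhvm some_file.php

# start HHVM in server mode with an explicit document root and port
hphp/hhvm/hhvm -m server -v Server.SourceRoot=/some/document_root/ -v Server.Port=8080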
I don't have the link handy, but 1 GB is not enough to run HipHop VM. The process itself will easily chew up that much RAM on its own. When it uses more RAM than you have, it will slow to a crawl and eventually crash.
Try using it on a 4 GB instance. You may have better luck.
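To confirm that memory is the culprit, you can watch the process's resident memory while it runs (standard procps options; this assumes the binary is named hhvm):

ps -o pid,rss,cmd -C hhvm   # RSS is resident memory in kilobytes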
Take a look at this article for some more info on configuring hhvm.
I started to use Devilbox on Mac instead of Valet Plus. Devilbox is great, but it is extremely slow. I found "Performance issues on Docker for Mac" in the documentation, so I added MOUNT_OPTIONS=,cached to the .env file. The result is better performance, but it is still too slow (30 seconds to load a page in Symfony). Devilbox itself runs fast, but projects with a cache folder do not.
My current Docker settings allocate the maximum available resources.
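For reference, the .env change mentioned above (MOUNT_OPTIONS is the Devilbox setting name; the value is exactly what the question describes):

# Devilbox .env: mount shared volumes with Docker's "cached" consistency mode
MOUNT_OPTIONS=,cached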
That might be related to this answer, which I posted last week.
Docker on macOS is very slow
Because of macOS, the Docker client simply doesn't match real Docker performance on Linux.
OK, I tried docker-sync and I did not notice any speed-up. I decided to install Valet Plus instead, as I need multiple PHP versions (easily switchable), MailHog, Xdebug, SSL on local domains, Dnsmasq, etc. All of this comes out of the box in Valet Plus. I thought it would be much better to develop in Docker, but Symfony reads a lot of cached files from disk, so it was really unusable (page loads took between 30 and 60 seconds).
I have the latest PHPUnit as a phar (4.1.3), placed in /usr/local/bin/phpunit. When I execute this file on my Vagrant host (Ubuntu 12.04, PHP 5.3.10), it takes somewhere between 30 and 60 seconds before it actually starts performing the unit tests. I cannot figure out why.
Any ideas?
I ran into a similar problem when running phars in production (AWS's phar, for instance). Unfortunately, PHP doesn't do any caching around phar archives, even when using APC as an opcode cache. So, on each request, PHP unarchives and parses the entire phar. My workaround has been to avoid phars in production unless the archive is small.
If you have the option to upgrade to PHP 5.5 with opcode caching, you shouldn't have this problem.
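If you do go that route, note that PHP 5.5's bundled OPcache is disabled for the command line by default; a sketch of the relevant php.ini directives (names are from the stock OPcache extension):

; php.ini
opcache.enable=1       ; opcode caching for web requests
opcache.enable_cli=1   ; also cache scripts run from the command line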
I am going to close this for now, as I believe the primary issue is that I am working with a shared folder, and I cannot update to 5.5 like monte suggested. I appreciate your response! I will just have to deal with it for now. If I get a moment, I will try a Vagrant box with 5.5 and an opcode cache like you suggested, and just run phpunit to see if it is faster, even in a shared folder. If so, I will change my accepted answer.
I'm running LemonStand, a PHP-based ecommerce application, on a DreamHost VPS.
About once or twice a week I get an email that my VPS went more than 10% over its memory allocation and is being automatically rebooted.
How do I find the source of this memory spike? I was trying to get New Relic installed to hunt it down, but it apparently didn't install correctly when I ran the install command via SSH, and I just don't know where to go next with it.
Is there another application or another way to hunt down the source of my memory spike?
How can I test performance of a PHP app using Apache Benchmark?
My environment is Ubuntu Linux - are there packages I can install?
If you have Apache 2 installed, Apache Benchmark (ab) is already installed; on Ubuntu it ships in the apache2-utils package. See man ab for how to use it. In most cases it's just something like
ab -n 1000 -c 10 http://localhost/path/to/app
where -n is the total number of requests to perform and -c is the number of requests to run concurrently.
Note that this way you don't test just the performance of your PHP project, but everything involved: the web server, PHP, your application, the database, the filesystem, and so on. This means that poor results can also be caused by low memory or by lots of other things running in the background. To analyze the performance of the PHP application itself, use a profiler; one is built into Xdebug.
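A minimal sketch of enabling that profiler in php.ini (directive names are from Xdebug 2.x, current at the time; the output directory is illustrative):

; php.ini -- Xdebug 2.x profiler
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp/profiles

The resulting cachegrind files can then be opened in a viewer such as KCachegrind or QCacheGrind.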
I have created a PHP script to import an RSS feed into the database. The feed, which is huge (from 2004 to 2010, approx. 2 million records), has to be inserted into the database. I have been running the script in the browser, but at the pace it is inserting (approx. 1 record per second) I suspect it would take another 20-25 days to import this data even running 24 hours a day. I have tried running it in several browser windows at the same time and have finished only 70,000 records in the last two days. I am not sure how the server would react if I ran 10-12 instances of it simultaneously.
A programmer at my client's end says that I could run it directly on the server through the command line. Could anyone tell me how much difference that would make, and what the command-line syntax is? I am on Apache with PHP/MySQL. I searched the web for an answer, but the results were quite confusing to me, as I am not a system administrator or that good with Linux, although I have done tasks like setting up SVN repositories and installing Apache modules in the past, so I hope I can manage this if someone tells me how.
Difference in speed: minimal. All you save on is blocking on network I/O and the connection (and the Apache overhead, which is negligible).
How to do it:
bash> php -f /path/to/my/php/script.php
You may only have the Apache PHP module installed (libapache2-mod-php5 on Debian/Ubuntu of that era); you may have to install the actual command-line interpreter (php5-cli; see the install command below), though a lot of distros install both. Personally, I think you have an efficiency problem in the algorithm: something taking days and days can usually be sped up by caching and worst-case performance analysis (Big-O notation).
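On Debian/Ubuntu, installing the CLI interpreter looked like this:

sudo apt-get install php5-cli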
Also, vanilla PHP isn't very fast. There are lots of ways to make it faster, but if you're doing heavy computation, you should consider C/C++, C#/Mono (maybe), or possibly Python (which can be pre-compiled, but may not actually be much faster).
Exploring these other avenues is highly recommended, though.
Only providing the filename to execute is sufficient:
php -f <YourScriptHere.php>
See the documentation for more command line options.
To run a PHP script on the command line, just execute:
php yourscript.php
If you want to keep the process running in the background, do:
php yourscript.php &
You can then run several processes at the same time. To identify the instances of the script that are currently running, execute:
ps aux | grep yourscript.php
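Note that a job started with & may be killed when you log out; nohup is the usual way to keep it alive (the log path is illustrative):

nohup php yourscript.php > yourscript.log 2>&1 &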
However, if you think it takes too long, try to find out whether there is a bottleneck in your code and optimize it.
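On that last point: an insert rate of roughly one record per second usually points at one round trip and one implicit commit per row. A hedged sketch of chunked, transactional inserts with PDO (the DSN, credentials, table, and column names are all invented for illustration):

<?php
// Hypothetical example: batch prepared-statement inserts into transactions,
// committing every 500 rows instead of once per row.
$items = array(); // fill with parsed RSS records: array('title' => ..., 'date' => ...)
$pdo = new PDO('mysql:host=localhost;dbname=feeds', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO items (title, published_at) VALUES (?, ?)');
$pdo->beginTransaction();
foreach ($items as $i => $item) {
    $stmt->execute(array($item['title'], $item['date']));
    if (($i + 1) % 500 === 0) { // commit in chunks to bound transaction size
        $pdo->commit();
        $pdo->beginTransaction();
    }
}
$pdo->commit();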
In Linux:
php -f file.php
Type
php --help
for other options.
You may also need the -n option (no php.ini file), or options to specify where php-cli.ini or php.ini can be found.
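For example (-n and -c are standard PHP CLI switches; the ini path is illustrative):

php -n yourscript.php                        # run without loading any php.ini
php -c /etc/php5/cli/php.ini yourscript.php  # load a specific php.ini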