I am using PHP 5.4 RC5, and starting the server via terminal
php -S localhost:8000
Currently using Aura.Router , and at the root I have index.php file with code
<?php
$map = require '/Aura.Router/scripts/instance.php';
$map->add('home', '/');
$map->add(null, '/{:controller}/{:action}/{:id}');
$map->add('read', '/blog/read/{:id}{:format}', [
    'params' => [
        'id'     => '(\d+)',
        'format' => '(\.json|\.html)?',
    ],
    'values' => [
        'controller' => 'blog',
        'action'     => 'read',
        'format'     => '.html',
    ]
]);
$path  = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$route = $map->match($path, $_SERVER);
if (! $route) {
    // no route object was returned
    echo "No application route was found for that URI path.";
    exit;
}
echo " Controller : " . $route->values['controller'];
echo " Action : " . $route->values['action'];
echo " Format : " . $route->values['format'];
A request for http://localhost:8000/blog/read/1 works as expected.
But when a request with a .json or .html suffix comes in, such as http://localhost:8000/blog/read/1.json or http://localhost:8000/blog/read/1.html, PHP throws
Not Found
The requested resource /blog/read/1.json was not found on this server.
As I am running the built-in PHP server, where can I fix this so it doesn't throw the file-not-found error for .html and .json URLs?
Or do I need to install Apache and enable mod_rewrite and the like?
You are starting the PHP built-in webserver without specifying a router script:
php -S localhost:8000
Instead add your router script:
php -S localhost:8000 router.php
A router script should either handle the request itself, or return FALSE in case it wants the standard routing to apply. Compare the Built-in web server docs.
I have no clue if Aura.Router offers support for the built-in web server out-of-the-box or if it requires you to write an adapter for it. Like you would need to configure your webserver for that router library, you need to configure the built-in web server, too. That's the router.php script in the example above.
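A minimal router script, sketched under the assumption that index.php sits in the document root (the file name router.php and the helper function below are illustrative, not part of Aura.Router):

```php
<?php
// router.php - hypothetical router script for "php -S localhost:8000 router.php".

// Real files on disk (assets, images, ...) should be served by the
// built-in server directly; everything else goes to the front controller.
function serve_directly($uri, $docroot)
{
    $path = parse_url($uri, PHP_URL_PATH);
    return $path !== '/' && is_file($docroot . $path);
}

if (PHP_SAPI === 'cli-server') {
    if (serve_directly($_SERVER['REQUEST_URI'], __DIR__)) {
        return false; // let the built-in server deliver the asset itself
    }
    // Dispatch everything else - including /blog/read/1.json - to index.php.
    require __DIR__ . '/index.php';
}
```

Because /blog/read/1.json does not correspond to a file on disk, serve_directly() returns false for it and the request reaches Aura.Router instead of producing the built-in server's 404.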
You can specify the document root by passing the -t option, like so:
php -S localhost:8888 -t c:\xampp\htdocs
Sorry, I'm not familiar with Aura.Router. However, I would use whatever is going on the production server: you might run into unexpected errors when the project goes live if you don't keep the same program versions on your test and production servers.
I'm facing a problem with my Laravel application. It is deployed on the server and uses nginx as the main web server.
I am unable to read two files from the path I set on the deployment server, but in the local environment it works fine. These two files are confidential: one is a .crt file and the second is a .pem file. I placed both files in the laravel-app/app/Certificates directory.
laravel-app/app/Certificates/test2-2023_cert.pem
laravel-app/app/Certificates/curl-bundle.crt
The file path in .env is set like this:
PEM=/Certificates/test2-2023_cert.pem
CRT=/Certificates/curl-bundle.crt
My config/services file:
<?php
return [
    'pay' => [
        'pem' => app_path() . env('PEM', ''),
        'crt' => app_path() . env('CRT', '')
    ]
];
My Controller construct function:
public function __construct()
{
$this->pem = config('services.pay.pem');
$this->crt = config('services.pay.crt');
}
It is returning the following path on server:
/var/www/laravel-app/app/Certificates/test2-2023_cert.pem
/var/www/laravel-app/app/Certificates/curl-bundle.crt
but the server is not loading the files and gives me an error response in the API call. What am I doing wrong?
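Since the resolved paths look right, a first thing worth checking on the server is whether the files actually exist at those paths and are readable by the PHP/nginx user. A small diagnostic sketch, using the paths from the question (the helper name is made up):

```php
<?php
// Hypothetical diagnostic: report existence and readability of the
// certificate files at the paths resolved by config/services.php.
function describe_file($path)
{
    return [
        'exists'   => file_exists($path),
        'readable' => is_readable($path),
    ];
}

$pem = describe_file('/var/www/laravel-app/app/Certificates/test2-2023_cert.pem');
$crt = describe_file('/var/www/laravel-app/app/Certificates/curl-bundle.crt');

// On the broken server one of these will typically show exists=false
// (file not deployed / wrong path) or readable=false (permissions).
var_dump($pem, $crt);
```

Common culprits are deployment scripts that exclude files under app/, or the php-fpm/nginx user lacking read permission on the Certificates directory.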
I am trying to do a scan and scroll operation on an index as shown in the example :
$client = ClientBuilder::create()->setHosts([MYESHOST])->build();
$params = [
    "search_type" => "scan",  // use search_type=scan
    "scroll"      => "30s",   // how long between scroll requests. should be small!
    "size"        => 50,      // how many results *per shard* you want back
    "index"       => "my_index",
    "body"        => [
        "query" => [
            "match_all" => []
        ]
    ]
];

$docs = $client->search($params);   // Execute the search
$scroll_id = $docs['_scroll_id'];   // The response will contain no results, just a _scroll_id

// Now we loop until the scroll "cursors" are exhausted
while (true) {

    // Execute a Scroll request
    $response = $client->scroll([
        "scroll_id" => $scroll_id, // ...using our previously obtained _scroll_id
        "scroll"    => "30s"       // and the same timeout window
    ]);

    // Check to see if we got any search hits from the scroll
    if (count($response['hits']['hits']) > 0) {
        // If yes, Do Work Here

        // Get new scroll_id
        // Must always refresh your _scroll_id! It can change sometimes
        $scroll_id = $response['_scroll_id'];
    } else {
        // No results, scroll cursor is empty. You've exported all the data
        break;
    }
}
The first $client->search($params) API call executes fine and I am able to get back the scroll id. But the $client->scroll() API call fails and I get the exception: "Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster"
I am using Elasticsearch 1.7.1 and PHP 5.6.11
Please help
I found the PHP driver for Elasticsearch to be riddled with issues. The solution I went with was to just implement the RESTful API with curl via PHP; everything worked much quicker and debugging was much easier.
I would guess the example is not up to date with the version you're using (the link you've provided is for 2.0, and you're saying you use 1.7.1). Just add inside the loop:
try {
    $response = $client->scroll([
        "scroll_id" => $scroll_id, // ...using our previously obtained _scroll_id
        "scroll"    => "30s"       // and the same timeout window
    ]);
} catch (Elasticsearch\Common\Exceptions\NoNodesAvailableException $e) {
    break;
}
Check if your server is running with the following command:
service elasticsearch status
I had the same problem and solved it.
I had added script.disable_dynamic: true to elasticsearch.yml, as explained in the DigitalOcean tutorial https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-14-04, so the Elasticsearch server was not starting.
I removed the following line from elasticsearch.yml:
script.disable_dynamic: true
then restarted the Elasticsearch service and set the network host to local ("127.0.0.1").
I would recommend using the PHP curl lib directly for Elasticsearch queries.
I find it easier to use than any other Elasticsearch client lib: you can simulate any query using curl on the command line, and you can find many examples, documentation and discussions on the internet.
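To illustrate the idea, here is a dependency-free sketch of the same match_all query sent over plain HTTP with PHP streams (the same approach works with ext/curl; host, port and index name are assumptions):

```php
<?php
// Hypothetical match_all query against a local Elasticsearch node,
// sent as a raw HTTP request instead of going through the client lib.
$body = json_encode(['query' => ['match_all' => new stdClass()]]);

$context = stream_context_create([
    'http' => [
        'method'        => 'POST',
        'header'        => "Content-Type: application/json\r\n",
        'content'       => $body,
        'timeout'       => 2,
        'ignore_errors' => true, // return the body even on 4xx/5xx
    ],
]);

// false if the node is unreachable, otherwise the raw JSON response.
$response = @file_get_contents('http://127.0.0.1:9200/my_index/_search', false, $context);
$result   = $response === false ? null : json_decode($response, true);
```

Debugging is straightforward because you can replay the exact same request with command-line curl and inspect the raw response.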
Maybe you should try to telnet to your machine:
telnet [your_es_host] [your_es_port]
to check if you can access it.
If not, please try to open that port or disable your machine's firewall.
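The same reachability check can be scripted from PHP; a small sketch (host and port below are assumptions, 9200 being Elasticsearch's default HTTP port):

```php
<?php
// Hypothetical connectivity probe for the Elasticsearch HTTP port.
$errno  = 0;
$errstr = '';
$fp = @fsockopen('127.0.0.1', 9200, $errno, $errstr, 2.0);

$reachable = ($fp !== false);
if ($reachable) {
    fclose($fp);
    echo "Elasticsearch port is reachable\n";
} else {
    echo "Cannot connect: $errstr ($errno)\n";
}
```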
That error basically means it can't find your cluster, likely due to misconfiguration on either the client's side or the server's side.
I have had the same problem with scroll: it was working with certain indexes but not with others. It must have been a bug in the driver, as it went away after I updated the elasticsearch/elasticsearch package from 2.1.3 to 2.2.0.
Uncomment the network.host line in elasticsearch.yml and set it to 127.0.0.1, like this:
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
I use Elasticsearch 2.2 in Magento 2 under LXC container.
I set up the Elasticsearch server in Docker as per the doc, https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html, but it used a different network (networks: - esnet) and could not talk to the application network. After removing the networks setting, it works well.
Try:
Stop your elasticsearch service if it's already running
Go to your elasticsearch directory via terminal, run:
> ./bin/elasticsearch
This worked for me.
We are trying to recreate some of the responses from a live API locally for testing purposes.
We are trying to build a very basic PHP replica that responds to the Ajax requests with JSON. The code below is what I have it returning right now as a string, and the client on the other end throws an error:
"Uncaught TypeError: Cannot read property 'instanceId' of undefined".
code:
$var = "{'request':{'instanceId':'1234546','usage':'1'}}";
echo($var);
We have tested it and it works with the live API, so it's something I am doing wrong when trying to return the dummy JSON data. As far as I am aware this is not a valid JSON response; is there a way to easily 'fake' the response with something like I have above?
That isn't valid json. Reverse your quotes. Single on the outside, double on the inside.
You may also need to return a correct content-type header.
$var = '{"request":{"instanceId":"1234546","usage":"1"}}';
header("Content-Type: application/json");
echo($var);
Or better yet:
$obj = array("request" => array("instanceId" => "123456", "usage" => "1"));
header("Content-Type: application/json");
echo(json_encode($obj));
To keep your tests local you can use Jaqen, a very light server built for testing scripts that depend on an API.
You specify whatever you want it to respond with directly in the request.
For example:
# Console request to a Jaqen instance listening at localhost:9000
$ curl 'http://localhost:9000/echo.json?request\[instanceId\]=123456&request\[usage\]=1' -i
=> HTTP/1.1 200 OK
=> Content-Type: application/json
...
=> {"request":{"instanceId":"123456","usage":"1"}}
There are several ways to tell it what to do, and it also serves static files so you can load test pages and assets; check out the documentation for more.
It's built on node.js so it's really easy to install and use it locally.
# To install it (once you have node.js):
$ npm install -g jaqen
# To run it just type 'jaqen' on the console:
$ jaqen
=> Jaqen v1.0.0 is listening on port 9000.
=> Press Ctl+C to stop the server.
That's it!
I'm trying to set up the cron jobs for my CodeIgniter application, however when I run the cron, it throws memcached errors:
PHP Fatal error: Call to a member function get() on a non-object in /var/www/domain.com/www/dev/system/libraries/Cache/drivers/Cache_memcached.php on line 50
Fatal error: Call to a member function get() on a non-object in /var/www/domain.com/www/dev/system/libraries/Cache/drivers/Cache_memcached.php on line 50
I have no idea why this is being thrown: I can't find any errors in my cron job file, nor can I work out how to solve the problem, because I don't know where this is being called from. I looked into my autoloaded libraries and helpers; none of them seem to be wrong.
I can also confirm that memcached is installed; if I visit my site, memcached indeed works.
I tried suppressing the get() in Cache_memcached.php by commenting it out with a #, but this didn't help, because then no output is shown (and there is supposed to be output).
The command I run for the cron (user: www-data) is:
/usr/bin/php -q /var/www/domain.com/www/dev/index.php cron run cron
I'm running Ubuntu 11.10 x86_64.
This is my cron file:
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

class Cron extends CI_Controller {

    var $current_cron_tasks = array('cron');

    public function run($mode)
    {
        if ($this->input->is_cli_request())
        {
            if (isset($mode) || ! empty($mode))
            {
                if (in_array($mode, $this->current_cron_tasks))
                {
                    $this->benchmark->mark('cron_start');

                    if ($mode == 'cron')
                    {
                        if ($this->cache->memcached->get('currency_cache'))
                        {
                            if ($this->cache->memcached->delete('currency_cache'))
                            {
                                $this->load->library('convert');
                                $this->convert->get_cache(true);
                            }
                        }

                        echo $mode . ' executed successfully';
                    }

                    $this->benchmark->mark('cron_end');
                    $elapsed_time = $this->benchmark->elapsed_time('cron_start', 'cron_end');
                    echo $elapsed_time;
                }
            }
        }
    }
}
The first thing to try would be the following to determine if memcached is supported.
var_dump($this->cache->memcached->is_supported());
The second thing to ensure is that you've got a memcached.php file in application/config/
It should contain a multidimensional array of memcached hosts with the following keys:
host
port
weight
The following example defines two servers. The array keys server_1 and server_2 are irrelevant; they can be named however you like.
$config = array(
    'server_1' => array(
        'host'   => '127.0.0.1',
        'port'   => 11211,
        'weight' => 1
    ),
    'server_2' => array(
        'host'   => '127.0.0.2',
        'port'   => 11211,
        'weight' => 1
    )
);
The next thing I'd try is to check whether the controller can be run in the web browser as opposed to the CLI, or whether you get the same error there.
Also, explicitly loading the memcached driver might be worthwhile trying. The following will load the memcached driver, and failing that call upon the file cache driver.
$this->load->driver('cache', array('adapter' => 'memcached', 'backup' => 'file'));
Using this method allows you to call $this->cache->get(); to take into account the fallback too.
Another thing to check is that you're not using separate php.ini files for web and CLI.
On Ubuntu it's located in
/etc/php5/cli/php.ini
And you should ensure that the following line is present, and not commented out
extension=memcache.so
Alternatively, you can create a file /etc/php5/conf.d/memcache.ini with the same contents.
Don't forget to restart services after changing configuration files.
You can check that memcached is indeed set up correctly for the CLI by executing the following:
php -i | grep memcache
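The same check can be done from PHP itself; a small sketch (note that depending on configuration the CI driver may rely on either the memcache or the memcached extension):

```php
<?php
// Hypothetical CLI check: is a memcache extension available to this PHP binary?
$has_memcache  = extension_loaded('memcache');
$has_memcached = extension_loaded('memcached');

if (!$has_memcache && !$has_memcached) {
    echo "Neither memcache nor memcached is loaded in the CLI php.ini\n";
} else {
    echo "memcache: "   . var_export($has_memcache, true)
       . ", memcached: " . var_export($has_memcached, true) . "\n";
}
```

Run it with the same binary your cron uses (/usr/bin/php) to rule out a web/CLI php.ini mismatch.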
The problem is that $this->cache->memcached is NULL (or otherwise uninitialized), meaning that the cache hasn't been initialized.
An easy fix would be to simply create the memcache object yourself. The proper fix, however, would be to look through the source and trace how the memcache object normally gets instantiated: look for new Memcache and set a debug_print_backtrace() there, trace the debug stack back, and compare it with what your cron does; look where it goes wrong, then correct it. This is basic debugging btw, sorry.
Also, make sure you do load the drivers. If your cron uses a different bootstrap than your normal index (I've never used CI; is that even possible?) then make sure the memcache init is placed in the right location.
-edit-
$this->cache->memcached probably isn't actually NULL, but the actual connection to the Memcache server definitely wasn't made before you started calling get().
I am a PHP developer who has started learning Ruby on Rails. I love how easy it is to get up and running developing Rails applications. One of the things I love most is WEBrick. It makes it so you don't have to configure Apache and Virtual Hosts for every little project you are working on. WEBrick allows you to easily start up and shut down a server so you can click around your web application.
I don't always have the luxury of working on a Ruby on Rails app, so I was wondering how I might configure (or modify) WEBrick to be able to use it to serve up my PHP projects and Zend Framework applications.
Have you attempted this? What would be the necessary steps in order to achieve this?
To get PHP support in WEBrick you can use a handler for .php files. To do this you have to extend HTTPServlet::AbstractServlet and implement the do_GET and do_POST methods. These methods are called for GET and POST requests from a browser. There you just have to feed the incoming request to php-cgi.
To get the PHPHandler to handle php files you have to add it to the HandlerTable of a specific mount. You can do it like this:
s = HTTPServer.new(
  :Port         => port,
  :DocumentRoot => dir,
  :PHPPath      => phppath
)
s.mount("/", HTTPServlet::FileHandler, dir,
  {:FancyIndexing => true, :HandlerTable => {"php" => HTTPServlet::PHPHandler}})
The first statement initializes the server. The second adds options to the DocumentRoot mount. Here it enables directory listings and handling php files with PHPHandler. After that the server can be started with s.start().
I have written a PHPHandler myself, as I haven't found one elsewhere. It is based on WEBrick's CGIHandler, but reworked to get it working with php-cgi. You can have a look at the PHPHandler on GitHub:
https://github.com/questmaster/WEBrickPHPHandler
You can use nginx or lighttpd
Here's a minimal lighttpd config.
Install PHP with FastCGI support and adjust the "bin-path" option below for your system. You can install it with MacPorts using sudo port install php5 +fastcgi
Name this file lighttpd.conf
then simply run lighttpd -f lighttpd.conf from any directory you'd like to serve.
Open your webbrowser to localhost:8000
lighttpd.conf:
server.bind = "0.0.0.0"
server.port = 8000
server.document-root = CWD
server.errorlog = CWD + "/lighttpd.error.log"
accesslog.filename = CWD + "/lighttpd.access.log"
index-file.names = ( "index.php", "index.html",
"index.htm", "default.htm" )
server.modules = ("mod_fastcgi", "mod_accesslog")
fastcgi.server = ( ".php" => ((
"bin-path" => "/opt/local/bin/php-cgi",
"socket" => CWD + "/php5.socket",
)))
mimetype.assign = (
".css" => "text/css",
".gif" => "image/gif",
".htm" => "text/html",
".html" => "text/html",
".jpeg" => "image/jpeg",
".jpg" => "image/jpeg",
".js" => "text/javascript",
".png" => "image/png",
".swf" => "application/x-shockwave-flash",
".txt" => "text/plain"
)
# Making sure file uploads above 64k always work when using IE or Safari
# For more information, see http://trac.lighttpd.net/trac/ticket/360
$HTTP["useragent"] =~ "^(.*MSIE.*)|(.*AppleWebKit.*)$" {
server.max-keep-alive-requests = 0
}
If you'd like to use a custom php.ini file, change bin-path to this:
"bin-path" => "/opt/local/bin/php-fcgi -c" + CWD + "/php.ini",
If you'd like to configure nginx to do the same, here's a pointer.
I found this, but I really don't think it's worth the hassle. Is making a virtual host (which isn't even necessary) that difficult? In the time it would take you to set this up to work with PHP, if you can even get it working, you could have written a script that creates virtual host entries for you, making it as easy as WEBrick.
It looks like WEBrick has CGI support, which implies that you can get PHP running by invoking it as a CGI script. The #! line at the top of each executable file would just need to point to the absolute path of php-cgi.exe.
It's worth noting that you'd need to remove the #! line when moving the file to any other server that doesn't treat PHP as a CGI script, which would be... uh... all of 'em.