I am facing a problem with Redis in the Laravel framework. Actually, I have done almost everything. I am putting and getting data in Redis like this:
use Illuminate\Support\Facades\Redis;

public function redisSet() {
    Redis::set('name', 'Taylor');
    echo "redis set successfully"; die;
}

public function redisGet() {
    echo Redis::get('name'); die;
}
Now there are two URLs like below:
http://localhost:8000/redis-set
http://localhost:8000/redis-get
Both of the above URLs work fine. The problem is that when I hit the set URL in Google Chrome and then try the get URL in Mozilla Firefox, the value is printed in Firefox as well. That must not happen: if I set the value in Chrome, it should be retrievable only in Chrome, not in any other browser.
Likewise, when I hit the get URL in UC Browser, the data shows there too, but it must not, because I set it in Google Chrome.
Below is my database.php file:
'redis' => [
    'client' => 'predis',
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],
My .env file:
BROADCAST_DRIVER=log
CACHE_DRIVER=redis
SESSION_DRIVER=redis
SESSION_LIFETIME=120
QUEUE_DRIVER=sync
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
I have also installed the predis package for Laravel. My system is on a network, and when I access the same URL from another system it also shows the Redis data. Please help me resolve this issue.
Redis is a server-side storage service, just like MySQL. It communicates with PHP, not with the browser, and gives you back whatever you stored before.
If you want different data saved for different users, try sessions and use Redis as the session driver. See the Laravel HTTP Session docs.
For the kind of behaviour you are looking for, use the session instead of Redis directly, because Redis is a database, which can merely be used as a session driver in Laravel:
use Illuminate\Support\Facades\Session;

public function redisSet() {
    Session::put('name', 'Taylor'); // put() is the documented method for storing session data
    echo "session set successfully"; die;
}

public function redisGet() {
    echo Session::get('name'); die;
}
Redis is a server-side service, so it does not matter which browser you use. You can use $request->header('User-Agent') to determine which browser is being used, but that is not the best way. Instead of the User-Agent header, I recommend using cookies/sessions, because they are independent for each browser. Then you will be able to key your Redis data by its source.
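For example, an untested sketch of that idea, keying the Redis entries by Laravel's session ID so each browser (each session cookie) sees only its own value; the method names mirror the question's controller:

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Redis;

public function redisSet(Request $request) {
    // Each browser carries its own session cookie, so keying by the
    // session ID keeps the stored values separate per browser.
    Redis::set('name:'.$request->session()->getId(), 'Taylor');
    echo "redis set successfully";
}

public function redisGet(Request $request) {
    echo Redis::get('name:'.$request->session()->getId());
}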
As I mentioned in my comment, Redis works on your server, not in your users' browsers. If you want to store different values for different browsers, you need to detect the user's browser first and store each value under a different key.
I suggest you use this browser-detect package. You can easily install it via Composer.
After installing the package:
switch (Browser::browserFamily()) {
    case "Chrome":
        Redis::set('chrome', 'Taylor');
        break;
    case "Firefox":
        Redis::set('firefox', 'Hasan');
        break;
    case "Opera":
        Redis::set('opera', 'Kunal');
        break;
    // etc.
}
Then you can easily access these values using their keys.
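For example, to read back the value for the current browser (a small sketch assuming the same Browser facade from the package above; the strtolower() call maps the family name onto the lowercase keys used in the switch):

$key = strtolower(Browser::browserFamily()); // e.g. "Chrome" -> "chrome"
echo Redis::get($key);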
I'm migrating our in-house backup app onto Laravel 8, so far so good. I'm interested in using the laravelcollective/remote facade (SSH) to SSH onto the remote server and run some commands, which it looks like it would be very good at (rather than the php exec() calls the current backup app uses).
My question, however, is: can I build an array/object from the database and use those details as a connection, without having to manually maintain the config/remote.php file? Maintaining that file through server changes would be a nightmare, as we frequently update users, and sites are added and removed on a regular basis. Any ideas? As mentioned, we store the SSH credentials in the database, populated via a connection form within the app.
I've built a simple test function in a controller and stepped into it with my debugger. I expected to see the array/object created from config/remote.php and was planning to just add new items, but I couldn't find any array/object containing the default empty production config that is set as the default in config/remote.php.
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use SSH;

class SecureServerController extends Controller
{
    public function test() {
        SSH::into()->run([
            'cd /var/www',
            'git pull origin master',
        ], function ($line) {
            echo $line.PHP_EOL;
        });
    }
}
The following is the route used:
use App\Http\Controllers\SecureServerController;
Route::get('/test', [SecureServerController::class, 'test']);
Thanks
*** EDIT ***
So I had a look at the code for the SSH facade and found I could build a config array and pass it via the connect function:
$config = [
    'host' => '123.123.123.123',
    'username' => 'the-user',
    'password' => 'a-password',
];

SSH::connect($config)->run([
    'cd /var/www',
    'git pull origin master',
], function ($line) {
    echo $line.PHP_EOL;
});
However, I see no way to use any port except 22; almost all our servers use a non-default port as an additional level of obfuscation.
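For reference, this is the shape of what I'm after (an untested sketch: the Server model is hypothetical, standing in for wherever the credentials are stored, and whether the gateway accepts a "host:port" string is exactly the part I can't confirm without checking the package source):

use App\Models\Server; // hypothetical Eloquent model holding the stored creds

$server = Server::where('name', 'backup-target')->firstOrFail();

$config = [
    'host'     => $server->host.':'.$server->port, // "host:port" -- needs verifying against the package source
    'username' => $server->username,
    'password' => $server->password,
];

SSH::connect($config)->run([
    'cd /var/www',
    'git pull origin master',
], function ($line) {
    echo $line.PHP_EOL;
});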
I am trying to use Redis in my application, but I am not sure whether my app is using the redis driver or the file driver, as I can't create tags, though I can create normal keys fine.
I have set CACHE_DRIVER=redis and also in my cache.php I have:
'default' => env('CACHE_DRIVER', 'redis'),
Also, in my database.php there is:
'redis' => [
    'client' => 'predis',
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],
The reasons for my suspicion are that I cannot create tags, and that running redis-cli flushall under Homestead (SSH) does not seem to get rid of the cache; I had to use Cache::flush() in Laravel instead.
So how can I effectively find out which cache driver my application is using?
It's pretty simple: you can use the Redis CLI monitor command to check whether the GET/SET calls are actually happening:

redis-cli monitor

Then try to run the application. If you see the keys go by, the Redis cache is running.

You can also check the Redis keys with the following command:

redis-cli

Then enter:

keys *

I hope this is useful.
You should simply query your Redis DB like so:
redis-cli
Then, at the Redis console:
SELECT <DB-NUMBER>
KEYS *
If you see some keys like
1) PREFIX:tag:TAG:key
2) PREFIX:tag:DIFFERENT-TAG:key
it is quite a hint that Laravel is using Redis as its cache backend. Otherwise, take a look at
<YOUR-APP-DIR>/storage/framework/cache
If you find files/sub-folders in there, it is quite a hint that Laravel is using file-based caching.
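You can also probe this from inside the application itself. A minimal sketch (run it in php artisan tinker or a throwaway route); it relies on the fact that stores without tag support, such as the file driver, throw a BadMethodCallException from Cache::tags():

use Illuminate\Support\Facades\Cache;

// Which store does Laravel think is the default?
echo config('cache.default'), PHP_EOL;

// Does the active store support tagging? redis does; file does not.
try {
    Cache::tags(['probe'])->put('probe-key', 'probe-value', 60);
    echo "Tagging works, so a taggable store (e.g. redis) is active.", PHP_EOL;
} catch (\BadMethodCallException $e) {
    echo "Tagging unsupported: probably the file driver.", PHP_EOL;
}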
My situation is the following: I have a CakePHP project and a separate plain PHP script running on the same server.
When I use my client browser to connect to the CakePHP project, it builds up a session as it should.
Now I want to continue using the session data in my plain PHP script. Again I use the same client browser to access the plain PHP script (so the request metadata should be the same and the session should be recognized), and I have set the CakePHP session option to PHP:
'Session' => [
    'defaults' => 'php',
],
However, I can't find out how to continue the session in the plain PHP script.
I would have assumed the following two lines of my plain php script would do the magic:
session_start();
echo json_encode($_SESSION);
Kind regards,
Marius
CakePHP's PHP session defaults (like all its built-in defaults) change the name of the cookie / the name of the session (the session.name INI setting) to CAKEPHP:
https://github.com/cakephp/cakephp/blob/3.5.3/src/Network/Session.php#L133-L138
So you either have to change that to match the default used by your vanilla PHP app (which is most probably PHPSESSID, i.e. the PHP default):
'Session' => [
    'defaults' => 'php',
    'cookie' => session_name(), // would use the PHP default
],
// ...
or change the latter app to use the name configured for your CakePHP application:
session_name('CAKEPHP');
session_start();
// ...
Also make sure that the session.cookie_path and session.cookie_domain configuration covers the locations of both of your applications.
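Put together, the vanilla-PHP side of the second option could look roughly like this (a sketch; the cookie path and domain values are placeholders you would adapt to where your two applications actually live):

<?php
// Align the session cookie parameters *before* session_start(), so this
// script picks up the cookie issued by the CakePHP app.
session_name('CAKEPHP');                          // CakePHP's default session name
ini_set('session.cookie_path', '/');              // must cover both apps' paths
ini_set('session.cookie_domain', '.example.com'); // hypothetical shared domain

session_start();
echo json_encode($_SESSION);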
See also
Cookbook > Sessions > Session Configuration
Cookbook > Sessions > Setting ini directives
I am trying to do a scan and scroll operation on an index, as shown in the example:
$client = ClientBuilder::create()->setHosts([MYESHOST])->build();

$params = [
    "search_type" => "scan",    // use search_type=scan
    "scroll" => "30s",          // how long between scroll requests. should be small!
    "size" => 50,               // how many results *per shard* you want back
    "index" => "my_index",
    "body" => [
        "query" => [
            "match_all" => []
        ]
    ]
];

$docs = $client->search($params);   // Execute the search
$scroll_id = $docs['_scroll_id'];   // The response will contain no results, just a _scroll_id

// Now we loop until the scroll "cursors" are exhausted
while (true) {
    // Execute a Scroll request
    $response = $client->scroll([
        "scroll_id" => $scroll_id,  // ...using our previously obtained _scroll_id
        "scroll" => "30s"           // and the same timeout window
    ]);

    // Check to see if we got any search hits from the scroll
    if (count($response['hits']['hits']) > 0) {
        // If yes, Do Work Here

        // Must always refresh your _scroll_id! It can change sometimes
        $scroll_id = $response['_scroll_id'];
    } else {
        // No results, scroll cursor is empty. You've exported all the data
        break;
    }
}
The first $client->search($params) API call executes fine and I am able to get back the scroll ID. But the $client->scroll() API fails and I am getting the exception: "Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster".
I am using Elasticsearch 1.7.1 and PHP 5.6.11
Please help
I found the PHP driver for Elasticsearch is riddled with issues; the solution I went with was to just implement calls to the RESTful API with curl via PHP. Everything worked much quicker and debugging was much easier.
I would guess the example is not up to date with the version you're using (the link you've provided is for 2.0, and you say you use 1.7.1). Just add this inside the loop:
try {
    $response = $client->scroll([
        "scroll_id" => $scroll_id,  // ...using our previously obtained _scroll_id
        "scroll" => "30s"           // and the same timeout window
    ]);
} catch (\Elasticsearch\Common\Exceptions\NoNodesAvailableException $e) {
    break;
}
Check whether your server is running with the following command:
service elasticsearch status
I had the same problem and solved it.
I had added script.disable_dynamic: true to elasticsearch.yml, as explained in the DigitalOcean tutorial https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-14-04, so the Elasticsearch server was not starting.
I removed the following line from elasticsearch.yml:

script.disable_dynamic: true

Then I restarted the Elasticsearch service and set the network host to local ("127.0.0.1").
I would recommend using the PHP curl lib directly for Elasticsearch queries.
I find it easier to use than any other Elasticsearch client lib: you can simulate any query using CLI curl first, and you can find many examples, documentation and discussions on the internet.
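For example, a minimal sketch (assuming the cluster is reachable at http://127.0.0.1:9200 and the index is my_index, as in the question):

<?php
// Run a match_all search against Elasticsearch with plain curl,
// bypassing the official PHP client entirely.
$ch = curl_init('http://127.0.0.1:9200/my_index/_search?scroll=30s&size=50');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    // new stdClass() makes match_all encode as {} rather than []
    CURLOPT_POSTFIELDS     => json_encode(['query' => ['match_all' => new stdClass()]]),
]);

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

// Feed this to the scroll endpoint, exactly as in the question's loop.
$scroll_id = isset($response['_scroll_id']) ? $response['_scroll_id'] : null;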
Maybe you should try to telnet to your machine:
telnet [your_es_host] [your_es_port]
to check whether you can access it. If not, try to open that port or disable your machine's firewall.
That error basically means it can't find your cluster, likely due to misconfiguration on either the client's side or the server's side.
I had the same problem with scroll: it was working with certain indexes but not with others. It must have been a bug in the driver, as it went away after I updated the elasticsearch/elasticsearch package from 2.1.3 to 2.2.0.
Uncomment in elasticsearch.yml:
network.host: 198....
and set it to:
127.0.0.1
Like this:
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
I use Elasticsearch 2.2 in Magento 2 under an LXC container.
I set up the Elasticsearch server in Docker as per the doc, https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html, but it used a different network (networks: - esnet) from the other services, so it could not talk to the application network. After removing the networks setting, it worked well.
Try:
Stop your Elasticsearch service if it's already running.
Go to your Elasticsearch directory via the terminal and run:
> ./bin/elasticsearch
This worked for me.
I am using Magento Community Edition 1.7.0.2. I am not able to log in to the back end of Magento. I know this problem can be because Chrome is not accepting cookies.
But how do I fix that? Please help.
Thanks
If you have enabled HTTPS for the Magento admin panel, make sure to set "No" for the option "Use HTTP Only" under System -> Configuration -> Web -> Session and Cookie Management.
If you have access to the database, open the table "core_config_data", search for the path "web/cookie/cookie_httponly", and set the value to "0".
Make sure to delete the var/cache folder. Now try to log in to the Magento admin panel; most likely you can now. If not, post your issue in this thread.
This "not able to log in to the Magento admin panel" issue mostly relates to the Magento cookie settings, so don't worry if you run into it. With the list of answers in this thread you can easily sort it out in a few minutes.
There are two solutions for this; either one will work.
First: change the cookie lifetime configuration. Go to backend -> System -> Configuration -> Web -> Session and Cookie Management, set Cookie Lifetime to 86400, and save it.
Second: go to the app/code/core/Mage/Core/Model/Session/Abstract/Varien.php file within your Magento directory.
Find the code:
session_set_cookie_params(
    $this->getCookie()->getLifetime(),
    $this->getCookie()->getPath(),
    $this->getCookie()->getDomain(),
    $this->getCookie()->isSecure(),
    $this->getCookie()->getHttponly()
);
or
// session cookie params
$cookieParams = array(
    'lifetime' => $cookie->getLifetime(),
    'path' => $cookie->getPath(),
    'domain' => $cookie->getConfigDomain(),
    'secure' => $cookie->isSecure(),
    'httponly' => $cookie->getHttponly()
);
and replace with
session_set_cookie_params(
    $this->getCookie()->getLifetime(),
    $this->getCookie()->getPath()
    //$this->getCookie()->getDomain(),
    //$this->getCookie()->isSecure(),
    //$this->getCookie()->getHttponly()
);
or
// session cookie params
$cookieParams = array(
    'lifetime' => $cookie->getLifetime(),
    'path' => $cookie->getPath()
    // 'domain' => $cookie->getConfigDomain(),
    // 'secure' => $cookie->isSecure(),
    // 'httponly' => $cookie->getHttponly()
);
After this, save the file.
So far this is the best solution, rather than changing the code elsewhere: http://iamtheshadowonthesun.blogspot.com/2012/10/magento-cannot-login-to-admin-panel.html
Using phpMyAdmin, in your Magento database, look for the core_config_data table and click it. Click the "Search" tab. Then, on the "path" column, set the operator to LIKE %...% and the value to cookie, and click the "Go" button to search.
After searching, set the values of web/cookie/cookie_path, web/cookie/cookie_domain, web/cookie/cookie_httponly, and web/browser_capabilities/cookies to NULL.
What worked for me is what Haijerome described; unfortunately I couldn't log in to the backend to change the config.
This is what I execute whenever I install a fresh Magento:
insert into core_config_data(scope, scope_id, path, value) values("default", "0", "web/cookie/cookie_httponly", "0");
then:
rm -Rf var/cache/mage--*
One simple solution is to do the installation using the Opera browser and use it to log in, because it saves the cookies itself. It works!
Our Chrome users were unable to add items to their cart; changing the Cookie Lifetime to the recommended 86400 fixed it.
Magento Community 1.7
Thank you!
Jeff
The problem is that Chrome isn't storing the login cookie. This can be seen by looking at the cookies under Chrome | Settings | Content | Advanced | All cookies and site data.
There are probably a number of reasons why this can happen; cookie lifetime is certainly one of them.
Personally, I encountered this problem when running Magento on localhost / in a virtual machine and connecting from a browser on the same machine. Specifically, the problem seems to be that Chrome will not store cookies if the domain name is not qualified. So if your domain name is 'http://localhost/magento' or 'http://somename/magento', Chrome will not store the cookie, and consequently you will not be able to log in.
Here's the fix:
To keep this simple I'm sticking to the example where Magento runs on localhost. The same trick works when Magento runs in a VM and you access it from localhost, but in that case you need to modify the hosts file on both the guest OS and the client (and remember that the guest IP can change, so from time to time you need to update the hosts file on the host).
First, choose your domain name. It's local only, so you don't need to register it; I'm choosing 'dansmagentodev.com'. Then, in Magento | System | Web, modify the base URL in both secure and unsecure to be:
http://dansmagentodev.com/magento/
Next, in the same place, modify the session cookie management 'cookie domain' to be 'dansmagentodev.com'.
Next, we need to tell your system that dansmagentodev.com is really localhost. We do this via the hosts file; on Windows it is C:\Windows\System32\drivers\etc\hosts. Your virus checker will probably try to stop you modifying it (for good reason; disable it while you make the change). Then add the line:
127.0.0.1 dansmagentodev.com
Now log in from Chrome.
My problem was that the server was a fresh Ubuntu install with very little maintenance configuration.
It had not updated its date & time and was three hours behind.
This made the cookies received by Chrome look as if they had already expired, so Chrome discarded them.
If it works in Firefox, then the problem is Chrome's cookies; try clearing Chrome's cookies.