Call recording using the Twilio Proxy API - PHP

I am handling communication between two people using Twilio Proxy. When a customer replies to my Twilio Proxy reserved number, a session is created and that session handles all communication (SMS as well as calls). How can I record calls in this scenario?
When I add a Twilio number to the Proxy pool, the "call comes in" and "message comes in" URLs are changed to the Proxy service name.
Thanks

I found a pretty good project that demonstrates this, and it works very well!
Basically, you only need to set up the webhook that fires when a call is made, and in that webhook you call processRecordCall :D
The method is here:
// Note: the project linked below initializes these; shown here for context:
// const axios = require('axios');
// const client = require('twilio')(accountSid, authToken);
async function processRecordCall(req) {
    let callSID = req.body.inboundResourceSid;
    console.log(req.body.inboundResourceSid);
    console.log(req.body.inboundResourceStatus);
    console.log(req.body.outboundResourceStatus);

    // The only Proxy callback we care about is the one that provides us context
    // for the initiated outbound dial. We will ignore all other callbacks
    // (including those with context for SMS). Alternatively, and depending on
    // the use case, we could use the Proxy "Intercept" callback, but this has
    // additional requirements for handling error status.
    if (req.body.outboundResourceStatus == "initiated") {
        // Get all recordings for a given call.
        let recordings = await client.recordings.list({callSid: callSID});
        console.log(recordings.length);

        // We only want to start recording if there are no existing recordings for
        // this callSid. *This may be overkill as I haven't hit this edge case.
        // Consider it preventive for now.*
        if (recordings.length == 0) {
            let recordingSet = false;
            let callStatus = "";

            // Self-invoking function to facilitate iterative attempts to start a
            // recording on the given call. See example here:
            // https://stackoverflow.com/a/3583740
            (function tryRecord(i) {
                setTimeout(async function () {
                    if (!recordingSet && i > 0) {
                        console.log(`I is ${i}`);

                        // Fetch the call to see if it still exists (this allows us to
                        // prevent unnecessary attempts to record).
                        // Prod code probably needs callback or async added to ensure
                        // downstream execution is as expected.
                        let call = await client.calls(callSID).fetch();
                        callStatus = call.status;

                        // Only attempt to create the recording if the call has not yet completed
                        if (callStatus != "completed") {
                            // Try to create the recording
                            try {
                                let result = await axios.post(
                                    `https://api.twilio.com/2010-04-01/Accounts/${accountSid}/Calls/${callSID}/Recordings.json`,
                                    {},
                                    {
                                        auth: {
                                            username: accountSid,
                                            password: authToken
                                        }
                                    }
                                );
                                // This code assumes an HTTP 201 *Created* status and thus
                                // successful creation of the recording. Production code may
                                // need to explicitly ensure HTTP 201 (e.g. handle any other
                                // possible statuses).
                                console.log(`statusCode: ${result.status}`);
                                // This isn't strictly necessary (e.g. we could just set i = 1
                                // to force-stop the iteration), but most use cases will likely
                                // prefer having this flag for reporting purposes.
                                recordingSet = true;
                            } catch (err) {
                                // TODO(?) - There may be specific errors that should be
                                // explicitly handled in a production scenario (e.g. 21220).
                            }
                        } else {
                            // The call is completed, so there's no need to loop anymore
                            console.log("Force stop iteration");
                            i = 1; // Set i = 1 to force-stop the iteration
                        }
                    }
                    // Decrement i and call tryRecord again if i > 0 or the recording
                    // has not been set yet
                    if (--i || !recordingSet) tryRecord(i);
                }, 1000); // NOTE: 1 second delay before each retry (tune responsibly)
            })(15); // NOTE: 15 recording attempts by default, unless forced to stop
                    // or a recording already started (tune responsibly)
        } else {
            console.log("recording exists already");
        }
    } else {
        console.log(`We ignored callback with outgoing status '${req.body.outboundResourceStatus}'`);
    }
}
I'm linking the full project below
https://github.com/Cfee2000/twilio-proxy-admin-app
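Since the question asks about PHP, here is a rough cURL equivalent of the recording call the Node code above makes. This is a sketch only: $accountSid, $authToken and $callSid are assumed to come from your own config and from the inboundResourceSid of the Proxy callback.
<?php
// Rough PHP/cURL equivalent of the axios POST above (a sketch, not production code)
$url = "https://api.twilio.com/2010-04-01/Accounts/$accountSid/Calls/$callSid/Recordings.json";

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, ''); // empty body, as in the Node version
curl_setopt($ch, CURLOPT_USERPWD, $accountSid . ':' . $authToken);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
$status   = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

// As in the Node version, HTTP 201 Created means the recording was started
if ($status == 201) {
    echo "Recording started: " . $response;
}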

Related

Close support chat after connection timeout with Laravel Echo

I'm building a support chat application. It's built on Laravel Echo through Pusher.js.
There are two sides - support/admin and client. When a client starts a chat, support can accept it and they can chat together. It's working like it should, but there is one thing: when the client goes offline (closes the browser, leaves the site, loses the internet connection...) it should wait a few seconds (to make sure it was not a mistake) and then close the chat, so when the client comes back in about an hour, there is no longer an active chat.
I'm checking both sides' online status with a presence channel, using this simple code:
this.presence = Echo.join('chat');
this.presence
    .listen('.pusher:subscription_error', (result) => {
        if (this.debug) {
            console.log(result);
        }
    })
    .listen('.pusher:member_added', (result) => {
        if (!!result.info.is_admin) {
            this.presence_users.push(result.info);
        }
    })
    .listen('.pusher:member_removed', (result) => {
        let found = _.find(this.presence_users, ['id', result.id]);
        let index = this.presence_users.indexOf(found);
        this.presence_users.splice(index, 1);
    })
    .here((result) => {
        this.presence_users = _.filter(result, ['is_admin', true]);
    });
On the support side it's a little different, but still the same logic (also, don't worry - the user's id is not the id from the database, but a unique md5 identifier).
The presence channel is working well. But I can't find anywhere on the internet how to set up a connection_timeout URL. I imagine it as a URL where Pusher.js would post some data when the user goes offline or the connection is lost - my custom id field, for example. As I noted at the start, it should have some "cooldown" for when a user goes offline by mistake. This would help close the chat when the user is not available to respond.
Do you have any experience with a similar problem? If so, how did you solve it? Or - is it even possible to solve with Pusher.js?
Well, 7 days have gone by with no answer here, so I think it's not possible the way I described. But there is a "hacky" way (sketched in code after these steps):
Create a CRON job which runs every 10 minutes.
The script gets all chats from the database flagged active or pending.
When a chat has no recent messages (nothing from the last 5-10 minutes), check whether the users are online.
Get the users from the presence channel:
$response = $pusher->get('/channels/chat/users');
if ($response['status'] == 200) {
    $users = json_decode($response['body'], true)['users'];
}
If at least one of them is online, skip; otherwise wait a short time (5 seconds, just to be sure), check the online status again, and if they are still offline, close the chat.
I haven't tested it, since it is not required yet. Maybe someone will find this helpful.
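For anyone who wants to try it, here is a minimal sketch of that cron script using the pusher-http-php library. Everything except $pusher->get('/channels/chat/users') is a placeholder: getChatsWithStatus(), lastMessageAt and closeChat() stand in for whatever your own app uses.
<?php
// Hypothetical cron script (run every ~10 minutes)
$pusher = new Pusher($appKey, $appSecret, $appId);

// Placeholder: fetch all chats flagged active or pending from your database
$chats = getChatsWithStatus(array('active', 'pending'));

foreach ($chats as $chat) {
    // Skip chats with recent activity (last 5-10 minutes)
    if ($chat->lastMessageAt > time() - 600) continue;

    $response = $pusher->get('/channels/chat/users');
    if ($response['status'] == 200) {
        $users = json_decode($response['body'], true)['users'];
        if (count($users) > 0) continue; // somebody is still online, skip this chat

        sleep(5); // short cooldown, just to be sure

        $response = $pusher->get('/channels/chat/users');
        $users = json_decode($response['body'], true)['users'];
        if (count($users) == 0) closeChat($chat); // still nobody online: close it
    }
}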

Salesforce callout using PHP

Apologies in advance, since I may not know the terminology for the Salesforce API. I just started programming a connector to interact with Salesforce and I am stuck.
I have a requirement where, each time a new entry is added to the Leads section, I have to retrieve a couple of fields (Firstname and Product Code) and pass them to a different piece of software that makes use of PHP.
<?php
require "conf/config_cleverbridge_connector.inc.php";
require "include/lc_connector.inc.php";

// Start of Main program

// Read basic parameters
if ($LC_Username === "") {
    $LC_Username = readParam("USER");
}
if ($LC_Password === "") {
    $LC_Password = readParam("PASSWORD");
}

$orderID = "";
$customerID = substr(readParam("PURCHASE_ID"), 0, 10);
$comment = readParam("EMAIL")."-".readParam("PURCHASE_ID");

// Create product array
$products = array();
$itemID = readParam("INTERNAL_PRODUCT_ID");
$quantity = 1;
if (!ONCE_PER_PURCHASED_QUANTITY) {
    $quantity = readParam("QUANTITY");
}

// Add product to the product array
$products[] = array(
    "itemIdentification" => $itemID,
    "quantity" => $quantity,
);

// Create the order
$order = array(
    "orderIdentification" => $orderID,
    "customerIdentification" => $customerID,
    "comment" => $comment,
    "product" => $products,
);

// Calling webservice
$ticket = doOrder($LC_Username, $LC_Password, $order);
if ($ticket) {
    Header("HTTP/1.1 200 Ok");
    Header("Content-Type: text/plain");
    print TICKET_URL.$ticket->order->ticketIdentification;
    exit;
} else {
    $error = "No result from WSConnector_doOrder";
    trigger_error($error, E_USER_WARNING);
    printError(500, "Internal Error.");
    exit;
}
// End of Main program
?>
Now this is the code that I got and have to work with, and it is hosted on a different remote server.
I am very, very new to Salesforce and I am not really sure how to trigger calling this PHP file on a remote site.
The basic idea is:
1. New entry in Lead is created.
2. Immediately, 2 fields (custID and prodID) are sent to the PHP file I pasted above (some of the variable names are different).
3. This does its processing and sends 2 fields back to Salesforce.
Any help or guidance is appreciated. Even links to read up on are okay, as I am completely clueless.
PS: I have another example that makes use of JSON messages, if that makes any difference.
Thanks
I'll repost the links from my comment :)
https://salesforce.stackexchange.com/questions/23977/is-it-possible-to-get-the-record-id
Web hook in salesforce?
If your PHP endpoint is visible on the open web (not part of some intranet or just your own localhost) then the simplest thing to do would be to send an Outbound Message from Salesforce. No coding required, just an XML document you'll have to parse on the PHP side. Plus it will automatically attempt to resend the messages if the host is unreachable...
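For context, here is a rough sketch of what the PHP receiver for an Outbound Message can look like. Salesforce POSTs a SOAP envelope and keeps retrying until it receives an Ack response; the field names below (FirstName, ProductCode__c) are assumptions - check the WSDL Salesforce generates for your outbound message.
<?php
// Hypothetical receiver for a Salesforce Outbound Message (sketch only)
$soap = file_get_contents('php://input');
$xml  = simplexml_load_string($soap);

// Grab the Lead fields regardless of namespace prefixes.
// Field names are assumptions - check your outbound message's WSDL.
$first   = $xml->xpath('//*[local-name()="FirstName"]');
$product = $xml->xpath('//*[local-name()="ProductCode__c"]');

// ... create the ticket from those values ...

// Salesforce keeps retrying unless it receives an Ack
header('Content-Type: text/xml');
echo '<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">
      <Ack>true</Ack>
    </notificationsResponse>
  </soapenv:Body>
</soapenv:Envelope>';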
If your app can't be accessed from SF servers then I think your PHP app will have to be the "actor": querying SF every X minutes for new Leads, or maybe subscribing to the Streaming API... This means you'd have to store SF credentials in your PHP app and remember to either change the password periodically or tick the "password never expires" checkbox on the "integration user"'s profile.
So you're getting the notification, you generate your tickets, and it's time to send them back. Do you want to pretend the update of the Lead was done by the person that created it, or is "last modified by: Integration User" acceptable? An outbound message can contain a session id which you can use to act as the person who initiated the action (created the lead and fired the workflow) - at least until they log out or the session times out.
For the message back you can use the SOAP or REST Salesforce APIs - read the docs to figure out how to send an update command (and, if you want to make it clear it was done by a special user associated with this PHP app, how to log in to the APIs). I think the user's profile must have "API enabled" ticked before you can reuse somebody's session, so maybe it's better to have a dedicated account for integrations like this...
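As an illustration of the REST route, the update back to the Lead is a single PATCH per record. This is only a sketch: $instanceUrl and $accessToken are assumed to come from your OAuth login, and the API version and Ticket__c field are assumptions for your org.
<?php
// Hypothetical sketch: write the ticket back to a Lead via the REST API
$url  = $instanceUrl . '/services/data/v29.0/sobjects/Lead/' . $leadId;
$body = json_encode(array('Ticket__c' => $ticketId));

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PATCH');
curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Authorization: Bearer ' . $accessToken,
    'Content-Type: application/json'
));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch); // HTTP 204 No Content on success
curl_close($ch);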
Another thing to keep in mind with outbound messages is to ignore the messages sent from sandboxes, so that if somebody makes a test environment you don't call your "production" ticket database. You can also remember to modify the outbound message and remote site setting every time a sandbox is made, so you'll have "prod talking to prod, test talking to test". I know you can include the user's session id in the OM - so maybe you can also add the organization's id (for production it stays the same; every new sandbox gets a new id).
The problem with this approach is that it might not scale. If 1000 leads are inserted in one batch (for example with Data Loader), you'll get spammed with 1000 outbound messages. Your server must be able to handle such load... and it also means you're using 1 API request to send every single update back. You can check the limit of API requests in Setup -> Company Information. Developer Edition has a very low limit, sandboxes are better, production is best (it also depends on how many user licenses you've bought). That's why I asked about batching them up.
More coding, but also more reliable, would be to ask SF for changes every X minutes (Streaming API? A normal query? Check the "web hook" answer) and send an update for all of these records in one go: SELECT Id, Name FROM Lead WHERE Ticket__c = null (note there's nothing like AND LastModifiedDate >= :lastTimeIChecked)...
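A sketch of that polling query over the REST API, with the same assumptions about the OAuth token and API version as above:
<?php
// Hypothetical polling sketch: fetch all Leads that don't have a ticket yet
$soql = 'SELECT Id, Name FROM Lead WHERE Ticket__c = null';
$url  = $instanceUrl . '/services/data/v29.0/query?q=' . urlencode($soql);

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: Bearer ' . $accessToken));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = json_decode(curl_exec($ch), true);
curl_close($ch);

foreach ($result['records'] as $lead) {
    // generate a ticket for $lead['Id'], then send one batched update back
}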

Google API request every 30 seconds

I'm using the Live Reporting Google APIs to retrieve active users and display the data inside a mobile application. From my application I'd like to make an HTTP request to a PHP script on my server which is supposed to return the result.
However, I read in the Google docs that it's better not to request data through the APIs more often than every 30 seconds.
I'd prefer not to use a heavyweight approach such as a cron job that stores the value in my database. So I'd like to know if there's a way to cache the output of my PHP script and make it perform an API request only when the cache expires.
Is there a method to do something like that?
Another way could be to implement a very simple cache yourself.
$googleApiRequestUrlWithParameter; // This is the full URL of your request
$googleApiResponse = NULL;         // This will hold the response from the API

// Check whether a response is present in our cache
$cacheResponse = isset($datacache[$googleApiRequestUrlWithParameter])
    ? $datacache[$googleApiRequestUrlWithParameter]
    : NULL;

if (isset($cacheResponse)) {
    // $cacheResponse[0] holds the age of the cached data (30s or whatever you like)
    if (time() - $cacheResponse[0] < 30) {
        // The cached entry is still fresh enough
        $googleApiResponse = $cacheResponse[1];
    } else {
        // Otherwise remove it from our "cache"
        unset($datacache[$googleApiRequestUrlWithParameter]);
    }
}

// If we do not have a (fresh) response
if (!isset($googleApiResponse)) {
    // Make the call to the Google API, put the response in $googleApiResponse, then:
    $datacache[$googleApiRequestUrlWithParameter] = array(time(), $googleApiResponse);
}
If your data is related to the user session, you could store $datacache in $_SESSION:
http://www.php.net/manual/it/reserved.variables.session.php
Otherwise, define $datacache = array(); as a global variable.
There are a lot of ways of caching things in PHP; the simple/historic way to manage a cache in PHP is with APC: http://www.php.net/manual/book.apc.php
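For example, a minimal sketch of the same 30-second cache on top of APC, which also survives across requests (callGoogleApi() is a placeholder for your actual API call):
<?php
// Cache the Google API response in APC for 30 seconds
$key      = 'ga_' . md5($googleApiRequestUrlWithParameter);
$response = apc_fetch($key, $hit);

if (!$hit) {
    $response = callGoogleApi($googleApiRequestUrlWithParameter); // placeholder
    apc_store($key, $response, 30); // TTL of 30 seconds
}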
Maybe I did not understand your question correctly.

Ratchet PHP WAMP - React / ZeroMQ - Specific user broadcast

Note: This is not the same as this question, which utilises MessageComponentInterface. I am using WampServerInterface instead, so this question pertains to that part specifically. I need an answer with code examples and an explanation, as I can see this being helpful to others in the future.
Attempting looped pushes for individual users
I'm using the WAMP part of Ratchet and ZeroMQ, and I currently have a working version of the push integration tutorial.
I'm attempting to perform the following:
The zeromq server is up and running, ready to log subscribers and unsubscribers
A user connects in their browser over the websocket protocol
A loop is started which sends data to the specific user who requested it
When the user disconnects, the loop for that user's data is stopped
I have points (1) and (2) working; however, the issue I have is with the third one:
Firstly: how can I send data to each specific user only? Broadcast sends it to everyone, unless maybe the 'topics' end up being individual user IDs?
Secondly: I have a big security issue. If I'm sending the subscribing user's ID from the client side, which it seems like I need to, then a user could just change the variable to another user's ID and get that user's data returned instead.
Thirdly: I'm having to run a separate PHP script containing the ZeroMQ code to start the actual looping. I'm not sure this is the best way to do it, and I would rather have this working completely within the codebase as opposed to in a separate PHP file. This is a major area I need sorted.
The following code shows what I currently have.
The server that just runs from console
I literally type php bin/push-server.php to run this. Subscriptions and unsubscriptions are output to this terminal for debugging purposes.
$loop = React\EventLoop\Factory::create();
$pusher = new MyApp\Pusher(); // the WampServerInterface implementation shown below

$context = new React\ZMQ\Context($loop);
$pull = $context->getSocket(ZMQ::SOCKET_PULL);
$pull->bind('tcp://127.0.0.1:5555');
$pull->on('message', array($pusher, 'onMessage'));

$webSock = new React\Socket\Server($loop);
$webSock->listen(8080, '0.0.0.0'); // Binding to 0.0.0.0 means remotes can connect

$webServer = new Ratchet\Server\IoServer(
    new Ratchet\WebSocket\WsServer(
        new Ratchet\Wamp\WampServer(
            $pusher
        )
    ),
    $webSock
);

$loop->run();
The Pusher that sends out data over websockets
I've omitted the irrelevant parts and concentrated on the onMessage() and onSubscribe() methods.
public function onSubscribe(ConnectionInterface $conn, $topic)
{
    $subject = $topic->getId();
    $ip = $conn->remoteAddress;

    if (!array_key_exists($subject, $this->subscribedTopics)) {
        $this->subscribedTopics[$subject] = $topic;
    }

    $this->clients[] = $conn->resourceId;
    echo sprintf("New Connection: %s" . PHP_EOL, $conn->remoteAddress);
}

public function onMessage($entry) {
    $entryData = json_decode($entry, true);
    var_dump($entryData);

    if (!array_key_exists($entryData['topic'], $this->subscribedTopics)) {
        return;
    }
    $topic = $this->subscribedTopics[$entryData['topic']];

    // This sends out everything to multiple users, not what I want!!
    // I can't send() to individual connections from here I don't think :S
    $topic->broadcast($entryData);
}
The script to start using the above Pusher code in a loop
This is my issue - this is a separate PHP file that I hope may be integrated into other code in the future, but currently I'm not sure how to use it properly. Do I grab the user's ID from the session? I still need to send it from the client side...
// Thought sessions might work here, but they don't work for subscription
session_start();
$userId = $_SESSION['userId'];

$loop = React\EventLoop\Factory::create();
$context = new ZMQContext();
$socket = $context->getSocket(ZMQ::SOCKET_PUSH, 'my pusher');
$socket->connect("tcp://localhost:5555");

$i = 0;
$loop->addPeriodicTimer(4, function() use ($socket, $loop, $userId, &$i) {
    $entryData = array(
        'topic' => 'subscriptionTopicHere',
        'userId' => $userId
    );
    $i++;

    // So it doesn't go on infinitely if run from the browser
    if ($i >= 3) {
        $loop->stop();
    }

    // Send stuff to the queue
    $socket->send(json_encode($entryData));
});

$loop->run(); // without this the periodic timer never fires
Finally, the client-side JS to subscribe with:
$(document).ready(function() {
    var conn = new ab.Session(
        'ws://localhost:8080',
        function() {
            conn.subscribe('topicHere', function(topic, data) {
                console.log(topic);
                console.log(data);
            });
        },
        function() {
            console.warn('WebSocket connection closed');
        },
        {
            'skipSubprotocolCheck': true
        }
    );
});
Conclusion
The above is working, but I really need to figure out the following:
How can I send individual messages to individual users? And when they visit the page that starts the websocket connection in JS, should I also be starting the PHP script that pushes items into the queue (the ZeroMQ part)? That's what I'm currently doing manually, and it just feels wrong.
When subscribing a user from JS, it can't be safe to grab the user's id from the session and send it from the client side. This could be faked. Please tell me there is an easier way, and if so, how?
Note: My answer here does not include references to ZeroMQ, as I am not using it any more. However, I'm sure you will be able to figure out how to use ZeroMQ with this answer if you need to.
Use JSON
First and foremost, the WebSocket RFC and the WAMP spec state that the topic to subscribe to must be a string. I'm cheating a little here, but I'm still adhering to the spec: I'm passing JSON through instead.
{
    "topic": "subject here",
    "userId": "1",
    "token": "dsah9273bui3f92h3r83f82h3"
}
JSON is still a string, but it allows me to pass through more data in place of the "topic", and it's simple for PHP to do a json_decode() on the other end. Of course, you should validate that you actually receive JSON, but that's up to your implementation.
So what am I passing through here, and why?
Topic
The topic is the subject the user is subscribing to. You use this to decide what data you pass back to the user.
UserId
Obviously the ID of the user. You must verify that this user exists and is allowed to subscribe, using the next part:
Token
This should be a one-use, randomly generated token, generated in your PHP and passed to a JavaScript variable. When I say "one use", I mean that every time you reload the page (and, by extension, on every HTTP request), your JavaScript variable should contain a new token. This token should be stored in the database against the user's ID.
Then, once a websocket request is made, you match the token and user id against those in the database to make sure the user is indeed who they say they are and hasn't been messing around with the JS variables.
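For illustration, generating and storing such a token in PHP could look like this sketch (saveTokenForUser() and tokenMatchesUser() are hypothetical helpers for your own storage layer):
<?php
// On every page load: generate a fresh one-use token for the logged-in user
// and expose it to JavaScript (openssl_random_pseudo_bytes suits the PHP 5.x era)
$token = bin2hex(openssl_random_pseudo_bytes(16));
saveTokenForUser($userId, $token); // hypothetical: store against the user's row
echo '<script>var wsToken = ' . json_encode($token) . ';</script>';

// Later, in onSubscribe(), after json_decode():
// if (!tokenMatchesUser($data->userId, $data->token)) { /* reject the subscription */ }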
Note: In your event handler, you can use $conn->remoteAddress to get the IP of the connection, so if someone is trying to connect maliciously, you can block them (log them or something).
Why does this work?
It works because every time a new connection comes through, the unique token ensures that no user will have access to anyone else's subscription data.
The Server
Here's what I am using for running the loop and event handler. I am creating the loop, doing all the decorator style object creation, and passing in my EventHandler (which I'll come to soon) with the loop in there too.
$loop = Factory::create();

$webSock = new React\Socket\Server($loop); // as in the server script above
$webSock->listen(8080, '0.0.0.0');

new IoServer(
    new WsServer(
        new WampServer(
            new EventHandler($loop) // This is my class. Pass in the loop!
        )
    ),
    $webSock
);

$loop->run();
The Event Handler
class EventHandler implements WampServerInterface, MessageComponentInterface
{
    /**
     * @var \React\EventLoop\LoopInterface
     */
    private $loop;

    /**
     * @var array List of connected clients
     */
    private $clients = array();

    /**
     * Pass in the React event loop here
     */
    public function __construct(LoopInterface $loop)
    {
        $this->loop = $loop;
    }

    /**
     * A user connects; we store the connection by its unique resource id
     */
    public function onOpen(ConnectionInterface $conn)
    {
        $this->clients[$conn->resourceId]['conn'] = $conn;
    }

    /**
     * A user subscribes. The JSON is in $subscription->getId()
     */
    public function onSubscribe(ConnectionInterface $conn, $subscription)
    {
        // This is the JSON passed in from your JavaScript.
        // Obviously you need to validate it's JSON and expected data etc...
        $data = json_decode($subscription->getId());

        // Validate the user's id and token together against the db values

        // Now, let's subscribe this user only
        // 5 = the interval, in seconds
        $timer = $this->loop->addPeriodicTimer(5, function() use ($subscription) {
            $data = "whatever data you want to broadcast";
            return $subscription->broadcast(json_encode($data));
        });

        // Store the timer against that user's connection resource id
        $this->clients[$conn->resourceId]['timer'] = $timer;
    }

    public function onClose(ConnectionInterface $conn)
    {
        // There might be a connection without a timer,
        // so make sure there is one before trying to cancel it!
        if (isset($this->clients[$conn->resourceId]['timer'])) {
            if ($this->clients[$conn->resourceId]['timer'] instanceof TimerInterface) {
                $this->loop->cancelTimer($this->clients[$conn->resourceId]['timer']);
            }
        }
        unset($this->clients[$conn->resourceId]);
    }

    /** Implement all the extra methods the interfaces say that you must use **/
}
That's basically it. The main points here are:
The unique token, user id, and connection id provide the unique combination required to ensure that one user can't see another user's data.
The unique token means that if the same user opens another page and requests to subscribe, they'll have their own connection id + token combo, so the same user won't have duplicate subscriptions on the same page (basically, each connection has its own individual data).
Extension
You should ensure all data is validated and not a hack attempt before you do anything with it. Log all connection attempts using something like Monolog, and set up e-mail forwarding if anything critical occurs (like the server stopping because someone is attempting to hack it).
Closing Points
Validate everything. I can't stress this enough. Your unique token that changes on every request is important.
Remember, if you re-generate the token on every HTTP request and you make a POST request before attempting to connect via websockets, you'll have to pass the re-generated token back to your JavaScript before trying to connect (otherwise your token will be invalid).
Log everything. Keep a record of everyone that connects, asks for a topic, and disconnects. Monolog is great for this.
To send to specific users, you need a ROUTER-DEALER pattern instead of PUB-SUB. This is explained in the Guide, in chapter 3. Security, if you're using ZMQ v4.0, is handled at the wire level, so you don't see it in the application. It still requires some work, unless you use the CZMQ binding, which provides an authentication framework (zauth).
Basically, to authenticate, you install a handler on inproc://zeromq.zap.01 and respond to requests over that socket. Google "ZeroMQ ZAP" for the RFC; there is also a test case in the core libzmq tests/test_security_curve.cpp program.
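To make the ROUTER side concrete, here is a minimal sketch using the php-zmq binding (the port and payload handling are arbitrary choices): a ROUTER socket prepends an identity frame to each incoming message, and echoing that frame back addresses the reply to that one peer only.
<?php
// Minimal ROUTER sketch: reply to one specific peer instead of broadcasting
$context = new ZMQContext();
$router  = $context->getSocket(ZMQ::SOCKET_ROUTER);
$router->bind("tcp://127.0.0.1:5556");

while (true) {
    // ROUTER receives [identity, ..., payload] for every DEALER message
    $frames   = $router->recvMulti();
    $identity = $frames[0];
    $payload  = $frames[count($frames) - 1];

    // The first frame addresses the recipient, so only that peer gets this reply
    $router->sendMulti(array($identity, "private reply to: " . $payload));
}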

Is it possible to block cookies from being set using Javascript or PHP?

A lot of you are probably aware of the new EU privacy law, but for those who are not, it basically means no site operated by a company resident in the EU can set cookies classed as 'non-essential to the operation of the website' on a visitor's machine unless given express permission to do so.
So, the question becomes how to best deal with this?
Browsers obviously have the ability to block cookies from a specific website built into them. My question is: is there a way of doing something similar using JS or PHP?
i.e. intercept any cookies that are about to be set (including 3rd-party cookies like Analytics or Facebook) and block them unless the user has given consent.
It's obviously possible to delete all cookies once they have been set, but although this amounts to the same thing as not allowing them to be set in the first place, I'm guessing that it's not good enough in this case because it doesn't adhere to the letter of the law.
Ideas?
I'm pretty interested in this answer too. I've accomplished what I need to accomplish in PHP, but the JavaScript component still eludes me.
Here's how I'm doing it in PHP:
$dirty = false;
foreach (headers_list() as $header) {
    if ($dirty) continue; // I already know it needs to be cleaned
    if (preg_match('/Set-Cookie/', $header)) $dirty = true;
}

if ($dirty) {
    $phpversion = explode('.', phpversion());
    if ($phpversion[1] >= 3) {
        header_remove('Set-Cookie'); // PHP 5.3
    } else {
        header('Set-Cookie:'); // PHP 5.2
    }
}
Then I have some additional code that turns this off when the user accepts cookies.
The problem is that there are third-party plugins in use on my site that manipulate cookies via JavaScript, and short of scanning through them all to determine which ones access document.cookie, they can still set cookies.
It would be convenient if they all used the same framework, so I might be able to override a setCookie function - but they don't.
It would be nice if I could just delete or disable document.cookie so it becomes inaccessible...
EDIT:
It is possible to prevent JavaScript from getting or setting cookies:
document.__defineGetter__("cookie", function() { return '';} );
document.__defineSetter__("cookie", function() {} );
EDIT 2:
For this to work in IE:
if (!document.__defineGetter__) {
    Object.defineProperty(document, 'cookie', {
        get: function() { return ''; },
        set: function() { return true; }
    });
} else {
    document.__defineGetter__("cookie", function() { return ''; });
    document.__defineSetter__("cookie", function() {});
}
I adapted Michael's code from here to come up with this.
Basically it uses the defineGetter and defineSetter methods to set all the cookies on the page and then remove the user-specified ones; this role could of course also be reversed if that is what you are aiming for.
I have tested this with third-party cookies such as Google Analytics and it appears to work well (excluding the __utmb cookie means I am no longer picked up in Google Analytics); maybe you could use this and adapt it to your specific needs.
I've included the check on whether a cookie's name is __utmb for your reference, although you could easily take these values from an array and loop through them instead.
Basically this function will include all cookies except those specified in the part that states if( cookie_name.trim() != '__utmb' ) { all_cookies = all_cookies + cookies[i] + ";"; }
You could extend this using OR or AND filters, or pull from an array, a database, user input, or whatever you like to exclude specific cookies (useful for distinguishing between essential and non-essential cookies).
function deleteSpecificCookies() {
    var cookies = document.cookie.split(";");
    var all_cookies = '';

    for (var i = 0; i < cookies.length; i++) {
        var cookie_name  = cookies[i].split("=")[0];
        var cookie_value = cookies[i].split("=")[1];

        // Keep every cookie except the ones you want to block
        if (cookie_name.trim() != '__utmb') {
            all_cookies = all_cookies + cookies[i] + ";";
        }
    }

    if (!document.__defineGetter__) {
        Object.defineProperty(document, 'cookie', {
            get: function() { return all_cookies; },
            set: function() { return true; }
        });
    } else {
        document.__defineGetter__("cookie", function() { return all_cookies; });
        document.__defineSetter__("cookie", function() { return true; });
    }
}
You cannot disable cookies completely, but you can override the default session setting from .htaccess. Note that SetEnv does not change a PHP ini setting; with mod_php the directive you want is php_value.
Try
php_value session.use_cookies 0
If it is optional for some users, don't use .htaccess; set it conditionally in PHP instead:
if (!$isAuth) {
    ini_set('session.use_cookies', '0');
}
A little bit old, but I think you deserve an answer that works:
Step 1: Don't execute the third-party script code.
Step 2: Show the cookie banner.
Step 3: Wait until the user accepts; only then execute the third-party script code (see the server-side sketch below).
Worked for me.
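For completeness, since the question mentions PHP: one way to implement steps 1 and 3 server-side is to only emit the third-party snippet once a consent cookie exists (a sketch; the cookie name 'cookies_accepted' is arbitrary):
<?php if (isset($_COOKIE['cookies_accepted'])): ?>
    <!-- Third-party snippets (Analytics, Facebook, ...) go here,
         so they never execute before the user has accepted -->
    <script src="https://www.google-analytics.com/analytics.js" async></script>
<?php else: ?>
    <!-- Show the cookie banner instead; set the 'cookies_accepted'
         cookie from its Accept button -->
<?php endif; ?>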
How about not paying attention to hoaxes?
Aside from the fact that this is old news, the text clearly says that it only applies to cookies that are not essential to the site's function. This means session cookies, a shopping basket, or anything directly related to making the site work are perfectly fine. Anything else (tracking, stats, etc.) is "not allowed" without permission.
