I am using the 'Bandwidth Throttle' library to throttle API requests, essentially to prevent someone on the same IP from making a large number of requests within a set timeframe. It creates a bucket (simply a file) per IP that is stored in the buckets directory.
As this directory will build up considerably over time, what process does everyone use for this? Would you recommend purging the folder after a set amount of time, and if so, what timeframe would you suggest?
use bandwidthThrottle\tokenBucket\Rate;
use bandwidthThrottle\tokenBucket\TokenBucket;
use bandwidthThrottle\tokenBucket\storage\FileStorage;

require __DIR__ . '/vendor/autoload.php'; // Composer autoloader

$ip = $_SERVER['REMOTE_ADDR'];

// One bucket file per client IP -- this is the directory that will build up quickly
$storage = new FileStorage(__DIR__ . "/buckets/$ip.bucket");
$rate    = new Rate(10, Rate::SECOND);
$bucket  = new TokenBucket(10, $rate, $storage);
$bucket->bootstrap(10);

if (!$bucket->consume(1, $seconds)) {
    http_response_code(429);
    header(sprintf("Retry-After: %d", floor($seconds)));
    exit();
}
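For reference, this is roughly the cleanup I have in mind; the cutoff value here is just a placeholder, which is really what I'm asking about:

// Cleanup sketch: delete bucket files that have not been modified recently.
// An idle bucket has refilled to capacity anyway, so it can simply be
// recreated on the next request from that IP.
$cutoff = time() - 60 * 60 * 24; // placeholder: 24 hours

foreach (glob(__DIR__ . '/buckets/*.bucket') as $file) {
    if (filemtime($file) < $cutoff) {
        @unlink($file); // ignore races with another process deleting it first
    }
}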
I have an app made using Symfony 5, and I have a script that uploads a video located on the server to the logged-in user's channel.
Here's basically the code of my controller:
/**
 * Upload a video to YouTube.
 *
 * @Route("/upload_youtube/{id}", name="api_admin_video_upload_youtube", methods={"POST"}, requirements={"id" = "\d+"})
 */
public function upload_youtube(int $id, Request $request, VideoRepository $repository, \Google_Client $googleClient): JsonResponse
{
    $video = $repository->find($id);
    if (!$video) {
        return $this->json([], Response::HTTP_NOT_FOUND);
    }

    $data = json_decode($request->getContent(), true);

    $googleClient->setRedirectUri($_SERVER['CLIENT_URL'] . '/admin/videos/youtube');
    $googleClient->fetchAccessTokenWithAuthCode($data['code']);

    $videoPath = $this->getParameter('videos_directory') . '/' . $video->getFilename();

    $service = new \Google_Service_YouTube($googleClient);

    $ytVideo = new \Google_Service_YouTube_Video();

    $ytVideoSnippet = new \Google_Service_YouTube_VideoSnippet();
    $ytVideoSnippet->setTitle($video->getTitle());
    $ytVideo->setSnippet($ytVideoSnippet);

    $ytVideoStatus = new \Google_Service_YouTube_VideoStatus();
    $ytVideoStatus->setPrivacyStatus('private');
    $ytVideo->setStatus($ytVideoStatus);

    $chunkSizeBytes = 1 * 1024 * 1024;

    $googleClient->setDefer(true);
    $insertRequest = $service->videos->insert('snippet,status', $ytVideo);

    $media = new \Google_Http_MediaFileUpload($googleClient, $insertRequest, 'video/*', null, true, $chunkSizeBytes);
    $media->setFileSize(filesize($videoPath));

    // Upload the file in chunks until the API reports the upload is complete
    $uploadStatus = false;
    $handle = fopen($videoPath, 'rb');
    while (!$uploadStatus && !feof($handle)) {
        $chunk = fread($handle, $chunkSizeBytes);
        $uploadStatus = $media->nextChunk($chunk);
    }
    fclose($handle);

    return $this->json([]);
}
This basically works, but the problem is that the video can be very big (10 GB+), so the upload takes a very long time, and Nginx gives up and returns a "504 Gateway Timeout" before it completes.
In any case, I don't want the user to have to wait for a page to load while the upload runs.
So instead of running that script immediately, I'm looking for a way to execute it in some kind of background thread, or in an asynchronous way.
The controller would return a 200 to the user, and I can tell them that the upload is happening and to come back later to check progress.
How can I do this?
There are many ways to accomplish this, but what you basically want is to decouple the action trigger and its execution.
Simply:
Remove all heavy work from your controller. Your controller should at most just check that the video id provided by the client exists in your VideoRepository.
Exists? Good, then you need to store this "work order" somewhere.
There are many solutions for this, depending on what you have already installed, what technology you feel more comfortable with, etc.
For the sake of simplicity, let's say you have a PendingUploads table, with videoId, status, createdAt and maybe userId. So the only thing your controller would do is create a new record in this table (maybe checking that the job is not "queued" yet; that kind of detail is up to your implementation).
And then return 200 (or 202, which could be more appropriate in the circumstances).
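For illustration, the slimmed-down controller could be as small as this; PendingUpload, its status constants, and its setters are hypothetical names for the "work order" described above, and Doctrine's EntityManagerInterface is injected to persist the record:

/**
 * @Route("/upload_youtube/{id}", name="api_admin_video_upload_youtube", methods={"POST"}, requirements={"id" = "\d+"})
 */
public function upload_youtube(int $id, Request $request, VideoRepository $repository, EntityManagerInterface $em): JsonResponse
{
    $video = $repository->find($id);
    if (!$video) {
        return $this->json([], Response::HTTP_NOT_FOUND);
    }

    $data = json_decode($request->getContent(), true);

    // Only record the "work order"; the heavy lifting happens elsewhere.
    $job = new PendingUpload();               // hypothetical entity
    $job->setVideo($video);
    $job->setAuthCode($data['code'] ?? null); // the consumer will need the OAuth code
    $job->setStatus(PendingUpload::STATUS_PENDING);
    $job->setCreatedAt(new \DateTimeImmutable());

    $em->persist($job);
    $em->flush();

    // 202 Accepted: the work has been queued, not performed yet.
    return $this->json(['status' => 'queued'], Response::HTTP_ACCEPTED);
}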
You would then need to write a separate process, very likely a console command that you execute regularly (using cron would be the simplest way).
On each execution, that process (which would have all the Google_Client logic, and probably a PendingUploadsRepository) would check which jobs are pending upload, process them sequentially, and set their status to whatever you designate as done. You could use status 0 (pending), 1 (processing), and 2 (processed), for example, and update the status accordingly at each step of the script.
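A skeleton of that console command could look like this (again, the entity, repository, and status constant names are assumptions; the chunked upload logic from the question would move out of the controller and into the marked spot):

use App\Entity\PendingUpload;                  // hypothetical entity
use App\Repository\PendingUploadsRepository;   // hypothetical repository
use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class ProcessPendingUploadsCommand extends Command
{
    protected static $defaultName = 'app:process-pending-uploads';

    private PendingUploadsRepository $pendingUploads;
    private EntityManagerInterface $em;

    public function __construct(PendingUploadsRepository $pendingUploads, EntityManagerInterface $em)
    {
        parent::__construct();
        $this->pendingUploads = $pendingUploads;
        $this->em = $em;
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        foreach ($this->pendingUploads->findBy(['status' => PendingUpload::STATUS_PENDING]) as $job) {
            $job->setStatus(PendingUpload::STATUS_PROCESSING);
            $this->em->flush();

            // ... the Google_Client / chunked upload logic from the question goes here ...

            $job->setStatus(PendingUpload::STATUS_PROCESSED);
            $this->em->flush();
        }

        return Command::SUCCESS;
    }
}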
The details of exactly how to implement this are up to you; that question would be too broad and opinionated. Pick something that you already understand and that allows you to move faster. Whether you store your jobs in RabbitMQ, Redis, a database, or a flat file is not particularly important, and neither is whether you start your "consumer" with cron or Supervisor.
Symfony has a ready-made component that can decouple this kind of messaging asynchronously (Symfony Messenger), and it's pretty nice. Investigate whether it's your cup of tea, although if you are not going to use it for anything else in your application I would keep it simple to begin with.
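If you do go the Messenger route, the rough shape in Symfony 5 is a message class plus a handler. The names below are invented for illustration; you would still need an async transport configured in messenger.yaml, and the controller would dispatch through MessageBusInterface:

// src/Message/UploadVideoToYouTube.php (hypothetical message class)
class UploadVideoToYouTube
{
    private int $videoId;
    private string $authCode;

    public function __construct(int $videoId, string $authCode)
    {
        $this->videoId = $videoId;
        $this->authCode = $authCode;
    }

    public function getVideoId(): int { return $this->videoId; }
    public function getAuthCode(): string { return $this->authCode; }
}

// src/MessageHandler/UploadVideoToYouTubeHandler.php (hypothetical handler)
use Symfony\Component\Messenger\Handler\MessageHandlerInterface;

class UploadVideoToYouTubeHandler implements MessageHandlerInterface
{
    public function __invoke(UploadVideoToYouTube $message): void
    {
        // The Google_Client / chunked upload logic from the question goes here,
        // loading the video by $message->getVideoId().
    }
}

// In the controller: $bus->dispatch(new UploadVideoToYouTube($video->getId(), $data['code']));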
I've created a function to get the likes for my Facebook page using the Graph API. However, the rate limit keeps being reached because the function is called on every request.
How would I cache this so it doesn't make the call every time?
The code I'm currently using is:
function fb_like_count() {
    $id = '389320241533001';
    $access_token = 'access token goes here';
    $json_url = 'https://graph.facebook.com/v3.2/' . $id . '?fields=fan_count&access_token=' . $access_token;
    $json = file_get_contents($json_url);
    $json_output = json_decode($json);

    if ($json_output->fan_count) {
        return like_count_format($json_output->fan_count);
    } else {
        return 0;
    }
}
There are many cache mechanisms in PHP that you can use, depending on your project size.
I would suggest you look at Memcached or Redis. These are in-memory cache mechanisms that are pretty fast and would help you get better performance.
You can read more about how to implement Memcached here or Redis here.
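As a rough illustration of the in-memory route, the existing function could be wrapped with the phpredis extension like this (the key name, the 15-minute TTL, and the wrapper function name are arbitrary choices for the example):

function fb_like_count_cached() {
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);

    $cacheKey = 'fb_like_count';
    $cached = $redis->get($cacheKey);
    if ($cached !== false) {
        return $cached; // served from cache, no Graph API call
    }

    $count = fb_like_count();                       // the existing function that hits the Graph API
    $redis->setEx($cacheKey, 900, (string) $count); // keep it for 15 minutes

    return $count;
}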
The second and easier way is to use file caching. It works like this:
You send a request to the Facebook API and, when the response is returned, you save it to a file. Before sending the next request, you first check whether the file has recent content; if it does, you return that directly to your application, otherwise you send the request to the Facebook API again.
A simple integration looks like this:
$facebook_cache_file = __DIR__ . '/cache/facebook.json'; // wherever you want the cache file to live

if (file_exists($facebook_cache_file) && (filemtime($facebook_cache_file) > (time() - 60 * 15))) {
    // Cache file is less than 15 minutes old (but you can change this).
    $response = file_get_contents($facebook_cache_file); // this holds the API data
} else {
    // Our cache is out of date, so load the data from the remote server,
    // and also save it over our cache for next time.
    $response = getFacebookData(); // get data from Facebook
    file_put_contents($facebook_cache_file, $response, LOCK_EX);
}
Anyway, I would suggest using a PHP library for file caching.
Below you can find some that might be interesting to look at:
https://github.com/PHPSocialNetwork/phpfastcache
https://symfony.com/doc/current/components/cache.html
I am creating a REST API with Laravel that allows a user to select an image file from their Android device and upload it to the server. The image is converted to base64 before it's sent to the server alongside other parameters. I want to convert this base64 back to a file, store it on the server, and generate a link that can be used to access it. Here is what I have tried so far and it doesn't work (I have already created a symlink to storage):
public function create(Request $request)
{
    $location = new Location();
    $location->name = $request->location_name;
    $location->latitude = $request->latitude;
    $location->longitude = $request->longitude;
    $location->saveOrFail();

    $provider = new Provider();
    $provider->name = $request->provider_name;
    $provider->location_id = $location->id;
    $provider->category_id = $request->category_id;
    $provider->description = $request->description;
    $provider->image = request()->file(base64_decode($request->encoded_image))->store('public/uploads');
    $provider->saveOrFail();

    return json_encode(array('status' => 'success', 'message' => 'Provider created successfully'));
}
As already commented by Amando, you can use the Intervention/Image package; having used it for many years, I can say it will do what you want and do it very well.
What I would also add, though, is that you may want to consider whether you need to store it as a file at all.
Depending on what it will be used for, and its size etc., you could consider storing it in the DB itself, along with any other information. This removes the dependency on a file server and makes your application much more flexible with regards to infrastructure requirements.
At the end of the day, files are just data, if you will always get the file when you get the other data, reduce the steps and keep related data together.
Either way, hope you get it sorted :)
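For what it's worth, since the payload arrives as a base64 string rather than an uploaded file, a minimal sketch of decoding it and writing it with Laravel's Storage facade might look like this (the generated filename and the jpg extension are assumptions about the payload):

use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

// Inside create(): turn the base64 payload back into a file on the "public" disk.
$imageData = base64_decode($request->encoded_image);

// Assumption: the client sends a JPEG; adjust the extension to match your payload.
$path = 'uploads/' . Str::uuid() . '.jpg';
Storage::disk('public')->put($path, $imageData);

// Resolves through the storage symlink mentioned in the question, e.g. /storage/uploads/<uuid>.jpg
$provider->image = Storage::disk('public')->url($path);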
I am in the process of coding a cloud monitoring application and couldn't find useful logic for getting performance counters (such as CPU utilization, disk utilization, RAM usage) in the Azure PHP SDK documentation.
Can anybody help?
define('PRODUCTION_SITE', false); // Controls connections to cloud or local storage
define('AZURE_STORAGE_KEY', '<your_storage_key>');
define('AZURE_SERVICE', '<your_domain_extension>');
define('ROLE_ID', $_SERVER['RoleDeploymentID'] . '/' . $_SERVER['RoleName'] . '/' . $_SERVER['RoleInstanceID']);
define('PERF_IN_SEC', 30); // How many seconds between dumps of performance metrics to table storage

/** Microsoft_WindowsAzure_Storage_Blob */
require_once 'Microsoft/WindowsAzure/Storage/Blob.php';
/** Microsoft_WindowsAzure_Diagnostics_Manager */
require_once 'Microsoft/WindowsAzure/Diagnostics/Manager.php';
/** Microsoft_WindowsAzure_Storage_Table */
require_once 'Microsoft/WindowsAzure/Storage/Table.php';

if (PRODUCTION_SITE) {
    $blob = new Microsoft_WindowsAzure_Storage_Blob(
        'blob.core.windows.net',
        AZURE_SERVICE,
        AZURE_STORAGE_KEY
    );
    $table = new Microsoft_WindowsAzure_Storage_Table(
        'table.core.windows.net',
        AZURE_SERVICE,
        AZURE_STORAGE_KEY
    );
} else {
    // Connect to local Storage Emulator
    $blob = new Microsoft_WindowsAzure_Storage_Blob();
    $table = new Microsoft_WindowsAzure_Storage_Table();
}

$manager = new Microsoft_WindowsAzure_Diagnostics_Manager($blob);

//////////////////////////////
// Bring in global include file
require_once('setup.php');

// Performance counters to subscribe to
$counters = array(
    '\Processor(_Total)\% Processor Time',
    '\TCPv4\Connections Established',
);

// Retrieve the current configuration information for the running role
$configuration = $manager->getConfigurationForRoleInstance(ROLE_ID);

// Add each subscription counter to the configuration
foreach ($counters as $c) {
    $configuration->DataSources->PerformanceCounters->addSubscription($c, PERF_IN_SEC);
}

// These settings are required by the diagnostics manager to know when to transfer the metrics to the storage table
$configuration->DataSources->OverallQuotaInMB = 10;
$configuration->DataSources->PerformanceCounters->BufferQuotaInMB = 10;
$configuration->DataSources->PerformanceCounters->ScheduledTransferPeriodInMinutes = 1;

// Update the configuration for the current running role
$manager->setConfigurationForRoleInstance(ROLE_ID, $configuration);

///////////////////////////////////////
// Bring in global include file
//require_once('setup.php');

// Grab all entities from the metrics table
$metrics = $table->retrieveEntities('WADPerformanceCountersTable');

// Loop through metric entities and display results
foreach ($metrics as $m) {
    echo $m->RoleInstance . " - " . $m->CounterName . ": " . $m->CounterValue . "<br/>";
}
This is the code I crafted to extract processor info...
UPDATE
Do take a look at the following blog post: http://blog.maartenballiauw.be/post/2010/09/23/Windows-Azure-Diagnostics-in-PHP.aspx. I realize that it's an old post but I think this should give you some idea about implementing diagnostics in your role running PHP. The blog post makes use of PHP SDK for Windows Azure on CodePlex which I think is quite old and has been retired in favor of the new SDK on Github but I think the SDK code on Github doesn't have diagnostics implemented (and that's a shame).
ORIGINAL RESPONSE
Since performance counters data is stored in Windows Azure Table Storage, you could simply use Windows Azure SDK for PHP to query WADPerformanceCountersTable in your storage account to fetch this data.
I have written a blog post about efficiently fetching diagnostics data sometime back which you can read here: http://gauravmantri.com/2012/02/17/effective-way-of-fetching-diagnostics-data-from-windows-azure-diagnostics-table-hint-use-partitionkey/.
Update
Looking at your code above and the source code for TableRestProxy.php, you could include a query as the 2nd parameter to your retrieveEntities call. You could do something like:
$query = "(CounterName eq '\Processor(_Total)\% Processor Time' or CounterName eq '\TCPv4\Connections Established')";
$metrics = $table->retrieveEntities('WADPerformanceCountersTable', $query);
Please note that my knowledge about PHP is limited to none so the code above may not work. Also, please ensure to include PartitionKey in your query to avoid full table scan.
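To illustrate the PartitionKey advice, following the pattern in the blog post linked above: the PartitionKey of WADPerformanceCountersTable is "0" followed by the .NET tick count of the event time, so you can restrict the query to a recent window instead of scanning the whole table. A sketch, with the tick conversion done by hand (treat the exact key formatting as an assumption and verify against your table):

// .NET ticks are 100-nanosecond intervals since 0001-01-01; the Unix epoch is
// 62135596800 seconds after that date. Requires 64-bit PHP for the multiplication.
$fifteenMinutesAgo = time() - 15 * 60;
$ticks = ($fifteenMinutesAgo + 62135596800) * 10000000;

$query = "(PartitionKey ge '0" . $ticks . "') and "
       . "(CounterName eq '\Processor(_Total)\% Processor Time')";
$metrics = $table->retrieveEntities('WADPerformanceCountersTable', $query);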
Storage Analytics Metrics aggregates transaction data and capacity data for a storage account. Transaction metrics are recorded for the Blob, Table, and Queue services. Currently, capacity metrics are only recorded for the Blob service. Transaction data and capacity data are stored in the following tables:
$MetricsCapacityBlob
$MetricsTransactionsBlob
$MetricsTransactionsTable
$MetricsTransactionsQueue
The above tables are not displayed when a listing operation is performed, such as the ListTables method. Each table must be accessed directly.
When you retrieve metrics, use these tables.
Ex:
$metrics = $table->retrieveEntities('$MetricsCapacityBlob');
URL:
http://msdn.microsoft.com/en-us/library/windowsazure/hh343264.aspx
Hey Guys,
I was bent on improving my page speed scores, and yesterday I got some cloud space on Rackspace Cloud. Before this I was serving static content from a cookieless domain with proper cache control through .htaccess.
Now that I have moved to the cloud, my .htaccess does not control the cloud files. There is a TTL parameter on Rackspace that sets how long the files should stay on the CDN, and that value is reflected in my Page Speed results (Google + Firebug). The default setting can be a maximum of 72 hours, but I need something above 7 days. There is an API for that, but it's kind of complex.
Is there any way I can enforce cache control on my cloud files?
Also, what do these query strings do: domain.com/file.css?cache=0.54454334?
Do they achieve what I am looking for?
Any help is appreciated.
You may have figured it out already, but here's a link to check out: Set far-future expires headers with Rackspace Cloud Files (sort of).
He is using the cloudfiles PHP API, and so am I. You can manually set the TTL (aka Expires) headers to whatever you want. Right now I have mine set to 365 days (maybe a little excessive).
The documentation is fairly straightforward. If you need any help, this code should help you get started:
<?php
// include the API
require('cloudfiles.php');
// cloud info
$username = "myusername"; // username
$key = "c2dfa30bf91f345cf01cb26d8d5ea821"; // api key
// Connect to Rackspace
$auth = new CF_Authentication($username, $key);
$auth->authenticate();
$conn = new CF_Connection($auth);
// Get the container we want to use
$container = $conn->create_container('images');
// store file information
$filename = "images/logo.jpg"; // path to the file on the local server

// upload file to Rackspace
$object = $container->create_object($filename);
$object->load_from_filename($filename);
// make public, and set headers
$container->make_public(86400 * 365); // expires headers set to 365 days
?>