Creating a game has been a dream of mine since late childhood, and now that I actually know how, I decided to fulfill it and started working on a little game project in my spare time. It's basically a combat game where you and your opponent each have up to 3 units, and you take turns (since it's HTTP, you know how that goes) to attack each other, cast spells, and so on. The issue I've run into is with abilities and how to store them. If I were to store abilities in an array, it would look something like this:
$abilities = array(
    0 => array(
        'name'   => 'Fire ball',
        'desc'   => 'Hurls a fire ball at your enemy, dealing X damage.',
        'effect' => function($data){
            $data['caster']->damage($data['target'], $data['caster']->magicPower);
        }
    ),
    1 => array(...
);
But if I were to store abilities this way, every time I needed to fetch information about a single ability I would have to load the whole array, and it's probably going to get pretty big over time, so that would be a tremendous waste of memory. So I jumped to my other option: saving the abilities in a MySQL table. However, I'm having issues with the effect part. How can I save a function into a MySQL field and run it on demand?
Or, if you can suggest another way to save the abilities that I may have missed, I'm open to it.
To answer your question about storing arrays in a database like MySQL: you can serialize the array as a string. Plain serialization is not going to work here, though, because it doesn't deal with closures.
You need a class like super_closure, which can serialize closures and convert them into strings. Read more here:
https://github.com/jeremeamia/super_closure
$helloWorld = new SerializableClosure(function($data){
    $data['caster']->damage($data['target'], $data['caster']->magicPower);
});
$serializedFunc = serialize($helloWorld);
Now you can create the array like this:
$abilities = array(
    0 => array(
        'name'   => 'Fire ball',
        'desc'   => 'Hurls a fire ball at your enemy, dealing X damage.',
        'effect' => $serializedFunc
    )
);
This array can now be saved directly, serialized, or encoded as JSON.
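Running an ability later is the reverse operation. A minimal sketch, assuming $storedRow holds the serialized ability array fetched back out of your table:
// Restore the ability array, then restore and invoke the closure.
$ability = unserialize($storedRow);
$effect  = unserialize($ability['effect']); // a SerializableClosure instance
$effect($data); // SerializableClosure is invokable, so this runs the effect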
I would also recommend looking at Redis or Memcached for caching query results, and not using MySQL to store functions.
You could have three tables:
spell
    id
    name
    description
spell_effect
    id
    name
    serversidescript
spell_effect_binder
    spell_id
    spell_effect_id
This would make sure that your logic lives in PHP files, wherever you would like them to be located, while all the metadata about the spells, the effects, and how they bind together stays in the database. It means you only load the function/script for the spells actually in use, and it gives you the possibility to attach multiple effects to one spell.
//Firedamage.php
class Firedamage {
    public function calculateEffects($level, $caster, $target) {
        $extraDamage = 5 * $level;
        $randDamage  = rand(10, 50);
        $caster->damage($target, $randDamage + $extraDamage);
    }
}
spell_effect entry:
    id = 1
    name = 'firedamage'
    serversidescript = 'Firedamage.php'
spell entry:
    id = 1
    name = 'Fireball'
    description = 'Hurls a fireball at your foe'
spell_effect_binder entry:
    spell_id = 1
    spell_effect_id = 1
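To actually cast a spell you would look up its bound effects, include each script, and call the effect class. A rough sketch against the schema above; the loader itself, along with $pdo, $effectsDir, $spellId, $level, $caster and $target, is hypothetical:
// Fetch every effect bound to the spell, include its script and run it.
$stmt = $pdo->prepare(
    'SELECT se.name, se.serversidescript
       FROM spell_effect se
       JOIN spell_effect_binder b ON b.spell_effect_id = se.id
      WHERE b.spell_id = ?'
);
$stmt->execute(array($spellId));
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    require_once $effectsDir . '/' . $row['serversidescript'];
    $class  = ucfirst($row['name']); // 'firedamage' => 'Firedamage'
    $effect = new $class();
    $effect->calculateEffects($level, $caster, $target);
}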
I'm retrieving some traffic data for a website using the Scan operation in DynamoDB, and I've used a FilterExpression to filter the results.
I will be scanning against a large table that will have more than 20 GB of data.
I found that DynamoDB scans through the entire table and filters the results afterwards. The documentation says it only returns 1 MB of data per call, and then I have to loop through again to get the rest. That seems like a bad way to make this work.
I got the reference from here: Dynamodb filter expression not returning all results
For a small table that should be fine.
MySQL does the same, I guess. I'm not sure.
Which is faster to read on a large data set: a MySQL SELECT or a DynamoDB scan?
Is there any other alternative? What are your thoughts and suggestions?
I'm trying to migrate the traffic data into a DynamoDB table and then query it out. It seems like a bad idea to me now.
$params = [
    'TableName'                 => $tableName,
    'FilterExpression'          => $this->filter.'=:'.$this->filter.' AND #dy > :since AND #dy < :now',
    'ExpressionAttributeNames'  => [ '#dy' => 'day' ],
    'ExpressionAttributeValues' => $eav
];
var_dump($params);
try {
    $result = $dynamodb->scan($params);
} catch (DynamoDbException $e) {
    // handle/log the failed scan
    echo $e->getMessage();
}
After considering the suggestion, this is what worked for me:
$params = [
    'TableName'                 => $tableName,
    'IndexName'                 => self::GLOBAL_SECONDARY_INDEX_NAME,
    'ProjectionExpression'      => '#dy, t_counter, traffic_type_id',
    'KeyConditionExpression'    => 'country=:country AND #dy between :since AND :to',
    'FilterExpression'          => 'traffic_type_id=:traffic_type_id',
    'ExpressionAttributeNames'  => ['#dy' => 'day'],
    'ExpressionAttributeValues' => $eav
];
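One caveat applies to Query just as it does to Scan: each response is capped at 1 MB, so you still page through results. A sketch, assuming $dynamodb is an AWS SDK for PHP DynamoDbClient:
// Page through the results; DynamoDB returns at most 1 MB per call.
$items = [];
do {
    $result = $dynamodb->query($params);
    foreach ($result['Items'] as $item) {
        $items[] = $item;
    }
    if (!isset($result['LastEvaluatedKey'])) {
        break; // no more pages
    }
    $params['ExclusiveStartKey'] = $result['LastEvaluatedKey'];
} while (true);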
If your data is like key-value pairs and you have fixed fields you want to index on, use DynamoDB: you can create indexes on all the fields you want to query, and it will work great.
If you require complex querying over multiple indexes, then any RDBMS is good.
If you need to query on just about anything, think about Elasticsearch.
If your queries are very simple but you retrieve large amounts of data per query, think about S3. You could index the metadata in DynamoDB while the actual data lives in S3.
I have an array like this, and it has 120 elements in it:
array (size=120)
  0 =>
    array (size=8)
      'name' => string 'That the quick brown fox jumps over the lazy - 7' (length=53)
      'url' => string 'google.com/zyx' (length=134)
      'category' => string 'search-engine' (length=6)
  1 =>
    array (size=8)
      'name' => string 'Mr. john brandy gave me a wall nut of quite' (length=67)
      'url' => string 'yahoo.com/dzxser' (length=166)
      'category' => string 'indian' (length=6)
I want to insert them into my bookmark table, whose model I have created, and I want to make sure duplication doesn't occur. I have found this https://laravel.com/docs/5.4/eloquent#other-creation-methods, specifically the firstOrCreate method.
I assume I have to use a foreach, but I am not sure how. Can anyone help me with a workaround?
Actually you don't need firstOrCreate; you need updateOrCreate. Checking Laravel's other creation methods, you will find that method.
Say that array is in $alldata:
foreach ($alldata as $data) {
    MyModel::updateOrCreate($data); // the fields must be fillable in the model
}
This will run an update-or-create query for each of the 120 elements as it cycles through the loop. The advantage is that you cannot end up with a duplicate: if a record is repeated, it only performs an update on the table.
However, the best way to ensure there is no duplication, whatever way the data comes in, is to set it up when creating your database table. You can put unique constraints on several fields if that's your case.
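If you combine both ideas, you can also tell updateOrCreate exactly which column identifies a duplicate. A sketch, assuming the model is called Bookmark and url is the unique field:
foreach ($alldata as $data) {
    // Match on url only; refresh name and category when the row already exists.
    Bookmark::updateOrCreate(
        ['url' => $data['url']],
        ['name' => $data['name'], 'category' => $data['category']]
    );
}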
If you don't want duplication to occur when inserting an array of records, then all you have to do is set a constraint making sure the relevant fields are unique.
If you're using migrations to create the database schema, you can use something like this: $table->string('name')->unique();
Now, for example, this will make sure the 'name' column data is unique across the table.
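In a full migration that could look like the sketch below; the table and column names are assumptions based on the question:
// Inside a migration's up() method.
Schema::create('bookmarks', function (Blueprint $table) {
    $table->increments('id');
    $table->string('name');
    $table->string('url')->unique(); // rejects duplicate URLs at the database level
    $table->string('category');
    $table->timestamps();
});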
I have looked in a lot of places, including on this site, and I am at a development block. I'm making a website that stores things in MySQL using PHP. Part of this website stores the types of games a user plays, split into three categories: Game / Platform / Username.
As a coder, my instinct is to make a set of parallel vectors/ArrayLists to hold these things, since the number of games they might play is undefined. I was thinking of making a table, but every time I try, it doesn't show up in the database. From what I've found, people say not to use arrays for databases, since that defeats the purpose of a database. How would I store these fields dynamically?
I also found there is a function called serialize(), which I don't understand exactly how to use, though I can look that up. However, people say it's not the "proper" way of doing something like this, but they could be wrong.
serialize() takes an array and converts it to a string. You can then take that string and save it in a single table entry.
It might go something like this:
$user = array();
$games = array();
$games[] = array("Game" => "Chess", "Platform" => "Computer", "UserName" => "ChessMaster");
$games[] = array("Game" => "Checkers", "Platform" => "Mobile", "UserName" => "CheckersPlayer");
$games[] = array("Game" => "Solitaire", "Platform" => "Computer", "UserName" => "SolitaireUser");
$user[] = $games;
Now in order to save the entry into a database you convert the user array into a string with serialize().
$serializedUser = serialize($user);
Which will output this:
//echo $serializedUser;
a:1:{i:0;a:3:{i:0;a:3:{s:4:"Game";s:5:"Chess";s:8:"Platform";s:8:"Computer";s:8:"UserName";s:11:"ChessMaster";}i:1;a:3:{s:4:"Game";s:8:"Checkers";s:8:"Platform";s:6:"Mobile";s:8:"UserName";s:14:"CheckersPlayer";}i:2;a:3:{s:4:"Game";s:9:"Solitaire";s:8:"Platform";s:8:"Computer";s:8:"UserName";s:13:"SolitaireUser";}}}
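Reading it back is just the reverse; unserialize() rebuilds the nested array:
$restoredUser = unserialize($serializedUser);
echo $restoredUser[0][0]['Game']; // prints "Chess"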
I have an array like this:
$conditions = array("Post.title" => "This is a post");
And I'm using the $conditions array in this method:
$this->Post->find('first', array('conditions' => $conditions));
I want to convert the $conditions array to a normal SQL query.
I want to use
$this->Post->query($converted_query);
instead of
$this->Post->find('first', array('conditions' => $conditions));
$null = null;
echo $this->getDataSource()->generateAssociationQuery($this, null, null, null, null, $query_array, false, $null);
To do what you want, you could do one of two things:
1) Combine your $conditions arrays and let CakePHP build the new query, so you can simply use $this->Model->find() again.
2) Use this. It's an extension for the MySQL datasource that adds the option to do $this->Model->find('sql', array('conditions' => $conditions)), which returns the SQL query. This option might cause trouble, because for some find calls CakePHP issues multiple queries to fetch associated models (especially in the case of hasMany associations).
If at all possible, option 1 will probably cause the least trouble. Another problem with option 2 is that if you try to combine two queries with conflicting conditions (like 'name = Hansel' in query 1 and 'name = Gretel' in query 2), you will simply find nothing, unless you plan on writing extra code to parse the resulting queries and look for conflicts.
Going with option 1 will probably be a lot simpler and avoid most of those problems.
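For option 1, combining condition arrays usually just means nesting them under an 'OR' (or 'AND') key. A sketch, where 'Post.author' is a made-up second condition:
$combined = array('OR' => array(
    array('Post.title'  => 'This is a post'),
    array('Post.author' => 'Hansel'),
));
$this->Post->find('first', array('conditions' => $combined));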
I'm working with an MLS real estate listing provider (RETS). Every 48 hours we pull data from their server in a cron job into an SQL database. I'm charged with writing a PHP script that will run after the data from the remote server is dumped into our "raw" tables. In these raw tables all columns are VARCHAR(255), and we want to move the data into optimized tables. Before I send my script to the guy in charge of setting up the cron job, I wondered if there is a more efficient way to do it, so I don't look foolish.
Here's what I'm doing:
There are 8 tables in total, 4 raw and 4 optimized, all in the same database. The raw table column names are non-descriptive, like c1, c2, c3, c4, etc. This is intentional, because the data that goes in each column may change. The raw table column names are mapped to the correct optimized table columns with PHP, something like this:
$tables['optimized_table_name1']['raw_table'] = 'raw_table_name1';
$tables['optimized_table_name1']['data_map'] = array(
    'c1' => array( // <--- "c1" is the raw table column name
        'column_name' => 'id',
        // I use other values for table creation,
        // but they don't matter to the question.
        // Just explaining why the array looks like this
        //'type' => 'VARCHAR',
        //'max_length' => 45,
        //'primary_key' => FALSE,
        // etc.
    ),
    'c9'  => array('column_name' => 'address'),
    'c25' => array('column_name' => 'baths'),
    'c2'  => array('column_name' => 'bedrooms') // etc.
);
I'm doing the same thing for each of the 4 tables: SELECT * FROM the raw table, read the config array and create a huge SQL insert statement, TRUNCATE the optimized table, then run the INSERT query.
foreach ($tables as $table_name => $config):
    $raw_table = $config['raw_table'];
    $data_map  = $config['data_map'];
    $fields    = array();
    $values    = array();
    $count     = 0;

    // Get the raw data and create an array mapped to the optimized table columns.
    $query = mysql_query("SELECT * FROM dbname.{$raw_table}");
    while ($row = mysql_fetch_assoc($query))
    {
        // Reading column names from my config file on first pass.
        // Setting up the array, will only run once per table.
        if (empty($fields))
        {
            foreach ($row as $key => $val)
            {   // Produces an array with the column names
                $fields[] = $data_map[$key]['column_name'];
            }
        }
        foreach ($row as $key => $val)
        {   // Assigns data to an array to be imploded later
            $values[$count][] = $val;
        }
        $count++;
    }

    // Create the INSERT statement string
    $insert = array();
    $sql = "\nINSERT INTO `{$table_name}` (`".implode('`,`', $fields)."`) VALUES\n";
    foreach ($values as $key => $vals)
    {
        foreach ($vals as &$val)
        {
            // Escape the data
            $val = mysql_real_escape_string($val);
        }
        // Using implode for simplicity, could avoid the nested foreach if I wanted to
        $insert[] = "('".implode("','", $vals)."')";
    }
    $sql .= implode(",\n", $insert).";\n";

    // TRUNCATE optimized table and run INSERT query here
endforeach;
Which produces something like this (only larger - about 15,000 records max per table, and one insert statement per table):
INSERT INTO `optimized_table_name1` (`id`,`beds`,`baths`,`town`) VALUES
('50300584','2','1','Fairfield'),
('87560584','3','2','New Haven'),
('76545584','2','1','Bristol');
Now I'll admit, I have been under the wing of an ORM for a long time and am not up on my vanilla mysql/php. This is a pretty simple task and I want to keep the code simple.
My questions:
1. Is the TRUNCATE/INSERT method a good way to do this?
2. Is there anything about my code that you can see being a problem? I know you see nested foreach loops and just shudder, but I want to keep the code as small and clean as possible and avoid lots of messy string concatenation (to produce the insert query). Like I said, I also haven't used native PHP functions for SQL in a long time.
3. I feel like it really doesn't matter if the code is not optimized when it runs at 3 AM every 2 days. Does it matter? Is this code going to perform OK?
4. Is there a better overall strategy to accomplish this task?
5. Do I need to be using transactions?
6. How can I be aware of errors that may occur in cron scripts?
Apologies if I don't use the correct cron jargon; it's new to me.
Keep it simple. An ORM would be swell for this task.
Answers:
1. Yes.
2. Your code is readable. At least I did not have any problems reading it.
3. We had a script that ran early in the morning. It was not optimized and consumed a lot of memory. After FOUR years it started to consume over 512 MB. I spent 2 hours optimizing it, and now it consumes 7 MB (pretty good optimization, huh? :) ). I personally think it is "ok" that your script is not optimized now. If this script starts failing, you'll figure out what the problem is. Maybe it will exhaust memory, maybe your SQL queries will cause deadlocks... maybe you will later optimize it to read from slave servers... I don't know, but it works fine now, and that's okay.
4. I'd do something similar to your code, but I'd probably generate a file first and load the data into the server by running the shell command mysql -u username --password=password < import_file.sql. That way the file is stored somewhere on disk, so I can always take a look at it and maybe even edit it for a one-time correction load. You can do that simply by writing your SQL statement into a file (a rough sketch follows below).
5. No. It is just one query. If you use the InnoDB engine, it is already a transaction.
6. First, use error_reporting(E_ALL & ~E_NOTICE). Second, use the mysql_error() PHP function to ensure your query performed correctly (a sketch of this check also follows below). Third, in your cronjob, redirect the error stream into a file, like so: 0 7 * * 0 /path/to/php -c /path/to/php.ini /path/to/script.php 2> /tmp/errors_file. You can then have a SECOND script run after the first one to notify you about errors in script.php by email or... whatever way of notifying you prefer. I'd prefer to register a shutdown function that checks for the error file and, if it is not empty, notifies you and deletes it afterwards.
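A rough sketch of the file-based approach from answer 4; paths and credentials are placeholders:
// Write the generated INSERT statement to disk, then load it with the
// mysql client; the file stays around for inspection or manual fixes.
file_put_contents('/path/to/import_file.sql', $sql);
exec('mysql -u username --password=password dbname < /path/to/import_file.sql', $output, $status);
if ($status !== 0) {
    error_log('Import exited with status ' . $status);
}
And a minimal sketch of the in-script error check from answer 6, using the same mysql_* functions as the question:
// Run the big INSERT and push any MySQL error to STDERR so the
// cron redirect (2> /tmp/errors_file) captures it.
if (mysql_query($sql) === false) {
    fwrite(STDERR, 'Query failed: ' . mysql_error() . "\n");
    exit(1);
}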
Just my opinion, but I hope my answer helps.