I have looked in a lot of places and on this site and I am at a development block. I'm making a website and using MySQL to store stuff using PHP. Part of this website involves storing the types of games a user plays, which would be split up into three categories: Game / Platform / Username.
As a coder, I want to make a set of parallel vectors/arrayLists to hold these things, since the number of games they might play is undefined. I was thinking of making a table, but every time I try, it doesn't show up in the database. From what I found, people say not to use arrays for databases since that defeats the purpose of a database. How would I store these fields dynamically?
Also from what I found, there is a thing called serialize() which I don't understand exactly how to use; I can look that up. However, people say that it's not the "proper" way of doing something like this, but they could be wrong.
Serialize takes an array and converts it to a string. You can then take that string and save it as a single table entry.
It might go something like this:
$user = array();
$games = array();
$games[] = array("Game" => "Chess", "Platform" => "Computer", "UserName" => "ChessMaster");
$games[] = array("Game" => "Checkers", "Platform" => "Mobile", "UserName" => "CheckersPlayer");
$games[] = array("Game" => "Solitaire", "Platform" => "Computer", "UserName" => "SolitaireUser");
$user[] = $games;
Now in order to save the entry into a database you convert the user array into a string with serialize().
$serializedUser = serialize($user);
Which will output this:
//echo $serializedUser;
a:1:{i:0;a:3:{i:0;a:3:{s:4:"Game";s:5:"Chess";s:8:"Platform";s:8:"Computer";s:8:"UserName";s:11:"ChessMaster";}i:1;a:3:{s:4:"Game";s:8:"Checkers";s:8:"Platform";s:6:"Mobile";s:8:"UserName";s:14:"CheckersPlayer";}i:2;a:3:{s:4:"Game";s:9:"Solitaire";s:8:"Platform";s:8:"Computer";s:8:"UserName";s:13:"SolitaireUser";}}}
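For completeness, a rough sketch of the round trip, keeping the old mysql_* style used elsewhere here (the users table, the games TEXT column, and $userId are just illustrative names, and note that once the data is one string you can no longer query individual games with SQL):
// Store the serialized string in a TEXT column (hypothetical users.games).
$sql = sprintf(
    "UPDATE users SET games = '%s' WHERE id = %d",
    mysql_real_escape_string($serializedUser),
    $userId
);
mysql_query($sql);

// Later, read it back and turn it into the original array.
$row  = mysql_fetch_assoc(mysql_query("SELECT games FROM users WHERE id = {$userId}"));
$user = unserialize($row['games']);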
I'm retrieving some traffic data for a website using the "scan" operation in DynamoDB. I have used a FilterExpression to filter the results.
I will be scanning against a large table which will have more than 20GB of data.
I found that DynamoDB scans through the entire table and then filters the results. The documentation says it only returns 1MB of data per request and then I have to loop through again to get the rest. It seems like a bad way to make this work.
I got the reference from here: Dynamodb filter expression not returning all results
For a small table that should be fine.
MySQL does the same, I guess. I'm not sure.
Which is faster to read on a large set of data: a MySQL SELECT or a DynamoDB scan?
Is there any other alternative? What are your thoughts and suggestions?
I'm trying to migrate this traffic data into a DynamoDB table and then query it out. It seems like a bad idea to me now.
$params = [
    'TableName'                 => $tableName,
    'FilterExpression'          => $this->filter.'=:'.$this->filter.' AND #dy > :since AND #dy < :now',
    'ExpressionAttributeNames'  => [ '#dy' => 'day' ],
    'ExpressionAttributeValues' => $eav
];

var_dump($params);

try {
    $result = $dynamodb->scan($params);
} catch (\Exception $e) {
    echo $e->getMessage();
}
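For reference, the 1MB-page loop mentioned above looks roughly like this with the AWS SDK for PHP; this is a sketch, not the exact code in use, and it assumes the same $dynamodb client and $params as above:
// Keep scanning until DynamoDB stops returning a LastEvaluatedKey,
// accumulating the (already filtered) items from each 1MB page.
$items = [];
do {
    $result  = $dynamodb->scan($params);
    $items   = array_merge($items, $result['Items']);
    $lastKey = isset($result['LastEvaluatedKey']) ? $result['LastEvaluatedKey'] : null;
    if ($lastKey !== null) {
        // Resume the next page where the previous one stopped.
        $params['ExclusiveStartKey'] = $lastKey;
    }
} while ($lastKey !== null);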
After considering the suggestion, this is what worked for me:
$params = [
    'TableName'                 => $tableName,
    'IndexName'                 => self::GLOBAL_SECONDARY_INDEX_NAME,
    'ProjectionExpression'      => '#dy, t_counter , traffic_type_id',
    'KeyConditionExpression'    => 'country=:country AND #dy between :since AND :to',
    'FilterExpression'          => 'traffic_type_id=:traffic_type_id',
    'ExpressionAttributeNames'  => ['#dy' => 'day'],
    'ExpressionAttributeValues' => $eav
];
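Since these parameters include a KeyConditionExpression against a global secondary index, they go to Query rather than Scan; a minimal sketch of the call, assuming the same $dynamodb client and $eav values as above:
// Query the GSI; Query only reads items matching the key condition,
// so it avoids the full-table read that Scan performs.
$result = $dynamodb->query($params);
foreach ($result['Items'] as $item) {
    // Each $item is a DynamoDB attribute map, e.g. $item['t_counter']['N'].
}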
If your data is like a key-value pair and you have fixed fields on which you want to index, use DynamoDB - you can create indexes on all the fields you want to query and it will work great.
If you require complex querying on multiple indexes, then any RDBMS is good.
If you can query on just about anything, think about Elasticsearch.
If your queries are very simple but you have large data to be retrieved in each query, think about S3. Maybe you can index metadata in DynamoDB and keep the actual data in S3.
It has been my late-childhood dream to create a game, and now that I actually know how, I thought I should fulfill my dream and started working on a little game project in my spare time. It's basically a combat type of game where you have up to 3 units, as does your opponent, and you take turns (since it's HTTP, you know that feel) to attack each other and cast spells and stuff. The issue I came across is with abilities and how to store them. Basically, if I were to store abilities in an array it would look something like this:
$abilities = array(
    0 => array(
        'name'   => 'Fire ball',
        'desc'   => 'Hurls a fire ball at your enemy, dealing X damage.',
        'effect' => function($data){
            $data['caster']->damage($data['target'], $data['caster']->magicPower);
        }
    ),
    1 => array(...
);
But if I were to store abilities this way, every time I needed to fetch information about a single ability I would need to load the whole array, and it's probably going to get pretty big over time, so that would be a tremendous waste of memory. So I jumped to my other option: saving the abilities in a MySQL table. However, I'm having issues with the effect part. How can I save a function into a MySQL field and be able to run it on demand?
Or you can suggest another way to save the abilities that I may have missed.
To answer your question about storing arrays in a database like MySQL: you can serialize the array as a string. Plain serialize() is not going to work here, because it doesn't deal with closures.
You need to use a class like super_closure, which can serialize closures and convert them into strings. Read more here:
https://github.com/jeremeamia/super_closure
// Assumes super_closure 1.x, where the class is Jeremeamia\SuperClosure\SerializableClosure.
use Jeremeamia\SuperClosure\SerializableClosure;

$helloWorld = new SerializableClosure(function($data){
    $data['caster']->damage($data['target'], $data['caster']->magicPower);
});
$serializedFunc = serialize($helloWorld);
Now you can create the array like this:
$abilities = array(
    0 => array(
        'name'   => 'Fire ball',
        'desc'   => 'Hurls a fire ball at your enemy, dealing X damage.',
        'effect' => $serializedFunc
    )
);
This Array can now be saved directly, serialized or encoded to JSON.
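To run the ability later you reverse the process; a minimal sketch, assuming the serialized closure string was stored in (and fetched back from) a MySQL column, and that the unserialized wrapper can be invoked like the original closure, as in the super_closure examples:
// $storedEffect is the serialized closure string read back from the database.
$effect = unserialize($storedEffect);

// Call the unserialized closure with the same $data shape the ability expects.
$effect(array(
    'caster' => $caster,
    'target' => $target
));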
I would recommend looking at Redis or Memcache for caching query results, and not using MySQL to store functions.
You could have three tables:
spell
    id
    name
    description

spell_effect
    id
    name
    serversidescript

spell_effect_binder
    spell_id
    spell_effect_id
This would make sure that your logic stays in PHP files, wherever you would like them to be located, while all the metadata about the spells, the effects, and how they bind together lives in the database. That means you only load the function/script of the effects you actually need. Plus, it gives you the possibility to append multiple effects to one spell.
<?php
// Firedamage.php
class Firedamage
{
    public function calculateEffects($level, $caster, $target)
    {
        $extraDamage = 5 * $level;
        $randDamage  = rand(10, 50);
        $caster->damage($target, $randDamage + $extraDamage);
    }
}
Spell_effect entry:
    id = 1
    name = 'firedamage'
    serversidescript = 'Firedamage.php'

Spell entry:
    id = 1
    name = 'Fireball'
    description = 'Hurls a fireball at your foe'

Spell_effect_binder entry:
    spell_id = 1
    spell_effect_id = 1
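Putting it together, the game code can look up a spell's effects and run only the scripts it needs. A rough sketch under the schema above; the $pdo connection, $spellId, $level, $caster and $target, and the Firedamage class from the example file are assumptions:
// Fetch the effect scripts bound to a given spell (using PDO here as an example).
$stmt = $pdo->prepare(
    'SELECT se.serversidescript
       FROM spell_effect se
       JOIN spell_effect_binder b ON b.spell_effect_id = se.id
      WHERE b.spell_id = :spell_id'
);
$stmt->execute(array('spell_id' => $spellId));

foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $script) {
    require_once $script;          // e.g. Firedamage.php
    $effect = new Firedamage();    // class name would normally be derived from the effect row
    $effect->calculateEffects($level, $caster, $target);
}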
I'm making a website for a music promotion company. The website contains an individual page for each promoted artist, in which their upcoming events appear. There is also a separate 'events' page.
I was wondering how to create and use arrays so that I could update any upcoming events in one place and have the information echoed out on these two separate pages.
Also, on the events page all of the artists' events will need to be echoed out in chronological order.
Is this the right way of approaching it?
<?php
$donevents = array();
$donevents[101] = array (
"venue" = "The Moon Club",
"date" = "5th December 2013",
"link" = "www.candyratrecords.com"
);
$donevents[102] = array (
"venue" = "Chapel Arts Centre",
"date" = "8th August 2013",
"link" = "www.chapelarts.co.uk"
);
?>
One Little Character Makes Such A Difference (TM):
$donevents[101] = array (
"venue" => "The Moon Club",
"date" => "5th December 2013",
"link" => "www.candyratrecords.com"
);
Check out the PHP manual on Arrays.
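For the chronological ordering on the events page, one option is to sort the corrected array by date before echoing it. A minimal sketch with usort() and strtotime(), assuming the date strings parse with strtotime (otherwise store them in a sortable format such as 2013-12-05):
// Sort events by date, earliest first (usort discards the 101/102 keys).
usort($donevents, function ($a, $b) {
    return strtotime($a['date']) - strtotime($b['date']);
});

foreach ($donevents as $event) {
    echo $event['venue'] . ' - ' . $event['date'] . ' - ' . $event['link'] . '<br>';
}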
Also, it's very bad design to store events in program code.
If you are not using a database, a CSV file may be a good alternative to keep your data separate from your code. Look into using PHP's functions for reading CSV files into arrays, and you can store your file above the webroot so that it can't be publicly accessed.
Using a CSV file will make it much easier to maintain your data if using an actual database is not an option.
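A minimal sketch of reading such a CSV into the same kind of array with fgetcsv(), assuming one event per line in venue,date,link order and a file path above the webroot (both of those are assumptions):
$donevents = array();

// events.csv: one "venue,date,link" row per event, stored outside the webroot.
if (($handle = fopen('/path/above/webroot/events.csv', 'r')) !== false) {
    while (($row = fgetcsv($handle)) !== false) {
        $donevents[] = array(
            'venue' => $row[0],
            'date'  => $row[1],
            'link'  => $row[2],
        );
    }
    fclose($handle);
}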
Here is the code as it exists now:
while($row=mysql_fetch_assoc($count_query_result))
$output[]=$row;
while($row=mysql_fetch_assoc($average_query_result))
$output2[]=$row;
while($row=mysql_fetch_assoc($items_query_result))
$output3[]=$row;
print(json_encode(array($output,$output2,$output3)));
mysql_close();
My question:
How do I take a single column from each of the three query results, and make a JSON array out of it, like so:
[{"att1": "data"}, {"att2": "data"}, {"att3": "data"}]
ASSUMING:
att1 came from the $count_query_result/$output
att2 came from the $average_query_result/$output2
att3 came from the $items_query_result/$output3
Therefore, encoding only one variable, not 3.
Well, I answered my own issue. I had to get to the very root of the problem: the MySQL queries. I have joined them all, so now there is just one. This creates a single JSON array for what I need. I believe there is something to be said for just doing it right the first time.
$result = array('att1' => $row['data'],
                'att2' => $row['data']);
echo json_encode($result);
where $row['data'] is the information that you want returned from each of your queries.
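A fuller sketch of that approach in the same mysql_* style as the question; the joined query, its table, and its column aliases are purely illustrative assumptions:
// One joined/aggregate query replaces the three separate ones.
$sql = "SELECT COUNT(i.id) AS total_count,
               AVG(i.price) AS average_price,
               SUM(i.quantity) AS item_total
        FROM items i";
$row = mysql_fetch_assoc(mysql_query($sql));

$result = array(
    'att1' => $row['total_count'],
    'att2' => $row['average_price'],
    'att3' => $row['item_total'],
);

echo json_encode($result);
mysql_close();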
I'm working with an MLS real estate listing provider (RETS). Every 48 hours we will be pulling data from their server in a cron job to an SQL database. I'm charged with the task of writing a php script that will be run after the data from the remote server is dumped into our "raw" tables. In these raw tables, all columns are VARCHAR(255), and we want to move the data into optimized tables. Before I send my script to the guy in charge of setting up the cron job, I wondered if there is a more efficient way to do it so I don't look foolish.
Here's what I'm doing:
There are 8 total tables, 4 raw and 4 optimized - all in the same database. The raw table column names are non-descriptive, like c1, c2, c3, c4, etc. This is intentional, because the data that goes in each column may change. The raw table column names are mapped to the correct optimized table columns with PHP, something like this:
$tables['optimized_table_name1']['raw_table'] = 'raw_table_name1';
$tables['optimized_table_name1']['data_map'] = array(
    'c1' => array( // <--- "c1" is the raw table column name
        'column_name' => 'id',
        // I use other values for table creation,
        // but they don't matter to the question.
        // Just explaining why the array looks like this
        //'type' => 'VARCHAR',
        //'max_length' => 45,
        //'primary_key' => FALSE,
        // etc.
    ),
    'c9'  => array('column_name' => 'address'),
    'c25' => array('column_name' => 'baths'),
    'c2'  => array('column_name' => 'bedrooms') //etc.
);
I'm doing the same thing for each of the 4 tables: SELECT * FROM the raw table, read the config array and create a huge SQL insert statement, TRUNCATE the optimized table, then run the INSERT query.
foreach ($tables as $table_name => $config):
    $raw_table = $config['raw_table'];
    $data_map  = $config['data_map'];
    $fields = array();
    $values = array();
    $count  = 0;

    // Get the raw data and create an array mapped to the optimized table columns.
    $query = mysql_query("SELECT * FROM dbname.{$raw_table}");
    while ($row = mysql_fetch_assoc($query))
    {
        // Reading column names from my config file on first pass
        // Setting up the array, will only run once per table
        if (empty($fields))
        {
            foreach ($row as $key => $val)
            {
                // Produces an array with the column names
                $fields[] = $data_map[$key]['column_name'];
            }
        }

        foreach ($row as $key => $val)
        {
            // Assigns data to an array to be imploded later
            $values[$count][] = $val;
        }
        $count++;
    }

    // Create the INSERT statement string
    $insert = array();
    $sql = "\nINSERT INTO `{$table_name}` (`".implode('`,`', $fields)."`) VALUES\n";
    foreach ($values as $key => $vals)
    {
        foreach ($vals as &$val)
        {
            // Escape the data
            $val = mysql_real_escape_string($val);
        }
        // Using implode for simplicity, could avoid the nested foreach if I wanted to
        $insert[] = "('".implode("','", $vals)."')";
    }
    $sql .= implode(",\n", $insert).";\n";

    // TRUNCATE optimized table and run INSERT query here
endforeach;
Which produces something like this (only larger - about 15,000 records max per table, and one insert statement per table):
INSERT INTO `optimized_table_name1` (`id`,`beds`,`baths`,`town`) VALUES
('50300584','2','1','Fairfield'),
('87560584','3','2','New Haven'),
('76545584','2','1','Bristol');
Now I'll admit, I have been under the wing of an ORM for a long time and am not up on my vanilla mysql/php. This is a pretty simple task and I want to keep the code simple.
My questions:
Is the TRUNCATE/INSERT method a good way to do this?
Is there anything about my code that you can see being a problem? I know you see nested foreach loops and just shudder, but I want to keep the code as small and clean as possible and avoid lots of messy string concatenation (to produce the insert query). Like I said, I also haven't used native PHP functions for SQL in a long time.
I feel like it really doesn't matter if the code is not optimized, since it is run at 3AM every 2 days. Does it matter? Is this code going to perform OK?
Is there a better overall strategy to accomplish this task?
Do I need to be using transactions?
How can I be aware of errors that may occur in cron scripts?
Apologies if I don't use the correct cron jargon; it's new to me.
Keep it simple. ORM would be swell for this task.
Answers:
Yes.
Your code is readable. At least I did not have any problems reading it.
We had a script that ran early in the morning. It was not optimized and consumed a lot of memory. After FOUR years it started to consume over 512 MB. I spent 2 hours optimizing it, so now it consumes 7 MB (pretty good optimization, huh? :) ). I personally think it is "ok" that your script is not optimized now. If this script starts failing, you'll figure out what the problem is. Maybe it will exhaust memory, maybe your SQL queries will cause deadlocks... maybe you will later optimize it to READ from slave servers... I don't know, but it works fine now, and that's okay.
I'd do something similar to your code. But I'd probably generate a file first and load the data into the server by running the shell command mysql -u username --password=password < import_file.sql. That way I'd have my file stored somewhere on disk so I can always take a look at it, and maybe even edit it for a one-time corrective load. You can still do it by writing your SQL statement into a file; a rough sketch of that idea follows.
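This is a sketch from inside the PHP script; the file path, credentials, and database name are placeholders, and $sql is the statement string built in the loop above:
// Write the generated INSERT statements to disk so they can be inspected
// or re-run by hand, then load them with the mysql client.
$importFile = '/path/to/import_file.sql';
file_put_contents($importFile, $sql);

exec(
    'mysql -u username --password=password dbname < ' . escapeshellarg($importFile),
    $output,
    $exitCode
);

if ($exitCode !== 0) {
    // Let cron's stderr redirection (mentioned below) pick the failure up.
    fwrite(STDERR, "mysql import failed with exit code {$exitCode}\n");
}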
No. It is just one query. If you use the InnoDB engine, it is already a transaction.
First, use error_reporting(E_ALL & ~E_NOTICE). Second, use the mysql_error PHP function to ensure your query performed correctly. Third, in your cronjob, redirect the error stream into a file, like so: 0 7 * * 0 /path/to/php -c /path/to/php.ini /path/to/script.php 2> /tmp/errors_file. You can then create a SECOND script running after the first one to notify you about errors in script.php by email or... whatever way of notifying you prefer. I'd prefer to register a shutdown function that checks the error file and, if it is not empty, notifies you and deletes it afterwards.
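A minimal sketch of that follow-up check; the error-file path and notification address are placeholders, and mail() stands in for whatever notification mechanism you prefer:
// errors_check.php - run by cron right after script.php, or called from a
// shutdown function registered at the end of script.php.
$errorFile = '/tmp/errors_file';

if (is_file($errorFile) && filesize($errorFile) > 0) {
    mail(
        'you@example.com',
        'Import cron reported errors',
        file_get_contents($errorFile)
    );
    unlink($errorFile); // start clean for the next run
}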
Just my opinion, but I hope my answer helps.