Multiple select lists on same page -> error - php

I'm going crazy trying to solve a VERY weird error in a PHP script (Joomla). I have a page that displays multiple select dropdown lists, all of them showing the same list of items (the selected item changes from one list to another, but the listed items are the same). The list has around 35-40 items. This works fine up to a certain number of selects, but when I put more than 20 or 25 selects on the same page it stops working and shows only a white page. No errors, no text displayed, nothing in the PHP logs; just a white page. With THE SAME CODE, if I display 11 dropdown select lists... it works.
I'm guessing that this problem is related to memory or something like that, but I can't be sure because, as I said, no errors are displayed. Has anyone seen a similar issue? Can anyone give me a tip on how to approach this problem? I don't know what to do; I've tried many things and it still doesn't work. Any help will be very much appreciated and welcomed...
NOTE: The select lists are filled with values from a DB table, and each select list has a different selected item based on the contents of another table. The code isn't very complex and, as I said, it works fine when I use fewer select lists on the same page. The problem appears when I reach a certain number of select lists on the same page (I think it's around 20 or 25). The amount of data isn't that large, so I can't understand why it doesn't work.
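A blank page with nothing in the logs usually means a fatal error (often memory exhaustion) is being swallowed while display_errors is off. A minimal, hedged sketch of what to drop at the very top of the page while debugging (the memory value is just an example):

    // Temporary debugging only; remove once the real error is visible.
    ini_set('display_errors', '1');
    ini_set('display_startup_errors', '1');
    error_reporting(E_ALL);

    // If the white page is memory exhaustion, raising the limit will confirm it.
    ini_set('memory_limit', '256M');

If an "Allowed memory size exhausted" message appears, building the shared option list once and reusing it for every select is one likely way to cut memory use.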

A quick google for your issue turns this up:
For jos_session, which is the only table I suggested you empty: any logged-in users will be logged off, and any work in progress (forum posts/articles) would be lost.
I also empty the recaptcha table.
Please remember to always back up your DB first.
I empty these two tables once a week for a higher-volume Joomla 1.5 system. We also set the session lifetime (no activity) to somewhere between 60 and 90 minutes; it's a site with 6-7k visits a day. This also helps our Akeeba backups, as the two aforementioned tables can get very large without proper maintenance.
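A hedged sketch of that weekly cleanup as a small cron script (DB credentials are placeholders, the recaptcha table name is a guess, and back up first as noted above):

    <?php
    // Run from cron, e.g. once a week during a quiet hour.
    // Emptying jos_session logs every active user out.
    $pdo = new PDO('mysql:host=127.0.0.1;dbname=joomla', 'user', 'pass');
    $pdo->exec('TRUNCATE TABLE jos_session');
    $pdo->exec('TRUNCATE TABLE jos_recaptcha'); // table name is a guess; adjust to your install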
Just some general ramblings...
You should also review your MySQL status report via phpMyAdmin's "Show MySQL runtime information" and look for anything highlighted in red.
As for your overall question about performance: remember that there are many ways to improve a website's performance. It's yet another job required of site administrators, and at least it's an interesting process.
The Joomla performance forum is a great place to have your site reviewed and get good help tuning it, including choosing the minimum base server you need (shared/VPS/dedicated).
IMHO, the first objective is to turn off Joomla cache and Joomla gzip and instead enable standard server modules like mod_deflate and mod_expires (mod_expires is one of the best fixes for returning visitors). Make sure your MySQL configuration has query_cache enabled, or that it can be enabled. You will need at least a VPS. And there's more! Haha.
A little note about running on a shared server without certain server modules available:
check this out: http://www.webogroup.com/ It's really one heck of a product. I used it on the aforementioned site until I could implement the changes on the server. As I implemented each new server module I turned off the corresponding Webo feature; the site is now boringly fast.
Have fun.

Related

Cannot access published story after Permission Rebuild

So yeah, the issue is that there are some articles (very old ones, from around 2015) which anonymous users cannot access after I ran a permissions rebuild.
New content does not seem to be affected, though. One solution I found (after researching) is to re-save these articles, BUT I can't keep doing this by hand because there are a lot of articles; I'm talking more than 100K.
Is there a better way to resolve this?
PS: I can confirm that permissions are set correctly for anonymous users to see published content.
We can only speculate about what happened; without diving into your custom code or checking the list of contrib modules you are using, it is almost impossible to detect where the issue is. If you are sure that re-saving makes the problem go away, add the Views Bulk Operations module, which will allow you to select all the nodes in one go and then run the publish bulk operation (this triggers a node save).
You can narrow things down by adding filters to your admin/content view to show only older nodes (around 10k regular article nodes take about 15 minutes to re-save). This will not bring your site down or slow it down, since it's a bulk operation, and you can always run it during the night when there are few users on the site. Do a DB backup first, re-save the nodes on live so users can access them, then import the database onto your local machine and peacefully (since live is working) hunt down the source of the issue.
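If you would rather script the re-save than click through VBO, here is a rough sketch that could be run with drush php:script, assuming Drupal 8 or later, an 'article' content type, and an arbitrary cutoff date (all three are assumptions, not taken from the question):

    <?php
    use Drupal\node\Entity\Node;

    // Collect old article nodes; accessCheck(FALSE) so unpublished/old items are included.
    $nids = \Drupal::entityQuery('node')
      ->condition('type', 'article')
      ->condition('created', strtotime('2016-01-01'), '<')
      ->accessCheck(FALSE)
      ->execute();

    // Re-save in small batches so memory stays flat; each save triggers the
    // same rebuild as re-saving the node by hand.
    foreach (array_chunk($nids, 50) as $chunk) {
      foreach (Node::loadMultiple($chunk) as $node) {
        $node->save();
      }
    }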

WordPress: MySQL server has gone away

I have WordPress with thousands of categories/custom taxonomies and tens of thousands of posts.
I have a hard time keeping it online without caching, because the processor reaches 100% (used by the MySQL server, not PHP).
I have isolated a problem with a MySQL update:
WordPress database error: [MySQL server has gone away]
UPDATE wphn_options SET option_value = '...........' WHERE option_name = 'rewrite_rules'; this is executed on every page load.
Does anyone know how I can stop this query from executing?
This is an example of what the option_value looks like (it's not even 1% of the full query, just a short preview):
UPDATE `wphn_options` SET `option_value` = 'a:7269:{s:18:\"sitemap_trolio.xml\";s:33:\"index.php?aiosp_sitemap_path=root\";s:29:\"sitemap_trolio_(.+)_(\\d+).xml\";s:71:\"index.php?aiosp_sitemap_path=$matches[1]&aiosp_sitemap_page=$matches[2]\";s:23:\"sitemap_trolio_(.+).xml\";s:40:\"index.php?aiosp_sitemap_path=$matches[1]\";s:34:\"sitemap(-+([a-zA-Z0-9_-]+))?
Reading the content of that update to the options table, you can see it's related to your site's sitemap. You may have a sitemap plugin, and that plugin may be doing something on every page load. Try disabling it.
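The usual culprit for rewrite_rules being rewritten on every load is a plugin or theme calling flush_rewrite_rules() on each request instead of only on activation. A hedged sketch of the pattern to look for and the usual fix (the function name and rewrite pattern are invented; the aiosp_sitemap_path query var is taken from the dump above):

    // Anti-pattern: this rewrites the rewrite_rules row in wphn_options on every page load.
    // add_action('init', 'flush_rewrite_rules');

    // Preferred pattern: register rules on init, flush only once on activation.
    function myplugin_register_rewrites() {            // hypothetical name
        add_rewrite_rule(
            '^sitemap_([^/]+)\.xml$',                  // example pattern only
            'index.php?aiosp_sitemap_path=$matches[1]',
            'top'
        );
    }
    add_action('init', 'myplugin_register_rewrites');

    register_activation_hook(__FILE__, function () {
        myplugin_register_rewrites();
        flush_rewrite_rules();                          // runs once, not per request
    });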
If you have access to phpmyadmin, first make a backup of your installation and database (if you aren't doing so already). Then issue the SQL command OPTIMIZE TABLE wphn_options; and see if it helps. If it does, great. Try optimizing some of the other tables as well. OPTIMIZE TABLE wphn_posts; might be a good one to try.
But look: Your WordPress installation is underprovisioned. You need better server resources. You've gone to the trouble of creating tens of thousands of posts. By using such a weak server configuration, you are intentionally concealing those posts from your audience, just to save a few coins.
And, you're running the risk of corrupting your site by using a weak server. Is this not the very definition of "penny wise, pound foolish?"
Your question is like "My car's battery is low. I want to stop wasting electricity on my brake lights. Please tell me how to cut the wires to the brake lights." With respect, the only rational answer is "Are you crazy? You'll risk smashing your car to avoid fixing your battery? Fix your battery!"
I have found the solution. It seems that because of the large number of posts and categories the query grew too large, and the MySQL server dropped the connection to protect itself.
I fixed the issue by adding max_allowed_packet=256M to the MySQL configuration file.
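To confirm the new value was actually picked up after restarting MySQL, you can ask the server directly (connection details are placeholders):

    $db  = new mysqli('127.0.0.1', 'user', 'pass', 'wordpress');
    $row = $db->query("SHOW VARIABLES LIKE 'max_allowed_packet'")->fetch_assoc();
    // Expect roughly 268435456 bytes if the 256M setting took effect.
    printf("max_allowed_packet = %d bytes\n", $row['Value']);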

Optimize loading time of PHP page

I have a simple PHP page requesting a list of addresses from a MySQL database. The database table has 1257 entries. I also include a dynamically loaded side menu for browsing to other pages.
In total the page runs 5 MySQL queries:
The addresses
Pagination
Check whether the user has permission to browse
Get all the groups for the side menu
Get all the sub-entries for the side menu
The site takes about 5 seconds to load.
I Googled for ways to improve site load time and found the Google Developer tools with PageSpeed. I made all the improvements it suggested, like enabling deflate, changing the banner size, and so on, but the loading time is still nearly the same. I would like to know whether this is normal or whether there is anything I can do to improve it.
EDIT: I have also indexed the columns and enabled the MySQL query cache. I also use foreign keys in the sub-entries table that reference the menu group table.
EDIT2: I found the solution. The problem was that I used localhost to connect to my DB, but since I'm using Windows 7 it tried to connect via IPv6 first. I changed all the localhost references to 127.0.0.1 and now the page takes only about 126 ms to load.
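For anyone hitting the same thing: on Windows, localhost can resolve to the IPv6 loopback (::1) first and only fall back to IPv4 after a timeout, so forcing the IPv4 address skips the wait. A minimal sketch (credentials and database name are placeholders):

    // 'localhost' may resolve to ::1 first on Windows and stall; 127.0.0.1 avoids that.
    $mysqli = new mysqli('127.0.0.1', 'user', 'pass', 'addressbook');
    if ($mysqli->connect_error) {
        die('Connect failed: ' . $mysqli->connect_error);
    }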
First of all, find out what is taking the page so long to load using the browser's developer console. If the cause of the delay is on the server side, e.g. the HTML itself takes a long time to generate, then check the following:
Try to log slow MySQL queries and make sure that you have none.
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
If you really have some expensive calculations going on (which is not likely in your case), try to cache them.
Don't forget about the benefits of PHP code accelerators like APC and MySQL optimizations (query cache, etc.).
There are many other ways to speed things up, but you have to profile the app itself and see what's going on.
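A minimal way to profile without extra tooling is to time each of the five queries; this sketch assumes a mysqli connection and uses made-up query strings:

    $queries = [
        'addresses'  => 'SELECT * FROM addresses LIMIT 50',   // hypothetical SQL
        'pagination' => 'SELECT COUNT(*) FROM addresses',
        'menu'       => 'SELECT * FROM menu_groups',
    ];
    $timings = [];
    foreach ($queries as $label => $sql) {
        $start = microtime(true);
        $mysqli->query($sql);
        $timings[$label] = round((microtime(true) - $start) * 1000, 1) . ' ms';
    }
    print_r($timings); // the slowest entry is the one worth indexing or caching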
Have you added indexes on the columns used in your WHERE conditions? If not, please index those columns and check again.
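For example, if the address list is filtered by a group column, an index on that column lets MySQL avoid a full table scan; the table and column names below are invented, so adapt them to your schema:

    // Run once (not on every request) and then check that EXPLAIN reports the index.
    $mysqli->query('ALTER TABLE addresses ADD INDEX idx_group_id (group_id)');
    $result = $mysqli->query('EXPLAIN SELECT * FROM addresses WHERE group_id = 3');
    print_r($result->fetch_assoc()); // look for key = idx_group_id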

Average query count for expression engine site

I just took over development of an existing EE website and am new to the CMS and to blog development as well. The first thing I noticed was that the site performed really poorly, so I started doing some profiling using Xdebug. I noticed that the query count is around 550. Is this normal? I know it all comes down to what kind of queries are being run, etc., but I'm used to much lower numbers with other frameworks. Then again, like I said: I'm new to blog development.
TLDR: What is the average ballpark query count for an EE homepage?
Thanks!
On my test install of EE2, an empty template pulls 13 queries (these have to do with sessions, tracking, grabbing the template, etc). Beyond that, there's no "average", as the amount of content can vary so widely from site-to-site.
550 queries is certainly outlandish. My guess would be that there are multiple embeds, several Channel Entries loops, and perhaps some Playa fields within those (Playa is a bit of a query monster).
I'd suggest turning on the Output Profiler to see where the load is coming from (Admin → System Administration → Output and Debugging).
Then, make sure you're making use of tag caching on your Channel Entries and other tags, and consider looking at a third-party caching solution such as CE Cache.
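As a rough illustration (the channel name, refresh interval, and markup are only examples, not taken from your site), tag caching is just two parameters on the loop:

    {exp:channel:entries channel="blog" cache="yes" refresh="60"}
        <h2>{title}</h2>
    {/exp:channel:entries}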
You can also disable some of the default tracking to save on queries (Admin → Security and Privacy → Tracking Preferences).
I've built a ton of EE sites and 500 is crazy, crazy high. With a complex build-out of Structure/Matrix/Playa, even pretty complex pages only run 200-300 queries. And when I say "only", I mean that's still way too high.
I do think it's important to find a balance between making something delightful for your client to use and keeping it not too processor-intensive. If you are using a single template for this page (i.e. the template won't be used for a bunch of other entries), you can turn on caching and it will help substantially.
The biggest question is: what are you doing on this page? What kinds of tags/add-ons, etc. are you using? That would help us track it down.

Replicating specific data between database tables

I've seen some posts on here about similar ideas, but to be specific I thought I should spell out my exact requirement.
I have a database-driven site, and the client wants a replica of it for users from the USA. They want most of the site to be the same, except for some of the data, which they want to be different for US visitors.
The site runs on a PHP/MySQL content management system I have written. I think we are going to approach the 'USA' version like this:
Place a clone of the whole site in a folder called /us (no surprises there)
Duplicate all the tables, but prefix the names with us_
I'm thinking of adding a field called 'replicate', for example, to the original site's tables, and then running a script every 15 minutes or so to copy all records where replicate is marked yes from the original tables to the us_ tables (see the sketch after this question)
On the US version of the content management system, all records copied from the UK site are somehow locked, so only records marked no on the original site can differ on the US version.
Does this sound like I'm heading along the right lines ?
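For what it's worth, the 15-minute copy step described above could be a single statement per table run from cron; a hedged sketch, assuming identical schemas and using invented table names and placeholder credentials:

    <?php
    // REPLACE overwrites rows with the same primary key, so re-runs are safe.
    $pdo = new PDO('mysql:host=127.0.0.1;dbname=site', 'user', 'pass');
    $pdo->exec("REPLACE INTO us_articles SELECT * FROM articles WHERE `replicate` = 'yes'");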
Why not make a new database containing ONLY the tables/rows that are for US visitors?
Make a PHP array or something that says which table should be read from which database (see the sketch below).
Seems like less effort to me.
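A minimal sketch of that mapping idea (database and table names are invented):

    // Map each table to the database it should be read from.
    $table_db_map = [
        'products'   => 'site_us',   // US-specific copy
        'pages'      => 'site_main', // shared content
        'categories' => 'site_main',
    ];

    // Build a fully qualified `db`.`table` name for use in queries.
    function qualified_table(array $map, string $table): string {
        $db = $map[$table] ?? 'site_main';
        return "`{$db}`.`{$table}`";
    }

    // e.g. SELECT * FROM `site_us`.`products` WHERE live = 1
    $sql = 'SELECT * FROM ' . qualified_table($table_db_map, 'products') . ' WHERE live = 1';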
Did you check the MySQL manual for topics about replication?
