php - Optimize a large MySQL query (caching or something similar?)


I'm not used to working with such big objects (hehe). I've got a query that runs through all of our subdomains with this MySQL query:

$blogs = $wpdb->get_results(
    "SELECT blog_id, path FROM {$wpdb->blogs}
     WHERE blog_id != {$wpdb->blogid}
       AND site_id = '{$wpdb->siteid}'
       AND spam = '0'
       AND deleted = '0'
       AND archived = '0'
     ORDER BY blog_id", ARRAY_A );

Then I run a foreach over them to pull extra data (the blog name, specifically):

foreach ( $blogs as $blog ) :
    switch_to_blog( $blog[ 'blog_id' ] );
    // fetch the blog's details so we can read its name
    $blog_details = get_blog_details( $blog[ 'blog_id' ] );
    if ( strpos( strtolower( $blog_details->blogname ), strtolower( $_GET['squery'] ) ) !== false ) {
        // show the site's title and a link to the site
    }
endforeach;

I'm doing this because I need users to be able to search for a site by its name, which isn't available in the $wpdb->blogs table. The URL is, but the URL may be "smsalem" while the user is searching for "service master" or just "service".
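As an aside, the case-insensitive substring match in the loop can be written with a single stripos() call instead of strpos() plus two strtolower() calls. A minimal sketch (the helper name blog_name_matches is made up for illustration):

```php
<?php
// stripos() does the case folding internally, so the two strtolower()
// calls per blog can be dropped.
function blog_name_matches(string $blogname, string $query): bool {
    return stripos($blogname, $query) !== false;
}

var_dump(blog_name_matches('Service Master of Salem', 'service')); // bool(true)
var_dump(blog_name_matches('Service Master of Salem', 'smsalem')); // bool(false)
```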

I upped the memory limit to 256MB (is that high? or can I go higher?) because I was getting a memory exhausted error.

Now it completes fine, and when I echoed the memory usage I got 201043248, i.e. about 201MB. The initial $blogs array has ~1,400 items in it.

This "works", but I'm afraid 201MB is too high for every time someone uses the page (we have a dedicated server). I'm wondering if there's a way to optimize it a bit, or if these numbers are low enough not to worry about (we're looking at 5k+ sites in the next year or two).

Try using LIMIT in the SQL statement to fetch the data in smaller chunks.

something like:

$result = $wpdb->get_row( "SELECT COUNT(blog_id) AS rowcount FROM {$wpdb->blogs}" );
$rowcount = $result->rowcount;

for ( $offset = 0; $offset < $rowcount; $offset += 50 ) {
    $blogs = $wpdb->get_results(
        "SELECT blog_id, path FROM {$wpdb->blogs}
         WHERE blog_id != {$wpdb->blogid}
           AND site_id = '{$wpdb->siteid}'
           AND spam = '0'
           AND deleted = '0'
           AND archived = '0'
         ORDER BY blog_id
         LIMIT {$offset}, 50", ARRAY_A );

    foreach ( $blogs as $blog ) :
        switch_to_blog( $blog[ 'blog_id' ] );
        // fetch the blog's details so we can read its name
        $blog_details = get_blog_details( $blog[ 'blog_id' ] );
        if ( strpos( strtolower( $blog_details->blogname ),
                     strtolower( $_GET['squery'] ) ) !== false ) {
            // show the site's title and a link to the site
        }
    endforeach;
}

By doing this you're definitely increasing the number of SQL queries, but anyway, the overall memory usage will be lower. 50 is less than 1400, yep.
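The chunking idea in plain PHP terms: pull rows one page at a time so peak memory is bounded by a single page, not the full result set. A minimal, WordPress-free sketch where fetch_page() stands in for the $wpdb->get_results(... LIMIT ...) call (all names here are hypothetical):

```php
<?php
// Generic chunked-fetch pattern: a generator that pulls rows in pages of
// $size instead of loading everything at once.
function chunked(callable $fetch_page, int $size): Generator {
    for ($offset = 0; ; $offset += $size) {
        $rows = $fetch_page($offset, $size);
        if (!$rows) return; // empty page: no more rows
        yield from $rows;
    }
}

// Demo with an in-memory "table" of 7 rows and a page size of 3.
$table = range(1, 7);
$fetch = fn(int $offset, int $limit) => array_slice($table, $offset, $limit);

$seen = iterator_to_array(chunked($fetch, 3), false);
var_dump($seen === range(1, 7)); // bool(true)
```

Because the consumer only ever holds one page of rows (plus whatever it keeps on purpose), memory stays flat as the site count grows.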

PS: and yeah, for text search you'd be better off trying Sphinx/Solr/Elasticsearch/etc., since MySQL isn't the fastest thing to handle it.

