MySQL: Best way to run a large number of search queries on a table

I have two tables: one is a static database that I need to search in, the other is a dynamic one that I will be using to search the first. Right now I have two separate queries. On page load, values from the second table are passed to the first one as search terms, and I am "capturing" the search results using cURL. This is very inefficient and probably a really wrong way to do it, so I need help fixing this. Currently the page (HTML, front-end) takes 40 seconds to load.

Possible solutions: turn it into a function, but that still makes just as many calls out. Load the table into memory, run the queries against it, and unload the cache once done. Use a regexp to help speed up the query? Possibly a join? But I am a noob, so I can only imagine...

Search script:

require 'mysqlconnect.php';

    $id = NULL;
    if(isset($_GET['n'])) {     $id = mysql_real_escape_string($_GET['n']);     }
    if(isset($_POST['n'])) {    $id = mysql_real_escape_string($_POST['n']);    }

    if(!empty($id)){
        $getdata = "SELECT id, first_name, last_name, published_name,
                    department, telephone FROM $table WHERE id = '$id' LIMIT 1"; 

        $result = mysql_query($getdata) or die(mysql_error());
        $num_rows = mysql_num_rows($result);

        while($row = mysql_fetch_array($result, MYSQL_ASSOC))
        {
            echo <<<PRINTALL
            {$row['id']}~~::~~{$row['first_name']}~~::~~{$row['last_name']}~~::~~{$row['published_name']}~~::~~{$row['department']}~~::~~{$row['telephone']}
PRINTALL;
        } 
    }

HTML Page Script:

require 'mysqlconnect.php';
    function get_data($url)
    {
      $ch = curl_init();
      $timeout = 5;
      curl_setopt($ch,CURLOPT_URL,$url);
      curl_setopt($ch,CURLOPT_RETURNTRANSFER,1);
      curl_setopt($ch,CURLOPT_CONNECTTIMEOUT,$timeout);
      $data = curl_exec($ch);
      curl_close($ch);
      return $data;
    }

    $getdata = "SELECT * FROM $table WHERE $table.mid != '1'ORDER BY $table.$sortbyme $o LIMIT $offset, $rowsPerPage";
    $result = mysql_query($getdata) or die(mysql_error());

    while($row = mysql_fetch_array($result, MYSQL_ASSOC))
    {
            $idurl = 'http://mydomain.com/dir/file.php?n='.$row['id'].'';
            $p_arr = explode('~~::~~',get_data($idurl));
            $p_str = implode(' ',$p_arr);

           //Use $p_str and $p_arr if they exist; otherwise just output the rest of the
           //html code with the second table's values


    } 

As you can see, the second table may or may not have a valid id, hence sometimes no results, but the second table is quite large, and all in all I am reading and outputting 15k+ table cells. And as you can probably see from the code, I have tried paging, but that solution doesn't fit my needs: I have to have all of the data on the client side in a single HTML page. So please advise.

Thanks!

EDIT

First table:

id_row    id          first_name   last_name    dept    telephone
1         aaa12345    joe          smith        ANS     800 555 5555
2         bbb67890    sarah        brown        ITL     800 848 8848

Second_table:

id_row    type        model        har               status    id         date         
1         ATX         Hybrion      88-85-5d-id-ss    y         aaa12345   2011/08/12
2         BTX         Savin        none              n         aaa12345   2010/04/05
3         Full        Hp           44-55-sd-qw-54    y         ashley a   2011/07/25
4         ATX         Delin        none              _         smith bon  2011/04/05

So the second table is the one that gets read and displayed, and the first is read and its info displayed if the ID is a positive match. The ID is only unique in the first one; the second one has multi-format input, so it could or could not be an ID, and it could also be a duplicate ID. Hope this gives a better understanding of what I need. Thanks again!


A few things:

  1. Curl is completely unnecessary here.
  2. ORDER BY will slow down your queries considerably.
  3. I'd throw in an is_numeric() check on the ID.

Why are you using while and mysql_num_rows when you're limiting to 1 in the query? Where are $table and these other things being set? There is code missing.

If you give us the data structure for the two tables in question, we can help you with the queries, but the way you have this set up now, I'm surprised it's even working at all.

What you're doing is, for each row in $table where mid != 1, you're executing a cURL call to a second page which takes the ID and queries again. This is really, really bad, and much more convoluted than it needs to be. Let's see your table structures.

Basically you can do:

    SELECT first_name, last_name, published_name, department, telephone
    FROM $table1, $table2
    WHERE $table1.id = $table2.id AND $table2.mid != 1;

Get rid of the curl, get rid of the exploding/imploding.
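
Here's a rough sketch of what the HTML page script could look like with that single joined query, with the cURL round trips and the explode()/implode() step removed. It reuses the same mysql_* calls and the $table1, $table2, $sortbyme and $o variables from the scripts above (assuming those are defined wherever the original script sets them), and it uses a LEFT JOIN instead of the plain join above, since the question says some ids in the second table have no match in the first and those rows still need to be displayed:

    require 'mysqlconnect.php';

    // One query instead of one HTTP request per row: join the dynamic
    // table against the static lookup table inside the database itself.
    // LEFT JOIN keeps second-table rows whose id has no match in the
    // first table; their person columns simply come back as NULL.
    $getdata = "SELECT t2.*,
                       t1.first_name, t1.last_name, t1.published_name,
                       t1.department, t1.telephone
                FROM $table2 AS t2
                LEFT JOIN $table1 AS t1 ON t1.id = t2.id
                WHERE t2.mid != '1'
                ORDER BY t2.$sortbyme $o";

    $result = mysql_query($getdata) or die(mysql_error());

    while($row = mysql_fetch_array($result, MYSQL_ASSOC))
    {
        // $row already holds both the second-table values and the matched
        // person fields (or NULLs), so just output the HTML for this row
        // here; no get_data(), explode() or implode() needed.
    }

The LIMIT/OFFSET part is dropped because everything has to end up on one page anyway; adding an index on the id column of the first table (and on mid and the sort column in the second) should keep the join and the ORDER BY from becoming the new bottleneck.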
