What kind of speed might I expect when requesting a count of selected items?


I realize my question is vague because the machine specs matter. Let's say it's a reasonably fast Mac OS X or Linux machine with 3 GB of RAM and a 5400 rpm hard drive. A typical machine.

Just for an example, suppose there are 200,000 user records in the users table. Each record has an integer column called "counter." I want to know how many user records have a counter greater than zero.

I'm still studying SQL, so my query syntax might not be right, but I think the query is something like:

SELECT COUNT(user) FROM users WHERE counter > 0;

In this case the argument to COUNT itself probably doesn't matter, as long as it is some column in users, right?

Anyway, the reason I'm asking is that while I'm posting this, I have a while loop executing against a different, non-SQL database. It's doing the count one record at a time for all users, and it looks like it will take maybe three hours to finish.

With a MySQL query such as this, approximately how long would I have to wait? I don't need exact numbers, just orders of magnitude. Would you expect such a query to take seconds, tens of seconds, minutes, or hours?
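For reference, a hedged sketch of how this kind of count is usually made fast: with an index on the filtered column, MySQL can answer the query from the index alone instead of scanning the table, so 200,000 rows should come back in well under a second even on modest hardware. The index name below is made up for illustration.

Code:
-- Index the filtered column so the count reads the index, not the table.
CREATE INDEX idx_users_counter ON users (counter);

-- COUNT(*) is the idiomatic form; any non-NULL column gives the same row count.
SELECT COUNT(*) FROM users WHERE counter > 0;

-- EXPLAIN shows whether the index is used (look for "Using where; Using index").
EXPLAIN SELECT COUNT(*) FROM users WHERE counter > 0;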

Also, what if the "counter" property lived in a separate counters table and I did the same kind of query with an INNER JOIN, like:

SELECT COUNT(user) FROM users INNER JOIN counters ON users.id = counters.userid WHERE counter > 0;

Would that take much longer than if everything were in one table?
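For the join variant, a hedged sketch: a composite index that starts with the join key and includes the filtered column lets MySQL resolve both the join and the WHERE clause from the index, so on this scale the join typically adds little. The index name is an assumption.

Code:
-- Join key first, filtered column second.
CREATE INDEX idx_counters_user ON counters (userid, counter);

SELECT COUNT(*)
FROM users
INNER JOIN counters ON users.id = counters.userid
WHERE counters.counter > 0;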

Posted On: Thursday 25th of October 2012 10:28:54 PM. Total Views: 142

RELATED MYSQL TOPICS




Problems getting data for a windrose chart with speed ranges

Hi! I have a table that stores wind speed and wind direction over time. The table stores a new row every 10 minutes exactly. The wind directions are always fixed at 16 different values from 0 to 360 (0, 22.5, 45, 67.5, ..., 337.5).

Code:
Table: wind_data
date                | avg_speed | wind_dir
2011-01-01 00:00:00 | 6.24      | 45
2011-01-01 00:10:00 | 5.00      | 22.5
2011-01-01 00:20:00 | 3.38      | 22.5
2011-01-01 00:30:00 | 4.16      | 45
...

I need to generate a table to create a windrose chart showing the number of records of wind for each direction, grouped into speed ranges; hours of wind per direction in ranges of:

0-2 m/s => in SQL: avg_speed >= 0 and avg_speed < 3
2-4 m/s => in SQL: avg_speed >= 3 and avg_speed < 5
4-6 m/s => ...
6-10 m/s => ...
10-12 m/s => ...
12-99 m/s => ...

To explain it better, here's a "sample" of the resulting chart (this doesn't have the wind ranges I need, as you can see): http://jpgraph.net/images/gallery/wind1.png

I've tried to do it using PHP and SQL, with a couple of "for" loops that execute a SQL query for each wind direction and for each wind range. That makes the process extremely long and overloads the SQL server, as the table can be very big. This is (in simplified code) what I do:

Code:
for wind_step = 0 to 360 step 22.5 do
    foreach ranges as range
        SELECT wind_dir, COUNT(wind_dir) AS regs
        FROM wind_data
        WHERE wind_dir = wind_step AND range (for example avg_speed >= 0 and avg_speed
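A single GROUP BY can replace the nested loops: bucket the speed with a CASE expression and count every (direction, range) pair in one pass over the table. A sketch against the wind_data table above; the range boundaries past 5 m/s are elided in the post, so the ones below just follow the same pattern and are assumptions.

Code:
SELECT wind_dir,
       CASE
           WHEN avg_speed < 3  THEN '0-2 m/s'
           WHEN avg_speed < 5  THEN '2-4 m/s'
           WHEN avg_speed < 7  THEN '4-6 m/s'
           WHEN avg_speed < 11 THEN '6-10 m/s'
           WHEN avg_speed < 13 THEN '10-12 m/s'
           ELSE '12-99 m/s'
       END AS speed_range,
       COUNT(*) AS regs            -- rows are 10-minute samples, so hours = regs / 6
FROM wind_data
GROUP BY wind_dir, speed_range     -- MySQL allows grouping by the alias
ORDER BY wind_dir, speed_range;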
Posted on: Thursday 11th October 2012. Total Views: 376

What kind of table structure is this?

I have created a table in which I have categories and products all in one table. I have an id and a pid column. id is the main unique id, and pid is the parent id, so for any top-level category the pid is 0, which means it is the topmost. Consider, for example, that the first category has id 1, and inside it I create another category, say cat2; this would have id=2 and pid=1, and it can go on like a tree structure. Now I want to manipulate data in this table. For example, I might start at any midpoint and want all of a node's children as a result set. Say I have a structure like cat1->cat2->cat3->cat4->cat5, and now I want to delete cat3 and its sub-items. I have the id of cat3, but I don't know how to form a query to list all its sub-items, or to fetch all their ids so I can delete them or do other processing. Also, what is the term for such a structure in database concepts?
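This parent-id layout is commonly called the adjacency list model. In the MySQL of that era you had to walk it level by level (or redesign as nested sets), but modern MySQL (8.0+) can collect a whole subtree with a recursive CTE. A sketch, assuming a hypothetical categories table with the id/pid columns described, and cat3 having id 3:

Code:
-- Collect cat3 and every descendant, level by level.
WITH RECURSIVE subtree AS (
    SELECT id FROM categories WHERE id = 3
    UNION ALL
    SELECT c.id
    FROM categories c
    JOIN subtree s ON c.pid = s.id
)
SELECT id FROM subtree;
-- Then feed the returned ids into DELETE FROM categories WHERE id IN (...)
-- as a second statement.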
Posted on: Friday 12th October 2012. Total Views: 63

Improve UPDATE query speed

Hi, I'm trying to set up a table to store values for a map that I am making. The table is pretty much set up as:

Code:
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| x     | int(11) | YES  |     | NULL    |       |
| y     | int(11) | YES  |     | NULL    |       |
| tile  | int(11) | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+

and I attempt to update a tile using:

Code:
UPDATE mapArray SET tile=tileValue WHERE x=xValue AND y=yValue;

However, due to the size of the table (16 million entries for a 4000 x 4000 world) it takes 22 seconds to update:

Query OK, 0 rows affected (22.62 sec)

Are there any suggestions for improving the update speed? I'm still fairly new to databases.
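With no index on (x, y), every UPDATE has to scan all 16 million rows to find the one tile. Since a coordinate pair identifies a tile uniquely, a composite primary key turns the UPDATE into a single index seek. A sketch against the mapArray table above (the columns must be NOT NULL, and the ALTER itself will take a while on 16M rows):

Code:
ALTER TABLE mapArray
    MODIFY x int NOT NULL,
    MODIFY y int NOT NULL,
    ADD PRIMARY KEY (x, y);

-- Now this touches one row via the index instead of scanning the table.
UPDATE mapArray SET tile = 7 WHERE x = 1250 AND y = 3780;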
Posted on: Friday 12th October 2012. Total Views: 105

Select speed of: NOT then LIKE vs LIKE then NOT?

Hello! Can anyone tell me if there is a speed benefit to excluding rows at the beginning of your WHERE conditions in a SELECT statement, as opposed to excluding them at the end? For example:

Code:
select * from table where col NOT REGEXP '[[::]]' AND col REGEXP '[[::]]$'

vs

Code:
select * from table where col REGEXP '[[::]]$' AND col NOT REGEXP '[[::]]'

regan
Posted on: Friday 12th October 2012. Total Views: 247

How to speed up this query?

Could do with a little help optimising this query...

Code:
SELECT p.*,
       COALESCE(pi2.url, '') AS imageURL2,
       COALESCE(pi3.url, '') AS imageURL3,
       COALESCE(pi4.url, '') AS imageURL4,
       COALESCE(pi5.url, '') AS imageURL5,
       pt.size AS palletSize,
       sr.name AS regionName,
       COALESCE(p.imageurl, '') AS imageurl,
       (p.quantityAvailable - COALESCE(SUM(soi.quantity), 0)) AS remainingQuantity,
       COALESCE(SUM(soi.quantity), 0) AS ordered,
       offerTypes.name AS offerTypeName,
       sc.amount AS shippingCost,
       p.price AS basePrice,
       srcCurr.rate AS srcRate,
       destCurr.rate AS destRate,
       MOD(p.id, 15) AS mainItem,
       COALESCE(duties.dutypercase, 0) AS duty
FROM products p
JOIN palletTypes pt ON pt.id = p.palletType
LEFT JOIN relatedItems ri ON ri.product = p.internalid
JOIN currencies srcCurr ON srcCurr.id = p.currency
JOIN currencies destCurr ON destCurr.id = '1'
JOIN shippingcosts sc ON sc.fromregion = p.region AND sc.toregion = 45
LEFT JOIN offerTypes ON p.offertype = offerTypes.id
LEFT JOIN salesOrderItems soi ON soi.product = p.internalID
LEFT JOIN regions sr ON sr.internalid = p.region
LEFT JOIN regions dr ON dr.internalid = 45
LEFT JOIN duties ON duties.product = p.id AND duties.country = dr.country
LEFT JOIN productimages pi2 ON pi2.id = p.image2
LEFT JOIN productimages pi3 ON pi3.id = p.image3
LEFT JOIN productimages pi4 ON pi4.id = p.image4
LEFT JOIN productimages pi5 ON pi5.id = p.image5
WHERE (p.id = 15 OR EXISTS (SELECT 1 FROM relatedItems ri2 WHERE ri2.product = 25 AND ri2.groupid = ri.groupid))
  AND p.palletised <> 0 AND p.live = 1 AND p.reason = ''
GROUP BY p.id
HAVING p.offerType <> 0
   AND (p.offerType = 3
        OR ((p.offerType = 2 OR p.offerType = 1) AND (fullLoad = 0 AND remainingquantity >= minimumQuantity)
            OR (fullLoad = 1 AND remainingquantity > palletsPerFullLoad)))
ORDER BY mainItem

An EXPLAIN returns the following...

Code:
1 PRIMARY destCurr const PRIMARY PRIMARY 4 const 1 Using temporary; Using filesort
1 PRIMARY p index PRIMARY,palletised_2 PRIMARY 4 NULL 762 Using where
1 PRIMARY srcCurr eq_ref PRIMARY PRIMARY 4 fmcgtrade_live.p.currency 1 Using where
1 PRIMARY sc ref fromto fromto 8 fmcgtrade_live.p.region,const 1
1 PRIMARY offerTypes eq_ref PRIMARY PRIMARY 4 fmcgtrade_live.p.offertype 1
1 PRIMARY soi ref product product 4 fmcgtrade_live.p.internalID 2
1 PRIMARY pt eq_ref PRIMARY PRIMARY 4 fmcgtrade_live.p.palletType 1
1 PRIMARY ri ref product product 4 fmcgtrade_live.p.internalID 1 Using where
1 PRIMARY sr eq_ref internalid internalid 4 fmcgtrade_live.p.region 1
1 PRIMARY dr const internalid internalid 4 const 1 Using where
1 PRIMARY duties ref product product 8 fmcgtrade_live.p.id,fmcgtrade_live.dr.country 1
1 PRIMARY pi2 eq_ref PRIMARY PRIMARY 4 fmcgtrade_live.p.image2 1
1 PRIMARY pi3 eq_ref PRIMARY PRIMARY 4 fmcgtrade_live.p.image3 1
1 PRIMARY pi4 eq_ref PRIMARY PRIMARY 4 fmcgtrade_live.p.image4 1
1 PRIMARY pi5 eq_ref PRIMARY PRIMARY 4 fmcgtrade_live.p.image5 1
2 DEPENDENT SUBQUERY ri2 ref product,groupid product 4 const 1 Using where

Been staring at this for a while and tried a number of indexes, but the 'p' table still requires 760+ rows. Any ideas to try? The products (p) table setup is...

Code:
CREATE TABLE `products` (
  `id` int(11) NOT NULL auto_increment,
  `created` datetime NOT NULL default '2000-01-01 00:00:00',
  `modified` datetime NOT NULL default '2000-01-01 00:00:00',
  `internalID` int(11) NOT NULL default '0',
  `name` varchar(100) NOT NULL default '',
  `summary` text,
  `description` text,
  `price` float NOT NULL default '0',
  `currency` tinyint(4) NOT NULL default '0',
  `unitprice` float NOT NULL default '0',
  `unitspercase` int(11) NOT NULL default '0',
  `casesperpallet` int(11) NOT NULL default '0',
  `minQty` int(11) NOT NULL default '0',
  `onHand` double NOT NULL default '0',
  `imageid` int(11) NOT NULL default '0',
  `imageurl` varchar(200) NOT NULL default '',
  `family` int(11) NOT NULL default '0',
  `sector` int(11) NOT NULL default '0',
  `brand` int(11) NOT NULL default '0',
  `live` smallint(6) NOT NULL default '0',
  `region` int(11) NOT NULL default '0',
  `limitedQuantity` smallint(6) NOT NULL default '0',
  `quantityAvailable` int(11) NOT NULL default '0',
  `soldByPallet` smallint(6) NOT NULL default '0',
  `fullLoad` smallint(6) NOT NULL default '0',
  `palletsPerFullLoad` int(11) NOT NULL default '0',
  `stackablePallets` smallint(6) NOT NULL default '0',
  `minimumQuantity` int(11) NOT NULL default '0',
  `maximumquantity` int(11) NOT NULL default '0',
  `quotationQuantity` int(11) NOT NULL default '0',
  `minimumOrder` int(11) NOT NULL default '0',
  `markuppercA` float NOT NULL default '0',
  `markupvalA` float NOT NULL default '0',
  `markupPercB` float NOT NULL default '0',
  `markupValB` float NOT NULL default '0',
  `markuppercC` float NOT NULL default '0',
  `markupvalc` float NOT NULL default '0',
  `markupPercD` float NOT NULL default '0',
  `markupValD` float NOT NULL default '0',
  `palletWeight` float NOT NULL default '0',
  `offertype` int(11) NOT NULL default '0',
  `leadtime` int(11) NOT NULL default '0',
  `added` int(11) NOT NULL default '0',
  `customer` int(11) NOT NULL default '0',
  `customFamily` varchar(50) default NULL,
  `customSector` varchar(50) default NULL,
  `customBrand` varchar(50) default NULL,
  `session` varchar(50) NOT NULL default '0',
  `confirmed` smallint(6) NOT NULL default '0',
  `incoterms` int(11) NOT NULL default '0',
  `palletType` int(11) NOT NULL default '0',
  `country` int(11) NOT NULL default '0',
  `languages` text NOT NULL,
  `shelflife` varchar(200) NOT NULL default '',
  `palletised` smallint(6) NOT NULL default '0',
  `caseWeight` float NOT NULL default '0',
  `alcoholcategory` int(11) NOT NULL default '0',
  `alcoholpercentage` float NOT NULL default '0',
  `annualProduction` int(11) NOT NULL default '0',
  `litresPercase` float NOT NULL default '0',
  `plato` int(11) NOT NULL default '0',
  `dutystatus` int(11) NOT NULL default '0',
  `image2` int(11) NOT NULL default '0',
  `image3` int(11) NOT NULL default '0',
  `image4` int(11) NOT NULL default '0',
  `image5` int(11) NOT NULL default '0',
  `image1` int(11) NOT NULL default '0',
  `productOK` smallint(6) NOT NULL default '0',
  `reason` varchar(100) NOT NULL default '',
  `sector_new` smallint(6) NOT NULL default '0',
  `category` smallint(6) NOT NULL default '0',
  `subcategory` smallint(6) NOT NULL default '0',
  `subcategory2` smallint(6) NOT NULL default '0',
  `brand_new` smallint(6) NOT NULL default '0',
  `packtype` smallint(6) NOT NULL default '0',
  `unitsize` smallint(6) NOT NULL default '0',
  PRIMARY KEY (`id`),
  KEY `internalID` (`internalID`),
  KEY `family` (`family`),
  KEY `sector` (`sector`),
  KEY `brand` (`brand`),
  KEY `fullLoad` (`fullLoad`),
  KEY `palletsPerFullLoad` (`palletsPerFullLoad`),
  KEY `quotationQuantity` (`quotationQuantity`),
  KEY `session` (`session`),
  KEY `palletised_2` (`palletised`,`live`,`reason`,`internalID`,`offertype`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1046;
Posted on: Friday 12th October 2012. Total Views: 124

Slow DB access. Any way to test the "speed"?

Hey all, I have worked with a few MySQL DBs on different servers, but I have recently been asked to work with one on nicnames. It seems horribly slow. Working in phpMyAdmin (which I had to install myself), it takes ages whenever I want to do anything: view the table structure, view the data, etc. Is there any way I can test the speed, so I can compare it against another server I work with, prove there is a speed issue, and take it to nicnames? Because this is crazy and is going to affect the speed of the website! Cheers, RF
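One crude, hedged way to compare two servers from SQL alone: MySQL's BENCHMARK() function for raw CPU speed, plus timing an identical query on each box. The table and column names below are placeholders.

Code:
-- CPU-bound micro-benchmark; run on both servers and compare the timings.
SELECT BENCHMARK(1000000, MD5('speed-test'));

-- For query speed, time the same statement on each server.
-- SQL_NO_CACHE keeps the MySQL 5.x query cache from skewing repeat runs.
SELECT SQL_NO_CACHE COUNT(*) FROM some_table WHERE some_col > 0;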
Posted on: Friday 12th October 2012. Total Views: 84

Different query order, different query speed

All, to optimize a SQL statement, I tested it with 10 queries. Each of them has different query words; they are represented by Q1, Q2, ..., Q10 respectively. I found that the query speed for a given query depends on the order in which the sequence of queries is executed. Before executing the queries, I reboot the MySQL server to clean the cache. The query execution time for the case where I run Q1 first is much longer than for the case where I run Q1 as the last query, i.e., executing Q1 after Q2, Q3, ..., Q10. This is true for the other queries too; generally, later queries tend to be faster than earlier queries. I am wondering why this is the case. I guess it is due to the cache mechanism, but I am not very sure. Can anyone explain this for me? By the way, I found that

Code:
/usr/local/mysql/bin/mysqladmin shutdown

cannot clean the cache. Why? To test the effect of a modification to a SQL statement, should I reboot the MySQL server before testing?
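The usual explanation is warm caches: after the first queries run, index and data pages sit in MySQL's caches (and, on 5.x, whole results can sit in the query cache), so later queries read from memory. A shutdown clears MySQL's own caches but not the operating system's file cache, which is likely why mysqladmin shutdown does not give cold-cache numbers. For repeatable comparisons, a hedged sketch (my_table/my_col are placeholders):

Code:
-- Bypass the query cache for a single statement:
SELECT SQL_NO_CACHE * FROM my_table WHERE my_col = 42;

-- Or clear the query cache and close cached tables between runs:
RESET QUERY CACHE;
FLUSH TABLES;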
Posted on: Friday 12th October 2012. Total Views: 111

[SQL Server] Optimize query speed

Hi all, I need to optimize this query. Is there another way to make the query faster? The field [DATE-V] is the clustered index; [TZZ], [COD1], [COD2], [COD3] and [COD4] are non-clustered indexes.
Posted on: Saturday 20th October 2012. Total Views: 113

I need to speed up Deletion of Duplicate Rows

Dear list, I have developed the following code for finding duplicate rows. I then calculate the average of each column in the duplicate rows and store the results in a temp table. I then delete the duplicate rows from the main table, and finally insert the results from the temp table back into the main table. Does anyone know how I can speed this process up? It runs extremely slowly.
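The original code is not shown, but the described pattern (find duplicates, average them, delete, re-insert) can usually be collapsed into three set-based statements instead of row-by-row work. A sketch under assumed names: main_table, a dup_key column that defines what counts as a duplicate, and val1/val2 columns to average. An index on dup_key speeds up both the GROUP BY and the DELETE.

Code:
-- 1. Average each duplicate group in one pass.
CREATE TEMPORARY TABLE dedup AS
SELECT dup_key, AVG(val1) AS val1, AVG(val2) AS val2
FROM main_table
GROUP BY dup_key
HAVING COUNT(*) > 1;

-- 2. Remove all duplicated rows in one statement.
DELETE FROM main_table
WHERE dup_key IN (SELECT dup_key FROM dedup);

-- 3. Re-insert one averaged row per former duplicate group.
INSERT INTO main_table (dup_key, val1, val2)
SELECT dup_key, val1, val2 FROM dedup;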
Posted on: Sunday 21st October 2012. Total Views: 126

How can I test the speed of a table in MySQL?

My web host told me that my site is slow because of my DB (MySQL) and that I have to perform some actions (like indexing). How do I check whether a table has an index? Thanks
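Checking for indexes is straightforward; the table and column names below are placeholders.

Code:
-- List every index on a table:
SHOW INDEX FROM my_table;

-- Check whether a query actually uses one
-- (key = NULL in the EXPLAIN output means no index was used):
EXPLAIN SELECT * FROM my_table WHERE some_col = 'value';

-- Add an index on a frequently filtered column:
ALTER TABLE my_table ADD INDEX idx_some_col (some_col);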
Posted on: Sunday 21st October 2012. Total Views: 122

mysqldump speed on a DB with thousands of tables

I am running WordPress MultiUser (WPMU) to host a quite large French blog-hosting network (close to 100,000 users). WPMU creates 10 tables per user (basically duplicating WordPress's tables for each user). This makes quite a lot of tables, as you can imagine. Now I'm migrating my DB to a custom multi-DB setup, basically separating users into multiple DBs instead of keeping them all in one. Thing is, when I run mysqldump for just a few tables on the initial DB (trying to migrate one user's data, for instance), the mysqldump takes *really long* to perform. It takes me basically 1 hour to migrate 50 users, and considering the total number of users, that's going to take forever. So I've done some investigation, and using SHOW PROCESSLIST I've seen that most of the time spent by mysqldump seems to go into queries like SHOW TABLE STATUS LIKE 'tablename' or SHOW TABLES LIKE 'tablename'. These are probably taking very long due to the large number of tables in the DB. Is there a way I can make these faster, or just have mysqldump use something else entirely?
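Two hedged things to try: name each user's tables explicitly so mysqldump only inspects those ten tables, and enlarge the table caches so the per-table SHOW TABLE STATUS lookups stop thrashing. The database and table names below are placeholders, and the cache values are illustrative, not tuned.

Code:
# Dump one user's tables explicitly instead of letting mysqldump scan the DB:
mysqldump --single-transaction wpmu_db wp_42_posts wp_42_options wp_42_comments > user42.sql

# my.cnf sketch: bigger caches cut the per-table open/metadata cost
# when a database holds on the order of a million tables.
table_open_cache = 4096
table_definition_cache = 4096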
Posted on: Monday 22nd October 2012. Total Views: 107

MySQL update speed with 10K+ rows

I have a table called `users` with 10K+ rows. Each user has fields called `gold` and `level`. Each user should get pow(1.25, level)*100 gold per hour, and it should be divided so that users get 1 gold per time unit. Example: a user at level 7 should get 1.25^7 * 100 = 476 gold per hour (1 gold every 7.5 seconds). Now I have a problem: I can't update all users at once every second, because it would create too much load for the server (MySQL load was 100% when I did it). Even if I do this update every 7 seconds, it is too much load for the server. Maybe I can determine the user's gold amount like this: user gold = user gold + (seconds_after_last_update/3600), and do the update every hour. But what happens when the user wants to spend this gold? My MySQL-updating daemon runs in an infinite loop doing 1 action every second. This is for my little browser-based game.
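The usual fix is to stop ticking gold from a daemon and compute it lazily: record when each user's gold was last settled, then settle on demand (login, page view, purchase). One UPDATE per active user replaces 10K+ updates per second. A sketch assuming an added last_update DATETIME column:

Code:
-- Settle a user's gold up to now, right before reading or spending it.
-- (FLOOR discards the fractional remainder; use a DECIMAL gold column to keep it.)
UPDATE users
SET gold = gold + FLOOR(POW(1.25, level) * 100
                        * TIMESTAMPDIFF(SECOND, last_update, NOW()) / 3600),
    last_update = NOW()
WHERE id = 42;

-- Spending is then an ordinary conditional update:
UPDATE users SET gold = gold - 500
WHERE id = 42 AND gold >= 500;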
Posted on: Monday 22nd October 2012. Total Views: 96

Do indexed columns increase the speed of ORDER BY?

Sorry if this is confusing. I have read that indexing columns that contain certain values can increase the speed of a query. Does this hold true for columns that you intend to sort on in ORDER BY, but that are not necessarily in the WHERE clause? I'm guessing it has no effect on ORDER BY, but I want to be sure.
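An index can in fact serve ORDER BY directly: if MySQL can read rows in index order, it skips the sort step entirely, which shows up in EXPLAIN as the absence of "Using filesort". A hedged sketch with hypothetical names:

Code:
ALTER TABLE articles ADD INDEX idx_created (created_at);

-- With the index, MySQL walks idx_created backwards and stops after 10 rows;
-- without it, EXPLAIN shows "Using filesort" over the whole table.
EXPLAIN SELECT id, title
FROM articles
ORDER BY created_at DESC
LIMIT 10;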
Posted on: Wednesday 24th October 2012. Total Views: 94

MySQL: speed up query, ORDER BY column is too slow!

I have a query that takes ages:

Code:
SELECT tbl1.a, tbl2.b,
       MATCH (tbl1.a) AGAINST ('someValue' IN BOOLEAN MODE) AS score
FROM `a`, `b`
WHERE tbl2.b = 'someRestriction'
ORDER BY score DESC, b.tbl1 DESC
LIMIT 20

but because there are thousands of rows, it takes ages to do both of the orderings. However, ORDER BY score LIMIT 20 is really quick, and ORDER BY b.tbl1 LIMIT 20 is really slow. Basically I only want to ORDER BY score DESC LIMIT 20 and then order those 20 results by b.tbl1 DESC (which shouldn't take that long). ORDER BY score DESC, b.tbl1 DESC LIMIT 20 seems to order the entire set by b.tbl1 DESC (and then give the top 20). My pseudo-code (that doesn't work): ORDER BY score DESC LIMIT 20, b.tbl1 DESC. Is there a way of doing this?
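The standard trick is a derived table: take the top 20 by score in an inner query, then re-sort just those 20 rows in the outer query. A sketch following the post's names, which look garbled, so treat every identifier as a placeholder:

Code:
SELECT t.*
FROM (
    SELECT tbl1.a, tbl2.b,
           MATCH (tbl1.a) AGAINST ('someValue' IN BOOLEAN MODE) AS score
    FROM `a` tbl1, `b` tbl2      -- aliases assumed; the original FROM is ambiguous
    WHERE tbl2.b = 'someRestriction'
    ORDER BY score DESC
    LIMIT 20
) AS t
ORDER BY t.b DESC;               -- secondary sort applied to the 20 rows only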
Posted on: Wednesday 24th October 2012. Total Views: 104

Help speeding up a SELECT

I have worked on this SELECT as much as I can and I can't get it to run any faster, but I am positive someone here can see where the problem is:

Code:
SELECT t1.id, t4.tag_id, t4.name AS tag, t4.slug AS tag_slug,
       t4.description AS tag_description, t1.header,
       t1.description AS teaser, t1.slug
FROM news.news AS t1, news.tag_relation AS t2, news.tag_taxonomy AS t3, news.tags AS t4
WHERE t2.post_id = t1.id
  AND t3.tag_id = t2.tag_id
  AND t4.tag_id = t3.tag_id
GROUP BY t1.id
ORDER BY t1.`date` DESC, t1.id DESC
LIMIT 0,25

I believe I have indexed everything necessary and have simplified this thing as much as possible. The tag select is what is slowing it down, since each article has more than one record in the tag_relation table, which is the reason for the DISTINCT and GROUP BY.
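One hedged restructuring: pick the newest 25 articles first in a derived table, which can run off an index on (`date`, id), then join the tag tables against only those 25 ids. Like the original, this still returns one arbitrary tag per article because of the permissive GROUP BY.

Code:
ALTER TABLE news.news ADD INDEX idx_date_id (`date`, id);

SELECT t1.id, t4.tag_id, t4.name AS tag, t4.slug AS tag_slug,
       t4.description AS tag_description, t1.header,
       t1.description AS teaser, t1.slug
FROM (
    SELECT id, header, description, slug, `date`
    FROM news.news
    ORDER BY `date` DESC, id DESC
    LIMIT 25
) AS t1
JOIN news.tag_relation t2 ON t2.post_id = t1.id
JOIN news.tag_taxonomy t3 ON t3.tag_id = t2.tag_id
JOIN news.tags t4 ON t4.tag_id = t3.tag_id
GROUP BY t1.id
ORDER BY t1.`date` DESC, t1.id DESC;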
Posted on: Wednesday 24th October 2012. Total Views: 88

Join query - can I speed it up?

Hi, I have a query that works fine (quickly) until I add the LEFT OUTER JOIN on the appointments table (along with the SUM aggregate that goes with it). The query isn't finished at this point in terms of getting all the information I need, but it already slows down to the point where I really can't use it. Is it possible to speed this up, and/or is there a better way to do this query?

Code:
SELECT u.idusers,
       MIN(u.firstname) fname,
       MIN(u.lastname) lname,
       COUNT(c.idclients) totalLeads,
       SUM(CASE WHEN a.appointmentid IS NULL THEN 0 ELSE 1 END) totalApt
FROM clients c
INNER JOIN leadsource ls ON ls.idleadsource = c.leadsourceid
INNER JOIN store s ON s.storeid = c.storeid
INNER JOIN users u ON u.idusers = c.createdby_userid
INNER JOIN userrole_users ru ON ru.userid = u.idusers
INNER JOIN userrole r ON r.iduserrole = ru.userroleid
LEFT OUTER JOIN appointments a ON a.userid = u.idusers
WHERE c.createdby_date BETWEEN '2008-02-01' AND NOW()
  AND u.active = 1
  AND r.iduserrole = 9
GROUP BY u.idusers
ORDER BY lname, fname ASC

Relationships between the tables: clients -> leadsource: 1 to 1; clients -> store: 1 to 1; clients -> users: 1 to 1; users -> userrole_users: 1 to 1; userrole_users -> userrole: 1 to 1; users -> appointments: 1 to many.

Here is the information that I need out of the query: Employee Name, Leads Taken, Appointments Set, Showed Customers, Sold; e.g. Jim Smith, 23, 17, 10, 3. I am not really worried about getting this information correct in terms of display, but about getting the query to fetch it in a timely fashion, since this is an internet application. Thank you for your time and help, cranium
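The slowdown is most likely row explosion: each user's many clients get multiplied by that user's many appointments before the GROUP BY, which also silently inflates the counts. Pre-aggregating appointments in a derived table joins exactly one row per user instead. A sketch (it drops the leadsource and store joins, which are 1 to 1 and unused in the SELECT):

Code:
SELECT u.idusers,
       MIN(u.firstname) AS fname,
       MIN(u.lastname) AS lname,
       COUNT(c.idclients) AS totalLeads,
       COALESCE(MAX(apt.totalApt), 0) AS totalApt
FROM clients c
INNER JOIN users u ON u.idusers = c.createdby_userid
INNER JOIN userrole_users ru ON ru.userid = u.idusers
INNER JOIN userrole r ON r.iduserrole = ru.userroleid
LEFT JOIN (
    -- one pre-counted row per user instead of appointments x clients rows
    SELECT userid, COUNT(*) AS totalApt
    FROM appointments
    GROUP BY userid
) apt ON apt.userid = u.idusers
WHERE c.createdby_date BETWEEN '2008-02-01' AND NOW()
  AND u.active = 1
  AND r.iduserrole = 9
GROUP BY u.idusers
ORDER BY lname, fname;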
Posted on: Wednesday 24th October 2012. Total Views: 117

MySQL speed question

The issue I am currently having is that once something in a table is updated, the entire table appears to be re-read from the hard drive. Basically, after making one small update to the table, the next query takes the full 2.5 seconds (250,000 rows or so) instead of reading out of memory. Can I get MySQL to keep changes in memory and simply write the updates through to the HD? I don't think I understand the process. Here's what I think is going on:

1. Table is in memory
2. Table is slightly altered
3. Table is dropped from memory
4. Table is stored on HD
5. Table is read back into memory upon query

Why can't it just:

1. Table is in memory
2. Table in memory is altered
3. Change is made to HD
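What's described sounds less like the table leaving memory and more like the MySQL 5.x query cache: any write to a table throws away every cached result for that table, so the next read pays the full 2.5 seconds. InnoDB's buffer pool behaves the way the post asks for: changed pages stay in memory and are flushed to disk in the background. A hedged my.cnf sketch; the size is illustrative.

Code:
# my.cnf: InnoDB keeps hot (and modified) pages in the buffer pool
# instead of invalidating whole-table caches on every write.
default-storage-engine = InnoDB
innodb_buffer_pool_size = 1G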
Posted on: Wednesday 24th October 2012. Total Views: 106

Special kind of query

I need to query my database from a script to return tables starting with a certain string of text, i.e.:

Code:
mysql db tables
--------
happy_people_land
my_prefix_lala_land
my_prefix_candy_land
something_my_prefix_

PHP Code:
$table_prefix = 'my_prefix_';
$result = mysql_query("SHOW TABLES FROM $dbname WHERE tablename begins with $table_prefix");
$row = mysql_fetch_assoc($result);
// returns my_prefix_lala_land and my_prefix_candy_land

Also, I bet everyone is tired of me asking how to do WHEREs for all my MySQL queries. Is there a super list or anything that has every single WHERE clause? The website's documentation seems to be just a bunch of fluff, and it's hard to go on it and really find anything I'm looking for.
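SHOW TABLES takes a LIKE pattern directly, so no WHERE clause is needed here. The underscores have to be escaped because `_` is a single-character wildcard in LIKE.

Code:
-- Matches my_prefix_lala_land and my_prefix_candy_land, but not
-- something_my_prefix_ (the pattern is anchored at the start):
SHOW TABLES FROM dbname LIKE 'my\_prefix\_%';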
Posted on: Wednesday 24th October 2012. Total Views: 68

Multiple databases - any speed increase?

Do you folks know if there is any speed increase associated with using multiple databases rather than a single database for all of one's tables? Obviously speed is increased with more, smaller tables; but is speed increased with more, smaller databases?
Posted on: Wednesday 24th October 2012. Total Views: 311

MySQL table with 9 million rows... speed up queries?

Hi all. I've been asked by a client to have a look at one of their databases (MySQL 4.1 on Windows 2003). They have a MyISAM table with 9 million records in it. Queries against this table, with one join to another table and using DISTINCT, are clocking in at somewhere between 6 and 15 seconds each. Unfortunately, due to constraints, I'm unable to refactor the schema in any way. This table is updated once a month from an external resource, and is crucial both in a daily update which can involve upwards of 10,000 records and in user-initiated searches. I've reduced the query against this table in the daily update down to the bare minimum required for usable results, and I've asked them to up the RAM in the box to 4 GB. As a public website soon to be released, this is expected to take a pounding, and I don't feel confident that it's going to be able to stand up to use. At 6-15 seconds per search under light load, I'm guessing most users won't hang around to enjoy the benefit of the data. I'm planning on replacing their default my.cnf with my-huge.cnf, but I was wondering if there is anything else I could do to help bring down the query time. It crossed my mind to try to use a HEAP table to match against, but the server just doesn't like my attempts to duplicate the table. Anything else I can do?
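On the HEAP idea: MEMORY (HEAP) tables cannot hold TEXT/BLOB columns and are capped by max_heap_table_size, which is the usual reason a copy of a large table fails. A hedged sketch, assuming the table has no TEXT/BLOB columns and fits in the 4 GB of RAM; names are placeholders.

Code:
-- Raise the per-table memory cap for this session (1 GB, illustrative):
SET max_heap_table_size = 1073741824;

-- Copy the data; note CREATE TABLE ... SELECT does not copy indexes,
-- so re-add the ones the queries need afterwards.
CREATE TABLE big_copy ENGINE=MEMORY AS SELECT * FROM big_table;
ALTER TABLE big_copy ADD INDEX idx_lookup (lookup_col);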
Posted on: Wednesday 24th October 2012. Total Views: 97