Efficient way to count groups of records


I have a database table with items and their ranks, and I want to get the count of items for each rank. The data looks like:

item1 rank 0
item2 rank 0
item3 rank 0
item4 rank 1
item5 rank 1
item6 rank 1
item7 rank 2
item8 rank 2

There are about 500 items, with ranks 0-10.

What would be the best SQL query to get the following output?

rank0 - 3
rank1 - 3
rank2 - 2

I know I could do a COUNT while filtering for each rank, but I'd prefer not to run the query 11 times.
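For completeness, GROUP BY produces all the counts in a single query. A minimal sketch, using SQLite in place of MySQL and assuming a table named items with name and rank columns (note that in MySQL 8, rank is a reserved word and would need backticks):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, rank INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [
    ("item1", 0), ("item2", 0), ("item3", 0),
    ("item4", 1), ("item5", 1), ("item6", 1),
    ("item7", 2), ("item8", 2),
])

# One query instead of eleven: count rows per rank in a single pass.
rows = conn.execute(
    "SELECT rank, COUNT(*) FROM items GROUP BY rank ORDER BY rank"
).fetchall()
for rank, cnt in rows:
    print(f"rank{rank} - {cnt}")
```

With the sample data above this prints rank0 - 3, rank1 - 3, rank2 - 2, matching the desired output; ranks with no items simply don't appear.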
Posted On: Thursday 25th of October 2012 10:53:05 PM | Total Views: 112

RELATED MYSQL TOPICS




How to efficiently count results without returning them all

I'm starting to end up with quite complex SELECT statements, which I limit based on the maximum results to show per page and the starting page number. However, I also need to know the total number of results returned in order to provide links to pages with more results. What's the best way to achieve this?
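One portable approach is a second COUNT(*) query that reuses the same WHERE clause as the page query (MySQL also offers SQL_CALC_FOUND_ROWS with FOUND_ROWS() for this). A sketch using SQLite, with a made-up results table and filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (id INTEGER, score INTEGER)")
conn.executemany("INSERT INTO results VALUES (?, ?)",
                 [(i, i % 5) for i in range(23)])

where = "score > ?"   # shared filter for both queries
params = (1,)
page_size, page = 10, 0

# Page of rows to display.
rows = conn.execute(
    f"SELECT id FROM results WHERE {where} LIMIT ? OFFSET ?",
    params + (page_size, page * page_size)).fetchall()

# Total matching rows, reusing the same WHERE clause.
total = conn.execute(
    f"SELECT COUNT(*) FROM results WHERE {where}", params).fetchone()[0]

pages = -(-total // page_size)  # ceiling division for the page links
print(total, pages, len(rows))
```

Keeping the WHERE clause in one place guarantees the count and the page query can never drift apart.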
Views: 84 | Posted on: Friday 12th October 2012

Most efficient way to get number of elements in each buddy icon category in a table

Thank you for looking at this thread and attempting to offer some assistance. I am doing some work on a buddy icon website and would like some advice and possibly some code. I have a table named "icons" with the following fields: id, filename, title, descript, downloads, submitter, date, category, active, viewid, rb and sname. The category is a varchar(255) but holds a number 1 - 22 which relates to a category anywhere from Sound Icons to Dolls and Pride icons. My task is to get and store to a variable the number of icons in each category so I can later print them out. I am currently doing this with a while loop and know this must be very inefficient. PHP Code:

$select = mysql_query("select * from icons where active='yes'");
echo mysql_error();
$zero = 0;
while ($numicons = mysql_fetch_array($select)) {
    if ($numicons['category'] == 0) $zero++;
    elseif ($numicons['category'] == 1) $one++;
    elseif ($numicons['category'] == 2) $two++;
    // ... and so on, one elseif per category, up to:
    elseif ($numicons['category'] == 27) $twentyseven++;
}
mysql_free_result($select);

And then I print out the variables $zero through $twentysix next to the category names on the website. PHP Code:

echo "Sound ($zero)";
echo "Dolls ($twentyfour)";
echo "Holiday ($one)";

I figured this would be more efficient and less server intensive than making a bunch of queries and calling num_rows on each. I've read about a way to use COUNT(*) in the select query, and thought it may be possible to group by category, but I cannot figure out how to get this to work. I would appreciate it if anyone could offer some advice or code snippets on the most efficient way to go about storing and printing the number of icons in each category. I will check this thread several times a day for a response, so please feel free to post your reply here. Once again, thank you for your time and effort. -Jesse
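The GROUP BY idea hinted at above does work: one query returns every category's count, and a dictionary lookup with a default of 0 replaces all 27 counter variables. A sketch using SQLite to stand in for the mysql_* calls (sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE icons (category INTEGER, active TEXT)")
conn.executemany("INSERT INTO icons VALUES (?, ?)",
                 [(0, "yes"), (0, "yes"), (24, "yes"), (1, "no"), (1, "yes")])

# One GROUP BY query replaces the whole 27-branch elseif chain.
counts = dict(conn.execute(
    "SELECT category, COUNT(*) FROM icons "
    "WHERE active = 'yes' GROUP BY category"))

# counts.get(cat, 0) plays the role of $zero, $twentyfour, and so on,
# and returns 0 for categories with no active icons.
print(f"Sound ({counts.get(0, 0)})")
print(f"Dolls ({counts.get(24, 0)})")
print(f"Holiday ({counts.get(1, 0)})")
```

Only categories with at least one active icon appear in the result, which is why the default of 0 matters when printing.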
Views: 206 | Posted on: Friday 12th October 2012

Querying a where numerous fields occur in a second table (efficiently)

I did some googling which didn't do me much good, so I've registered on this forum. Hoorah! My problem is as follows: I have 2 tables, one (games) containing a list of games. Each game has 2 fields, p1 and p2, representing the IDs of the players participating in that game, as well as other non-important information. The other table (topplayers) contains only 2 fields: pid, which corresponds to a player id, and score, which corresponds to that player's score. Entries of pid occur in games.p1 or games.p2, but the opposite is not true. I.e. the 2 tables are games(p1 int(11), p2 int(11), ...) and topplayers(pid int(11), score int(11)). I want to run a query which returns all games (from games) where both p1 and p2 appear in topplayers.pid. I managed to write some horrible abomination using 2 instances of EXISTS, but it took 5(!) minutes to run, which is _way_ too slow for my needs. What is the right way of doing it? EDIT: I've realized I'd better post my current query:

Code:
select p1, p2
from games
where exists (select * from topplayers where p1 = pid)
  and exists (select * from topplayers where p2 = pid)

EDIT EDIT: The above code was slightly modified because I am a bit thick. Still slow, though.
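Rewriting the two EXISTS subqueries as inner joins usually lets the optimizer use an index on topplayers.pid for each lookup instead of running a correlated subquery per row. A sketch in SQLite, where the PRIMARY KEY on pid supplies that index (sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE games (p1 INTEGER, p2 INTEGER);
CREATE TABLE topplayers (pid INTEGER PRIMARY KEY, score INTEGER);
INSERT INTO games VALUES (1, 2), (1, 3), (2, 3);
INSERT INTO topplayers VALUES (1, 100), (2, 90);
""")

# Join topplayers twice, once per player column; each join is an
# indexed lookup, and only games where BOTH joins match survive.
rows = conn.execute("""
    SELECT g.p1, g.p2
    FROM games g
    JOIN topplayers t1 ON t1.pid = g.p1
    JOIN topplayers t2 ON t2.pid = g.p2
""").fetchall()
print(rows)
```

Here only the (1, 2) game is returned, because player 3 is not in topplayers. On MySQL, making pid a PRIMARY KEY (or adding an index on it) is the part that actually fixes the 5-minute runtime.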
Views: 95 | Posted on: Friday 12th October 2012

Horribly inefficient query

I'm trying to speed up the performance of this particular database I'm using for event management. It currently runs WAY too slow, and indexes didn't help much. What I have is 2 relational tables, events and events_dates. I separated the dates into their own table because I need to be able to reuse the information in the events table regardless of the dates the event happens; i.e. dates are disposable, the events are not. So here is the rundown of what I have put together. Some of the fields in the events table are legacy fields that don't serve much purpose, but from my diagnosis it's the events_dates table that is slowing down the queries.

Code:
mysql> describe events;
+----------------+--------------+------+-----+---------+----------------+
| Field          | Type         | Null | Key | Default | Extra          |
+----------------+--------------+------+-----+---------+----------------+
| ID             | int(11)      |      | PRI | NULL    | auto_increment |
| UserID         | int(11)      |      |     | 0       |                |
| TypeID         | int(11)      |      |     | 0       |                |
| VenueID        | int(11)      |      |     | 0       |                |
| Phone          | varchar(20)  |      |     |         |                |
| Website        | varchar(120) |      |     |         |                |
| Title          | mediumtext   |      | MUL |         |                |
| Text           | text         |      |     |         |                |
| Image          | varchar(100) |      |     |         |                |
| NuvoPick       | int(1)       |      |     | 0       |                |
| Date           | int(11)      |      |     | 0       |                |
| GroupID        | int(11)      |      |     | 0       |                |
| Print          | int(1)       |      |     | 0       |                |
| BandID         | int(11)      |      |     | 0       |                |
| Continuing     | int(1)       |      |     | 0       |                |
| Alpha          | varchar(100) |      |     |         |                |
| recureTime     | int(11)      | YES  |     | NULL    |                |
| recureInterval | int(2)       | YES  |     | NULL    |                |
| endDate        | int(11)      | YES  |     | NULL    |                |
| subTypeID      | int(11)      | YES  |     | NULL    |                |
| notes          | text         | YES  |     | NULL    |                |
+----------------+--------------+------+-----+---------+----------------+

and

Code:
mysql> describe events_dates;
+---------+---------+------+-----+---------+----------------+
| Field   | Type    | Null | Key | Default | Extra          |
+---------+---------+------+-----+---------+----------------+
| id      | int(11) |      | PRI | NULL    | auto_increment |
| eventid | int(11) | YES  | MUL | NULL    |                |
| date    | int(11) | YES  |     | NULL    |                |
+---------+---------+------+-----+---------+----------------+

Here is an example query I am using to pull data from the events tables.

Code:
SELECT events.ID, events.Title, events.Text, events.Phone AS eventPhone,
       events.Website, events.NuvoPick, events_types.Type, events_dates.date,
       venues.Name, venues.Address, venues.Phone AS venuePhone, venues.City,
       venues.State, venues.Zip, venues.ID AS venueID,
       venues_regions.Name AS Region
FROM events, events_types, events_dates, venues, venues_regions
WHERE events_dates.eventid = events.ID
  AND events_dates.date > 1184644800
  AND events_dates.date < 1185249600
  AND venues_regions.ID = venues.Region
  AND events_types.ID = events.TypeID
  AND venues.ID = events.VenueID
ORDER BY events_dates.date ASC
LIMIT 25 OFFSET 0;
Views: 89 | Posted on: Friday 12th October 2012

How can I achieve this query best (quickest and most efficient)?

OK, I run an online game. I want to give each user a ranking based on how high their score is. If, for example, I had 20,000 players, I don't want to have to update them all one by one; it may strain the server and take a long time. Is there another way I can assign a rank number (rank 1 has the top score, and so on)? Mike
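One alternative to storing ranks at all is computing a player's rank on demand as one plus the number of strictly higher scores, so nothing needs a mass update when scores change. A sketch using SQLite, simplified to a single score table (schema and sample data are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE score (id INTEGER PRIMARY KEY, score INTEGER)")
conn.executemany("INSERT INTO score VALUES (?, ?)",
                 [(1, 500), (2, 900), (3, 700)])

# Rank = 1 + number of players with a strictly higher score.
# Nothing is stored or updated; an index on score keeps this fast.
rank = conn.execute("""
    SELECT (SELECT COUNT(*) + 1 FROM score WHERE score > s.score)
    FROM score s
    WHERE s.id = ?
""", (3,)).fetchone()[0]
print(rank)
```

Player 3 (700 points) ranks 2nd behind the 900-point player. Ties all share the same rank under this definition, which is usually what a leaderboard wants.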
Views: 113 | Posted on: Friday 12th October 2012

What is the most efficient way to manage order status?

I am designing a small system that manages orders (sort of). These are the statuses that an order can have at any one time:

new - booked less than 24 hours ago
pending - payment to be received before processing
awaiting collection - ready and waiting to be collected
cancelled - normally by customer
suspended - temporary hold
completed - collected by customer
stopped - by organization for fraud reasons

The two solutions I have for the orders table are: have just one orders table with a status field set to any of the above values, then periodically run a routine that moves completed, cancelled and stopped entries to their respective history tables, and provide a management program for dealing with awaiting-collection, suspended and pending orders as per business rules; or have separate tables for each of these order statuses, move the details from one table to another as soon as a status change occurs, and have corresponding history tables for each table. Which of these ideas is most efficient performance-wise? If neither, can anyone suggest an alternative?
Views: 98 | Posted on: Sunday 21st October 2012

How can I count the number of rows for several tables in one efficient query?

I need to count the number of rows in 4 different tables. PHP Code:

$sql = mysql_query("SELECT userid FROM table_1 WHERE userid='$member_id' AND foo=1");
$total_table_1_row = mysql_num_rows($sql);
$sql = mysql_query("SELECT userid FROM table_2 WHERE userid='$member_id' AND foo=1");
$total_table_2_row = mysql_num_rows($sql);
$sql = mysql_query("SELECT userid FROM table_3 WHERE userid='$member_id'");
$total_table_3_row = mysql_num_rows($sql);
$sql = mysql_query("SELECT name FROM table_4 WHERE name='$member_name'");
$total_table_4_row = mysql_num_rows($sql);

It is lightning fast for 1 or 2 rows while testing on localhost, but what if I have millions of rows in each table? How can I combine them, with UNION?
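The four counts can be fetched in one round trip with scalar subqueries (a UNION ALL of four COUNT(*) queries works equally well); counting on the server also beats pulling every matching row back just to call mysql_num_rows. A sketch with SQLite and made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_1 (userid INTEGER, foo INTEGER);
CREATE TABLE table_2 (userid INTEGER, foo INTEGER);
CREATE TABLE table_3 (userid INTEGER);
CREATE TABLE table_4 (name TEXT);
INSERT INTO table_1 VALUES (7, 1), (7, 1), (7, 0);
INSERT INTO table_2 VALUES (7, 1);
INSERT INTO table_3 VALUES (7), (7), (7);
INSERT INTO table_4 VALUES ('alice');
""")

# Four COUNT(*) scalar subqueries in a single statement: one round
# trip, and only four integers cross the wire instead of every row.
c1, c2, c3, c4 = conn.execute("""
    SELECT
      (SELECT COUNT(*) FROM table_1 WHERE userid = ? AND foo = 1),
      (SELECT COUNT(*) FROM table_2 WHERE userid = ? AND foo = 1),
      (SELECT COUNT(*) FROM table_3 WHERE userid = ?),
      (SELECT COUNT(*) FROM table_4 WHERE name = ?)
""", (7, 7, 7, "alice")).fetchone()
print(c1, c2, c3, c4)
```

With millions of rows, indexes on userid (and name) are what keep each subquery fast; the single-statement form mainly saves round trips and result transfer.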
Views: 132 | Posted on: Sunday 21st October 2012

Most efficient way to import a large data dump from phpMyAdmin?

I have a large data dump (export) from phpMyAdmin that my client just gave me - 200k -. What's the most efficient way to import that into a new database? The new db has no defined tables, since those are part of the phpMyAdmin export.
Views: 222 | Posted on: Sunday 21st October 2012

Help with efficient and quick database design

I'm designing a website that has several (30 or so) categories of layouts. Each category could have upwards of 1000 layouts, displayed 10 at a time. Each layout entry will have the following information: title, datestamp, and possibly category, which brings me to my question. In designing for optimal memory usage and efficiency, which database design should I choose?

1) Use a separate table for each category of layouts (30 or so), each with about 1000 rows, comprised of only title and datestamp.
2) Use a master table that includes all the layouts (30,000 or more) and consists of title, datestamp, and category (and then possibly even a join for the category names).
3) Something else?

I'm using PHP and MySQL, and this database should be able to serve over 100,000 unique visits a day, which is why I'm trying to design for efficiency and speed. Also, does anyone have any idea on indexing this design for maximum efficiency? Your help and input is greatly appreciated!
Views: 113 | Posted on: Sunday 21st October 2012

Most efficient date field type for selects?

I have searched this forum seeking the answer to this question, to no avail. I am making an events diary that will have a lot of entries. Entries will be entered in all kinds of order, including multiple events spanning a year or so. Even multiple events of the same type will be individual entries (e.g. a weekly chess club meeting every Weds at 7pm will be 52 entries). My question is: which is the best format to store dates?

- Native MySQL date type
- Integer timestamp
- A.n.other

I suppose a worst-case scenario is: imagine I want to generate a calendar for July (31 days) and show how many events there are for each day:

1st July (13 events)
2nd July (22 events)

and so on. It is possible that any given day could have up to 100 entries. What in your opinion is the best column type to use, and is there any kind of indexing I can use to speed up selects by date? Or any other tip or trick you'd be prepared to share with me?
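A native DATE column with an index on it handles the calendar case as a range scan plus a GROUP BY. A sketch using SQLite, which stores dates as ISO-8601 strings that sort and compare the same way MySQL's native DATE does (table and sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# event_date is TEXT here; in MySQL it would be a DATE column.
conn.execute("CREATE TABLE events (title TEXT, event_date TEXT)")
conn.execute("CREATE INDEX idx_event_date ON events(event_date)")
conn.executemany("INSERT INTO events VALUES (?, ?)", [
    ("chess", "2012-07-01"), ("darts", "2012-07-01"), ("quiz", "2012-07-02"),
])

# Per-day counts for one month: a range scan over the index,
# then GROUP BY day. With a native DATE column the SQL is identical.
rows = conn.execute("""
    SELECT event_date, COUNT(*) FROM events
    WHERE event_date BETWEEN '2012-07-01' AND '2012-07-31'
    GROUP BY event_date ORDER BY event_date
""").fetchall()
print(rows)
```

Integer timestamps work too, but a DATE column keeps the per-day grouping trivial and the data human-readable; either way, the index on the date column is what makes the month query fast.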
Views: 96 | Posted on: Sunday 21st October 2012

Most efficient way of storing data from multiple accounts

As part of a system I am putting together, I need to allow users to create their own accounts on my servers. Each user can create their own account and then have their users register for it. Each account needs its data separate from the others: a member registered for one account should not be able to view another account, and a username registered with one account should still be available to the other accounts. The ways I have been looking at are:

1. Create a new database for every account created, so that all users are kept in separate databases.
2. Have one table for users, one for topics, one for posts, etc., and then associate each row with the relevant account. So for example a user could register with the forum with the ID 4, and their user entry would be: Userid: 234, Username: xxxx, Password: xxxx, Forumid: 4. Then when a new member registers with any account I simply check that there is not another user with the same account ID and username. Indexes on relevant fields in this system could help speed up huge tables.

I expect to quickly have 20,000 plus accounts (and in theory it could go up to hundreds of thousands). I guess my question is which of these methods is better from a speed point of view once we get a large number of accounts and users. Also, are there restrictions on the number of fields in a table that could cause problems? Using MySQL, by the way, on an Apache server.
Views: 115 | Posted on: Sunday 21st October 2012

How inefficient is this query?

So I'm running a mailserver using postfix/dovecot with a MySQL backend and postfixadmin to manage it. Postfixadmin itself defines the database schema for everything and it's... well, not so great. It works fine for what it does out of the box, but trying to make it dance to a different tune can be trying. Anyhow, it came to light that there was a feature on our previous mailservers that no longer worked on the new postfix machine: logging in with a domain alias. I was not even aware that this was possible until recently. The standard login query for dovecot, with irrelevant fields removed, runs like so:

Code:
SELECT DISTINCT mb.username as user, mb.password, --other fields from mb
FROM mailbox mb
WHERE mb.username = '%u' AND mb.active = '1'
Views: 133 | Posted on: Wednesday 24th October 2012

Page 2 - Querying a where numerous fields occur in a second table (efficiently)

Quote: Originally Posted by r937: "first of all, the empty string is not the same as NULL, and if you wanted to match, then you'd use INNER instead of LEFT." I know that empty isn't the same as NULL. But in my sample table, checking for empty removed the non-matching rows, whereas checking for NULL didn't. INNER JOIN, LEFT JOIN, I still screw those up.
Views: 75 | Posted on: Wednesday 24th October 2012

Which is more efficient?

Hi! I need to create a large CSV file from my DB. Is it better to have MySQL create the CSV file directly using INTO OUTFILE, or to just have MySQL return the result set to PHP and have PHP create the CSV file from that?
Views: 85 | Posted on: Wednesday 24th October 2012

Page 2 - How can I achieve this query best (quickest and most efficient)?

Code:
select game.username
     , S.score
     , ( select count(*) + 1
         from score
         where score > S.score ) as rank
from game
inner join score as S
  on S.id = game.id
where game.id = $Userid

Make sure there is an index on score.score.
Views: 261 | Posted on: Wednesday 24th October 2012

More efficient than Count(*)?

I'd like to get a row count for a large table, but I've recently been wondering whether COUNT(*) is too resource intensive. Is there a better way? A factor here might be that an exact count is not necessary, so if it's off by a few, I'm not concerned. Now, you might say "not an exact count!!" You see, I was looking at SHOW TABLE STATUS, but I don't want to have to cycle through the rows to get to the table I'm interested in...
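For what it's worth, SHOW TABLE STATUS accepts a LIKE pattern, so there is no need to cycle through every table; the same figure is also exposed through information_schema. Both sketches below use placeholder database and table names, and for InnoDB the reported row count is an estimate, which matches the "off by a few is fine" requirement:

```sql
-- Status for one table only; no need to scan the whole list.
SHOW TABLE STATUS LIKE 'mytable';

-- Or read the estimate directly from information_schema.
SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';
```

For MyISAM tables the figure is exact; for InnoDB it is approximate and can drift noticeably, so use COUNT(*) when precision actually matters.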
Views: 36 | Posted on: Wednesday 24th October 2012

How to efficiently design this database?

I'm building a "car part" inventory system. The system contains thousands of parts, but each part has completely different criteria: a tire might have max pressure, expiration, size, tread, etc., whereas a car battery might have voltage, expiration, etc. Many fields are different, but many are the same. So, for example, if I wanted to search my inventory database for all parts that are expiring soon, I'd want both the battery and the tire to return in the results. My question is, what is the best way to organize this in a database? Here are the three possibilities I've come up with so far:

1) Have all possible fields in one table, regardless of whether only some are used. Pros: only 1 table is needed, and only 1 row is required for each part entry. Cons: there could be 100+ fields in the parts table, with only some used for each part, based on the type.
2) Have a table for each type of part. Pros: only needs the specific columns required for that type of part. Cons: will have a lot of tables, one for each type of part.
3) Make every field custom, and store the needed fields in a global table for each entry. Pros: only need 1 record in the master table for each part. Cons: the many-to-many table that holds the part, field id, and value would be large; each part entry could have up to 50 fields.

So my question is, which idea is the best? Maybe something completely different? Keep in mind that this will house around 100,000 parts, so speed is very important. Any suggestions on how to structure this database so it's the most efficient for storage and, most importantly, the most efficient for searching across all parts in the database for matches?
Views: 100 | Posted on: Thursday 25th October 2012

Could not select database - MySQL server has gone away

Hi guys, firstly I appreciate anyone reading this and trying to help me, because it's driving me crazy! I have a script that imports a lot of CSV files; these run fine and then all of a sudden stop! It stops at the same place every time, but there is no code error, as the code is the same for every CSV file. It does stop on a large file (45MB), but this may be just a coincidence, because I have set all the PHP limits way above this. I have set up some security recently and wondered if that is what is blocking me out of the MySQL server after so many requests, etc. Any ideas what could be causing this? Many thanks
Views: 141 | Posted on: Thursday 25th October 2012

Easiest way to import excel file to database

Is there an easy way to import an Excel file into a database, making sure the cells in Excel go to the correct rows in the MySQL database?
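A common route is exporting the sheet as CSV ("Save As" in Excel) and then loading it with columns mapped by name, e.g. via LOAD DATA INFILE or phpMyAdmin's CSV import in MySQL. A sketch of the column-mapping idea using Python's csv module with SQLite standing in for MySQL (file contents, table, and column names are invented):

```python
import csv
import io
import sqlite3

# Stand-in for a CSV file exported from Excel via "Save As -> CSV".
csv_data = io.StringIO("title,price\nwidget,9.99\ngadget,4.50\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (title TEXT, price REAL)")

# DictReader keys each cell by its header name, so values land in the
# right database column regardless of the spreadsheet's column order.
reader = csv.DictReader(csv_data)
conn.executemany(
    "INSERT INTO products (title, price) VALUES (:title, :price)",
    reader)

n = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
print(n)
```

Mapping by header name rather than position is what guards against cells ending up in the wrong column when the spreadsheet layout changes.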
Views: 208 | Posted on: Thursday 25th October 2012

Fastest way to search a large database table

I have a table of about 10 million rows and need to search 4 of the columns. Right now I am using MySQL's built-in fulltext searching capabilities, but as far as speed goes, it isn't cutting it. I'm getting anywhere from 10-50 seconds for a single search, depending on the length of the query. Plus, since the results are paginated, the query needs to be run twice. I was just thinking there must be a method that is faster than this. Could anyone give me some input? Any help is appreciated. Here is an example of what I'm using now.

Code:
SELECT SQL_CALC_FOUND_ROWS links.*, zip_codes.latitude, zip_codes.longitude,
       MATCH ( name, address, description, url ) AGAINST ( 'deli' ) AS rel,
       ROUND( ( ( ACOS( SIN( 34.169953 * PI( ) /180 ) * SIN( zip_codes.latitude * PI( ) /180 )
         + COS( 34.169953 * PI( ) /180 ) * COS( zip_codes.latitude * PI( ) /180 )
         * COS( ( - 77.87113 - zip_codes.longitude ) * PI( ) /180 ) ) * 180 / PI( ) )
         * 60 * 1.1515 ), 1 ) AS dist
FROM links
INNER JOIN zip_codes ON links.zip = zip_codes.zip
WHERE links.zip IN ( "28409", "28409", "28412", "28403", "28480", "28405",
                     "28401", "284HH", "28411", "28428" )
  AND MATCH ( name, address, description, url ) AGAINST ( 'this is a test search' )
Views: 150 | Posted on: Thursday 25th October 2012