Club Ratings
founder
slip a dollar in her g-string for me
I would like to replace it with a rating that takes a lot of items into account, like: how many reviews, how many reviewers, the review scores, photos, discussions, etc.
Any math whizzes out there are welcome to chime in
Contact Juice - he's very educated - dude spent 7 years in high school
rating * (recency_weight * 1/days_since_rated) * (reviewer_experience_weight * num_reviews_by_reviewer) * (reviewer_discussion_weight * num_discussion_contributions) * (reviewer_count_weight * total_number_of_unique_reviewers)
Then you can just fiddle with the weights to decide how much emphasis to give the number of previous reviews vs. how much to discount older reviews, etc.
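A literal Python rendering of that formula, just to make the knobs explicit; the default weight values below are placeholders, not recommendations:

```python
def club_rating(rating, days_since_rated, num_reviews_by_reviewer,
                num_discussion_contributions, total_number_of_unique_reviewers,
                recency_weight=1.0, reviewer_experience_weight=0.1,
                reviewer_discussion_weight=0.05, reviewer_count_weight=0.02):
    """The formula above, implemented literally as a product of weighted factors."""
    recency = recency_weight * (1.0 / max(days_since_rated, 1))
    experience = reviewer_experience_weight * num_reviews_by_reviewer
    discussion = reviewer_discussion_weight * num_discussion_contributions
    breadth = reviewer_count_weight * total_number_of_unique_reviewers
    return rating * recency * experience * discussion * breadth
```

Because it's a straight product, any factor at zero (say, a reviewer with no discussion posts) zeroes out the whole rating, so in practice you'd probably add 1 to the counts or turn some of the factors into a weighted sum instead.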
That said, what would be even better is being able to sort reviews by shift. E.g., Follies (never been) seems like a totally different place night and day. So is CH3 in Vegas. I would like to be able to pull up a club, hit "afternoon," and see the reviews.
For example, if a reader values club dancer quality over club looks, he could adjust the formula to make dancer quality 1.5 to 2.0 times more influential in the total score.
Parking/Security
Club layout/comfort/drink quality
Dancer Quality
Lap Dance /VIP / Champagne Room Cost
Extra fun time potential
Add the numbers up and divide by the total number of categories (5 in above example) to get the final graded number. Thus, 1 is a shitty experience and 5 is a “you gonna get fucked in a good way” experience.
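For example, category scores of 4, 3, 5, 2, and 4 would work out to (4 + 3 + 5 + 2 + 4) / 5 = 3.6.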
Rate each item on a 10-point scale. Report the average and range for each subcategory (e.g., Parking/Security = 9.2, range 6 to 10; Mileage/extras = 2.4, range 0 to 4). Then calculate an overall score across all subcategories for the overall club rating on a ten-point scale.
Do not count ratings that are over two (or three??) years old towards the average (clubs change over time).
Change the category labels slightly: The first three are OK but change the last two:
Lap Dance/VIP/Champagne room cost and quality of facilities;
Mileage and extras rating.
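A rough Python sketch of that per-subcategory summary, assuming each rating arrives as an (age in years, score) pair; the two-year cutoff and the data layout are just placeholders:

```python
def subcategory_summary(ratings, max_age_years=2):
    """Mean and range for one subcategory, ignoring ratings older than the cutoff.

    `ratings` is a list of (age_in_years, score) tuples; scores are on a 10-point scale.
    """
    recent = [score for age, score in ratings if age <= max_age_years]
    if not recent:
        return None  # nothing fresh enough to report
    return {"mean": sum(recent) / len(recent),
            "range": (min(recent), max(recent))}

# e.g. subcategory_summary([(0.5, 9), (1.2, 10), (3.0, 2)])
# -> {'mean': 9.5, 'range': (9, 10)}  (the 3-year-old rating is dropped)
```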
By way of example, here's how it would work using just 4 ratings to keep the math easy. Say the ratings were 6, 4, 9, and 3 for the last four ratings, from newest to oldest. The linearly weighted average calculation would then be as follows: For the numerator: (6*4)+(4*3)+(9*2)+(3*1)=57. For the denominator, you add up the weights: 4+3+2+1=10. The resulting linearly weighted average for that club would be 57/10=5.7.

By just taking a certain number of recent ratings (it could be 10, 20, 30, whatever) and weighting the most recent ratings more heavily, that prevents the club from being penalized for crappy experiences in the past, or being unfairly rewarded for past performance that isn't reflective of current conditions.
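Here's that linearly weighted average as a small Python function, just to make the bookkeeping concrete (the window size is whatever number of recent ratings you decide to keep):

```python
def linearly_weighted_average(ratings_newest_first, window=10):
    """Weight the newest rating highest, the oldest rating in the window lowest."""
    recent = ratings_newest_first[:window]
    n = len(recent)
    weights = range(n, 0, -1)                 # newest gets n, oldest gets 1
    numerator = sum(r * w for r, w in zip(recent, weights))
    denominator = sum(weights)                # n + (n-1) + ... + 1
    return numerator / denominator

# The example above: ratings 6, 4, 9, 3 (newest first)
# -> (6*4 + 4*3 + 9*2 + 3*1) / (4+3+2+1) = 57 / 10 = 5.7
print(linearly_weighted_average([6, 4, 9, 3]))  # 5.7
```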
As for how to score, I actually liked how each scoring component was broken out regarding club rating, value, girls, etc. That gave me a rough idea on what to expect, though it wasn't clear to me how the ratings were calculated, i.e., was it a total average of all scores from the beginning of time? Or something else?
Obviously, this could be fiddled with a lot depending on what you (or the consensus) believes is more important.
Caveat: This post was made with humorous intent only, but this is TUSCL.
It is hard to do better than the mean of ratings. However, especially trusted and respected reviewers could be given a higher weight. One solution would be Nidan's: when reviews are approved, the adjudicator would click a box to nominate a reviewer for trusted status. Everybody starts with a base weight of one. If a reviewer gets some number of trusts (perhaps 5), their weight would initially increase to a value of 2.
This leads to an obvious problem: some reviewers would build up an overwhelming weight. This can be corrected by shrinking each successive increment: every time a reviewer gets another 5 trusts (or whatever cutoff is chosen), their weight increases by 1/(K*times increased), where K is some constant, perhaps 2. Obviously, you would substitute 1 for the first increment.
So imagine that a reviewer has 15 trusts and K=2. Then their weight would be:
Score = 1 (the base weight) + 1 (first trust increment) + 0.5 (they’ve been increased once, so 1/(K*1) = 0.5) + 0.25 (now they’ve been increased twice, so 1/(K*2) = 0.25)
Thus, the overall reviewer weight is 2.75. I leave it to founder and the group to decide on the best value of K. Determining it depends on how often review adjudicators nominate reviewers for "good reviewer" status. This has the desirable feature of allowing adjudicators to approve a review without nominating the reviewer.
Obviously, adjudicators would have only one nomination per review. This has two desirable features. First, it prevents any adjudicator from excessively upweighting any reviewers. Second, it will allow active reviewers to achieve higher weights, but only if they impress the adjudicators.
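Just to make sure the arithmetic above reads the way it's intended, a minimal Python sketch; K and the trusts-per-increment cutoff are the knobs founder and the group would pick:

```python
def reviewer_weight(trusts, trusts_per_increment=5, K=2):
    """Base weight of 1, a full +1 for the first block of trusts, then shrinking increments."""
    increments = trusts // trusts_per_increment
    weight = 1.0                          # everybody starts at 1
    if increments >= 1:
        weight += 1.0                     # first increment is a full point
    for i in range(1, increments):        # later increments shrink as 1/(K*i)
        weight += 1.0 / (K * i)
    return weight

print(reviewer_weight(15))  # 1 + 1 + 0.5 + 0.25 = 2.75, matching the example
print(reviewer_weight(4))   # 1.0 -- not enough trusts for any increment yet
```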
One could, of course, allow adjudicators to approve but register a review as “worth approving, but not very good”. This could then be used to calculate a multiplier with a value <1.
Final score = Score * M
The question, of course, is how to get the multiplier. This lion suggests:
M = 1 / 2^( #downvotes * D)
Where D is some constant, say 0.05. This would mean that one downvote yields
M = 1 / 2^(1*0.05) = 0.966
And 5 downvotes yields:
M = 1 / 2^(5*0.05) = 0.841
You might want to give reviewers a floor of some number such that further downvoting would not further diminish the reviewer’s weight. But this may not be necessary. After all, 100 downvotes would yield:
M = 1 / 2^(100*0.05) = 0.03125
This would be the weight of a reviewer with 100 downvotes and no upvotes, and perhaps it should be. Obviously, the values of the constants may need to be tweaked here, and a threshold for the downvotes could be included, i.e.,
<5 downvotes, keep M = 1
>= 5, calculate M, but use #downvotes - 4
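Putting the multiplier and the threshold rule together in a short Python sketch (D and the threshold are the placeholder values used above):

```python
def downvote_multiplier(downvotes, D=0.05, threshold=5):
    """M = 1 / 2^(effective_downvotes * D), with the first few downvotes forgiven."""
    if downvotes < threshold:
        return 1.0
    effective = downvotes - (threshold - 1)   # the ">= 5, use #downvotes - 4" rule
    return 1.0 / 2 ** (effective * D)

print(downvote_multiplier(4))     # 1.0 -- below the threshold, no penalty
print(downvote_multiplier(5))     # 1 / 2^(1*0.05)   ≈ 0.966
print(downvote_multiplier(104))   # 1 / 2^(100*0.05) ≈ 0.031
```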
Note that any reliability metric, no matter how complex, can be gamed. This is something TUSCL will have to live with.
Note also that I considered logarithmic functions, but they performed poorly in simulations.
Now this lion will contemplate the notion of pseudocounts. Bayesian approaches, like pseudocounts, have both desirable and undesirable features. Like Fermat writing in a book’s margin, I will leave this for later. I just hope this doesn’t take this lion away from his groundbreaking work on M theory for too long. ROAR!!!
I travel a lot (hence my handle), so I review clubs against clubs from all over. I have been in clubs from San Diego to Boston, from North Dakota to San Antonio, Seattle to Miami & Vegas (not much value there). I do try to reflect local conditions in my reviews and always review the physical layout of the club for safety, cleanliness, costs (I usually drink bottled water, so booze pricing may not be included), menu availability, etc. When I first found TUSCL I used the rating, but now I read reviews and determine if any clubs in the area are worth the $$. For example: I tried a Long Island, NY club with a good rating; not mentioned in the reviews was valet parking ($25 + tip), on top of the $30 cover. This club supposedly had beauties but was a no-touch club. I didn't go in; more than $50 just to get in the door is a waste of $$ to me.
How do you include this in the ratings because the locals are used to the expense? Flight Club in Inkster has mandatory Valet but the club has value.
I'm quite sure my negative rating of this club had almost no effect on the overall ratings being one of many. I like the ratings and use them as a start for my research into where to go in any given area but they can't replace reading actual reviews.
The old system doesn’t need to be replaced, just improved. If you pick a club solely based on a rating number you’re a fucking moron. The ratings were good to narrow down a city’s clubs when traveling to a new place. That’s it. Same with the clubs on a map to pick a hotel location.
1. Keep the old categories - girls, value, niceness of club.
2. Decide on a ranking for those values and weight them.
3. Add something for total number of reviews and weight it.
4. Make sure the review selection menus equate to the categories. In other words, get rid of ‘vampiness’ and just leave it as ‘hotness of girls’. Again, KISS.
5. Add some time value in the formula to weight recent reviews. Maybe cut it off after a review is a year old? I don’t care as long as more recent reviews get some priority.
6. Provide guidance to the reviewer as to whether they should rate against local clubs or nationally. I don't care which one as long as it's consistent, so that the reader knows how to compare the values of Camelot in DC vs. Follies.
7. Fuck super reviewer weighting.
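One way items 2, 3, and 5 on that list could fit together, as a rough Python sketch; the category weights, the one-year cutoff, and the review-count bonus are placeholder choices, not a proposal:

```python
def club_score(reviews, weights=None, max_age_days=365,
               review_count_bonus=0.01, max_bonus=0.5):
    """Weighted category average over recent reviews, plus a small volume bonus.

    `reviews` is a list of dicts like
    {"age_days": 40, "girls": 8, "value": 6, "niceness": 7}.
    """
    if weights is None:
        weights = {"girls": 0.5, "value": 0.3, "niceness": 0.2}  # placeholder ranking
    recent = [r for r in reviews if r["age_days"] <= max_age_days]
    if not recent:
        return None
    per_review = [sum(r[cat] * w for cat, w in weights.items()) for r in recent]
    base = sum(per_review) / len(per_review)
    bonus = min(review_count_bonus * len(recent), max_bonus)   # item 3: more reviews, small boost
    return base + bonus

# e.g. club_score([{"age_days": 30, "girls": 8, "value": 6, "niceness": 7},
#                  {"age_days": 500, "girls": 9, "value": 9, "niceness": 9}])
# -> only the 30-day-old review counts: 8*0.5 + 6*0.3 + 7*0.2 = 7.2, plus 0.01 = 7.21
```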
I don't think giving more weight to certain reviewers is a good idea bc everyone has their biases and different POVs of what a "good" club is.
I also don't agree with local vs non-local distinctions - local clubs are what they are and they should be rated against other local clubs bc that is what's available in that area - as has been mentioned the ratings should be an approximation/guide not the end-all-be-all nor sole determination in a PL picking a club.
+ physical appearance of the club (dive, midtier, upscale)
+ dancers' looks (low, med, high)
+ mileage (low, medium, high)
+ costs (cheap, decent, high)
+ hustle-level (laidback, normal, high)
Not saying it has to be exactly these parameters just a ballpark reco.
And the parameters should be the same when submitting the review as when reading the results - should not be called one thing when submitting a review and appear as something-else/a-different-name when reading the reviews.
And as has been mentioned, old reviews/ratings (over a year) should maybe not count toward the rating, unless the club doesn't have a minimum # of reviews in the past year (say 5), to cover clubs that aren't reviewed too often?
Now, if one wishes for a simple system, why not simply ask for 1-5 scores and report the distribution, just like Amazon? Seems to work for them.
But to return to reviewer weights idea, I feel that the apes are merely upset because they cannot understand the quality of my lion-y maths. It is well known that cats are much better than apes at maths. We just don’t give enough of a fuck to use them to do things like build rockets and planes. You apes should ask yourself if planes and rockets have actually done you any good? Just embrace pure mathematics, like us cats!
With that said, I will delve into the use of exponential decay to capture anti-trusts. When a review gets a “publish this, but it still kind of sucks” rating from an adjudicator, you call it an anti-trust. Simply calculate a multiplier:
M = exp(-K * #anti-trusts)
This would mean the final reviewer score is simply the product of M and the equation above. Imagine a reviewer with one review and one anti-trust, and use K=0.05. This gives M=0.95. If a reviewer accumulates 5 anti-trusts we find M=0.779. Take it to the extreme of 100 anti-trusts and you have M=0.007. Such an individual is likely to be a fucking zebra.
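The same decay in a couple of lines of Python, assuming K=0.05 as in the examples:

```python
import math

def anti_trust_multiplier(anti_trusts, K=0.05):
    """Exponential decay: each 'publish it, but it kind of sucks' flag shrinks the weight."""
    return math.exp(-K * anti_trusts)

print(anti_trust_multiplier(1))     # ≈ 0.951
print(anti_trust_multiplier(5))     # ≈ 0.779
print(anti_trust_multiplier(100))   # ≈ 0.007 -- probably a fucking zebra
```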
I understand you apes want simple. But simple don’t build rocket ships! Wait...I just claimed that rockets kind of suck and we should all be more cat-like.
You damn dirty apes shouldn’t ask for formulae (as an intellectual cat I refuse to use “formulas”)
ROAR!!!
You had as good a system as you were going to get prior to January and I don't think that it was as meaningless as you think. Indeed, outside of a few heavily shilled clubs and markets, I found the ratings to be quite representative. This includes giving greater weighting to reviews posted by people who have covered more clubs on tuscl, which inherently makes sense as they have a much broader basis for comparison. If you wanted to tinker with the weightings a bit to strike some balance then so be it, but I think that adding a bunch of easily manipulated metrics into the mix will just make things even worse, not better.
When going to a new city, I pick the ~3-6 clubs with the highest scores, and then take a more focused look to choose my initial visits. I wouldn't go strictly by the scores; I may find a club with a 7.67 score more to my liking than one with a 7.87 score. But a club that is 5ish or less probably isn't worth skipping a 7ish club for.
Only a damn fool would choose a singular club score to plan a road trip around. Picking from a group of high-scoring clubs in a given area makes sense to me.
I'll give you my answer at the Follies TUSCL meet
Whatever scoring system is chosen don't rely exclusively on it. As one wine critic advised, scores are only a crude discriminator; it's more important to read the review itself. A high-scoring wine may not appeal to YOUR TASTES.
There's just no way to avoid reading the recent reviews w/o increasing the chances of a bad experience FOR YOU.
IDEA ONE regarding weighting reviews by the member's number of reviews:
Here's a simple weighting scheme, which could be tweaked a bit.
For each member, assign a weight from 0 to 1, which equals the number of reviews published (to a maximum of 100), divided by 100. A member with 100 or more reviews has a weight of 1.00. A member with 50 reviews has a weight of 0.50.
Compute the weighted average. R denotes the final weighted average of reviews; r(i) is the review score (like 1 to 10) submitted by member i; w(i) is the weight of that reviewer; Σ is the summation sign. Your formula is
R = [Σ r(i)*w(i)] / Σ w(i)
Some examples: member with weight 1.00 gives a club a 4; another member with weight 0.5 also gives that club a 4. Weighted average works out to 4.00
Another example: a member with weight 1.00 gives a club a 4; another member with weight 0.5 gives the club an 8. The weighted average works out to 5.33, closer to the first member's ranking, but not ignoring the other member's ranking.
This could be put into a formula that takes more things into account.
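A minimal Python sketch of IDEA ONE, assuming each review arrives as a (score, reviewer's published review count) pair; the 100-review cap is the only knob:

```python
def member_weight(num_reviews, cap=100):
    """Weight from 0 to 1, proportional to published reviews, maxing out at the cap."""
    return min(num_reviews, cap) / cap

def weighted_club_rating(reviews):
    """`reviews` is a list of (score, reviewer_review_count) pairs."""
    weights = [member_weight(count) for _, count in reviews]
    total_weight = sum(weights)
    if total_weight == 0:
        return None
    return sum(score * w for (score, _), w in zip(reviews, weights)) / total_weight

# Second example above: a 100-review member gives a 4, a 50-review member gives an 8.
print(weighted_club_rating([(4, 100), (8, 50)]))   # (4*1.0 + 8*0.5) / 1.5 ≈ 5.33
```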
IDEA TWO
Number of reviews would be valuable, but should be scaled somehow. Compare Follies in Atlanta to Diamond Dolls in Pompano Beach. The number of reviews compared to other options makes both of these clubs a winner, but is Follies 4x better? That's what the review count says.
So let's combine two concepts: number of reviews relative to the highest number any club has. Call it F after my guess as to which club that is. Every club is ranked relative to F. But let's also indicate that the difference between 10 reviews and 110 is much more important than the difference between 1000 and 1100. That's as easy as falling off a log. (Old math joke.) So denote n as the number of reviews a club has, and F the highest number of reviews of any club, and compute the club's value as:
V = ( ln(n/F)-ln(1/F) ) / ln(F).
This value could be based on all-time number of reviews, or number within last year, or other time period.
I don't see how to add a chart showing the weighting, but here's what to do to see one: in Excel, put numbers in a column from 1 to a big number, like 2278 (which is my guess as to the highest number of reviews). Enter my formula, replacing "n" with a reference to the column of numbers (like A1, A2, etc.) and F with the reference to the highest number, like cell A2278. Now select the cells you've calculated and tell Excel to insert a chart.
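Since a chart can't be attached here, the same calculation in Python (the 2278 figure is just the guess from above):

```python
import math

def review_volume_value(n, F):
    """V = (ln(n/F) - ln(1/F)) / ln(F), which simplifies to ln(n) / ln(F)."""
    return (math.log(n / F) - math.log(1 / F)) / math.log(F)

F = 2278  # guess at the highest review count of any club
for n in (1, 10, 110, 1000, 1100, 2278):
    print(n, round(review_volume_value(n, F), 3))
# 1 -> 0.0, 10 -> 0.298, 110 -> 0.608, 1000 -> 0.893, 1100 -> 0.906, 2278 -> 1.0
# Note the jump from 10 to 110 reviews moves V far more than 1000 to 1100 does.
```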
OK, now I'm going back to my job.
I'd also prefer a simpler review metric, just a simple yes/no. That kind of information is more valuable to a person who Googles something that brings them to this site. They don't care if some club offers a 7 out of 10 parking experience compared to a 5 out of 10 in another club; they just want to know if the majority of people who went to that club recently had a good time.
My idea:
A club's rating would be a ratio of 2 numbers. A yes review would add 1 to the numerator and a no review would add 0. For every review, 0.25 would be added to the denominator. The score for each club would then be whatever number you get from dividing the numerator by the denominator. This might not result in pretty numbers, but it will give you a comparison of all the clubs in a given area so that they can be sorted by a score. You can make the scores pretty by curving them if you want sort of like a school test where no one scores higher than 90%, so the guy who got 90% ends up having his grade curved up to 100%.
Degradation:
Reviews for each club could be sorted into 12-month groupings. As mentioned earlier, reviews in the 1-year-old grouping would have their scores degraded once, and reviews in the 2-year-old grouping would be degraded twice. The degradation amount would be based on how much weight you want older reviews to have. For instance, if you wanted reviews from 5 years ago to still carry 20% of their weight, you'd degrade all scores by 28% (multiplying by 72%). If you want scores from 5 years ago to be practically useless, you'd multiply by 55%.
One other thing you'll need to do in order to keep clubs with very few reviews from topping your score list is to artificially increase the denominator of every club up to a certain amount until they have enough reviews to exceed it. Otherwise a club with 1 positive rating is going to find itself at the top of your score list. So with my examples, you could set the artificial number to 2, so that every club starts off with the equivalent of 8 negative reviews, and their score won't be very accurate until they get at least 8 reviews. This will also force clubs with very few reviews down to the bottom of the lists, which is probably where they should be.
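A sketch of the ratio idea in Python, under one reading of the degradation step (an old review's whole contribution, numerator and denominator alike, gets decayed); the 0.72 multiplier and the denominator floor of 2 come from the examples above:

```python
def yes_no_score(reviews, decay_per_year=0.72, prior_denominator=2.0):
    """`reviews` is a list of (is_yes, age_in_years) pairs.

    Each review adds 1 to the numerator if it's a yes and 0.25 to the denominator,
    both scaled down by age. The built-in denominator acts like a batch of
    starting negative reviews so one lucky club can't top the list.
    """
    numerator = 0.0
    denominator = prior_denominator
    for is_yes, age_years in reviews:
        decay = decay_per_year ** int(age_years)   # degraded once per full year of age
        numerator += (1.0 if is_yes else 0.0) * decay
        denominator += 0.25 * decay
    return numerator / denominator

# A club with 12 fresh "yes" reviews and 2 fresh "no" reviews:
# 12 / (2 + 14*0.25) = 12 / 5.5 ≈ 2.18 on a 0-to-4 scale.
print(yes_no_score([(True, 0)] * 12 + [(False, 0)] * 2))
```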
---
I derived this from my experience utilizing the EP/GP systems in MMORPGs. It created a tiered list based on the ratio of time a player put in (effort points) versus the amount of loot they took out of the system (gear points). Gear was given to players who had the highest ratio of EP to GP. In the case of strip clubs, clubs with a lot of positive reviews (EP) will have a higher ratio score compared to the clubs that get a lot of negative reviews (same increase to GP but very little increase of EP, lowering their ratio and dropping them in the list). Strip club customers are then more likely to go to the clubs with the higher ratio scores and if they have a good time will submit positive reviews of that club to keep them on top. Clubs that get bad reviews will fall in the rankings and might get less customers because of it.
I think there's a couple of challenges with respect to a single score. Some patrons would prefer a dive club with super high mileage. They may not have the best looking girls, nor best looking club. However for some, the mileage is a higher weight in a score. For others, the poshness of the club and model looking girls is more important.
Another challenge with number of reviews is, some places may have single digits, while others may have hundreds. Ascribing a weight to such a wide range -- hmm.. head scratcher.
I would also like to see only reviews within the last X months count toward score. Don't need to delete the review, they just don't count toward score. For example take HiLiter in PHX. Used to be great, now it sucks. Still has a high score due to prior reviews, which no longer accurately describe the experience.
Here's a thought. Add another couple of sorting parms, like mileage, and something else. You should be able to track how viewers sort the list of clubs, and use the popularity of each sort parameter as a guide for weighting the score.
Key points should be did you enjoy your visit
Were you treated well
Did the joint live up to your expectations
That should be sufficient to inform others which is the ultimate goal of the ratings
The only change I'd suggest is having the ratings computed from reviews that are within the last 12-24 months. Places change. But Founder, that quick sizing-up of places, along with a local map, made for efficient and useful decisions. Please bring 'em back!
# of reviewers in past 6 months
Dancer quality in past 6 months
club quality in past 6 months
value in past 6 months
FWIW, in the past, I found # of reviews to be the strongest correlation, but it needs to be only over the last 6 months to reflect that things change over time