Club Ratings

founder
I took club cumulative ratings off because they didn't mean much.

I would like to replace it with a rating that takes a lot of items into account, like: how many reviews, how many reviewers, the review scores, photos, discussions, etc.

Any math whizzes out there are welcome to chime in

45 comments

twentyfive
5 years ago
Well, without allowing us to see how many reviews were posted by a particular member or how many trusts a person has, it's really hard to gauge the quality of the posted reviews. I'd give some props to guys that have a bunch of approved reviews, but without those things being visible how do you expect us to formulate a response to your question?
founder
5 years ago
Ffs 25, it's math. That's exactly what I am asking. Give me a formula taking into account all of those things.

twentyfive
5 years ago
That wouldn't be my strong point, sorry, but there are better qualified folks right here who could probably help.
Papi_Chulo
5 years ago
"... Any math whizzes out there are welcome to chime in ..."


Contact Juice - he's very educated - dude spent 7 years in high school
schmoe31415
5 years ago
You could just string each of the parameters together to get a weighted rating and then take the median of those. So for each review, compute something like

rating * (recency_weight * 1/days_since_rated) * (reviewer_count_weight * num_reviews_by_reviewer) * (reviewer_discussion_weight * num_discussion_contributions) * (reviewer_count_weight * total_number_of_unique_reviewers)

Then you can just fiddle with the weights to decide how much emphasis to give the number of previous reviews vs how much to discount older reviews, etc..
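A rough Python sketch of that idea (the field names are made up for illustration, and the weights are just knobs to fiddle with):

from statistics import median

def review_score(r, recency_w=1.0, reviewer_w=0.1, discussion_w=0.05, unique_w=0.01):
    # Weight one review by how fresh it is and how active the reviewer is.
    return (r["rating"]
            * (recency_w * 1.0 / max(r["days_since_rated"], 1))
            * (reviewer_w * r["num_reviews_by_reviewer"])
            * (discussion_w * r["num_discussion_contributions"])
            * (unique_w * r["total_unique_reviewers"]))

def club_score(reviews):
    # Median of the per-review weighted scores, as suggested above.
    return median(review_score(r) for r in reviews)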
pistola
5 years ago
I'd do 2 scores: an overall historical average and a rolling 6-month average, the latter of which is probably more important.

That said, what would be even better is being able to sort reviews by shift. E.g. Follies (never been) seems like a totally different place night vs. day. So is CH3 in Vegas. Would like to be able to pull up a club, hit afternoon and see the reviews.
bullzeye
5 years ago
^like pistola’s suggestion for splitting the rating for the day and night shifts
rickdugan
5 years ago
This feels like a troll thread to me.
herbtcat
5 years ago
The idea of a weighted average appeals to me, but I'd suggest providing the reader with a tool to adjust the weighting of each component.

For example, if a reader values club dancer quality over club looks, he could adjust the formula to make dancer quality 1.5 to 2.0 times more influential in the total score.
Nidan111
5 years ago
Don’t have to make it too hard. Just have each reviewer rate the following from 1 to 5 (1 being shitty, 5 being Fuck Yeah!)

Parking/Security
Club layout/comfort/drink quality
Dancer Quality
Lap Dance /VIP / Champagne Room Cost
Extra fun time potential

Add the numbers up and divide by the total number of categories (5 in above example) to get the final graded number. Thus, 1 is a shitty experience and 5 is a “you gonna get fucked in a good way” experience.
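As a quick sketch (the category keys are placeholders, not a proposed form layout):

CATEGORIES = ["parking_security", "layout_comfort_drinks", "dancer_quality",
              "dance_vip_cost", "extra_fun_potential"]

def nidan_score(ratings):
    # ratings: dict mapping each category to a 1-5 score.
    return sum(ratings[c] for c in CATEGORIES) / len(CATEGORIES)

# Example: scores of 4, 3, 5, 2 and 5 average out to 3.8.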
Electronman
5 years ago
I like Nidan's approach but with a few minor tweaks:

Rate each item on a 10-point scale. Report the average and range for each subcategory (e.g., Parking/Security = 9.2, range 6 to 10; Mileage/extras = 2.4, range 0 to 4). Then calculate an overall score across all subcategories for the overall club rating on a ten-point scale.

Do not count ratings that are over two (or three??) years old towards the average (clubs change over time).
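A minimal sketch of that per-subcategory average and range, assuming a two-year cutoff and timestamped scores:

from datetime import datetime, timedelta

def subcategory_stats(ratings, max_age_years=2):
    # ratings: list of (timestamp, score) pairs for one subcategory, 10-point scale.
    cutoff = datetime.now() - timedelta(days=365 * max_age_years)
    recent = [score for ts, score in ratings if ts >= cutoff]
    if not recent:
        return None
    return {"average": sum(recent) / len(recent),
            "range": (min(recent), max(recent))}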

Change the category labels slightly: The first three are OK but change the last two:
Lap Dance/VIP/Champagne room cost and quality of facilities;
Mileage and extras rating.
Nidan111
5 years ago
You could also make it so that only the VIP Members who are approving the review be the ones who rate the categories based on the information provided in the review. This would actually make those who approve the review take some time to analyze the club based on the information provided.
jacej
5 years ago
This is similar to schmoe's suggestion. How about a linearly weighted average calculation that weighs recent scores more heavily, and gives less weight to older scores, as the more recent scores would be more reflective of the current state of the club. There are some articles on the Internet on how to do this, but the basic idea is that you would take, say, the most recent 10 ratings. The most recent one would be given a weight of 10, the second most recent would be given a weight of 9, etc., and the last rating would be given a weight of 1. You add all those weighted ratings up, and divide by the sum of the weights.

By way of example, here's how it would work using just 4 ratings to keep the math easy. Say the ratings were 6, 4, 9, and 3 for the last four ratings, from newest to oldest. The linearly weighted average calculation would then be as follows: For the numerator: (6*4)+(4*3)+(9*2)+(3*1)=57. For the denominator, you add up the weights: 4+3+2+1=10. The resulting linearly weighted average for that club would be 57/10=5.7. By just taking a certain number of recent ratings (it could be 10, 20, 30, whatever) and weighing the most recent ratings more heavily, that prevents the club from being penalized for crappy experiences in the past, or being unfairly rewarded for past performance that isn't reflective of current conditions.
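The same calculation as a small Python sketch:

def linear_weighted_average(ratings, n=10):
    # ratings ordered newest to oldest; weights run N, N-1, ..., 1.
    recent = ratings[:n]
    weights = list(range(len(recent), 0, -1))
    return sum(r * w for r, w in zip(recent, weights)) / sum(weights)

# linear_weighted_average([6, 4, 9, 3]) gives 57 / 10 = 5.7, matching the example above.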

As for how to score, I actually liked how each scoring component was broken out regarding club rating, value, girls, etc. That gave me a rough idea on what to expect, though it wasn't clear to me how the ratings were calculated, i.e., was it a total average of all scores from the beginning of time? Or something else?
jacej
5 years ago
Also, since the number of ratings used would remain constant, e.g., 30, and wouldn't increase as more ratings are submitted (so the 31st and older ratings would be ignored, regardless of whether there are 40, 80, 100 or more reviews), I believe that the methodology I've described above is technically a linearly weighted moving average, as the weighted average "moves" along with the data as it comes in.
jacej
5 years ago
Right now, I'm seeing three different rating values being displayed for clubs: Club, Dancers, and Dollar Value. Maybe some sort of hybrid rating system could be used, where the overall individual ratings for each component are calculated using a linearly weighted moving average methodology, and then a single overall score could be calculated using all those components, but weighing the components in some fashion, which would be completely arbitrary and a judgment call on what you feel is most important. Again, this is similar to schmoe's suggestion. So to come up with a composite score, I think that the Club and Dancer components are the most important, with dollar value being last as dollar value can be all over the place depending on who you are, what you're looking for, and how good your game is. You could give the Club and Dancer components weights of, say 3, and the dollar value a weight of 1. If the linearly weighted moving averages for the components are, for example, Club 8.8, Dancers 7.2, and Dollar Value 4.3, the composite score would be a weighted average calculated like this: [(8.8*3)+(7.2*3)+(4.3*1)]/(3+3+1) = 7.47.

Obviously, this could be fiddled with a lot depending on what you (or the consensus) believes is more important.
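For what it's worth, the composite step is tiny in code (component names and weights here are just the example values above):

def composite_score(components, weights):
    # components and weights: dicts keyed by component name.
    total = sum(components[k] * w for k, w in weights.items())
    return total / sum(weights.values())

# composite_score({"club": 8.8, "dancers": 7.2, "value": 4.3},
#                 {"club": 3, "dancers": 3, "value": 1}) gives about 7.47.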
rockie
5 years ago
Come on Founder, Rick Dugan's had the equation for a decade! Just ask him!

Caveat: This post was made with humorous intent only, but this is TUSCL.
jacej
5 years ago
Geez...I keep adding on. Also, taking into account the number of reviews, and unique reviewers, photos, VIP, etc. those numbers could also be added in to the composite score. However, they would have to be "normalized" to a 10 point scale (which is what is currently being used). This thread might be helpful: https://math.stackexchange.com/questions…
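One simple way to do that normalization, as an illustration only (scaling a count against the largest count seen across all clubs):

def normalize_to_ten(value, max_value):
    # Min-max style scaling of a count onto the 0-10 scale used for ratings.
    if max_value <= 0:
        return 0.0
    return 10.0 * min(value, max_value) / max_value

# e.g. normalize_to_ten(55, 220) gives 2.5.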
rickthelion
5 years ago
Well, since one of the rare intra-rick disputes has been rickdugan's criticism of yours truly's inability to solve quantum mechanics, I shall try to provide a simple and useful lion-y formula along with a detailed explanation.

It is hard to do better than the mean of ratings. However, especially trusted and respected reviewers could be given a higher weight. One solution would be Nidan's: when reviews are approved, the adjudicator would click a box to nominate a reviewer for trusted status. Everybody starts with a base weight of one. If a reviewer gets some number (perhaps 5) of trusts, their weight would initially increase to a value of 2.

This leads to an obvious problem: some reviewers would build up an overwhelming weight. This can be corrected by reducing the increment each time: every time a reviewer gets another 5 (or whatever cutoff is chosen) trusts, their weight increases by 1/(K*times increased), where K is some constant, perhaps 2. Obviously, you would have to substitute 1 for the first increment.

So imagine that a reviewer has 15 trusts and K=2. Then their weight would be:

Score = 1 (the base weight) + 1 (first trust increment) + 0.5 (they’ve been increased once, so 1/(K*1) = 0.5) + 0.25 (now they’ve been increased twice, so 1/(K*2) = 0.25)

Thus, the overall reviewer weight is 2.75. I leave it to founder and the group to decide on the best value of K. Determining it depends on how often review adjudicators nominate reviewers for "good reviewer" status. This has the desirable feature of allowing adjudicators to approve a review without nominating the reviewer.

Obviously, adjudicators would have only one nomination per review. This has two desirable features. First, it prevents any adjudicator from excessively upweighting any reviewers. Second, it will allow active reviewers to achieve higher weights, but only if they impress the adjudicators.

One could, of course, allow adjudicators to approve but register a review as “worth approving, but not very good”. This could then be used to calculate a multiplier with a value <1.

Final score = Score * M

The question, of course, is how to get the multiplier. This lion suggests:

M = 1 / 2^( #downvotes * D)

Where D is some constant, say 0.05. This would mean that one downvote yields

M = 1 / 2^(1*0.05) = 0.966

And 5 downvotes yields:

M = 1 / 2^(5*0.05) = 0.841

You might want to give reviewers a floor of some number such that further downvoting would not further diminish the reviewer’s weight. But this may not be necessary. After all, 100 downvotes would yield:

M = 1 / 2^(100*0.05) = 0.03125

This would be the weight of a reviewer with 100 downvotes and no upvotes, and perhaps it should be. Obviously, the values of the constants may need to be tweaked here, and a threshold for the downvotes could be included, i.e.,

<5 downvotes, keep M = 1
>= 5, calculate M, but use #downvotes - 4
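Putting the weight and the thresholded multiplier together as a sketch (the constants are just the example values above, not recommendations):

def trust_weight(trusts, per_block=5, k=2.0):
    # Base weight of 1; each block of 5 trusts adds a shrinking increment:
    # +1 for the first block, then 1/(k*1), 1/(k*2), ...
    weight = 1.0
    for i in range(trusts // per_block):
        weight += 1.0 if i == 0 else 1.0 / (k * i)
    return weight

def downvote_multiplier(downvotes, d=0.05, threshold=5):
    # Threshold variant: no penalty below 5 downvotes, then use #downvotes - 4.
    if downvotes < threshold:
        return 1.0
    return 1.0 / 2 ** ((downvotes - (threshold - 1)) * d)

# trust_weight(15) gives 1 + 1 + 0.5 + 0.25 = 2.75, as in the worked example.
# Final reviewer weight = trust_weight(trusts) * downvote_multiplier(downvotes).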

Note that any reliability metric, no matter how complex can be gamed. This is something TUSCL will have to live with.

Note also that I considered logarithmic functions, but they performed poorly in simulations.

Now this lion will contemplate the notion of pseudocounts. Bayesian approaches, like pseudocounts, have both desirable and undesirable features. Like Fermat writing in a book’s margin, I will leave this for later. I just hope this doesn’t take this lion away from his groundbreaking work on M theory for too long. ROAR!!!
Tiredtraveler
5 years ago
The problem with "Club Ratings" is that local monger reviewers can't help but skew the ratings because they are rating against other local clubs. On the flip side, they can be valuable because they will show up and down trends in local clubs, and problems, sooner than guys like me who are in and out of town randomly.
I travel a lot (hence my handle) so I review clubs against clubs from all over. I have been in clubs from San Diego to Boston, from North Dakota to San Antonio, Seattle to Miami & Vegas (not much value there). I do try to reflect local conditions in my reviews and always review the physical layout of the club for safety, cleanliness, costs (I usually drink bottled water so booze pricing may not be included), menu availability, etc. When I first found TUSCL I used the rating, but now I read reviews and determine if any clubs in the area are worth the $$. For example, I tried a Long Island NY club with a good rating; not mentioned in the reviews was valet parking ($25 + tip), not including the $30 cover. This club supposedly had beauties but was a no-touch club. I didn't go in; more than $50 just to get in the door is, to me, a waste of $$.
How do you include this in the ratings when the locals are used to the expense? Flight Club in Inkster has mandatory valet, but the club has value.
I'm quite sure my negative rating of this club had almost no effect on the overall ratings, being one of many. I like the ratings and use them as a start for my research into where to go in any given area, but they can't replace reading actual reviews.
twentyfive
5 years ago
^ I would think reviews by locals were more in line with the general rating of a club because we get to go to the local clubs regularly and are not subjected to the vagaries of a fluke bad visit, or a good visit that happens once and never again
Hank Moody
5 years ago
KISS

The old system doesn’t need to be replaced, just improved. If you pick a club solely based on a rating number you’re a fucking moron. The ratings were good to narrow down a city’s clubs when traveling to a new place. That’s it. Same with the clubs on a map to pick a hotel location.

1. Keep the old categories - girls, value, niceness of club.

2. Decide on a ranking for those values and weight them.

3. Add something for total number of reviews and weight it.

4. Make sure the review selection menus equate to the categories. In other words, get rid of ‘vampiness’ and just leave it as ‘hotness of girls’. Again, KISS.

5. Add some time value in the formula to weight recent reviews. Maybe cut it off after a review is a year old? I don’t care as long as more recent reviews get some priority.

6. Provide guidance to the reviewer as to whether they should rate against local clubs or nationally. I don't care which one as long as it's consistent so that the reader knows how to compare the values of Camelot in DC vs. Follies.

7. Fuck super reviewer weighting.
Papi_Chulo
5 years ago
Agree with the KISS principle - I think trying to make it too complex actually increases the chances of missing the mark by more.

I don't think giving more weight to certain reviewers is a good idea bc everyone has their biases and different POVs of what a "good" club is.

I also don't agree with local vs non-local distinctions - local clubs are what they are and they should be rated against other local clubs bc that is what's available in that area - as has been mentioned the ratings should be an approximation/guide not the end-all-be-all nor sole determination in a PL picking a club.
founder
5 years ago
I think papi just discovered why I took the cumulative ratings off.

twentyfive
5 years ago
^ How do you rate local against national if you're not a regular traveler? It's impossible to have a good feel for any club unless you've visited a few times; each visit can be very different.
founder
5 years ago
And 25 also agrees cumulative ratings are worthless
Papi_Chulo
5 years ago
I think a couple of basic parameters/categories should be considered for the ratings perhaps such as:

+ physical appearance of the club (dive, midtier, upscale)

+ dancers' looks (low, med, high)

+ mileage (low, medium, high)

+ costs (cheap, decent, high)

+ hustle-level (laidback, normal, high)

Not saying it has to be exactly these parameters just a ballpark reco.

And the parameters should be the same when submitting the review as when reading the results - should not be called one thing when submitting a review and appear as something-else/a-different-name when reading the reviews.
Papi_Chulo
5 years ago
Ratings won't be perfect but I think they help more than not having anything at all.

And as has been mentioned, perhaps old (over a year) reviews/ratings should not be considered in the ratings, unless the club doesn't have a minimum # of reviews in a year (maybe 5), to cover the cases of clubs not reviewed too often?
doctorevil
5 years ago
How about just adding a final 1 to 10 scale drop down to the review form asking for an overall assessment of the club and average them for say the last two years?
rickthelion
5 years ago
I just noticed my lion-y treatise was cut off, which is sad because it contained additional maths of such brilliance that few could even understand. Alas, I know how Fermat felt when he tried to fit his eponymous theorem in the margin of a text. Now we shall never know if Fermat actually had a proof so long before Wiles or if he was simply mistaken. Sad :(

Now, if one wishes for a simple system, why not simply ask for 1-5 scores and report the distribution, just like Amazon? Seems to work for them.

But to return to reviewer weights idea, I feel that the apes are merely upset because they cannot understand the quality of my lion-y maths. It is well known that cats are much better than apes at maths. We just don’t give enough of a fuck to use them to do things like build rockets and planes. You apes should ask yourself if planes and rockets have actually done you any good? Just embrace pure mathematics, like us cats!

With that said, I will delve into the use of exponential decay to capture anti-trusts. When a review gets a “publish this, but it still kind of sucks” rating from an adjudicator, you call it an anti-trust. Simply calculate a multiplier:

M = exp(-K * #anti-trusts)

This would mean the final reviewer score is simply the product of M and the equation above. Imagine a reviewer with one review and one anti-trust, and use K=0.05. This gives M=0.95. If a reviewer accumulates 5 anti-trusts we find M=0.779. Take it to the extreme of 100 anti-trusts and you have M=0.007. Such an individual is likely to be a fucking zebra.
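In code the decay is one line (K here is just the example value):

import math

def anti_trust_multiplier(anti_trusts, k=0.05):
    # M = exp(-K * number of anti-trusts)
    return math.exp(-k * anti_trusts)

# Gives about 0.951 for 1 anti-trust, 0.779 for 5, and 0.007 for 100.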

I understand you apes want simple. But simple don’t build rocket ships! Wait...I just claimed that rockets kind of suck and we should all be more cat-like.

You damn dirty apes shouldn’t ask for formulae (as an intellectual cat I refuse to use “formulas”)

ROAR!!!
rickdugan
5 years ago
OK, assuming this is not a troll post...

You had as good a system as you were going to get prior to January and I don't think that it was as meaningless as you think. Indeed, outside of a few heavily shilled clubs and markets, I found the ratings to be quite representative. This includes giving greater weighting to reviews posted by people who have covered more clubs on tuscl, which inherently makes sense as they have a much broader basis for comparison. If you wanted to tinker with the weightings a bit to strike some balance then so be it, but I think that adding a bunch of easily manipulated metrics into the mix will just make things even worse, not better.
rickdugan
5 years ago
@Papi, ok, let's assume that a particular clubbing area sucks compared to some other areas, so the ratings suffer. The relatively better clubs in that area will still have better ratings than the relatively poorer clubs, so there is still some basis for differentiation and comparison. We're just comparing 6s against 5s instead of 8's against 7s.
minnow
5 years ago
I thought the old scoring system was basically OK. I'd still like to see the number of reviews (maybe put in a discriminator like number of reviews in the last year - some clubs may have a shit load of reviews dating back 10 years or more).

When going to a new city, I pick the ~3-6 clubs with the highest scores, and then take a more focused look to choose my initial visits. I wouldn't go strictly by the scores; I may find a club with a 7.67 score more to my liking than one with a 7.87 score. But a club that is 5ish or less probably won't be worthwhile enough to forego a 7ish club visit.

Only a damn fool would choose a single club score to plan a road trip around. Picking from a group of high scoring clubs in a given area makes sense to me.
Papi_Chulo
5 years ago
"... @Papi, ok, let's assume that a particular clubbing area sucks compared to some other areas, so the ratings suffer ..."

I'll give you my answer at the Follies TUSCL meet
joker44
5 years ago
Follow-up on minnow

Whatever scoring system is chosen don't rely exclusively on it. As one wine critic advised, scores are only a crude discriminator; it's more important to read the review itself. A high-scoring wine may not appeal to YOUR TASTES.

There's just no way to avoid reading the recent reviews w/o increasing the chances of a bad experience FOR YOU.

ArtCollege
5 years ago
I like the idea of Amazon-type histogram of overall ratings.

IDEA ONE regarding weighting reviews by the member's number of reviews:
Here's a simple weighting scheme, which could be tweaked a bit.

For each member, assign a weight from 0 to 1, which equals the number of reviews published to a maximum of 100, divided by 100. A member with 100 or more reviews has a weight of 1.00. A member with 50 reviews has a weight of 0.50

Compute the weighted average. R denotes the final weighted average of reviews; r(i) is the rating (like 1 to 10) submitted by member i; w(i) is the weight of that reviewer; Σ is the summation operator. Your formula is
R = [Σ r(i)*w(i)] / Σ w(i)

Some examples: member with weight 1.00 gives a club a 4; another member with weight 0.5 also gives that club a 4. Weighted average works out to 4.00

Another example: member with weight 1.00 gives a club a 4; another member with weight 0.5 gives the club an 8. The weighted average works out to 5.33, closer to the first member's ranking, but not ignoring the other member's ranking.

This could be put into a formula that takes more things into account.
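As a Python sketch of IDEA ONE (names arbitrary):

def member_weight(num_reviews, cap=100):
    # Published review count capped at 100, scaled to a 0-1 weight.
    return min(num_reviews, cap) / cap

def weighted_rating(ratings):
    # ratings: list of (score, reviewer_review_count) pairs.
    pairs = [(r, member_weight(n)) for r, n in ratings]
    return sum(r * w for r, w in pairs) / sum(w for _, w in pairs)

# weighted_rating([(4, 100), (4, 50)]) gives 4.0;
# weighted_rating([(4, 100), (8, 50)]) gives 8 / 1.5, about 5.33.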

IDEA TWO
Number of reviews would be valuable, but should be scaled somehow. Compare Follies in Atlanta to Diamond Dolls in Pompano Beach. The number of reviews compared to other options makes both of these clubs a winner, but is Follies 4x better? That's what the review count says.

So let's combine two concepts: number of reviews relative to the highest number any club has. Call it F after my guess as to which club that is. Every club is ranked relative to F. But let's also indicate that the difference between 10 reviews and 110 is much more important than the difference between 1000 and 1100. That's as easy as falling off a log. (Old math joke). So denote n as the number of reviews a club has, and F the highest number of reviews of any club, and compute the club's value as:
V = ( ln(n/F)-ln(1/F) ) / ln(F).

This value could be based on all-time number of reviews, or number within last year, or other time period.

I don't see how to add a chart showing the weighting, but here's what to do to see one: in Excel, put in numbers in a column from 1 to a big number, like 2278 (which is my guess as to highest number of reviews). enter my formula, replacing "n" with a reference to the column of numbers (like A1, A2, etc.) Replace F with the reference to the highest number, like cell A2278. Now select the cells you've calculated and tell Excel to insert chart.
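For anyone without Excel handy, the same curve as a Python sketch (the 2278 is still just my guess):

import math

def review_count_value(n, f):
    # n: this club's review count; f: the highest review count of any club.
    # Algebraically this is ln(n) / ln(f): 1 review scores 0, the top club scores 1.
    if n < 1 or f <= 1:
        return 0.0
    return (math.log(n / f) - math.log(1 / f)) / math.log(f)

# With f = 2278: n = 10 gives about 0.30, n = 110 about 0.61, n = 1100 about 0.91,
# so the 10-vs-110 gap matters far more than the 1000-vs-1100 gap.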

OK, now I'm going back to my job.

founder
5 years ago
Artcollege, email me the spreadsheet

[email protected]

rickdugan
5 years ago
The problem with using the number of reviews of a club is twofold. First, it rewards clubs that heavily shill and encourages them to ramp up even more (as if clubs like Baby Dolls need any more encouragement). Second, it unfairly penalizes smaller market clubs, which tend to be reviewed less frequently.
Mate27
5 years ago
You idiots need to be thinking more like RicktheLion instead of Rick Dugan!
Liwet
5 years ago
I think you want your review system to weigh newer reviews more heavily than older reviews. I think the easiest way to do this is by degrading the score of older reviews before adding it to the cumulative score of the club, say by 20% for each year of age. This means that scores from a year ago would be worth only 80%, scores from 2 years ago would be worth 64%, and three years would be 51%, etc.

I'd also prefer a simpler review metric, just a simple yes/no. That kind of information is more valuable to a person who Googles something that brings them to this site. They don't care if some club offers a 7 out of 10 parking experience compared to a 5 out of 10 in another club; they just want to know if the majority of people who went to that club recently had a good time.

My idea:

A club's rating would be a ratio of 2 numbers. A yes review would add 1 to the numerator and a no review would add 0. For every review, 0.25 would be added to the denominator. The score for each club would then be whatever number you get from dividing the numerator by the denominator. This might not result in pretty numbers, but it will give you a comparison of all the clubs in a given area so that they can be sorted by a score. You can make the scores pretty by curving them if you want sort of like a school test where no one scores higher than 90%, so the guy who got 90% ends up having his grade curved up to 100%.

Degradation:

Reviews for each club could be sorted into 12-month groupings. As mentioned earlier, reviews in the 1-year-old grouping would have their scores degraded once, reviews in the 2-year-old grouping would be degraded twice. The degradation amount would be based on how much weight you want older reviews to have. For instance, if you wanted reviews from 5 years ago to still carry 20% of their weight, you'd degrade all scores by 28% (multiplying by 72%). If you want scores from 5 years ago to be practically useless, you'd multiply by 55%.

One other thing you'll need to do in order to combat clubs with very few reviews topping your score list is to artificially increase the denominator of every club up to a certain amount, until they have enough reviews to exceed this amount. Otherwise a club with 1 positive rating is going to find itself at the top of your score list. So with my examples, you could set the artificial number to 2, so that every club starts off with the equivalent of 8 negative reviews, and their score won't be as accurate until they get at least 8 reviews. This will also force clubs with very few reviews down to the bottom of the lists, which is probably where they should be.
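A sketch of one reading of this (whether the degradation should also apply to the denominator is left open above, so here it only degrades the positive votes):

def liwet_score(reviews, yearly_factor=0.8, floor=2.0):
    # reviews: list of (is_positive, age_in_years) pairs.
    # Each positive review adds a decayed 1 to the numerator; every review adds
    # 0.25 to the denominator, which never drops below the artificial floor of 2
    # (the equivalent of 8 reviews).
    numerator = sum((1.0 if positive else 0.0) * yearly_factor ** age
                    for positive, age in reviews)
    denominator = max(0.25 * len(reviews), floor)
    return numerator / denominator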

---

I derived this from my experience utilizing the EP/GP systems in MMORPGs. It created a tiered list based on the ratio of time a player put in (effort points) versus the amount of loot they took out of the system (gear points). Gear was given to players who had the highest ratio of EP to GP. In the case of strip clubs, clubs with a lot of positive reviews (EP) will have a higher ratio score compared to the clubs that get a lot of negative reviews (same increase to GP but very little increase of EP, lowering their ratio and dropping them in the list). Strip club customers are then more likely to go to the clubs with the higher ratio scores and if they have a good time will submit positive reviews of that club to keep them on top. Clubs that get bad reviews will fall in the rankings and might get less customers because of it.
goldmongerATL
5 years ago
If you add a rating category for VIP mileage, it would lessen the need to describe what a girl did to try and convey that info in the review. I think it needs to be a separate rating. For example, Mons Venus has high mileage dances, but no VIP or extras at all. So Dance Mileage and VIP Mileage.
vmaxhp
5 years ago
Agree with the "Mileage" parameter.
I think there's a couple of challenges with respect to a single score. Some patrons would prefer a dive club with super high mileage. They may not have the best looking girls, nor best looking club. However for some, the mileage is a higher weight in a score. For others, the poshness of the club and model looking girls is more important.
Another challenge with number of reviews is, some places may have single digits, while others may have hundreds. Ascribing a weight to such a wide range -- hmm.. head scratcher.
I would also like to see only reviews within the last X months count toward score. Don't need to delete the review, they just don't count toward score. For example take HiLiter in PHX. Used to be great, now it sucks. Still has a high score due to prior reviews, which no longer accurately describe the experience.

Here's a thought. Add another couple of sorting parms, like mileage, and something else. You should be able to track how viewers sort the list of clubs. Using the popularity of each sort parameter as a guide for weighting a score.
twentyfive
5 years ago
Ratings are very overdone in my opinion
Key points should be did you enjoy your visit
Were you treated well
Did the joint live up to your expectations
That should be sufficient to inform others which is the ultimate goal of the ratings
doctorevil
5 years ago
I agree with 25. That’s why I think an overall 1 to 10 score in each review averaged over a couple of years would best capture the value of a club.
tuscl12345
5 years ago
The overall ratings are key, even if they're just a starting point. Going to a town with 20 clubs, you know to read the reviews of five clubs and skip the ones where you'll be overcharged, underserved, or just generally left wishing you hadn't gone. Why leave them out, especially when there's enough data in TUSCL to make them (generally) useful? The overall ratings are an essential place to start.

The only change I'd suggest is having the ratings computed from reviews that are within the last 12-24 months. Places change. But Founder, that quick sizing-up of places, along with a local map, made for efficient and useful decisions. Please bring 'em back!
mjx01
5 years ago
# of reviews in past 6 months
# of reviewers in past 6 months
Dancer quality in past 6 months
club quality in past 6 months
value in past 6 months

FWIW, in the past I found # of reviews to have the strongest correlation, but it needs to cover only the last 6 months, to reflect that things change over time.