Massive influx of fake accounts


Recommended Posts

Ta****

There are two simple features the site admins could implement to reduce the SPAM and SCAM accounts:

- Add a captcha to the account creation form to reduce bot account creation (a rough sketch of a server-side check is at the end of this post)

- Ensure all new members complete the profile/picture verification process before they can post/comment or have any kind of interaction with other members. At present it relies on other members flagging suspect profiles to trigger this - it should be mandatory.

This site isn't unique in having a serious scam account problem, but I don't really see them taking steps to reduce it.
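For illustration only - I'm not claiming this is how this site's stack works - the server-side part of a captcha check is just a handful of lines. This sketch assumes a reCAPTCHA-style service (the endpoint and the g-recaptcha-response field below are Google reCAPTCHA's) and the signup handler itself is only a placeholder:

```python
# Rough sketch of verifying a captcha token on signup.
# Assumes Google reCAPTCHA and the `requests` package; the secret key and the
# signup flow itself are placeholders, not this site's real code.
import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder, kept server-side
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def handle_signup(form: dict) -> str:
    # The browser widget submits its token in the g-recaptcha-response field.
    token = form.get("g-recaptcha-response", "")
    result = requests.post(
        VERIFY_URL,
        data={"secret": RECAPTCHA_SECRET, "response": token},
        timeout=5,
    ).json()

    if not result.get("success"):
        return "captcha failed - account not created"

    # ...normal account creation would happen here...
    return "account created"
```

Even something this basic forces whoever is running the bots to spend time or money on captcha-solving before they can register.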

ey****
41 minutes ago, Taskmaster55 said:

- Add a captcha to the account creation form to reduce bot account creation

The simpler captchas, automation can crack in seconds, so they're not a deterrent. The more advanced the captcha gets, the less user-friendly it gets, which deters 'genuine' people signing up.

45 minutes ago, Taskmaster55 said:

Ensure all new members complete the profile/picture verification process before they can post/comment or have any kind of interaction with other members.

Mandatory verification is again something a lot of folk find off-putting; it's why sites try to incentivise people to verify without penalising too heavily those who don't yet wish to.

However - users can already set their mailbox filters to only receive messages from verified accounts, and set search filters to only include verified accounts. Anyone who does this has no reason to be messaged by unverified profiles, nor would they see them in their searches. The tools are already there, but guys choose not to use them.

50 minutes ago, eyemblacksheep said:

Mandatory verification is again something a lot of folk find off-putting; it's why sites try to incentivise people to verify without penalising too heavily those who don't yet wish to.

However - users can already set their mailbox filters to only receive messages from verified accounts, and set search filters to only include verified accounts. Anyone who does this has no reason to be messaged by unverified profiles, nor would they see them in their searches. The tools are already there, but guys choose not to use them.

Are you staff?

ey****
7 minutes ago, BuzzLightSecond said:

Are you staff?

No. But I don't need to be.

 

Ta****
2 hours ago, eyemblacksheep said:

The simpler captchas, automation can crack in seconds, so they're not a deterrent. The more advanced the captcha gets, the less user-friendly it gets, which deters 'genuine' people signing up.

Mandatory verification is again something a lot of folk find off-putting; it's why sites try to incentivise people to verify without penalising too heavily those who don't yet wish to.

However - users can already set their mailbox filters to only receive messages from verified accounts, and set search filters to only include verified accounts. Anyone who does this has no reason to be messaged by unverified profiles, nor would they see them in their searches. The tools are already there, but guys choose not to use them.

It's almost like you enjoy this shit show 🤷‍♂️

ey****
2 minutes ago, Taskmaster55 said:

It's almost like you enjoy this shit show 🤷‍♂️

I mean. You call it a shit show. But you're still here.

Use the filters and your problem is solved. 

 

Wi****
My settings are set to verified and upgraded and I get more anonymous visitors than anything.
I agree with Taskmaster.
Captcha is the easiest way.

There are two that automation can't crack: randomly generated text and box-select graphics.

I have actually used both on my own domains and reduced automation accounts to zero.

The problem is, not all fake accounts are automatically generated.

Live people create accounts for nefarious reasons. Those are the hard accounts to prevent.

It will take a lot of training for databases to have enough references to do IP blocking.
ThreatDown by Malwarebytes is very effective for website development.
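For anyone curious, the random-text kind I mentioned is simple to roll yourself. This is only a stripped-down sketch of the general idea (not my actual code), assuming Python with the Pillow package; a production version adds noise, distortion and varied fonts so OCR has a harder time:

```python
# Stripped-down random-text captcha sketch.
# Assumes the Pillow package (pip install pillow).
import random
import string
from PIL import Image, ImageDraw, ImageFont

def make_captcha(length: int = 6):
    answer = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("RGB", (40 * length, 60), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    for i, ch in enumerate(answer):
        # Jitter each character vertically so the string isn't perfectly aligned.
        draw.text((10 + 40 * i, random.randint(10, 35)), ch, fill="black", font=font)
    return answer, img

if __name__ == "__main__":
    answer, img = make_captcha()
    img.save("captcha.png")  # serve the image; keep `answer` in the session
    print("expected answer:", answer)
```

You serve the image, keep the answer in the session, and compare it against what the user types at signup.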
ey****
17 hours ago, Windwolf said:

There are two that automation can't crack: randomly generated text and box-select graphics.

I have actually used both on my own domains and reduced automation accounts to zero.

The problem is, not all fake accounts are automatically generated.

Live people create accounts for nefarious reasons. Those are the hard accounts to prevent.

It will take a lot of training for databases to have enough references to do IP blocking.

So, both types of your captcha can be beaten by automation. They can crack it. However, there is the question of what is currently worth their while, and of when someone reviews their failure report and decides what their priority is.

What most websites have taken to doing, instead of "prove you are human by solving this puzzle", is switching to behaviour analysis. This means most humans won't even realise there is a system unless they trip it, so they don't run into the constraints of captchas (captchas themselves are usually discriminatory against people with disabilities too - and ironically, most of the tools used by people with disabilities to help with captchas are jumped on by some of the automation).

Any automated behaviour is likely to trip it, and that's when the site takes whatever action it feels is necessary, bearing in mind that this is probably automation but may be a false positive. A lot of VPNs trip behaviour analysis models, as can people messaging too fast or clicking on a lot of links too fast. So, ahem, guys who write out a message and copy and paste it to multiple women will trip it. Mind, it's not like they are of value ;)

Even behaviour models can be beaten by automation, especially as more utilise AI - however, at the minute that is often less desirable to crack, especially when so many use lower-hanging models.
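To be clear, I have no idea what this site actually runs, but even a toy version of the idea is just counting things per account - something like this (Python; the thresholds and names are made up):

```python
# Toy behaviour-analysis sketch - not any real site's model.
# Flags accounts that message too fast or paste the same text to many recipients.
import time
from collections import defaultdict, deque

WINDOW_SECS = 60            # look at the last minute of activity
MAX_MSGS_PER_WINDOW = 10    # arbitrary thresholds for the sketch
MAX_SAME_TEXT_RECIPIENTS = 5

recent_msgs = defaultdict(deque)   # account -> timestamps of recent messages
same_text_to = defaultdict(set)    # (account, text) -> recipients that text went to

def record_message(account: str, recipient: str, text: str) -> bool:
    """Return True if this message looks like automated or spammy behaviour."""
    now = time.time()
    q = recent_msgs[account]
    q.append(now)
    while q and now - q[0] > WINDOW_SECS:
        q.popleft()

    same_text_to[(account, text)].add(recipient)

    too_fast = len(q) > MAX_MSGS_PER_WINDOW
    copy_paste = len(same_text_to[(account, text)]) > MAX_SAME_TEXT_RECIPIENTS
    return too_fast or copy_paste
```

A real model layers on many more signals (devices, timing, link clicks, VPN ranges), which is also why false positives happen.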

Equally, to skip to your last point...

IP blocking was effective in the 90s. In 2025 it is not. Both in the sense that a wide area can sometimes share the same IP (say, a public library), and also that, well, if you turn your home internet off and back on there's a good chance you'll get a different IP address. Ban avoided. That's before we even get into things like VPNs. Geoblocking is a little more of a thing, but it works best when you want to block an entire country or state.
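For what it's worth, the blocking itself is trivial - the reason it fails is everything above, not the code. A network-range check (the kind of thing geoblocking boils down to) is roughly this, using Python's standard ipaddress module with made-up example ranges:

```python
# Minimal network-range check - the kind of thing geoblocking boils down to.
# The ranges below are documentation/example ranges, not a real allocation.
import ipaddress

BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("203.0.113.7"))  # True
print(is_blocked("192.0.2.1"))    # False - and a reconnect or VPN gets you a new address anyway
```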

But for the middle bit, yes, that is the problem. I won't say automation isn't a problem (because it is - and yes, this site does have some form of automation protection, or it pretty much wouldn't be here), but actual humans are the bigger problem. They can pass captchas, verify email addresses, pass behaviour analysis and even photo verify (sometimes with photos that aren't them - people sell verification photos on sites like Fiverr, and there's the common scam guys try on women: "Hey, I can never be sure I'm not dealing with a fake, could you do this oddly specific pose for me?" and then they upload it to another site).

The main problem with the real humans, of course, is that they have to actually do something wrong before they can be banned, no matter how suspicious things might look. At least automated suspicious behaviour can trip behaviour analysis. And I'll be honest, I have very few scammers/fakes/etc contact me, and for those who DO, I usually go to my messages, click in and find they've already been removed by the site.

Ironically, a lot of these profiles probably aren't suspicious-looking enough, given guys message them and then complain about it, haha.

This said, of course, I think this is a challenge all sites face, but some are more on the ball than others. Mind, the ones not on the ball have pretty much already gone under.

 

1 hour ago, eyemblacksheep said:

I've just looked at the profile - and there's absolutely zero evidence the person is a scammer or not who they say they are

1) Profile uses "am" statement instead of "i am."

2) Profile uses the phrase "not here for games" which is borrowed from what scammers have seen American women say on conventional dating sites and is extratopical for a fetish site.

3) Profile states "Need a true love a man with a good heart" which is borrowed from what scammers have seen American women say on conventional dating sites and is extratopical for a fetish site.

4) Profile lists no roles or fetishes which is common to other scammer profiles here.

5) Profile has not taken the BDSM Personality Test which is common to other scammer profiles here.

6) Profile gave me a spank, but when i private messaged with thanks and asked about roles and fetishes it did not respond.

If you're going to troll people who've paid Playamedia and want to clear the fake clutter from this site ("vigilantes" by your description), i challenge you to actually put some thought into your responses.  i cannot help that you're not as astute at recognizing fakes as i am.

 

1 hour ago, eyemblacksheep said:

women went "fuck that" and never came back

This reminds me of thirsty men i've seen on Fetlife who, when suspect fresh 18-year-old profiles are challenged by a member to age verify, begin to whine and fuss that this is going to turn girls away from participating. i want to participate with legitimate people here who are comfortable with adequately validating who they are, thank you very much. If that means a significant reduction in the total number of profiles, that's filtering i don't have to waste time doing myself.

 

ey****
30 minutes ago, AlmostGhost said:

If you're going to troll people who've paid Playamedia and want to clear the fake clutter from this site ("vigilantes" by your description), i challenge you to actually put some thought into your responses.  i cannot help that you're not as astute at recognizing fakes as i am.

Everything you have said is anecdotal

Yes - you might look and say "I think this is a scam" but there is no proof.  You haven't provided any.  "How you know she's a scammer?", "She looks like one!"

And surely, if someone is a scammer and gives a spank, and you message them - they're going to reply to run the scam... no?   Which would in itself give you the required proof to get the profile taken down.

(edited)

Here's another "anecdotal"...

"One week ago: "We've received your message to review the member ******. Thanks for helping out."

ABOUT ******:

*********

 

@eyemblacksheep, just want you to know that she's still available a week after being reported so you can slide into her dms with the other 385 fools.  Who knows, she may be the scammer of your dreams?!?

Edited by Deleted Member
Text passages removed due to denunciation – please do not mention nicknames or describe profile details
18 minutes ago, eyemblacksheep said:

"How you know she's a scammer?" "She looks like one!"

"How i know", indeed.

Wi****
31 minutes ago, AlmostGhost said:

"How i know", indeed.

When I first see a suspicious profile, I run three different image analysis programs to find out if it is connected to nefarious information.
I was reporting them to start with. But I have found over 50 profiles that were clearly fake. Some were removed after I reported them. But, the sheer volume is so time consuming that to evaluate each one manually would be a full time job.
I agree that having people verify won't discourage many real people. The main ones who will be discouraged are the same ones we are complaining about.
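I won't name the programs I use, but one simple local version of that kind of comparison is a perceptual hash. A rough sketch, assuming Python with the pillow and imagehash packages (the file names are placeholders, and this isn't any particular tool):

```python
# Rough perceptual-hash comparison sketch.
# Assumes the `pillow` and `imagehash` packages; file names are placeholders.
import imagehash
from PIL import Image

def looks_like_same_photo(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """True if two images are near-duplicates (small Hamming distance between pHashes)."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance

# e.g. compare a new profile photo against photos saved from known fake profiles
print(looks_like_same_photo("new_profile.jpg", "known_fake.jpg"))
```

Reverse image search services do the heavy lifting of finding candidates to compare against; the hash just tells you two files are near-duplicates even after resizing or recompression.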

ey****
3 hours ago, AlmostGhost said:

Who knows, she may be the scammer of your dreams?!?

You're the one who said you messaged someone you thought was fake after she spanked you.

I have zero problem meeting people who are real, thanks very much.

ey****
3 hours ago, Windwolf said:

I have found over 50 profiles that were clearly fake. Some were removed after I reported them. But, the sheer volume is so time consuming that to evaluate each one manually would be a full time job.

This is very much the issue.

So, when there was the TikTok issue there were a few folk (including me) who went through profiles and started reporting those who looked suspect.

And a lot got removed. But there was an issue: going through manually was very time-consuming. Now, some of these were clearly against the TOU - they basically had in their profile that they were here for payment, to be paid, etc., and these were real people. Some had passed verification, some had paid for membership.

Someone actually from the site asked us to stop because it was causing issues. Some were blatant and yes, of course, I'd say they shouldn't be here; I believe some were given the option that they could stay providing they changed their profile and didn't sell. At the time there was the Dominatrix profile (which no longer exists), so some were asked to upgrade to that.

But yep, a human goes through every report - so if there are a lot of reports of things that just "look" fake but haven't done anything wrong - maybe the image search doesn't bring up anything blatant - it takes up a lot of time.   Now, if one of these accounts messages someone and asks for *** - that's different, that's reportable evidence and that gets actioned relatively quickly.

 

(edited)

So a moderator DOES have time to make the effort to blank out the specifics i used to demonstrate how i knew a profile i reported a week ago was fake.  BUT, Playamedia still hasn't taken down the profile?
Hilarious!


(i've also had images taken down where i exposed my interactions with obviously fake profiles.)


That lets me know precisely what the company's priorities are. As i've said before, they allow the fakes because the fakes give likes, give spanks, and either bots or paid scammers send private messages to thirsty men and make the site seem like the number of legitimate people using it is much, much higher than it actually is.

Always be ready for greedy people to defraud, cheat, and create rules to protect their own interests.

Edited by Deleted Member
needed editing.
ey****

I can tell you (as they've explained this before) there are different teams

The forum mods are volunteers who have very limited access - they approve posts, reject posts, can close topics, and make mild edits. They do not have access to DMs, the back end, etc.

It's a different, employed team who looks at that.

And, equally, they could have looked at the profile and said - no, there is no proof this person is violating TOU

of course, for everyone it is a case of asking if you are happy with this.  If you feel the site is badly run, then look at sites you feel are better run.  If you think they are deliberately running this site to scam people, you'd be foolish to be here.

I don't believe that. I believe there is always scope for improvement (same for any site), but I don't think they are setting out to mislead or defraud people. If I did, I simply wouldn't be here - because that would make me a mark.

Hey, look, someone knocked down a couple of straw men.

Wi****
3 hours ago, eyemblacksheep said:

This is very much the issue.

So, when there was the TikTok issue there were a few folk (including me) who went through profiles and started reporting those who looked suspect.

And a lot got removed. But there was an issue: going through manually was very time-consuming. Now, some of these were clearly against the TOU - they basically had in their profile that they were here for payment, to be paid, etc., and these were real people. Some had passed verification, some had paid for membership.

Someone actually from the site asked us to stop because it was causing issues. Some were blatant and yes, of course, I'd say they shouldn't be here; I believe some were given the option that they could stay providing they changed their profile and didn't sell. At the time there was the Dominatrix profile (which no longer exists), so some were asked to upgrade to that.

But yep, a human goes through every report - so if there are a lot of reports of things that just "look" fake but haven't done anything wrong - maybe the image search doesn't bring up anything blatant - it takes up a lot of time.   Now, if one of these accounts messages someone and asks for *** - that's different, that's reportable evidence and that gets actioned relatively quickly.

 

I can't speak to everyone's experience, only mine.
The profiles that alert me first are ones that have no pictures and no information other than maybe a paragraph that looks exactly like many others.
Profiles that only have one picture plus the above information are second on the list.
Normally I just ignore the first kind.
If the second type has what looks like a professional photo shoot, I run a facial comparison to see if anything shows up. I will follow up in a week if nothing shows up, because the photo may not have been indexed by web crawlers yet.

The third, and for me the most urgent, are the ones who send a message or comment on a picture asking me to go to whichever chat they happen to use.
Some of them actually have their phone number or chat information in their short profile or screen name.
Those normally fit into several categories and, for the most part, are the ones that get kicked.

I'm sure that plenty of people have stories they can share too.

Personally, I don't think there's any one thing that would resolve the problem. I do, however, believe there are several steps that could be taken to at least make the fakes the minority.

ey****

I think sometimes the first assumption - and I get why people make it - is that sites aren't doing anything. And we know most are.

I guess we can sometimes report someone and feel their takedown was slow (fair), or be surprised someone doesn't get taken down (again, our suspicions alone aren't enough - there does need to be proof) - but of course they kinda can't always tell us what they're doing, because that advertises the security. So sometimes - and this has happened on Facebook, Twitter, Fetlife, everywhere - there'll be a surge of bot accounts because someone has found a way to make code work, and then the site has to both deal with the clean-up and change things so the code doesn't work. If they didn't do either, the site really would be constantly overpowered by bots, which we all kinda know isn't the case.

Obviously the human element is more difficult - and, like, I do have a solution, but it's grossly unpopular (it's not too dissimilar from what someone else suggested): mandatory verification with government-issued ID. It would solve the problem of bots, and even a real human, once banned, couldn't sign back up again because their ID would now be blacklisted. But, ahem, this obviously rules out people without ID - and also, how trusting would most people be in handing over government ID to a dating site? There would be concerns about hacks and leaks regardless of their trust in the site.
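(On the leak worry specifically - one common answer is that the site never stores the ID number itself, only a keyed hash of it, so a breach doesn't expose usable documents. A toy sketch of that, with the key and normalisation entirely made up, is below. It still requires trusting the site with the document at the point of verification, though.)

```python
# Toy sketch of an ID blacklist that never stores the raw ID number.
# The secret key and the normalisation are made up, not a real scheme.
import hashlib
import hmac

SITE_SECRET = b"keep-this-out-of-the-database"  # placeholder key

banned_id_hashes = set()

def id_fingerprint(id_number: str) -> str:
    # Keyed hash, so a database leak doesn't expose usable ID numbers.
    normalised = id_number.replace(" ", "").upper()
    return hmac.new(SITE_SECRET, normalised.encode(), hashlib.sha256).hexdigest()

def ban(id_number: str) -> None:
    banned_id_hashes.add(id_fingerprint(id_number))

def can_register(id_number: str) -> bool:
    return id_fingerprint(id_number) not in banned_id_hashes

ban("AB 123 456 C")
print(can_register("ab123456c"))     # False - same document, can't sign back up
print(can_register("ZZ 999 999 X"))  # True
```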

And that's always the issue of getting the balance between keeping folk out who shouldn't be there, and keeping the site accessible to those who'd otherwise like to be there.    Mind, with assorted territories having assorted age verification bills in flight, maybe this will have to become more robust anyway.

--

But yeah, in a lot of cases, I think it's funny that profiles which would be off-putting to women (whether 'real' or not) really should be off-putting to men too - no profile picture, poor profile, etc. If someone reaches out and you're unsure, you don't even have to reply.

Co****
What’s with the accounts here asking for ***am and GCSP IDs - think that’s weird as hell
Ki****
4 minutes ago, Counter-Culture said:
What’s with the accounts here asking for ***am and GCSP IDs - think that’s weird as hell

Just the usual scam - like gas station toilets: if you find a telephone number offering a good time... don't trust it! 😂

Co****
1 minute ago, KinksterDan said:

Just the usual scam, like for gas station toilets, if you find a telephone number offering a good time... Don't trust it! 😂

I literally think it’s funny how people are trying to scam people on this app

Ki****
11 minutes ago, Counter-Culture said:

I literally think it’s funny how people are trying to scam people on this app

Oh, I get you, it's just one of those things - they keep trying even if it's the oldest trick in the book.
Although remember, they're not after your *** 💸

ey****
41 minutes ago, Counter-Culture said:

What’s with the accounts here asking for ***am and GCSP ID’s - think that’s weird as hell

they tend to want to take you off-site because (a) they know it's a matter of time before they are purged here; (b) the site cannot take action based on any exchanges that happen on ***am; (c) ***am being encrypted makes it difficult to track the person if they are running a scam, and they're also fairly slow at doing much about it; (d) having your ***am ID means that even if you give up on them, they still have your ID and can sell it on to assorted 'suckers' lists.

39 minutes ago, Counter-Culture said:

I literally think it’s funny how people are trying to scam people on this app

Dating scams/fraud are big business.   Having an App makes it even more lucrative.   Generally, I still feel the worst thing this site ever did was get an app - but they probably have stats saying it led to accelerated growth of real users as much as the downsides that go with it.

Pretty much every dating-based site/app has, and will always have, similar issues - mind, this is nowhere near the level of fakes/scams/etc. you see on, say, Tinder, Plenty of Fish or Match, which are all really big business.
