My oh my, ain’t this the question of the hour. I’m definitely not going to win any friends from some people on this one, but folks – I’m not going to B.S. you here. There are people who philosophize about laws and legislation based on all sorts of factors, there are people who make tools, there are people who are charged with helping, there are people who research theories, there are people who pour effort into education or overzealous protection, there are people who have propaganda & agendas (good-good, bad-good, good-bad, bad-bad), and then there are the people who just gotta get the job done: every. single. day.
There are a lot of people in the pot trying to decide what “safety” means these days – especially regarding chat. I’m just gonna tell you a bit of insight from my side, the “every. single. day” perspective – think of it as the stage manager telling you what’s happening behind the curtain, but also knowing what is expected to be seen by those in front of the curtain. It’s a very different view from the director, or set designer, or critic, or actor, or audience…
Here are a bunch of questions I get:
1. What are the safeguards for chat for kids? (aka, what are “filters”)
As we know (or as you’re now learning), registration processes aren’t the only method of PII collection (PII: personally identifiable information – which COPPA, the Children’s Online Privacy Protection Act, prohibits operators from collecting from children under the age of 13 without parental consent). In these virtual experiences like MMOs and Virtual Worlds and Chat Clients and Social Networks – there are a thousand ways to share information. People put in “filters” that are trained to catch or allow content, based on the type of filter it is, so that content can or cannot appear within a social space…
- Dictionary Phrase list – basically a list of predetermined statements with no room for alteration
- Pro: Your users cannot alter or break any of your systems, unless they’ve figured out some alteration like spelling out ultra-serious language with codes built from the first initials of each sentence, lol
- Con: Really, really, really frustrating. Really frustrating. Not a great user experience because everything is dictated, and unless it’s a GINORMOUS list of pre-determined statements, there is little room for off-the-cuff roleplay, and being dictated to is never something a pre-teen/tween child likes…
- Dictionary lists – basically a list of all permitted words – like an uber list straight from the dictionary (lol – hence the clever name)
- Pro: You’re only allowing certain words and blocking out any phonetic workarounds or garbled spelling attempts (ex: words like funkyou or asstronaut are not in the dictionary and are therefore caught by the filter before appearing live).
- Con: Dictionary lists are HUGE. Let me repeat: HUGE. You better scan through them for medical terms like “pubic” or “pedophilia”, both of which are in the dictionary, as are “address” and “phone” and “email”. Also – phrases are not in the dictionary – such as “as hole” or “read hard dead” or “name at yahoo dot com” and “my house is on third street maytown Illinois”. Heck, you can even use workarounds like “my digits are ate hero hero tree tree fort hive sicks mine on” (that says 800-334-5691, a number I just made up using the types of easy workarounds KIDS USE EVERY SINGLE DAY – no. joke. All words in the dictionary). Also – with every user who creates a new username, there is yet another addition to your allowed list. Kids have to be able to speak to each other, right? 1,000,000 users = 1,000,000 additions to the dictionary… YOUR CHAT PROGRAM IS GOING TO BE VERY, VERY SLOW.
- White list – An extensive list of appropriate words (and some phrases) that your team has specifically allowed in chat (much like the dictionary list). Typically, it must have a smaller blacklist to balance out some of the issues.
- Pro: You’re starting with a set list of approved words and statements, you have a little more control over the types of conversations you wish your users to have.
- Con: Young users with spelling issues will never get to say what they’re trying to say unless you have the foresight or capability to see what they’re attempting and add it to the white list. You have a smaller range of free community unless you’re actively keeping up with the chat of kids and making new allowances, etc. Also – good luck with symbols, characters, punctuation, and numbers – since your system has already chosen the words it likes, kids can use these other things to break what you’ve set up. Your mini blacklist better be prepared for statements like “silky fingers” or “hard purple staff” or “up your skirt” or “chocolate kid” or “lets have sax”
- Black List – An extensive list of inappropriate words/phrases blocked from chat, with a subsequent white list that helps balance out the black list for appropriate content.
- Pro: It’s an active list that is monitored, changed, and edited by the day to support the growing needs and cleverness of youth & pop culture in general (which can also be considered a con, lol). You know exactly what they cannot say, and removing all negative content is the emphasis while trying to be clever enough to not break the user experience (as we know, inappropriate content changes by the day – thank you South Park and Family Guy). Urbandictionary.com is a great help. You can prepare the blacklist to look for such phrases as “my addy is” or “real name” or “in your pants”.
- Con: Unless you have a tool set that can separate words, find gem-of-words within bigger cluster-words, ignore run-on vowels or extra characters, read through spaces and numbers and symbols, etc, well… you’re going to have problems (and there ARE tools out there that do this… you just have to look, test, research, etc). This is what I call “control over your active road map” – you need to be working to verify that all options around and through your blacklist controls are sticking tight. Example: the word “ass” is inappropriate, but appears in “class” and “assembly”… make sure those aren’t caught. On the flip side, the word “retard” is never appropriate in any variation – so the filter needs to be able to catch “uretard” or “retardation” or “ret@rd” or “r3t@rd” or “mrretardkid” < all of which I’ve seen kids attempt. Also – this is not something just anyone can pick up… knowing how to work and manage a blacklist effectively is a solid job and needs care & cleverness.
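To make those blacklist challenges concrete, here’s a minimal sketch in Python. The word lists and thresholds are invented purely for illustration (any real moderation tool set is far more sophisticated); it shows the kind of normalization a blacklist filter needs before comparing words: undoing leet-speak substitutions, stripping symbols, collapsing run-on letters, and honoring a small whitelist so “class” and “assembly” survive.

```python
import re

# Hypothetical sketch only: real lists run to thousands of entries and
# are edited daily by people who live in this stuff.
BLACKLIST = {"retard", "ass"}          # terms to catch in any variation
WHITELIST = {"class", "assembly"}      # safe words that contain a blacklisted term

# Common character-for-letter swaps kids use to dodge filters
LEET = str.maketrans({"@": "a", "3": "e", "0": "o", "1": "i", "$": "s"})

def normalize(word: str) -> str:
    """Undo common evasions: leet substitutions, symbols, run-on letters."""
    word = word.lower().translate(LEET)
    word = re.sub(r"[^a-z]", "", word)          # drop digits, symbols, spaces
    return re.sub(r"(.)\1{2,}", r"\1", word)    # collapse runs like "aaass"

def is_blocked(message: str) -> bool:
    for raw in message.split():
        word = normalize(raw)
        if word in WHITELIST:
            continue
        # find blacklisted gem-of-words inside bigger cluster-words
        if any(bad in word for bad in BLACKLIST):
            return True
    return False
```

Note the order: the whitelist is checked before the substring match, which is exactly the “ass”-in-“class” balance described above, while “r3t@rd” and “mrretardkid” still normalize into a catchable form.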
2. Can I be 100% certain my chat system is safe from PII collection or sharing by children?
NO. Not unless everything is pre-screened before going live (example: the phrase dictionary or canned chat alternatives). And even if you had moderators screening all content before it goes live – that is a heavy scaling issue, with a lot of room for human error.
I’ve already mentioned the types of identifiable location words that need to be removed in Dictionary Chat / White list / Black list. But what I haven’t mentioned are first name / last names. Unless you restrict first names completely (including a user’s avatar name), you’re already in the hole. Why? I don’t know about you – but just because someone once told me not to date guys with two first names doesn’t mean they don’t exist (teasing about the two first names… clearly that’s just a myth… hehehe). Ryan Edwards. Tiffany Addam. Joe Gail. Larry Drake. Then you have the first name + object last name, such as Jack Hall, Charlie Brown, Jerry Trainer, Sally Stir. There’s not a chat list in the world that’s going to block that unless it’s prescripted.
On the flip side, you also have numbers (users shouldn’t even be able to TYPE a number on the keyboard – why give what they can’t even have?), symbols should be removed (there is no need for @ or > – smilies are what emotes are for), and really the only punctuation should be the exclamation point and the question mark. Even THEN you’re going to see abuse for PII’s sake… “My digits are ! !!!!!!!! nil nil !!!! !!!!! !!!!! ! !! !!!! !!!!!” and there’s an 800 phone number. Or the progression in chat for this:
“my digits are after the a. write em down. A!!!!!!!!” “A nil a nill” “A!!!!” “A!!!!!” “A!” “A!!!!” “A!!!” “A!!!!!!!!!!” Again, prescripted might help stop this.
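A post-moderation heuristic can at least surface that punctuation trick for human review. Here’s a sketch of the idea; the pattern and threshold are my own invention for illustration, and a determined kid will always find the next workaround:

```python
import re

# Hypothetical heuristic: messages that smuggle digits through runs of
# repeated punctuation (e.g. "A!!!!!!!!" for 8) tend to contain several
# such runs in a row. Flag those for a moderator to eyeball.
RUN = re.compile(r"([!?.])\1{2,}")   # 3+ repeats of an allowed punctuation mark

def suspicious_runs(message: str, max_runs: int = 2) -> bool:
    """True when a message has enough repeated-punctuation runs that it
    might be encoding a phone number and deserves post-moderation review."""
    return len(RUN.findall(message)) > max_runs
```

A normal excited “that was so fun!!!” has one run and passes; the “A!!!!!!!! A!!!! A!!!!!” progression above trips the threshold and gets queued for review rather than auto-blocked.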
Now… here’s the thing about prescripted agendas. YOU LIMIT A KID IN A WORLD WHERE THEY’RE EXPECTED TO FORM A COMMUNITY – AND THEY’RE NOT GOING TO STICK AROUND. Sure, if the game is fun, they’ll play the game, maybe stick around for a session or two… but why even make it a social game? YOU CAN’T BE SOCIAL IF YOU CAN’T BE SOCIAL. And, heck, kids are just going to fire up their AIM and/or gchat and/or MSN and/or text messages. At least with the filters and the time/effort you were putting in… you were doing YOUR job in trying to protect them. Put massive restrictions on chat and you push your users’ social experience off to other channels that are less capable and less responsible than the job YOU could be doing the right way.
Which leads me back to – WHY MAKE A SOCIAL EXPERIENCE GAME? I’ve only seen Poptropica.com do this well – and they’re not really going for a social community. They’re going for game-based/story-based interactive, educational fun without community or self-expression or role play… it’s about the agenda decided for the purpose of the game.
But how do we protect / stop users from these simple methods of info sharing – like first name + last name? Put it in your rules, your Terms of Service. Inform the users, and the parents, that there is a chance something could be shared by accident… and that your site will remove any/every user who breaks this rule. Put forth the best effort with filters and POST MODERATION (various ad hoc methods that flag users who are breaking the policies you’ve set). If they can’t play by the rules and regulations you’ve set, and if a user is putting your brand/game at risk… SO LONG, GOOD RIDDANCE.
The only way we can REALLY attack this problem is through education. Either in-game, pre-game, parental education & guidance… but for me, I’d like to see POP CULTURE EDUCATION. Ad campaigns, commercials, etc.
And by the way… these are only a *few* of the examples there are in work-arounds. There are MANY, MANY more, and they change, grow, mutate by the day.
3. What is the safest method of chat filtration?
The safest method is whatever you know works the best for YOU. There’s no “one” perfect situation for every company, every philosophy, every policy. Look at what your variables are:
- Who is your target user (and what might he/she say around the lunch table with friends), who is your secondary target, and who is going to show up unwanted at the party…
- What is the type of content/genre/fantasy you’re building, and how will the language that corresponds with it affect or change the typical everyday language scene (example: if you have a world where everyone is an ice cream flavor – being called vanilla kid or chocolate kid doesn’t have the same context as it does in an athletic world where kids are sassing each other)
- Who is in charge of policing your policy in your world – do they understand the type of content that needs to be caught?
- Do you/your team have a sufficient understanding of language / pop culture / kid behaviors / online minxiness to be able to properly control / handle what you want for your audience?
- Do you want to control your language road map – or do you wish for the aid of another company to control the language?
- Do you understand what legally CANNOT be shared in chat? Do you feel you have sufficiently restricted the public sharing of PII?
- How do you want filtration to appear to the end-user?
- Do you want them to be warned for certain language?
- Do you want to put certain words in black boxes, where only the author can see it and the rest of the social room cannot?
- Do you even want kids to know what words they can/cannot say?
- How are you going to know when kids are creating language work-arounds?
- If you allow a vendor to control your language lists, who carries the responsibility/burden if the list is not sufficient? (are you QA-ing your own policies / site?)
- How are you going to react to users who are breaking your policies regarding chat?
- Have you removed / scrubbed any content accidentally provided by users?
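Several of those questions (warn the user? hide words in a black box only the author sees? block outright?) suggest that a filter shouldn’t return a simple yes/no but a graded action. Here’s a hypothetical sketch of that idea; the categories, word lists, and rules are all invented for illustration, not a recipe:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"          # goes live for the whole room
    WARN = "warn"            # goes live, but the author sees a warning
    AUTHOR_ONLY = "author"   # the "black box": only the author sees it
    BLOCK = "block"          # never appears; logged for moderators

# Invented example lists; a real policy would be far larger and
# maintained daily alongside the main black/white lists.
BLOCK_PHRASES = {"my addy is"}            # clear PII-sharing attempts
AUTHOR_ONLY_WORDS = {"address", "phone"}  # risky words, hidden quietly
WARN_WORDS = {"stupid", "shut up"}        # rude but not dangerous

def moderate(message: str) -> Action:
    """Return the most severe action any matching rule calls for."""
    text = message.lower()
    if any(term in text for term in BLOCK_PHRASES):
        return Action.BLOCK
    if any(term in text for term in AUTHOR_ONLY_WORDS):
        return Action.AUTHOR_ONLY
    if any(term in text for term in WARN_WORDS):
        return Action.WARN
    return Action.ALLOW
```

The point of the graded design is the end-user experience question above: silently hiding a risky message from everyone but its author breaks the flow far less than a hard block, and it doesn’t teach kids exactly which words tripped the filter.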
I have what works for me, and for now I’m very happy with my method. Naturally – I am always looking / learning / finding new ways of improvement for policy, implementation, experience, etc. That’s my job. At the end of the day, I am accountable for the users and the company. Not only is there legislation, there is a sensitive and young audience involved.
This all leads to the “what next” step of COPPA and the recent COPPA round table that happened at the start of June. To be honest – I’m scared. I’m scared because there are a lot of different ideologies floating around regarding PII and chat. The fact that conversations are happening isn’t what scares me – it’s the lack of hands-on knowledge from people who have to do this every day (and I’m not talking about the directors or managers who haven’t even once signed into their tool set – trust me, there are a few of those out there).
There seem to be a lot of people looking at what’s working for others and trying to do the same… but no two sites, no two games, no two companies work the same. Chat always seems to be one of the LAST thoughts for people – not whether it needs to exist, but HOW, and what the experience is like for the end-user. Font, character allowance, timing, content – it’s essential and standard and needs to be treated in design and creation with the same respect as EVERYTHING important to the agenda of the site.
I’d like to see more people close their doors, Willy Wonka style, and figure it out for themselves – so they can speak to it and cop to it, etc. I, for one, should not know your chat filter holes better than you do…