Zuckerberg wants children under 13 on Facebook?

May 20, 2011

Zuckerberg said he wants younger kids to be allowed on social networking sites like Facebook. Currently, the Children’s Online Privacy Protection Act (COPPA) mandates that websites that collect information about users (like Facebook does) aren’t allowed to sign on anyone under the age of 13. But Zuckerberg is determined to change this.

“That will be a fight we take on at some point,” he said. “My philosophy is that for education you need to start at a really, really young age.”

But just how would Facebook’s social features be used by younger children?

“Because of the restrictions we haven’t even begun this learning process,” Zuckerberg said. “If they’re lifted then we’d start to learn what works. We’d take a lot of precautions to make sure that they [younger kids] are safe.”

http://tech.fortune.cnn.com/2011/05/20/zuckerberg-kids-under-13-should-be-allowed-on-facebook/

Here are my first thoughts.

1. PESSIMISM: Of course Mark Zuckerberg wants kids on Facebook – Facebook is an advertising & trend-analysis GOLD MINE dressed as a happy, friend-connecting social network.  Kids are the largest licensing group, and advertisers would LOVE to get their hands on that kind of market.

So much for the ENTIRE POINT OF COPPA – which wasn’t created for your immediate privacy, but created to PROTECT CHILDREN FROM MARKETERS STEALING OR SWINDLING PII.

Fail.

Also see: Facebook Forced to Address Legal Gray Area of Kids and Advertising from AdAge. http://adage.com/article/digital/facebook-forced-address-kids-advertising/227633/

2. FEAR: Oh, that’s a GREAT idea.  Why not make more PERSONALLY IDENTIFIABLE INFORMATION ABOUT MINORS available?  Tre sigh.  Yes, education is VERY important – particularly about secret identities.  But children under the age of 13 DO NOT HAVE THE COGNITIVE CAPABILITIES TO BE SOLELY RESPONSIBLE FOR THEIR PUBLIC PERSONA.  Part of being young is that you’re protected and allowed to make mistakes – but making those mistakes on Facebook, a public platform that reaches far beyond the lunch room, and far beyond your mom telling your aunt about that stupid detention you got?  BOO.  Not ideal.

3. LOGISTICS & CONCERNS: MODERATION. SCALABILITY. COST. Even if Facebook DID man up and start pre-screening all content contributed by U13 sources, what a nightmare!  Staffing to cover something like that?  Insane.  And neither revenue-generating nor cost-efficient.

4. HOPE: Any sort of “educational program” that comes with U13 on Facebook would have to be an entirely new entity.  Think: Facebook Junior, profile training wheels.  It would have to be limited, with tutorials, information, and educational guidance.  Leverage the sort of YouTube content that SweetyHigh has created (worth checking out).  But in no way would Facebook be able to cruise right into allowing U13 without redesigning the fundamental/core use of Facebook.

5. REALITY: I deal EVERY SINGLE DAY with kid chat, and kid posts, and kid interactions, and behavior crises from U13.  I worry about social networks for children that do NOT rely on fantastical role play or themed content.  Those two elements help protect against direct attacks (or even mistaken, indirect attacks) on a sensitive and underdeveloped child by allowing creative persona & identity hiding (to a certain extent, of course – real friends playing in fantasy worlds blend that reality vs role play, and take interaction to a different level).  Children are still in the process of social learning.  Social learning CAN be expanded – and I do applaud the idea of social network education… but tossing youth into the deep end, where there are daily Trojan attacks on accounts, stolen identity issues & account phishing, cyberbullying, advertising lures, and STRANGERS, is not ideal.  Think about it: not even normal, rational adults can successfully navigate Facebook accurately…

If there is a way for Zuckerberg to incorporate social networking education, with Facebook structure, I’m eager to see it – but there are quite a few MASSIVE problems in his path.  And with this audience?  Bowling through the ideals without proper guidance, understanding, or safety nets = not a safe agenda.

I hope Zuck collects his facts, has the necessary research concluded, and (excuse the phrase) gets his shizzz straight before he really dives into something like this.  For as much as I applaud optimistic philosophy, I desire educated practicality.

Age Gate Complications

March 14, 2011

How the Public Interprets COPPA-Prompted Age Restrictions

Most parents and youth believe that the age requirements that they encounter when signing up to various websites are equivalent to a safety warning. They interpret this limitation as: “This site is not suitable for children under the age of 13.” While this might be true, that’s not actually what the age restriction is about. Not only does COPPA fail to inform parents about the appropriateness of a particular site, but parental misinterpretations of the age restrictions mean that few are aware that this stems from an attempt to protect privacy.

While many parents do not believe that social network sites like Facebook and MySpace are suitable for young children, they often want their children to have access to other services that have age restrictions (email, instant messaging, video services, etc.). Often, parents cite that these tools enable children to connect with extended family; Skype is especially important to immigrant parents who have extended family outside of the US. Grandparents were most frequently cited as the reason why parents created accounts for their young children. Many parents will create accounts for children even before they are literate because the value of connecting children to family outweighs the age restriction. When parents encourage their children to use these services, they send a conflicting message that their kids eventually learn: ignore some age limitations but not others.

danah boyd | apophenia » How COPPA Fails Parents, Educators, Youth

I really, truly encourage you to head over to the link above and read the beginning and end (I excerpted only a portion) of Danah’s post.  She’s right.

Back when I was an early blogger, I used to get frustrated with the casual nonchalance of parents who let their kids watch YouTube, then create accounts, and then post videos (ack!)… teachers/parents who friended their U13 kids on MySpace and Facebook and Twitter (blergh).  There are a lot of these conflicts of interest happening in the dynamic between parents & children accessing the social/entertainment world online.  As the years have gone by, I’ve stopped ranting so much about these other social media sites.  I just try to make sure that the wee corners of the interwebs that I touch have some sort of care, logic, and appropriateness to them.

Having said that… I, fortunately & unfortunately, have hands-on experience working with Age Gates from one stance NOT mentioned in Danah’s post… youth-targeted sites.

Age gates have been a battle for many a kids’ biz.  Frustration points I’ve encountered, or had others relay to me:

1. Most kids, teens, adults, parents don’t even bother putting in the right info – they just choose the easiest option (either the pre-populated date or January 1, 2011) from the scroll gate option.  > Now they’re caught in the filter.

2. The session cookies.  Yes, I think on many levels a session cookie is necessary (why have a gate if they can cheat the gate?).  However, as mentioned by Danah, and in my point 1 above – parents / adults either put in the easiest information OR they put in their CHILD’S information… > Now they’re caught in the filter and frustrated (CS ticket if you’re lucky).  (A minimal sketch of this gate-plus-cookie pattern follows this list.)

3. How do you determine a child from an adult when receiving a poorly spelled CS ticket regarding the age gate?  (Btw, yes, many parents do not spend time editing, and their emails often look like a child’s – identities have been tested and proven via phone conversations, arrrg!)  Fun times. > Now they’re caught in the filter. Cookie sessioned. And possibly a poorly educated parent looking for a bit of help for their kid.

4. TIP-OFF LANGUAGE – Due to the FTC’s & Safe Harbor companies’ attempts to keep some sort of legitimate gate-action happening… This is frustrating to navigate.  I agree with the need for non-tip-off language; however, this can get really questionable fast when you start to analyze the language you’re using to explain how to use the age gate without explaining how to defeat the age gate.  > Now they are caught in the age gate, cookie sessioned out, confused by why, with CS tickets submitted and nowhere to go…

5. Every biz wants kids to enter the lists for their closed Beta… but you can’t have a minor agree to the legal documents associated with a Closed Beta session.  Ruh roh, age gates doing what they’re supposed to do against the need for the site… > Rock, meet hard place.  Also, add in: caught in the age gate, cookie sessioned out, confused by why, with CS tickets submitted, nowhere to go, and now questioning the legitimacy of a kids’ site that won’t let kids in…
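
For the code-inclined, here’s a minimal, hypothetical sketch of the gate-plus-cookie pattern from points 1 and 2 above: collect the birthdate with neutral wording, compute the age, and set a flag so a quick retry can’t cheat the gate.  The cookie key and function names are made up for illustration – your real version lives inside your web framework and your Safe Harbor guidance.

```python
# Minimal, hypothetical sketch of an age gate with a session flag.
# The cookie key and function names are illustrative only.
from datetime import date

UNDERAGE_COOKIE = "ag"   # deliberately neutral -- no tip-off language
MIN_AGE = 13

def age_on(birthdate: date, today: date) -> int:
    """Whole years old on a given day."""
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday else 1)

def handle_age_gate(birthdate: date, cookies: dict, today: date | None = None) -> dict:
    """Decide whether registration may continue; the caller sets cookies and routes."""
    today = today or date.today()
    # A prior failed attempt means a retry with a "better" birthday doesn't help.
    if cookies.get(UNDERAGE_COOKIE) == "1":
        return {"allow": False, "set_cookie": None}
    if age_on(birthdate, today) < MIN_AGE:
        # Neutral response only -- explaining *why* teaches the workaround.
        return {"allow": False, "set_cookie": (UNDERAGE_COOKIE, "1")}
    return {"allow": True, "set_cookie": None}
```

Two details worth stealing even if the code isn’t: never pre-populate the birthdate field with a value that passes the gate (point 1 above), and keep the cookie name and the failure message boring so the gate itself doesn’t become the tutorial.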

Ruh Roh + Fail whale?  Or age gate success?  Tre sigh.

I’m not going to give you my solutions to these frustrations, but having pointed them out, hopefully you’ll understand some of the yellow flags out there regarding Age Gates.  Every little heads-up helps, yeah?  I hope so.

Now go read Danah Boyd.  She’s much more eloquent than I am today… 😉

Mining for Awesome: Metrics to Identify Your Community

January 26, 2011

This fine young man has a different type of impact on the community.  He impacts more users … without his participation, about 2% of the community no longer participates.  He does not impact the total oxygen of the community as much, in other words, he doesn’t impact the number of tweets or number of conversations.  But he does bring along 2% of the community.  And his impact lasts through the forecast cycle, meaning he impacts new participants as well.

This exercise can be run for every user in a community.  We can easily forecast what impact each user has on the overall future of a community.  By looking forward, we get to see what might happen, and we can take steps to change the future.  When we simply look back into the past, we only measure what happened in the past.

In this simple example, when we remove just two users from a community of about four hundred weekly participants, we lose close to 8% of all future activity in this community.  In spite of a ton of new users, these two folks, @michelehinojosa and @immeria, foster a wonderful and vibrant community.  That’s a decent measure of influence, don’t you think?

Kevin Hillstrom: MineThatData: Hashtag Analytics: Removing a Member of the Community

(Received in Twitter via @TiffanyRichison – my AMAZING Community Lead, who scored it via @TheCR and @minethatdata)

Over the last year there have been THREE huge contributors to understanding an audience that I feel I can’t stress enough:

1. Avoiding the operational FAIL WHALE (oh man, do I have withheld rants on this)
2. Understanding that a competitive site in this industry must have diversity in everything (from gaming, to customizations, to approaching an audience)
3. SMART METRICS

For a few short minutes here, and I stress short, I simply must ramble about the importance of metrics and how our industry HAS to step up to bat and start finding the value of users, NOT just the abuse.

And when I say “our” industry – I mean specifically CS, Moderation, and Community.  We need to TOTALLY BFF-up our Metrics peeps… and if there aren’t metrics peeps at your biz, then you need to step up to bat and figure out enough of a baseline understanding of metrics and analytics to be able to support what you do.

Why? WE’RE EXPENDABLE.  That’s a lie.  I know it, you know it, but there’s many a board member who doesn’t understand why CS & Community & (more specifically) Moderation staffing/tools/practices are so important – POST launch, when the belt gets tighter and the big bucks are takin’ a bit longer to roll in.

We’re just people who manage people – anyone can do that… interns can do that, right? PUUUUUUUUUUUKE.

WRONG. Ugh. Shudder. Frustration + fist at the sky with some sort of user engagement battle cry!  Just because you have a background in marketing – that doesn’t mean you have that GUT understanding, nor the ability to read a community.  Marketing folks can spin statements and emphasize the value of advertising and approaching product, but it’s not the same […feeling another tangent coming on. Must jump off this tangent path, my apologies].

What was I talking about?  Oh yes, Metrics.  Analytics.  Whatever you wanna call it – basically, this day and age those of us people-people need to have backup.  Stories are fun for conferences and for nailing a point home.  Leaderboard-esque insight into top players is great to show your front-line knowledge of the audience’s ability.  Social media platforms and conversations are great for keeping the product within fingertips of users’ everyday consciousness.  But when it comes to number crunching – dude bettah getz some backup. For realzies.

So far, metrics have been great for game designers and registration flows.  They’ve been great for microtrans and heatmaps (which, may I say – I love me some well-developed heatmaps).

Blargh – OKAY, I’m biting off more than I can blog-chew at the moment.  I’m going to kinda filter through my metrics conversation from the big point (overall metrics and their importance), and whittle it down to SPECIFICALLY moderation + community necessity.

Finding ABUSE
– Individuals who abuse the system / community / experience
– What is the individual abuse (on a case by case, report, basis)
– Brings questions of WHY individuals abuse: is it the lack of game? Is it the drive of the content?
– Is it a growing group behavior?
– What exactly is the abuse of this growing group behavior?
– Brings again the questions of WHY individuals abuse: is it the lack of game? Is it the drive of the content? Is it the lack of appropriate competitive interaction?  CAN YOU FIX THIS?

Finding VALUE
– Individuals who represent the best of the community
– Individuals who engage from within
– Individuals who lead by example
– *Individuals who seem to be the best of the best, but actually become somewhat cancerous in their righteousness and maybe should be used as a best case
– Groups who lead by example
– Groups who promote desired community efforts
– Areas that promote desired goals for game or specific area
– Individuals or areas that can help promote the MONETARY VALUE OF UPGRADING (via microtrans or subscription)

Remember – you want to gently lure and entice users into becoming monetary assets… and not just monetary assets but SUPER USERS.  For as gross a statement as that is from a “purist” perspective… YOU CANNOT RUN A GAME WITHOUT INCOME.  Just can’t.

Why would you just use metrics for landscapes and game agendas, or finding bad users?  Dude – it’s the day and age of community! Of social media!  Own it.

BALANCING THOSE OF ABUSE AND VALUE
Just as this AWESOME article above points out – not all users are just “good” or just “bad”… Use metrics and analytics from the following (a rough scoring sketch follows this list):
– Chat (a filter that reads positive chat and associates percentages, a chat filter that reads abusive chat and associates separate percentages)
– Interactions (Community event item clicking and purchasing metrics, guild-grouping, chat submissions, logins, time spent online, friending, time spent in social areas, time spent in gaming areas, time spent multiplayer gaming/interacting, leaderboards, time spent in “home” areas customizing, etc)
– Friending – viral quality outside of game, as well as inside the game.
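
To make that concrete, here’s a hypothetical sketch of rolling a few of those signals into per-user “value” and “abuse” scores so the bookends of your community surface first.  Every field name and weight below is invented – calibrate against your own data, not mine.

```python
# Hypothetical sketch: roll a few weekly signals into per-user value/abuse scores.
# Field names and weights are made up -- tune them against your own community data.
from dataclasses import dataclass

@dataclass
class UserWeek:
    user_id: str
    positive_chat_pct: float   # share of chat lines the "positive" filter matched
    abusive_chat_pct: float    # share of chat lines the abuse filter matched
    logins: int
    friends_added: int
    minutes_social: int        # time spent in social areas
    reports_against: int       # abuse reports filed against this user

def value_score(u: UserWeek) -> float:
    return (u.positive_chat_pct * 2.0
            + u.friends_added * 0.5
            + u.minutes_social / 60.0
            + u.logins * 0.25)

def abuse_score(u: UserWeek) -> float:
    return u.abusive_chat_pct * 3.0 + u.reports_against * 2.0

def bookends(users: list[UserWeek], n: int = 10):
    """Return the top-n most valuable and top-n most abusive users for the week."""
    top_value = sorted(users, key=value_score, reverse=True)[:n]
    top_abuse = sorted(users, key=abuse_score, reverse=True)[:n]
    return top_value, top_abuse
```

The point isn’t these particular weights – it’s that once the signals live in one place, your community people can sanity-check the numbers against what their gut already knows.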

If you are in the MMO or VW space… I would SERIOUSLY suggest taking a moment to have a solid “think” regarding understanding the bookends of your community, and the elements that drive the bulk middle either direction over the course of their experience.  The more you can automate that process for your moderators, customer service reps, and community managers – the stronger / swifter / and better the process will be for you!!  You will still need the insights and stories and multisocialmediaextravaganzamadskillz of community pros – naturally.  But you also need number crunching and proof-of-the-pudding products.

So, my dears, in this slightly confusing, probably ADD fueled post – my point is this:
Community and Moderation and CS folks… go rogue for a moment, totally ninja-BFF any metrics/analytics people on staff.  Make tools or practices that will help you to find the value, find the abuse, and back it up with the best kind of numbers you can find… AND THEN use your mad community skills to help understand why numbers show what they show, and improve your audience, your product, and the WORLD.

Make sense?  Hope so.  If not, as always, leave a comment at the beep……

Beep.

Engage! Expo Conference Prezzie

September 27, 2010

Hello, hello. Long time no talk. Yes, I realize this, and I send my apologies.

Last week (Sept 22nd), I spoke at the Engage! Expo conference in Santa Clara on User Engagement – aka, the art of engaging users (specifically online gamers 13 and younger, although you could argue for a General rating).  It wasn’t one of my most stellar performances – I drowned a bit in having FAR too much to say… but I successfully rambled a few decent points & tales, and hopefully shared some new understandings as well.

I am always grateful to the Engage! Expo team (Tonda you’re amazing), and it was great meeting some new people.

Now, prepare yourself for some Heavy. Duty. Slide. Action.  I PowerPointed it up HARD CORE (my speech teacher would be throwing ninja stars at me if he knew).  Luckily, many people have contacted me asking for my PowerPoint slides… so, I am providing a video of them here.

Questions, comments, problems, scenarios, rambles, quips, complaints, queries, and soliloquies should be directed to the comment section of this post.  I’ll do my best to get back to you.

Things I’m kickin’ myself for leaving out: monetization and the “velvet rope,” how to use live staff well, the Parental Unit, and the fine art of event planning and support.  Thank god there’s always future conferences – I can do a “Part Two” slide set 😉

The Conundrum that is Planet Cazmo

August 2, 2010

Planet Cazmo is going to partner with Fox’s Teen Choice 2010 awards and entertainment mogul Tommy Mottola to create a custom virtual environment called the Virtual Teen Choice Beach Party. The special virtual environment will be directly accessible from a link on the Teen Choice Website. The Teen Choice 2010 awards will air August 9 at 8 EST on Fox. Users will be able to visit the virtual beach party after casting their votes online.

In the Virtual Teen Choice Beach Party, users will be able to design an avatar and a virtual home. In the virtual world, users can chat, play mini-games, virtually dance, and even purchase virtual goods. One of the goods for sale will be a branded good shaped like the award show’s signature Teen Choice Surfboard. This won’t be the first virtual event Planet Cazmo has developed for a major brand or celebrity partner. Previous projects developed by Planet Cazmo were primarily virtual concerts or music-themed, though.

Virtual Teen Choice Beach Party

Okay… So, wow.

First, I do find it absolutely RAD that Planet Cazmo has broken the start-up, non-uber-brand IP curse and managed to score such a marketing bonanza as TEEN CHOICE AWARDS on Fox.  That’s kinda huge.  Brings in the eyeballs – aka, sudden brand awareness.

For the last two years I’ve watched Planet Cazmo score quite a few influential contracts with big music peeps… They’re freakin’ email machines – no one sends as many newsletters as this site… seriously.  There is always something going on it seems.

The art is easy, not too complex. The world is expansive (almost too expansive, but they try to pack everyone into the same server – providing the PARTY! feel of busy-busy).

Again, I’m still floored by their marketing department and promotions… well played for such high profile awesomeness.

PROBLEM: I just logged in as a minor and was able to share “my” phone number (or, ya know, the Empire Carpet guy’s number, five eight eight two three zero zero), “my” address (or, ya know, the White House), amongst other things.  Then I created another account, logged in, and watched myself say the same content all over again (aka, the public can read it, it’s not just author-only jedi-mind-trickin’).

At least they caught “shadows are as dark as holes” – but as holes, for as swarthy a curse as it is in kid land, is NOT A LEGAL PROBLEM.

I can’t believe I just logged in, approved my “child” via email plus, and then passed out faux-personal information.  What the what?!  AND THEY’RE GOING UBER-PUBLIC WITH A TV SPONSORSHIP!  It makes me very, very nervous for them.

Talk about disappointed.  I’ve been dealing with several companies lately that are looking to ensure that their sponsorships/partnerships/etc. with youth virtual worlds are LOCKED DOWN and safe… why the heck didn’t Fox check into the legal nature of Planet Cazmo?

I’m still absolutely astounded that I could give addresses and phone numbers. Baffled, even.


Is there such a thing as 100% Safe Chat for kids?

June 28, 2010

My oh my, ain’t this the question of the hour.  I’m definitely not going to win any friends from some people on this one, but folks – I’m not going to B.S. you here.  There are people who philosophize laws and legislation based on all sorts of elements, there are people who make tools, there are people who are charged with helping, there are people who research theories, there are people who spend efforts on education or overzealous protection, there are people who have propaganda & agendas (good-good, bad-good, good-bad, bad-bad), and then there are the people who just gotta get the job done: every. single. day.

There are a lot of people in the pot trying to decide what “safety” means these days – especially regarding chat.  I’m just gonna tell you a bit of insight from my side, the “every. single. day” perspective – think of it as the stage manager telling you what’s happening behind the curtain, but also knowing what is expected to be seen by those in front of the curtain.  It’s a very different view from the director, or set designer, or critic, or actor, or audience…

Here are a bunch of questions I get:

1. What are the safeguards for chat for kids? (aka, what are “filters”)

As we know (or as you’re now learning), registration processes aren’t the only method of PII collection (PII: personally identifiable information – which is prohibited from being shared by children under the age of 13 through the legislation called COPPA).  In these virtual experiences like MMOs and Virtual Worlds and Chat Clients and Social Networks – there are a thousand ways to share information.  People put in “filters” that are trained to catch or allow content, based on the type of filter it is, so that content can or cannot appear within a social space…

  • Dictionary Phrase list – basically a list of predetermined statements with no room for alteration
    – Pro: Your users cannot alter or break any of your systems – unless they’ve figured out something ultra-serious, like coded language built from the first initials of sentences, lol
    – Con: Really, really, really frustrating. Really frustrating.  Not a great user experience because everything is dictated, and unless it’s a GINORMOUS list of pre-determined statements, there is little room for off-the-cuff roleplay, and being dictated to is never something a pre-teen/tween child likes…
  • Dictionary lists – basically a list of all permitted words – like an uber list straight from the dictionary (lol – hence the clever name)
    – Pro: You’re only allowing certain words and blocking out any phonetic work arounds or garbled attempts of spelling (ex: words like funkyou or asstronaut are not in the dictionary and therefore caught in the filter before appearing live).
    – Con: Dictionary lists are HUGE. Let me repeat HUGE. You better scan through them for medical terms like “pubic” or “pedophilia” both of which are in the dictionary, as are “address” and “phone” and “email”.  Also – phrases are not in the dictionary – such as “as hole” or “read hard dead” or “name at yahoo dot com” and “my house is on third street maytown illinios”.  Heck, you can even use work arounds like “my digits are ate hero hero tree tree fort hive sicks mine on” (that says 800-334-5691 which is a number i just made up using the types of easy work arounds KIDS USE EVERY SINGLE DAY – no. joke.  All words in the dictionary).  Also – with every user who creates a new username – there is yet another addition to your white list.  Kids have to be able to speak to each other, right?  1,000,000 users = 1,000,000 additions to the dictionary… YOUR CHAT PROGRAM IS GOING TO BE VERY, VERY SLOW.
  • White list – An extensive list of appropriate words (and some phrases) that your team has specifically allowed in chat (much like the dictionary chat).  Typically, you must also have a smaller blacklist to balance out some of the issues.
    – Pro: You’re starting with a set list of approved words and statements, you have a little more control over the types of conversations you wish your users to have.
    – Con: Young users with spelling issues will never get to say what they’re trying to say unless you have the foresight or capability to see what they’re attempting to say and add it to the white list.  You have a smaller range of free community unless you’re actively keeping up with the chat of kids and making new allowances, etc.  Also – good luck with symbols, characters, punctuation, and numbers – since your system has already chosen the words it likes, kids can use these other things to break what you’ve set up.  Your mini black list better be prepared for statements like “silky fingers” or “hard purple staff” or “up your skirt” or “chocolate kid” or “lets have sax”
  • Black List – An extensive list of inappropriate words/phrases blocked from chat, with a subsequent white list that helps balance out the black list for appropriate content.
    – Pro: It’s an active list that is monitored, changed, and edited by the day to support the growing needs and cleverness of youth & pop culture in general (which can also be considered a con, lol).  You know exactly what they cannot say, and removing all negative content is the emphasis while trying to be clever enough to not break the user experience (as we know, inappropriate content changes by the day – thank you South Park and Family Guy). Urbandictionary.com is a great help.  You can prepare the blacklist to look for such phrases as “my addy is” or “real name” or “in your pants”.
    – Con: Unless you have a tool set that can separate words, find gem-of-words within bigger cluster-words, ignore run-on vowels or extra characters, read thru spaces and numbers and symbols, etc, well…  you’re going to have problems (and there ARE tools out there that do this… you just have to look, test, research, etc).  This is what I call “control over your active road map” – you need to be working to verify that all options around and through your blacklist controls are sticking tight.  Example: the word “ass” is inappropriate, but can be said in “class” and “assembly”… make sure those aren’t caught.  On the flip side, the word “retard” is never appropriate in any variation – so the filter needs to be able to catch “uretard” or “retardation” or “ret@rd” or “r3t@rd” or “mrretardkid” < all of which I’ve seen kids attempt.  Also – this is not something just anyone can pick up… knowing how to work and manage a black list effectively is a solid job and needs care & cleverness.  (A rough sketch of this kind of normalization follows below.)
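
Here’s a rough sketch of the kind of normalization a black list tool needs before matching: lowercase, undo the obvious leet-speak, strip symbols and numbers, collapse repeated letters, then look for blocked words inside bigger cluster-words while letting safe containers like “class” through.  The word lists are tiny stand-ins; a real road map gets tended daily.

```python
# Rough sketch of blacklist normalization. The word lists are tiny illustrative
# stand-ins -- a real road map is maintained and tested daily.
import re

LEET = str.maketrans({"@": "a", "3": "e", "1": "i", "0": "o", "$": "s", "5": "s"})
BLACKLIST = {"retard", "ass"}
SAFE_CONTAINERS = {"class", "assembly", "pass", "grass"}   # "ass" inside these is fine

def normalize(text: str) -> str:
    text = text.lower().translate(LEET)
    text = re.sub(r"[^a-z]+", " ", text)          # drop symbols, digits, punctuation
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)    # collapse runs like "assss"
    return text

def is_blocked(text: str) -> bool:
    for word in normalize(text).split():
        if word in SAFE_CONTAINERS:
            continue
        if any(bad in word for bad in BLACKLIST):  # catches "uretard", "mrretardkid"
            return True
    return False

# is_blocked("you are a r3t@rd")  -> True
# is_blocked("see you in class")  -> False
```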

2. Can I be 100% certain my chat system is safe from PII collection or sharing by children?

NO.  Not unless everything is pre-screened before going live (example: the phrase dictionary or canned chat alternatives).  And even if you had moderators screening all content before it goes live – that is a heavy scaling issue, with a lot of room for human error.

I’ve already mentioned the types of identifiable location words that need to be removed in Dictionary Chat / White list / Black list.  But what I haven’t mentioned are first name / last names.  Unless you restrict first names completely (including a user’s avatar name), you’re already in the hole.  Why?  I don’t know about you – but just because someone once told me not to date guys with two first names doesn’t mean they don’t exist (teasing about the two first names… clearly that’s just a myth… hehehe).  Ryan Edwards. Tiffany Addam.  Joe Gail.  Larry Drake. Then you have the first name + object last name, such as Jack Hall, Charlie Brown, Jerry Trainer, Sally Stir.   There’s not a chat list in the world that’s going to block that unless it’s prescripted.

On the flip side, you also have numbers (users should be blocked from even TYPING a number on the keyboard – why give what they can’t even have?), symbols should be removed (there is no need for @ or > – smilies are what emotes are for), and really the only punctuation should be the exclamation point and the question mark.  Even THEN you’re going to see abuse for PII’s sake… “My digits are ! !!!!!!!! nil nil !!!! !!!!! !!!!! ! !! !!!! !!!!!” and there’s an 800 phone number.   Or the progression in chat for this:
“my digits are after the a. write em down. A!!!!!!!!” “A nil a nill” “A!!!!” “A!!!!!” “A!” “A!!!!” “A!!!” “A!!!!!!!!!!”  Again, prescripted might help stop this.

Now… here’s the thing about prescripted agendas.  YOU LIMIT A KID IN A WORLD WHERE THEY’RE EXPECTED TO FORM A COMMUNITY – AND THEY’RE NOT GOING TO STICK AROUND.  Sure, if the game is fun, they’ll play the game, maybe stick around for a session or two… but why even make it a social game? YOU CAN’T BE SOCIAL IF YOU CAN’T BE SOCIAL.  And, heck, kids are just going to fire up their AIM and/or gchat and/or MSN and/or text messages.  At least with the filters and time/effort you were putting in… you were doing YOUR job in trying to protect them.  Put massive restrictions on chat and you lose the social experience – users drift off to other channels that are less capable / less responsible than the job YOU could be doing the right way.

Which leads me back to – WHY MAKE A SOCIAL EXPERIENCE GAME? I’ve only seen Poptropica.com do this well – and they’re not really going for a social community.  They’re going for game-based/story-based interactive, educational fun without community or self-expression or role play… it’s about the agenda decided for the purpose of the game.

But how do we protect / stop users from these simple methods of info sharing – like first name + last name?  Put it in your rules, your Terms of Service.  Inform the users, and the parents, that there could be a chance that something is shared by accident… and that your site will remove any/every user who breaks this rule.  Put forth the best effort with filters and POST MODERATION (various ad hoc methods that illuminate users who are breaking the policies you’ve set).  If they can’t play by the rules and regulations you’ve set, and if a user is putting your brand/game at risk… SO LONG, GOOD RIDDANCE.

The only way we can REALLY attack this problem is through education.  Either in-game, pre-game, parental education & guidance… but for me, I’d like to see POP CULTURE EDUCATION.  Ad campaigns, commercials, etc.

And by the way… these are only a *few* of the examples there are in work-arounds.  There are MANY, MANY more, and they change, grow, mutate by the day.

3. What is the safety method of chat filtration?

The safest method is whatever you know works the best for YOU.  There’s no “one” perfect situation for every company, every philosophy, every policy.  Look at what your variables are:

  • Who is your target user (and what might he/she say around the lunch table with friends), who is your secondary target, and who is going to show up unwanted at the party…
  • What is the type of content/genre/fantasy you’re building, and how will the language that corresponds with that affect or change the typical everyday language scene (example: if you have a world where everyone is an ice cream flavor – being called vanilla kid or chocolate kid doesn’t have the same context as it does in an athletic world where kids are sassing each other)
  • Who is in charge of policing your policy in your world – do they understand the type of content that needs to be caught?
  • Do you/your team have a sufficient understanding of language / pop culture / kid behaviors / online minxiness to be able to properly control / handle what you want for your audience?
  • Do you want to control your language road map – or do you wish for the aid of another company to control the language?
  • Do you understand what legally CANNOT be shared in chat?  Do you feel you have sufficiently restricted the public sharing of PII?
  • How do you want filtration to appear to the end-user?
    – Do you want them to be warned for certain language?
    – Do you want to put certain words in black boxes, where only the author can see it and the rest of the social room cannot?
    – Do you even want kids to know what words they can/cannot say?
  • How are you going to know when kids are creating language work-arounds?
  • If you allow a vendor to control your language lists, who carries the responsibility/burden if the list is not sufficient? (are you QA-ing your own policies / site?)
  • How are you going to react to users who are breaking your policies regarding chat?
  • Have you removed / scrubbed any content accidentally provided by users?

I have what works for me, and for now I’m very happy with my method.  Naturally – I am always looking / learning / finding new ways of improvement for policy, implementation, experience, etc.  That’s my job.  At the end of the day, I am accountable for the users and the company. Not only is there legislation, there is a sensitive and young audience involved.

This all leads to the “what next” step of COPPA and the recent COPPA round table that happened at the start of June.  To be honest – I’m scared.  I’m scared because there are a lot of different ideologies floating around regarding PII and chat.  The fact conversations are happening isn’t what scares me – it’s the lack of hands-on knowledge from people who have to do this every day (and I’m not talking about the directors or managers who haven’t even once signed into their tool set – trust me, there are a few of those out there).

There seem to be a lot of people looking at what’s working for others and trying to do the same… but no two sites, no two games, no two companies work the same.  Chat always seems to be one of the LAST thoughts for people… not whether it needs to exist – but HOW, and what the experience is like for the end-user.  Font, character allowance, timing, content – it’s essential and standard and needs to be treated in design and creation with the same respect as EVERYTHING important to the agenda of the site.

I’d like to see more people close their doors, Willy Wonka style, and figure it out for themselves – so they can speak to it and cop to it, etc.  I, for one, should not know your chat filter holes better than you do….


Let’s Chat: COPPA

April 25, 2010

Twitter. I promised a rant on twitter. I promised a rant due on Thurs. It’s Sunday.

My apologies for the lateness and the possible lack of DRAGON FIRE that I was spittin’ on Thursday.  Indeed I was angry, and it had to do with weird (if not troubling and disappointing) rumors spread about COPPA.  But like the fear-mongering such rumors create – a tantrum is not what is needed here either. Clarity is what is needed.

So, my dear poppets – lemme share the facts about COPPA: Past, Present, and Future…

PAST

COPPA is the only “real” legislation we have to enforce/protect children under the age of 13.  COPPA stands for: Children’s Online Privacy Protection Act.  It was created to stop marketers from collecting and exploiting personally identifiable information from children.  What is personally identifiable information (or PII)?

First name / last name, phone number, email address, social security number, home address.

It’s also good to consider the following as PII:

School name, instant message clients, usernames for other sites, sister/brother/parent/teacher full names, zip code, small town + state, after-school activity locations. – These are not held as stringently as the first group, but they’re equally important, since you can locate any child using this information. Basically: if I can find you easily with the info you provide… that could be argued as PII.
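
If you’re screening UGC, the most mechanical PII patterns can at least be flagged automatically before anything goes live.  Here’s a tiny, illustrative sketch – it only catches the obvious stuff; spelled-out numbers, names, and the other work-arounds discussed later in this post still need filters and humans.

```python
# Tiny illustrative sketch: flag the most mechanical PII patterns before UGC goes
# live. Spelled-out numbers, names, and other workarounds still need filters + humans.
import re

PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                       # email address
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),             # US phone number
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                         # SSN-shaped
    re.compile(r"\b\d+\s+\w+\s+(?:street|st|ave|avenue|road|rd|blvd|lane|ln)\b",
               re.IGNORECASE),                                     # street address
]

def flag_pii(text: str) -> list[str]:
    """Return the substrings that look like PII so a moderator can scrub them."""
    return [m.group(0) for p in PII_PATTERNS for m in p.finditer(text)]

# flag_pii("im at 123 maple street email me kid@example.com")
# -> ['kid@example.com', '123 maple street']
```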

Remember this tip for the kiddies and yourself: Tangible/Open Air (non computer) life = Clark Kent, Online life = Superman.

COPPA is upheld by the FTC, who regularly posts announcements on their page: http://www.ftc.gov/.  There is a program governed by the FTC called “Safe Harbor”, and it is upheld by four organizations (CARU, ESRB, TRUSTe, Privo).  If you wish to be a part of the Safe Harbor program – you will get aid in meeting regulations, suggestions for “going beyond” and being better than bare minimum, and you will have legal representation if your compliance comes into question.  I have had the privilege to work with CARU and the ESRB (whom I am very happy to work with now), and I know the fine folks at Privo.  I would definitely suggest that any company or individual wishing to learn more about Safe Harbor reach out to these companies.

At one point they tried to make additional legislation: COPA (Child Online Protection Act) and DOPA (Deleting Online Predators Act) – both of which have been dismissed, COPA on First Amendment grounds and DOPA due to the sheer impossibility of its variables.

PRESENT

How is COPPA being used?  Well, no longer just a deterrent for Marketers, it is the sole legislation for anyone collecting any information regarding children under 13. But why would someone need to collect info from kids?

1. Newsletters
2. Registration for games
3. Content submitted in conversation (chat), pictures, audio, etc. (basically – UGC, “User Generated Content”)

I exist in the epicenter of business, safety, entertainment, common sense, community, and I’m telling you… there is no real arguable reason to collect PII from children.  The decision regarding the sharing of any such PII information belongs to the parents. Ahh, now there’s the rub – how do parents make/enact/provide/receive that permission?? Lemme get to that in a sec.

What I forgot to mention in the “Past” section is that COPPA legislation pinpoints 4 acceptable ways to gain PERMISSION to collect PII: a fax with a parent’s signature, valid credit card, phone call acceptance, and email-plus.  Naturally there are problems with all four methods.

  • Fax = expensive, not “earth” friendly, and who really owns a fax anymore? Not to mention – kids attempt to sign and fax themselves (the wily things they are). You lose more customers than you gain when you expect them to stop at KINKOS to fax something out – too much time, so long future customer.
  • Valid Credit Card = No one wants to put their digits in (and then there’s the 1 dollar verification charge, even though we dismiss the charge), kids as young as 9 are toting their parents’ credit cards, it’s an opportunity to collect PII inadvertently from a child (AGE GATE MEMBERSHIP, pls), and kids have been known to take the card from mom’s purse (the cheeky things they are). Strangely enough – parents who have no intention of purchasing a membership don’t really want to put in any CC information. Do I blame them?  Nope.  Too many “but what if my kid can access my number” or “but I don’t want to be tricked into paying” or “ugh, I have stuff to do, dinner is almost ready, I don’t want to do this now, let’s go eat” moments.  Deterrent!
  • Phone Call Acceptance = Heavy lifting on the part of CS, expensive call services, and how do you determine an adult’s voice if the adult happens to be squeaky?  Or a child who has low tones?  And, kids attempt to call in pretending to be parents (the sneaky things they are). One of the easier methods “in theory” – parents can just pick up and dial and say “yes” or whatever. No biggie. Except that parents can’t make those phone calls if they’re at work, and sadly, from what I’ve heard, more kids call in than actual parents.
  • Email Plus = The least rigid, most used, least reliable method.  You request the parent’s email during the kid registration, you send a “Welcome” email that includes a click-through link that will open up UGC possibilities, the adult visits the link and chooses to allow or not allow UGC, and 24 hours later the parent gets another email reminding them that they did this (in case kids invade the family email, they will be caught “unawares” by the follow-up – or at least that’s the theory). The problem is that a certain percentage of kids are putting their own email into the Parent Email slot, trumping the whole parent connection.  (A rough sketch of this flow follows below.)
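
For reference, the Email Plus sequence from that last bullet looks roughly like this in code: collect the parent’s email at registration, send a welcome email with an allow/deny link, and send a reminder 24 hours after consent is recorded.  The mailer, the URL, and the in-memory storage below are hypothetical stand-ins for your own systems.

```python
# Hypothetical sketch of the Email Plus flow. send_email(), the consent URL, and
# the in-memory dict are stand-ins for your real mailer and database.
import secrets
from datetime import datetime, timedelta

consents = {}   # token -> {"child_id", "parent_email", "ugc_allowed", "followup_at"}

def send_email(to: str, subject: str, body: str) -> None:
    print(f"to={to!r} subject={subject!r}\n{body}\n")   # stand-in mailer

def start_email_plus(child_id: str, parent_email: str) -> str:
    token = secrets.token_urlsafe(16)
    consents[token] = {"child_id": child_id, "parent_email": parent_email,
                       "ugc_allowed": False, "followup_at": None}
    send_email(parent_email, "Your child registered",
               f"Click to allow or deny chat/UGC: https://example.com/consent/{token}")
    return token

def record_consent(token: str, allow: bool) -> None:
    rec = consents[token]
    rec["ugc_allowed"] = allow
    # The "plus": a reminder 24 hours later, in case a child used the family inbox.
    rec["followup_at"] = datetime.utcnow() + timedelta(hours=24)

def send_due_followups(now: datetime) -> None:
    for rec in consents.values():
        if rec["followup_at"] and now >= rec["followup_at"]:
            send_email(rec["parent_email"], "Reminder: consent recorded",
                       "You recently allowed or denied chat/UGC for your child's account.")
            rec["followup_at"] = None
```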

Personally, I lean towards Email Plus as a method these days.  As I said – I’m in the epicenter of a lot of needs.  My first and foremost goal is: SAFETY, followed by ENTERTAINMENT (kid style), and then the business, etc.  Granted, Email Plus isn’t the “safest” – but that’s why I have POLICY AND PROCEDURE. I have moderation toolsets and staff, and, well, me (cue chip on shoulder, my apologies).  We work behind the scenes during the live existence of the game to ensure that privacy remains active, despite the audience themselves. AND TRUST ME – this ain’t no walk in the park.

Children DO NOT understand what they should / should not speak about, nor do they get (en masse, I’m talking about now) why they should / should not speak.  So… you can pretty much guarantee that kids will attempt to share SOMETHING – the way around collecting this is:

  • Pre-screening & scrubbing content,
  • Filters that block anything close to PII (heavy, heavy black lists, or CLEVER dictionary chat that also reads phrases),
  • Filters that jedi-mind-trick the user (have you tried chatting with another user in Club Penguin? Only like 25-30% of what you try to say actually shows up to the public – this lowers frustration from users while safeguarding them from the public),
  • Scripted chat (Poptropica is still uber-popular and there isn’t an ounce of open or filtered chat),
  • Post-hoc moderation – LIVE 24/7 staff on the lookout for kids who figured out “work arounds” (like toe tree fort hive stick stephen for two three four five six seven – see the sketch after this list),
  • Reporting mechanisms for kids to pinpoint those who are cheating the system
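
One post-hoc heuristic for that spelled-out-numbers work-around: map the common sound-alike words to digits and flag any message with a long enough run of them to look like a phone number.  The mapping below is illustrative and incomplete – kids grow it faster than you can.

```python
# Illustrative sketch of a spelled-out-number detector for post-hoc moderation.
# The sound-alike mapping is incomplete on purpose -- real lists grow constantly.
DIGIT_WORDS = {
    "zero": "0", "hero": "0", "nil": "0", "oh": "0",
    "one": "1", "won": "1", "on": "1",
    "two": "2", "to": "2", "too": "2", "toe": "2",
    "three": "3", "tree": "3", "free": "3",
    "four": "4", "for": "4", "fort": "4", "floor": "4",
    "five": "5", "hive": "5",
    "six": "6", "sicks": "6", "stick": "6", "sticks": "6",
    "seven": "7", "stephen": "7",
    "eight": "8", "ate": "8",
    "nine": "9", "mine": "9",
}

def suspected_number(text: str, min_run: int = 7) -> str | None:
    """Return the digit string if enough number-words appear in a row."""
    run = []
    for word in text.lower().split():
        if word in DIGIT_WORDS:
            run.append(DIGIT_WORDS[word])
        else:
            if len(run) >= min_run:
                break
            run = []
    return "".join(run) if len(run) >= min_run else None

# suspected_number("my digits are ate hero hero tree tree fort hive sicks mine on")
# -> "8003345691"
```

Common words like “to,” “for,” and “on” map to digits here, which is exactly why the run length matters – a single “to” in a sentence is noise, ten digit-words in a row is a phone number.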

You don’t have to have all of them… but it’s a big decision to make, and not one to make lightly either. Get counsel (from someone not selling you a product, please).

Once I have my front-line and behind-the-scenes methods in place – my next goal is to make sure kids come in and play the game… that they’re active and enjoying it.  If I don’t have kids on my site, I have no audience: no money, no sustainability, no kids to protect, no job.  And where does that leave kids?  Instead of at Disney World with the families and the attention to detail and overpopulated staff, they’re at Six Flags with the gangs and high school peer pressure (seriously, have you BEEN to a Six Flags in the last ten years? What is up with that? Um, NO, I don’t want to watch fourteen year olds try to make babies while I’m in line to ride on Batman, thank you. And no, I didn’t bring my Latin Kings sweatshirt today, darn I don’t fit in).

I do not, not, not recommend “Email Plus” for anyone who has no intention of truly backin’ up the LIVE safety on their site.

If you do not have valid parental sign off for your online experience: you cannot allow UGC of any kind unless it’s screened first by staff and scrubbed of possible PII.  That means: usernames, chat, forum threads, forum posts, blog comments, guest books, comment walls, upload pictures, upload video, upload audio.  Basically: anything a user can submit needs to go through filters and screening.  Anything considered PII needs to be scrubbed.

What’s good policy?  Well, even when you GET the “valid parental permission” – you still filter the content, and you still have staff moderating.  This is YOUR brand and YOUR audience.

BTW: If anyone comes to you and tells you that a toolset will solve all your problems and that it will replace human staff – you better get your warning flag up.  THEY’RE SELLING YOU. Gross.

THE FUTURE

So, about two months ago I had the EXTREME privilege to sit on a stage at the Engage! Expo conference in NYC with Phyllis Marcus.  Phyllis is from the FTC and had been commissioned to look into behaviors in virtual worlds.  She has an interesting report here regarding the behaviors that were found.

When I spoke with her – the majority of my questions were around: How, when, what.  This was just an initial peek for the FTC into behaviors, and much of what they found was from first time viewing.  We talked a fair bit about COPPA, and what was next for the FTC.

Both Congress (on April 29th) and the FTC (June 2nd roundtable) are re-examining safety and privacy – and what that means from their standpoint.  Okay, their standpoint… but what about OUR standpoint, what will that mean for us?

  1. COPPA HAS NOT CHANGED.
  2. Talks are beginning: People are looking to open up conversation, reassess, get feedback about COPPA
  3. If changes are made to any part of COPPA it will not be immediate
  4. If COPPA does receive some changes, adds, tweaks, deletes – it will have a “Goes into Effect” date
  5. If there is a “Goes into Effect” date – companies will have a GRACE PERIOD in which to react
  6. But most importantly: NOTHING HAS BEEN PUT INTO LAW YET.  And regardless of any rumors regarding: “So and so said this” or “I heard that the FTC has already decided” – etc.  Stop perpetuating rumor that scares others into reacting.

IF COPPA changes, it will probably change around parental verification – either attempting to find better methods of verification or deleting old methods of verification considered ineffective.

This shouldn’t affect any LIST (be it black, white, etc) that you have on your site.  As long as kids who ARE NOT PARENT VERIFIED are set to default “Scripted Chat” (or pre-written chat), you’re fine.  DO NOT ALLOW KIDS TO CHAT (filters or no) WITHOUT VALID PARENT VERIFICATION.  How to do that? Talk to a company offering the Safe Harbor program.  Lawyers know a lot – but they’re NOT workin’ on this side of the biz daily, and it’s basically their job to be paranoid about the law (not necessarily how kids are using it). With the exception of a handful (@steph3n , @amymms , @mikepink , Liisa Thomas – yes, two i’s – and Jim Dunstan, etc), I’d be mindful.  Don’t overreact because of fear.  Be proactive in finding out how, why, when, and what it means to address kids online, to collect information, and to safeguard kids online (people to follow: @annecollier , @joipod , @twizznerd , @amymms , @tlittleton , @larrymagid , @shapingyouth , @chasestraight to name just a small handful, there are many more).

You have the parent’s permission – now it’s about upholding that parent’s permission and your brand and the safety of your audience.  Robust chat filters are great – THERE IS NO ONE SINGLE COMPANY SELLING THE ONLY APPROVED LIST THAT FOLLOWS THE LAW.  If you hear that? That’s bullshit.  Straight up. Someone is scaring you into buying a product, and that just breaks my heart…

I would LOVE LOVE LOVE LOVE to get into a discourse about my hopes, intentions, and goals for our industry.  I have met some really amazing, dedicated, SMART people – and together we’re continually trying to improve.  But when people come in and say things to “sell”?  That. Just. Guts. Me.  I know I live in the country of capitalism… but that doesn’t mean I have to support it.

I’ve put a LOAD of information in here.  My apologies for a lengthy, not so cheeky, probably boring post.  But let’s be honest – I needed to ramble on this topic.  Clarity is good.  If you don’t believe me, or wish to dispute any claims I’ve made… please feel free to GOOGLE COPPA YOURSELF, and/or talk to lawyers AND safe harbor folks.  Heck, place some comments, questions at the beep and we can walk/talk through it together. 🙂